diff --git a/doc/Clusters_from_Scratch/en-US/Ch-Active-Passive.txt b/doc/Clusters_from_Scratch/en-US/Ch-Active-Passive.txt
new file mode 100644
index 0000000000..5b7719993e
--- /dev/null
+++ b/doc/Clusters_from_Scratch/en-US/Ch-Active-Passive.txt
@@ -0,0 +1,418 @@
+= Creating an Active/Passive Cluster =
+
+== Exploring the Existing Configuration ==
+
+When Pacemaker starts up, it automatically records the number and details
+of the nodes in the cluster as well as which stack is being used and the
+version of Pacemaker being used.
+
+This is what the base configuration should look like.
+
+[source,Bash]
+----
+# crm configure show
+node pcmk-1
+node pcmk-2
+property $id="cib-bootstrap-options" \
+ dc-version="1.1.5-bdd89e69ba545404d02445be1f3d72e6a203ba2f" \
+ cluster-infrastructure="openais" \
+ expected-quorum-votes="2"
+----
+
+For those that are not afraid of XML, you can see the raw
+configuration by appending "xml" to the previous command.
+
+.The last XML you'll see in this document
+[source,Bash]
+----
+# crm configure show xml
+<?xml version="1.0" ?>
+<cib admin_epoch="0" crm_feature_set="3.0.1" dc-uuid="pcmk-1" epoch="13" have-quorum="1" num_updates="7" validate-with="pacemaker-1.0">
+  <configuration>
+    <crm_config>
+      <cluster_property_set id="cib-bootstrap-options">
+        <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.5-bdd89e69ba545404d02445be1f3d72e6a203ba2f"/>
+        <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="openais"/>
+        <nvpair id="cib-bootstrap-options-expected-quorum-votes" name="expected-quorum-votes" value="2"/>
+      </cluster_property_set>
+    </crm_config>
+    <rsc_defaults/>
+    <op_defaults/>
+    <nodes>
+      <node id="pcmk-1" type="normal" uname="pcmk-1"/>
+      <node id="pcmk-2" type="normal" uname="pcmk-2"/>
+    </nodes>
+    <resources/>
+    <constraints/>
+  </configuration>
+</cib>
+----
+
+Before we make any changes, it's a good idea to check the validity of
+the configuration.
+
+[source,Bash]
+----
+# crm_verify -L
+crm_verify[2195]: 2009/08/27_16:57:12 ERROR: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
+crm_verify[2195]: 2009/08/27_16:57:12 ERROR: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
+crm_verify[2195]: 2009/08/27_16:57:12 ERROR: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
+Errors found during check: config not valid
+  -V may provide more details
+#
+----
+
+As you can see, the tool has found some errors.
+
+In order to guarantee the safety of your data
+footnote:[If the data is corrupt, there is little point in continuing to make it available]
+, Pacemaker ships with STONITH
+footnote:[A common node fencing mechanism. Used to ensure data integrity by powering off "bad" nodes]
+enabled. However it also knows when no STONITH configuration has been
+supplied and reports this as a problem (since the cluster would not be
+able to make progress if a situation requiring node fencing arose).
+
+For now, we will disable this feature and configure it later in the
+Configuring STONITH section. It is important to note that the use of
+STONITH is highly encouraged; turning it off tells the cluster to
+simply pretend that failed nodes are safely powered off. Some vendors
+will even refuse to support clusters that have it disabled.
+
+To disable STONITH, we set the stonith-enabled cluster option to
+false.
+
+[source,Bash]
+----
+# crm configure property stonith-enabled=false
+# crm_verify -L
+----
+
+With the new cluster option set, the configuration is now valid.
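+
+If you want to confirm that the new option was recorded, it should now
+appear in the cluster configuration. One simple check (just the crm
+configure show command from above, filtered with grep) is:
+
+[source,Bash]
+----
+# crm configure show | grep stonith-enabled
+----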
+
+[WARNING]
+=========
+
+The use of stonith-enabled=false is completely inappropriate for a
+production cluster. We use it here to defer the discussion of its
+configuration which can differ widely from one installation to the
+next. See <<_what_is_stonith>> for information on why STONITH is important
+and details on how to configure it.
+
+=========
+
+== Adding a Resource ==
+
+The first thing we should do is configure an IP address. Regardless of
+where the cluster service(s) are running, we need a consistent address
+to contact them on. Here I will choose and add 192.168.122.101 as the
+floating address, give it the imaginative name ClusterIP and tell the
+cluster to check that it's running every 30 seconds.
+
+
+[IMPORTANT]
+===========
+The chosen address must not be one already associated with
+a physical node.
+===========
+
+[source,Bash]
+----
+# crm configure primitive ClusterIP ocf:heartbeat:IPaddr2 \
+ params ip=192.168.122.101 cidr_netmask=32 \
+ op monitor interval=30s
+----
+
+The other important piece of information here is ocf:heartbeat:IPaddr2.
+
+This tells Pacemaker three things about the resource you want to
+add. The first field, ocf, is the standard to which the resource
+script conforms and where to find it. The second field is specific
+to OCF resources and tells the cluster which namespace to find the
+resource script in, in this case heartbeat. The last field indicates
+the name of the resource script.
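+
+If you are curious which parameters a given agent accepts (such as the
+ip and cidr_netmask settings used above), the shell can also display an
+agent's metadata. One way to ask for it, using the same crm shell as the
+rest of this guide, is:
+
+[source,Bash]
+----
+# crm ra meta ocf:heartbeat:IPaddr2
+----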
+
+To obtain a list of the available resource classes, run
+
+[source,Bash]
+----
+# crm ra classes
+heartbeat
+lsb
+ocf / heartbeat pacemaker
+stonith
+----
+
+To then find all the OCF resource agents provided by Pacemaker and
+Heartbeat, run
+
+[source,Bash]
+----
+# crm ra list ocf pacemaker
+ClusterMon Dummy Stateful SysInfo SystemHealth controld
+ping pingd
+# crm ra list ocf heartbeat
+AoEtarget AudibleAlarm ClusterMon Delay
+Dummy EvmsSCC Evmsd Filesystem
+ICP IPaddr IPaddr2 IPsrcaddr
+LVM LinuxSCSI MailTo ManageRAID
+ManageVE Pure-FTPd Raid1 Route
+SAPDatabase SAPInstance SendArp ServeRAID
+SphinxSearchDaemon Squid Stateful SysInfo
+VIPArip VirtualDomain WAS WAS6
+WinPopup Xen Xinetd anything
+apache db2 drbd eDir88
+iSCSILogicalUnit iSCSITarget ids iscsi
+ldirectord mysql mysql-proxy nfsserver
+oracle oralsnr pgsql pingd
+portblock rsyncd scsi2reservation sfex
+tomcat vmware
+#
+----
+
+Now verify that the IP resource has been added and display the cluster's
+status to see that it is now active.
+
+[source,Bash]
+----
+# crm configure show
+node pcmk-1
+node pcmk-2
+primitive ClusterIP ocf:heartbeat:IPaddr2 \
+ params ip="192.168.122.101" cidr_netmask="32" \
+ op monitor interval="30s"
+property $id="cib-bootstrap-options" \
+ dc-version="1.1.5-bdd89e69ba545404d02445be1f3d72e6a203ba2f" \
+ cluster-infrastructure="openais" \
+ expected-quorum-votes="2" \
+ stonith-enabled="false" \
+# crm_mon
+============
+Last updated: Fri Aug 28 15:23:48 2009
+Stack: openais
+Current DC: pcmk-1 - partition with quorum
+Version: 1.1.5-bdd89e69ba545404d02445be1f3d72e6a203ba2f
+2 Nodes configured, 2 expected votes
+1 Resources configured.
+============
+
+Online: [ pcmk-1 pcmk-2 ]
+ClusterIP (ocf::heartbeat:IPaddr): Started pcmk-1
+----
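+
+Although crm_mon reports the resource as started, you can also confirm
+the address from the operating system's point of view. The commands
+below are standard iproute2 and ping tools rather than anything
+Pacemaker-specific, run on the node shown as hosting ClusterIP:
+
+[source,Bash]
+----
+# ip addr show | grep 192.168.122.101
+# ping -c 1 192.168.122.101
+----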
+
+== Perform a Failover ==
+
+Being a high-availability cluster, we should test failover of our new
+resource before moving on.
+
+First, find the node on which the IP address is running.
+
+[source,Bash]
+----
+# crm resource status ClusterIP
+resource ClusterIP is running on: pcmk-1
+#
+----
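+
+The same question can also be answered with the lower-level crm_resource
+tool; this is simply an alternative to the crm shell command above:
+
+[source,Bash]
+----
+# crm_resource --resource ClusterIP --locate
+----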
+
+Shut down Pacemaker and Corosync on that machine.
+
+[source,Bash]
+----
+# ssh pcmk-1 -- /etc/init.d/pacemaker stop
+Signaling Pacemaker Cluster Manager to terminate: [ OK ]
+Waiting for cluster services to unload:. [ OK ]
+# ssh pcmk-1 -- /etc/init.d/corosync stop
+Stopping Corosync Cluster Engine (corosync): [ OK ]
+Waiting for services to unload: [ OK ]
+#
+----
+
+Once Corosync is no longer running, go to the other node and check the
+cluster status with crm_mon.
+
+[source,Bash]
+----
+# crm_mon
+============
+Last updated: Fri Aug 28 15:27:35 2009
+Stack: openais
+Current DC: pcmk-2 - partition WITHOUT quorum
+Version: 1.1.5-bdd89e69ba545404d02445be1f3d72e6a203ba2f
+2 Nodes configured, 2 expected votes
+1 Resources configured.
+============
+
+Online: [ pcmk-2 ]
+OFFLINE: [ pcmk-1 ]
+----
+
+There are three things to notice about the cluster's current
+state. The first is that, as expected, pcmk-1 is now offline. However
+we can also see that ClusterIP isn't running anywhere!
+
+
+=== Quorum and Two-Node Clusters ===
+
+This is because the cluster no longer has quorum, as can be seen by
+the text "partition WITHOUT quorum" (emphasised green) in the output
+above. In order to reduce the possibility of data corruption,
+Pacemaker's default behavior is to stop all resources if the cluster
+does not have quorum.
+
+A cluster is said to have quorum when more than half the known or
+expected nodes are online, or for the mathematically inclined,
+whenever the following equation is true:
+
+....
+total_nodes < 2 * active_nodes
+....
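+
+For example, with two known nodes and both of them online the test is
+2 < 2 * 2, which is true; with only one node online it becomes
+2 < 2 * 1, which is false.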
+
+Therefore a two-node cluster only has quorum when both nodes are
+running, which is no longer the case for our cluster. This would
+normally make the creation of a two-node cluster pointless
+footnote:[Actually some would argue that two-node clusters are always pointless, but that is an argument for another time]
+; however, it is possible to control how Pacemaker behaves when quorum
+is lost. In particular, we can tell the cluster to simply ignore
+quorum altogether.
+
+[source,Bash]
+----
+# crm configure property no-quorum-policy=ignore
+# crm configure show
+node pcmk-1
+node pcmk-2
+primitive ClusterIP ocf:heartbeat:IPaddr2 \
+ params ip="192.168.122.101" cidr_netmask="32" \
+ op monitor interval="30s"
+property $id="cib-bootstrap-options" \
+ dc-version="1.1.5-bdd89e69ba545404d02445be1f3d72e6a203ba2f" \
+ cluster-infrastructure="openais" \
+ expected-quorum-votes="2" \
+ stonith-enabled="false" \
+ no-quorum-policy="ignore"
+----
+
+After a few moments, the cluster will start the IP address on the
+remaining node. Note that the cluster still does not have quorum.
+
+[source,Bash]
+----
+# crm_mon
+============
+Last updated: Fri Aug 28 15:30:18 2009
+Stack: openais
+Current DC: pcmk-2 - partition WITHOUT quorum
+Version: 1.1.5-bdd89e69ba545404d02445be1f3d72e6a203ba2f
+2 Nodes configured, 2 expected votes
+1 Resources configured.
+============
+Online: [ pcmk-2 ]
+OFFLINE: [ pcmk-1 ]
+ClusterIP (ocf::heartbeat:IPaddr): Started pcmk-2
+----
+
+Now simulate node recovery by restarting the cluster stack on pcmk-1 and
+check the cluster's status.
+
+[source,Bash]
+----
+# /etc/init.d/corosync start
+Starting Corosync Cluster Engine (corosync): [ OK ]
+# /etc/init.d/pacemaker start
+Starting Pacemaker Cluster Manager: [ OK ]
+# crm_mon
+============
+Last updated: Fri Aug 28 15:32:13 2009
+Stack: openais
+Current DC: pcmk-2 - partition with quorum
+Version: 1.1.5-bdd89e69ba545404d02445be1f3d72e6a203ba2f
+2 Nodes configured, 2 expected votes
+1 Resources configured.
+============
+Online: [ pcmk-1 pcmk-2 ]
+
+ClusterIP (ocf::heartbeat:IPaddr): Started pcmk-1
+----
+
+Here we see something that some may consider surprising: the IP is back
+running at its original location!
+
+
+=== Prevent Resources from Moving after Recovery ===
+
+In some circumstances, it is highly desirable to prevent healthy
+resources from being moved around the cluster. Moving resources almost
+always requires a period of downtime. For complex services like Oracle
+databases, this period can be quite long.
+
+To address this, Pacemaker has the concept of resource stickiness
+which controls how much a service prefers to stay running where it
+is. You may like to think of it as the "cost" of any downtime. By
+default, Pacemaker assumes there is zero cost associated with moving
+resources and will do so to achieve "optimal"
+footnote:[It should be noted that Pacemaker's definition of
+optimal may not always agree with that of a human. The order in which
+Pacemaker processes lists of resources and nodes creates implicit
+preferences in situations where the administrator has not explicitly
+specified them]
+resource placement. We can specify a different stickiness for every
+resource, but it is often sufficient to change the default.
+
+[source,Bash]
+----
+# crm configure rsc_defaults resource-stickiness=100
+# crm configure show
+node pcmk-1
+node pcmk-2
+primitive ClusterIP ocf:heartbeat:IPaddr2 \
+ params ip="192.168.122.101" cidr_netmask="32" \
+ op monitor interval="30s"
+property $id="cib-bootstrap-options" \
+ dc-version="1.1.5-bdd89e69ba545404d02445be1f3d72e6a203ba2f" \
+ cluster-infrastructure="openais" \
+ expected-quorum-votes="2" \
+ stonith-enabled="false" \
+ no-quorum-policy="ignore"
+rsc_defaults $id="rsc-options" \
+ resource-stickiness="100"
+----
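+
+As noted above, stickiness can also be set for an individual resource.
+As a sketch only (using the crm shell's resource meta subcommand, with
+200 as a purely illustrative value), ClusterIP could be given its own
+stickiness like this:
+
+[source,Bash]
+----
+# crm resource meta ClusterIP set resource-stickiness 200
+----
+
+The cluster-wide default set above is sufficient for this guide, so we
+will stick with it.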
+
+If we now retry the failover test, we see that as expected ClusterIP
+still moves to pcmk-2 when pcmk-1 is taken offline.
+
+[source,Bash]
+----
+# ssh pcmk-1 -- /etc/init.d/pacemaker stop
+Signaling Pacemaker Cluster Manager to terminate: [ OK ]
+Waiting for cluster services to unload:. [ OK ]
+# ssh pcmk-1 -- /etc/init.d/corosync stop
+Stopping Corosync Cluster Engine (corosync): [ OK ]
+Waiting for services to unload: [ OK ]
+# ssh pcmk-2 -- crm_mon -1
+============
+Last updated: Fri Aug 28 15:39:38 2009
+Stack: openais
+Current DC: pcmk-2 - partition WITHOUT quorum
+Version: 1.1.5-bdd89e69ba545404d02445be1f3d72e6a203ba2f
+2 Nodes configured, 2 expected votes
+1 Resources configured.
+============
+
+Online: [ pcmk-2 ]
+OFFLINE: [ pcmk-1 ]
+ClusterIP (ocf::heartbeat:IPaddr): Started pcmk-2
+----
+
+However when we bring pcmk-1 back online, ClusterIP now remains running
+on pcmk-2.
+
+[source,Bash]
+----
+# /etc/init.d/corosync start
+Starting Corosync Cluster Engine (corosync): [ OK ]
+# /etc/init.d/pacemaker start
+Starting Pacemaker Cluster Manager: [ OK ]
+# crm_mon
+============
+Last updated: Fri Aug 28 15:41:23 2009
+Stack: openais
+Current DC: pcmk-2 - partition with quorum
+Version: 1.1.5-bdd89e69ba545404d02445be1f3d72e6a203ba2f
+2 Nodes configured, 2 expected votes
+1 Resources configured.
+============
+
+Online: [ pcmk-1 pcmk-2 ]
+
+ClusterIP (ocf::heartbeat:IPaddr): Started pcmk-2
+----
diff --git a/doc/Clusters_from_Scratch/en-US/Ch-Active-Passive.xml b/doc/Clusters_from_Scratch/en-US/Ch-Active-Passive.xml
index 0b36143024..c2e89c7492 100644
--- a/doc/Clusters_from_Scratch/en-US/Ch-Active-Passive.xml
+++ b/doc/Clusters_from_Scratch/en-US/Ch-Active-Passive.xml
@@ -1,408 +1,319 @@
-
-
-%BOOK_ENTITIES;
-]>
-
- Creating an Active/Passive Cluster
-
- Exploring the Existing Configuration
-
- When Pacemaker starts up, it automatically records the number and details of the nodes in the cluster as well as which stack is being used and the version of Pacemaker being used.
-
-
- This is what the base configuration should look like.
-
-
-
-[root@pcmk-2 ~]# crm configure show
+
+
+
+
+
+ Creating an Active/Passive Cluster
+
+
+Exploring the Existing Configuration
+When Pacemaker starts up, it automatically records the number and details
+of the nodes in the cluster as well as which stack is being used and the
+version of Pacemaker being used.
+This is what the base configuration should look like.
+# crm configure show
node pcmk-1
node pcmk-2
property $id="cib-bootstrap-options" \
- dc-version="1.1.5-bdd89e69ba545404d02445be1f3d72e6a203ba2f" \
- cluster-infrastructure="openais" \
- expected-quorum-votes="2"
-
-
- For those that are not of afraid of XML, you can see the raw configuration by appending "xml" to the previous command.
-
-
-
-[root@pcmk-2 ~]# crm configure show xml
+ dc-version="1.1.5-bdd89e69ba545404d02445be1f3d72e6a203ba2f" \
+ cluster-infrastructure="openais" \
+ expected-quorum-votes="2"
+For those that are not afraid of XML, you can see the raw
+configuration by appending "xml" to the previous command.
+# crm configure show xml
<?xml version="1.0" ?>
<cib admin_epoch="0" crm_feature_set="3.0.1" dc-uuid="pcmk-1" epoch="13" have-quorum="1" num_updates="7" validate-with="pacemaker-1.0">
- <configuration>
- <crm_config>
- <cluster_property_set id="cib-bootstrap-options">
- <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.5-bdd89e69ba545404d02445be1f3d72e6a203ba2f"/>
- <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="openais"/>
- <nvpair id="cib-bootstrap-options-expected-quorum-votes" name="expected-quorum-votes" value="2"/>
- </cluster_property_set>
- </crm_config>
- <rsc_defaults/>
- <op_defaults/>
- <nodes>
- <node id="pcmk-1" type="normal" uname="pcmk-1"/>
- <node id="pcmk-2" type="normal" uname="pcmk-2"/>
- </nodes>
- <resources/>
- <constraints/>
- </configuration>
-</cib>
-
-
- The last XML you’ll see in this document
-
-
- Before we make any changes, its a good idea to check the validity of the configuration.
-
-
-
-[root@pcmk-1 ~]# crm_verify -L
-crm_verify[2195]: 2009/08/27_16:57:12 ERROR: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
+ <configuration>
+ <crm_config>
+ <cluster_property_set id="cib-bootstrap-options">
+ <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.5-bdd89e69ba545404d02445be1f3d72e6a203ba2f"/>
+ <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="openais"/>
+ <nvpair id="cib-bootstrap-options-expected-quorum-votes" name="expected-quorum-votes" value="2"/>
+ </cluster_property_set>
+ </crm_config>
+ <rsc_defaults/>
+ <op_defaults/>
+ <nodes>
+ <node id="pcmk-1" type="normal" uname="pcmk-1"/>
+ <node id="pcmk-2" type="normal" uname="pcmk-2"/>
+ </nodes>
+ <resources/>
+ <constraints/>
+ </configuration>
+</cib>
+Before we make any changes, it's a good idea to check the validity of
+the configuration.
+# crm_verify -L
+crm_verify[2195]: 2009/08/27_16:57:12 ERROR: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
crm_verify[2195]: 2009/08/27_16:57:12 ERROR: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
crm_verify[2195]: 2009/08/27_16:57:12 ERROR: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
-Errors found during check: config not valid
- -V may provide more details
-[root@pcmk-1 ~]#
-
-
- As you can see, the tool has found some errors.
-
-
- In order to guarantee the safety of your data
-
- If the data is corrupt, there is little point in continuing to make it available
-
- , Pacemaker ships with STONITH
-
- A common node fencing mechanism. Used to ensure data integrity by powering off "bad" nodes.
-
- enabled. However it also knows when no STONITH configuration has been supplied and reports this as a problem (since the cluster would not be able to make progress if a situation requiring node fencing arose).
-
-
- For now, we will disable this feature and configure it later in the Configuring STONITH section. It is important to note that the use of STONITH is highly encouraged, turning it off tells the cluster to simply pretend that failed nodes are safely powered off. Some vendors will even refuse to support clusters that have it disabled.
-
-
- To disable STONITH, we set the stonith-enabled cluster option to false.
-
-
-
- With the new cluster option set, the configuration is now valid.
-
-
-
- The use of stonith-enabled=false is completely inappropriate for a production cluster.
- We use it here to defer the discussion of its configuration which can differ widely from one installation to the next.
- See for information on why STONITH is important and details on how to configure it.
-
-
-
-
-
- Adding a Resource
-
- The first thing we should do is configure an IP address. Regardless of where the cluster service(s) are running, we need a consistent address to contact them on. Here I will choose and add 192.168.122.101 as the floating address, give it the imaginative name ClusterIP and tell the cluster to check that its running every 30 seconds.
-
-
-
- The chosen address must not be one already associated with a physical node
-
-
-
-
-crm configure primitive ClusterIP ocf:heartbeat:IPaddr2 \
- params ip=192.168.122.101 cidr_netmask=32 \
- op monitor interval=30s
-
-
- The other important piece of information here is ocf:heartbeat:IPaddr2. This tells Pacemaker three things about the resource you want to add. The first field, ocf, is the standard to which the resource script conforms to and where to find it. The second field is specific to OCF resources and tells the cluster which namespace to find the resource script in, in this case heartbeat. The last field indicates the name of the resource script.
-
-
- To obtain a list of the available resource classes, run
-
-
-
-[root@pcmk-1 ~]# crm ra classes
-heartbeat
-lsb
-ocf / heartbeat pacemaker
-stonith
-
-
- To then find all the OCF resource agents provided by Pacemaker and Heartbeat, run
-
-
-
-[root@pcmk-1 ~]# crm ra list ocf pacemaker
-ClusterMon Dummy Stateful SysInfo SystemHealth controld
-ping pingd
-[root@pcmk-1 ~]# crm ra list ocf heartbeat
-AoEtarget AudibleAlarm ClusterMon Delay
-Dummy EvmsSCC Evmsd Filesystem
-ICP IPaddr IPaddr2 IPsrcaddr
-LVM LinuxSCSI MailTo ManageRAID
-ManageVE Pure-FTPd Raid1 Route
-SAPDatabase SAPInstance SendArp ServeRAID
-SphinxSearchDaemon Squid Stateful SysInfo
-VIPArip VirtualDomain WAS WAS6
-WinPopup Xen Xinetd anything
-apache db2 drbd eDir88
-iSCSILogicalUnit iSCSITarget ids iscsi
-ldirectord mysql mysql-proxy nfsserver
-oracle oralsnr pgsql pingd
-portblock rsyncd scsi2reservation sfex
-tomcat vmware
-[root@pcmk-1 ~]#
-
-
- Now verify that the IP resource has been added and display the cluster’s status to see that it is now active.
-
-
-
-[root@pcmk-1 ~]# crm configure show
-node pcmk-1
-node pcmk-2
-primitive ClusterIP ocf:heartbeat:IPaddr2 \
- params ip="192.168.122.101" cidr_netmask="32" \
- op monitor interval="30s"
+Errors found during check: config not valid
+  -V may provide more details
+#
+As you can see, the tool has found some errors.
+In order to guarantee the safety of your data
+If the data is corrupt, there is little point in continuing to make it available
+, Pacemaker ships with STONITH
+A common node fencing mechanism. Used to ensure data integrity by powering off "bad" nodes
+enabled. However it also knows when no STONITH configuration has been
+supplied and reports this as a problem (since the cluster would not be
+able to make progress if a situation requiring node fencing arose).
+For now, we will disable this feature and configure it later in the
+Configuring STONITH section. It is important to note that the use of
+STONITH is highly encouraged; turning it off tells the cluster to
+simply pretend that failed nodes are safely powered off. Some vendors
+will even refuse to support clusters that have it disabled.
+To disable STONITH, we set the stonith-enabled cluster option to
+false.
+# crm configure property stonith-enabled=false
+# crm_verify -L
+With the new cluster option set, the configuration is now valid.
+
+The use of stonith-enabled=false is completely inappropriate for a
+production cluster. We use it here to defer the discussion of its
+configuration which can differ widely from one installation to the
+next. See for information on why STONITH is important
+and details on how to configure it.
+
+
+
+Adding a Resource
+The first thing we should do is configure an IP address. Regardless of
+where the cluster service(s) are running, we need a consistent address
+to contact them on. Here I will choose and add 192.168.122.101 as the
+floating address, give it the imaginative name ClusterIP and tell the
+cluster to check that it's running every 30 seconds.
+
+The chosen address must not be one already associated with
+a physical node
+
+# crm configure primitive ClusterIP ocf:heartbeat:IPaddr2 \
+ params ip=192.168.122.101 cidr_netmask=32 \
+ op monitor interval=30s
+The other important piece of information here is ocf:heartbeat:IPaddr2.
+This tells Pacemaker three things about the resource you want to
+add. The first field, ocf, is the standard to which the resource
+script conforms and where to find it. The second field is specific
+to OCF resources and tells the cluster which namespace to find the
+resource script in, in this case heartbeat. The last field indicates
+the name of the resource script.
+To obtain a list of the available resource classes, run
+# crm ra classes
+heartbeat
+lsb
+ocf / heartbeat pacemaker
+stonith
+To then find all the OCF resource agents provided by Pacemaker and
+Heartbeat, run
+# crm ra list ocf pacemaker
+ClusterMon Dummy Stateful SysInfo SystemHealth controld
+ping pingd
+# crm ra list ocf heartbeat
+AoEtarget AudibleAlarm ClusterMon Delay
+Dummy EvmsSCC Evmsd Filesystem
+ICP IPaddr IPaddr2 IPsrcaddr
+LVM LinuxSCSI MailTo ManageRAID
+ManageVE Pure-FTPd Raid1 Route
+SAPDatabase SAPInstance SendArp ServeRAID
+SphinxSearchDaemon Squid Stateful SysInfo
+VIPArip VirtualDomain WAS WAS6
+WinPopup Xen Xinetd anything
+apache db2 drbd eDir88
+iSCSILogicalUnit iSCSITarget ids iscsi
+ldirectord mysql mysql-proxy nfsserver
+oracle oralsnr pgsql pingd
+portblock rsyncd scsi2reservation sfex
+tomcat vmware
+#
+Now verify that the IP resource has been added and display the cluster’s
+status to see that it is now active.
+# crm configure show
+node pcmk-1
+node pcmk-2
+primitive ClusterIP ocf:heartbeat:IPaddr2 \
+ params ip="192.168.122.101" cidr_netmask="32" \
+ op monitor interval="30s"
property $id="cib-bootstrap-options" \
- dc-version="1.1.5-bdd89e69ba545404d02445be1f3d72e6a203ba2f" \
- cluster-infrastructure="openais" \
- expected-quorum-votes="2" \
- stonith-enabled="false" \
-[root@pcmk-1 ~]# crm_mon
+ dc-version="1.1.5-bdd89e69ba545404d02445be1f3d72e6a203ba2f" \
+ cluster-infrastructure="openais" \
+ expected-quorum-votes="2" \
+ stonith-enabled="false" \
+# crm_mon
============
Last updated: Fri Aug 28 15:23:48 2009
Stack: openais
Current DC: pcmk-1 - partition with quorum
Version: 1.1.5-bdd89e69ba545404d02445be1f3d72e6a203ba2f
2 Nodes configured, 2 expected votes
1 Resources configured.
============
Online: [ pcmk-1 pcmk-2 ]
-ClusterIP (ocf::heartbeat:IPaddr): Started pcmk-1
-
-
-
-
- Perform a Failover
-
- Being a high-availability cluster, we should test failover of our new resource before moving on.
-
-
- First, find the node on which the IP address is running.
-
-
-
-[root@pcmk-1 ~]# crm resource status ClusterIP
+ClusterIP (ocf::heartbeat:IPaddr): Started pcmk-1
+
+
+Perform a Failover
+Being a high-availability cluster, we should test failover of our new
+resource before moving on.
+First, find the node on which the IP address is running.
+# crm resource status ClusterIP
resource ClusterIP is running on: pcmk-1
-[root@pcmk-1 ~]#
-
-
- Shut down Pacemaker and Corosync on that machine.
-
-
-
-[root@pcmk-1 ~]# ssh pcmk-1 -- /etc/init.d/pacemaker stop
-Signaling Pacemaker Cluster Manager to terminate: [ OK ]
-Waiting for cluster services to unload:. [ OK ]
-[root@pcmk-1 ~]# ssh pcmk-1 -- /etc/init.d/corosync stop
-Stopping Corosync Cluster Engine (corosync): [ OK ]
-Waiting for services to unload: [ OK ]
-[root@pcmk-1 ~]#
-
-
- Once Corosync is no longer running, go to the other node and check the cluster status with crm_mon.
-
-
-
-[root@pcmk-2 ~]# crm_mon
+#
+Shut down Pacemaker and Corosync on that machine.
+# ssh pcmk-1 -- /etc/init.d/pacemaker stop
+Signaling Pacemaker Cluster Manager to terminate: [ OK ]
+Waiting for cluster services to unload:. [ OK ]
+# ssh pcmk-1 -- /etc/init.d/corosync stop
+Stopping Corosync Cluster Engine (corosync): [ OK ]
+Waiting for services to unload: [ OK ]
+#
+Once Corosync is no longer running, go to the other node and check the
+cluster status with crm_mon.
+# crm_mon
============
Last updated: Fri Aug 28 15:27:35 2009
Stack: openais
-Current DC: pcmk-2 - partition WITHOUT quorum
+Current DC: pcmk-2 - partition WITHOUT quorum
Version: 1.1.5-bdd89e69ba545404d02445be1f3d72e6a203ba2f
2 Nodes configured, 2 expected votes
1 Resources configured.
============
-Online: [ pcmk-2 ]
-OFFLINE: [ pcmk-1 ]
-
-
- There are three things to notice about the cluster’s current state. The first is that, as expected, pcmk-1 is now offline. However we can also see that ClusterIP isn’t running anywhere!
-
-
- Quorum and Two-Node Clusters
-
- This is because the cluster no longer has quorum, as can be seen by the text "partition WITHOUT quorum" (emphasised green) in the output above. In order to reduce the possibility of data corruption, Pacemaker’s default behavior is to stop all resources if the cluster does not have quorum.
-
-
- A cluster is said to have quorum when more than half the known or expected nodes are online, or for the mathematically inclined, whenever the following equation is true:
-
-
- total_nodes < 2 * active_nodes
-
-
- Therefore a two-node cluster only has quorum when both nodes are running, which is no longer the case for our cluster. This would normally make the creation of a two-node cluster pointless
-
- Actually some would argue that two-node clusters are always pointless, but that is an argument for another time.
-
- , however it is possible to control how Pacemaker behaves when quorum is lost. In particular, we can tell the cluster to simply ignore quorum altogether.
-
-
-
-[root@pcmk-1 ~]# crm configure property no-quorum-policy=ignore
-[root@pcmk-1 ~]# crm configure show
+Online: [ pcmk-2 ]
+OFFLINE: [ pcmk-1 ]
+There are three things to notice about the cluster’s current
+state. The first is that, as expected, pcmk-1 is now offline. However
+we can also see that ClusterIP isn’t running anywhere!
+
+Quorum and Two-Node Clusters
+This is because the cluster no longer has quorum, as can be seen by
+the text "partition WITHOUT quorum" (emphasised green) in the output
+above. In order to reduce the possibility of data corruption,
+Pacemaker’s default behavior is to stop all resources if the cluster
+does not have quorum.
+A cluster is said to have quorum when more than half the known or
+expected nodes are online, or for the mathematically inclined,
+whenever the following equation is true:
+total_nodes < 2 * active_nodes
+Therefore a two-node cluster only has quorum when both nodes are
+running, which is no longer the case for our cluster. This would
+normally make the creation of a two-node cluster pointless
+Actually some would argue that two-node clusters are always pointless, but that is an argument for another time
+, however it is possible to control how Pacemaker behaves when quorum
+is lost. In particular, we can tell the cluster to simply ignore
+quorum altogether.
+# crm configure property no-quorum-policy=ignore
+# crm configure show
node pcmk-1
node pcmk-2
primitive ClusterIP ocf:heartbeat:IPaddr2 \
- params ip="192.168.122.101" cidr_netmask="32" \
- op monitor interval="30s"
+ params ip="192.168.122.101" cidr_netmask="32" \
+ op monitor interval="30s"
property $id="cib-bootstrap-options" \
- dc-version="1.1.5-bdd89e69ba545404d02445be1f3d72e6a203ba2f" \
- cluster-infrastructure="openais" \
- expected-quorum-votes="2" \
- stonith-enabled="false" \
- no-quorum-policy="ignore"
-
-
- After a few moments, the cluster will start the IP address on the remaining node. Note that the cluster still does not have quorum.
-
-
-
-[root@pcmk-2 ~]# crm_mon
+ dc-version="1.1.5-bdd89e69ba545404d02445be1f3d72e6a203ba2f" \
+ cluster-infrastructure="openais" \
+ expected-quorum-votes="2" \
+ stonith-enabled="false" \
+ no-quorum-policy="ignore"
+After a few moments, the cluster will start the IP address on the
+remaining node. Note that the cluster still does not have quorum.
+# crm_mon
============
Last updated: Fri Aug 28 15:30:18 2009
Stack: openais
-Current DC: pcmk-2 - partition WITHOUT quorum
+Current DC: pcmk-2 - partition WITHOUT quorum
Version: 1.1.5-bdd89e69ba545404d02445be1f3d72e6a203ba2f
2 Nodes configured, 2 expected votes
1 Resources configured.
============
Online: [ pcmk-2 ]
-OFFLINE: [ pcmk-1 ]
-
-ClusterIP (ocf::heartbeat:IPaddr): Started pcmk-2
-
-
- Now simulate node recovery by restarting the cluster stack on pcmk-1 and check the cluster’s status.
-
-
-
-[root@pcmk-1 ~]# /etc/init.d/corosync start
-Starting Corosync Cluster Engine (corosync): [ OK ]
-[root@pcmk-1 ~]# /etc/init.d/pacemaker start
-Starting Pacemaker Cluster Manager: [ OK ]
-[root@pcmk-1 ~]# crm_mon
+OFFLINE: [ pcmk-1 ]
+ClusterIP (ocf::heartbeat:IPaddr): Started pcmk-2
+Now simulate node recovery by restarting the cluster stack on pcmk-1 and
+check the cluster’s status.
+# /etc/init.d/corosync start
+Starting Corosync Cluster Engine (corosync): [ OK ]
+# /etc/init.d/pacemaker start
+Starting Pacemaker Cluster Manager: [ OK ]
+# crm_mon
============
Last updated: Fri Aug 28 15:32:13 2009
Stack: openais
Current DC: pcmk-2 - partition with quorum
Version: 1.1.5-bdd89e69ba545404d02445be1f3d72e6a203ba2f
2 Nodes configured, 2 expected votes
1 Resources configured.
============
-Online: [ pcmk-1 pcmk-2 ]
+Online: [ pcmk-1 pcmk-2 ]
-ClusterIP (ocf::heartbeat:IPaddr): Started pcmk-1
-
-
- Here we see something that some may consider surprising, the IP is back running at its original location!
-
-
-
-
- Prevent Resources from Moving after Recovery
-
- In some circumstances, it is highly desirable to prevent healthy resources from being moved around the cluster. Moving resources almost always requires a period of downtime. For complex services like Oracle databases, this period can be quite long.
-
-
- To address this, Pacemaker has the concept of resource stickiness which controls how much a service prefers to stay running where it is. You may like to think of it as the "cost" of any downtime. By default, Pacemaker assumes there is zero cost associated with moving resources and will do so to achieve "optimal
-
- It should be noted that Pacemaker’s definition of optimal may not always agree with that of a human’s. The order in which Pacemaker processes lists of resources and nodes creates implicit preferences in situations where the administrator has not explicitly specified them.
-
- " resource placement. We can specify a different stickiness for every resource, but it is often sufficient to change the default.
-
-
-
-[root@pcmk-2 ~]# crm configure rsc_defaults resource-stickiness=100
-[root@pcmk-2 ~]# crm configure show
+ClusterIP (ocf::heartbeat:IPaddr): Started pcmk-1
+Here we see something that some may consider surprising: the IP is back
+running at its original location!
+
+
+Prevent Resources from Moving after Recovery
+In some circumstances, it is highly desirable to prevent healthy
+resources from being moved around the cluster. Moving resources almost
+always requires a period of downtime. For complex services like Oracle
+databases, this period can be quite long.
+To address this, Pacemaker has the concept of resource stickiness
+which controls how much a service prefers to stay running where it
+is. You may like to think of it as the "cost" of any downtime. By
+default, Pacemaker assumes there is zero cost associated with moving
+resources and will do so to achieve "optimal"
+It should be noted that Pacemaker’s definition of
+optimal may not always agree with that of a human’s. The order in which
+Pacemaker processes lists of resources and nodes creates implicit
+preferences in situations where the administrator has not explicitly
+specified them
+resource placement. We can specify a different stickiness for every
+resource, but it is often sufficient to change the default.
+# crm configure rsc_defaults resource-stickiness=100
+# crm configure show
node pcmk-1
node pcmk-2
primitive ClusterIP ocf:heartbeat:IPaddr2 \
- params ip="192.168.122.101" cidr_netmask="32" \
- op monitor interval="30s"
+ params ip="192.168.122.101" cidr_netmask="32" \
+ op monitor interval="30s"
property $id="cib-bootstrap-options" \
- dc-version="1.1.5-bdd89e69ba545404d02445be1f3d72e6a203ba2f" \
- cluster-infrastructure="openais" \
- expected-quorum-votes="2" \
- stonith-enabled="false" \
- no-quorum-policy="ignore"
-rsc_defaults $id="rsc-options" \
- resource-stickiness="100"
-
-
- If we now retry the failover test, we see that as expected ClusterIP still moves to pcmk-2 when pcmk-1 is taken offline.
-
-
-
-[root@pcmk-1 ~]# ssh pcmk-1 -- /etc/init.d/pacemaker stop
+ dc-version="1.1.5-bdd89e69ba545404d02445be1f3d72e6a203ba2f" \
+ cluster-infrastructure="openais" \
+ expected-quorum-votes="2" \
+ stonith-enabled="false" \
+ no-quorum-policy="ignore"
+rsc_defaults $id="rsc-options" \
+ resource-stickiness="100"
+If we now retry the failover test, we see that as expected ClusterIP
+still moves to pcmk-2 when pcmk-1 is taken offline.
+# ssh pcmk-1 -- /etc/init.d/pacemaker stop
Signaling Pacemaker Cluster Manager to terminate: [ OK ]
Waiting for cluster services to unload:. [ OK ]
-[root@pcmk-1 ~]# ssh pcmk-1 -- /etc/init.d/corosync stop
-Stopping Corosync Cluster Engine (corosync): [ OK ]
-Waiting for services to unload: [ OK ]
-[root@pcmk-1 ~]# ssh pcmk-2 -- crm_mon -1
+# ssh pcmk-1 -- /etc/init.d/corosync stop
+Stopping Corosync Cluster Engine (corosync): [ OK ]
+Waiting for services to unload: [ OK ]
+# ssh pcmk-2 -- crm_mon -1
============
Last updated: Fri Aug 28 15:39:38 2009
Stack: openais
Current DC: pcmk-2 - partition WITHOUT quorum
Version: 1.1.5-bdd89e69ba545404d02445be1f3d72e6a203ba2f
2 Nodes configured, 2 expected votes
1 Resources configured.
============
Online: [ pcmk-2 ]
-OFFLINE: [ pcmk-1 ]
-
-ClusterIP (ocf::heartbeat:IPaddr): Started pcmk-2
-
-
- However when we bring pcmk-1 back online, ClusterIP now remains running on pcmk-2.
-
-
-
-[root@pcmk-1 ~]# /etc/init.d/corosync start
-Starting Corosync Cluster Engine (corosync): [ OK ]
-[root@pcmk-1 ~]# /etc/init.d/pacemaker start
-Starting Pacemaker Cluster Manager: [ OK ]
-[root@pcmk-1 ~]# crm_mon
+OFFLINE: [ pcmk-1 ]
+ClusterIP (ocf::heartbeat:IPaddr): Started pcmk-2
+However when we bring pcmk-1 back online, ClusterIP now remains running
+on pcmk-2.
+# /etc/init.d/corosync start
+Starting Corosync Cluster Engine (corosync): [ OK ]
+# /etc/init.d/pacemaker start
+Starting Pacemaker Cluster Manager: [ OK ]
+# crm_mon
============
Last updated: Fri Aug 28 15:41:23 2009
Stack: openais
Current DC: pcmk-2 - partition with quorum
Version: 1.1.5-bdd89e69ba545404d02445be1f3d72e6a203ba2f
2 Nodes configured, 2 expected votes
1 Resources configured.
============
-Online: [ pcmk-1 pcmk-2 ]
-
-ClusterIP (ocf::heartbeat:IPaddr): Started pcmk-2
-
-
-
-
+Online: [ pcmk-1 pcmk-2 ]
+ClusterIP (ocf::heartbeat:IPaddr): Started pcmk-2
+
+
-
diff --git a/doc/Clusters_from_Scratch/en-US/Ch-Installation.txt b/doc/Clusters_from_Scratch/en-US/Ch-Installation.txt
index 87125a26d1..0d85c1a78d 100644
--- a/doc/Clusters_from_Scratch/en-US/Ch-Installation.txt
+++ b/doc/Clusters_from_Scratch/en-US/Ch-Installation.txt
@@ -1,754 +1,754 @@
= Installation =
== OS Installation ==
Detailed instructions for installing Fedora are available at
http://docs.fedoraproject.org/install-guide/f13/ in a number of
languages. The abbreviated version is as follows...
Point your browser to http://fedoraproject.org/en/get-fedora-all,
locate the Install Media section and download the install DVD that
matches your hardware.
Burn the disk image to a DVD
footnote:[http://docs.fedoraproject.org/readme-burning-isos/en-US.html]
and boot from it. Or use the image to boot a virtual machine as I have
done here. After clicking through the welcome screen, select your
language and keyboard layout
footnote:[http://docs.fedoraproject.org/install-guide/f13/en-US/html/s1-langselection-x86.html]
.Installation: Good choice
-image::images/f-13.1-welcome.png[Fedora Installation - Welcome]
+image::images/f-13.1-welcome.png[Welcome]
.Fedora Installation - Storage Devices
-image::images/f-13.2-devices.png[Fedora Installation - Storage Devices]
+image::images/f-13.2-devices.png[Storage Devices]
Assign your machine a host name.
footnote:[http://docs.fedoraproject.org/install-guide/f13/en-US/html/sn-networkconfig-fedora.html]
I happen to control the clusterlabs.org domain name, so I will use
that here.
.Fedora Installation - Hostname
-image::images/f-13.3-hostname.png[Fedora Installation - Hostname]
+image::images/f-13.3-hostname.png[Hostname]
You will then be prompted to indicate the machine's physical location
and to supply a root password.
footnote:[http://docs.fedoraproject.org/install-guide/f13/en-US/html/sn-account_configuration.html]
Now select where you want Fedora installed.
footnote:[http://docs.fedoraproject.org/install-guide/f13/en-US/html/s1-diskpartsetup-x86.html]
As I don’t care about any existing data, I will accept the default and
allow Fedora to use the complete drive. However I want to reserve some
space for DRBD, so I'll check the Review and modify partitioning
layout box.
.Fedora Installation - Installation Type
-image::images/f-13.4-partition-overview.png[Fedora Installation - Choose Install Type]
+image::images/f-13.4-partition-overview.png[Choose Install Type]
By default, Fedora will give all the space to the / (aka. root)
partition. We'll take some back so we can use DRBD.
.Fedora Installation - Default Partitioning
-image::images/f-13.5-partition-default.png[Fedora Installation - Default Partitioning]
+image::images/f-13.5-partition-default.png[Default Partitioning]
The finalized partition layout should look something like the diagram
below.
[IMPORTANT]
===========
If you plan on following the DRBD or GFS2 portions of this
guide, you should reserve at least 1Gb of space on each machine from
which to create a shared volume.
===========
.Fedora Installation - Customize Partitioning
image::images/f-13.6-partition-custom.png[Customize Partitioning]
.Fedora Installation - Bootloader
image::images/f-13.7-bootloader.png[Unless you have a strong reason not to, accept the default bootloader location]
Next choose which software should be installed. Change the selection to
Web Server since we plan on using Apache. Don't enable updates yet; we'll
do that (and install any extra software we need) later. After you click
next, Fedora will begin installing.
.Fedora Installation - Software
-image::images/f-13.8-software.png[Fedora Installation - Software selection]
+image::images/f-13.8-software.png[Software selection]
Go grab something to drink, this may take a while
.Fedora Installation - Installing
-image::images/f-13.9-installing.png[Fedora Installation - Installing]
+image::images/f-13.9-installing.png[Installing]
.Fedora Installation - Installation Complete
image::images/f-13.10-install-complete.png[Stage 1, completed]
Once the node reboots, follow the on screen instructions
footnote:[http://docs.fedoraproject.org/install-guide/f13/en-US/html/ch-firstboot.html]
to create a system user and configure the time.
.Fedora Installation - First Boot
image::images/f-13.11-post-welcome.png[First boot]
.Fedora Installation - Create Non-privileged User
image::images/f-13.12-new-user.png[Creating a new user, take note of the password, you'll need it soon]
[NOTE]
=======
It is highly recommended to enable NTP on your cluster nodes. Doing so
ensures all nodes agree on the current time and makes reading log files
significantly easier.
=======
.Fedora Installation - Date and Time
image::images/f-13.13-date-time.png[Date and time]
Click through the next screens until you reach the login window. Click on
the user you created and supply the password you indicated earlier.
.Fedora Installation - Customize Networking
image::images/f-13.14-networking.png[Click here to configure networking]
[IMPORTANT]
===========
Do not accept the default network settings. Cluster
machines should never obtain an ip address via DHCP. Here I will use
the internal addresses for the clusterlab.org network.
===========
.Fedora Installation - Specify Network Preferences
image::images/f-13.15-manual-networking.png[Specify network settings for your machine, never choose DHCP]
.Fedora Installation - Activate Networking
image::images/f-13.16-networking-activate.png[Click the big green button to activate your changes]
.Fedora Installation - Bring up the Terminal
image::images/f-13.17-terminal.png[Down to business, fire up the command line]
[NOTE]
======
That was the last screenshot, from here on in we’re going to be working
from the terminal.
======
== Cluster Software Installation ==
Go to the terminal window you just opened and switch to the super user
(aka. "root") account with the su command. You will need to supply the
password you entered earlier during the installation process.
[source,Bash]
----
[beekhof@pcmk-1 ~]$ su -
Password:
[root@pcmk-1 ~]#
----
[NOTE]
======
Note that the username (the text before the @ symbol) now indicates we’re
running as the super user “root”.
======
[source,Bash]
....
# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
link/ether 00:0c:29:6f:e1:58 brd ff:ff:ff:ff:ff:ff
inet 192.168.9.41/24 brd 192.168.9.255 scope global eth0
inet6 ::20c:29ff:fe6f:e158/64 scope global dynamic
valid_lft 2591667sec preferred_lft 604467sec
inet6 2002:57ae:43fc:0:20c:29ff:fe6f:e158/64 scope global dynamic
valid_lft 2591990sec preferred_lft 604790sec
inet6 fe80::20c:29ff:fe6f:e158/64 scope link
valid_lft forever preferred_lft forever
# ping -c 1 www.google.com
PING www.l.google.com (74.125.39.99) 56(84) bytes of data.
64 bytes from fx-in-f99.1e100.net (74.125.39.99): icmp_seq=1 ttl=56 time=16.7 ms
--- www.l.google.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 20ms
rtt min/avg/max/mdev = 16.713/16.713/16.713/0.000 ms
# /sbin/chkconfig network on
#
....
=== Security Shortcuts ===
To simplify this guide and focus on the aspects directly connected to
clustering, we will now disable the machine’s firewall and SELinux
installation. Both of these actions create significant security issues
and should not be performed on machines that will be exposed to the
outside world.
[IMPORTANT]
===========
TODO: Create an Appendix that deals with (at least) re-enabling the firewall.
===========
[source,Bash]
----
# sed -i.bak "s/SELINUX=enforcing/SELINUX=permissive/g" /etc/selinux/config
# /sbin/chkconfig --del iptables
# service iptables stop
iptables: Flushing firewall rules: [ OK ]
iptables: Setting chains to policy ACCEPT: filter [ OK ]
iptables: Unloading modules: [ OK ]
----
[NOTE]
================
You will need to reboot for the SELinux changes to take effect. Otherwise
you will see something like this when you start corosync:
May 4 19:30:54 pcmk-1 setroubleshoot: SELinux is preventing /usr/sbin/corosync "getattr" access on /. For complete SELinux messages. run sealert -l 6e0d4384-638e-4d55-9aaf-7dac011f29c1
May 4 19:30:54 pcmk-1 setroubleshoot: SELinux is preventing /usr/sbin/corosync "getattr" access on /. For complete SELinux messages. run sealert -l 6e0d4384-638e-4d55-9aaf-7dac011f29c1
================
=== Install the Cluster Software ===
Since version 12, Fedora comes with recent versions of everything you
need, so simply fire up the shell and run:
....
# sed -i.bak "s/enabled=0/enabled=1/g" /etc/yum.repos.d/fedora.repo
# sed -i.bak "s/enabled=0/enabled=1/g" /etc/yum.repos.d/fedora-updates.repo
# yum install -y pacemaker corosync
Loaded plugins: presto, refresh-packagekit
fedora/metalink | 22 kB 00:00
fedora-debuginfo/metalink | 16 kB 00:00
fedora-debuginfo | 3.2 kB 00:00
fedora-debuginfo/primary_db | 1.4 MB 00:04
fedora-source/metalink | 22 kB 00:00
fedora-source | 3.2 kB 00:00
fedora-source/primary_db | 3.0 MB 00:05
updates/metalink | 26 kB 00:00
updates | 2.6 kB 00:00
updates/primary_db | 1.1 kB 00:00
updates-debuginfo/metalink | 18 kB 00:00
updates-debuginfo | 2.6 kB 00:00
updates-debuginfo/primary_db | 1.1 kB 00:00
updates-source/metalink | 25 kB 00:00
updates-source | 2.6 kB 00:00
updates-source/primary_db | 1.1 kB 00:00
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package corosync.x86_64 0:1.2.1-1.fc13 set to be updated
--> Processing Dependency: corosynclib = 1.2.1-1.fc13 for package: corosync-1.2.1-1.fc13.x86_64
--> Processing Dependency: libquorum.so.4(COROSYNC_QUORUM_1.0)(64bit) for package: corosync-1.2.1-1.fc13.x86_64
--> Processing Dependency: libvotequorum.so.4(COROSYNC_VOTEQUORUM_1.0)(64bit) for package: corosync-1.2.1-1.fc13.x86_64
--> Processing Dependency: libcpg.so.4(COROSYNC_CPG_1.0)(64bit) for package: corosync-1.2.1-1.fc13.x86_64
--> Processing Dependency: libconfdb.so.4(COROSYNC_CONFDB_1.0)(64bit) for package: corosync-1.2.1-1.fc13.x86_64
--> Processing Dependency: libcfg.so.4(COROSYNC_CFG_0.82)(64bit) for package: corosync-1.2.1-1.fc13.x86_64
--> Processing Dependency: libpload.so.4(COROSYNC_PLOAD_1.0)(64bit) for package: corosync-1.2.1-1.fc13.x86_64
--> Processing Dependency: liblogsys.so.4()(64bit) for package: corosync-1.2.1-1.fc13.x86_64
--> Processing Dependency: libconfdb.so.4()(64bit) for package: corosync-1.2.1-1.fc13.x86_64
--> Processing Dependency: libcoroipcc.so.4()(64bit) for package: corosync-1.2.1-1.fc13.x86_64
--> Processing Dependency: libcpg.so.4()(64bit) for package: corosync-1.2.1-1.fc13.x86_64
--> Processing Dependency: libquorum.so.4()(64bit) for package: corosync-1.2.1-1.fc13.x86_64
--> Processing Dependency: libcoroipcs.so.4()(64bit) for package: corosync-1.2.1-1.fc13.x86_64
--> Processing Dependency: libvotequorum.so.4()(64bit) for package: corosync-1.2.1-1.fc13.x86_64
--> Processing Dependency: libcfg.so.4()(64bit) for package: corosync-1.2.1-1.fc13.x86_64
--> Processing Dependency: libtotem_pg.so.4()(64bit) for package: corosync-1.2.1-1.fc13.x86_64
--> Processing Dependency: libpload.so.4()(64bit) for package: corosync-1.2.1-1.fc13.x86_64
---> Package pacemaker.x86_64 0:1.1.5-1.fc13 set to be updated
--> Processing Dependency: heartbeat >= 3.0.0 for package: pacemaker-1.1.5-1.fc13.x86_64
--> Processing Dependency: net-snmp >= 5.4 for package: pacemaker-1.1.5-1.fc13.x86_64
--> Processing Dependency: resource-agents for package: pacemaker-1.1.5-1.fc13.x86_64
--> Processing Dependency: cluster-glue for package: pacemaker-1.1.5-1.fc13.x86_64
--> Processing Dependency: libnetsnmp.so.20()(64bit) for package: pacemaker-1.1.5-1.fc13.x86_64
--> Processing Dependency: libcrmcluster.so.1()(64bit) for package: pacemaker-1.1.5-1.fc13.x86_64
--> Processing Dependency: libpengine.so.3()(64bit) for package: pacemaker-1.1.5-1.fc13.x86_64
--> Processing Dependency: libnetsnmpagent.so.20()(64bit) for package: pacemaker-1.1.5-1.fc13.x86_64
--> Processing Dependency: libesmtp.so.5()(64bit) for package: pacemaker-1.1.5-1.fc13.x86_64
--> Processing Dependency: libstonithd.so.1()(64bit) for package: pacemaker-1.1.5-1.fc13.x86_64
--> Processing Dependency: libhbclient.so.1()(64bit) for package: pacemaker-1.1.5-1.fc13.x86_64
--> Processing Dependency: libpils.so.2()(64bit) for package: pacemaker-1.1.5-1.fc13.x86_64
--> Processing Dependency: libpe_status.so.2()(64bit) for package: pacemaker-1.1.5-1.fc13.x86_64
--> Processing Dependency: libnetsnmpmibs.so.20()(64bit) for package: pacemaker-1.1.5-1.fc13.x86_64
--> Processing Dependency: libnetsnmphelpers.so.20()(64bit) for package: pacemaker-1.1.5-1.fc13.x86_64
--> Processing Dependency: libcib.so.1()(64bit) for package: pacemaker-1.1.5-1.fc13.x86_64
--> Processing Dependency: libccmclient.so.1()(64bit) for package: pacemaker-1.1.5-1.fc13.x86_64
--> Processing Dependency: libstonith.so.1()(64bit) for package: pacemaker-1.1.5-1.fc13.x86_64
--> Processing Dependency: liblrm.so.2()(64bit) for package: pacemaker-1.1.5-1.fc13.x86_64
--> Processing Dependency: libtransitioner.so.1()(64bit) for package: pacemaker-1.1.5-1.fc13.x86_64
--> Processing Dependency: libpe_rules.so.2()(64bit) for package: pacemaker-1.1.5-1.fc13.x86_64
--> Processing Dependency: libcrmcommon.so.2()(64bit) for package: pacemaker-1.1.5-1.fc13.x86_64
--> Processing Dependency: libplumb.so.2()(64bit) for package: pacemaker-1.1.5-1.fc13.x86_64
--> Running transaction check
---> Package cluster-glue.x86_64 0:1.0.2-1.fc13 set to be updated
--> Processing Dependency: perl-TimeDate for package: cluster-glue-1.0.2-1.fc13.x86_64
--> Processing Dependency: libOpenIPMIutils.so.0()(64bit) for package: cluster-glue-1.0.2-1.fc13.x86_64
--> Processing Dependency: libOpenIPMIposix.so.0()(64bit) for package: cluster-glue-1.0.2-1.fc13.x86_64
--> Processing Dependency: libopenhpi.so.2()(64bit) for package: cluster-glue-1.0.2-1.fc13.x86_64
--> Processing Dependency: libOpenIPMI.so.0()(64bit) for package: cluster-glue-1.0.2-1.fc13.x86_64
---> Package cluster-glue-libs.x86_64 0:1.0.2-1.fc13 set to be updated
---> Package corosynclib.x86_64 0:1.2.1-1.fc13 set to be updated
--> Processing Dependency: librdmacm.so.1(RDMACM_1.0)(64bit) for package: corosynclib-1.2.1-1.fc13.x86_64
--> Processing Dependency: libibverbs.so.1(IBVERBS_1.0)(64bit) for package: corosynclib-1.2.1-1.fc13.x86_64
--> Processing Dependency: libibverbs.so.1(IBVERBS_1.1)(64bit) for package: corosynclib-1.2.1-1.fc13.x86_64
--> Processing Dependency: libibverbs.so.1()(64bit) for package: corosynclib-1.2.1-1.fc13.x86_64
--> Processing Dependency: librdmacm.so.1()(64bit) for package: corosynclib-1.2.1-1.fc13.x86_64
---> Package heartbeat.x86_64 0:3.0.0-0.7.0daab7da36a8.hg.fc13 set to be updated
--> Processing Dependency: PyXML for package: heartbeat-3.0.0-0.7.0daab7da36a8.hg.fc13.x86_64
---> Package heartbeat-libs.x86_64 0:3.0.0-0.7.0daab7da36a8.hg.fc13 set to be updated
---> Package libesmtp.x86_64 0:1.0.4-12.fc12 set to be updated
---> Package net-snmp.x86_64 1:5.5-12.fc13 set to be updated
--> Processing Dependency: libsensors.so.4()(64bit) for package: 1:net-snmp-5.5-12.fc13.x86_64
---> Package net-snmp-libs.x86_64 1:5.5-12.fc13 set to be updated
---> Package pacemaker-libs.x86_64 0:1.1.5-1.fc13 set to be updated
---> Package resource-agents.x86_64 0:3.0.10-1.fc13 set to be updated
--> Processing Dependency: libnet.so.1()(64bit) for package: resource-agents-3.0.10-1.fc13.x86_64
--> Running transaction check
---> Package OpenIPMI-libs.x86_64 0:2.0.16-8.fc13 set to be updated
---> Package PyXML.x86_64 0:0.8.4-17.fc13 set to be updated
---> Package libibverbs.x86_64 0:1.1.3-4.fc13 set to be updated
--> Processing Dependency: libibverbs-driver for package: libibverbs-1.1.3-4.fc13.x86_64
---> Package libnet.x86_64 0:1.1.4-3.fc12 set to be updated
---> Package librdmacm.x86_64 0:1.0.10-2.fc13 set to be updated
---> Package lm_sensors-libs.x86_64 0:3.1.2-2.fc13 set to be updated
---> Package openhpi-libs.x86_64 0:2.14.1-3.fc13 set to be updated
---> Package perl-TimeDate.noarch 1:1.20-1.fc13 set to be updated
--> Running transaction check
---> Package libmlx4.x86_64 0:1.0.1-5.fc13 set to be updated
--> Finished Dependency Resolution
Dependencies Resolved
==========================================================================================
Package Arch Version Repository Size
==========================================================================================
Installing:
corosync x86_64 1.2.1-1.fc13 fedora 136 k
pacemaker x86_64 1.1.5-1.fc13 fedora 543 k
Installing for dependencies:
OpenIPMI-libs x86_64 2.0.16-8.fc13 fedora 474 k
PyXML x86_64 0.8.4-17.fc13 fedora 906 k
cluster-glue x86_64 1.0.2-1.fc13 fedora 230 k
cluster-glue-libs x86_64 1.0.2-1.fc13 fedora 116 k
corosynclib x86_64 1.2.1-1.fc13 fedora 145 k
heartbeat x86_64 3.0.0-0.7.0daab7da36a8.hg.fc13 updates 172 k
heartbeat-libs x86_64 3.0.0-0.7.0daab7da36a8.hg.fc13 updates 265 k
libesmtp x86_64 1.0.4-12.fc12 fedora 54 k
libibverbs x86_64 1.1.3-4.fc13 fedora 42 k
libmlx4 x86_64 1.0.1-5.fc13 fedora 27 k
libnet x86_64 1.1.4-3.fc12 fedora 49 k
librdmacm x86_64 1.0.10-2.fc13 fedora 22 k
lm_sensors-libs x86_64 3.1.2-2.fc13 fedora 37 k
net-snmp x86_64 1:5.5-12.fc13 fedora 295 k
net-snmp-libs x86_64 1:5.5-12.fc13 fedora 1.5 M
openhpi-libs x86_64 2.14.1-3.fc13 fedora 135 k
pacemaker-libs x86_64 1.1.5-1.fc13 fedora 264 k
perl-TimeDate noarch 1:1.20-1.fc13 fedora 42 k
resource-agents x86_64 3.0.10-1.fc13 fedora 357 k
Transaction Summary
=========================================================================================
Install 21 Package(s)
Upgrade 0 Package(s)
Total download size: 5.7 M
Installed size: 20 M
Downloading Packages:
Setting up and reading Presto delta metadata
updates-testing/prestodelta | 164 kB 00:00
fedora/prestodelta | 150 B 00:00
Processing delta metadata
Package(s) data still to download: 5.7 M
(1/21): OpenIPMI-libs-2.0.16-8.fc13.x86_64.rpm | 474 kB 00:00
(2/21): PyXML-0.8.4-17.fc13.x86_64.rpm | 906 kB 00:01
(3/21): cluster-glue-1.0.2-1.fc13.x86_64.rpm | 230 kB 00:00
(4/21): cluster-glue-libs-1.0.2-1.fc13.x86_64.rpm | 116 kB 00:00
(5/21): corosync-1.2.1-1.fc13.x86_64.rpm | 136 kB 00:00
(6/21): corosynclib-1.2.1-1.fc13.x86_64.rpm | 145 kB 00:00
(7/21): heartbeat-3.0.0-0.7.0daab7da36a8.hg.fc13.x86_64.rpm | 172 kB 00:00
(8/21): heartbeat-libs-3.0.0-0.7.0daab7da36a8.hg.fc13.x86_64.rpm | 265 kB 00:00
(9/21): libesmtp-1.0.4-12.fc12.x86_64.rpm | 54 kB 00:00
(10/21): libibverbs-1.1.3-4.fc13.x86_64.rpm | 42 kB 00:00
(11/21): libmlx4-1.0.1-5.fc13.x86_64.rpm | 27 kB 00:00
(12/21): libnet-1.1.4-3.fc12.x86_64.rpm | 49 kB 00:00
(13/21): librdmacm-1.0.10-2.fc13.x86_64.rpm | 22 kB 00:00
(14/21): lm_sensors-libs-3.1.2-2.fc13.x86_64.rpm | 37 kB 00:00
(15/21): net-snmp-5.5-12.fc13.x86_64.rpm | 295 kB 00:00
(16/21): net-snmp-libs-5.5-12.fc13.x86_64.rpm | 1.5 MB 00:01
(17/21): openhpi-libs-2.14.1-3.fc13.x86_64.rpm | 135 kB 00:00
(18/21): pacemaker-1.1.5-1.fc13.x86_64.rpm | 543 kB 00:00
(19/21): pacemaker-libs-1.1.5-1.fc13.x86_64.rpm | 264 kB 00:00
(20/21): perl-TimeDate-1.20-1.fc13.noarch.rpm | 42 kB 00:00
(21/21): resource-agents-3.0.10-1.fc13.x86_64.rpm | 357 kB 00:00
Total 539 kB/s | 5.7 MB 00:10
warning: rpmts_HdrFromFdno: Header V3 RSA/SHA256 Signature, key ID e8e40fde: NOKEY
fedora/gpgkey | 3.2 kB 00:00 ...
Importing GPG key 0xE8E40FDE "Fedora (13) <fedora@fedoraproject.org>"
...
----

Next, tell Corosync to load Pacemaker as a plugin by creating a service
stanza for it:

[source,Bash]
----
# cat <<-END >>/etc/corosync/service.d/pcmk
service {
# Load the Pacemaker Cluster Resource Manager
name: pacemaker
ver: 1
}
END
----
The final configuration should look something like the sample in
Appendix B, Sample Corosync Configuration.
[IMPORTANT]
===========
When run in version 1 mode, the plugin does not start the Pacemaker
daemons. Instead it just sets up the quorum and messaging interfaces
needed by the rest of the stack. Starting the daemons occurs when the
Pacemaker init script is invoked. This resolves two long-standing issues:
.. Forking inside a multi-threaded process like Corosync causes all
sorts of pain. This has been problematic for Pacemaker as it needs a
number of daemons to be spawned.
.. Corosync was never designed for staggered shutdown - something
previously needed in order to prevent the cluster from leaving
before Pacemaker could stop all active resources.
===========
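As a rough illustration of what this means in practice (these exact
commands are shown again in the verification chapter), Corosync is
started first and the Pacemaker init script is invoked afterwards:

[source,Bash]
----
# /etc/init.d/corosync start
# /etc/init.d/pacemaker start
----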
=== Propagate the Configuration ===
Now we need to copy the changes so far to the other node:
[source,Bash]
----
# for f in /etc/corosync/corosync.conf /etc/corosync/service.d/pcmk /etc/hosts; do scp $f pcmk-2:$f ; done
corosync.conf 100% 1528 1.5KB/s 00:00
hosts 100% 281 0.3KB/s 00:00
#
----
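If you want to confirm that the files arrived intact, a quick check such
as the following (purely illustrative) can be run from pcmk-1:

[source,Bash]
----
# ssh pcmk-2 -- cat /etc/corosync/service.d/pcmk
service {
# Load the Pacemaker Cluster Resource Manager
name: pacemaker
ver: 1
}
----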
diff --git a/doc/Clusters_from_Scratch/en-US/Ch-Intro.txt b/doc/Clusters_from_Scratch/en-US/Ch-Intro.txt
new file mode 100644
index 0000000000..90a1c0e2b0
--- /dev/null
+++ b/doc/Clusters_from_Scratch/en-US/Ch-Intro.txt
@@ -0,0 +1,155 @@
+= Read-Me-First =
+
+== The Scope of this Document ==
+
+Computer clusters can be used to provide highly available services or
+resources. The redundancy of multiple machines is used to guard
+against failures of many types.
+
+This document will walk through the installation and setup of simple
+clusters using the Fedora distribution, version 14.
+
+The clusters described here will use Pacemaker and Corosync to provide
+resource management and messaging. Required packages and modifications
+to their configuration files are described along with the use of the
+Pacemaker command line tool for generating the XML used for cluster
+control.
+
+Pacemaker is a central component and provides the resource management
+required in these systems. This management includes detecting and
+recovering from the failure of various nodes, resources and services
+under its control.
+
+When more in-depth information is required, and for real-world usage,
+please refer to the http://www.clusterlabs.org/doc/[Pacemaker Explained] manual.
+
+== What Is Pacemaker? ==
+
+Pacemaker is a cluster resource manager. It achieves maximum availability
+for your cluster services (aka. resources) by detecting and recovering
+from node and resource-level failures by making use of the messaging and
+membership capabilities provided by your preferred cluster infrastructure
+(either Corosync or Heartbeat).
+
+Pacemaker's key features include:
+
+ * Detection and recovery of node and service-level failures
+ * Storage agnostic, no requirement for shared storage
+ * Resource agnostic, anything that can be scripted can be clustered
+ * Supports STONITH for ensuring data integrity
+ * Supports large and small clusters
+ * Supports both quorate and resource driven clusters
+ * Supports practically any redundancy configuration
+ * Automatically replicated configuration that can be updated from any node
+ * Ability to specify cluster-wide service ordering, colocation and anti-colocation
+ * Support for advanced service types
+ ** Clones: for services which need to be active on multiple nodes
+ ** Multi-state: for services with multiple modes (e.g. master/slave, primary/secondary)
+ * Unified, scriptable, cluster shell
+
+== Pacemaker Architecture ==
+
+At the highest level, the cluster is made up of three pieces:
+
+ * Non-cluster aware components (illustrated in green). These pieces
+ include the resources themselves, scripts that start, stop and
+ monitor them, and also a local daemon that masks the differences
+ between the different standards these scripts implement.
+
+ * Resource management. Pacemaker provides the brain (illustrated in
+ blue) that processes and reacts to events regarding the cluster.
+ These events include nodes joining or leaving the cluster; resource
+ events caused by failures, maintenance, scheduled activities; and
+ other administrative actions. Pacemaker will compute the ideal
+ state of the cluster and plot a path to achieve it after any of
+ these events. This may include moving resources, stopping nodes and
+ even forcing them offline with remote power switches.
+
+ * Low-level infrastructure. Corosync provides reliable messaging,
+ membership and quorum information about the cluster (illustrated in
+ red).
+
+.Conceptual Stack Overview
+image::images/pcmk-overview.png[Conceptual overview of the cluster stack]
+
+When combined with Corosync, Pacemaker also supports popular open
+source cluster filesystems.
+footnote:[Even though Pacemaker also supports Heartbeat, the
+filesystems need to use the stack for messaging and membership and
+Corosync seems to be what they're standardizing on. Technically it
+would be possible for them to support Heartbeat as well, however there
+seems little interest in this.]
+
+Due to recent standardization within the cluster filesystem community,
+they make use of a common distributed lock manager, which uses
+Corosync for its messaging capabilities and Pacemaker for its
+membership (which nodes are up/down) and fencing services.
+
+.The Pacemaker Stack
+image::images/pcmk-stack.png[The Pacemaker stack when running on Corosync]
+
+=== Internal Components ===
+
+Pacemaker itself is composed of four key components (illustrated below in
+the same color scheme as the previous diagram):
+
+ * CIB (aka. Cluster Information Base)
+ * CRMd (aka. Cluster Resource Management daemon)
+ * PEngine (aka. PE or Policy Engine)
+ * STONITHd
+
+.Internal Components
+image::images/pcmk-internals.png[Subsystems of a Pacemaker cluster running on Corosync]
+
+The CIB uses XML to represent both the cluster's configuration and
+the current state of all resources in the cluster. The contents of the CIB
+are automatically kept in sync across the entire cluster and are used
+by the PEngine to compute the ideal state of the cluster and how it
+should be achieved.
+
+This list of instructions is then fed to the DC (Designated
+Co-ordinator). Pacemaker centralizes all cluster decision making by
+electing one of the CRMd instances to act as a master. Should the
+elected CRMd process, or the node it is on, fail... a new one is
+quickly established.
+
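+For example (jumping ahead to the crm_mon tool covered later in this
+document), the node currently acting as DC is reported in the cluster
+status output; the node name here is just for illustration:
+
+[source,Bash]
+----
+# crm_mon -1 | grep "Current DC"
+Current DC: pcmk-1 - partition with quorum
+----
+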
+The DC carries out the PEngine's instructions in the required order by
+passing them to either the LRMd (Local Resource Management daemon) or
+CRMd peers on other nodes via the cluster messaging infrastructure
+(which in turn passes them on to their LRMd process).
+
+The peer nodes all report the results of their operations back to the
+DC and, based on the expected and actual results, will either execute
+any actions that needed to wait for the previous one to complete, or
+abort processing and ask the PEngine to recalculate the ideal cluster
+state based on the unexpected results.
+
+In some cases, it may be necessary to power off nodes in order to
+protect shared data or complete resource recovery. For this Pacemaker
+comes with STONITHd. STONITH is an acronym for
+Shoot-The-Other-Node-In-The-Head and is usually implemented with a
+remote power switch. In Pacemaker, STONITH devices are modeled as
+resources (and configured in the CIB) to enable them to be easily
+monitored for failure, however STONITHd takes care of understanding
+the STONITH topology such that its clients simply request a node be
+fenced and it does the rest.
+
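+As a purely illustrative example (using the stonith_admin tool that
+appears again in the Configure STONITH chapter), an administrator can
+ask for a node to be fenced without knowing anything about the device
+behind it:
+
+[source,Bash]
+----
+# stonith_admin --reboot pcmk-2
+----
+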
+== Types of Pacemaker Clusters ==
+
+Pacemaker makes no assumptions about your environment; this allows it
+to support practically any
+http://en.wikipedia.org/wiki/High-availability_cluster#Node_configurations[redundancy
+configuration] including Active/Active, Active/Passive, N+1, N+M,
+N-to-1 and N-to-N.
+
+In this document we will focus on the setup of a highly available
+Apache web server with an Active/Passive cluster using DRBD and Ext4
+to store data. Then, we will upgrade this cluster to Active/Active
+using GFS2.
+
+.Active/Passive Redundancy
+image::images/pcmk-active-passive.png[Two-node Active/Passive clusters using Pacemaker and DRBD are a cost-effective solution for many High Availability situations.]
+
+
+.N to N Redundancy
+image::images/pcmk-active-active.png[When shared storage is available, every node can potentially be used for failover. Pacemaker can even run multiple copies of services to spread out the workload.]
diff --git a/doc/Clusters_from_Scratch/en-US/Ch-Intro.xml b/doc/Clusters_from_Scratch/en-US/Ch-Intro.xml
deleted file mode 100644
index 173f6e78d1..0000000000
--- a/doc/Clusters_from_Scratch/en-US/Ch-Intro.xml
+++ /dev/null
@@ -1,172 +0,0 @@
- Read-Me-First
-
- The Scope of this Document
-
- Computer clusters can be used to provide highly available services or resources. The redundancy of multiple machines is used to guard against failures of many types.
-
-
- This document will walk through the installation and setup of simple clusters using the Fedora distribution, version 14. The clusters described here will use Pacemaker and Corosync to provide resource management and messaging. Required packages and modifications to their configuration files are described along with the use of the Pacemaker command line tool for generating the XML used for cluster control.
-
-
- Pacemaker is a central component and provides the resource management required in these systems. This management includes detecting and recovering from the failure of various nodes, resources and services under its control.
-
-
- When more in depth information is required and for real world usage, please refer to the Pacemaker Explained manual.
-
-
-
- What Is Pacemaker?
-
- Pacemaker is a cluster resource manager. It achieves maximum availability for your cluster services (aka. resources) by detecting and recovering from node and resource-level failures by making use of the messaging and membership capabilities provided by your preferred cluster infrastructure (either Corosync or Heartbeat).
-
- Pacemaker's key features include:
-
- Detection and recovery of node and service-level failures
- Storage agnostic, no requirement for shared storage
- Resource agnostic, anything that can be scripted can be clustered
- Supports STONITH for ensuring data integrity
- Supports large and small clusters
- Supports both quorate and resource driven clusters
- Supports practically any redundancy configuration
- Automatically replicated configuration that can be updated from any node
- Ability to specify cluster-wide service ordering, colocation and anti-colocation
- Support for advanced service types
-
- Clones: for services which need to be active on multiple nodes
- Multi-state: for services with multiple modes (eg. master/slave, primary/secondary)
-
-
- Unified, scriptable, cluster shell
-
-
-
- Pacemaker Architecture
- At the highest level, the cluster is made up of three pieces:
-
-
-
- Non-cluster aware components (illustrated in green).
-
-
- These pieces include the resources themselves, scripts that start, stop and monitor them, and also a local daemon that masks the differences between the different standards these scripts implement.
-
-
-
- Resource management
-
-
- Pacemaker provides the brain (illustrated in blue) that processes and reacts to events regarding the cluster. These events include nodes joining or leaving the cluster; resource events caused by failures, maintenance, scheduled activities; and other administrative actions. Pacemaker will compute the ideal state of the cluster and plot a path to achieve it after any of these events. This may include moving resources, stopping nodes and even forcing them offline with remote power switches.
-
-
-
- Low level infrastructure
-
-
- Corosync provides reliable messaging, membership and quorum information about the cluster (illustrated in red).
-
-
-
-
-
-
- When combined with Corosync, Pacemaker also supports popular open source cluster filesystems
-
-
- Even though Pacemaker also supports Heartbeat, the filesystems need to use the stack for messaging and membership and Corosync seems to be what they're standardizing on.
- Technically it would be possible for them to support Heartbeat as well, however there seems little interest in this.
-
-
- Due to recent standardization within the cluster filesystem community, they make use of a common distributed lock manager which makes use of Corosync for its messaging capabilities and Pacemaker for its membership (which nodes are up/down) and fencing services.
-
-
-
-
-
- Internal Components
- Pacemaker itself is composed of four key components (illustrated below in the same color scheme as the previous diagram):
-
- CIB (aka. Cluster Information Base)
- CRMd (aka. Cluster Resource Management daemon)
- PEngine (aka. PE or Policy Engine)
- STONITHd
-
-
-
-
-
- The CIB uses XML to represent both the cluster's configuration and current state of all resources in the cluster.
- The contents of the CIB are automatically kept in sync across the entire cluster and are used by the PEngine to compute the ideal state of the cluster and how it should be achieved.
-
-
- This list of instructions is then fed to the DC (Designated Co-ordinator).
- Pacemaker centralizes all cluster decision making by electing one of the CRMd instances to act as a master.
- Should the elected CRMd process, or the node it is on, fail...
- a new one is quickly established.
-
- The DC carries out the PEngine's instructions in the required order by passing them to either the LRMd (Local Resource Management daemon) or CRMd peers on other nodes via the cluster messaging infrastructure (which in turn passes them on to their LRMd process).
- The peer nodes all report the results of their operations back to the DC and based on the expected and actual results, will either execute any actions that needed to wait for the previous one to complete, or abort processing and ask the PEngine to recalculate the ideal cluster state based on the unexpected results.
-
- In some cases, it may be necessary to power off nodes in order to protect shared data or complete resource recovery.
- For this Pacemaker comes with STONITHd.
- STONITH is an acronym for Shoot-The-Other-Node-In-The-Head and is usually implemented with a remote power switch.
- In Pacemaker, STONITH devices are modeled as resources (and configured in the CIB) to enable them to be easily monitored for failure, however STONITHd takes care of understanding the STONITH topology such that its clients simply request a node be fenced and it does the rest.
-
-
-
-
- Types of Pacemaker Clusters
- Pacemaker makes no assumptions about your environment, this allows it to support practically any redundancy configuration including Active/Active, Active/Passive, N+1, N+M, N-to-1 and N-to-N.
-
- In this document we will focus on the setup of a highly available Apache web server with an Active/Passive cluster using DRBD and Ext4 to store data. Then, we will upgrade this cluster to Active/Active using GFS2.
-
-
-
-
-
-
-
-
-
diff --git a/doc/Clusters_from_Scratch/en-US/Ch-Stonith.txt b/doc/Clusters_from_Scratch/en-US/Ch-Stonith.txt
new file mode 100644
index 0000000000..0f0b7e0e5d
--- /dev/null
+++ b/doc/Clusters_from_Scratch/en-US/Ch-Stonith.txt
@@ -0,0 +1,223 @@
+= Configure STONITH =
+
+== What Is STONITH ==
+
+STONITH is an acronym for Shoot-The-Other-Node-In-The-Head and it
+protects your data from being corrupted by rogue nodes or concurrent
+access.
+
+Just because a node is unresponsive doesn't mean it isn't
+accessing your data. The only way to be 100% sure that your data is
+safe is to use STONITH, so we can be certain that the node is truly
+offline before allowing the data to be accessed from another node.
+
+
+STONITH also has a role to play in the event that a clustered service
+cannot be stopped. In this case, the cluster uses STONITH to force the
+whole node offline, thereby making it safe to start the service
+elsewhere.
+
+== What STONITH Device Should You Use ==
+
+It is crucial that the STONITH device can allow the cluster to
+differentiate between a node failure and a network one.
+
+The biggest mistake people make in choosing a STONITH device is to
+use a remote power switch (such as many on-board IPMI controllers) that
+shares power with the node it controls. In such cases, the cluster
+cannot be sure if the node is really offline, or active and suffering
+from a network fault.
+
+Likewise, any device that relies on the machine being active (such as
+SSH-based "devices" used during testing) is inappropriate.
+
+== Configuring STONITH ==
+
+. Find the correct driver: +stonith_admin --list-installed+
+
+. Since every device is different, the parameters needed to configure
+ it will vary. To find out the parameters associated with the device,
+ run: +stonith_admin --metadata --agent type+
+
+ The output should be XML formatted text containing additional
+ parameter descriptions. We will endeavor to make the output more
+ friendly in a later version.
+
+. Enter the shell (+crm+) and create an editable copy of the existing
+  configuration (+cib new stonith+). Then create a fencing resource
+  containing a primitive resource with a class of stonith, a type equal
+  to the agent chosen in step 1, and a parameter for each of the values
+  returned in step 2: +configure primitive ...+
+
+. If the device does not know how to fence nodes based on their uname,
+ you may also need to set the special +pcmk_host_map+ parameter. See
+ +man stonithd+ for details.
+
+. If the device does not support the list command, you may also need
+ to set the special +pcmk_host_list+ and/or +pcmk_host_check+
+ parameters. See +man stonithd+ for details.
+
+. If the device does not expect the victim to be specified with the
+ port parameter, you may also need to set the special
+ +pcmk_host_argument+ parameter. See +man stonithd+ for details.
+
+. Upload it into the CIB from the shell: +cib commit stonith+
+
+. Once the stonith resource is running, you can test it by executing:
+ +stonith_admin --reboot nodename+, although you might want to stop the
+ cluster on that machine first.
+
+
+== Example ==
+
+Assuming we have a chassis containing four nodes and an IPMI device
+active on 10.0.0.1, we would choose the fence_ipmilan driver in step
+2 and obtain the following list of parameters:
+
+.Obtaining a list of STONITH Parameters
+[source,Bash]
+----
+# stonith_admin --metadata -a fence_ipmilan
+----
+[source,XML]
+----
+
+
+
+fence_ipmilan is an I/O Fencing agent which can be used with machines controlled by IPMI. This agent calls support software using ipmitool (http://ipmitool.sf.net/).
+
+To use fence_ipmilan with HP iLO 3 you have to enable lanplus option (lanplus / -P) and increase wait after operation to 4 seconds (power_wait=4 / -T 4)
+
+
+
+
+ IPMI Lan Auth type (md5, password, or none)
+
+
+
+
+ IPMI Lan IP to talk to
+
+
+
+
+ Password (if required) to control power on IPMI device
+
+
+
+
+ Script to retrieve password (if required)
+
+
+
+
+ Use Lanplus
+
+
+
+
+ Username/Login (if required) to control power on IPMI device
+
+
+
+
+ Operation to perform. Valid operations: on, off, reboot, status, list, diag, monitor or metadata
+
+
+
+
+ Timeout (sec) for IPMI operation
+
+
+
+
+ Ciphersuite to use (same as ipmitool -C parameter)
+
+
+
+
+ Method to fence (onoff or cycle)
+
+
+
+
+ Wait X seconds after on/off operation
+
+
+
+
+ Wait X seconds before fencing is started
+
+
+
+
+ Verbose mode
+
+
+
+
+
+
+
+
+
+
+
+
+
+----
+
+from which we would create a STONITH resource fragment that might look
+like this:
+
+.Sample STONITH Resource
+[source,Bash]
+----
+# crm
+crm(live)# cib new stonith
+INFO: stonith shadow CIB created
+crm(stonith)# configure primitive ipmi-fencing stonith::fence_ipmilan \
+ params pcmk_host_list="pcmk-1 pcmk-2" ipaddr=10.0.0.1 login=testuser passwd=abc123 \
+ op monitor interval="60s"
+----
+
+And finally, since we disabled it earlier, we need to re-enable STONITH.
+At this point we should have the following configuration:
+
+[source,Bash]
+----
+crm(stonith)# configure property stonith-enabled="true"
+crm(stonith)# configure show
+node pcmk-1
+node pcmk-2
+primitive WebData ocf:linbit:drbd \
+ params drbd_resource="wwwdata" \
+ op monitor interval="60s"
+primitive WebFS ocf:heartbeat:Filesystem \
+ params device="/dev/drbd/by-res/wwwdata" directory="/var/www/html" fstype="gfs2"
+primitive WebSite ocf:heartbeat:apache \
+ params configfile="/etc/httpd/conf/httpd.conf" \
+ op monitor interval="1min"
+primitive ClusterIP ocf:heartbeat:IPaddr2 \
+ params ip="192.168.122.101" cidr_netmask="32" clusterip_hash="sourceip" \
+ op monitor interval="30s"
+primitive ipmi-fencing stonith::fence_ipmilan \
+ params pcmk_host_list="pcmk-1 pcmk-2" ipaddr=10.0.0.1 login=testuser passwd=abc123 \
+ op monitor interval="60s"
+ms WebDataClone WebData \
+ meta master-max="2" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
+clone WebFSClone WebFS
+clone WebIP ClusterIP \
+ meta globally-unique="true" clone-max="2" clone-node-max="2"
+clone WebSiteClone WebSite
+colocation WebSite-with-WebFS inf: WebSiteClone WebFSClone
+colocation fs_on_drbd inf: WebFSClone WebDataClone:Master
+colocation website-with-ip inf: WebSiteClone WebIP
+order WebFS-after-WebData inf: WebDataClone:promote WebFSClone:start
+order WebSite-after-WebFS inf: WebFSClone WebSiteClone
+order apache-after-ip inf: WebIP WebSiteClone
+property $id="cib-bootstrap-options" \
+ dc-version="1.1.5-bdd89e69ba545404d02445be1f3d72e6a203ba2f" \
+ cluster-infrastructure="openais" \
+ expected-quorum-votes="2" \
+ stonith-enabled="true" \
+ no-quorum-policy="ignore"
+rsc_defaults $id="rsc-options" \
+ resource-stickiness="100"
+crm(stonith)# cib commit stonith
+INFO: commited 'stonith' shadow CIB to the cluster
+crm(stonith)# quit
+bye
+----
diff --git a/doc/Clusters_from_Scratch/en-US/Ch-Stonith.xml b/doc/Clusters_from_Scratch/en-US/Ch-Stonith.xml
index b3fe0df923..5033e4c475 100644
--- a/doc/Clusters_from_Scratch/en-US/Ch-Stonith.xml
+++ b/doc/Clusters_from_Scratch/en-US/Ch-Stonith.xml
@@ -1,235 +1,231 @@
-
-
-%BOOK_ENTITIES;
-]>
-
- Configure STONITH
-
- Why You Need STONITH
- STONITH is an acronym for Shoot-The-Other-Node-In-The-Head and it protects your data from being corrupted by rogue nodes or concurrent access.
-
- Just because a node is unresponsive, this doesn't mean it isn't accessing your data.
- The only way to be 100% sure that your data is safe, is to use STONITH so we can be certain that the node is truly offline, before allowing the data to be accessed from another node.
-
-
- STONITH also has a role to play in the event that a clustered service cannot be stopped.
- In this case, the cluster uses STONITH to force the whole node offline, thereby making it safe to start the service elsewhere.
-
-
-
- What STONITH Device Should You Use
- It is crucial that the STONITH device can allow the cluster to differentiate between a node failure and a network one.
-
- The biggest mistake people make in choosing a STONITH device is to use remote power switch (such as many on-board IMPI controllers) that shares power with the node it controls.
- In such cases, the cluster cannot be sure if the node is really offline, or active and suffering from a network fault.
-
- Likewise, any device that relies on the machine being active (such as SSH-based "devices" used during testing) are inappropriate.
-
-
- Configuring STONITH
-
-
- Find the correct driver: stonith_admin --list-installed
-
-
- Since every device is different, the parameters needed to configure it will vary.
- To find out the parameters associated with the device, run:
- stonith_admin --metadata --agent type
-
- The output should be XML formatted text containing additional parameter descriptions. We
- will endevor to make the output more friendly in a later version.
-
-
- Enter the shell crm
- Create an editable copy of the existing configuration cib new stonith
- Create a fencing resource containing a primitive resource with a class of
- stonith, a type of type and a parameter for each of the values
- returned in step 2: configure primitive ...
-
-
- If the device does not know how to fence nodes based on their uname, you may also need
- to set the special pcmk_host_map parameter. See man
- stonithd for details.
-
-
- If the device does not support the list command, you may also
- need to set the special pcmk_host_list and/or
- pcmk_host_check parameters. See man stonithd
- for details.
-
-
- If the device does not expect the victim to be specified with the
- port parameter, you may also need to set the special
- pcmk_host_argument parameter. See man stonithd
- for details.
-
-
- Upload it into the CIB from the shell: cib commit stonith
-
-
- Once the stonith resource is running, you can test it by executing:
- stonith_admin --reboot nodename. Although
- you might want to stop the cluster on that machine first.
-
-
-
-
- Example
- Assuming we have an chassis containing four nodes and an IPMI device active on 10.0.0.1, then
- we would chose the fence_ipmilan driver in step 2 and obtain the
- following list of parameters
-
- from which we would create a STONITH resource fragment that might look like this
-
- Sample STONITH Resource
-
-[root@pcmk-1 ~]# crm
-crm(live)# cib new stonith
+</resource-agent>
+from which we would create a STONITH resource fragment that might look
+like this
+# crm crm(live)# cib new stonith
INFO: stonith shadow CIB created
-crm(stonith)# configure primitive impi-fencing stonith::fence_ipmilan \
- params pcmk_host_list="pcmk-1 pcmk-2" ipaddr=10.0.0.1 login=testuser passwd=abc123 \
- op monitor interval="60s"
-
-
-
- And finally, since we disabled it earlier, we need to re-enable STONITH.
- At this point we should have the following configuration..
-
-
-crm(stonith)# configure property stonith-enabled="true"
-crm(stonith)# configure show
-node pcmk-1
+crm(stonith)# configure primitive impi-fencing stonith::fence_ipmilan \
+ params pcmk_host_list="pcmk-1 pcmk-2" ipaddr=10.0.0.1 login=testuser passwd=abc123 \
+ op monitor interval="60s"
+And finally, since we disabled it earlier, we need to re-enable STONITH.
+At this point we should have the following configuration..
+crm(stonith)# configure property stonith-enabled="true"crm(stonith)# configure shownode pcmk-1
node pcmk-2
primitive WebData ocf:linbit:drbd \
- params drbd_resource="wwwdata" \
- op monitor interval="60s"
+ params drbd_resource="wwwdata" \
+ op monitor interval="60s"
primitive WebFS ocf:heartbeat:Filesystem \
- params device="/dev/drbd/by-res/wwwdata" directory="/var/www/html" fstype="gfs2"
+ params device="/dev/drbd/by-res/wwwdata" directory="/var/www/html" fstype="gfs2"
primitive WebSite ocf:heartbeat:apache \
- params configfile="/etc/httpd/conf/httpd.conf" \
- op monitor interval="1min"
+ params configfile="/etc/httpd/conf/httpd.conf" \
+ op monitor interval="1min"
primitive ClusterIP ocf:heartbeat:IPaddr2 \
- params ip="192.168.122.101" cidr_netmask="32" clusterip_hash="sourceip" \
- op monitor interval="30s"
-primitive ipmi-fencing stonith::fence_ipmilan \
- params pcmk_host_list="pcmk-1 pcmk-2" ipaddr=10.0.0.1 login=testuser passwd=abc123 \
- op monitor interval="60s"
-ms WebDataClone WebData \
- meta master-max="2" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
+ params ip="192.168.122.101" cidr_netmask="32" clusterip_hash="sourceip" \
+ op monitor interval="30s"primitive ipmi-fencing stonith::fence_ipmilan \ params pcmk_host_list="pcmk-1 pcmk-2" ipaddr=10.0.0.1 login=testuser passwd=abc123 \ op monitor interval="60s"ms WebDataClone WebData \
+ meta master-max="2" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
clone WebFSClone WebFS
-clone WebIP ClusterIP \
- meta globally-unique="true" clone-max="2" clone-node-max="2"
+clone WebIP ClusterIP \
+ meta globally-unique="true" clone-max="2" clone-node-max="2"
clone WebSiteClone WebSite
colocation WebSite-with-WebFS inf: WebSiteClone WebFSClone
colocation fs_on_drbd inf: WebFSClone WebDataClone:Master
colocation website-with-ip inf: WebSiteClone WebIP
order WebFS-after-WebData inf: WebDataClone:promote WebFSClone:start
order WebSite-after-WebFS inf: WebFSClone WebSiteClone
order apache-after-ip inf: WebIP WebSiteClone
property $id="cib-bootstrap-options" \
- dc-version="1.1.5-bdd89e69ba545404d02445be1f3d72e6a203ba2f" \
- cluster-infrastructure="openais" \
- expected-quorum-votes="2" \
- stonith-enabled="true" \
- no-quorum-policy="ignore"
+ dc-version="1.1.5-bdd89e69ba545404d02445be1f3d72e6a203ba2f" \
+ cluster-infrastructure="openais" \
+ expected-quorum-votes="2" \
+ stonith-enabled="true" \
+ no-quorum-policy="ignore"
rsc_defaults $id="rsc-options" \
- resource-stickiness="100"
-crm(stonith)# cib commit stonith
-INFO: commited 'stonith' shadow CIB to the cluster
+ resource-stickiness="100"
+crm(stonith)# cib commit stonithINFO: commited 'stonith' shadow CIB to the cluster
crm(stonith)# quit
-bye
-
-
-
-
+bye
+
+
diff --git a/doc/Clusters_from_Scratch/en-US/Ch-Tools.txt b/doc/Clusters_from_Scratch/en-US/Ch-Tools.txt
new file mode 100644
index 0000000000..8f1d6bc767
--- /dev/null
+++ b/doc/Clusters_from_Scratch/en-US/Ch-Tools.txt
@@ -0,0 +1,123 @@
+= Using Pacemaker Tools =
+
+In the dark past, configuring Pacemaker required the administrator to
+read and write XML. In true UNIX style, there were also a number of
+different commands that specialized in different aspects of querying
+and updating the cluster.
+
+Since Pacemaker 1.0, this has all changed and we have an integrated,
+scriptable, cluster shell that hides all the messy XML scaffolding. It
+even allows you to queue up several changes at once and commit them
+atomically.
+
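+For instance, a set of changes can be staged in a "shadow" copy of the
+configuration and only pushed to the cluster once you are happy with it.
+A minimal sketch of that workflow (the shadow name "test" and the dummy
+property are purely illustrative):
+
+....
+# crm
+crm(live)# cib new test
+INFO: test shadow CIB created
+crm(test)# configure property maintenance-mode=false
+crm(test)# cib commit test
+crm(test)# quit
+bye
+....
+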
+Take some time to familiarize yourself with what it can do.
+
+....
+# crm --help
+usage:
+ crm [-D display_type]
+ crm [-D display_type] args
+ crm [-D display_type] [-f file]
+
+ Use crm without arguments for an interactive session.
+ Supply one or more arguments for a "single-shot" use.
+ Specify with -f a file which contains a script. Use '-' for
+ standard input or use pipe/redirection.
+
+ crm displays cli format configurations using a color scheme
+ and/or in uppercase. Pick one of "color" or "uppercase", or
+ use "-D color,uppercase" if you want colorful uppercase.
+ Get plain output by "-D plain". The default may be set in
+ user preferences (options).
+
+Examples:
+
+ # crm -f stopapp2.cli
+ # crm < stopapp2.cli
+ # crm resource stop global_www
+ # crm status
+....
+
+The primary tool for monitoring the status of the cluster is crm_mon
+(also available as crm status). It can be run in a variety of modes
+and has a number of output options. To find out about any of the tools
+that come with Pacemaker, simply invoke them with the --help option or
+consult the included man pages. Both sets of output are created from
+the tool, and so will always be in sync with each other and the tool
+itself.
+
+Additionally, the Pacemaker version and supported cluster stack(s) are
+available via the --version option.
+
+....
+# crm_mon --version
+Pacemaker 1.1.5
+Written by Andrew Beekhof
+# crm_mon --help
+crm_mon - Provides a summary of cluster's current state.
+
+Outputs varying levels of detail in a number of different formats.
+
+Usage: crm_mon mode [options]
+Options:
+-?, --help This text
+-$, --version Version information
+-V, --verbose Increase debug output
+
+Modes:
+-h, --as-html=value Write cluster status to the named file
+-w, --web-cgi Web mode with output suitable for cgi
+-s, --simple-status Display the cluster status once as a simple one line output (suitable for nagios)
+-S, --snmp-traps=value Send SNMP traps to this station
+-T, --mail-to=value Send Mail alerts to this user. See also --mail-from, --mail-host, --mail-prefix
+
+Display Options:
+-n, --group-by-node Group resources by node
+-r, --inactive Display inactive resources
+-f, --failcounts Display resource fail counts
+-o, --operations Display resource operation history
+-t, --timing-details Display resource operation history with timing details
+
+
+Additional Options:
+-i, --interval=value Update frequency in seconds
+-1, --one-shot Display the cluster status once on the console and exit
+-N, --disable-ncurses Disable the use of ncurses
+-d, --daemonize Run in the background as a daemon
+-p, --pid-file=value (Advanced) Daemon pid file location
+-F, --mail-from=value Mail alerts should come from the named user
+-H, --mail-host=value Mail alerts should be sent via the named host
+-P, --mail-prefix=value Subjects for mail alerts should start with this string
+-E, --external-agent=value A program to run when resource operations take place.
+-e, --external-recipient=value A recipient for your program (assuming you want the program to send something to someone).
+
+Examples:
+
+Display the cluster´s status on the console with updates as they occur:
+ # crm_mon
+
+Display the cluster´s status on the console just once then exit:
+ # crm_mon -1
+
+Display your cluster´s status, group resources by node, and include inactive resources in the list:
+ # crm_mon --group-by-node --inactive
+
+Start crm_mon as a background daemon and have it write the cluster´s status to an HTML file:
+ # crm_mon --daemonize --as-html /path/to/docroot/filename.html
+
+Start crm_mon as a background daemon and have it send email alerts:
+ # crm_mon --daemonize --mail-to user@example.com --mail-host mail.example.com
+
+Start crm_mon as a background daemon and have it send SNMP alerts:
+ # crm_mon --daemonize --snmp-traps snmptrapd.example.com
+
+Report bugs to pacemaker@oss.clusterlabs.org
+....
+
+[NOTE]
+======
+If the SNMP and/or email options are not listed, then Pacemaker was not
+built to support them. This may be by the choice of your distribution or
+the required libraries may not have been available. Please contact
+whoever supplied you with the packages for more details.
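+
+A quick way to check (purely illustrative) is to search the help text
+for the relevant options:
+
+....
+# crm_mon --help | grep -e snmp -e mail
+....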
+======
diff --git a/doc/Clusters_from_Scratch/en-US/Ch-Tools.xml b/doc/Clusters_from_Scratch/en-US/Ch-Tools.xml
deleted file mode 100644
index 23a67cfd1b..0000000000
--- a/doc/Clusters_from_Scratch/en-US/Ch-Tools.xml
+++ /dev/null
@@ -1,121 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
- Using Pacemaker Tools
-
- In the dark past, configuring Pacemaker required the administrator to read and write XML. In true UNIX style, there were also a number of different commands that specialized in different aspects of querying and updating the cluster.
-
-
- Since Pacemaker 1.0, this has all changed and we have an integrated, scriptable, cluster shell that hides all the messy XML scaffolding. It even allows you to queue up several changes at once and commit them atomically.
-
-
- Take some time to familiarize yourself with what it can do.
-
-
-
-[root@pcmk-1 ~]# crm --help
-
-usage:
- crm [-D display_type]
- crm [-D display_type] args
- crm [-D display_type] [-f file]
-
- Use crm without arguments for an interactive session.
- Supply one or more arguments for a "single-shot" use.
- Specify with -f a file which contains a script. Use '-' for
- standard input or use pipe/redirection.
-
- crm displays cli format configurations using a color scheme
- and/or in uppercase. Pick one of "color" or "uppercase", or
- use "-D color,uppercase" if you want colorful uppercase.
- Get plain output by "-D plain". The default may be set in
- user preferences (options).
-
-Examples:
-
- # crm -f stopapp2.cli
- # crm < stopapp2.cli
- # crm resource stop global_www
- # crm status
-
-
- The primary tool for monitoring the status of the cluster is crm_mon (also available as crm status). It can be run in a variety of modes and has a number of output options. To find out about any of the tools that come with Pacemaker, simply invoke them with the --help option or consult the included man pages. Both sets of output are created from the tool, and so will always be in sync with each other and the tool itself.
-
-
- Additionally, the Pacemaker version and supported cluster stack(s) is available via the --version option.
-
-
-
-[root@pcmk-1 ~]# crm_mon --version
-Pacemaker 1.1.5
-Written by Andrew Beekhof
-[root@pcmk-1 ~]# crm_mon --help
-crm_mon - Provides a summary of cluster's current state.
-
-Outputs varying levels of detail in a number of different formats.
-
-Usage: crm_mon mode [options]
-Options:
- -?, --help This text
- -$, --version Version information
- -V, --verbose Increase debug output
-
-Modes:
- -h, --as-html=value Write cluster status to the named file
- -w, --web-cgi Web mode with output suitable for cgi
- -s, --simple-status Display the cluster status once as a simple one line output (suitable for nagios)
- -S, --snmp-traps=value Send SNMP traps to this station
- -T, --mail-to=value Send Mail alerts to this user. See also --mail-from, --mail-host, --mail-prefix
-
-Display Options:
- -n, --group-by-node Group resources by node
- -r, --inactive Display inactive resources
- -f, --failcounts Display resource fail counts
- -o, --operations Display resource operation history
- -t, --timing-details Display resource operation history with timing details
-
-
-Additional Options:
- -i, --interval=value Update frequency in seconds
- -1, --one-shot Display the cluster status once on the console and exit
- -N, --disable-ncurses Disable the use of ncurses
- -d, --daemonize Run in the background as a daemon
- -p, --pid-file=value (Advanced) Daemon pid file location
- -F, --mail-from=value Mail alerts should come from the named user
- -H, --mail-host=value Mail alerts should be sent via the named host
- -P, --mail-prefix=value Subjects for mail alerts should start with this string
- -E, --external-agent=value A program to run when resource operations take place.
- -e, --external-recipient=value A recipient for your program (assuming you want the program to send something to someone).
-
-Examples:
-
-Display the cluster´s status on the console with updates as they occur:
- # crm_mon
-
-Display the cluster´s status on the console just once then exit:
- # crm_mon -1
-
-Display your cluster´s status, group resources by node, and include inactive resources in the list:
- # crm_mon --group-by-node --inactive
-
-Start crm_mon as a background daemon and have it write the cluster´s status to an HTML file:
- # crm_mon --daemonize --as-html /path/to/docroot/filename.html
-
-Start crm_mon as a background daemon and have it send email alerts:
- # crm_mon --daemonize --mail-to user@example.com --mail-host mail.example.com
-
-Start crm_mon as a background daemon and have it send SNMP alerts:
- # crm_mon --daemonize --snmp-traps snmptrapd.example.com
-
-Report bugs to pacemaker@oss.clusterlabs.org
-
-
-
- If the SNMP and/or email options are not listed, then Pacemaker was not built to support them. This may be by the choice of your distribution or the required libraries may not have been available. Please contact whoever supplied you with the packages for more details.
-
-
-
-
diff --git a/doc/Clusters_from_Scratch/en-US/Ch-Verification.txt b/doc/Clusters_from_Scratch/en-US/Ch-Verification.txt
new file mode 100644
index 0000000000..098f8afb51
--- /dev/null
+++ b/doc/Clusters_from_Scratch/en-US/Ch-Verification.txt
@@ -0,0 +1,125 @@
+= Verify Cluster Installation =
+
+== Verify Corosync Installation ==
+
+Start Corosync on the first node
+
+[source,Bash]
+----
+# /etc/init.d/corosync start
+Starting Corosync Cluster Engine (corosync): [ OK ]
+----
+
+Check that the cluster started correctly and that an initial membership
+was able to form:
+
+[source,Bash]
+----
+# grep -e "corosync.*network interface" -e "Corosync Cluster Engine" -e "Successfully read main configuration file" /var/log/messages
+Aug 27 09:05:34 pcmk-1 corosync[1540]: [MAIN ] Corosync Cluster Engine ('1.1.0'): started and ready to provide service.
+Aug 27 09:05:34 pcmk-1 corosync[1540]: [MAIN ] Successfully read main configuration file '/etc/corosync/corosync.conf'.
+# grep TOTEM /var/log/messages
+Aug 27 09:05:34 pcmk-1 corosync[1540]: [TOTEM ] Initializing transport (UDP/IP).
+Aug 27 09:05:34 pcmk-1 corosync[1540]: [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
+Aug 27 09:05:35 pcmk-1 corosync[1540]: [TOTEM ] The network interface [192.168.122.101] is now up.
+Aug 27 09:05:35 pcmk-1 corosync[1540]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
+----
+
+With one node functional, it's now safe to start Corosync on the second
+node as well.
+
+[source,Bash]
+----
+# ssh pcmk-2 -- /etc/init.d/corosync start
+Starting Corosync Cluster Engine (corosync): [ OK ]
+#
+----
+
+Check that the cluster formed correctly:
+
+[source,Bash]
+----
+# grep TOTEM /var/log/messages
+Aug 27 09:05:34 pcmk-1 corosync[1540]: [TOTEM ] Initializing transport (UDP/IP).
+Aug 27 09:05:34 pcmk-1 corosync[1540]: [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
+Aug 27 09:05:35 pcmk-1 corosync[1540]: [TOTEM ] The network interface [192.168.122.101] is now up.
+Aug 27 09:05:35 pcmk-1 corosync[1540]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
+Aug 27 09:12:11 pcmk-1 corosync[1540]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
+----
+
+== Verify Pacemaker Installation ==
+
+Now that we have confirmed that Corosync is functional we can check the
+rest of the stack.
+
+[source,Bash]
+----
+# grep pcmk_startup /var/log/messages
+Aug 27 09:05:35 pcmk-1 corosync[1540]: [pcmk ] info: pcmk_startup: CRM: Initialized
+Aug 27 09:05:35 pcmk-1 corosync[1540]: [pcmk ] Logging: Initialized pcmk_startup
+Aug 27 09:05:35 pcmk-1 corosync[1540]: [pcmk ] info: pcmk_startup: Maximum core file size is: 18446744073709551615
+Aug 27 09:05:35 pcmk-1 corosync[1540]: [pcmk ] info: pcmk_startup: Service: 9
+Aug 27 09:05:35 pcmk-1 corosync[1540]: [pcmk ] info: pcmk_startup: Local hostname: pcmk-1
+----
+
+Now try starting Pacemaker and check that the necessary processes have
+been started:
+
+[source,Bash]
+----
+# /etc/init.d/pacemaker start
+Starting Pacemaker Cluster Manager: [ OK ]
+
+# grep -e pacemakerd.*get_config_opt -e pacemakerd.*start_child -e "Starting Pacemaker" /var/log/messages
+Feb 8 13:31:24 pcmk-1 pacemakerd: [13155]: info: get_config_opt: Found 'pacemaker' for option: name
+Feb 8 13:31:24 pcmk-1 pacemakerd: [13155]: info: get_config_opt: Found '1' for option: ver
+Feb 8 13:31:24 pcmk-1 pacemakerd: [13155]: info: get_config_opt: Defaulting to 'no' for option: use_logd
+Feb 8 13:31:24 pcmk-1 pacemakerd: [13155]: info: get_config_opt: Defaulting to 'no' for option: use_mgmtd
+Feb 8 13:31:24 pcmk-1 pacemakerd: [13155]: info: get_config_opt: Found 'on' for option: debug
+Feb 8 13:31:24 pcmk-1 pacemakerd: [13155]: info: get_config_opt: Found 'yes' for option: to_logfile
+Feb 8 13:31:24 pcmk-1 pacemakerd: [13155]: info: get_config_opt: Found '/var/log/corosync.log' for option: logfile
+Feb 8 13:31:24 pcmk-1 pacemakerd: [13155]: info: get_config_opt: Found 'yes' for option: to_syslog
+Feb 8 13:31:24 pcmk-1 pacemakerd: [13155]: info: get_config_opt: Found 'daemon' for option: syslog_facility
+Feb 8 16:50:38 pcmk-1 pacemakerd: [13990]: info: main: Starting Pacemaker 1.1.5 (Build: 31f088949239+): docbook-manpages publican ncurses trace-logging cman cs-quorum heartbeat corosync snmp libesmtp
+Feb 8 16:50:38 pcmk-1 pacemakerd: [13990]: info: start_child: Forked child 14022 for process stonith-ng
+Feb 8 16:50:38 pcmk-1 pacemakerd: [13990]: info: start_child: Forked child 14023 for process cib
+Feb 8 16:50:38 pcmk-1 pacemakerd: [13990]: info: start_child: Forked child 14024 for process lrmd
+Feb 8 16:50:38 pcmk-1 pacemakerd: [13990]: info: start_child: Forked child 14025 for process attrd
+Feb 8 16:50:38 pcmk-1 pacemakerd: [13990]: info: start_child: Forked child 14026 for process pengine
+Feb 8 16:50:38 pcmk-1 pacemakerd: [13990]: info: start_child: Forked child 14027 for process crmd
+
+# ps axf
+ PID TTY STAT TIME COMMAND
+ 2 ? S< 0:00 [kthreadd]
+ 3 ? S< 0:00 \_ [migration/0]
+... lots of processes ...
+13990 ? S 0:01 pacemakerd
+14022 ? Sa 0:00 \_ /usr/lib64/heartbeat/stonithd
+14023 ? Sa 0:00 \_ /usr/lib64/heartbeat/cib
+14024 ? Sa 0:00 \_ /usr/lib64/heartbeat/lrmd
+14025 ? Sa 0:00 \_ /usr/lib64/heartbeat/attrd
+14026 ? Sa 0:00 \_ /usr/lib64/heartbeat/pengine
+14027 ? Sa 0:00 \_ /usr/lib64/heartbeat/crmd
+----
+
+Next, check for any ERRORs during startup - there shouldn't be any.
+
+[source,Bash]
+----
+# grep ERROR: /var/log/messages | grep -v unpack_resources
+#
+----
+
+Repeat on the other node and display the cluster's status.
+
+[source,Bash]
+----
+# ssh pcmk-2 -- /etc/init.d/pacemaker start
+Starting Pacemaker Cluster Manager: [ OK ]
+# crm_mon
+============
+Last updated: Thu Aug 27 16:54:55 2009
+Stack: openais
+Current DC: pcmk-1 - partition with quorum
+Version: 1.1.5-bdd89e69ba545404d02445be1f3d72e6a203ba2f
+2 Nodes configured, 2 expected votes
+0 Resources configured.
+============
+Online: [ pcmk-1 pcmk-2 ]
+----
diff --git a/doc/Clusters_from_Scratch/en-US/Ch-Verification.xml b/doc/Clusters_from_Scratch/en-US/Ch-Verification.xml
deleted file mode 100644
index bc91bcab38..0000000000
--- a/doc/Clusters_from_Scratch/en-US/Ch-Verification.xml
+++ /dev/null
@@ -1,142 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
- Verify Cluster Installation
-
- Verify Corosync Installation
-
- Start Corosync on the first node
-
-
-
-[root@pcmk-1 ~]# /etc/init.d/corosync start
-Starting Corosync Cluster Engine (corosync): [ OK ]
-
-
- Check the cluster started correctly and that an initial membership was able to form
-
-
-
-[root@pcmk-1 ~]# grep -e "corosync.*network interface" -e "Corosync Cluster Engine" \
--e "Successfully read main configuration file" /var/log/messages
-Aug 27 09:05:34 pcmk-1 corosync[1540]: [MAIN ] Corosync Cluster Engine ('1.1.0'): started and ready to provide service.
-Aug 27 09:05:34 pcmk-1 corosync[1540]: [MAIN ] Successfully read main configuration file '/etc/corosync/corosync.conf'.
-[root@pcmk-1 ~]# grep TOTEM /var/log/messages
-Aug 27 09:05:34 pcmk-1 corosync[1540]: [TOTEM ] Initializing transport (UDP/IP).
-Aug 27 09:05:34 pcmk-1 corosync[1540]: [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
-Aug 27 09:05:35 pcmk-1 corosync[1540]: [TOTEM ] The network interface [192.168.122.101] is now up.
-Aug 27 09:05:35 pcmk-1 corosync[1540]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
-
-
- With one node functional, its now safe to start Corosync on the second node as well.
-
-
-
-[root@pcmk-1 ~]# ssh pcmk-2 -- /etc/init.d/corosync start
-Starting Corosync Cluster Engine (corosync): [ OK ]
-[root@pcmk-1 ~]#
-
-
- Check the cluster formed correctly
-
-
-
-[root@pcmk-1 ~]# grep TOTEM /var/log/messages
-Aug 27 09:05:34 pcmk-1 corosync[1540]: [TOTEM ] Initializing transport (UDP/IP).
-Aug 27 09:05:34 pcmk-1 corosync[1540]: [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
-Aug 27 09:05:35 pcmk-1 corosync[1540]: [TOTEM ] The network interface [192.168.122.101] is now up.
-Aug 27 09:05:35 pcmk-1 corosync[1540]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
-Aug 27 09:12:11 pcmk-1 corosync[1540]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
-
-
-
-
- Verify Pacemaker Installation
-
- Now that we have confirmed that Corosync is functional we can check the rest of the stack.
-
-
-
-[root@pcmk-1 ~]# grep pcmk_startup /var/log/messages
-Aug 27 09:05:35 pcmk-1 corosync[1540]: [pcmk ] info: pcmk_startup: CRM: Initialized
-Aug 27 09:05:35 pcmk-1 corosync[1540]: [pcmk ] Logging: Initialized pcmk_startup
-Aug 27 09:05:35 pcmk-1 corosync[1540]: [pcmk ] info: pcmk_startup: Maximum core file size is: 18446744073709551615
-Aug 27 09:05:35 pcmk-1 corosync[1540]: [pcmk ] info: pcmk_startup: Service: 9
-Aug 27 09:05:35 pcmk-1 corosync[1540]: [pcmk ] info: pcmk_startup: Local hostname: pcmk-1
-
-
- Now try starting Pacemaker and check the necessary processes have been started
-
-
-[root@pcmk-1 ~]# /etc/init.d/pacemaker start
-Starting Pacemaker Cluster Manager: [ OK ]
-
-
-
-[root@pcmk-1 ~]# grep -e pacemakerd.*get_config_opt -e pacemakerd.*start_child \
--e "Starting Pacemaker" /var/log/messages
-Feb 8 13:31:24 pcmk-1 pacemakerd: [13155]: info: get_config_opt: Found 'pacemaker' for option: name
-Feb 8 13:31:24 pcmk-1 pacemakerd: [13155]: info: get_config_opt: Found '1' for option: ver
-Feb 8 13:31:24 pcmk-1 pacemakerd: [13155]: info: get_config_opt: Defaulting to 'no' for option: use_logd
-Feb 8 13:31:24 pcmk-1 pacemakerd: [13155]: info: get_config_opt: Defaulting to 'no' for option: use_mgmtd
-Feb 8 13:31:24 pcmk-1 pacemakerd: [13155]: info: get_config_opt: Found 'on' for option: debug
-Feb 8 13:31:24 pcmk-1 pacemakerd: [13155]: info: get_config_opt: Found 'yes' for option: to_logfile
-Feb 8 13:31:24 pcmk-1 pacemakerd: [13155]: info: get_config_opt: Found '/var/log/corosync.log' for option: logfile
-Feb 8 13:31:24 pcmk-1 pacemakerd: [13155]: info: get_config_opt: Found 'yes' for option: to_syslog
-Feb 8 13:31:24 pcmk-1 pacemakerd: [13155]: info: get_config_opt: Found 'daemon' for option: syslog_facility
-Feb 8 16:50:38 pcmk-1 pacemakerd: [13990]: info: main: Starting Pacemaker 1.1.5 (Build: 31f088949239+): docbook-manpages publican ncurses trace-logging cman cs-quorum heartbeat corosync snmp libesmtp
-Feb 8 16:50:38 pcmk-1 pacemakerd: [13990]: info: start_child: Forked child 14022 for process stonith-ng
-Feb 8 16:50:38 pcmk-1 pacemakerd: [13990]: info: start_child: Forked child 14023 for process cib
-Feb 8 16:50:38 pcmk-1 pacemakerd: [13990]: info: start_child: Forked child 14024 for process lrmd
-Feb 8 16:50:38 pcmk-1 pacemakerd: [13990]: info: start_child: Forked child 14025 for process attrd
-Feb 8 16:50:38 pcmk-1 pacemakerd: [13990]: info: start_child: Forked child 14026 for process pengine
-Feb 8 16:50:38 pcmk-1 pacemakerd: [13990]: info: start_child: Forked child 14027 for process crmd
-
-
-
-[root@pcmk-1 ~]# ps axf
- PID TTY STAT TIME COMMAND
- 2 ? S< 0:00 [kthreadd]
- 3 ? S< 0:00 \_ [migration/0]
-... lots of processes ...
- 13990 ? S 0:01 pacemakerd
- 14022 ? Sa 0:00 \_ /usr/lib64/heartbeat/stonithd
- 14023 ? Sa 0:00 \_ /usr/lib64/heartbeat/cib
- 14024 ? Sa 0:00 \_ /usr/lib64/heartbeat/lrmd
- 14025 ? Sa 0:00 \_ /usr/lib64/heartbeat/attrd
- 14026 ? Sa 0:00 \_ /usr/lib64/heartbeat/pengine
- 14027 ? Sa 0:00 \_ /usr/lib64/heartbeat/crmd
-
-
- Next, check for any ERRORs during startup - there shouldn’t be any.
-
-
-[root@pcmk-1 ~]# grep ERROR: /var/log/messages | grep -v unpack_resources
-[root@pcmk-1 ~]#
-
-
- Repeat on the other node and display the cluster's status.
-
-
-
-[root@pcmk-1 ~]# ssh pcmk-2 -- /etc/init.d/pacemaker start
-Starting Pacemaker Cluster Manager: [ OK ]
-[root@pcmk-1 ~]# crm_mon
-============
-Last updated: Thu Aug 27 16:54:55 2009
-Stack: openais
-Current DC: pcmk-1 - partition with quorum
-Version: 1.1.5-bdd89e69ba545404d02445be1f3d72e6a203ba2f
-2 Nodes configured, 2 expected votes
-0 Resources configured.
-============
-
-Online: [ pcmk-1 pcmk-2 ]
-
-
-
-
-