diff --git a/doc/Pacemaker_Explained/en-US/Ch-Advanced-Options.txt b/doc/Pacemaker_Explained/en-US/Ch-Advanced-Options.txt
index 5ba11dc7d5..d1cf176d70 100644
--- a/doc/Pacemaker_Explained/en-US/Ch-Advanced-Options.txt
+++ b/doc/Pacemaker_Explained/en-US/Ch-Advanced-Options.txt
@@ -1,710 +1,710 @@
 = Advanced Configuration =
 
 [[s-remote-connection]]
 == Connecting from a Remote Machine ==
 indexterm:[Cluster,Remote connection]
 indexterm:[Cluster,Remote administration]
 
 Provided Pacemaker is installed on a machine, it is possible to
 connect to the cluster even if the machine itself is not part of
 that cluster.  To do this, one simply sets up a number of environment
 variables and runs the same commands as when working on a cluster
 node.
 
 .Environment Variables Used to Connect to Remote Instances of the CIB
 [width="95%",cols="1m,1,3<",options="header",align="center"]
 |=========================================================
 
 |Environment Variable
 |Default
 |Description
 
 |CIB_user
 |$USER
 |The user to connect as. Needs to be part of the +hacluster+ group on
  the target host.
  indexterm:[Environment Variable,CIB_user]
 
 |CIB_passwd
 |
 |The user's password. Read from the command line if unset.
  indexterm:[Environment Variable,CIB_passwd]
 
 |CIB_server
 |localhost
 |The host to contact
  indexterm:[Environment Variable,CIB_server]
 
 |CIB_port
 |
 |The port on which to contact the server; required.
  indexterm:[Environment Variable,CIB_port]
 
 |CIB_encrypted
 |TRUE
 |Whether to encrypt network traffic
  indexterm:[Environment Variable,CIB_encrypted]
 
 |=========================================================
 
 So, if *c001n01* is an active cluster node and is listening on port 1234
 for connections, and *someuser* is a member of the *hacluster* group,
 then the following would prompt for *someuser*'s password and return
 the cluster's current configuration:
 
 ----
 # export CIB_port=1234; export CIB_server=c001n01; export CIB_user=someuser;
 # cibadmin -Q
 ----
 
 For security reasons, the cluster does not listen for remote
 connections by default.  If you wish to allow remote access, you need
 to set the +remote-tls-port+ (encrypted) or +remote-clear-port+
 (unencrypted) CIB properties (i.e., those kept in the +cib+ tag, like
 +num_updates+ and +epoch+).
 
 .Extra top-level CIB properties for remote access
 [width="95%",cols="1m,1,3<",options="header",align="center"]
 |=========================================================
 
 |Field
 |Default
 |Description
 
 |remote-tls-port
 |_none_
 |Listen for encrypted remote connections on this port.
  indexterm:[remote-tls-port,Remote Connection Option]
  indexterm:[Remote Connection,Option,remote-tls-port]
 
 |remote-clear-port
 |_none_
 |Listen for plaintext remote connections on this port.
  indexterm:[remote-clear-port,Remote Connection Option]
  indexterm:[Remote Connection,Option,remote-clear-port]
 
 |=========================================================
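 
 For example, to allow encrypted remote connections on port 1234 (an
 illustrative port number), the +remote-tls-port+ property could be set
 with `cibadmin`:
 
 ----
 # cibadmin --modify --xml-text '<cib remote-tls-port="1234"/>'
 ----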
 
 [[s-recurring-start]]
 == Specifying When Recurring Actions are Performed ==
 
 
 By default, recurring actions are scheduled relative to when the
 resource started.  So if your resource was last started at 14:32 and
 you have a backup set to be performed every 24 hours, then the backup
 will always run in the middle of the business day -- hardly
 desirable.
 
 To specify a date and time that the operation should be relative to, set
 the operation's +interval-origin+.  The cluster uses this point to
 calculate the correct +start-delay+ such that the operation will occur
 at _origin + (interval * N)_.
 
 So, if the operation's interval is 24h, its interval-origin is set to
 02:00 and it is currently 14:32, then the cluster would initiate
 the operation with a start delay of 11 hours and 28 minutes.  If the
 resource is moved to another node before 2am, then the operation is
 cancelled.
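 
 As a sketch, an operation matching that scenario might be defined as
 follows (+P1D+ being the ISO 8601 spelling of 24 hours; the id, action
 name, and origin timestamp are illustrative):
 
 [source,XML]
 <op id="my-daily-action" name="custom-action" interval="P1D" interval-origin="2009-02-05T02:00:00"/>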
 
 The value specified for +interval+ and +interval-origin+ can be any
 date/time conforming to the
 http://en.wikipedia.org/wiki/ISO_8601[ISO8601 standard].  By way of
 example, to specify an operation that would run on the first Monday of
 2009 and every Monday after that, you would add:
 
 .Specifying a Base for Recurring Action Intervals
 =====
 [source,XML]
 <op id="my-weekly-action" name="custom-action" interval="P7D" interval-origin="2009-W01-1"/> 
 =====
 
 == Moving Resources ==
 indexterm:[Moving,Resources] 
 indexterm:[Resource,Moving]
 
 === Moving Resources Manually ===
 
 There are primarily two occasions when you would want to move a
 resource from its current location: when the whole node is under
 maintenance, and when a single resource needs to be moved.
 
 ==== Standby Mode ====
 
 Since everything eventually comes down to a score, you could create
 constraints for every resource to prevent them from running on one
 node.  While pacemaker configuration can seem convoluted at times, not even
 we would require this of administrators.
 
 Instead, one can set a special node attribute which tells the cluster
 "don't let anything run here".  There is even a helpful tool for
 querying and setting it, called `crm_standby`.  To check the standby
 status of the current machine, run:
 
 ----
 # crm_standby -G
 ----
 
 A value of +on+ indicates that the node is _not_ able to host any
 resources, while a value of +off+ says that it _can_.
 
 You can also check the status of other nodes in the cluster by
 specifying the `--node` option:
 
 ----
 # crm_standby -G --node sles-2
 ----
 
 To change the current node's standby status, use `-v` instead of `-G`:
 
 ----
 # crm_standby -v on
 ----
 
 Again, you can change another host's value by supplying a hostname with `--node`.
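 
 For example, to put *sles-2* into standby mode from any machine in the
 cluster, one would combine the options above:
 
 ----
 # crm_standby -v on --node sles-2
 ----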
 
 ==== Moving One Resource ====
 
 If only one resource needs to be moved, you could create location
 constraints by hand.  However, once again we provide a user-friendly
 shortcut as part of the `crm_resource` command, which creates and
 modifies the extra constraints for you.  If +Email+ were running on
 +sles-1+ and you wanted it moved to a specific location, the command
 would look something like:
         
 ----
 # crm_resource -M -r Email -H sles-2
 ----
 
 Behind the scenes, the tool will create the following location constraint:
 
 [source,XML]
 <rsc_location rsc="Email" node="sles-2" score="INFINITY"/>
 
 It is important to note that subsequent invocations of `crm_resource
 -M` are not cumulative. So, if you ran these commands
 
 ----
 # crm_resource -M -r Email -H sles-2
 # crm_resource -M -r Email -H sles-3
 ----
 
 then it is as if you had never performed the first command.
 
 To allow the resource to move back again, use:
 
 ----
 # crm_resource -U -r Email
 ----
 
 Note the use of the word _allow_.  The resource can move back to its
 original location but, depending on +resource-stickiness+, it might
 stay where it is.  To be absolutely certain that it moves back to
 +sles-1+, move it there before issuing the call to `crm_resource -U`:
         
 ----
 # crm_resource -M -r Email -H sles-1
 # crm_resource -U -r Email
 ----
 
 Alternatively, if you only care that the resource should be moved from
 its current location, try:
 
 ----
 # crm_resource -B -r Email
 ----
 
 This will instead create a negative constraint, like
 
 [source,XML]
 <rsc_location rsc="Email" node="sles-1" score="-INFINITY"/>
 
 This will achieve the desired effect, but will also have long-term
 consequences.  As the tool will warn you, the creation of a
 +-INFINITY+ constraint will prevent the resource from running on that
 node until `crm_resource -U` is used.  This includes the situation
 where every other cluster node is no longer available!
 
 In some cases, such as when +resource-stickiness+ is set to
 +INFINITY+, it is possible that you will end up with the problem
 described in <<node-score-equal>>.  The tool can detect
 some of these cases and deals with them by creating both
 positive and negative constraints. For example:
 
 +Email+ prefers +sles-1+ with a score of +-INFINITY+
 
 +Email+ prefers +sles-2+ with a score of +INFINITY+
 
 which has the same long-term consequences as discussed earlier.
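 
 Expressed as XML, that combination would be a pair of location
 constraints along these lines (following the style of the constraints
 shown above):
 
 [source,XML]
 -------
 <rsc_location rsc="Email" node="sles-1" score="-INFINITY"/>
 <rsc_location rsc="Email" node="sles-2" score="INFINITY"/>
 -------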
 
 [[s-failure-migration]]
-=== Moving Resources Due to Repeated Failure ===
+=== Moving Resources Due to Failure ===
 
 Normally, if a running resource fails, pacemaker will try to start
 it again on the same node. However, if a resource fails repeatedly,
 it is possible that there is an underlying problem on that node, and you
 might want to try a different node in such a case.
 
 Pacemaker allows you to set your preference via the +migration-threshold+
 resource option.
 footnote:[
 The naming of this option was perhaps unfortunate as it is easily
 confused with live migration, the process of moving a resource from
 one node to another without stopping it.  Xen virtual guests are the
 most common example of resources that can be migrated in this manner.
 ]
 
 Simply define +migration-threshold=pass:[<replaceable>N</replaceable>]+ for a resource and it will
 migrate to a new node after 'N' failures.  There is no threshold defined
 by default.  To determine the resource's current failure status and
 limits, run `crm_mon --failcounts`.
 
 By default, once the threshold has been reached, the troublesome node will no
 longer be allowed to run the failed resource until the administrator
 manually resets the resource's failcount using `crm_failcount` (after
 hopefully first fixing the failure's cause).  Alternatively, failures can be
 expired automatically by setting the +failure-timeout+ option for the resource.
 
 For example, a setting of +migration-threshold=2+ and +failure-timeout=60s+
 would cause the resource to move to a new node after 2 failures, and
 allow it to move back (depending on stickiness and constraint scores) after one
 minute.
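 
 As a sketch, those options would be set as meta-attributes of the
 resource (the resource definition and ids here are illustrative):
 
 [source,XML]
 -------
 <primitive id="my-rsc" class="ocf" provider="pacemaker" type="Dummy">
    <meta_attributes id="my-rsc-meta">
      <nvpair id="my-rsc-migration-threshold" name="migration-threshold" value="2"/>
      <nvpair id="my-rsc-failure-timeout" name="failure-timeout" value="60s"/>
    </meta_attributes>
 </primitive>
 -------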
 
 There are two exceptions to the migration threshold concept:
 when a resource either fails to start or fails to stop.
 
 Start failures cause the failcount to be set to +INFINITY+ and thus always
 cause the resource to move immediately.
 
 Stop failures are slightly different and crucial.  If a resource fails
 to stop and STONITH is enabled, then the cluster will fence the node
 in order to be able to start the resource elsewhere.  If STONITH is
 not enabled, then the cluster has no way to continue and will not try
 to start the resource elsewhere, but will try to stop it again after
 the failure timeout.
 
 [IMPORTANT]
 Please read <<s-rules-recheck>> to understand how timeouts work
 before configuring a +failure-timeout+.
 
 === Moving Resources Due to Connectivity Changes ===
 
 You can configure the cluster to move resources when external connectivity is
 lost in two steps.
 
 ==== Tell Pacemaker to Monitor Connectivity ====
 
 First, add an *ocf:pacemaker:ping* resource to the cluster.  The
 *ping* resource uses the system utility of the same name to test whether
 a list of machines (specified by DNS hostname or IPv4/IPv6 address) are
 reachable, and uses the results to maintain a node attribute called +pingd+
 by default.
 footnote:[
 The attribute name is customizable, in order to allow multiple ping groups to be defined.
 ]
 
 [NOTE]
 ===========
 Older versions of Heartbeat required users to add ping nodes to +ha.cf+, but
 this is no longer required.
 
 Older versions of Pacemaker used a different agent *ocf:pacemaker:pingd* which
 is now deprecated in favor of *ping*. If your version of Pacemaker does not
 contain the *ping* resource agent, download the latest version from
 https://github.com/ClusterLabs/pacemaker/tree/master/extra/resources/ping
 ===========
 
 Normally, the ping resource should run on all cluster nodes, which means that
 you'll need to create a clone.  A template for this can be found below
 along with a description of the most interesting parameters.
           
 .Common Options for a 'ping' Resource
 [width="95%",cols="1m,4<",options="header",align="center"]
 |=========================================================
 
 |Field
 |Description
 
 |dampen
 |The time to wait (dampening) for further changes to occur. Use this
  to prevent a resource from bouncing around the cluster when cluster
  nodes notice the loss of connectivity at slightly different times.
  indexterm:[dampen,Ping Resource Option]
  indexterm:[Ping Resource,Option,dampen]
 
 |multiplier
 |The number of connected ping nodes gets multiplied by this value to
  get a score. Useful when there are multiple ping nodes configured.
  indexterm:[multiplier,Ping Resource Option]
  indexterm:[Ping Resource,Option,multiplier]
 
 |host_list
 |The machines to contact in order to determine the current
  connectivity status. Allowed values include resolvable DNS host
  names, IPv4 and IPv6 addresses.
  indexterm:[host_list,Ping Resource Option]
  indexterm:[Ping Resource,Option,host_list]
 
 |=========================================================
 
 .An example ping cluster resource that checks node connectivity once every minute
 =====
 [source,XML]
 ------------
 <clone id="Connected">
    <primitive id="ping" provider="pacemaker" class="ocf" type="ping">
     <instance_attributes id="ping-attrs">
       <nvpair id="pingd-dampen" name="dampen" value="5s"/>
       <nvpair id="pingd-multiplier" name="multiplier" value="1000"/>
       <nvpair id="pingd-hosts" name="host_list" value="my.gateway.com www.bigcorp.com"/>
     </instance_attributes>
     <operations>
       <op id="ping-monitor-60s" interval="60s" name="monitor"/>
     </operations>
    </primitive>
 </clone>
 ------------
 =====
 
 [IMPORTANT]
 ===========
 You're only half done.  The next section deals with telling Pacemaker
 how to deal with the connectivity status that +ocf:pacemaker:ping+ is
 recording.
 ===========
 
 ==== Tell Pacemaker How to Interpret the Connectivity Data ====
 
 [IMPORTANT]
 ======
 Before attempting the following, make sure you understand
 <<ch-rules>>.
 ======
 
 There are a number of ways to use the connectivity data.
 
 The most common setup is for people to have a single ping
 target (e.g. the service network's default gateway), to prevent the cluster
 from running a resource on any unconnected node.
 
 .Don't run a resource on unconnected nodes
 =====
 [source,XML]
 -------
 <rsc_location id="WebServer-no-connectivity" rsc="Webserver">
    <rule id="ping-exclude-rule" score="-INFINITY" >
     <expression id="ping-exclude" attribute="pingd" operation="not_defined"/>
    </rule>
 </rsc_location>
 -------
 =====
 
 A more complex setup is to have a number of ping targets configured.
 You can require the cluster to only run resources on nodes that can
 connect to all (or a minimum subset) of them.
 
 .Run only on nodes connected to three or more ping targets.
 =====
 [source,XML]
 -------
 <primitive id="ping" provider="pacemaker" class="ocf" type="ping">
 ... <!-- omitting some configuration to highlight important parts -->
       <nvpair id="pingd-multiplier" name="multiplier" value="1000"/>
 ...
 </primitive>
 ...
 <rsc_location id="WebServer-connectivity" rsc="Webserver">
    <rule id="ping-prefer-rule" score="-INFINITY" >
       <expression id="ping-prefer" attribute="pingd" operation="lt" value="3000"/>
    </rule>
 </rsc_location>
 -------
 =====
 
 Alternatively, you can tell the cluster only to _prefer_ nodes with the best
 connectivity.  Just be sure to set +multiplier+ to a value higher than
 that of +resource-stickiness+ (and don't set either of them to
 +INFINITY+).
 
 .Prefer the node with the most connected ping nodes
 =====
 [source,XML]
 -------
 <rsc_location id="WebServer-connectivity" rsc="Webserver">
    <rule id="ping-prefer-rule" score-attribute="pingd" >
     <expression id="ping-prefer" attribute="pingd" operation="defined"/>
    </rule>
 </rsc_location>
 -------
 =====
 
 It is perhaps easier to think of this in terms of the simple
 constraints that the cluster translates it into.  For example, if
 *sles-1* is connected to all five ping nodes but *sles-2* is only
 connected to two, then it would be as if you instead had the following
 constraints in your configuration:
 
 .How the cluster translates the above location constraint
 =====
 [source,XML]
 -------
 <rsc_location id="ping-1" rsc="Webserver" node="sles-1" score="5000"/>
 <rsc_location id="ping-2" rsc="Webserver" node="sles-2" score="2000"/>
 -------
 =====
 
 The advantage is that you don't have to manually update any
 constraints whenever your network connectivity changes.
 
 You can also combine the concepts above into something even more
 complex.  The example below shows how you can prefer the node with the
 most connected ping nodes provided they have connectivity to at least
 three (again assuming that +multiplier+ is set to 1000).
 
 .A more complex example of choosing a location based on connectivity
 =====
 [source,XML]
 -------
 <rsc_location id="WebServer-connectivity" rsc="Webserver">
    <rule id="ping-exclude-rule" score="-INFINITY" >
     <expression id="ping-exclude" attribute="pingd" operation="lt" value="3000"/>
    </rule>
    <rule id="ping-prefer-rule" score-attribute="pingd" >
     <expression id="ping-prefer" attribute="pingd" operation="defined"/>
    </rule>
 </rsc_location>
 -------
 =====
 
 === Migrating Resources ===
 
 Normally, when the cluster needs to move a resource, it fully restarts
 the resource (i.e. stops the resource on the current node
 and starts it on the new node).
 
 However, some types of resources, such as Xen virtual guests, are able to move to
 another location without loss of state (often referred to as live migration
 or hot migration). In pacemaker, this is called resource migration.
 Pacemaker can be configured to migrate a resource when moving it,
 rather than restarting it.
 
 Not all resources are able to migrate (see the Migration Checklist
 below), and those that can won't do so in all situations.
 Conceptually, there are two requirements from which the other
 prerequisites follow:
 
 * The resource must be active and healthy at the old location; and
 * everything required for the resource to run must be available on
   both the old and new locations.
 
 The cluster is able to accommodate both 'push' and 'pull' migration models
 by requiring the resource agent to support two special actions:
 +migrate_to+ (performed on the current location) and +migrate_from+
 (performed on the destination).
 
 In push migration, the process on the current location transfers the
 resource to the new location where it is later activated.  In this
 scenario, most of the work would be done in the +migrate_to+ action
 and, if anything, the activation would occur during +migrate_from+.
 
 Conversely for pull, the +migrate_to+ action is practically empty and
 +migrate_from+ does most of the work, extracting the relevant resource
 state from the old location and activating it.
 
 There is no right or wrong way for a resource agent to implement migration,
 as long as it works.
 
 .Migration Checklist
 * The resource must not be a clone.
 * The resource must use an OCF style agent.
 * The resource must not be in a failed or degraded state.
 * The resource agent must support +migrate_to+ and
   +migrate_from+ actions, and advertise them in its metadata.
 * The resource must have the +allow-migrate+ meta-attribute set to
   +true+ (which is not the default).
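 
 As a sketch, a migratable resource satisfying the last requirement might
 be configured like this (the Xen guest definition is abbreviated and its
 ids are illustrative):
 
 [source,XML]
 -------
 <primitive id="myVM" class="ocf" provider="heartbeat" type="Xen">
    <meta_attributes id="myVM-meta">
      <nvpair id="myVM-allow-migrate" name="allow-migrate" value="true"/>
    </meta_attributes>
 </primitive>
 -------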
 
 If an otherwise migratable resource depends on another resource
 via an ordering constraint, there are special situations in which it will be
 restarted rather than migrated.
 
 For example, if the resource depends on a clone, and at the time the resource
 needs to be moved, the clone has instances that are stopping and instances
 that are starting, then the resource will be restarted.
 The Policy Engine is not yet able to model this
 situation correctly and so takes the safer (if less optimal) path.
 
 In pacemaker 1.1.11 and earlier, a migratable resource will be restarted
 when moving if it directly or indirectly depends on 'any' primitive or group
 resources.
 
 Even in newer versions, if a migratable resource depends on a non-migratable
 resource, and both need to be moved, the migratable resource will be restarted.
 
 [[s-reusing-config-elements]]
 == Reusing Rules, Options and Sets of Operations ==
 
 Sometimes a number of constraints need to use the same set of rules,
 and resources need to set the same options and parameters.  To
 simplify this situation, you can refer to an existing object using an
 +id-ref+ instead of an id.
 
 So if for one resource you have
 
 [source,XML]
 ------
 <rsc_location id="WebServer-connectivity" rsc="Webserver">
    <rule id="ping-prefer-rule" score-attribute="pingd" >
     <expression id="ping-prefer" attribute="pingd" operation="defined"/>
    </rule>
 </rsc_location>
 ------
 
 Then instead of duplicating the rule for all your other resources, you can specify:
 
 .Referencing rules from other constraints
 =====
 [source,XML]
 -------
 <rsc_location id="WebDB-connectivity" rsc="WebDB">
       <rule id-ref="ping-prefer-rule"/>
 </rsc_location>
 -------
 =====
 
 [IMPORTANT]
 ===========
 The cluster will insist that the +rule+ exists somewhere.  Attempting
 to add a reference to a non-existing rule will cause a validation
 failure, as will attempting to remove a +rule+ that is referenced
 elsewhere.
 ===========
 
 The same principle applies for +meta_attributes+ and
 +instance_attributes+ as illustrated in the example below:
 
 .Referencing attributes, options, and operations from other resources
 =====
 [source,XML]
 -------
 <primitive id="mySpecialRsc" class="ocf" type="Special" provider="me">
    <instance_attributes id="mySpecialRsc-attrs" score="1" >
      <nvpair id="default-interface" name="interface" value="eth0"/>
      <nvpair id="default-port" name="port" value="9999"/>
    </instance_attributes>
    <meta_attributes id="mySpecialRsc-options">
      <nvpair id="failure-timeout" name="failure-timeout" value="5m"/>
      <nvpair id="migration-threshold" name="migration-threshold" value="1"/>
      <nvpair id="stickiness" name="resource-stickiness" value="0"/>
    </meta_attributes>
    <operations id="health-checks">
      <op id="health-check" name="monitor" interval="60s"/>
      <op id="health-check" name="monitor" interval="30min"/>
    </operations>
 </primitive>
 <primitive id="myOtherlRsc" class="ocf" type="Other" provider="me">
    <instance_attributes id-ref="mySpecialRsc-attrs"/>
    <meta_attributes id-ref="mySpecialRsc-options"/>
    <operations id-ref="health-checks"/>
 </primitive>
 -------
 =====
 
 == Reloading Services After a Definition Change ==
 
 The cluster automatically detects changes to the definition of
 services it manages.  The normal response is to stop the
 service (using the old definition) and start it again (with the new
 definition).  This works well, but some services are smarter and can
 be told to use a new set of options without restarting.
 
 To take advantage of this capability, the resource agent must:
 
 . Accept the +reload+ operation and perform any required actions.
   _The actions here depend completely on your application!_
 +
 .The DRBD agent's logic for supporting +reload+
 =====
 [source,Bash]
 -------
 case $1 in
     start)
         drbd_start
         ;;
     stop)
         drbd_stop
         ;;
     reload)
         drbd_reload
         ;;
     monitor)
         drbd_monitor
         ;;
     *)
         drbd_usage
         exit $OCF_ERR_UNIMPLEMENTED
         ;;
 esac
 exit $?
 -------
 =====
 . Advertise the +reload+ operation in the +actions+ section of its metadata
 +
 .The DRBD Agent Advertising Support for the +reload+ Operation
 =====
 [source,XML]
 -------
 <?xml version="1.0"?>
   <!DOCTYPE resource-agent SYSTEM "ra-api-1.dtd">
   <resource-agent name="drbd">
     <version>1.1</version>
     
     <longdesc lang="en">
       Master/Slave OCF Resource Agent for DRBD
     </longdesc>
     
     ...
     
     <actions>
       <action name="start"   timeout="240" />
       <action name="reload"  timeout="240" />
       <action name="promote" timeout="90" />
       <action name="demote"  timeout="90" />
       <action name="notify"  timeout="90" />
       <action name="stop"    timeout="100" />
       <action name="meta-data"    timeout="5" />
       <action name="validate-all" timeout="30" />
     </actions>
   </resource-agent>
 -------
 =====
 . Advertise one or more parameters that can take effect using +reload+.
 +
 Any parameter with the +unique+ attribute set to 0 is eligible to be used in this way.
 +
 .Parameter that can be changed using reload
 =====
 [source,XML]
 -------
 <parameter name="drbdconf" unique="0">
     <longdesc lang="en">Full path to the drbd.conf file.</longdesc>
     <shortdesc lang="en">Path to drbd.conf</shortdesc>
     <content type="string" default="${OCF_RESKEY_drbdconf_default}"/>
 </parameter>
 -------
 =====
 
 Once these requirements are satisfied, the cluster will automatically
 know to reload the resource (instead of restarting) when a non-unique
 field changes.
       
 [NOTE]
 ======
 Metadata will not be re-read unless the resource needs to be started. This may
 mean that the resource will be restarted the first time, even though you
 changed a parameter with +unique=0+.
 ======
 
 [NOTE]
 ======
 If both a unique and non-unique field are changed simultaneously, the
 resource will still be restarted.
 ======
diff --git a/doc/Pacemaker_Explained/en-US/Ch-Basics.txt b/doc/Pacemaker_Explained/en-US/Ch-Basics.txt
index 2da8c3ac80..5134e69a2c 100644
--- a/doc/Pacemaker_Explained/en-US/Ch-Basics.txt
+++ b/doc/Pacemaker_Explained/en-US/Ch-Basics.txt
@@ -1,389 +1,393 @@
 = Configuration Basics =
 
 == Configuration Layout ==
 
 The cluster is defined by the Cluster Information Base (CIB),
 which uses XML notation. The simplest CIB, an empty one, looks like this:
 
 .An empty configuration
 ======
 [source,XML]
 -------
 <cib crm_feature_set="3.0.7" validate-with="pacemaker-1.2" admin_epoch="1" epoch="0" num_updates="0">
   <configuration>
     <crm_config/>
     <nodes/>
     <resources/>
     <constraints/>
   </configuration>
   <status/>
 </cib>
 -------
 ======
 
 The empty configuration above contains the major sections that make up a CIB:
 
 * +cib+: The entire CIB is enclosed with a +cib+ tag. Certain fundamental settings
   are defined as attributes of this tag.
 
   ** +configuration+: This section -- the primary focus of this document --
      contains traditional configuration information such as what resources the
      cluster serves and the relationships among them.
 
     *** +crm_config+: cluster-wide configuration options
     *** +nodes+: the machines that host the cluster
     *** +resources+: the services run by the cluster
     *** +constraints+: indications of how resources should be placed
 
   ** +status+: This section contains the history of each resource on each node.
     Based on this data, the cluster can construct the complete current
     state of the cluster.  The authoritative source for this section
     is the local resource manager (lrmd process) on each cluster node, and
     the cluster will occasionally repopulate the entire section.  For this
     reason, it is never written to disk, and administrators are advised
     against modifying it in any way.
 
 In this document, configuration settings will be described as 'properties' or 'options'
 based on how they are defined in the CIB:
 
 * Properties are XML attributes of an XML element.
 * Options are name-value pairs expressed as +nvpair+ child elements of an XML element.
 
 Normally you will use command-line tools that abstract the XML, so the
 distinction will be unimportant; both properties and options are
 cluster settings you can tweak.
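 
 For example, in the following fragment (a minimal sketch), +validate-with+ is
 a property of the +cib+ element, while +stonith-enabled+ is an option:
 
 [source,XML]
 -------
 <cib validate-with="pacemaker-1.2" admin_epoch="1" epoch="0" num_updates="0">
   <configuration>
     <crm_config>
       <cluster_property_set id="cib-bootstrap-options">
         <nvpair id="cib-bootstrap-options-stonith-enabled" name="stonith-enabled" value="1"/>
       </cluster_property_set>
     </crm_config>
   </configuration>
 </cib>
 -------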
 
 == The Current State of the Cluster ==
 
 Before one starts to configure a cluster, it is worth explaining how
 to view the finished product.  For this purpose we have created the
 `crm_mon` utility, which will display the
 current state of an active cluster.  It can show the cluster status by
 node or by resource and can be used in either single-shot or
 dynamically-updating mode.  There are also modes for displaying a list
 of the operations performed (grouped by node and resource) as well as
 information about failures.
 
 Using this tool, you can examine the state of the cluster for
 irregularities and see how it responds when you cause or simulate
 failures.
 
 Details on all the available options can be obtained using the
 `crm_mon --help` command.
       
 .Sample output from crm_mon
 ======
 -------
   ============
   Last updated: Fri Nov 23 15:26:13 2007
   Current DC: sles-3 (2298606a-6a8c-499a-9d25-76242f7006ec)
   3 Nodes configured.
   5 Resources configured.
   ============
   
   Node: sles-1 (1186dc9a-324d-425a-966e-d757e693dc86): online
       192.168.100.181    (heartbeat::ocf:IPaddr):    Started sles-1
       192.168.100.182    (heartbeat:IPaddr):         Started sles-1
       192.168.100.183    (heartbeat::ocf:IPaddr):    Started sles-1
       rsc_sles-1         (heartbeat::ocf:IPaddr):    Started sles-1
       child_DoFencing:2  (stonith:external/vmware):  Started sles-1
   Node: sles-2 (02fb99a8-e30e-482f-b3ad-0fb3ce27d088): standby
   Node: sles-3 (2298606a-6a8c-499a-9d25-76242f7006ec): online
       rsc_sles-2    (heartbeat::ocf:IPaddr):    Started sles-3
       rsc_sles-3    (heartbeat::ocf:IPaddr):    Started sles-3
       child_DoFencing:0    (stonith:external/vmware):    Started sles-3
 -------
 ======
       
 .Sample output from crm_mon -n
 ======
 -------
   ============
   Last updated: Fri Nov 23 15:26:13 2007
   Current DC: sles-3 (2298606a-6a8c-499a-9d25-76242f7006ec)
   3 Nodes configured.
   5 Resources configured.
   ============
 
   Node: sles-1 (1186dc9a-324d-425a-966e-d757e693dc86): online
   Node: sles-2 (02fb99a8-e30e-482f-b3ad-0fb3ce27d088): standby
   Node: sles-3 (2298606a-6a8c-499a-9d25-76242f7006ec): online
 
   Resource Group: group-1
     192.168.100.181    (heartbeat::ocf:IPaddr):    Started sles-1
     192.168.100.182    (heartbeat:IPaddr):        Started sles-1
     192.168.100.183    (heartbeat::ocf:IPaddr):    Started sles-1
   rsc_sles-1    (heartbeat::ocf:IPaddr):    Started sles-1
   rsc_sles-2    (heartbeat::ocf:IPaddr):    Started sles-3
   rsc_sles-3    (heartbeat::ocf:IPaddr):    Started sles-3
   Clone Set: DoFencing
     child_DoFencing:0    (stonith:external/vmware):    Started sles-3
     child_DoFencing:1    (stonith:external/vmware):    Stopped
     child_DoFencing:2    (stonith:external/vmware):    Started sles-1
 -------
 ======
 
 The DC (Designated Controller) node is where all the decisions are
 made, and if the current DC fails a new one is elected from the
 remaining cluster nodes.  The choice of DC is of no significance to an
 administrator beyond the fact that its logs will generally be more
 interesting.
 
 == How Should the Configuration be Updated? ==
 
 There are three basic rules for updating the cluster configuration:
 
  * Rule 1 - Never edit the +cib.xml+ file manually. Ever. I'm not making this up.
  * Rule 2 - Read Rule 1 again.
  * Rule 3 - The cluster will notice if you ignored rules 1 & 2 and refuse to use the configuration.
 
 Now that it is clear how 'not' to update the configuration, we can begin
 to explain how you 'should'.
 
 === Editing the CIB Using XML ===
 
 The most powerful tool for modifying the configuration is the
 +cibadmin+ command.  With +cibadmin+, you can query, add, remove, update
 or replace any part of the configuration. All changes take effect immediately,
 so there is no need to perform a reload-like operation.
 
 The simplest way of using `cibadmin` is to use it to save the current
 configuration to a temporary file, edit that file with your favorite
-text or XML editor, and then upload the revised configuration.
+text or XML editor, and then upload the revised configuration. footnote:[This
+process might appear to risk overwriting changes that happen after the initial
+cibadmin call, but pacemaker will reject any update that is "too old". If the
+CIB is updated in some other fashion after the initial cibadmin, the second
+cibadmin will be rejected because the version number will be too low.]
       
 .Safely using an editor to modify the cluster configuration
 ======
 --------
 # cibadmin --query > tmp.xml
 # vi tmp.xml
 # cibadmin --replace --xml-file tmp.xml
 --------
 ======
 
 Some of the better XML editors can make use of a RELAX NG schema to
 help make sure any changes you make are valid.  The schema describing
 the configuration can be found in +pacemaker.rng+, which may be
 deployed in a location such as +/usr/share/pacemaker+ or
 +/usr/lib/heartbeat+ depending on your operating system and how you
 installed the software.
 
 If you want to modify just one section of the configuration, you can
 query and replace just that section to avoid modifying any others.
       
 .Safely using an editor to modify only the resources section
 ======
 --------
 # cibadmin --query --obj_type resources > tmp.xml
 # vi tmp.xml
 # cibadmin --replace --obj_type resources --xml-file tmp.xml
 --------
 ======
 
 === Quickly Deleting Part of the Configuration ===
 
 Identify the object you wish to delete by XML tag and id. For example,
 you might search the CIB for all STONITH-related configuration:
       
 .Searching for STONITH-related configuration items
 ======
 ----
 # cibadmin -Q | grep stonith
  <nvpair id="cib-bootstrap-options-stonith-action" name="stonith-action" value="reboot"/>
  <nvpair id="cib-bootstrap-options-stonith-enabled" name="stonith-enabled" value="1"/>
  <primitive id="child_DoFencing" class="stonith" type="external/vmware">
  <lrm_resource id="child_DoFencing:0" type="external/vmware" class="stonith">
  <lrm_resource id="child_DoFencing:0" type="external/vmware" class="stonith">
  <lrm_resource id="child_DoFencing:1" type="external/vmware" class="stonith">
  <lrm_resource id="child_DoFencing:0" type="external/vmware" class="stonith">
  <lrm_resource id="child_DoFencing:2" type="external/vmware" class="stonith">
  <lrm_resource id="child_DoFencing:0" type="external/vmware" class="stonith">
  <lrm_resource id="child_DoFencing:3" type="external/vmware" class="stonith">
 ----
 ======
 
 If you wanted to delete the +primitive+ tag with id +child_DoFencing+,
 you would run:
 
 ----
 # cibadmin --delete --crm_xml '<primitive id="child_DoFencing"/>'
 ----
 
 === Updating the Configuration Without Using XML ===
 
 Most tasks can be performed with one of the other command-line
 tools provided with pacemaker, avoiding the need to read or edit XML.
 
 To enable STONITH for example, one could run:
 
 ----
 # crm_attribute --name stonith-enabled --update 1
 ----
 
 Or, to check whether *somenode* is allowed to run resources, there is:
 
 ----
 # crm_standby --get-value --node somenode
 ----
 
 Or, to find the current location of *my-test-rsc*, one can use:
 
 ----
 # crm_resource --locate --resource my-test-rsc
 ----
 
 Examples of using these tools for specific cases will be given throughout this
 document where appropriate.
 
 [NOTE]
 ====
 Old versions of pacemaker (1.0.3 and earlier) had different
 command-line tool syntax. If you are using an older version,
 check your installed manual pages for the proper syntax to use.
 ====
 
 [[s-config-sandboxes]]
 == Making Configuration Changes in a Sandbox ==
 
 Often it is desirable to preview the effects of a series of changes
 before updating the configuration atomically.  For this purpose we
 have created `crm_shadow` which creates a
 "shadow" copy of the configuration and arranges for all the command
 line tools to use it.
 
 To begin, simply invoke `crm_shadow --create` with
 the name of a configuration to create footnote:[Shadow copies are
 identified with a name, making it possible to have more than one.],
 and follow the simple on-screen instructions.
 
 [WARNING]
 ====
 Read this section and the on-screen instructions carefully; failure to do so could
 result in destroying the cluster's active configuration!
 ====
       
       
 .Creating and displaying the active sandbox
 ======
 ----
 # crm_shadow --create test
 Setting up shadow instance
 Type Ctrl-D to exit the crm_shadow shell
 shadow[test]: 
 shadow[test] # crm_shadow --which
 test
 ----
 ======
 
 From this point on, all cluster commands will automatically use the
 shadow copy instead of talking to the cluster's active configuration.
 Once you have finished experimenting, you can either make the
 changes active via the `--commit` option, or discard them using the `--delete`
 option.  Again, be sure to follow the on-screen instructions carefully!
       
 For a full list of `crm_shadow` options and
 commands, invoke it with the `--help` option.
 
 .Using a sandbox to make multiple changes atomically, discard them and verify the real configuration is untouched
 ======
 ----
  shadow[test] # crm_failcount -G -r rsc_c001n01
   name=fail-count-rsc_c001n01 value=0
  shadow[test] # crm_standby -v on -N c001n02
  shadow[test] # crm_standby -G -N c001n02
  name=c001n02 scope=nodes value=on
  shadow[test] # cibadmin --erase --force
  shadow[test] # cibadmin --query
  <cib cib_feature_revision="1" validate-with="pacemaker-1.0" admin_epoch="0" crm_feature_set="3.0" have-quorum="1" epoch="112"
       dc-uuid="c001n01" num_updates="1" cib-last-written="Fri Jun 27 12:17:10 2008">
     <configuration>
        <crm_config/>
        <nodes/>
        <resources/>
        <constraints/>
     </configuration>
     <status/>
  </cib>
   shadow[test] # crm_shadow --delete test --force
   Now type Ctrl-D to exit the crm_shadow shell
   shadow[test] # exit
   # crm_shadow --which
   No active shadow configuration defined
   # cibadmin -Q
  <cib cib_feature_revision="1" validate-with="pacemaker-1.0" admin_epoch="0" crm_feature_set="3.0" have-quorum="1" epoch="110"
        dc-uuid="c001n01" num_updates="551">
     <configuration>
        <crm_config>
           <cluster_property_set id="cib-bootstrap-options">
              <nvpair id="cib-bootstrap-1" name="stonith-enabled" value="1"/>
              <nvpair id="cib-bootstrap-2" name="pe-input-series-max" value="30000"/>
 ----
 ======
 
 [[s-config-testing-changes]]
 == Testing Your Configuration Changes ==
 
 We saw previously how to make a series of changes to a "shadow" copy
 of the configuration.  Before loading the changes back into the
 cluster (e.g. `crm_shadow --commit mytest --force`), it is often
 advisable to simulate the effect of the changes with +crm_simulate+.
 For example:
       
 ----
 # crm_simulate --live-check -VVVVV --save-graph tmp.graph --save-dotfile tmp.dot
 ----
 
 This tool uses the same library as the live cluster to show what it
 would have done given the supplied input.  Its output, in addition to
 a significant amount of logging, is stored in two files +tmp.graph+
 and +tmp.dot+. Both files are representations of the same thing: the
 cluster's response to your changes.
 
 The graph file stores the complete transition from the existing cluster state
 to your desired new state, containing a list of all the actions, their
 parameters and their pre-requisites. Because the transition graph is not
 terribly easy to read, the tool also generates a Graphviz
 footnote:[Graph visualization software. See http://www.graphviz.org/ for details.]
 dot-file representing the same information.
 
 For information on the options supported by `crm_simulate`, use
 its `--help` option.
 
 .Interpreting the Graphviz output
  * Arrows indicate ordering dependencies
  * Dashed arrows indicate dependencies that are not present in the transition graph
  * Actions with a dashed border of any color do not form part of the transition graph
  * Actions with a green border form part of the transition graph
  * Actions with a red border are ones the cluster would like to execute but cannot run
  * Actions with a blue border are ones the cluster does not feel need to be executed
  * Actions with orange text are pseudo/pretend actions that the cluster uses to simplify the graph
  * Actions with black text are sent to the LRM
  * Resource actions have text of the form pass:[<replaceable>rsc</replaceable>]_pass:[<replaceable>action</replaceable>]_pass:[<replaceable>interval</replaceable>] pass:[<replaceable>node</replaceable>]
  * Any action depending on an action with a red border will not be able to execute. 
  * Loops are _really_ bad. Please report them to the development team. 
 
 === Small Cluster Transition ===
 
 image::images/Policy-Engine-small.png["An example transition graph as represented by Graphviz",width="16cm",height="6cm",align="center"]      
 
 In the above example, it appears that a new node, *pcmk-2*, has come
 online and that the cluster is checking to make sure *rsc1*, *rsc2*
 and *rsc3* are not already running there (indicated by the
 *rscN_monitor_0* entries).  Once it has done that, and assuming the resources
 were not active there, it would have liked to stop *rsc1* and *rsc2*
 on *pcmk-1* and move them to *pcmk-2*.  However, there appears to be
 some problem, and the cluster cannot or is not permitted to perform the
 stop actions, which implies it also cannot perform the start actions.
 For some reason, the cluster does not want to start *rsc3* anywhere.
 
 === Complex Cluster Transition ===
 
 image::images/Policy-Engine-big.png["Another, slightly more complex, transition graph that you're not expected to be able to read",width="16cm",height="20cm",align="center"]
 
 == Do I Need to Update the Configuration on All Cluster Nodes? ==
 
 No. Any changes are immediately synchronized to the other active
 members of the cluster.
 
 To reduce bandwidth, the cluster only broadcasts the incremental
 updates that result from your changes and uses MD5 checksums to ensure
 that each copy is completely consistent.
diff --git a/doc/Pacemaker_Explained/en-US/Ch-Multi-site-Clusters.txt b/doc/Pacemaker_Explained/en-US/Ch-Multi-site-Clusters.txt
index dd5e9b8744..dc17610b49 100644
--- a/doc/Pacemaker_Explained/en-US/Ch-Multi-site-Clusters.txt
+++ b/doc/Pacemaker_Explained/en-US/Ch-Multi-site-Clusters.txt
@@ -1,323 +1,331 @@
 = Multi-Site Clusters and Tickets =
 
 Apart from local clusters, Pacemaker also supports multi-site clusters.
 That means you can have multiple, geographically dispersed sites, each with a
 local cluster. Failover between these clusters can be coordinated
 manually by the administrator, or automatically by a higher-level entity called
 a 'Cluster Ticket Registry (CTR)'.
 
 == Challenges for Multi-Site Clusters ==
 
 Typically, multi-site environments are too far apart to support
 synchronous communication and data replication between the sites.
 That leads to significant challenges:
 
 - How do we make sure that a cluster site is up and running?
 
 - How do we make sure that resources are only started once?
 
 - How do we make sure that quorum can be reached between the different
 sites and a split-brain scenario avoided?
 
 - How do we manage failover between sites?
 
 - How do we deal with high latency in case of resources that need to be
 stopped? 
 
 The sections below describe how to meet these challenges.
 
 == Conceptual Overview ==
 
 Multi-site clusters can be considered as "overlay" clusters where
 each cluster site corresponds to a cluster node in a traditional cluster.
 The overlay cluster can be managed by a CTR in order to
 guarantee that the cluster resources will be highly
 available across different cluster sites. This is achieved by using
 'tickets' that are treated as a failover domain between cluster
 sites, in case a site goes down.
 
 The following sections explain the individual components and mechanisms
 that were introduced for multi-site clusters in more detail.
 
 === Ticket ===
 
 Tickets are, essentially, cluster-wide attributes. A ticket grants the
 right to run certain resources on a specific cluster site. Resources can
 be bound to a certain ticket by +rsc_ticket+ constraints. Only if the
 ticket is available at a site can the respective resources be started there.
 Conversely, if the ticket is revoked, the resources depending on that
 ticket must be stopped.
 
 The ticket thus is similar to a 'site quorum', i.e. the permission to
 manage/own resources associated with that site. (One can also think of the
 current +have-quorum+ flag as a special, cluster-wide ticket that is granted in
 case of node majority.)
 
 Tickets can be granted and revoked either manually by administrators
 (which could be the default for classic enterprise clusters), or via
 the automated CTR mechanism described below.
 
 A ticket can only be owned by one site at a time. Initially, none
 of the sites has a ticket. Each ticket must be granted once by the cluster
 administrator. 
 
 The presence or absence of tickets for a site is stored in the CIB as a
 cluster status. With regards to a certain ticket, there are only two states
 for a site: +true+ (the site has the ticket) or +false+ (the site does
 not have the ticket). The absence of a certain ticket (during the initial
 state of the multi-site cluster) is the same as the value +false+.
 
 === Dead Man Dependency ===
 
 A site can only activate resources safely if it can be sure that the
 other site has deactivated them. However after a ticket is revoked, it can
 take a long time until all resources depending on that ticket are stopped
 "cleanly", especially in case of cascaded resources. To cut that process
 short, the concept of a 'Dead Man Dependency' was introduced.
 
 If a dead man dependency is in force and a ticket is revoked from a site, the
 nodes that are hosting dependent resources are fenced. This considerably speeds
 up the recovery process of the cluster and makes sure that resources can be
 migrated more quickly.
 
 This can be configured by specifying a +loss-policy="fence"+ in
 +rsc_ticket+ constraints.
 
 === Cluster Ticket Registry ===
 
 A CTR is a network daemon that automatically handles granting, revoking, and
 timing out tickets (instead of the administrator revoking the ticket somewhere,
 waiting for everything to stop, and then granting it on the desired site).
 
 Pacemaker does not implement its own CTR, but interoperates with external
 software designed for that purpose (similar to how resource and fencing agents
 are not directly part of pacemaker).
 
 Participating clusters run the CTR daemons, which connect to each other, exchange
 information about their connectivity, and vote on which site gets which
 ticket.
 
 A ticket is granted to a site only once the CTR is sure that the ticket
 has been relinquished by the previous owner, implemented via a timer in most
 scenarios. If a site loses connection to its peers, its tickets time out and
 recovery occurs. After the connection timeout plus the recovery timeout has
 passed, the other sites are allowed to re-acquire the ticket and start the
 resources again.
 
 This can also be thought of as a "quorum server", except that it is not
 a single quorum ticket, but several.
 
 === Configuration Replication ===
 
 As usual, the CIB is synchronized within each cluster, but it is 'not' synchronized
 across cluster sites of a multi-site cluster. You have to configure the resources
 that will be highly available across the multi-site cluster for every site
 accordingly.
 
 
 [[s-ticket-constraints]]
 == Configuring Ticket Dependencies ==
 
 The `rsc_ticket` constraint lets you specify the resources depending on a certain
 ticket. Together with the constraint, you can set a `loss-policy` that defines
 what should happen to the respective resources if the ticket is revoked. 
 
 The attribute `loss-policy` can have the following values:
 
 * +fence:+ Fence the nodes that are running the relevant resources.
 
 * +stop:+ Stop the relevant resources.
 
 * +freeze:+ Do nothing to the relevant resources.
 
 * +demote:+ Demote relevant resources that are running in master mode to slave mode. 
 
 
 .Constraint that fences node if +ticketA+ is revoked
 ====
 [source,XML]
 -------
 <rsc_ticket id="rsc1-req-ticketA" rsc="rsc1" ticket="ticketA" loss-policy="fence"/>
 -------
 ====
 
 The example above creates a constraint with the ID +rsc1-req-ticketA+. It
 defines that the resource +rsc1+ depends on +ticketA+ and that the node running
 the resource should be fenced if +ticketA+ is revoked.
 
 If resource +rsc1+ were a multi-state resource (i.e. it could run in master or
 slave mode), you might want to configure that only master mode
 depends on +ticketA+. With the following configuration, +rsc1+ will be
 demoted to slave mode if +ticketA+ is revoked:
 
 .Constraint that demotes +rsc1+ if +ticketA+ is revoked
 ====
 [source,XML]
 -------
 <rsc_ticket id="rsc1-req-ticketA" rsc="rsc1" rsc-role="Master" ticket="ticketA" loss-policy="demote"/>
 -------
 ====
 
 You can create multiple `rsc_ticket` constraints to let multiple resources
 depend on the same ticket. However, `rsc_ticket` also supports resource sets,
 so one can easily list all the resources in one `rsc_ticket` constraint instead.
 
 .Ticket constraint for multiple resources
 ====
 [source,XML]
 -------
 <rsc_ticket id="resources-dep-ticketA" ticket="ticketA" loss-policy="fence">
   <resource_set id="resources-dep-ticketA-0" role="Started">
     <resource_ref id="rsc1"/>
     <resource_ref id="group1"/>
     <resource_ref id="clone1"/>
   </resource_set>
   <resource_set id="resources-dep-ticketA-1" role="Master">
     <resource_ref id="ms1"/>
   </resource_set>
 </rsc_ticket>
 -------
 ====
 
 In the example above, there are two resource sets, so we can list resources
 with different roles in a single +rsc_ticket+ constraint. There's no dependency
 between the two resource sets, and there's no dependency among the
 resources within a resource set. Each of the resources just depends on
 +ticketA+.
 
 Referencing resource templates in +rsc_ticket+ constraints, and even
 referencing them within resource sets, is also supported. 
 
 If you want other resources to depend on further tickets, create as many
 constraints as necessary with +rsc_ticket+.
 
 
 == Managing Multi-Site Clusters ==
 
 === Granting and Revoking Tickets Manually ===
 
 You can grant tickets to sites or revoke them from sites manually.
 If you want to re-distribute a ticket, you should wait for
 the dependent resources to stop cleanly at the previous site before you
 grant the ticket to the new site.
 
 Use the `crm_ticket` command line tool to grant and revoke tickets. 
 
 To grant a ticket to this site:
 -------
 # crm_ticket --ticket ticketA --grant
 -------
 
 To revoke a ticket from this site:
 -------
 # crm_ticket --ticket ticketA --revoke
 -------
 
 [IMPORTANT]
 ====
 If you are managing tickets manually, use the `crm_ticket` command with
 great care, because it cannot check whether the same ticket is already
 granted elsewhere. 
 ====
 
 
 === Granting and Revoking Tickets via a Cluster Ticket Registry ===
 
 We will use https://github.com/ClusterLabs/booth[Booth] here as an example of
 software that can be used with pacemaker as a Cluster Ticket Registry.  Booth
 implements the
 http://en.wikipedia.org/wiki/Paxos_%28computer_science%29['Paxos'] lease
 algorithm to guarantee the distributed consensus among different
 cluster sites, and manages the ticket distribution (and thus the failover
 process between sites).
 
 Each of the participating clusters and 'arbitrators' runs the Booth daemon
 `boothd`.
 
 An 'arbitrator' is the multi-site equivalent of a quorum-only node in a local
 cluster. If you have a setup with an even number of sites,
 you need an additional instance to reach consensus about decisions such
 as failover of resources across sites. In this case, add one or more
 arbitrators running at additional sites. Arbitrators are single machines
 that run a booth instance in a special mode. An arbitrator is especially
 important for a two-site scenario, otherwise there is no way for one site
 to distinguish between a network failure between it and the other site, and
 a failure of the other site.
 
 The most common multi-site scenario is probably a multi-site cluster with two
 sites and a single arbitrator on a third site. However, technically, there are
 no limitations with regards to the number of sites and the number of
 arbitrators involved.
 
-Nodes belonging to the same cluster site should be synchronized via NTP. However,
-time synchronization is not required between the individual cluster sites.
-
 `Boothd` at each site connects to its peers running at the other sites and
 exchanges connectivity details. Once a ticket is granted to a site, the
 booth mechanism will manage the ticket automatically: If the site which
 holds the ticket is out of service, the booth daemons will vote which
 of the other sites will get the ticket. To protect against brief
 connection failures, sites that lose the vote (either explicitly or
 implicitly by being disconnected from the voting body) need to
 relinquish the ticket after a time-out. This ensures that a
 ticket will only be re-distributed after it has been relinquished by the
 previous site.  The resources that depend on that ticket will fail over
 to the new site holding the ticket. The nodes that have run the 
 resources before will be treated according to the `loss-policy` you set
 within the `rsc_ticket` constraint.
 
 Before booth can manage a certain ticket within the multi-site cluster,
 you initially need to grant it to a site manually via the `booth` command-line
 tool. After you have initially granted a ticket to a site, `boothd`
 will take over and manage the ticket automatically.  
 
 [IMPORTANT]
 ====
 The `booth` command-line tool can be used to grant, list, or
 revoke tickets and can be run on any machine where `boothd` is running. 
 If you are managing tickets via Booth, use only `booth` for manual
 intervention, not `crm_ticket`. That ensures the same ticket
 will only be owned by one cluster site at a time.
 ====
 
+==== Booth Requirements ====
+
+* All clusters that will be part of the multi-site cluster must be based on
+  Pacemaker.
+
+* Booth must be installed on all cluster nodes and on all arbitrators that will
+  be part of the multi-site cluster. 
+
+* Nodes belonging to the same cluster site should be synchronized via NTP. However,
+  time synchronization is not required between the individual cluster sites.
+
 === General Management of Tickets ===
 
 To display ticket information:
 -------
 # crm_ticket --info
 -------
 
 Or you can monitor them with:
 -------
 # crm_mon --tickets
 -------
 
 To display the +rsc_ticket+ constraints that apply to a ticket:
 -------
 # crm_ticket --ticket ticketA --constraints
 -------
 
 If you want to do maintenance or a manual switch-over of a ticket,
 revoking the ticket directly would trigger the loss policies. If
 +loss-policy="fence"+, the dependent resources could not be gracefully
 stopped/demoted, and other unrelated resources could even be affected.
 
 The proper way is to put the ticket in 'standby' mode first:
 -------
 # crm_ticket --ticket ticketA --standby
 -------
 
 Then the dependent resources will be stopped or demoted gracefully without
 triggering the loss policies.
 
 If you have finished the maintenance and want to activate the ticket again,
 you can run:
 -------
 # crm_ticket --ticket ticketA --activate
 -------
 
 == For more information ==
 
 * http://doc.opensuse.org/products/draft/SLE-HA/SLE-ha-guide_sd_draft/cha.ha.geo.html[SUSE's Multi-site Clusters guide]
 
 * https://github.com/ClusterLabs/booth[Booth]
diff --git a/doc/Pacemaker_Explained/en-US/Ch-Resources.txt b/doc/Pacemaker_Explained/en-US/Ch-Resources.txt
index f004c3ae48..5d5fa333ca 100644
--- a/doc/Pacemaker_Explained/en-US/Ch-Resources.txt
+++ b/doc/Pacemaker_Explained/en-US/Ch-Resources.txt
@@ -1,760 +1,760 @@
 = Cluster Resources =
 
 == What is a Cluster Resource? ==
 
 indexterm:[Resource]
 
 A resource is a service made highly available by a cluster.
 The simplest type of resource, a 'primitive' resource, is described
 in this chapter. More complex forms, such as groups and clones,
 are described in later chapters.
 
 Every primitive resource has a 'resource agent'. A resource agent is an
 external program that abstracts the service it provides and presents a
 consistent view to the cluster.
 
 This allows the cluster to be agnostic about the resources it manages.
 The cluster doesn't need to understand how the resource works because
 it relies on the resource agent to do the right thing when given a
 `start`, `stop` or `monitor` command. For this reason, it is crucial that
 resource agents are well-tested.
 
 Typically, resource agents come in the form of shell scripts. However,
 they can be written using any technology (such as C, Python or Perl)
 that the author is comfortable with.
 
 [[s-resource-supported]]
 == Resource Classes ==
 
 indexterm:[Resource,class]
 
-Pacemaker supports six classes of agents:
+Pacemaker supports several classes of agents:
 
 * OCF
+* LSB
+* Upstart
+* Systemd
 * Service
-** LSB
-** Upstart
-** Systemd
 * Fencing
 * Nagios Plugins
 
 === Open Cluster Framework ===
 
 indexterm:[Resource,OCF]
 indexterm:[OCF,Resources]
 indexterm:[Open Cluster Framework,Resources]
 
 The OCF standard
 footnote:[See
 http://www.opencf.org/cgi-bin/viewcvs.cgi/specs/ra/resource-agent-api.txt?rev=HEAD
  -- at least as it relates to resource agents.  The Pacemaker implementation has
 been somewhat extended from the OCF specs, but none of those changes are
 incompatible with the original OCF specification.]
 is basically an extension of the Linux Standard Base conventions for
 init scripts to:
 
 * support parameters,
 * make them self-describing, and
 * make them extensible
 
 OCF specs have strict definitions of the exit codes that actions must return.
 footnote:[
 The resource-agents source code includes the `ocf-tester` script, which
 can be useful in this regard.
 ]
 
 The cluster follows these specifications exactly, and giving the wrong
 exit code will cause the cluster to behave in ways you will likely
 find puzzling and annoying.  In particular, the cluster needs to
 distinguish a completely stopped resource from one which is in some
 erroneous and indeterminate state.
 
 Parameters are passed to the resource agent as environment variables, with the
 special prefix +OCF_RESKEY_+.  So, a parameter which the user thinks
 of as +ip+ will be passed to the resource agent as +OCF_RESKEY_ip+.  The
 number and purpose of the parameters is left to the resource agent; however,
 the resource agent should use the `meta-data` command to advertise any that it
 supports.
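 
 As a minimal sketch of the agent's side of this contract (illustration
 only, not a production agent -- real agents should source the OCF shell
 function library and implement all mandatory actions), consider:
 
 ----
 #!/bin/sh
 # The user-visible parameter "ip" arrives as OCF_RESKEY_ip.
 STATEFILE="/var/run/sketch-${OCF_RESOURCE_INSTANCE:-default}.state"
 
 case "$1" in
 start)
     echo "${OCF_RESKEY_ip}" > "$STATEFILE"; exit 0 ;;  # OCF_SUCCESS
 stop)
     rm -f "$STATEFILE"; exit 0 ;;                      # OCF_SUCCESS
 monitor)
     [ -f "$STATEFILE" ] && exit 0                      # OCF_SUCCESS: running
     exit 7 ;;                                          # OCF_NOT_RUNNING
 meta-data)
     echo '<?xml version="1.0"?><resource-agent name="sketch"/>'; exit 0 ;;
 *)
     exit 3 ;;                                          # OCF_ERR_UNIMPLEMENTED
 esac
 ----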
 
 The OCF class is the preferred one, as it is an industry standard,
 highly flexible (allowing parameters to be passed to agents in a
 non-positional manner) and self-describing.
 
 For more information, see the
 http://www.linux-ha.org/wiki/OCF_Resource_Agents[reference] and
 <<ap-ocf>>.
 
 === Linux Standard Base ===
 indexterm:[Resource,LSB]
 indexterm:[LSB,Resources]
 indexterm:[Linux Standard Base,Resources]
 
 LSB resource agents are those found in +/etc/init.d+.
 
 Generally, they are provided by the OS distribution and, in order to be used
 with the cluster, they must conform to the LSB Spec.
 footnote:[
 See
 http://refspecs.linux-foundation.org/LSB_3.0.0/LSB-Core-generic/LSB-Core-generic/iniscrptact.html
 for the LSB Spec as it relates to init scripts.
 ]
 
 [WARNING]
 ====
 Many distributions claim LSB compliance but ship with broken init
 scripts.  For details on how to check whether your init script is
 LSB-compatible, see <<ap-lsb>>. Common problematic violations of
 the LSB standard include:
 
 * Not implementing the status operation at all
 * Not observing the correct exit status codes for `start/stop/status` actions
 * Returning an error from `start` when the resource is already started
 * Returning an error from `stop` when the resource is already stopped
 ====
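 
 For example, a quick manual spot-check of the last two violations above
 (a sketch only -- see <<ap-lsb>> for the full compatibility procedure, and
 substitute the name of a real init script for +some-service+):
 
 ----
 # /etc/init.d/some-service start; echo "first start: $?"
 # /etc/init.d/some-service start; echo "second start: $?"  # LSB requires 0 here
 ----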
 
 [IMPORTANT]
 ====
 Remember to make sure the computer is _not_ configured to start any
 services at boot time -- that should be controlled by the cluster.
 ====
 
 === Systemd ===
 indexterm:[Resource,Systemd]
 indexterm:[Systemd,Resources]
 
 Some newer distributions have replaced the old
 http://en.wikipedia.org/wiki/Init#SysV-style["SysV"] style of
 initialization daemons and scripts with an alternative called
 http://www.freedesktop.org/wiki/Software/systemd[Systemd].
 
 Pacemaker is able to manage these services _if they are present_.
 
 Instead of init scripts, systemd has 'unit files'.  Generally, the
 services (unit files) are provided by the OS distribution, but there
 are online guides for converting from init scripts.
 footnote:[For example,
 http://0pointer.de/blog/projects/systemd-for-admins-3.html]
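 
 For example, a unit file can be managed with a one-line primitive such as
 the following sketch (it assumes a unit named +httpd.service+ is installed
 on every node allowed to run the resource):
 
 [source,XML]
 <primitive id="Web" class="systemd" type="httpd"/>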
 
 [IMPORTANT]
 ====
 Remember to make sure the computer is _not_ configured to start any
 services at boot time -- that should be controlled by the cluster.
 ====
 
 === Upstart ===
 indexterm:[Resource,Upstart]
 indexterm:[Upstart,Resources]
 
 Some newer distributions have replaced the old
 http://en.wikipedia.org/wiki/Init#SysV-style["SysV"] style of
 initialization daemons (and scripts) with an alternative called
 http://upstart.ubuntu.com/[Upstart].
 
 Pacemaker is able to manage these services _if they are present_.
 
 Instead of init scripts, upstart has 'jobs'.  Generally, the
 services (jobs) are provided by the OS distribution.
 
 [IMPORTANT]
 ====
 Remember to make sure the computer is _not_ configured to start any
 services at boot time -- that should be controlled by the cluster.
 ====
 
 === System Services ===
 indexterm:[Resource,System Services]
 indexterm:[System Service,Resources]
 
 Since there are various types of system services (+systemd+,
 +upstart+, and +lsb+), Pacemaker supports a special +service+ alias which
 intelligently figures out which one applies to a given cluster node.
 
 This is particularly useful when the cluster contains a mix of
 +systemd+, +upstart+, and +lsb+.
 
 In order, Pacemaker will try to find the named service as:
 
 . an LSB init script
 . a Systemd unit file
 . an Upstart job
 
 === STONITH ===
 indexterm:[Resource,STONITH]
 indexterm:[STONITH,Resources]
 
 The STONITH class is used exclusively for fencing-related resources.  This is
 discussed later in <<ch-stonith>>.
 
 === Nagios Plugins ===
 indexterm:[Resource,Nagios Plugins]
 indexterm:[Nagios Plugins,Resources]
 
 Nagios Plugins
 footnote:[The project has two independent forks, hosted at
 https://www.nagios-plugins.org/ and https://www.monitoring-plugins.org/. Output
 from both projects' plugins is similar, so plugins from either project can be
 used with pacemaker.]
 allow us to monitor services on remote hosts.
 
 Pacemaker is able to do remote monitoring with the plugins _if they are
 present_.
 
 A common use case is to configure the plugins as resources belonging to a
 resource container (usually a virtual machine); the container will then be
 restarted if any of them fails. They can also be configured as ordinary
 resources, to monitor hosts or services via the network.
 
 The supported parameters are the same as the plugin's long options.
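 
 For example, the +check_tcp+ plugin accepts +--hostname+ and +--port+
 long options, so a corresponding resource might look like the following
 sketch (it assumes the plugin is installed on the monitoring node):
 
 [source,XML]
 -------
 <primitive id="vm-web-check" class="nagios" type="check_tcp">
    <instance_attributes id="vm-web-check-params">
       <nvpair id="vm-web-check-hostname" name="hostname" value="192.0.2.10"/>
       <nvpair id="vm-web-check-port" name="port" value="80"/>
    </instance_attributes>
 </primitive>
 -------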
 
 [[primitive-resource]]
 == Resource Properties ==
 
 These values tell the cluster which resource agent to use for the resource,
 where to find that resource agent and what standard it conforms to.
 
 .Properties of a Primitive Resource
 [width="95%",cols="1m,6<",options="header",align="center"]
 |=========================================================
 
 |Field
 |Description
 
 |id
 |Your name for the resource
  indexterm:[id,Resource]
  indexterm:[Resource,Property,id]
 
 |class
 |The standard the resource agent conforms to. Allowed values:
 +lsb+, +nagios+, +ocf+, +service+, +stonith+, +systemd+, +upstart+
  indexterm:[class,Resource]
  indexterm:[Resource,Property,class]
 
 |type
 |The name of the Resource Agent you wish to use. E.g. +IPaddr+ or +Filesystem+
  indexterm:[type,Resource]
  indexterm:[Resource,Property,type]
 
 |provider
 |The OCF spec allows multiple vendors to supply the same
  resource agent. To use the OCF resource agents supplied by
  the Heartbeat project, you would specify +heartbeat+ here.
  indexterm:[provider,Resource]
  indexterm:[Resource,Property,provider]
 
 |=========================================================
 
 The XML definition of a resource can be queried with the `crm_resource` tool.
 For example:
 
 ----
 # crm_resource --resource Email --query-xml
 ----
 
 might produce:
 
 .A system resource definition
 =====
 [source,XML]
 <primitive id="Email" class="service" type="exim"/>
 =====
 
 [NOTE]
 =====
 One of the main drawbacks to system services (LSB, systemd or
 Upstart) resources is that they do not allow any parameters!
 =====
 
 ////
 See https://tools.ietf.org/html/rfc5737 for choice of example IP address
 ////
 
 .An OCF resource definition
 =====
 [source,XML]
 -------
 <primitive id="Public-IP" class="ocf" type="IPaddr" provider="heartbeat">
    <instance_attributes id="Public-IP-params">
       <nvpair id="Public-IP-ip" name="ip" value="192.0.2.2"/>
    </instance_attributes>
 </primitive>
 -------
 =====
 
 [[s-resource-options]]
 == Resource Options ==
 
 Resources have two types of options: 'meta-attributes' and 'instance attributes'.
 Meta-attributes apply to any type of resource, while instance attributes
 are specific to each resource agent.
 
 === Resource Meta-Attributes ===
 
 Meta-attributes are used by the cluster to decide how a resource should
 behave and can be easily set using the `--meta` option of the
 `crm_resource` command.
 
 .Meta-attributes of a Primitive Resource
 [width="95%",cols="2m,2,5<a",options="header",align="center"]
 |=========================================================
 
 |Field
 |Default
 |Description
 
 |priority
 |0
 |If not all resources can be active, the cluster will stop lower
 priority resources in order to keep higher priority ones active.
 indexterm:[priority,Resource Option]
 indexterm:[Resource,Option,priority]
 
 |target-role
 |started
 |What state should the cluster attempt to keep this resource in? Allowed values:
 
 * +stopped:+ Force the resource to be stopped
 * +started:+ Allow the resource to be started (in the case of
   <<s-resource-multistate,multi-state>> resources, they will not be
   promoted to master)
 * +master:+ Allow the resource to be started and, if appropriate, promoted
 indexterm:[target-role,Resource Option]
 indexterm:[Resource,Option,target-role]
 
 |is-managed
 |TRUE
 |Is the cluster allowed to start and stop the resource?  Allowed
  values: +true+, +false+
  indexterm:[is-managed,Resource Option]
  indexterm:[Resource,Option,is-managed]
 
 |resource-stickiness
 |value of +resource-stickiness+ in the +rsc_defaults+ section
 |How much does the resource prefer to stay where it is?
  indexterm:[resource-stickiness,Resource Option]
  indexterm:[Resource,Option,resource-stickiness]
 
 |requires
 |fencing (unless +stonith-enabled+ is +false+ or +class+ is
 +stonith+, in which case it defaults to quorum)
 |Conditions under which the resource can be started ('Since 1.1.8').
 Allowed values:
 
 * +nothing:+ The cluster can always start this resource
 * +quorum:+ The cluster can only start this resource if a majority of
   the configured nodes are active
 * +fencing:+ The cluster can only start this resource if a majority
   of the configured nodes are active _and_ any failed or unknown nodes
   have been powered off
 * +unfencing:+ The cluster can only start this resource if a majority
   of the configured nodes are active _and_ any failed or unknown nodes
   have been powered off _and_ only on nodes that have been 'unfenced'
 
 indexterm:[requires,Resource Option]
 indexterm:[Resource,Option,requires]
 
 |migration-threshold
 |INFINITY
 |How many failures may occur for this resource on a node, before this
  node is marked ineligible to host this resource. A value of INFINITY
  indicates that this feature is disabled.
  indexterm:[migration-threshold,Resource Option]
  indexterm:[Resource,Option,migration-threshold]
 
 |failure-timeout
 |0
 |How many seconds to wait before acting as if the failure had not
  occurred, and potentially allowing the resource back to the node on
  which it failed. A value of 0 indicates that this feature is disabled.
  indexterm:[failure-timeout,Resource Option]
  indexterm:[Resource,Option,failure-timeout]
 
 |multiple-active
 |stop_start
 |What should the cluster do if it ever finds the resource active on
  more than one node? Allowed values:
 
 * +block:+ mark the resource as unmanaged
 * +stop_only:+ stop all active instances and leave them that way
 * +stop_start:+ stop all active instances and start the resource in
   one location only
 
 indexterm:[multiple-active,Resource Option]
 indexterm:[Resource,Option,multiple-active]
 
 |remote-node
 |
 |The name of the remote-node this resource defines.  This both enables the
 resource as a remote-node and defines the unique name used to identify the
 remote-node. If no other parameters are set, this value will also be used as
 the hostname to connect to, at the port specified by +remote-port+. +WARNING:+
 This value cannot overlap with any resource or node IDs. If not specified,
 this feature is disabled.
 
 |remote-port
 |3121
 |Port to use for the guest connection to pacemaker_remote
 
 |remote-addr
 |value of +remote-node+
 |The IP address or hostname to connect to if remote-node's name is not the
 hostname of the guest.
 
 |remote-connect-timeout
 |60s
 |How long before a pending guest connection will time out.
 
 |=========================================================
 
 [NOTE]
 ====
 Support for remote nodes was added in pacemaker 1.1.10. If you are using an
 earlier version, options related to remote nodes will not be available.
 ====
 
 As an example of setting resource options, if you performed the following
 commands on an LSB Email resource:
 
 -------
 # crm_resource --meta --resource Email --set-parameter priority --parameter-value 100
 # crm_resource -m -r Email -p multiple-active -v block
 -------
 
 the resulting resource definition might be:
 
 .An LSB resource with cluster options
 =====
 [source,XML]
 -------
 <primitive id="Email" class="lsb" type="exim">
   <meta_attributes id="Email-meta_attributes">
     <nvpair id="Email-meta_attributes-priority" name="priority" value="100"/>
     <nvpair id="Email-meta_attributes-multiple-active" name="multiple-active" value="block"/>
   </meta_attributes>
 </primitive>
 -------
 =====
 
 [[s-resource-defaults]]
 === Setting Global Defaults for Resource Meta-Attributes ===
 
 To set a default value for a resource option, add it to the
 +rsc_defaults+ section with `crm_attribute`. For example,
 
 ----
 # crm_attribute --type rsc_defaults --name is-managed --update false
 ----
 
 would prevent the cluster from starting or stopping any of the
 resources in the configuration (unless of course the individual
 resources were specifically enabled by having their +is-managed+ set to
 +true+).
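 
 To display the current value of a default, or to remove it again, the same
 tool can be used (a sketch, using the same +is-managed+ default as above):
 
 ----
 # crm_attribute --type rsc_defaults --name is-managed --query
 # crm_attribute --type rsc_defaults --name is-managed --delete
 ----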
 
 === Resource Instance Attributes ===
 
 The resource agents of some classes (+lsb+, +systemd+, and +upstart+ 'not'
 among them) can be given parameters which determine how they behave and
 which instance of a service they control.
 
 If your resource agent supports parameters, you can add them with the
 `crm_resource` command. For example,
 
 ----
 # crm_resource --resource Public-IP --set-parameter ip --parameter-value 192.0.2.2
 ----
 
 would create an entry in the resource like this:
 
 .An example OCF resource with instance attributes
 =====
 [source,XML]
 -------
 <primitive id="Public-IP" class="ocf" type="IPaddr" provider="heartbeat">
    <instance_attributes id="params-public-ip">
       <nvpair id="public-ip-addr" name="ip" value="192.0.2.2"/>
    </instance_attributes>
 </primitive>
 -------
 =====
 
 For an OCF resource, the result would be an environment variable
 called +OCF_RESKEY_ip+ with a value of +192.0.2.2+.
 
 The list of instance attributes supported by an OCF resource agent can be
 found by calling the resource agent with the `meta-data` command.
 The output contains an XML description of all the supported
 attributes, their purpose and default values.
 
 .Displaying the metadata for the Dummy resource agent template
 =====
 ----
 # export OCF_ROOT=/usr/lib/ocf
 # $OCF_ROOT/resource.d/pacemaker/Dummy meta-data
 ----
 [source,XML]
 -------
 <?xml version="1.0"?>
 <!DOCTYPE resource-agent SYSTEM "ra-api-1.dtd">
 <resource-agent name="Dummy" version="1.0">
 <version>1.0</version>
 
 <longdesc lang="en">
 This is a Dummy Resource Agent. It does absolutely nothing except 
 keep track of whether its running or not.
 Its purpose in life is for testing and to serve as a template for RA writers.
 
 NB: Please pay attention to the timeouts specified in the actions
 section below. They should be meaningful for the kind of resource
 the agent manages. They should be the minimum advised timeouts,
 but they shouldn't/cannot cover _all_ possible resource
 instances. So, try to be neither overly generous nor too stingy,
 but moderate. The minimum timeouts should never be below 10 seconds.
 </longdesc>
 <shortdesc lang="en">Example stateless resource agent</shortdesc>
 
 <parameters>
 <parameter name="state" unique="1">
 <longdesc lang="en">
 Location to store the resource state in.
 </longdesc>
 <shortdesc lang="en">State file</shortdesc>
 <content type="string" default="/var/run//Dummy-{OCF_RESOURCE_INSTANCE}.state" />
 </parameter>
 
 <parameter name="fake" unique="0">
 <longdesc lang="en">
 Fake attribute that can be changed to cause a reload
 </longdesc>
 <shortdesc lang="en">Fake attribute that can be changed to cause a reload</shortdesc>
 <content type="string" default="dummy" />
 </parameter>
 
 <parameter name="op_sleep" unique="1">
 <longdesc lang="en">
 Number of seconds to sleep during operations.  This can be used to test how
 the cluster reacts to operation timeouts.
 </longdesc>
 <shortdesc lang="en">Operation sleep duration in seconds.</shortdesc>
 <content type="string" default="0" />
 </parameter>
 
 </parameters>
 
 <actions>
 <action name="start"        timeout="20" />
 <action name="stop"         timeout="20" />
 <action name="monitor"      timeout="20" interval="10" depth="0"/>
 <action name="reload"       timeout="20" />
 <action name="migrate_to"   timeout="20" />
 <action name="migrate_from" timeout="20" />
 <action name="validate-all" timeout="20" />
 <action name="meta-data"    timeout="5" />
 </actions>
 </resource-agent>
 -------
 =====
 
 == Resource Operations ==
 
 indexterm:[Resource,Action]
 
 Operations are actions the cluster can perform on a resource,
 such as start, stop and monitor.
 
 As an example, by default the cluster will not ensure your resources are still
 healthy.  To instruct the cluster to do this, you need to add a
 +monitor+ operation to the resource's definition.
 
 .An OCF resource with a recurring health check
 =====
 [source,XML]
 -------
 <primitive id="Public-IP" class="ocf" type="IPaddr" provider="heartbeat">
   <operations>
      <op id="public-ip-check" name="monitor" interval="60s"/>
   </operations>
   <instance_attributes id="params-public-ip">
      <nvpair id="public-ip-addr" name="ip" value="192.0.2.2"/>
   </instance_attributes>
 </primitive>
 -------
 =====
 
 .Properties of an Operation
 [width="95%",cols="2m,3,6<a",options="header",align="center"]
 |=========================================================
 
 |Field
 |Default
 |Description
 
 |id
 |
 |A unique name for the operation.
  indexterm:[id,Action Property]
  indexterm:[Action,Property,id]
 
 |name
 |
 |The action to perform. Common values: +monitor+, +start+, +stop+
  indexterm:[name,Action Property]
  indexterm:[Action,Property,name]
 
 |interval
 |0
 |How frequently (in seconds) to perform the operation. A value of 0 means
  the operation is not recurring.
  indexterm:[interval,Action Property]
  indexterm:[Action,Property,interval]
 
 |timeout
 |
 |How long to wait before declaring the action has failed
  indexterm:[timeout,Action Property]
  indexterm:[Action,Property,timeout]
 
 |on-fail
 |restart (except for +stop+ operations, which default to fence when STONITH is enabled and block otherwise)
 |The action to take if this action ever fails. Allowed values:
 
 * +ignore:+ Pretend the resource did not fail.
 * +block:+ Don't perform any further operations on the resource.
 * +stop:+ Stop the resource and do not start it elsewhere.
 * +restart:+ Stop the resource and start it again (possibly on a different node).
 * +fence:+ STONITH the node on which the resource failed.
 * +standby:+ Move _all_ resources away from the node on which the resource failed.
 
 indexterm:[on-fail,Action Property]
 indexterm:[Action,Property,on-fail]
 
 |enabled
 |TRUE
 |If +false+, the operation is treated as if it does not exist. Allowed
  values: +true+, +false+
  indexterm:[enabled,Action Property]
  indexterm:[Action,Property,enabled]
 
 |record-pending
 |
 |If +true+, the intention to perform the operation is recorded so that
  GUIs and CLI tools can indicate that an operation is in progress.
  This is best set as an 'operation default' (see next section).
  Allowed values: +true+, +false+.
  indexterm:[record-pending,Action Property]
  indexterm:[Action,Property,record-pending]
 
 |=========================================================
 
 [[s-operation-defaults]]
 === Setting Global Defaults for Operations ===
 
 You can change the global default values for operation properties
 in a given cluster. These are defined in an +op_defaults+ section 
 of the CIB's +configuration+ section, and can be set with `crm_attribute`.
 For example,
 
 ----
 # crm_attribute --type op_defaults --name timeout --update 20s
 ----
 
 would default each operation's +timeout+ to 20 seconds.  If an
 operation's definition also includes a value for +timeout+, then that
 value would be used for that operation instead.
 
 === When Resources Take a Long Time to Start/Stop ===
 
 The cluster will always perform a number of implicit operations: +start+,
 +stop+ and a non-recurring +monitor+ operation used at startup to check
 whether the resource is already active.  If one of these is taking too long,
 you can create an entry for it and specify a longer timeout.
 
 .An OCF resource with custom timeouts for its implicit actions
 =====
 [source,XML]
 -------
 <primitive id="Public-IP" class="ocf" type="IPaddr" provider="heartbeat">
   <operations>
      <op id="public-ip-startup" name="monitor" interval="0" timeout="90s"/>
      <op id="public-ip-start" name="start" interval="0" timeout="180s"/>
      <op id="public-ip-stop" name="stop" interval="0" timeout="15min"/>
   </operations>
   <instance_attributes id="params-public-ip">
      <nvpair id="public-ip-addr" name="ip" value="192.0.2.2"/>
   </instance_attributes>
 </primitive>
 -------
 =====
 
 === Multiple Monitor Operations ===
 
 Provided no two operations (for a single resource) have the same name
 and interval, you can have as many monitor operations as you like.  In
 this way, you can do a superficial health check every minute and
 progressively more intensive ones at longer intervals.
 
 To tell the resource agent what kind of check to perform, you need to
 provide each monitor with a different value for a common parameter.
 The OCF standard creates a special parameter called +OCF_CHECK_LEVEL+
 for this purpose and dictates that it is "made available to the
 resource agent without the normal +OCF_RESKEY+ prefix".
 
 Whatever name you choose, you can specify it by adding an
 +instance_attributes+ block to the +op+ tag. It is up to each
 resource agent to look for the parameter and decide how to use it.
 
 .An OCF resource with two recurring health checks, performing different levels of checks specified via +OCF_CHECK_LEVEL+.
 =====
 [source,XML]
 -------
 <primitive id="Public-IP" class="ocf" type="IPaddr" provider="heartbeat">
    <operations>
       <op id="public-ip-health-60" name="monitor" interval="60">
          <instance_attributes id="params-public-ip-depth-60">
             <nvpair id="public-ip-depth-60" name="OCF_CHECK_LEVEL" value="10"/>
          </instance_attributes>
       </op>
       <op id="public-ip-health-300" name="monitor" interval="300">
          <instance_attributes id="params-public-ip-depth-300">
             <nvpair id="public-ip-depth-300" name="OCF_CHECK_LEVEL" value="20"/>
           </instance_attributes>
        </op>
    </operations>
    <instance_attributes id="params-public-ip">
        <nvpair id="public-ip-level" name="ip" value="192.0.2.2"/>
    </instance_attributes>
 </primitive>
 -------
 =====
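 
 On the agent side, the monitor action might branch on the variable as in
 the following sketch (it is up to each agent how, or whether, to honor the
 value; the helper functions named here are hypothetical):
 
 ----
 case "${OCF_CHECK_LEVEL:-0}" in
     # check_process_is_alive and check_service_end_to_end are hypothetical
     # helpers standing in for agent-specific logic.
     0|10) check_process_is_alive ;;       # cheap, frequent check
     20)   check_service_end_to_end ;;     # expensive, infrequent check
 esac
 ----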
 
 === Disabling a Monitor Operation ===
 
 The easiest way to stop a recurring monitor is to just delete it.
 However, there can be times when you only want to disable it
 temporarily.  In such cases, simply add +enabled="false"+ to the
 operation's definition.
 
 .Example of an OCF resource with a disabled health check
 =====
 [source,XML]
 -------
 <primitive id="Public-IP" class="ocf" type="IPaddr" provider="heartbeat">
    <operations>
       <op id="public-ip-check" name="monitor" interval="60s" enabled="false"/>
    </operations>
    <instance_attributes id="params-public-ip">
       <nvpair id="public-ip-addr" name="ip" value="192.0.2.2"/>
    </instance_attributes>
 </primitive>
 -------
 =====
 
 This can be achieved from the command line by executing:
 
 ----
 # cibadmin --modify --xml-text '<op id="public-ip-check" enabled="false"/>'
 ----
 
 Once you've done whatever you needed to do, you can then re-enable it with:
 ----
 # cibadmin --modify --xml-text '<op id="public-ip-check" enabled="true"/>'
 ----
diff --git a/doc/Pacemaker_Explained/en-US/NOTES b/doc/Pacemaker_Explained/en-US/NOTES
index 0c11bbb721..d6c6f2f3a7 100644
--- a/doc/Pacemaker_Explained/en-US/NOTES
+++ b/doc/Pacemaker_Explained/en-US/NOTES
@@ -1,18 +1,16 @@
-2.3.1 editing CIB copy via VI: isn't that racy? Are concurrent changes detected?
-
 why sometimes <example> and sometimes <figure>? examples have title at top, figures have title at bottom
 
 Example 2.8 (and others) XML line too long, line broken
 
 some <command> are in <para>, some in <programlisting> ... I'd like the latter more, or perhaps in a <screen>.
 Indentation makes whitespace at start of lines ... remove?
 
 tables 3.1 and 3.2 incomplete (crm-feature-set, ...)
 
 Ch 7 missing?
 
 Remove Ex9.9?
 
 Ap-Debug.xml not used?
 
 <indexterm> alias for primary?