diff --git a/doc/Pacemaker_Administration/en-US/Ch-Agents.txt b/doc/Pacemaker_Administration/en-US/Ch-Agents.txt
index ab82420f58..1cb2e252a3 100644
--- a/doc/Pacemaker_Administration/en-US/Ch-Agents.txt
+++ b/doc/Pacemaker_Administration/en-US/Ch-Agents.txt
@@ -1,337 +1,337 @@
 = Resource Agents =
 
 == OCF Resource Agents ==
 
 === Location of Custom Scripts ===
 
 indexterm:[OCF Resource Agents]
 OCF Resource Agents are found in +/usr/lib/ocf/resource.d/pass:[<replaceable>provider</replaceable>]+
 
 When creating your own agents, you are encouraged to create a new
 directory under +/usr/lib/ocf/resource.d/+ so that they are not
 confused with (or overwritten by) the agents shipped by existing providers.
 
 So, for example, if you choose the provider name of bigCorp and want
 a new resource named bigApp, you would create a resource agent called
 +/usr/lib/ocf/resource.d/bigCorp/bigApp+ and define a resource:
  
 [source,XML]
 ----
 <primitive id="custom-app" class="ocf" provider="bigCorp" type="bigApp"/>
 ----
 
 === Actions ===
 
 All OCF resource agents are required to implement the following actions.
 
 .Required Actions for OCF Agents
 [width="95%",cols="3m,3,7",options="header",align="center"]
 |=========================================================
 |Action
 |Description
 |Instructions
 
 |start
 |Start the resource
 |Return 0 on success and an appropriate error code otherwise. Must not
  report success until the resource is fully active.
  indexterm:[start,OCF Action]
  indexterm:[OCF,Action,start]
 
 |stop
 |Stop the resource
 |Return 0 on success and an appropriate error code otherwise. Must not
  report success until the resource is fully stopped.
  indexterm:[stop,OCF Action]
  indexterm:[OCF,Action,stop]
 
 |monitor
 |Check the resource's state
 |Exit 0 if the resource is running, 7 if it is stopped, and anything
  else if it is failed.
  indexterm:[monitor,OCF Action]
  indexterm:[OCF,Action,monitor]
 
 NOTE: The monitor script should test the state of the resource on the local machine only.
 
 |meta-data
 |Describe the resource
 |Provide information about this resource as an XML snippet. Exit with 0.
  indexterm:[meta-data,OCF Action]
  indexterm:[OCF,Action,meta-data]
 
 NOTE: This is _not_ performed as root.
 
 |validate-all
 |Verify the supplied parameters
 |Return 0 if parameters are valid, 2 if not valid, and 6 if resource is not configured.
  indexterm:[validate-all,OCF Action]
  indexterm:[OCF,Action,validate-all]
 
 |=========================================================
 
 Additional requirements (not part of the OCF specification) are placed on
 agents that will be used for advanced concepts such as clone resources.
 
 .Optional Actions for OCF Resource Agents
 [width="95%",cols="2m,6,3",options="header",align="center"]
 |=========================================================
 
 |Action
 |Description
 |Instructions
 
 |promote
 |Promote the local instance of a promotable clone resource to the master (primary) state.
 |Return 0 on success
  indexterm:[promote,OCF Action]
  indexterm:[OCF,Action,promote]
 
 |demote
 |Demote the local instance of a promotable clone resource to the slave (secondary) state.
 |Return 0 on success
  indexterm:[demote,OCF Action]
  indexterm:[OCF,Action,demote]
 
 |notify
 |Used by the cluster to send the agent pre- and post-notification
  events, telling the resource what has happened and what will happen.
 |Must not fail. Must exit with 0.
  indexterm:[notify,OCF Action]
  indexterm:[OCF,Action,notify]
 
 |=========================================================
 
 One action specified in the OCF specs, +recover+, is not currently used by the
 cluster. It is intended to be a variant of the +start+ action that tries to
 recover a resource locally.
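 
 For illustration, here is a minimal sketch (not a complete or supported agent)
 of how the required actions might be dispatched in a shell-based agent for the
 hypothetical +bigApp+ resource above. The daemon command, pidfile and metadata
 are placeholders; a real agent would emit full metadata, respect operation
 timeouts, and usually source the OCF shell function library.
 
 [source,Bash]
 ----
 #!/bin/sh
 # Illustrative OCF-style action dispatch.  Exit codes follow the tables
 # above: 0=success, 1=generic error, 3=unimplemented, 6=not configured,
 # 7=not running.
 
 PIDFILE="/var/run/bigApp.pid"          # placeholder location
 
 bigapp_monitor() {
     # Test the state on the local machine only.
     if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
         return 0                       # running
     fi
     return 7                           # cleanly stopped
 }
 
 case "$1" in
     start)
         bigapp_monitor && exit 0       # already active
         /usr/local/bin/bigApp --pidfile "$PIDFILE" || exit 1
         # Do not report success until the resource is fully active
         # (a real agent would also respect the operation timeout).
         while ! bigapp_monitor; do sleep 1; done
         exit 0
         ;;
     stop)
         if bigapp_monitor; then
             kill "$(cat "$PIDFILE")" || exit 1
             # Do not report success until the resource is fully stopped.
             while bigapp_monitor; do sleep 1; done
         fi
         exit 0
         ;;
     monitor)
         bigapp_monitor
         exit $?
         ;;
     meta-data)
         # A real agent prints its full resource-agent XML metadata here.
         echo '<?xml version="1.0"?><resource-agent name="bigApp"/>'
         exit 0
         ;;
     validate-all)
         # Exit 6 if required configuration is missing.
         [ -x /usr/local/bin/bigApp ] || exit 6
         exit 0
         ;;
     *)
         exit 3                         # action not implemented
         ;;
 esac
 ----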
 
 [IMPORTANT]
 ====
 If you create a new OCF resource agent, use indexterm:[ocf-tester]`ocf-tester`
 to verify that the agent properly complies with the OCF standard.
 ====
 
 === How are OCF Return Codes Interpreted? ===
 
 The first thing the cluster does is to check the return code against
 the expected result.  If the result does not match the expected value,
 then the operation is considered to have failed, and recovery action is
 initiated.
 
 There are three types of failure recovery:
 
 .Types of recovery performed by the cluster
 [width="95%",cols="1m,4,4",options="header",align="center"]
 |=========================================================
 
 |Type
 |Description
 |Action Taken by the Cluster
 
 |soft
 |A transient error occurred
 |Restart the resource or move it to a new location
 indexterm:[soft,OCF error]
 indexterm:[OCF,error,soft]
 
 |hard
 |A non-transient error that may be specific to the current node occurred
 |Move the resource elsewhere and prevent it from being retried on the current node
 indexterm:[hard,OCF error]
 indexterm:[OCF,error,hard]
 
 |fatal
 |A non-transient error that will be common to all cluster nodes (e.g. a bad configuration was specified)
 |Stop the resource and prevent it from being started on any cluster node
 indexterm:[fatal,OCF error]
 indexterm:[OCF,error,fatal]
 
 |=========================================================
 
 [[s-ocf-return-codes]]
 === OCF Return Codes ===
 
 The following table outlines the different OCF return codes and the type of
 recovery the cluster will initiate when a failure code is received.
 Although counterintuitive, even actions that return 0
 (a.k.a. +OCF_SUCCESS+) can be considered to have failed, if 0 was not
 the expected return value (for example, when a probe finds a resource
 running on a node where it is expected to be stopped).
 
 .OCF Return Codes and their Recovery Types
-[width="95%",cols="1m,4<m,6<,1m",options="header",align="center"]
+[width="95%",cols="1m,<4m,<6,1m",options="header",align="center"]
 |=========================================================
 
 |RC
 |OCF Alias
 |Description
 |RT
 
 |0
 |OCF_SUCCESS
 |Success. The command completed successfully. This is the expected result for all start, stop, promote and demote commands.
 indexterm:[Return Code,OCF_SUCCESS]
 indexterm:[Return Code,0,OCF_SUCCESS]
 |soft
 
 |1
 |OCF_ERR_GENERIC
 |Generic "there was a problem" error code.
 indexterm:[Return Code,OCF_ERR_GENERIC]
 indexterm:[Return Code,1,OCF_ERR_GENERIC]
 |soft
 
 |2
 |OCF_ERR_ARGS
 |The resource's configuration is not valid on this machine. E.g. it refers to a location not found on the node. 
 indexterm:[Return Code,OCF_ERR_ARGS]
 indexterm:[Return Code,2,OCF_ERR_ARGS]
 |hard
 
 |3
 |OCF_ERR_UNIMPLEMENTED
 |The requested action is not implemented.
 indexterm:[Return Code,OCF_ERR_UNIMPLEMENTED]
 indexterm:[Return Code,3,OCF_ERR_UNIMPLEMENTED]
 |hard
 
 |4
 |OCF_ERR_PERM
 |The resource agent does not have sufficient privileges to complete the task.
 indexterm:[Return Code,OCF_ERR_PERM]
 indexterm:[Return Code,4,OCF_ERR_PERM]
 |hard
 
 |5
 |OCF_ERR_INSTALLED
 |The tools required by the resource are not installed on this machine.
 indexterm:[Return Code,OCF_ERR_INSTALLED]
 indexterm:[Return Code,5,OCF_ERR_INSTALLED]
 |hard
 
 |6
 |OCF_ERR_CONFIGURED
 |The resource's configuration is invalid. E.g. required parameters are missing.
 indexterm:[Return Code,OCF_ERR_CONFIGURED]
 indexterm:[Return Code,6,OCF_ERR_CONFIGURED]
 |fatal
 
 |7
 |OCF_NOT_RUNNING
 |The resource is safely stopped. The cluster will not attempt to stop a resource that returns this for any action.
 indexterm:[Return Code,OCF_NOT_RUNNING]
 indexterm:[Return Code,7,OCF_NOT_RUNNING]
 |N/A
 
 |8
 |OCF_RUNNING_MASTER
 |The resource is running in master mode.
 indexterm:[Return Code,OCF_RUNNING_MASTER]
 indexterm:[Return Code,8,OCF_RUNNING_MASTER]
 |soft
 
 |9
 |OCF_FAILED_MASTER
 |The resource is in master mode but has failed. The resource will be demoted,
 stopped and then started (and possibly promoted) again.
 indexterm:[Return Code,OCF_FAILED_MASTER]
 indexterm:[Return Code,9,OCF_FAILED_MASTER]
 |soft
 
 |other
 |N/A
 |Custom error code.
 indexterm:[Return Code,other]
 |soft
 
 |=========================================================
 
 Exceptions to the recovery handling described above:
 
 * Probes (non-recurring monitor actions) that find a resource active
   (or in master mode) will not result in recovery action unless it is
   also found active elsewhere.
 * The recovery action taken when a resource is found active more than
   once is determined by the resource's +multiple-active+ property.
 * Recurring actions that return +OCF_ERR_UNIMPLEMENTED+
   do not cause any type of recovery.
 
 == Init Script LSB Compliance ==
 
 The relevant part of the
 http://refspecs.linuxfoundation.org/lsb.shtml[LSB specifications]
 includes a description of all the return codes listed here.
     
 Assuming `some_service` is configured correctly and currently
 inactive, the following sequence will help you determine if it is
 LSB-compatible:
 
 . Start (stopped):
 +
 ----
 # /etc/init.d/some_service start ; echo "result: $?"
 ----
 +
   .. Did the service start?
   .. Did the command print *result: 0* (in addition to its usual output)?
 +
 . Status (running):
 +
 ----
 # /etc/init.d/some_service status ; echo "result: $?"
 ----
 +
   .. Did the script accept the command?
   .. Did the script indicate the service was running?
   .. Did the command print *result: 0* (in addition to its usual output)?
 +
 . Start (running):
 +
 ----
 # /etc/init.d/some_service start ; echo "result: $?"
 ----
 +
   .. Is the service still running?
   .. Did the command print *result: 0* (in addition to its usual output)?
 +
 . Stop (running):
 +
 ----
 # /etc/init.d/some_service stop ; echo "result: $?"
 ----
 +
   .. Was the service stopped?
   .. Did the command print *result: 0* (in addition to its usual output)?
 +
 . Status (stopped):
 +
 ----
 # /etc/init.d/some_service status ; echo "result: $?"
 ----
 +
   .. Did the script accept the command?
   .. Did the script indicate the service was not running?
   .. Did the command print *result: 3* (in addition to its usual output)?
 +
 . Stop (stopped):
 +
 ----
 # /etc/init.d/some_service stop ; echo "result: $?"
 ----
 +
   .. Is the service still stopped?
   .. Did the command print *result: 0* (in addition to its usual output)?
 +
 . Status (failed):
 +
 .. This step is not readily testable and relies on manual inspection of the script.
 +
 The script can use one of the error codes (other than 3) listed in the
 LSB spec to indicate that it is active but failed. This tells the
 cluster that before moving the resource to another node, it needs to
 stop it on the existing one first.
 
 If the answer to any of the above questions is no, then the script is
 not LSB-compliant. Your options are then to either fix the script or
 write an OCF agent based on the existing script.
diff --git a/doc/Pacemaker_Administration/en-US/Ch-Configuring.txt b/doc/Pacemaker_Administration/en-US/Ch-Configuring.txt
index cffe780bbf..473e5b5299 100644
--- a/doc/Pacemaker_Administration/en-US/Ch-Configuring.txt
+++ b/doc/Pacemaker_Administration/en-US/Ch-Configuring.txt
@@ -1,435 +1,435 @@
 = Configuring Pacemaker =
 
 == How Should the Configuration be Updated? ==
 
 There are three basic rules for updating the cluster configuration:
 
  * Rule 1 - Never edit the +cib.xml+ file manually. Ever. I'm not making this up.
  * Rule 2 - Read Rule 1 again.
  * Rule 3 - The cluster will notice if you ignored rules 1 and 2 and refuse to use the configuration.
 
 Now that it is clear how 'not' to update the configuration, we can begin
 to explain how you 'should'.
 
 === Editing the CIB Using XML ===
 
 The most powerful tool for modifying the configuration is the
 +cibadmin+ command.  With +cibadmin+, you can query, add, remove, update
 or replace any part of the configuration. All changes take effect immediately,
 so there is no need to perform a reload-like operation.
 
 The simplest way of using `cibadmin` is to use it to save the current
 configuration to a temporary file, edit that file with your favorite
 text or XML editor, and then upload the revised configuration. footnote:[This
 process might appear to risk overwriting changes that happen after the initial
 cibadmin call, but pacemaker will reject any update that is "too old". If the
 CIB is updated in some other fashion after the initial cibadmin, the second
 cibadmin will be rejected because the version number will be too low.]
       
 .Safely using an editor to modify the cluster configuration
 ======
 --------
 # cibadmin --query > tmp.xml
 # vi tmp.xml
 # cibadmin --replace --xml-file tmp.xml
 --------
 ======
 
 Some of the better XML editors can make use of a RELAX NG schema to
 help make sure any changes you make are valid.  The schema describing
 the configuration can be found in +pacemaker.rng+, which may be
 deployed in a location such as +/usr/share/pacemaker+ or
 +/usr/lib/heartbeat+ depending on your operating system and how you
 installed the software.
 
 If you want to modify just one section of the configuration, you can
 query and replace just that section to avoid modifying any others.
       
 .Safely using an editor to modify only the resources section
 ======
 --------
 # cibadmin --query --scope resources > tmp.xml
 # vi tmp.xml
 # cibadmin --replace --scope resources --xml-file tmp.xml
 --------
 ======
 
 === Quickly Deleting Part of the Configuration ===
 
 Identify the object you wish to delete by XML tag and id. For example,
 you might search the CIB for all STONITH-related configuration:
       
 .Searching for STONITH-related configuration items
 ======
 ----
 # cibadmin -Q | grep stonith
  <nvpair id="cib-bootstrap-options-stonith-action" name="stonith-action" value="reboot"/>
  <nvpair id="cib-bootstrap-options-stonith-enabled" name="stonith-enabled" value="1"/>
  <primitive id="child_DoFencing" class="stonith" type="external/vmware">
  <lrm_resource id="child_DoFencing:0" type="external/vmware" class="stonith">
  <lrm_resource id="child_DoFencing:0" type="external/vmware" class="stonith">
  <lrm_resource id="child_DoFencing:1" type="external/vmware" class="stonith">
  <lrm_resource id="child_DoFencing:0" type="external/vmware" class="stonith">
  <lrm_resource id="child_DoFencing:2" type="external/vmware" class="stonith">
  <lrm_resource id="child_DoFencing:0" type="external/vmware" class="stonith">
  <lrm_resource id="child_DoFencing:3" type="external/vmware" class="stonith">
 ----
 ======
 
 If you wanted to delete the +primitive+ tag with id +child_DoFencing+,
 you would run:
 
 ----
 # cibadmin --delete --xml-text '<primitive id="child_DoFencing"/>'
 ----
 
 === Updating the Configuration Without Using XML ===
 
 Most tasks can be performed with one of the other command-line
 tools provided with pacemaker, avoiding the need to read or edit XML.
 
 To enable STONITH for example, one could run:
 
 ----
 # crm_attribute --name stonith-enabled --update 1
 ----
 
 Or, to check whether *somenode* is allowed to run resources, there is:
 
 ----
 # crm_standby --query --node somenode
 ----
 
 Or, to find the current location of *my-test-rsc*, one can use:
 
 ----
 # crm_resource --locate --resource my-test-rsc
 ----
 
 Examples of using these tools for specific cases will be given throughout this
 document where appropriate.
 
 [[s-config-sandboxes]]
 == Making Configuration Changes in a Sandbox ==
 
 Often it is desirable to preview the effects of a series of changes
 before updating the configuration all at once. For this purpose, we
 have created `crm_shadow` which creates a
 "shadow" copy of the configuration and arranges for all the command
 line tools to use it.
 
 To begin, simply invoke `crm_shadow --create` with
 the name of a configuration to create footnote:[Shadow copies are
 identified with a name, making it possible to have more than one.],
 and follow the simple on-screen instructions.
 
 [WARNING]
 ====
 Read this section and the on-screen instructions carefully; failure to do so could
 result in destroying the cluster's active configuration!
 ====
       
       
 .Creating and displaying the active sandbox
 ======
 ----
 # crm_shadow --create test
 Setting up shadow instance
 Type Ctrl-D to exit the crm_shadow shell
 shadow[test]: 
 shadow[test] # crm_shadow --which
 test
 ----
 ======
 
 From this point on, all cluster commands will automatically use the
 shadow copy instead of talking to the cluster's active configuration.
 Once you have finished experimenting, you can either make the
 changes active via the `--commit` option, or discard them using the `--delete`
 option.  Again, be sure to follow the on-screen instructions carefully!
       
 For a full list of `crm_shadow` options and
 commands, invoke it with the `--help` option.
 
 .Use a sandbox to make multiple changes all at once, discard them, and verify the real configuration is untouched
 ======
 ----
  shadow[test] # crm_failcount -r rsc_c001n01 -G
  scope=status  name=fail-count-rsc_c001n01 value=0
  shadow[test] # crm_standby --node c001n02 -v on
  shadow[test] # crm_standby --node c001n02 -G
  scope=nodes  name=standby value=on
 
  shadow[test] # cibadmin --erase --force
  shadow[test] # cibadmin --query
  <cib crm_feature_set="3.0.14" validate-with="pacemaker-3.0" epoch="112" num_updates="2" admin_epoch="0" cib-last-written="Mon Jan  8 23:26:47 2018" update-origin="rhel7-1" update-client="crm_node" update-user="root" have-quorum="1" dc-uuid="1">
    <configuration>
      <crm_config/>
      <nodes/>
      <resources/>
      <constraints/>
    </configuration>
    <status/>
  </cib>
   shadow[test] # crm_shadow --delete test --force
   Now type Ctrl-D to exit the crm_shadow shell
   shadow[test] # exit
   # crm_shadow --which
   No active shadow configuration defined
   # cibadmin -Q
  <cib crm_feature_set="3.0.14" validate-with="pacemaker-3.0" epoch="110" num_updates="2" admin_epoch="0" cib-last-written="Mon Jan  8 23:26:47 2018" update-origin="rhel7-1" update-client="crm_node" update-user="root" have-quorum="1">
     <configuration>
        <crm_config>
           <cluster_property_set id="cib-bootstrap-options">
              <nvpair id="cib-bootstrap-1" name="stonith-enabled" value="1"/>
              <nvpair id="cib-bootstrap-2" name="pe-input-series-max" value="30000"/>
 ----
 ======
 
 [[s-config-testing-changes]]
 == Testing Your Configuration Changes ==
 
 We saw previously how to make a series of changes to a "shadow" copy
 of the configuration.  Before loading the changes back into the
 cluster (e.g. `crm_shadow --commit mytest --force`), it is often
 advisable to simulate the effect of the changes with +crm_simulate+.
 For example:
       
 ----
 # crm_simulate --live-check -VVVVV --save-graph tmp.graph --save-dotfile tmp.dot
 ----
 
 This tool uses the same library as the live cluster to show what it
 would have done given the supplied input.  Its output, in addition to
 a significant amount of logging, is stored in two files +tmp.graph+
 and +tmp.dot+. Both files are representations of the same thing: the
 cluster's response to your changes.
 
 The graph file stores the complete transition from the existing cluster state
 to your desired new state, containing a list of all the actions, their
 parameters and their prerequisites. Because the transition graph is not
 terribly easy to read, the tool also generates a Graphviz
 footnote:[Graph visualization software. See http://www.graphviz.org/ for details.]
 dot-file representing the same information.
 
 For information on the options supported by `crm_simulate`, use
 its `--help` option.
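 
 To view the Graphviz output as an image, the dot file can be rendered with the
 `dot` tool from the Graphviz package, for example:
 
 ----
 # dot -Tpng tmp.dot -o tmp.png
 ----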
 
 .Interpreting the Graphviz output
  * Arrows indicate ordering dependencies
  * Dashed arrows indicate dependencies that are not present in the transition graph
  * Actions with a dashed border of any color do not form part of the transition graph
  * Actions with a green border form part of the transition graph
  * Actions with a red border are ones the cluster would like to execute but cannot run
  * Actions with a blue border are ones the cluster does not feel need to be executed
  * Actions with orange text are pseudo/pretend actions that the cluster uses to simplify the graph
  * Actions with black text are sent to the LRM
  * Resource actions have text of the form pass:[<replaceable>rsc</replaceable>]_pass:[<replaceable>action</replaceable>]_pass:[<replaceable>interval</replaceable>] pass:[<replaceable>node</replaceable>]
  * Any action depending on an action with a red border will not be able to execute. 
  * Loops are _really_ bad. Please report them to the development team. 
 
 === Small Cluster Transition ===
 
 image::images/Policy-Engine-small.png["An example transition graph as represented by Graphviz",width="16cm",height="6cm",align="center"]      
 
 In the above example, it appears that a new node, *pcmk-2*, has come
 online and that the cluster is checking to make sure *rsc1*, *rsc2*
 and *rsc3* are not already running there (indicated by the
 *rscN_monitor_0* entries).  Once it did that, and assuming the resources
 were not active there, it would have liked to stop *rsc1* and *rsc2*
 on *pcmk-1* and move them to *pcmk-2*.  However, there appears to be
 some problem, and the cluster cannot or is not permitted to perform the
 stop actions, which implies it also cannot perform the start actions.
 For some reason, the cluster does not want to start *rsc3* anywhere.
 
 === Complex Cluster Transition ===
 
 image::images/Policy-Engine-big.png["Another, slightly more complex, transition graph that you're not expected to be able to read",width="16cm",height="20cm",align="center"]
 
 == Do I Need to Update the Configuration on All Cluster Nodes? ==
 
 No. Any changes are immediately synchronized to the other active
 members of the cluster.
 
 To reduce bandwidth, the cluster only broadcasts the incremental
 updates that result from your changes and uses MD5 checksums to ensure
 that each copy is completely consistent.
 
 == Working with CIB Properties ==
 
 Although these fields can be written to by the user, in
 most cases the cluster will overwrite any values specified by the
 user with the "correct" ones.
 
 To change the ones that can be specified by the user,
 for example +admin_epoch+, one should use:
 ----
 # cibadmin --modify --xml-text '<cib admin_epoch="42"/>'
 ----
 
 A complete set of CIB properties will look something like this:
 
 .Attributes set for a cib object
 ======
 [source,XML]
 -------
 <cib crm_feature_set="3.0.7" validate-with="pacemaker-1.2" 
    admin_epoch="42" epoch="116" num_updates="1"
    cib-last-written="Mon Jan 12 15:46:39 2015" update-origin="rhel7-1"
    update-client="crm_attribute" have-quorum="1" dc-uuid="1">
 -------
 ======
 
 == Querying and Setting Cluster Options ==
 
 indexterm:[Querying,Cluster Option]
 indexterm:[Setting,Cluster Option]
 indexterm:[Cluster,Querying Options]
 indexterm:[Cluster,Setting Options]
 
 Cluster options can be queried and modified using the `crm_attribute` tool. To
 get the current value of +cluster-delay+, you can run:
 
 ----
 # crm_attribute --query --name cluster-delay
 ----
 
 which is more simply written as
 
 ----
 # crm_attribute -G -n cluster-delay
 ----
 
 If a value is found, you'll see a result like this:
 
 ----
 # crm_attribute -G -n cluster-delay
 scope=crm_config name=cluster-delay value=60s
 ----
 
 If no value is found, the tool will display an error:
 
 ----
 # crm_attribute -G -n clusta-deway
 scope=crm_config name=clusta-deway value=(null)
 Error performing operation: No such device or address
 ----
 
 To use a different value (for example, 30 seconds), simply run:
 
 ----
 # crm_attribute --name cluster-delay --update 30s
 ----
 
 To go back to the cluster's default value, you can delete the value, for example:
 
 ----
 # crm_attribute --name cluster-delay --delete
 Deleted crm_config option: id=cib-bootstrap-options-cluster-delay name=cluster-delay
 ----
 
 === When Options are Listed More Than Once ===
 
 If you ever see something like the following, it means that the option you're modifying is present more than once.
 
 .Deleting an option that is listed twice
 =======
 ------
 # crm_attribute --name batch-limit --delete
 
 Multiple attributes match name=batch-limit in crm_config:
 Value: 50          (set=cib-bootstrap-options, id=cib-bootstrap-options-batch-limit)
 Value: 100         (set=custom, id=custom-batch-limit)
 Please choose from one of the matches above and supply the 'id' with --id
 -------
 =======
 
 In such cases, follow the on-screen instructions to perform the
 requested action.  To determine which value is currently being used by
 the cluster, refer to the 'Rules' chapter of 'Pacemaker Explained'.
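 
 In the example above, for instance, the value in the +custom+ set could be
 removed by supplying its id:
 
 ----
 # crm_attribute --name batch-limit --delete --id custom-batch-limit
 ----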
 
 [[s-remote-connection]]
 == Connecting from a Remote Machine ==
 indexterm:[Cluster,Remote connection]
 indexterm:[Cluster,Remote administration]
 
 Provided Pacemaker is installed on a machine, it is possible to
 connect to the cluster even if the machine itself is not in the same
 cluster.  To do this, one simply sets up a number of environment
 variables and runs the same commands as when working on a cluster
 node.
 
 .Environment Variables Used to Connect to Remote Instances of the CIB
-[width="95%",cols="1m,1,3<",options="header",align="center"]
+[width="95%",cols="1m,1,<3",options="header",align="center"]
 |=========================================================
 
 |Environment Variable
 |Default
 |Description
 
 |CIB_user
 |$USER
 |The user to connect as. Needs to be part of the +haclient+ group on
  the target host.
  indexterm:[Environment Variable,CIB_user]
 
 |CIB_passwd
 |
 |The user's password. Read from the command line if unset.
  indexterm:[Environment Variable,CIB_passwd]
 
 |CIB_server
 |localhost
 |The host to contact
  indexterm:[Environment Variable,CIB_server]
 
 |CIB_port
 |
 |The port on which to contact the server; required.
  indexterm:[Environment Variable,CIB_port]
 
 |CIB_encrypted
 |TRUE
 |Whether to encrypt network traffic
  indexterm:[Environment Variable,CIB_encrypted]
 
 |=========================================================
 
 So, if *c001n01* is an active cluster node and is listening on port 1234
 for connections, and *someuser* is a member of the *haclient* group,
 then the following would prompt for *someuser*'s password and return
 the cluster's current configuration:
 
 ----
 # export CIB_port=1234; export CIB_server=c001n01; export CIB_user=someuser;
 # cibadmin -Q
 ----
 
 For security reasons, the cluster does not listen for remote
 connections by default.  If you wish to allow remote access, you need
 to set the +remote-tls-port+ (encrypted) or +remote-clear-port+
 (unencrypted) CIB properties (i.e., those kept in the +cib+ tag, like
 +num_updates+ and +epoch+).
 
 .Extra top-level CIB properties for remote access
-[width="95%",cols="1m,1,3<",options="header",align="center"]
+[width="95%",cols="1m,1,<3",options="header",align="center"]
 |=========================================================
 
 |Field
 |Default
 |Description
 
 |remote-tls-port
 |_none_
 |Listen for encrypted remote connections on this port.
  indexterm:[remote-tls-port,Remote Connection Option]
  indexterm:[Remote Connection,Option,remote-tls-port]
 
 |remote-clear-port
 |_none_
 |Listen for plaintext remote connections on this port.
  indexterm:[remote-clear-port,Remote Connection Option]
  indexterm:[Remote Connection,Option,remote-clear-port]
 
 |=========================================================
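 
 For example, to accept encrypted remote connections, +remote-tls-port+ can be
 set on the +cib+ element in the same way as the other top-level CIB properties
 shown earlier (the port number below is only an illustration):
 
 ----
 # cibadmin --modify --xml-text '<cib remote-tls-port="1234"/>'
 ----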
 
diff --git a/doc/Pacemaker_Explained/en-US/Ch-Advanced-Options.txt b/doc/Pacemaker_Explained/en-US/Ch-Advanced-Options.txt
index fbd992267d..c662c60a49 100644
--- a/doc/Pacemaker_Explained/en-US/Ch-Advanced-Options.txt
+++ b/doc/Pacemaker_Explained/en-US/Ch-Advanced-Options.txt
@@ -1,728 +1,728 @@
 = Advanced Configuration =
 
 [[s-recurring-start]]
 == Specifying When Recurring Actions are Performed ==
 
 
 By default, recurring actions are scheduled relative to when the
 resource started.  So if your resource was last started at 14:32 and
 you have a backup set to be performed every 24 hours, then the backup
 will always run in the middle of the business day -- hardly
 desirable.
 
 To specify a date and time that the operation should be relative to, set
 the operation's +interval-origin+.  The cluster uses this point to
 calculate the correct +start-delay+ such that the operation will occur
 at _origin + (interval * N)_.
 
 So, if the operation's interval is 24h, its interval-origin is set to
 02:00 and it is currently 14:32, then the cluster would initiate
 the operation with a start delay of 11 hours and 28 minutes.  If the
 resource is moved to another node before 2am, then the operation is
 cancelled.
 
 The value specified for +interval+ and +interval-origin+ can be any
 date/time conforming to the
 http://en.wikipedia.org/wiki/ISO_8601[ISO8601 standard].  By way of
 example, to specify an operation that would run on the first Monday of
 2009 and every Monday after that, you would add:
 
 .Specifying a Base for Recurring Action Intervals
 =====
 [source,XML]
 <op id="my-weekly-action" name="custom-action" interval="P7D" interval-origin="2009-W01-1"/> 
 =====
 
 [[s-failure-handling]]
 == Handling Resource Failure ==
 
 By default, Pacemaker will attempt to recover failed resources by restarting
 them. However, failure recovery is highly configurable.
 
 === Failure Counts ===
 
 Pacemaker tracks resource failures for each combination of node, resource, and
 operation (start, stop, monitor, etc.).
 
 You can query the fail count for a particular node, resource, and/or operation
 using the `crm_failcount` command. For example, to see how many times the
 10-second monitor for +myrsc+ has failed on +node1+, run:
 
 ----
 # crm_failcount --query -r myrsc -N node1 -n monitor -I 10s
 ----
 
 If you omit the node, `crm_failcount` will use the local node. If you omit the
 operation and interval, `crm_failcount` will display the sum of the fail counts
 for all operations on the resource.
 
 You can use `crm_resource --cleanup` or `crm_failcount --delete` to clear
 fail counts. For example, to clear the above monitor failures, run:
 
 ----
 # crm_resource --cleanup -r myrsc -N node1 -n monitor -I 10s
 ----
 
 If you omit the resource, `crm_resource --cleanup` will clear failures for all
 resources. If you omit the node, it will clear failures on all nodes. If you
 omit the operation and interval, it will clear the failures for all operations
 on the resource.
 
 [NOTE]
 ====
 Even when cleaning up only a single operation, all failed operations will
 disappear from the status display. This allows the cluster to trigger a
 re-check of the resource's current status.
 ====
 
 Higher-level tools may provide other commands for querying and clearing
 fail counts.
 
 The `crm_mon` tool shows the current cluster status, including any failed
 operations. To see the current fail counts for any failed resources, call
 `crm_mon` with the `--failcounts` option. This shows the fail counts per
 resource (that is, the sum of any operation fail counts for the resource).
 
 === Failure Response ===
 
 Normally, if a running resource fails, pacemaker will try to stop it and start
 it again. Pacemaker will choose the best location to start it each time, which
 may be the same node that it failed on.
 
 However, if a resource fails repeatedly, it is possible that there is an
 underlying problem on that node, and you might want to try a different node
 in such a case. Pacemaker allows you to set your preference via the
 +migration-threshold+ resource meta-attribute.
 footnote:[
 The naming of this option was perhaps unfortunate as it is easily
 confused with live migration, the process of moving a resource from
 one node to another without stopping it.  Xen virtual guests are the
 most common example of resources that can be migrated in this manner.
 ]
 
 If you define +migration-threshold=pass:[<replaceable>N</replaceable>]+ for a
 resource, it will be banned from the original node after 'N' failures.
 
 [NOTE]
 ====
 The +migration-threshold+ is per 'resource', even though fail counts are
 tracked per 'operation'. The operation fail counts are added together
 to compare against the +migration-threshold+.
 ====
 
 By default, fail counts remain until manually cleared by an administrator
 using `crm_resource --cleanup` or `crm_failcount --delete` (hopefully after
 first fixing the failure's cause). It is possible to have fail counts expire
 automatically by setting the +failure-timeout+ resource meta-attribute.
 
 [IMPORTANT]
 ====
 A successful operation does not clear past failures. If a recurring monitor
 operation fails once, succeeds many times, then fails again days later, its
 fail count is 2. Fail counts are cleared only by manual intervention or
 failure timeout.
 ====
 
 For example, a setting of +migration-threshold=2+ and +failure-timeout=60s+
 would cause the resource to move to a new node after 2 failures, and
 allow it to move back (depending on stickiness and constraint scores) after one
 minute.
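 
 For instance, the meta-attributes from the example above could be set with
 `crm_resource` (exact option syntax may vary between Pacemaker versions):
 
 ----
 # crm_resource --resource myrsc --meta --set-parameter migration-threshold --parameter-value 2
 # crm_resource --resource myrsc --meta --set-parameter failure-timeout --parameter-value 60s
 ----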
 
 [NOTE]
 ====
 +failure-timeout+ is measured since the most recent failure. That is, older
 failures do not individually time out and lower the fail count. Instead, all
 failures are timed out simultaneously (and the fail count is reset to 0) if
 there is no new failure for the timeout period.
 ====
 
 There are two exceptions to the migration threshold concept:
 when a resource either fails to start or fails to stop.
 
 If the cluster property +start-failure-is-fatal+ is set to +true+ (which is the
 default), start failures cause the fail count to be set to +INFINITY+ and thus
 always cause the resource to move immediately.
 
 Stop failures are slightly different and crucial.  If a resource fails
 to stop and STONITH is enabled, then the cluster will fence the node
 in order to be able to start the resource elsewhere.  If STONITH is
 not enabled, then the cluster has no way to continue and will not try
 to start the resource elsewhere, but will try to stop it again after
 the failure timeout.
 
 [IMPORTANT]
 Please read <<s-rules-recheck>> to understand how timeouts work
 before configuring a +failure-timeout+.
 
 == Moving Resources ==
 indexterm:[Moving,Resources] 
 indexterm:[Resource,Moving]
 
 === Moving Resources Manually ===
 
 There are primarily two occasions when you would want to move a
 resource from its current location: when the whole node is under
 maintenance, and when a single resource needs to be moved.
 
 ==== Standby Mode ====
 
 Since everything eventually comes down to a score, you could create
 constraints for every resource to prevent them from running on one
 node.  While pacemaker configuration can seem convoluted at times, not even
 we would require this of administrators.
 
 Instead, one can set a special node attribute which tells the cluster
 "don't let anything run here".  There is even a helpful tool to help
 query and set it, called `crm_standby`.  To check the standby status
 of the current machine, run:
 
 ----
 # crm_standby -G
 ----
 
 A value of +on+ indicates that the node is _not_ able to host any
 resources, while a value of +off+ says that it _can_.
 
 You can also check the status of other nodes in the cluster by
 specifying the `--node` option:
 
 ----
 # crm_standby -G --node sles-2
 ----
 
 To change the current node's standby status, use `-v` instead of `-G`:
 
 ----
 # crm_standby -v on
 ----
 
 Again, you can change another host's value by supplying a hostname with `--node`.
 
 ==== Moving One Resource ====
 
 When only one resource needs to move, we could do this by creating
 location constraints.  However, once again we provide a user-friendly
 shortcut as part of the `crm_resource` command, which creates and
 modifies the extra constraints for you.  If +Email+ were running on
 +sles-1+ and you wanted it moved to a specific location, the command
 would look something like:
         
 ----
 # crm_resource -M -r Email -H sles-2
 ----
 
 Behind the scenes, the tool will create the following location constraint:
 
 [source,XML]
 <rsc_location rsc="Email" node="sles-2" score="INFINITY"/>
 
 It is important to note that subsequent invocations of `crm_resource
 -M` are not cumulative. So, if you ran these commands
 
 ----
 # crm_resource -M -r Email -H sles-2
 # crm_resource -M -r Email -H sles-3
 ----
 
 then it is as if you had never performed the first command.
 
 To allow the resource to move back again, use:
 
 ----
 # crm_resource -U -r Email
 ----
 
 Note the use of the word _allow_.  The resource can move back to its
 original location but, depending on +resource-stickiness+, it might
 stay where it is.  To be absolutely certain that it moves back to
 +sles-1+, move it there before issuing the call to `crm_resource -U`:
         
 ----
 # crm_resource -M -r Email -H sles-1
 # crm_resource -U -r Email
 ----
 
 Alternatively, if you only care that the resource should be moved from
 its current location, try:
 
 ----
 # crm_resource -B -r Email
 ----
 
 This will instead create a negative constraint, like
 
 [source,XML]
 <rsc_location rsc="Email" node="sles-1" score="-INFINITY"/>
 
 This will achieve the desired effect, but will also have long-term
 consequences.  As the tool will warn you, the creation of a
 +-INFINITY+ constraint will prevent the resource from running on that
 node until `crm_resource -U` is used.  This includes the situation
 where every other cluster node is no longer available!
 
 In some cases, such as when +resource-stickiness+ is set to
 +INFINITY+, it is possible that you will end up with the problem
 described in <<node-score-equal>>.  The tool can detect
 some of these cases and deal with them by creating both
 positive and negative constraints. E.g.
 
 +Email+ prefers +sles-1+ with a score of +-INFINITY+
 
 +Email+ prefers +sles-2+ with a score of +INFINITY+
 
 which has the same long-term consequences as discussed earlier.
 
 === Moving Resources Due to Connectivity Changes ===
 
 You can configure the cluster to move resources when external connectivity is
 lost in two steps.
 
 ==== Tell Pacemaker to Monitor Connectivity ====
 
 First, add an *ocf:pacemaker:ping* resource to the cluster.  The
 *ping* resource uses the system utility of the same name to test whether a
 list of machines (specified by DNS hostname or IPv4/IPv6 address) is
 reachable, and uses the results to maintain a node attribute called +pingd+
 by default.
 footnote:[
 The attribute name is customizable, in order to allow multiple ping groups to be defined.
 ]
 
 [NOTE]
 ===========
 Older versions of Pacemaker used a different agent *ocf:pacemaker:pingd* which
 is now deprecated in favor of *ping*. If your version of Pacemaker does not
 contain the *ping* resource agent, download the latest version from
 https://github.com/ClusterLabs/pacemaker/tree/master/extra/resources/ping
 ===========
 
 Normally, the ping resource should run on all cluster nodes, which means that
 you'll need to create a clone.  A template for this can be found below
 along with a description of the most interesting parameters.
           
 .Common Options for a 'ping' Resource
-[width="95%",cols="1m,4<",options="header",align="center"]
+[width="95%",cols="1m,<4",options="header",align="center"]
 |=========================================================
 
 |Field
 |Description
 
 |dampen
 |The time to wait (dampening) for further changes to occur. Use this
  to prevent a resource from bouncing around the cluster when cluster
  nodes notice the loss of connectivity at slightly different times.
  indexterm:[dampen,Ping Resource Option]
  indexterm:[Ping Resource,Option,dampen]
 
 |multiplier
 |The number of connected ping nodes gets multiplied by this value to
  get a score. Useful when there are multiple ping nodes configured.
  indexterm:[multiplier,Ping Resource Option]
  indexterm:[Ping Resource,Option,multiplier]
 
 |host_list
 |The machines to contact in order to determine the current
  connectivity status. Allowed values include resolvable DNS host
  names, IPv4 and IPv6 addresses.
  indexterm:[host_list,Ping Resource Option]
  indexterm:[Ping Resource,Option,host_list]
 
 |=========================================================
 
 .An example ping cluster resource that checks node connectivity once every minute
 =====
 [source,XML]
 ------------
 <clone id="Connected">
    <primitive id="ping" provider="pacemaker" class="ocf" type="ping">
     <instance_attributes id="ping-attrs">
       <nvpair id="pingd-dampen" name="dampen" value="5s"/>
       <nvpair id="pingd-multiplier" name="multiplier" value="1000"/>
       <nvpair id="pingd-hosts" name="host_list" value="my.gateway.com www.bigcorp.com"/>
     </instance_attributes>
     <operations>
       <op id="ping-monitor-60s" interval="60s" name="monitor"/>
     </operations>
    </primitive>
 </clone>
 ------------
 =====
 
 [IMPORTANT]
 ===========
 You're only half done.  The next section deals with telling Pacemaker
 how to deal with the connectivity status that +ocf:pacemaker:ping+ is
 recording.
 ===========
 
 ==== Tell Pacemaker How to Interpret the Connectivity Data ====
 
 [IMPORTANT]
 ======
 Before attempting the following, make sure you understand
 <<ch-rules>>.
 ======
 
 There are a number of ways to use the connectivity data.
 
 The most common setup is for people to have a single ping
 target (e.g. the service network's default gateway), to prevent the cluster
 from running a resource on any unconnected node.
 
 .Don't run a resource on unconnected nodes
 =====
 [source,XML]
 -------
 <rsc_location id="WebServer-no-connectivity" rsc="Webserver">
    <rule id="ping-exclude-rule" score="-INFINITY" >
     <expression id="ping-exclude" attribute="pingd" operation="not_defined"/>
    </rule>
 </rsc_location>
 -------
 =====
 
 A more complex setup is to have a number of ping targets configured.
 You can require the cluster to only run resources on nodes that can
 connect to all (or a minimum subset) of them.
 
 .Run only on nodes connected to three or more ping targets.
 =====
 [source,XML]
 -------
 <primitive id="ping" provider="pacemaker" class="ocf" type="ping">
 ... <!-- omitting some configuration to highlight important parts -->
       <nvpair id="pingd-multiplier" name="multiplier" value="1000"/>
 ...
 </primitive>
 ...
 <rsc_location id="WebServer-connectivity" rsc="Webserver">
    <rule id="ping-prefer-rule" score="-INFINITY" >
       <expression id="ping-prefer" attribute="pingd" operation="lt" value="3000"/>
    </rule>
 </rsc_location>
 -------
 =====
 
 Alternatively, you can tell the cluster only to _prefer_ nodes with the best
 connectivity.  Just be sure to set +multiplier+ to a value higher than
 that of +resource-stickiness+ (and don't set either of them to
 +INFINITY+).
 
 .Prefer the node with the most connected ping nodes
 =====
 [source,XML]
 -------
 <rsc_location id="WebServer-connectivity" rsc="Webserver">
    <rule id="ping-prefer-rule" score-attribute="pingd" >
     <expression id="ping-prefer" attribute="pingd" operation="defined"/>
    </rule>
 </rsc_location>
 -------
 =====
 
 It is perhaps easier to think of this in terms of the simple
 constraints that the cluster translates it into.  For example, if
 *sles-1* is connected to all five ping nodes but *sles-2* is only
 connected to two, then it would be as if you instead had the following
 constraints in your configuration:
 
 .How the cluster translates the above location constraint
 =====
 [source,XML]
 -------
 <rsc_location id="ping-1" rsc="Webserver" node="sles-1" score="5000"/>
 <rsc_location id="ping-2" rsc="Webserver" node="sles-2" score="2000"/>
 -------
 =====
 
 The advantage is that you don't have to manually update any
 constraints whenever your network connectivity changes.
 
 You can also combine the concepts above into something even more
 complex.  The example below shows how you can prefer the node with the
 most connected ping nodes provided they have connectivity to at least
 three (again assuming that +multiplier+ is set to 1000).
 
 .A more complex example of choosing a location based on connectivity
 =====
 [source,XML]
 -------
 <rsc_location id="WebServer-connectivity" rsc="Webserver">
    <rule id="ping-exclude-rule" score="-INFINITY" >
     <expression id="ping-exclude" attribute="pingd" operation="lt" value="3000"/>
    </rule>
    <rule id="ping-prefer-rule" score-attribute="pingd" >
     <expression id="ping-prefer" attribute="pingd" operation="defined"/>
    </rule>
 </rsc_location>
 -------
 =====
 
 [[s-migrating-resources]]
 === Migrating Resources ===
 
 Normally, when the cluster needs to move a resource, it fully restarts
 the resource (i.e. stops the resource on the current node
 and starts it on the new node).
 
 However, some types of resources, such as Xen virtual guests, are able to move to
 another location without loss of state (often referred to as live migration
 or hot migration). In pacemaker, this is called resource migration.
 Pacemaker can be configured to migrate a resource when moving it,
 rather than restarting it.
 
 Not all resources are able to migrate (see the Migration Checklist
 below), and those that can won't do so in all situations.
 Conceptually, there are two requirements from which the other
 prerequisites follow:
 
 * The resource must be active and healthy at the old location; and
 * everything required for the resource to run must be available on
   both the old and new locations.
 
 The cluster is able to accommodate both 'push' and 'pull' migration models
 by requiring the resource agent to support two special actions:
 +migrate_to+ (performed on the current location) and +migrate_from+
 (performed on the destination).
 
 In push migration, the process on the current location transfers the
 resource to the new location, where it is later activated.  In this
 scenario, most of the work would be done in the +migrate_to+ action
 and, if anything, the activation would occur during +migrate_from+.
 
 Conversely for pull, the +migrate_to+ action is practically empty and
 +migrate_from+ does most of the work, extracting the relevant resource
 state from the old location and activating it.
 
 There is no wrong or right way for a resource agent to implement migration,
 as long as it works.
 
 .Migration Checklist
 * The resource may not be a clone.
 * The resource must use an OCF style agent.
 * The resource must not be in a failed or degraded state.
 * The resource agent must support +migrate_to+ and
   +migrate_from+ actions, and advertise them in its metadata.
 * The resource must have the +allow-migrate+ meta-attribute set to
   +true+ (which is not the default).
 
 If an otherwise migratable resource depends on another resource
 via an ordering constraint, there are special situations in which it will be
 restarted rather than migrated.
 
 For example, if the resource depends on a clone, and at the time the resource
 needs to be moved, the clone has instances that are stopping and instances
 that are starting, then the resource will be restarted. The scheduler is not
 yet able to model this situation correctly and so takes the safer (if less
 optimal) path.
 
 Also, if a migratable resource depends on a non-migratable resource, and both
 need to be moved, the migratable resource will be restarted.
 
 [[s-node-health]]
 == Tracking Node Health ==
 
 A node may be functioning adequately as far as cluster membership is concerned,
 and yet be "unhealthy" in some respect that makes it an undesirable location
 for resources. For example, a disk drive may be reporting SMART errors, or the
 CPU may be highly loaded.
 
 Pacemaker offers a way to automatically move resources off unhealthy nodes.
 
 === Node Health Attributes ===
 
 Pacemaker will treat any node attribute whose name starts with +#health+ as an
 indicator of node health. Node health attributes may have one of the following
 values:
 
 .Allowed Values for Node Health Attributes
-[width="95%",cols="1,3<",options="header",align="center"]
+[width="95%",cols="1,<3",options="header",align="center"]
 |=========================================================
 
 |Value
 |Intended significance
 
 |+red+
 |This indicator is unhealthy
  indexterm:[Node health,red]
 
 |+yellow+
 |This indicator is becoming unhealthy
  indexterm:[Node health,yellow]
 
 |+green+
 |This indicator is healthy
  indexterm:[Node health,green]
 
 |'integer'
 |A numeric score to apply to all resources on this node
  (0 or positive is healthy, negative is unhealthy)
  indexterm:[Node health,score]
 
 |=========================================================
 
 === Node Health Strategy ===
 
 Pacemaker assigns a node health score to each node, as the sum of the values of
 all its node health attributes. This score will be used as a location
 constraint applied to this node for all resources.
 
 The +node-health-strategy+ cluster option controls how Pacemaker responds to
 changes in node health attributes, and how it translates +red+, +yellow+, and
 +green+ to scores.
 
 Allowed values are:
 
 .Node Health Strategies
-[width="95%",cols="1m,3<",options="header",align="center"]
+[width="95%",cols="1m,<3",options="header",align="center"]
 |=========================================================
 
 |Value
 |Effect
 
 |none
 |Do not track node health attributes at all.
  indexterm:[Node health,none]
 
 |migrate-on-red
 |Assign the value of +-INFINITY+ to +red+, and 0 to +yellow+ and +green+.
  This will cause all resources to move off the node if any attribute is +red+.
  indexterm:[Node health,migrate-on-red]
 
 |only-green
 |Assign the value of +-INFINITY+ to +red+ and +yellow+, and 0 to +green+.
  This will cause all resources to move off the node if any attribute is +red+
  or +yellow+.
  indexterm:[Node health,only-green]
 
 |progressive
 |Assign the value of the +node-health-red+ cluster option to +red+, the value
  of +node-health-yellow+ to +yellow+, and the value of +node-health-green+ to
  +green+. Each node is additionally assigned a score of +node-health-base+
  (this allows resources to start even if some attributes are +yellow+). This
  strategy gives the administrator finer control over how important each value
  is.
  indexterm:[Node health,progressive]
 
 |custom
 |Track node health attributes using the same values as +progressive+ for
  +red+, +yellow+, and +green+, but do not take them into account.
  The administrator is expected to implement a policy by defining rules
  (see <<ch-rules>>) referencing node health attributes.
  indexterm:[Node health,custom]
 
 |=========================================================
 
 === Measuring Node Health ===
 
 Since Pacemaker calculates node health based on node attributes,
 any method that sets node attributes may be used to measure node
 health. The most common ways are resource agents or separate daemons.
 
 Pacemaker provides examples that can be used directly or as a basis for
 custom code. The +ocf:pacemaker:HealthCPU+ and +ocf:pacemaker:HealthSMART+
 resource agents set node health attributes based on CPU and disk parameters.
 The +ipmiservicelogd+ daemon sets node health attributes based on IPMI
 values (the +ocf:pacemaker:SystemHealth+ resource agent can be used to manage
 the daemon as a cluster resource).
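 
 Because any node attribute whose name starts with +#health+ is treated as a
 health indicator, an external script could also set one directly. For example
 (the attribute name here is just an illustration):
 
 ----
 # crm_attribute --node sles-1 --name '#health-disk' --update red
 ----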
 
 == Reloading Services After a Definition Change ==
 
 The cluster automatically detects changes to the definition of
 services it manages.  The normal response is to stop the
 service (using the old definition) and start it again (with the new
 definition).  This works well, but some services are smarter and can
 be told to use a new set of options without restarting.
 
 To take advantage of this capability, the resource agent must:
 
 . Accept the +reload+ operation and perform any required actions.
   _The actions here depend completely on your application!_
 +
 .The DRBD agent's logic for supporting +reload+
 =====
 [source,Bash]
 -------
 case $1 in
     start)
         drbd_start
         ;;
     stop)
         drbd_stop
         ;;
     reload)
         drbd_reload
         ;;
     monitor)
         drbd_monitor
         ;;
     *)
         drbd_usage
         exit $OCF_ERR_UNIMPLEMENTED
         ;;
 esac
 exit $?
 -------
 =====
 . Advertise the +reload+ operation in the +actions+ section of its metadata
 +
 .The DRBD Agent Advertising Support for the +reload+ Operation
 =====
 [source,XML]
 -------
 <?xml version="1.0"?>
   <!DOCTYPE resource-agent SYSTEM "ra-api-1.dtd">
   <resource-agent name="drbd">
     <version>1.1</version>
     
     <longdesc lang="en">
       Master/Slave OCF Resource Agent for DRBD
     </longdesc>
     
     ...
     
     <actions>
       <action name="start"   timeout="240" />
       <action name="reload"  timeout="240" />
       <action name="promote" timeout="90" />
       <action name="demote"  timeout="90" />
       <action name="notify"  timeout="90" />
       <action name="stop"    timeout="100" />
       <action name="meta-data"    timeout="5" />
       <action name="validate-all" timeout="30" />
     </actions>
   </resource-agent>
 -------
 =====
 . Advertise one or more parameters that can take effect using +reload+.
 +
 Any parameter with the +unique+ attribute set to 0 is eligible to be used in this way.
 +
 .Parameter that can be changed using reload
 =====
 [source,XML]
 -------
 <parameter name="drbdconf" unique="0">
     <longdesc lang="en">Full path to the drbd.conf file.</longdesc>
     <shortdesc lang="en">Path to drbd.conf</shortdesc>
     <content type="string" default="${OCF_RESKEY_drbdconf_default}"/>
 </parameter>
 -------
 =====
 
 Once these requirements are satisfied, the cluster will automatically
 know to reload the resource (instead of restarting) when a non-unique
 field changes.
       
 [NOTE]
 ======
 Metadata will not be re-read unless the resource needs to be started. This may
 mean that the resource will be restarted the first time, even though you
 changed a parameter with +unique=0+.
 ======
 
 [NOTE]
 ======
 If both a unique and non-unique field are changed simultaneously, the
 resource will still be restarted.
 ======
diff --git a/doc/Pacemaker_Explained/en-US/Ch-Advanced-Resources.txt b/doc/Pacemaker_Explained/en-US/Ch-Advanced-Resources.txt
index 345ccaa042..4c401d1dd1 100644
--- a/doc/Pacemaker_Explained/en-US/Ch-Advanced-Resources.txt
+++ b/doc/Pacemaker_Explained/en-US/Ch-Advanced-Resources.txt
@@ -1,1454 +1,1454 @@
 = Advanced Resource Types =
 
 [[group-resources]]
 == Groups - A Syntactic Shortcut ==
 indexterm:[Group Resources]
 indexterm:[Resource,Groups]
 
 
 One of the most common elements of a cluster is a set of resources
 that need to be located together, start sequentially, and stop in the
 reverse order.  To simplify this configuration, we support the concept
 of groups.
 
 .A group of two primitive resources
 ======
 [source,XML]
 -------
 <group id="shortcut">
    <primitive id="Public-IP" class="ocf" type="IPaddr" provider="heartbeat">
     <instance_attributes id="params-public-ip">
        <nvpair id="public-ip-addr" name="ip" value="192.0.2.2"/>
     </instance_attributes>
    </primitive>
    <primitive id="Email" class="lsb" type="exim"/>
 </group> 
 -------
 ======
 
 
 Although the example above contains only two resources, there is no
 limit to the number of resources a group can contain.  The example is
 also sufficient to explain the fundamental properties of a group:
 
 * Resources are started in the order they appear in (+Public-IP+
   first, then +Email+)
 * Resources are stopped in the reverse of the order in which they appear
   (+Email+ first, then +Public-IP+)
 
 If a resource in the group can't run anywhere, then nothing after that
 is allowed to run, either.
 
 * If +Public-IP+ can't run anywhere, neither can +Email+;
 * but if +Email+ can't run anywhere, this does not affect +Public-IP+
   in any way
 
 The group above is logically equivalent to writing:
 
 .How the cluster sees a group resource
 ======
 [source,XML]
 -------
 <configuration>
    <resources>
     <primitive id="Public-IP" class="ocf" type="IPaddr" provider="heartbeat">
      <instance_attributes id="params-public-ip">
         <nvpair id="public-ip-addr" name="ip" value="192.0.2.2"/>
      </instance_attributes>
     </primitive>
     <primitive id="Email" class="lsb" type="exim"/>
    </resources>
    <constraints>
       <rsc_colocation id="xxx" rsc="Email" with-rsc="Public-IP" score="INFINITY"/>
       <rsc_order id="yyy" first="Public-IP" then="Email"/>
    </constraints>
 </configuration> 
 -------
 ======
 
 Obviously as the group grows bigger, the reduced configuration effort
 can become significant.
 
 Another (typical) example of a group is a DRBD volume, the filesystem
 mount, an IP address, and an application that uses them.
 
 === Group Properties ===
 .Properties of a Group Resource
-[width="95%",cols="3m,5<",options="header",align="center"]
+[width="95%",cols="3m,<5",options="header",align="center"]
 |=========================================================
 
 |Field
 |Description
 
 |id
 |A unique name for the group
  indexterm:[id,Group Resource Property]
  indexterm:[Resource,Group Property,id]
 
 |=========================================================
 
 === Group Options ===
 
 Groups inherit the +priority+, +target-role+, and +is-managed+ properties
 from primitive resources. See <<s-resource-options>> for information about
 those properties.
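 
 For example, the +shortcut+ group from the earlier example could be kept
 stopped by setting +target-role+ as a group meta-attribute (a minimal sketch;
 the +id+ values are illustrative):
 
 .Setting an option on a group
 ======
 [source,XML]
 -------
 <group id="shortcut">
    <meta_attributes id="shortcut-meta">
       <!-- applies to the group as a whole -->
       <nvpair id="shortcut-target-role" name="target-role" value="Stopped"/>
    </meta_attributes>
    <primitive id="Public-IP" class="ocf" type="IPaddr" provider="heartbeat"/>
    <primitive id="Email" class="lsb" type="exim"/>
 </group>
 -------
 ======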
 
 === Group Instance Attributes ===
 
 Groups have no instance attributes. However, any that are set for the group
 object will be inherited by the group's children.
 
 === Group Contents ===
 
 Groups may only contain a collection of cluster resources (see
 <<primitive-resource>>).  To refer to a child of a group resource, just use
 the child's +id+ instead of the group's.
 
 === Group Constraints ===
 
 Although it is possible to reference a group's children in
 constraints, it is usually preferable to reference the group itself.
 
 .Some constraints involving groups
 ======
 [source,XML]
 -------
 <constraints>
     <rsc_location id="group-prefers-node1" rsc="shortcut" node="node1" score="500"/>
     <rsc_colocation id="webserver-with-group" rsc="Webserver" with-rsc="shortcut"/>
     <rsc_order id="start-group-then-webserver" first="Webserver" then="shortcut"/>
 </constraints> 
 -------
 ======
 
 === Group Stickiness ===
 indexterm:[resource-stickiness,Groups]
 
 Stickiness, the measure of how much a resource wants to stay where it
 is, is additive in groups.  Every active resource of the group will
 contribute its stickiness value to the group's total.  So if the
 default +resource-stickiness+ is 100, and a group has seven members,
 five of which are active, then the group as a whole will prefer its
 current location with a score of 500.
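 
 In the scenario above, that default could be set cluster-wide via
 +rsc_defaults+ (a minimal sketch; the +id+ values are illustrative):
 
 .Setting a default resource-stickiness
 ======
 [source,XML]
 -------
 <rsc_defaults>
    <meta_attributes id="rsc-defaults-options">
       <!-- each active group member then adds 100 to the group's total -->
       <nvpair id="rsc-defaults-stickiness" name="resource-stickiness" value="100"/>
    </meta_attributes>
 </rsc_defaults>
 -------
 ======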
 
 [[s-resource-clone]]
 == Clones - Resources That Can Have Multiple Active Instances ==
 indexterm:[Clone Resources]
 indexterm:[Resource,Clones]
 
 'Clone' resources are resources that can have more than one copy active at the
 same time. This allows you, for example, to run a copy of a daemon on every
 node. You can clone any primitive or group resource.
 footnote:[
 Of course, the service must support running multiple instances.
 ]
 
 === Anonymous versus Unique Clones ===
 
 A clone resource is configured to be either 'anonymous' or 'globally unique'.
 
 Anonymous clones are the simplest. These behave completely identically
 everywhere they are running. Because of this, there can be only one instance of
 an anonymous clone active per node.
       
 The instances of globally unique clones are distinct entities. All instances
 are launched identically, but one instance of the clone is not identical to any
 other instance, whether running on the same node or a different node. As an
 example, a cloned IP address can use special kernel functionality such that
 each instance handles a subset of requests for the same IP address.
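 
 A minimal sketch of such a cloned IP address, using the +globally-unique+ and
 +clone-node-max+ options described under Clone Options below (the address and
 +id+ values are illustrative):
 
 .A globally unique clone of an IP address
 ======
 [source,XML]
 -------
 <clone id="shared-ip-clone">
    <meta_attributes id="shared-ip-clone-meta">
       <nvpair id="shared-ip-clone-unique" name="globally-unique" value="true"/>
       <!-- allow two distinct instances to run on the same node -->
       <nvpair id="shared-ip-clone-node-max" name="clone-node-max" value="2"/>
    </meta_attributes>
    <primitive id="shared-ip" class="ocf" provider="heartbeat" type="IPaddr2">
       <instance_attributes id="shared-ip-params">
          <nvpair id="shared-ip-addr" name="ip" value="192.0.2.10"/>
       </instance_attributes>
    </primitive>
 </clone>
 -------
 ======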
 
 [[s-resource-promotable]]
 === Promotable clones ===
 
 indexterm:[Promotable Clone Resources]
 indexterm:[Resource,Promotable]
 
 If a clone is 'promotable', its instances can perform a special role that
 Pacemaker will manage via the +promote+ and +demote+ actions of the resource
 agent.
 
 Services that support such a special role have various terms for the special
 role and the default role: primary and secondary, master and replica,
 controller and worker, etc. Pacemaker uses the terms 'master' and 'slave',
 footnote:[
 These are historical terms that will eventually be replaced, but the extensive
 use of them and the need for backward compatibility makes it a long process.
 You may see examples using a +master+ tag instead of a +clone+ tag with the
 +promotable+ meta-attribute set to +true+; the +master+ tag is supported, but
 deprecated, and will be removed in a future version. You may also see such
 services referred to as 'multi-state' or 'stateful'; these mean the same thing
 as 'promotable'.
 ]
 but is agnostic to what the service calls them or what they do.
 
 All that Pacemaker cares about is that an instance comes up in the default role
 when started, and the resource agent supports the +promote+ and +demote+ actions
 to manage entering and exiting the special role.
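 
 In configuration terms, a promotable clone is an ordinary clone with the
 +promotable+ meta-attribute (see Clone Options below) set to +true+. A minimal
 sketch, with placeholder agent names:
 
 .A minimal promotable clone
 ======
 [source,XML]
 -------
 <clone id="database-clone">
    <meta_attributes id="database-clone-meta">
       <nvpair id="database-clone-promotable" name="promotable" value="true"/>
    </meta_attributes>
    <!-- "myCorp" and "myDB" are placeholders for a real promotable agent -->
    <primitive id="database" class="ocf" provider="myCorp" type="myDB"/>
 </clone>
 -------
 ======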
 
 === Clone Properties ===
 
 .Properties of a Clone Resource
-[width="95%",cols="3m,5<",options="header",align="center"]
+[width="95%",cols="3m,<5",options="header",align="center"]
 |=========================================================
 
 |Field
 |Description
 
 |id
 |A unique name for the clone
  indexterm:[id,Clone Property]
  indexterm:[Clone,Property,id]
 
 |=========================================================
 
 === Clone Options ===
 
 <<s-resource-options,Options>> inherited from primitive resources:
 +priority, target-role, is-managed+
 
 .Clone-specific configuration options
-[width="95%",cols="1m,1,3<",options="header",align="center"]
+[width="95%",cols="1m,1,<3",options="header",align="center"]
 |=========================================================
 
 |Field
 |Default
 |Description
 
 |globally-unique
 |false
 |If +true+, each clone instance performs a distinct function
  indexterm:[globally-unique,Clone Option]
  indexterm:[Clone,Option,globally-unique]
   
 |clone-max
 |number of nodes in cluster
 |The maximum number of clone instances that can be started across the entire
  cluster
  indexterm:[clone-max,Clone Option]
  indexterm:[Clone,Option,clone-max]
 
 |clone-node-max
 |1
 |If +globally-unique+ is +true+, the maximum number of clone instances that can
  be started on a single node
  indexterm:[clone-node-max,Clone Option]
  indexterm:[Clone,Option,clone-node-max]
   
 |clone-min
 |0
 |Require at least this number of clone instances to be runnable before allowing
  resources depending on the clone to be runnable. A value of 0 means require
  all clone instances to be runnable.
  indexterm:[clone-min,Clone Option]
  indexterm:[Clone,Option,clone-min]
 
 |notify
 |false
 |Call the resource agent's +notify+ action for all active instances, before and
  after starting or stopping any clone instance. The resource agent must support
  this action. Allowed values: +false+, +true+
  indexterm:[notify,Clone Option]
  indexterm:[Clone,Option,notify]
 
 |ordered
 |false
 |If +true+, clone instances must be started sequentially instead of in parallel.
  Allowed values: +false+, +true+
  indexterm:[ordered,Clone Option]
  indexterm:[Clone,Option,ordered]
 
 |interleave
 |false
 |When this clone is ordered relative to another clone, if this option is
  +false+ (the default), the ordering is relative to 'all' instances of the
  other clone, whereas if this option is +true+, the ordering is relative only
  to instances on the same node.
  Allowed values: +false+, +true+
  indexterm:[interleave,Clone Option]
  indexterm:[Clone,Option,interleave]
 
 |promotable
 |false
 |If +true+, clone instances can perform a special role that Pacemaker will
  manage via the resource agent's +promote+ and +demote+ actions. The resource
  agent must support these actions.
  Allowed values: +false+, +true+
  indexterm:[promotable,Clone Option]
  indexterm:[Clone,Option,promotable]
 
 |promoted-max
 |1
 |If +promotable+ is +true+, the number of instances that can be promoted at one
  time across the entire cluster
  indexterm:[promoted-max,Clone Option]
  indexterm:[Clone,Option,promoted-max]
 
 |promoted-node-max
 |1
 |If +promotable+ is +true+ and +globally-unique+ is +false+, the number of
  clone instances that can be promoted at one time on a single node
  indexterm:[promoted-node-max,Clone Option]
  indexterm:[Clone,Option,promoted-node-max]
 
 |=========================================================
 
 For backward compatibility, +master-max+ and +master-node-max+ are accepted as
 aliases for +promoted-max+ and +promoted-node-max+, but are deprecated since
 2.0.0, and support for them will be removed in a future version.
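 
 Clone options are set as meta-attributes of the clone. For example, to limit
 the web server clone shown in the next section to two instances and enable
 notifications (a sketch; the +id+ values are illustrative):
 
 .Setting clone options
 ======
 [source,XML]
 -------
 <clone id="apache-clone">
    <meta_attributes id="apache-clone-meta">
       <nvpair id="apache-clone-max" name="clone-max" value="2"/>
       <nvpair id="apache-clone-notify" name="notify" value="true"/>
    </meta_attributes>
    <primitive id="apache" class="lsb" type="apache"/>
 </clone>
 -------
 ======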
 
 === Clone Contents ===
 
 Clones must contain exactly one primitive or group resource.
 
 .A clone that runs a web server on all nodes
 ====
 [source,XML]
 ----
 <clone id="apache-clone">
     <primitive id="apache" class="lsb" type="apache">
         <operations>
            <op id="apache-monitor" name="monitor" interval="30"/>
         </operations>
     </primitive>
 </clone> 
 ----
 ====
 
 [WARNING]
 You should never reference the name of a clone's child (the primitive or group
 resource being cloned). If you think you need to do this, you probably need to
 re-evaluate your design.
 
 === Clone Instance Attributes ===
 
 Clones have no instance attributes; however, any that are set here will be
 inherited by the clone's child.
 
 === Clone Constraints ===
 
 In most cases, a clone will have a single instance on each active cluster
 node.  If this is not the case, you can indicate which nodes the
 cluster should preferentially assign copies to with resource location
 constraints.  These constraints are written no differently from those
 for primitive resources except that the clone's +id+ is used.
 
 .Some constraints involving clones
 ======
 [source,XML]
 -------
 <constraints>
     <rsc_location id="clone-prefers-node1" rsc="apache-clone" node="node1" score="500"/>
     <rsc_colocation id="stats-with-clone" rsc="apache-stats" with="apache-clone"/>
     <rsc_order id="start-clone-then-stats" first="apache-clone" then="apache-stats"/>
 </constraints> 
 -------
 ======
 
 Ordering constraints behave slightly differently for clones.  In the
 example above, +apache-stats+ will wait until all copies of +apache-clone+
 that need to be started have done so before being started itself.
 Only if _no_ copies can be started will +apache-stats+ be prevented
 from being active.  Additionally, the clone will wait for
 +apache-stats+ to be stopped before stopping itself.
 
 Colocation of a primitive or group resource with a clone means that
 the resource can run on any node with an active instance of the clone.
 The cluster will choose an instance based on where the clone is running and
 the resource's own location preferences.
 
 Colocation between clones is also possible.  If one clone +A+ is colocated
 with another clone +B+, the set of allowed locations for +A+ is limited to
 nodes on which +B+ is (or will be) active.  Placement is then performed
 normally.
 
 ==== Promotable Clone Constraints ====
 
 For promotable clone resources, the +first-action+ and/or +then-action+ fields
 for ordering constraints may be set to +promote+ or +demote+ to constrain the
 master role, and colocation constraints may contain +rsc-role+ and/or
 +with-rsc-role+ fields.
           
 .Additional colocation constraint options for promotable clone resources
-[width="95%",cols="1m,1,3<",options="header",align="center"]
+[width="95%",cols="1m,1,<3",options="header",align="center"]
 |=========================================================
 
 |Field
 |Default
 |Description
 
 |rsc-role
 |Started
 |An additional attribute of colocation constraints that specifies the
  role that +rsc+ must be in.  Allowed values: +Started+, +Master+,
  +Slave+.
  indexterm:[rsc-role,Colocation Constraints]
  indexterm:[Constraints,Colocation,rsc-role]
 
 |with-rsc-role
 |Started
 |An additional attribute of colocation constraints that specifies the
  role that +with-rsc+ must be in.  Allowed values: +Started+,
  +Master+, +Slave+.
  indexterm:[with-rsc-role,Colocation Constraints]
  indexterm:[Constraints,Colocation,with-rsc-role]
 
 |=========================================================
 
 .Constraints involving promotable clone resources       
 ======
 [source,XML]
 -------
 <constraints>
    <rsc_location id="db-prefers-node1" rsc="database" node="node1" score="500"/>
    <rsc_colocation id="backup-with-db-slave" rsc="backup"
      with-rsc="database" with-rsc-role="Slave"/>
    <rsc_colocation id="myapp-with-db-master" rsc="myApp"
      with-rsc="database" with-rsc-role="Master"/>
    <rsc_order id="start-db-before-backup" first="database" then="backup"/>
    <rsc_order id="promote-db-then-app" first="database" first-action="promote"
      then="myApp" then-action="start"/>
 </constraints> 
 -------
 ======
 
 In the example above, +myApp+ will wait until one of the database
 copies has been started and promoted to master before being started
 itself on the same node.  Only if no copies can be promoted will +myApp+ be
 prevented from being active.  Additionally, the cluster will wait for
 +myApp+ to be stopped before demoting the database.
 
 Colocation of a primitive or group resource with a promotable clone
 resource means that it can run on any node with an active instance of
 the promotable clone resource that has the specified role (+master+ or
 +slave+).  In the example above, the cluster will choose a location based on
 where database is running as a +master+, and if there are multiple
 +master+ instances it will also factor in +myApp+'s own location
 preferences when deciding which location to choose.
 
 Colocation with regular clones and other promotable clone resources is also
 possible.  In such cases, the set of allowed locations for the +rsc+
 clone is (after role filtering) limited to nodes on which the
 +with-rsc+ promotable clone resource is (or will be) in the specified role.
 Placement is then performed as normal.
 
 ==== Using Promotable Clone Resources in Colocation Sets ====
 
 .Additional colocation set options relevant to promotable clone resources
-[width="95%",cols="1m,1,6<",options="header",align="center"]
+[width="95%",cols="1m,1,<6",options="header",align="center"]
 |=========================================================
 
 |Field
 |Default
 |Description
 
 |role
 |Started
 |The role that 'all members' of the set must be in.  Allowed values: +Started+, +Master+,
  +Slave+.
  indexterm:[role,Colocation Constraints]
  indexterm:[Constraints,Colocation,role]
 
 |=========================================================
 
 In the following example, +B+'s master must be located on the same node as +A+'s master.
 Additionally, resources +C+ and +D+ must be located on the same node as +A+'s
 and +B+'s masters.
 
 .Colocate C and D with A's and B's master instances
 ======
 [source,XML]
 -------
 <constraints>
     <rsc_colocation id="coloc-1" score="INFINITY" >
       <resource_set id="colocated-set-example-1" sequential="true" role="Master">
         <resource_ref id="A"/>
         <resource_ref id="B"/>
       </resource_set>
       <resource_set id="colocated-set-example-2" sequential="true">
         <resource_ref id="C"/>
         <resource_ref id="D"/>
       </resource_set>
     </rsc_colocation>
 </constraints>
 -------
 ======
 
 ==== Using Promotable Clone Resources in Ordered Sets ====
 
 .Additional ordered set options relevant to promotable clone resources
-[width="95%",cols="1m,1,3<",options="header",align="center"]
+[width="95%",cols="1m,1,<3",options="header",align="center"]
 |=========================================================
 
 |Field
 |Default
 |Description
 
 |action
 |value of +first-action+
 |An additional attribute of ordering constraint sets that specifies the
  action that applies to 'all members' of the set.  Allowed
  values: +start+, +stop+, +promote+, +demote+.
  indexterm:[action,Ordering Constraints]
  indexterm:[Constraints,Ordering,action]
 
 |=========================================================
 
 .Start C and D after first promoting A and B
 ======
 [source,XML]
 -------
 <constraints>
     <rsc_order id="order-1" score="INFINITY" >
       <resource_set id="ordered-set-1" sequential="true" action="promote">
         <resource_ref id="A"/>
         <resource_ref id="B"/>
       </resource_set>
       <resource_set id="ordered-set-2" sequential="true" action="start">
         <resource_ref id="C"/>
         <resource_ref id="D"/>
       </resource_set>
     </rsc_order>
 </constraints>
 -------
 ======
 
 In the above example, +B+ cannot be promoted to a master role until +A+ has
 been promoted. Additionally, resources +C+ and +D+ must wait until +A+ and +B+
 have been promoted before they can start.
 
 
 [[s-clone-stickiness]]
 === Clone Stickiness ===
 
 indexterm:[resource-stickiness,Clones]
 
 To achieve a stable allocation pattern, clones are slightly sticky by
 default.  If no value for +resource-stickiness+ is provided, the clone
 will use a value of 1.  Being a small value, it causes minimal
 disturbance to the score calculations of other resources but is enough
 to prevent Pacemaker from needlessly moving copies around the cluster.
 
 [NOTE]
 ====
 For globally unique clones, this may result in multiple instances of the
 clone staying on a single node, even after another eligible node becomes
 active (for example, after being put into standby mode then made active again).
 If you do not want this behavior, specify a +resource-stickiness+ of 0
 for the clone temporarily and let the cluster adjust, then set it back
 to 1 if you want the default behavior to apply again.
 ====
 
 === Clone Resource Agent Requirements ===
 
 Any resource can be used as an anonymous clone, as it requires no
 additional support from the resource agent.  Whether it makes sense to
 do so depends on your resource and its resource agent.
 
 ==== Resource Agent Requirements for Globally Unique Clones ====
 
 Globally unique clones require additional support in the resource agent. In
 particular, it must only respond with +$\{OCF_SUCCESS}+ if the node has that
 exact instance active. All other probes for instances of the clone should
 result in +$\{OCF_NOT_RUNNING}+ (or one of the other OCF error codes if
 they are failed).
 
 Individual instances of a clone are identified by appending a colon and a
 numerical offset, e.g. +apache:2+.
 
 Resource agents can find out how many copies there are by examining
 the +OCF_RESKEY_CRM_meta_clone_max+ environment variable and which
 instance it is by examining +OCF_RESKEY_CRM_meta_clone+.
 
 The resource agent must not make any assumptions (based on
 +OCF_RESKEY_CRM_meta_clone+) about which numerical instances are active.  In
 particular, the list of active copies will not always be an unbroken
 sequence, nor always start at 0.
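 
 Putting these together, the +monitor+ action of a globally unique clone agent
 might look roughly like this sketch (the probe helper is a placeholder; only
 the environment variables and exit codes are as described above):
 
 .Sketch of a globally unique clone's monitor action
 ======
 [source,Bash]
 -------
 # Standard OCF exit codes
 OCF_SUCCESS=0
 OCF_NOT_RUNNING=7
 
 my_instance="${OCF_RESKEY_CRM_meta_clone}"          # e.g. 2 for apache:2
 total_instances="${OCF_RESKEY_CRM_meta_clone_max}"  # how many instances exist
 
 # this_instance_is_active_here is a placeholder for an agent-specific probe
 if this_instance_is_active_here "$my_instance"; then
     exit $OCF_SUCCESS        # only this exact instance counts as running
 else
     exit $OCF_NOT_RUNNING    # any other instance is "not running" locally
 fi
 -------
 ======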
 
 ==== Resource Agent Requirements for Promotable Clones ====
 
 Promotable clone resources require two extra actions, +demote+ and +promote+,
 which are responsible for changing the state of the resource. Like +start+ and
 +stop+, they should return +$\{OCF_SUCCESS}+ if they completed successfully or
 a relevant error code if they did not.
 
 The states can mean whatever you wish, but when the resource is
 started, it must come up in the mode called +slave+.  From there the
 cluster will decide which instances to promote to +master+.
 
 In addition to the clone requirements for monitor actions, agents must
 also _accurately_ report which state they are in.  The cluster relies
 on the agent to report its status (including role) accurately and does
 not indicate to the agent what role it currently believes it to be in.
 
 .Role implications of OCF return codes
-[width="95%",cols="1,1<",options="header",align="center"]
+[width="95%",cols="1,<1",options="header",align="center"]
 |=========================================================
 
 |Monitor Return Code
 |Description
 
 |OCF_NOT_RUNNING
 |Stopped
  indexterm:[Return Code,OCF_NOT_RUNNING]
  
 |OCF_SUCCESS
 |Running (Slave)
  indexterm:[Return Code,OCF_SUCCESS]
  
 |OCF_RUNNING_MASTER
 |Running (Master)
  indexterm:[Return Code,OCF_RUNNING_MASTER]
 
 |OCF_FAILED_MASTER
 |Failed (Master)
  indexterm:[Return Code,OCF_FAILED_MASTER]
  
 |Other
 |Failed (Slave)
 
 |=========================================================
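 
 Combining this with the return codes above, a promotable agent's +monitor+
 action might look roughly like this sketch (the probe helpers are
 placeholders for agent-specific checks):
 
 .Sketch of a promotable clone's monitor action
 ======
 [source,Bash]
 -------
 # Standard OCF exit codes
 OCF_SUCCESS=0          # Running (Slave)
 OCF_NOT_RUNNING=7      # Stopped
 OCF_RUNNING_MASTER=8   # Running (Master)
 
 # service_is_running and service_is_master are placeholders
 if ! service_is_running; then
     exit $OCF_NOT_RUNNING
 elif service_is_master; then
     exit $OCF_RUNNING_MASTER
 else
     exit $OCF_SUCCESS
 fi
 -------
 ======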
 
 ==== Clone Notifications ====
 
 If the clone has the +notify+ meta-attribute set to +true+, and the resource
 agent supports the +notify+ action, Pacemaker will call the action when
 appropriate, passing a number of extra variables which, when combined with
 additional context, can be used to calculate the current state of the cluster
 and what is about to happen to it.
 
 .Environment variables supplied with Clone notify actions
-[width="95%",cols="5,3<",options="header",align="center"]
+[width="95%",cols="5,<3",options="header",align="center"]
 |=========================================================
 
 |Variable
 |Description
 
 |OCF_RESKEY_CRM_meta_notify_type
 |Allowed values: +pre+, +post+
  indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,type]
  indexterm:[type,Notification Environment Variable]
 
 |OCF_RESKEY_CRM_meta_notify_operation
 |Allowed values: +start+, +stop+
  indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,operation]
  indexterm:[operation,Notification Environment Variable]
 
 |OCF_RESKEY_CRM_meta_notify_start_resource
 |Resources to be started
  indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,start_resource]
  indexterm:[start_resource,Notification Environment Variable]
 
 |OCF_RESKEY_CRM_meta_notify_stop_resource
 |Resources to be stopped
  indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,stop_resource]
  indexterm:[stop_resource,Notification Environment Variable]
 
 |OCF_RESKEY_CRM_meta_notify_active_resource
 |Resources that are running
  indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,active_resource]
  indexterm:[active_resource,Notification Environment Variable]
 
 |OCF_RESKEY_CRM_meta_notify_inactive_resource
 |Resources that are not running
  indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,inactive_resource]
  indexterm:[inactive_resource,Notification Environment Variable]
 
 |OCF_RESKEY_CRM_meta_notify_start_uname
 |Nodes on which resources will be started
  indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,start_uname]
  indexterm:[start_uname,Notification Environment Variable]
 
 |OCF_RESKEY_CRM_meta_notify_stop_uname
 |Nodes on which resources will be stopped
  indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,stop_uname]
  indexterm:[stop_uname,Notification Environment Variable]
 
 |OCF_RESKEY_CRM_meta_notify_active_uname
 |Nodes on which resources are running
  indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,active_uname]
  indexterm:[active_uname,Notification Environment Variable]
 
 |=========================================================
 
 The variables come in pairs, such as
 +OCF_RESKEY_CRM_meta_notify_start_resource+ and
 +OCF_RESKEY_CRM_meta_notify_start_uname+, and each should be treated as an
 array of whitespace-separated elements.
 
 +OCF_RESKEY_CRM_meta_notify_inactive_resource+ is an exception as the
 matching +uname+ variable does not exist since inactive resources
 are not running on any node.
 
 Thus in order to indicate that +clone:0+ will be started on +sles-1+,
 +clone:2+ will be started on +sles-3+, and +clone:3+ will be started
 on +sles-2+, the cluster would set
 
 .Notification variables
 ======
 [source,Bash]
 -------
 OCF_RESKEY_CRM_meta_notify_start_resource="clone:0 clone:2 clone:3"
 OCF_RESKEY_CRM_meta_notify_start_uname="sles-1 sles-3 sles-2"
 -------
 ======
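 
 A +notify+ action could then pair the two arrays up, for example (a minimal
 bash sketch):
 
 .Pairing up notification variables
 ======
 [source,Bash]
 -------
 # Split the whitespace-separated values into matching arrays
 read -r -a start_rscs  <<< "$OCF_RESKEY_CRM_meta_notify_start_resource"
 read -r -a start_nodes <<< "$OCF_RESKEY_CRM_meta_notify_start_uname"
 
 for i in "${!start_rscs[@]}"; do
     echo "${start_rscs[$i]} will be started on ${start_nodes[$i]}"
 done
 -------
 ======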
 
 ==== Interpretation of Notification Variables ====
 
 .Pre-notification (stop):
 
 * Active resources: +$OCF_RESKEY_CRM_meta_notify_active_resource+
 * Inactive resources: +$OCF_RESKEY_CRM_meta_notify_inactive_resource+
 * Resources to be started: +$OCF_RESKEY_CRM_meta_notify_start_resource+
 * Resources to be stopped: +$OCF_RESKEY_CRM_meta_notify_stop_resource+
 
 
 .Post-notification (stop) / Pre-notification (start):
 
 * Active resources
 ** +$OCF_RESKEY_CRM_meta_notify_active_resource+
 ** minus +$OCF_RESKEY_CRM_meta_notify_stop_resource+
 * Inactive resources
 ** +$OCF_RESKEY_CRM_meta_notify_inactive_resource+
 ** plus +$OCF_RESKEY_CRM_meta_notify_stop_resource+ 
 * Resources that were started: +$OCF_RESKEY_CRM_meta_notify_start_resource+
 * Resources that were stopped: +$OCF_RESKEY_CRM_meta_notify_stop_resource+
 
 
 .Post-notification (start):
 
 * Active resources:
 ** +$OCF_RESKEY_CRM_meta_notify_active_resource+
 ** minus +$OCF_RESKEY_CRM_meta_notify_stop_resource+
 ** plus +$OCF_RESKEY_CRM_meta_notify_start_resource+
 * Inactive resources:
 ** +$OCF_RESKEY_CRM_meta_notify_inactive_resource+
 ** plus +$OCF_RESKEY_CRM_meta_notify_stop_resource+
 ** minus +$OCF_RESKEY_CRM_meta_notify_start_resource+
 * Resources that were started: +$OCF_RESKEY_CRM_meta_notify_start_resource+
 * Resources that were stopped: +$OCF_RESKEY_CRM_meta_notify_stop_resource+
 
 ==== Extra Notifications for Promotable Clones ====
 
 .Extra environment variables supplied for promotable clones
-[width="95%",cols="5,3<",options="header",align="center"]
+[width="95%",cols="5,<3",options="header",align="center"]
 |=========================================================
 
 |Variable
 |Description
 
 |_OCF_RESKEY_CRM_meta_notify_master_resource_
 |Resources that are running in +Master+ mode
  indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,master_resource]
  indexterm:[master_resource,Notification Environment Variable]
 
 |_OCF_RESKEY_CRM_meta_notify_slave_resource_
 |Resources that are running in +Slave+ mode
  indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,slave_resource]
  indexterm:[slave_resource,Notification Environment Variable]
    
 |_OCF_RESKEY_CRM_meta_notify_promote_resource_
 |Resources to be promoted
  indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,promote_resource]
  indexterm:[promote_resource,Notification Environment Variable]
    
 |_OCF_RESKEY_CRM_meta_notify_demote_resource_
 |Resources to be demoted
  indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,demote_resource]
  indexterm:[demote_resource,Notification Environment Variable]
 
 |_OCF_RESKEY_CRM_meta_notify_promote_uname_
 |Nodes on which resources will be promoted
  indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,promote_uname]
  indexterm:[promote_uname,Notification Environment Variable]
 
 |_OCF_RESKEY_CRM_meta_notify_demote_uname_
 |Nodes on which resources will be demoted
  indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,demote_uname]
  indexterm:[demote_uname,Notification Environment Variable]
 
 |_OCF_RESKEY_CRM_meta_notify_master_uname_
 |Nodes on which resources are running in +Master+ mode
  indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,master_uname]
  indexterm:[master_uname,Notification Environment Variable]
 
 |_OCF_RESKEY_CRM_meta_notify_slave_uname_
 |Nodes on which resources are running in +Slave+ mode
  indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,slave_uname]
  indexterm:[slave_uname,Notification Environment Variable]
 
 |=========================================================
 
 ==== Interpretation of Promotable Notification Variables ====
 
 .Pre-notification (demote):
 
 * +Active+ resources: +$OCF_RESKEY_CRM_meta_notify_active_resource+
 * +Master+ resources: +$OCF_RESKEY_CRM_meta_notify_master_resource+
 * +Slave+ resources: +$OCF_RESKEY_CRM_meta_notify_slave_resource+
 * Inactive resources: +$OCF_RESKEY_CRM_meta_notify_inactive_resource+
 * Resources to be started: +$OCF_RESKEY_CRM_meta_notify_start_resource+
 * Resources to be promoted: +$OCF_RESKEY_CRM_meta_notify_promote_resource+
 * Resources to be demoted: +$OCF_RESKEY_CRM_meta_notify_demote_resource+
 * Resources to be stopped: +$OCF_RESKEY_CRM_meta_notify_stop_resource+
 
 
 .Post-notification (demote) / Pre-notification (stop):
 
 * +Active+ resources: +$OCF_RESKEY_CRM_meta_notify_active_resource+
 * +Master+ resources:
 ** +$OCF_RESKEY_CRM_meta_notify_master_resource+
 ** minus +$OCF_RESKEY_CRM_meta_notify_demote_resource+ 
 * +Slave+ resources: +$OCF_RESKEY_CRM_meta_notify_slave_resource+
 * Inactive resources: +$OCF_RESKEY_CRM_meta_notify_inactive_resource+
 * Resources to be started: +$OCF_RESKEY_CRM_meta_notify_start_resource+
 * Resources to be promoted: +$OCF_RESKEY_CRM_meta_notify_promote_resource+
 * Resources to be demoted: +$OCF_RESKEY_CRM_meta_notify_demote_resource+
 * Resources to be stopped: +$OCF_RESKEY_CRM_meta_notify_stop_resource+
 * Resources that were demoted: +$OCF_RESKEY_CRM_meta_notify_demote_resource+
 
 
 .Post-notification (stop) / Pre-notification (start)
 
 * +Active+ resources:
 ** +$OCF_RESKEY_CRM_meta_notify_active_resource+
 ** minus +$OCF_RESKEY_CRM_meta_notify_stop_resource+ 
 * +Master+ resources:
 ** +$OCF_RESKEY_CRM_meta_notify_master_resource+
 ** minus +$OCF_RESKEY_CRM_meta_notify_demote_resource+ 
 * +Slave+ resources:
 ** +$OCF_RESKEY_CRM_meta_notify_slave_resource+
 ** minus +$OCF_RESKEY_CRM_meta_notify_stop_resource+ 
 * Inactive resources:
 ** +$OCF_RESKEY_CRM_meta_notify_inactive_resource+
 ** plus +$OCF_RESKEY_CRM_meta_notify_stop_resource+ 
 * Resources to be started: +$OCF_RESKEY_CRM_meta_notify_start_resource+
 * Resources to be promoted: +$OCF_RESKEY_CRM_meta_notify_promote_resource+
 * Resources to be demoted: +$OCF_RESKEY_CRM_meta_notify_demote_resource+
 * Resources to be stopped: +$OCF_RESKEY_CRM_meta_notify_stop_resource+
 * Resources that were demoted: +$OCF_RESKEY_CRM_meta_notify_demote_resource+
 * Resources that were stopped: +$OCF_RESKEY_CRM_meta_notify_stop_resource+
 
 
 .Post-notification (start) / Pre-notification (promote)
 
 * +Active+ resources:
 ** +$OCF_RESKEY_CRM_meta_notify_active_resource+
 ** minus +$OCF_RESKEY_CRM_meta_notify_stop_resource+
 ** plus +$OCF_RESKEY_CRM_meta_notify_start_resource+ 
 * +Master+ resources:
 ** +$OCF_RESKEY_CRM_meta_notify_master_resource+
 ** minus +$OCF_RESKEY_CRM_meta_notify_demote_resource+ 
 * +Slave+ resources:
 ** +$OCF_RESKEY_CRM_meta_notify_slave_resource+
 ** minus +$OCF_RESKEY_CRM_meta_notify_stop_resource+
 ** plus +$OCF_RESKEY_CRM_meta_notify_start_resource+ 
 * Inactive resources:
 ** +$OCF_RESKEY_CRM_meta_notify_inactive_resource+
 ** plus +$OCF_RESKEY_CRM_meta_notify_stop_resource+
 ** minus +$OCF_RESKEY_CRM_meta_notify_start_resource+           
 * Resources to be started: +$OCF_RESKEY_CRM_meta_notify_start_resource+
 * Resources to be promoted: +$OCF_RESKEY_CRM_meta_notify_promote_resource+
 * Resources to be demoted: +$OCF_RESKEY_CRM_meta_notify_demote_resource+
 * Resources to be stopped: +$OCF_RESKEY_CRM_meta_notify_stop_resource+
 * Resources that were started: +$OCF_RESKEY_CRM_meta_notify_start_resource+
 * Resources that were demoted: +$OCF_RESKEY_CRM_meta_notify_demote_resource+
 * Resources that were stopped: +$OCF_RESKEY_CRM_meta_notify_stop_resource+
 
 .Post-notification (promote)
 
 * +Active+ resources:
 ** +$OCF_RESKEY_CRM_meta_notify_active_resource+
 ** minus +$OCF_RESKEY_CRM_meta_notify_stop_resource+
 ** plus +$OCF_RESKEY_CRM_meta_notify_start_resource+ 
 * +Master+ resources:
 ** +$OCF_RESKEY_CRM_meta_notify_master_resource+
 ** minus +$OCF_RESKEY_CRM_meta_notify_demote_resource+
 ** plus +$OCF_RESKEY_CRM_meta_notify_promote_resource+
 * +Slave+ resources:
 ** +$OCF_RESKEY_CRM_meta_notify_slave_resource+
 ** minus +$OCF_RESKEY_CRM_meta_notify_stop_resource+
 ** plus +$OCF_RESKEY_CRM_meta_notify_start_resource+
 ** minus +$OCF_RESKEY_CRM_meta_notify_promote_resource+ 
 * Inactive resources:
 ** +$OCF_RESKEY_CRM_meta_notify_inactive_resource+
 ** plus +$OCF_RESKEY_CRM_meta_notify_stop_resource+
 ** minus +$OCF_RESKEY_CRM_meta_notify_start_resource+ 
 * Resources to be started: +$OCF_RESKEY_CRM_meta_notify_start_resource+
 * Resources to be promoted: +$OCF_RESKEY_CRM_meta_notify_promote_resource+
 * Resources to be demoted: +$OCF_RESKEY_CRM_meta_notify_demote_resource+
 * Resources to be stopped: +$OCF_RESKEY_CRM_meta_notify_stop_resource+
 * Resources that were started: +$OCF_RESKEY_CRM_meta_notify_start_resource+
 * Resources that were promoted: +$OCF_RESKEY_CRM_meta_notify_promote_resource+
 * Resources that were demoted: +$OCF_RESKEY_CRM_meta_notify_demote_resource+
 * Resources that were stopped: +$OCF_RESKEY_CRM_meta_notify_stop_resource+
 
 === Monitoring Promotable Clone Resources ===
 
 The usual monitor actions are insufficient to monitor a promotable clone
 resource, because Pacemaker needs to verify not only that the resource is
 active, but also that its actual role matches its intended one.
 
 Define two monitoring actions: the usual one will cover the slave role,
 and an additional one with +role="Master"+ will cover the master role.
 
 .Monitoring both states of a promotable clone resource
 ======
 [source,XML]
 -------
 <clone id="myMasterRsc">
    <meta_attributes id="myMasterRsc-meta">
        <nvpair name="promotable" value="true"/>
    </meta_attributes>
    <primitive id="myRsc" class="ocf" type="myApp" provider="myCorp">
     <operations>
      <op id="public-ip-slave-check" name="monitor" interval="60"/>
      <op id="public-ip-master-check" name="monitor" interval="61" role="Master"/>
     </operations>
    </primitive>
 </clone> 
 -------
 ======
 
 [IMPORTANT]
 ===========
 It is crucial that _every_ monitor operation has a different interval!
 Pacemaker currently differentiates between operations
 only by resource and interval; so if (for example) a promotable clone resource
 had the same monitor interval for both roles, Pacemaker would ignore the
 role when checking the status -- which would cause unexpected return
 codes, and therefore unnecessary complications.
 ===========
 
 [[s-promotion-scores]]
 === Determining Which Instance is Promoted ===
 
 Pacemaker can choose a promotable clone instance to be promoted in one of two
 ways:
 
 * Promotion scores: These are node attributes set via the `crm_master` utility,
   which generally would be called by the resource agent's start action if it
   supports promotable clones. This tool automatically detects both the resource
   and host, and should be used to set a preference for being promoted. Based on
   this, +promoted-max+, and +promoted-node-max+, the instance(s) with the
   highest preference will be promoted (see the sketch after this list).
 
 * Constraints: Location constraints can indicate which nodes are most preferred
   as masters.
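 
 For the promotion score approach, an agent might call `crm_master` like this
 (a minimal sketch; the score of 100 is illustrative):
 
 .Setting a promotion score from a resource agent
 ======
 [source,Bash]
 -------
 # Typically called from the agent's start (or monitor) action: record a
 # preference, for this resource on the local node, to be promoted
 crm_master -l reboot -v 100
 
 # Typically called from the agent's stop action: clear the preference
 crm_master -l reboot -D
 -------
 ======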
 
 .Explicitly preferring node1 to be promoted to master
 ======
 [source,XML]
 -------
 <rsc_location id="master-location" rsc="myMasterRsc">
     <rule id="master-rule" score="100" role="Master">
       <expression id="master-exp" attribute="#uname" operation="eq" value="node1"/>
     </rule>
 </rsc_location> 
 -------
 ======
 
 [[s-resource-bundle]]
 == Bundles - Isolated Environments ==
 indexterm:[bundle]
 indexterm:[Resource,bundle]
 indexterm:[Docker,bundle]
 indexterm:[rkt,bundle]
 
 Pacemaker supports a special syntax for launching a
 https://en.wikipedia.org/wiki/Operating-system-level_virtualization[container]
 with any infrastructure it requires: the 'bundle'.
 
 Pacemaker bundles support https://www.docker.com/[Docker] and
 https://coreos.com/rkt/[rkt] container technologies.
 footnote:[Docker is a trademark of Docker, Inc. No endorsement by or
 association with Docker, Inc. is implied.]
 
 .A bundle for a containerized web server
 ====
 [source,XML]
 ----
 <bundle id="httpd-bundle">
    <docker image="pcmk:http" replicas="3"/>
    <network ip-range-start="192.168.122.131"
             host-netmask="24"
             host-interface="eth0">
       <port-mapping id="httpd-port" port="80"/>
    </network>
    <storage>
       <storage-mapping id="httpd-syslog"
                        source-dir="/dev/log"
                        target-dir="/dev/log"
                        options="rw"/>
       <storage-mapping id="httpd-root"
                        source-dir="/srv/html"
                        target-dir="/var/www/html"
                        options="rw"/>
       <storage-mapping id="httpd-logs"
                        source-dir-root="/var/log/pacemaker/bundles"
                        target-dir="/etc/httpd/logs"
                        options="rw"/>
    </storage>
    <primitive class="ocf" id="httpd" provider="heartbeat" type="apache"/>
 </bundle>
 ----
 ====
 
 === Bundle Properties ===
 
 .Properties of a Bundle
-[width="95%",cols="3m,5<",options="header",align="center"]
+[width="95%",cols="3m,<5",options="header",align="center"]
 |=========================================================
 
 |Field
 |Description
 
 |id
 |A unique name for the bundle (required)
  indexterm:[id,bundle]
  indexterm:[bundle,Property,id]
 
 |description
 |Arbitrary text (not used by Pacemaker)
  indexterm:[description,bundle]
  indexterm:[bundle,Property,description]
 
 |=========================================================
 
 A bundle must contain exactly one +<docker>+ or +<rkt>+ element.
 
 === Docker Properties ===
 
 Before configuring a Docker bundle in Pacemaker, the user must install Docker
 and supply a fully configured Docker image on every node allowed to run the
 bundle.
 
 Pacemaker will create an implicit +ocf:heartbeat:docker+ resource to manage
 a bundle's Docker container. The user must ensure that the resource agent is
 installed on every node allowed to run the bundle.
 
 .Properties of a Bundle's Docker Element
-[width="95%",cols="3m,4,5<",options="header",align="center"]
+[width="95%",cols="3m,4,<5",options="header",align="center"]
 |=========================================================
 
 |Field
 |Default
 |Description
 
 |image
 |
 |Docker image tag (required)
  indexterm:[image,Docker]
  indexterm:[Docker,Property,image]
 
 |replicas
 |Value of +promoted-max+ if that is positive, else 1
 |A positive integer specifying the number of container instances to launch
  indexterm:[replicas,Docker]
  indexterm:[Docker,Property,replicas]
 
 |replicas-per-host
 |1
 |A positive integer specifying the number of container instances allowed to run
  on a single node
  indexterm:[replicas-per-host,Docker]
  indexterm:[Docker,Property,replicas-per-host]
 
 |promoted-max
 |0
 |A non-negative integer that, if positive, indicates that the containerized
  service should be treated as a promotable service, with this many replicas
  allowed to run the service in the master role
  indexterm:[promoted-max,Docker]
  indexterm:[Docker,Property,promoted-max]
 
 |network
 |
 |If specified, this will be passed to +docker run+ as the
  https://docs.docker.com/engine/reference/run/#network-settings[network setting]
  for the Docker container.
  indexterm:[network,Docker]
  indexterm:[Docker,Property,network]
 
 |run-command
 |`/usr/sbin/pacemaker-remoted` if bundle contains a +primitive+, otherwise none
 |This command will be run inside the container when launching it ("PID 1"). If
  the bundle contains a +primitive+, this command 'must' start pacemaker-remoted
  (but could, for example, be a script that does other stuff, too). If the
  container image has a pre-2.0.0 version of Pacemaker, set this to
  +/usr/sbin/pacemaker_remoted+ (note the underbar instead of dash).
  indexterm:[run-command,Docker]
  indexterm:[Docker,Property,run-command]
 
 |options
 |
 |Extra command-line options to pass to `docker run`
  indexterm:[options,Docker]
  indexterm:[Docker,Property,options]
 
 |=========================================================
 
 For backward compatibility, +masters+ is accepted as an alias for
 +promoted-max+, but is deprecated since 2.0.0, and support for it will be
 removed in a future version.
 
 === rkt Properties ===
 
 Before configuring a rkt bundle in Pacemaker, the user must install rkt
 and supply a fully configured container image on every node allowed to run the
 bundle.
 
 Pacemaker will create an implicit +ocf:heartbeat:rkt+ resource to manage
 a bundle's rkt container. The user must ensure that the resource agent is
 installed on every node allowed to run the bundle.
 
 .Properties of a Bundle's rkt Element
-[width="95%",cols="3m,4,5<",options="header",align="center"]
+[width="95%",cols="3m,4,<5",options="header",align="center"]
 |=========================================================
 
 |Field
 |Default
 |Description
 
 |image
 |
 |Container image tag (required)
  indexterm:[image,rkt]
  indexterm:[rkt,Property,image]
 
 |replicas
 |Value of +promoted-max+ if that is positive, else 1
 |A positive integer specifying the number of container instances to launch
  indexterm:[replicas,rkt]
  indexterm:[rkt,Property,replicas]
 
 |replicas-per-host
 |1
 |A positive integer specifying the number of container instances allowed to run
  on a single node
  indexterm:[replicas-per-host,rkt]
  indexterm:[rkt,Property,replicas-per-host]
 
 |promoted-max
 |0
 |A non-negative integer that, if positive, indicates that the containerized
  service should be treated as a promotable service, with this many replicas
  allowed to run the service in the master role
  indexterm:[promoted-max,rkt]
  indexterm:[rkt,Property,promoted-max]
 
 |network
 |
 |If specified, this will be passed to +rkt run+ as the
  network setting for the rkt container.
  indexterm:[network,rkt]
  indexterm:[rkt,Property,network]
 
 |run-command
 |`/usr/sbin/pacemaker-remoted` if bundle contains a +primitive+, otherwise none
 |This command will be run inside the container when launching it ("PID 1"). If
  the bundle contains a +primitive+, this command 'must' start pacemaker-remoted
  (but could, for example, be a script that does other stuff, too). If the
  container image has a pre-2.0.0 version of Pacemaker, set this to
  +/usr/sbin/pacemaker_remoted+ (note the underbar instead of dash).
  indexterm:[run-command,rkt]
  indexterm:[rkt,Property,run-command]
 
 |options
 |
 |Extra command-line options to pass to `rkt run`
  indexterm:[options,rkt]
  indexterm:[rkt,Property,options]
 
 |=========================================================
 
 For backward compatibility, +masters+ is accepted as an alias for
 +promoted-max+, but is deprecated since 2.0.0, and support for it will be
 removed in a future version.
 
 === Bundle Network Properties ===
 
 A bundle may optionally contain one +<network>+ element.
 indexterm:[bundle,network]
 
 .Properties of a Bundle's Network Element
-[width="95%",cols="2m,1,4<",options="header",align="center"]
+[width="95%",cols="2m,1,<4",options="header",align="center"]
 |=========================================================
 
 |Field
 |Default
 |Description
 
 |add-host
 |TRUE
 |If TRUE, and +ip-range-start+ is used, Pacemaker will automatically ensure
  that +/etc/hosts+ inside the containers has entries for each
  <<s-resource-bundle-note-replica-names,replica name>> and its assigned IP.
  indexterm:[add-host,network]
  indexterm:[network,Property,add-host]
 
 |ip-range-start
 |
 |If specified, Pacemaker will create an implicit +ocf:heartbeat:IPaddr2+
  resource for each container instance, starting with this IP address,
  using up to +replicas+ sequential addresses. These addresses can be used
  from the host's network to reach the service inside the container, though
  they are not visible within the container itself. Only IPv4 addresses are
  currently supported.
  indexterm:[ip-range-start,network]
  indexterm:[network,Property,ip-range-start]
 
 |host-netmask
 |32
 |If +ip-range-start+ is specified, the IP addresses are created with this
  CIDR netmask (as a number of bits).
  indexterm:[host-netmask,network]
  indexterm:[network,Property,host-netmask]
 
 |host-interface
 |
 |If +ip-range-start+ is specified, the IP addresses are created on this
  host interface (by default, it will be determined from the IP address).
  indexterm:[host-interface,network]
  indexterm:[network,Property,host-interface]
 
 |control-port
 |3121
 |If the bundle contains a +primitive+, the cluster will use this integer TCP
  port for communication with Pacemaker Remote inside the container. Changing
  this is useful when the container is unable to listen on the default port,
  for example, when the container uses the host's network rather than
  +ip-range-start+ (in which case +replicas-per-host+ must be 1), or when the
  bundle may run on a Pacemaker Remote node that is already listening on the
  default port. Any PCMK_remote_port environment variable set on the host or in
  the container is ignored for bundle connections.
  indexterm:[control-port,network]
  indexterm:[network,Property,control-port]
 
 |=========================================================
 
 [[s-resource-bundle-note-replica-names]]
 [NOTE]
 ====
 Replicas are named by the bundle id plus a dash and an integer counter starting
 with zero. For example, if a bundle named +httpd-bundle+ has +replicas=2+, its
 containers will be named +httpd-bundle-0+ and +httpd-bundle-1+.
 ====
 
 Additionally, a +<network>+ element may optionally contain one or more
 +<port-mapping>+ elements.
 indexterm:[bundle,network,port-mapping]
 
 .Properties of a Bundle's Port-Mapping Element
-[width="95%",cols="2m,1,4<",options="header",align="center"]
+[width="95%",cols="2m,1,<4",options="header",align="center"]
 |=========================================================
 
 |Field
 |Default
 |Description
 
 |id
 |
 |A unique name for the port mapping (required)
  indexterm:[id,port-mapping]
  indexterm:[port-mapping,Property,id]
 
 |port
 |
 |If this is specified, connections to this TCP port number on the host network
  (on the container's assigned IP address, if +ip-range-start+ is specified)
  will be forwarded to the container network. Exactly one of +port+ or +range+
  must be specified in a +port-mapping+.
  indexterm:[port,port-mapping]
  indexterm:[port-mapping,Property,port]
 
 |internal-port
 |value of +port+
 |If +port+ and this are specified, connections to +port+ on the host's network
  will be forwarded to this port on the container network.
  indexterm:[internal-port,port-mapping]
  indexterm:[port-mapping,Property,internal-port]
 
 |range
 |
 |If this is specified, connections to these TCP port numbers (expressed as
  'first_port'-'last_port') on the host network (on the container's assigned IP
  address, if +ip-range-start+ is specified) will be forwarded to the same ports
  in the container network. Exactly one of +port+ or +range+ must be specified
  in a +port-mapping+.
  indexterm:[range,port-mapping]
  indexterm:[port-mapping,Property,range]
 
 |=========================================================
 
 [NOTE]
 ====
 If the bundle contains a +primitive+, Pacemaker will automatically map the
 +control-port+, so it is not necessary to specify that port in a
 +port-mapping+.
 ====
 
 === Bundle Storage Properties ===
 
 A bundle may optionally contain one +<storage>+ element. A +<storage>+ element
 has no properties of its own, but may contain one or more +<storage-mapping>+
 elements.
 indexterm:[bundle,storage,storage-mapping]
 
 .Properties of a Bundle's Storage-Mapping Element
-[width="95%",cols="2m,1,4<",options="header",align="center"]
+[width="95%",cols="2m,1,<4",options="header",align="center"]
 |=========================================================
 
 |Field
 |Default
 |Description
 
 |id
 |
 |A unique name for the storage mapping (required)
  indexterm:[id,storage-mapping]
  indexterm:[storage-mapping,Property,id]
 
 |source-dir
 |
 |The absolute path on the host's filesystem that will be mapped into the
  container. Exactly one of +source-dir+ and +source-dir-root+ must be specified
  in a +storage-mapping+.
  indexterm:[source-dir,storage-mapping]
  indexterm:[storage-mapping,Property,source-dir]
 
 |source-dir-root
 |
 |The start of a path on the host's filesystem that will be mapped into the
  container, using a different subdirectory on the host for each container
  instance. The subdirectory will be named the same as the
  <<s-resource-bundle-note-replica-names,replica name>>.
  Exactly one of +source-dir+ and +source-dir-root+ must be specified in a
  +storage-mapping+.
  indexterm:[source-dir-root,storage-mapping]
  indexterm:[storage-mapping,Property,source-dir-root]
 
 |target-dir
 |
 |The path name within the container where the host storage will be mapped
  (required)
  indexterm:[target-dir,storage-mapping]
  indexterm:[storage-mapping,Property,target-dir]
 
 |options
 |
 |File system mount options to use when mapping the storage
  indexterm:[options,storage-mapping]
  indexterm:[storage-mapping,Property,options]
 
 |=========================================================
 
 [NOTE]
 ====
 Pacemaker does not define the behavior if the source directory does not already
 exist on the host. However, it is expected that the container technology and/or
 its resource agent will create the source directory in that case.
 ====
 
 [NOTE]
 ====
 If the bundle contains a +primitive+,
 Pacemaker will automatically map the equivalent of
 +source-dir=/etc/pacemaker/authkey target-dir=/etc/pacemaker/authkey+
 and +source-dir-root=/var/log/pacemaker/bundles target-dir=/var/log+ into the
 container, so it is not necessary to specify those paths in a
 +storage-mapping+.
 ====
 
 [IMPORTANT]
 ====
 The +PCMK_authkey_location+ environment variable must not be set to anything
 other than the default of `/etc/pacemaker/authkey` on any node in the cluster.
 ====
 
 === Bundle Primitive ===
 
 A bundle may optionally contain one +<primitive>+ resource
 (see <<s-resource-primitive>>). The primitive may have operations,
 instance attributes and meta-attributes defined, as usual.
 
 If a bundle contains a primitive resource, the container image must include
 the Pacemaker Remote daemon, and at least one of +ip-range-start+ or
 +control-port+ must be configured in the bundle. Pacemaker will create an
 implicit +ocf:pacemaker:remote+ resource for the connection, launch
 Pacemaker Remote within the container, and monitor and manage the primitive
 resource via Pacemaker Remote.
 
 If the bundle has more than one container instance (replica), the primitive
 resource will function as an implicit clone (see <<s-resource-clone>>) --
 a promotable clone if the bundle has +promoted-max+ greater than zero
 (see <<s-resource-promotable>>).
  
 [IMPORTANT]
 ====
 Containers in bundles with a +primitive+ must have an accessible networking
 environment, so that Pacemaker on the cluster nodes can contact
 Pacemaker Remote inside the container. For example, the Docker option
 `--net=none` should not be used with a +primitive+. The default (using a
 distinct network space inside the container) works in combination with
 +ip-range-start+. If the Docker option `--net=host` is used (making the
 container share the host's network space), a unique +control-port+ should be
 specified for each bundle. Any firewall must allow access to the
 +control-port+.
 ====
 
 [[s-bundle-attributes]]
 === Bundle Node Attributes ===
 
 If the bundle has a +primitive+, the primitive's resource agent may want to set
 node attributes such as <<s-promotion-scores,promotion scores>>. However, with
 containers, it is not apparent which node should get the attribute.
 
 If the container uses shared storage that is the same no matter which node the
 container is hosted on, then it is appropriate to use the promotion score on the
 bundle node itself.
 
 On the other hand, if the container uses storage exported from the underlying host,
 then it may be more appropriate to use the promotion score on the underlying host.
 
 Since this depends on the particular situation, the
 +container-attribute-target+ resource meta-attribute allows the user to specify
 which approach to use. If it is set to +host+, then user-defined node attributes
 will be checked on the underlying host. If it is anything else, the local node
 (in this case the bundle node) is used as usual.
 
 This only applies to user-defined attributes; the cluster will always check the
 local node for cluster-defined attributes such as +#uname+.
 
 If +container-attribute-target+ is +host+, the cluster will pass additional
 environment variables to the primitive's resource agent that allow it to set
 node attributes appropriately: +CRM_meta_container_attribute_target+ (identical
 to the meta-attribute value) and +CRM_meta_physical_host+ (the name of the
 underlying host).
 
 [NOTE]
 ====
 When called by a resource agent, the attrd_updater and crm_attribute commands
 will automatically check those environment variables and set attributes
 appropriately.
 ====
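 
 Because any meta-attribute set on a bundle is inherited by the bundle's
 primitive (see the next section), one way to select the +host+ approach is via
 the bundle's meta-attributes. A minimal sketch based on the +httpd-bundle+
 example (network and storage elements omitted for brevity):
 
 .Checking user-defined node attributes on the underlying host
 ======
 [source,XML]
 -------
 <bundle id="httpd-bundle">
    <meta_attributes id="httpd-bundle-meta">
       <nvpair id="httpd-bundle-attr-target"
               name="container-attribute-target" value="host"/>
    </meta_attributes>
    <docker image="pcmk:http" replicas="3"/>
    <primitive class="ocf" id="httpd" provider="heartbeat" type="apache"/>
 </bundle>
 -------
 ======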
 
 === Bundle Meta-Attributes ===
 
 Any meta-attribute set on a bundle will be inherited by the bundle's
 primitive and any resources implicitly created by Pacemaker for the bundle.
 
 This includes options such as +priority+, +target-role+, and +is-managed+. See
 <<s-resource-options>> for more information.
 
 === Limitations of Bundles ===
 
 Restarting Pacemaker while a bundle is unmanaged or the cluster is in
 maintenance mode may cause the bundle to fail.
 
 Bundles may not be explicitly cloned or included in groups. This includes the
 bundle's primitive and any resources implicitly created by Pacemaker for the
 bundle. (If +replicas+ is greater than 1, the bundle will behave like a clone
 implicitly.)
 
 Bundles do not have instance attributes, utilization attributes, or operations,
 though a bundle's primitive may have them.
 
 A bundle with a primitive can run on a Pacemaker Remote node only if the bundle
 uses a distinct +control-port+.
diff --git a/doc/Pacemaker_Explained/en-US/Ch-Alerts.txt b/doc/Pacemaker_Explained/en-US/Ch-Alerts.txt
index afc6d1b553..34daeece5f 100644
--- a/doc/Pacemaker_Explained/en-US/Ch-Alerts.txt
+++ b/doc/Pacemaker_Explained/en-US/Ch-Alerts.txt
@@ -1,423 +1,423 @@
 = Alerts =
 
 //// 
 We prefer [[ch-alerts]], but older versions of asciidoc don't deal well
 with that construct for chapter headings
 ////
 anchor:ch-alerts[Chapter 7, Alerts]
 indexterm:[Resource,Alerts]
 
 'Alerts' may be configured to take some external action when a cluster event
 occurs (node failure, resource starting or stopping, etc.).
 
 
 == Alert Agents ==
 
 As with resource agents, the cluster calls an external program (an
 'alert agent') to handle alerts. The cluster passes information about the event
 to the agent via environment variables. Agents can do anything
 desired with this information (send an e-mail, log to a file,
 update a monitoring system, etc.).
 
 
 .Simple alert configuration
 =====
 [source,XML]
 -----
 <configuration>
     <alerts>
         <alert id="my-alert" path="/path/to/my-script.sh" />
     </alerts>
 </configuration>
 -----
 =====
 
 In the example above, the cluster will call +my-script.sh+ for each event.
 
 Multiple alert agents may be configured; the cluster will call all of them for
 each event.
 
 Alert agents will be called only on cluster nodes. They will be called for
 events involving Pacemaker Remote nodes, but they will never be called _on_
 those nodes.
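 
 An alert agent itself can be as simple as a script that appends each event to
 a file. A minimal sketch, relying on the +CRM_alert_*+ environment variables
 that the cluster passes to alert agents (the fallback log path is
 illustrative):
 
 .A trivial alert agent
 =====
 [source,Bash]
 -----
 #!/bin/sh
 # Treat the configured recipient as a file name, with a fallback if the alert
 # was configured without a recipient
 logfile="${CRM_alert_recipient:-/var/log/pacemaker-alerts.log}"
 echo "${CRM_alert_timestamp} ${CRM_alert_kind} on ${CRM_alert_node}: ${CRM_alert_desc}" >> "$logfile"
 exit 0
 -----
 =====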
 
 == Alert Recipients ==
 
 Usually, alerts are directed towards a recipient. Thus, each alert may be
 additionally configured with one or more recipients.
 The cluster will call the agent separately for each recipient.
 
 .Alert configuration with recipient
 =====
 [source,XML]
 -----
 <configuration>
     <alerts>
         <alert id="my-alert" path="/path/to/my-script.sh">
             <recipient id="my-alert-recipient" value="some-address"/>
         </alert>
     </alerts>
 </configuration>
 -----
 =====
 
 In the above example, the cluster will call +my-script.sh+ for each event,
 passing the recipient +some-address+ as an environment variable.
 
 The recipient may be anything the alert agent can recognize --
 an IP address, an e-mail address, a file name, whatever the particular
 agent supports.
 
 
 == Alert Meta-Attributes ==
 
 As with resource agents, meta-attributes can be configured for alert agents
 to affect how Pacemaker calls them.
 
 .Meta-Attributes of an Alert
 
-[width="95%",cols="m,1,2<a",options="header",align="center"]
+[width="95%",cols="m,1,<2",options="header",align="center"]
 |=========================================================
 
 |Meta-Attribute
 |Default
 |Description
 
 |timestamp-format
 |%H:%M:%S.%06N
 |Format the cluster will use when sending the event's timestamp to the agent.
  This is a string as used with the `date(1)` command.
  indexterm:[Alert,Option,timestamp-format]
  
 |timeout
 |30s
 |If the alert agent does not complete within this amount of time, it will be
  terminated.
  indexterm:[Alert,Option,timeout]
 
 |=========================================================
 
 Meta-attributes can be configured per alert agent and/or per recipient.
 
 .Alert configuration with meta-attributes
 =====
 [source,XML]
 -----
 <configuration>
     <alerts>
         <alert id="my-alert" path="/path/to/my-script.sh">
             <meta_attributes id="my-alert-attributes">
 		<nvpair id="my-alert-attributes-timeout" name="timeout"
                     value="15s"/>
             </meta_attributes>
             <recipient id="my-alert-recipient1" value="someuser@example.com">
                 <meta_attributes id="my-alert-recipient1-attributes">
                     <nvpair id="my-alert-recipient1-timestamp-format"
                         name="timestamp-format" value="%D %H:%M"/>
                 </meta_attributes>
             </recipient>
             <recipient id="my-alert-recipient2" value="otheruser@example.com">
                 <meta_attributes id="my-alert-recipient2-attributes">
                     <nvpair id="my-alert-recipient2-timestamp-format"
                         name="timestamp-format" value="%c"/>
                 </meta_attributes>
             </recipient>
         </alert>
     </alerts>
 </configuration>
 -----
 =====
 
 In the above example, +my-script.sh+ will be called twice for each event,
 with each call using a 15-second timeout. One call will be passed the recipient
 +someuser@example.com+ and a timestamp in the format +%D %H:%M+, while the
 other call will be passed the recipient +otheruser@example.com+ and a timestamp
 in the format +%c+.
 
 
 == Alert Instance Attributes ==
 
 As with resource agents, agent-specific configuration values may be configured
 as instance attributes. These will be passed to the agent as additional
 environment variables. The number, names and allowed values of these
 instance attributes are completely up to the particular agent.
 
 .Alert configuration with instance attributes
 =====
 [source,XML]
 -----
 <configuration>
     <alerts>
         <alert id="my-alert" path="/path/to/my-script.sh">
             <meta_attributes id="my-alert-attributes">
                 <nvpair id="my-alert-attributes-timeout" name="timeout"
                     value="15s"/>
             </meta_attributes>
             <instance_attributes id="my-alert-options">
                 <nvpair id="my-alert-options-debug" name="debug" value="false"/>
             </instance_attributes>
             <recipient id="my-alert-recipient1" value="someuser@example.com"/>
         </alert>
     </alerts>
 </configuration>
 -----
 =====
 
 
 == Alert Filters ==
 
 By default, an alert agent will be called for node events, fencing events, and
 resource events. An agent may choose to ignore certain types of events, but
 there is still the overhead of calling it for those events. To eliminate that
 overhead, you may select which types of events the agent should receive.
 
 .Alert configuration to receive only node events and fencing events
 =====
 [source,XML]
 -----
 <configuration>
     <alerts>
         <alert id="my-alert" path="/path/to/my-script.sh">
             <select>
               <select_nodes />
               <select_fencing />
             </select>
             <recipient id="my-alert-recipient1" value="someuser@example.com"/>
         </alert>
     </alerts>
 </configuration>
 -----
 =====
 
 The possible options within +<select>+ are +<select_nodes>+,
 +<select_fencing>+, +<select_resources>+, and +<select_attributes>+.
 
 With +<select_attributes>+ (the only event type not enabled by default), the
 agent will receive alerts when a node attribute changes. If you wish the agent
 to be called only when certain attributes change, you can configure that as well.
 
 .Alert configuration to be called when certain node attributes change
 =====
 [source,XML]
 -----
 <configuration>
     <alerts>
         <alert id="my-alert" path="/path/to/my-script.sh">
             <select>
               <select_attributes>
                 <attribute id="alert-standby" name="standby" />
                 <attribute id="alert-shutdown" name="shutdown" />
               </select_attributes>
             </select>
             <recipient id="my-alert-recipient1" value="someuser@example.com"/>
         </alert>
     </alerts>
 </configuration>
 -----
 =====
 
 Node attribute alerts are currently considered experimental. Alerts may be
 limited to attributes set via `attrd_updater`, and agents may be called multiple
 times with the same attribute value.
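 
 For example, if +<select_attributes>+ is enabled for an alert, setting a node
 attribute with a command like the following (the attribute name here is
 arbitrary) would generate an +attribute+ alert:
 
 ----
 # attrd_updater --name my_attribute --update 42
 ----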
 
 
 == Using the Sample Alert Agents ==
 
 Pacemaker provides several sample alert agents, installed in
 +/usr/share/pacemaker/alerts+ by default.
 
 While these sample scripts may be copied and used as-is, they are provided
 mainly as templates to be edited to suit your purposes.
 See their source code for the full set of instance attributes they support.
 
 .Sending cluster events as SNMP traps
 =====
 [source,XML]
 -----
 <configuration>
     <alerts>
         <alert id="snmp_alert" path="/path/to/alert_snmp.sh">
             <instance_attributes id="config_for_alert_snmp">
                 <nvpair id="trap_node_states" name="trap_node_states" value="all"/>
             </instance_attributes>
             <meta_attributes id="config_for_timestamp">
                 <nvpair id="ts_fmt" name="timestamp-format"
                     value="%Y-%m-%d,%H:%M:%S.%01N"/>
             </meta_attributes>
             <recipient id="snmp_destination" value="192.168.1.2"/>
         </alert>
     </alerts>
 </configuration>
 -----
 =====
 
 .Sending cluster events as e-mails
 =====
 [source,XML]
 -----
     <configuration>
         <alerts>
             <alert id="smtp_alert" path="/path/to/alert_smtp.sh">
               <instance_attributes id="config_for_alert_smtp">
                   <nvpair id="email_sender" name="email_sender"
                       value="donotreply@example.com"/>
               </instance_attributes>
               <recipient id="smtp_destination" value="admin@example.com"/>
             </alert>
         </alerts>
     </configuration>
 -----
 =====
 
 
 == Writing an Alert Agent ==
 
 .Environment variables passed to alert agents
 
-[width="95%",cols="m,2<a",options="header",align="center"]
+[width="95%",cols="m,<2",options="header",align="center"]
 |=========================================================
 
 |Environment Variable
 |Description
 
 |CRM_alert_kind
 |The type of alert (+node+, +fencing+, +resource+, or +attribute+)
  indexterm:[Environment Variable,CRM_alert_,kind]
 
 |CRM_alert_version
 |The version of Pacemaker sending the alert
  indexterm:[Environment Variable,CRM_alert_,version]
 
 |CRM_alert_recipient
 |The configured recipient
  indexterm:[Environment Variable,CRM_alert_,recipient]
 
 |CRM_alert_node_sequence
 |A sequence number increased whenever an alert is being issued on the
  local node, which can be used to reference the order in which alerts have been
  issued by Pacemaker. An alert for an event that happened later in time
  reliably has a higher sequence number than alerts for earlier events.
  Be aware that this number has no cluster-wide meaning.
  indexterm:[Environment Variable,CRM_alert_node_,sequence]
 
 |CRM_alert_timestamp
 |A timestamp created prior to executing the agent, in the format
  specified by the +timestamp-format+ meta-attribute. This allows the agent
  to have a reliable, high-precision time of when the event occurred,
  regardless of when the agent itself was invoked (which could potentially
  be delayed due to system load, etc.).
  indexterm:[Environment Variable,CRM_alert_,timestamp]
 
 |CRM_alert_timestamp_epoch
 |The same time as +CRM_alert_timestamp+, expressed as the integer number of
  seconds since January 1, 1970. This (along with +CRM_alert_timestamp_usec+)
  can be useful for alert agents that need to format time in a specific way
  rather than let the user configure it.
  indexterm:[Environment Variable,CRM_alert_,timestamp_epoch]
 
 |CRM_alert_timestamp_usec
 |The same time as +CRM_alert_timestamp+, expressed as the integer number of
  microseconds since +CRM_alert_timestamp_epoch+.
  indexterm:[Environment Variable,CRM_alert_,timestamp_usec]
 
 |CRM_alert_node
 |Name of affected node
  indexterm:[Environment Variable,CRM_alert_,node]
 
 |CRM_alert_desc
 |Detail about event. For +node+ alerts, this is the node's current state
  (+member+ or +lost+). For +fencing+ alerts, this is a summary of the
  requested fencing operation, including origin, target, and fencing operation
  error code, if any. For +resource+ alerts, this is a readable string
  equivalent of +CRM_alert_status+.
  indexterm:[Environment Variable,CRM_alert_,desc]
 
 |CRM_alert_nodeid
 |ID of node whose status changed (provided with +node+ alerts only)
  indexterm:[Environment Variable,CRM_alert_,nodeid]
 
 |CRM_alert_task
 |The requested fencing or resource operation
  (provided with +fencing+ and +resource+ alerts only)
  indexterm:[Environment Variable,CRM_alert_,task]
 
 |CRM_alert_rc
 |The numerical return code of the fencing or resource operation
  (provided with +fencing+ and +resource+ alerts only)
  indexterm:[Environment Variable,CRM_alert_,rc]
 
 |CRM_alert_rsc
 |The name of the affected resource (+resource+ alerts only)
  indexterm:[Environment Variable,CRM_alert_,rsc]
 
 |CRM_alert_interval
 |The interval of the resource operation (+resource+ alerts only)
  indexterm:[Environment Variable,CRM_alert_,interval]
 
 |CRM_alert_target_rc
 |The expected numerical return code of the operation (+resource+ alerts only)
  indexterm:[Environment Variable,CRM_alert_,target_rc]
 
 |CRM_alert_status
 |A numerical code used by Pacemaker to represent the operation result
  (+resource+ alerts only)
  indexterm:[Environment Variable,CRM_alert_,status]
 
 |CRM_alert_exec_time
 |The (wall-clock) time, in milliseconds, that it took to execute the action. If
  the action timed out, +CRM_alert_status+ will be 2, +CRM_alert_desc+ will be
  "Timed Out", and this value will be the action timeout. May not be supported
  on all platforms. (+resource+ alerts only)
  indexterm:[Environment Variable,CRM_alert_,exec_time]
 
 |CRM_alert_attribute_name
 |The name of the node attribute that changed (+attribute+ alerts only)
  indexterm:[Environment Variable,CRM_alert_,attribute_name]
 
 |CRM_alert_attribute_value
 |The new value of the node attribute that changed (+attribute+ alerts only)
  indexterm:[Environment Variable,CRM_alert_,attribute_value]
 
 |=========================================================
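 
 As a minimal, hypothetical sketch (not one of the installed samples), an alert
 agent that simply appends each event to a log file might look like the
 following. The default log path is an assumption; the recipient, if
 configured, is interpreted as the file to write to.
 
 .A minimal example alert agent (hypothetical sketch)
 =====
 [source,bash]
 -----
 #!/bin/sh
 # Interpret the recipient as a file name, falling back to a default path.
 # Remember that this runs as the hacluster user, which must be able to
 # write to the chosen file.
 logfile="${CRM_alert_recipient:-/var/log/pacemaker_alerts.log}"
 
 case "${CRM_alert_kind}" in
   node)
     echo "${CRM_alert_timestamp} node ${CRM_alert_node} is now ${CRM_alert_desc}" >> "$logfile"
     ;;
   fencing)
     echo "${CRM_alert_timestamp} fencing: ${CRM_alert_desc} (rc=${CRM_alert_rc})" >> "$logfile"
     ;;
   resource)
     echo "${CRM_alert_timestamp} ${CRM_alert_task} of ${CRM_alert_rsc} on ${CRM_alert_node}: ${CRM_alert_desc}" >> "$logfile"
     ;;
   *)
     : # ignore other event types (for example, attribute alerts)
     ;;
 esac
 exit 0
 -----
 =====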
 
 Special concerns when writing alert agents:
 
 * Alert agents may be called with no recipient (if none is configured),
   so the agent must be able to handle this situation, even if it
   only exits in that case. (Users may modify the configuration in
   stages, and add a recipient later.)
 
 * If more than one recipient is configured for an alert, the alert agent will
   be called once per recipient. If an agent is not able to run concurrently, it
   should be configured with only a single recipient. The agent is free,
   however, to interpret the recipient as a list.
 
 * When a cluster event occurs, all alerts are fired off at the same time as
   separate processes. Depending on how many alerts and recipients are
   configured, and on what is done within the alert agents,
   a significant load burst may occur. The agent could be written to take
   this into consideration, for example by queueing resource-intensive actions
   into some other instance, instead of directly executing them.
 
 * Alert agents are run as the +hacluster+ user, which has a minimal set
   of permissions. If an agent requires additional privileges, it is
   recommended to configure +sudo+ to allow the agent to run the necessary
   commands as another user with the appropriate privileges.
 
 * As always, take care to validate and sanitize user-configured parameters,
   such as CRM_alert_timestamp (whose content is specified by the
   user-configured timestamp-format), CRM_alert_recipient, and all instance
   attributes. Mostly this is needed simply to protect against configuration
   errors, but if some user can modify the CIB without having hacluster-level
   access to the cluster nodes, it is a potential security concern as well,
   since unsanitized input could allow code injection.
 
 [NOTE]
 =====
 The alerts interface is designed to be backward compatible with the external
 scripts interface used by the +ocf:pacemaker:ClusterMon+ resource, which is
 now deprecated. To preserve this compatibility, the environment variables
 passed to alert agents are available prepended with +CRM_notify_+
 as well as +CRM_alert_+. One break in compatibility is that ClusterMon ran
 external scripts as the +root+ user, while alert agents are run as the
 +hacluster+ user.
 =====
diff --git a/doc/Pacemaker_Explained/en-US/Ch-Constraints.txt b/doc/Pacemaker_Explained/en-US/Ch-Constraints.txt
index 694c35d053..49864c9dfd 100644
--- a/doc/Pacemaker_Explained/en-US/Ch-Constraints.txt
+++ b/doc/Pacemaker_Explained/en-US/Ch-Constraints.txt
@@ -1,881 +1,881 @@
 = Resource Constraints =
 
 indexterm:[Resource,Constraints]
 
 == Scores ==
 
 Scores of all kinds are integral to how the cluster works.
 Practically everything from moving a resource to deciding which
 resource to stop in a degraded cluster is achieved by manipulating
 scores in some way.
 
 Scores are calculated per resource and node. Any node with a
 negative score for a resource can't run that resource. The cluster
 places a resource on the node with the highest score for it.
 
 === Infinity Math ===
 
 Pacemaker implements +INFINITY+ (or equivalently, ++INFINITY+) internally as a
 score of 1,000,000. Addition and subtraction with it follow these three basic
 rules:
 
 * Any value + +INFINITY+ = +INFINITY+
 * Any value - +INFINITY+ = +-INFINITY+
 * +INFINITY+ - +INFINITY+ = +-INFINITY+
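 
 For example, if one constraint gives a resource a score of 5,000 on a node and
 another gives it +-INFINITY+ there, the combined score is +-INFINITY+, and the
 resource cannot run on that node.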
 
 [NOTE]
 ======
 What if you want to use a score higher than 1,000,000? Typically this possibility
 arises when someone wants to base the score on some external metric that might
 go above 1,000,000.
 
 The short answer is you can't.
 
 The long answer is that it is sometimes possible to work around this limitation
 creatively. You may be able to set the score to some computed value based on
 the external metric rather than use the metric directly. For nodes, you can
 store the metric as a node attribute, and query the attribute when computing
 the score (possibly as part of a custom resource agent).
 ======
 
 == Deciding Which Nodes a Resource Can Run On ==
 
 indexterm:[Location Constraints]
 indexterm:[Resource,Constraints,Location]
 'Location constraints' tell the cluster which nodes a resource can run on.
 
 There are two alternative strategies. One way is to say that, by default,
 resources can run anywhere, and then the location constraints specify nodes
 that are not allowed (an 'opt-out' cluster). The other way is to start with
 nothing able to run anywhere, and use location constraints to selectively
 enable allowed nodes (an 'opt-in' cluster).
 
 Whether you should choose opt-in or opt-out depends on your
 personal preference and the make-up of your cluster.  If most of your
 resources can run on most of the nodes, then an opt-out arrangement is
 likely to result in a simpler configuration.  On the other hand, if
 most resources can only run on a small subset of nodes, an opt-in
 configuration might be simpler.
 
 === Location Properties ===
 
 .Properties of a rsc_location Constraint
-[width="95%",cols="2m,1,5<a",options="header",align="center"]
+[width="95%",cols="2m,1,<5",options="header",align="center"]
 |=========================================================
 
 |Field
 |Default
 |Description
 
 |id
 |
 |A unique name for the constraint
 indexterm:[id,Location Constraints]
 indexterm:[Constraints,Location,id]
 
 |rsc
 |
 |The name of the resource to which this constraint applies
 indexterm:[rsc,Location Constraints]
 indexterm:[Constraints,Location,rsc]
 
 |rsc-pattern
 |
 |An extended regular expression (as defined in
  http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap09.html#tag_09_04[POSIX])
  matching the names of resources to which this constraint
  applies, if +rsc+ is not specified; if the regular expression contains
  submatches and the constraint is governed by a rule (see <<ch-rules>>), the
  submatches can be referenced as +%0+ through +%9+ in the rule's
  +score-attribute+ or a rule expression's +attribute+
 indexterm:[rsc-pattern,Location Constraints]
 indexterm:[Constraints,Location,rsc-pattern]
 
 |node
 |
 |A node's name
 indexterm:[node,Location Constraints]
 indexterm:[Constraints,Location,node]
 
 |score
 |
 |Positive values indicate a preference for running the affected resource(s) on
  this node -- the higher the value, the stronger the preference. Negative values
  indicate the resource(s) should avoid this node (a value of +-INFINITY+
  changes "should" to "must").
 indexterm:[score,Location Constraints]
 indexterm:[Constraints,Location,score]
 
 |resource-discovery
 |always
-|Whether Pacemaker should perform resource discovery (that is, check whether
+a|Whether Pacemaker should perform resource discovery (that is, check whether
  the resource is already running) for this resource on this node. This should
  normally be left as the default, so that rogue instances of a service can be
  stopped when they are running where they are not supposed to be. However,
  there are two situations where disabling resource discovery is a good idea:
  when a service is not installed on a node, discovery might return an error
  (properly written OCF agents will not, so this is usually only seen with other
  agent types); and when Pacemaker Remote is used to scale a cluster to hundreds
  of nodes, limiting resource discovery to allowed nodes can significantly boost
  performance.
 
 * +always:+ Always perform resource discovery for the specified resource on this node.
 * +never:+ Never perform resource discovery for the specified resource on this node.
   This option should generally be used with a -INFINITY score, although that is not strictly
   required.
 * +exclusive:+ Perform resource discovery for the specified resource only on
   this node (and other nodes similarly marked as +exclusive+). Multiple location
   constraints using +exclusive+ discovery for the same resource across
   different nodes create a subset of nodes to which resource discovery is
   limited. If a resource is marked for +exclusive+ discovery on one or more
   nodes, it may be placed only within that subset of nodes.
 
 indexterm:[Resource Discovery,Location Constraints]
 indexterm:[Constraints,Location,Resource Discovery]
 
 |=========================================================
 
 [WARNING]
 =========
 Setting resource-discovery to +never+ or +exclusive+ removes Pacemaker's
 ability to detect and stop unwanted instances of a service running
 where it's not supposed to be. It is up to the system administrator (you!)
 to make sure that the service can 'never' be active on nodes without
 resource-discovery (such as by leaving the relevant software uninstalled).
 =========
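 
 For example, the following constraint (using a hypothetical node name) both
 bans +Database+ from +remote1+ and disables resource discovery for it there:
 
 .Location constraint that disables resource discovery
 ======
 [source,XML]
 -------
 <rsc_location id="ban-db-remote1" rsc="Database" node="remote1"
     score="-INFINITY" resource-discovery="never"/>
 -------
 ======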
 
 === Asymmetrical "Opt-In" Clusters ===
 indexterm:[Asymmetrical Opt-In Clusters]
 indexterm:[Cluster Type,Asymmetrical Opt-In]
 
 To create an opt-in cluster, start by preventing resources from
 running anywhere by default:
 
 ----
 # crm_attribute --name symmetric-cluster --update false
 ----
 
 Then start enabling nodes.  The following fragment says that the web
 server prefers *sles-1*, the database prefers *sles-2* and both can
 fail over to *sles-3* if their most preferred node fails.
 
 .Opt-in location constraints for two resources
 ======
 [source,XML]
 -------
 <constraints>
     <rsc_location id="loc-1" rsc="Webserver" node="sles-1" score="200"/>
     <rsc_location id="loc-2" rsc="Webserver" node="sles-3" score="0"/>
     <rsc_location id="loc-3" rsc="Database" node="sles-2" score="200"/>
     <rsc_location id="loc-4" rsc="Database" node="sles-3" score="0"/>
 </constraints>
 -------
 ======
 
 === Symmetrical "Opt-Out" Clusters ===
 indexterm:[Symmetrical Opt-Out Clusters]
 indexterm:[Cluster Type,Symmetrical Opt-Out]
 
 To create an opt-out cluster, start by allowing resources to run
 anywhere by default:
 
 ----
 # crm_attribute --name symmetric-cluster --update true
 ----
 
 Then start disabling nodes.  The following fragment is the equivalent
 of the above opt-in configuration.
 
 .Opt-out location constraints for two resources
 ======
 [source,XML]
 -------
 <constraints>
     <rsc_location id="loc-1" rsc="Webserver" node="sles-1" score="200"/>
     <rsc_location id="loc-2-dont-run" rsc="Webserver" node="sles-2" score="-INFINITY"/>
     <rsc_location id="loc-3-dont-run" rsc="Database" node="sles-1" score="-INFINITY"/>
     <rsc_location id="loc-4" rsc="Database" node="sles-2" score="200"/>
 </constraints>
 -------
 ======
 
 [[node-score-equal]]
 === What if Two Nodes Have the Same Score ===
 
 If two nodes have the same score, then the cluster will choose one.
 This choice may seem random and may not be what was intended; however, the
 cluster was not given enough information to know any better.
 
 .Constraints where a resource prefers two nodes equally
 ======
 [source,XML]
 -------
 <constraints>
     <rsc_location id="loc-1" rsc="Webserver" node="sles-1" score="INFINITY"/>
     <rsc_location id="loc-2" rsc="Webserver" node="sles-2" score="INFINITY"/>
     <rsc_location id="loc-3" rsc="Database" node="sles-1" score="500"/>
     <rsc_location id="loc-4" rsc="Database" node="sles-2" score="300"/>
     <rsc_location id="loc-5" rsc="Database" node="sles-2" score="200"/>
 </constraints>
 -------
 ======
 
 In the example above, assuming no other constraints and an inactive
 cluster, +Webserver+ would probably be placed on +sles-1+ and +Database+ on
 +sles-2+.  The cluster would likely have placed +Webserver+ based on the node's
 uname and +Database+ based on the desire to spread the resource load
 evenly across the cluster.  However, other factors can also be involved
 in more complex configurations.
 
 [[s-resource-ordering]]
 == Specifying the Order in which Resources Should Start/Stop ==
 
 indexterm:[Resource,Constraints,Ordering]
 indexterm:[Resource,Start Order]
 indexterm:[Ordering Constraints]
 
 'Ordering constraints' tell the cluster the order in which resources should
 start.
 
 [IMPORTANT]
 ====
 Ordering constraints affect 'only' the ordering of resources;
 they do 'not' require that the resources be placed on the
 same node. If you want resources to be started on the same node
 'and' in a specific order, you need both an ordering constraint 'and'
 a colocation constraint (see <<s-resource-colocation>>), or
 alternatively, a group (see <<group-resources>>).
 ====
 
 === Ordering Properties ===
 
 .Properties of a rsc_order Constraint
-[width="95%",cols="1m,1,4<a",options="header",align="center"]
+[width="95%",cols="1m,1,<4",options="header",align="center"]
 |=========================================================
 
 |Field
 |Default
 |Description
 
 |id
 |
 |A unique name for the constraint
 indexterm:[id,Ordering Constraints]
 indexterm:[Constraints,Ordering,id]
 
 |first
 |
 |Name of the resource that the +then+ resource depends on
 indexterm:[first,Ordering Constraints]
 indexterm:[Constraints,Ordering,first]
 
 |then
 |
 |Name of the dependent resource
 indexterm:[then,Ordering Constraints]
 indexterm:[Constraints,Ordering,then]
 
 |first-action
 |start
 |The action that the +first+ resource must complete before +then-action+
  can be initiated for the +then+ resource.  Allowed values: +start+,
  +stop+, +promote+, +demote+.
  indexterm:[first-action,Ordering Constraints]
  indexterm:[Constraints,Ordering,first-action]
 
 |then-action
 |value of +first-action+
 |The action that the +then+ resource can execute only after the
  +first-action+ on the +first+ resource has completed.  Allowed
  values: +start+, +stop+, +promote+, +demote+.
  indexterm:[then-action,Ordering Constraints]
  indexterm:[Constraints,Ordering,then-action]
 
 |kind
 |
-|How to enforce the constraint. Allowed values:
+a|How to enforce the constraint. Allowed values:
 
 * +Optional:+ Just a suggestion. Only applies if both resources are
   executing the specified actions. Any change in state by the +first+ resource
   will have no effect on the +then+ resource.
 * +Mandatory:+ Always. If +first+ does not perform +first-action+, +then+ will
   not be allowed to perform +then-action+. If +first+ is restarted, +then+
   (if running) will be stopped beforehand and started afterward.
 * +Serialize:+ Ensure that no two stop/start actions occur concurrently
   for the resources. +First+ and +then+ can start in either order,
   but one must complete starting before the other can be started. A typical use
   case is when resource start-up puts a high load on the host.
 
 indexterm:[kind,Ordering Constraints]
 indexterm:[Constraints,Ordering,kind]
 
 |symmetrical
 |TRUE for +Mandatory+ and +Optional+ kinds. FALSE for +Serialize+ kind.
 |If true, the reverse of the constraint applies for the opposite action (for
  example, if B starts after A starts, then B stops before A stops).
  +Serialize+ orders cannot be symmetrical.
 indexterm:[symmetrical,Ordering Constraints]
 indexterm:[Ordering Constraints,symmetrical]
 
 |=========================================================
 
 +Promote+ and +demote+ apply to the master role of
 <<s-resource-promotable,promotable>> resources.
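 
 For example, the following constraint (a sketch with hypothetical resource
 names) allows +App+ to start only after the promotable resource +DB+ has been
 promoted somewhere in the cluster:
 
 .Ordering a start after a promotion
 ======
 [source,XML]
 -------
 <rsc_order id="order-db-app" first="DB" first-action="promote"
     then="App" then-action="start" kind="Mandatory"/>
 -------
 ======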
 
 === Optional and mandatory ordering ===
 
 Here is an example of ordering constraints where +Database+ 'must' start before
 +Webserver+, and +IP+ 'should' start before +Webserver+ if they both need to be
 started:
 
 .Optional and mandatory ordering constraints
 ======
 [source,XML]
 -------
 <constraints>
 <rsc_order id="order-1" first="IP" then="Webserver" kind="Optional"/>
 <rsc_order id="order-2" first="Database" then="Webserver" kind="Mandatory" />
 </constraints>
 -------
 ======
 
 Because the above example lets +symmetrical+ default to TRUE, 
 +Webserver+ must be stopped before +Database+ can be stopped,
 and +Webserver+ should be stopped before +IP+
 if they both need to be stopped.
 
 [[s-resource-colocation]]
 == Placing Resources Relative to other Resources ==
 
 indexterm:[Resource,Constraints,Colocation]
 indexterm:[Resource,Location Relative to other Resources]
 'Colocation constraints' tell the cluster that the location of one resource
 depends on the location of another one.
 
 Colocation has an important side-effect: it affects the order in which
 resources are assigned to a node. Think about it: You can't place A relative to
 B unless you know where B is.
 footnote:[
 While the human brain is sophisticated enough to read the constraint
 in any order and choose the correct one depending on the situation,
 the cluster is not quite so smart. Yet.
 ]
 
 So when you are creating colocation constraints, it is important to
 consider whether you should colocate A with B, or B with A.
 
 Another thing to keep in mind is that, assuming A is colocated with
 B, the cluster will take into account A's preferences when
 deciding which node to choose for B.
 
 For a detailed look at exactly how this occurs, see
 http://clusterlabs.org/doc/Colocation_Explained.pdf[Colocation Explained].
 
 [IMPORTANT]
 ====
 Colocation constraints affect 'only' the placement of resources; they do 'not'
 require that the resources be started in a particular order. If you want
 resources to be started on the same node 'and' in a specific order, you need
 both an ordering constraint (see <<s-resource-ordering>>) 'and' a colocation
 constraint, or alternatively, a group (see <<group-resources>>).
 ====
 
 === Colocation Properties ===
 
 .Properties of a rsc_colocation Constraint
-[width="95%",cols="1m,1,4<",options="header",align="center"]
+[width="95%",cols="1m,1,<4",options="header",align="center"]
 |=========================================================
 
 |Field
 |Default
 |Description
 
 |id
 |
 |A unique name for the constraint (required).
  indexterm:[id,Colocation Constraints]
  indexterm:[Constraints,Colocation,id]
 
 |rsc
 |
 |The name of a resource that should be located relative to +with-rsc+ (required).
  indexterm:[rsc,Colocation Constraints]
  indexterm:[Constraints,Colocation,rsc]
 
 |with-rsc
 |
 |The name of the resource used as the colocation target. The cluster will
  decide where to put this resource first and then decide where to put +rsc+ (required).
  indexterm:[with-rsc,Colocation Constraints]
  indexterm:[Constraints,Colocation,with-rsc]
 
 |node-attribute
 |#uname
 |The node attribute that must be the same on the node running +rsc+ and the
  node running +with-rsc+ for the constraint to be satisfied. (For details,
  see <<s-coloc-attribute>>.)
  indexterm:[node-attribute,Colocation Constraints]
  indexterm:[Constraints,Colocation,node-attribute]
 
 |score
 |
 |Positive values indicate the resources should run on the same
  node. Negative values indicate the resources should run on
  different nodes. Values of \+/- +INFINITY+ change "should" to "must".
  indexterm:[score,Colocation Constraints]
  indexterm:[Constraints,Colocation,score]
 
 |=========================================================
 
 === Mandatory Placement ===
 
 Mandatory placement occurs when the constraint's score is
 ++INFINITY+ or +-INFINITY+.  In such cases, if the constraint can't be
 satisfied, then the +rsc+ resource is not permitted to run.  For
 +score=INFINITY+, this includes cases where the +with-rsc+ resource is
 not active.
 
 If you need resource +A+ to always run on the same machine as
 resource +B+, you would add the following constraint:
 
 .Mandatory colocation constraint for two resources
 ====
 [source,XML]
 <rsc_colocation id="colocate" rsc="A" with-rsc="B" score="INFINITY"/>
 ====
 
 Remember, because +INFINITY+ was used, if +B+ can't run on any
 of the cluster nodes (for whatever reason) then +A+ will not
 be allowed to run. Whether +A+ is running or not has no effect on +B+.
 
 Alternatively, you may want the opposite -- that +A+ 'cannot'
 run on the same machine as +B+.  In this case, use
 +score="-INFINITY"+.
 
 .Mandatory anti-colocation constraint for two resources
 ====
 [source,XML]
 <rsc_colocation id="anti-colocate" rsc="A" with-rsc="B" score="-INFINITY"/>
 ====
 
 Again, by specifying +-INFINITY+, the constraint is binding.  So if the
 only place left to run is where +B+ already is, then
 +A+ may not run anywhere.
 
 As with +INFINITY+, +B+ can run even if +A+ is stopped.
 However, in this case +A+ also can run if +B+ is stopped, because it still
 meets the constraint of +A+ and +B+ not running on the same node.
 
 === Advisory Placement ===
 
 If mandatory placement is about "must" and "must not", then advisory
 placement is the "I'd prefer if" alternative.  For constraints with
 scores greater than +-INFINITY+ and less than +INFINITY+, the cluster
 will try to accommodate your wishes but may ignore them if the
 alternative is to stop some of the cluster resources.
 
 As in life, where if enough people prefer something it effectively
 becomes mandatory, advisory colocation constraints can combine with
 other elements of the configuration to behave as if they were
 mandatory.
 
 .Advisory colocation constraint for two resources
 ====
 [source,XML]
 <rsc_colocation id="colocate-maybe" rsc="A" with-rsc="B" score="500"/>
 ====
 
 [[s-coloc-attribute]]
 === Colocation by Node Attribute ===
 
 The +node-attribute+ property of a colocation constraint allows you to express
 the requirement, "these resources must be on similar nodes".
 
 As an example, imagine that you have two Storage Area Networks (SANs) that are
 not controlled by the cluster, and each node is connected to one or the other.
 You may have two resources +r1+ and +r2+ such that +r2+ needs to use the same
 SAN as +r1+, but doesn't necessarily have to be on the same exact node.
 In such a case, you could define a <<s-node-attributes,node attribute>> named
 +san+, with the value +san1+ or +san2+ on each node as appropriate. Then, you
 could colocate +r2+ with +r1+ using +node-attribute+ set to +san+.
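 
 A minimal sketch of that arrangement (with hypothetical IDs, showing only the
 relevant fragments of the +nodes+ and +constraints+ sections) might look like
 this:
 
 .Colocation by node attribute
 ======
 [source,XML]
 -------
 <nodes>
     <node id="1" uname="node1">
       <instance_attributes id="node1-attrs">
         <nvpair id="node1-san" name="san" value="san1"/>
       </instance_attributes>
     </node>
     <node id="2" uname="node2">
       <instance_attributes id="node2-attrs">
         <nvpair id="node2-san" name="san" value="san2"/>
       </instance_attributes>
     </node>
 </nodes>
 <constraints>
     <rsc_colocation id="colocate-san" rsc="r2" with-rsc="r1"
         score="INFINITY" node-attribute="san"/>
 </constraints>
 -------
 ======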
 
 [[s-resource-sets]]
 == Resource Sets ==
 
 'Resource sets' allow multiple resources to be affected by a single constraint.
 
 .A set of 3 resources
 ====
 [source,XML]
 ----
 <resource_set id="resource-set-example">
    <resource_ref id="A"/>
    <resource_ref id="B"/>
    <resource_ref id="C"/>
 </resource_set>
 ----
 ====
 
 Resource sets are valid inside +rsc_location+,
 +rsc_order+ (see <<s-resource-sets-ordering>>),
 +rsc_colocation+ (see <<s-resource-sets-colocation>>),
 and +rsc_ticket+ (see <<s-ticket-constraints>>) constraints.
 
 A resource set has a number of properties that can be set,
 though not all have an effect in all contexts.
 
 .Properties of a resource_set
-[width="95%",cols="2m,1,5<a",options="header",align="center"]
+[width="95%",cols="2m,1,<5",options="header",align="center"]
 |=========================================================
 
 |Field
 |Default
 |Description
 
 |id
 |
 |A unique name for the set
 indexterm:[id,Resource Sets]
 indexterm:[Constraints,Resource Sets,id]
 
 |sequential
 |true
 |Whether the members of the set must be acted on in order.
  Meaningful within +rsc_order+ and +rsc_colocation+.
 indexterm:[sequential,Resource Sets]
 indexterm:[Constraints,Resource Sets,sequential]
 
 |require-all
 |true
 |Whether all members of the set must be active before continuing.
  With the current implementation, the cluster may continue even if only one
  member of the set is started, but if more than one member of the set is
  starting at the same time, the cluster will still wait until all of those have
  started before continuing (this may change in future versions).
  Meaningful within +rsc_order+.
 indexterm:[require-all,Resource Sets]
 indexterm:[Constraints,Resource Sets,require-all]
 
 |role
 |
 |Limit the effect of the constraint to the specified role.
  Meaningful within +rsc_location+, +rsc_colocation+ and +rsc_ticket+.
 indexterm:[role,Resource Sets]
 indexterm:[Constraints,Resource Sets,role]
 
 |action
 |
 |Limit the effect of the constraint to the specified action.
  Meaningful within +rsc_order+.
 indexterm:[action,Resource Sets]
 indexterm:[Constraints,Resource Sets,action]
 
 |score
 |
 |'Advanced use only.' Use a specific score for this set within the constraint.
 indexterm:[score,Resource Sets]
 indexterm:[Constraints,Resource Sets,score]
 
 |=========================================================
   
 [[s-resource-sets-ordering]]
 == Ordering Sets of Resources ==
 
 A common situation is for an administrator to create a chain of
 ordered resources, such as:
 
 .A chain of ordered resources
 ======
 [source,XML]
 -------
 <constraints>
     <rsc_order id="order-1" first="A" then="B" />
     <rsc_order id="order-2" first="B" then="C" />
     <rsc_order id="order-3" first="C" then="D" />
 </constraints>
 -------
 ======
 
 .Visual representation of the four resources' start order for the above constraints
 image::images/resource-set.png["Ordered set",width="16cm",height="2.5cm",align="center"]
 
 === Ordered Set ===
 
 To simplify this situation, resource sets (see <<s-resource-sets>>) can be used
 within ordering constraints:
 
 .A chain of ordered resources expressed as a set
 ======
 [source,XML]
 -------
 <constraints>
     <rsc_order id="order-1">
       <resource_set id="ordered-set-example" sequential="true">
         <resource_ref id="A"/>
         <resource_ref id="B"/>
         <resource_ref id="C"/>
         <resource_ref id="D"/>
       </resource_set>
     </rsc_order>
 </constraints>
 -------
 ======
 
 While the set-based format is not less verbose, it is significantly
 easier to get right and maintain.
 
 [IMPORTANT]
 =========
 If you use a higher-level tool, pay attention to how it exposes this
 functionality. Depending on the tool, creating a set +A B+ may be equivalent to
 +A then B+, or +B then A+.
 =========
 
 === Ordering Multiple Sets ===
 
 The syntax can be expanded to allow sets of resources to be ordered relative to
 each other, where the members of each individual set may be ordered or
 unordered (controlled by the +sequential+ property). In the example below, +A+
 and +B+ can both start in parallel, as can +C+ and +D+, however +C+ and +D+ can
 only start once _both_ +A+ _and_ +B+ are active.
 
 .Ordered sets of unordered resources
 ======
 [source,XML]
 -------
 <constraints>
     <rsc_order id="order-1">
       <resource_set id="ordered-set-1" sequential="false">
         <resource_ref id="A"/>
         <resource_ref id="B"/>
       </resource_set>
       <resource_set id="ordered-set-2" sequential="false">
         <resource_ref id="C"/>
         <resource_ref id="D"/>
       </resource_set>
     </rsc_order>
   </constraints>
 -------
 ======
 
 .Visual representation of the start order for two ordered sets of unordered resources
 image::images/two-sets.png["Two ordered sets",width="13cm",height="7.5cm",align="center"]
 
 Of course either set -- or both sets -- of resources can also be
 internally ordered (by setting +sequential="true"+) and there is no
 limit to the number of sets that can be specified.
 
 .Advanced use of set ordering - Three ordered sets, two of which are internally unordered
 ======
 [source,XML]
 -------
 <constraints>
     <rsc_order id="order-1">
       <resource_set id="ordered-set-1" sequential="false">
         <resource_ref id="A"/>
         <resource_ref id="B"/>
       </resource_set>
       <resource_set id="ordered-set-2" sequential="true">
         <resource_ref id="C"/>
         <resource_ref id="D"/>
       </resource_set>
       <resource_set id="ordered-set-3" sequential="false">
         <resource_ref id="E"/>
         <resource_ref id="F"/>
       </resource_set>
     </rsc_order>
 </constraints>
 -------
 ======
 
 .Visual representation of the start order for the three sets defined above
 image::images/three-sets.png["Three ordered sets",width="16cm",height="7.5cm",align="center"]
 
 [IMPORTANT]
 ====
 An ordered set with +sequential=false+ makes sense only if there is another
 set in the constraint. Otherwise, the constraint has no effect.
 ====
 
 === Resource Set OR Logic ===
 
 The unordered set logic discussed so far has all been "AND" logic.
 To illustrate this, consider the three-set figure in the previous section.
 Those sets can be expressed as +(A and B) then \(C) then (D) then (E and F)+.
 
 Say for example we want to change the first set, +(A and B)+, to use "OR" logic
 so the sets look like this: +(A or B) then \(C) then (D) then (E and F)+.
 This functionality can be achieved through the use of the +require-all+
 option.  This option defaults to TRUE, which is why the
 "AND" logic is used by default.  Setting +require-all=false+ means only one
 resource in the set needs to be started before continuing on to the next set.
 
 .Resource Set "OR" logic: Three ordered sets, where the first set is internally unordered with "OR" logic
 ======
 [source,XML]
 -------
 <constraints>
     <rsc_order id="order-1">
       <resource_set id="ordered-set-1" sequential="false" require-all="false">
         <resource_ref id="A"/>
         <resource_ref id="B"/>
       </resource_set>
       <resource_set id="ordered-set-2" sequential="true">
         <resource_ref id="C"/>
         <resource_ref id="D"/>
       </resource_set>
       <resource_set id="ordered-set-3" sequential="false">
         <resource_ref id="E"/>
         <resource_ref id="F"/>
       </resource_set>
     </rsc_order>
 </constraints>
 -------
 ======
 
 [IMPORTANT]
 ====
 An ordered set with +require-all=false+ makes sense only in conjunction with
 +sequential=false+. Think of it like this: +sequential=false+ modifies the set
 to be an unordered set using "AND" logic by default, and adding
 +require-all=false+ flips the unordered set's "AND" logic to "OR" logic.
 ====
 
 [[s-resource-sets-colocation]]
 == Colocating Sets of Resources ==
 
 Another common situation is for an administrator to create a set of
 colocated resources.
 
 One way to do this would be to define a resource group (see
 <<group-resources>>), but that cannot always accurately express the desired
 state.
 
 Another way would be to define each relationship as an individual constraint,
 but that causes a constraint explosion as the number of resources and
 combinations grow. An example of this approach:
 
 .Chain of colocated resources
 ======
 [source,XML]
 -------
 <constraints>
     <rsc_colocation id="coloc-1" rsc="D" with-rsc="C" score="INFINITY"/>
     <rsc_colocation id="coloc-2" rsc="C" with-rsc="B" score="INFINITY"/>
     <rsc_colocation id="coloc-3" rsc="B" with-rsc="A" score="INFINITY"/>
 </constraints>
 -------
 ======
 
 To make things easier, resource sets (see <<s-resource-sets>>) can be used
 within colocation constraints. As with the chained version, a
 resource that can't be active prevents any resource that must be
 colocated with it from being active.  For example, if +B+ is not
 able to run, then both +C+ and by inference +D+ must also remain
 stopped. Here is an example +resource_set+:
 
 .Equivalent colocation chain expressed using +resource_set+
 ======
 [source,XML]
 -------
 <constraints>
     <rsc_colocation id="coloc-1" score="INFINITY" >
       <resource_set id="colocated-set-example" sequential="true">
         <resource_ref id="A"/>
         <resource_ref id="B"/>
         <resource_ref id="C"/>
         <resource_ref id="D"/>
       </resource_set>
     </rsc_colocation>
 </constraints>
 -------
 ======
 
 [IMPORTANT]
 =========
 If you use a higher-level tool, pay attention to how it exposes this
 functionality. Depending on the tool, creating a set +A B+ may be equivalent to
 +A with B+, or +B with A+.
 =========
 
 This notation can also be used to tell the cluster that sets of resources must
 be colocated relative to each other, where the individual members of each set
 may or may not depend on each other being active (controlled by the
 +sequential+ property).
 
 In this example, +A+, +B+, and +C+ will each be colocated with +D+.
 +D+ must be active, but any of +A+, +B+, or +C+ may be inactive without
 affecting any other resources.
 
 .Using colocated sets to specify a common peer
 ======
 [source,XML]
 -------
 <constraints>
     <rsc_colocation id="coloc-1" score="INFINITY" >
       <resource_set id="colocated-set-1" sequential="false">
         <resource_ref id="A"/>
         <resource_ref id="B"/>
         <resource_ref id="C"/>
       </resource_set>
       <resource_set id="colocated-set-2" sequential="true">
         <resource_ref id="D"/>
       </resource_set>
     </rsc_colocation>
 </constraints>
 -------
 ======
 
 [IMPORTANT]
 ====
 A colocated set with +sequential=false+ makes sense only if there is another
 set in the constraint. Otherwise, the constraint has no effect.
 ====
 
 There is no inherent limit to the number and size of the sets used.
 The only thing that matters is that in order for any member of one set
 in the constraint to be active, all members of sets listed after it must also
 be active (and naturally on the same node); and if a set has +sequential="true"+,
 then in order for one member of that set to be active, all members listed
 before it must also be active.
 
 If desired, you can restrict the dependency to instances of promotable clone
 resources that are in a specific role, using the set's +role+ property.
 
 .Colocation chain in which the members of the middle set have no interdependencies, and the last listed set (which the cluster places first) is restricted to instances in master status.
 ======
 [source,XML]
 -------
 <constraints>
     <rsc_colocation id="coloc-1" score="INFINITY" >
       <resource_set id="colocated-set-1" sequential="true">
         <resource_ref id="B"/>
         <resource_ref id="A"/>
       </resource_set>
       <resource_set id="colocated-set-2" sequential="false">
         <resource_ref id="C"/>
         <resource_ref id="D"/>
         <resource_ref id="E"/>
       </resource_set>
       <resource_set id="colocated-set-3" sequential="true" role="Master">
         <resource_ref id="G"/>
         <resource_ref id="F"/>
       </resource_set>
     </rsc_colocation>
 </constraints>
 -------
 ======
 
 .Visual representation of the above example (resources to the left are placed first)
 image::images/three-sets-complex.png["Colocation chain",width="16cm",height="9cm",align="center"]
 
 [NOTE]
 ====
 Pay close attention to the order in which resources and sets are listed.
 While the colocation dependency for members of any one set is last-to-first,
 the colocation dependency for multiple sets is first-to-last. In the above
 example, +B+ is colocated with +A+, but +colocated-set-1+ is
 colocated with +colocated-set-2+.
 
 Unlike ordered sets, colocated sets do not use the +require-all+ option.
 ====
diff --git a/doc/Pacemaker_Explained/en-US/Ch-Options.txt b/doc/Pacemaker_Explained/en-US/Ch-Options.txt
index e2431181b0..b9cc00912c 100644
--- a/doc/Pacemaker_Explained/en-US/Ch-Options.txt
+++ b/doc/Pacemaker_Explained/en-US/Ch-Options.txt
@@ -1,409 +1,410 @@
 = Cluster-Wide Configuration =
 
 == Configuration Layout ==
 
 The cluster is defined by the Cluster Information Base (CIB),
 which uses XML notation. The simplest CIB, an empty one, looks like this:
 
 .An empty configuration
 ======
 [source,XML]
 -------
 <cib crm_feature_set="3.0.7" validate-with="pacemaker-1.2" admin_epoch="1" epoch="0" num_updates="0">
   <configuration>
     <crm_config/>
     <nodes/>
     <resources/>
     <constraints/>
   </configuration>
   <status/>
 </cib>
 -------
 ======
 
 The empty configuration above contains the major sections that make up a CIB:
 
 * +cib+: The entire CIB is enclosed with a +cib+ tag. Certain fundamental settings
   are defined as attributes of this tag.
 
   ** +configuration+: This section -- the primary focus of this document --
      contains traditional configuration information such as what resources the
      cluster serves and the relationships among them.
 
     *** +crm_config+: cluster-wide configuration options
     *** +nodes+: the machines that host the cluster
     *** +resources+: the services run by the cluster
     *** +constraints+: indications of how resources should be placed
 
   ** +status+: This section contains the history of each resource on each node.
     Based on this data, the cluster can construct the complete current
     state of the cluster.  The authoritative source for this section
     is the local executor (pacemaker-execd process) on each cluster node, and
     the cluster will occasionally repopulate the entire section.  For this
     reason, it is never written to disk, and administrators are advised
     against modifying it in any way.
 
 In this document, configuration settings will be described as 'properties' or 'options'
 based on how they are defined in the CIB:
 
 * Properties are XML attributes of an XML element.
 * Options are name-value pairs expressed as +nvpair+ child elements of an XML element.
 
 Normally, you will use command-line tools that abstract the XML, so the
 distinction will be unimportant; both properties and options are
 cluster settings you can tweak.
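 
 For example, in the fragment below, +validate-with+ is a property of the +cib+
 element, while +no-quorum-policy+ (described later in this chapter) is an
 option supplied as an +nvpair+; the set and pair IDs are arbitrary:
 
 .A property and an option
 ======
 [source,XML]
 -------
 <cib crm_feature_set="3.0.7" validate-with="pacemaker-1.2" admin_epoch="1" epoch="0" num_updates="0">
   <configuration>
     <crm_config>
       <cluster_property_set id="cib-bootstrap-options">
         <nvpair id="option-no-quorum-policy" name="no-quorum-policy" value="stop"/>
       </cluster_property_set>
     </crm_config>
     <nodes/>
     <resources/>
     <constraints/>
   </configuration>
   <status/>
 </cib>
 -------
 ======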
 
 == CIB Properties ==
 
 Certain settings are defined by CIB properties (that is, attributes of the
 +cib+ tag) rather than with the rest of the cluster configuration in the
 +configuration+ section.
 
 The reason is simply a matter of parsing. These options are used by the
 configuration database which is, by design, mostly ignorant of the content it
 holds.  So the decision was made to place them in an easy-to-find location.
 
 .CIB Properties
-[width="95%",cols="2m,5<",options="header",align="center"]
+[width="95%",cols="2m,<5",options="header",align="center"]
 |=========================================================
 |Field |Description
 
 | admin_epoch |
 indexterm:[Configuration Version,Cluster]
 indexterm:[Cluster,Option,Configuration Version]
 indexterm:[admin_epoch,Cluster Option]
 indexterm:[Cluster,Option,admin_epoch]
 When a node joins the cluster, the cluster performs a check to see
 which node has the best configuration. It asks the node with the highest
 (+admin_epoch+, +epoch+, +num_updates+) tuple to replace the configuration on
 all the nodes -- which makes setting them, and setting them correctly, very
 important. +admin_epoch+ is never modified by the cluster; you can use this
 to make the configurations on any inactive nodes obsolete. _Never set this
 value to zero_. In such cases, the cluster cannot tell the difference between
 your configuration and the "empty" one used when nothing is found on disk.
 
 | epoch |
 indexterm:[epoch,Cluster Option]
 indexterm:[Cluster,Option,epoch]
 The cluster increments this every time the configuration is updated (usually by
 the administrator).
 
 | num_updates |
 indexterm:[num_updates,Cluster Option]
 indexterm:[Cluster,Option,num_updates]
 The cluster increments this every time the configuration or status is updated
 (usually by the cluster) and resets it to 0 when epoch changes.
 
 | validate-with |
 indexterm:[validate-with,Cluster Option]
 indexterm:[Cluster,Option,validate-with]
 Determines the type of XML validation that will be done on the configuration.
 If set to +none+, the cluster will not verify that updates conform to the
 DTD (nor reject ones that don't). This option can be useful when
 operating a mixed-version cluster during an upgrade.
 
 |cib-last-written |
 indexterm:[cib-last-written,Cluster Property]
 indexterm:[Cluster,Property,cib-last-written]
 Indicates when the configuration was last written to disk. Maintained by the
 cluster; for informational purposes only.
 
 |have-quorum |
 indexterm:[have-quorum,Cluster Property]
 indexterm:[Cluster,Property,have-quorum]
 Indicates if the cluster has quorum. If false, this may mean that the
 cluster cannot start resources or fence other nodes (see
 +no-quorum-policy+ below). Maintained by the cluster.
 
 |dc-uuid |
 indexterm:[dc-uuid,Cluster Property]
 indexterm:[Cluster,Property,dc-uuid]
 Indicates which cluster node is the current leader. Used by the
 cluster when placing resources and determining the order of some
 events. Maintained by the cluster.
 
 |=========================================================
 
 [[s-cluster-options]]
 == Cluster Options ==
 
 Cluster options, as you might expect, control how the cluster behaves
 when confronted with certain situations.
 
 They are grouped into sets within the +crm_config+ section, and, in advanced
 configurations, there may be more than one set. (This will be described later
 in the section on <<ch-rules>> where we will show how to have the cluster use
 different sets of options during working hours than during weekends.) For now,
 we will describe the simple case where each option is present at most once.
 
 You can obtain an up-to-date list of cluster options, including
 their default values, by running the `man pacemaker-schedulerd` and
 `man pacemaker-controld` commands.
 
 .Cluster Options
-[width="95%",cols="5m,2,11<a",options="header",align="center"]
+[width="95%",cols="5m,2,<11",options="header",align="center"]
 |=========================================================
 |Option |Default |Description
 
 | dc-version | |
 indexterm:[dc-version,Cluster Property]
 indexterm:[Cluster,Property,dc-version]
 Version of Pacemaker on the cluster's DC.
 Determined automatically by the cluster.
 Often includes the hash which identifies the exact Git changeset it was built
 from.  Used for diagnostic purposes.
 
 | cluster-infrastructure | |
 indexterm:[cluster-infrastructure,Cluster Property]
 indexterm:[Cluster,Property,cluster-infrastructure]
 The messaging stack on which Pacemaker is currently running.
 Determined automatically by the cluster.
 Used for informational and diagnostic purposes.
 
-| no-quorum-policy | stop |
+| no-quorum-policy | stop
+a|
 indexterm:[no-quorum-policy,Cluster Option]
 indexterm:[Cluster,Option,no-quorum-policy]
 What to do when the cluster does not have quorum.  Allowed values:
 
 * +ignore:+ continue all resource management
 * +freeze:+ continue resource management, but don't recover resources from nodes not in the affected partition
 * +stop:+ stop all resources in the affected cluster partition
 * +suicide:+ fence all nodes in the affected cluster partition
 
 | batch-limit | 0 |
 indexterm:[batch-limit,Cluster Option]
 indexterm:[Cluster,Option,batch-limit]
 The maximum number of actions that the cluster may execute in parallel across
 all nodes. The "correct" value will depend on the speed and load of your
 network and cluster nodes. If zero, the cluster will impose a dynamically
 calculated limit only when any node has high load.
 
 | migration-limit | -1 |
 indexterm:[migration-limit,Cluster Option]
 indexterm:[Cluster,Option,migration-limit]
 The number of migration jobs that the TE is allowed to execute in
 parallel on a node. A value of -1 means unlimited.
 
 | symmetric-cluster | TRUE |
 indexterm:[symmetric-cluster,Cluster Option]
 indexterm:[Cluster,Option,symmetric-cluster]
 Can all resources run on any node by default?
 
 | stop-all-resources | FALSE |
 indexterm:[stop-all-resources,Cluster Option]
 indexterm:[Cluster,Option,stop-all-resources]
 Should the cluster stop all resources?
 
 | stop-orphan-resources | TRUE |
 indexterm:[stop-orphan-resources,Cluster Option]
 indexterm:[Cluster,Option,stop-orphan-resources]
  Should deleted resources be stopped? This value takes precedence over
  +is-managed+ (i.e. even unmanaged resources will be stopped if deleted from
  the configuration when this value is TRUE).
 
 | stop-orphan-actions | TRUE |
 indexterm:[stop-orphan-actions,Cluster Option]
 indexterm:[Cluster,Option,stop-orphan-actions]
 Should deleted actions be cancelled?
 
 | start-failure-is-fatal | TRUE |
 indexterm:[start-failure-is-fatal,Cluster Option]
 indexterm:[Cluster,Option,start-failure-is-fatal]
 Should a failure to start a resource on a particular node prevent further start
 attempts on that node? If FALSE, the cluster will decide whether the same
 node is still eligible based on the resource's current failure count
 and +migration-threshold+ (see <<s-failure-handling>>).
 
 | enable-startup-probes | TRUE |
 indexterm:[enable-startup-probes,Cluster Option]
 indexterm:[Cluster,Option,enable-startup-probes]
 Should the cluster check for active resources during startup?
 
 | maintenance-mode | FALSE |
 indexterm:[maintenance-mode,Cluster Option]
 indexterm:[Cluster,Option,maintenance-mode]
 Should the cluster refrain from monitoring, starting and stopping resources?
 
 | stonith-enabled | TRUE |
 indexterm:[stonith-enabled,Cluster Option]
 indexterm:[Cluster,Option,stonith-enabled]
 Should failed nodes and nodes with resources that can't be stopped be
 shot? If you value your data, set up a STONITH device and enable this.
 
 If true, or unset, the cluster will refuse to start resources unless
 one or more STONITH resources have been configured.
 If false, unresponsive nodes are immediately assumed to be running no
 resources, and resource takeover to online nodes starts without any
 further protection (which means _data loss_ if the unresponsive node
 still accesses shared storage, for example).  See also the +requires+
 meta-attribute in <<s-resource-options>>.
 
 | stonith-action | reboot |
 indexterm:[stonith-action,Cluster Option]
 indexterm:[Cluster,Option,stonith-action]
 Action to send to STONITH device. Allowed values are +reboot+ and +off+.
 The value +poweroff+ is also allowed, but is only used for
 legacy devices.
 
 | stonith-timeout | 60s |
 indexterm:[stonith-timeout,Cluster Option]
 indexterm:[Cluster,Option,stonith-timeout]
 How long to wait for STONITH actions (reboot, on, off) to complete
 
 | stonith-max-attempts | 10 |
 indexterm:[stonith-max-attempts,Cluster Option]
 indexterm:[Cluster,Option,stonith-max-attempts]
 How many times fencing can fail for a target before the cluster will no longer
 immediately re-attempt it.
 
 | stonith-watchdog-timeout | 0 |
 indexterm:[stonith-watchdog-timeout,Cluster Option]
 indexterm:[Cluster,Option,stonith-watchdog-timeout]
 If nonzero, rely on hardware watchdog self-fencing. If positive, assume unseen
 nodes self-fence within this much time. If negative, and the
 SBD_WATCHDOG_TIMEOUT environment variable is set, use twice that value.
 
 | concurrent-fencing | FALSE |
 indexterm:[concurrent-fencing,Cluster Option]
 indexterm:[Cluster,Option,concurrent-fencing]
 Is the cluster allowed to initiate multiple fence actions concurrently?
 
 | cluster-delay | 60s |
 indexterm:[cluster-delay,Cluster Option]
 indexterm:[Cluster,Option,cluster-delay]
 Estimated maximum round-trip delay over the network (excluding action
 execution). If the TE requires an action to be executed on another node,
 it will consider the action failed if it does not get a response
 from the other node in this time (after considering the action's
 own timeout). The "correct" value will depend on the speed and load of your
 network and cluster nodes.
 
 | dc-deadtime | 20s |
 indexterm:[dc-deadtime,Cluster Option]
 indexterm:[Cluster,Option,dc-deadtime]
 How long to wait for a response from other nodes during startup.
 
 The "correct" value will depend on the speed/load of your network and the type of switches used.
 
 | cluster-recheck-interval | 15min |
 indexterm:[cluster-recheck-interval,Cluster Option]
 indexterm:[Cluster,Option,cluster-recheck-interval]
 Polling interval for time-based changes to options, resource parameters and constraints.
 
 The cluster is primarily event-driven, but your configuration can have
 elements that take effect based on the time of day. To ensure these changes
 take effect, the cluster can optionally re-check its status at this interval.
 A value of 0 disables polling. Positive values are an interval (in seconds
 unless other SI units are specified, e.g. 5min).
 
 | cluster-ipc-limit | 500 |
 indexterm:[cluster-ipc-limit,Cluster Option]
 indexterm:[Cluster,Option,cluster-ipc-limit]
 The maximum IPC message backlog before one cluster daemon will disconnect
 another. This is of use in large clusters, for which a good value is the number
 of resources in the cluster multiplied by the number of nodes. The default of
 500 is also the minimum. Raise this if you see "Evicting client" messages for
 cluster daemon PIDs in the logs.
 
 | pe-error-series-max | -1 |
 indexterm:[pe-error-series-max,Cluster Option]
 indexterm:[Cluster,Option,pe-error-series-max]
 The number of PE inputs resulting in ERRORs to save. Used when reporting problems.
 A value of -1 means unlimited (report all).
 
 | pe-warn-series-max | -1 |
 indexterm:[pe-warn-series-max,Cluster Option]
 indexterm:[Cluster,Option,pe-warn-series-max]
 The number of PE inputs resulting in WARNINGs to save. Used when reporting problems.
 A value of -1 means unlimited (report all).
 
 | pe-input-series-max | -1 |
 indexterm:[pe-input-series-max,Cluster Option]
 indexterm:[Cluster,Option,pe-input-series-max]
 The number of "normal" PE inputs to save. Used when reporting problems.
 A value of -1 means unlimited (report all).
 
 | placement-strategy | default |
 indexterm:[placement-strategy,Cluster Option]
 indexterm:[Cluster,Option,placement-strategy]
  How the cluster should allocate resources to nodes (see <<s-utilization>>).
  Allowed values are +default+, +utilization+, +balanced+, and +minimal+.
 
 | node-health-strategy | none |
 indexterm:[node-health-strategy,Cluster Option]
 indexterm:[Cluster,Option,node-health-strategy]
  How the cluster should react to node health attributes (see <<s-node-health>>).
  Allowed values are +none+, +migrate-on-red+, +only-green+, +progressive+, and
  +custom+.
 
 | node-health-base | 0 |
 indexterm:[node-health-base,Cluster Option]
 indexterm:[Cluster,Option,node-health-base]
  The base health score assigned to a node. Only used when
  +node-health-strategy+ is +progressive+.
 
 | node-health-green | 0 |
 indexterm:[node-health-green,Cluster Option]
 indexterm:[Cluster,Option,node-health-green]
  The score to use for a node health attribute whose value is +green+.
  Only used when +node-health-strategy+ is +progressive+ or +custom+.
 
 | node-health-yellow | 0 |
 indexterm:[node-health-yellow,Cluster Option]
 indexterm:[Cluster,Option,node-health-yellow]
  The score to use for a node health attribute whose value is +yellow+.
  Only used when +node-health-strategy+ is +progressive+ or +custom+.
 
 | node-health-red | 0 |
 indexterm:[node-health-red,Cluster Option]
 indexterm:[Cluster,Option,node-health-red]
  The score to use for a node health attribute whose value is +red+.
  Only used when +node-health-strategy+ is +progressive+ or +custom+.
 
 | remove-after-stop | FALSE |
 indexterm:[remove-after-stop,Cluster Option]
 indexterm:[Cluster,Option,remove-after-stop]
 _Advanced Use Only:_ Should the cluster remove resources from the LRM after
 they are stopped? Values other than the default are, at best, poorly tested and
 potentially dangerous.
 
 | startup-fencing | TRUE |
 indexterm:[startup-fencing,Cluster Option]
 indexterm:[Cluster,Option,startup-fencing]
 _Advanced Use Only:_ Should the cluster shoot unseen nodes?
 Not using the default is very unsafe!
 
 | election-timeout | 2min |
 indexterm:[election-timeout,Cluster Option]
 indexterm:[Cluster,Option,election-timeout]
 _Advanced Use Only:_ If you need to adjust this value, it probably indicates
 the presence of a bug.
 
 | shutdown-escalation | 20min |
 indexterm:[shutdown-escalation,Cluster Option]
 indexterm:[Cluster,Option,shutdown-escalation]
 _Advanced Use Only:_ If you need to adjust this value, it probably indicates
 the presence of a bug.
 
 | join-integration-timeout | 3min |
 indexterm:[join-integration-timeout,Cluster Option]
 indexterm:[Cluster,Option,join-integration-timeout]
 _Advanced Use Only:_ If you need to adjust this value, it probably indicates
 the presence of a bug.
 
 | join-finalization-timeout | 30min |
 indexterm:[join-finalization-timeout,Cluster Option]
 indexterm:[Cluster,Option,join-finalization-timeout]
 _Advanced Use Only:_ If you need to adjust this value, it probably indicates
 the presence of a bug.
 
 | transition-delay | 0s |
 indexterm:[transition-delay,Cluster Option]
 indexterm:[Cluster,Option,transition-delay]
 _Advanced Use Only:_ Delay cluster recovery for the configured interval to
 allow for additional/related events to occur. Useful if your configuration is
 sensitive to the order in which ping updates arrive.
 Enabling this option will slow down cluster recovery under
 all conditions.
 
 |=========================================================
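 
 Cluster options like those above can be set from the command line with
 `crm_attribute`, which operates on cluster options by default. As a purely
 illustrative example, this would put the cluster into maintenance mode:
 
 ----
 # crm_attribute --name maintenance-mode --update true
 ----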
diff --git a/doc/Pacemaker_Explained/en-US/Ch-Resources.txt b/doc/Pacemaker_Explained/en-US/Ch-Resources.txt
index 84b8ca35ea..3f23151793 100644
--- a/doc/Pacemaker_Explained/en-US/Ch-Resources.txt
+++ b/doc/Pacemaker_Explained/en-US/Ch-Resources.txt
@@ -1,886 +1,886 @@
 = Cluster Resources =
 
 [[s-resource-primitive]]
 == What is a Cluster Resource? ==
 
 indexterm:[Resource]
 
 A resource is a service made highly available by a cluster.
 The simplest type of resource, a 'primitive' resource, is described
 in this chapter. More complex forms, such as groups and clones,
 are described in later chapters.
 
 Every primitive resource has a 'resource agent'. A resource agent is an
 external program that abstracts the service it provides and presents a
 consistent view to the cluster.
 
 This allows the cluster to be agnostic about the resources it manages.
 The cluster doesn't need to understand how the resource works because
 it relies on the resource agent to do the right thing when given a
 `start`, `stop` or `monitor` command. For this reason, it is crucial that
 resource agents are well-tested.
 
 Typically, resource agents come in the form of shell scripts. However,
 they can be written using any technology (such as C, Python or Perl)
 that the author is comfortable with.
 
 [[s-resource-supported]]
 == Resource Classes ==
 
 indexterm:[Resource,class]
 
 Pacemaker supports several classes of agents:
 
 * OCF
 * LSB
 * Upstart
 * Systemd
 * Service
 * Fencing
 * Nagios Plugins
 
 === Open Cluster Framework ===
 
 indexterm:[Resource,OCF]
 indexterm:[OCF,Resources]
 indexterm:[Open Cluster Framework,Resources]
 
 The OCF standard
 footnote:[See
 http://www.opencf.org/cgi-bin/viewcvs.cgi/specs/ra/resource-agent-api.txt?rev=HEAD
  -- at least as it relates to resource agents.  The Pacemaker implementation has
 been somewhat extended from the OCF specs, but none of those changes are
 incompatible with the original OCF specification.]
 is basically an extension of the Linux Standard Base conventions for
 init scripts to:
 
 * support parameters,
 * make them self-describing, and
 * make them extensible
 
 OCF specs have strict definitions of the exit codes that actions must return.
 footnote:[
 The resource-agents source code includes the `ocf-tester` script, which
 can be useful in this regard.
 ]
 
 The cluster follows these specifications exactly, and giving the wrong
 exit code will cause the cluster to behave in ways you will likely
 find puzzling and annoying.  In particular, the cluster needs to
 distinguish a completely stopped resource from one which is in some
 erroneous and indeterminate state.
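 
 One quick way to exercise an agent's exit codes by hand is the `ocf-tester`
 script mentioned above. A minimal sketch of its use (the resource name and
 parameter are placeholders, and the agent path may differ by distribution):
 
 ----
 # ocf-tester -n test-ip -o ip=192.0.2.2 /usr/lib/ocf/resource.d/heartbeat/IPaddr
 ----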
 
 Parameters are passed to the resource agent as environment variables, with the
 special prefix +OCF_RESKEY_+.  So, a parameter which the user thinks
 of as +ip+ will be passed to the resource agent as +OCF_RESKEY_ip+.  The
 number and purpose of the parameters is left to the resource agent; however,
 the resource agent should use the `meta-data` command to advertise any that it
 supports.
 
 The OCF class is the most preferred as it is an industry standard,
 highly flexible (allowing parameters to be passed to agents in a
 non-positional manner) and self-describing.
 
 For more information, see the
 http://www.linux-ha.org/wiki/OCF_Resource_Agents[reference] and
 the 'Resource Agents' chapter of 'Pacemaker Administration'.
 
 === Linux Standard Base ===
 indexterm:[Resource,LSB]
 indexterm:[LSB,Resources]
 indexterm:[Linux Standard Base,Resources]
 
 'LSB' resource agents are more commonly known as 'init scripts' (service
 startup scripts), located in +/etc/init.d+.
 
 Commonly, they are provided by the OS distribution and, in order to be used
 with the cluster, they must conform to the LSB Spec.
 footnote:[
 See
 http://refspecs.linux-foundation.org/LSB_3.0.0/LSB-Core-generic/LSB-Core-generic/iniscrptact.html
 for the LSB Spec as it relates to init scripts.
 ]
 
 [WARNING]
 ====
 Many distributions or particular software packages claim LSB compliance
 but ship with broken init scripts.  For details on how to check whether
 your init script is LSB-compatible, see the 'Resource Agents' chapter of
 'Pacemaker Administration'. Common problematic violations of the LSB
 standard include:
 
 * Not implementing the +status+ operation at all
 * Not observing the correct exit status codes for
   +start+/+stop+/+status+ actions
 * Starting a started resource returns an error
 * Stopping a stopped resource returns an error
 
 Since the LSB standard is pragmatic enough _not_ to elaborate on clean
 and reliable (busy-waiting-free) service dependency chains beyond naming
 symbolic system facilities to order against (one of the strongest
 guarantees it sets forth is for _syslog_ in particular, denoting that,
 when satisfied, it is actually _operational_ -- something the standard
 does not demand universally), and because explicit dependency-based
 ordering is crucial for stacked HA applications, one additional
 shortcoming stands out. It is possibly rooted deeper in the lack of
 synchronization after the initial fork in the daemons themselves
 (something that currently also affects Pacemaker's own user-facing
 daemons), and hence is not something init scripts alone can be blamed
 for:
 
 * Insufficient causality discreteness on either service start-up (for
   the dependency chains, it is rather essential that the service is also
   _operational_ -- with the minimal viable interpretation being that a
   subsequent +status+ returns success, though preferably in the strict
   sense -- once the respective init script invocation finishes with
   success) or shutdown (ditto, with no child processes left behind)
 footnote:[
 There's an inherent difference between _started_ and _ready_ state
 of the service at hand, see discussion at
 https://jdebp.eu/FGA/unix-daemon-readiness-protocol-problems.html
 also showing how suitably prepared <<s-resource-supported-systemd,systemd
 resources>> may possibly improve on this through a native arrangement scheme.
 ]
 ====
 
 [IMPORTANT]
 ====
 Remember to make sure the computer is _not_ configured to start any
 services at boot time -- that should be controlled by the cluster.
 ====
 
 [[s-resource-supported-systemd]]
 === Systemd ===
 indexterm:[Resource,Systemd]
 indexterm:[Systemd,Resources]
 
 Some newer distributions have replaced the old
 http://en.wikipedia.org/wiki/Init#SysV-style["SysV"] style of
 initialization daemons and scripts with an alternative called
 http://www.freedesktop.org/wiki/Software/systemd[Systemd].
 
 Pacemaker is able to manage these services _if they are present_.
 
 Instead of init scripts, systemd has 'unit files'.  Generally, the
 services (unit files) are provided by the OS distribution, but there
 are online guides for converting from init scripts.
 footnote:[For example,
 http://0pointer.de/blog/projects/systemd-for-admins-3.html]
 
 [IMPORTANT]
 ====
 Remember to make sure the computer is _not_ configured to start any
 services at boot time -- that should be controlled by the cluster.
 ====
 
 === Upstart ===
 indexterm:[Resource,Upstart]
 indexterm:[Upstart,Resources]
 
 Some newer distributions have replaced the old
 http://en.wikipedia.org/wiki/Init#SysV-style["SysV"] style of
 initialization daemons (and scripts) with an alternative called
 http://upstart.ubuntu.com/[Upstart].
 
 Pacemaker is able to manage these services _if they are present_.
 
 Instead of init scripts, upstart has 'jobs'.  Generally, the
 services (jobs) are provided by the OS distribution.
 
 [IMPORTANT]
 ====
 Remember to make sure the computer is _not_ configured to start any
 services at boot time -- that should be controlled by the cluster.
 ====
 
 === System Services ===
 indexterm:[Resource,System Services]
 indexterm:[System Service,Resources]
 
 Since there are various types of system services (+systemd+,
 +upstart+, and +lsb+), Pacemaker supports a special +service+ alias which
 intelligently figures out which one applies to a given cluster node.
 
 This is particularly useful when the cluster contains a mix of
 +systemd+, +upstart+, and +lsb+.
 
 In order, Pacemaker will try to find the named service as:
 
 . an LSB init script
 . a Systemd unit file
 . an Upstart job
 
 === STONITH ===
 indexterm:[Resource,STONITH]
 indexterm:[STONITH,Resources]
 
 The STONITH class is used exclusively for fencing-related resources.  This is
 discussed later in <<ch-stonith>>.
 
 === Nagios Plugins ===
 indexterm:[Resource,Nagios Plugins]
 indexterm:[Nagios Plugins,Resources]
 
 Nagios Plugins
 footnote:[The project has two independent forks, hosted at
 https://www.nagios-plugins.org/ and https://www.monitoring-plugins.org/. Output
 from both projects' plugins is similar, so plugins from either project can be
 used with pacemaker.]
 allow us to monitor services on remote hosts.
 
 Pacemaker is able to do remote monitoring with the plugins _if they are
 present_.
 
 A common use case is to configure them as resources belonging to a resource
 container (usually a virtual machine), and the container will be restarted
 if any of them has failed. Another use is to configure them as ordinary
 resources to be used for monitoring hosts or services via the network.
 
 The supported parameters are the same as the plugin's long options.
 
 [[primitive-resource]]
 == Resource Properties ==
 
 These values tell the cluster which resource agent to use for the resource,
 where to find that resource agent and what standards it conforms to.
 
 .Properties of a Primitive Resource
-[width="95%",cols="1m,6<",options="header",align="center"]
+[width="95%",cols="1m,<6",options="header",align="center"]
 |=========================================================
 
 |Field
 |Description
 
 |id
 |Your name for the resource
  indexterm:[id,Resource]
  indexterm:[Resource,Property,id]
 
 |class
 
 |The standard the resource agent conforms to. Allowed values:
 +lsb+, +nagios+, +ocf+, +service+, +stonith+, +systemd+, +upstart+
  indexterm:[class,Resource]
  indexterm:[Resource,Property,class]
 
 |type
 |The name of the Resource Agent you wish to use. E.g. +IPaddr+ or +Filesystem+
  indexterm:[type,Resource]
  indexterm:[Resource,Property,type]
 
 |provider
 |The OCF spec allows multiple vendors to supply the same
  resource agent. To use the OCF resource agents supplied by
  the Heartbeat project, you would specify +heartbeat+ here.
  indexterm:[provider,Resource]
  indexterm:[Resource,Property,provider]
 
 |=========================================================
 
 The XML definition of a resource can be queried with the `crm_resource` tool.
 For example:
 
 ----
 # crm_resource --resource Email --query-xml
 ----
 
 might produce:
 
 .A system resource definition
 =====
 [source,XML]
 <primitive id="Email" class="service" type="exim"/>
 =====
 
 [NOTE]
 =====
 One of the main drawbacks of system service (LSB, systemd or
 Upstart) resources is that they do not allow any parameters!
 =====
 
 ////
 See https://tools.ietf.org/html/rfc5737 for choice of example IP address
 ////
 
 .An OCF resource definition
 =====
 [source,XML]
 -------
 <primitive id="Public-IP" class="ocf" type="IPaddr" provider="heartbeat">
    <instance_attributes id="Public-IP-params">
       <nvpair id="Public-IP-ip" name="ip" value="192.0.2.2"/>
    </instance_attributes>
 </primitive>
 -------
 =====
 
 [[s-resource-options]]
 == Resource Options ==
 
 Resources have two types of options: 'meta-attributes' and 'instance attributes'.
 Meta-attributes apply to any type of resource, while instance attributes
 are specific to each resource agent.
 
 === Resource Meta-Attributes ===
 
 Meta-attributes are used by the cluster to decide how a resource should
 behave and can be easily set using the `--meta` option of the
 `crm_resource` command.
 
 .Meta-attributes of a Primitive Resource
-[width="95%",cols="2m,2,5<a",options="header",align="center"]
+[width="95%",cols="2m,2,<5",options="header",align="center"]
 |=========================================================
 
 |Field
 |Default
 |Description
 
 |priority
 |0
 |If not all resources can be active, the cluster will stop lower
 priority resources in order to keep higher priority ones active.
 indexterm:[priority,Resource Option]
 indexterm:[Resource,Option,priority]
 
 |target-role
 |Started
-|What state should the cluster attempt to keep this resource in? Allowed values:
+a|What state should the cluster attempt to keep this resource in? Allowed values:
 
 * +Stopped:+ Force the resource to be stopped
 * +Started:+ Allow the resource to be started (and in the case of
   <<s-resource-promotable,promotable clone resources>>, promoted to master if
   appropriate)
 * +Slave:+ Allow the resource to be started, but only in Slave mode if
   the resource is <<s-resource-promotable,promotable>>
 * +Master:+ Equivalent to +Started+
 indexterm:[target-role,Resource Option]
 indexterm:[Resource,Option,target-role]
 
 |is-managed
 |TRUE
 |Is the cluster allowed to start and stop the resource?  Allowed
  values: +true+, +false+
  indexterm:[is-managed,Resource Option]
  indexterm:[Resource,Option,is-managed]
 
 |resource-stickiness
 |value of +resource-stickiness+ in the +rsc_defaults+ section
 |How much does the resource prefer to stay where it is?
  indexterm:[resource-stickiness,Resource Option]
  indexterm:[Resource,Option,resource-stickiness]
 
 |requires
 |+quorum+ for resources with a +class+ of +stonith+,
  otherwise +unfencing+ if unfencing is active in the cluster,
  otherwise +fencing+ if +stonith-enabled+ is true, otherwise +quorum+
-|Conditions under which the resource can be started
+a|Conditions under which the resource can be started
 Allowed values:
 
 * +nothing:+ can always be started
 * +quorum:+ The cluster can only start this resource if a majority of
   the configured nodes are active
 * +fencing:+ The cluster can only start this resource if a majority
   of the configured nodes are active _and_ any failed or unknown nodes
   have been <<ch-stonith,fenced>>
 * +unfencing:+
   The cluster can only start this resource if a majority
   of the configured nodes are active _and_ any failed or unknown nodes
   have been fenced _and_ only on nodes that have been
   <<s-unfencing,unfenced>>
 
 indexterm:[requires,Resource Option]
 indexterm:[Resource,Option,requires]
 
 |migration-threshold
 |INFINITY
 |How many failures may occur for this resource on a node, before this
  node is marked ineligible to host this resource. A value of 0 indicates that
  this feature is disabled (the node will never be marked ineligible); by
  contrast, the cluster treats INFINITY (the default) as a very large but
  finite number. This option has an effect only if the failed operation
  specifies +on-fail+ as +restart+ (the default), and additionally for
  failed +start+ operations, if the cluster property +start-failure-is-fatal+
  is +false+.
  indexterm:[migration-threshold,Resource Option]
  indexterm:[Resource,Option,migration-threshold]
 
 |failure-timeout
 |0
 |How many seconds to wait before acting as if the failure had not
  occurred, and potentially allowing the resource back to the node on
  which it failed. A value of 0 indicates that this feature is disabled.
  As with any time-based actions, this is not guaranteed to be checked more
  frequently than the value of +cluster-recheck-interval+ (see
  <<s-cluster-options>>).
  indexterm:[failure-timeout,Resource Option]
  indexterm:[Resource,Option,failure-timeout]
 
 |multiple-active
 |stop_start
-|What should the cluster do if it ever finds the resource active on
+a|What should the cluster do if it ever finds the resource active on
  more than one node? Allowed values:
 
 * +block:+ mark the resource as unmanaged
 * +stop_only:+ stop all active instances and leave them that way
 * +stop_start:+ stop all active instances and start the resource in
   one location only
 
 indexterm:[multiple-active,Resource Option]
 indexterm:[Resource,Option,multiple-active]
 
 |allow-migrate
 |TRUE for ocf:pacemaker:remote resources, FALSE otherwise
 |Whether the cluster should try to "live migrate" this resource when it needs
 to be moved (see <<s-migrating-resources>>)
 
 |container-attribute-target
 |
 |Specific to bundle resources; see <<s-bundle-attributes>>
 
 |remote-node
 |
 |The name of the Pacemaker Remote guest node this resource is associated with,
  if any. If specified, this both enables the resource as a guest node and
  defines the unique name used to identify the guest node. The guest must be
  configured to run the Pacemaker Remote daemon when it is started. +WARNING:+
  This value cannot overlap with any resource or node IDs.
 
 |remote-port
 |3121
 |If +remote-node+ is specified, the port on the guest used for its
  Pacemaker Remote connection. The Pacemaker Remote daemon on the guest must be
  configured to listen on this port.
 
 |remote-addr
 |value of +remote-node+
 |If +remote-node+ is specified, the IP address or hostname used to connect to
  the guest via Pacemaker Remote. The Pacemaker Remote daemon on the guest
  must be configured to accept connections on this address.
 
 |remote-connect-timeout
 |60s
 |If +remote-node+ is specified, how long before a pending guest connection will
  time out.
 
 |=========================================================
 
 As an example of setting resource options, if you performed the following
 commands on an LSB Email resource:
 
 -------
 # crm_resource --meta --resource Email --set-parameter priority --parameter-value 100
 # crm_resource -m -r Email -p multiple-active -v block
 -------
 
 the resulting resource definition might be:
 
 .An LSB resource with cluster options
 =====
 [source,XML]
 -------
 <primitive id="Email" class="lsb" type="exim">
   <meta_attributes id="Email-meta_attributes">
     <nvpair id="Email-meta_attributes-priority" name="priority" value="100"/>
     <nvpair id="Email-meta_attributes-multiple-active" name="multiple-active" value="block"/>
   </meta_attributes>
 </primitive>
 -------
 =====
 
 [[s-resource-defaults]]
 === Setting Global Defaults for Resource Meta-Attributes ===
 
 To set a default value for a resource option, add it to the
 +rsc_defaults+ section with `crm_attribute`. For example,
 
 ----
 # crm_attribute --type rsc_defaults --name is-managed --update false
 ----
 
 would prevent the cluster from starting or stopping any of the
 resources in the configuration (unless of course the individual
 resources were specifically enabled by having their +is-managed+ set to
 +true+).
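 
 In the CIB, this creates (or updates) an entry in the +rsc_defaults+ section
 of the configuration. The result might look like the following sketch (the
 +id+ values shown are only illustrative):
 
 .A possible rsc_defaults section
 =====
 [source,XML]
 -------
 <rsc_defaults>
   <meta_attributes id="rsc-options">
     <!-- applies to every resource unless the resource overrides it -->
     <nvpair id="rsc-options-is-managed" name="is-managed" value="false"/>
   </meta_attributes>
 </rsc_defaults>
 -------
 =====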
 
 === Resource Instance Attributes ===
 
 The resource agents of some resource classes (lsb, systemd and upstart 'not' among them)
 can be given parameters which determine how they behave and which instance
 of a service they control.
 
 If your resource agent supports parameters, you can add them with the
 `crm_resource` command. For example,
 
 ----
 # crm_resource --resource Public-IP --set-parameter ip --parameter-value 192.0.2.2
 ----
 
 would create an entry in the resource like this:
 
 .An example OCF resource with instance attributes
 =====
 [source,XML]
 -------
 <primitive id="Public-IP" class="ocf" type="IPaddr" provider="heartbeat">
    <instance_attributes id="params-public-ip">
       <nvpair id="public-ip-addr" name="ip" value="192.0.2.2"/>
    </instance_attributes>
 </primitive>
 -------
 =====
 
 For an OCF resource, the result would be an environment variable
 called +OCF_RESKEY_ip+ with a value of +192.0.2.2+.
 
 The list of instance attributes supported by an OCF resource agent can be
 found by calling the resource agent with the `meta-data` command.
 The output contains an XML description of all the supported
 attributes, their purpose and default values.
 
 .Displaying the metadata for the Dummy resource agent template
 =====
 ----
 # export OCF_ROOT=/usr/lib/ocf
 # $OCF_ROOT/resource.d/pacemaker/Dummy meta-data
 ----
 [source,XML]
 -------
 <?xml version="1.0"?>
 <!DOCTYPE resource-agent SYSTEM "ra-api-1.dtd">
 <resource-agent name="Dummy" version="1.0">
 <version>1.0</version>
 
 <longdesc lang="en">
 This is a Dummy Resource Agent. It does absolutely nothing except 
 keep track of whether its running or not.
 Its purpose in life is for testing and to serve as a template for RA writers.
 
 NB: Please pay attention to the timeouts specified in the actions
 section below. They should be meaningful for the kind of resource
 the agent manages. They should be the minimum advised timeouts,
 but they shouldn't/cannot cover _all_ possible resource
 instances. So, try to be neither overly generous nor too stingy,
 but moderate. The minimum timeouts should never be below 10 seconds.
 </longdesc>
 <shortdesc lang="en">Example stateless resource agent</shortdesc>
 
 <parameters>
 <parameter name="state" unique="1">
 <longdesc lang="en">
 Location to store the resource state in.
 </longdesc>
 <shortdesc lang="en">State file</shortdesc>
 <content type="string" default="/var/run/Dummy-default.state" />
 </parameter>
 
 <parameter name="fake" unique="0">
 <longdesc lang="en">
 Fake attribute that can be changed to cause a reload
 </longdesc>
 <shortdesc lang="en">Fake attribute that can be changed to cause a reload</shortdesc>
 <content type="string" default="dummy" />
 </parameter>
 
 <parameter name="op_sleep" unique="1">
 <longdesc lang="en">
 Number of seconds to sleep during operations.  This can be used to test how
 the cluster reacts to operation timeouts.
 </longdesc>
 <shortdesc lang="en">Operation sleep duration in seconds.</shortdesc>
 <content type="string" default="0" />
 </parameter>
 
 </parameters>
 
 <actions>
 <action name="start"        timeout="20" />
 <action name="stop"         timeout="20" />
 <action name="monitor"      timeout="20" interval="10" depth="0"/>
 <action name="reload"       timeout="20" />
 <action name="migrate_to"   timeout="20" />
 <action name="migrate_from" timeout="20" />
 <action name="validate-all" timeout="20" />
 <action name="meta-data"    timeout="5" />
 </actions>
 </resource-agent>
 -------
 =====
 
 == Resource Operations ==
 
 indexterm:[Resource,Action]
 
 'Operations' are actions the cluster can perform on a resource by calling the
 resource agent. Resource agents must support certain common operations such as
 start, stop and monitor, and may implement any others.
 
 Some operations are generated by the cluster itself, for example, stopping and
 starting resources as needed.
 
 You can configure operations in the cluster configuration. As an example, by
 default the cluster will 'not' ensure your resources stay healthy once they are
 started. footnote:[Currently, anyway. Automatic monitoring operations may be
 added in a future version of Pacemaker.] To instruct the cluster to do this,
 you need to add a +monitor+ operation to the resource's definition.
 
 .An OCF resource with a recurring health check
 =====
 [source,XML]
 -------
 <primitive id="Public-IP" class="ocf" type="IPaddr" provider="heartbeat">
   <operations>
      <op id="public-ip-check" name="monitor" interval="60s"/>
   </operations>
   <instance_attributes id="params-public-ip">
      <nvpair id="public-ip-addr" name="ip" value="192.0.2.2"/>
   </instance_attributes>
 </primitive>
 -------
 =====
 
 .Properties of an Operation
-[width="95%",cols="2m,3,6<a",options="header",align="center"]
+[width="95%",cols="2m,3,<6",options="header",align="center"]
 |=========================================================
 
 |Field
 |Default
 |Description
 
 |id
 |
 |A unique name for the operation.
  indexterm:[id,Action Property]
  indexterm:[Action,Property,id]
 
 |name
 |
 |The action to perform. This can be any action supported by the agent; common
  values include +monitor+, +start+, and +stop+.
  indexterm:[name,Action Property]
  indexterm:[Action,Property,name]
 
 |interval
 |0
 |How frequently (in seconds) to perform the operation. A value of 0 means never.
  A positive value defines a 'recurring action', which is typically used with
  <<s-resource-monitoring,monitor>>.
  indexterm:[interval,Action Property]
  indexterm:[Action,Property,interval]
 
 |timeout
 |
 |How long to wait before declaring the action has failed
  indexterm:[timeout,Action Property]
  indexterm:[Action,Property,timeout]
 
 |on-fail
 |restart '(except for +stop+ operations, which default to' fence 'when
  STONITH is enabled and' block 'otherwise)'
-|The action to take if this action ever fails. Allowed values:
+a|The action to take if this action ever fails. Allowed values:
 
 * +ignore:+ Pretend the resource did not fail.
 * +block:+ Don't perform any further operations on the resource.
 * +stop:+ Stop the resource and do not start it elsewhere.
 * +restart:+ Stop the resource and start it again (possibly on a different node).
 * +fence:+ STONITH the node on which the resource failed.
 * +standby:+ Move _all_ resources away from the node on which the resource failed.
 
 indexterm:[on-fail,Action Property]
 indexterm:[Action,Property,on-fail]
 
 |enabled
 |TRUE
 |If +false+, ignore this operation definition.  This is typically used to pause
  a particular recurring +monitor+ operation; for instance, it can complement
  the respective resource being unmanaged (+is-managed=false+), as this alone
  will <<s-monitoring-unmanaged,not block any configured monitoring>>.
  Disabling the operation does not suppress all actions of the given type.
  Allowed values: +true+, +false+.
  indexterm:[enabled,Action Property]
  indexterm:[Action,Property,enabled]
 
 |record-pending
 |FALSE
 |If +true+, the intention to perform the operation is recorded so that
  GUIs and CLI tools can indicate that an operation is in progress.
  This is best set as an _operation default_ (see next section).
  Allowed values: +true+, +false+.
  indexterm:[record-pending,Action Property]
  indexterm:[Action,Property,record-pending]
 
 |role
 |
 |Run the operation only on node(s) that the cluster thinks should be in
  the specified role. This only makes sense for recurring +monitor+ operations.
  Allowed (case-sensitive) values: +Stopped+, +Started+, and in the
  case of <<s-resource-promotable,promotable clone resources>>, +Slave+ and +Master+.
  indexterm:[role,Action Property]
  indexterm:[Action,Property,role]
 
 |=========================================================
 
 [[s-resource-monitoring]]
 === Monitoring Resources for Failure ===
 
 When Pacemaker first starts a resource, it runs one-time +monitor+ operations
 (referred to as 'probes') to ensure the resource is running where it's
 supposed to be, and not running where it's not supposed to be. (This behavior
 can be affected by the +resource-discovery+ location constraint property.)
 
 Other than those initial probes, Pacemaker will not (by default) check that
 the resource continues to stay healthy. As in the example above, you must
 configure +monitor+ operations explicitly to perform these checks.
 
 By default, a +monitor+ operation will ensure that the resource is running
 where it is supposed to. The +target-role+ property can be used for further
 checking.
 
 For example, if a resource has one +monitor+ operation with
 +interval=10 role=Started+ and a second +monitor+ operation with
 +interval=11 role=Stopped+, the cluster will run the first monitor on any nodes
 it thinks 'should' be running the resource, and the second monitor on any nodes
 that it thinks 'should not' be running the resource (for the truly paranoid,
 who want to know when an administrator manually starts a service by mistake).
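 
 Such a configuration might look like the following sketch (the ids are
 illustrative; the intervals match the example above):
 
 .A resource with monitors for both the Started and Stopped roles
 =====
 [source,XML]
 -------
 <primitive id="Public-IP" class="ocf" type="IPaddr" provider="heartbeat">
   <operations>
     <!-- runs on nodes where the resource should be active -->
     <op id="public-ip-monitor-started" name="monitor" interval="10" role="Started"/>
     <!-- runs on nodes where the resource should be stopped -->
     <op id="public-ip-monitor-stopped" name="monitor" interval="11" role="Stopped"/>
   </operations>
   <instance_attributes id="params-public-ip">
     <nvpair id="public-ip-addr" name="ip" value="192.0.2.2"/>
   </instance_attributes>
 </primitive>
 -------
 =====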
 
 [[s-monitoring-unmanaged]]
 === Monitoring Resources When Administration is Disabled ===
 
 Recurring +monitor+ operations behave differently under various administrative
 settings:
 
 * When a resource is unmanaged (by setting +is-managed=false+): No monitors
   will be stopped.
 +
 If the unmanaged resource is stopped on a node where the cluster thinks it
 should be running, the cluster will detect and report that it is not, but it
 will not consider the monitor failed, and will not try to start the resource
 until it is managed again.
 +
 Starting the unmanaged resource on a different node is strongly discouraged
 and will at least cause the cluster to consider the resource failed, and
 may require the resource's +target-role+ to be set to +Stopped+ then +Started+
 to be recovered.
 
 * When a node is put into standby: All resources will be moved away from the
   node, and all +monitor+ operations will be stopped on the node, except those
   specifying +role+ as +Stopped+. Such rather atypical monitoring will
   consequently be started on the node if appropriate.
 
 * When the cluster is put into maintenance mode: All resources will be marked
   as unmanaged. All monitor operations will be stopped, except those
   specifying +role+ as +Stopped+. As with single unmanaged resources, starting
   a resource on a node other than where the cluster expects it to be will
   cause problems.
 
 [[s-operation-defaults]]
 === Setting Global Defaults for Operations ===
 
 You can change the global default values for operation properties
 in a given cluster. These are defined in an +op_defaults+ section 
 of the CIB's +configuration+ section, and can be set with `crm_attribute`.
 For example,
 
 ----
 # crm_attribute --type op_defaults --name timeout --update 20s
 ----
 
 would default each operation's +timeout+ to 20 seconds.  If an
 operation's definition also includes a value for +timeout+, then that
 value would be used for that operation instead.
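 
 In the CIB, such a default would appear in an +op_defaults+ section similar
 to the following sketch (the +id+ values shown are only illustrative):
 
 .A possible op_defaults section
 =====
 [source,XML]
 -------
 <op_defaults>
   <meta_attributes id="op-options">
     <!-- used whenever an operation does not define its own timeout -->
     <nvpair id="op-options-timeout" name="timeout" value="20s"/>
   </meta_attributes>
 </op_defaults>
 -------
 =====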
 
 === When Implicit Operations Take a Long Time ===
 
 The cluster will always perform a number of implicit operations: +start+,
 +stop+ and a non-recurring +monitor+ operation used at startup to check
 whether the resource is already active.  If one of these is taking too long,
 then you can create an entry for them and specify a longer timeout.
 
 .An OCF resource with custom timeouts for its implicit actions
 =====
 [source,XML]
 -------
 <primitive id="Public-IP" class="ocf" type="IPaddr" provider="heartbeat">
   <operations>
      <op id="public-ip-startup" name="monitor" interval="0" timeout="90s"/>
      <op id="public-ip-start" name="start" interval="0" timeout="180s"/>
      <op id="public-ip-stop" name="stop" interval="0" timeout="15min"/>
   </operations>
   <instance_attributes id="params-public-ip">
      <nvpair id="public-ip-addr" name="ip" value="192.0.2.2"/>
   </instance_attributes>
 </primitive>
 -------
 =====
 
 === Multiple Monitor Operations ===
 
 Provided no two operations (for a single resource) have the same name
 and interval, you can have as many +monitor+ operations as you like.
 In this way, you can do a superficial health check every minute and
 progressively more intense ones at higher intervals.
 
 To tell the resource agent what kind of check to perform, you need to
 provide each monitor with a different value for a common parameter.
 The OCF standard creates a special parameter called +OCF_CHECK_LEVEL+
 for this purpose and dictates that it is "made available to the
 resource agent without the normal +OCF_RESKEY+ prefix".
 
 Whatever name you choose, you can specify it by adding an
 +instance_attributes+ block to the +op+ tag. It is up to each
 resource agent to look for the parameter and decide how to use it.
 
 .An OCF resource with two recurring health checks, performing different levels of checks specified via +OCF_CHECK_LEVEL+.
 =====
 [source,XML]
 -------
 <primitive id="Public-IP" class="ocf" type="IPaddr" provider="heartbeat">
    <operations>
       <op id="public-ip-health-60" name="monitor" interval="60">
          <instance_attributes id="params-public-ip-depth-60">
             <nvpair id="public-ip-depth-60" name="OCF_CHECK_LEVEL" value="10"/>
          </instance_attributes>
       </op>
       <op id="public-ip-health-300" name="monitor" interval="300">
          <instance_attributes id="params-public-ip-depth-300">
             <nvpair id="public-ip-depth-300" name="OCF_CHECK_LEVEL" value="20"/>
          </instance_attributes>
      </op>
    </operations>
    <instance_attributes id="params-public-ip">
        <nvpair id="public-ip-level" name="ip" value="192.0.2.2"/>
    </instance_attributes>
 </primitive>
 -------
 =====
 
 === Disabling a Monitor Operation ===
 
 The easiest way to stop a recurring monitor is to just delete it.
 However, there can be times when you only want to disable it
 temporarily.  In such cases, simply add +enabled=false+ to the
 operation's definition.
 
 .Example of an OCF resource with a disabled health check
 =====
 [source,XML]
 -------
 <primitive id="Public-IP" class="ocf" type="IPaddr" provider="heartbeat">
    <operations>
       <op id="public-ip-check" name="monitor" interval="60s" enabled="false"/>
    </operations>
    <instance_attributes id="params-public-ip">
       <nvpair id="public-ip-addr" name="ip" value="192.0.2.2"/>
    </instance_attributes>
 </primitive>
 -------
 =====
 
 This can be achieved from the command line by executing:
 
 ----
 # cibadmin --modify --xml-text '<op id="public-ip-check" enabled="false"/>'
 ----
 
 Once you've done whatever you needed to do, you can then re-enable it with
 ----
 # cibadmin --modify --xml-text '<op id="public-ip-check" enabled="true"/>'
 ----
diff --git a/doc/Pacemaker_Explained/en-US/Ch-Rules.txt b/doc/Pacemaker_Explained/en-US/Ch-Rules.txt
index 6e39ba27a8..af05d7b8f4 100644
--- a/doc/Pacemaker_Explained/en-US/Ch-Rules.txt
+++ b/doc/Pacemaker_Explained/en-US/Ch-Rules.txt
@@ -1,642 +1,642 @@
 = Rules =
 
 //// 
 We prefer [[ch-rules]], but older versions of asciidoc don't deal well
 with that construct for chapter headings
 ////
 
 anchor:ch-rules[Chapter 8, Rules]
 indexterm:[Resource,Constraint,Rule]
 
 Rules can be used to make your configuration more dynamic.  One common
 example is to set one value for +resource-stickiness+ during working
 hours, to prevent resources from being moved back to their most
 preferred location, and another on weekends when no-one is around to
 notice an outage.
 
 Another use of rules might be to assign machines to different
 processing groups (using a node attribute) based on time and to then
 use that attribute when creating location constraints.
 
 Each rule can contain a number of expressions, date-expressions and
 even other rules.  The results of the expressions are combined based
 on the rule's +boolean-op+ field to determine if the rule ultimately
 evaluates to +true+ or +false+.  What happens next depends on the
 context in which the rule is being used.
     
 == Rule Properties ==
 
 .Properties of a Rule
-[width="95%",cols="2m,1,5<",options="header",align="center"]
+[width="95%",cols="2m,1,<5",options="header",align="center"]
 |=========================================================
 
 |Field
 |Default
 |Description
 
 |id
 |
 |A unique name for the rule (required)
  indexterm:[id,Constraint Rule]
  indexterm:[Constraint,Rule,id]
 
 |role
 |+Started+
 |Limits the rule to apply only when the resource is in the specified
  role. Allowed values are +Started+, +Slave+, and +Master+. A rule
  with +role="Master"+ cannot determine the initial location of a
  clone instance and will only affect which of the active instances
  will be promoted.
  indexterm:[role,Constraint Rule]
  indexterm:[Constraint,Rule,role]
 
 |score
 |
 |The score to apply if the rule evaluates to +true+. Limited to use in
  rules that are part of location constraints.
  indexterm:[score,Constraint Rule]
  indexterm:[Constraint,Rule,score]
 
 |score-attribute
 |
 |The node attribute to look up and use as a score if the rule
  evaluates to +true+. Limited to use in rules that are part of
  location constraints.
  indexterm:[score-attribute,Constraint Rule]
  indexterm:[Constraint,Rule,score-attribute]
 
 |boolean-op 
 |+and+
 |How to combine the result of multiple expression objects. Allowed
  values are +and+ and +or+.
  indexterm:[boolean-op,Constraint Rule]
  indexterm:[Constraint,Rule,boolean-op]
 
 |=========================================================
 
 == Node Attribute Expressions ==
 
 indexterm:[Resource,Constraint,Attribute Expression]
 
 Expression objects are used to control a resource based on the
 attributes defined by a node or nodes.
 
 .Properties of an Expression
-[width="95%",cols="2m,1,5<a",options="header",align="center"]
+[width="95%",cols="2m,1,<5",options="header",align="center"]
 |=========================================================
 
 |Field
 |Default
 |Description
 
 |id
 |
 |A unique name for the expression (required)
  indexterm:[id,Constraint Expression]
  indexterm:[Constraint,Attribute Expression,id]
 
 |attribute
 |
 |The node attribute to test (required)
  indexterm:[attribute,Constraint Expression]
  indexterm:[Constraint,Attribute Expression,attribute]
 
 |type
 |+string+
 |Determines how the value(s) should be tested. Allowed values are
  +string+, +integer+, and +version+.
  indexterm:[type,Constraint Expression]
  indexterm:[Constraint,Attribute Expression,type]
 
 |operation
 |
-|The comparison to perform (required). Allowed values:
+a|The comparison to perform (required). Allowed values:
 
 * +lt:+ True if the value of the node's +attribute+ is less than +value+
 * +gt:+ True if the value of the node's +attribute+ is greater than +value+
 * +lte:+ True if the value of the node's +attribute+ is less than or equal to +value+
 * +gte:+ True if the value of the node's +attribute+ is greater than or equal to +value+
 * +eq:+ True if the value of the node's +attribute+ is equal to +value+
 * +ne:+ True if the value of the node's +attribute+ is not equal to +value+
 * +defined:+ True if the node has the named attribute
 * +not_defined:+ True if the node does not have the named attribute
  indexterm:[operation,Constraint Expression]
  indexterm:[Constraint,Attribute Expression,operation]
 
 |value
 |
 |User-supplied value for comparison (required)
  indexterm:[value,Constraint Expression]
  indexterm:[Constraint,Attribute Expression,value]
 
 |value-source
 |+literal+
-|How the +value+ is derived. Allowed values:
+a|How the +value+ is derived. Allowed values:
 
 * +literal+: +value+ is a literal string to compare against
 * +param+: +value+ is the name of a resource parameter to compare against (only
   valid in location constraints)
 * +meta+: +value+ is the name of a resource meta-attribute to compare against
   (only valid in location constraints)
  indexterm:[value,Constraint Expression]
  indexterm:[Constraint,Attribute Expression,value]
 
 |=========================================================
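 
 The +value-source+ field is easiest to understand with an example. The
 following sketch is purely illustrative: it assumes a resource named
 +myDatabase+ with a parameter named +site+, and nodes carrying a +site+
 node attribute. The constraint bans the resource from any node whose +site+
 attribute does not match the resource's own +site+ parameter:
 
 .Using value-source to compare against a resource parameter
 =====
 [source,XML]
 -------
 <rsc_location id="db-site-match" rsc="myDatabase">
    <rule id="db-site-match-rule" score="-INFINITY">
       <!-- value names a resource parameter because value-source="param" -->
       <expression id="db-site-match-expr" attribute="site"
         operation="ne" value="site" value-source="param"/>
    </rule>
 </rsc_location>
 -------
 =====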
 
 In addition to any attributes added by the administrator, the cluster defines
 special, built-in node attributes for each node that can also be used.
 
 .Built-in node attributes
-[width="95%",cols="1m,5<a",options="header",align="center"]
+[width="95%",cols="1m,<5",options="header",align="center"]
 |=========================================================
 
 |Name
 |Value
 
 |#uname
 |Node <<s-node-name,name>>
 
 |#id
 |Node ID
 
 |#kind
 |Node type. Possible values are +cluster+, +remote+, and +container+. Kind is
  +remote+ for Pacemaker Remote nodes created with the +ocf:pacemaker:remote+
  resource, and +container+ for Pacemaker Remote guest nodes and bundle nodes
 
 |#is_dc
 |"true" if this node is a Designated Controller (DC), "false" otherwise
 
 |#cluster-name
 |The value of the +cluster-name+ cluster property, if set
 
 |#site-name
 |The value of the +site-name+ cluster property, if set, otherwise identical to
  +#cluster-name+
 
 |#role
-|The role the relevant promotable clone resource has on this node. Valid only within
+a|The role the relevant promotable clone resource has on this node. Valid only within
  a rule for a location constraint for a promotable clone resource.
 
 ////
 // if uncommenting, put a pipe in front of first two lines
 #ra-version
 The installed version of the resource agent on the node, as defined
  by the +version+ attribute of the +resource-agent+ tag in the agent's
  metadata. Valid only within rules controlling resource options. This can be
  useful during rolling upgrades of a backward-incompatible resource agent.
  '(coming in x.x.x)'
 ////
 
 |=========================================================
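 
 Built-in attributes can be tested like any other node attribute. As a purely
 illustrative sketch (the resource name is a placeholder), the following
 constraint keeps a resource off Pacemaker Remote and guest nodes by testing
 +#kind+:
 
 .Ban a resource from all non-cluster nodes
 =====
 [source,XML]
 -------
 <rsc_location id="ban-from-remote-nodes" rsc="Public-IP">
    <rule id="ban-from-remote-nodes-rule" score="-INFINITY">
       <!-- true on any node whose kind is not "cluster" -->
       <expression id="ban-from-remote-nodes-expr" attribute="#kind"
         operation="ne" value="cluster"/>
    </rule>
 </rsc_location>
 -------
 =====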
 
 == Time- and Date-Based Expressions ==
 
 indexterm:[Time Based Expressions]
 indexterm:[Resource,Constraint,Date/Time Expression]
         
 As the name suggests, +date_expressions+ are used to control a
 resource or cluster option based on the current date/time.  They may
 contain an optional +date_spec+ and/or +duration+ object depending on
 the context.
       
 .Properties of a Date Expression
-[width="95%",cols="2m,5<a",options="header",align="center"]
+[width="95%",cols="2m,<5",options="header",align="center"]
 |=========================================================
 |Field
 |Description
 
 |start
 |A date/time conforming to the http://en.wikipedia.org/wiki/ISO_8601[ISO8601]
  specification.
  indexterm:[start,Constraint Expression]
  indexterm:[Constraint,Date/Time Expression,start]
 
 |end
 |A date/time conforming to the http://en.wikipedia.org/wiki/ISO_8601[ISO8601]
  specification. Can be inferred by supplying a value for +start+ and a
  +duration+.
  indexterm:[end,Constraint Expression]
  indexterm:[Constraint,Date/Time Expression,end]
 
 |operation
-|Compares the current date/time with the start and/or end date,
+a|Compares the current date/time with the start and/or end date,
  depending on the context. Allowed values:
 
 * +gt:+ True if the current date/time is after +start+
 * +lt:+ True if the current date/time is before +end+
 * +in_range:+ True if the current date/time is after +start+ and before +end+
 * +date_spec:+ True if the current date/time matches a +date_spec+ object
   (described below)
  indexterm:[operation,Constraint Expression]
  indexterm:[Constraint,Date/Time Expression,operation]
 
 |=========================================================
 
 [NOTE]
 ======
 As these comparisons (except for +date_spec+) include the time, the
 +eq+, +neq+, +gte+ and +lte+ operators have not been implemented since
 they would only be valid for a single second.
 ======
 
 === Date Specifications ===
 indexterm:[Date Specification]
 indexterm:[Resource,Constraint,Date Specification]
 
 +date_spec+ objects are used to create cron-like expressions relating
 to time.  Each field can contain a single number or a single range.
 Instead of defaulting to zero, any field not supplied is ignored.
 
 For example, +monthdays="1"+ matches the first day of every month and
 +hours="09-17"+ matches the hours between 9am and 5pm (inclusive).
 At this time, multiple ranges (e.g. +weekdays="1,2"+ or
 +weekdays="1-2,5-6"+) are not supported; depending on
 demand, this might be implemented in a future release.
         
 .Properties of a Date Specification
-[width="95%",cols="2m,5<",options="header",align="center"]
+[width="95%",cols="2m,<5",options="header",align="center"]
 |=========================================================
 
 |Field
 |Description
 
 |id
 |A unique name for the object
  indexterm:[id,Date Specification]
  indexterm:[Constraint,Date Specification,id]
 
 |hours
 |Allowed values: 0-23
  indexterm:[hours,Date Specification]
  indexterm:[Constraint,Date Specification,hours]
 
 |monthdays
 |Allowed values: 1-31 (depending on month and year)
  indexterm:[monthdays,Date Specification]
  indexterm:[Constraint,Date Specification,monthdays]
 
 |weekdays
 |Allowed values: 1-7 (1=Monday, 7=Sunday)
  indexterm:[weekdays,Date Specification]
  indexterm:[Constraint,Date Specification,weekdays]
 
 |yeardays
 |Allowed values: 1-366 (depending on the year)
  indexterm:[yeardays,Date Specification]
  indexterm:[Constraint,Date Specification,yeardays]
 
 |months
 |Allowed values: 1-12
  indexterm:[months,Date Specification]
  indexterm:[Constraint,Date Specification,months]
 
 |weeks
 |Allowed values: 1-53 (depending on weekyear)
  indexterm:[weeks,Date Specification]
  indexterm:[Constraint,Date Specification,weeks]
 
 |years
 |Year according to the Gregorian calendar
  indexterm:[years,Date Specification]
  indexterm:[Constraint,Date Specification,years]
 
 |weekyears
 |Year in which the week started; e.g. 1 January 2005
  can be specified as '2005-001 Ordinal', '2005-01-01 Gregorian' or '2004-W53-6
  Weekly' and thus would match +years="2005"+ or +weekyears="2004"+
  indexterm:[weekyears,Date Specification]
  indexterm:[Constraint,Date Specification,weekyears]
 
 |moon
 |Allowed values are 0-7 (0 is new, 4 is full moon). Seriously, you can
  use this. This was implemented to demonstrate the ease with which new
  comparisons could be added.
  indexterm:[moon,Date Specification]
  indexterm:[Constraint,Date Specification,moon]
 
 |=========================================================
 
 === Durations ===
 indexterm:[Duration]
 indexterm:[Resource,Constraint,Duration]
 
 Durations are used to calculate a value for +end+ when one is not
 supplied to +in_range+ operations. They contain the same fields as
 +date_spec+ objects but without the limitations (e.g. you can have a
 duration of 19 months). As with +date_specs+, any field not supplied is
 ignored.
 
 === Sample Time-Based Expressions ===
 
 A small sample of how time-based expressions can be used:
 
 ////
 On older versions of asciidoc, the [source] directive makes the title disappear
 ////
 
 .True if now is any time in the year 2005
 ====
 [source,XML]
 ----
 <rule id="rule1">
    <date_expression id="date_expr1" start="2005-001" operation="in_range">
     <duration years="1"/>
    </date_expression>
 </rule>
 ----
 ====
 
 .Equivalent expression
 ====
 [source,XML]
 ----
 <rule id="rule2">
    <date_expression id="date_expr2" operation="date_spec">
     <date_spec years="2005"/>
    </date_expression>
 </rule> 
 ----
 ====
 
 .9am-5pm Monday-Friday
 ====
 [source,XML]
 -------
 <rule id="rule3">
    <date_expression id="date_expr3" operation="date_spec">
     <date_spec hours="9-16" weekdays="1-5"/>
    </date_expression>
 </rule> 
 -------
 ====
 
 Please note that the +16+ matches up to +16:59:59+, as the numeric
 value (hour) still matches!
 
 .9am-5pm Monday through Friday or anytime Saturday
 ====
 [source,XML]
 -------
 <rule id="rule4" boolean-op="or">
    <date_expression id="date_expr4-1" operation="date_spec">
     <date_spec hours="9-16" weekdays="1-5"/>
    </date_expression>
    <date_expression id="date_expr4-2" operation="date_spec">
     <date_spec weekdays="6"/>
    </date_expression>
 </rule> 
 -------
 ====
 
 .9am-5pm or 9pm-12am Monday through Friday
 ====
 [source,XML]
 -------
 <rule id="rule5" boolean-op="and">
    <rule id="rule5-nested1" boolean-op="or">
     <date_expression id="date_expr5-1" operation="date_spec">
      <date_spec hours="9-16"/>
     </date_expression>
     <date_expression id="date_expr5-2" operation="date_spec">
      <date_spec hours="21-23"/>
     </date_expression>
    </rule>
    <date_expression id="date_expr5-3" operation="date_spec">
     <date_spec weekdays="1-5"/>
    </date_expression>
   </rule> 
 -------
 ====
 
 .Mondays in March 2005
 ====
 [source,XML]
 -------
 <rule id="rule6" boolean-op="and">
    <date_expression id="date_expr6-1" operation="date_spec">
     <date_spec weekdays="1"/>
    </date_expression>
    <date_expression id="date_expr6-2" operation="in_range"
      start="2005-03-01" end="2005-04-01"/>
   </rule> 
 -------
 ====
 
 [NOTE]
 ======
 Because no time is specified with the above dates, 00:00:00 is implied. This
 means that the range includes all of 2005-03-01 but none of 2005-04-01.
 You may wish to write +end="2005-03-31T23:59:59"+ to avoid confusion.
 ======
 
 .A full moon on Friday the 13th
 =====
 [source,XML]
 -------
 <rule id="rule7" boolean-op="and">
    <date_expression id="date_expr7" operation="date_spec">
     <date_spec weekdays="5" monthdays="13" moon="4"/>
    </date_expression>
 </rule> 
 -------
 =====
 
 == Using Rules to Determine Resource Location ==
 indexterm:[Rule,Determine Resource Location]
 indexterm:[Resource,Location,Determine by Rules]
 
 A location constraint may contain rules. When the constraint's outermost
 rule evaluates to +false+, the cluster treats the constraint as if it were not
 there.  When the rule evaluates to +true+, the node's preference for running
 the resource is updated with the score associated with the rule.
 
 If this sounds familiar, it is because you have been using a simplified
 syntax for location constraint rules already.  Consider the following
 location constraint:
       
 .Prevent myApacheRsc from running on c001n03
 =====
 [source,XML]
 -------
 <rsc_location id="dont-run-apache-on-c001n03" rsc="myApacheRsc" 
               score="-INFINITY" node="c001n03"/> 
 -------
 =====
 
 This constraint can be more verbosely written as:
 
 .Prevent myApacheRsc from running on c001n03 - expanded version
 =====
 [source,XML]
 -------
 <rsc_location id="dont-run-apache-on-c001n03" rsc="myApacheRsc">
     <rule id="dont-run-apache-rule" score="-INFINITY">
       <expression id="dont-run-apache-expr" attribute="#uname"
         operation="eq" value="c00n03"/>
     </rule>
 </rsc_location>
 -------
 =====
 
 The advantage of using the expanded form is that one can then add
 extra clauses to the rule, such as limiting the rule so that it applies
 only during certain times of the day or days of the week.
 
 === Location Rules Based on Other Node Properties ===
 
 The expanded form allows us to match on node properties other than the node's name.
 If we rated each machine's CPU power such that the cluster had the
 following nodes section:
 
 .A sample nodes section for use with score-attribute 
 =====
 [source,XML]
 -------
 <nodes>
    <node id="uuid1" uname="c001n01" type="normal">
       <instance_attributes id="uuid1-custom_attrs">
         <nvpair id="uuid1-cpu_mips" name="cpu_mips" value="1234"/>
       </instance_attributes>
    </node>
    <node id="uuid2" uname="c001n02" type="normal">
       <instance_attributes id="uuid2-custom_attrs">
         <nvpair id="uuid2-cpu_mips" name="cpu_mips" value="5678"/>
       </instance_attributes>
    </node>
 </nodes>
 -------
 =====
 
 then we could prevent resources from running on underpowered machines with this rule:
 
 [source,XML]
 -------
 <rule id="need-more-power-rule" score="-INFINITY">
    <expression id="need-more-power-expr" attribute="cpu_mips"
                operation="lt" value="3000"/>
 </rule>
 -------
 
 === Using +score-attribute+ Instead of +score+ ===
 
 When using +score-attribute+ instead of +score+, each node matched by
 the rule has its score adjusted differently, according to its value
 for the named node attribute.  Thus, in the previous example, if a
 rule used +score-attribute="cpu_mips"+, +c001n01+ would have its
 preference to run the resource increased by +1234+ whereas +c001n02+
 would have its preference increased by +5678+.
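 
 Such a constraint might look like the following sketch (the ids are
 illustrative):
 
 .Prefer nodes in proportion to their CPU power
 =====
 [source,XML]
 -------
 <rsc_location id="prefer-fast-nodes" rsc="myApacheRsc">
    <rule id="prefer-fast-nodes-rule" score-attribute="cpu_mips">
       <!-- each node's preference is increased by its own cpu_mips value -->
       <expression id="prefer-fast-nodes-expr" attribute="cpu_mips"
         operation="defined"/>
    </rule>
 </rsc_location>
 -------
 =====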
 
 == Using Rules to Control Resource Options ==
 
 Often some cluster nodes will be different from their peers. Sometimes,
 these differences -- e.g. the location of a binary or the names of network
 interfaces -- require resources to be configured differently depending
 on the machine they're hosted on.
 
 By defining multiple +instance_attributes+ objects for the resource
 and adding a rule to each, we can easily handle these special cases.
 
 In the example below, +mySpecialRsc+ will use eth1 and port 9999 when
 run on +node1+, eth2 and port 8888 on +node2+ and default to eth0 and
 port 9999 for all other nodes.
 
 .Defining different resource options based on the node name
 =====
 [source,XML]
 -------
 <primitive id="mySpecialRsc" class="ocf" type="Special" provider="me">
    <instance_attributes id="special-node1" score="3">
     <rule id="node1-special-case" score="INFINITY" >
      <expression id="node1-special-case-expr" attribute="#uname"
        operation="eq" value="node1"/>
     </rule>
     <nvpair id="node1-interface" name="interface" value="eth1"/>
    </instance_attributes>
    <instance_attributes id="special-node2" score="2" >
     <rule id="node2-special-case" score="INFINITY">
      <expression id="node2-special-case-expr" attribute="#uname"
        operation="eq" value="node2"/>
     </rule>
     <nvpair id="node2-interface" name="interface" value="eth2"/>
     <nvpair id="node2-port" name="port" value="8888"/>
    </instance_attributes>
    <instance_attributes id="defaults" score="1" >
     <nvpair id="default-interface" name="interface" value="eth0"/>
     <nvpair id="default-port" name="port" value="9999"/>
    </instance_attributes>
 </primitive>
 -------
 =====
 
 The order in which +instance_attributes+ objects are evaluated is
 determined by their score (highest to lowest).  If not supplied, score
 defaults to zero, and objects with an equal score are processed in
 listed order.  If the +instance_attributes+ object has no rule
 or a +rule+ that evaluates to +true+, then for any parameter the resource does
 not yet have a value for, the resource will use the parameter values defined by
 the +instance_attributes+.
 
 For example, given the configuration above, if the resource is placed on node1:
 
 . +special-node1+ has the highest score (3) and so is evaluated first;
   its rule evaluates to +true+, so +interface+ is set to +eth1+.
 . +special-node2+ is evaluated next with score 2, but its rule evaluates to +false+,
   so it is ignored.
 . +defaults+ is evaluated last with score 1, and has no rule, so its values
   are examined; +interface+ is already defined, so the value here is not used,
   but +port+ is not yet defined, so +port+ is set to +9999+.
 
 == Using Rules to Control Cluster Options ==
 indexterm:[Rule,Controlling Cluster Options]
 indexterm:[Cluster,Setting Options with Rules]
 
 Controlling cluster options is achieved in much the same manner as
 specifying different resource options on different nodes.
 
 The difference is that, because these are cluster options, one cannot
 (or should not, because they will not work) use node-attribute-based
 expressions.  The following example illustrates how to set a different
 +resource-stickiness+ value during and outside work hours.  This
 allows resources to automatically move back to their most preferred
 hosts, but at a time that (in theory) does not interfere with business
 activities.
 
 .Change +resource-stickiness+ during working hours
 =====
 [source,XML]
 -------
 <rsc_defaults>
    <meta_attributes id="core-hours" score="2">
       <rule id="core-hour-rule" score="0">
         <date_expression id="nine-to-five-Mon-to-Fri" operation="date_spec">
           <date_spec id="nine-to-five-Mon-to-Fri-spec" hours="9-16" weekdays="1-5"/>
         </date_expression>
       </rule>
       <nvpair id="core-stickiness" name="resource-stickiness" value="INFINITY"/>
    </meta_attributes>
    <meta_attributes id="after-hours" score="1" >
       <nvpair id="after-stickiness" name="resource-stickiness" value="0"/>
    </meta_attributes>
 </rsc_defaults>
 -------
 =====
 
 [[s-rules-recheck]]
 == Ensuring Time-Based Rules Take Effect ==
 
 A Pacemaker cluster is an event-driven system.  As such, it won't
 recalculate the best place for resources to run unless something
 (like a resource failure or configuration change) happens.  This can
 mean that a location constraint that only allows resource X to run
 between 9am and 5pm is not enforced.
 
 If you rely on time-based rules, the +cluster-recheck-interval+ cluster option
 (which defaults to 15 minutes) is essential.  This tells the cluster to
 periodically recalculate the ideal state of the cluster.
 
 For example, if you set +cluster-recheck-interval="5m"+, then sometime between
 09:00 and 09:05 the cluster would notice that it needs to start resource X,
 and between 17:00 and 17:05 it would realize that X needed to be stopped.
 The timing of the actual start and stop actions depends on what other actions
 the cluster may need to perform first.
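 
 As an illustration, the option can be set with `crm_attribute`, in the same
 way as other cluster options (the value shown is just an example):
 
 ----
 # crm_attribute -t crm_config -n cluster-recheck-interval -v 5m
 ----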
diff --git a/doc/Pacemaker_Explained/en-US/Ch-Status.txt b/doc/Pacemaker_Explained/en-US/Ch-Status.txt
index cc5eaa3ffe..e6394ad1a1 100644
--- a/doc/Pacemaker_Explained/en-US/Ch-Status.txt
+++ b/doc/Pacemaker_Explained/en-US/Ch-Status.txt
@@ -1,372 +1,372 @@
 = Status -- Here be dragons =
 
 Most users never need to understand the contents of the status section
 and can be happy with the output from `crm_mon`.
 
 However for those with a curious inclination, this section attempts to
 provide an overview of its contents.
     
 == Node Status ==
 
 indexterm:[Node,Status]
 indexterm:[Status of a Node]
 
 In addition to the cluster's configuration, the CIB holds an
 up-to-date representation of each cluster node in the +status+ section.
       
 .A bare-bones status entry for a healthy node *cl-virt-1*
 ======
 [source,XML]
 -----
   <node_state id="1" uname="cl-virt-1" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
    <transient_attributes id="1"/>
    <lrm id="1"/>
   </node_state>
 -----
 ======      
 
 Users are highly recommended _not_ to modify any part of a node's
 state _directly_.  The cluster will periodically regenerate the entire
 section from authoritative sources, so any changes should be done
 with the tools appropriate to those sources.
       
 .Authoritative Sources for State Information
-[width="95%",cols="1m,1<",options="header",align="center"]
+[width="95%",cols="1m,<1",options="header",align="center"]
 |=========================================================
 
 | CIB Object | Authoritative Source
 
 |node_state|pacemaker-controld
 
 |transient_attributes|pacemaker-attrd
 
 |lrm|pacemaker-execd
 
 |=========================================================
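 
 For instance, a transient node attribute (whose authoritative source is
 pacemaker-attrd) would be updated with a command along these lines, rather
 than by editing the +status+ section directly (the attribute name and value
 are illustrative):
 
 ----
 # crm_attribute --node cl-virt-1 --name my-attribute --update 1 --lifetime reboot
 ----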
 
 The fields used in the +node_state+ objects are named as they are
 largely for historical reasons and are rooted in Pacemaker's origins
 as the resource manager for the older Heartbeat project. They have remained
 unchanged to preserve compatibility with older versions.
       
 .Node Status Fields
-[width="95%",cols="1m,4<",options="header",align="center"]
+[width="95%",cols="1m,<4",options="header",align="center"]
 |=========================================================
 
 |Field |Description
 
 
 | id |
 indexterm:[id,Node Status]
 indexterm:[Node,Status,id]
 Unique identifier for the node. Corosync-based clusters use a numeric counter.
 
 | uname |
 indexterm:[uname,Node Status]
 indexterm:[Node,Status,uname]
 The node's name as known by the cluster
 
 | in_ccm |            
 indexterm:[in_ccm,Node Status]
 indexterm:[Node,Status,in_ccm]
 Is the node a member at the cluster communication layer? Allowed values:
 +true+, +false+.
 
 | crmd |
 indexterm:[crmd,Node Status]
 indexterm:[Node,Status,crmd]
 Is the node a member at the pacemaker layer? Allowed values: +online+,
 +offline+.
 
 | crm-debug-origin |
 indexterm:[crm-debug-origin,Node Status]
 indexterm:[Node,Status,crm-debug-origin]
 The name of the source function that made the most recent change (for debugging
 purposes).
 
 | join |
 indexterm:[join,Node Status]
 indexterm:[Node,Status,join]
 Does the node participate in hosting resources? Allowed values: +down+,
 +pending+, +member+, +banned+.
             
 | expected |
 indexterm:[expected,Node Status]
 indexterm:[Node,Status,expected]
 Expected value for +join+.
 
 |=========================================================
 
 The cluster uses these fields to determine whether, at the node level, the
 node is healthy or is in a failed state and needs to be fenced.
 
 == Transient Node Attributes ==
 
 Like regular <<s-node-attributes,node attributes>>, the name/value
 pairs listed in the +transient_attributes+ section help to describe the
 node.  However they are forgotten by the cluster when the node goes offline.
 This can be useful, for instance, when you want a node to be in standby mode
 (not able to run resources) just until the next reboot.
       
 In addition to any values the administrator sets, the cluster will
 also store information about failed resources here.
       
 .A set of transient node attributes for node *cl-virt-1*
 ======
 [source,XML]
 -----
 <transient_attributes id="cl-virt-1">
   <instance_attributes id="status-cl-virt-1">
      <nvpair id="status-cl-virt-1-pingd" name="pingd" value="3"/>
      <nvpair id="status-cl-virt-1-probe_complete" name="probe_complete" value="true"/>
      <nvpair id="status-cl-virt-1-fail-count-pingd:0.monitor_30000" name="fail-count-pingd:0#monitor_30000" value="1"/>
      <nvpair id="status-cl-virt-1-last-failure-pingd:0" name="last-failure-pingd:0" value="1239009742"/>
   </instance_attributes>
 </transient_attributes>
 -----
 ======
 
 In the above example, we can see that a monitor on the +pingd:0+ resource has
 failed once, at 09:22:22 UTC 6 April 2009.
 footnote:[
 You can use the standard `date` command to print a human-readable version of
 any seconds-since-epoch value, for example `date -d @1239009742`.
 ]
 We also see that the node is connected to three *pingd* peers and that
 all known resources have been checked for on this machine (+probe_complete+).
       
 == Operation History ==
 indexterm:[Operation History] 
 
 A node's resource history is held in the +lrm_resources+ tag (a child
 of the +lrm+ tag). The information stored here is sufficient for the
 cluster to stop the resource safely if it is removed from the
 +configuration+ section. Specifically, the resource's
 +id+, +class+, +type+ and +provider+ are stored.
 
 .A record of the +apcstonith+ resource
 ======
 [source,XML]
 <lrm_resource id="apcstonith" type="apcmastersnmp" class="stonith"/>
 ======
 
 Additionally, we store the last job for every combination of
 +resource+, +action+ and +interval+.  The concatenation of the values in
 this tuple is used to create the id of the +lrm_rsc_op+ object; for
 example, a recurring 30-second monitor of +pingd:0+ is recorded as
 +pingd:0_monitor_30000+.
 
 .Contents of an +lrm_rsc_op+ job
-[width="95%",cols="2m,5<",options="header",align="center"]
+[width="95%",cols="2m,<5",options="header",align="center"]
 |=========================================================
 
 |Field
 |Description
 
 | id |
 indexterm:[id,Action Status]
 indexterm:[Action,Status,id]
 
 Identifier for the job constructed from the resource's +id+,
 +operation+ and +interval+.
             
 | call-id |
 indexterm:[call-id,Action Status]
 indexterm:[Action,Status,call-id]
 
 The job's ticket number. Used as a sort key to determine the order in
 which the jobs were executed.
             
 | operation |
 indexterm:[operation,Action Status]
 indexterm:[Action,Status,operation]
 
 The action the resource agent was invoked with.
 
 | interval |
 indexterm:[interval,Action Status]
 indexterm:[Action,Status,interval]
 
 The frequency, in milliseconds, at which the operation will be
 repeated. A one-off job is indicated by 0.
             
 | op-status |
 indexterm:[op-status,Action Status]
 indexterm:[Action,Status,op-status]
 
 The job's status. Generally this will be either 0 (done) or -1
 (pending). Rarely used in favor of +rc-code+.
             
 | rc-code |
 indexterm:[rc-code,Action Status]
 indexterm:[Action,Status,rc-code]
 
 The job's result. Refer to the 'Resource Agents' chapter of 'Pacemaker
 Administration' for details on what the values here mean and how they are
 interpreted.
 
 | last-run |
 indexterm:[last-run,Action Status]
 indexterm:[Action,Status,last-run]
 
 Machine-local date/time, in seconds since epoch,
 at which the job was executed. For diagnostic purposes.
 
 | last-rc-change |
 indexterm:[last-rc-change,Action Status]
 indexterm:[Action,Status,last-rc-change]
 
 Machine-local date/time, in seconds since epoch,
 at which the job first returned the current value of +rc-code+.
 For diagnostic purposes.
 
 | exec-time |
 indexterm:[exec-time,Action Status]
 indexterm:[Action,Status,exec-time]
 
 Time, in milliseconds, that the job was running for.
 For diagnostic purposes.
 
 | queue-time |
 indexterm:[queue-time,Action Status]
 indexterm:[Action,Status,queue-time]
 
 Time, in milliseconds, that the job was queued for in the LRMd.
 For diagnostic purposes.
 
 | crm_feature_set |
 indexterm:[crm_feature_set,Action Status]
 indexterm:[Action,Status,crm_feature_set]
 
 The version which this job description conforms to. Used when
 processing +op-digest+.
 
 | transition-key |
 indexterm:[transition-key,Action Status]
 indexterm:[Action,Status,transition-key]
 
 A concatenation of the job's graph action number, the graph number,
 the expected result and the UUID of the controller instance that scheduled
 it. This is used to construct +transition-magic+ (below).
 
 | transition-magic |
 indexterm:[transition-magic,Action Status]
 indexterm:[Action,Status,transition-magic]
 
 A concatenation of the job's +op-status+, +rc-code+ and
 +transition-key+. Guaranteed to be unique for the life of the cluster
 (which ensures it is part of CIB update notifications) and contains
 all the information needed for the controller to correctly analyze and
 process the completed job. Most importantly, the decomposed elements
 tell the controller if the job entry was expected and whether it failed.
             
 | op-digest |
 indexterm:[op-digest,Action Status]
 indexterm:[Action,Status,op-digest]
 
 An MD5 sum representing the parameters passed to the job. Used to
 detect changes to the configuration, to restart resources if
 necessary.
 
 | crm-debug-origin |
 indexterm:[crm-debug-origin,Action Status]
 indexterm:[Action,Status,crm-debug-origin]
 
 The origin of the current values.
 For diagnostic purposes.
 
 |=========================================================
 
 === Simple Operation History Example ===
         
 .A monitor operation (determines current state of the +apcstonith+ resource)
 ======
 [source,XML]
 -----
 <lrm_resource id="apcstonith" type="apcmastersnmp" class="stonith">
   <lrm_rsc_op id="apcstonith_monitor_0" operation="monitor" call-id="2"
     rc-code="7" op-status="0" interval="0"
     crm-debug-origin="do_update_resource" crm_feature_set="3.0.1"
     op-digest="2e3da9274d3550dc6526fb24bfcbcba0"
     transition-key="22:2:7:2668bbeb-06d5-40f9-936d-24cb7f87006a"
     transition-magic="0:7;22:2:7:2668bbeb-06d5-40f9-936d-24cb7f87006a"
     last-run="1239008085" last-rc-change="1239008085" exec-time="10" queue-time="0"/>
 </lrm_resource>
 -----
 ======
         
 In the above example, the job is a non-recurring monitor operation
 (often referred to as a "probe") for the +apcstonith+ resource.
 
 The cluster schedules probes for every configured resource on a node when
 the node first starts, in order to determine the resource's current state
 before it takes any further action.
         
 From the +transition-key+, we can see that this was the 22nd action of
 the 2nd graph produced by this instance of the controller
 (2668bbeb-06d5-40f9-936d-24cb7f87006a).
 
 The third field of the +transition-key+ contains a 7, which indicates
 that the job expects to find the resource inactive. By looking at the +rc-code+
 property, we see that this was the case.
 
 As that is the only job recorded for this node, we can conclude that
 the cluster started the resource elsewhere.
 
 === Complex Operation History Example ===
         
 .Resource history of a +pingd+ clone with multiple jobs
 ======
 [source,XML]
 -----
 <lrm_resource id="pingd:0" type="pingd" class="ocf" provider="pacemaker">
   <lrm_rsc_op id="pingd:0_monitor_30000" operation="monitor" call-id="34"
     rc-code="0" op-status="0" interval="30000"
     crm-debug-origin="do_update_resource" crm_feature_set="3.0.1"
     transition-key="10:11:0:2668bbeb-06d5-40f9-936d-24cb7f87006a"
     ...
     last-run="1239009741" last-rc-change="1239009741" exec-time="10" queue-time="0"/>
   <lrm_rsc_op id="pingd:0_stop_0" operation="stop"
     crm-debug-origin="do_update_resource" crm_feature_set="3.0.1" call-id="32"
     rc-code="0" op-status="0" interval="0"
     transition-key="11:11:0:2668bbeb-06d5-40f9-936d-24cb7f87006a"
     ...
     last-run="1239009741" last-rc-change="1239009741" exec-time="10" queue-time="0"/>
   <lrm_rsc_op id="pingd:0_start_0" operation="start" call-id="33"
     rc-code="0" op-status="0" interval="0"
     crm-debug-origin="do_update_resource" crm_feature_set="3.0.1"
     transition-key="31:11:0:2668bbeb-06d5-40f9-936d-24cb7f87006a"
     ...
     last-run="1239009741" last-rc-change="1239009741" exec-time="10" queue-time="0" />
   <lrm_rsc_op id="pingd:0_monitor_0" operation="monitor" call-id="3"
     rc-code="0" op-status="0" interval="0"
     crm-debug-origin="do_update_resource" crm_feature_set="3.0.1"
     transition-key="23:2:7:2668bbeb-06d5-40f9-936d-24cb7f87006a"
     ...
     last-run="1239008085" last-rc-change="1239008085" exec-time="20" queue-time="0"/>
   </lrm_resource>
 -----
 ======
 
 When more than one job record exists, it is important to first sort
 them by +call-id+ before interpreting them.
 
 Once sorted, the above example can be summarized as:
 
 . A non-recurring monitor operation returning 7 (not running), with a +call-id+ of 3
 . A stop operation returning 0 (success), with a +call-id+ of 32
 . A start operation returning 0 (success), with a +call-id+ of 33
 . A recurring monitor returning 0 (success), with a +call-id+ of 34
 
 
 The cluster processes each job record to build up a picture of the
 resource's state.  After the first and second entries, it is
 considered stopped, and after the third it is considered active.
 
 Based on the last operation, we can tell that the resource is
 currently active.
 
 Additionally, from the presence of a +stop+ operation with a lower
 +call-id+ than that of the +start+ operation, we can conclude that the
 resource has been restarted.  Specifically this occurred as part of
 actions 11 and 31 of transition 11 from the controller instance with the key
 +2668bbeb...+.  This information can be helpful for locating the
 relevant section of the logs when looking for the source of a failure.
diff --git a/doc/Pacemaker_Explained/en-US/Ch-Stonith.txt b/doc/Pacemaker_Explained/en-US/Ch-Stonith.txt
index 7c11c851fb..ff7cf5f98b 100644
--- a/doc/Pacemaker_Explained/en-US/Ch-Stonith.txt
+++ b/doc/Pacemaker_Explained/en-US/Ch-Stonith.txt
@@ -1,939 +1,939 @@
 = STONITH =
 
 ////
 We prefer [[ch-stonith]], but older versions of asciidoc don't deal well
 with that construct for chapter headings
 ////
 anchor:ch-stonith[Chapter 13, STONITH]
 indexterm:[STONITH, Configuration]
 
 == What Is STONITH? ==
 
 STONITH (an acronym for "Shoot The Other Node In The Head"), also called
 'fencing', protects your data from being corrupted by rogue nodes or concurrent
 access.
 
 Just because a node is unresponsive doesn't mean it has stopped
 accessing your data. The only way to be 100% sure that your data is
 safe is to use STONITH, so we can be certain that the node is truly
 offline before allowing the data to be accessed from another node.
 
 STONITH also has a role to play in the event that a clustered service
 cannot be stopped. In this case, the cluster uses STONITH to force the
 whole node offline, thereby making it safe to start the service
 elsewhere.
 
 == What STONITH Device Should You Use? ==
 
 It is crucial that the STONITH device can allow the cluster to
 differentiate between a node failure and a network one.
 
 The biggest mistake people make in choosing a STONITH device is to
 use a remote power switch (such as many on-board IPMI controllers) that
 shares power with the node it controls. In such cases, the cluster
 cannot be sure if the node is really offline, or active and suffering
 from a network fault.
 
 Likewise, any device that relies on the machine being active (such as
 SSH-based "devices" used during testing) is inappropriate.
 
 == Special Treatment of STONITH Resources ==
 
 STONITH resources are somewhat special in Pacemaker.
 
 STONITH may be initiated by pacemaker or by other parts of the cluster
 (such as resources like DRBD or DLM). To accommodate this, pacemaker
 does not require the STONITH resource to be in the 'started' state
 in order to be used, thus allowing reliable use of STONITH devices in such a
 case.
 
 All nodes have access to STONITH devices' definitions and instantiate them
 on-the-fly when needed, but preference is given to 'verified' instances, which
 are the ones that are 'started' according to the cluster's knowledge.
 
 In the case of a cluster split, the partition with a verified instance
 will have a slight advantage, because the STONITH daemon in the other partition
 will have to hear from all its current peers before choosing a node to
 perform the fencing.
 
 Fencing resources do work the same as regular resources in some respects:
 
 * +target-role+ can be used to enable or disable the resource (see the sketch after this list)
 * Location constraints can be used to prevent a specific node from using the resource
 
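 For example, a sketch of disabling a fencing resource administratively by
 setting +target-role+ in its meta-attributes (the resource ID is illustrative):
 
 [source,XML]
 ----
 <primitive id="Fencing" class="stonith" type="fence_ipmilan">
   <meta_attributes id="Fencing-meta">
     <!-- Illustrative IDs; Stopped disables the device, Started re-enables it -->
     <nvpair id="Fencing-target-role" name="target-role" value="Stopped"/>
   </meta_attributes>
   ...
 </primitive>
 ----
 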
 [IMPORTANT]
 ===========
 Currently there is a limitation that fencing resources may only have
 one set of meta-attributes and one set of instance attributes.  This
 can be revisited if it becomes a significant limitation for people.
 ===========
 
 See the table below or run `man pacemaker-fenced` to see special instance attributes
 that may be set for any fencing resource, regardless of fence agent.
 
 .Additional Properties of Fencing Resources
-[width="95%",cols="5m,2,3,10<a",options="header",align="center"]
+[width="95%",cols="5m,2,3,<10",options="header",align="center"]
 |=========================================================
 
 |Field
 |Type
 |Default
 |Description
 
 |stonith-timeout
 |NA
 |NA
-|Older versions used this to override the default period to wait for a STONITH (reboot, on, off) action to complete for this device.
+a|Older versions used this to override the default period to wait for a STONITH (reboot, on, off) action to complete for this device.
  It has been replaced by the +pcmk_reboot_timeout+ and +pcmk_off_timeout+ properties.
  indexterm:[stonith-timeout,Fencing]
  indexterm:[Fencing,Property,stonith-timeout]
 
 ////
  priority
  integer
  0
  The priority of the STONITH resource. Devices are tried in order of highest priority to lowest.
  indexterm:[priority,Fencing]
  indexterm:[Fencing,Property,priority]
 ////
 
 |provides
 |string
 |
 |Any special capability provided by the fence device. Currently, only one such
  capability is meaningful: +unfencing+ (see <<s-unfencing>>).
  indexterm:[provides,Fencing]
  indexterm:[Fencing,Property,provides]
 
 |pcmk_host_map
 |string
 |
 |A mapping of host names to port numbers for devices that do not support host names.
  Example: +node1:1;node2:2,3+ tells the cluster to use port 1 for
  *node1* and ports 2 and 3 for *node2*.
  indexterm:[pcmk_host_map,Fencing]
  indexterm:[Fencing,Property,pcmk_host_map]
 
 |pcmk_host_list
 |string
 |
 |A list of machines controlled by this device (optional unless
 +pcmk_host_check+ is +static-list+).
  indexterm:[pcmk_host_list,Fencing]
  indexterm:[Fencing,Property,pcmk_host_list]
 
 |pcmk_host_check
 |string
 |dynamic-list
-|How to determine which machines are controlled by the device.
+a|How to determine which machines are controlled by the device.
  Allowed values:
 
 * +dynamic-list:+ query the device
 * +static-list:+ check the +pcmk_host_list+ attribute
 * +none:+ assume every device can fence every machine
 
 indexterm:[pcmk_host_check,Fencing]
 indexterm:[Fencing,Property,pcmk_host_check]
 
 |pcmk_delay_max
 |time
 |0s
 |Enable a random delay of up to the time specified before executing stonith
 actions. This is sometimes used in two-node clusters to ensure that the
 nodes don't fence each other at the same time. The overall delay introduced
 by pacemaker is derived from this random delay value plus a static delay,
 such that the sum is kept below the maximum delay.
 
 indexterm:[pcmk_delay_max,Fencing]
 indexterm:[Fencing,Property,pcmk_delay_max]
 
 |pcmk_delay_base
 |time
 |0s
 |Enable a static delay before executing stonith actions. This can be used
  e.g. in two-node clusters to ensure that the nodes don't fence each other,
  by having separate fencing resources with different values. The node that is
  fenced with the shorter delay will lose a fencing race. The overall delay
  introduced by pacemaker is derived from this value plus a random delay such
  that the sum is kept below the maximum delay.
 
 indexterm:[pcmk_delay_base,Fencing]
 indexterm:[Fencing,Property,pcmk_delay_base]
 
 |pcmk_action_limit
 |integer
 |1
 |The maximum number of actions that can be performed in parallel on this
  device, if the cluster option +concurrent-fencing+ is +true+. -1 is unlimited.
 
 indexterm:[pcmk_action_limit,Fencing]
 indexterm:[Fencing,Property,pcmk_action_limit]
 
 |pcmk_host_argument
 |string
 |port
 |'Advanced use only.' Which parameter should be supplied to the resource agent
 to identify the node to be fenced. Some devices do not support the standard
 +port+ parameter or may provide additional ones. Use this to specify an
 alternate, device-specific parameter. A value of +none+ tells the
 cluster not to supply any additional parameters.
  indexterm:[pcmk_host_argument,Fencing]
  indexterm:[Fencing,Property,pcmk_host_argument]
 
 |pcmk_reboot_action
 |string
 |reboot
 |'Advanced use only.' The command to send to the resource agent in order to
 reboot a node. Some devices do not support the standard commands or may provide
 additional ones. Use this to specify an alternate, device-specific command.
  indexterm:[pcmk_reboot_action,Fencing]
  indexterm:[Fencing,Property,pcmk_reboot_action]
 
 |pcmk_reboot_timeout
 |time
 |60s
 |'Advanced use only.' Specify an alternate timeout to use for `reboot` actions
 instead of the value of +stonith-timeout+. Some devices need much more or less
 time to complete than normal. Use this to specify an alternate, device-specific
 timeout.
  indexterm:[pcmk_reboot_timeout,Fencing]
  indexterm:[Fencing,Property,pcmk_reboot_timeout]
  indexterm:[stonith-timeout,Fencing]
  indexterm:[Fencing,Property,stonith-timeout]
 
 |pcmk_reboot_retries
 |integer
 |2
 |'Advanced use only.' The maximum number of times to retry the `reboot` command
 within the timeout period. Some devices do not support multiple connections, and
 operations may fail if the device is busy with another task, so Pacemaker will
 automatically retry the operation, if there is time remaining. Use this option
 to alter the number of times Pacemaker retries before giving up.
  indexterm:[pcmk_reboot_retries,Fencing]
  indexterm:[Fencing,Property,pcmk_reboot_retries]
 
 |pcmk_off_action
 |string
 |off
 |'Advanced use only.' The command to send to the resource agent in order to
 shut down a node. Some devices do not support the standard commands or may provide
 additional ones. Use this to specify an alternate, device-specific command.
  indexterm:[pcmk_off_action,Fencing]
  indexterm:[Fencing,Property,pcmk_off_action]
 
 |pcmk_off_timeout
 |time
 |60s
 |'Advanced use only.' Specify an alternate timeout to use for `off` actions
 instead of the value of +stonith-timeout+. Some devices need much more or less
 time to complete than normal. Use this to specify an alternate, device-specific
 timeout.
  indexterm:[pcmk_off_timeout,Fencing]
  indexterm:[Fencing,Property,pcmk_off_timeout]
  indexterm:[stonith-timeout,Fencing]
  indexterm:[Fencing,Property,stonith-timeout]
 
 |pcmk_off_retries
 |integer
 |2
 |'Advanced use only.' The maximum number of times to retry the `off` command
 within the timeout period. Some devices do not support multiple connections, and
 operations may fail if the device is busy with another task, so Pacemaker will
 automatically retry the operation, if there is time remaining. Use this option
 to alter the number of times Pacemaker retries before giving up.
  indexterm:[pcmk_off_retries,Fencing]
  indexterm:[Fencing,Property,pcmk_off_retries]
 
 |pcmk_list_action
 |string
 |list
 |'Advanced use only.' The command to send to the resource agent in order to
 list nodes. Some devices do not support the standard commands or may provide
 additional ones. Use this to specify an alternate, device-specific command.
  indexterm:[pcmk_list_action,Fencing]
  indexterm:[Fencing,Property,pcmk_list_action]
 
 |pcmk_list_timeout
 |time
 |60s
 |'Advanced use only.' Specify an alternate timeout to use for `list` actions
 instead of the value of +stonith-timeout+. Some devices need much more or less
 time to complete than normal. Use this to specify an alternate, device-specific
 timeout.
  indexterm:[pcmk_list_timeout,Fencing]
  indexterm:[Fencing,Property,pcmk_list_timeout]
 
 |pcmk_list_retries
 |integer
 |2
 |'Advanced use only.' The maximum number of times to retry the `list` command
 within the timeout period. Some devices do not support multiple connections, and
 operations may fail if the device is busy with another task, so Pacemaker will
 automatically retry the operation, if there is time remaining. Use this option
 to alter the number of times Pacemaker retries before giving up.
  indexterm:[pcmk_list_retries,Fencing]
  indexterm:[Fencing,Property,pcmk_list_retries]
 
 |pcmk_monitor_action
 |string
 |monitor
 |'Advanced use only.' The command to send to the resource agent in order to
 report extended status. Some devices do not support the standard commands or may provide
 additional ones. Use this to specify an alternate, device-specific command.
  indexterm:[pcmk_monitor_action,Fencing]
  indexterm:[Fencing,Property,pcmk_monitor_action]
 
 |pcmk_monitor_timeout
 |time
 |60s
 |'Advanced use only.' Specify an alternate timeout to use for `monitor` actions
 instead of the value of +stonith-timeout+. Some devices need much more or less
 time to complete than normal. Use this to specify an alternate, device-specific
 timeout.
  indexterm:[pcmk_monitor_timeout,Fencing]
  indexterm:[Fencing,Property,pcmk_monitor_timeout]
 
 |pcmk_monitor_retries
 |integer
 |2
 |'Advanced use only.' The maximum number of times to retry the `monitor` command
 within the timeout period. Some devices do not support multiple connections, and
 operations may fail if the device is busy with another task, so Pacemaker will
 automatically retry the operation, if there is time remaining. Use this option
 to alter the number of times Pacemaker retries before giving up.
  indexterm:[pcmk_monitor_retries,Fencing]
  indexterm:[Fencing,Property,pcmk_monitor_retries]
 
 |pcmk_status_action
 |string
 |status
 |'Advanced use only.' The command to send to the resource agent in order to
 report status. Some devices do not support the standard commands or may provide
 additional ones. Use this to specify an alternate, device-specific command.
  indexterm:[pcmk_status_action,Fencing]
  indexterm:[Fencing,Property,pcmk_status_action]
 
 |pcmk_status_timeout
 |time
 |60s
 |'Advanced use only.' Specify an alternate timeout to use for `status` actions
 instead of the value of +stonith-timeout+. Some devices need much more or less
 time to complete than normal. Use this to specify an alternate, device-specific
 timeout.
  indexterm:[pcmk_status_timeout,Fencing]
  indexterm:[Fencing,Property,pcmk_status_timeout]
 
 |pcmk_status_retries
 |integer
 |2
 |'Advanced use only.' The maximum number of times to retry the `status` command
 within the timeout period. Some devices do not support multiple connections, and
 operations may fail if the device is busy with another task, so Pacemaker will
 automatically retry the operation, if there is time remaining. Use this option
 to alter the number of times Pacemaker retries before giving up.
  indexterm:[pcmk_status_retries,Fencing]
  indexterm:[Fencing,Property,pcmk_status_retries]
 
 |=========================================================
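 
 As an illustration, these properties are set as ordinary instance attributes
 of the fencing resource; a sketch using a PDU-style device (the IDs, address
 and host-to-port mapping are illustrative):
 
 [source,XML]
 ----
 <primitive id="Fencing-pdu" class="stonith" type="fence_apc_snmp">
   <instance_attributes id="Fencing-pdu-params">
     <!-- Illustrative values: device address, host-to-port map, random delay cap -->
     <nvpair id="Fencing-pdu-ipaddr" name="ipaddr" value="198.51.100.1"/>
     <nvpair id="Fencing-pdu-host-map" name="pcmk_host_map" value="prod-mysql1:10;prod-mysql2:11"/>
     <nvpair id="Fencing-pdu-delay-max" name="pcmk_delay_max" value="15s"/>
   </instance_attributes>
 </primitive>
 ----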
 
 [[s-unfencing]]
 == Unfencing ==
 
 Most fence devices cut the power to the target. By contrast, fence devices that
 perform 'fabric fencing' cut off a node's access to some critical resource,
 such as a shared disk or a network switch.
 
 With fabric fencing, it is expected that the cluster will fence the node, and
 then a system administrator must manually investigate what went wrong, correct
 any issues found, then reboot (or restart the cluster services on) the node.
 
 Once the node reboots and rejoins the cluster, some fabric fencing devices
 require an explicit command to restore the node's access to the critical
 resource. This capability is called 'unfencing' and is typically implemented
 as the fence agent's +on+ command.
 
 If any cluster resource has +requires+ set to +unfencing+, then that resource
 will not be probed or started on a node until that node has been unfenced.
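 
 A sketch of a resource requiring unfencing (the resource ID and agent are
 illustrative; any resource depending on the fabric-fenced access would be
 configured similarly):
 
 [source,XML]
 ----
 <primitive id="shared-fs" class="ocf" provider="heartbeat" type="Filesystem">
   <meta_attributes id="shared-fs-meta">
     <!-- Illustrative: this resource is not probed or started on a node
          until that node has been unfenced -->
     <nvpair id="shared-fs-requires" name="requires" value="unfencing"/>
   </meta_attributes>
   ...
 </primitive>
 ----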
 
 == Configuring STONITH ==
 
 [NOTE]
 ===========
 Higher-level configuration shells include functionality to simplify the
 process below, particularly the step for deciding which parameters are
 required.  However since this document deals only with core
 components, you should refer to the STONITH chapter of the
 http://www.clusterlabs.org/doc/[Clusters from Scratch] guide for those details.
 ===========
 
 . Find the correct driver:
 +
 ----
 # stonith_admin --list-installed
 ----
 
 . Find the required parameters associated with the device
   (replacing $AGENT_NAME with the name obtained from the previous step):
 +
 ----
 # stonith_admin --metadata --agent $AGENT_NAME
 ----
 
 . Create a file called +stonith.xml+ containing a primitive resource
   with a class of +stonith+, a type equal to the agent name obtained earlier,
   and a parameter for each of the values returned in the previous step.
 
 . If the device does not know how to fence nodes based on their uname,
   you may also need to set the special +pcmk_host_map+ parameter.  See
   `man pacemaker-fenced` for details.
 
 . If the device does not support the `list` command, you may also need
   to set the special +pcmk_host_list+ and/or +pcmk_host_check+
   parameters.  See `man pacemaker-fenced` for details.
 
 . If the device does not expect the victim to be specified with the
   `port` parameter, you may also need to set the special
   +pcmk_host_argument+ parameter. See `man pacemaker-fenced` for details.
 
 . Upload it into the CIB using cibadmin:
 +
 ----
 # cibadmin -C -o resources --xml-file stonith.xml
 ----
 
 . Set +stonith-enabled+ to true:
 +
 ----
 # crm_attribute -t crm_config -n stonith-enabled -v true
 ----
 
 . Once the stonith resource is running, you can test it by executing the
   following (although you might want to stop the cluster on that machine
   first):
 +
 ----
 # stonith_admin --reboot nodename
 ----
 
 === Example STONITH Configuration ===
 
 Assume we have a chassis containing four nodes and an IPMI device
 active on 192.0.2.1. We would choose the `fence_ipmilan` driver,
 and obtain the following list of parameters:
 
 .Obtaining a list of STONITH Parameters
 ====
 ----
 # stonith_admin --metadata -a fence_ipmilan
 ----
 
 [source,XML]
 ----
 <resource-agent name="fence_ipmilan" shortdesc="Fence agent for IPMI over LAN">
   <symlink name="fence_ilo3" shortdesc="Fence agent for HP iLO3"/>
   <symlink name="fence_ilo4" shortdesc="Fence agent for HP iLO4"/>
   <symlink name="fence_idrac" shortdesc="Fence agent for Dell iDRAC"/>
   <symlink name="fence_imm" shortdesc="Fence agent for IBM Integrated Management Module"/>
   <longdesc>
   </longdesc>
   <vendor-url>
   </vendor-url>
   <parameters>
     <parameter name="auth" unique="0" required="0">
       <getopt mixed="-A"/>
       <content type="string"/>
       <shortdesc lang="en">
       </shortdesc>
     </parameter>
     <parameter name="ipaddr" unique="0" required="1">
       <getopt mixed="-a"/>
       <content type="string"/>
       <shortdesc lang="en">
       </shortdesc>
     </parameter>
     <parameter name="passwd" unique="0" required="0">
       <getopt mixed="-p"/>
       <content type="string"/>
       <shortdesc lang="en">
       </shortdesc>
     </parameter>
     <parameter name="passwd_script" unique="0" required="0">
       <getopt mixed="-S"/>
       <content type="string"/>
       <shortdesc lang="en">
       </shortdesc>
     </parameter>
     <parameter name="lanplus" unique="0" required="0">
       <getopt mixed="-P"/>
       <content type="boolean"/>
       <shortdesc lang="en">
       </shortdesc>
     </parameter>
     <parameter name="login" unique="0" required="0">
       <getopt mixed="-l"/>
       <content type="string"/>
       <shortdesc lang="en">
       </shortdesc>
     </parameter>
     <parameter name="action" unique="0" required="0">
       <getopt mixed="-o"/>
       <content type="string" default="reboot"/>
       <shortdesc lang="en">
       </shortdesc>
     </parameter>
     <parameter name="timeout" unique="0" required="0">
       <getopt mixed="-t"/>
       <content type="string"/>
       <shortdesc lang="en">
       </shortdesc>
     </parameter>
     <parameter name="cipher" unique="0" required="0">
       <getopt mixed="-C"/>
       <content type="string"/>
       <shortdesc lang="en">
       </shortdesc>
     </parameter>
     <parameter name="method" unique="0" required="0">
       <getopt mixed="-M"/>
       <content type="string" default="onoff"/>
       <shortdesc lang="en">
       </shortdesc>
     </parameter>
     <parameter name="power_wait" unique="0" required="0">
       <getopt mixed="-T"/>
       <content type="string" default="2"/>
       <shortdesc lang="en">
       </shortdesc>
     </parameter>
     <parameter name="delay" unique="0" required="0">
       <getopt mixed="-f"/>
       <content type="string"/>
       <shortdesc lang="en">
       </shortdesc>
     </parameter>
     <parameter name="privlvl" unique="0" required="0">
       <getopt mixed="-L"/>
       <content type="string"/>
       <shortdesc lang="en">
       </shortdesc>
     </parameter>
     <parameter name="verbose" unique="0" required="0">
       <getopt mixed="-v"/>
       <content type="boolean"/>
       <shortdesc lang="en">
       </shortdesc>
     </parameter>
   </parameters>
   <actions>
     <action name="on"/>
     <action name="off"/>
     <action name="reboot"/>
     <action name="status"/>
     <action name="diag"/>
     <action name="list"/>
     <action name="monitor"/>
     <action name="metadata"/>
     <action name="stop" timeout="20s"/>
     <action name="start" timeout="20s"/>
   </actions>
 </resource-agent>
 ----
 ====
 
 Based on that, we would create a STONITH resource fragment that might look
 like this:
 
 .An IPMI-based STONITH Resource
 ====
 [source,XML]
 ----
 <primitive id="Fencing" class="stonith" type="fence_ipmilan" >
   <instance_attributes id="Fencing-params" >
     <nvpair id="Fencing-passwd" name="passwd" value="testuser" />
     <nvpair id="Fencing-login" name="login" value="abc123" />
     <nvpair id="Fencing-ipaddr" name="ipaddr" value="192.0.2.1" />
     <nvpair id="Fencing-pcmk_host_list" name="pcmk_host_list" value="pcmk-1 pcmk-2" />
   </instance_attributes>
   <operations >
     <op id="Fencing-monitor-10m" interval="10m" name="monitor" timeout="300s" />
   </operations>
 </primitive>
 ----
 ====
 
 Finally, we need to enable STONITH:
 ----
 # crm_attribute -t crm_config -n stonith-enabled -v true
 ----
 
 == Advanced STONITH Configurations ==
 
 Some people consider that having one fencing device is a single point
 of failure footnote:[Not true, since a node or resource must fail
 before fencing even has a chance to]; others prefer removing the node
 from the storage and network instead of turning it off.
 
 Whatever the reason, Pacemaker supports fencing nodes with multiple
 devices through a feature called 'fencing topologies'.
 
 Simply create the individual devices as you normally would, then
 define one or more +fencing-level+ entries in the +fencing-topology+ section of
 the configuration.
 
 * Each fencing level is attempted in order of ascending +index+. Allowed
   values are 1 through 9.
 * If a device fails, processing terminates for the current level.
   No further devices in that level are exercised, and the next level is attempted instead.
 * If the operation succeeds for all the listed devices in a level, the level is deemed to have passed.
 * The operation is finished when a level has passed (success), or all levels have been attempted (failed).
 * If the operation failed, the next step is determined by the scheduler
   and/or the controller.
 
 Some possible uses of topologies include:
 
 * Try poison-pill and fail back to power
 * Try disk and network, and fall back to power if either fails
 * Initiate a kdump and then poweroff the node
 
 .Properties of Fencing Levels
-[width="95%",cols="1m,3<",options="header",align="center"]
+[width="95%",cols="1m,<3",options="header",align="center"]
 |=========================================================
 
 |Field
 |Description
 
 |id
 |A unique name for the level
  indexterm:[id,fencing-level]
  indexterm:[Fencing,fencing-level,id]
 
 |target
 |The name of a single node to which this level applies
  indexterm:[target,fencing-level]
  indexterm:[Fencing,fencing-level,target]
 
 |target-pattern
 |An extended regular expression (as defined in
  http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap09.html#tag_09_04[POSIX])
  matching the names of nodes to which this level applies
  indexterm:[target-pattern,fencing-level]
  indexterm:[Fencing,fencing-level,target-pattern]
 
 |target-attribute
 |The name of a node attribute that is set (to +target-value+) for nodes to
  which this level applies
  indexterm:[target-attribute,fencing-level]
  indexterm:[Fencing,fencing-level,target-attribute]
 
 |target-value
 |The node attribute value (of +target-attribute+) that is set for nodes to
  which this level applies
  indexterm:[target-attribute,fencing-level]
  indexterm:[Fencing,fencing-level,target-attribute]
 
 |index
 |The order in which to attempt the levels.
  Levels are attempted in ascending order 'until one succeeds'.
  Valid values are 1 through 9.
  indexterm:[index,fencing-level]
  indexterm:[Fencing,fencing-level,index]
 
 |devices
 |A comma-separated list of devices that must all be tried for this level
  indexterm:[devices,fencing-level]
  indexterm:[Fencing,fencing-level,devices]
 
 |=========================================================
 
 .Fencing topology with different devices for different nodes
 ====
 [source,XML]
 ----
  <cib crm_feature_set="3.0.6" validate-with="pacemaker-1.2" admin_epoch="1" epoch="0" num_updates="0">
   <configuration>
     ...
     <fencing-topology>
       <!-- For pcmk-1, try poison-pill and fail back to power -->
       <fencing-level id="f-p1.1" target="pcmk-1" index="1" devices="poison-pill"/>
       <fencing-level id="f-p1.2" target="pcmk-1" index="2" devices="power"/>
 
       <!-- For pcmk-2, try disk and network, and fail back to power -->
       <fencing-level id="f-p2.1" target="pcmk-2" index="1" devices="disk,network"/>
       <fencing-level id="f-p2.2" target="pcmk-2" index="2" devices="power"/>
     </fencing-topology>
     ...
   </configuration>
   <status/>
 </cib>
 ----
 ====
 
 === Example Dual-Layer, Dual-Device Fencing Topologies ===
 
 The following example illustrates an advanced use of +fencing-topology+ in a cluster with the following properties:
 
 * 3 nodes (2 active prod-mysql nodes, 1 prod-mysql-rep1 node in standby for quorum purposes)
 * the active nodes have an IPMI-controlled power board reached at 192.0.2.1 and 192.0.2.2
 * the active nodes also have two independent PSUs (Power Supply Units)
   connected to two independent PDUs (Power Distribution Units) reached at
   198.51.100.1 (port 10 and port 11) and 203.0.113.1 (port 10 and port 11)
 * the first fencing method uses the `fence_ipmi` agent
 * the second fencing method uses the `fence_apc_snmp` agent targeting 2 fencing devices (one per PSU, either port 10 or 11)
 * fencing is only implemented for the active nodes and has location constraints
 * fencing topology is set to try IPMI fencing first then default to a "sure-kill" dual PDU fencing
 
 In a normal failure scenario, STONITH will first select +fence_ipmi+ to try to kill the faulty node.
 Using a fencing topology, if that first method fails, STONITH will then move on to selecting +fence_apc_snmp+ twice:
 
 * once for the first PDU 
 * again for the second PDU 
 
 The fence action is considered successful only if both PDUs report the required status. If either of them fails, STONITH loops back to the first fencing method, +fence_ipmi+, and so on, until the node is fenced or the fencing action is cancelled.
 
 .First fencing method: single IPMI device
 
 Each cluster node has its own dedicated IPMI channel that can be called for fencing using the following primitives:
 [source,XML]
 ----
 <primitive class="stonith" id="fence_prod-mysql1_ipmi" type="fence_ipmilan">
   <instance_attributes id="fence_prod-mysql1_ipmi-instance_attributes">
     <nvpair id="fence_prod-mysql1_ipmi-instance_attributes-ipaddr" name="ipaddr" value="192.0.2.1"/>
     <nvpair id="fence_prod-mysql1_ipmi-instance_attributes-action" name="action" value="off"/>
     <nvpair id="fence_prod-mysql1_ipmi-instance_attributes-login" name="login" value="fencing"/>
     <nvpair id="fence_prod-mysql1_ipmi-instance_attributes-passwd" name="passwd" value="finishme"/>
     <nvpair id="fence_prod-mysql1_ipmi-instance_attributes-verbose" name="verbose" value="true"/>
     <nvpair id="fence_prod-mysql1_ipmi-instance_attributes-pcmk_host_list" name="pcmk_host_list" value="prod-mysql1"/>
     <nvpair id="fence_prod-mysql1_ipmi-instance_attributes-lanplus" name="lanplus" value="true"/>
   </instance_attributes>
 </primitive>
 <primitive class="stonith" id="fence_prod-mysql2_ipmi" type="fence_ipmilan">
   <instance_attributes id="fence_prod-mysql2_ipmi-instance_attributes">
     <nvpair id="fence_prod-mysql2_ipmi-instance_attributes-ipaddr" name="ipaddr" value="192.0.2.2"/>
     <nvpair id="fence_prod-mysql2_ipmi-instance_attributes-action" name="action" value="off"/>
     <nvpair id="fence_prod-mysql2_ipmi-instance_attributes-login" name="login" value="fencing"/>
     <nvpair id="fence_prod-mysql2_ipmi-instance_attributes-passwd" name="passwd" value="finishme"/>
     <nvpair id="fence_prod-mysql2_ipmi-instance_attributes-verbose" name="verbose" value="true"/>
     <nvpair id="fence_prod-mysql2_ipmi-instance_attributes-pcmk_host_list" name="pcmk_host_list" value="prod-mysql2"/>
     <nvpair id="fence_prod-mysql2_ipmi-instance_attributes-lanplus" name="lanplus" value="true"/>
   </instance_attributes>
 </primitive>
 ----
 
 .Second fencing method: dual PDU devices
 
 Each cluster node also has two distinct power channels controlled by two
 distinct PDUs. That means a total of 4 fencing devices configured as follows:
 
 - Node 1, PDU 1, PSU 1 @ port 10
 - Node 1, PDU 2, PSU 2 @ port 10
 - Node 2, PDU 1, PSU 1 @ port 11
 - Node 2, PDU 2, PSU 2 @ port 11
 
 The matching fencing agents are configured as follows:
 [source,XML]
 ----
 <primitive class="stonith" id="fence_prod-mysql1_apc1" type="fence_apc_snmp">
   <instance_attributes id="fence_prod-mysql1_apc1-instance_attributes">
     <nvpair id="fence_prod-mysql1_apc1-instance_attributes-ipaddr" name="ipaddr" value="198.51.100.1"/>
     <nvpair id="fence_prod-mysql1_apc1-instance_attributes-action" name="action" value="off"/>
     <nvpair id="fence_prod-mysql1_apc1-instance_attributes-port" name="port" value="10"/>
     <nvpair id="fence_prod-mysql1_apc1-instance_attributes-login" name="login" value="fencing"/>
     <nvpair id="fence_prod-mysql1_apc1-instance_attributes-passwd" name="passwd" value="fencing"/>
     <nvpair id="fence_prod-mysql1_apc1-instance_attributes-pcmk_host_list" name="pcmk_host_list" value="prod-mysql1"/>
   </instance_attributes>
 </primitive>
 <primitive class="stonith" id="fence_prod-mysql1_apc2" type="fence_apc_snmp">
   <instance_attributes id="fence_prod-mysql1_apc2-instance_attributes">
     <nvpair id="fence_prod-mysql1_apc2-instance_attributes-ipaddr" name="ipaddr" value="203.0.113.1"/>
     <nvpair id="fence_prod-mysql1_apc2-instance_attributes-action" name="action" value="off"/>
     <nvpair id="fence_prod-mysql1_apc2-instance_attributes-port" name="port" value="10"/>
     <nvpair id="fence_prod-mysql1_apc2-instance_attributes-login" name="login" value="fencing"/>
     <nvpair id="fence_prod-mysql1_apc2-instance_attributes-passwd" name="passwd" value="fencing"/>
     <nvpair id="fence_prod-mysql1_apc2-instance_attributes-pcmk_host_list" name="pcmk_host_list" value="prod-mysql1"/>
   </instance_attributes>
 </primitive>
 <primitive class="stonith" id="fence_prod-mysql2_apc1" type="fence_apc_snmp">
   <instance_attributes id="fence_prod-mysql2_apc1-instance_attributes">
     <nvpair id="fence_prod-mysql2_apc1-instance_attributes-ipaddr" name="ipaddr" value="198.51.100.1"/>
     <nvpair id="fence_prod-mysql2_apc1-instance_attributes-action" name="action" value="off"/>
     <nvpair id="fence_prod-mysql2_apc1-instance_attributes-port" name="port" value="11"/>
     <nvpair id="fence_prod-mysql2_apc1-instance_attributes-login" name="login" value="fencing"/>
     <nvpair id="fence_prod-mysql2_apc1-instance_attributes-passwd" name="passwd" value="fencing"/>
     <nvpair id="fence_prod-mysql2_apc1-instance_attributes-pcmk_host_list" name="pcmk_host_list" value="prod-mysql2"/>
   </instance_attributes>
 </primitive>
 <primitive class="stonith" id="fence_prod-mysql2_apc2" type="fence_apc_snmp">
   <instance_attributes id="fence_prod-mysql2_apc2-instance_attributes">
     <nvpair id="fence_prod-mysql2_apc2-instance_attributes-ipaddr" name="ipaddr" value="203.0.113.1"/>
     <nvpair id="fence_prod-mysql2_apc2-instance_attributes-action" name="action" value="off"/>
     <nvpair id="fence_prod-mysql2_apc2-instance_attributes-port" name="port" value="11"/>
     <nvpair id="fence_prod-mysql2_apc2-instance_attributes-login" name="login" value="fencing"/>
     <nvpair id="fence_prod-mysql2_apc2-instance_attributes-passwd" name="passwd" value="fencing"/>
     <nvpair id="fence_prod-mysql2_apc2-instance_attributes-pcmk_host_list" name="pcmk_host_list" value="prod-mysql2"/>
   </instance_attributes>
 </primitive>
 ----
 
 .Location Constraints 
 
 To prevent STONITH from trying to run a fencing agent on the same node it is
 supposed to fence, constraints are placed on all the fencing primitives:
 [source,XML]
 ----
 <constraints>
   <rsc_location id="l_fence_prod-mysql1_ipmi" node="prod-mysql1" rsc="fence_prod-mysql1_ipmi" score="-INFINITY"/>
   <rsc_location id="l_fence_prod-mysql2_ipmi" node="prod-mysql2" rsc="fence_prod-mysql2_ipmi" score="-INFINITY"/>
   <rsc_location id="l_fence_prod-mysql1_apc2" node="prod-mysql1" rsc="fence_prod-mysql1_apc2" score="-INFINITY"/>
   <rsc_location id="l_fence_prod-mysql1_apc1" node="prod-mysql1" rsc="fence_prod-mysql1_apc1" score="-INFINITY"/>
   <rsc_location id="l_fence_prod-mysql2_apc1" node="prod-mysql2" rsc="fence_prod-mysql2_apc1" score="-INFINITY"/>
   <rsc_location id="l_fence_prod-mysql2_apc2" node="prod-mysql2" rsc="fence_prod-mysql2_apc2" score="-INFINITY"/>
 </constraints>
 ----
 
 .Fencing topology
 
 Now that all the fencing resources are defined, it's time to create the right topology.
 We want to fence using IPMI first and, if that does not work, fence both PDUs to effectively and surely kill the node.
 [source,XML]
 ----
 <fencing-topology>
   <fencing-level devices="fence_prod-mysql1_ipmi" id="fencing-2" index="1" target="prod-mysql1"/>
   <fencing-level devices="fence_prod-mysql1_apc1,fence_prod-mysql1_apc2" id="fencing-3" index="2" target="prod-mysql1"/>
   <fencing-level devices="fence_prod-mysql2_ipmi" id="fencing-0" index="1" target="prod-mysql2"/>
   <fencing-level devices="fence_prod-mysql2_apc1,fence_prod-mysql2_apc2" id="fencing-1" index="2" target="prod-mysql2"/>
 </fencing-topology>
 ----
 Please note that, in +fencing-topology+, the lowest +index+ value determines the priority of the first fencing method.
 
 .Final configuration
 
 Put together, the configuration looks like this:
 [source,XML]
 ----
 <cib admin_epoch="0" crm_feature_set="3.0.7" epoch="292" have-quorum="1" num_updates="29" validate-with="pacemaker-1.2">
   <configuration>
     <crm_config>
       <cluster_property_set id="cib-bootstrap-options">
         <nvpair id="cib-bootstrap-options-stonith-enabled" name="stonith-enabled" value="true"/>
         <nvpair id="cib-bootstrap-options-stonith-action" name="stonith-action" value="off"/>
         <nvpair id="cib-bootstrap-options-expected-quorum-votes" name="expected-quorum-votes" value="3"/>
        ...
       </cluster_property_set>
     </crm_config>
     <nodes>
       <node id="prod-mysql1" uname="prod-mysql1"/>
       <node id="prod-mysql2" uname="prod-mysql2"/>
       <node id="prod-mysql-rep1" uname="prod-mysql-rep1">
         <instance_attributes id="prod-mysql-rep1">
           <nvpair id="prod-mysql-rep1-standby" name="standby" value="on"/>
         </instance_attributes>
       </node>
     </nodes>
     <resources>
       <primitive class="stonith" id="fence_prod-mysql1_ipmi" type="fence_ipmilan">
         <instance_attributes id="fence_prod-mysql1_ipmi-instance_attributes">
           <nvpair id="fence_prod-mysql1_ipmi-instance_attributes-ipaddr" name="ipaddr" value="192.0.2.1"/>
           <nvpair id="fence_prod-mysql1_ipmi-instance_attributes-action" name="action" value="off"/>
           <nvpair id="fence_prod-mysql1_ipmi-instance_attributes-login" name="login" value="fencing"/>
           <nvpair id="fence_prod-mysql1_ipmi-instance_attributes-passwd" name="passwd" value="finishme"/>
           <nvpair id="fence_prod-mysql1_ipmi-instance_attributes-verbose" name="verbose" value="true"/>
           <nvpair id="fence_prod-mysql1_ipmi-instance_attributes-pcmk_host_list" name="pcmk_host_list" value="prod-mysql1"/>
           <nvpair id="fence_prod-mysql1_ipmi-instance_attributes-lanplus" name="lanplus" value="true"/>
         </instance_attributes>
       </primitive>
       <primitive class="stonith" id="fence_prod-mysql2_ipmi" type="fence_ipmilan">
         <instance_attributes id="fence_prod-mysql2_ipmi-instance_attributes">
           <nvpair id="fence_prod-mysql2_ipmi-instance_attributes-ipaddr" name="ipaddr" value="192.0.2.2"/>
           <nvpair id="fence_prod-mysql2_ipmi-instance_attributes-action" name="action" value="off"/>
           <nvpair id="fence_prod-mysql2_ipmi-instance_attributes-login" name="login" value="fencing"/>
           <nvpair id="fence_prod-mysql2_ipmi-instance_attributes-passwd" name="passwd" value="finishme"/>
           <nvpair id="fence_prod-mysql2_ipmi-instance_attributes-verbose" name="verbose" value="true"/>
           <nvpair id="fence_prod-mysql2_ipmi-instance_attributes-pcmk_host_list" name="pcmk_host_list" value="prod-mysql2"/>
           <nvpair id="fence_prod-mysql2_ipmi-instance_attributes-lanplus" name="lanplus" value="true"/>
         </instance_attributes>
       </primitive>
       <primitive class="stonith" id="fence_prod-mysql1_apc1" type="fence_apc_snmp">
         <instance_attributes id="fence_prod-mysql1_apc1-instance_attributes">
           <nvpair id="fence_prod-mysql1_apc1-instance_attributes-ipaddr" name="ipaddr" value="198.51.100.1"/>
           <nvpair id="fence_prod-mysql1_apc1-instance_attributes-action" name="action" value="off"/>
           <nvpair id="fence_prod-mysql1_apc1-instance_attributes-port" name="port" value="10"/>
           <nvpair id="fence_prod-mysql1_apc1-instance_attributes-login" name="login" value="fencing"/>
           <nvpair id="fence_prod-mysql1_apc1-instance_attributes-passwd" name="passwd" value="fencing"/>
           <nvpair id="fence_prod-mysql1_apc1-instance_attributes-pcmk_host_list" name="pcmk_host_list" value="prod-mysql1"/>
         </instance_attributes>
       </primitive>
       <primitive class="stonith" id="fence_prod-mysql1_apc2" type="fence_apc_snmp">
         <instance_attributes id="fence_prod-mysql1_apc2-instance_attributes">
           <nvpair id="fence_prod-mysql1_apc2-instance_attributes-ipaddr" name="ipaddr" value="203.0.113.1"/>
           <nvpair id="fence_prod-mysql1_apc2-instance_attributes-action" name="action" value="off"/>
           <nvpair id="fence_prod-mysql1_apc2-instance_attributes-port" name="port" value="10"/>
           <nvpair id="fence_prod-mysql1_apc2-instance_attributes-login" name="login" value="fencing"/>
           <nvpair id="fence_prod-mysql1_apc2-instance_attributes-passwd" name="passwd" value="fencing"/>
           <nvpair id="fence_prod-mysql1_apc2-instance_attributes-pcmk_host_list" name="pcmk_host_list" value="prod-mysql1"/>
         </instance_attributes>
       </primitive>
       <primitive class="stonith" id="fence_prod-mysql2_apc1" type="fence_apc_snmp">
         <instance_attributes id="fence_prod-mysql2_apc1-instance_attributes">
           <nvpair id="fence_prod-mysql2_apc1-instance_attributes-ipaddr" name="ipaddr" value="198.51.100.1"/>
           <nvpair id="fence_prod-mysql2_apc1-instance_attributes-action" name="action" value="off"/>
           <nvpair id="fence_prod-mysql2_apc1-instance_attributes-port" name="port" value="11"/>
           <nvpair id="fence_prod-mysql2_apc1-instance_attributes-login" name="login" value="fencing"/>
           <nvpair id="fence_prod-mysql2_apc1-instance_attributes-passwd" name="passwd" value="fencing"/>
           <nvpair id="fence_prod-mysql2_apc1-instance_attributes-pcmk_host_list" name="pcmk_host_list" value="prod-mysql2"/>
         </instance_attributes>
       </primitive>
       <primitive class="stonith" id="fence_prod-mysql2_apc2" type="fence_apc_snmp">
         <instance_attributes id="fence_prod-mysql2_apc2-instance_attributes">
           <nvpair id="fence_prod-mysql2_apc2-instance_attributes-ipaddr" name="ipaddr" value="203.0.113.1"/>
           <nvpair id="fence_prod-mysql2_apc2-instance_attributes-action" name="action" value="off"/>
           <nvpair id="fence_prod-mysql2_apc2-instance_attributes-port" name="port" value="11"/>
           <nvpair id="fence_prod-mysql2_apc2-instance_attributes-login" name="login" value="fencing"/>
           <nvpair id="fence_prod-mysql2_apc2-instance_attributes-passwd" name="passwd" value="fencing"/>
           <nvpair id="fence_prod-mysql2_apc2-instance_attributes-pcmk_host_list" name="pcmk_host_list" value="prod-mysql2"/>
         </instance_attributes>
       </primitive>
    </resources>
     <constraints>
       <rsc_location id="l_fence_prod-mysql1_ipmi" node="prod-mysql1" rsc="fence_prod-mysql1_ipmi" score="-INFINITY"/>
       <rsc_location id="l_fence_prod-mysql2_ipmi" node="prod-mysql2" rsc="fence_prod-mysql2_ipmi" score="-INFINITY"/>
       <rsc_location id="l_fence_prod-mysql1_apc2" node="prod-mysql1" rsc="fence_prod-mysql1_apc2" score="-INFINITY"/>
       <rsc_location id="l_fence_prod-mysql1_apc1" node="prod-mysql1" rsc="fence_prod-mysql1_apc1" score="-INFINITY"/>
       <rsc_location id="l_fence_prod-mysql2_apc1" node="prod-mysql2" rsc="fence_prod-mysql2_apc1" score="-INFINITY"/>
       <rsc_location id="l_fence_prod-mysql2_apc2" node="prod-mysql2" rsc="fence_prod-mysql2_apc2" score="-INFINITY"/>
     </constraints>
     <fencing-topology>
       <fencing-level devices="fence_prod-mysql1_ipmi" id="fencing-2" index="1" target="prod-mysql1"/>
       <fencing-level devices="fence_prod-mysql1_apc1,fence_prod-mysql1_apc2" id="fencing-3" index="2" target="prod-mysql1"/>
       <fencing-level devices="fence_prod-mysql2_ipmi" id="fencing-0" index="1" target="prod-mysql2"/>
       <fencing-level devices="fence_prod-mysql2_apc1,fence_prod-mysql2_apc2" id="fencing-1" index="2" target="prod-mysql2"/>
     </fencing-topology>
    ...
   </configuration>
 </cib>
 ----
 
 == Remapping Reboots ==
 
 When the cluster needs to reboot a node, whether because +stonith-action+ is +reboot+ or because
 a reboot was manually requested (such as by `stonith_admin --reboot`), it will remap that to
 other commands in two cases:
 
 . If the chosen fencing device does not support the +reboot+ command, the cluster
   will ask it to perform +off+ instead.
 
 . If a fencing topology level with multiple devices must be executed, the cluster
   will ask all the devices to perform +off+, then ask the devices to perform +on+.
 
 To understand the second case, consider the example of a node with redundant
 power supplies connected to intelligent power switches. Rebooting one switch
 and then the other would have no effect on the node. Turning both switches off,
 and then on, actually reboots the node.
 
 In such a case, the fencing operation will be treated as successful as long as
 the +off+ commands succeed, because then it is safe for the cluster to recover
 any resources that were on the node. Timeouts and errors in the +on+ phase will
 be logged but ignored.
 
 When a reboot operation is remapped, any action-specific timeout for the
 remapped action will be used (for example, +pcmk_off_timeout+ will be used when
 executing the +off+ command, not +pcmk_reboot_timeout+).
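 
 For example, a power-switch device from the configuration above could be given
 more time for the remapped +off+ command than the cluster-wide default by
 setting +pcmk_off_timeout+ on that device. A minimal sketch (the 120-second
 value is illustrative; the device's other attributes shown earlier are omitted
 here):
 
 [source,XML]
 ----
 <primitive class="stonith" id="fence_prod-mysql1_apc1" type="fence_apc_snmp">
   <instance_attributes id="fence_prod-mysql1_apc1-instance_attributes">
     <!-- existing device attributes, plus an action-specific timeout -->
     <nvpair id="fence_prod-mysql1_apc1-instance_attributes-pcmk_off_timeout" name="pcmk_off_timeout" value="120s"/>
   </instance_attributes>
 </primitive>
 ----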
diff --git a/doc/Pacemaker_Remote/en-US/Ch-Options.txt b/doc/Pacemaker_Remote/en-US/Ch-Options.txt
index 87663f8727..f50cd25f7c 100644
--- a/doc/Pacemaker_Remote/en-US/Ch-Options.txt
+++ b/doc/Pacemaker_Remote/en-US/Ch-Options.txt
@@ -1,136 +1,136 @@
 = Configuration Explained =
 
 The walk-through examples use some of these options, but don't explain exactly
 what they mean or do.  This section is meant to be the go-to resource for all
 the options available for configuring pacemaker_remote-based nodes.
 (((configuration)))
 
 == Resource Meta-Attributes for Guest Nodes ==
 
 When configuring a virtual machine as a guest node, the virtual machine is
 created using one of the usual resource agents for that purpose (for example,
 ocf:heartbeat:VirtualDomain or ocf:heartbeat:Xen), with additional metadata
 parameters.
 
 No restrictions are enforced on what agents may be used to create a guest node,
 but obviously the agent must create a distinct environment capable of running
 the pacemaker_remote daemon and cluster resources. An additional requirement is
 that fencing the host running the guest node resource must be sufficient for
 ensuring the guest node is stopped. This means, for example, that not all
 hypervisors supported by VirtualDomain may be used to create guest nodes; if
 the guest can survive the hypervisor being fenced, it may not be used as a
 guest node.
 
 Below are the metadata options available to enable a resource as a guest node
 and define its connection parameters.
 
 .Meta-attributes for configuring VM resources as guest nodes
-[width="95%",cols="2m,1,4<",options="header",align="center"]
+[width="95%",cols="2m,1,<4",options="header",align="center"]
 |=========================================================
 
 |Option
 |Default
 |Description
 
 |remote-node
 |'none'
 |The node name of the guest node this resource defines. This both enables the
 resource as a guest node and defines the unique name used to identify the
 guest node. If no other parameters are set, this value will also be used as
 the hostname to use when connecting to pacemaker_remote on the VM. This value
 *must not* overlap with any resource or node IDs.
 
 |remote-port
 |3121
 |The port on the virtual machine that the cluster will use to connect to
 pacemaker_remote.
 
 |remote-addr
 |'value of' +remote-node+
 |The IP address or hostname to use when connecting to pacemaker_remote on the VM.
 
 |remote-connect-timeout
 |60s
 |How long before a pending guest connection will time out.
 
 |=========================================================
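 
 As an illustration, a KVM virtual machine managed with the
 *ocf:heartbeat:VirtualDomain* agent could be turned into a guest node by
 adding a +remote-node+ meta-attribute. In CIB XML, this might look roughly
 like the following sketch (the resource ID, domain configuration path, and
 node name *guest1* are hypothetical):
 
 [source,XML]
 ----
 <primitive id="vm-guest1" class="ocf" provider="heartbeat" type="VirtualDomain">
   <instance_attributes id="vm-guest1-instance_attributes">
     <nvpair id="vm-guest1-instance_attributes-hypervisor" name="hypervisor" value="qemu:///system"/>
     <nvpair id="vm-guest1-instance_attributes-config" name="config" value="/etc/libvirt/qemu/guest1.xml"/>
   </instance_attributes>
   <meta_attributes id="vm-guest1-meta_attributes">
     <nvpair id="vm-guest1-meta_attributes-remote-node" name="remote-node" value="guest1"/>
   </meta_attributes>
 </primitive>
 ----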
 
 == Connection Resources for Remote Nodes ==
 
 A remote node is defined by a connection resource. That connection resource
 has instance attributes that define where the remote node is located on the
 network and how to communicate with it.
 
 Descriptions of these instance attributes can be retrieved using the following
 `pcs` command:
 ----
 # pcs resource describe remote
 ocf:pacemaker:remote - remote resource agent
 
 Resource options:
   server: Server location to connect to. This can be an ip address or hostname.
   port: tcp port to connect to.
   reconnect_interval: Interval in seconds at which Pacemaker will attempt to
 		      reconnect to a remote node after an active connection to
 		      the remote node has been severed. When this value is
 		      nonzero, Pacemaker will retry the connection
 		      indefinitely, at the specified interval. As with any
 		      time-based actions, this is not guaranteed to be checked
 		      more frequently than the value of the
                       cluster-recheck-interval cluster option.
 ----
 
 When defining a remote node's connection resource, it is common and recommended
 to name the connection resource the same as the remote node's hostname. By
 default, if no *server* option is provided, the cluster will attempt to contact
 the remote node using the resource name as the hostname.
 
 Example defining a remote node with the hostname *remote1*:
 ----
 # pcs resource create remote1 remote
 ----
 
 Example defining a remote node to connect to a specific IP address and port:
 ----
 # pcs resource create remote1 remote server=192.168.122.200 port=8938
 ----
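 
 In the CIB XML, the second example above corresponds to an
 *ocf:pacemaker:remote* primitive roughly like this sketch (the nvpair IDs are
 illustrative):
 
 [source,XML]
 ----
 <primitive id="remote1" class="ocf" provider="pacemaker" type="remote">
   <instance_attributes id="remote1-instance_attributes">
     <nvpair id="remote1-instance_attributes-server" name="server" value="192.168.122.200"/>
     <nvpair id="remote1-instance_attributes-port" name="port" value="8938"/>
   </instance_attributes>
 </primitive>
 ----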
 
 == Environment Variables for Daemon Start-up ==
 
 Authentication and encryption of the connection between cluster nodes
 and nodes running pacemaker_remote is achieved using
 https://en.wikipedia.org/wiki/TLS-PSK[TLS-PSK] encryption/authentication
 over TCP (port 3121 by default). This means that both the cluster node and
 remote node must share the same private key. By default, this
 key is placed at +/etc/pacemaker/authkey+ on each node.
 
 You can change the default port and/or key location for Pacemaker and
 pacemaker_remote via environment variables. How these variables are set varies
 by OS, but usually they are set in the +/etc/sysconfig/pacemaker+ or
 +/etc/default/pacemaker+ file.
 
 ----
 #==#==# Pacemaker Remote
 # Use a custom directory for finding the authkey.
 PCMK_authkey_location=/etc/pacemaker/authkey
 #
 # Specify a custom port for Pacemaker Remote connections
 PCMK_remote_port=3121
 ----
 
 == Removing Remote Nodes and Guest Nodes ==
 
 If the resource creating a guest node, or the *ocf:pacemaker:remote* resource
 creating a connection to a remote node, is removed from the configuration, the
 affected node will continue to show up in status output as an offline node.
 
 To remove the node from that output, run (replacing $NODE_NAME appropriately):
 ----
 # crm_node --force --remove $NODE_NAME
 ----
 
 [WARNING]
 =========
 Be absolutely sure that there are no references to the node's resource in the
 configuration before running the above command.
 =========
diff --git a/doc/asciidoc.reference b/doc/asciidoc.reference
index 9323864998..e06d96c251 100644
--- a/doc/asciidoc.reference
+++ b/doc/asciidoc.reference
@@ -1,70 +1,96 @@
 = Single-chapter part of the documentation =
 
 == Go-to reference chapter for how we use AsciiDoc on this project ==
 
 [NOTE]
 ======
 This is *not* an attempt at a fully self-hosted AsciiDoc document;
 consider it a plain-text file full of AsciiDoc samples (it's up to the reader
 to recognize the borderline) at documentation writers' disposal
 to somewhat standardize the style{empty}footnote:[
   style of both source notation and final visual appearance
 ].
 
 See also:
    http://powerman.name/doc/asciidoc
 ======
 
 Emphasis:    _some test_
 Mono:        +some text+
 Strong:      *some text*
 Super:       ^some text^
 Sub:         ~some text~
 Quotes:
              ``double quoted''
               `single quoted'
 
 Command:     `some-tool --with option`
 Newly introduced term:
              'some text' (another form of emphasis as of this edit)
 
 File:        mono
 Literal:     mono
 Tool:        command
 Option:      mono
 Replaceable: emphasis mono
 Varname:     mono
 Term encountered on system (e.g., menu choice, hostname):
              strong
 
 
 .Title for Example
 =====
 Some text
 =====
 
 .Title for Example with XML Listing
 =====
 [source,XML]
 -----
 <some xml=here/>
 -----
 =====
 
 Naked code listing:
 (Use 'C' and a leading '#' instead of 'Bash' when commands are being shown)
 
 [source,C]
 -----
 # some command --here
 -----
 
 
 Section anchors:
 
 [[s-name]]
 === Some Section Title ===
 
 References to section anchors:
 
 <<s-name>> or <<s-name,Alternate Text>>
+
+
+Tables:
+
+Typically styled like this:
+[width="95%",cols="1m,<4m,<6",options="header",align="center"]
+
+It's vital that, within each column specifier, any alignment operator comes
+first and any style letter comes last (the whole specifier may be preceded
+by a column multiplier); otherwise Asciidoctor will end up with invalid
+DocBook sources:
+- correct: 1m,<4m,<6
+- bad:     1m,4<m,6<
+
+Avoid "a" (asciidoc) style for the columns, since it will prevent any
+reference anchors being placed there.  However, if the particular cell
+is to carry a list (inherently a block element) or a comment that should
+be omitted from the output, it needs to be turned into asciidoc style like
+this (note the initial 'a'):
+
+|col1-per-row
+|col2-per-row
+|Details for col1 + col2 per row combo:
+a|Hence either:
+
+* foo
+* bar