diff --git a/doc/Pacemaker_Explained/en-US/Ch-Options.txt b/doc/Pacemaker_Explained/en-US/Ch-Options.txt
index 3dfd512745..05db3f3ce8 100644
--- a/doc/Pacemaker_Explained/en-US/Ch-Options.txt
+++ b/doc/Pacemaker_Explained/en-US/Ch-Options.txt
@@ -1,454 +1,454 @@
 = Cluster-Wide Configuration =
 
 == CIB Properties ==
 
 Certain settings are defined by CIB properties (that is, attributes of the
 +cib+ tag) rather than with the rest of the cluster configuration in the
 +configuration+ section.
 
 The reason is simply a matter of parsing. These options are used by the
 configuration database which is, by design, mostly ignorant of the content it
 holds.  So the decision was made to place them in an easy-to-find location.
 
 .CIB Properties
 [width="95%",cols="2m,5<",options="header",align="center"]
 |=========================================================
 |Field |Description
 
 | admin_epoch |
 indexterm:[Configuration Version,Cluster]
 indexterm:[Cluster,Option,Configuration Version]
 indexterm:[admin_epoch,Cluster Option]
 indexterm:[Cluster,Option,admin_epoch]
 When a node joins the cluster, the cluster performs a check to see
 which node has the best configuration. It asks the node with the highest
 (+admin_epoch+, +epoch+, +num_updates+) tuple to replace the configuration on
 all the nodes -- which makes setting them, and setting them correctly, very
 important. Tuples are compared element by element, so a configuration with a
 higher +admin_epoch+ is always preferred, whatever its +epoch+ and
 +num_updates+. +admin_epoch+ is never modified by the cluster; you can use
 this to make the configurations on any inactive nodes obsolete. _Never set
 this value to zero_. In such cases, the cluster cannot tell the difference
 between your configuration and the "empty" one used when nothing is found on
 disk.
 
 | epoch |
 indexterm:[epoch,Cluster Option]
 indexterm:[Cluster,Option,epoch]
 The cluster increments this every time the configuration is updated (usually by
 the administrator).
 
 | num_updates |
 indexterm:[num_updates,Cluster Option]
 indexterm:[Cluster,Option,num_updates]
 The cluster increments this every time the configuration or status is updated
 (usually by the cluster) and resets it to 0 when epoch changes.
 
 | validate-with |
 indexterm:[validate-with,Cluster Option]
 indexterm:[Cluster,Option,validate-with]
 Determines the type of XML validation that will be done on the configuration.
 If set to +none+, the cluster will not verify that updates conform to the
 configured schema (nor reject ones that don't). This option can be useful
 when operating a mixed-version cluster during an upgrade.
 
 |cib-last-written |
 indexterm:[cib-last-written,Cluster Property]
 indexterm:[Cluster,Property,cib-last-written]
 Indicates when the configuration was last written to disk. Maintained by the
 cluster; for informational purposes only.
 
 |have-quorum |
 indexterm:[have-quorum,Cluster Property]
 indexterm:[Cluster,Property,have-quorum]
 Indicates if the cluster has quorum. If false, this may mean that the
 cluster cannot start resources or fence other nodes (see
 +no-quorum-policy+ below). Maintained by the cluster.
 
 |dc-uuid |
 indexterm:[dc-uuid,Cluster Property]
 indexterm:[Cluster,Property,dc-uuid]
 Indicates which cluster node is the current leader. Used by the
 cluster when placing resources and determining the order of some
 events. Maintained by the cluster.
 
 |=========================================================
 
 === Working with CIB Properties ===
 
 Although these fields can be written to by the user, in
 most cases the cluster will overwrite any user-specified
 values with the "correct" ones.
 
 To change one of the properties that the user may legitimately
 set, such as +admin_epoch+, use:
 ----
 # cibadmin --modify --xml-text '<cib admin_epoch="42"/>'
 ----
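 
 One way to check the result is to dump the live CIB and look at the
 attributes of the +cib+ tag, which appear at the start of the output
 (piping through `head` here is just for brevity):
 ----
 # cibadmin --query | head -n 1
 ----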
 
 A complete set of CIB properties will look something like this:
 
 .Attributes set for a cib object
 ======
 [source,XML]
 -------
 <cib crm_feature_set="3.0.7" validate-with="pacemaker-1.2" 
    admin_epoch="42" epoch="116" num_updates="1"
    cib-last-written="Mon Jan 12 15:46:39 2015" update-origin="rhel7-1"
    update-client="crm_attribute" have-quorum="1" dc-uuid="1">
 -------
 ======
 
 [[s-cluster-options]]
 == Cluster Options ==
 
 Cluster options, as you might expect, control how the cluster behaves
 when confronted with certain situations.
 
 They are grouped into sets within the +crm_config+ section, and, in advanced
 configurations, there may be more than one set. (This will be described later
 in the section on <<ch-rules>>, where we will show how to have the cluster use
 a different set of options during working hours than it does on weekends.) For now,
 we will describe the simple case where each option is present at most once.
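 
 As a sketch of the underlying XML, a +crm_config+ section holding a single
 option set might look like the following (the ids shown follow the convention
 used by the command-line tools, but any unique ids will do):
 
 .A crm_config section with a single option set
 ======
 [source,XML]
 -------
 <crm_config>
    <cluster_property_set id="cib-bootstrap-options">
      <nvpair id="cib-bootstrap-options-no-quorum-policy"
              name="no-quorum-policy" value="stop"/>
    </cluster_property_set>
 </crm_config>
 -------
 ======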
 
 You can obtain an up-to-date list of cluster options, including
 their default values, by running the `man pengine` and `man crmd` commands.
 
 .Cluster Options
 [width="95%",cols="5m,2,11<a",options="header",align="center"]
 |=========================================================
 |Option |Default |Description
 
 | dc-version | |
 indexterm:[dc-version,Cluster Property]
 indexterm:[Cluster,Property,dc-version]
 Version of Pacemaker on the cluster's DC.
 Determined automatically by the cluster.
 Often includes the hash which identifies the exact Git changeset it was built
 from.  Used for diagnostic purposes.
 
 | cluster-infrastructure | |
 indexterm:[cluster-infrastructure,Cluster Property]
 indexterm:[Cluster,Property,cluster-infrastructure]
 The messaging stack on which Pacemaker is currently running.
 Determined automatically by the cluster.
 Used for informational and diagnostic purposes.
 
 | expected-quorum-votes | |
 indexterm:[expected-quorum-votes,Cluster Property]
 indexterm:[Cluster,Property,expected-quorum-votes]
 The number of nodes expected to be in the cluster.
 Determined automatically by the cluster.
 Used to calculate quorum in clusters that use Corosync 1.x without CMAN
 as the messaging layer.
 
 | no-quorum-policy | stop |
 indexterm:[no-quorum-policy,Cluster Option]
 indexterm:[Cluster,Option,no-quorum-policy]
 What to do when the cluster does not have quorum.  Allowed values:
 
 * +ignore:+ continue all resource management
 * +freeze:+ continue resource management, but don't recover resources from nodes not in the affected partition
 * +stop:+ stop all resources in the affected cluster partition
 * +suicide:+ fence all nodes in the affected cluster partition
 
 | batch-limit | 30 |
 indexterm:[batch-limit,Cluster Option]
 indexterm:[Cluster,Option,batch-limit]
 The number of jobs that the Transition Engine (TE) is allowed to execute in
 parallel. The TE is the logic in Pacemaker's CRMd that executes the actions
 determined by the Policy Engine (PE). The "correct" value will depend on the
 speed and load of your network and cluster nodes.
 
 | migration-limit | -1 |
 indexterm:[migration-limit,Cluster Option]
 indexterm:[Cluster,Option,migration-limit]
 The number of migration jobs that the TE is allowed to execute in
 parallel on a node. A value of -1 means unlimited.
 
 | symmetric-cluster | TRUE |
 indexterm:[symmetric-cluster,Cluster Option]
 indexterm:[Cluster,Option,symmetric-cluster]
 Can all resources run on any node by default?
 
 | stop-all-resources | FALSE |
 indexterm:[stop-all-resources,Cluster Option]
 indexterm:[Cluster,Option,stop-all-resources]
 Should the cluster stop all resources?
 
 | stop-orphan-resources | TRUE |
 indexterm:[stop-orphan-resources,Cluster Option]
 indexterm:[Cluster,Option,stop-orphan-resources]
 Should deleted resources be stopped?
 
 | stop-orphan-actions | TRUE |
 indexterm:[stop-orphan-actions,Cluster Option]
 indexterm:[Cluster,Option,stop-orphan-actions]
 Should deleted actions be cancelled?
 
 | start-failure-is-fatal | TRUE |
 indexterm:[start-failure-is-fatal,Cluster Option]
 indexterm:[Cluster,Option,start-failure-is-fatal]
 Should a failure to start a resource on a particular node prevent further start
 attempts on that node? If FALSE, the cluster will decide whether the same
 node is still eligible based on the resource's current failure count
 and +migration-threshold+ (see <<s-failure-handling>>).
 
 | enable-startup-probes | TRUE |
 indexterm:[enable-startup-probes,Cluster Option]
 indexterm:[Cluster,Option,enable-startup-probes]
 Should the cluster check for active resources during startup?
 
 | maintenance-mode | FALSE |
 indexterm:[maintenance-mode,Cluster Option]
 indexterm:[Cluster,Option,maintenance-mode]
 Should the cluster refrain from monitoring, starting and stopping resources?
 
 | stonith-enabled | TRUE |
 indexterm:[stonith-enabled,Cluster Option]
 indexterm:[Cluster,Option,stonith-enabled]
 Should failed nodes and nodes with resources that can't be stopped be
 shot? If you value your data, set up a STONITH device and enable this.
 
 If true, or unset, the cluster will refuse to start resources unless
 one or more STONITH resources have been configured.
 If false, unresponsive nodes are immediately assumed to be running no
 resources, and resource takeover to online nodes starts without any
 further protection (which means _data loss_ if the unresponsive node
 still accesses shared storage, for example).  See also the +requires+
 meta-attribute in <<s-resource-options>>.
 
 | stonith-action | reboot |
 indexterm:[stonith-action,Cluster Option]
 indexterm:[Cluster,Option,stonith-action]
 Action to send to STONITH device. Allowed values are +reboot+ and +off+.
 The value +poweroff+ is also allowed, but is only used for
 legacy devices.
 
 | stonith-timeout | 60s |
 indexterm:[stonith-timeout,Cluster Option]
 indexterm:[Cluster,Option,stonith-timeout]
 How long to wait for STONITH actions (reboot, on, off) to complete.
 
 | stonith-max-attempts | 10 |
 indexterm:[stonith-max-attempts,Cluster Option]
 indexterm:[Cluster,Option,stonith-max-attempts]
-How many times stonith can fail before it will no longer be attempted on a target.
-Positive non-zero values are allowed. '(since 1.1.17)'
+How many times fencing can fail for a target before the cluster will no longer
+immediately re-attempt it. '(since 1.1.17)'
 
 | concurrent-fencing | FALSE |
 indexterm:[concurrent-fencing,Cluster Option]
 indexterm:[Cluster,Option,concurrent-fencing]
 Is the cluster allowed to initiate multiple fence actions concurrently?
 
 | cluster-delay | 60s |
 indexterm:[cluster-delay,Cluster Option]
 indexterm:[Cluster,Option,cluster-delay]
 Estimated maximum round-trip delay over the network (excluding action
 execution). If the TE requires an action to be executed on another node,
 it will consider the action failed if it does not get a response
 from the other node in this time (after considering the action's
 own timeout). The "correct" value will depend on the speed and load of your
 network and cluster nodes.
 
 | dc-deadtime | 20s |
 indexterm:[dc-deadtime,Cluster Option]
 indexterm:[Cluster,Option,dc-deadtime]
 How long to wait for a response from other nodes during startup.
 
 The "correct" value will depend on the speed/load of your network and the type of switches used.
 
 | cluster-recheck-interval | 15min |
 indexterm:[cluster-recheck-interval,Cluster Option]
 indexterm:[Cluster,Option,cluster-recheck-interval]
 Polling interval for time-based changes to options, resource parameters and constraints.
 
 The cluster is primarily event-driven, but your configuration can have
 elements that take effect based on the time of day. To ensure these changes
 take effect, the cluster can optionally poll its status for changes. A value
 of 0 disables polling. Positive values are an interval (in seconds unless other
 time units are specified, e.g. 5min).
 
 | pe-error-series-max | -1 |
 indexterm:[pe-error-series-max,Cluster Option]
 indexterm:[Cluster,Option,pe-error-series-max]
 The number of PE inputs resulting in ERRORs to save. Used when reporting problems.
 A value of -1 means unlimited (report all).
 
 | pe-warn-series-max | -1 |
 indexterm:[pe-warn-series-max,Cluster Option]
 indexterm:[Cluster,Option,pe-warn-series-max]
 The number of PE inputs resulting in WARNINGs to save. Used when reporting problems.
 A value of -1 means unlimited (report all).
 
 | pe-input-series-max | -1 |
 indexterm:[pe-input-series-max,Cluster Option]
 indexterm:[Cluster,Option,pe-input-series-max]
 The number of "normal" PE inputs to save. Used when reporting problems.
 A value of -1 means unlimited (report all).
 
 | placement-strategy | default |
 indexterm:[placement-strategy,Cluster Option]
 indexterm:[Cluster,Option,placement-strategy]
  How the cluster should allocate resources to nodes (see <<s-utilization>>).
  Allowed values are +default+, +utilization+, +balanced+, and +minimal+.
  '(since 1.1.0)'
 
 | node-health-strategy | none |
 indexterm:[node-health-strategy,Cluster Option]
 indexterm:[Cluster,Option,node-health-strategy]
  How the cluster should react to node health attributes (see <<s-node-health>>).
  Allowed values are +none+, +migrate-on-red+, +only-green+, +progressive+, and
  +custom+.
 
 | node-health-base | 0 |
 indexterm:[node-health-base,Cluster Option]
 indexterm:[Cluster,Option,node-health-base]
  The base health score assigned to a node. Only used when
  +node-health-strategy+ is +progressive+. '(since 1.1.16)'
 
 | node-health-green | 0 |
 indexterm:[node-health-green,Cluster Option]
 indexterm:[Cluster,Option,node-health-green]
  The score to use for a node health attribute whose value is +green+.
  Only used when +node-health-strategy+ is +progressive+ or +custom+.
 
 | node-health-yellow | 0 |
 indexterm:[node-health-yellow,Cluster Option]
 indexterm:[Cluster,Option,node-health-yellow]
  The score to use for a node health attribute whose value is +yellow+.
  Only used when +node-health-strategy+ is +progressive+ or +custom+.
 
 | node-health-red | 0 |
 indexterm:[node-health-red,Cluster Option]
 indexterm:[Cluster,Option,node-health-red]
  The score to use for a node health attribute whose value is +red+.
  Only used when +node-health-strategy+ is +progressive+ or +custom+.
 
 | remove-after-stop | FALSE |
 indexterm:[remove-after-stop,Cluster Option]
 indexterm:[Cluster,Option,remove-after-stop]
 _Advanced Use Only:_ Should the cluster remove resources from the LRM after
 they are stopped? Values other than the default are, at best, poorly tested and
 potentially dangerous.
 
 | startup-fencing | TRUE |
 indexterm:[startup-fencing,Cluster Option]
 indexterm:[Cluster,Option,startup-fencing]
 _Advanced Use Only:_ Should the cluster shoot unseen nodes?
 Not using the default is very unsafe!
 
 | election-timeout | 2min |
 indexterm:[election-timeout,Cluster Option]
 indexterm:[Cluster,Option,election-timeout]
 _Advanced Use Only:_ If you need to adjust this value, it probably indicates
 the presence of a bug.
 
 | shutdown-escalation | 20min |
 indexterm:[shutdown-escalation,Cluster Option]
 indexterm:[Cluster,Option,shutdown-escalation]
 _Advanced Use Only:_ If you need to adjust this value, it probably indicates
 the presence of a bug.
 
 | crmd-integration-timeout | 3min |
 indexterm:[crmd-integration-timeout,Cluster Option]
 indexterm:[Cluster,Option,crmd-integration-timeout]
 _Advanced Use Only:_ If you need to adjust this value, it probably indicates
 the presence of a bug.
 
 | crmd-finalization-timeout | 30min |
 indexterm:[crmd-finalization-timeout,Cluster Option]
 indexterm:[Cluster,Option,crmd-finalization-timeout]
 _Advanced Use Only:_ If you need to adjust this value, it probably indicates
 the presence of a bug.
 
 | crmd-transition-delay | 0s |
 indexterm:[crmd-transition-delay,Cluster Option]
 indexterm:[Cluster,Option,crmd-transition-delay]
 _Advanced Use Only:_ Delay cluster recovery for the configured interval to
 allow for additional/related events to occur. Useful if your configuration is
 sensitive to the order in which ping updates arrive.
 Enabling this option will slow down cluster recovery under
 all conditions.
 
 |default-resource-stickiness  | 0 |
 indexterm:[default-resource-stickiness,Cluster Option]
 indexterm:[Cluster,Option,default-resource-stickiness]
 _Deprecated:_ See <<s-resource-defaults>> instead
 
 | is-managed-default | TRUE |
 indexterm:[is-managed-default,Cluster Option]
 indexterm:[Cluster,Option,is-managed-default]
 _Deprecated:_ See <<s-resource-defaults>> instead
 
 | default-action-timeout | 20s |
 indexterm:[default-action-timeout,Cluster Option]
 indexterm:[Cluster,Option,default-action-timeout]
 _Deprecated:_ See <<s-operation-defaults>> instead
 
 |=========================================================
 
 === Querying and Setting Cluster Options ===
 
 indexterm:[Querying,Cluster Option]
 indexterm:[Setting,Cluster Option]
 indexterm:[Cluster,Querying Options]
 indexterm:[Cluster,Setting Options]
 
 Cluster options can be queried and modified using the `crm_attribute` tool. To
 get the current value of +cluster-delay+, you can run:
 
 ----
 # crm_attribute --query --name cluster-delay
 ----
 
 which is more simply written as:
 
 ----
 # crm_attribute -G -n cluster-delay
 ----
 
 If a value is found, you'll see a result like this:
 
 ----
 # crm_attribute -G -n cluster-delay
 scope=crm_config name=cluster-delay value=60s
 ----
 
 If no value is found, the tool will display an error:
 
 ----
 # crm_attribute -G -n clusta-deway
 scope=crm_config name=clusta-deway value=(null)
 Error performing operation: No such device or address
 ----
 
 To use a different value (for example, 30 seconds), simply run:
 
 ----
 # crm_attribute --name cluster-delay --update 30s
 ----
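 
 You can then confirm the change by querying the option again; the new value
 should be reported:
 
 ----
 # crm_attribute -G -n cluster-delay
 scope=crm_config name=cluster-delay value=30s
 ----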
 
 To revert to the cluster's default, delete the explicitly set value, for example:
 
 ----
 # crm_attribute --name cluster-delay --delete
 Deleted crm_config option: id=cib-bootstrap-options-cluster-delay name=cluster-delay
 ----
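 
 Querying the option after deleting it reports no value (the same error shown
 earlier), even though the cluster is now using its built-in default:
 
 ----
 # crm_attribute -G -n cluster-delay
 scope=crm_config name=cluster-delay value=(null)
 Error performing operation: No such device or address
 ----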
 
 === When Options Are Listed More Than Once ===
 
 If you ever see something like the following, it means that the option you're modifying is present more than once.
 
 .Deleting an option that is listed twice
 =======
 -------
 # crm_attribute --name batch-limit --delete
 
 Multiple attributes match name=batch-limit in crm_config:
 Value: 50          (set=cib-bootstrap-options, id=cib-bootstrap-options-batch-limit)
 Value: 100         (set=custom, id=custom-batch-limit)
 Please choose from one of the matches above and supply the 'id' with --id
 -------
 =======
 
 In such cases, follow the on-screen instructions to perform the
 requested action.  To determine which value is currently being used by
 the cluster, refer to <<ch-rules>>.
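 
 For example, to remove the +batch-limit+ value in the +custom+ set from the
 listing above, you would re-run the command and supply that value's id:
 
 ----
 # crm_attribute --name batch-limit --delete --id custom-batch-limit
 ----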