diff --git a/doc/Pacemaker_Explained/en-US/Ch-Rules.txt b/doc/Pacemaker_Explained/en-US/Ch-Rules.txt
index 8393b71976..dc42a0b16c 100644
--- a/doc/Pacemaker_Explained/en-US/Ch-Rules.txt
+++ b/doc/Pacemaker_Explained/en-US/Ch-Rules.txt
@@ -1,624 +1,640 @@
= Rules =
////
We prefer [[ch-rules]], but older versions of asciidoc don't deal well
with that construct for chapter headings
////
anchor:ch-rules[Chapter 8, Rules]
indexterm:[Resource,Constraint,Rule]
Rules can be used to make your configuration more dynamic. One common
example is to set one value for +resource-stickiness+ during working
hours, to prevent resources from being moved back to their most
preferred location, and another on weekends when no-one is around to
notice an outage.

Another use of rules might be to assign machines to different
processing groups (using a node attribute) based on time and to then
use that attribute when creating location constraints.

Each rule can contain a number of expressions, date-expressions and
even other rules. The results of the expressions are combined based
on the rule's +boolean-op+ field to determine if the rule ultimately
evaluates to +true+ or +false+. What happens next depends on the
context in which the rule is being used.
== Rule Properties ==
.Properties of a Rule
[width="95%",cols="2m,1,5<",options="header",align="center"]
|=========================================================
|Field
|Default
|Description
|id
|
|A unique name for the rule (required)
indexterm:[id,Constraint Rule]
indexterm:[Constraint,Rule,id]
|role
|+Started+
|Limits the rule to apply only when the resource is in the specified
role. Allowed values are +Started+, +Slave+, and +Master+. A rule
with +role="Master"+ cannot determine the initial location of a
clone instance and will only affect which of the active instances
will be promoted.
indexterm:[role,Constraint Rule]
indexterm:[Constraint,Rule,role]
|score
|
|The score to apply if the rule evaluates to +true+. Limited to use in
rules that are part of location constraints.
indexterm:[score,Constraint Rule]
indexterm:[Constraint,Rule,score]
|score-attribute
|
|The node attribute to look up and use as a score if the rule
evaluates to +true+. Limited to use in rules that are part of
location constraints.
indexterm:[score-attribute,Constraint Rule]
indexterm:[Constraint,Rule,score-attribute]
|boolean-op
|+and+
|How to combine the result of multiple expression objects. Allowed
values are +and+ and +or+.
indexterm:[boolean-op,Constraint Rule]
indexterm:[Constraint,Rule,boolean-op]
|=========================================================
== Node Attribute Expressions ==
indexterm:[Resource,Constraint,Attribute Expression]
Expression objects are used to control a resource based on the
attributes defined by a node or nodes.
.Properties of an Expression
[width="95%",cols="2m,1,5>
+
+|#id
+|Node ID
|#kind
|Node type. Possible values are +cluster+, +remote+, and +container+. Kind is
+remote+ for Pacemaker Remote nodes created with the +ocf:pacemaker:remote+
- resource, and +container+ for Pacemaker Remote guest nodes (a legacy name
- unrelated to the now-common use of "container" for resource isolation).
+ resource, and +container+ for Pacemaker Remote guest nodes and bundle nodes
'(since 1.1.13)'
+|#is_dc
+|"true" if this node is a Designated Controller (DC), "false" otherwise
+
+|#cluster-name
+|The value of the +cluster-name+ cluster property, if set
+
+|#site-name
+|The value of the +site-name+ cluster property, if set, otherwise identical to
+ +#cluster-name+
+
+|#role
+|The role the relevant multistate resource has on this node. Valid only within
+ a rule for a location constraint for a multistate resource.
+
|#ra-version
|The installed version of the resource agent on the node, as defined
by the +version+ attribute of the +resource-agent+ tag in the agent's
metadata. Valid only within rules controlling resource options. This can be
useful during rolling upgrades of a backward-incompatible resource agent.
'(coming in 1.1.18)'
|=========================================================
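
For example, an expression (inside a rule) that evaluates to +true+ only on
Pacemaker Remote nodes could test the built-in +#kind+ attribute, along these
lines (a sketch; the +id+ is illustrative):

[source,XML]
-------
<expression id="expr-is-remote" attribute="#kind" operation="eq" value="remote"/>
-------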
== Time- and Date-Based Expressions ==
indexterm:[Time Based Expressions]
indexterm:[Resource,Constraint,Date/Time Expression]
As the name suggests, +date_expressions+ are used to control a
resource or cluster option based on the current date/time. They may
contain an optional +date_spec+ and/or +duration+ object depending on
the context.
.Properties of a Date Expression
[width="95%",cols="2m,5
----
====
.True if now is any time in the year 2005
====
[source,XML]
----
----
====
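
One way to express this is an +in_range+ comparison with a +start+ date and a
one-year +duration+ (a sketch; the +id+ values are illustrative):

[source,XML]
-------
<rule id="rule1" score="INFINITY">
   <date_expression id="date_expr1" start="2005-01-01" operation="in_range">
    <duration id="duration1" years="1"/>
   </date_expression>
</rule>
-------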
.9am-5pm Monday-Friday
====
[source,XML]
-------
-------
====
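
A +date_spec+ comparison along these lines would match that window (a sketch;
the +id+ values are illustrative):

[source,XML]
-------
<rule id="rule2" score="INFINITY">
   <date_expression id="date_expr2" operation="date_spec">
    <date_spec id="date_spec2" hours="9-16" weekdays="1-5"/>
   </date_expression>
</rule>
-------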
Please note that the +16+ matches up to +16:59:59+, as the numeric
value (hour) still matches!
.9am-6pm Monday through Friday or anytime Saturday
====
[source,XML]
-------
-------
====
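
Combining two date expressions with +boolean-op="or"+ covers both cases, for
example (illustrative +id+ values):

[source,XML]
-------
<rule id="rule3" score="INFINITY" boolean-op="or">
   <date_expression id="date_expr3-week" operation="date_spec">
    <date_spec id="date_spec3-week" hours="9-17" weekdays="1-5"/>
   </date_expression>
   <date_expression id="date_expr3-sat" operation="date_spec">
    <date_spec id="date_spec3-sat" weekdays="6"/>
   </date_expression>
</rule>
-------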
.9am-5pm or 9pm-12am Monday through Friday
====
[source,XML]
-------
-------
====
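
Because both time windows must also be restricted to weekdays, a nested rule
can be used, roughly like this (illustrative +id+ values):

[source,XML]
-------
<rule id="rule4" score="INFINITY" boolean-op="and">
   <rule id="rule4-hours" score="0" boolean-op="or">
    <date_expression id="date_expr4-day" operation="date_spec">
     <date_spec id="date_spec4-day" hours="9-16"/>
    </date_expression>
    <date_expression id="date_expr4-evening" operation="date_spec">
     <date_spec id="date_spec4-evening" hours="21-23"/>
    </date_expression>
   </rule>
   <date_expression id="date_expr4-weekdays" operation="date_spec">
    <date_spec id="date_spec4-weekdays" weekdays="1-5"/>
   </date_expression>
</rule>
-------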
.Mondays in March 2005
====
[source,XML]
-------
-------
====
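
This can combine a +date_spec+ for Mondays with an +in_range+ check for March
2005, for example (illustrative +id+ values):

[source,XML]
-------
<rule id="rule5" score="INFINITY" boolean-op="and">
   <date_expression id="date_expr5-mondays" operation="date_spec">
    <date_spec id="date_spec5-mondays" weekdays="1"/>
   </date_expression>
   <date_expression id="date_expr5-march" operation="in_range"
     start="2005-03-01" end="2005-04-01"/>
</rule>
-------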
[NOTE]
======
Because no time is specified with the above dates, 00:00:00 is implied. This
means that the range includes all of 2005-03-01 but none of 2005-04-01.
You may wish to write +end="2005-03-31T23:59:59"+ to avoid confusion.
======
.A full moon on Friday the 13th
=====
[source,XML]
-------
-------
=====
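
A single +date_spec+ can combine all three conditions, for example
(illustrative +id+ values; a +moon+ value of 4 means a full moon):

[source,XML]
-------
<rule id="rule6" score="INFINITY">
   <date_expression id="date_expr6" operation="date_spec">
    <date_spec id="date_spec6" weekdays="5" monthdays="13" moon="4"/>
   </date_expression>
</rule>
-------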
== Using Rules to Determine Resource Location ==
indexterm:[Rule,Determine Resource Location]
indexterm:[Resource,Location,Determine by Rules]
A location constraint may contain rules. When the constraint's outermost
rule evaluates to +false+, the cluster treats the constraint as if it were not
there. When the rule evaluates to +true+, the node's preference for running
the resource is updated with the score associated with the rule.
If this sounds familiar, it is because you have been using a simplified
syntax for location constraint rules already. Consider the following
location constraint:
.Prevent myApacheRsc from running on c001n03
=====
[source,XML]
-------
-------
=====
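
In its simple form, such a constraint needs only a resource, a node, and a
score, for example (the constraint +id+ is illustrative):

[source,XML]
-------
<rsc_location id="dont-run-apache-on-c001n03" rsc="myApacheRsc"
              score="-INFINITY" node="c001n03"/>
-------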
This constraint can be more verbosely written as:
.Prevent myApacheRsc from running on c001n03 - expanded version
=====
[source,XML]
-------
-------
=====
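
The expanded form wraps the same score in a rule containing an explicit
expression against the built-in +#uname+ node attribute, roughly like this
(illustrative +id+ values):

[source,XML]
-------
<rsc_location id="dont-run-apache-on-c001n03" rsc="myApacheRsc">
   <rule id="dont-run-apache-rule" score="-INFINITY">
    <expression id="dont-run-apache-expr" attribute="#uname"
      operation="eq" value="c001n03"/>
   </rule>
</rsc_location>
-------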
The advantage of using the expanded form is that one can then add
extra clauses to the rule, such as limiting the rule so that it
applies only during certain times of the day or days of the week.
=== Location Rules Based on Other Node Properties ===
The expanded form allows us to match on node properties other than the node's name.
If we rated each machine's CPU power such that the cluster had the
following nodes section:
.A sample nodes section for use with score-attribute
=====
[source,XML]
-------
-------
=====
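
For instance, a +cpu_mips+ node attribute could be defined for each node (a
sketch; the node +id+ values are illustrative):

[source,XML]
-------
<nodes>
   <node id="uuid1" uname="c001n01">
    <instance_attributes id="uuid1-custom_attrs">
     <nvpair id="uuid1-cpu_mips" name="cpu_mips" value="1234"/>
    </instance_attributes>
   </node>
   <node id="uuid2" uname="c001n02">
    <instance_attributes id="uuid2-custom_attrs">
     <nvpair id="uuid2-cpu_mips" name="cpu_mips" value="5678"/>
    </instance_attributes>
   </node>
</nodes>
-------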
then we could prevent resources from running on underpowered machines with this rule:
[source,XML]
-------
-------
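
Placed inside a location constraint, a rule along these lines would do it (a
sketch; the +id+ values and the +3000+ threshold are illustrative):

[source,XML]
-------
<rule id="need-more-power-rule" score="-INFINITY">
   <expression id="need-more-power-expr" attribute="cpu_mips"
     operation="lt" value="3000"/>
</rule>
-------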
=== Using +score-attribute+ Instead of +score+ ===
When using +score-attribute+ instead of +score+, each node matched by
the rule has its score adjusted differently, according to its value
for the named node attribute. Thus, in the previous example, if a
rule used +score-attribute="cpu_mips"+, +c001n01+ would have its
preference to run the resource increased by +1234+ whereas +c001n02+
would have its preference increased by +5678+.
== Using Rules to Control Resource Options ==
Often some cluster nodes will be different from their peers. Sometimes,
these differences -- e.g. the location of a binary or the names of network
interfaces -- require resources to be configured differently depending
on the machine they're hosted on.
By defining multiple +instance_attributes+ objects for the resource
and adding a rule to each, we can easily handle these special cases.
In the example below, +mySpecialRsc+ will use eth1 and port 9999 when
run on +node1+, eth2 and port 8888 when run on +node2+, and default to
eth0 and port 9999 for all other nodes.
.Defining different resource options based on the node name
=====
[source,XML]
-------
-------
=====
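
A configuration with that behaviour might look roughly like this (a sketch;
the +ocf:me:Special+ agent and the +id+ values are illustrative):

[source,XML]
-------
<primitive id="mySpecialRsc" class="ocf" provider="me" type="Special">
   <instance_attributes id="special-node1" score="3">
    <rule id="node1-special-case" score="INFINITY">
     <expression id="node1-special-case-expr" attribute="#uname"
       operation="eq" value="node1"/>
    </rule>
    <nvpair id="node1-interface" name="interface" value="eth1"/>
   </instance_attributes>
   <instance_attributes id="special-node2" score="2">
    <rule id="node2-special-case" score="INFINITY">
     <expression id="node2-special-case-expr" attribute="#uname"
       operation="eq" value="node2"/>
    </rule>
    <nvpair id="node2-interface" name="interface" value="eth2"/>
    <nvpair id="node2-port" name="port" value="8888"/>
   </instance_attributes>
   <instance_attributes id="defaults" score="1">
    <nvpair id="default-interface" name="interface" value="eth0"/>
    <nvpair id="default-port" name="port" value="9999"/>
   </instance_attributes>
</primitive>
-------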
The order in which +instance_attributes+ objects are evaluated is
determined by their score (highest to lowest). If not supplied, score
defaults to zero, and objects with an equal score are processed in
listed order. If the +instance_attributes+ object has no rule
or a +rule+ that evaluates to +true+, then for any parameter the resource does
not yet have a value for, the resource will use the parameter values defined by
the +instance_attributes+.
For example, given the configuration above, if the resource is placed on node1:
. +special-node1+ has the highest score (3) and so is evaluated first;
its rule evaluates to +true+, so +interface+ is set to +eth1+.
. +special-node2+ is evaluated next with score 2, but its rule evaluates to +false+,
so it is ignored.
. +defaults+ is evaluated last with score 1, and has no rule, so its values
are examined; +interface+ is already defined, so the value here is not used,
but +port+ is not yet defined, so +port+ is set to +9999+.
== Using Rules to Control Cluster Options ==
indexterm:[Rule,Controlling Cluster Options]
indexterm:[Cluster,Setting Options with Rules]
Controlling cluster options is achieved in much the same manner as
specifying different resource options on different nodes.
The difference is that because they are cluster options, one cannot
(or should not, because they won't work) use attribute-based
expressions. The following example illustrates how to set a different
+resource-stickiness+ value during and outside work hours. This
allows resources to automatically move back to their most preferred
hosts, but at a time that (in theory) does not interfere with business
activities.
.Change +resource-stickiness+ during working hours
=====
[source,XML]
-------
-------
=====
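
One possible layout uses two +meta_attributes+ blocks in +rsc_defaults+, the
first guarded by a working-hours rule (a sketch; the +id+ values are
illustrative):

[source,XML]
-------
<rsc_defaults>
   <meta_attributes id="core-hours" score="2">
    <rule id="core-hour-rule" score="0">
     <date_expression id="core-hour-expr" operation="date_spec">
      <date_spec id="core-hour-spec" hours="9-16" weekdays="1-5"/>
     </date_expression>
    </rule>
    <nvpair id="core-stickiness" name="resource-stickiness" value="INFINITY"/>
   </meta_attributes>
   <meta_attributes id="after-hours" score="1">
    <nvpair id="after-stickiness" name="resource-stickiness" value="0"/>
   </meta_attributes>
</rsc_defaults>
-------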
[[s-rules-recheck]]
== Ensuring Time-Based Rules Take Effect ==
A Pacemaker cluster is an event-driven system. As such, it won't
recalculate the best place for resources to run unless something
(like a resource failure or configuration change) happens. This can
mean that a location constraint that only allows resource X to run
between 9am and 5pm is not enforced.
If you rely on time-based rules, the +cluster-recheck-interval+ cluster option
(which defaults to 15 minutes) is essential. This tells the cluster to
periodically recalculate the ideal state of the cluster.
For example, if you set +cluster-recheck-interval="5m"+, then sometime between
09:00 and 09:05 the cluster would notice that it needs to start resource X,
and between 17:00 and 17:05 it would realize that X needed to be stopped.
The timing of the actual start and stop actions depends on what other actions
the cluster may need to perform first.
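
The option itself is an ordinary cluster property, so it can be set with an
+nvpair+ in +crm_config+, for example (the +nvpair+ +id+ is illustrative):

[source,XML]
-------
<crm_config>
   <cluster_property_set id="cib-bootstrap-options">
    <nvpair id="recheck-interval" name="cluster-recheck-interval" value="5m"/>
   </cluster_property_set>
</crm_config>
-------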