diff --git a/doc/sphinx/Pacemaker_Explained/constraints.rst b/doc/sphinx/Pacemaker_Explained/constraints.rst
index 333e4b5431..d39bd540d9 100644
--- a/doc/sphinx/Pacemaker_Explained/constraints.rst
+++ b/doc/sphinx/Pacemaker_Explained/constraints.rst
@@ -1,1067 +1,1087 @@
.. index::
single: constraint
single: resource; constraint
.. _constraints:
Resource Constraints
--------------------
.. index::
single: resource; score
single: node; score
Scores
######
Scores of all kinds are integral to how the cluster works.
Practically everything from moving a resource to deciding which
resource to stop in a degraded cluster is achieved by manipulating
scores in some way.
Scores are calculated per resource and node. Any node with a
negative score for a resource can't run that resource. The cluster
places a resource on the node with the highest score for it.
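One way to inspect the scores the scheduler has computed is ``crm_simulate``; as a sketch, against a live cluster (``-s`` shows allocation scores, ``-L`` uses the live cluster state):

.. code-block:: none

   # crm_simulate -sL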
Infinity Math
_____________
Pacemaker implements **INFINITY** (or equivalently, **+INFINITY**) internally as a
score of 1,000,000. Addition and subtraction with it follow these three basic
rules:
* Any value + **INFINITY** = **INFINITY**
* Any value - **INFINITY** = -**INFINITY**
* **INFINITY** - **INFINITY** = **-INFINITY**
.. note::
What if you want to use a score higher than 1,000,000? Typically this possibility
arises when someone wants to base the score on some external metric that might
go above 1,000,000.
The short answer is you can't.
The long answer is that it is sometimes possible to work around this limitation
creatively. You may be able to set the score to some computed value based on
the external metric rather than use the metric directly. For nodes, you can
store the metric as a node attribute, and query the attribute when computing
the score (possibly as part of a custom resource agent).
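As a sketch, a monitoring script could record the raw metric in a node attribute (``cpu-metric`` is a hypothetical attribute name), leaving a rule or custom agent to translate it into a bounded score later:

.. code-block:: none

   # crm_attribute --node sles-1 --name cpu-metric --update 42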
.. _location-constraint:
.. index::
single: location constraint
single: constraint; location
Deciding Which Nodes a Resource Can Run On
##########################################
*Location constraints* tell the cluster which nodes a resource can run on.
There are two alternative strategies. One way is to say that, by default,
resources can run anywhere, and then the location constraints specify nodes
that are not allowed (an *opt-out* cluster). The other way is to start with
nothing able to run anywhere, and use location constraints to selectively
enable allowed nodes (an *opt-in* cluster).
Whether you should choose opt-in or opt-out depends on your
personal preference and the make-up of your cluster. If most of your
resources can run on most of the nodes, then an opt-out arrangement is
likely to result in a simpler configuration. On the other hand, if
most resources can only run on a small subset of nodes, an opt-in
configuration might be simpler.
.. index::
pair: XML element; rsc_location
single: constraint; rsc_location
Location Properties
___________________
.. table:: **Attributes of a rsc_location Element**
:class: longtable
:widths: 1 1 4
+--------------------+---------+----------------------------------------------------------------------------------------------+
| Attribute | Default | Description |
+====================+=========+==============================================================================================+
| id | | .. index:: |
| | | single: rsc_location; attribute, id |
| | | single: attribute; id (rsc_location) |
| | | single: id; rsc_location attribute |
| | | |
| | | A unique name for the constraint (required) |
+--------------------+---------+----------------------------------------------------------------------------------------------+
| rsc | | .. index:: |
| | | single: rsc_location; attribute, rsc |
| | | single: attribute; rsc (rsc_location) |
| | | single: rsc; rsc_location attribute |
| | | |
| | | The name of the resource to which this constraint |
| | | applies. A location constraint must either have a |
| | | ``rsc``, have a ``rsc-pattern``, or contain at |
| | | least one resource set. |
+--------------------+---------+----------------------------------------------------------------------------------------------+
| rsc-pattern | | .. index:: |
| | | single: rsc_location; attribute, rsc-pattern |
| | | single: attribute; rsc-pattern (rsc_location) |
| | | single: rsc-pattern; rsc_location attribute |
| | | |
| | | A pattern matching the names of resources to which |
| | | this constraint applies. The syntax is the same as |
| | | `POSIX <http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap09.html#tag_09_04>`_ |
| | | extended regular expressions, with the addition of an |
- | | | initial *!* indicating that resources *not* matching |
+ | | | initial ``!`` indicating that resources *not* matching |
| | | the pattern are selected. If the regular expression |
| | | contains submatches, and the constraint is governed by |
| | | a :ref:`rule <rules>`, the submatches can be |
- | | | referenced as **%1** through **%9** in the rule's |
- | | | ``score-attribute`` or a rule expression's ``attribute``. |
- | | | A location constraint must either have a ``rsc``, have a |
- | | | ``rsc-pattern``, or contain at least one resource set. |
+ | | | referenced as ``%1`` through ``%9`` in the rule's |
+ | | | ``score-attribute`` or a rule expression's ``attribute`` |
+ | | | (see :ref:`s-rsc-pattern-rules`). A location constraint |
+ | | | must either have a ``rsc``, have a ``rsc-pattern``, or |
+ | | | contain at least one resource set. |
+--------------------+---------+----------------------------------------------------------------------------------------------+
| node | | .. index:: |
| | | single: rsc_location; attribute, node |
| | | single: attribute; node (rsc_location) |
| | | single: node; rsc_location attribute |
| | | |
| | | The name of the node to which this constraint applies. |
| | | A location constraint must either have a ``node`` and |
| | | ``score``, or contain at least one rule. |
+--------------------+---------+----------------------------------------------------------------------------------------------+
| score | | .. index:: |
| | | single: rsc_location; attribute, score |
| | | single: attribute; score (rsc_location) |
| | | single: score; rsc_location attribute |
| | | |
| | | Positive values indicate a preference for running the |
| | | affected resource(s) on ``node`` -- the higher the value, |
| | | the stronger the preference. Negative values indicate |
| | | the resource(s) should avoid this node (a value of |
| | | **-INFINITY** changes "should" to "must"). A location |
| | | constraint must either have a ``node`` and ``score``, |
| | | or contain at least one rule. |
+--------------------+---------+----------------------------------------------------------------------------------------------+
| resource-discovery | always | .. index:: |
| | | single: rsc_location; attribute, resource-discovery |
| | | single: attribute; resource-discovery (rsc_location) |
| | | single: resource-discovery; rsc_location attribute |
| | | |
| | | Whether Pacemaker should perform resource discovery |
| | | (that is, check whether the resource is already running) |
| | | for this resource on this node. This should normally be |
| | | left as the default, so that rogue instances of a |
| | | service can be stopped when they are running where they |
| | | are not supposed to be. However, there are two |
| | | situations where disabling resource discovery is a good |
| | | idea: when a service is not installed on a node, |
| | | discovery might return an error (properly written OCF |
| | | agents will not, so this is usually only seen with other |
| | | agent types); and when Pacemaker Remote is used to scale |
| | | a cluster to hundreds of nodes, limiting resource |
| | | discovery to allowed nodes can significantly boost |
| | | performance. |
| | | |
| | | * ``always:`` Always perform resource discovery for |
| | | the specified resource on this node. |
| | | |
| | | * ``never:`` Never perform resource discovery for the |
| | | specified resource on this node. This option should |
| | | generally be used with a -INFINITY score, although |
| | | that is not strictly required. |
| | | |
| | | * ``exclusive:`` Perform resource discovery for the |
| | | specified resource only on this node (and other nodes |
| | | similarly marked as ``exclusive``). Multiple location |
| | | constraints using ``exclusive`` discovery for the |
| | | same resource across different nodes creates a subset |
| | | of nodes resource-discovery is exclusive to. If a |
| | | resource is marked for ``exclusive`` discovery on one |
| | | or more nodes, that resource is only allowed to be |
| | | placed within that subset of nodes. |
+--------------------+---------+----------------------------------------------------------------------------------------------+
.. warning::
Setting ``resource-discovery`` to ``never`` or ``exclusive`` removes Pacemaker's
ability to detect and stop unwanted instances of a service running
where it's not supposed to be. It is up to the system administrator (you!)
to make sure that the service can *never* be active on nodes without
``resource-discovery`` (such as by leaving the relevant software uninstalled).
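For example, to ban the **Database** resource from a node where its software is not installed, while also skipping discovery there (the node name ``node3`` is illustrative):

.. topic:: Location constraint that also disables resource discovery

   .. code-block:: xml

      <rsc_location id="ban-db-node3" rsc="Database" node="node3"
                    score="-INFINITY" resource-discovery="never"/>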
.. index::
single: Asymmetrical Clusters
single: Opt-In Clusters
Asymmetrical "Opt-In" Clusters
______________________________
To create an opt-in cluster, start by preventing resources from running anywhere
by default:
.. code-block:: none
# crm_attribute --name symmetric-cluster --update false
Then start enabling nodes. The following fragment says that the web
server prefers **sles-1**, the database prefers **sles-2** and both can
fail over to **sles-3** if their most preferred node fails.
.. topic:: Opt-in location constraints for two resources
.. code-block:: xml
<constraints>
<rsc_location id="loc-1" rsc="Webserver" node="sles-1" score="200"/>
<rsc_location id="loc-2" rsc="Webserver" node="sles-3" score="0"/>
<rsc_location id="loc-3" rsc="Database" node="sles-2" score="200"/>
<rsc_location id="loc-4" rsc="Database" node="sles-3" score="0"/>
</constraints>
.. index::
single: Symmetrical Clusters
single: Opt-Out Clusters
Symmetrical "Opt-Out" Clusters
______________________________
To create an opt-out cluster, start by allowing resources to run
anywhere by default:
.. code-block:: none
# crm_attribute --name symmetric-cluster --update true
Then start disabling nodes. The following fragment is the equivalent
of the above opt-in configuration.
.. topic:: Opt-out location constraints for two resources
.. code-block:: xml
<constraints>
<rsc_location id="loc-1" rsc="Webserver" node="sles-1" score="200"/>
<rsc_location id="loc-2-do-not-run" rsc="Webserver" node="sles-2" score="-INFINITY"/>
<rsc_location id="loc-3-do-not-run" rsc="Database" node="sles-1" score="-INFINITY"/>
<rsc_location id="loc-4" rsc="Database" node="sles-2" score="200"/>
</constraints>
.. _node-score-equal:
What if Two Nodes Have the Same Score
_____________________________________
If two nodes have the same score, then the cluster will choose one.
This choice may seem random and may not be what was intended; however,
the cluster was not given enough information to know any better.
.. topic:: Constraints where a resource prefers two nodes equally
.. code-block:: xml
<constraints>
<rsc_location id="loc-1" rsc="Webserver" node="sles-1" score="INFINITY"/>
<rsc_location id="loc-2" rsc="Webserver" node="sles-2" score="INFINITY"/>
<rsc_location id="loc-3" rsc="Database" node="sles-1" score="500"/>
<rsc_location id="loc-4" rsc="Database" node="sles-2" score="300"/>
<rsc_location id="loc-5" rsc="Database" node="sles-2" score="200"/>
</constraints>
In the example above, assuming no other constraints and an inactive
cluster, **Webserver** would probably be placed on **sles-1** and **Database** on
**sles-2**. It would likely have placed **Webserver** based on the node's
uname and **Database** based on the desire to spread the resource load
evenly across the cluster. However, other factors can also be involved
in more complex configurations.
+.. _s-rsc-pattern:
+
+Specifying locations using pattern matching
+___________________________________________
+
+A location constraint can affect all resources whose IDs match a given pattern.
+The following example bans resources named **ip-httpd**, **ip-asterisk**,
+**ip-gateway**, etc., from **node1**.
+
+.. topic:: Location constraint banning all resources matching a pattern from one node
+
+ .. code-block:: xml
+
+ <constraints>
+ <rsc_location id="ban-ips-from-node1" rsc-pattern="ip-.*" node="node1" score="-INFINITY"/>
+ </constraints>
+
+
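An initial ``!`` inverts the pattern. For example, to ban every resource whose name does *not* begin with **ip-** from **node1** (a sketch; whether this is sensible depends on your naming scheme):

.. topic:: Location constraint banning all resources *not* matching a pattern from one node

   .. code-block:: xml

      <constraints>
        <rsc_location id="ban-non-ips-from-node1" rsc-pattern="!ip-.*" node="node1" score="-INFINITY"/>
      </constraints>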
.. index::
single: constraint; ordering
single: resource; start order
+
.. _s-resource-ordering:
Specifying the Order in which Resources Should Start/Stop
#########################################################
*Ordering constraints* tell the cluster the order in which certain
resource actions should occur.
.. important::
Ordering constraints affect *only* the ordering of resource actions;
they do *not* require that the resources be placed on the
same node. If you want resources to be started on the same node
*and* in a specific order, you need both an ordering constraint *and*
a colocation constraint (see :ref:`s-resource-colocation`), or
alternatively, a group (see :ref:`group-resources`).
.. index::
pair: XML element; rsc_order
pair: constraint; rsc_order
Ordering Properties
___________________
.. table:: **Attributes of a rsc_order Element**
:class: longtable
:widths: 1 2 4
+--------------+----------------------------+-------------------------------------------------------------------+
| Field | Default | Description |
+==============+============================+===================================================================+
| id | | .. index:: |
| | | single: rsc_order; attribute, id |
| | | single: attribute; id (rsc_order) |
| | | single: id; rsc_order attribute |
| | | |
| | | A unique name for the constraint |
+--------------+----------------------------+-------------------------------------------------------------------+
| first | | .. index:: |
| | | single: rsc_order; attribute, first |
| | | single: attribute; first (rsc_order) |
| | | single: first; rsc_order attribute |
| | | |
| | | Name of the resource that the ``then`` resource |
| | | depends on |
+--------------+----------------------------+-------------------------------------------------------------------+
| then | | .. index:: |
| | | single: rsc_order; attribute, then |
| | | single: attribute; then (rsc_order) |
| | | single: then; rsc_order attribute |
| | | |
| | | Name of the dependent resource |
+--------------+----------------------------+-------------------------------------------------------------------+
| first-action | start | .. index:: |
| | | single: rsc_order; attribute, first-action |
| | | single: attribute; first-action (rsc_order) |
| | | single: first-action; rsc_order attribute |
| | | |
| | | The action that the ``first`` resource must complete |
| | | before ``then-action`` can be initiated for the ``then`` |
| | | resource. Allowed values: ``start``, ``stop``, |
| | | ``promote``, ``demote``. |
+--------------+----------------------------+-------------------------------------------------------------------+
| then-action | value of ``first-action`` | .. index:: |
| | | single: rsc_order; attribute, then-action |
| | | single: attribute; then-action (rsc_order) |
|              |                            |    single: then-action; rsc_order attribute                       |
| | | |
| | | The action that the ``then`` resource can execute only |
| | | after the ``first-action`` on the ``first`` resource has |
| | | completed. Allowed values: ``start``, ``stop``, |
| | | ``promote``, ``demote``. |
+--------------+----------------------------+-------------------------------------------------------------------+
| kind | Mandatory | .. index:: |
| | | single: rsc_order; attribute, kind |
| | | single: attribute; kind (rsc_order) |
| | | single: kind; rsc_order attribute |
| | | |
| | | How to enforce the constraint. Allowed values: |
| | | |
| | | * ``Mandatory:`` ``then-action`` will never be initiated |
| | | for the ``then`` resource unless and until ``first-action`` |
| | | successfully completes for the ``first`` resource. |
| | | |
| | | * ``Optional:`` The constraint applies only if both specified |
| | | resource actions are scheduled in the same transition |
| | | (that is, in response to the same cluster state). This |
| | | means that ``then-action`` is allowed on the ``then`` |
| | | resource regardless of the state of the ``first`` resource, |
| | | but if both actions happen to be scheduled at the same time, |
| | | they will be ordered. |
| | | |
| | | * ``Serialize:`` Ensure that the specified actions are never |
| | | performed concurrently for the specified resources. |
| | | ``First-action`` and ``then-action`` can be executed in either |
| | | order, but one must complete before the other can be initiated. |
| | | An example use case is when resource start-up puts a high load |
| | | on the host. |
+--------------+----------------------------+-------------------------------------------------------------------+
| symmetrical | TRUE for ``Mandatory`` and | .. index:: |
| | ``Optional`` kinds. FALSE | single: rsc_order; attribute, symmetrical |
|              | for ``Serialize`` kind.    |    single: attribute; symmetrical (rsc_order)                     |
| | | single: symmetrical; rsc_order attribute |
| | | |
| | | If true, the reverse of the constraint applies for the |
| | | opposite action (for example, if B starts after A starts, |
| | | then B stops before A stops). ``Serialize`` orders cannot |
| | | be symmetrical. |
+--------------+----------------------------+-------------------------------------------------------------------+
``Promote`` and ``demote`` apply to :ref:`promotable <s-resource-promotable>`
clone resources.
Optional and mandatory ordering
_______________________________
Here is an example of ordering constraints where **Database** *must* start before
**Webserver**, and **IP** *should* start before **Webserver** if they both need to be
started:
.. topic:: Optional and mandatory ordering constraints
.. code-block:: xml
<constraints>
<rsc_order id="order-1" first="IP" then="Webserver" kind="Optional"/>
<rsc_order id="order-2" first="Database" then="Webserver" kind="Mandatory" />
</constraints>
Because the above example lets ``symmetrical`` default to TRUE, **Webserver**
must be stopped before **Database** can be stopped, and **Webserver** should be
stopped before **IP** if they both need to be stopped.
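If the reverse ordering is not wanted, it can be switched off explicitly (a sketch reusing the resources above):

.. topic:: Ordering constraint without the implied reverse ordering

   .. code-block:: xml

      <rsc_order id="order-3" first="Database" then="Webserver" kind="Mandatory" symmetrical="false"/>

With this, **Database** must still start before **Webserver**, but stopping **Database** no longer waits for **Webserver** to stop first.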
.. index::
single: colocation
single: constraint; colocation
single: resource; location relative to other resources
.. _s-resource-colocation:
Placing Resources Relative to other Resources
#############################################
*Colocation constraints* tell the cluster that the location of one resource
depends on the location of another one.
Colocation has an important side-effect: it affects the order in which
resources are assigned to a node. Think about it: You can't place A relative to
B unless you know where B is [#]_.
So when you are creating colocation constraints, it is important to
consider whether you should colocate A with B, or B with A.
.. important::
Colocation constraints affect *only* the placement of resources; they do *not*
require that the resources be started in a particular order. If you want
resources to be started on the same node *and* in a specific order, you need
both an ordering constraint (see :ref:`s-resource-ordering`) *and* a colocation
constraint, or alternatively, a group (see :ref:`group-resources`).
.. index::
pair: XML element; rsc_colocation
single: constraint; rsc_colocation
Colocation Properties
_____________________
.. table:: **Attributes of a rsc_colocation Constraint**
:class: longtable
:widths: 2 2 5
+----------------+----------------+--------------------------------------------------------+
| Field | Default | Description |
+================+================+========================================================+
| id | | .. index:: |
| | | single: rsc_colocation; attribute, id |
| | | single: attribute; id (rsc_colocation) |
| | | single: id; rsc_colocation attribute |
| | | |
| | | A unique name for the constraint (required). |
+----------------+----------------+--------------------------------------------------------+
| rsc | | .. index:: |
| | | single: rsc_colocation; attribute, rsc |
| | | single: attribute; rsc (rsc_colocation) |
| | | single: rsc; rsc_colocation attribute |
| | | |
| | | The name of a resource that should be located |
| | | relative to ``with-rsc``. A colocation constraint must |
| | | either contain at least one |
| | | :ref:`resource set <s-resource-sets>`, or specify both |
| | | ``rsc`` and ``with-rsc``. |
+----------------+----------------+--------------------------------------------------------+
| with-rsc | | .. index:: |
| | | single: rsc_colocation; attribute, with-rsc |
| | | single: attribute; with-rsc (rsc_colocation) |
| | | single: with-rsc; rsc_colocation attribute |
| | | |
| | | The name of the resource used as the colocation |
| | | target. The cluster will decide where to put this |
| | | resource first and then decide where to put ``rsc``. |
| | | A colocation constraint must either contain at least |
| | | one :ref:`resource set <s-resource-sets>`, or specify |
| | | both ``rsc`` and ``with-rsc``. |
+----------------+----------------+--------------------------------------------------------+
| node-attribute | #uname | .. index:: |
| | | single: rsc_colocation; attribute, node-attribute |
| | | single: attribute; node-attribute (rsc_colocation) |
| | | single: node-attribute; rsc_colocation attribute |
| | | |
| | | If ``rsc`` and ``with-rsc`` are specified, this node |
| | | attribute must be the same on the node running ``rsc`` |
| | | and the node running ``with-rsc`` for the constraint |
| | | to be satisfied. (For details, see |
| | | :ref:`s-coloc-attribute`.) |
+----------------+----------------+--------------------------------------------------------+
| score | 0 | .. index:: |
| | | single: rsc_colocation; attribute, score |
| | | single: attribute; score (rsc_colocation) |
| | | single: score; rsc_colocation attribute |
| | | |
| | | Positive values indicate the resources should run on |
| | | the same node. Negative values indicate the resources |
| | | should run on different nodes. Values of |
| | | +/- ``INFINITY`` change "should" to "must". |
+----------------+----------------+--------------------------------------------------------+
| rsc-role | Started | .. index:: |
| | | single: clone; ordering constraint, rsc-role |
| | | single: ordering constraint; rsc-role (clone) |
| | | single: rsc-role; clone ordering constraint |
| | | |
| | | If ``rsc`` and ``with-rsc`` are specified, and ``rsc`` |
| | | is a :ref:`promotable clone <s-resource-promotable>`, |
| | | the constraint applies only to ``rsc`` instances in |
| | | this role. Allowed values: ``Started``, ``Promoted``, |
| | | ``Unpromoted``. For details, see |
| | | :ref:`promotable-clone-constraints`. |
+----------------+----------------+--------------------------------------------------------+
| with-rsc-role | Started | .. index:: |
| | | single: clone; ordering constraint, with-rsc-role |
| | | single: ordering constraint; with-rsc-role (clone) |
| | | single: with-rsc-role; clone ordering constraint |
| | | |
| | | If ``rsc`` and ``with-rsc`` are specified, and |
| | | ``with-rsc`` is a |
| | | :ref:`promotable clone <s-resource-promotable>`, the |
| | | constraint applies only to ``with-rsc`` instances in |
| | | this role. Allowed values: ``Started``, ``Promoted``, |
| | | ``Unpromoted``. For details, see |
| | | :ref:`promotable-clone-constraints`. |
+----------------+----------------+--------------------------------------------------------+
| influence | value of | .. index:: |
| | ``critical`` | single: rsc_colocation; attribute, influence |
| | meta-attribute | single: attribute; influence (rsc_colocation) |
| | for ``rsc`` | single: influence; rsc_colocation attribute |
| | | |
| | | Whether to consider the location preferences of |
| | | ``rsc`` when ``with-rsc`` is already active. Allowed |
| | | values: ``true``, ``false``. For details, see |
| | | :ref:`s-coloc-influence`. *(since 2.1.0)* |
+----------------+----------------+--------------------------------------------------------+
Mandatory Placement
___________________
Mandatory placement occurs when the constraint's score is
**+INFINITY** or **-INFINITY**. In such cases, if the constraint can't be
satisfied, then the **rsc** resource is not permitted to run. For
``score=INFINITY``, this includes cases where the ``with-rsc`` resource is
not active.
If you need resource **A** to always run on the same machine as
resource **B**, you would add the following constraint:
.. topic:: Mandatory colocation constraint for two resources
.. code-block:: xml
<rsc_colocation id="colocate" rsc="A" with-rsc="B" score="INFINITY"/>
Remember, because **INFINITY** was used, if **B** can't run on any
of the cluster nodes (for whatever reason) then **A** will not
be allowed to run. Whether **A** is running or not has no effect on **B**.
Alternatively, you may want the opposite -- that **A** *cannot*
run on the same machine as **B**. In this case, use ``score="-INFINITY"``.
.. topic:: Mandatory anti-colocation constraint for two resources
.. code-block:: xml
<rsc_colocation id="anti-colocate" rsc="A" with-rsc="B" score="-INFINITY"/>
Again, by specifying **-INFINITY**, the constraint is binding. So if the
only place left to run is where **B** already is, then **A** may not run anywhere.
As with **INFINITY**, **B** can run even if **A** is stopped. However, in this
case **A** also can run if **B** is stopped, because it still meets the
constraint of **A** and **B** not running on the same node.
Advisory Placement
__________________
If mandatory placement is about "must" and "must not", then advisory
placement is the "I'd prefer if" alternative.
For colocation constraints with scores greater than **-INFINITY** and less than
**INFINITY**, the cluster will try to accommodate your wishes, but may ignore
them if other factors outweigh the colocation score. Those factors might
include other constraints, resource stickiness, failure thresholds, whether
other resources would be prevented from being active, etc.
.. topic:: Advisory colocation constraint for two resources
.. code-block:: xml
<rsc_colocation id="colocate-maybe" rsc="A" with-rsc="B" score="500"/>
.. _s-coloc-attribute:
Colocation by Node Attribute
____________________________
The ``node-attribute`` property of a colocation constraint allows you to express
the requirement, "these resources must be on similar nodes".
As an example, imagine that you have two Storage Area Networks (SANs) that are
not controlled by the cluster, and each node is connected to one or the other.
You may have two resources **r1** and **r2** such that **r2** needs to use the same
SAN as **r1**, but doesn't necessarily have to be on the same exact node.
In such a case, you could define a :ref:`node attribute <node_attributes>` named
**san**, with the value **san1** or **san2** on each node as appropriate. Then, you
could colocate **r2** with **r1** using ``node-attribute`` set to **san**.
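Putting it together, the constraint might look like this (a sketch using the names above):

.. topic:: Colocation by node attribute

   .. code-block:: xml

      <rsc_colocation id="colocate-same-san" rsc="r2" with-rsc="r1"
                      score="INFINITY" node-attribute="san"/>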
.. _s-coloc-influence:
Colocation Influence
____________________
By default, if A is colocated with B, the cluster will take into account A's
preferences when deciding where to place B, to maximize the chance that both
resources can run.
For a detailed look at exactly how this occurs, see
`Colocation Explained <http://clusterlabs.org/doc/Colocation_Explained.pdf>`_.
However, if ``influence`` is set to ``false`` in the colocation constraint,
this will happen only if B is inactive and needing to be started. If B is
already active, A's preferences will have no effect on placing B.
As an example of when this would be desirable, consider a nonessential
reporting tool colocated with a resource-intensive service
that takes a long time to start. If the reporting tool fails enough times to
reach its migration threshold, by default the cluster will want to move both
resources to another node if possible. Setting ``influence`` to ``false`` on
the colocation constraint would mean that the reporting tool would be stopped
in this situation instead, to avoid forcing the service to move.
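A sketch of that scenario (the resource names are illustrative):

.. topic:: Colocation constraint where the dependent resource has no influence

   .. code-block:: xml

      <rsc_colocation id="colocate-reports" rsc="ReportingTool"
                      with-rsc="BigService" score="INFINITY" influence="false"/>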
The ``critical`` resource meta-attribute is a convenient way to specify the
default for all colocation constraints and groups involving a particular
resource.
.. note::
If a noncritical resource is a member of a group, all later members of the
group will be treated as noncritical, even if they are marked as (or left to
default to) critical.
.. _s-resource-sets:
Resource Sets
#############
.. index::
single: constraint; resource set
single: resource; resource set
*Resource sets* allow multiple resources to be affected by a single constraint.
.. topic:: A set of 3 resources
.. code-block:: xml
<resource_set id="resource-set-example">
<resource_ref id="A"/>
<resource_ref id="B"/>
<resource_ref id="C"/>
</resource_set>
Resource sets are valid inside ``rsc_location``, ``rsc_order``
(see :ref:`s-resource-sets-ordering`), ``rsc_colocation``
(see :ref:`s-resource-sets-colocation`), and ``rsc_ticket``
(see :ref:`ticket-constraints`) constraints.
A resource set has a number of properties that can be set, though not all
have an effect in all contexts.
.. index::
pair: XML element; resource_set
.. table:: **Attributes of a resource_set Element**
:class: longtable
:widths: 2 2 5
+-------------+------------------+--------------------------------------------------------+
| Field | Default | Description |
+=============+==================+========================================================+
| id | | .. index:: |
| | | single: resource_set; attribute, id |
| | | single: attribute; id (resource_set) |
| | | single: id; resource_set attribute |
| | | |
| | | A unique name for the set (required) |
+-------------+------------------+--------------------------------------------------------+
| sequential | true | .. index:: |
| | | single: resource_set; attribute, sequential |
| | | single: attribute; sequential (resource_set) |
| | | single: sequential; resource_set attribute |
| | | |
| | | Whether the members of the set must be acted on in |
| | | order. Meaningful within ``rsc_order`` and |
| | | ``rsc_colocation``. |
+-------------+------------------+--------------------------------------------------------+
| require-all | true | .. index:: |
| | | single: resource_set; attribute, require-all |
| | | single: attribute; require-all (resource_set) |
| | | single: require-all; resource_set attribute |
| | | |
| | | Whether all members of the set must be active before |
| | | continuing. With the current implementation, the |
| | | cluster may continue even if only one member of the |
| | | set is started, but if more than one member of the set |
| | | is starting at the same time, the cluster will still |
| | | wait until all of those have started before continuing |
| | | (this may change in future versions). Meaningful |
| | | within ``rsc_order``. |
+-------------+------------------+--------------------------------------------------------+
| role | | .. index:: |
| | | single: resource_set; attribute, role |
| | | single: attribute; role (resource_set) |
| | | single: role; resource_set attribute |
| | | |
| | | The constraint applies only to resource set members |
| | | that are :ref:`s-resource-promotable` in this |
| | | role. Meaningful within ``rsc_location``, |
| | | ``rsc_colocation`` and ``rsc_ticket``. |
| | | Allowed values: ``Started``, ``Promoted``, |
| | | ``Unpromoted``. For details, see |
| | | :ref:`promotable-clone-constraints`. |
+-------------+------------------+--------------------------------------------------------+
| action | value of | .. index:: |
| | ``first-action`` | single: resource_set; attribute, action |
| | in the enclosing | single: attribute; action (resource_set) |
| | ordering | single: action; resource_set attribute |
| | constraint | |
| | | The action that applies to *all members* of the set. |
| | | Meaningful within ``rsc_order``. Allowed values: |
| | | ``start``, ``stop``, ``promote``, ``demote``. |
+-------------+------------------+--------------------------------------------------------+
| score | | .. index:: |
| | | single: resource_set; attribute, score |
| | | single: attribute; score (resource_set) |
| | | single: score; resource_set attribute |
| | | |
| | | *Advanced use only.* Use a specific score for this |
| | | set within the constraint. |
+-------------+------------------+--------------------------------------------------------+
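As an illustration of the ``action`` attribute, which is not used in the
examples below, an ordering constraint can promote the members of one set
before starting the members of another. The resource names ``db-clone`` and
``vip``, and all ids, are placeholders:

.. topic:: Hypothetical ordering constraint using per-set ``action`` values

   .. code-block:: xml

      <constraints>
        <rsc_order id="order-promote-then-start">
          <resource_set id="promote-set" action="promote">
            <resource_ref id="db-clone"/>
          </resource_set>
          <resource_set id="start-set" action="start">
            <resource_ref id="vip"/>
          </resource_set>
        </rsc_order>
      </constraints>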
.. _s-resource-sets-ordering:
Ordering Sets of Resources
##########################
A common situation is for an administrator to create a chain of ordered
resources, such as:
.. topic:: A chain of ordered resources
.. code-block:: xml
<constraints>
<rsc_order id="order-1" first="A" then="B" />
<rsc_order id="order-2" first="B" then="C" />
<rsc_order id="order-3" first="C" then="D" />
</constraints>
.. topic:: Visual representation of the four resources' start order for the above constraints
.. image:: images/resource-set.png
:alt: Ordered set
Ordered Set
___________
To simplify this situation, :ref:`s-resource-sets` can be used within ordering
constraints:
.. topic:: A chain of ordered resources expressed as a set
.. code-block:: xml
<constraints>
<rsc_order id="order-1">
<resource_set id="ordered-set-example" sequential="true">
<resource_ref id="A"/>
<resource_ref id="B"/>
<resource_ref id="C"/>
<resource_ref id="D"/>
</resource_set>
</rsc_order>
</constraints>
While the set-based format is not less verbose, it is significantly easier to
get right and maintain.
.. important::
If you use a higher-level tool, pay attention to how it exposes this
functionality. Depending on the tool, creating a set **A B** may be equivalent to
**A then B**, or **B then A**.
Ordering Multiple Sets
______________________
The syntax can be expanded to allow sets of resources to be ordered relative to
each other, where the members of each individual set may be ordered or
unordered (controlled by the ``sequential`` property). In the example below, **A**
and **B** can both start in parallel, as can **C** and **D**, however **C** and
**D** can only start once *both* **A** *and* **B** are active.
.. topic:: Ordered sets of unordered resources
.. code-block:: xml
<constraints>
<rsc_order id="order-1">
<resource_set id="ordered-set-1" sequential="false">
<resource_ref id="A"/>
<resource_ref id="B"/>
</resource_set>
<resource_set id="ordered-set-2" sequential="false">
<resource_ref id="C"/>
<resource_ref id="D"/>
</resource_set>
</rsc_order>
</constraints>
.. topic:: Visual representation of the start order for two ordered sets of
unordered resources
.. image:: images/two-sets.png
:alt: Two ordered sets
Of course, either set of resources -- or both -- can also be internally
ordered (by setting ``sequential="true"``), and there is no limit to the
number of sets that can be specified.
.. topic:: Advanced use of set ordering - Three ordered sets, two of which are
internally unordered
.. code-block:: xml
<constraints>
<rsc_order id="order-1">
<resource_set id="ordered-set-1" sequential="false">
<resource_ref id="A"/>
<resource_ref id="B"/>
</resource_set>
<resource_set id="ordered-set-2" sequential="true">
<resource_ref id="C"/>
<resource_ref id="D"/>
</resource_set>
<resource_set id="ordered-set-3" sequential="false">
<resource_ref id="E"/>
<resource_ref id="F"/>
</resource_set>
</rsc_order>
</constraints>
.. topic:: Visual representation of the start order for the three sets defined above
.. image:: images/three-sets.png
:alt: Three ordered sets
.. important::
An ordered set with ``sequential=false`` makes sense only if there is another
set in the constraint. Otherwise, the constraint has no effect.
Resource Set OR Logic
_____________________
The unordered set logic discussed so far has all been "AND" logic. To
illustrate this, take the three-set figure in the previous section. Those sets
can be expressed as **(A and B) then (C) then (D) then (E and F)**.

Say, for example, we want to change the first set, **(A and B)**, to use "OR"
logic, so the sets look like this:
**(A or B) then (C) then (D) then (E and F)**. This functionality can be
achieved through the use of the ``require-all`` option. This option defaults
to ``true``, which is why the "AND" logic is used by default. Setting
``require-all=false`` means only one resource in the set needs to be started
before continuing on to the next set.
.. topic:: Resource Set "OR" logic: Three ordered sets, where the first set is
internally unordered with "OR" logic
.. code-block:: xml
<constraints>
<rsc_order id="order-1">
<resource_set id="ordered-set-1" sequential="false" require-all="false">
<resource_ref id="A"/>
<resource_ref id="B"/>
</resource_set>
<resource_set id="ordered-set-2" sequential="true">
<resource_ref id="C"/>
<resource_ref id="D"/>
</resource_set>
<resource_set id="ordered-set-3" sequential="false">
<resource_ref id="E"/>
<resource_ref id="F"/>
</resource_set>
</rsc_order>
</constraints>
.. important::
An ordered set with ``require-all=false`` makes sense only in conjunction with
``sequential=false``. Think of it like this: ``sequential=false`` modifies the set
to be an unordered set using "AND" logic by default, and adding
``require-all=false`` flips the unordered set's "AND" logic to "OR" logic.
.. _s-resource-sets-colocation:
Colocating Sets of Resources
############################
Another common situation is for an administrator to create a set of
colocated resources.
The simplest way to do this is to define a resource group (see
:ref:`group-resources`), but that cannot always accurately express the desired
relationships. For example, maybe the resources do not need to be ordered.
Another way would be to define each relationship as an individual constraint,
but that causes a difficult-to-follow constraint explosion as the number of
resources and combinations grows.
.. topic:: Colocation chain as individual constraints, where A is placed first,
then B, then C, then D
.. code-block:: xml
<constraints>
<rsc_colocation id="coloc-1" rsc="D" with-rsc="C" score="INFINITY"/>
<rsc_colocation id="coloc-2" rsc="C" with-rsc="B" score="INFINITY"/>
<rsc_colocation id="coloc-3" rsc="B" with-rsc="A" score="INFINITY"/>
</constraints>
To express complicated relationships with a simplified syntax [#]_,
:ref:`resource sets <s-resource-sets>` can be used within colocation constraints.
.. topic:: Equivalent colocation chain expressed using **resource_set**
.. code-block:: xml
<constraints>
<rsc_colocation id="coloc-1" score="INFINITY" >
<resource_set id="colocated-set-example" sequential="true">
<resource_ref id="A"/>
<resource_ref id="B"/>
<resource_ref id="C"/>
<resource_ref id="D"/>
</resource_set>
</rsc_colocation>
</constraints>
.. note::
Within a ``resource_set``, the resources are listed in the order they are
*placed*, which is the reverse of the order in which they are *colocated*.
In the above example, resource **A** is placed before resource **B**, which is
the same as saying resource **B** is colocated with resource **A**.
As with individual constraints, a resource that can't be active prevents any
resource that must be colocated with it from being active. In both of the
previous examples, if **B** is unable to run, then both **C** and, by
inference, **D** must remain stopped.
.. important::
If you use a higher-level tool, pay attention to how it exposes this
functionality. Depending on the tool, creating a set **A B** may be equivalent to
**A with B**, or **B with A**.
Resource sets can also be used to tell the cluster that entire *sets* of
resources must be colocated relative to each other, while the individual
members within any one set may or may not be colocated relative to each other
(determined by the set's ``sequential`` property).
In the following example, resources **B**, **C**, and **D** will each be colocated
with **A** (which will be placed first). **A** must be able to run in order for any
of the resources to run, but any of **B**, **C**, or **D** may be stopped without
affecting any of the others.
.. topic:: Using colocated sets to specify a shared dependency
.. code-block:: xml
<constraints>
<rsc_colocation id="coloc-1" score="INFINITY" >
<resource_set id="colocated-set-2" sequential="false">
<resource_ref id="B"/>
<resource_ref id="C"/>
<resource_ref id="D"/>
</resource_set>
<resource_set id="colocated-set-1" sequential="true">
<resource_ref id="A"/>
</resource_set>
</rsc_colocation>
</constraints>
.. note::
   Pay close attention to the order in which resources and sets are listed.
   While the members of any one sequential set are placed first to last (i.e.,
   the colocation dependency is last with first), multiple sets are placed
   last to first (i.e., the colocation dependency is first with last).
.. important::
A colocated set with ``sequential="false"`` makes sense only if there is
another set in the constraint. Otherwise, the constraint has no effect.
There is no inherent limit to the number and size of the sets used.
The only thing that matters is that in order for any member of one set
in the constraint to be active, all members of sets listed after it must also
be active (and naturally on the same node); and if a set has ``sequential="true"``,
then in order for one member of that set to be active, all members listed
before it must also be active.
If desired, you can restrict the dependency to instances of promotable clone
resources that are in a specific role, using the set's ``role`` property.
.. topic:: Colocation in which the members of the middle set have no
interdependencies, and the last set listed applies only to promoted
instances
.. code-block:: xml
<constraints>
<rsc_colocation id="coloc-1" score="INFINITY" >
<resource_set id="colocated-set-1" sequential="true">
<resource_ref id="F"/>
<resource_ref id="G"/>
</resource_set>
<resource_set id="colocated-set-2" sequential="false">
<resource_ref id="C"/>
<resource_ref id="D"/>
<resource_ref id="E"/>
</resource_set>
<resource_set id="colocated-set-3" sequential="true" role="Promoted">
<resource_ref id="A"/>
<resource_ref id="B"/>
</resource_set>
</rsc_colocation>
</constraints>
.. topic:: Visual representation of the above example (resources are placed from
left to right)
.. image:: ../shared/images/pcmk-colocated-sets.png
:alt: Colocation chain
.. note::
Unlike ordered sets, colocated sets do not use the ``require-all`` option.
.. [#] While the human brain is sophisticated enough to read the constraint
in any order and choose the correct one depending on the situation,
the cluster is not quite so smart. Yet.
.. [#] which is not the same as saying easy to follow
diff --git a/doc/sphinx/Pacemaker_Explained/rules.rst b/doc/sphinx/Pacemaker_Explained/rules.rst
index d03165ac66..69457488e8 100644
--- a/doc/sphinx/Pacemaker_Explained/rules.rst
+++ b/doc/sphinx/Pacemaker_Explained/rules.rst
@@ -1,979 +1,1033 @@
.. index::
single: rule
.. _rules:
Rules
-----
Rules can be used to make your configuration more dynamic, allowing values to
change depending on the time or the value of a node attribute. Examples of
things rules are useful for:
* Set a higher value for :ref:`resource-stickiness <resource-stickiness>`
during working hours, to minimize downtime, and a lower value on weekends, to
allow resources to move to their most preferred locations when people aren't
around to notice.
* Automatically place the cluster into maintenance mode during a scheduled
maintenance window.
* Assign certain nodes and resources to a particular department via custom
node attributes and meta-attributes, and add a single location constraint
that restricts the department's resources to run only on those nodes.
Each constraint type or property set that supports rules may contain one or more
``rule`` elements specifying conditions under which the constraint or properties
take effect. Examples later in this chapter will make this clearer.
.. index::
pair: XML element; rule
Rule Properties
###############
.. table:: **Attributes of a rule Element**
:widths: 1 1 3
+-----------------+-------------+-------------------------------------------+
| Attribute | Default | Description |
+=================+=============+===========================================+
| id | | .. index:: |
| | | pair: rule; id |
| | | |
| | | A unique name for this element (required) |
+-----------------+-------------+-------------------------------------------+
| role | ``Started`` | .. index:: |
| | | pair: rule; role |
| | | |
| | | The rule is in effect only when the |
| | | resource is in the specified role. |
| | | Allowed values are ``Started``, |
| | | ``Unpromoted``, and ``Promoted``. A rule |
| | | with a ``role`` of ``Promoted`` cannot |
| | | determine the initial location of a clone |
| | | instance and will only affect which of |
| | | the active instances will be promoted. |
+-----------------+-------------+-------------------------------------------+
| score | | .. index:: |
| | | pair: rule; score |
| | | |
| | | If this rule is used in a location |
| | | constraint and evaluates to true, apply |
| | | this score to the constraint. Only one of |
| | | ``score`` and ``score-attribute`` may be |
| | | used. |
+-----------------+-------------+-------------------------------------------+
| score-attribute | | .. index:: |
| | | pair: rule; score-attribute |
| | | |
| | | If this rule is used in a location |
| | | constraint and evaluates to true, use the |
| | | value of this node attribute as the score |
| | | to apply to the constraint. Only one of |
| | | ``score`` and ``score-attribute`` may be |
| | | used. |
+-----------------+-------------+-------------------------------------------+
| boolean-op | ``and`` | .. index:: |
| | | pair: rule; boolean-op |
| | | |
| | | If this rule contains more than one |
| | | condition, a value of ``and`` specifies |
| | | that the rule evaluates to true only if |
| | | all conditions are true, and a value of |
| | | ``or`` specifies that the rule evaluates |
| | | to true if any condition is true. |
+-----------------+-------------+-------------------------------------------+
A ``rule`` element must contain one or more conditions. A condition may be an
``expression`` element, a ``date_expression`` element, or another ``rule`` element.
.. index::
single: rule; node attribute expression
single: node attribute; rule expression
pair: XML element; expression
.. _node_attribute_expressions:
Node Attribute Expressions
##########################
Expressions are rule conditions based on the values of node attributes.
.. table:: **Attributes of an expression Element**
:class: longtable
:widths: 1 2 3
+--------------+---------------------------------+-------------------------------------------+
| Attribute | Default | Description |
+==============+=================================+===========================================+
| id | | .. index:: |
| | | pair: expression; id |
| | | |
| | | A unique name for this element (required) |
+--------------+---------------------------------+-------------------------------------------+
| attribute | | .. index:: |
| | | pair: expression; attribute |
| | | |
| | | The node attribute to test (required) |
+--------------+---------------------------------+-------------------------------------------+
| type | The default type for | .. index:: |
| | ``lt``, ``gt``, ``lte``, and | pair: expression; type |
| | ``gte`` operations is ``number``| |
| | if either value contains a | How the node attributes should be |
| | decimal point character, or | compared. Allowed values are ``string``, |
| | ``integer`` otherwise. The | ``integer`` *(since 2.0.5)*, ``number``, |
| | default type for all other | and ``version``. ``integer`` truncates |
| | operations is ``string``. If a | floating-point values if necessary before |
| | numeric parse fails for either | performing a 64-bit integer comparison. |
| | value, then the values are | ``number`` performs a double-precision |
| | compared as type ``string``. | floating-point comparison |
| | | *(32-bit integer before 2.0.5)*. |
+--------------+---------------------------------+-------------------------------------------+
| operation | | .. index:: |
| | | pair: expression; operation |
| | | |
| | | The comparison to perform (required). |
| | | Allowed values: |
| | | |
| | | * ``lt:`` True if the node attribute value|
| | | is less than the comparison value |
| | | * ``gt:`` True if the node attribute value|
| | | is greater than the comparison value |
| | | * ``lte:`` True if the node attribute |
| | | value is less than or equal to the |
| | | comparison value |
| | | * ``gte:`` True if the node attribute |
| | | value is greater than or equal to the |
| | | comparison value |
| | | * ``eq:`` True if the node attribute value|
| | | is equal to the comparison value |
| | | * ``ne:`` True if the node attribute value|
| | | is not equal to the comparison value |
| | | * ``defined:`` True if the node has the |
| | | named attribute |
| | | * ``not_defined:`` True if the node does |
| | | not have the named attribute |
+--------------+---------------------------------+-------------------------------------------+
| value | | .. index:: |
| | | pair: expression; value |
| | | |
| | | User-supplied value for comparison |
| | | (required for operations other than |
| | | ``defined`` and ``not_defined``) |
+--------------+---------------------------------+-------------------------------------------+
| value-source | ``literal`` | .. index:: |
| | | pair: expression; value-source |
| | | |
| | | How the ``value`` is derived. Allowed |
| | | values: |
| | | |
| | | * ``literal``: ``value`` is a literal |
| | | string to compare against |
| | | * ``param``: ``value`` is the name of a |
| | | resource parameter to compare against |
| | | (only valid in location constraints) |
| | | * ``meta``: ``value`` is the name of a |
| | | resource meta-attribute to compare |
| | | against (only valid in location |
| | | constraints) |
+--------------+---------------------------------+-------------------------------------------+
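For example, assuming a custom node attribute named ``datacenter`` (a
placeholder; any administrator-defined attribute works the same way), a rule
could prefer nodes in a particular location:

.. topic:: Rule matching nodes whose hypothetical ``datacenter`` attribute is ``east``

   .. code-block:: xml

      <rule id="rule-dc-east" score="200">
        <expression id="expr-dc-east" attribute="datacenter" operation="eq" value="east"/>
      </rule>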
.. _node-attribute-expressions-special:
In addition to custom node attributes defined by the administrator, the cluster
defines special, built-in node attributes for each node that can also be used
in rule expressions.
.. table:: **Built-in Node Attributes**
:widths: 1 4
+---------------+-----------------------------------------------------------+
| Name | Value |
+===============+===========================================================+
| #uname | :ref:`Node name <node_name>` |
+---------------+-----------------------------------------------------------+
| #id | Node ID |
+---------------+-----------------------------------------------------------+
| #kind | Node type. Possible values are ``cluster``, ``remote``, |
| | and ``container``. Kind is ``remote`` for Pacemaker Remote|
| | nodes created with the ``ocf:pacemaker:remote`` resource, |
| | and ``container`` for Pacemaker Remote guest nodes and |
| | bundle nodes |
+---------------+-----------------------------------------------------------+
| #is_dc | ``true`` if this node is the cluster's Designated |
| | Controller (DC), ``false`` otherwise |
+---------------+-----------------------------------------------------------+
| #cluster-name | The value of the ``cluster-name`` cluster property, if set|
+---------------+-----------------------------------------------------------+
| #site-name | The value of the ``site-name`` node attribute, if set, |
| | otherwise identical to ``#cluster-name`` |
+---------------+-----------------------------------------------------------+
| #role | The role the relevant promotable clone resource has on |
| | this node. Valid only within a rule for a location |
| | constraint for a promotable clone resource. |
+---------------+-----------------------------------------------------------+
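Built-in attributes can be tested like any other. As a sketch (the ids here
are placeholders), a rule using ``#kind`` within a location constraint could
keep a resource off Pacemaker Remote and guest nodes:

.. topic:: Rule matching only nodes that are not full cluster nodes

   .. code-block:: xml

      <rule id="rule-cluster-nodes-only" score="-INFINITY">
        <expression id="expr-kind" attribute="#kind" operation="ne" value="cluster"/>
      </rule>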
.. Add_to_above_table_if_released:
+---------------+-----------------------------------------------------------+
| #ra-version | The installed version of the resource agent on the node, |
| | as defined by the ``version`` attribute of the |
| | ``resource-agent`` tag in the agent's metadata. Valid only|
| | within rules controlling resource options. This can be |
| | useful during rolling upgrades of a backward-incompatible |
| | resource agent. *(since x.x.x)* |
.. index::
single: rule; date/time expression
pair: XML element; date_expression
Date/Time Expressions
#####################
Date/time expressions are rule conditions based (as the name suggests) on the
current date and time.
A ``date_expression`` element may optionally contain a ``date_spec`` or
``duration`` element depending on the context.
.. table:: **Attributes of a date_expression Element**
:widths: 1 4
+---------------+-----------------------------------------------------------+
| Attribute | Description |
+===============+===========================================================+
| id | .. index:: |
| | pair: id; date_expression |
| | |
| | A unique name for this element (required) |
+---------------+-----------------------------------------------------------+
| start | .. index:: |
| | pair: start; date_expression |
| | |
| | A date/time conforming to the |
| | `ISO8601 <https://en.wikipedia.org/wiki/ISO_8601>`_ |
| | specification. May be used when ``operation`` is |
| | ``in_range`` (in which case at least one of ``start`` or |
| | ``end`` must be specified) or ``gt`` (in which case |
| | ``start`` is required). |
+---------------+-----------------------------------------------------------+
| end | .. index:: |
| | pair: end; date_expression |
| | |
| | A date/time conforming to the |
| | `ISO8601 <https://en.wikipedia.org/wiki/ISO_8601>`_ |
| | specification. May be used when ``operation`` is |
| | ``in_range`` (in which case at least one of ``start`` or |
| | ``end`` must be specified) or ``lt`` (in which case |
| | ``end`` is required). |
+---------------+-----------------------------------------------------------+
| operation | .. index:: |
| | pair: operation; date_expression |
| | |
| | Compares the current date/time with the start and/or end |
| | date, depending on the context. Allowed values: |
| | |
| | * ``gt:`` True if the current date/time is after ``start``|
| | * ``lt:`` True if the current date/time is before ``end`` |
| | * ``in_range:`` True if the current date/time is after |
| | ``start`` (if specified) and before either ``end`` (if |
| | specified) or ``start`` plus the value of the |
| | ``duration`` element (if one is contained in the |
| | ``date_expression``). If both ``end`` and ``duration`` |
| | are specified, ``duration`` is ignored. |
| | * ``date_spec:`` True if the current date/time matches |
| | the specification given in the contained ``date_spec`` |
| | element (described below) |
+---------------+-----------------------------------------------------------+
.. note:: There is no ``eq``, ``neq``, ``gte``, or ``lte`` operation, since
they would be valid only for a single second.
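For example, a one-sided comparison needs only ``start`` or ``end`` (the ids
and date below are placeholders):

.. topic:: Expression that is true at any time after the start of 2024

   .. code-block:: xml

      <rule id="rule-after-2024" score="INFINITY">
        <date_expression id="expr-after-2024" operation="gt" start="2024-01-01"/>
      </rule>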
.. index::
single: date specification
pair: XML element; date_spec
Date Specifications
___________________
A ``date_spec`` element is used to create a cron-like expression relating
to time. Each field can contain a single number or range. Any field not
supplied is ignored.
.. table:: **Attributes of a date_spec Element**
:widths: 1 3
+---------------+-----------------------------------------------------------+
| Attribute | Description |
+===============+===========================================================+
| id | .. index:: |
| | pair: id; date_spec |
| | |
| | A unique name for this element (required) |
+---------------+-----------------------------------------------------------+
| seconds | .. index:: |
| | pair: seconds; date_spec |
| | |
| | Allowed values: 0-59 |
+---------------+-----------------------------------------------------------+
| minutes | .. index:: |
| | pair: minutes; date_spec |
| | |
| | Allowed values: 0-59 |
+---------------+-----------------------------------------------------------+
| hours | .. index:: |
| | pair: hours; date_spec |
| | |
| | Allowed values: 0-23 (where 0 is midnight and 23 is |
| | 11 p.m.) |
+---------------+-----------------------------------------------------------+
| monthdays | .. index:: |
| | pair: monthdays; date_spec |
| | |
| | Allowed values: 1-31 (depending on month and year) |
+---------------+-----------------------------------------------------------+
| weekdays | .. index:: |
| | pair: weekdays; date_spec |
| | |
| | Allowed values: 1-7 (where 1 is Monday and 7 is Sunday) |
+---------------+-----------------------------------------------------------+
| yeardays | .. index:: |
| | pair: yeardays; date_spec |
| | |
| | Allowed values: 1-366 (depending on the year) |
+---------------+-----------------------------------------------------------+
| months | .. index:: |
| | pair: months; date_spec |
| | |
| | Allowed values: 1-12 |
+---------------+-----------------------------------------------------------+
| weeks | .. index:: |
| | pair: weeks; date_spec |
| | |
| | Allowed values: 1-53 (depending on weekyear) |
+---------------+-----------------------------------------------------------+
| years | .. index:: |
| | pair: years; date_spec |
| | |
| | Year according to the Gregorian calendar |
+---------------+-----------------------------------------------------------+
| weekyears | .. index:: |
| | pair: weekyears; date_spec |
| | |
| | Year in which the week started; for example, 1 January |
| | 2005 can be specified in ISO 8601 as "2005-001 Ordinal", |
| | "2005-01-01 Gregorian" or "2004-W53-6 Weekly" and thus |
| | would match ``years="2005"`` or ``weekyears="2004"`` |
+---------------+-----------------------------------------------------------+
| moon | .. index:: |
| | pair: moon; date_spec |
| | |
| | Allowed values are 0-7 (where 0 is the new moon and 4 is |
| | full moon). Seriously, you can use this. This was |
| | implemented to demonstrate the ease with which new |
| | comparisons could be added. |
+---------------+-----------------------------------------------------------+
For example, ``monthdays="1"`` matches the first day of every month, and
``hours="09-17"`` matches the hours between 9 a.m. and 5 p.m. (inclusive).
At this time, multiple ranges (e.g. ``weekdays="1,2"`` or ``weekdays="1-2,5-6"``)
are not supported.
.. note:: Pacemaker can calculate when evaluation of a ``date_expression`` with
an ``operation`` of ``gt``, ``lt``, or ``in_range`` will next change,
and schedule a cluster re-check for that time. However, it does not
do this for ``date_spec``. Instead, it evaluates the ``date_spec``
whenever a cluster re-check naturally happens via a cluster event or
the ``cluster-recheck-interval`` cluster option.
For example, if you have a ``date_spec`` enabling a resource from 9
a.m. to 5 p.m., and ``cluster-recheck-interval`` has been set to 5
minutes, then sometime between 9 a.m. and 9:05 a.m. the cluster would
notice that it needs to start the resource, and sometime between 5
p.m. and 5:05 p.m. it would realize that it needs to stop the
resource. The timing of the actual start and stop actions will
further depend on factors such as any other actions the cluster may
need to perform first, and the load of the machine.
.. index::
single: duration
pair: XML element; duration
Durations
_________
A ``duration`` is used to calculate a value for ``end`` when one is not
supplied to an ``in_range`` operation. It contains one or more attributes,
each containing a single number. Any attribute not supplied is ignored.
.. table:: **Attributes of a duration Element**
:widths: 1 3
+---------------+-----------------------------------------------------------+
| Attribute | Description |
+===============+===========================================================+
| id | .. index:: |
| | pair: id; duration |
| | |
| | A unique name for this element (required) |
+---------------+-----------------------------------------------------------+
| seconds | .. index:: |
| | pair: seconds; duration |
| | |
| | This many seconds will be added to the total duration |
+---------------+-----------------------------------------------------------+
| minutes | .. index:: |
| | pair: minutes; duration |
| | |
| | This many minutes will be added to the total duration |
+---------------+-----------------------------------------------------------+
| hours | .. index:: |
| | pair: hours; duration |
| | |
| | This many hours will be added to the total duration |
+---------------+-----------------------------------------------------------+
| days | .. index:: |
| | pair: days; duration |
| | |
| | This many days will be added to the total duration |
+---------------+-----------------------------------------------------------+
| weeks | .. index:: |
| | pair: weeks; duration |
| | |
| | This many weeks will be added to the total duration |
+---------------+-----------------------------------------------------------+
| months | .. index:: |
| | pair: months; duration |
| | |
| | This many months will be added to the total duration |
+---------------+-----------------------------------------------------------+
| years | .. index:: |
| | pair: years; duration |
| | |
| | This many years will be added to the total duration |
+---------------+-----------------------------------------------------------+
Example Time-Based Expressions
______________________________
A small sample of how time-based expressions can be used:
.. topic:: True if now is any time in the year 2005
.. code-block:: xml
<rule id="rule1" score="INFINITY">
<date_expression id="date_expr1" start="2005-001" operation="in_range">
<duration id="duration1" years="1"/>
</date_expression>
</rule>
or equivalently:
.. code-block:: xml
<rule id="rule2" score="INFINITY">
<date_expression id="date_expr2" operation="date_spec">
<date_spec id="date_spec2" years="2005"/>
</date_expression>
</rule>
.. topic:: 9 a.m. to 5 p.m. Monday through Friday
.. code-block:: xml
<rule id="rule3" score="INFINITY">
<date_expression id="date_expr3" operation="date_spec">
<date_spec id="date_spec3" hours="9-16" weekdays="1-5"/>
</date_expression>
</rule>
Note that the ``16`` matches all the way through ``16:59:59``, because the
numeric value of the hour still matches.
.. topic:: 9 a.m. to 6 p.m. Monday through Friday or anytime Saturday
.. code-block:: xml
<rule id="rule4" score="INFINITY" boolean-op="or">
<date_expression id="date_expr4-1" operation="date_spec">
<date_spec id="date_spec4-1" hours="9-16" weekdays="1-5"/>
</date_expression>
<date_expression id="date_expr4-2" operation="date_spec">
<date_spec id="date_spec4-2" weekdays="6"/>
</date_expression>
</rule>
.. topic:: 9 a.m. to 5 p.m. or 9 p.m. to 12 a.m. Monday through Friday
.. code-block:: xml
<rule id="rule5" score="INFINITY" boolean-op="and">
<rule id="rule5-nested1" score="INFINITY" boolean-op="or">
<date_expression id="date_expr5-1" operation="date_spec">
<date_spec id="date_spec5-1" hours="9-16"/>
</date_expression>
<date_expression id="date_expr5-2" operation="date_spec">
<date_spec id="date_spec5-2" hours="21-23"/>
</date_expression>
</rule>
<date_expression id="date_expr5-3" operation="date_spec">
<date_spec id="date_spec5-3" weekdays="1-5"/>
</date_expression>
</rule>
.. topic:: Mondays in March 2005
.. code-block:: xml
<rule id="rule6" score="INFINITY" boolean-op="and">
<date_expression id="date_expr6-1" operation="date_spec">
<date_spec id="date_spec6" weekdays="1"/>
</date_expression>
<date_expression id="date_expr6-2" operation="in_range"
start="2005-03-01" end="2005-04-01"/>
</rule>
.. note:: Because no time is specified with the above dates, 00:00:00 is
implied. This means that the range includes all of 2005-03-01 but
none of 2005-04-01. You may wish to write ``end`` as
``"2005-03-31T23:59:59"`` to avoid confusion.
.. topic:: A full moon on Friday the 13th
.. code-block:: xml
<rule id="rule7" score="INFINITY" boolean-op="and">
<date_expression id="date_expr7" operation="date_spec">
<date_spec id="date_spec7" weekdays="5" monthdays="13" moon="4"/>
</date_expression>
</rule>
.. index::
single: rule; resource expression
single: resource; rule expression
pair: XML element; rsc_expression
Resource Expressions
####################
An ``rsc_expression`` *(since 2.0.5)* is a rule condition based on a resource
agent's properties. This rule is only valid within an ``rsc_defaults`` or
``op_defaults`` context. None of the matching attributes of ``class``,
``provider``, and ``type`` are required. If one is omitted, all values of that
attribute will match. For instance, omitting ``type`` means every type will
match.
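For instance, a rule such as the following sketch (the ``id`` values here are
invented for illustration) would match every OCF resource agent from the
``heartbeat`` provider, regardless of type:

.. code-block:: xml

   <rule id="all-heartbeat-rule" score="INFINITY">
     <rsc_expression id="all-heartbeat-expr" class="ocf" provider="heartbeat"/>
   </rule>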
.. table:: **Attributes of a rsc_expression Element**
:widths: 1 3
+---------------+-----------------------------------------------------------+
| Attribute | Description |
+===============+===========================================================+
| id | .. index:: |
| | pair: id; rsc_expression |
| | |
| | A unique name for this element (required) |
+---------------+-----------------------------------------------------------+
| class | .. index:: |
| | pair: class; rsc_expression |
| | |
| | The standard name to be matched against resource agents |
+---------------+-----------------------------------------------------------+
| provider | .. index:: |
| | pair: provider; rsc_expression |
| | |
| | If given, the vendor to be matched against resource |
| | agents (only relevant when ``class`` is ``ocf``) |
+---------------+-----------------------------------------------------------+
| type | .. index:: |
| | pair: type; rsc_expression |
| | |
| | The name of the resource agent to be matched |
+---------------+-----------------------------------------------------------+
Example Resource-Based Expressions
__________________________________
A small sample of how resource-based expressions can be used:
.. topic:: True for all ``ocf:heartbeat:IPaddr2`` resources
.. code-block:: xml
<rule id="rule1" score="INFINITY">
<rsc_expression id="rule_expr1" class="ocf" provider="heartbeat" type="IPaddr2"/>
</rule>
.. topic:: Provider doesn't apply to non-OCF resources
.. code-block:: xml
<rule id="rule2" score="INFINITY">
<rsc_expression id="rule_expr2" class="stonith" type="fence_xvm"/>
</rule>
.. index::
single: rule; operation expression
single: operation; rule expression
pair: XML element; op_expression
Operation Expressions
#####################
An ``op_expression`` *(since 2.0.5)* is a rule condition based on an action of
some resource agent. This rule is only valid within an ``op_defaults`` context.
.. table:: **Attributes of an op_expression Element**
:widths: 1 3
+---------------+-----------------------------------------------------------+
| Attribute | Description |
+===============+===========================================================+
| id | .. index:: |
| | pair: id; op_expression |
| | |
| | A unique name for this element (required) |
+---------------+-----------------------------------------------------------+
| name | .. index:: |
| | pair: name; op_expression |
| | |
| | The action name to match against. This can be any action |
| | supported by the resource agent; common values include |
| | ``monitor``, ``start``, and ``stop`` (required). |
+---------------+-----------------------------------------------------------+
| interval | .. index:: |
| | pair: interval; op_expression |
| | |
| | The interval of the action to match against. If not given,|
| | only the name attribute will be used to match. |
+---------------+-----------------------------------------------------------+
Example Operation-Based Expressions
___________________________________
A small sample of how operation-based expressions can be used:
.. topic:: True for all monitor actions
.. code-block:: xml
<rule id="rule1" score="INFINITY">
<op_expression id="rule_expr1" name="monitor"/>
</rule>
.. topic:: True for all monitor actions with a 10 second interval
.. code-block:: xml
<rule id="rule2" score="INFINITY">
<op_expression id="rule_expr2" name="monitor" interval="10s"/>
</rule>
.. index::
pair: location constraint; rule
Using Rules to Determine Resource Location
##########################################
A location constraint may contain one or more top-level rules. The cluster will
act as if there is a separate location constraint for each rule that evaluates
as true.
Consider the following simple location constraint:
.. topic:: Prevent resource ``webserver`` from running on node ``node3``
.. code-block:: xml
<rsc_location id="ban-apache-on-node3" rsc="webserver"
score="-INFINITY" node="node3"/>
The same constraint can be more verbosely written using a rule:
.. topic:: Prevent resource ``webserver`` from running on node ``node3`` using a rule
.. code-block:: xml
<rsc_location id="ban-apache-on-node3" rsc="webserver">
<rule id="ban-apache-rule" score="-INFINITY">
<expression id="ban-apache-expr" attribute="#uname"
operation="eq" value="node3"/>
</rule>
</rsc_location>
The advantage of using the expanded form is that one could add more expressions
(for example, limiting the constraint to certain days of the week), or activate
the constraint by some node attribute other than node name.
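For instance, one could restrict the ban to weekdays by adding a date
expression. This is an illustrative sketch (the ``id`` values and the weekday
restriction are invented here, not part of the original constraint):

.. topic:: Prevent ``webserver`` from running on ``node3`` on weekdays only

   .. code-block:: xml

      <rsc_location id="ban-apache-on-node3-weekdays" rsc="webserver">
        <rule id="ban-apache-weekday-rule" score="-INFINITY" boolean-op="and">
          <expression id="ban-apache-weekday-expr" attribute="#uname"
             operation="eq" value="node3"/>
          <date_expression id="ban-apache-weekday-date" operation="date_spec">
            <date_spec id="ban-apache-weekday-spec" weekdays="1-5"/>
          </date_expression>
        </rule>
      </rsc_location>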
Location Rules Based on Other Node Properties
_____________________________________________
The expanded form allows us to match on node properties other than the node name.
If we rated each machine's CPU power such that the cluster had the following
nodes section:
.. topic:: Sample node section with node attributes
.. code-block:: xml
<nodes>
<node id="uuid1" uname="c001n01" type="normal">
<instance_attributes id="uuid1-custom_attrs">
<nvpair id="uuid1-cpu_mips" name="cpu_mips" value="1234"/>
</instance_attributes>
</node>
<node id="uuid2" uname="c001n02" type="normal">
<instance_attributes id="uuid2-custom_attrs">
<nvpair id="uuid2-cpu_mips" name="cpu_mips" value="5678"/>
</instance_attributes>
</node>
</nodes>
then we could prevent resources from running on underpowered machines with this
rule:
.. topic:: Rule using a node attribute (to be used inside a location constraint)
.. code-block:: xml
<rule id="need-more-power-rule" score="-INFINITY">
<expression id="need-more-power-expr" attribute="cpu_mips"
operation="lt" value="3000"/>
</rule>
Using ``score-attribute`` Instead of ``score``
______________________________________________
When using ``score-attribute`` instead of ``score``, each node matched by the
rule has its score adjusted differently, according to its value for the named
node attribute. Thus, in the previous example, if a rule inside a location
constraint for a resource used ``score-attribute="cpu_mips"``, ``c001n01``
would have its preference to run the resource increased by ``1234`` whereas
``c001n02`` would have its preference increased by ``5678``.
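Continuing that example, such a rule might be embedded in a location constraint
like the following sketch (the constraint and ``id`` names here are
illustrative): each node that defines ``cpu_mips`` has its preference for the
resource increased by that attribute's value.

.. topic:: Location constraint preferring nodes in proportion to ``cpu_mips``

   .. code-block:: xml

      <rsc_location id="prefer-fast-nodes" rsc="webserver">
        <rule id="prefer-fast-rule" score-attribute="cpu_mips">
          <expression id="prefer-fast-expr" attribute="cpu_mips"
             operation="defined"/>
        </rule>
      </rsc_location>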
.. _s-rsc-pattern-rules:

Specifying location scores using pattern submatches
___________________________________________________

Location constraints may use ``rsc-pattern`` to apply the constraint to all
resources whose IDs match the given pattern (see :ref:`s-rsc-pattern`). The
pattern may contain up to 9 submatches in parentheses, whose values may be used
as ``%1`` through ``%9`` in a rule's ``score-attribute`` or a rule expression's
``attribute``.

As an example, the following configuration (only relevant parts are shown)
gives the resources **server-httpd** and **ip-httpd** a preference of 100 on
**node1** and 50 on **node2**, and **ip-gateway** a preference of -100 on
**node1** and 200 on **node2**.

.. topic:: Location constraint using submatches

   .. code-block:: xml

      <nodes>
        <node id="1" uname="node1">
          <instance_attributes id="node1-attrs">
            <nvpair id="node1-prefer-httpd" name="prefer-httpd" value="100"/>
            <nvpair id="node1-prefer-gateway" name="prefer-gateway" value="-100"/>
          </instance_attributes>
        </node>
        <node id="2" uname="node2">
          <instance_attributes id="node2-attrs">
            <nvpair id="node2-prefer-httpd" name="prefer-httpd" value="50"/>
            <nvpair id="node2-prefer-gateway" name="prefer-gateway" value="200"/>
          </instance_attributes>
        </node>
      </nodes>
      <resources>
        <primitive id="server-httpd" class="ocf" provider="heartbeat" type="apache"/>
        <primitive id="ip-httpd" class="ocf" provider="heartbeat" type="IPaddr2"/>
        <primitive id="ip-gateway" class="ocf" provider="heartbeat" type="IPaddr2"/>
      </resources>
      <constraints>
        <!-- The following constraint says that for any resource whose name
             starts with "server-" or "ip-", that resource's preference for a
             node is the value of the node attribute named "prefer-" followed
             by the part of the resource name after "server-" or "ip-",
             wherever such a node attribute is defined.
        -->
        <rsc_location id="location1" rsc-pattern="(server|ip)-(.*)">
          <rule id="location1-rule1" score-attribute="prefer-%2">
            <expression id="location1-rule1-expression1" attribute="prefer-%2" operation="defined"/>
          </rule>
        </rsc_location>
      </constraints>
.. index::
pair: cluster option; rule
pair: instance attribute; rule
pair: meta-attribute; rule
pair: resource defaults; rule
pair: operation defaults; rule
pair: node attribute; rule
Using Rules to Define Options
#############################
Rules may be used to control a variety of options:
* :ref:`Cluster options <cluster_options>` (``cluster_property_set`` elements)
* :ref:`Node attributes <node_attributes>` (``instance_attributes`` or
``utilization`` elements inside a ``node`` element)
* :ref:`Resource options <resource_options>` (``utilization``,
``meta_attributes``, or ``instance_attributes`` elements inside a resource
definition element or ``op``, ``rsc_defaults``, ``op_defaults``, or
``template`` element)
* :ref:`Operation properties <operation_properties>` (``meta_attributes``
elements inside an ``op`` or ``op_defaults`` element)
.. note::
Attribute-based expressions for meta-attributes can only be used within
``operations`` and ``op_defaults``. They will not work with resource
configuration or ``rsc_defaults``. Additionally, attribute-based
expressions cannot be used with cluster options.
Using Rules to Control Resource Options
_______________________________________
Often some cluster nodes will be different from their peers. Sometimes,
these differences -- e.g. the location of a binary or the names of network
interfaces -- require resources to be configured differently depending
on the machine they're hosted on.
By defining multiple ``instance_attributes`` objects for the resource and
adding a rule to each, we can easily handle these special cases.
In the example below, ``mySpecialRsc`` will use eth1 and port 9999 when run on
``node1``, eth2 and port 8888 on ``node2`` and default to eth0 and port 9999
for all other nodes.
.. topic:: Defining different resource options based on the node name
.. code-block:: xml
<primitive id="mySpecialRsc" class="ocf" type="Special" provider="me">
<instance_attributes id="special-node1" score="3">
<rule id="node1-special-case" score="INFINITY" >
<expression id="node1-special-case-expr" attribute="#uname"
operation="eq" value="node1"/>
</rule>
<nvpair id="node1-interface" name="interface" value="eth1"/>
</instance_attributes>
<instance_attributes id="special-node2" score="2" >
<rule id="node2-special-case" score="INFINITY">
<expression id="node2-special-case-expr" attribute="#uname"
operation="eq" value="node2"/>
</rule>
<nvpair id="node2-interface" name="interface" value="eth2"/>
<nvpair id="node2-port" name="port" value="8888"/>
</instance_attributes>
<instance_attributes id="defaults" score="1" >
<nvpair id="default-interface" name="interface" value="eth0"/>
<nvpair id="default-port" name="port" value="9999"/>
</instance_attributes>
</primitive>
The order in which ``instance_attributes`` objects are evaluated is determined
by their score (highest to lowest). If not supplied, the score defaults to
zero. Objects with an equal score are processed in their listed order. If the
``instance_attributes`` object has no rule, or a ``rule`` that evaluates to
``true``, then for any parameter the resource does not yet have a value for,
the resource will use the parameter values defined by the ``instance_attributes``.
For example, given the configuration above, if the resource is placed on
``node1``:
* ``special-node1`` has the highest score (3) and so is evaluated first; its
rule evaluates to ``true``, so ``interface`` is set to ``eth1``.
* ``special-node2`` is evaluated next with score 2, but its rule evaluates to
``false``, so it is ignored.
* ``defaults`` is evaluated last with score 1, and has no rule, so its values
are examined; ``interface`` is already defined, so the value here is not
used, but ``port`` is not yet defined, so ``port`` is set to ``9999``.
Using Rules to Control Resource Defaults
________________________________________
Rules can be used for resource and operation defaults. The following example
illustrates how to set a different ``resource-stickiness`` value during and
outside work hours. This allows resources to automatically move back to their
most preferred hosts, but at a time that (in theory) does not interfere with
business activities.
.. topic:: Change ``resource-stickiness`` during working hours
.. code-block:: xml
<rsc_defaults>
<meta_attributes id="core-hours" score="2">
<rule id="core-hour-rule" score="0">
<date_expression id="nine-to-five-Mon-to-Fri" operation="date_spec">
<date_spec id="nine-to-five-Mon-to-Fri-spec" hours="9-16" weekdays="1-5"/>
</date_expression>
</rule>
<nvpair id="core-stickiness" name="resource-stickiness" value="INFINITY"/>
</meta_attributes>
<meta_attributes id="after-hours" score="1" >
<nvpair id="after-stickiness" name="resource-stickiness" value="0"/>
</meta_attributes>
</rsc_defaults>
Rules may be used similarly in ``instance_attributes`` or ``utilization``
blocks.
Any single block may directly contain only a single rule, but that rule may
itself contain any number of rules.
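For illustration, a sketch (with invented ``id`` values) of a single
``meta_attributes`` block whose one top-level rule nests two alternatives, so
that the defaults apply to either of two agent types:

.. code-block:: xml

   <meta_attributes id="nested-example">
     <rule id="nested-example-rule" score="INFINITY" boolean-op="or">
       <rule id="nested-example-rule-1" score="INFINITY">
         <rsc_expression id="nested-example-expr-1" class="ocf"
            provider="heartbeat" type="IPaddr2"/>
       </rule>
       <rule id="nested-example-rule-2" score="INFINITY">
         <rsc_expression id="nested-example-expr-2" class="ocf"
            provider="heartbeat" type="apache"/>
       </rule>
     </rule>
     <nvpair id="nested-example-nvpair" name="target-role" value="Stopped"/>
   </meta_attributes>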
``rsc_expression`` and ``op_expression`` blocks may additionally be used to
set defaults on either a single resource or across an entire class of resources
with a single rule. ``rsc_expression`` may be used to select resource agents
within both ``rsc_defaults`` and ``op_defaults``, while ``op_expression`` may
only be used within ``op_defaults``. If multiple rules succeed for a given
resource agent, the last one specified will be the one that takes effect. As
with any other rule, boolean operations may be used to make more complicated
expressions.
.. topic:: Default all IPaddr2 resources to stopped
.. code-block:: xml
<rsc_defaults>
<meta_attributes id="op-target-role">
<rule id="op-target-role-rule" score="INFINITY">
<rsc_expression id="op-target-role-expr" class="ocf" provider="heartbeat"
type="IPaddr2"/>
</rule>
<nvpair id="op-target-role-nvpair" name="target-role" value="Stopped"/>
</meta_attributes>
</rsc_defaults>
.. topic:: Default all monitor action timeouts to 7 seconds
.. code-block:: xml
<op_defaults>
<meta_attributes id="op-monitor-defaults">
<rule id="op-monitor-default-rule" score="INFINITY">
<op_expression id="op-monitor-default-expr" name="monitor"/>
</rule>
<nvpair id="op-monitor-timeout" name="timeout" value="7s"/>
</meta_attributes>
</op_defaults>
.. topic:: Default the timeout on all 10-second-interval monitor actions on ``IPaddr2`` resources to 8 seconds
.. code-block:: xml
<op_defaults>
<meta_attributes id="op-monitor-and">
<rule id="op-monitor-and-rule" score="INFINITY">
<rsc_expression id="op-monitor-and-rsc-expr" class="ocf" provider="heartbeat"
type="IPaddr2"/>
<op_expression id="op-monitor-and-op-expr" name="monitor" interval="10s"/>
</rule>
<nvpair id="op-monitor-and-timeout" name="timeout" value="8s"/>
</meta_attributes>
</op_defaults>
.. index::
pair: rule; cluster option
Using Rules to Control Cluster Options
______________________________________
Controlling cluster options is achieved in much the same manner as specifying
different resource options on different nodes.
The following example illustrates how to set ``maintenance_mode`` during a
scheduled maintenance window. This will keep the cluster running but not
monitor, start, or stop resources during this time.
.. topic:: Schedule a maintenance window for 9 to 11 p.m. CDT Sept. 20, 2019
.. code-block:: xml
<crm_config>
<cluster_property_set id="cib-bootstrap-options">
<nvpair id="bootstrap-stonith-enabled" name="stonith-enabled" value="1"/>
</cluster_property_set>
<cluster_property_set id="normal-set" score="10">
<nvpair id="normal-maintenance-mode" name="maintenance-mode" value="false"/>
</cluster_property_set>
<cluster_property_set id="maintenance-window-set" score="1000">
<nvpair id="maintenance-nvpair1" name="maintenance-mode" value="true"/>
<rule id="maintenance-rule1" score="INFINITY">
<date_expression id="maintenance-date1" operation="in_range"
start="2019-09-20 21:00:00 -05:00" end="2019-09-20 23:00:00 -05:00"/>
</rule>
</cluster_property_set>
</crm_config>
.. important:: The ``cluster_property_set`` with an ``id`` set to
"cib-bootstrap-options" will *always* have the highest priority,
regardless of any scores. Therefore, rules in another
``cluster_property_set`` can never take effect for any
properties listed in the bootstrap set.