diff --git a/doc/sphinx/Pacemaker_Explained/constraints.rst b/doc/sphinx/Pacemaker_Explained/constraints.rst index 8722f81866..b5b9f8b144 100644 --- a/doc/sphinx/Pacemaker_Explained/constraints.rst +++ b/doc/sphinx/Pacemaker_Explained/constraints.rst @@ -1,1061 +1,1061 @@ .. index:: single: constraint single: resource; constraint .. _constraints: Resource Constraints -------------------- .. index:: single: resource; score single: node; score Scores ###### Scores of all kinds are integral to how the cluster works. Practically everything from moving a resource to deciding which resource to stop in a degraded cluster is achieved by manipulating scores in some way. Scores are calculated per resource and node. Any node with a negative score for a resource can't run that resource. The cluster places a resource on the node with the highest score for it. Infinity Math _____________ Pacemaker implements **INFINITY** (or equivalently, **+INFINITY**) internally as a score of 1,000,000. Addition and subtraction with it follow these three basic rules: * Any value + **INFINITY** = **INFINITY** * Any value - **INFINITY** = -**INFINITY** * **INFINITY** - **INFINITY** = **-INFINITY** .. note:: What if you want to use a score higher than 1,000,000? Typically this possibility arises when someone wants to base the score on some external metric that might go above 1,000,000. The short answer is you can't. The long answer is it is sometimes possible work around this limitation creatively. You may be able to set the score to some computed value based on the external metric rather than use the metric directly. For nodes, you can store the metric as a node attribute, and query the attribute when computing the score (possibly as part of a custom resource agent). .. _location-constraint: .. index:: single: location constraint single: constraint; location Deciding Which Nodes a Resource Can Run On ########################################## *Location constraints* tell the cluster which nodes a resource can run on. There are two alternative strategies. One way is to say that, by default, resources can run anywhere, and then the location constraints specify nodes that are not allowed (an *opt-out* cluster). The other way is to start with nothing able to run anywhere, and use location constraints to selectively enable allowed nodes (an *opt-in* cluster). Whether you should choose opt-in or opt-out depends on your personal preference and the make-up of your cluster. If most of your resources can run on most of the nodes, then an opt-out arrangement is likely to result in a simpler configuration. On the other-hand, if most resources can only run on a small subset of nodes, an opt-in configuration might be simpler. .. index:: pair: XML element; rsc_location single: constraint; rsc_location Location Properties ___________________ .. table:: **Attributes of a rsc_location Element** +--------------------+---------+----------------------------------------------------------------------------------------------+ | Attribute | Default | Description | +====================+=========+==============================================================================================+ | id | | .. index:: | | | | single: rsc_location; attribute, id | | | | single: attribute; id (rsc_location) | | | | single: id; rsc_location attribute | | | | | | | | A unique name for the constraint (required) | +--------------------+---------+----------------------------------------------------------------------------------------------+ | rsc | | .. 
index:: | | | | single: rsc_location; attribute, rsc | | | | single: attribute; rsc (rsc_location) | | | | single: rsc; rsc_location attribute | | | | | | | | The name of the resource to which this constraint | | | | applies. A location constraint must either have a | | | | ``rsc``, have a ``rsc-pattern``, or contain at | | | | least one resource set. | +--------------------+---------+----------------------------------------------------------------------------------------------+ | rsc-pattern | | .. index:: | | | | single: rsc_location; attribute, rsc-pattern | | | | single: attribute; rsc-pattern (rsc_location) | | | | single: rsc-pattern; rsc_location attribute | | | | | | | | A pattern matching the names of resources to which | | | | this constraint applies. The syntax is the same as | | | | `POSIX `_ | | | | extended regular expressions, with the addition of an | | | | initial *!* indicating that resources *not* matching | | | | the pattern are selected. If the regular expression | | | | contains submatches, and the constraint is governed by | | | | a :ref:`rule `, the submatches can be | | | | referenced as **%1** through **%9** in the rule's | | | | ``score-attribute`` or a rule expression's ``attribute``. | | | | A location constraint must either have a ``rsc``, have a | | | | ``rsc-pattern``, or contain at least one resource set. | +--------------------+---------+----------------------------------------------------------------------------------------------+ | node | | .. index:: | | | | single: rsc_location; attribute, node | | | | single: attribute; node (rsc_location) | | | | single: node; rsc_location attribute | | | | | | | | The name of the node to which this constraint applies. | | | | A location constraint must either have a ``node`` and | | | | ``score``, or contain at least one rule. | +--------------------+---------+----------------------------------------------------------------------------------------------+ | score | | .. index:: | | | | single: rsc_location; attribute, score | | | | single: attribute; score (rsc_location) | | | | single: score; rsc_location attribute | | | | | | | | Positive values indicate a preference for running the | | | | affected resource(s) on ``node`` -- the higher the value, | | | | the stronger the preference. Negative values indicate | | | | the resource(s) should avoid this node (a value of | | | | **-INFINITY** changes "should" to "must"). A location | | | | constraint must either have a ``node`` and ``score``, | | | | or contain at least one rule. | +--------------------+---------+----------------------------------------------------------------------------------------------+ | resource-discovery | always | .. index:: | | | | single: rsc_location; attribute, resource-discovery | | | | single: attribute; resource-discovery (rsc_location) | | | | single: resource-discovery; rsc_location attribute | | | | | | | | Whether Pacemaker should perform resource discovery | | | | (that is, check whether the resource is already running) | | | | for this resource on this node. This should normally be | | | | left as the default, so that rogue instances of a | | | | service can be stopped when they are running where they | | | | are not supposed to be. 
However, there are two | | | | situations where disabling resource discovery is a good | | | | idea: when a service is not installed on a node, | | | | discovery might return an error (properly written OCF | | | | agents will not, so this is usually only seen with other | | | | agent types); and when Pacemaker Remote is used to scale | | | | a cluster to hundreds of nodes, limiting resource | | | | discovery to allowed nodes can significantly boost | | | | performance. | | | | | | | | * ``always:`` Always perform resource discovery for | | | | the specified resource on this node. | | | | | | | | * ``never:`` Never perform resource discovery for the | | | | specified resource on this node. This option should | | | | generally be used with a -INFINITY score, although | | | | that is not strictly required. | | | | | | | | * ``exclusive:`` Perform resource discovery for the | | | | specified resource only on this node (and other nodes | | | | similarly marked as ``exclusive``). Multiple location | | | | constraints using ``exclusive`` discovery for the | | | | same resource across different nodes creates a subset | | | | of nodes resource-discovery is exclusive to. If a | | | | resource is marked for ``exclusive`` discovery on one | | | | or more nodes, that resource is only allowed to be | | | | placed within that subset of nodes. | +--------------------+---------+----------------------------------------------------------------------------------------------+ .. warning:: Setting ``resource-discovery`` to ``never`` or ``exclusive`` removes Pacemaker's ability to detect and stop unwanted instances of a service running where it's not supposed to be. It is up to the system administrator (you!) to make sure that the service can *never* be active on nodes without ``resource-discovery`` (such as by leaving the relevant software uninstalled). .. index:: single: Asymmetrical Clusters single: Opt-In Clusters Asymmetrical "Opt-In" Clusters ______________________________ To create an opt-in cluster, start by preventing resources from running anywhere by default: .. code-block:: none # crm_attribute --name symmetric-cluster --update false Then start enabling nodes. The following fragment says that the web server prefers **sles-1**, the database prefers **sles-2** and both can fail over to **sles-3** if their most preferred node fails. .. topic:: Opt-in location constraints for two resources .. code-block:: xml .. index:: single: Symmetrical Clusters single: Opt-Out Clusters Symmetrical "Opt-Out" Clusters ______________________________ To create an opt-out cluster, start by allowing resources to run anywhere by default: .. code-block:: none # crm_attribute --name symmetric-cluster --update true Then start disabling nodes. The following fragment is the equivalent of the above opt-in configuration. .. topic:: Opt-out location constraints for two resources .. code-block:: xml .. _node-score-equal: What if Two Nodes Have the Same Score _____________________________________ If two nodes have the same score, then the cluster will choose one. This choice may seem random and may not be what was intended, however the cluster was not given enough information to know any better. .. topic:: Constraints where a resource prefers two nodes equally .. code-block:: xml In the example above, assuming no other constraints and an inactive cluster, **Webserver** would probably be placed on **sles-1** and **Database** on **sles-2**. 
It would likely have placed **Webserver** based on the node's uname and **Database** based on the desire to spread the resource load evenly across the cluster. However other factors can also be involved in more complex configurations. .. index:: single: constraint; ordering single: resource; start order .. _s-resource-ordering: Specifying the Order in which Resources Should Start/Stop ######################################################### *Ordering constraints* tell the cluster the order in which certain resource actions should occur. .. important:: Ordering constraints affect *only* the ordering of resource actions; they do *not* require that the resources be placed on the same node. If you want resources to be started on the same node *and* in a specific order, you need both an ordering constraint *and* a colocation constraint (see :ref:`s-resource-colocation`), or alternatively, a group (see :ref:`group-resources`). .. index:: pair: XML element; rsc_order pair: constraint; rsc_order Ordering Properties ___________________ .. table:: **Attributes of a rsc_order Element** +--------------+----------------------------+-------------------------------------------------------------------+ | Field | Default | Description | +==============+============================+===================================================================+ | id | | .. index:: | | | | single: rsc_order; attribute, id | | | | single: attribute; id (rsc_order) | | | | single: id; rsc_order attribute | | | | | | | | A unique name for the constraint | +--------------+----------------------------+-------------------------------------------------------------------+ | first | | .. index:: | | | | single: rsc_order; attribute, first | | | | single: attribute; first (rsc_order) | | | | single: first; rsc_order attribute | | | | | | | | Name of the resource that the ``then`` resource | | | | depends on | +--------------+----------------------------+-------------------------------------------------------------------+ | then | | .. index:: | | | | single: rsc_order; attribute, then | | | | single: attribute; then (rsc_order) | | | | single: then; rsc_order attribute | | | | | | | | Name of the dependent resource | +--------------+----------------------------+-------------------------------------------------------------------+ | first-action | start | .. index:: | | | | single: rsc_order; attribute, first-action | | | | single: attribute; first-action (rsc_order) | | | | single: first-action; rsc_order attribute | | | | | | | | The action that the ``first`` resource must complete | | | | before ``then-action`` can be initiated for the ``then`` | | | | resource. Allowed values: ``start``, ``stop``, | | | | ``promote``, ``demote``. | +--------------+----------------------------+-------------------------------------------------------------------+ | then-action | value of ``first-action`` | .. index:: | | | | single: rsc_order; attribute, then-action | | | | single: attribute; then-action (rsc_order) | | | | single: first-action; rsc_order attribute | | | | | | | | The action that the ``then`` resource can execute only | | | | after the ``first-action`` on the ``first`` resource has | | | | completed. Allowed values: ``start``, ``stop``, | | | | ``promote``, ``demote``. | +--------------+----------------------------+-------------------------------------------------------------------+ | kind | Mandatory | .. 
index:: | | | | single: rsc_order; attribute, kind | | | | single: attribute; kind (rsc_order) | | | | single: kind; rsc_order attribute | | | | | | | | How to enforce the constraint. Allowed values: | | | | | | | | * ``Mandatory:`` ``then-action`` will never be initiated | | | | for the ``then`` resource unless and until ``first-action`` | | | | successfully completes for the ``first`` resource. | | | | | | | | * ``Optional:`` The constraint applies only if both specified | | | | resource actions are scheduled in the same transition | | | | (that is, in response to the same cluster state). This | | | | means that ``then-action`` is allowed on the ``then`` | | | | resource regardless of the state of the ``first`` resource, | | | | but if both actions happen to be scheduled at the same time, | | | | they will be ordered. | | | | | | | | * ``Serialize:`` Ensure that the specified actions are never | | | | performed concurrently for the specified resources. | | | | ``First-action`` and ``then-action`` can be executed in either | | | | order, but one must complete before the other can be initiated. | | | | An example use case is when resource start-up puts a high load | | | | on the host. | +--------------+----------------------------+-------------------------------------------------------------------+ | symmetrical | TRUE for ``Mandatory`` and | .. index:: | | | ``Optional`` kinds. FALSE | single: rsc_order; attribute, symmetrical | | | for ``Serialize`` kind. | single: attribute; symmetrical (rsc)order) | | | | single: symmetrical; rsc_order attribute | | | | | | | | If true, the reverse of the constraint applies for the | | | | opposite action (for example, if B starts after A starts, | | | | then B stops before A stops). ``Serialize`` orders cannot | | | | be symmetrical. | +--------------+----------------------------+-------------------------------------------------------------------+ ``Promote`` and ``demote`` apply to :ref:`promotable ` clone resources. Optional and mandatory ordering _______________________________ Here is an example of ordering constraints where **Database** *must* start before **Webserver**, and **IP** *should* start before **Webserver** if they both need to be started: .. topic:: Optional and mandatory ordering constraints .. code-block:: xml Because the above example lets ``symmetrical`` default to TRUE, **Webserver** must be stopped before **Database** can be stopped, and **Webserver** should be stopped before **IP** if they both need to be stopped. .. index:: single: colocation single: constraint; colocation single: resource; location relative to other resources .. _s-resource-colocation: Placing Resources Relative to other Resources ############################################# *Colocation constraints* tell the cluster that the location of one resource depends on the location of another one. Colocation has an important side-effect: it affects the order in which resources are assigned to a node. Think about it: You can't place A relative to B unless you know where B is [#]_. So when you are creating colocation constraints, it is important to consider whether you should colocate A with B, or B with A. .. important:: Colocation constraints affect *only* the placement of resources; they do *not* require that the resources be started in a particular order. 
If you want resources to be started on the same node *and* in a specific order, you need both an ordering constraint (see :ref:`s-resource-ordering`) *and* a colocation constraint, or alternatively, a group (see :ref:`group-resources`). .. index:: pair: XML element; rsc_colocation single: constraint; rsc_colocation Colocation Properties _____________________ .. table:: **Attributes of a rsc_colocation Constraint** +----------------+----------------+--------------------------------------------------------+ | Field | Default | Description | +================+================+========================================================+ | id | | .. index:: | | | | single: rsc_colocation; attribute, id | | | | single: attribute; id (rsc_colocation) | | | | single: id; rsc_colocation attribute | | | | | | | | A unique name for the constraint (required). | +----------------+----------------+--------------------------------------------------------+ | rsc | | .. index:: | | | | single: rsc_colocation; attribute, rsc | | | | single: attribute; rsc (rsc_colocation) | | | | single: rsc; rsc_colocation attribute | | | | | | | | The name of a resource that should be located | | | | relative to ``with-rsc``. A colocation constraint must | | | | either contain at least one | | | | :ref:`resource set `, or specify both | | | | ``rsc`` and ``with-rsc``. | +----------------+----------------+--------------------------------------------------------+ | with-rsc | | .. index:: | | | | single: rsc_colocation; attribute, with-rsc | | | | single: attribute; with-rsc (rsc_colocation) | | | | single: with-rsc; rsc_colocation attribute | | | | | | | | The name of the resource used as the colocation | | | | target. The cluster will decide where to put this | | | | resource first and then decide where to put ``rsc``. | | | | A colocation constraint must either contain at least | | | | one :ref:`resource set `, or specify | | | | both ``rsc`` and ``with-rsc``. | +----------------+----------------+--------------------------------------------------------+ | node-attribute | #uname | .. index:: | | | | single: rsc_colocation; attribute, node-attribute | | | | single: attribute; node-attribute (rsc_colocation) | | | | single: node-attribute; rsc_colocation attribute | | | | | | | | If ``rsc`` and ``with-rsc`` are specified, this node | | | | attribute must be the same on the node running ``rsc`` | | | | and the node running ``with-rsc`` for the constraint | | | | to be satisfied. (For details, see | | | | :ref:`s-coloc-attribute`.) | +----------------+----------------+--------------------------------------------------------+ | score | 0 | .. index:: | | | | single: rsc_colocation; attribute, score | | | | single: attribute; score (rsc_colocation) | | | | single: score; rsc_colocation attribute | | | | | | | | Positive values indicate the resources should run on | | | | the same node. Negative values indicate the resources | | | | should run on different nodes. Values of | | | | +/- ``INFINITY`` change "should" to "must". | +----------------+----------------+--------------------------------------------------------+ | rsc-role | Started | .. index:: | | | | single: clone; ordering constraint, rsc-role | | | | single: ordering constraint; rsc-role (clone) | | | | single: rsc-role; clone ordering constraint | | | | | | | | If ``rsc`` and ``with-rsc`` are specified, and ``rsc`` | | | | is a :ref:`promotable clone `, | | | | the constraint applies only to ``rsc`` instances in | | | | this role. 
Allowed values: ``Started``, ``Promoted``, | | | | ``Unpromoted``. For details, see | | | | :ref:`promotable-clone-constraints`. | +----------------+----------------+--------------------------------------------------------+ | with-rsc-role | Started | .. index:: | | | | single: clone; ordering constraint, with-rsc-role | | | | single: ordering constraint; with-rsc-role (clone) | | | | single: with-rsc-role; clone ordering constraint | | | | | | | | If ``rsc`` and ``with-rsc`` are specified, and | | | | ``with-rsc`` is a | | | | :ref:`promotable clone `, the | | | | constraint applies only to ``with-rsc`` instances in | | | | this role. Allowed values: ``Started``, ``Promoted``, | | | | ``Unpromoted``. For details, see | | | | :ref:`promotable-clone-constraints`. | +----------------+----------------+--------------------------------------------------------+ | influence | value of | .. index:: | | | ``critical`` | single: rsc_colocation; attribute, influence | | | meta-attribute | single: attribute; influence (rsc_colocation) | | | for ``rsc`` | single: influence; rsc_colocation attribute | | | | | | | | Whether to consider the location preferences of | | | | ``rsc`` when ``with-rsc`` is already active. Allowed | | | | values: ``true``, ``false``. For details, see | - | | | :ref:`s-coloc-influence`. | + | | | :ref:`s-coloc-influence`. *(since 2.1.0)* | +----------------+----------------+--------------------------------------------------------+ Mandatory Placement ___________________ Mandatory placement occurs when the constraint's score is **+INFINITY** or **-INFINITY**. In such cases, if the constraint can't be satisfied, then the **rsc** resource is not permitted to run. For ``score=INFINITY``, this includes cases where the ``with-rsc`` resource is not active. If you need resource **A** to always run on the same machine as resource **B**, you would add the following constraint: .. topic:: Mandatory colocation constraint for two resources .. code-block:: xml Remember, because **INFINITY** was used, if **B** can't run on any of the cluster nodes (for whatever reason) then **A** will not be allowed to run. Whether **A** is running or not has no effect on **B**. Alternatively, you may want the opposite -- that **A** *cannot* run on the same machine as **B**. In this case, use ``score="-INFINITY"``. .. topic:: Mandatory anti-colocation constraint for two resources .. code-block:: xml Again, by specifying **-INFINITY**, the constraint is binding. So if the only place left to run is where **B** already is, then **A** may not run anywhere. As with **INFINITY**, **B** can run even if **A** is stopped. However, in this case **A** also can run if **B** is stopped, because it still meets the constraint of **A** and **B** not running on the same node. Advisory Placement __________________ If mandatory placement is about "must" and "must not", then advisory placement is the "I'd prefer if" alternative. For constraints with scores greater than **-INFINITY** and less than **INFINITY**, the cluster will try to accommodate your wishes but may ignore them if the alternative is to stop some of the cluster resources. As in life, where if enough people prefer something it effectively becomes mandatory, advisory colocation constraints can combine with other elements of the configuration to behave as if they were mandatory. .. topic:: Advisory colocation constraint for two resources .. code-block:: xml .. 
_s-coloc-attribute: Colocation by Node Attribute ____________________________ The ``node-attribute`` property of a colocation constraints allows you to express the requirement, "these resources must be on similar nodes". As an example, imagine that you have two Storage Area Networks (SANs) that are not controlled by the cluster, and each node is connected to one or the other. You may have two resources **r1** and **r2** such that **r2** needs to use the same SAN as **r1**, but doesn't necessarily have to be on the same exact node. In such a case, you could define a :ref:`node attribute ` named **san**, with the value **san1** or **san2** on each node as appropriate. Then, you could colocate **r2** with **r1** using ``node-attribute`` set to **san**. .. _s-coloc-influence: Colocation Influence ____________________ By default, if A is colocated with B, the cluster will take into account A's preferences when deciding where to place B, to maximize the chance that both resources can run. For a detailed look at exactly how this occurs, see `Colocation Explained `_. However, if ``influence`` is set to ``false`` in the colocation constraint, this will happen only if B is inactive and needing to be started. If B is already active, A's preferences will have no effect on placing B. An example of what effect this would have and when it would be desirable would be a nonessential reporting tool colocated with a resource-intensive service that takes a long time to start. If the reporting tool fails enough times to reach its migration threshold, by default the cluster will want to move both resources to another node if possible. Setting ``influence`` to ``false`` on the colocation constraint would mean that the reporting tool would be stopped in this situation instead, to avoid forcing the service to move. The ``critical`` resource meta-attribute is a convenient way to specify the default for all colocation constraints and groups involving a particular resource. .. note:: If a noncritical resource is a member of a group, all later members of the group will be treated as noncritical, even if they are marked as (or left to default to) critical. .. _s-resource-sets: Resource Sets ############# .. index:: single: constraint; resource set single: resource; resource set *Resource sets* allow multiple resources to be affected by a single constraint. .. topic:: A set of 3 resources .. code-block:: xml Resource sets are valid inside ``rsc_location``, ``rsc_order`` (see :ref:`s-resource-sets-ordering`), ``rsc_colocation`` (see :ref:`s-resource-sets-colocation`), and ``rsc_ticket`` (see :ref:`ticket-constraints`) constraints. A resource set has a number of properties that can be set, though not all have an effect in all contexts. .. index:: pair: XML element; resource_set .. topic:: **Attributes of a resource_set Element** +-------------+------------------+--------------------------------------------------------+ | Field | Default | Description | +=============+==================+========================================================+ | id | | .. index:: | | | | single: resource_set; attribute, id | | | | single: attribute; id (resource_set) | | | | single: id; resource_set attribute | | | | | | | | A unique name for the set (required) | +-------------+------------------+--------------------------------------------------------+ | sequential | true | .. 
index:: | | | | single: resource_set; attribute, sequential | | | | single: attribute; sequential (resource_set) | | | | single: sequential; resource_set attribute | | | | | | | | Whether the members of the set must be acted on in | | | | order. Meaningful within ``rsc_order`` and | | | | ``rsc_colocation``. | +-------------+------------------+--------------------------------------------------------+ | require-all | true | .. index:: | | | | single: resource_set; attribute, require-all | | | | single: attribute; require-all (resource_set) | | | | single: require-all; resource_set attribute | | | | | | | | Whether all members of the set must be active before | | | | continuing. With the current implementation, the | | | | cluster may continue even if only one member of the | | | | set is started, but if more than one member of the set | | | | is starting at the same time, the cluster will still | | | | wait until all of those have started before continuing | | | | (this may change in future versions). Meaningful | | | | within ``rsc_order``. | +-------------+------------------+--------------------------------------------------------+ | role | | .. index:: | | | | single: resource_set; attribute, role | | | | single: attribute; role (resource_set) | | | | single: role; resource_set attribute | | | | | | | | The constraint applies only to resource set members | | | | that are :ref:`s-resource-promotable` in this | | | | role. Meaningful within ``rsc_location``, | | | | ``rsc_colocation`` and ``rsc_ticket``. | | | | Allowed values: ``Started``, ``Promoted``, | | | | ``Unpromoted``. For details, see | | | | :ref:`promotable-clone-constraints`. | +-------------+------------------+--------------------------------------------------------+ | action | value of | .. index:: | | | ``first-action`` | single: resource_set; attribute, action | | | in the enclosing | single: attribute; action (resource_set) | | | ordering | single: action; resource_set attribute | | | constraint | | | | | The action that applies to *all members* of the set. | | | | Meaningful within ``rsc_order``. Allowed values: | | | | ``start``, ``stop``, ``promote``, ``demote``. | +-------------+------------------+--------------------------------------------------------+ | score | | .. index:: | | | | single: resource_set; attribute, score | | | | single: attribute; score (resource_set) | | | | single: score; resource_set attribute | | | | | | | | *Advanced use only.* Use a specific score for this | | | | set within the constraint. | +-------------+------------------+--------------------------------------------------------+ .. _s-resource-sets-ordering: Ordering Sets of Resources ########################## A common situation is for an administrator to create a chain of ordered resources, such as: .. topic:: A chain of ordered resources .. code-block:: xml .. topic:: Visual representation of the four resources' start order for the above constraints .. image:: images/resource-set.png :alt: Ordered set Ordered Set ___________ To simplify this situation, :ref:`s-resource-sets` can be used within ordering constraints: .. topic:: A chain of ordered resources expressed as a set .. code-block:: xml While the set-based format is not less verbose, it is significantly easier to get right and maintain. .. important:: If you use a higher-level tool, pay attention to how it exposes this functionality. Depending on the tool, creating a set **A B** may be equivalent to **A then B**, or **B then A**. 
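To make the shape of the XML concrete, here is a minimal sketch of an ordered chain written both ways. The resource names **A** through **D** are purely illustrative, and in a real configuration you would use one form or the other rather than both:

.. topic:: Sketch of an ordered chain, as individual constraints and as a set

   .. code-block:: xml

      <constraints>
          <!-- Chain expressed as individual ordering constraints
               (illustrative resource IDs A, B, C, D) -->
          <rsc_order id="order-A-then-B" first="A" then="B"/>
          <rsc_order id="order-B-then-C" first="B" then="C"/>
          <rsc_order id="order-C-then-D" first="C" then="D"/>

          <!-- Equivalent chain expressed as a single ordered set -->
          <rsc_order id="order-chain-as-set">
              <resource_set id="ordered-set-1" sequential="true">
                  <resource_ref id="A"/>
                  <resource_ref id="B"/>
                  <resource_ref id="C"/>
                  <resource_ref id="D"/>
              </resource_set>
          </rsc_order>
      </constraints>

Both forms produce the same start (and, with ``symmetrical`` defaulting to true, stop) order; the set form simply keeps the whole chain in one element, which is easier to review and extend as resources are added.
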
Ordering Multiple Sets ______________________ The syntax can be expanded to allow sets of resources to be ordered relative to each other, where the members of each individual set may be ordered or unordered (controlled by the ``sequential`` property). In the example below, **A** and **B** can both start in parallel, as can **C** and **D**, however **C** and **D** can only start once *both* **A** *and* **B** are active. .. topic:: Ordered sets of unordered resources .. code-block:: xml .. topic:: Visual representation of the start order for two ordered sets of unordered resources .. image:: images/two-sets.png :alt: Two ordered sets Of course either set -- or both sets -- of resources can also be internally ordered (by setting ``sequential="true"``) and there is no limit to the number of sets that can be specified. .. topic:: Advanced use of set ordering - Three ordered sets, two of which are internally unordered .. code-block:: xml .. topic:: Visual representation of the start order for the three sets defined above .. image:: images/three-sets.png :alt: Three ordered sets .. important:: An ordered set with ``sequential=false`` makes sense only if there is another set in the constraint. Otherwise, the constraint has no effect. Resource Set OR Logic _____________________ The unordered set logic discussed so far has all been "AND" logic. To illustrate this take the 3 resource set figure in the previous section. Those sets can be expressed, **(A and B) then (C) then (D) then (E and F)**. Say for example we want to change the first set, **(A and B)**, to use "OR" logic so the sets look like this: **(A or B) then (C) then (D) then (E and F)**. This functionality can be achieved through the use of the ``require-all`` option. This option defaults to TRUE which is why the "AND" logic is used by default. Setting ``require-all=false`` means only one resource in the set needs to be started before continuing on to the next set. .. topic:: Resource Set "OR" logic: Three ordered sets, where the first set is internally unordered with "OR" logic .. code-block:: xml .. important:: An ordered set with ``require-all=false`` makes sense only in conjunction with ``sequential=false``. Think of it like this: ``sequential=false`` modifies the set to be an unordered set using "AND" logic by default, and adding ``require-all=false`` flips the unordered set's "AND" logic to "OR" logic. .. _s-resource-sets-colocation: Colocating Sets of Resources ############################ Another common situation is for an administrator to create a set of colocated resources. The simplest way to do this is to define a resource group (see :ref:`group-resources`), but that cannot always accurately express the desired relationships. For example, maybe the resources do not need to be ordered. Another way would be to define each relationship as an individual constraint, but that causes a difficult-to-follow constraint explosion as the number of resources and combinations grow. .. topic:: Colocation chain as individual constraints, where A is placed first, then B, then C, then D .. code-block:: xml To express complicated relationships with a simplified syntax [#]_, :ref:`resource sets ` can be used within colocation constraints. .. topic:: Equivalent colocation chain expressed using **resource_set** .. code-block:: xml .. note:: Within a ``resource_set``, the resources are listed in the order they are *placed*, which is the reverse of the order in which they are *colocated*. 
In the above example, resource **A** is placed before resource **B**, which is the same as saying resource **B** is colocated with resource **A**. As with individual constraints, a resource that can't be active prevents any resource that must be colocated with it from being active. In both of the two previous examples, if **B** is unable to run, then both **C** and by inference **D** must remain stopped. .. important:: If you use a higher-level tool, pay attention to how it exposes this functionality. Depending on the tool, creating a set **A B** may be equivalent to **A with B**, or **B with A**. Resource sets can also be used to tell the cluster that entire *sets* of resources must be colocated relative to each other, while the individual members within any one set may or may not be colocated relative to each other (determined by the set's ``sequential`` property). In the following example, resources **B**, **C**, and **D** will each be colocated with **A** (which will be placed first). **A** must be able to run in order for any of the resources to run, but any of **B**, **C**, or **D** may be stopped without affecting any of the others. .. topic:: Using colocated sets to specify a shared dependency .. code-block:: xml .. note:: Pay close attention to the order in which resources and sets are listed. While the members of any one sequential set are placed first to last (i.e., the colocation dependency is last with first), multiple sets are placed last to first (i.e. the colocation dependency is first with last). .. important:: A colocated set with ``sequential="false"`` makes sense only if there is another set in the constraint. Otherwise, the constraint has no effect. There is no inherent limit to the number and size of the sets used. The only thing that matters is that in order for any member of one set in the constraint to be active, all members of sets listed after it must also be active (and naturally on the same node); and if a set has ``sequential="true"``, then in order for one member of that set to be active, all members listed before it must also be active. If desired, you can restrict the dependency to instances of promotable clone resources that are in a specific role, using the set's ``role`` property. .. topic:: Colocation in which the members of the middle set have no interdependencies, and the last set listed applies only to promoted instances .. code-block:: xml .. topic:: Visual representation of the above example (resources are placed from left to right) .. image:: ../shared/images/pcmk-colocated-sets.png :alt: Colocation chain .. note:: Unlike ordered sets, colocated sets do not use the ``require-all`` option. .. [#] While the human brain is sophisticated enough to read the constraint in any order and choose the correct one depending on the situation, the cluster is not quite so smart. Yet. .. [#] which is not the same as saying easy to follow diff --git a/doc/sphinx/Pacemaker_Explained/resources.rst b/doc/sphinx/Pacemaker_Explained/resources.rst index 003d4ebfb0..773188c7cc 100644 --- a/doc/sphinx/Pacemaker_Explained/resources.rst +++ b/doc/sphinx/Pacemaker_Explained/resources.rst @@ -1,1036 +1,1036 @@ .. _resource: Cluster Resources ----------------- .. _s-resource-primitive: What is a Cluster Resource? ########################### .. index:: single: resource A resource is a service made highly available by a cluster. The simplest type of resource, a *primitive* resource, is described in this chapter. 
More complex forms, such as groups and clones, are described in later chapters. Every primitive resource has a *resource agent*. A resource agent is an external program that abstracts the service it provides and present a consistent view to the cluster. This allows the cluster to be agnostic about the resources it manages. The cluster doesn't need to understand how the resource works because it relies on the resource agent to do the right thing when given a **start**, **stop** or **monitor** command. For this reason, it is crucial that resource agents are well-tested. Typically, resource agents come in the form of shell scripts. However, they can be written using any technology (such as C, Python or Perl) that the author is comfortable with. .. _s-resource-supported: .. index:: single: resource; class Resource Classes ################ Pacemaker supports several classes of agents: * OCF * LSB * Systemd * Upstart (deprecated) * Service * Fencing * Nagios Plugins .. index:: single: resource; OCF single: OCF; resources single: Open Cluster Framework; resources Open Cluster Framework ______________________ The OCF standard [#]_ is basically an extension of the Linux Standard Base conventions for init scripts to: * support parameters, * make them self-describing, and * make them extensible OCF specs have strict definitions of the exit codes that actions must return [#]_. The cluster follows these specifications exactly, and giving the wrong exit code will cause the cluster to behave in ways you will likely find puzzling and annoying. In particular, the cluster needs to distinguish a completely stopped resource from one which is in some erroneous and indeterminate state. Parameters are passed to the resource agent as environment variables, with the special prefix ``OCF_RESKEY_``. So, a parameter which the user thinks of as ``ip`` will be passed to the resource agent as ``OCF_RESKEY_ip``. The number and purpose of the parameters is left to the resource agent; however, the resource agent should use the **meta-data** command to advertise any that it supports. The OCF class is the most preferred as it is an industry standard, highly flexible (allowing parameters to be passed to agents in a non-positional manner) and self-describing. For more information, see the `reference `_ and the *Resource Agents* chapter of *Pacemaker Administration*. .. index:: single: resource; LSB single: LSB; resources single: Linux Standard Base; resources Linux Standard Base ___________________ *LSB* resource agents are more commonly known as *init scripts*. If a full path is not given, they are assumed to be located in ``/etc/init.d``. Commonly, they are provided by the OS distribution. In order to be used with a Pacemaker cluster, they must conform to the LSB specification [#]_. .. warning:: Many distributions or particular software packages claim LSB compliance but ship with broken init scripts. For details on how to check whether your init script is LSB-compatible, see the `Resource Agents` chapter of `Pacemaker Administration`. Common problematic violations of the LSB standard include: * Not implementing the ``status`` operation at all * Not observing the correct exit status codes for ``start``/``stop``/``status`` actions * Starting a started resource returns an error * Stopping a stopped resource returns an error .. important:: Remember to make sure the computer is `not` configured to start any services at boot time -- that should be controlled by the cluster. .. _s-resource-supported-systemd: .. 
index:: single: Resource; Systemd single: Systemd; resources Systemd _______ Most Linux distributions have replaced the old `SysV `_ style of initialization daemons and scripts with `Systemd `_. Pacemaker is able to manage these services `if they are present`. Instead of init scripts, systemd has `unit files`. Generally, the services (unit files) are provided by the OS distribution, but there are online guides for converting from init scripts [#]_. .. important:: Remember to make sure the computer is `not` configured to start any services at boot time -- that should be controlled by the cluster. .. index:: single: Resource; Upstart single: Upstart; resources Upstart _______ Some distributions replaced the old `SysV `_ style of initialization daemons (and scripts) with `Upstart `_. Pacemaker is able to manage these services `if they are present`. Instead of init scripts, Upstart has `jobs`. Generally, the services (jobs) are provided by the OS distribution. .. important:: Remember to make sure the computer is `not` configured to start any services at boot time -- that should be controlled by the cluster. .. warning:: Upstart support is deprecated in Pacemaker. Upstart is no longer an actively maintained project, and test platforms for it are no longer readily usable. Support will likely be dropped entirely at the next major release of Pacemaker. .. index:: single: Resource; System Services single: System Service; resources System Services _______________ Since there are various types of system services (``systemd``, ``upstart``, and ``lsb``), Pacemaker supports a special ``service`` alias which intelligently figures out which one applies to a given cluster node. This is particularly useful when the cluster contains a mix of ``systemd``, ``upstart``, and ``lsb``. In order, Pacemaker will try to find the named service as: * an LSB init script * a Systemd unit file * an Upstart job .. index:: single: Resource; STONITH single: STONITH; resources STONITH _______ The STONITH class is used exclusively for fencing-related resources. This is discussed later in :ref:`fencing`. .. index:: single: Resource; Nagios Plugins single: Nagios Plugins; resources Nagios Plugins ______________ Nagios Plugins [#]_ allow us to monitor services on remote hosts. Pacemaker is able to do remote monitoring with the plugins `if they are present`. A common use case is to configure them as resources belonging to a resource container (usually a virtual machine), and the container will be restarted if any of them has failed. Another use is to configure them as ordinary resources to be used for monitoring hosts or services via the network. The supported parameters are same as the long options of the plugin. .. _primitive-resource: Resource Properties ################### These values tell the cluster which resource agent to use for the resource, where to find that resource agent and what standards it conforms to. .. table:: **Properties of a Primitive Resource** +----------+------------------------------------------------------------------+ | Field | Description | +==========+==================================================================+ | id | .. index:: | | | single: id; resource | | | single: resource; property, id | | | | | | Your name for the resource | +----------+------------------------------------------------------------------+ | class | .. index:: | | | single: class; resource | | | single: resource; property, class | | | | | | The standard the resource agent conforms to. 
Allowed values: | | | ``lsb``, ``nagios``, ``ocf``, ``service``, ``stonith``, | | | ``systemd``, ``upstart`` | +----------+------------------------------------------------------------------+ | type | .. index:: | | | single: type; resource | | | single: resource; property, type | | | | | | The name of the Resource Agent you wish to use. E.g. | | | ``IPaddr`` or ``Filesystem`` | +----------+------------------------------------------------------------------+ | provider | .. index:: | | | single: provider; resource | | | single: resource; property, provider | | | | | | The OCF spec allows multiple vendors to supply the same resource | | | agent. To use the OCF resource agents supplied by the Heartbeat | | | project, you would specify ``heartbeat`` here. | +----------+------------------------------------------------------------------+ The XML definition of a resource can be queried with the **crm_resource** tool. For example: .. code-block:: none # crm_resource --resource Email --query-xml might produce: .. topic:: A system resource definition .. code-block:: xml .. note:: One of the main drawbacks to system services (LSB, systemd or Upstart) resources is that they do not allow any parameters! .. topic:: An OCF resource definition .. code-block:: xml .. _resource_options: Resource Options ################ Resources have two types of options: *meta-attributes* and *instance attributes*. Meta-attributes apply to any type of resource, while instance attributes are specific to each resource agent. Resource Meta-Attributes ________________________ Meta-attributes are used by the cluster to decide how a resource should behave and can be easily set using the ``--meta`` option of the **crm_resource** command. .. table:: **Meta-attributes of a Primitive Resource** +----------------------------+----------------------------------+------------------------------------------------------+ | Field | Default | Description | +============================+==================================+======================================================+ | priority | 0 | .. index:: | | | | single: priority; resource option | | | | single: resource; option, priority | | | | | | | | If not all resources can be active, the cluster | | | | will stop lower priority resources in order to | | | | keep higher priority ones active. | +----------------------------+----------------------------------+------------------------------------------------------+ | critical | true | .. index:: | | | | single: critical; resource option | | | | single: resource; option, critical | | | | | | | | Use this value as the default for ``influence`` in | | | | all :ref:`colocation constraints | | | | ` involving this resource, | | | | as well as the implicit colocation constraints | | | | created if this resource is in a :ref:`group | | | | `. For details, see | - | | | :ref:`s-coloc-influence`. | + | | | :ref:`s-coloc-influence`. *(since 2.1.0)* | +----------------------------+----------------------------------+------------------------------------------------------+ | target-role | Started | .. index:: | | | | single: target-role; resource option | | | | single: resource; option, target-role | | | | | | | | What state should the cluster attempt to keep this | | | | resource in? 
Allowed values: | | | | | | | | * ``Stopped:`` Force the resource to be stopped | | | | * ``Started:`` Allow the resource to be started | | | | (and in the case of :ref:`promotable clone | | | | resources `, promoted | | | | if appropriate) | | | | * ``Unpromoted:`` Allow the resource to be started, | | | | but only in the unpromoted role if the resource is | | | | :ref:`promotable ` | | | | * ``Promoted:`` Equivalent to ``Started`` | +----------------------------+----------------------------------+------------------------------------------------------+ | is-managed | TRUE | .. index:: | | | | single: is-managed; resource option | | | | single: resource; option, is-managed | | | | | | | | Is the cluster allowed to start and stop | | | | the resource? Allowed values: ``true``, ``false`` | +----------------------------+----------------------------------+------------------------------------------------------+ | maintenance | FALSE | .. index:: | | | | single: maintenance; resource option | | | | single: resource; option, maintenance | | | | | | | | Similar to the ``maintenance-mode`` | | | | :ref:`cluster option `, but for | | | | a single resource. If true, the resource will not | | | | be started, stopped, or monitored on any node. This | | | | differs from ``is-managed`` in that monitors will | | | | not be run. Allowed values: ``true``, ``false`` | +----------------------------+----------------------------------+------------------------------------------------------+ | resource-stickiness | 1 for individual clone | .. _resource-stickiness: | | | instances, 0 for all | | | | other resources | .. index:: | | | | single: resource-stickiness; resource option | | | | single: resource; option, resource-stickiness | | | | | | | | A score that will be added to the current node when | | | | a resource is already active. This allows running | | | | resources to stay where they are, even if they | | | | would be placed elsewhere if they were being | | | | started from a stopped state. | +----------------------------+----------------------------------+------------------------------------------------------+ | requires | ``quorum`` for resources | .. _requires: | | | with a ``class`` of ``stonith``, | | | | otherwise ``unfencing`` if | .. index:: | | | unfencing is active in the | single: requires; resource option | | | cluster, otherwise ``fencing`` | single: resource; option, requires | | | if ``stonith-enabled`` is true, | | | | otherwise ``quorum`` | Conditions under which the resource can be | | | | started. Allowed values: | | | | | | | | * ``nothing:`` can always be started | | | | * ``quorum:`` The cluster can only start this | | | | resource if a majority of the configured nodes | | | | are active | | | | * ``fencing:`` The cluster can only start this | | | | resource if a majority of the configured nodes | | | | are active *and* any failed or unknown nodes | | | | have been :ref:`fenced ` | | | | * ``unfencing:`` The cluster can only start this | | | | resource if a majority of the configured nodes | | | | are active *and* any failed or unknown nodes have | | | | been fenced *and* only on nodes that have been | | | | :ref:`unfenced ` | +----------------------------+----------------------------------+------------------------------------------------------+ | migration-threshold | INFINITY | .. 
index:: | | | | single: migration-threshold; resource option | | | | single: resource; option, migration-threshold | | | | | | | | How many failures may occur for this resource on | | | | a node, before this node is marked ineligible to | | | | host this resource. A value of 0 indicates that this | | | | feature is disabled (the node will never be marked | | | | ineligible); by constrast, the cluster treats | | | | INFINITY (the default) as a very large but finite | | | | number. This option has an effect only if the | | | | failed operation specifies ``on-fail`` as | | | | ``restart`` (the default), and additionally for | | | | failed ``start`` operations, if the cluster | | | | property ``start-failure-is-fatal`` is ``false``. | +----------------------------+----------------------------------+------------------------------------------------------+ | failure-timeout | 0 | .. index:: | | | | single: failure-timeout; resource option | | | | single: resource; option, failure-timeout | | | | | | | | How many seconds to wait before acting as if the | | | | failure had not occurred, and potentially allowing | | | | the resource back to the node on which it failed. | | | | A value of 0 indicates that this feature is | | | | disabled. | +----------------------------+----------------------------------+------------------------------------------------------+ | multiple-active | stop_start | .. index:: | | | | single: multiple-active; resource option | | | | single: resource; option, multiple-active | | | | | | | | What should the cluster do if it ever finds the | | | | resource active on more than one node? Allowed | | | | values: | | | | | | | | * ``block``: mark the resource as unmanaged | | | | * ``stop_only``: stop all active instances and | | | | leave them that way | | | | * ``stop_start``: stop all active instances and | | | | start the resource in one location only | +----------------------------+----------------------------------+------------------------------------------------------+ | allow-migrate | TRUE for ocf:pacemaker:remote | Whether the cluster should try to "live migrate" | | | resources, FALSE otherwise | this resource when it needs to be moved (see | | | | :ref:`live-migration`) | +----------------------------+----------------------------------+------------------------------------------------------+ | container-attribute-target | | Specific to bundle resources; see | | | | :ref:`s-bundle-attributes` | +----------------------------+----------------------------------+------------------------------------------------------+ | remote-node | | The name of the Pacemaker Remote guest node this | | | | resource is associated with, if any. If | | | | specified, this both enables the resource as a | | | | guest node and defines the unique name used to | | | | identify the guest node. The guest must be | | | | configured to run the Pacemaker Remote daemon | | | | when it is started. **WARNING:** This value | | | | cannot overlap with any resource or node IDs. | +----------------------------+----------------------------------+------------------------------------------------------+ | remote-port | 3121 | If ``remote-node`` is specified, the port on the | | | | guest used for its Pacemaker Remote connection. | | | | The Pacemaker Remote daemon on the guest must | | | | be configured to listen on this port. 
| +----------------------------+----------------------------------+------------------------------------------------------+ | remote-addr | value of ``remote-node`` | If ``remote-node`` is specified, the IP | | | | address or hostname used to connect to the | | | | guest via Pacemaker Remote. The Pacemaker Remote | | | | daemon on the guest must be configured to accept | | | | connections on this address. | +----------------------------+----------------------------------+------------------------------------------------------+ | remote-connect-timeout | 60s | If ``remote-node`` is specified, how long before | | | | a pending guest connection will time out. | +----------------------------+----------------------------------+------------------------------------------------------+ As an example of setting resource options, if you performed the following commands on an LSB Email resource: .. code-block:: none # crm_resource --meta --resource Email --set-parameter priority --parameter-value 100 # crm_resource -m -r Email -p multiple-active -v block the resulting resource definition might be: .. topic:: An LSB resource with cluster options .. code-block:: xml In addition to the cluster-defined meta-attributes described above, you may also configure arbitrary meta-attributes of your own choosing. Most commonly, this would be done for use in :ref:`rules `. For example, an IT department might define a custom meta-attribute to indicate which company department each resource is intended for. To reduce the chance of name collisions with cluster-defined meta-attributes added in the future, it is recommended to use a unique, organization-specific prefix for such attributes. .. _s-resource-defaults: Setting Global Defaults for Resource Meta-Attributes ____________________________________________________ To set a default value for a resource option, add it to the ``rsc_defaults`` section with ``crm_attribute``. For example, .. code-block:: none # crm_attribute --type rsc_defaults --name is-managed --update false would prevent the cluster from starting or stopping any of the resources in the configuration (unless of course the individual resources were specifically enabled by having their ``is-managed`` set to ``true``). Resource Instance Attributes ____________________________ The resource agents of some resource classes (lsb, systemd and upstart *not* among them) can be given parameters which determine how they behave and which instance of a service they control. If your resource agent supports parameters, you can add them with the ``crm_resource`` command. For example, .. code-block:: none # crm_resource --resource Public-IP --set-parameter ip --parameter-value 192.0.2.2 would create an entry in the resource like this: .. topic:: An example OCF resource with instance attributes .. code-block:: xml For an OCF resource, the result would be an environment variable called ``OCF_RESKEY_ip`` with a value of ``192.0.2.2``. The list of instance attributes supported by an OCF resource agent can be found by calling the resource agent with the ``meta-data`` command. The output contains an XML description of all the supported attributes, their purpose and default values. .. topic:: Displaying the metadata for the Dummy resource agent template .. code-block:: none # export OCF_ROOT=/usr/lib/ocf # $OCF_ROOT/resource.d/pacemaker/Dummy meta-data .. code-block:: xml 1.1 This is a dummy OCF resource agent. 
Resource Instance Attributes
____________________________

The resource agents of some resource classes (lsb, systemd and upstart *not*
among them) can be given parameters which determine how they behave and which
instance of a service they control.

If your resource agent supports parameters, you can add them with the
``crm_resource`` command. For example,

.. code-block:: none

   # crm_resource --resource Public-IP --set-parameter ip --parameter-value 192.0.2.2

would create an entry in the resource like this:

.. topic:: An example OCF resource with instance attributes

   .. code-block:: xml

      <primitive id="Public-IP" class="ocf" type="IPaddr" provider="heartbeat">
        <instance_attributes id="params-public-ip">
          <nvpair id="public-ip-addr" name="ip" value="192.0.2.2"/>
        </instance_attributes>
      </primitive>

For an OCF resource, the result would be an environment variable called
``OCF_RESKEY_ip`` with a value of ``192.0.2.2``.

The list of instance attributes supported by an OCF resource agent can be
found by calling the resource agent with the ``meta-data`` command. The output
contains an XML description of all the supported attributes, their purpose and
default values.

.. topic:: Displaying the metadata for the Dummy resource agent template

   .. code-block:: none

      # export OCF_ROOT=/usr/lib/ocf
      # $OCF_ROOT/resource.d/pacemaker/Dummy meta-data

   .. code-block:: xml

      <resource-agent name="Dummy">
        <version>1.1</version>
        <longdesc lang="en">
         This is a dummy OCF resource agent. It does absolutely nothing except keep
         track of whether it is running or not, and can be configured so that actions
         fail or take a long time. Its purpose is primarily for testing, and to serve
         as a template for resource agent writers.
        </longdesc>
        <shortdesc lang="en">Example stateless resource agent</shortdesc>
        <parameters>
          <parameter name="state">
            <longdesc lang="en">Location to store the resource state in.</longdesc>
            <shortdesc lang="en">State file</shortdesc>
          </parameter>
          <parameter name="passwd">
            <longdesc lang="en">Fake password field</longdesc>
            <shortdesc lang="en">Password</shortdesc>
          </parameter>
          <parameter name="fake">
            <longdesc lang="en">Fake attribute that can be changed to cause a reload</longdesc>
            <shortdesc lang="en">Fake attribute that can be changed to cause a reload</shortdesc>
          </parameter>
          <parameter name="op_sleep">
            <longdesc lang="en">
             Number of seconds to sleep during operations. This can be used to test how
             the cluster reacts to operation timeouts.
            </longdesc>
            <shortdesc lang="en">Operation sleep duration in seconds.</shortdesc>
          </parameter>
          <parameter name="fail_start_on">
            <longdesc lang="en">
             Start, migrate_from, and reload-agent actions will return failure if running
             on the host specified here, but the resource will run successfully anyway
             (future monitor calls will find it running). This can be used to test
             on-fail=ignore.
            </longdesc>
            <shortdesc lang="en">Report bogus start failure on specified host</shortdesc>
          </parameter>
          <parameter name="envfile">
            <longdesc lang="en">
             If this is set, the environment will be dumped to this file for every call.
            </longdesc>
            <shortdesc lang="en">Environment dump file</shortdesc>
          </parameter>
        </parameters>
      </resource-agent>

.. index::
   single: resource; action
   single: resource; operation

.. _operation:

Resource Operations
###################

*Operations* are actions the cluster can perform on a resource by calling the
resource agent. Resource agents must support certain common operations such as
start, stop, and monitor, and may implement any others.

Operations may be explicitly configured for two purposes: to override defaults
for options (such as timeout) that the cluster will use whenever it initiates
the operation, and to run an operation on a recurring basis (for example, to
monitor the resource for failure).

.. topic:: An OCF resource with a non-default start timeout

   .. code-block:: xml

      <primitive id="Public-IP" class="ocf" type="IPaddr" provider="heartbeat">
        <operations>
          <op id="Public-IP-start" name="start" interval="0" timeout="180s"/>
        </operations>
        <instance_attributes id="params-public-ip">
          <nvpair id="public-ip-addr" name="ip" value="192.0.2.2"/>
        </instance_attributes>
      </primitive>

Pacemaker identifies operations by a combination of name and interval, so this
combination must be unique for each resource. That is, you should not
configure two operations for the same resource with the same name and
interval.

.. _operation_properties:

Operation Properties
____________________

Operation properties may be specified directly in the ``op`` element as
XML attributes, or in a separate ``meta_attributes`` block as ``nvpair``
elements. XML attributes take precedence over ``nvpair`` elements if both
are specified.
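For illustration, here is a sketch of the same hypothetical monitor operation
written both ways (the resource and ``id`` values are invented; the two forms
are alternatives, not meant to be configured together). If both forms were
combined, the XML attribute would take precedence.

.. code-block:: xml

   <!-- timeout given as an XML attribute -->
   <op id="db-monitor" name="monitor" interval="30s" timeout="20s"/>

   <!-- equivalent, with the timeout given in a meta_attributes block -->
   <op id="db-monitor-alt" name="monitor" interval="30s">
     <meta_attributes id="db-monitor-alt-meta">
       <nvpair id="db-monitor-alt-timeout" name="timeout" value="20s"/>
     </meta_attributes>
   </op>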
.. table:: **Properties of an Operation**

   +----------------+-----------------------------------+-----------------------------------------------------+
   | Field          | Default                           | Description                                         |
   +================+===================================+=====================================================+
   | id             |                                   | .. index::                                          |
   |                |                                   |    single: id; action property                      |
   |                |                                   |    single: action; property, id                     |
   |                |                                   |                                                     |
   |                |                                   | A unique name for the operation.                    |
   +----------------+-----------------------------------+-----------------------------------------------------+
   | name           |                                   | .. index::                                          |
   |                |                                   |    single: name; action property                    |
   |                |                                   |    single: action; property, name                   |
   |                |                                   |                                                     |
   |                |                                   | The action to perform. This can be any action       |
   |                |                                   | supported by the agent; common values include       |
   |                |                                   | ``monitor``, ``start``, and ``stop``.               |
   +----------------+-----------------------------------+-----------------------------------------------------+
   | interval       | 0                                 | .. index::                                          |
   |                |                                   |    single: interval; action property                |
   |                |                                   |    single: action; property, interval               |
   |                |                                   |                                                     |
   |                |                                   | How frequently (in seconds) to perform the          |
   |                |                                   | operation. A value of 0 means "when needed".        |
   |                |                                   | A positive value defines a *recurring action*,      |
   |                |                                   | which is typically used with                        |
   |                |                                   | :ref:`monitor <s-resource-monitoring>`.             |
   +----------------+-----------------------------------+-----------------------------------------------------+
   | timeout        |                                   | .. index::                                          |
   |                |                                   |    single: timeout; action property                 |
   |                |                                   |    single: action; property, timeout                |
   |                |                                   |                                                     |
   |                |                                   | How long to wait before declaring the action        |
   |                |                                   | has failed.                                         |
   +----------------+-----------------------------------+-----------------------------------------------------+
   | on-fail        | Varies by action:                 | .. index::                                          |
   |                |                                   |    single: on-fail; action property                 |
   |                | * ``stop``: ``fence`` if          |    single: action; property, on-fail                |
   |                |   ``stonith-enabled`` is true     |                                                     |
   |                |   or ``block`` otherwise          | The action to take if this action ever fails.       |
   |                | * ``demote``: ``on-fail`` of the  | Allowed values:                                     |
   |                |   ``monitor`` action with         |                                                     |
   |                |   ``role`` set to ``Promoted``,   | * ``ignore:`` Pretend the resource did not fail.    |
   |                |   if present, enabled, and        | * ``block:`` Don't perform any further operations   |
   |                |   configured to a value other     |   on the resource.                                  |
   |                |   than ``demote``, or ``restart`` | * ``stop:`` Stop the resource and do not start      |
   |                |   otherwise                       |   it elsewhere.                                     |
   |                | * all other actions: ``restart``  | * ``demote:`` Demote the resource, without a        |
   |                |                                   |   full restart. This is valid only for ``promote``  |
   |                |                                   |   actions, and for ``monitor`` actions with both    |
   |                |                                   |   a nonzero ``interval`` and ``role`` set to        |
   |                |                                   |   ``Promoted``; for any other action, a             |
   |                |                                   |   configuration error will be logged, and the       |
   |                |                                   |   default behavior will be used. *(since 2.0.5)*    |
   |                |                                   | * ``restart:`` Stop the resource and start it       |
   |                |                                   |   again (possibly on a different node).             |
   |                |                                   | * ``fence:`` STONITH the node on which the          |
   |                |                                   |   resource failed.                                  |
   |                |                                   | * ``standby:`` Move *all* resources away from the   |
   |                |                                   |   node on which the resource failed.                |
   +----------------+-----------------------------------+-----------------------------------------------------+
   | enabled        | TRUE                              | .. index::                                          |
   |                |                                   |    single: enabled; action property                 |
   |                |                                   |    single: action; property, enabled                |
   |                |                                   |                                                     |
   |                |                                   | If ``false``, ignore this operation definition.     |
   |                |                                   | This is typically used to pause a particular        |
   |                |                                   | recurring ``monitor`` operation; for instance, it   |
   |                |                                   | can complement the respective resource being        |
   |                |                                   | unmanaged (``is-managed=false``), as this alone     |
   |                |                                   | will :ref:`not block any configured monitoring      |
   |                |                                   | <s-monitoring-unmanaged>`. Disabling the operation  |
   |                |                                   | does not suppress all actions of the given type.    |
   |                |                                   | Allowed values: ``true``, ``false``.                |
   +----------------+-----------------------------------+-----------------------------------------------------+
   | record-pending | TRUE                              | .. index::                                          |
   |                |                                   |    single: record-pending; action property          |
   |                |                                   |    single: action; property, record-pending         |
   |                |                                   |                                                     |
   |                |                                   | If ``true``, the intention to perform the operation |
   |                |                                   | is recorded so that GUIs and CLI tools can indicate |
   |                |                                   | that an operation is in progress. This is best set  |
   |                |                                   | as an *operation default*                           |
   |                |                                   | (see :ref:`s-operation-defaults`). Allowed values:  |
   |                |                                   | ``true``, ``false``.                                |
   +----------------+-----------------------------------+-----------------------------------------------------+
   | role           |                                   | .. index::                                          |
   |                |                                   |    single: role; action property                    |
   |                |                                   |    single: action; property, role                   |
   |                |                                   |                                                     |
   |                |                                   | Run the operation only on node(s) that the cluster  |
   |                |                                   | thinks should be in the specified role. This only   |
   |                |                                   | makes sense for recurring ``monitor`` operations.   |
   |                |                                   | Allowed (case-sensitive) values: ``Stopped``,       |
   |                |                                   | ``Started``, and in the case of :ref:`promotable    |
   |                |                                   | clone resources <s-resource-promotable>`,           |
   |                |                                   | ``Unpromoted`` and ``Promoted``.                    |
   +----------------+-----------------------------------+-----------------------------------------------------+

.. note::

   When ``on-fail`` is set to ``demote``, recovery from failure by a successful
   demote causes the cluster to recalculate whether and where a new instance
   should be promoted. The node with the failure is eligible, so if promotion
   scores have not changed, it will be promoted again.

   There is no direct equivalent of ``migration-threshold`` for the promoted
   role, but the same effect can be achieved with a location constraint using
   a :ref:`rule <rules>` with a node attribute expression for the resource's
   fail count.

   For example, to immediately ban the promoted role from a node with any
   failed promote or promoted instance monitor:

   .. code-block:: xml

      <rsc_location id="loc1" rsc="my_primitive-clone">
        <rule id="rule1" score="-INFINITY" role="Promoted" boolean-op="or">
          <expression id="expr1" attribute="fail-count-my_primitive#promote_0"
            operation="gte" value="1"/>
          <expression id="expr2" attribute="fail-count-my_primitive#monitor_10000"
            operation="gte" value="1"/>
        </rule>
      </rsc_location>

   This example assumes that there is a promotable clone of the
   ``my_primitive`` resource (note that the primitive name, not the clone
   name, is used in the rule), and that there is a recurring
   10-second-interval monitor configured for the promoted role (fail count
   attributes specify the interval in milliseconds).

.. _s-resource-monitoring:

Monitoring Resources for Failure
________________________________

When Pacemaker first starts a resource, it runs one-time ``monitor``
operations (referred to as *probes*) to ensure the resource is running where
it's supposed to be, and not running where it's not supposed to be. (This
behavior can be affected by the ``resource-discovery`` location constraint
property.)

Other than those initial probes, Pacemaker will *not* (by default) check that
the resource continues to stay healthy [#]_. You must configure ``monitor``
operations explicitly to perform these checks.

.. topic:: An OCF resource with a recurring health check

   .. code-block:: xml

      <primitive id="Public-IP" class="ocf" type="IPaddr" provider="heartbeat">
        <operations>
          <op id="Public-IP-start" name="start" interval="0" timeout="180s"/>
          <op id="Public-IP-monitor" name="monitor" interval="60s"/>
        </operations>
        <instance_attributes id="params-public-ip">
          <nvpair id="public-ip-addr" name="ip" value="192.0.2.2"/>
        </instance_attributes>
      </primitive>

By default, a ``monitor`` operation will ensure that the resource is running
where it is supposed to be. The ``role`` property can be used for further
checking.

For example, if a resource has one ``monitor`` operation with
``interval=10 role=Started`` and a second ``monitor`` operation with
``interval=11 role=Stopped``, the cluster will run the first monitor on any
nodes it thinks *should* be running the resource, and the second monitor on
any nodes that it thinks *should not* be running the resource (for the truly
paranoid, who want to know when an administrator manually starts a service by
mistake).
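A sketch of such a configuration might look like the following; the resource
definition and ``id`` values are illustrative, not taken from this guide.

.. code-block:: xml

   <primitive id="Public-IP" class="ocf" type="IPaddr" provider="heartbeat">
     <operations>
       <!-- runs on nodes where the resource is expected to be active -->
       <op id="Public-IP-monitor-started" name="monitor" interval="10s" role="Started"/>
       <!-- runs on nodes where the resource is expected to be stopped -->
       <op id="Public-IP-monitor-stopped" name="monitor" interval="11s" role="Stopped"/>
     </operations>
     <instance_attributes id="params-public-ip">
       <nvpair id="public-ip-addr" name="ip" value="192.0.2.2"/>
     </instance_attributes>
   </primitive>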
.. note::

   Currently, monitors with ``role=Stopped`` are not implemented for
   :ref:`clone <s-resource-clone>` resources.

.. _s-monitoring-unmanaged:

Monitoring Resources When Administration is Disabled
_____________________________________________________

Recurring ``monitor`` operations behave differently under various
administrative settings:

* When a resource is unmanaged (by setting ``is-managed=false``): No monitors
  will be stopped.

  If the unmanaged resource is stopped on a node where the cluster thinks it
  should be running, the cluster will detect and report that it is not, but it
  will not consider the monitor failed, and will not try to start the resource
  until it is managed again.

  Starting the unmanaged resource on a different node is strongly discouraged
  and will at least cause the cluster to consider the resource failed, and may
  require the resource's ``target-role`` to be set to ``Stopped`` then
  ``Started`` to be recovered.

* When a node is put into standby: All resources will be moved away from the
  node, and all ``monitor`` operations will be stopped on the node, except
  those specifying ``role`` as ``Stopped`` (which will be newly initiated if
  appropriate).

* When the cluster is put into maintenance mode: All resources will be marked
  as unmanaged. All monitor operations will be stopped, except those
  specifying ``role`` as ``Stopped`` (which will be newly initiated if
  appropriate). As with single unmanaged resources, starting a resource on a
  node other than where the cluster expects it to be will cause problems.

.. _s-operation-defaults:

Setting Global Defaults for Operations
______________________________________

You can change the global default values for operation properties in a given
cluster. These are defined in an ``op_defaults`` section of the CIB's
``configuration`` section, and can be set with ``crm_attribute``. For example,

.. code-block:: none

   # crm_attribute --type op_defaults --name timeout --update 20s

would default each operation's ``timeout`` to 20 seconds. If an operation's
definition also includes a value for ``timeout``, then that value would be
used for that operation instead.
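As with resource defaults, this simply adds an ``nvpair`` to the
``op_defaults`` section of the CIB. A minimal sketch of the result (the ``id``
values are illustrative) would be:

.. code-block:: xml

   <op_defaults>
     <meta_attributes id="op_defaults-meta_attributes">
       <!-- every operation inherits this timeout unless it sets its own -->
       <nvpair id="op_defaults-timeout" name="timeout" value="20s"/>
     </meta_attributes>
   </op_defaults>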
When Implicit Operations Take a Long Time
_________________________________________

The cluster will always perform a number of implicit operations: ``start``,
``stop`` and a non-recurring ``monitor`` operation used at startup to check
whether the resource is already active. If one of these is taking too long,
then you can create an entry for them and specify a longer timeout.

.. topic:: An OCF resource with custom timeouts for its implicit actions

   .. code-block:: xml

      <primitive id="Public-IP" class="ocf" type="IPaddr" provider="heartbeat">
        <operations>
          <op id="public-ip-startup" name="monitor" interval="0" timeout="90s"/>
          <op id="public-ip-start" name="start" interval="0" timeout="180s"/>
          <op id="public-ip-stop" name="stop" interval="0" timeout="15min"/>
        </operations>
        <instance_attributes id="params-public-ip">
          <nvpair id="public-ip-addr" name="ip" value="192.0.2.2"/>
        </instance_attributes>
      </primitive>

Multiple Monitor Operations
___________________________

Provided no two operations (for a single resource) have the same name and
interval, you can have as many ``monitor`` operations as you like. In this
way, you can do a superficial health check every minute and progressively more
intense ones at longer intervals.

To tell the resource agent what kind of check to perform, you need to provide
each monitor with a different value for a common parameter. The OCF standard
creates a special parameter called ``OCF_CHECK_LEVEL`` for this purpose and
dictates that it is "made available to the resource agent without the normal
``OCF_RESKEY`` prefix".

Whatever name you choose, you can specify it by adding an
``instance_attributes`` block to the ``op`` tag. It is up to each resource
agent to look for the parameter and decide how to use it.

.. topic:: An OCF resource with two recurring health checks, performing
           different levels of checks specified via ``OCF_CHECK_LEVEL``.

   .. code-block:: xml

      <primitive id="Public-IP" class="ocf" type="IPaddr" provider="heartbeat">
        <operations>
          <op id="public-ip-health-60" name="monitor" interval="60">
            <instance_attributes id="params-public-ip-depth-60">
              <nvpair id="public-ip-depth-60" name="OCF_CHECK_LEVEL" value="10"/>
            </instance_attributes>
          </op>
          <op id="public-ip-health-300" name="monitor" interval="300">
            <instance_attributes id="params-public-ip-depth-300">
              <nvpair id="public-ip-depth-300" name="OCF_CHECK_LEVEL" value="20"/>
            </instance_attributes>
          </op>
        </operations>
        <instance_attributes id="params-public-ip">
          <nvpair id="public-ip-addr" name="ip" value="192.0.2.2"/>
        </instance_attributes>
      </primitive>

Disabling a Monitor Operation
_____________________________

The easiest way to stop a recurring monitor is to just delete it. However,
there can be times when you only want to disable it temporarily. In such
cases, simply add ``enabled=false`` to the operation's definition.

.. topic:: Example of an OCF resource with a disabled health check

   .. code-block:: xml

      <primitive id="Public-IP" class="ocf" type="IPaddr" provider="heartbeat">
        <operations>
          <op id="public-ip-check" name="monitor" interval="60s" enabled="false"/>
        </operations>
        <instance_attributes id="params-public-ip">
          <nvpair id="public-ip-addr" name="ip" value="192.0.2.2"/>
        </instance_attributes>
      </primitive>

This can be achieved from the command line by executing:

.. code-block:: none

   # cibadmin --modify --xml-text '<op id="public-ip-check" enabled="false"/>'

Once you've done whatever you needed to do, you can then re-enable it with

.. code-block:: none

   # cibadmin --modify --xml-text '<op id="public-ip-check" enabled="true"/>'
.. [#] See https://github.com/ClusterLabs/OCF-spec/tree/master/ra. The
       Pacemaker implementation has been somewhat extended from the OCF specs.

.. [#] The resource-agents source code includes the **ocf-tester** script,
       which can be useful in this regard.

.. [#] See http://refspecs.linux-foundation.org/LSB_3.0.0/LSB-Core-generic/LSB-Core-generic/iniscrptact.html
       for the LSB Spec as it relates to init scripts.

.. [#] For example, http://0pointer.de/blog/projects/systemd-for-admins-3.html

.. [#] The project has two independent forks, hosted at
       https://www.nagios-plugins.org/ and https://www.monitoring-plugins.org/.
       Output from both projects' plugins is similar, so plugins from either
       project can be used with Pacemaker.

.. [#] Currently, anyway. Automatic monitoring operations may be added in a
       future version of Pacemaker.