diff --git a/doc/sphinx/Pacemaker_Explained/advanced-resources.rst b/doc/sphinx/Pacemaker_Explained/advanced-resources.rst index 9f348e3467..c2d952d098 100644 --- a/doc/sphinx/Pacemaker_Explained/advanced-resources.rst +++ b/doc/sphinx/Pacemaker_Explained/advanced-resources.rst @@ -1,1480 +1,1642 @@ Advanced Resource Types ----------------------- -.. Convert_to_RST: - - [[group-resources]] - == Groups - A Syntactic Shortcut == - indexterm:[Group Resources] - indexterm:[Resource,Groups] - - - One of the most common elements of a cluster is a set of resources - that need to be located together, start sequentially, and stop in the - reverse order. To simplify this configuration, we support the concept - of groups. - - .A group of two primitive resources - ====== - [source,XML] - ------- - - - - - - - - - ------- - ====== - - - Although the example above contains only two resources, there is no - limit to the number of resources a group can contain. The example is - also sufficient to explain the fundamental properties of a group: - - * Resources are started in the order they appear in (+Public-IP+ - first, then +Email+) - * Resources are stopped in the reverse order to which they appear in - (+Email+ first, then +Public-IP+) - - If a resource in the group can't run anywhere, then nothing after that - is allowed to run, too. - - * If +Public-IP+ can't run anywhere, neither can +Email+; - * but if +Email+ can't run anywhere, this does not affect +Public-IP+ - in any way - - The group above is logically equivalent to writing: - - .How the cluster sees a group resource - ====== - [source,XML] - ------- - - - - - - - - - - - - - - - ------- - ====== +.. index: + single: group resource + single: resource; group + +.. _group-resources: + +Groups - A Syntactic Shortcut +############################# + +One of the most common elements of a cluster is a set of resources +that need to be located together, start sequentially, and stop in the +reverse order. To simplify this configuration, we support the concept +of groups. - Obviously as the group grows bigger, the reduced configuration effort - can become significant. +.. topic:: A group of two primitive resources + + .. code-block:: xml + + + + + + + + + + +Although the example above contains only two resources, there is no +limit to the number of resources a group can contain. The example is +also sufficient to explain the fundamental properties of a group: + +* Resources are started in the order they appear in (**Public-IP** first, + then **Email**) +* Resources are stopped in the reverse order to which they appear in + (**Email** first, then **Public-IP**) + +If a resource in the group can't run anywhere, then nothing after that +is allowed to run, too. + +* If **Public-IP** can't run anywhere, neither can **Email**; +* but if **Email** can't run anywhere, this does not affect **Public-IP** + in any way + +The group above is logically equivalent to writing: + +.. topic:: How the cluster sees a group resource + + .. code-block:: xml + + + + + + + + + + + + + + + + +Obviously as the group grows bigger, the reduced configuration effort +can become significant. + +Another (typical) example of a group is a DRBD volume, the filesystem +mount, an IP address, and an application that uses them. + +.. index:: + pair: XML element; group + +Group Properties +________________ + +.. 
table:: **Properties of a Group Resource** + + +-------+--------------------------------------+ + | Field | Description | + +=======+======================================+ + | id | .. index:: | + | | single: group; property, id | + | | single: property; id (group) | + | | single: id; group property | + | | | + | | A unique name for the group | + +-------+--------------------------------------+ + +Group Options +_____________ + +Groups inherit the ``priority``, ``target-role``, and ``is-managed`` properties +from primitive resources. See :ref:`resource_options` for information about +those properties. - Another (typical) example of a group is a DRBD volume, the filesystem - mount, an IP address, and an application that uses them. +Group Instance Attributes +_________________________ + +Groups have no instance attributes. However, any that are set for the group +object will be inherited by the group's children. - === Group Properties === - .Properties of a Group Resource - [width="95%",cols="3m,<5",options="header",align="center"] - |========================================================= +Group Contents +______________ + +Groups may only contain a collection of cluster resources (see +:ref:`primitive-resource`). To refer to a child of a group resource, just use +the child's ``id`` instead of the group's. - |Field - |Description +Group Constraints +_________________ - |id - |A unique name for the group - indexterm:[id,Group Resource Property] - indexterm:[Resource,Group Property,id] +Although it is possible to reference a group's children in +constraints, it is usually preferable to reference the group itself. - |========================================================= +.. topic:: Some constraints involving groups + + .. code-block:: xml + + + + + + + +.. index:: + pair: resource-stickiness; group + +Group Stickiness +________________ + +Stickiness, the measure of how much a resource wants to stay where it +is, is additive in groups. Every active resource of the group will +contribute its stickiness value to the group's total. So if the +default ``resource-stickiness`` is 100, and a group has seven members, +five of which are active, then the group as a whole will prefer its +current location with a score of 500. + +.. index:: + single: clone resource + single: resource; clone - === Group Options === +.. _s-resource-clone: + +Clones - Resources That Can Have Multiple Active Instances +########################################################## + +*Clone* resources are resources that can have more than one copy active at the +same time. This allows you, for example, to run a copy of a daemon on every +node. You can clone any primitive or group resource [#]_. - Groups inherit the +priority+, +target-role+, and +is-managed+ properties - from primitive resources. See <> for information about - those properties. +Anonymous versus Unique Clones +______________________________ - === Group Instance Attributes === +A clone resource is configured to be either *anonymous* or *globally unique*. - Groups have no instance attributes. However, any that are set for the group - object will be inherited by the group's children. +Anonymous clones are the simplest. These behave completely identically +everywhere they are running. Because of this, there can be only one instance of +an anonymous clone active per node. + +The instances of globally unique clones are distinct entities. 
All instances +are launched identically, but one instance of the clone is not identical to any +other instance, whether running on the same node or a different node. As an +example, a cloned IP address can use special kernel functionality such that +each instance handles a subset of requests for the same IP address. + +.. index:: + single: Promotable Clone Resources + single: resource; promotable + +.. _s-resource-promotable: + +Promotable clones +_________________ + +If a clone is *promotable*, its instances can perform a special role that +Pacemaker will manage via the ``promote`` and ``demote`` actions of the resource +agent. + +Services that support such a special role have various terms for the special +role and the default role: primary and secondary, master and replica, +controller and worker, etc. Pacemaker uses the terms *master* and *slave* [#]_, +but is agnostic to what the service calls them or what they do. + +All that Pacemaker cares about is that an instance comes up in the default role +when started, and the resource agent supports the ``promote`` and ``demote`` actions +to manage entering and exiting the special role. + +.. index:: + pair: XML element; clone - === Group Contents === +Clone Properties +________________ - Groups may only contain a collection of cluster resources (see - <>). To refer to a child of a group resource, just use - the child's +id+ instead of the group's. +.. table:: **Properties of a Clone Resource** + + +-------+--------------------------------------+ + | Field | Description | + +=======+======================================+ + | id | .. index:: | + | | single: clone; property, id | + | | single: property; id (clone) | + | | single: id; clone property | + | | | + | | A unique name for the clone | + +-------+--------------------------------------+ + +.. index:: + pair: options; clone + +Clone Options +_____________ + +:ref:`Options ` inherited from primitive resources: +``priority, target-role, is-managed`` - === Group Constraints === +.. table:: **Clone-specific configuration options** + + +-------------------+-----------------+-------------------------------------------------------+ + | Field | Default | Description | + +===================+=================+=======================================================+ + | globally-unique | false | .. index:: | + | | | single: clone; option, globally-unique | + | | | single: option; globally-unique (clone) | + | | | single: globally-unique; clone option | + | | | | + | | | If **true**, each clone instance performs a | + | | | distinct function | + +-------------------+-----------------+-------------------------------------------------------+ + | clone-max | number of nodes | .. index:: | + | | in the cluster | single: clone; option, clone-max | + | | | single: option; clone-max (clone) | + | | | single: clone-max; clone option | + | | | | + | | | The maximum number of clone instances that can | + | | | be started across the entire cluster | + +-------------------+-----------------+-------------------------------------------------------+ + | clone-node-max | 1 | .. index:: | + | | | single: clone; option, clone-node-max | + | | | single: option; clone-node-max (clone) | + | | | single: clone-node-max; clone option | + | | | | + | | | If ``globally-unique`` is **true**, the maximum | + | | | number of clone instances that can be started | + | | | on a single node | + +-------------------+-----------------+-------------------------------------------------------+ + | clone-min | 0 | .. 
index:: | + | | | single: clone; option, clone-min | + | | | single: option; clone-min (clone) | + | | | single: clone-min; clone option | + | | | | + | | | Require at least this number of clone instances | + | | | to be runnable before allowing resources | + | | | depending on the clone to be runnable. A value | + | | | of 0 means require all clone instances to be | + | | | runnable. | + +-------------------+-----------------+-------------------------------------------------------+ + | notify | false | .. index:: | + | | | single: clone; option, notify | + | | | single: option; notify (clone) | + | | | single: notify; clone option | + | | | | + | | | Call the resource agent's **notify** action for | + | | | all active instances, before and after starting | + | | | or stopping any clone instance. The resource | + | | | agent must support this action. | + | | | Allowed values: **false**, **true** | + +-------------------+-----------------+-------------------------------------------------------+ + | ordered | false | .. index:: | + | | | single: clone; option, ordered | + | | | single: option; ordered (clone) | + | | | single: ordered; clone option | + | | | | + | | | If **true**, clone instances must be started | + | | | sequentially instead of in parallel. | + | | | Allowed values: **false**, **true** | + +-------------------+-----------------+-------------------------------------------------------+ + | interleave | false | .. index:: | + | | | single: clone; option, interleave | + | | | single: option; interleave (clone) | + | | | single: interleave; clone option | + | | | | + | | | When this clone is ordered relative to another | + | | | clone, if this option is **false** (the default), | + | | | the ordering is relative to *all* instances of | + | | | the other clone, whereas if this option is | + | | | **true**, the ordering is relative only to | + | | | instances on the same node. | + | | | Allowed values: **false**, **true** | + +-------------------+-----------------+-------------------------------------------------------+ + | promotable | false | .. index:: | + | | | single: clone; option, promotable | + | | | single: option; promotable (clone) | + | | | single: promotable; clone option | + | | | | + | | | If **true**, clone instances can perform a | + | | | special role that Pacemaker will manage via the | + | | | resource agent's **promote** and **demote** | + | | | actions. The resource agent must support these | + | | | actions. | + | | | Allowed values: **false**, **true** | + +-------------------+-----------------+-------------------------------------------------------+ + | promoted-max | 1 | .. index:: | + | | | single: clone; option, promoted-max | + | | | single: option; promoted-max (clone) | + | | | single: promoted-max; clone option | + | | | | + | | | If ``promotable`` is **true**, the number of | + | | | instances that can be promoted at one time | + | | | across the entire cluster | + +-------------------+-----------------+-------------------------------------------------------+ + | promoted-node-max | 1 | .. 
index:: | + | | | single: clone; option, promoted-node-max | + | | | single: option; promoted-node-max (clone) | + | | | single: promoted-node-max; clone option | + | | | | + | | | If ``promotable`` is **true** and ``globally-unique`` | + | | | is **false**, the number of clone instances can be | + | | | promoted at one time on a single node | + +-------------------+-----------------+-------------------------------------------------------+ + +For backward compatibility, ``master-max`` and ``master-node-max`` are accepted as +aliases for ``promoted-max`` and ``promoted-node-max``, but are deprecated since +2.0.0, and support for them will be removed in a future version. + +Clone Contents +______________ + +Clones must contain exactly one primitive or group resource. + +.. topic:: A clone that runs a web server on all nodes + + .. code-block:: xml + + + + + + + + + +.. warning:: + + You should never reference the name of a clone's child (the primitive or group + resource being cloned). If you think you need to do this, you probably need to + re-evaluate your design. - Although it is possible to reference a group's children in - constraints, it is usually preferable to reference the group itself. +Clone Instance Attribute +________________________ - .Some constraints involving groups - ====== - [source,XML] - ------- - - - - - - ------- - ====== +Clones have no instance attributes; however, any that are set here will be +inherited by the clone's child. - === Group Stickiness === - indexterm:[resource-stickiness,Groups] +Clone Constraints +_________________ - Stickiness, the measure of how much a resource wants to stay where it - is, is additive in groups. Every active resource of the group will - contribute its stickiness value to the group's total. So if the - default +resource-stickiness+ is 100, and a group has seven members, - five of which are active, then the group as a whole will prefer its - current location with a score of 500. +In most cases, a clone will have a single instance on each active cluster +node. If this is not the case, you can indicate which nodes the +cluster should preferentially assign copies to with resource location +constraints. These constraints are written no differently from those +for primitive resources except that the clone's **id** is used. - [[s-resource-clone]] - == Clones - Resources That Can Have Multiple Active Instances == - indexterm:[Clone Resources] - indexterm:[Resource,Clones] +.. topic:: Some constraints involving clones + + .. code-block:: xml + + + + + + + +Ordering constraints behave slightly differently for clones. In the +example above, ``apache-stats`` will wait until all copies of ``apache-clone`` +that need to be started have done so before being started itself. +Only if *no* copies can be started will ``apache-stats`` be prevented +from being active. Additionally, the clone will wait for +``apache-stats`` to be stopped before stopping itself. + +Colocation of a primitive or group resource with a clone means that +the resource can run on any node with an active instance of the clone. +The cluster will choose an instance based on where the clone is running and +the resource's own location preferences. + +Colocation between clones is also possible. If one clone **A** is colocated +with another clone **B**, the set of allowed locations for **A** is limited to +nodes on which **B** is (or will be) active. Placement is then performed +normally. 
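+
+For example, a colocation between two clones might be written as follows (a
+minimal sketch; the clone names **A-clone** and **B-clone** are hypothetical):
+
+.. topic:: Colocating one clone with another
+
+   .. code-block:: xml
+
+      <!-- "A-clone" and "B-clone" are placeholder clone ids -->
+      <rsc_colocation id="colocate-A-with-B" rsc="A-clone"
+          with-rsc="B-clone" score="INFINITY"/>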
+ +Promotable Clone Constraints +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +For promotable clone resources, the ``first-action`` and/or ``then-action`` fields +for ordering constraints may be set to ``promote`` or ``demote`` to constrain the +master role, and colocation constraints may contain ``rsc-role`` and/or +``with-rsc-role`` fields. + +.. index:: + single: constraint; colocation + +.. table:: **Additional colocation constraint options for promotable clone resources** + + +---------------+---------+-------------------------------------------------------+ + | Field | Default | Description | + +===============+=========+=======================================================+ + | rsc-role | Started | .. index:: | + | | | single: clone; ordering constraint, rsc-role | + | | | single: ordering constraint; rsc-role (clone) | + | | | single: rsc-role; clone ordering constraint | + | | | | + | | | An additional attribute of colocation constraints | + | | | that specifies the role that ``rsc`` must be in. | + | | | Allowed values: **Started**, **Master**, **Slave**. | + +---------------+---------+-------------------------------------------------------+ + | with-rsc-role | Started | .. index:: | + | | | single: clone; ordering constraint, with-rsc-role | + | | | single: ordering constraint; with-rsc-role (clone) | + | | | single: with-rsc-role; clone ordering constraint | + | | | | + | | | An additional attribute of colocation constraints | + | | | that specifies the role that ``with-rsc`` must be in. | + | | | Allowed values: **Started**, **Master**, **Slave**. | + +---------------+---------+-------------------------------------------------------+ + +.. topic:: Constraints involving promotable clone resources + + .. code-block:: xml + + + + + + + + + +In the example above, **myApp** will wait until one of the database +copies has been started and promoted to master before being started +itself on the same node. Only if no copies can be promoted will **myApp** be +prevented from being active. Additionally, the cluster will wait for +**myApp** to be stopped before demoting the database. + +Colocation of a primitive or group resource with a promotable clone +resource means that it can run on any node with an active instance of +the promotable clone resource that has the specified role (**master** or +**slave**). In the example above, the cluster will choose a location based on +where database is running as a **master**, and if there are multiple +**master** instances it will also factor in **myApp**'s own location +preferences when deciding which location to choose. + +Colocation with regular clones and other promotable clone resources is also +possible. In such cases, the set of allowed locations for the **rsc** +clone is (after role filtering) limited to nodes on which the +``with-rsc`` promotable clone resource is (or will be) in the specified role. +Placement is then performed as normal. + +Using Promotable Clone Resources in Colocation Sets +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +.. index:: + single: constraint; colocation + single: constraint; resource set - 'Clone' resources are resources that can have more than one copy active at the - same time. This allows you, for example, to run a copy of a daemon on every - node. You can clone any primitive or group resource. - footnote:[ - Of course, the service must support running multiple instances. - ] +.. 
table:: **Additional colocation set options relevant to promotable clone resources** + + +-------+---------+-----------------------------------------------------+ + | Field | Default | Description | + +=======+=========+=====================================================+ + | role | Started | .. index:: | + | | | single: clone; ordering constraint; role | + | | | single: ordering constraint; role (clone) | + | | | single: role; clone ordering constraint | + | | | | + | | | The role that *all members* of the set must be in. | + | | | Allowed values: **Started**, **Master**, **Slave**. | + +-------+---------+-----------------------------------------------------+ + +In the following example **B**'s master must be located on the same node as **A**'s master. +Additionally resources **C** and **D** must be located on the same node as **A**'s +and **B**'s masters. - === Anonymous versus Unique Clones === +.. topic:: Colocate C and D with A's and B's master instances + + .. code-block:: xml + + + + + + + + + + + + + - A clone resource is configured to be either 'anonymous' or 'globally unique'. +Using Promotable Clone Resources in Ordered Sets +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +.. index:: + single: constraint; colocation + single: constraint; resource set + +.. table:: **Additional ordered set options relevant to promotable clone resources** + + +--------+------------------+-----------------------------------------------------+ + | Field | Default | Description | + +=======++==================+=====================================================+ + | action | value of | .. index:: | + | | ``first-action`` | single: clone; ordering constraint; action | + | | | single: ordering constraintl action (clone) | + | | | single: action; clone ordering constraint | + | | | | + | | | An additional attribute of ordering constraint | + | | | sets that specifies the action that applies to | + | | | *all members* of the set. | + | | | Allowed values: **start**, **stop**, **promote**, | + | | | **demote**. | + +--------+------------------+-----------------------------------------------------+ + +.. topic:: Start C and D after first promoting A and B + + .. code-block:: xml + + + + + + + + + + + + + - Anonymous clones are the simplest. These behave completely identically - everywhere they are running. Because of this, there can be only one instance of - an anonymous clone active per node. - - The instances of globally unique clones are distinct entities. All instances - are launched identically, but one instance of the clone is not identical to any - other instance, whether running on the same node or a different node. As an - example, a cloned IP address can use special kernel functionality such that - each instance handles a subset of requests for the same IP address. - - [[s-resource-promotable]] - === Promotable clones === - - indexterm:[Promotable Clone Resources] - indexterm:[Resource,Promotable] - - If a clone is 'promotable', its instances can perform a special role that - Pacemaker will manage via the +promote+ and +demote+ actions of the resource - agent. - - Services that support such a special role have various terms for the special - role and the default role: primary and secondary, master and replica, - controller and worker, etc. Pacemaker uses the terms 'master' and 'slave', - footnote:[ - These are historical terms that will eventually be replaced, but the extensive - use of them and the need for backward compatibility makes it a long process. 
- You may see examples using a +master+ tag instead of a +clone+ tag with the - +promotable+ meta-attribute set to +true+; the +master+ tag is supported, but - deprecated, and will be removed in a future version. You may also see such - services referred to as 'multi-state' or 'stateful'; these means the same thing - as 'promotable'. - ] - but is agnostic to what the service calls them or what they do. - - All that Pacemaker cares about is that an instance comes up in the default role - when started, and the resource agent supports the +promote+ and +demote+ actions - to manage entering and exiting the special role. - - === Clone Properties === - - .Properties of a Clone Resource - [width="95%",cols="3m,<5",options="header",align="center"] - |========================================================= - - |Field - |Description - - |id - |A unique name for the clone - indexterm:[id,Clone Property] - indexterm:[Clone,Property,id] - - |========================================================= - - === Clone Options === - - <> inherited from primitive resources: - +priority, target-role, is-managed+ - - .Clone-specific configuration options - [width="95%",cols="1m,1,<3",options="header",align="center"] - |========================================================= - - |Field - |Default - |Description - - |globally-unique - |false - |If +true+, each clone instance performs a distinct function - indexterm:[globally-unique,Clone Option] - indexterm:[Clone,Option,globally-unique] - - |clone-max - |number of nodes in cluster - |The maximum number of clone instances that can be started across the entire - cluster - indexterm:[clone-max,Clone Option] - indexterm:[Clone,Option,clone-max] - - |clone-node-max - |1 - |If +globally-unique+ is +true+, the maximum number of clone instances that can - be started on a single node - indexterm:[clone-node-max,Clone Option] - indexterm:[Clone,Option,clone-node-max] - - |clone-min - |0 - |Require at least this number of clone instances to be runnable before allowing - resources depending on the clone to be runnable. A value of 0 means require - all clone instances to be runnable. - indexterm:[clone-min,Clone Option] - indexterm:[Clone,Option,clone-min] - - |notify - |false - |Call the resource agent's +notify+ action for all active instances, before and - after starting or stopping any clone instance. The resource agent must support - this action. Allowed values: +false+, +true+ - indexterm:[notify,Clone Option] - indexterm:[Clone,Option,notify] - - |ordered - |false - |If +true+, clone instances must be started sequentially instead of in parallel - Allowed values: +false+, +true+ - indexterm:[ordered,Clone Option] - indexterm:[Clone,Option,ordered] - - |interleave - |false - |When this clone is ordered relative to another clone, if this option is - +false+ (the default), the ordering is relative to 'all' instances of the - other clone, whereas if this option is +true+, the ordering is relative only - to instances on the same node. - Allowed values: +false+, +true+ - indexterm:[interleave,Clone Option] - indexterm:[Clone,Option,interleave] - - |promotable - |false - |If +true+, clone instances can perform a special role that Pacemaker will - manage via the resource agent's +promote+ and +demote+ actions. The resource - agent must support these actions. 
- Allowed values: +false+, +true+ - indexterm:[promotable,Clone Option] - indexterm:[Clone,Option,promotable] - - |promoted-max - |1 - |If +promotable+ is +true+, the number of instances that can be promoted at one - time across the entire cluster - indexterm:[promoted-max,Clone Option] - indexterm:[Clone,Option,promoted-max] - - |promoted-node-max - |1 - |If +promotable+ is +true+ and +globally-unique+ is +false+, the number of - clone instances can be promoted at one time on a single node - indexterm:[promoted-node-max,Clone Option] - indexterm:[Clone,Option,promoted-node-max] - - |========================================================= - - For backward compatibility, +master-max+ and +master-node-max+ are accepted as - aliases for +promoted-max+ and +promoted-node-max+, but are deprecated since - 2.0.0, and support for them will be removed in a future version. - - === Clone Contents === - - Clones must contain exactly one primitive or group resource. - - .A clone that runs a web server on all nodes - ==== - [source,XML] - ---- - - - - - - - - ---- - ==== - - [WARNING] - You should never reference the name of a clone's child (the primitive or group - resource being cloned). If you think you need to do this, you probably need to - re-evaluate your design. +In the above example, **B** cannot be promoted to a master role until **A** has +been promoted. Additionally, resources **C** and **D** must wait until **A** and **B** +have been promoted before they can start. + +.. index:: + pair: resource-stickiness; clone - === Clone Instance Attributes === - - Clones have no instance attributes; however, any that are set here will be - inherited by the clone's child. - - === Clone Constraints === - - In most cases, a clone will have a single instance on each active cluster - node. If this is not the case, you can indicate which nodes the - cluster should preferentially assign copies to with resource location - constraints. These constraints are written no differently from those - for primitive resources except that the clone's +id+ is used. - - .Some constraints involving clones - ====== - [source,XML] - ------- - - - - - - ------- - ====== - - Ordering constraints behave slightly differently for clones. In the - example above, +apache-stats+ will wait until all copies of +apache-clone+ - that need to be started have done so before being started itself. - Only if _no_ copies can be started will +apache-stats+ be prevented - from being active. Additionally, the clone will wait for - +apache-stats+ to be stopped before stopping itself. - - Colocation of a primitive or group resource with a clone means that - the resource can run on any node with an active instance of the clone. - The cluster will choose an instance based on where the clone is running and - the resource's own location preferences. - - Colocation between clones is also possible. If one clone +A+ is colocated - with another clone +B+, the set of allowed locations for +A+ is limited to - nodes on which +B+ is (or will be) active. Placement is then performed - normally. - - ==== Promotable Clone Constraints ==== - - For promotable clone resources, the +first-action+ and/or +then-action+ fields - for ordering constraints may be set to +promote+ or +demote+ to constrain the - master role, and colocation constraints may contain +rsc-role+ and/or - +with-rsc-role+ fields. 
- - .Additional colocation constraint options for promotable clone resources - [width="95%",cols="1m,1,<3",options="header",align="center"] - |========================================================= - - |Field - |Default - |Description - - |rsc-role - |Started - |An additional attribute of colocation constraints that specifies the - role that +rsc+ must be in. Allowed values: +Started+, +Master+, - +Slave+. - indexterm:[rsc-role,Ordering Constraints] - indexterm:[Constraints,Ordering,rsc-role] - - |with-rsc-role - |Started - |An additional attribute of colocation constraints that specifies the - role that +with-rsc+ must be in. Allowed values: +Started+, - +Master+, +Slave+. - indexterm:[with-rsc-role,Ordering Constraints] - indexterm:[Constraints,Ordering,with-rsc-role] - - |========================================================= - - .Constraints involving promotable clone resources - ====== - [source,XML] - ------- - - - - - - - - ------- - ====== - - In the example above, +myApp+ will wait until one of the database - copies has been started and promoted to master before being started - itself on the same node. Only if no copies can be promoted will +myApp+ be - prevented from being active. Additionally, the cluster will wait for - +myApp+ to be stopped before demoting the database. - - Colocation of a primitive or group resource with a promotable clone - resource means that it can run on any node with an active instance of - the promotable clone resource that has the specified role (+master+ or - +slave+). In the example above, the cluster will choose a location based on - where database is running as a +master+, and if there are multiple - +master+ instances it will also factor in +myApp+'s own location - preferences when deciding which location to choose. - - Colocation with regular clones and other promotable clone resources is also - possible. In such cases, the set of allowed locations for the +rsc+ - clone is (after role filtering) limited to nodes on which the - +with-rsc+ promotable clone resource is (or will be) in the specified role. - Placement is then performed as normal. - - ==== Using Promotable Clone Resources in Colocation Sets ==== - - .Additional colocation set options relevant to promotable clone resources - [width="95%",cols="1m,1,<6",options="header",align="center"] - |========================================================= - - |Field - |Default - |Description - - |role - |Started - |The role that 'all members' of the set must be in. Allowed values: +Started+, +Master+, - +Slave+. - indexterm:[role,Ordering Constraints] - indexterm:[Constraints,Ordering,role] - - |========================================================= - - In the following example +B+'s master must be located on the same node as +A+'s master. - Additionally resources +C+ and +D+ must be located on the same node as +A+'s - and +B+'s masters. - - .Colocate C and D with A's and B's master instances - ====== - [source,XML] - ------- - - - - - - - - - - - - - ------- - ====== - - ==== Using Promotable Clone Resources in Ordered Sets ==== - - .Additional ordered set options relevant to promotable clone resources - [width="95%",cols="1m,1,<3",options="header",align="center"] - |========================================================= - - |Field - |Default - |Description - - |action - |value of +first-action+ - |An additional attribute of ordering constraint sets that specifies the - action that applies to 'all members' of the set. Allowed - values: +start+, +stop+, +promote+, +demote+. 
- indexterm:[action,Ordering Constraints] - indexterm:[Constraints,Ordering,action] - - |========================================================= - - .Start C and D after first promoting A and B - ====== - [source,XML] - ------- - - - - - - - - - - - - - ------- - ====== - - In the above example, +B+ cannot be promoted to a master role until +A+ has - been promoted. Additionally, resources +C+ and +D+ must wait until +A+ and +B+ - have been promoted before they can start. - - - [[s-clone-stickiness]] - === Clone Stickiness === - - indexterm:[resource-stickiness,Clones] - - To achieve a stable allocation pattern, clones are slightly sticky by - default. If no value for +resource-stickiness+ is provided, the clone - will use a value of 1. Being a small value, it causes minimal - disturbance to the score calculations of other resources but is enough - to prevent Pacemaker from needlessly moving copies around the cluster. - - [NOTE] - ==== +.. _s-clone-stickiness: + +Clone Stickiness +________________ + +To achieve a stable allocation pattern, clones are slightly sticky by +default. If no value for ``resource-stickiness`` is provided, the clone +will use a value of 1. Being a small value, it causes minimal +disturbance to the score calculations of other resources but is enough +to prevent Pacemaker from needlessly moving copies around the cluster. + +.. note:: + For globally unique clones, this may result in multiple instances of the clone staying on a single node, even after another eligible node becomes active (for example, after being put into standby mode then made active again). - If you do not want this behavior, specify a +resource-stickiness+ of 0 + If you do not want this behavior, specify a ``resource-stickiness`` of 0 for the clone temporarily and let the cluster adjust, then set it back to 1 if you want the default behavior to apply again. - ==== - [IMPORTANT] - ==== - If +resource-stickiness+ is set in the +rsc_defaults+ section, it will - apply to clone instances as well. This means an explicit +resource-stickiness+ - of 0 in +rsc_defaults+ works differently from the implicit default used when - +resource-stickiness+ is not specified. - ==== +.. important:: + + If ``resource-stickiness`` is set in the ``rsc_defaults`` section, it will + apply to clone instances as well. This means an explicit ``resource-stickiness`` + of 0 in ``rsc_defaults`` works differently from the implicit default used when + ``resource-stickiness`` is not specified. + +Clone Resource Agent Requirements +_________________________________ + +Any resource can be used as an anonymous clone, as it requires no +additional support from the resource agent. Whether it makes sense to +do so depends on your resource and its resource agent. + +Resource Agent Requirements for Globally Unique Clones +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Globally unique clones require additional support in the resource agent. In +particular, it must only respond with ``${OCF_SUCCESS}`` if the node has that +exact instance active. All other probes for instances of the clone should +result in ``${OCF_NOT_RUNNING}`` (or one of the other OCF error codes if +they are failed). + +Individual instances of a clone are identified by appending a colon and a +numerical offset, e.g. **apache:2**. + +Resource agents can find out how many copies there are by examining +the ``OCF_RESKEY_CRM_meta_clone_max`` environment variable and which +instance it is by examining ``OCF_RESKEY_CRM_meta_clone``. 
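+
+For example, an instance started as **apache:2**, with three instances
+configured, might see values such as the following (illustrative values only):
+
+.. code-block:: none
+
+   OCF_RESKEY_CRM_meta_clone_max="3"
+   OCF_RESKEY_CRM_meta_clone="2"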
+ +The resource agent must not make any assumptions (based on +``OCF_RESKEY_CRM_meta_clone``) about which numerical instances are active. In +particular, the list of active copies will not always be an unbroken +sequence, nor always start at 0. + +Resource Agent Requirements for Promotable Clones +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Promotable clone resources require two extra actions, ``demote`` and ``promote``, +which are responsible for changing the state of the resource. Like **start** and +**stop**, they should return ``${OCF_SUCCESS}`` if they completed successfully or +a relevant error code if they did not. + +The states can mean whatever you wish, but when the resource is +started, it must come up in the mode called **slave**. From there the +cluster will decide which instances to promote to **master**. + +In addition to the clone requirements for monitor actions, agents must +also *accurately* report which state they are in. The cluster relies +on the agent to report its status (including role) accurately and does +not indicate to the agent what role it currently believes it to be in. - === Clone Resource Agent Requirements === +.. table:: **Role implications of OCF return codes** + + +---------------------+------------------------------------------------+ + | Monitor Return Code | Description | + +=====================+================================================+ + | OCF_NOT_RUNNING | .. index:: | + | | single: OCF_NOT_RUNNING | + | | single: OCF return code; OCF_NOT_RUNNING | + | | | + | | Stopped | + +---------------------+------------------------------------------------+ + | OCF_SUCCESS | .. index:: | + | | single: OCF_SUCCESS | + | | single: OCF return code; OCF_SUCCESS | + | | | + | | Running (Slave) | + +---------------------+------------------------------------------------+ + | OCF_RUNNING_MASTER | .. index:: | + | | single: OCF_RUNNING_MASTER | + | | single: OCF return code; OCF_RUNNING_MASTER | + | | | + | | Running (Master) | + +---------------------+------------------------------------------------+ + | OCF_FAILED_MASTER | .. index:: | + | | single: OCF_FAILED_MASTER | + | | single: OCF return code; OCF_FAILED_MASTER | + | | | + | | Failed (Master) | + +---------------------+------------------------------------------------+ + | Other | .. index:: | + | | single: return code | + | | | + | | Failed (Slave) | + +---------------------+------------------------------------------------+ + +Clone Notifications +~~~~~~~~~~~~~~~~~~~ + +If the clone has the ``notify`` meta-attribute set to **true**, and the resource +agent supports the ``notify`` action, Pacemaker will call the action when +appropriate, passing a number of extra variables which, when combined with +additional context, can be used to calculate the current state of the cluster +and what is about to happen to it. + +.. index:: + single: clone; environment variables + single: notify; environment variables - Any resource can be used as an anonymous clone, as it requires no - additional support from the resource agent. Whether it makes sense to - do so depends on your resource and its resource agent. +.. 
table:: **Environment variables supplied with Clone notify actions** + + +----------------------------------------------+-------------------------------------------------------------------------------+ + | Variable | Description | + +==============================================+===============================================================================+ + | OCF_RESKEY_CRM_meta_notify_type | .. index:: | + | | single: environment variable; OCF_RESKEY_CRM_meta_notify_type | + | | single: OCF_RESKEY_CRM_meta_notify_type | + | | | + | | Allowed values: **pre**, **post** | + +----------------------------------------------+-------------------------------------------------------------------------------+ + | OCF_RESKEY_CRM_meta_notify_operation | .. index:: | + | | single: environment variable; OCF_RESKEY_CRM_meta_notify_operation | + | | single: OCF_RESKEY_CRM_meta_notify_operation | + | | | + | | Allowed values: **start**, **stop** | + +----------------------------------------------+-------------------------------------------------------------------------------+ + | OCF_RESKEY_CRM_meta_notify_start_resource | .. index:: | + | | single: environment variable; OCF_RESKEY_CRM_meta_notify_start_resource | + | | single: OCF_RESKEY_CRM_meta_notify_start_resource | + | | | + | | Resources to be started | + +----------------------------------------------+-------------------------------------------------------------------------------+ + | OCF_RESKEY_CRM_meta_notify_stop_resource | .. index:: | + | | single: environment variable; OCF_RESKEY_CRM_meta_notify_stop_resource | + | | single: OCF_RESKEY_CRM_meta_notify_stop_resource | + | | | + | | Resources to be stopped | + +----------------------------------------------+-------------------------------------------------------------------------------+ + | OCF_RESKEY_CRM_meta_notify_active_resource | .. index:: | + | | single: environment variable; OCF_RESKEY_CRM_meta_notify_active_resource | + | | single: OCF_RESKEY_CRM_meta_notify_active_resource | + | | | + | | Resources that are running | + +----------------------------------------------+-------------------------------------------------------------------------------+ + | OCF_RESKEY_CRM_meta_notify_inactive_resource | .. index:: | + | | single: environment variable; OCF_RESKEY_CRM_meta_notify_inactive_resource | + | | single: OCF_RESKEY_CRM_meta_notify_inactive_resource | + | | | + | | Resources that are not running | + +----------------------------------------------+-------------------------------------------------------------------------------+ + | OCF_RESKEY_CRM_meta_notify_start_uname | .. index:: | + | | single: environment variable; OCF_RESKEY_CRM_meta_notify_start_uname | + | | single: OCF_RESKEY_CRM_meta_notify_start_uname | + | | | + | | Nodes on which resources will be started | + +----------------------------------------------+-------------------------------------------------------------------------------+ + | OCF_RESKEY_CRM_meta_notify_stop_uname | .. index:: | + | | single: environment variable; OCF_RESKEY_CRM_meta_notify_stop_uname | + | | single: OCF_RESKEY_CRM_meta_notify_stop_uname | + | | | + | | Nodes on which resources will be stopped | + +----------------------------------------------+-------------------------------------------------------------------------------+ + | OCF_RESKEY_CRM_meta_notify_active_uname | .. 
index:: | + | | single: environment variable; OCF_RESKEY_CRM_meta_notify_active_uname | + | | single: OCF_RESKEY_CRM_meta_notify_active_uname | + | | | + | | Nodes on which resources are running | + +----------------------------------------------+-------------------------------------------------------------------------------+ + +The variables come in pairs, such as +``OCF_RESKEY_CRM_meta_notify_start_resource`` and +``OCF_RESKEY_CRM_meta_notify_start_uname``, and should be treated as an +array of whitespace-separated elements. + +``OCF_RESKEY_CRM_meta_notify_inactive_resource`` is an exception, as the +matching **uname** variable does not exist since inactive resources +are not running on any node. + +Thus, in order to indicate that **clone:0** will be started on **sles-1**, +**clone:2** will be started on **sles-3**, and **clone:3** will be started +on **sles-2**, the cluster would set: - ==== Resource Agent Requirements for Globally Unique Clones ==== +.. topic:: Notification variables + + .. code-block:: none + + OCF_RESKEY_CRM_meta_notify_start_resource="clone:0 clone:2 clone:3" + OCF_RESKEY_CRM_meta_notify_start_uname="sles-1 sles-3 sles-2" + +.. note:: + + Pacemaker will log but otherwise ignore failures of notify actions. - Globally unique clones require additional support in the resource agent. In - particular, it must only respond with +$\{OCF_SUCCESS}+ if the node has that - exact instance active. All other probes for instances of the clone should - result in +$\{OCF_NOT_RUNNING}+ (or one of the other OCF error codes if - they are failed). +Interpretation of Notification Variables +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - Individual instances of a clone are identified by appending a colon and a - numerical offset, e.g. +apache:2+. +**Pre-notification (stop):** - Resource agents can find out how many copies there are by examining - the +OCF_RESKEY_CRM_meta_clone_max+ environment variable and which - instance it is by examining +OCF_RESKEY_CRM_meta_clone+. +* Active resources: ``$OCF_RESKEY_CRM_meta_notify_active_resource`` +* Inactive resources: ``$OCF_RESKEY_CRM_meta_notify_inactive_resource`` +* Resources to be started: ``$OCF_RESKEY_CRM_meta_notify_start_resource`` +* Resources to be stopped: ``$OCF_RESKEY_CRM_meta_notify_stop_resource`` - The resource agent must not make any assumptions (based on - +OCF_RESKEY_CRM_meta_clone+) about which numerical instances are active. In - particular, the list of active copies will not always be an unbroken - sequence, nor always start at 0. 
+**Post-notification (stop) / Pre-notification (start):** - ==== Resource Agent Requirements for Promotable Clones ==== +* Active resources + + * ``$OCF_RESKEY_CRM_meta_notify_active_resource`` + * minus ``$OCF_RESKEY_CRM_meta_notify_stop_resource`` + +* Inactive resources + + * ``$OCF_RESKEY_CRM_meta_notify_inactive_resource`` + * plus ``$OCF_RESKEY_CRM_meta_notify_stop_resource`` + +* Resources that were started: ``$OCF_RESKEY_CRM_meta_notify_start_resource`` +* Resources that were stopped: ``$OCF_RESKEY_CRM_meta_notify_stop_resource`` + +**Post-notification (start):** + +* Active resources: + + * ``$OCF_RESKEY_CRM_meta_notify_active_resource`` + * minus ``$OCF_RESKEY_CRM_meta_notify_stop_resource`` + * plus ``$OCF_RESKEY_CRM_meta_notify_start_resource`` + +* Inactive resources: + + * ``$OCF_RESKEY_CRM_meta_notify_inactive_resource`` + * plus ``$OCF_RESKEY_CRM_meta_notify_stop_resource`` + * minus ``$OCF_RESKEY_CRM_meta_notify_start_resource`` + +* Resources that were started: ``$OCF_RESKEY_CRM_meta_notify_start_resource`` +* Resources that were stopped: ``$OCF_RESKEY_CRM_meta_notify_stop_resource`` - Promotable clone resources require two extra actions, +demote+ and +promote+, - which are responsible for changing the state of the resource. Like +start+ and - +stop+, they should return +$\{OCF_SUCCESS}+ if they completed successfully or - a relevant error code if they did not. +Extra Notifications for Promotable Clones +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +.. index:: + single: clone; environment variables + single: promotable; environment variables - The states can mean whatever you wish, but when the resource is - started, it must come up in the mode called +slave+. From there the - cluster will decide which instances to promote to +master+. +.. table:: **Extra environment variables supplied for promotable clones** + + +---------------------------------------------+------------------------------------------------------------------------------+ + | Variable | Description | + +=============================================+==============================================================================+ + | OCF_RESKEY_CRM_meta_notify_master_resource | .. index:: | + | | single: environment variable; OCF_RESKEY_CRM_meta_notify_master_resource | + | | single: OCF_RESKEY_CRM_meta_notify_master_resource | + | | | + | | Resources that are running in **Master** mode | + +---------------------------------------------+------------------------------------------------------------------------------+ + | OCF_RESKEY_CRM_meta_notify_slave_resource | .. index:: | + | | single: environment variable; OCF_RESKEY_CRM_meta_notify_slave_resource | + | | single: OCF_RESKEY_CRM_meta_notify_slave_resource | + | | | + | | Resources that are running in **Slave** mode | + +---------------------------------------------+------------------------------------------------------------------------------+ + | OCF_RESKEY_CRM_meta_notify_promote_resource | .. index:: | + | | single: environment variable; OCF_RESKEY_CRM_meta_notify_promote_resource | + | | single: OCF_RESKEY_CRM_meta_notify_promote_resource | + | | | + | | Resources to be promoted | + +---------------------------------------------+------------------------------------------------------------------------------+ + | OCF_RESKEY_CRM_meta_notify_demote_resource | .. 
index:: | + | | single: environment variable; OCF_RESKEY_CRM_meta_notify_demote_resource | + | | single: OCF_RESKEY_CRM_meta_notify_demote_resource | + | | | + | | Resources to be demoted | + +---------------------------------------------+------------------------------------------------------------------------------+ + | OCF_RESKEY_CRM_meta_notify_promote_uname | .. index:: | + | | single: environment variable; OCF_RESKEY_CRM_meta_notify_promote_uname | + | | single: OCF_RESKEY_CRM_meta_notify_promote_uname | + | | | + | | Nodes on which resources will be promoted | + +---------------------------------------------+------------------------------------------------------------------------------+ + | OCF_RESKEY_CRM_meta_notify_demote_uname | .. index:: | + | | single: environment variable; OCF_RESKEY_CRM_meta_notify_demote_uname | + | | single: OCF_RESKEY_CRM_meta_notify_demote_uname | + | | | + | | Nodes on which resources will be demoted | + +---------------------------------------------+------------------------------------------------------------------------------+ + | OCF_RESKEY_CRM_meta_notify_master_uname | .. index:: | + | | single: environment variable; OCF_RESKEY_CRM_meta_notify_master_uname | + | | single: OCF_RESKEY_CRM_meta_notify_master_uname | + | | | + | | Nodes on which resources are running in **Master** mode | + +---------------------------------------------+------------------------------------------------------------------------------+ + | OCF_RESKEY_CRM_meta_notify_slave_uname | .. index:: | + | | single: environment variable; OCF_RESKEY_CRM_meta_notify_slave_uname | + | | single: OCF_RESKEY_CRM_meta_notify_slave_uname | + | | | + | | Nodes on which resources are running in **Slave** mode | + +---------------------------------------------+------------------------------------------------------------------------------+ + +Interpretation of Promotable Notification Variables +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +**Pre-notification (demote):** + +* **Active** resources: ``$OCF_RESKEY_CRM_meta_notify_active_resource`` +* **Master** resources: ``$OCF_RESKEY_CRM_meta_notify_master_resource`` +* **Slave** resources: ``$OCF_RESKEY_CRM_meta_notify_slave_resource`` +* Inactive resources: ``$OCF_RESKEY_CRM_meta_notify_inactive_resource`` +* Resources to be started: ``$OCF_RESKEY_CRM_meta_notify_start_resource`` +* Resources to be promoted: ``$OCF_RESKEY_CRM_meta_notify_promote_resource`` +* Resources to be demoted: ``$OCF_RESKEY_CRM_meta_notify_demote_resource`` +* Resources to be stopped: ``$OCF_RESKEY_CRM_meta_notify_stop_resource`` + +**Post-notification (demote) / Pre-notification (stop):** + +* **Active** resources: ``$OCF_RESKEY_CRM_meta_notify_active_resource`` +* **Master** resources: + + * ``$OCF_RESKEY_CRM_meta_notify_master_resource`` + * minus ``$OCF_RESKEY_CRM_meta_notify_demote_resource`` + +* **Slave** resources: ``$OCF_RESKEY_CRM_meta_notify_slave_resource`` +* Inactive resources: ``$OCF_RESKEY_CRM_meta_notify_inactive_resource`` +* Resources to be started: ``$OCF_RESKEY_CRM_meta_notify_start_resource`` +* Resources to be promoted: ``$OCF_RESKEY_CRM_meta_notify_promote_resource`` +* Resources to be demoted: ``$OCF_RESKEY_CRM_meta_notify_demote_resource`` +* Resources to be stopped: ``$OCF_RESKEY_CRM_meta_notify_stop_resource`` +* Resources that were demoted: ``$OCF_RESKEY_CRM_meta_notify_demote_resource`` - In addition to the clone requirements for monitor actions, agents must - also _accurately_ report which state they are in. 
The cluster relies - on the agent to report its status (including role) accurately and does - not indicate to the agent what role it currently believes it to be in. +**Post-notification (stop) / Pre-notification (start)** - .Role implications of OCF return codes - [width="95%",cols="1,<1",options="header",align="center"] - |========================================================= +* **Active** resources: + + * ``$OCF_RESKEY_CRM_meta_notify_active_resource`` + * minus ``$OCF_RESKEY_CRM_meta_notify_stop_resource`` + +* **Master** resources: + + * ``$OCF_RESKEY_CRM_meta_notify_master_resource`` + * minus ``$OCF_RESKEY_CRM_meta_notify_demote_resource`` + +* **Slave** resources: + + * ``$OCF_RESKEY_CRM_meta_notify_slave_resource`` + * minus ``$OCF_RESKEY_CRM_meta_notify_stop_resource`` + +* Inactive resources: + + * ``$OCF_RESKEY_CRM_meta_notify_inactive_resource`` + * plus ``$OCF_RESKEY_CRM_meta_notify_stop_resource`` + +* Resources to be started: ``$OCF_RESKEY_CRM_meta_notify_start_resource`` +* Resources to be promoted: ``$OCF_RESKEY_CRM_meta_notify_promote_resource`` +* Resources to be demoted: ``$OCF_RESKEY_CRM_meta_notify_demote_resource`` +* Resources to be stopped: ``$OCF_RESKEY_CRM_meta_notify_stop_resource`` +* Resources that were demoted: ``$OCF_RESKEY_CRM_meta_notify_demote_resource`` +* Resources that were stopped: ``$OCF_RESKEY_CRM_meta_notify_stop_resource`` + +**Post-notification (start) / Pre-notification (promote)** + +* **Active** resources: + + * ``$OCF_RESKEY_CRM_meta_notify_active_resource`` + * minus ``$OCF_RESKEY_CRM_meta_notify_stop_resource`` + * plus ``$OCF_RESKEY_CRM_meta_notify_start_resource`` + +* **Master** resources: + + * ``$OCF_RESKEY_CRM_meta_notify_master_resource`` + * minus ``$OCF_RESKEY_CRM_meta_notify_demote_resource`` + +* **Slave** resources: + + * ``$OCF_RESKEY_CRM_meta_notify_slave_resource`` + * minus ``$OCF_RESKEY_CRM_meta_notify_stop_resource`` + * plus ``$OCF_RESKEY_CRM_meta_notify_start_resource`` + +* Inactive resources: + + * ``$OCF_RESKEY_CRM_meta_notify_inactive_resource`` + * plus ``$OCF_RESKEY_CRM_meta_notify_stop_resource`` + * minus ``$OCF_RESKEY_CRM_meta_notify_start_resource`` + +* Resources to be started: ``$OCF_RESKEY_CRM_meta_notify_start_resource`` +* Resources to be promoted: ``$OCF_RESKEY_CRM_meta_notify_promote_resource`` +* Resources to be demoted: ``$OCF_RESKEY_CRM_meta_notify_demote_resource`` +* Resources to be stopped: ``$OCF_RESKEY_CRM_meta_notify_stop_resource`` +* Resources that were started: ``$OCF_RESKEY_CRM_meta_notify_start_resource`` +* Resources that were demoted: ``$OCF_RESKEY_CRM_meta_notify_demote_resource`` +* Resources that were stopped: ``$OCF_RESKEY_CRM_meta_notify_stop_resource`` - |Monitor Return Code - |Description +**Post-notification (promote)** - |OCF_NOT_RUNNING - |Stopped - indexterm:[Return Code,OCF_NOT_RUNNING] - - |OCF_SUCCESS - |Running (Slave) - indexterm:[Return Code,OCF_SUCCESS] - - |OCF_RUNNING_MASTER - |Running (Master) - indexterm:[Return Code,OCF_RUNNING_MASTER] +* **Active** resources: + + * ``$OCF_RESKEY_CRM_meta_notify_active_resource`` + * minus ``$OCF_RESKEY_CRM_meta_notify_stop_resource`` + * plus ``$OCF_RESKEY_CRM_meta_notify_start_resource`` + +* **Master** resources: + + * ``$OCF_RESKEY_CRM_meta_notify_master_resource`` + * minus ``$OCF_RESKEY_CRM_meta_notify_demote_resource`` + * plus ``$OCF_RESKEY_CRM_meta_notify_promote_resource`` + +* **Slave** resources: + + * ``$OCF_RESKEY_CRM_meta_notify_slave_resource`` + * minus ``$OCF_RESKEY_CRM_meta_notify_stop_resource`` + * plus 
``$OCF_RESKEY_CRM_meta_notify_start_resource`` + * minus ``$OCF_RESKEY_CRM_meta_notify_promote_resource`` + +* Inactive resources: + + * ``$OCF_RESKEY_CRM_meta_notify_inactive_resource`` + * plus ``$OCF_RESKEY_CRM_meta_notify_stop_resource`` + * minus ``$OCF_RESKEY_CRM_meta_notify_start_resource`` + +* Resources to be started: ``$OCF_RESKEY_CRM_meta_notify_start_resource`` +* Resources to be promoted: ``$OCF_RESKEY_CRM_meta_notify_promote_resource`` +* Resources to be demoted: ``$OCF_RESKEY_CRM_meta_notify_demote_resource`` +* Resources to be stopped: ``$OCF_RESKEY_CRM_meta_notify_stop_resource`` +* Resources that were started: ``$OCF_RESKEY_CRM_meta_notify_start_resource`` +* Resources that were promoted: ``$OCF_RESKEY_CRM_meta_notify_promote_resource`` +* Resources that were demoted: ``$OCF_RESKEY_CRM_meta_notify_demote_resource`` +* Resources that were stopped: ``$OCF_RESKEY_CRM_meta_notify_stop_resource`` + +Monitoring Promotable Clone Resources +_____________________________________ + +The usual monitor actions are insufficient to monitor a promotable clone +resource, because Pacemaker needs to verify not only that the resource is +active, but also that its actual role matches its intended one. + +Define two monitoring actions: the usual one will cover the slave role, +and an additional one with ``role="master"`` will cover the master role. - |OCF_FAILED_MASTER - |Failed (Master) - indexterm:[Return Code,OCF_FAILED_MASTER] - - |Other - |Failed (Slave) - - |========================================================= - - ==== Clone Notifications ==== - - If the clone has the +notify+ meta-attribute set to +true+, and the resource - agent supports the +notify+ action, Pacemaker will call the action when - appropriate, passing a number of extra variables which, when combined with - additional context, can be used to calculate the current state of the cluster - and what is about to happen to it. 
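+
+As a brief sketch of the configuration side of this (the ids and the placeholder
+primitive below are illustrative, and the resource agent must actually implement
+the ``notify`` action), notifications are requested by setting the ``notify``
+meta-attribute on the clone:
+
+.. topic:: A clone with notifications enabled (illustrative names)
+
+   .. code-block:: xml
+
+      <clone id="myapp-clone">
+         <meta_attributes id="myapp-clone-meta">
+            <!-- ask Pacemaker to run the agent's notify action around start/stop -->
+            <nvpair id="myapp-clone-notify" name="notify" value="true"/>
+         </meta_attributes>
+         <!-- placeholder primitive; substitute an agent that supports notify -->
+         <primitive id="myapp" class="ocf" provider="pacemaker" type="Dummy"/>
+      </clone>
+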
- - .Environment variables supplied with Clone notify actions - [width="95%",cols="5,<3",options="header",align="center"] - |========================================================= - - |Variable - |Description - - |OCF_RESKEY_CRM_meta_notify_type - |Allowed values: +pre+, +post+ - indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,type] - indexterm:[type,Notification Environment Variable] - - |OCF_RESKEY_CRM_meta_notify_operation - |Allowed values: +start+, +stop+ - indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,operation] - indexterm:[operation,Notification Environment Variable] - - |OCF_RESKEY_CRM_meta_notify_start_resource - |Resources to be started - indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,start_resource] - indexterm:[start_resource,Notification Environment Variable] - - |OCF_RESKEY_CRM_meta_notify_stop_resource - |Resources to be stopped - indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,stop_resource] - indexterm:[stop_resource,Notification Environment Variable] - - |OCF_RESKEY_CRM_meta_notify_active_resource - |Resources that are running - indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,active_resource] - indexterm:[active_resource,Notification Environment Variable] - - |OCF_RESKEY_CRM_meta_notify_inactive_resource - |Resources that are not running - indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,inactive_resource] - indexterm:[inactive_resource,Notification Environment Variable] - - |OCF_RESKEY_CRM_meta_notify_start_uname - |Nodes on which resources will be started - indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,start_uname] - indexterm:[start_uname,Notification Environment Variable] - - |OCF_RESKEY_CRM_meta_notify_stop_uname - |Nodes on which resources will be stopped - indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,stop_uname] - indexterm:[stop_uname,Notification Environment Variable] - - |OCF_RESKEY_CRM_meta_notify_active_uname - |Nodes on which resources are running - indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,active_uname] - indexterm:[active_uname,Notification Environment Variable] - - |========================================================= - - The variables come in pairs, such as - +OCF_RESKEY_CRM_meta_notify_start_resource+ and - +OCF_RESKEY_CRM_meta_notify_start_uname+, and should be treated as an - array of whitespace-separated elements. - - +OCF_RESKEY_CRM_meta_notify_inactive_resource+ is an exception, as the - matching +uname+ variable does not exist since inactive resources - are not running on any node. - - Thus, in order to indicate that +clone:0+ will be started on +sles-1+, - +clone:2+ will be started on +sles-3+, and +clone:3+ will be started - on +sles-2+, the cluster would set: - - .Notification variables - ====== - [source,Bash] - ------- - OCF_RESKEY_CRM_meta_notify_start_resource="clone:0 clone:2 clone:3" - OCF_RESKEY_CRM_meta_notify_start_uname="sles-1 sles-3 sles-2" - ------- - ====== - - [NOTE] - ==== - Pacemaker will log but otherwise ignore failures of notify actions. 
- ==== - - ==== Interpretation of Notification Variables ==== - - .Pre-notification (stop): - - * Active resources: +$OCF_RESKEY_CRM_meta_notify_active_resource+ - * Inactive resources: +$OCF_RESKEY_CRM_meta_notify_inactive_resource+ - * Resources to be started: +$OCF_RESKEY_CRM_meta_notify_start_resource+ - * Resources to be stopped: +$OCF_RESKEY_CRM_meta_notify_stop_resource+ - - - .Post-notification (stop) / Pre-notification (start): - - * Active resources - ** +$OCF_RESKEY_CRM_meta_notify_active_resource+ - ** minus +$OCF_RESKEY_CRM_meta_notify_stop_resource+ - * Inactive resources - ** +$OCF_RESKEY_CRM_meta_notify_inactive_resource+ - ** plus +$OCF_RESKEY_CRM_meta_notify_stop_resource+ - * Resources that were started: +$OCF_RESKEY_CRM_meta_notify_start_resource+ - * Resources that were stopped: +$OCF_RESKEY_CRM_meta_notify_stop_resource+ - - - .Post-notification (start): - - * Active resources: - ** +$OCF_RESKEY_CRM_meta_notify_active_resource+ - ** minus +$OCF_RESKEY_CRM_meta_notify_stop_resource+ - ** plus +$OCF_RESKEY_CRM_meta_notify_start_resource+ - * Inactive resources: - ** +$OCF_RESKEY_CRM_meta_notify_inactive_resource+ - ** plus +$OCF_RESKEY_CRM_meta_notify_stop_resource+ - ** minus +$OCF_RESKEY_CRM_meta_notify_start_resource+ - * Resources that were started: +$OCF_RESKEY_CRM_meta_notify_start_resource+ - * Resources that were stopped: +$OCF_RESKEY_CRM_meta_notify_stop_resource+ - - ==== Extra Notifications for Promotable Clones ==== - - .Extra environment variables supplied for promotable clones - [width="95%",cols="5,<3",options="header",align="center"] - |========================================================= - - |Variable - |Description - - |OCF_RESKEY_CRM_meta_notify_master_resource - |Resources that are running in +Master+ mode - indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,master_resource] - indexterm:[master_resource,Notification Environment Variable] - - |OCF_RESKEY_CRM_meta_notify_slave_resource - |Resources that are running in +Slave+ mode - indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,slave_resource] - indexterm:[slave_resource,Notification Environment Variable] - - |OCF_RESKEY_CRM_meta_notify_promote_resource - |Resources to be promoted - indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,promote_resource] - indexterm:[promote_resource,Notification Environment Variable] - - |OCF_RESKEY_CRM_meta_notify_demote_resource - |Resources to be demoted - indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,demote_resource] - indexterm:[demote_resource,Notification Environment Variable] - - |OCF_RESKEY_CRM_meta_notify_promote_uname - |Nodes on which resources will be promoted - indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,promote_uname] - indexterm:[promote_uname,Notification Environment Variable] - - |OCF_RESKEY_CRM_meta_notify_demote_uname - |Nodes on which resources will be demoted - indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,demote_uname] - indexterm:[demote_uname,Notification Environment Variable] - - |OCF_RESKEY_CRM_meta_notify_master_uname - |Nodes on which resources are running in +Master+ mode - indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,master_uname] - indexterm:[master_uname,Notification Environment Variable] - - |OCF_RESKEY_CRM_meta_notify_slave_uname - |Nodes on which resources are running in +Slave+ mode - indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,slave_uname] - indexterm:[slave_uname,Notification Environment Variable] - - 
|========================================================= - - ==== Interpretation of Promotable Notification Variables ==== - - .Pre-notification (demote): - - * +Active+ resources: +$OCF_RESKEY_CRM_meta_notify_active_resource+ - * +Master+ resources: +$OCF_RESKEY_CRM_meta_notify_master_resource+ - * +Slave+ resources: +$OCF_RESKEY_CRM_meta_notify_slave_resource+ - * Inactive resources: +$OCF_RESKEY_CRM_meta_notify_inactive_resource+ - * Resources to be started: +$OCF_RESKEY_CRM_meta_notify_start_resource+ - * Resources to be promoted: +$OCF_RESKEY_CRM_meta_notify_promote_resource+ - * Resources to be demoted: +$OCF_RESKEY_CRM_meta_notify_demote_resource+ - * Resources to be stopped: +$OCF_RESKEY_CRM_meta_notify_stop_resource+ - - - .Post-notification (demote) / Pre-notification (stop): - - * +Active+ resources: +$OCF_RESKEY_CRM_meta_notify_active_resource+ - * +Master+ resources: - ** +$OCF_RESKEY_CRM_meta_notify_master_resource+ - ** minus +$OCF_RESKEY_CRM_meta_notify_demote_resource+ - * +Slave+ resources: +$OCF_RESKEY_CRM_meta_notify_slave_resource+ - * Inactive resources: +$OCF_RESKEY_CRM_meta_notify_inactive_resource+ - * Resources to be started: +$OCF_RESKEY_CRM_meta_notify_start_resource+ - * Resources to be promoted: +$OCF_RESKEY_CRM_meta_notify_promote_resource+ - * Resources to be demoted: +$OCF_RESKEY_CRM_meta_notify_demote_resource+ - * Resources to be stopped: +$OCF_RESKEY_CRM_meta_notify_stop_resource+ - * Resources that were demoted: +$OCF_RESKEY_CRM_meta_notify_demote_resource+ - - - .Post-notification (stop) / Pre-notification (start) - - * +Active+ resources: - ** +$OCF_RESKEY_CRM_meta_notify_active_resource+ - ** minus +$OCF_RESKEY_CRM_meta_notify_stop_resource+ - * +Master+ resources: - ** +$OCF_RESKEY_CRM_meta_notify_master_resource+ - ** minus +$OCF_RESKEY_CRM_meta_notify_demote_resource+ - * +Slave+ resources: - ** +$OCF_RESKEY_CRM_meta_notify_slave_resource+ - ** minus +$OCF_RESKEY_CRM_meta_notify_stop_resource+ - * Inactive resources: - ** +$OCF_RESKEY_CRM_meta_notify_inactive_resource+ - ** plus +$OCF_RESKEY_CRM_meta_notify_stop_resource+ - * Resources to be started: +$OCF_RESKEY_CRM_meta_notify_start_resource+ - * Resources to be promoted: +$OCF_RESKEY_CRM_meta_notify_promote_resource+ - * Resources to be demoted: +$OCF_RESKEY_CRM_meta_notify_demote_resource+ - * Resources to be stopped: +$OCF_RESKEY_CRM_meta_notify_stop_resource+ - * Resources that were demoted: +$OCF_RESKEY_CRM_meta_notify_demote_resource+ - * Resources that were stopped: +$OCF_RESKEY_CRM_meta_notify_stop_resource+ - - - .Post-notification (start) / Pre-notification (promote) - - * +Active+ resources: - ** +$OCF_RESKEY_CRM_meta_notify_active_resource+ - ** minus +$OCF_RESKEY_CRM_meta_notify_stop_resource+ - ** plus +$OCF_RESKEY_CRM_meta_notify_start_resource+ - * +Master+ resources: - ** +$OCF_RESKEY_CRM_meta_notify_master_resource+ - ** minus +$OCF_RESKEY_CRM_meta_notify_demote_resource+ - * +Slave+ resources: - ** +$OCF_RESKEY_CRM_meta_notify_slave_resource+ - ** minus +$OCF_RESKEY_CRM_meta_notify_stop_resource+ - ** plus +$OCF_RESKEY_CRM_meta_notify_start_resource+ - * Inactive resources: - ** +$OCF_RESKEY_CRM_meta_notify_inactive_resource+ - ** plus +$OCF_RESKEY_CRM_meta_notify_stop_resource+ - ** minus +$OCF_RESKEY_CRM_meta_notify_start_resource+ - * Resources to be started: +$OCF_RESKEY_CRM_meta_notify_start_resource+ - * Resources to be promoted: +$OCF_RESKEY_CRM_meta_notify_promote_resource+ - * Resources to be demoted: +$OCF_RESKEY_CRM_meta_notify_demote_resource+ - * Resources to be 
stopped: +$OCF_RESKEY_CRM_meta_notify_stop_resource+ - * Resources that were started: +$OCF_RESKEY_CRM_meta_notify_start_resource+ - * Resources that were demoted: +$OCF_RESKEY_CRM_meta_notify_demote_resource+ - * Resources that were stopped: +$OCF_RESKEY_CRM_meta_notify_stop_resource+ - - .Post-notification (promote) - - * +Active+ resources: - ** +$OCF_RESKEY_CRM_meta_notify_active_resource+ - ** minus +$OCF_RESKEY_CRM_meta_notify_stop_resource+ - ** plus +$OCF_RESKEY_CRM_meta_notify_start_resource+ - * +Master+ resources: - ** +$OCF_RESKEY_CRM_meta_notify_master_resource+ - ** minus +$OCF_RESKEY_CRM_meta_notify_demote_resource+ - ** plus +$OCF_RESKEY_CRM_meta_notify_promote_resource+ - * +Slave+ resources: - ** +$OCF_RESKEY_CRM_meta_notify_slave_resource+ - ** minus +$OCF_RESKEY_CRM_meta_notify_stop_resource+ - ** plus +$OCF_RESKEY_CRM_meta_notify_start_resource+ - ** minus +$OCF_RESKEY_CRM_meta_notify_promote_resource+ - * Inactive resources: - ** +$OCF_RESKEY_CRM_meta_notify_inactive_resource+ - ** plus +$OCF_RESKEY_CRM_meta_notify_stop_resource+ - ** minus +$OCF_RESKEY_CRM_meta_notify_start_resource+ - * Resources to be started: +$OCF_RESKEY_CRM_meta_notify_start_resource+ - * Resources to be promoted: +$OCF_RESKEY_CRM_meta_notify_promote_resource+ - * Resources to be demoted: +$OCF_RESKEY_CRM_meta_notify_demote_resource+ - * Resources to be stopped: +$OCF_RESKEY_CRM_meta_notify_stop_resource+ - * Resources that were started: +$OCF_RESKEY_CRM_meta_notify_start_resource+ - * Resources that were promoted: +$OCF_RESKEY_CRM_meta_notify_promote_resource+ - * Resources that were demoted: +$OCF_RESKEY_CRM_meta_notify_demote_resource+ - * Resources that were stopped: +$OCF_RESKEY_CRM_meta_notify_stop_resource+ - - === Monitoring Promotable Clone Resources === - - The usual monitor actions are insufficient to monitor a promotable clone - resource, because Pacemaker needs to verify not only that the resource is - active, but also that its actual role matches its intended one. - - Define two monitoring actions: the usual one will cover the slave role, - and an additional one with +role="master"+ will cover the master role. - - .Monitoring both states of a promotable clone resource - ====== - [source,XML] - ------- - - - - - - - - - - - - ------- - ====== - - [IMPORTANT] - =========== - It is crucial that _every_ monitor operation has a different interval! +.. topic:: Monitoring both states of a promotable clone resource + + .. code-block:: xml + + + + + + + + + + + + + +.. important:: + + It is crucial that *every* monitor operation has a different interval! Pacemaker currently differentiates between operations only by resource and interval; so if (for example) a promotable clone resource had the same monitor interval for both roles, Pacemaker would ignore the role when checking the status -- which would cause unexpected return codes, and therefore unnecessary complications. - =========== - - [[s-promotion-scores]] - === Determining Which Instance is Promoted === - - Pacemaker can choose a promotable clone instance to be promoted in one of two - ways: - - * Promotion scores: These are node attributes set via the `crm_master` utility, - which generally would be called by the resource agent's start action if it - supports promotable clones. This tool automatically detects both the resource - and host, and should be used to set a preference for being promoted. Based on - this, +promoted-max+, and +promoted-node-max+, the instance(s) with the - highest preference will be promoted. 
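+
+To make the two options just mentioned concrete, the following is an illustrative
+sketch (ids and the placeholder primitive are hypothetical) of a promotable clone
+that explicitly sets ``promoted-max`` and ``promoted-node-max``; with these
+values, at most one instance on at most one node is eligible for promotion:
+
+.. topic:: Explicitly limiting promoted instances (illustrative names)
+
+   .. code-block:: xml
+
+      <clone id="myapp-clone">
+         <meta_attributes id="myapp-clone-meta">
+            <nvpair id="myapp-clone-promotable" name="promotable" value="true"/>
+            <!-- at most this many instances may be promoted cluster-wide -->
+            <nvpair id="myapp-clone-promoted-max" name="promoted-max" value="1"/>
+            <!-- at most this many instances may be promoted on any one node -->
+            <nvpair id="myapp-clone-promoted-node-max" name="promoted-node-max" value="1"/>
+         </meta_attributes>
+         <!-- placeholder primitive; substitute a promotion-capable agent -->
+         <primitive id="myapp" class="ocf" provider="pacemaker" type="Stateful"/>
+      </clone>
+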
- - * Constraints: Location constraints can indicate which nodes are most preferred - as masters. - - .Explicitly preferring node1 to be promoted to master - ====== - [source,XML] - ------- - - - - - - ------- - ====== - - [[s-resource-bundle]] - == Bundles - Isolated Environments == - indexterm:[Resource,Bundle] - indexterm:[Container,Docker,Bundle] - indexterm:[Container,podman,Bundle] - indexterm:[Container,rkt,Bundle] - - Pacemaker supports a special syntax for launching a - https://en.wikipedia.org/wiki/Operating-system-level_virtualization[container] - with any infrastructure it requires: the 'bundle'. - - Pacemaker bundles support https://www.docker.com/[Docker], - https://podman.io/[podman], and https://coreos.com/rkt/[rkt] - container technologies. - footnote:[Docker is a trademark of Docker, Inc. No endorsement by or - association with Docker, Inc. is implied.] - - .A bundle for a containerized web server - ==== - [source,XML] - ---- - - - - - - - - - - - - - ---- - ==== - - === Bundle Prerequisites === - indexterm:[Resource,Bundle,Prerequisites] - - Before configuring a bundle in Pacemaker, the user must install the appropriate - container launch technology (Docker, podman, or rkt), and supply a fully - configured container image, on every node allowed to run the bundle. - - Pacemaker will create an implicit resource of type +ocf:heartbeat:docker+, - +ocf:heartbeat:podman+, or +ocf:heartbeat:rkt+ to manage a bundle's - container. The user must ensure that the appropriate resource agent is - installed on every node allowed to run the bundle. - - === Bundle Properties === - - indexterm:[XML element,bundle element] - - .XML Attributes of a bundle Element - [width="95%",cols="3m,<5",options="header",align="center"] - |========================================================= - - |Attribute - |Description - - |id - |A unique name for the bundle (required) - indexterm:[XML attribute,id attribute,bundle element] - indexterm:[XML element,bundle element,id attribute] - - |description - |Arbitrary text (not used by Pacemaker) - indexterm:[XML attribute,description attribute,bundle element] - indexterm:[XML element,bundle element,description attribute] - - |========================================================= - - A bundle must contain exactly one +docker+, +podman+, or +rkt+ element. 
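+
+As a minimal illustration of that rule (the bundle id and image name are
+hypothetical), a bundle needs nothing more than its single container element;
+all other container attributes fall back to their defaults:
+
+.. topic:: A minimal bundle using podman (illustrative names)
+
+   .. code-block:: xml
+
+      <bundle id="my-bundle">
+         <!-- exactly one of docker, podman, or rkt; replicas defaults to 1 -->
+         <podman image="registry.example.com/my-app:latest"/>
+      </bundle>
+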
- - === Bundle Container Properties === - indexterm:[XML element,docker element] - indexterm:[XML element,podman element] - indexterm:[XML element,rkt element] - indexterm:[Resource,Bundle,Container] - - .XML attributes of a docker, podman, or rkt Element - [width="95%",cols="3m,4,<5",options="header",align="center"] - |==== - - |Attribute - |Default - |Description - - |image - | - |Container image tag (required) - indexterm:[XML attribute,image attribute,docker element] - indexterm:[XML element,docker element,image attribute] - indexterm:[XML attribute,image attribute,podman element] - indexterm:[XML element,podman element,image attribute] - indexterm:[XML attribute,image attribute,rkt element] - indexterm:[XML element,rkt element,image attribute] - - |replicas - |Value of +promoted-max+ if that is positive, else 1 - |A positive integer specifying the number of container instances to launch - indexterm:[XML attribute,replicas attribute,docker element] - indexterm:[XML element,docker element,replicas attribute] - indexterm:[XML attribute,replicas attribute,podman element] - indexterm:[XML element,podman element,replicas attribute] - indexterm:[XML attribute,replicas attribute,rkt element] - indexterm:[XML element,rkt element,replicas attribute] - - |replicas-per-host - |1 - |A positive integer specifying the number of container instances allowed to run - on a single node - indexterm:[XML attribute,replicas-per-host attribute,docker element] - indexterm:[XML element,docker element,replicas-per-host attribute] - indexterm:[XML attribute,replicas-per-host attribute,podman element] - indexterm:[XML element,podman element,replicas-per-host attribute] - indexterm:[XML attribute,replicas-per-host attribute,rkt element] - indexterm:[XML element,rkt element,replicas-per-host attribute] - - |promoted-max - |0 - |A non-negative integer that, if positive, indicates that the containerized - service should be treated as a promotable service, with this many replicas - allowed to run the service in the master role - indexterm:[XML attribute,promoted-max attribute,docker element] - indexterm:[XML element,docker element,promoted-max attribute] - indexterm:[XML attribute,promoted-max attribute,podman element] - indexterm:[XML element,podman element,promoted-max attribute] - indexterm:[XML attribute,promoted-max attribute,rkt element] - indexterm:[XML element,rkt element,promoted-max attribute] - - |network - | - |If specified, this will be passed to the `docker run`, `podman run`, or - `rkt run` command as the network setting for the container. - indexterm:[XML attribute,network attribute,docker element] - indexterm:[XML element,docker element,network attribute] - indexterm:[XML attribute,network attribute,podman element] - indexterm:[XML element,podman element,network attribute] - indexterm:[XML attribute,network attribute,rkt element] - indexterm:[XML element,rkt element,network attribute] - - |run-command - |`/usr/sbin/pacemaker-remoted` if bundle contains a +primitive+, otherwise none - |This command will be run inside the container when launching it ("PID 1"). If - the bundle contains a +primitive+, this command 'must' start pacemaker-remoted - (but could, for example, be a script that does other stuff, too). 
- indexterm:[XML attribute,run-command attribute,docker element] - indexterm:[XML element,docker element,run-command attribute] - indexterm:[XML attribute,run-command attribute,podman element] - indexterm:[XML element,podman element,run-command attribute] - indexterm:[XML attribute,run-command attribute,rkt element] - indexterm:[XML element,rkt element,run-command attribute] - - |options - | - |Extra command-line options to pass to the `docker run`, `podman run`, or - `rkt run` command - indexterm:[XML attribute,options attribute,docker element] - indexterm:[XML element,docker element,options attribute] - indexterm:[XML attribute,options attribute,podman element] - indexterm:[XML element,podman element,options attribute] - indexterm:[XML attribute,options attribute,rkt element] - indexterm:[XML element,rkt element,options attribute] - - |==== - - [NOTE] - ==== + +.. _s-promotion-scores: + +Determining Which Instance is Promoted +______________________________________ + +Pacemaker can choose a promotable clone instance to be promoted in one of two +ways: + +* Promotion scores: These are node attributes set via the ``crm_master`` utility, + which generally would be called by the resource agent's start action if it + supports promotable clones. This tool automatically detects both the resource + and host, and should be used to set a preference for being promoted. Based on + this, ``promoted-max``, and ``promoted-node-max``, the instance(s) with the + highest preference will be promoted. + +* Constraints: Location constraints can indicate which nodes are most preferred + as masters. + +.. topic:: Explicitly preferring node1 to be promoted to master + + .. code-block:: xml + + + + + + + +.. index: + single: bundle resource + single: resource; bundle + pair: container; Docker + pair: container; podman + pair: container; rkt + +.. _s-resource-bundle: + +Bundles - Isolated Environments +############################### + +Pacemaker supports a special syntax for launching a +`container `_ +with any infrastructure it requires: the *bundle*. + +Pacemaker bundles support `Docker `_, +`podman `_, and `rkt `_ +container technologies. [#]_ + +.. topic:: A bundle for a containerized web server + + .. code-block:: xml + + + + + + + + + + + + + + +.. index: + single: bundle resource + single: resource; bundle + +Bundle Prerequisites +____________________ + +Before configuring a bundle in Pacemaker, the user must install the appropriate +container launch technology (Docker, podman, or rkt), and supply a fully +configured container image, on every node allowed to run the bundle. + +Pacemaker will create an implicit resource of type **ocf:heartbeat:docker**, +**ocf:heartbeat:podman**, or **ocf:heartbeat:rkt** to manage a bundle's +container. The user must ensure that the appropriate resource agent is +installed on every node allowed to run the bundle. + +.. index:: + pair: XML element; bundle + +Bundle Properties +_________________ + +.. table:: **XML Attributes of a bundle Element** + + +-------------+-----------------------------------------------+ + | Attribute | Description | + +=============+===============================================+ + | id | .. index:: | + | | single: bundle; attribute, id | + | | single: attribute; id (bundle) | + | | single: id; bundle attribute | + | | | + | | A unique name for the bundle (required) | + +-------------+-----------------------------------------------+ + | description | .. 
index:: | + | | single: bundle; attribute, description | + | | single: attribute; description (bundle) | + | | single: description; bundle attribute | + | | | + | | Arbitrary text (not used by Pacemaker) | + +-------------+-----------------------------------------------+ + +A bundle must contain exactly one ``docker``, ``podman``, or ``rkt`` element. + +.. index:: + pair: XML element; docker + pair: XML element; podman + pair: XML element; rkt + single: resource; bundle + +Bundle Container Properties +___________________________ + +.. table:: **XML attributes of a docker, podman, or rkt Element** + + +-------------------+------------------------------------+---------------------------------------------------+ + | Attribute | Default | Description | + +===================+====================================+===================================================+ + | image | | .. index:: | + | | | single: docker; attribute, image | + | | | single: attribute; image (docker) | + | | | single: image; docker attribute | + | | | single: podman; attribute, image | + | | | single: attribute; image (podman) | + | | | single: image; podman attribute | + | | | single: rkt; attribute, image | + | | | single: attribute; image (rkt) | + | | | single: image; rkt attribute | + | | | | + | | | Container image tag (required) | + +-------------------+------------------------------------+---------------------------------------------------+ + | replicas | Value of ``promoted-max`` | .. index:: | + | | if that is positive, else 1 | single: docker; attribute, replicas | + | | | single: attribute; replicas (docker) | + | | | single: replicas; docker attribute | + | | | single: podman; attribute, replicas | + | | | single: attribute; replicas (podman) | + | | | single: replicas; podman attribute | + | | | single: rkt; attribute, replicas | + | | | single: attribute; replicas (rkt) | + | | | single: replicas; rkt attribute | + | | | | + | | | A positive integer specifying the number of | + | | | container instances to launch | + +-------------------+------------------------------------+---------------------------------------------------+ + | replicas-per-host | 1 | .. index:: | + | | | single: docker; attribute, replicas-per-host | + | | | single: attribute; replicas-per-host (docker) | + | | | single: replicas-per-host; docker attribute | + | | | single: podman; attribute, replicas-per-host | + | | | single: attribute; replicas-per-host (podman) | + | | | single: replicas-per-host; podman attribute | + | | | single: rkt; attribute, replicas-per-host | + | | | single: attribute; replicas-per-host (rkt) | + | | | single: replicas-per-host; rkt attribute | + | | | | + | | | A positive integer specifying the number of | + | | | container instances allowed to run on a | + | | | single node | + +-------------------+------------------------------------+---------------------------------------------------+ + | promoted-max | 0 | .. 
index:: | + | | | single: docker; attribute, promoted-max | + | | | single: attribute; promoted-max (docker) | + | | | single: promoted-max; docker attribute | + | | | single: podman; attribute, promoted-max | + | | | single: attribute; promoted-max (podman) | + | | | single: promoted-max; podman attribute | + | | | single: rkt; attribute, promoted-max | + | | | single: attribute; promoted-max (rkt) | + | | | single: promoted-max; rkt attribute | + | | | | + | | | A non-negative integer that, if positive, | + | | | indicates that the containerized service | + | | | should be treated as a promotable service, | + | | | with this many replicas allowed to run the | + | | | service in the master role | + +-------------------+------------------------------------+---------------------------------------------------+ + | network | | .. index:: | + | | | single: docker; attribute, network | + | | | single: attribute; network (docker) | + | | | single: network; docker attribute | + | | | single: podman; attribute, network | + | | | single: attribute; network (podman) | + | | | single: network; podman attribute | + | | | single: rkt; attribute, network | + | | | single: attribute; network (rkt) | + | | | single: network; rkt attribute | + | | | | + | | | If specified, this will be passed to the | + | | | ``docker run``, ``podman run``, or | + | | | ``rkt run`` command as the network setting | + | | | for the container. | + +-------------------+------------------------------------+---------------------------------------------------+ + | run-command | ``/usr/sbin/pacemaker-remoted`` if | .. index:: | + | | bundle contains a **primitive**, | single: docker; attribute, run-command | + | | otherwise none | single: attribute; run-command (docker) | + | | | single: run-command; docker attribute | + | | | single: podman; attribute, run-command | + | | | single: attribute; run-command (podman) | + | | | single: run-command; podman attribute | + | | | single: rkt; attribute, run-command | + | | | single: attribute; run-command (rkt) | + | | | single: run-command; rkt attribute | + | | | | + | | | This command will be run inside the container | + | | | when launching it ("PID 1"). If the bundle | + | | | contains a **primitive**, this command *must* | + | | | start ``pacemaker-remoted`` (but could, for | + | | | example, be a script that does other stuff, too). | + +-------------------+------------------------------------+---------------------------------------------------+ + | options | | .. index:: | + | | | single: docker; attribute, options | + | | | single: attribute; options (docker) | + | | | single: options; docker attribute | + | | | single: podman; attribute, options | + | | | single: attribute; options (podman) | + | | | single: options; podman attribute | + | | | single: rkt; attribute, options | + | | | single: attribute; options (rkt) | + | | | single: options; rkt attribute | + | | | | + | | | Extra command-line options to pass to the | + | | | ``docker run``, ``podman run``, or ``rkt run`` | + | | | command | + +-------------------+------------------------------------+---------------------------------------------------+ + +.. note:: + Considerations when using cluster configurations or container images from Pacemaker 1.1: - - If the container image has a pre-2.0.0 version of Pacemaker, set +run-command+ - to +/usr/sbin/pacemaker_remoted+ (note the underbar instead of dash). 
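+
+For example, the first point above could be addressed with a container element
+along these lines (the image name is hypothetical; only the ``run-command``
+value matters here):
+
+.. topic:: Overriding run-command for a Pacemaker 1.1 container image (illustrative names)
+
+   .. code-block:: xml
+
+      <!-- pre-2.0.0 images ship pacemaker_remoted (underbar), not pacemaker-remoted -->
+      <docker image="registry.example.com/legacy-app:latest" replicas="3"
+              run-command="/usr/sbin/pacemaker_remoted"/>
+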
- - - +masters+ is accepted as an alias for +promoted-max+, but is deprecated since - 2.0.0, and support for it will be removed in a future version. - ==== - - === Bundle Network Properties === - - A bundle may optionally contain one ++ element. - indexterm:[XML element,network element] - indexterm:[Resource,Bundle,Networking] - - .XML attributes of a network Element - [width="95%",cols="2m,1,<4",options="header",align="center"] - |========================================================= - - |Attribute - |Default - |Description - - |add-host - |TRUE - |If TRUE, and +ip-range-start+ is used, Pacemaker will automatically ensure - that +/etc/hosts+ inside the containers has entries for each - <> and its assigned IP. - indexterm:[XML element,add-host attribute,network element] - indexterm:[XML attribute,network element,add-host attribute] - - |ip-range-start - | - |If specified, Pacemaker will create an implicit +ocf:heartbeat:IPaddr2+ - resource for each container instance, starting with this IP address, - using up to +replicas+ sequential addresses. These addresses can be used - from the host's network to reach the service inside the container, though - it is not visible within the container itself. Only IPv4 addresses are - currently supported. - indexterm:[XML element,ip-range-start attribute,network element] - indexterm:[XML attribute,network element,ip-range-start attribute] - - |host-netmask - |32 - |If +ip-range-start+ is specified, the IP addresses are created with this - CIDR netmask (as a number of bits). - indexterm:[XML element,host-netmask attribute,network element] - indexterm:[XML attribute,network element,host-netmask attribute] - - |host-interface - | - |If +ip-range-start+ is specified, the IP addresses are created on this - host interface (by default, it will be determined from the IP address). - indexterm:[XML element,host-interface attribute,network element] - indexterm:[XML attribute,network element,host-interface attribute] - - |control-port - |3121 - |If the bundle contains a +primitive+, the cluster will use this integer TCP - port for communication with Pacemaker Remote inside the container. Changing - this is useful when the container is unable to listen on the default port, - for example, when the container uses the host's network rather than - +ip-range-start+ (in which case +replicas-per-host+ must be 1), or when the - bundle may run on a Pacemaker Remote node that is already listening on the - default port. Any PCMK_remote_port environment variable set on the host or in - the container is ignored for bundle connections. - indexterm:[XML element,control-port attribute,network element] - indexterm:[XML attribute,network element,control-port attribute] - - |========================================================= - - [[s-resource-bundle-note-replica-names]] - [NOTE] - ==== + * If the container image has a pre-2.0.0 version of Pacemaker, set ``run-command`` + to ``/usr/sbin/pacemaker_remoted`` (note the underbar instead of dash). + + * ``masters`` is accepted as an alias for ``promoted-max``, but is deprecated since + 2.0.0, and support for it will be removed in a future version. + +Bundle Network Properties +_________________________ + +A bundle may optionally contain one ```` element. + +.. index:: + pair: XML element; network + single: resource; bundle + single: bundle; networking + +.. 
topic:: **XML attributes of a network Element** + + +----------------+---------+------------------------------------------------------------+ + | Attribute | Default | Description | + +================+=========+============================================================+ + | add-host | TRUE | .. index:: | + | | | single: network; attribute, add-host | + | | | single: attribute; add-host (network) | + | | | single: add-host; network attribute | + | | | | + | | | If TRUE, and ``ip-range-start`` is used, Pacemaker will | + | | | automatically ensure that ``/etc/hosts`` inside the | + | | | containers has entries for each | + | | | :ref:`replica name ` | + | | | and its assigned IP. | + +----------------+---------+------------------------------------------------------------+ + | ip-range-start | | .. index:: | + | | | single: network; attribute, ip-range-start | + | | | single: attribute; ip-range-start (network) | + | | | single: ip-range-start; network attribute | + | | | | + | | | If specified, Pacemaker will create an implicit | + | | | ``ocf:heartbeat:IPaddr2`` resource for each container | + | | | instance, starting with this IP address, using up to | + | | | ``replicas`` sequential addresses. These addresses can be | + | | | used from the host's network to reach the service inside | + | | | the container, though it is not visible within the | + | | | container itself. Only IPv4 addresses are currently | + | | | supported. | + +----------------+---------+------------------------------------------------------------+ + | host-netmask | 32 | .. index:: | + | | | single: network; attribute; host-netmask | + | | | single: attribute; host-netmask (network) | + | | | single: host-netmask; network attribute | + | | | | + | | | If ``ip-range-start`` is specified, the IP addresses | + | | | are created with this CIDR netmask (as a number of bits). | + +----------------+---------+------------------------------------------------------------+ + | host-interface | | .. index:: | + | | | single: network; attribute; host-interface | + | | | single: attribute; host-interface (network) | + | | | single: host-interface; network attribute | + | | | | + | | | If ``ip-range-start`` is specified, the IP addresses are | + | | | created on this host interface (by default, it will be | + | | | determined from the IP address). | + +----------------+---------+------------------------------------------------------------+ + | control-port | 3121 | .. index:: | + | | | single: network; attribute; control-port | + | | | single: attribute; control-port (network) | + | | | single: control-port; network attribute | + | | | | + | | | If the bundle contains a ``primitive``, the cluster will | + | | | use this integer TCP port for communication with | + | | | Pacemaker Remote inside the container. Changing this is | + | | | useful when the container is unable to listen on the | + | | | default port, for example, when the container uses the | + | | | host's network rather than ``ip-range-start`` (in which | + | | | case ``replicas-per-host`` must be 1), or when the bundle | + | | | may run on a Pacemaker Remote node that is already | + | | | listening on the default port. Any ``PCMK_remote_port`` | + | | | environment variable set on the host or in the container | + | | | is ignored for bundle connections. | + +----------------+---------+------------------------------------------------------------+ + +.. _s-resource-bundle-note-replica-names: + +.. 
note:: + Replicas are named by the bundle id plus a dash and an integer counter starting - with zero. For example, if a bundle named +httpd-bundle+ has +replicas=2+, its - containers will be named +httpd-bundle-0+ and +httpd-bundle-1+. - ==== - - Additionally, a +network+ element may optionally contain one or more - +port-mapping+ elements. - indexterm:[XML element,port-mapping] - - .Attributes of a port-mapping Element - [width="95%",cols="2m,1,<4",options="header",align="center"] - |========================================================= - - |Attribute - |Default - |Description - - |id - | - |A unique name for the port mapping (required) - indexterm:[XML attribute,id attribute,port-mapping element] - indexterm:[XML element,port-mapping element,id attribute] - - |port - | - |If this is specified, connections to this TCP port number on the host network - (on the container's assigned IP address, if +ip-range-start+ is specified) - will be forwarded to the container network. Exactly one of +port+ or +range+ - must be specified in a +port-mapping+. - indexterm:[XML attribute,port attribute,port-mapping element] - indexterm:[XML element,port-mapping element,port attribute] - - |internal-port - |value of +port+ - |If +port+ and this are specified, connections to +port+ on the host's network - will be forwarded to this port on the container network. - indexterm:[XML attribute,internal-port attribute,port-mapping element] - indexterm:[XML element,port-mapping element,internal-port attribute] - - |range - | - |If this is specified, connections to these TCP port numbers (expressed as - 'first_port'-'last_port') on the host network (on the container's assigned IP - address, if +ip-range-start+ is specified) will be forwarded to the same ports - in the container network. Exactly one of +port+ or +range+ must be specified - in a +port-mapping+. - indexterm:[XML attribute,range attribute,port-mapping element] - indexterm:[XML element,port-mapping element,range attribute] - - |========================================================= - - [NOTE] - ==== - If the bundle contains a +primitive+, Pacemaker will automatically map the - +control-port+, so it is not necessary to specify that port in a - +port-mapping+. - ==== - - [[s-bundle-storage]] - === Bundle Storage Properties === - - A bundle may optionally contain one +storage+ element. A +storage+ element - has no properties of its own, but may contain one or more +storage-mapping+ - elements. - indexterm:[XML element,storage element] - indexterm:[XML element,storage-mapping element] - indexterm:[Resource,Bundle,Storage] - - .Attributes of a storage-mapping Element - [width="95%",cols="2m,1,<4",options="header",align="center"] - |========================================================= - - |Attribute - |Default - |Description - - |id - | - |A unique name for the storage mapping (required) - indexterm:[XML attribute,id attribute,storage-mapping element] - indexterm:[XML element,storage-mapping element,id attribute] - - |source-dir - | - |The absolute path on the host's filesystem that will be mapped into the - container. Exactly one of +source-dir+ and +source-dir-root+ must be specified - in a +storage-mapping+. - indexterm:[XML attribute,source-dir attribute,storage-mapping element] - indexterm:[XML element,storage-mapping element,source-dir attribute] - - |source-dir-root - | - |The start of a path on the host's filesystem that will be mapped into the - container, using a different subdirectory on the host for each container - instance. 
The subdirectory will be named the same as the - <>. - Exactly one of +source-dir+ and +source-dir-root+ must be specified in a - +storage-mapping+. - indexterm:[XML attribute,source-dir-root attribute,storage-mapping element] - indexterm:[XML element,storage-mapping element,source-dir-root attribute] - - |target-dir - | - |The path name within the container where the host storage will be mapped - (required) - indexterm:[XML attribute,target-dir attribute,storage-mapping element] - indexterm:[XML element,storage-mapping element,target-dir attribute] - - |options - | - |A comma-separated list of file system mount options to use when mapping the - storage - indexterm:[XML attribute,options attribute,storage-mapping element] - indexterm:[XML element,storage-mapping element,options attribute] - - |========================================================= - - [NOTE] - ==== + with zero. For example, if a bundle named **httpd-bundle** has **replicas=2**, its + containers will be named **httpd-bundle-0** and **httpd-bundle-1**. + +.. index:: + pair: XML element; port-mapping + +Additionally, a ``network`` element may optionally contain one or more +``port-mapping`` elements. + +.. table:: **Attributes of a port-mapping Element** + + +---------------+-------------------+------------------------------------------------------+ + | Attribute | Default | Description | + +===============+===================+======================================================+ + | id | | .. index:: | + | | | single: port-mapping; attribute, id | + | | | single: attribute; id (port-mapping) | + | | | single: id; port-mapping attribute | + | | | | + | | | A unique name for the port mapping (required) | + +---------------+-------------------+------------------------------------------------------+ + | port | | .. index:: | + | | | single: port-mapping; attribute, port | + | | | single: attribute; port (port-mapping) | + | | | single: port; port-mapping attribute | + | | | | + | | | If this is specified, connections to this TCP port | + | | | number on the host network (on the container's | + | | | assigned IP address, if ``ip-range-start`` is | + | | | specified) will be forwarded to the container | + | | | network. Exactly one of ``port`` or ``range`` | + | | | must be specified in a ``port-mapping``. | + +---------------+-------------------+------------------------------------------------------+ + | internal-port | value of ``port`` | .. index:: | + | | | single: port-mapping; attribute, internal-port | + | | | single: attribute; internal-port (port-mapping) | + | | | single: internal-port; port-mapping attribute | + | | | | + | | | If ``port`` and this are specified, connections | + | | | to ``port`` on the host's network will be | + | | | forwarded to this port on the container network. | + +---------------+-------------------+------------------------------------------------------+ + | range | | .. index:: | + | | | single: port-mapping; attribute, range | + | | | single: attribute; range (port-mapping) | + | | | single: range; port-mapping attribute | + | | | | + | | | If this is specified, connections to these TCP | + | | | port numbers (expressed as *first_port*-*last_port*) | + | | | on the host network (on the container's assigned IP | + | | | address, if ``ip-range-start`` is specified) will | + | | | be forwarded to the same ports in the container | + | | | network. Exactly one of ``port`` or ``range`` | + | | | must be specified in a ``port-mapping``. 
| + +---------------+-------------------+------------------------------------------------------+ + +.. note:: + + If the bundle contains a ``primitive``, Pacemaker will automatically map the + ``control-port``, so it is not necessary to specify that port in a + ``port-mapping``. + +.. index: + pair: XML element; storage + pair: XML element; storage-mapping + single: resource; bundle + +.. _s-bundle-storage: + +Bundle Storage Properties +_________________________ + +A bundle may optionally contain one ``storage`` element. A ``storage`` element +has no properties of its own, but may contain one or more ``storage-mapping`` +elements. + +.. table:: **Attributes of a storage-mapping Element** + + +-----------------+---------+-------------------------------------------------------------+ + | Attribute | Default | Description | + +=================+=========+=============================================================+ + | id | | .. index:: | + | | | single: storage-mapping; attribute, id | + | | | single: attribute; id (storage-mapping) | + | | | single: id; storage-mapping attribute | + | | | | + | | | A unique name for the storage mapping (required) | + +-----------------+---------+-------------------------------------------------------------+ + | source-dir | | .. index:: | + | | | single: storage-mapping; attribute, source-dir | + | | | single: attribute; source-dir (storage-mapping) | + | | | single: source-dir; storage-mapping attribute | + | | | | + | | | The absolute path on the host's filesystem that will be | + | | | mapped into the container. Exactly one of ``source-dir`` | + | | | and ``source-dir-root`` must be specified in a | + | | | ``storage-mapping``. | + +-----------------+---------+-------------------------------------------------------------+ + | source-dir-root | | .. index:: | + | | | single: storage-mapping; attribute, source-dir-root | + | | | single: attribute; source-dir-root (storage-mapping) | + | | | single: source-dir-root; storage-mapping attribute | + | | | | + | | | The start of a path on the host's filesystem that will | + | | | be mapped into the container, using a different | + | | | subdirectory on the host for each container instance. | + | | | The subdirectory will be named the same as the | + | | | :ref:`replica name `. | + | | | Exactly one of ``source-dir`` and ``source-dir-root`` | + | | | must be specified in a ``storage-mapping``. | + +-----------------+---------+-------------------------------------------------------------+ + | target-dir | | .. index:: | + | | | single: storage-mapping; attribute, target-dir | + | | | single: attribute; target-dir (storage-mapping) | + | | | single: target-dir; storage-mapping attribute | + | | | | + | | | The path name within the container where the host | + | | | storage will be mapped (required) | + +-----------------+---------+-------------------------------------------------------------+ + | options | | .. index:: | + | | | single: storage-mapping; attribute, options | + | | | single: attribute; options (storage-mapping) | + | | | single: options; storage-mapping attribute | + | | | | + | | | A comma-separated list of file system mount | + | | | options to use when mapping the storage | + +-----------------+---------+-------------------------------------------------------------+ + +.. note:: + Pacemaker does not define the behavior if the source directory does not already exist on the host. 
However, it is expected that the container technology and/or its resource agent will create the source directory in that case. - ==== - [NOTE] - ==== - If the bundle contains a +primitive+, +.. note:: + + If the bundle contains a ``primitive``, Pacemaker will automatically map the equivalent of - +source-dir=/etc/pacemaker/authkey target-dir=/etc/pacemaker/authkey+ - and +source-dir-root=/var/log/pacemaker/bundles target-dir=/var/log+ into the + ``source-dir=/etc/pacemaker/authkey target-dir=/etc/pacemaker/authkey`` + and ``source-dir-root=/var/log/pacemaker/bundles target-dir=/var/log`` into the container, so it is not necessary to specify those paths in a - +storage-mapping+. - ==== + ``storage-mapping``. - [IMPORTANT] - ==== - The +PCMK_authkey_location+ environment variable must not be set to anything - other than the default of `/etc/pacemaker/authkey` on any node in the cluster. - ==== +.. important:: + + The ``PCMK_authkey_location`` environment variable must not be set to anything + other than the default of ``/etc/pacemaker/authkey`` on any node in the cluster. - [IMPORTANT] - ==== +.. important:: + If SELinux is used in enforcing mode on the host, you must ensure the container is allowed to use any storage you mount into it. For Docker and podman bundles, adding "Z" to the mount options will create a container-specific label for the mount that allows the container access. - ==== - - === Bundle Primitive === - - indexterm:[Resource,Bundle,Primitive] - - A bundle may optionally contain one <> - resource. The primitive may have operations, instance attributes, and - meta-attributes defined, as usual. + +.. index:: + single: resource; bundle - If a bundle contains a primitive resource, the container image must include - the Pacemaker Remote daemon, and at least one of +ip-range-start+ or - +control-port+ must be configured in the bundle. Pacemaker will create an - implicit +ocf:pacemaker:remote+ resource for the connection, launch - Pacemaker Remote within the container, and monitor and manage the primitive - resource via Pacemaker Remote. +Bundle Primitive +________________ - If the bundle has more than one container instance (replica), the primitive - resource will function as an implicit <> -- a - <> if the bundle has +masters+ greater - than zero. +A bundle may optionally contain one :ref:`primitive ` +resource. The primitive may have operations, instance attributes, and +meta-attributes defined, as usual. + +If a bundle contains a primitive resource, the container image must include +the Pacemaker Remote daemon, and at least one of ``ip-range-start`` or +``control-port`` must be configured in the bundle. Pacemaker will create an +implicit **ocf:pacemaker:remote** resource for the connection, launch +Pacemaker Remote within the container, and monitor and manage the primitive +resource via Pacemaker Remote. + +If the bundle has more than one container instance (replica), the primitive +resource will function as an implicit :ref:`clone ` -- a +:ref:`promotable clone ` if the bundle has ``promoted-max`` +greater than zero. - [NOTE] - ==== +.. note:: + If you want to pass environment variables to a bundle's Pacemaker Remote connection or primitive, you have two options: * Environment variables whose value is the same regardless of the underlying host - may be set using the container element's +options+ attribute. + may be set using the container element's ``options`` attribute. 
* If you want variables to have host-specific values, you can use the - <> element to map a file on the host as - +/etc/pacemaker/pcmk-init.env+ in the container. Pacemaker Remote will parse + :ref:`storage-mapping ` element to map a file on the host as + ``/etc/pacemaker/pcmk-init.env`` in the container. Pacemaker Remote will parse this file as a shell-like format, with variables set as NAME=VALUE, ignoring blank lines and comments starting with "#". - ==== - [IMPORTANT] - ==== - When a bundle has a +primitive+, Pacemaker on all cluster nodes must be able to +.. important:: + + When a bundle has a ``primitive``, Pacemaker on all cluster nodes must be able to contact Pacemaker Remote inside the bundle's containers. - * The containers must have an accessible network (for example, +network+ should - not be set to "none" with a +primitive+). + * The containers must have an accessible network (for example, ``network`` should + not be set to "none" with a ``primitive``). * The default, using a distinct network space inside the container, works in - combination with +ip-range-start+. Any firewall must allow access from all - cluster nodes to the +control-port+ on the container IPs. + combination with ``ip-range-start``. Any firewall must allow access from all + cluster nodes to the ``control-port`` on the container IPs. * If the container shares the host's network space (for example, by setting - +network+ to "host"), a unique +control-port+ should be specified for each + ``network`` to "host"), a unique ``control-port`` should be specified for each bundle. Any firewall must allow access from all cluster nodes to the - +control-port+ on all cluster and remote node IPs. - ==== + ``control-port`` on all cluster and remote node IPs. - [[s-bundle-attributes]] - === Bundle Node Attributes === - - indexterm:[Resource,Bundle,Node Attributes] - - If the bundle has a +primitive+, the primitive's resource agent may want to set - node attributes such as <>. However, with - containers, it is not apparent which node should get the attribute. - - If the container uses shared storage that is the same no matter which node the - container is hosted on, then it is appropriate to use the promotion score on the - bundle node itself. - - On the other hand, if the container uses storage exported from the underlying host, - then it may be more appropriate to use the promotion score on the underlying host. - - Since this depends on the particular situation, the - +container-attribute-target+ resource meta-attribute allows the user to specify - which approach to use. If it is set to +host+, then user-defined node attributes - will be checked on the underlying host. If it is anything else, the local node - (in this case the bundle node) is used as usual. - - This only applies to user-defined attributes; the cluster will always check the - local node for cluster-defined attributes such as +#uname+. +.. index:: + single: resource; bundle + +.. _s-bundle-attributes: + +Bundle Node Attributes +______________________ - If +container-attribute-target+ is +host+, the cluster will pass additional - environment variables to the primitive's resource agent that allow it to set - node attributes appropriately: +CRM_meta_container_attribute_target+ (identical - to the meta-attribute value) and +CRM_meta_physical_host+ (the name of the - underlying host). +If the bundle has a ``primitive``, the primitive's resource agent may want to set +node attributes such as :ref:`promotion scores `. 
However, with +containers, it is not apparent which node should get the attribute. + +If the container uses shared storage that is the same no matter which node the +container is hosted on, then it is appropriate to use the promotion score on the +bundle node itself. + +On the other hand, if the container uses storage exported from the underlying host, +then it may be more appropriate to use the promotion score on the underlying host. + +Since this depends on the particular situation, the +``container-attribute-target`` resource meta-attribute allows the user to specify +which approach to use. If it is set to ``host``, then user-defined node attributes +will be checked on the underlying host. If it is anything else, the local node +(in this case the bundle node) is used as usual. + +This only applies to user-defined attributes; the cluster will always check the +local node for cluster-defined attributes such as ``#uname``. + +If ``container-attribute-target`` is ``host``, the cluster will pass additional +environment variables to the primitive's resource agent that allow it to set +node attributes appropriately: ``CRM_meta_container_attribute_target`` (identical +to the meta-attribute value) and ``CRM_meta_physical_host`` (the name of the +underlying host). - [NOTE] - ==== - When called by a resource agent, the `attrd_updater` and `crm_attribute` +.. note:: + + When called by a resource agent, the ``attrd_updater`` and ``crm_attribute`` commands will automatically check those environment variables and set attributes appropriately. - ==== - - === Bundle Meta-Attributes === - indexterm:[Resource,Bundle,Meta-attributes] - - Any meta-attribute set on a bundle will be inherited by the bundle's - primitive and any resources implicitly created by Pacemaker for the bundle. - - This includes options such as +priority+, +target-role+, and +is-managed+. See - <> for more information. - - === Limitations of Bundles === - - Restarting pacemaker while a bundle is unmanaged or the cluster is in - maintenance mode may cause the bundle to fail. +.. index:: + single: resource; bundle + +Bundle Meta-Attributes +______________________ - Bundles may not be explicitly cloned or included in groups. This includes the - bundle's primitive and any resources implicitly created by Pacemaker for the - bundle. (If +replicas+ is greater than 1, the bundle will behave like a clone - implicitly.) +Any meta-attribute set on a bundle will be inherited by the bundle's +primitive and any resources implicitly created by Pacemaker for the bundle. + +This includes options such as ``priority``, ``target-role``, and ``is-managed``. See +:ref:`resource_options` for more information. - Bundles do not have instance attributes, utilization attributes, or operations, - though a bundle's primitive may have them. +Limitations of Bundles +______________________ - A bundle with a primitive can run on a Pacemaker Remote node only if the bundle - uses a distinct +control-port+. +Restarting pacemaker while a bundle is unmanaged or the cluster is in +maintenance mode may cause the bundle to fail. + +Bundles may not be explicitly cloned or included in groups. This includes the +bundle's primitive and any resources implicitly created by Pacemaker for the +bundle. (If ``replicas`` is greater than 1, the bundle will behave like a clone +implicitly.) + +Bundles do not have instance attributes, utilization attributes, or operations, +though a bundle's primitive may have them. 
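+
+As a hedged illustration of the meta-attribute inheritance described above (the
+bundle id and image name are placeholders), a single ``target-role`` on the
+bundle is enough to stop the bundle along with every resource Pacemaker
+implicitly creates for it:
+
+.. topic:: Stopping a whole bundle via an inherited meta-attribute (illustrative names)
+
+   .. code-block:: xml
+
+      <bundle id="httpd-bundle">
+         <meta_attributes id="httpd-bundle-meta">
+            <!-- inherited by the implicit container and connection resources -->
+            <nvpair id="httpd-bundle-target-role" name="target-role" value="Stopped"/>
+         </meta_attributes>
+         <docker image="pcmk:http" replicas="3"/>
+      </bundle>
+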
+ +A bundle with a primitive can run on a Pacemaker Remote node only if the bundle +uses a distinct ``control-port``. + +.. [#] Of course, the service must support running multiple instances. + +.. [#] These are historical terms that will eventually be replaced, but the extensive + use of them and the need for backward compatibility make it a long process. + You may see examples using a **master** tag instead of a **clone** tag with the + **promotable** meta-attribute set to **true**; the **master** tag is supported, but + deprecated, and will be removed in a future version. You may also see such + services referred to as *multi-state* or *stateful*; these mean the same thing + as *promotable*. + +.. [#] Docker is a trademark of Docker, Inc. No endorsement by or association with + Docker, Inc. is implied.
diff --git a/doc/sphinx/Pacemaker_Explained/constraints.rst b/doc/sphinx/Pacemaker_Explained/constraints.rst index 23647448c3..98d272257f 100644 --- a/doc/sphinx/Pacemaker_Explained/constraints.rst +++ b/doc/sphinx/Pacemaker_Explained/constraints.rst @@ -1,927 +1,986 @@ +.. index:: + single: constraint + single: resource; constraint + +.. _constraints: + Resource Constraints --------------------
-.. Convert_to_RST: - - anchor:ch-constraints[Chapter 7, Alerts] - indexterm:[Resource,Constraint] - - == Scores == - - indexterm:[Resource,Score] - indexterm:[Node,Score] - Scores of all kinds are integral to how the cluster works. - Practically everything from moving a resource to deciding which - resource to stop in a degraded cluster is achieved by manipulating - scores in some way. - - Scores are calculated per resource and node. Any node with a - negative score for a resource can't run that resource. The cluster - places a resource on the node with the highest score for it. - - === Infinity Math === - - Pacemaker implements +INFINITY+ (or equivalently, ++INFINITY+) internally as a - score of 1,000,000. Addition and subtraction with it follow these three basic - rules: - - * Any value + +INFINITY+ = +INFINITY+ - * Any value - +INFINITY+ = +-INFINITY+ - * +INFINITY+ - +INFINITY+ = +-INFINITY+ - - [NOTE] - ======
+.. index:: + single: resource; score + single: node; score + +Scores +###### + +Scores of all kinds are integral to how the cluster works. +Practically everything from moving a resource to deciding which +resource to stop in a degraded cluster is achieved by manipulating +scores in some way. + +Scores are calculated per resource and node. Any node with a +negative score for a resource can't run that resource. The cluster +places a resource on the node with the highest score for it. + +Infinity Math +_____________ + +Pacemaker implements **INFINITY** (or equivalently, **+INFINITY**) internally as a +score of 1,000,000. Addition and subtraction with it follow these three basic +rules: + +* Any value + **INFINITY** = **INFINITY** + +* Any value - **INFINITY** = -**INFINITY** + +* **INFINITY** - **INFINITY** = **-INFINITY** + +.. note:: +
    What if you want to use a score higher than 1,000,000? Typically this
    possibility arises when someone wants to base the score on some
    external metric that might go above 1,000,000.
    - + 
    The short answer is you can't.
    - + 
    The long answer is it is sometimes possible to work around this
    limitation creatively. You may be able to set the score to some
    computed value based on the external metric rather than use the metric
    directly. 
For nodes, you can store the metric as a node attribute, and query the attribute when computing the score (possibly as part of a custom resource agent). - ====== - .. _location-constraint: +.. index:: + single: location constraint + single: constraint; location + Deciding Which Nodes a Resource Can Run On ########################################## +*Location constraints* tell the cluster which nodes a resource can run on. + +There are two alternative strategies. One way is to say that, by default, +resources can run anywhere, and then the location constraints specify nodes +that are not allowed (an *opt-out* cluster). The other way is to start with +nothing able to run anywhere, and use location constraints to selectively +enable allowed nodes (an *opt-in* cluster). + +Whether you should choose opt-in or opt-out depends on your +personal preference and the make-up of your cluster. If most of your +resources can run on most of the nodes, then an opt-out arrangement is +likely to result in a simpler configuration. On the other-hand, if +most resources can only run on a small subset of nodes, an opt-in +configuration might be simpler. -.. Convert_to_RST_2: - - indexterm:[Constraint,Location Constraint] - 'Location constraints' tell the cluster which nodes a resource can run on. - - There are two alternative strategies. One way is to say that, by default, - resources can run anywhere, and then the location constraints specify nodes - that are not allowed (an 'opt-out' cluster). The other way is to start with - nothing able to run anywhere, and use location constraints to selectively - enable allowed nodes (an 'opt-in' cluster). - - Whether you should choose opt-in or opt-out depends on your - personal preference and the make-up of your cluster. If most of your - resources can run on most of the nodes, then an opt-out arrangement is - likely to result in a simpler configuration. On the other-hand, if - most resources can only run on a small subset of nodes, an opt-in - configuration might be simpler. - - === Location Properties === - - indexterm:[XML element,rsc_location element] - indexterm:[Constraint,Location Constraint,rsc_location element] - - .Attributes of a rsc_location Element - [width="95%",cols="2m,1,<5",options="header",align="center"] - |========================================================= - - |Attribute - |Default - |Description - - |id - | - |A unique name for the constraint (required) - indexterm:[XML attribute,id attribute,rsc_location element] - indexterm:[XML element,rsc_location element,id attribute] - - |rsc - | - |The name of the resource to which this constraint applies. A location - constraint must either have a +rsc+, have a +rsc-pattern+, or contain at least - one resource set. - indexterm:[XML attribute,rsc attribute,rsc_location element] - indexterm:[XML element,rsc_location element,rsc attribute] - - |rsc-pattern - | - |A pattern matching the names of resources to which this constraint applies. - The syntax is the same as - http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap09.html#tag_09_04[POSIX] - extended regular expressions, with the addition of an initial '!' indicating - that resources 'not' matching the pattern are selected. If the regular - expression contains submatches, and the constraint is governed by a - <>, the submatches can be referenced as +%0+ through +%9+ in - the rule's +score-attribute+ or a rule expression's +attribute+. 
A location - constraint must either have a +rsc+, have a +rsc-pattern+, or contain at least - one resource set. - indexterm:[XML attribute,rsc-pattern attribute,rsc_location element] - indexterm:[XML element,rsc_location element,rsc-pattern attribute] - - |node - | - |The name of the node to which this constraint applies. A location constraint - must either have a +node+ and +score+, or contain at least one rule. - indexterm:[XML attribute,node attribute,rsc_location element] - indexterm:[XML element,rsc_location element,node attribute] - - |score - | - |Positive values indicate a preference for running the affected resource(s) on - +node+ -- the higher the value, the stronger the preference. Negative values - indicate the resource(s) should avoid this node (a value of +-INFINITY+ - changes "should" to "must"). A location constraint must either have a +node+ - and +score+, or contain at least one rule. - indexterm:[XML attribute,score attribute,rsc_location element] - indexterm:[XML element,rsc_location element,score attribute] - - |resource-discovery - |always - a|Whether Pacemaker should perform resource discovery (that is, check whether - the resource is already running) for this resource on this node. This should - normally be left as the default, so that rogue instances of a service can be - stopped when they are running where they are not supposed to be. However, - there are two situations where disabling resource discovery is a good idea: - when a service is not installed on a node, discovery might return an error - (properly written OCF agents will not, so this is usually only seen with other - agent types); and when Pacemaker Remote is used to scale a cluster to hundreds - of nodes, limiting resource discovery to allowed nodes can significantly boost - performance. - - * +always:+ Always perform resource discovery for the specified resource on this node. - * +never:+ Never perform resource discovery for the specified resource on this node. - This option should generally be used with a -INFINITY score, although that is not strictly - required. - * +exclusive:+ Perform resource discovery for the specified resource only on - this node (and other nodes similarly marked as +exclusive+). Multiple location - constraints using +exclusive+ discovery for the same resource across - different nodes creates a subset of nodes resource-discovery is exclusive to. - If a resource is marked for +exclusive+ discovery on one or more nodes, that - resource is only allowed to be placed within that subset of nodes. - - indexterm:[XML attribute,resource-discovery attribute,rsc_location element] - indexterm:[XML element,rsc_location element,resource-discovery attribute] - indexterm:[Constraint,Location Constraint,Resource Discovery] - - |========================================================= - - [WARNING] - ========= - Setting resource-discovery to +never+ or +exclusive+ removes Pacemaker's +.. index:: + pair: XML element; rsc_location + single: constraint; location + single: constraint; rsc_location + +Location Properties +___________________ + +.. table:: **Attributes of a rsc_location Element** + + +--------------------+---------+----------------------------------------------------------------------------------------------+ + | Attribute | Default | Description | + +====================+=========+==============================================================================================+ + | id | | .. 
index:: | + | | | single: rsc_location; attribute, id | + | | | single: attribute; id (rsc_location) | + | | | single: id; rsc_location attribute | + | | | | + | | | A unique name for the constraint (required) | + +--------------------+---------+----------------------------------------------------------------------------------------------+ + | rsc | | .. index:: | + | | | single: rsc_location; attribute, rsc | + | | | single: attribute; rsc (rsc_location) | + | | | single: rsc; rsc_location attribute | + | | | | + | | | The name of the resource to which this constraint | + | | | applies. A location constraint must either have a | + | | | ``rsc``, have a ``rsc-pattern``, or contain at | + | | | least one resource set. | + +--------------------+---------+----------------------------------------------------------------------------------------------+ + | rsc-pattern | | .. index:: | + | | | single: rsc_location; attribute, rsc-pattern | + | | | single: attribute; rsc-pattern (rsc_location) | + | | | single: rsc-pattern; rsc_location attribute | + | | | | + | | | A pattern matching the names of resources to which | + | | | this constraint applies. The syntax is the same as | + | | | `POSIX `_ | + | | | extended regular expressions, with the addition of an | + | | | initial *!* indicating that resources *not* matching | + | | | the pattern are selected. If the regular expression | + | | | contains submatches, and the constraint is governed by | + | | | a :ref:`rule `, the submatches can be | + | | | referenced as **%0** through **%9** in the rule's | + | | | ``score-attribute`` or a rule expression's ``attribute``. | + | | | A location constraint must either have a ``rsc``, have a | + | | | ``rsc-pattern``, or contain at least one resource set. | + +--------------------+---------+----------------------------------------------------------------------------------------------+ + | node | | .. index:: | + | | | single: rsc_location; attribute, node | + | | | single: attribute; node (rsc_location) | + | | | single: node; rsc_location attribute | + | | | | + | | | The name of the node to which this constraint applies. | + | | | A location constraint must either have a ``node`` and | + | | | ``score``, or contain at least one rule. | + +--------------------+---------+----------------------------------------------------------------------------------------------+ + | score | | .. index:: | + | | | single: rsc_location; attribute, score | + | | | single: attribute; score (rsc_location) | + | | | single: score; rsc_location attribute | + | | | | + | | | Positive values indicate a preference for running the | + | | | affected resource(s) on ``node`` -- the higher the value, | + | | | the stronger the preference. Negative values indicate | + | | | the resource(s) should avoid this node (a value of | + | | | **-INFINITY** changes "should" to "must"). A location | + | | | constraint must either have a ``node`` and ``score``, | + | | | or contain at least one rule. | + +--------------------+---------+----------------------------------------------------------------------------------------------+ + | resource-discovery | always | .. index:: | + | | | single: rsc_location; attribute, resource-discovery | + | | | single: attribute; resource-discovery (rsc_location) | + | | | single: resource-discovery; rsc_location attribute | + | | | | + | | | Whether Pacemaker should perform resource discovery | + | | | (that is, check whether the resource is already running) | + | | | for this resource on this node. 
This should normally be | + | | | left as the default, so that rogue instances of a | + | | | service can be stopped when they are running where they | + | | | are not supposed to be. However, there are two | + | | | situations where disabling resource discovery is a good | + | | | idea: when a service is not installed on a node, | + | | | discovery might return an error (properly written OCF | + | | | agents will not, so this is usually only seen with other | + | | | agent types); and when Pacemaker Remote is used to scale | + | | | a cluster to hundreds of nodes, limiting resource | + | | | discovery to allowed nodes can significantly boost | + | | | performance. | + | | | | + | | | * ``always:`` Always perform resource discovery for | + | | | the specified resource on this node. | + | | | | + | | | * ``never:`` Never perform resource discovery for the | + | | | specified resource on this node. This option should | + | | | generally be used with a -INFINITY score, although | + | | | that is not strictly required. | + | | | | + | | | * ``exclusive:`` Perform resource discovery for the | + | | | specified resource only on this node (and other nodes | + | | | similarly marked as ``exclusive``). Multiple location | + | | | constraints using ``exclusive`` discovery for the | + | | | same resource across different nodes creates a subset | + | | | of nodes resource-discovery is exclusive to. If a | + | | | resource is marked for ``exclusive`` discovery on one | + | | | or more nodes, that resource is only allowed to be | + | | | placed within that subset of nodes. | + +--------------------+---------+----------------------------------------------------------------------------------------------+ + +.. warning:: + + Setting ``resource-discovery`` to ``never`` or ``exclusive`` removes Pacemaker's ability to detect and stop unwanted instances of a service running where it's not supposed to be. It is up to the system administrator (you!) - to make sure that the service can 'never' be active on nodes without - resource-discovery (such as by leaving the relevant software uninstalled). - ========= - - === Asymmetrical "Opt-In" Clusters === - indexterm:[Asymmetrical Clusters] - indexterm:[Opt-In Clusters] - - To create an opt-in cluster, start by preventing resources from - running anywhere by default: - - ---- + to make sure that the service can *never* be active on nodes without + ``resource-discovery`` (such as by leaving the relevant software uninstalled). + +.. index:: + single: Asymmetrical Clusters + single: Opt-In Clusters + +Asymmetrical "Opt-In" Clusters +______________________________ + +To create an opt-in cluster, start by preventing resources from running anywhere +by default: + +.. code-block:: none + # crm_attribute --name symmetric-cluster --update false - ---- - - Then start enabling nodes. The following fragment says that the web - server prefers *sles-1*, the database prefers *sles-2* and both can - fail over to *sles-3* if their most preferred node fails. - - .Opt-in location constraints for two resources - ====== - [source,XML] - ------- - - - - - - - ------- - ====== - - === Symmetrical "Opt-Out" Clusters === - indexterm:[Symmetrical Clusters] - indexterm:[Opt-Out Clusters] - - To create an opt-out cluster, start by allowing resources to run - anywhere by default: - - ---- + +Then start enabling nodes. The following fragment says that the web +server prefers **sles-1**, the database prefers **sles-2** and both can +fail over to **sles-3** if their most preferred node fails. + +.. 
topic:: Opt-in location constraints for two resources + + .. code-block:: xml + + + + + + + + +.. index:: + single: Symmetrical Clusters + single: Opt-Out Clusters + +Symmetrical "Opt-Out" Clusters +______________________________ + +To create an opt-out cluster, start by allowing resources to run +anywhere by default: + +.. code-block:: none + # crm_attribute --name symmetric-cluster --update true - ---- - - Then start disabling nodes. The following fragment is the equivalent - of the above opt-in configuration. - - .Opt-out location constraints for two resources - ====== - [source,XML] - ------- - - - - - - - ------- - ====== - - [[node-score-equal]] - === What if Two Nodes Have the Same Score === - - If two nodes have the same score, then the cluster will choose one. - This choice may seem random and may not be what was intended, however - the cluster was not given enough information to know any better. - - .Constraints where a resource prefers two nodes equally - ====== - [source,XML] - ------- - - - - - - - - ------- - ====== - - In the example above, assuming no other constraints and an inactive - cluster, +Webserver+ would probably be placed on +sles-1+ and +Database+ on - +sles-2+. It would likely have placed +Webserver+ based on the node's - uname and +Database+ based on the desire to spread the resource load - evenly across the cluster. However other factors can also be involved - in more complex configurations. - - [[s-resource-ordering]] - == Specifying the Order in which Resources Should Start/Stop == - - indexterm:[Constraint,Ordering Constraint] - indexterm:[Resource,Start Order] - - 'Ordering constraints' tell the cluster the order in which certain - resource actions should occur. - - [IMPORTANT] - ==== - Ordering constraints affect 'only' the ordering of resource actions; - they do 'not' require that the resources be placed on the + +Then start disabling nodes. The following fragment is the equivalent +of the above opt-in configuration. + +.. topic:: Opt-out location constraints for two resources + + .. code-block:: xml + + + + + + + + +.. _node-score-equal: + +What if Two Nodes Have the Same Score +_____________________________________ + +If two nodes have the same score, then the cluster will choose one. +This choice may seem random and may not be what was intended, however +the cluster was not given enough information to know any better. + +.. topic:: Constraints where a resource prefers two nodes equally + + .. code-block:: xml + + + + + + + + + +In the example above, assuming no other constraints and an inactive +cluster, **Webserver** would probably be placed on **sles-1** and **Database** on +**sles-2**. It would likely have placed **Webserver** based on the node's +uname and **Database** based on the desire to spread the resource load +evenly across the cluster. However other factors can also be involved +in more complex configurations. + +.. index:: + single: constraint; ordering + single: resource; start order + +.. _s-resource-ordering: + +Specifying the Order in which Resources Should Start/Stop +######################################################### + +*Ordering constraints* tell the cluster the order in which certain +resource actions should occur. + +.. important:: + + Ordering constraints affect *only* the ordering of resource actions; + they do *not* require that the resources be placed on the same node. 
If you want resources to be started on the same node - 'and' in a specific order, you need both an ordering constraint 'and' - a colocation constraint (see <>), or - alternatively, a group (see <>). - ==== - - === Ordering Properties === - - indexterm:[XML element,rsc_order element] - indexterm:[Constraint,Ordering Constraint,rsc_order element] - - .Attributes of a rsc_order Element - [width="95%",cols="1m,1,<4",options="header",align="center"] - |========================================================= - - |Field - |Default - |Description - - |id - | - |A unique name for the constraint - indexterm:[XML attribute,id attribute,rsc_order element] - indexterm:[XML element,rsc_order element,id attribute] - - |first - | - |Name of the resource that the +then+ resource depends on - indexterm:[XML attribute,first attribute,rsc_order element] - indexterm:[XML element,rsc_order element,first attribute] - - |then - | - |Name of the dependent resource - indexterm:[XML attribute,then attribute,rsc_order element] - indexterm:[XML element,rsc_order element,then attribute] - - |first-action - |start - |The action that the +first+ resource must complete before +then-action+ - can be initiated for the +then+ resource. Allowed values: +start+, - +stop+, +promote+, +demote+. - indexterm:[XML attribute,first-action attribute,rsc_order element] - indexterm:[XML element,rsc_order element,first-action attribute] - - |then-action - |value of +first-action+ - |The action that the +then+ resource can execute only after the - +first-action+ on the +first+ resource has completed. Allowed - values: +start+, +stop+, +promote+, +demote+. - indexterm:[XML attribute,then-action attribute,rsc_order element] - indexterm:[XML element,rsc_order element,then-action attribute] - - |kind - |Mandatory - a|How to enforce the constraint. Allowed values: - - * +Mandatory:+ +then-action+ will never be initiated for the +then+ resource - unless and until +first-action+ successfully completes for the +first+ - resource. - * +Optional:+ The constraint applies only if both specified resource actions - are scheduled in the same transition (that is, in response to the same - cluster state). This means that +then-action+ is allowed on the +then+ - resource regardless of the state of the +first+ resource, but if both actions - happen to be scheduled at the same time, they will be ordered. - * +Serialize:+ Ensure that the specified actions are never performed - concurrently for the specified resources. +First-action+ and +then-action+ - can be executed in either order, but one must complete before the other can - be initiated. An example use case is when resource start-up puts a high load - on the host. - - indexterm:[XML attribute,kind attribute,rsc_order element] - indexterm:[XML element,rsc_order element,kind attribute] - - |symmetrical - |TRUE for +Mandatory+ and +Optional+ kinds. FALSE for +Serialize+ kind. - |If true, the reverse of the constraint applies for the opposite action (for - example, if B starts after A starts, then B stops before A stops). - +Serialize+ orders cannot be symmetrical. - indexterm:[XML attribute,symmetrical attribute,rsc_order element] - indexterm:[XML element,rsc_order element,symmetrical attribute] - - |========================================================= - - +Promote+ and +demote+ apply to the master role of - <> resources. 
- - === Optional and mandatory ordering === - - Here is an example of ordering constraints where +Database+ 'must' start before - +Webserver+, and +IP+ 'should' start before +Webserver+ if they both need to be - started: - - .Optional and mandatory ordering constraints - ====== - [source,XML] - ------- - - - - - ------- - ====== - - Because the above example lets +symmetrical+ default to TRUE, - +Webserver+ must be stopped before +Database+ can be stopped, - and +Webserver+ should be stopped before +IP+ - if they both need to be stopped. - - [[s-resource-colocation]] - == Placing Resources Relative to other Resources == - - indexterm:[Constraint,Colocation Constraint] - indexterm:[Resource,Location Relative to Other Resources] - 'Colocation constraints' tell the cluster that the location of one resource - depends on the location of another one. - - Colocation has an important side-effect: it affects the order in which - resources are assigned to a node. Think about it: You can't place A relative to - B unless you know where B is. - footnote:[ - While the human brain is sophisticated enough to read the constraint - in any order and choose the correct one depending on the situation, - the cluster is not quite so smart. Yet. - ] - - So when you are creating colocation constraints, it is important to - consider whether you should colocate A with B, or B with A. - - Another thing to keep in mind is that, assuming A is colocated with - B, the cluster will take into account A's preferences when - deciding which node to choose for B. - - For a detailed look at exactly how this occurs, see - http://clusterlabs.org/doc/Colocation_Explained.pdf[Colocation Explained]. - - [IMPORTANT] - ==== - Colocation constraints affect 'only' the placement of resources; they do 'not' + *and* in a specific order, you need both an ordering constraint *and* + a colocation constraint (see :ref:`s-resource-colocation`), or + alternatively, a group (see :ref:`group-resources`). + +.. index:: + pair: XML element; rsc_order + pair: constraint; ordering + +Ordering Properties +___________________ + +.. table:: **Attributes of a rsc_order Element** + + +--------------+----------------------------+-------------------------------------------------------------------+ + | Field | Default | Description | + +==============+============================+===================================================================+ + | id | | .. index:: | + | | | single: rsc_order; attribute, id | + | | | single: attribute; id (rsc_order) | + | | | single: id; rsc_order attribute | + | | | | + | | | A unique name for the constraint | + +--------------+----------------------------+-------------------------------------------------------------------+ + | first | | .. index:: | + | | | single: rsc_order; attribute, first | + | | | single: attribute; first (rsc_order) | + | | | single: first; rsc_order attribute | + | | | | + | | | Name of the resource that the ``then`` resource | + | | | depends on | + +--------------+----------------------------+-------------------------------------------------------------------+ + | then | | .. index:: | + | | | single: rsc_order; attribute, then | + | | | single: attribute; then (rsc_order) | + | | | single: then; rsc_order attribute | + | | | | + | | | Name of the dependent resource | + +--------------+----------------------------+-------------------------------------------------------------------+ + | first-action | start | .. 
index:: | + | | | single: rsc_order; attribute, first-action | + | | | single: attribute; first-action (rsc_order) | + | | | single: first-action; rsc_order attribute | + | | | | + | | | The action that the ``first`` resource must complete | + | | | before ``then-action`` can be initiated for the ``then`` | + | | | resource. Allowed values: ``start``, ``stop``, | + | | | ``promote``, ``demote``. | + +--------------+----------------------------+-------------------------------------------------------------------+ + | then-action | value of ``first-action`` | .. index:: | + | | | single: rsc_order; attribute, then-action | + | | | single: attribute; then-action (rsc_order) | + | | | single: first-action; rsc_order attribute | + | | | | + | | | The action that the ``then`` resource can execute only | + | | | after the ``first-action`` on the ``first`` resource has | + | | | completed. Allowed values: ``start``, ``stop``, | + | | | ``promote``, ``demote``. | + +--------------+----------------------------+-------------------------------------------------------------------+ + | kind | Mandatory | .. index:: | + | | | single: rsc_order; attribute, kind | + | | | single: attribute; kind (rsc_order) | + | | | single: kind; rsc_order attribute | + | | | | + | | | How to enforce the constraint. Allowed values: | + | | | | + | | | * ``Mandatory:`` ``then-action`` will never be initiated | + | | | for the ``then`` resource unless and until ``first-action`` | + | | | successfully completes for the ``first`` resource. | + | | | | + | | | * ``Optional:`` The constraint applies only if both specified | + | | | resource actions are scheduled in the same transition | + | | | (that is, in response to the same cluster state). This | + | | | means that ``then-action`` is allowed on the ``then`` | + | | | resource regardless of the state of the ``first`` resource, | + | | | but if both actions happen to be scheduled at the same time, | + | | | they will be ordered. | + | | | | + | | | * ``Serialize:`` Ensure that the specified actions are never | + | | | performed concurrently for the specified resources. | + | | | ``First-action`` and ``then-action`` can be executed in either | + | | | order, but one must complete before the other can be initiated. | + | | | An example use case is when resource start-up puts a high load | + | | | on the host. | + +--------------+----------------------------+-------------------------------------------------------------------+ + | symmetrical | TRUE for ``Mandatory`` and | .. index:: | + | | ``Optional`` kinds. FALSE | single: rsc_order; attribute, symmetrical | + | | for ``Serialize`` kind. | single: attribute; symmetrical (rsc)order) | + | | | single: symmetrical; rsc_order attribute | + | | | | + | | | If true, the reverse of the constraint applies for the | + | | | opposite action (for example, if B starts after A starts, | + | | | then B stops before A stops). ``Serialize`` orders cannot | + | | | be symmetrical. | + +--------------+----------------------------+-------------------------------------------------------------------+ + +``Promote`` and ``demote`` apply to the master role of :ref:`promotable ` +resources. + +Optional and mandatory ordering +_______________________________ + +Here is an example of ordering constraints where **Database** *must* start before +**Webserver**, and **IP** *should* start before **Webserver** if they both need to be +started: + +.. topic:: Optional and mandatory ordering constraints + + .. 
code-block:: xml + + + + + + +Because the above example lets ``symmetrical`` default to TRUE, **Webserver** +must be stopped before **Database** can be stopped, and **Webserver** should be +stopped before **IP** if they both need to be stopped. + +.. index:: + single: constraint; colocation + single: resource; location relative to other resources + +.. _s-resource-colocation: + +Placing Resources Relative to other Resources +############################################# + +*Colocation constraints* tell the cluster that the location of one resource +depends on the location of another one. + +Colocation has an important side-effect: it affects the order in which +resources are assigned to a node. Think about it: You can't place A relative to +B unless you know where B is [#]_. + +So when you are creating colocation constraints, it is important to +consider whether you should colocate A with B, or B with A. + +Another thing to keep in mind is that, assuming A is colocated with +B, the cluster will take into account A's preferences when +deciding which node to choose for B. + +For a detailed look at exactly how this occurs, see +`Colocation Explained `_. + +.. important:: + + Colocation constraints affect *only* the placement of resources; they do *not* require that the resources be started in a particular order. If you want - resources to be started on the same node 'and' in a specific order, you need - both an ordering constraint (see <>) 'and' a colocation - constraint, or alternatively, a group (see <>). - ==== - - === Colocation Properties === - - indexterm:[XML element,rsc_colocation element] - indexterm:[Constraint,Colocation Constraint,rsc_colocation element] - - .Attributes of a rsc_colocation Constraint - [width="95%",cols="1m,1,<4",options="header",align="center"] - |========================================================= - - |Field - |Default - |Description - - |id - | - |A unique name for the constraint (required). - indexterm:[XML attribute,id attribute,rsc_colocation element] - indexterm:[XML element,rsc_colocation element,id attribute] - - |rsc - | - |The name of a resource that should be located relative to +with-rsc+ (required). - indexterm:[XML attribute,rsc attribute,rsc_colocation element] - indexterm:[XML element,rsc_colocation element,rsc attribute] - - |with-rsc - | - |The name of the resource used as the colocation target. The cluster will - decide where to put this resource first and then decide where to put +rsc+ (required). - indexterm:[XML attribute,with-rsc attribute,rsc_colocation element] - indexterm:[XML element,rsc_colocation element,with-rsc attribute] - - |node-attribute - |#uname - |The node attribute that must be the same on the node running +rsc+ and the - node running +with-rsc+ for the constraint to be satisfied. (For details, - see <>.) - indexterm:[XML attribute,node-attribute attribute,rsc_colocation element] - indexterm:[XML element,rsc_colocation element,node-attribute attribute] - - |score - | - |Positive values indicate the resources should run on the same - node. Negative values indicate the resources should run on - different nodes. Values of \+/- +INFINITY+ change "should" to "must". - indexterm:[XML attribute,score attribute,rsc_colocation element] - indexterm:[XML element,rsc_colocation element,score attribute] - - |========================================================= - - === Mandatory Placement === - - Mandatory placement occurs when the constraint's score is - ++INFINITY+ or +-INFINITY+. 
In such cases, if the constraint can't be - satisfied, then the +rsc+ resource is not permitted to run. For - +score=INFINITY+, this includes cases where the +with-rsc+ resource is - not active. - - If you need resource +A+ to always run on the same machine as - resource +B+, you would add the following constraint: - - .Mandatory colocation constraint for two resources - ==== - [source,XML] - - ==== - - Remember, because +INFINITY+ was used, if +B+ can't run on any - of the cluster nodes (for whatever reason) then +A+ will not - be allowed to run. Whether +A+ is running or not has no effect on +B+. - - Alternatively, you may want the opposite -- that +A+ 'cannot' - run on the same machine as +B+. In this case, use - +score="-INFINITY"+. - - .Mandatory anti-colocation constraint for two resources - ==== - [source,XML] - - ==== - - Again, by specifying +-INFINITY+, the constraint is binding. So if the - only place left to run is where +B+ already is, then - +A+ may not run anywhere. - - As with +INFINITY+, +B+ can run even if +A+ is stopped. - However, in this case +A+ also can run if +B+ is stopped, because it still - meets the constraint of +A+ and +B+ not running on the same node. - - === Advisory Placement === - - If mandatory placement is about "must" and "must not", then advisory - placement is the "I'd prefer if" alternative. For constraints with - scores greater than +-INFINITY+ and less than +INFINITY+, the cluster - will try to accommodate your wishes but may ignore them if the - alternative is to stop some of the cluster resources. - - As in life, where if enough people prefer something it effectively - becomes mandatory, advisory colocation constraints can combine with - other elements of the configuration to behave as if they were - mandatory. - - .Advisory colocation constraint for two resources - ==== - [source,XML] - - ==== - - [[s-coloc-attribute]] - === Colocation by Node Attribute === - - The +node-attribute+ property of a colocation constraints allows you to express - the requirement, "these resources must be on similar nodes". - - As an example, imagine that you have two Storage Area Networks (SANs) that are - not controlled by the cluster, and each node is connected to one or the other. - You may have two resources +r1+ and +r2+ such that +r2+ needs to use the same - SAN as +r1+, but doesn't necessarily have to be on the same exact node. - In such a case, you could define a <> named - +san+, with the value +san1+ or +san2+ on each node as appropriate. Then, you - could colocate +r2+ with +r1+ using +node-attribute+ set to +san+. - - [[s-resource-sets]] - == Resource Sets == - - 'Resource sets' allow multiple resources to be affected by a single constraint. - indexterm:[Constraint,Resource Set] - indexterm:[Resource,Resource Set] - - .A set of 3 resources - ==== - [source,XML] - ---- - - - - - - ---- - ==== - - Resource sets are valid inside +rsc_location+, - +rsc_order+ (see <>), - +rsc_colocation+ (see <>), - and +rsc_ticket+ (see <>) constraints. - - A resource set has a number of properties that can be set, - though not all have an effect in all contexts. 
- - .Attributes of a resource_set Element - [width="95%",cols="2m,1,<5",options="header",align="center"] - |========================================================= - - |Field - |Default - |Description - - |id - | - |A unique name for the set - indexterm:[XML attribute,id attribute,resource_set element] - indexterm:[XML element,resource_set element,id attribute] - - |sequential - |true - |Whether the members of the set must be acted on in order. - Meaningful within +rsc_order+ and +rsc_colocation+. - indexterm:[XML attribute,sequential attribute,resource_set element] - indexterm:[XML element,resource_set element,sequential attribute] - - |require-all - |true - |Whether all members of the set must be active before continuing. - With the current implementation, the cluster may continue even if only one - member of the set is started, but if more than one member of the set is - starting at the same time, the cluster will still wait until all of those have - started before continuing (this may change in future versions). - Meaningful within +rsc_order+. - indexterm:[XML attribute,require-all attribute,resource_set element] - indexterm:[XML element,resource_set element,require-all attribute] - - |role - | - |Limit the effect of the constraint to the specified role. - Meaningful within +rsc_location+, +rsc_colocation+ and +rsc_ticket+. - indexterm:[XML attribute,role attribute,resource_set element] - indexterm:[XML element,resource_set element,role attribute] - - |action - | - |Limit the effect of the constraint to the specified action. - Meaningful within +rsc_order+. - indexterm:[XML attribute,action attribute,resource_set element] - indexterm:[XML element,resource_set element,action attribute] - - |score - | - |'Advanced use only.' Use a specific score for this set within the constraint. - indexterm:[XML attribute,score attribute,resource_set element] - indexterm:[XML element,resource_set element,score attribute] - - |========================================================= - - [[s-resource-sets-ordering]] - == Ordering Sets of Resources == - - A common situation is for an administrator to create a chain of - ordered resources, such as: - - .A chain of ordered resources - ====== - [source,XML] - ------- - - - - - - ------- - ====== - - .Visual representation of the four resources' start order for the above constraints - image::images/resource-set.png["Ordered set",width="16cm",height="2.5cm",align="center"] - - === Ordered Set === - - To simplify this situation, resource sets (see <>) can be used - within ordering constraints: - - .A chain of ordered resources expressed as a set - ====== - [source,XML] - ------- - - - - - - - - - - - ------- - ====== - - While the set-based format is not less verbose, it is significantly - easier to get right and maintain. - - [IMPORTANT] - ========= + resources to be started on the same node *and* in a specific order, you need + both an ordering constraint (see :ref:`s-resource-ordering`) *and* a colocation + constraint, or alternatively, a group (see :ref:`group-resources`). + +.. index:: + pair: XML element; rsc_colocation + pair: constraint; colocation + +Colocation Properties +_____________________ + +.. table:: **Attributes of a rsc_colocation Constraint** + + +----------------+---------+--------------------------------------------------------+ + | Field | Default | Description | + +================+=========+========================================================+ + | id | | .. 
index:: | + | | | single: rsc_colocation; attribute, id | + | | | single: attribute; id (rsc_colocation) | + | | | single: id; rsc_colocation attribute | + | | | | + | | | A unique name for the constraint (required). | + +----------------+---------+--------------------------------------------------------+ + | rsc | | .. index:: | + | | | single: rsc_colocation; attribute, rsc | + | | | single: attribute; rsc (rsc_colocation) | + | | | single: rsc; rsc_colocation attribute | + | | | | + | | | The name of a resource that should be located | + | | | relative to ``with-rsc`` (required). | + +----------------+---------+--------------------------------------------------------+ + | with-rsc | | .. index:: | + | | | single: rsc_colocation; attribute, with-rsc | + | | | single: attribute; with-rsc (rsc_colocation) | + | | | single: with-rsc; rsc_colocation attribute | + | | | | + | | | The name of the resource used as the colocation | + | | | target. The cluster will decide where to put this | + | | | resource first and then decide where to put | + | | | ``rsc`` (required). | + +----------------+---------+--------------------------------------------------------+ + | node-attribute | #uname | .. index:: | + | | | single: rsc_colocation; attribute, node-attribute | + | | | single: attribute; node-attribute (rsc_colocation) | + | | | single: node-attribute; rsc_colocation attribute | + | | | | + | | | The node attribute that must be the same on the | + | | | node running ``rsc`` and the node running ``with-rsc`` | + | | | for the constraint to be satisfied. (For details, | + | | | see :ref:`s-coloc-attribute`.) | + +----------------+---------+--------------------------------------------------------+ + | score | | .. index:: | + | | | single: rsc_colocation; attribute, score | + | | | single: attribute; score (rsc_colocation) | + | | | single: score; rsc_colocation attribute | + | | | | + | | | Positive values indicate the resources should run on | + | | | the same node. Negative values indicate the resources | + | | | should run on different nodes. Values of | + | | | +/- **INFINITY** change "should" to "must". | + +----------------+---------+--------------------------------------------------------+ + +Mandatory Placement +___________________ + +Mandatory placement occurs when the constraint's score is +**+INFINITY** or **-INFINITY**. In such cases, if the constraint can't be +satisfied, then the **rsc** resource is not permitted to run. For +``score=INFINITY``, this includes cases where the ``with-rsc`` resource is +not active. + +If you need resource **A** to always run on the same machine as +resource **B**, you would add the following constraint: + +.. topic:: Mandatory colocation constraint for two resources + + .. code-block:: xml + + + +Remember, because **INFINITY** was used, if **B** can't run on any +of the cluster nodes (for whatever reason) then **A** will not +be allowed to run. Whether **A** is running or not has no effect on **B**. + +Alternatively, you may want the opposite -- that **A** *cannot* +run on the same machine as **B**. In this case, use ``score="-INFINITY"``. + +.. topic:: Mandatory anti-colocation constraint for two resources + + .. code-block:: xml + + + +Again, by specifying **-INFINITY**, the constraint is binding. So if the +only place left to run is where **B** already is, then **A** may not run anywhere. + +As with **INFINITY**, **B** can run even if **A** is stopped. 
However, in this +case **A** also can run if **B** is stopped, because it still meets the +constraint of **A** and **B** not running on the same node. + +Advisory Placement +__________________ + +If mandatory placement is about "must" and "must not", then advisory +placement is the "I'd prefer if" alternative. For constraints with +scores greater than **-INFINITY** and less than **INFINITY**, the cluster +will try to accommodate your wishes but may ignore them if the +alternative is to stop some of the cluster resources. + +As in life, where if enough people prefer something it effectively +becomes mandatory, advisory colocation constraints can combine with +other elements of the configuration to behave as if they were +mandatory. + +.. topic:: Advisory colocation constraint for two resources + + .. code-block:: xml + + + +.. _s-coloc-attribute: + +Colocation by Node Attribute +____________________________ + +The ``node-attribute`` property of a colocation constraints allows you to express +the requirement, "these resources must be on similar nodes". + +As an example, imagine that you have two Storage Area Networks (SANs) that are +not controlled by the cluster, and each node is connected to one or the other. +You may have two resources **r1** and **r2** such that **r2** needs to use the same +SAN as **r1**, but doesn't necessarily have to be on the same exact node. +In such a case, you could define a :ref:`node attribute ` named +**san**, with the value **san1** or **san2** on each node as appropriate. Then, you +could colocate **r2** with **r1** using ``node-attribute`` set to **san**. + +.. _s-resource-sets: + +Resource Sets +############# + +.. index:: + single: constraint; resource set + single: resource; resource set + +*Resource sets* allow multiple resources to be affected by a single constraint. + +.. topic:: A set of 3 resources + + .. code-block:: xml + + + + + + + +Resource sets are valid inside ``rsc_location``, ``rsc_order`` +(see :ref:`s-resource-sets-ordering`), ``rsc_colocation`` +(see :ref:`s-resource-sets-colocation`), and ``rsc_ticket`` +(see :ref:`ticket-constraints`) constraints. + +A resource set has a number of properties that can be set, though not all +have an effect in all contexts. + +.. index:: + pair: XML element; resource_set + +.. topic:: **Attributes of a resource_set Element** + + +-------------+---------+--------------------------------------------------------+ + | Field | Default | Description | + +=============+=========+========================================================+ + | id | | .. index:: | + | | | single: resource_set; attribute, id | + | | | single: attribute; id (resource_set) | + | | | single: id; resource_set attribute | + | | | | + | | | A unique name for the set | + +-------------+---------+--------------------------------------------------------+ + | sequential | true | .. index:: | + | | | single: resource_set; attribute, sequential | + | | | single: attribute; sequential (resource_set) | + | | | single: sequential; resource_set attribute | + | | | | + | | | Whether the members of the set must be acted on in | + | | | order. Meaningful within ``rsc_order`` and | + | | | ``rsc_colocation``. | + +-------------+---------+--------------------------------------------------------+ + | require-all | true | .. 
index:: | + | | | single: resource_set; attribute, require-all | + | | | single: attribute; require-all (resource_set) | + | | | single: require-all; resource_set attribute | + | | | | + | | | Whether all members of the set must be active before | + | | | continuing. With the current implementation, the | + | | | cluster may continue even if only one member of the | + | | | set is started, but if more than one member of the set | + | | | is starting at the same time, the cluster will still | + | | | wait until all of those have started before continuing | + | | | (this may change in future versions). Meaningful | + | | | within ``rsc_order``. | + +-------------+---------+--------------------------------------------------------+ + | role | | .. index:: | + | | | single: resource_set; attribute, role | + | | | single: attribute; role (resource_set) | + | | | single: role; resource_set attribute | + | | | | + | | | Limit the effect of the constraint to the specified | + | | | role. Meaningful within ``rsc_location``, | + | | | ``rsc_colocation`` and ``rsc_ticket``. | + +-------------+---------+--------------------------------------------------------+ + | action | | .. index:: | + | | | single: resource_set; attribute, action | + | | | single: attribute; action (resource_set) | + | | | single: action; resource_set attribute | + | | | | + | | | Limit the effect of the constraint to the specified | + | | | action. Meaningful within ``rsc_order``. | + +-------------+---------+--------------------------------------------------------+ + | score | | .. index:: | + | | | single: resource_set; attribute, score | + | | | single: attribute; score (resource_set) | + | | | single: score; resource_set attribute | + | | | | + | | | *Advanced use only.* Use a specific score for this | + | | | set within the constraint. | + +-------------+---------+--------------------------------------------------------+ + +.. _s-resource-sets-ordering: + +Ordering Sets of Resources +########################## + +A common situation is for an administrator to create a chain of ordered +resources, such as: + +.. topic:: A chain of ordered resources + + .. code-block:: xml + + + + + + + +.. topic:: Visual representation of the four resources' start order for the above constraints + + .. image:: ../../shared/en-US/images/resource-set.png + :alt: Ordered set + +Ordered Set +___________ + +To simplify this situation, resource sets (see :ref:`s-resource-sets`) can be used +within ordering constraints: + +.. topic:: A chain of ordered resources expressed as a set + + .. code-block:: xml + + + + + + + + + + + + +While the set-based format is not less verbose, it is significantly easier to +get right and maintain. + +.. important:: + If you use a higher-level tool, pay attention to how it exposes this - functionality. Depending on the tool, creating a set +A B+ may be equivalent to - +A then B+, or +B then A+. - ========= - - === Ordering Multiple Sets === - - The syntax can be expanded to allow sets of resources to be ordered relative to - each other, where the members of each individual set may be ordered or - unordered (controlled by the +sequential+ property). In the example below, +A+ - and +B+ can both start in parallel, as can +C+ and +D+, however +C+ and +D+ can - only start once _both_ +A+ _and_ +B+ are active. 
- - .Ordered sets of unordered resources - ====== - [source,XML] - ------- - - - - - - - - - - - - - ------- - ====== - - .Visual representation of the start order for two ordered sets of unordered resources - image::images/two-sets.png["Two ordered sets",width="13cm",height="7.5cm",align="center"] - - Of course either set -- or both sets -- of resources can also be - internally ordered (by setting +sequential="true"+) and there is no - limit to the number of sets that can be specified. - - .Advanced use of set ordering - Three ordered sets, two of which are internally unordered - ====== - [source,XML] - ------- - - - - - - - - - - - - - - - - - ------- - ====== - - .Visual representation of the start order for the three sets defined above - image::images/three-sets.png["Three ordered sets",width="16cm",height="7.5cm",align="center"] - - [IMPORTANT] - ==== - An ordered set with +sequential=false+ makes sense only if there is another + functionality. Depending on the tool, creating a set **A B** may be equivalent to + **A then B**, or **B then A**. + +Ordering Multiple Sets +______________________ + +The syntax can be expanded to allow sets of resources to be ordered relative to +each other, where the members of each individual set may be ordered or +unordered (controlled by the ``sequential`` property). In the example below, **A** +and **B** can both start in parallel, as can **C** and **D**, however **C** and +**D** can only start once *both* **A** *and* **B** are active. + +.. topic:: Ordered sets of unordered resources + + .. code-block:: xml + + + + + + + + + + + + + + +.. topic:: Visual representation of the start order for two ordered sets of + unordered resources + + .. image:: ../../shared/en-US/images/two-sets.png + :alt: Two ordered sets + +Of course either set -- or both sets -- of resources can also be internally +ordered (by setting ``sequential="true"``) and there is no limit to the number +of sets that can be specified. + +.. topic:: Advanced use of set ordering - Three ordered sets, two of which are + internally unordered + + .. code-block:: xml + + + + + + + + + + + + + + + + + + +.. topic:: Visual representation of the start order for the three sets defined above + + .. image:: ../../shared/en-US/images/three-sets.png + :alt: Three ordered sets + +.. important:: + + An ordered set with ``sequential=false`` makes sense only if there is another set in the constraint. Otherwise, the constraint has no effect. - ==== - - === Resource Set OR Logic === - - The unordered set logic discussed so far has all been "AND" logic. - To illustrate this take the 3 resource set figure in the previous section. - Those sets can be expressed, +(A and B) then \(C) then (D) then (E and F)+. - - Say for example we want to change the first set, +(A and B)+, to use "OR" logic - so the sets look like this: +(A or B) then \(C) then (D) then (E and F)+. - This functionality can be achieved through the use of the +require-all+ - option. This option defaults to TRUE which is why the - "AND" logic is used by default. Setting +require-all=false+ means only one - resource in the set needs to be started before continuing on to the next set. - - .Resource Set "OR" logic: Three ordered sets, where the first set is internally unordered with "OR" logic - ====== - [source,XML] - ------- - - - - - - - - - - - - - - - - - ------- - ====== - - [IMPORTANT] - ==== - An ordered set with +require-all=false+ makes sense only in conjunction with - +sequential=false+. 
Think of it like this: +sequential=false+ modifies the set + +Resource Set OR Logic +_____________________ + +The unordered set logic discussed so far has all been "AND" logic. To illustrate +this take the 3 resource set figure in the previous section. Those sets can be +expressed, **(A and B) then (C) then (D) then (E and F)**. + +Say for example we want to change the first set, **(A and B)**, to use "OR" logic +so the sets look like this: **(A or B) then (C) then (D) then (E and F)**. This +functionality can be achieved through the use of the ``require-all`` option. +This option defaults to TRUE which is why the "AND" logic is used by default. +Setting ``require-all=false`` means only one resource in the set needs to be +started before continuing on to the next set. + +.. topic:: Resource Set "OR" logic: Three ordered sets, where the first set is + internally unordered with "OR" logic + + .. code-block:: xml + + + + + + + + + + + + + + + + + + +.. important:: + + An ordered set with ``require-all=false`` makes sense only in conjunction with + ``sequential=false``. Think of it like this: ``sequential=false`` modifies the set to be an unordered set using "AND" logic by default, and adding - +require-all=false+ flips the unordered set's "AND" logic to "OR" logic. - ==== - - [[s-resource-sets-colocation]] - == Colocating Sets of Resources == - - Another common situation is for an administrator to create a set of - colocated resources. - - The simplest way to do this is to define a resource group (see - <>), but that cannot always accurately express the desired - relationships. For example, maybe the resources do not need to be ordered. - - Another way would be to define each relationship as an individual constraint, - but that causes a difficult-to-follow constraint explosion as the number of - resources and combinations grow. - - .Colocation chain as individual constraints, where A is placed first, then B, then C, then D - ====== - [source,XML] - ------- - - - - - - ------- - ====== - - To express complicated relationships with a simplified syntax - footnote:[which is not the same as saying easy to follow], - <> can be used within colocation constraints. - - .Equivalent colocation chain expressed using +resource_set+ - ====== - [source,XML] - ------- - - - - - - - - - - - ------- - ====== - - [NOTE] - ==== - Within a +resource_set+, the resources are listed in the order they are - _placed_, which is the reverse of the order in which they are _colocated_. - In the above example, resource +A+ is placed before resource +B+, which is - the same as saying resource +B+ is colocated with resource +A+. - ==== - - As with individual constraints, a resource that can't be active prevents any - resource that must be colocated with it from being active. In both of the two - previous examples, if +B+ is unable to run, then both +C+ and by inference +D+ - must remain stopped. - - [IMPORTANT] - ========= + ``require-all=false`` flips the unordered set's "AND" logic to "OR" logic. + +.. _s-resource-sets-colocation: + +Colocating Sets of Resources +############################ + +Another common situation is for an administrator to create a set of +colocated resources. + +The simplest way to do this is to define a resource group (see +:ref:`group-resources`), but that cannot always accurately express the desired +relationships. For example, maybe the resources do not need to be ordered. 
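+
+For comparison, keeping the same four resources together as a group might look
+like the sketch below (the resource IDs and the ``Dummy`` agent are placeholders
+only); note that a group also forces its members to start in order, which may be
+exactly the extra relationship you do not want:
+
+.. code-block:: xml
+
+   <!-- hypothetical sketch: a group both colocates and orders its members -->
+   <group id="colocated-group">
+     <primitive id="A" class="ocf" provider="pacemaker" type="Dummy"/>
+     <primitive id="B" class="ocf" provider="pacemaker" type="Dummy"/>
+     <primitive id="C" class="ocf" provider="pacemaker" type="Dummy"/>
+     <primitive id="D" class="ocf" provider="pacemaker" type="Dummy"/>
+   </group>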
+ +Another way would be to define each relationship as an individual constraint, +but that causes a difficult-to-follow constraint explosion as the number of +resources and combinations grow. + +.. topic:: Colocation chain as individual constraints, where A is placed first, + then B, then C, then D + + .. code-block:: xml + + + + + + + +To express complicated relationships with a simplified syntax [#]_, +:ref:`resource sets ` can be used within colocation constraints. + +.. topic:: Equivalent colocation chain expressed using **resource_set** + + .. code-block:: xml + + + + + + + + + + + + +.. note:: + + Within a ``resource_set``, the resources are listed in the order they are + *placed*, which is the reverse of the order in which they are *colocated*. + In the above example, resource **A** is placed before resource **B**, which is + the same as saying resource **B** is colocated with resource **A**. + +As with individual constraints, a resource that can't be active prevents any +resource that must be colocated with it from being active. In both of the two +previous examples, if **B** is unable to run, then both **C** and by inference **D** +must remain stopped. + +.. important:: + If you use a higher-level tool, pay attention to how it exposes this - functionality. Depending on the tool, creating a set +A B+ may be equivalent to - +A with B+, or +B with A+. - ========= - - Resource sets can also be used to tell the cluster that entire _sets_ of - resources must be colocated relative to each other, while the individual - members within any one set may or may not be colocated relative to each other - (determined by the set's +sequential+ property). - - In the following example, resources +B+, +C+, and +D+ will each be colocated - with +A+ (which will be placed first). +A+ must be able to run in order for any - of the resources to run, but any of +B+, +C+, or +D+ may be stopped without - affecting any of the others. - - .Using colocated sets to specify a shared dependency - ====== - [source,XML] - ------- - - - - - - - - - - - - - ------- - ====== - - [NOTE] - ==== + functionality. Depending on the tool, creating a set **A B** may be equivalent to + **A with B**, or **B with A**. + +Resource sets can also be used to tell the cluster that entire *sets* of +resources must be colocated relative to each other, while the individual +members within any one set may or may not be colocated relative to each other +(determined by the set's ``sequential`` property). + +In the following example, resources **B**, **C**, and **D** will each be colocated +with **A** (which will be placed first). **A** must be able to run in order for any +of the resources to run, but any of **B**, **C**, or **D** may be stopped without +affecting any of the others. + +.. topic:: Using colocated sets to specify a shared dependency + + .. code-block:: xml + + + + + + + + + + + + + + +.. note:: + Pay close attention to the order in which resources and sets are listed. While the members of any one sequential set are placed first to last (i.e., the colocation dependency is last with first), multiple sets are placed last to first (i.e. the colocation dependency is first with last). - ==== - - [IMPORTANT] - ==== - A colocated set with +sequential="false"+ makes sense only if there is + +.. important:: + + A colocated set with ``sequential="false"`` makes sense only if there is another set in the constraint. Otherwise, the constraint has no effect. - ==== - - There is no inherent limit to the number and size of the sets used. 
- The only thing that matters is that in order for any member of one set - in the constraint to be active, all members of sets listed after it must also - be active (and naturally on the same node); and if a set has +sequential="true"+, - then in order for one member of that set to be active, all members listed - before it must also be active. - - If desired, you can restrict the dependency to instances of promotable clone - resources that are in a specific role, using the set's +role+ property. - - .Colocation in which the members of the middle set have no interdependencies, and the last set listed applies only to instances in the master role - ====== - [source,XML] - ------- - - - - - - - - - - - - - - - - - - ------- - ====== - - .Visual representation of the above example (resources are placed from left to right) - image::images/pcmk-colocated-sets.png["Colocation chain",width="960px",height="431px",align="center"] - - [NOTE] - ==== - Unlike ordered sets, colocated sets do not use the +require-all+ option. - ==== + +There is no inherent limit to the number and size of the sets used. +The only thing that matters is that in order for any member of one set +in the constraint to be active, all members of sets listed after it must also +be active (and naturally on the same node); and if a set has ``sequential="true"``, +then in order for one member of that set to be active, all members listed +before it must also be active. + +If desired, you can restrict the dependency to instances of promotable clone +resources that are in a specific role, using the set's ``role`` property. + +.. topic:: Colocation in which the members of the middle set have no interdependencies, + and the last set listed applies only to instances in the master role + + .. code-block:: xml + + + + + + + + + + + + + + + + + + + +.. topic:: Visual representation of the above example (resources are placed from + left to right) + + .. image:: ../../shared/en-US/images/pcmk-colocated-sets.png + :alt: Colocation chain + +.. note:: + + Unlike ordered sets, colocated sets do not use the ``require-all`` option. + +.. [#] While the human brain is sophisticated enough to read the constraint + in any order and choose the correct one depending on the situation, + the cluster is not quite so smart. Yet. + +.. [#] which is not the same as saying easy to follow diff --git a/doc/sphinx/Pacemaker_Explained/resources.rst b/doc/sphinx/Pacemaker_Explained/resources.rst index 592e299121..7e89b4a7d6 100644 --- a/doc/sphinx/Pacemaker_Explained/resources.rst +++ b/doc/sphinx/Pacemaker_Explained/resources.rst @@ -1,980 +1,984 @@ .. _resource: Cluster Resources ----------------- .. Convert_to_RST: [[s-resource-primitive]] == What is a Cluster Resource? == indexterm:[Resource] A resource is a service made highly available by a cluster. The simplest type of resource, a 'primitive' resource, is described in this chapter. More complex forms, such as groups and clones, are described in later chapters. Every primitive resource has a 'resource agent'. A resource agent is an external program that abstracts the service it provides and present a consistent view to the cluster. This allows the cluster to be agnostic about the resources it manages. The cluster doesn't need to understand how the resource works because it relies on the resource agent to do the right thing when given a `start`, `stop` or `monitor` command. For this reason, it is crucial that resource agents are well-tested. Typically, resource agents come in the form of shell scripts. 
However, they can be written using any technology (such as C, Python or Perl) that the author is comfortable with. [[s-resource-supported]] == Resource Classes == indexterm:[Resource,class] Pacemaker supports several classes of agents: * OCF * LSB * Upstart * Systemd * Service * Fencing * Nagios Plugins === Open Cluster Framework === indexterm:[Resource,OCF] indexterm:[OCF,Resources] indexterm:[Open Cluster Framework,Resources] The OCF standard footnote:[See https://github.com/ClusterLabs/OCF-spec/tree/master/ra . The Pacemaker implementation has been somewhat extended from the OCF specs.] is basically an extension of the Linux Standard Base conventions for init scripts to: * support parameters, * make them self-describing, and * make them extensible OCF specs have strict definitions of the exit codes that actions must return. footnote:[ The resource-agents source code includes the `ocf-tester` script, which can be useful in this regard. ] The cluster follows these specifications exactly, and giving the wrong exit code will cause the cluster to behave in ways you will likely find puzzling and annoying. In particular, the cluster needs to distinguish a completely stopped resource from one which is in some erroneous and indeterminate state. Parameters are passed to the resource agent as environment variables, with the special prefix +OCF_RESKEY_+. So, a parameter which the user thinks of as +ip+ will be passed to the resource agent as +OCF_RESKEY_ip+. The number and purpose of the parameters is left to the resource agent; however, the resource agent should use the `meta-data` command to advertise any that it supports. The OCF class is the most preferred as it is an industry standard, highly flexible (allowing parameters to be passed to agents in a non-positional manner) and self-describing. For more information, see the http://www.linux-ha.org/wiki/OCF_Resource_Agents[reference] and the 'Resource Agents' chapter of 'Pacemaker Administration'. === Linux Standard Base === indexterm:[Resource,LSB] indexterm:[LSB,Resources] indexterm:[Linux Standard Base,Resources] 'LSB' resource agents are more commonly known as 'init scripts'. If a full path is not given, they are assumed to be located in +/etc/init.d+. Commonly, they are provided by the OS distribution. In order to be used with a Pacemaker cluster, they must conform to the LSB specification. footnote:[ See http://refspecs.linux-foundation.org/LSB_3.0.0/LSB-Core-generic/LSB-Core-generic/iniscrptact.html for the LSB Spec as it relates to init scripts. ] [WARNING] ==== Many distributions or particular software packages claim LSB compliance but ship with broken init scripts. For details on how to check whether your init script is LSB-compatible, see the 'Resource Agents' chapter of 'Pacemaker Administration'. Common problematic violations of the LSB standard include: * Not implementing the +status+ operation at all * Not observing the correct exit status codes for +start+/+stop+/+status+ actions * Starting a started resource returns an error * Stopping a stopped resource returns an error ==== [IMPORTANT] ==== Remember to make sure the computer is _not_ configured to start any services at boot time -- that should be controlled by the cluster. 
==== [[s-resource-supported-systemd]] === Systemd === indexterm:[Resource,Systemd] indexterm:[Systemd,Resources] Some newer distributions have replaced the old http://en.wikipedia.org/wiki/Init#SysV-style["SysV"] style of initialization daemons and scripts with an alternative called http://www.freedesktop.org/wiki/Software/systemd[Systemd]. Pacemaker is able to manage these services _if they are present_. Instead of init scripts, systemd has 'unit files'. Generally, the services (unit files) are provided by the OS distribution, but there are online guides for converting from init scripts. footnote:[For example, http://0pointer.de/blog/projects/systemd-for-admins-3.html] [IMPORTANT] ==== Remember to make sure the computer is _not_ configured to start any services at boot time -- that should be controlled by the cluster. ==== === Upstart === indexterm:[Resource,Upstart] indexterm:[Upstart,Resources] Some newer distributions have replaced the old http://en.wikipedia.org/wiki/Init#SysV-style["SysV"] style of initialization daemons (and scripts) with an alternative called http://upstart.ubuntu.com/[Upstart]. Pacemaker is able to manage these services _if they are present_. Instead of init scripts, upstart has 'jobs'. Generally, the services (jobs) are provided by the OS distribution. [IMPORTANT] ==== Remember to make sure the computer is _not_ configured to start any services at boot time -- that should be controlled by the cluster. ==== === System Services === indexterm:[Resource,System Services] indexterm:[System Service,Resources] Since there are various types of system services (+systemd+, +upstart+, and +lsb+), Pacemaker supports a special +service+ alias which intelligently figures out which one applies to a given cluster node. This is particularly useful when the cluster contains a mix of +systemd+, +upstart+, and +lsb+. In order, Pacemaker will try to find the named service as: . an LSB init script . a Systemd unit file . an Upstart job === STONITH === indexterm:[Resource,STONITH] indexterm:[STONITH,Resources] The STONITH class is used exclusively for fencing-related resources. This is discussed later in <>. === Nagios Plugins === indexterm:[Resource,Nagios Plugins] indexterm:[Nagios Plugins,Resources] Nagios Plugins footnote:[The project has two independent forks, hosted at https://www.nagios-plugins.org/ and https://www.monitoring-plugins.org/. Output from both projects' plugins is similar, so plugins from either project can be used with pacemaker.] allow us to monitor services on remote hosts. Pacemaker is able to do remote monitoring with the plugins _if they are present_. A common use case is to configure them as resources belonging to a resource container (usually a virtual machine), and the container will be restarted if any of them has failed. Another use is to configure them as ordinary resources to be used for monitoring hosts or services via the network. The supported parameters are same as the long options of the plugin. - [[primitive-resource]] - == Resource Properties == +.. _primitive-resource: + +Resource Properties +################### + +.. Convert_to_RST: These values tell the cluster which resource agent to use for the resource, where to find that resource agent and what standards it conforms to. 
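As a quick illustration (a minimal sketch; each field is described in the
table below), a primitive that uses the OCF +IPaddr+ agent from the
+heartbeat+ provider could be declared as:

.A primitive resource declaring all four properties
=====
[source,XML]
-------
<primitive id="Public-IP" class="ocf" provider="heartbeat" type="IPaddr"/>
-------
=====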
.Properties of a Primitive Resource [width="95%",cols="1m,<6",options="header",align="center"] |========================================================= |Field |Description |id |Your name for the resource indexterm:[id,Resource] indexterm:[Resource,Property,id] |class |The standard the resource agent conforms to. Allowed values: +lsb+, +nagios+, +ocf+, +service+, +stonith+, +systemd+, +upstart+ indexterm:[class,Resource] indexterm:[Resource,Property,class] |type |The name of the Resource Agent you wish to use. E.g. +IPaddr+ or +Filesystem+ indexterm:[type,Resource] indexterm:[Resource,Property,type] |provider |The OCF spec allows multiple vendors to supply the same resource agent. To use the OCF resource agents supplied by the Heartbeat project, you would specify +heartbeat+ here. indexterm:[provider,Resource] indexterm:[Resource,Property,provider] |========================================================= The XML definition of a resource can be queried with the `crm_resource` tool. For example: ---- # crm_resource --resource Email --query-xml ---- might produce: .A system resource definition ===== [source,XML] ===== [NOTE] ===== One of the main drawbacks to system services (LSB, systemd or Upstart) resources is that they do not allow any parameters! ===== //// See https://tools.ietf.org/html/rfc5737 for choice of example IP address //// .An OCF resource definition ===== [source,XML] ------- ------- ===== .. _resource_options: Resource Options ################ .. Convert_to_RST_2: Resources have two types of options: 'meta-attributes' and 'instance attributes'. Meta-attributes apply to any type of resource, while instance attributes are specific to each resource agent. === Resource Meta-Attributes === Meta-attributes are used by the cluster to decide how a resource should behave and can be easily set using the `--meta` option of the `crm_resource` command. .Meta-attributes of a Primitive Resource [width="95%",cols="2m,2,<5",options="header",align="center"] |========================================================= |Field |Default |Description |priority |0 |If not all resources can be active, the cluster will stop lower priority resources in order to keep higher priority ones active. indexterm:[priority,Resource Option] indexterm:[Resource,Option,priority] |target-role |Started a|What state should the cluster attempt to keep this resource in? Allowed values: * +Stopped:+ Force the resource to be stopped * +Started:+ Allow the resource to be started (and in the case of <>, promoted to master if appropriate) * +Slave:+ Allow the resource to be started, but only in Slave mode if the resource is <> * +Master:+ Equivalent to +Started+ indexterm:[target-role,Resource Option] indexterm:[Resource,Option,target-role] |is-managed |TRUE |Is the cluster allowed to start and stop the resource? Allowed values: +true+, +false+ indexterm:[is-managed,Resource Option] indexterm:[Resource,Option,is-managed] |maintenance |FALSE |Similar to the +maintenance-mode+ <>, but for a single resource. If true, the resource will not be started, stopped, or monitored on any node. This differs from +is-managed+ in that monitors will not be run. Allowed values: +true+, +false+ indexterm:[maintenance,Resource Option] indexterm:[Resource,Option,maintenance] .. _resource-stickiness: placeholder .. Convert_to_RST_3: |resource-stickiness |1 for individual clone instances, 0 for all other resources |A score that will be added to the current node when a resource is already active. 
This allows running resources to stay where they are, even if they would be placed elsewhere if they were being started from a stopped state. indexterm:[resource-stickiness,Resource Option] indexterm:[Resource,Option,resource-stickiness] .. _requires: placeholder .. Convert_to_RST_4: |requires |+quorum+ for resources with a +class+ of +stonith+, otherwise +unfencing+ if unfencing is active in the cluster, otherwise +fencing+ if +stonith-enabled+ is true, otherwise +quorum+ a|Conditions under which the resource can be started Allowed values: * +nothing:+ can always be started * +quorum:+ The cluster can only start this resource if a majority of the configured nodes are active * +fencing:+ The cluster can only start this resource if a majority of the configured nodes are active _and_ any failed or unknown nodes have been <> * +unfencing:+ The cluster can only start this resource if a majority of the configured nodes are active _and_ any failed or unknown nodes have been fenced _and_ only on nodes that have been <> indexterm:[requires,Resource Option] indexterm:[Resource,Option,requires] |migration-threshold |INFINITY |How many failures may occur for this resource on a node, before this node is marked ineligible to host this resource. A value of 0 indicates that this feature is disabled (the node will never be marked ineligible); by constrast, the cluster treats INFINITY (the default) as a very large but finite number. This option has an effect only if the failed operation specifies +on-fail+ as +restart+ (the default), and additionally for failed +start+ operations, if the cluster property +start-failure-is-fatal+ is +false+. indexterm:[migration-threshold,Resource Option] indexterm:[Resource,Option,migration-threshold] |failure-timeout |0 |How many seconds to wait before acting as if the failure had not occurred, and potentially allowing the resource back to the node on which it failed. A value of 0 indicates that this feature is disabled. indexterm:[failure-timeout,Resource Option] indexterm:[Resource,Option,failure-timeout] |multiple-active |stop_start a|What should the cluster do if it ever finds the resource active on more than one node? Allowed values: * +block:+ mark the resource as unmanaged * +stop_only:+ stop all active instances and leave them that way * +stop_start:+ stop all active instances and start the resource in one location only indexterm:[multiple-active,Resource Option] indexterm:[Resource,Option,multiple-active] |allow-migrate |TRUE for ocf:pacemaker:remote resources, FALSE otherwise |Whether the cluster should try to "live migrate" this resource when it needs to be moved (see <>) |container-attribute-target | |Specific to bundle resources; see <> |remote-node | |The name of the Pacemaker Remote guest node this resource is associated with, if any. If specified, this both enables the resource as a guest node and defines the unique name used to identify the guest node. The guest must be configured to run the Pacemaker Remote daemon when it is started. +WARNING:+ This value cannot overlap with any resource or node IDs. |remote-port |3121 |If +remote-node+ is specified, the port on the guest used for its Pacemaker Remote connection. The Pacemaker Remote daemon on the guest must be configured to listen on this port. |remote-addr |value of +remote-node+ |If +remote-node+ is specified, the IP address or hostname used to connect to the guest via Pacemaker Remote. The Pacemaker Remote daemon on the guest must be configured to accept connections on this address. 
|remote-connect-timeout |60s |If +remote-node+ is specified, how long before a pending guest connection will time out. |========================================================= As an example of setting resource options, if you performed the following commands on an LSB Email resource: ------- # crm_resource --meta --resource Email --set-parameter priority --parameter-value 100 # crm_resource -m -r Email -p multiple-active -v block ------- the resulting resource definition might be: .An LSB resource with cluster options ===== [source,XML] ------- ------- ===== In addition to the cluster-defined meta-attributes described above, you may also configure arbitrary meta-attributes of your own choosing. Most commonly, this would be done for use in <>. For example, an IT department might define a custom meta-attribute to indicate which company department each resource is intended for. To reduce the chance of name collisions with cluster-defined meta-attributes added in the future, it is recommended to use a unique, organization-specific prefix for such attributes. [[s-resource-defaults]] === Setting Global Defaults for Resource Meta-Attributes === To set a default value for a resource option, add it to the +rsc_defaults+ section with `crm_attribute`. For example, ---- # crm_attribute --type rsc_defaults --name is-managed --update false ---- would prevent the cluster from starting or stopping any of the resources in the configuration (unless of course the individual resources were specifically enabled by having their +is-managed+ set to +true+). === Resource Instance Attributes === The resource agents of some resource classes (lsb, systemd and upstart 'not' among them) can be given parameters which determine how they behave and which instance of a service they control. If your resource agent supports parameters, you can add them with the `crm_resource` command. For example, ---- # crm_resource --resource Public-IP --set-parameter ip --parameter-value 192.0.2.2 ---- would create an entry in the resource like this: .An example OCF resource with instance attributes ===== [source,XML] ------- ------- ===== For an OCF resource, the result would be an environment variable called +OCF_RESKEY_ip+ with a value of +192.0.2.2+. The list of instance attributes supported by an OCF resource agent can be found by calling the resource agent with the `meta-data` command. The output contains an XML description of all the supported attributes, their purpose and default values. .Displaying the metadata for the Dummy resource agent template ===== ---- # export OCF_ROOT=/usr/lib/ocf # $OCF_ROOT/resource.d/pacemaker/Dummy meta-data ---- [source,XML] ------- 1.0 This is a Dummy Resource Agent. It does absolutely nothing except keep track of whether its running or not. Its purpose in life is for testing and to serve as a template for RA writers. NB: Please pay attention to the timeouts specified in the actions section below. They should be meaningful for the kind of resource the agent manages. They should be the minimum advised timeouts, but they shouldn't/cannot cover _all_ possible resource instances. So, try to be neither overly generous nor too stingy, but moderate. The minimum timeouts should never be below 10 seconds. Example stateless resource agent Location to store the resource state in. State file Fake attribute that can be changed to cause a reload Fake attribute that can be changed to cause a reload Number of seconds to sleep during operations. 
This can be used to test how the cluster reacts to operation timeouts. Operation sleep duration in seconds. ------- ===== .. _operation: Resource Operations ################### .. Convert_to_RST_5: indexterm:[Resource,Action] 'Operations' are actions the cluster can perform on a resource by calling the resource agent. Resource agents must support certain common operations such as start, stop, and monitor, and may implement any others. Operations may be explicitly configured for two purposes: to override defaults for options (such as timeout) that the cluster will use whenever it initiates the operation, and to run an operation on a recurring basis (for example, to monitor the resource for failure). .An OCF resource with a non-default start timeout ===== [source,XML] ------- ------- ===== Pacemaker identifies operations by a combination of name and interval, so this combination must be unique for each resource. That is, you should not configure two operations for the same resource with the same name and interval. .. _operation_properties: Operation Properties ____________________ .. Convert_to_RST_6: Operation properties may be specified directly in the +op+ element as XML attributes, or in a separate +meta_attributes+ block as +nvpair+ elements. XML attributes take precedence over +nvpair+ elements if both are specified. .Properties of an Operation [width="95%",cols="2m,3,<6",options="header",align="center"] |========================================================= |Field |Default |Description |id | |A unique name for the operation. indexterm:[id,Action Property] indexterm:[Action,Property,id] |name | |The action to perform. This can be any action supported by the agent; common values include +monitor+, +start+, and +stop+. indexterm:[name,Action Property] indexterm:[Action,Property,name] |interval |0 |How frequently (in seconds) to perform the operation. A value of 0 means "when needed". A positive value defines a 'recurring action', which is typically used with <>. indexterm:[interval,Action Property] indexterm:[Action,Property,interval] |timeout | |How long to wait before declaring the action has failed indexterm:[timeout,Action Property] indexterm:[Action,Property,timeout] |on-fail a|Varies by action: * +stop+: +fence+ if +stonith-enabled+ is true or +block+ otherwise * +demote+: +on-fail+ of the +monitor+ action with +role+ set to +Master+, if present, enabled, and configured to a value other than +demote+, or +restart+ otherwise * all other actions: +restart+ a|The action to take if this action ever fails. Allowed values: * +ignore:+ Pretend the resource did not fail. * +block:+ Don't perform any further operations on the resource. * +stop:+ Stop the resource and do not start it elsewhere. * +demote:+ Demote the resource, without a full restart. This is valid only for +promote+ actions, and for +monitor+ actions with both a nonzero +interval+ and +role+ set to +Master+; for any other action, a configuration error will be logged, and the default behavior will be used. * +restart:+ Stop the resource and start it again (possibly on a different node). * +fence:+ STONITH the node on which the resource failed. * +standby:+ Move _all_ resources away from the node on which the resource failed. indexterm:[on-fail,Action Property] indexterm:[Action,Property,on-fail] |enabled |TRUE |If +false+, ignore this operation definition. 
This is typically used to pause a particular recurring +monitor+ operation; for instance, it can complement the respective resource being unmanaged (+is-managed=false+), as this alone will <>. Disabling the operation does not suppress all actions of the given type. Allowed values: +true+, +false+. indexterm:[enabled,Action Property] indexterm:[Action,Property,enabled] |record-pending |TRUE |If +true+, the intention to perform the operation is recorded so that GUIs and CLI tools can indicate that an operation is in progress. This is best set as an _operation default_ (see <>). Allowed values: +true+, +false+. indexterm:[enabled,Action Property] indexterm:[Action,Property,enabled] |role | |Run the operation only on node(s) that the cluster thinks should be in the specified role. This only makes sense for recurring +monitor+ operations. Allowed (case-sensitive) values: +Stopped+, +Started+, and in the case of <>, +Slave+ and +Master+. indexterm:[role,Action Property] indexterm:[Action,Property,role] |========================================================= [NOTE] ==== When +on-fail+ is set to +demote+, recovery from failure by a successful demote causes the cluster to recalculate whether and where a new instance should be promoted. The node with the failure is eligible, so if master scores have not changed, it will be promoted again. There is no direct equivalent of +migration-threshold+ for the master role, but the same effect can be achieved with a location constraint using a <> with a node attribute expression for the resource's fail count. For example, to immediately ban the master role from a node with any failed promote or master monitor: [source,XML] ---- ---- This example assumes that there is a promotable clone of the +my_primitive+ resource (note that the primitive name, not the clone name, is used in the rule), and that there is a recurring 10-second-interval monitor configured for the master role (fail count attributes specify the interval in milliseconds). ==== [[s-resource-monitoring]] === Monitoring Resources for Failure === When Pacemaker first starts a resource, it runs one-time +monitor+ operations (referred to as 'probes') to ensure the resource is running where it's supposed to be, and not running where it's not supposed to be. (This behavior can be affected by the +resource-discovery+ location constraint property.) Other than those initial probes, Pacemaker will 'not' (by default) check that the resource continues to stay healthy. footnote:[Currently, anyway. Automatic monitoring operations may be added in a future version of Pacemaker.] You must configure +monitor+ operations explicitly to perform these checks. .An OCF resource with a recurring health check ===== [source,XML] ------- ------- ===== By default, a +monitor+ operation will ensure that the resource is running where it is supposed to. The +target-role+ property can be used for further checking. For example, if a resource has one +monitor+ operation with +interval=10 role=Started+ and a second +monitor+ operation with +interval=11 role=Stopped+, the cluster will run the first monitor on any nodes it thinks 'should' be running the resource, and the second monitor on any nodes that it thinks 'should not' be running the resource (for the truly paranoid, who want to know when an administrator manually starts a service by mistake). [NOTE] ==== Currently, monitors with +role=Stopped+ are not implemented for <> resources. 
==== [[s-monitoring-unmanaged]] === Monitoring Resources When Administration is Disabled === Recurring +monitor+ operations behave differently under various administrative settings: * When a resource is unmanaged (by setting +is-managed=false+): No monitors will be stopped. + If the unmanaged resource is stopped on a node where the cluster thinks it should be running, the cluster will detect and report that it is not, but it will not consider the monitor failed, and will not try to start the resource until it is managed again. + Starting the unmanaged resource on a different node is strongly discouraged and will at least cause the cluster to consider the resource failed, and may require the resource's +target-role+ to be set to +Stopped+ then +Started+ to be recovered. * When a node is put into standby: All resources will be moved away from the node, and all +monitor+ operations will be stopped on the node, except those specifying +role+ as +Stopped+ (which will be newly initiated if appropriate). * When the cluster is put into maintenance mode: All resources will be marked as unmanaged. All monitor operations will be stopped, except those specifying +role+ as +Stopped+ (which will be newly initiated if appropriate). As with single unmanaged resources, starting a resource on a node other than where the cluster expects it to be will cause problems. [[s-operation-defaults]] === Setting Global Defaults for Operations === You can change the global default values for operation properties in a given cluster. These are defined in an +op_defaults+ section of the CIB's +configuration+ section, and can be set with `crm_attribute`. For example, ---- # crm_attribute --type op_defaults --name timeout --update 20s ---- would default each operation's +timeout+ to 20 seconds. If an operation's definition also includes a value for +timeout+, then that value would be used for that operation instead. === When Implicit Operations Take a Long Time === The cluster will always perform a number of implicit operations: +start+, +stop+ and a non-recurring +monitor+ operation used at startup to check whether the resource is already active. If one of these is taking too long, then you can create an entry for them and specify a longer timeout. .An OCF resource with custom timeouts for its implicit actions ===== [source,XML] ------- ------- ===== === Multiple Monitor Operations === Provided no two operations (for a single resource) have the same name and interval, you can have as many +monitor+ operations as you like. In this way, you can do a superficial health check every minute and progressively more intense ones at higher intervals. To tell the resource agent what kind of check to perform, you need to provide each monitor with a different value for a common parameter. The OCF standard creates a special parameter called +OCF_CHECK_LEVEL+ for this purpose and dictates that it is "made available to the resource agent without the normal +OCF_RESKEY+ prefix". Whatever name you choose, you can specify it by adding an +instance_attributes+ block to the +op+ tag. It is up to each resource agent to look for the parameter and decide how to use it. .An OCF resource with two recurring health checks, performing different levels of checks specified via +OCF_CHECK_LEVEL+. ===== [source,XML] ------- ------- ===== === Disabling a Monitor Operation === The easiest way to stop a recurring monitor is to just delete it. However, there can be times when you only want to disable it temporarily. 
In such cases, simply add +enabled=false+ to the operation's definition. .Example of an OCF resource with a disabled health check ===== [source,XML] ------- ------- ===== This can be achieved from the command line by executing: ---- # cibadmin --modify --xml-text '' ---- Once you've done whatever you needed to do, you can then re-enable it with ---- # cibadmin --modify --xml-text '' ----
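Concretely (a sketch assuming a hypothetical operation id of +public-ip-check+
for a 60-second monitor on the +Public-IP+ resource; substitute the operation
id from your own configuration), the disabled operation and the corresponding
`cibadmin` calls could look like:

[source,XML]
-------
<op id="public-ip-check" name="monitor" interval="60s" enabled="false"/>
-------

----
# cibadmin --modify --xml-text '<op id="public-ip-check" enabled="false"/>'
# cibadmin --modify --xml-text '<op id="public-ip-check" enabled="true"/>'
----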