diff --git a/doc/sphinx/Pacemaker_Explained/acls.rst b/doc/sphinx/Pacemaker_Explained/acls.rst index 2beb7475c7..878f8f64b3 100644 --- a/doc/sphinx/Pacemaker_Explained/acls.rst +++ b/doc/sphinx/Pacemaker_Explained/acls.rst @@ -1,476 +1,476 @@ .. index:: single: Access Control List (ACL) .. _acl: Access Control Lists (ACLs) --------------------------- By default, the ``root`` user or any user in the |CRM_DAEMON_GROUP| group can modify Pacemaker's CIB without restriction. Pacemaker offers *access control lists (ACLs)* to provide more fine-grained authorization. .. important:: Being able to modify the CIB's resource section allows a user to run any executable file as root, by configuring it as an LSB resource with a full path. ACL Prerequisites ################# In order to use ACLs: * The ``enable-acl`` :ref:`cluster option ` must be set to true. * Desired users must have user accounts in the |CRM_DAEMON_GROUP| group on all nodes in the cluster. * If your CIB was created before Pacemaker 1.1.12, it might need to be updated to the current schema (using ``cibadmin --upgrade`` or a higher-level tool equivalent) in order to use the syntax documented here. * Prior to the 2.1.0 release, the Pacemaker software had to have been built with ACL support. If you are using an older release, your installation supports ACLs only if the output of the command ``pacemakerd --features`` contains ``acls``. In newer versions, ACLs are always enabled. .. important:: ``enable-acl`` should be set either by the root user, or as part of a batch of CIB changes including roles and users. Otherwise, the user setting it might lock themselves out from making any further changes. .. index:: single: Access Control List (ACL); acls pair: acls; XML element ACL Configuration ################# ACLs are specified within an ``acls`` element of the CIB. The ``acls`` element may contain any number of ``acl_role``, ``acl_target``, and ``acl_group`` elements. .. index:: single: Access Control List (ACL); acl_role pair: acl_role; XML element ACL Roles ######### An ACL *role* is a collection of permissions allowing or denying access to particular portions of the CIB. A role is configured with an ``acl_role`` element in the CIB ``acls`` section. .. table:: **Properties of an acl_role element** :widths: 1 3 +------------------+-----------------------------------------------------------+ | Attribute | Description | +==================+===========================================================+ | id | .. index:: | | | single: acl_role; id (attribute) | | | single: id; acl_role attribute | | | single: attribute; id (acl_role) | | | | | | A unique name for the role *(required)* | +------------------+-----------------------------------------------------------+ | description | .. index:: | | | single: acl_role; description (attribute) | | | single: description; acl_role attribute | | | single: attribute; description (acl_role) | | | | - | | Arbitrary text (not used by Pacemaker) | + | | Arbitrary text for user's use (ignored by Pacemaker) | +------------------+-----------------------------------------------------------+ An ``acl_role`` element may contain any number of ``acl_permission`` elements. .. index:: single: Access Control List (ACL); acl_permission pair: acl_permission; XML element
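For example, a role might grant write access to the resources section and read access to status, as in this minimal sketch (the role and permission IDs are illustrative):

.. topic:: An ACL role with two permissions

   .. code-block:: xml

      <acl_role id="operator" description="grant limited access">
         <acl_permission id="operator-resources" kind="write"
                         xpath="//resources"/>
         <acl_permission id="operator-status" kind="read"
                         xpath="/cib/status"/>
      </acl_role>

The attributes accepted by ``acl_permission`` are described in the following table.

..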
table:: **Properties of an acl_permission element** :widths: 1 3 +------------------+-----------------------------------------------------------+ | Attribute | Description | +==================+===========================================================+ | id | .. index:: | | | single: acl_permission; id (attribute) | | | single: id; acl_permission attribute | | | single: attribute; id (acl_permission) | | | | | | A unique name for the permission *(required)* | +------------------+-----------------------------------------------------------+ | description | .. index:: | | | single: acl_permission; description (attribute) | | | single: description; acl_permission attribute | | | single: attribute; description (acl_permission) | | | | - | | Arbitrary text (not used by Pacemaker) | + | | Arbitrary text for user's use (ignored by Pacemaker) | +------------------+-----------------------------------------------------------+ | kind | .. index:: | | | single: acl_permission; kind (attribute) | | | single: kind; acl_permission attribute | | | single: attribute; kind (acl_permission) | | | | | | The access being granted. Allowed values are ``read``, | | | ``write``, and ``deny``. A value of ``write`` grants both | | | read and write access. | +------------------+-----------------------------------------------------------+ | object-type | .. index:: | | | single: acl_permission; object-type (attribute) | | | single: object-type; acl_permission attribute | | | single: attribute; object-type (acl_permission) | | | | | | The name of an XML element in the CIB to which the | | | permission applies. (Exactly one of ``object-type``, | | | ``xpath``, and ``reference`` must be specified for a | | | permission.) | +------------------+-----------------------------------------------------------+ | attribute | .. index:: | | | single: acl_permission; attribute (attribute) | | | single: attribute; acl_permission attribute | | | single: attribute; attribute (acl_permission) | | | | | | If specified, the permission applies only to | | | ``object-type`` elements that have this attribute set (to | | | any value). If not specified, the permission applies to | | | all ``object-type`` elements. May only be used with | | | ``object-type``. | +------------------+-----------------------------------------------------------+ | reference | .. index:: | | | single: acl_permission; reference (attribute) | | | single: reference; acl_permission attribute | | | single: attribute; reference (acl_permission) | | | | | | The ID of an XML element in the CIB to which the | | | permission applies. (Exactly one of ``object-type``, | | | ``xpath``, and ``reference`` must be specified for a | | | permission.) | +------------------+-----------------------------------------------------------+ | xpath | .. index:: | | | single: acl_permission; xpath (attribute) | | | single: xpath; acl_permission attribute | | | single: attribute; xpath (acl_permission) | | | | | | An `XPath `_ | | | specification selecting an XML element in the CIB to | | | which the permission applies. Attributes may be specified | | | in the XPath to select particular elements, but the | | | permissions apply to the entire element. (Exactly one of | | | ``object-type``, ``xpath``, and ``reference`` must be | | | specified for a permission.) | +------------------+-----------------------------------------------------------+ .. important:: * Permissions are applied to the selected XML element's entire XML subtree (all elements enclosed within it). 
* Write permission grants the ability to create, modify, or remove the element and its subtree, and also the ability to create any "scaffolding" elements (enclosing elements that do not have attributes other than an ID). * Permissions for more specific matches (more deeply nested elements) take precedence over more general ones. * If multiple permissions are configured for the same match (for example, in different roles applied to the same user), any ``deny`` permission takes precedence, then ``write``, and lastly ``read``. ACL Targets and Groups ###################### ACL targets correspond to user accounts on the system. .. index:: single: Access Control List (ACL); acl_target pair: acl_target; XML element .. table:: **Properties of an acl_target element** :widths: 1 3 +------------------+-----------------------------------------------------------+ | Attribute | Description | +==================+===========================================================+ | id | .. index:: | | | single: acl_target; id (attribute) | | | single: id; acl_target attribute | | | single: attribute; id (acl_target) | | | | | | A unique identifier for the target (if ``name`` is not | | | specified, this must be the name of the user account) | | | *(required)* | +------------------+-----------------------------------------------------------+ | name | .. index:: | | | single: acl_target; name (attribute) | | | single: name; acl_target attribute | | | single: attribute; name (acl_target) | | | | | | If specified, the user account name (this allows you to | | | specify a user name that is already used as the ``id`` | | | for some other configuration element) *(since 2.1.5)* | +------------------+-----------------------------------------------------------+ ACL groups correspond to groups on the system. Any role configured for these groups applies to all users in that group *(since 2.1.5)*. .. index:: single: Access Control List (ACL); acl_group pair: acl_group; XML element .. table:: **Properties of an acl_group element** :widths: 1 3 +------------------+-----------------------------------------------------------+ | Attribute | Description | +==================+===========================================================+ | id | .. index:: | | | single: acl_group; id (attribute) | | | single: id; acl_group attribute | | | single: attribute; id (acl_group) | | | | | | A unique identifier for the group (if ``name`` is not | | | specified, this must be the group name) *(required)* | +------------------+-----------------------------------------------------------+ | name | .. index:: | | | single: acl_group; name (attribute) | | | single: name; acl_group attribute | | | single: attribute; name (acl_group) | | | | | | If specified, the group name (this allows you to specify | | | a group name that is already used as the ``id`` for some | | | other configuration element) | +------------------+-----------------------------------------------------------+ Each ``acl_target`` and ``acl_group`` element may contain any number of ``role`` elements. .. note:: If the system users and groups are defined by some network service (such as LDAP), the cluster itself will be unaffected by outages in the service, but affected users and groups will not be able to make changes to the CIB. .. index:: single: Access Control List (ACL); role pair: role; XML element
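For example (a sketch; the user name, group name, and IDs are illustrative), a target and a group are each granted a role by enclosing ``role`` elements:

.. topic:: An ACL target and an ACL group referencing a role

   .. code-block:: xml

      <acls>
         <acl_role id="read-only">
            <acl_permission id="read-only-cib" kind="read" xpath="/cib"/>
         </acl_role>
         <acl_target id="alice">
            <role id="read-only"/>
         </acl_target>
         <acl_group id="admins">
            <role id="read-only"/>
         </acl_group>
      </acls>

The ``role`` element itself has a single attribute, described in the following table.

..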
table:: **Properties of a role element** :widths: 1 3 +------------------+-----------------------------------------------------------+ | Attribute | Description | +==================+===========================================================+ | id | .. index:: | | | single: role; id (attribute) | | | single: id; role attribute | | | single: attribute; id (role) | | | | | | The ``id`` of an ``acl_role`` element that specifies | | | permissions granted to the enclosing target or group. | +------------------+-----------------------------------------------------------+ .. important:: The ``root`` and |CRM_DAEMON_USER| user accounts always have full access to the CIB, regardless of ACLs. For all other user accounts, when ``enable-acl`` is true, permission to all parts of the CIB is denied by default (permissions must be explicitly granted). ACLs and Pacemaker Remote Nodes ############################### ACLs apply differently on Pacemaker Remote nodes, which are assumed to be special-purpose hosts without typical user accounts. Instead, CIB modifications coming from a Pacemaker Remote node use the node's name as the ACL user name, and ``pacemaker-remote`` as the role. ACL Examples ############ .. code-block:: xml In the above example, the user ``alice`` has the minimal permissions necessary to run basic Pacemaker CLI tools, including using ``crm_mon`` to view the cluster status, without being able to modify anything. The user ``bob`` can view the entire configuration and status of the cluster, but not make any changes. The user ``carol`` can read everything, and change selected cluster properties as well as resource roles and location constraints. Finally, ``dave`` has full read and write access to the entire CIB. Looking at the ``minimal`` role in more depth, it is designed to allow read access to the ``cib`` tag itself, while denying access to particular portions of its subtree (which is the entire CIB). This is because the DC node is indicated in the ``cib`` tag, so ``crm_mon`` will not be able to report the DC otherwise. However, this does change the security model to allow by default, since any portions of the CIB not explicitly denied will be readable. The ``cib`` read access could be removed and replaced with read access to just the ``crm_config`` and ``status`` sections, for a safer approach at the cost of not seeing the DC in status output. For a simpler configuration, the ``minimal`` role allows read access to the entire ``crm_config`` section, which contains cluster properties. It would be possible to allow read access to specific properties instead (such as ``stonith-enabled``, ``dc-uuid``, ``have-quorum``, and ``cluster-name``) to restrict access further while still allowing status output, but cluster properties are unlikely to be considered sensitive. ACL Limitations ############### Actions performed via IPC rather than the CIB _____________________________________________ ACLs apply *only* to the CIB. That means ACLs apply to command-line tools that operate by reading or writing the CIB, such as ``crm_attribute`` when managing permanent node attributes, ``crm_mon``, and ``cibadmin``. However, command-line tools that communicate directly with Pacemaker daemons via IPC are not affected by ACLs. For example, users in the |CRM_DAEMON_GROUP| group may still do the following, regardless of ACLs: * Query transient node attribute values using ``crm_attribute`` and ``attrd_updater``. * Query basic node information using ``crm_node``. 
* Erase resource operation history using ``crm_resource``. * Query fencing configuration information, and execute fencing against nodes, using ``stonith_admin``. ACLs and Pacemaker Remote _________________________ ACLs apply to commands run on Pacemaker Remote nodes using the Pacemaker Remote node's name as the ACL user name. The idea is that Pacemaker Remote nodes (especially virtual machines and containers) are likely to be purpose-built and have different user accounts from full cluster nodes. diff --git a/doc/sphinx/Pacemaker_Explained/alerts.rst b/doc/sphinx/Pacemaker_Explained/alerts.rst index f4cad72cb7..27000ed941 100644 --- a/doc/sphinx/Pacemaker_Explained/alerts.rst +++ b/doc/sphinx/Pacemaker_Explained/alerts.rst @@ -1,277 +1,284 @@ .. _alerts: .. index:: single: alert single: resource; alert single: node; alert single: fencing; alert pair: XML element; alert pair: XML element; alerts Alerts ------ *Alerts* may be configured to take some external action when a cluster event occurs (node failure, resource starting or stopping, etc.). .. index:: pair: alert; agent Alert Agents ############ As with resource agents, the cluster calls an external program (an *alert agent*) to handle alerts. The cluster passes information about the event to the agent via environment variables. Agents can do anything desired with this information (send an e-mail, log to a file, update a monitoring system, etc.). .. topic:: Simple alert configuration .. code-block:: xml In the example above, the cluster will call ``my-script.sh`` for each event. Multiple alert agents may be configured; the cluster will call all of them for each event. Alert agents will be called only on cluster nodes. They will be called for events involving Pacemaker Remote nodes, but they will never be called *on* those nodes. For more information about sample alert agents provided by Pacemaker and about developing custom alert agents, see the *Pacemaker Administration* document. .. index:: single: alert; recipient pair: XML element; recipient Alert Recipients ################ Usually, alerts are directed towards a recipient. Thus, each alert may be additionally configured with one or more recipients. The cluster will call the agent separately for each recipient. .. topic:: Alert configuration with recipient .. code-block:: xml In the above example, the cluster will call ``my-script.sh`` for each event, passing the recipient ``some-address`` as an environment variable. The recipient may be anything the alert agent can recognize -- an IP address, an e-mail address, a file name, whatever the particular agent supports. .. index:: single: alert; meta-attributes single: meta-attribute; alert meta-attributes Alert Meta-Attributes ##################### As with resources, meta-attributes can be configured for alerts to change whether and how Pacemaker calls them. -.. table:: **Meta-Attributes of an Alert** +.. table:: **Meta-Attributes of an Alert or Recipient** :class: longtable :widths: 1 1 3 +------------------+---------------+-----------------------------------------------------+ | Meta-Attribute | Default | Description | +==================+===============+=====================================================+ + | description | | .. 
index:: | + | | | single: alert; meta-attribute, description | + | | | single: meta-attribute; description (alert) | + | | | single: description; alert meta-attribute | + | | | | + | | | Arbitrary text for user's use (ignored by Pacemaker) | + +------------------+---------------+-----------------------------------------------------+ | enabled | true | .. index:: | | | | single: alert; meta-attribute, enabled | | | | single: meta-attribute; enabled (alert) | | | | single: enabled; alert meta-attribute | | | | | | | | If false for an alert, the alert will not be used. | | | | If true for an alert and false for a particular | | | | recipient of that alert, that recipient will not be | | | | used. *(since 2.1.6)* | +------------------+---------------+-----------------------------------------------------+ | timestamp-format | %H:%M:%S.%06N | .. index:: | | | | single: alert; meta-attribute, timestamp-format | | | | single: meta-attribute; timestamp-format (alert) | | | | single: timestamp-format; alert meta-attribute | | | | | | | | Format the cluster will use when sending the | | | | event's timestamp to the agent. This is a string as | | | | used with the ``date(1)`` command. | +------------------+---------------+-----------------------------------------------------+ | timeout | 30s | .. index:: | | | | single: alert; meta-attribute, timeout | | | | single: meta-attribute; timeout (alert) | | | | single: timeout; alert meta-attribute | | | | | | | | If the alert agent does not complete within this | | | | amount of time, it will be terminated. | +------------------+---------------+-----------------------------------------------------+ Meta-attributes can be configured per alert and/or per recipient. .. topic:: Alert configuration with meta-attributes .. code-block:: xml In the above example, ``my-script.sh`` will be called twice for each event, with each call using a 15-second timeout. One call will be passed the recipient ``someuser@example.com`` and a timestamp in the format ``%D %H:%M``, while the other call will be passed the recipient ``otheruser@example.com`` and a timestamp in the format ``%c``. .. index:: single: alert; instance attributes single: instance attribute; alert instance attributes Alert Instance Attributes ######################### As with resource agents, agent-specific configuration values may be configured as instance attributes. These will be passed to the agent as additional environment variables. The number, names, and allowed values of these instance attributes are completely up to the particular agent. .. topic:: Alert configuration with instance attributes .. code-block:: xml .. index:: single: alert; filters pair: XML element; select pair: XML element; select_nodes pair: XML element; select_fencing pair: XML element; select_resources pair: XML element; select_attributes pair: XML element; attribute Alert Filters ############# By default, an alert agent will be called for node events, fencing events, and resource events. An agent may choose to ignore certain types of events, but there is still the overhead of calling it for those events. To eliminate that overhead, you may select which types of events the agent should receive. Alert filters are configured within a ``select`` element inside an ``alert`` element.
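For example, the following sketch (the alert ID and script path are illustrative) restricts an agent to node and fencing events:

.. topic:: Alert configuration with a filter

   .. code-block:: xml

      <configuration>
         <alerts>
            <alert id="my-alert" path="/path/to/my-script.sh">
               <select>
                  <select_nodes />
                  <select_fencing />
               </select>
            </alert>
         </alerts>
      </configuration>

The possible filter elements are listed in the following table.

..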
list-table:: **Possible alert filters** :class: longtable :widths: 1 3 :header-rows: 1 * - Name - Events alerted * - select_nodes - A node joins or leaves the cluster (whether at the cluster layer for cluster nodes, or via a remote connection for Pacemaker Remote nodes). * - select_fencing - Fencing or unfencing of a node completes (whether successfully or not). * - select_resources - A resource action other than meta-data completes (whether successfully or not). * - select_attributes - A transient attribute value update is sent to the CIB. .. topic:: Alert configuration to receive only node events and fencing events .. code-block:: xml With ```` (the only event type not enabled by default), the agent will receive alerts when a node attribute changes. If you wish the agent to be called only when certain attributes change, you can configure that as well. .. topic:: Alert configuration to be called when certain node attributes change .. code-block:: xml Node attribute alerts are currently considered experimental. Alerts may be limited to attributes set via ``attrd_updater``, and agents may be called multiple times with the same attribute value. diff --git a/doc/sphinx/Pacemaker_Explained/collective.rst b/doc/sphinx/Pacemaker_Explained/collective.rst index 3665557574..8a271dd1b8 100644 --- a/doc/sphinx/Pacemaker_Explained/collective.rst +++ b/doc/sphinx/Pacemaker_Explained/collective.rst @@ -1,1199 +1,1193 @@ .. index: single: collective resource single: resource; collective Collective Resources -------------------- Pacemaker supports several types of *collective* resources, which consist of multiple, related resource instances. .. index: single: group resource single: resource; group .. _group-resources: Groups - A Syntactic Shortcut ############################# One of the most common elements of a cluster is a set of resources that need to be located together, start sequentially, and stop in the reverse order. To simplify this configuration, we support the concept of groups. .. topic:: A group of two primitive resources .. code-block:: xml Although the example above contains only two resources, there is no limit to the number of resources a group can contain. The example is also sufficient to explain the fundamental properties of a group: * Resources are started in the order they appear in (**Public-IP** first, then **Email**) * Resources are stopped in the reverse order to which they appear in (**Email** first, then **Public-IP**) If a resource in the group can't run anywhere, then nothing after that is allowed to run, too. * If **Public-IP** can't run anywhere, neither can **Email**; * but if **Email** can't run anywhere, this does not affect **Public-IP** in any way The group above is logically equivalent to writing: .. topic:: How the cluster sees a group resource .. code-block:: xml Obviously as the group grows bigger, the reduced configuration effort can become significant. Another (typical) example of a group is a DRBD volume, the filesystem mount, an IP address, and an application that uses them. .. index:: pair: XML element; group Group Properties ________________ .. table:: **Properties of a Group Resource** :widths: 1 4 +-------------+------------------------------------------------------------------+ | Field | Description | +=============+==================================================================+ | id | .. 
index:: | | | single: group; property, id | | | single: property; id (group) | | | single: id; group property | | | | | | A unique name for the group | +-------------+------------------------------------------------------------------+ | description | .. index:: | | | single: group; attribute, description | | | single: attribute; description (group) | | | single: description; group attribute | | | | - | | An optional description of the group, for the user's own | - | | purposes. | - | | E.g. ``resources needed for website`` | + | | Arbitrary text for user's use (ignored by Pacemaker) | +-------------+------------------------------------------------------------------+ Group Options _____________ Groups inherit the ``priority``, ``target-role``, and ``is-managed`` properties from primitive resources. See :ref:`resource_options` for information about those properties. Group Instance Attributes _________________________ Groups have no instance attributes. However, any that are set for the group object will be inherited by the group's children. Group Contents ______________ Groups may only contain a collection of cluster resources (see :ref:`primitive-resource`). To refer to a child of a group resource, just use the child's ``id`` instead of the group's. Group Constraints _________________ Although it is possible to reference a group's children in constraints, it is usually preferable to reference the group itself. .. topic:: Some constraints involving groups .. code-block:: xml .. index:: pair: resource-stickiness; group Group Stickiness ________________ Stickiness, the measure of how much a resource wants to stay where it is, is additive in groups. Every active resource of the group will contribute its stickiness value to the group's total. So if the default ``resource-stickiness`` is 100, and a group has seven members, five of which are active, then the group as a whole will prefer its current location with a score of 500. .. index:: single: clone single: resource; clone .. _s-resource-clone: Clones - Resources That Can Have Multiple Active Instances ########################################################## *Clone* resources are resources that can have more than one copy active at the same time. This allows you, for example, to run a copy of a daemon on every node. You can clone any primitive or group resource [#]_. Anonymous versus Unique Clones ______________________________ A clone resource is configured to be either *anonymous* or *globally unique*. Anonymous clones are the simplest. These behave completely identically everywhere they are running. Because of this, there can be only one instance of an anonymous clone active per node. The instances of globally unique clones are distinct entities. All instances are launched identically, but one instance of the clone is not identical to any other instance, whether running on the same node or a different node. As an example, a cloned IP address can use special kernel functionality such that each instance handles a subset of requests for the same IP address. .. index:: single: promotable clone single: resource; promotable .. _s-resource-promotable: Promotable clones _________________ If a clone is *promotable*, its instances can perform a special role that Pacemaker will manage via the ``promote`` and ``demote`` actions of the resource agent. Services that support such a special role have various terms for the special role and the default role: primary and secondary, master and replica, controller and worker, etc. 
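Whatever terms the service uses, the clone itself is configured like any other clone, with the ``promotable`` meta-attribute set to ``true``. The following is a minimal sketch (the ``pgsql`` agent stands in for any agent that implements ``promote`` and ``demote``):

.. topic:: A promotable clone of a database resource

   .. code-block:: xml

      <clone id="pgsql-clone">
         <meta_attributes id="pgsql-clone-meta">
            <nvpair id="pgsql-clone-promotable" name="promotable" value="true"/>
         </meta_attributes>
         <primitive id="pgsql" class="ocf" provider="heartbeat" type="pgsql"/>
      </clone>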
Pacemaker uses the terms *promoted* and *unpromoted* to be agnostic to what the service calls them or what they do. All that Pacemaker cares about is that an instance comes up in the unpromoted role when started, and the resource agent supports the ``promote`` and ``demote`` actions to manage entering and exiting the promoted role. .. index:: pair: XML element; clone Clone Properties ________________ .. table:: **Properties of a Clone Resource** :widths: 1 4 +-------------+------------------------------------------------------------------+ | Field | Description | +=============+==================================================================+ | id | .. index:: | | | single: clone; property, id | | | single: property; id (clone) | | | single: id; clone property | | | | | | A unique name for the clone | +-------------+------------------------------------------------------------------+ | description | .. index:: | | | single: clone; attribute, description | | | single: attribute; description (clone) | | | single: description; clone attribute | | | | - | | An optional description of the clone, for the user's own | - | | purposes. | - | | E.g. ``IP address for website`` | + | | Arbitrary text for user's use (ignored by Pacemaker) | +-------------+------------------------------------------------------------------+ .. index:: pair: options; clone Clone Options _____________ :ref:`Options ` inherited from primitive resources: ``priority, target-role, is-managed`` .. table:: **Clone-specific configuration options** :class: longtable :widths: 1 1 3 +-------------------+-----------------+-------------------------------------------------------+ | Field | Default | Description | +===================+=================+=======================================================+ | globally-unique | **true** if | .. index:: | | | clone-node-max | single: clone; option, globally-unique | | | is greater than | single: option; globally-unique (clone) | | | 1, otherwise | single: globally-unique; clone option | | | **false** | | | | | If **true**, each clone instance performs a | | | | distinct function, such that a single node can run | | | | more than one instance at the same time | +-------------------+-----------------+-------------------------------------------------------+ | clone-max | 0 | .. index:: | | | | single: clone; option, clone-max | | | | single: option; clone-max (clone) | | | | single: clone-max; clone option | | | | | | | | The maximum number of clone instances that can | | | | be started across the entire cluster. If 0, the | | | | number of nodes in the cluster will be used. | +-------------------+-----------------+-------------------------------------------------------+ | clone-node-max | 1 | .. index:: | | | | single: clone; option, clone-node-max | | | | single: option; clone-node-max (clone) | | | | single: clone-node-max; clone option | | | | | | | | If the clone is globally unique, this is the maximum | | | | number of clone instances that can be started | | | | on a single node | +-------------------+-----------------+-------------------------------------------------------+ | clone-min | 0 | .. index:: | | | | single: clone; option, clone-min | | | | single: option; clone-min (clone) | | | | single: clone-min; clone option | | | | | | | | Require at least this number of clone instances | | | | to be runnable before allowing resources | | | | depending on the clone to be runnable. A value | | | | of 0 means require all clone instances to be | | | | runnable. 
| +-------------------+-----------------+-------------------------------------------------------+ | notify | false | .. index:: | | | | single: clone; option, notify | | | | single: option; notify (clone) | | | | single: notify; clone option | | | | | | | | Call the resource agent's **notify** action for | | | | all active instances, before and after starting | | | | or stopping any clone instance. The resource | | | | agent must support this action. | | | | Allowed values: **false**, **true** | +-------------------+-----------------+-------------------------------------------------------+ | ordered | false | .. index:: | | | | single: clone; option, ordered | | | | single: option; ordered (clone) | | | | single: ordered; clone option | | | | | | | | If **true**, clone instances must be started | | | | sequentially instead of in parallel. | | | | Allowed values: **false**, **true** | +-------------------+-----------------+-------------------------------------------------------+ | interleave | false | .. index:: | | | | single: clone; option, interleave | | | | single: option; interleave (clone) | | | | single: interleave; clone option | | | | | | | | When this clone is ordered relative to another | | | | clone, if this option is **false** (the default), | | | | the ordering is relative to *all* instances of | | | | the other clone, whereas if this option is | | | | **true**, the ordering is relative only to | | | | instances on the same node. | | | | Allowed values: **false**, **true** | +-------------------+-----------------+-------------------------------------------------------+ | promotable | false | .. index:: | | | | single: clone; option, promotable | | | | single: option; promotable (clone) | | | | single: promotable; clone option | | | | | | | | If **true**, clone instances can perform a | | | | special role that Pacemaker will manage via the | | | | resource agent's **promote** and **demote** | | | | actions. The resource agent must support these | | | | actions. | | | | Allowed values: **false**, **true** | +-------------------+-----------------+-------------------------------------------------------+ | promoted-max | 1 | .. index:: | | | | single: clone; option, promoted-max | | | | single: option; promoted-max (clone) | | | | single: promoted-max; clone option | | | | | | | | If ``promotable`` is **true**, the number of | | | | instances that can be promoted at one time | | | | across the entire cluster | +-------------------+-----------------+-------------------------------------------------------+ | promoted-node-max | 1 | .. index:: | | | | single: clone; option, promoted-node-max | | | | single: option; promoted-node-max (clone) | | | | single: promoted-node-max; clone option | | | | | | | | If the clone is promotable and globally unique, this | | | | is the number of instances that can be promoted at | | | | one time on a single node (up to ``clone-node-max``) | +-------------------+-----------------+-------------------------------------------------------+ .. note:: **Deprecated Terminology** In older documentation and online examples, you may see promotable clones referred to as *multi-state*, *stateful*, or *master/slave*; these mean the same thing as *promotable*. 
Certain syntax is supported for backward compatibility, but is deprecated and will be removed in a future version: * Using the ``master-max`` meta-attribute instead of ``promoted-max`` * Using the ``master-node-max`` meta-attribute instead of ``promoted-node-max`` * Using ``Master`` as a role name instead of ``Promoted`` * Using ``Slave`` as a role name instead of ``Unpromoted`` Clone Contents ______________ Clones must contain exactly one primitive or group resource. .. topic:: A clone that runs a web server on all nodes .. code-block:: xml .. warning:: You should never reference the name of a clone's child (the primitive or group resource being cloned). If you think you need to do this, you probably need to re-evaluate your design. Clone Instance Attribute ________________________ Clones have no instance attributes; however, any that are set here will be inherited by the clone's child. .. index:: single: clone; constraint Clone Constraints _________________ In most cases, a clone will have a single instance on each active cluster node. If this is not the case, you can indicate which nodes the cluster should preferentially assign copies to with resource location constraints. These constraints are written no differently from those for primitive resources except that the clone's **id** is used. .. topic:: Some constraints involving clones .. code-block:: xml Ordering constraints behave slightly differently for clones. In the example above, ``apache-stats`` will wait until all copies of ``apache-clone`` that need to be started have done so before being started itself. Only if *no* copies can be started will ``apache-stats`` be prevented from being active. Additionally, the clone will wait for ``apache-stats`` to be stopped before stopping itself. Colocation of a primitive or group resource with a clone means that the resource can run on any node with an active instance of the clone. The cluster will choose an instance based on where the clone is running and the resource's own location preferences. Colocation between clones is also possible. If one clone **A** is colocated with another clone **B**, the set of allowed locations for **A** is limited to nodes on which **B** is (or will be) active. Placement is then performed normally. .. index:: single: promotable clone; constraint .. _promotable-clone-constraints: Promotable Clone Constraints ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ For promotable clone resources, the ``first-action`` and/or ``then-action`` fields for ordering constraints may be set to ``promote`` or ``demote`` to constrain the promoted role, and colocation constraints may contain ``rsc-role`` and/or ``with-rsc-role`` fields. .. topic:: Constraints involving promotable clone resources .. code-block:: xml In the example above, **myApp** will wait until one of the database copies has been started and promoted before being started itself on the same node. Only if no copies can be promoted will **myApp** be prevented from being active. Additionally, the cluster will wait for **myApp** to be stopped before demoting the database. Colocation of a primitive or group resource with a promotable clone resource means that it can run on any node with an active instance of the promotable clone resource that has the specified role (``Promoted`` or ``Unpromoted``). 
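For instance, the following sketch (resource names are illustrative) colocates ``myApp`` with the promoted instance of the promotable clone ``database``, and starts ``myApp`` only after an instance has been promoted:

.. topic:: Colocating an application with a promoted instance

   .. code-block:: xml

      <constraints>
         <rsc_colocation id="myapp-with-promoted-database" rsc="myApp"
                         with-rsc="database" with-rsc-role="Promoted"
                         score="INFINITY"/>
         <rsc_order id="promote-database-then-myapp" first="database"
                    first-action="promote" then="myApp" then-action="start"/>
      </constraints>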
In the example above, the cluster will choose a location based on where database is running in the promoted role, and if there are multiple promoted instances it will also factor in **myApp**'s own location preferences when deciding which location to choose. Colocation with regular clones and other promotable clone resources is also possible. In such cases, the set of allowed locations for the **rsc** clone is (after role filtering) limited to nodes on which the ``with-rsc`` promotable clone resource is (or will be) in the specified role. Placement is then performed as normal. Using Promotable Clone Resources in Colocation Sets ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ When a promotable clone is used in a :ref:`resource set ` inside a colocation constraint, the resource set may take a ``role`` attribute. In the following example, an instance of **B** may be promoted only on a node where **A** is in the promoted role. Additionally, resources **C** and **D** must be located on a node where both **A** and **B** are promoted. .. topic:: Colocate C and D with A's and B's promoted instances .. code-block:: xml Using Promotable Clone Resources in Ordered Sets ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ When a promotable clone is used in a :ref:`resource set ` inside an ordering constraint, the resource set may take an ``action`` attribute. .. topic:: Start C and D after first promoting A and B .. code-block:: xml In the above example, **B** cannot be promoted until **A** has been promoted. Additionally, resources **C** and **D** must wait until **A** and **B** have been promoted before they can start. .. index:: pair: resource-stickiness; clone .. _s-clone-stickiness: Clone Stickiness ________________ To achieve stable assignments, clones are slightly sticky by default. If no value for ``resource-stickiness`` is provided, the clone will use a value of 1. Being a small value, it causes minimal disturbance to the score calculations of other resources but is enough to prevent Pacemaker from needlessly moving instances around the cluster. .. note:: For globally unique clones, this may result in multiple instances of the clone staying on a single node, even after another eligible node becomes active (for example, after being put into standby mode then made active again). If you do not want this behavior, specify a ``resource-stickiness`` of 0 for the clone temporarily and let the cluster adjust, then set it back to 1 if you want the default behavior to apply again. .. important:: If ``resource-stickiness`` is set in the ``rsc_defaults`` section, it will apply to clone instances as well. This means an explicit ``resource-stickiness`` of 0 in ``rsc_defaults`` works differently from the implicit default used when ``resource-stickiness`` is not specified. Monitoring Promotable Clone Resources _____________________________________ The usual monitor actions are insufficient to monitor a promotable clone resource, because Pacemaker needs to verify not only that the resource is active, but also that its actual role matches its intended one. Define two monitoring actions: the usual one will cover the unpromoted role, and an additional one with ``role="Promoted"`` will cover the promoted role. .. topic:: Monitoring both states of a promotable clone resource .. code-block:: xml .. important:: It is crucial that *every* monitor operation has a different interval! 
Pacemaker currently differentiates between operations only by resource and interval; so if (for example) a promotable clone resource had the same monitor interval for both roles, Pacemaker would ignore the role when checking the status -- which would cause unexpected return codes, and therefore unnecessary complications. .. _s-promotion-scores: Determining Which Instance is Promoted ______________________________________ Pacemaker can choose a promotable clone instance to be promoted in one of two ways: * Promotion scores: These are node attributes set via the ``crm_attribute`` command using the ``--promotion`` option, which generally would be called by the resource agent's start action if it supports promotable clones. This tool automatically detects both the resource and host, and should be used to set a preference for being promoted. Based on this, ``promoted-max``, and ``promoted-node-max``, the instance(s) with the highest preference will be promoted. * Constraints: Location constraints can indicate which nodes are most preferred to be promoted. .. topic:: Explicitly preferring node1 to be promoted .. code-block:: xml .. index: single: bundle single: resource; bundle pair: container; Docker pair: container; podman .. _s-resource-bundle: Bundles - Containerized Resources ################################# Pacemaker supports a special syntax for launching a service inside a `container `_ with any infrastructure it requires: the *bundle*. Pacemaker bundles support `Docker `_ and `podman `_ *(since 2.0.1)* container technologies. [#]_ .. topic:: A bundle for a containerized web server .. code-block:: xml Bundle Prerequisites ____________________ Before configuring a bundle in Pacemaker, the user must install the appropriate container launch technology (Docker or podman), and supply a fully configured container image, on every node allowed to run the bundle. Pacemaker will create an implicit resource of type **ocf:heartbeat:docker** or **ocf:heartbeat:podman** to manage a bundle's container. The user must ensure that the appropriate resource agent is installed on every node allowed to run the bundle. .. index:: pair: XML element; bundle Bundle Properties _________________ .. table:: **XML Attributes of a bundle Element** :widths: 1 4 +-------------+------------------------------------------------------------------+ | Field | Description | +=============+==================================================================+ | id | .. index:: | | | single: bundle; attribute, id | | | single: attribute; id (bundle) | | | single: id; bundle attribute | | | | | | A unique name for the bundle (required) | +-------------+------------------------------------------------------------------+ | description | .. index:: | | | single: bundle; attribute, description | | | single: attribute; description (bundle) | | | single: description; bundle attribute | | | | - | | An optional description of the group, for the user's own | - | | purposes. | - | | E.g. ``manages the container that runs the service`` | + | | Arbitrary text for user's use (ignored by Pacemaker) | +-------------+------------------------------------------------------------------+ A bundle must contain exactly one ``docker`` or ``podman`` element. .. index:: pair: XML element; docker pair: XML element; podman Bundle Container Properties ___________________________ .. 
table:: **XML attributes of a docker or podman Element** :class: longtable :widths: 2 3 4 +-------------------+------------------------------------+---------------------------------------------------+ | Attribute | Default | Description | +===================+====================================+===================================================+ | image | | .. index:: | | | | single: docker; attribute, image | | | | single: attribute; image (docker) | | | | single: image; docker attribute | | | | single: podman; attribute, image | | | | single: attribute; image (podman) | | | | single: image; podman attribute | | | | | | | | Container image tag (required) | +-------------------+------------------------------------+---------------------------------------------------+ | replicas | Value of ``promoted-max`` | .. index:: | | | if that is positive, else 1 | single: docker; attribute, replicas | | | | single: attribute; replicas (docker) | | | | single: replicas; docker attribute | | | | single: podman; attribute, replicas | | | | single: attribute; replicas (podman) | | | | single: replicas; podman attribute | | | | | | | | A positive integer specifying the number of | | | | container instances to launch | +-------------------+------------------------------------+---------------------------------------------------+ | replicas-per-host | 1 | .. index:: | | | | single: docker; attribute, replicas-per-host | | | | single: attribute; replicas-per-host (docker) | | | | single: replicas-per-host; docker attribute | | | | single: podman; attribute, replicas-per-host | | | | single: attribute; replicas-per-host (podman) | | | | single: replicas-per-host; podman attribute | | | | | | | | A positive integer specifying the number of | | | | container instances allowed to run on a | | | | single node | +-------------------+------------------------------------+---------------------------------------------------+ | promoted-max | 0 | .. index:: | | | | single: docker; attribute, promoted-max | | | | single: attribute; promoted-max (docker) | | | | single: promoted-max; docker attribute | | | | single: podman; attribute, promoted-max | | | | single: attribute; promoted-max (podman) | | | | single: promoted-max; podman attribute | | | | | | | | A non-negative integer that, if positive, | | | | indicates that the containerized service | | | | should be treated as a promotable service, | | | | with this many replicas allowed to run the | | | | service in the promoted role | +-------------------+------------------------------------+---------------------------------------------------+ | network | | .. index:: | | | | single: docker; attribute, network | | | | single: attribute; network (docker) | | | | single: network; docker attribute | | | | single: podman; attribute, network | | | | single: attribute; network (podman) | | | | single: network; podman attribute | | | | | | | | If specified, this will be passed to the | | | | ``docker run`` or ``podman run`` command as the | | | | network setting for the container. | +-------------------+------------------------------------+---------------------------------------------------+ | run-command | ``/usr/sbin/pacemaker-remoted`` if | .. 
index:: | | | bundle contains a **primitive**, | single: docker; attribute, run-command | | | otherwise none | single: attribute; run-command (docker) | | | | single: run-command; docker attribute | | | | single: podman; attribute, run-command | | | | single: attribute; run-command (podman) | | | | single: run-command; podman attribute | | | | | | | | This command will be run inside the container | | | | when launching it ("PID 1"). If the bundle | | | | contains a **primitive**, this command *must* | | | | start ``pacemaker-remoted`` (but could, for | | | | example, be a script that does other stuff, too). | +-------------------+------------------------------------+---------------------------------------------------+ | options | | .. index:: | | | | single: docker; attribute, options | | | | single: attribute; options (docker) | | | | single: options; docker attribute | | | | single: podman; attribute, options | | | | single: attribute; options (podman) | | | | single: options; podman attribute | | | | | | | | Extra command-line options to pass to the | | | | ``docker run`` or ``podman run`` command | +-------------------+------------------------------------+---------------------------------------------------+ .. note:: Considerations when using cluster configurations or container images from Pacemaker 1.1: * If the container image has a pre-2.0.0 version of Pacemaker, set ``run-command`` to ``/usr/sbin/pacemaker_remoted`` (note the underbar instead of dash). * ``masters`` is accepted as an alias for ``promoted-max``, but is deprecated since 2.0.0, and support for it will be removed in a future version. Bundle Network Properties _________________________ A bundle may optionally contain one ```` element. .. index:: pair: XML element; network single: bundle; network .. table:: **XML attributes of a network Element** :widths: 2 1 5 +----------------+---------+------------------------------------------------------------+ | Attribute | Default | Description | +================+=========+============================================================+ | add-host | TRUE | .. index:: | | | | single: network; attribute, add-host | | | | single: attribute; add-host (network) | | | | single: add-host; network attribute | | | | | | | | If TRUE, and ``ip-range-start`` is used, Pacemaker will | | | | automatically ensure that ``/etc/hosts`` inside the | | | | containers has entries for each | | | | :ref:`replica name ` | | | | and its assigned IP. | +----------------+---------+------------------------------------------------------------+ | ip-range-start | | .. index:: | | | | single: network; attribute, ip-range-start | | | | single: attribute; ip-range-start (network) | | | | single: ip-range-start; network attribute | | | | | | | | If specified, Pacemaker will create an implicit | | | | ``ocf:heartbeat:IPaddr2`` resource for each container | | | | instance, starting with this IP address, using up to | | | | ``replicas`` sequential addresses. These addresses can be | | | | used from the host's network to reach the service inside | | | | the container, though it is not visible within the | | | | container itself. Only IPv4 addresses are currently | | | | supported. | +----------------+---------+------------------------------------------------------------+ | host-netmask | 32 | .. 
index:: | | | | single: network; attribute; host-netmask | | | | single: attribute; host-netmask (network) | | | | single: host-netmask; network attribute | | | | | | | | If ``ip-range-start`` is specified, the IP addresses | | | | are created with this CIDR netmask (as a number of bits). | +----------------+---------+------------------------------------------------------------+ | host-interface | | .. index:: | | | | single: network; attribute; host-interface | | | | single: attribute; host-interface (network) | | | | single: host-interface; network attribute | | | | | | | | If ``ip-range-start`` is specified, the IP addresses are | | | | created on this host interface (by default, it will be | | | | determined from the IP address). | +----------------+---------+------------------------------------------------------------+ | control-port | 3121 | .. index:: | | | | single: network; attribute; control-port | | | | single: attribute; control-port (network) | | | | single: control-port; network attribute | | | | | | | | If the bundle contains a ``primitive``, the cluster will | | | | use this integer TCP port for communication with | | | | Pacemaker Remote inside the container. Changing this is | | | | useful when the container is unable to listen on the | | | | default port, for example, when the container uses the | | | | host's network rather than ``ip-range-start`` (in which | | | | case ``replicas-per-host`` must be 1), or when the bundle | | | | may run on a Pacemaker Remote node that is already | | | | listening on the default port. Any ``PCMK_remote_port`` | | | | environment variable set on the host or in the container | | | | is ignored for bundle connections. | +----------------+---------+------------------------------------------------------------+ .. _s-resource-bundle-note-replica-names: .. note:: Replicas are named by the bundle id plus a dash and an integer counter starting with zero. For example, if a bundle named **httpd-bundle** has **replicas=2**, its containers will be named **httpd-bundle-0** and **httpd-bundle-1**. .. index:: pair: XML element; port-mapping Additionally, a ``network`` element may optionally contain one or more ``port-mapping`` elements. .. table:: **Attributes of a port-mapping Element** :widths: 2 1 5 +---------------+-------------------+------------------------------------------------------+ | Attribute | Default | Description | +===============+===================+======================================================+ | id | | .. index:: | | | | single: port-mapping; attribute, id | | | | single: attribute; id (port-mapping) | | | | single: id; port-mapping attribute | | | | | | | | A unique name for the port mapping (required) | +---------------+-------------------+------------------------------------------------------+ | port | | .. index:: | | | | single: port-mapping; attribute, port | | | | single: attribute; port (port-mapping) | | | | single: port; port-mapping attribute | | | | | | | | If this is specified, connections to this TCP port | | | | number on the host network (on the container's | | | | assigned IP address, if ``ip-range-start`` is | | | | specified) will be forwarded to the container | | | | network. Exactly one of ``port`` or ``range`` | | | | must be specified in a ``port-mapping``. | +---------------+-------------------+------------------------------------------------------+ | internal-port | value of ``port`` | .. 
index:: | | | | single: port-mapping; attribute, internal-port | | | | single: attribute; internal-port (port-mapping) | | | | single: internal-port; port-mapping attribute | | | | | | | | If ``port`` and this are specified, connections | | | | to ``port`` on the host's network will be | | | | forwarded to this port on the container network. | +---------------+-------------------+------------------------------------------------------+ | range | | .. index:: | | | | single: port-mapping; attribute, range | | | | single: attribute; range (port-mapping) | | | | single: range; port-mapping attribute | | | | | | | | If this is specified, connections to these TCP | | | | port numbers (expressed as *first_port*-*last_port*) | | | | on the host network (on the container's assigned IP | | | | address, if ``ip-range-start`` is specified) will | | | | be forwarded to the same ports in the container | | | | network. Exactly one of ``port`` or ``range`` | | | | must be specified in a ``port-mapping``. | +---------------+-------------------+------------------------------------------------------+ .. note:: If the bundle contains a ``primitive``, Pacemaker will automatically map the ``control-port``, so it is not necessary to specify that port in a ``port-mapping``. .. index: pair: XML element; storage pair: XML element; storage-mapping single: bundle; storage .. _s-bundle-storage: Bundle Storage Properties _________________________ A bundle may optionally contain one ``storage`` element. A ``storage`` element has no properties of its own, but may contain one or more ``storage-mapping`` elements. .. table:: **Attributes of a storage-mapping Element** :widths: 2 1 5 +-----------------+---------+-------------------------------------------------------------+ | Attribute | Default | Description | +=================+=========+=============================================================+ | id | | .. index:: | | | | single: storage-mapping; attribute, id | | | | single: attribute; id (storage-mapping) | | | | single: id; storage-mapping attribute | | | | | | | | A unique name for the storage mapping (required) | +-----------------+---------+-------------------------------------------------------------+ | source-dir | | .. index:: | | | | single: storage-mapping; attribute, source-dir | | | | single: attribute; source-dir (storage-mapping) | | | | single: source-dir; storage-mapping attribute | | | | | | | | The absolute path on the host's filesystem that will be | | | | mapped into the container. Exactly one of ``source-dir`` | | | | and ``source-dir-root`` must be specified in a | | | | ``storage-mapping``. | +-----------------+---------+-------------------------------------------------------------+ | source-dir-root | | .. index:: | | | | single: storage-mapping; attribute, source-dir-root | | | | single: attribute; source-dir-root (storage-mapping) | | | | single: source-dir-root; storage-mapping attribute | | | | | | | | The start of a path on the host's filesystem that will | | | | be mapped into the container, using a different | | | | subdirectory on the host for each container instance. | | | | The subdirectory will be named the same as the | | | | :ref:`replica name `. | | | | Exactly one of ``source-dir`` and ``source-dir-root`` | | | | must be specified in a ``storage-mapping``. | +-----------------+---------+-------------------------------------------------------------+ | target-dir | | .. 
index:: | | | | single: storage-mapping; attribute, target-dir | | | | single: attribute; target-dir (storage-mapping) | | | | single: target-dir; storage-mapping attribute | | | | | | | | The path name within the container where the host | | | | storage will be mapped (required) | +-----------------+---------+-------------------------------------------------------------+ | options | | .. index:: | | | | single: storage-mapping; attribute, options | | | | single: attribute; options (storage-mapping) | | | | single: options; storage-mapping attribute | | | | | | | | A comma-separated list of file system mount | | | | options to use when mapping the storage | +-----------------+---------+-------------------------------------------------------------+ .. note:: Pacemaker does not define the behavior if the source directory does not already exist on the host. However, it is expected that the container technology and/or its resource agent will create the source directory in that case. .. note:: If the bundle contains a ``primitive``, Pacemaker will automatically map the equivalent of ``source-dir=/etc/pacemaker/authkey target-dir=/etc/pacemaker/authkey`` and ``source-dir-root=/var/log/pacemaker/bundles target-dir=/var/log`` into the container, so it is not necessary to specify those paths in a ``storage-mapping``. .. important:: The ``PCMK_authkey_location`` environment variable must not be set to anything other than the default of ``/etc/pacemaker/authkey`` on any node in the cluster. .. important:: If SELinux is used in enforcing mode on the host, you must ensure the container is allowed to use any storage you mount into it. For Docker and podman bundles, adding "Z" to the mount options will create a container-specific label for the mount that allows the container access. .. index:: single: bundle; primitive Bundle Primitive ________________ A bundle may optionally contain one :ref:`primitive ` resource. The primitive may have operations, instance attributes, and meta-attributes defined, as usual. If a bundle contains a primitive resource, the container image must include the Pacemaker Remote daemon, and at least one of ``ip-range-start`` or ``control-port`` must be configured in the bundle. Pacemaker will create an implicit **ocf:pacemaker:remote** resource for the connection, launch Pacemaker Remote within the container, and monitor and manage the primitive resource via Pacemaker Remote. If the bundle has more than one container instance (replica), the primitive resource will function as an implicit :ref:`clone ` -- a :ref:`promotable clone ` if the bundle has ``promoted-max`` greater than zero. .. note:: If you want to pass environment variables to a bundle's Pacemaker Remote connection or primitive, you have two options: * Environment variables whose value is the same regardless of the underlying host may be set using the container element's ``options`` attribute. * If you want variables to have host-specific values, you can use the :ref:`storage-mapping ` element to map a file on the host as ``/etc/pacemaker/pcmk-init.env`` in the container *(since 2.0.3)*. Pacemaker Remote will parse this file as a shell-like format, with variables set as NAME=VALUE, ignoring blank lines and comments starting with "#". .. important:: When a bundle has a ``primitive``, Pacemaker on all cluster nodes must be able to contact Pacemaker Remote inside the bundle's containers. * The containers must have an accessible network (for example, ``network`` should not be set to "none" with a ``primitive``). 
* The default, using a distinct network space inside the container, works in combination with ``ip-range-start``. Any firewall must allow access from all cluster nodes to the ``control-port`` on the container IPs. * If the container shares the host's network space (for example, by setting ``network`` to "host"), a unique ``control-port`` should be specified for each bundle. Any firewall must allow access from all cluster nodes to the ``control-port`` on all cluster and remote node IPs. .. index:: single: bundle; node attributes .. _s-bundle-attributes: Bundle Node Attributes ______________________ If the bundle has a ``primitive``, the primitive's resource agent may want to set node attributes such as :ref:`promotion scores `. However, with containers, it is not apparent which node should get the attribute. If the container uses shared storage that is the same no matter which node the container is hosted on, then it is appropriate to use the promotion score on the bundle node itself. On the other hand, if the container uses storage exported from the underlying host, then it may be more appropriate to use the promotion score on the underlying host. Since this depends on the particular situation, the ``container-attribute-target`` resource meta-attribute allows the user to specify which approach to use. If it is set to ``host``, then user-defined node attributes will be checked on the underlying host. If it is anything else, the local node (in this case the bundle node) is used as usual. This only applies to user-defined attributes; the cluster will always check the local node for cluster-defined attributes such as ``#uname``. If ``container-attribute-target`` is ``host``, the cluster will pass additional environment variables to the primitive's resource agent that allow it to set node attributes appropriately: ``CRM_meta_container_attribute_target`` (identical to the meta-attribute value) and ``CRM_meta_physical_host`` (the name of the underlying host). .. note:: When called by a resource agent, the ``attrd_updater`` and ``crm_attribute`` commands will automatically check those environment variables and set attributes appropriately. .. index:: single: bundle; meta-attributes Bundle Meta-Attributes ______________________ Any meta-attribute set on a bundle will be inherited by the bundle's primitive and any resources implicitly created by Pacemaker for the bundle. This includes options such as ``priority``, ``target-role``, and ``is-managed``. See :ref:`resource_options` for more information. Bundles support clone meta-attributes including ``notify``, ``ordered``, and ``interleave``. Limitations of Bundles ______________________ Restarting pacemaker while a bundle is unmanaged or the cluster is in maintenance mode may cause the bundle to fail. Bundles may not be explicitly cloned or included in groups. This includes the bundle's primitive and any resources implicitly created by Pacemaker for the bundle. (If ``replicas`` is greater than 1, the bundle will behave like a clone implicitly.) Bundles do not have instance attributes, utilization attributes, or operations, though a bundle's primitive may have them. A bundle with a primitive can run on a Pacemaker Remote node only if the bundle uses a distinct ``control-port``. .. [#] Of course, the service must support running multiple instances. .. [#] Docker is a trademark of Docker, Inc. No endorsement by or association with Docker, Inc. is implied. 
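Putting the pieces above together, the following is a minimal sketch of a bundle combining a ``network`` with a ``port-mapping``, a ``storage-mapping``, and a ``primitive``. The container image name, IP addresses, and paths are hypothetical and would need to be adapted to a real deployment; the image must include the Pacemaker Remote daemon, as described earlier.

.. code-block:: xml

   <bundle id="httpd-bundle">
     <!-- hypothetical image name; replicas=2 creates httpd-bundle-0 and httpd-bundle-1 -->
     <podman image="localhost/pcmk-httpd:latest" replicas="2"/>
     <network ip-range-start="192.168.122.131" host-netmask="24">
       <!-- forward TCP port 80 on each assigned IP into the container -->
       <port-mapping id="httpd-port" port="80"/>
     </network>
     <storage>
       <!-- each replica gets its own subdirectory under /srv/www on the host;
            "Z" relabels the mount for SELinux, as noted above -->
       <storage-mapping id="httpd-root" source-dir-root="/srv/www"
                        target-dir="/var/www/html" options="rw,Z"/>
     </storage>
     <primitive id="httpd" class="ocf" provider="heartbeat" type="apache"/>
   </bundle>

Because this sketch contains a ``primitive`` and sets ``ip-range-start``, Pacemaker would map the ``control-port`` and the authentication key into the containers automatically, so neither needs to be configured explicitly.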
diff --git a/doc/sphinx/Pacemaker_Explained/operations.rst b/doc/sphinx/Pacemaker_Explained/operations.rst index b8a324b8ab..a9b33e0621 100644 --- a/doc/sphinx/Pacemaker_Explained/operations.rst +++ b/doc/sphinx/Pacemaker_Explained/operations.rst @@ -1,688 +1,699 @@ .. index:: single: resource; action single: resource; operation .. _operation: Resource Operations ------------------- *Operations* are actions the cluster can perform on a resource by calling the resource agent. Resource agents must support certain common operations such as start, stop, and monitor, and may implement any others. Operations may be explicitly configured for two purposes: to override defaults for options (such as timeout) that the cluster will use whenever it initiates the operation, and to run an operation on a recurring basis (for example, to monitor the resource for failure). .. topic:: An OCF resource with a non-default start timeout .. code-block:: xml Pacemaker identifies operations by a combination of name and interval, so this combination must be unique for each resource. That is, you should not configure two operations for the same resource with the same name and interval. .. _operation_properties: Operation Properties #################### The ``id``, ``name``, ``interval``, and ``role`` operation properties may be specified only as XML attributes of the ``op`` element. Other operation properties may be specified in any of the following ways, from highest precedence to lowest: * directly in the ``op`` element as an XML attribute * in an ``nvpair`` element within a ``meta_attributes`` element within the ``op`` element * in an ``nvpair`` element within a ``meta_attributes`` element within :ref:`operation defaults ` If not specified, the default from the table below is used. .. list-table:: **Operation Properties** :class: longtable :widths: 2 2 3 4 :header-rows: 1 * - Name - Type - Default - Description * - .. _op_id: .. index:: pair: op; id single: id; action property single: action; property, id id - :ref:`id ` - - A unique identifier for the XML element *(required)* * - .. _op_name: .. index:: pair: op; name single: name; action property single: action; property, name name - :ref:`text ` - - An action name supported by the resource agent *(required)* * - .. _op_interval: .. index:: pair: op; interval single: interval; action property single: action; property, interval interval - :ref:`duration ` - 0 - If this is a positive value, Pacemaker will schedule recurring instances of this operation at the given interval (which makes sense only with :ref:`name ` set to :ref:`monitor `). If this is 0, Pacemaker will apply other properties configured for this operation to instances that are scheduled as needed during normal cluster operation. *(required)* + * - .. _op_description: + + .. index:: + pair: op; description + single: description; action property + single: action; property, description + + description + - :ref:`text ` + - + - Arbitrary text for user's use (ignored by Pacemaker) * - .. _op_role: .. index:: pair: op; role single: role; action property single: action; property, role role - :ref:`enumeration ` - - If this is set, the operation configuration applies only on nodes where the cluster expects the resource to be in the specified role. This makes sense only for recurring monitors. Allowed values: ``Started``, ``Stopped``, and in the case of :ref:`promotable clone resources `, ``Unpromoted`` and ``Promoted``. * - .. _op_timeout: .. 
index:: pair: op; timeout single: timeout; action property single: action; property, timeout timeout - :ref:`timeout ` - 20s - If resource agent execution does not complete within this amount of time, the action will be considered failed. **Note:** timeouts for fencing agents are handled specially (see the :ref:`fencing` chapter). * - .. _op_on_fail: .. index:: pair: op; on-fail single: on-fail; action property single: action; property, on-fail on-fail - :ref:`enumeration ` - * If ``name`` is ``stop``: ``fence`` if :ref:`stonith-enabled ` is true, otherwise ``block`` * If ``name`` is ``demote``: ``on-fail`` of the ``monitor`` action with ``role`` set to ``Promoted``, if present, enabled, and configured to a value other than ``demote``, or ``restart`` otherwise * Otherwise: ``restart`` - How the cluster should respond to a failure of this action. Allowed values: * ``ignore:`` Pretend the resource did not fail * ``block:`` Do not perform any further operations on the resource * ``stop:`` Stop the resource and leave it stopped * ``demote:`` Demote the resource, without a full restart. This is valid only for ``promote`` actions, and for ``monitor`` actions with both a nonzero ``interval`` and ``role`` set to ``Promoted``; for any other action, a configuration error will be logged, and the default behavior will be used. *(since 2.0.5)* * ``restart:`` Stop the resource, and start it again if allowed (possibly on a different node) * ``fence:`` Fence the node on which the resource failed * ``standby:`` Put the node on which the resource failed in standby mode (forcing *all* resources away) * - .. _op_enabled: .. index:: pair: op; enabled single: enabled; action property single: action; property, enabled enabled - :ref:`boolean ` - true - If ``false``, ignore this operation definition. This does not suppress all actions of this type, but is typically used to pause a recurring monitor. This can complement the resource being unmanaged (:ref:`is-managed ` set to ``false``), which does not stop recurring operations. Maintenance mode, which does stop configured monitors, overrides this setting. * - .. _op_record_pending: .. index:: pair: op; record-pending single: record-pending; action property single: action; property, record-pending record-pending - :ref:`boolean ` - true - Operation results are always recorded when the operation completes (successful or not). If this is ``true``, operations will also be recorded when initiated, so that status output can indicate that the operation is in progress. *(deprecated since 3.0.0)* .. note:: Only one action can be configured for any given combination of ``name`` and ``interval``. .. note:: When ``on-fail`` is set to ``demote``, recovery from failure by a successful demote causes the cluster to recalculate whether and where a new instance should be promoted. The node with the failure is eligible, so if promotion scores have not changed, it will be promoted again. There is no direct equivalent of ``migration-threshold`` for the promoted role, but the same effect can be achieved with a location constraint using a :ref:`rule ` with a node attribute expression for the resource's fail count. For example, to immediately ban the promoted role from a node with any failed promote or promoted instance monitor: .. 
code-block:: xml This example assumes that there is a promotable clone of the ``my_primitive`` resource (note that the primitive name, not the clone name, is used in the rule), and that there is a recurring 10-second-interval monitor configured for the promoted role (fail count attributes specify the interval in milliseconds). .. _s-resource-monitoring: Monitoring Resources for Failure ################################ When Pacemaker first starts a resource, it runs one-time ``monitor`` operations (referred to as *probes*) to ensure the resource is running where it's supposed to be, and not running where it's not supposed to be. (This behavior can be affected by the ``resource-discovery`` location constraint property.) Other than those initial probes, Pacemaker will *not* (by default) check that the resource continues to stay healthy [#]_. You must configure ``monitor`` operations explicitly to perform these checks. .. topic:: An OCF resource with a recurring health check .. code-block:: xml By default, a ``monitor`` operation will ensure that the resource is running where it is supposed to. The ``target-role`` property can be used for further checking. For example, if a resource has one ``monitor`` operation with ``interval=10 role=Started`` and a second ``monitor`` operation with ``interval=11 role=Stopped``, the cluster will run the first monitor on any nodes it thinks *should* be running the resource, and the second monitor on any nodes that it thinks *should not* be running the resource (for the truly paranoid, who want to know when an administrator manually starts a service by mistake). .. note:: Currently, monitors with ``role=Stopped`` are not implemented for :ref:`clone ` resources. Custom Recurring Operations ########################### Typically, only ``monitor`` operations should be configured as recurring. However, it is possible to implement a custom action name in an OCF agent and then configure that as a recurring operation. This could be useful, for example, to run a report, rotate a log, or clean temporary files related to a particular service. Failures of custom recurring operations will be ignored by the cluster and will not be reported in cluster status *(since 3.0.0; previously, they would be treated like failed monitors)*. A fail count and last failure timestamp will be recorded as transient node attributes, and those node attributes will be erased by the ``crm_resource --cleanup`` command. .. _s-operation-defaults: Setting Global Defaults for Operations ###################################### You can change the global default values for operation properties in a given cluster. These are defined in an ``op_defaults`` section of the CIB's ``configuration`` section, and can be set with ``crm_attribute``. For example, .. code-block:: none # crm_attribute --type op_defaults --name timeout --update 20s would default each operation's ``timeout`` to 20 seconds. If an operation's definition also includes a value for ``timeout``, then that value would be used for that operation instead. When Implicit Operations Take a Long Time ######################################### The cluster will always perform a number of implicit operations: ``start``, ``stop`` and a non-recurring ``monitor`` operation used at startup to check whether the resource is already active. If one of these is taking too long, then you can create an entry for them and specify a longer timeout. .. topic:: An OCF resource with custom timeouts for its implicit actions .. 
code-block:: xml Multiple Monitor Operations ########################### Provided no two operations (for a single resource) have the same name and interval, you can have as many ``monitor`` operations as you like. In this way, you can do a superficial health check every minute and progressively more intense ones at higher intervals. To tell the resource agent what kind of check to perform, you need to provide each monitor with a different value for a common parameter. The OCF standard creates a special parameter called ``OCF_CHECK_LEVEL`` for this purpose and dictates that it is "made available to the resource agent without the normal ``OCF_RESKEY`` prefix". Whatever name you choose, you can specify it by adding an ``instance_attributes`` block to the ``op`` tag. It is up to each resource agent to look for the parameter and decide how to use it. .. topic:: An OCF resource with two recurring health checks, performing different levels of checks specified via ``OCF_CHECK_LEVEL``. .. code-block:: xml Disabling a Monitor Operation ############################# The easiest way to stop a recurring monitor is to just delete it. However, there can be times when you only want to disable it temporarily. In such cases, simply add ``enabled=false`` to the operation's definition. .. topic:: Example of an OCF resource with a disabled health check .. code-block:: xml This can be achieved from the command line by executing: .. code-block:: none # cibadmin --modify --xml-text '' Once you've done whatever you needed to do, you can then re-enable it with .. code-block:: none # cibadmin --modify --xml-text '' .. index:: single: start-delay; operation attribute single: interval-origin; operation attribute single: interval; interval-origin single: operation; interval-origin single: operation; start-delay Specifying When Recurring Actions are Performed ############################################### By default, recurring actions are scheduled relative to when the resource started. In some cases, you might prefer that a recurring action start relative to a specific date and time. For example, you might schedule an in-depth monitor to run once every 24 hours, and want it to run outside business hours. To do this, set the operation's ``interval-origin``. The cluster uses this point to calculate the correct ``start-delay`` such that the operation will occur at ``interval-origin`` plus a multiple of the operation interval. For example, if the recurring operation's interval is 24h, its ``interval-origin`` is set to 02:00, and it is currently 14:32, then the cluster would initiate the operation after 11 hours and 28 minutes. The value specified for ``interval`` and ``interval-origin`` can be any date/time conforming to the `ISO8601 standard `_. By way of example, to specify an operation that would run on the first Monday of 2021 and every Monday after that, you would add: .. topic:: Example recurring action that runs relative to base date/time .. code-block:: xml .. index:: single: resource; failure recovery single: operation; failure recovery .. _failure-handling: Handling Resource Failure ######################### By default, Pacemaker will attempt to recover failed resources by restarting them. However, failure recovery is highly configurable. .. index:: single: resource; failure count single: operation; failure count Failure Counts ______________ Pacemaker tracks resource failures for each combination of node, resource, and operation (start, stop, monitor, etc.). 
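These fail counts are stored as transient node attributes whose names encode the resource, operation, and interval in milliseconds, as referenced in the rule example earlier in this chapter. For illustration only (the exact attribute names are an implementation detail and may vary by version), failures of the 10-second monitor for ``myrsc`` would be recorded roughly as:

.. code-block:: none

   fail-count-myrsc#monitor_10000    (number of failures, or INFINITY)
   last-failure-myrsc#monitor_10000  (time of the most recent failure)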
You can query the fail count for a particular node, resource, and/or operation using the ``crm_failcount`` command. For example, to see how many times the 10-second monitor for ``myrsc`` has failed on ``node1``, run:

.. code-block:: none

   # crm_failcount --query -r myrsc -N node1 -n monitor -I 10s

If you omit the node, ``crm_failcount`` will use the local node. If you omit the operation and interval, ``crm_failcount`` will display the sum of the fail counts for all operations on the resource.

You can use ``crm_resource --cleanup`` or ``crm_failcount --delete`` to clear fail counts. For example, to clear the above monitor failures, run:

.. code-block:: none

   # crm_resource --cleanup -r myrsc -N node1 -n monitor -I 10s

If you omit the resource, ``crm_resource --cleanup`` will clear failures for all resources. If you omit the node, it will clear failures on all nodes. If you omit the operation and interval, it will clear the failures for all operations on the resource.

.. note::

   Even when cleaning up only a single operation, all failed operations will
   disappear from the status display. This allows the cluster to trigger a
   re-check of the resource's current status.

Higher-level tools may provide other commands for querying and clearing fail counts.

The ``crm_mon`` tool shows the current cluster status, including any failed operations. To see the current fail counts for any failed resources, call ``crm_mon`` with the ``--failcounts`` option. This shows the fail counts per resource (that is, the sum of any operation fail counts for the resource).

.. index::
   single: migration-threshold; resource meta-attribute
   single: resource; migration-threshold

Failure Response
________________

Normally, if a running resource fails, Pacemaker will try to stop it and start it again. Pacemaker will choose the best location to start it each time, which may be the same node that it failed on. However, if a resource fails repeatedly, it is possible that there is an underlying problem on that node, and you might want to try a different node in such a case. Pacemaker allows you to set your preference via the ``migration-threshold`` resource meta-attribute. [#]_

If you set ``migration-threshold`` to *N* for a resource, it will be banned from the original node after *N* failures there.

.. note::

   The ``migration-threshold`` is per *resource*, even though fail counts are
   tracked per *operation*. The operation fail counts are added together to
   compare against the ``migration-threshold``.

By default, fail counts remain until manually cleared by an administrator using ``crm_resource --cleanup`` or ``crm_failcount --delete`` (hopefully after first fixing the failure's cause). It is possible to have fail counts expire automatically by setting the ``failure-timeout`` resource meta-attribute.

.. important::

   A successful operation does not clear past failures. If a recurring monitor
   operation fails once, succeeds many times, then fails again days later, its
   fail count is 2. Fail counts are cleared only by manual intervention or
   failure timeout.

For example, setting ``migration-threshold`` to 2 and ``failure-timeout`` to ``60s`` would cause the resource to move to a new node after 2 failures, and allow it to move back (depending on stickiness and constraint scores) after one minute; a configuration sketch follows the note below.

.. note::

   ``failure-timeout`` is measured since the most recent failure. That is,
   older failures do not individually time out and lower the fail count.
   Instead, all failures are timed out simultaneously (and the fail count is
   reset to 0) if there is no new failure for the timeout period.
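The combination above (``migration-threshold=2`` with ``failure-timeout=60s``) could be expressed as resource meta-attributes, as in this hypothetical sketch (the resource and agent shown are illustrative only):

.. code-block:: xml

   <primitive id="myrsc" class="ocf" provider="heartbeat" type="IPaddr2">
     <meta_attributes id="myrsc-meta_attributes">
       <!-- ban the resource from a node after 2 failures there -->
       <nvpair id="myrsc-migration-threshold" name="migration-threshold" value="2"/>
       <!-- expire all of the resource's failures after 60 failure-free seconds -->
       <nvpair id="myrsc-failure-timeout" name="failure-timeout" value="60s"/>
     </meta_attributes>
   </primitive>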
There are two exceptions to the migration threshold: when a resource fails to start, and when it fails to stop.

If the cluster property ``start-failure-is-fatal`` is set to ``true`` (which is the default), start failures cause the fail count to be set to ``INFINITY`` and thus always cause the resource to move immediately.

Stop failures are slightly different and crucial. If a resource fails to stop and fencing is enabled, then the cluster will fence the node in order to be able to start the resource elsewhere. If fencing is disabled, then the cluster has no way to continue and will not try to start the resource elsewhere, but will try to stop it again after any failure timeout or clearing.

.. index::
   single: reload
   single: reload-agent

Reloading an Agent After a Definition Change
############################################

The cluster automatically detects changes to the configuration of active resources. The cluster's normal response is to stop the service (using the old definition) and start it again (with the new definition). This works, but some resource agents are smarter and can be told to use a new set of options without restarting.

To take advantage of this capability, the resource agent must:

* Implement the ``reload-agent`` action. What it should do depends completely on your application!

  .. note::

     Resource agents may also implement a ``reload`` action to make the managed
     service reload its own *native* configuration. This is different from
     ``reload-agent``, which makes effective changes in the resource's
     *Pacemaker* configuration (specifically, the values of the agent's
     reloadable parameters).

* Advertise the ``reload-agent`` operation in the ``actions`` section of its meta-data.

* Set the ``reloadable`` attribute to 1 in the ``parameters`` section of its meta-data for any parameters eligible to be reloaded after a change.

Once these requirements are satisfied, the cluster will automatically know to reload the resource (instead of restarting) when a reloadable parameter changes.

.. note::

   Metadata will not be re-read unless the resource needs to be started. If you
   edit the agent of an already active resource to set a parameter reloadable,
   the resource may restart the first time the parameter value changes.

.. note::

   If both a reloadable and non-reloadable parameter are changed
   simultaneously, the resource will be restarted.

.. _live-migration:

Migrating Resources
###################

Normally, when the cluster needs to move a resource, it fully restarts the resource (that is, it stops the resource on the current node and starts it on the new node). However, some types of resources, such as many virtual machines, are able to move to another location without loss of state. In Pacemaker, this is called live migration (sometimes also called hot migration). Pacemaker can be configured to migrate a resource when moving it, rather than restarting it.

Not all resources are able to migrate; see the :ref:`migration checklist ` below. Even those that can will not do so in all situations. Conceptually, there are two requirements from which the other prerequisites follow (a brief configuration sketch follows the list):

* The resource must be active and healthy at the old location; and
* everything required for the resource to run must be available on both the old and new locations.
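As a concrete illustration, a live-migratable virtual machine resource might look like the following sketch. The agent parameters and paths are hypothetical; what matters is the ``allow-migrate`` meta-attribute covered by the checklist below.

.. code-block:: xml

   <primitive id="vm1" class="ocf" provider="heartbeat" type="VirtualDomain">
     <instance_attributes id="vm1-instance_attributes">
       <!-- hypothetical libvirt domain definition on shared storage -->
       <nvpair id="vm1-config" name="config" value="/etc/libvirt/qemu/vm1.xml"/>
     </instance_attributes>
     <meta_attributes id="vm1-meta_attributes">
       <!-- live-migrate rather than restart when the VM must move -->
       <nvpair id="vm1-allow-migrate" name="allow-migrate" value="true"/>
     </meta_attributes>
   </primitive>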
The cluster is able to accommodate both *push* and *pull* migration models by requiring the resource agent to support two special actions: ``migrate_to`` (performed on the current location) and ``migrate_from`` (performed on the destination).

In push migration, the process on the current location transfers the resource to the new location where it is later activated. In this scenario, most of the work would be done in the ``migrate_to`` action and, if anything, the activation would occur during ``migrate_from``.

Conversely, for pull migration, the ``migrate_to`` action is practically empty and ``migrate_from`` does most of the work, extracting the relevant resource state from the old location and activating it.

There is no wrong or right way for a resource agent to implement migration, as long as it works.

.. _migration_checklist:

.. topic:: Migration Checklist

   * The resource may not be a clone.
   * The resource agent standard must be OCF.
   * The resource must not be in a failed or degraded state.
   * The resource agent must support ``migrate_to`` and ``migrate_from``
     actions, and advertise them in its meta-data.
   * The resource must have the ``allow-migrate`` meta-attribute set to
     ``true`` (which is not the default).

If an otherwise migratable resource depends on another resource via an ordering constraint, there are special situations in which it will be restarted rather than migrated.

For example, if the resource depends on a clone, and at the time the resource needs to be moved, the clone has instances that are stopping and instances that are starting, then the resource will be restarted. The scheduler is not yet able to model this situation correctly and so takes the safer (if less optimal) path.

Also, if a migratable resource depends on a non-migratable resource, and both need to be moved, the migratable resource will be restarted.

.. rubric:: Footnotes

.. [#] Currently, anyway. Automatic monitoring operations may be added in a future version of Pacemaker.

.. [#] The naming of this option was perhaps unfortunate as it is easily confused with live migration, the process of moving a resource from one node to another without stopping it. Xen virtual guests are the most common example of resources that can be migrated in this manner.

diff --git a/doc/sphinx/Pacemaker_Explained/resources.rst b/doc/sphinx/Pacemaker_Explained/resources.rst
index 0c384b1f2b..aa7bdb82ad 100644
--- a/doc/sphinx/Pacemaker_Explained/resources.rst
+++ b/doc/sphinx/Pacemaker_Explained/resources.rst
@@ -1,832 +1,831 @@

.. _resource:

Resources
---------

.. _s-resource-primitive:

.. index::
   single: resource

A *resource* is a service managed by Pacemaker. The simplest type of resource, a *primitive*, is described in this chapter. More complex forms, such as groups and clones, are described in later chapters.

Every primitive has a *resource agent* that provides Pacemaker a standardized interface for managing the service. This allows Pacemaker to be agnostic about the services it manages. Pacemaker doesn't need to understand how the service works because it relies on the resource agent to do the right thing when asked.

Every resource has a *standard* (also called *class*) specifying the interface that its resource agent follows, and a *type* identifying the specific service being managed.

.. _s-resource-supported: ..
index:: single: resource; standard Resource Standards ################## Pacemaker can use resource agents complying with these standards, described in more detail below: * ocf * lsb * systemd * service * stonith Support for some standards is controlled by build options and so might not be available in any particular build of Pacemaker. The command ``crm_resource --list-standards`` will show which standards are supported by the local build. .. index:: single: resource; OCF single: OCF; resources single: Open Cluster Framework; resources Open Cluster Framework ______________________ The Open Cluster Framework (OCF) Resource Agent API is a ClusterLabs standard for managing services. It is the most preferred since it is specifically designed for use in a Pacemaker cluster. OCF agents are scripts that support a variety of actions including ``start``, ``stop``, and ``monitor``. They may accept parameters, making them more flexible than other standards. The number and purpose of parameters is left to the agent, which advertises them via the ``meta-data`` action. Unlike other standards, OCF agents have a *provider* as well as a standard and type. For more information, see the "Resource Agents" chapter of *Pacemaker Administration* and the `OCF standard `_. .. _s-resource-supported-systemd: .. index:: single: Resource; Systemd single: Systemd; resources Systemd _______ Most Linux distributions use `Systemd `_ for system initialization and service management. *Unit files* specify how to manage services and are usually provided by the distribution. Pacemaker can manage systemd services. Simply create a resource with ``systemd`` as the resource standard and the unit file name as the resource type. Do *not* run ``systemctl enable`` on the unit. .. important:: Make sure that any systemd services to be controlled by the cluster are *not* enabled to start at boot. .. index:: single: resource; LSB single: LSB; resources single: Linux Standard Base; resources Linux Standard Base ___________________ *LSB* resource agents, also known as `SysV-style `_, are scripts that provide start, stop, and status actions for a service. They are provided by some operating system distributions. If a full path is not given, they are assumed to be located in a directory specified when your Pacemaker software was built (usually ``/etc/init.d``). In order to be used with Pacemaker, they must conform to the `LSB specification `_ as it relates to init scripts. .. warning:: Some LSB scripts do not fully comply with the standard. For details on how to check whether your script is LSB-compatible, see the "Resource Agents" chapter of `Pacemaker Administration`. Common problems include: * Not implementing the ``status`` action * Not observing the correct exit status codes * Starting a started resource returns an error * Stopping a stopped resource returns an error .. important:: Make sure the host is *not* configured to start any LSB services at boot that will be controlled by the cluster. .. index:: single: Resource; System Services single: System Service; resources System Services _______________ Since there is more than one type of system service (``systemd`` and ``lsb``), Pacemaker supports a special ``service`` alias which intelligently figures out which one applies to a given cluster node. This is particularly useful when the cluster contains a mix of ``systemd`` and ``lsb``. If the ``service`` standard is specified, Pacemaker will try to find the named service as an LSB init script, and if none exists, a systemd unit file. .. 
index:: single: Resource; STONITH single: STONITH; resources STONITH _______ The ``stonith`` standard is used for managing fencing devices, discussed later in :ref:`fencing`. .. _primitive-resource: Resource Properties ################### These values tell the cluster which resource agent to use for the resource, where to find that resource agent and what standards it conforms to. .. table:: **Properties of a Primitive Resource** :widths: 1 4 +-------------+------------------------------------------------------------------+ | Field | Description | +=============+==================================================================+ | id | .. index:: | | | single: id; resource | | | single: resource; property, id | | | | | | Your name for the resource | +-------------+------------------------------------------------------------------+ | class | .. index:: | | | single: class; resource | | | single: resource; property, class | | | | | | The standard the resource agent conforms to. Allowed values: | | | ``lsb``, ``ocf``, ``service``, ``stonith``, and ``systemd`` | +-------------+------------------------------------------------------------------+ | description | .. index:: | | | single: description; resource | | | single: resource; property, description | | | | - | | A description of the Resource Agent, intended for local use. | - | | E.g. ``IP address for website`` | + | | Arbitrary text for user's use (ignored by Pacemaker) | +-------------+------------------------------------------------------------------+ | type | .. index:: | | | single: type; resource | | | single: resource; property, type | | | | | | The name of the Resource Agent you wish to use. E.g. | | | ``IPaddr`` or ``Filesystem`` | +-------------+------------------------------------------------------------------+ | provider | .. index:: | | | single: provider; resource | | | single: resource; property, provider | | | | | | The OCF spec allows multiple vendors to supply the same resource | | | agent. To use the OCF resource agents supplied by the Heartbeat | | | project, you would specify ``heartbeat`` here. | +-------------+------------------------------------------------------------------+ The XML definition of a resource can be queried with the **crm_resource** tool. For example: .. code-block:: none # crm_resource --resource Email --query-xml might produce: .. topic:: A system resource definition .. code-block:: xml .. note:: One of the main drawbacks to system services (lsb and systemd) is that they do not allow parameters .. topic:: An OCF resource definition .. code-block:: xml .. _resource_options: Resource Options ################ Resources have two types of options: *meta-attributes* and *instance attributes*. Meta-attributes apply to any type of resource, while instance attributes are specific to each resource agent. Resource Meta-Attributes ________________________ Meta-attributes are used by the cluster to decide how a resource should behave and can be easily set using the ``--meta`` option of the **crm_resource** command. .. list-table:: **Meta-attributes of a Primitive Resource** :class: longtable :widths: 2 2 3 5 :header-rows: 1 * - Name - Type - Default - Description * - .. _meta_priority: .. index:: single: priority; resource option single: resource; option, priority priority - :ref:`score ` - 0 - If not all resources can be active, the cluster will stop lower-priority resources in order to keep higher-priority ones active. * - .. _meta_critical: .. 
index:: single: critical; resource option single: resource; option, critical critical - :ref:`boolean ` - true - Use this value as the default for ``influence`` in all :ref:`colocation constraints ` involving this resource, as well as in the implicit colocation constraints created if this resource is in a :ref:`group `. For details, see :ref:`s-coloc-influence`. *(since 2.1.0)* * - .. _meta_target_role: .. index:: single: target-role; resource option single: resource; option, target-role target-role - :ref:`enumeration ` - Started - What state should the cluster attempt to keep this resource in? Allowed values: * ``Stopped:`` Force the resource to be stopped * ``Started:`` Allow the resource to be started (and in the case of :ref:`promotable ` clone resources, promoted if appropriate) * ``Unpromoted:`` Allow the resource to be started, but only in the unpromoted role if the resource is :ref:`promotable ` * ``Promoted:`` Equivalent to ``Started`` * - .. _meta_is_managed: .. _is_managed: .. index:: single: is-managed; resource option single: resource; option, is-managed is-managed - :ref:`boolean ` - true - If false, the cluster will not start, stop, promote, or demote the resource on any node. Recurring actions for the resource are unaffected. Maintenance mode overrides this setting. * - .. _meta_maintenance: .. _rsc_maintenance: .. index:: single: maintenance; resource option single: resource; option, maintenance maintenance - :ref:`boolean ` - false - If true, the cluster will not start, stop, promote, or demote the resource on any node, and will pause any recurring monitors (except those specifying ``role`` as ``Stopped``). If true, the :ref:`maintenance-mode ` cluster option or :ref:`maintenance ` node attribute overrides this. * - .. _meta_resource_stickiness: .. _resource-stickiness: .. index:: single: resource-stickiness; resource option single: resource; option, resource-stickiness resource-stickiness - :ref:`score ` - 1 for individual clone instances, 0 for all other resources - A score that will be added to the current node when a resource is already active. This allows running resources to stay where they are, even if they would be placed elsewhere if they were being started from a stopped state. * - .. _meta_requires: .. _requires: .. index:: single: requires; resource option single: resource; option, requires requires - :ref:`enumeration ` - ``quorum`` for resources with a ``class`` of ``stonith``, otherwise ``unfencing`` if unfencing is active in the cluster, otherwise ``fencing`` if ``stonith-enabled`` is true, otherwise ``quorum`` - Conditions under which the resource can be started. Allowed values: * ``nothing:`` The cluster can always start this resource. * ``quorum:`` The cluster can start this resource only if a majority of the configured nodes are active. * ``fencing:`` The cluster can start this resource only if a majority of the configured nodes are active *and* any failed or unknown nodes have been :ref:`fenced `. * ``unfencing:`` The cluster can only start this resource if a majority of the configured nodes are active *and* any failed or unknown nodes have been fenced *and* only on nodes that have been :ref:`unfenced `. * - .. _meta_migration_threshold: .. index:: single: migration-threshold; resource option single: resource; option, migration-threshold migration-threshold - :ref:`score ` - INFINITY - How many failures may occur for this resource on a node, before this node is marked ineligible to host this resource. 
A value of 0 indicates that this feature is disabled (the node will never be marked ineligible); by contrast, the cluster treats ``INFINITY`` (the default) as a very large but finite number. This option has an effect only if the failed operation specifies ``on-fail`` as ``restart`` (the default), and additionally for failed ``start`` operations, if the cluster property ``start-failure-is-fatal`` is ``false``. * - .. _meta_failure_timeout: .. index:: single: failure-timeout; resource option single: resource; option, failure-timeout failure-timeout - :ref:`duration ` - 0 - Ignore previously failed resource actions after this much time has passed without new failures (potentially allowing the resource back to the node on which it failed, if it previously reached its ``migration-threshold`` there). A value of 0 indicates that failures do not expire. **WARNING:** If this value is low, and pending cluster activity prevents the cluster from responding to a failure within that time, then the failure will be ignored completely and will not cause recovery of the resource, even if a recurring action continues to report failure. It should be at least greater than the longest :ref:`action timeout ` for all resources in the cluster. A value in hours or days is reasonable. * - .. _meta_multiple_active: .. index:: single: multiple-active; resource option single: resource; option, multiple-active multiple-active - :ref:`enumeration ` - stop_start - What should the cluster do if it ever finds the resource active on more than one node? Allowed values: * ``block``: mark the resource as unmanaged * ``stop_only``: stop all active instances and leave them that way * ``stop_start``: stop all active instances and start the resource in one location only * ``stop_unexpected``: stop all active instances except where the resource should be active (this should be used only when extra instances are not expected to disrupt existing instances, and the resource agent's monitor of an existing instance is capable of detecting any problems that could be caused; note that any resources ordered after this will still need to be restarted) *(since 2.1.3)* * - .. _meta_allow_migrate: .. index:: single: allow-migrate; resource option single: resource; option, allow-migrate allow-migrate - :ref:`boolean ` - true for ``ocf:pacemaker:remote`` resources, false otherwise - Whether the cluster should try to "live migrate" this resource when it needs to be moved (see :ref:`live-migration`) * - .. _meta_allow_unhealthy_nodes: .. index:: single: allow-unhealthy-nodes; resource option single: resource; option, allow-unhealthy-nodes allow-unhealthy-nodes - :ref:`boolean ` - false - Whether the resource should be able to run on a node even if the node's health score would otherwise prevent it (see :ref:`node-health`) *(since 2.1.3)* * - .. _meta_container_attribute_target: .. index:: single: container-attribute-target; resource option single: resource; option, container-attribute-target container-attribute-target - :ref:`enumeration ` - - Specific to bundle resources; see :ref:`s-bundle-attributes` As an example of setting resource options, if you performed the following commands on an LSB Email resource: .. code-block:: none # crm_resource --meta --resource Email --set-parameter priority --parameter-value 100 # crm_resource -m -r Email -p multiple-active -v block the resulting resource definition might be: .. topic:: An LSB resource with cluster options .. 
code-block:: xml

      <primitive id="Email" class="lsb" type="exim">
        <meta_attributes id="Email-meta_attributes">
          <nvpair id="Email-meta_attributes-priority" name="priority" value="100"/>
          <nvpair id="Email-meta_attributes-multiple-active" name="multiple-active" value="block"/>
        </meta_attributes>
      </primitive>

In addition to the cluster-defined meta-attributes described above, you may also configure arbitrary meta-attributes of your own choosing. Most commonly, this would be done for use in :ref:`rules `. For example, an IT department might define a custom meta-attribute to indicate which company department each resource is intended for. To reduce the chance of name collisions with cluster-defined meta-attributes added in the future, it is recommended to use a unique, organization-specific prefix for such attributes.

.. _s-resource-defaults:

Setting Global Defaults for Resource Meta-Attributes
____________________________________________________

To set a default value for a resource option, add it to the ``rsc_defaults`` section with ``crm_attribute``. For example,

.. code-block:: none

   # crm_attribute --type rsc_defaults --name is-managed --update false

would prevent the cluster from starting or stopping any of the resources in the configuration (unless of course the individual resources were specifically enabled by having their ``is-managed`` set to ``true``).

Resource Instance Attributes
____________________________

The resource agents of some resource standards (lsb and systemd *not* among them) can be given parameters which determine how they behave and which instance of a service they control.

If your resource agent supports parameters, you can add them with the ``crm_resource`` command. For example,

.. code-block:: none

   # crm_resource --resource Public-IP --set-parameter ip --parameter-value 192.0.2.2

would create an entry in the resource like this:

.. topic:: An example OCF resource with instance attributes

   .. code-block:: xml

      <primitive id="Public-IP" class="ocf" type="IPaddr" provider="heartbeat">
        <instance_attributes id="Public-IP-params">
          <nvpair id="Public-IP-ip" name="ip" value="192.0.2.2"/>
        </instance_attributes>
      </primitive>

For an OCF resource, the result would be an environment variable called ``OCF_RESKEY_ip`` with a value of ``192.0.2.2``.

The list of instance attributes supported by an OCF resource agent can be found by calling the resource agent with the ``meta-data`` command. The output contains an XML description of all the supported attributes, their purpose and default values.

.. topic:: Displaying the metadata for the Dummy resource agent template

   .. code-block:: none

      # export OCF_ROOT=/usr/lib/ocf
      # $OCF_ROOT/resource.d/pacemaker/Dummy meta-data

   .. code-block:: xml

      <?xml version="1.0"?>
      <!-- (reconstructed example; exact output may vary by Pacemaker version) -->
      <resource-agent name="Dummy" version="2.0">
        <version>1.1</version>
        <longdesc lang="en">
         This is a dummy OCF resource agent. It does absolutely nothing except
         keep track of whether it is running or not, and can be configured so
         that actions fail or take a long time. Its purpose is primarily for
         testing, and to serve as a template for resource agent writers.
        </longdesc>
        <shortdesc lang="en">Example stateless resource agent</shortdesc>
        <parameters>
          <parameter name="state">
            <longdesc lang="en">Location to store the resource state in.</longdesc>
            <shortdesc lang="en">State file</shortdesc>
            <content type="string"/>
          </parameter>
          <parameter name="passwd" reloadable="1">
            <longdesc lang="en">Fake password field</longdesc>
            <shortdesc lang="en">Password</shortdesc>
            <content type="string"/>
          </parameter>
          <parameter name="fake" reloadable="1">
            <longdesc lang="en">Fake attribute that can be changed to cause a reload</longdesc>
            <shortdesc lang="en">Fake attribute that can be changed to cause a reload</shortdesc>
            <content type="string"/>
          </parameter>
          <parameter name="op_sleep" reloadable="1">
            <longdesc lang="en">
             Number of seconds to sleep during operations. This can be used to
             test how the cluster reacts to operation timeouts.
            </longdesc>
            <shortdesc lang="en">Operation sleep duration in seconds.</shortdesc>
            <content type="string"/>
          </parameter>
          <parameter name="fail_start_on" reloadable="1">
            <longdesc lang="en">
             Start, migrate_from, and reload-agent actions will return failure
             if running on the host specified here, but the resource will run
             successfully anyway (future monitor calls will find it running).
             This can be used to test on-fail=ignore.
            </longdesc>
            <shortdesc lang="en">Report bogus start failure on specified host</shortdesc>
            <content type="string"/>
          </parameter>
          <parameter name="envfile" reloadable="1">
            <longdesc lang="en">
             If this is set, the environment will be dumped to this file for
             every call.
            </longdesc>
            <shortdesc lang="en">Environment dump file</shortdesc>
            <content type="string"/>
          </parameter>
        </parameters>
        <actions>
          <action name="start"        timeout="20s"/>
          <action name="stop"         timeout="20s"/>
          <action name="monitor"      timeout="20s" interval="10s" depth="0"/>
          <action name="reload"       timeout="20s"/>
          <action name="reload-agent" timeout="20s"/>
          <action name="migrate_to"   timeout="20s"/>
          <action name="migrate_from" timeout="20s"/>
          <action name="validate-all" timeout="20s"/>
          <action name="meta-data"    timeout="5s"/>
        </actions>
      </resource-agent>

Pacemaker Remote Resources
##########################

:ref:`Pacemaker Remote ` nodes are defined by resources.

.. _remote_nodes: ..
index:: single: node; remote single: Pacemaker Remote; remote node single: remote node Remote nodes ____________ A remote node is defined by a connection resource using the special, built-in **ocf:pacemaker:remote** resource agent. .. list-table:: **ocf:pacemaker:remote Instance Attributes** :class: longtable :widths: 2 2 3 5 :header-rows: 1 * - Name - Type - Default - Description * - .. _remote_server: .. index:: pair: remote node; server server - :ref:`text ` - resource ID - Hostname or IP address used to connect to the remote node. The remote executor on the remote node must be configured to accept connections on this address. * - .. _remote_port: .. index:: pair: remote node; port port - :ref:`port ` - 3121 - TCP port on the remote node used for its Pacemaker Remote connection. The remote executor on the remote node must be configured to listen on this port. * - .. _remote_reconnect_interval: .. index:: pair: remote node; reconnect_interval reconnect_interval - :ref:`duration ` - 0 - If positive, the cluster will attempt to reconnect to a remote node at this interval after an active connection has been lost. Otherwise, the cluster will attempt to reconnect immediately (after any fencing, if needed). .. _guest_nodes: .. index:: single: node; guest single: Pacemaker Remote; guest node single: guest node Guest Nodes ___________ When configuring a virtual machine as a guest node, the virtual machine is created using one of the usual resource agents for that purpose (for example, **ocf:heartbeat:VirtualDomain** or **ocf:heartbeat:Xen**), with additional meta-attributes. No restrictions are enforced on what agents may be used to create a guest node, but obviously the agent must create a distinct environment capable of running the remote executor and cluster resources. An additional requirement is that fencing the node hosting the guest node resource must be sufficient for ensuring the guest node is stopped. This means that not all hypervisors supported by **VirtualDomain** may be used to create guest nodes; if the guest can survive the hypervisor being fenced, it is unsuitable for use as a guest node. .. list-table:: **Guest node meta-attributes** :class: longtable :widths: 2 2 3 5 :header-rows: 1 * - Name - Type - Default - Description * - .. _meta_remote_node: .. index:: single: remote-node; resource option single: resource; option, remote-node remote-node - :ref:`text ` - - If specified, this resource defines a guest node using this node name. The guest must be configured to run the remote executor when it is started. This value *must not* be the same as any resource or node ID. * - .. _meta_remote_addr: .. index:: single: remote-addr; resource option single: resource; option, remote-addr remote-addr - :ref:`text ` - value of ``remote-node`` - If ``remote-node`` is specified, the hostname or IP address used to connect to the guest. The remote executor on the guest must be configured to accept connections on this address. * - .. _meta_remote_port: .. index:: single: remote-port; resource option single: resource; option, remote-port remote-port - :ref:`port ` - 3121 - If ``remote-node`` is specified, the port on the guest used for its Pacemaker Remote connection. The remote executor on the guest must be configured to listen on this port. * - .. _meta_remote_connect_timeout: .. 
index:: single: remote-connect-timeout; resource option single: resource; option, remote-connect-timeout remote-connect-timeout - :ref:`timeout ` - 60s - If ``remote-node`` is specified, how long before a pending guest connection will time out. * - .. _meta_remote_allow_migrate: .. index:: single: remote-allow-migrate; resource option single: resource; option, remote-allow-migrate remote-allow-migrate - :ref:`boolean ` - true - If ``remote-node`` is specified, this acts as the ``allow-migrate`` meta-attribute for its implicitly created remote connection resource (``ocf:pacemaker:remote``). Removing Pacemaker Remote Nodes _______________________________ If the resource creating a remote node connection or guest node is removed from the configuration, status output may continue to show the affected node (as offline). If you want to get rid of that output, run the following command, replacing ``$NODE_NAME`` appropriately: .. code-block:: none # crm_node --force --remove $NODE_NAME .. WARNING:: Be absolutely sure that there are no references to the node's resource in the configuration before running the above command.
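For reference, a remote node connection resource tying together the instance attributes described above might look like the following sketch (the node name, address, and monitor interval are hypothetical):

.. code-block:: xml

   <primitive id="remote1" class="ocf" provider="pacemaker" type="remote">
     <instance_attributes id="remote1-instance_attributes">
       <!-- address where the remote executor is listening; defaults to the
            resource ID if omitted -->
       <nvpair id="remote1-server" name="server" value="192.0.2.10"/>
     </instance_attributes>
     <operations>
       <!-- recurring monitor of the connection itself -->
       <op id="remote1-monitor" name="monitor" interval="30s"/>
     </operations>
   </primitive>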