diff --git a/doc/sphinx/Pacemaker_Administration/moving.rst b/doc/sphinx/Pacemaker_Administration/moving.rst index 5881dd2910..2c3c4449a7 100644 --- a/doc/sphinx/Pacemaker_Administration/moving.rst +++ b/doc/sphinx/Pacemaker_Administration/moving.rst @@ -1,305 +1,303 @@ Moving Resources ---------------- .. index:: single: resource; move Moving Resources Manually ######################### There are primarily two occasions when you would want to move a resource from its current location: when the whole node is under maintenance, and when a single resource needs to be moved. .. index:: single: standby mode single: node; standby mode Standby Mode ____________ Since everything eventually comes down to a score, you could create constraints for every resource to prevent them from running on one node. While Pacemaker configuration can seem convoluted at times, not even we would require this of administrators. Instead, you can set a special node attribute which tells the cluster "don't let anything run here". There is even a helpful tool to help query and set it, called ``crm_standby``. To check the standby status of the current machine, run: .. code-block:: none # crm_standby -G A value of ``on`` indicates that the node is *not* able to host any resources, while a value of ``off`` says that it *can*. You can also check the status of other nodes in the cluster by specifying the `--node` option: .. code-block:: none # crm_standby -G --node sles-2 To change the current node's standby status, use ``-v`` instead of ``-G``: .. code-block:: none # crm_standby -v on Again, you can change another host's value by supplying a hostname with ``--node``. A cluster node in standby mode will not run resources, but still contributes to quorum, and may fence or be fenced by nodes. Moving One Resource ___________________ When only one resource is required to move, we could do this by creating location constraints. However, once again we provide a user-friendly shortcut as part of the ``crm_resource`` command, which creates and modifies the extra constraints for you. If ``Email`` were running on ``sles-1`` and you wanted it moved to a specific location, the command would look something like: .. code-block:: none # crm_resource -M -r Email -H sles-2 Behind the scenes, the tool will create the following location constraint: .. code-block:: xml It is important to note that subsequent invocations of ``crm_resource -M`` are not cumulative. So, if you ran these commands: .. code-block:: none # crm_resource -M -r Email -H sles-2 # crm_resource -M -r Email -H sles-3 then it is as if you had never performed the first command. To allow the resource to move back again, use: .. code-block:: none # crm_resource -U -r Email Note the use of the word *allow*. The resource *can* move back to its original location, but depending on ``resource-stickiness``, location constraints, and so forth, it might stay where it is. To be absolutely certain that it moves back to ``sles-1``, move it there before issuing the call to ``crm_resource -U``: .. code-block:: none # crm_resource -M -r Email -H sles-1 # crm_resource -U -r Email Alternatively, if you only care that the resource should be moved from its current location, try: .. code-block:: none # crm_resource -B -r Email which will instead create a negative constraint, like: .. code-block:: xml This will achieve the desired effect, but will also have long-term consequences. 
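The negative constraint is simply a location constraint with a ``-INFINITY`` score for the node the resource is currently active on. A sketch, assuming ``Email`` is active on ``sles-1`` (the constraint id shown here is only illustrative; the tool generates it):

.. code-block:: xml

   <rsc_location id="cli-ban-Email-on-sles-1" rsc="Email" node="sles-1" score="-INFINITY"/>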
As the tool will warn you, the creation of a ``-INFINITY`` constraint will prevent the resource from running on that node until ``crm_resource -U`` is used. This includes the situation where every other cluster node is no longer available! In some cases, such as when ``resource-stickiness`` is set to ``INFINITY``, it is possible that you will end up with nodes with the same score, forcing the cluster to choose one (which may not be the one you want). The tool can detect some of these cases and deals with them by creating both positive and negative constraints. For example: .. code-block:: xml which has the same long-term consequences as discussed earlier. Moving Resources Due to Connectivity Changes ############################################ You can configure the cluster to move resources when external connectivity is lost in two steps. .. index:: single: ocf:pacemaker:ping resource single: ping resource Tell Pacemaker to Monitor Connectivity ______________________________________ First, add an ``ocf:pacemaker:ping`` resource to the cluster. The ``ping`` resource uses the system utility of the same name to a test whether a list of machines (specified by DNS hostname or IP address) are reachable, and uses the results to maintain a node attribute. The node attribute is called ``pingd`` by default, but is customizable in order to allow multiple ping groups to be defined. Normally, the ping resource should run on all cluster nodes, which means that you'll need to create a clone. A template for this can be found below, along with a description of the most interesting parameters. -.. table:: **Commonly Used ocf:pacemaker:ping Resource Parameters** +.. list-table:: **Commonly Used ocf:pacemaker:ping Resource Parameters** :widths: 20 80 - - +--------------------+--------------------------------------------------------------+ - | Resource Parameter | Description | - +====================+==============================================================+ - | dampen | .. index:: | - | | single: ocf:pacemaker:ping resource; dampen parameter | - | | single: dampen; ocf:pacemaker:ping resource parameter | - | | | - | | The time to wait (dampening) for further changes to occur. | - | | Use this to prevent a resource from bouncing around the | - | | cluster when cluster nodes notice the loss of connectivity | - | | at slightly different times. | - +--------------------+--------------------------------------------------------------+ - | multiplier | .. index:: | - | | single: ocf:pacemaker:ping resource; multiplier parameter | - | | single: multiplier; ocf:pacemaker:ping resource parameter | - | | | - | | The number of connected ping nodes gets multiplied by this | - | | value to get a score. Useful when there are multiple ping | - | | nodes configured. | - +--------------------+--------------------------------------------------------------+ - | host_list | .. index:: | - | | single: ocf:pacemaker:ping resource; host_list parameter | - | | single: host_list; ocf:pacemaker:ping resource parameter | - | | | - | | The machines to contact in order to determine the current | - | | connectivity status. Allowed values include resolvable DNS | - | | connectivity host names, IPv4 addresses, and IPv6 addresses. | - +--------------------+--------------------------------------------------------------+ + :header-rows: 1 + + * - Resource Parameter + - Description + * - dampen + - .. 
index:: + single: ocf:pacemaker:ping resource; dampen parameter + single: dampen; ocf:pacemaker:ping resource parameter + + The time to wait (dampening) for further changes to occur. Use this to + prevent a resource from bouncing around the cluster when cluster nodes + notice the loss of connectivity at slightly different times. + * - multiplier + - .. index:: + single: ocf:pacemaker:ping resource; multiplier parameter + single: multiplier; ocf:pacemaker:ping resource parameter + + The number of connected ping nodes gets multiplied by this value to get + a score. Useful when there are multiple ping nodes configured. + * - host_list + - .. index:: + single: ocf:pacemaker:ping resource; host_list parameter + single: host_list; ocf:pacemaker:ping resource parameter + + The machines to contact in order to determine the current connectivity + status. Allowed values include resolvable DNS connectivity host names, + IPv4 addresses, and IPv6 addresses. .. topic:: Example ping resource that checks node connectivity once every minute .. code-block:: xml .. important:: You're only half done. The next section deals with telling Pacemaker how to deal with the connectivity status that ``ocf:pacemaker:ping`` is recording. Tell Pacemaker How to Interpret the Connectivity Data _____________________________________________________ .. important:: Before attempting the following, make sure you understand rules. See the "Rules" chapter of the *Pacemaker Explained* document for details. There are a number of ways to use the connectivity data. The most common setup is for people to have a single ping target (for example, the service network's default gateway), to prevent the cluster from running a resource on any unconnected node. .. topic:: Don't run a resource on unconnected nodes .. code-block:: xml A more complex setup is to have a number of ping targets configured. You can require the cluster to only run resources on nodes that can connect to all (or a minimum subset) of them. .. topic:: Run only on nodes connected to three or more ping targets .. code-block:: xml ... ... ... Alternatively, you can tell the cluster only to *prefer* nodes with the best connectivity, by using ``score-attribute`` in the rule. Just be sure to set ``multiplier`` to a value higher than that of ``resource-stickiness`` (and don't set either of them to ``INFINITY``). .. topic:: Prefer node with most connected ping nodes .. code-block:: xml It is perhaps easier to think of this in terms of the simple constraints that the cluster translates it into. For example, if ``sles-1`` is connected to all five ping nodes but ``sles-2`` is only connected to two, then it would be as if you instead had the following constraints in your configuration: .. topic:: How the cluster translates the above location constraint .. code-block:: xml The advantage is that you don't have to manually update any constraints whenever your network connectivity changes. You can also combine the concepts above into something even more complex. The example below shows how you can prefer the node with the most connected ping nodes provided they have connectivity to at least three (again assuming that ``multiplier`` is set to 1000). .. topic:: More complex example of choosing location based on connectivity .. 
code-block:: xml diff --git a/doc/sphinx/Pacemaker_Administration/tools.rst b/doc/sphinx/Pacemaker_Administration/tools.rst index ffcc379505..7911c335bf 100644 --- a/doc/sphinx/Pacemaker_Administration/tools.rst +++ b/doc/sphinx/Pacemaker_Administration/tools.rst @@ -1,562 +1,576 @@ .. index:: command-line tool Using Pacemaker Command-Line Tools ---------------------------------- .. index:: single: command-line tool; output format .. _cmdline_output: Controlling Command Line Output ############################### Some of the pacemaker command line utilities have been converted to a new output system. Among these tools are ``crm_mon`` and ``stonith_admin``. This is an ongoing project, and more tools will be converted over time. This system lets you control the formatting of output with ``--output-as=`` and the destination of output with ``--output-to=``. The available formats vary by tool, but at least plain text and XML are supported by all tools that use the new system. The default format is plain text. The default destination is stdout but can be redirected to any file. Some formats support command line options for changing the style of the output. For instance: .. code-block:: none # crm_mon --help-output Usage: crm_mon [OPTION?] Provides a summary of cluster's current state. Outputs varying levels of detail in a number of different formats. Output Options: --output-as=FORMAT Specify output format as one of: console (default), html, text, xml --output-to=DEST Specify file name for output (or "-" for stdout) --html-cgi Add text needed to use output in a CGI program --html-stylesheet=URI Link to an external CSS stylesheet --html-title=TITLE Page title .. index:: single: crm_mon single: command-line tool; crm_mon .. _crm_mon: Monitor a Cluster with crm_mon ############################## The ``crm_mon`` utility displays the current state of an active cluster. It can show the cluster status organized by node or by resource, and can be used in either single-shot or dynamically updating mode. It can also display operations performed and information about failures. Using this tool, you can examine the state of the cluster for irregularities, and see how it responds when you cause or simulate failures. See the manual page or the output of ``crm_mon --help`` for a full description of its many options. .. topic:: Sample output from crm_mon -1 .. code-block:: none Cluster Summary: * Stack: corosync * Current DC: node2 (version 2.0.0-1) - partition with quorum * Last updated: Mon Jan 29 12:18:42 2018 * Last change: Mon Jan 29 12:18:40 2018 by root via crm_attribute on node3 * 5 nodes configured * 2 resources configured Node List: * Online: [ node1 node2 node3 node4 node5 ] * Active resources: * Fencing (stonith:fence_xvm): Started node1 * IP (ocf:heartbeat:IPaddr2): Started node2 .. topic:: Sample output from crm_mon -n -1 .. code-block:: none Cluster Summary: * Stack: corosync * Current DC: node2 (version 2.0.0-1) - partition with quorum * Last updated: Mon Jan 29 12:21:48 2018 * Last change: Mon Jan 29 12:18:40 2018 by root via crm_attribute on node3 * 5 nodes configured * 2 resources configured * Node List: * Node node1: online * Fencing (stonith:fence_xvm): Started * Node node2: online * IP (ocf:heartbeat:IPaddr2): Started * Node node3: online * Node node4: online * Node node5: online As mentioned in an earlier chapter, the DC is the node is where decisions are made. The cluster elects a node to be DC as needed. 
The only significance of the choice of DC to an administrator is the fact that its logs will have the most information about why decisions were made. .. index:: pair: crm_mon; CSS .. _crm_mon_css: Styling crm_mon HTML output ___________________________ Various parts of ``crm_mon``'s HTML output have a CSS class associated with them. Not everything does, but some of the most interesting portions do. In the following example, the status of each node has an ``online`` class and the details of each resource have an ``rsc-ok`` class. .. code-block:: html

   <h2>Node List</h2>
   <ul>
   <li>
   <span>Node: cluster01 <span class="online">online</span></span>
   </li>
   <li><ul><li><span class="rsc-ok">ping (ocf::pacemaker:ping): Started</span></li></ul></li>
   <li>
   <span>Node: cluster02 <span class="online">online</span></span>
   </li>
   <li><ul><li><span class="rsc-ok">ping (ocf::pacemaker:ping): Started</span></li></ul></li>
   </ul>
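HTML output like the above can be produced directly by ``crm_mon`` using the output options shown earlier; a minimal sketch (the destination path is illustrative):

.. code-block:: none

   # crm_mon --output-as=html --output-to=/var/www/html/cluster.html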
By default, a stylesheet for styling these classes is included in the head of the HTML output. The relevant portions of this stylesheet that would be used in the above example is: .. code-block:: css If you want to override some or all of the styling, simply create your own stylesheet, place it on a web server, and pass ``--html-stylesheet=`` to ``crm_mon``. The link is added after the default stylesheet, so your changes take precedence. You don't need to duplicate the entire default. Only include what you want to change. .. index:: single: cibadmin single: command-line tool; cibadmin .. _cibadmin: Edit the CIB XML with cibadmin ############################## The most flexible tool for modifying the configuration is Pacemaker's ``cibadmin`` command. With ``cibadmin``, you can query, add, remove, update or replace any part of the configuration. All changes take effect immediately, so there is no need to perform a reload-like operation. The simplest way of using ``cibadmin`` is to use it to save the current configuration to a temporary file, edit that file with your favorite text or XML editor, and then upload the revised configuration. .. topic:: Safely using an editor to modify the cluster configuration .. code-block:: none # cibadmin --query > tmp.xml # vi tmp.xml # cibadmin --replace --xml-file tmp.xml Some of the better XML editors can make use of a RELAX NG schema to help make sure any changes you make are valid. The schema describing the configuration can be found in ``pacemaker.rng``, which may be deployed in a location such as ``/usr/share/pacemaker`` depending on your operating system distribution and how you installed the software. If you want to modify just one section of the configuration, you can query and replace just that section to avoid modifying any others. .. topic:: Safely using an editor to modify only the resources section .. code-block:: none # cibadmin --query --scope resources > tmp.xml # vi tmp.xml # cibadmin --replace --scope resources --xml-file tmp.xml To quickly delete a part of the configuration, identify the object you wish to delete by XML tag and id. For example, you might search the CIB for all STONITH-related configuration: .. topic:: Searching for STONITH-related configuration items .. code-block:: none # cibadmin --query | grep stonith If you wanted to delete the ``primitive`` tag with id ``child_DoFencing``, you would run: .. code-block:: none # cibadmin --delete --xml-text '' See the cibadmin man page for more options. .. warning:: Never edit the live ``cib.xml`` file directly. Pacemaker will detect such changes and refuse to use the configuration. .. index:: single: crm_shadow single: command-line tool; crm_shadow .. _crm_shadow: Batch Configuration Changes with crm_shadow ########################################### Often, it is desirable to preview the effects of a series of configuration changes before updating the live configuration all at once. For this purpose, ``crm_shadow`` creates a "shadow" copy of the configuration and arranges for all the command-line tools to use it. To begin, simply invoke ``crm_shadow --create`` with a name of your choice, and follow the simple on-screen instructions. Shadow copies are identified with a name to make it possible to have more than one. .. warning:: Read this section and the on-screen instructions carefully; failure to do so could result in destroying the cluster's active configuration! .. topic:: Creating and displaying the active sandbox .. 
code-block:: none # crm_shadow --create test Setting up shadow instance Type Ctrl-D to exit the crm_shadow shell shadow[test]: shadow[test] # crm_shadow --which test From this point on, all cluster commands will automatically use the shadow copy instead of talking to the cluster's active configuration. Once you have finished experimenting, you can either make the changes active via the ``--commit`` option, or discard them using the ``--delete`` option. Again, be sure to follow the on-screen instructions carefully! For a full list of ``crm_shadow`` options and commands, invoke it with the ``--help`` option. .. topic:: Use sandbox to make multiple changes all at once, discard them, and verify real configuration is untouched .. code-block:: none shadow[test] # crm_failcount -r rsc_c001n01 -G scope=status name=fail-count-rsc_c001n01 value=0 shadow[test] # crm_standby --node c001n02 -v on shadow[test] # crm_standby --node c001n02 -G scope=nodes name=standby value=on shadow[test] # cibadmin --erase --force shadow[test] # cibadmin --query shadow[test] # crm_shadow --delete test --force Now type Ctrl-D to exit the crm_shadow shell shadow[test] # exit # crm_shadow --which No active shadow configuration defined # cibadmin -Q See the next section, :ref:`crm_simulate`, for how to test your changes before committing them to the live cluster. .. index:: single: crm_simulate single: command-line tool; crm_simulate .. _crm_simulate: Simulate Cluster Activity with crm_simulate ########################################### The command-line tool `crm_simulate` shows the results of the same logic the cluster itself uses to respond to a particular cluster configuration and status. As always, the man page is the primary documentation, and should be consulted for further details. This section aims for a better conceptual explanation and practical examples. Replaying cluster decision-making logic _______________________________________ At any given time, one node in a Pacemaker cluster will be elected DC, and that node will run Pacemaker's scheduler to make decisions. Each time decisions need to be made (a "transition"), the DC will have log messages like "Calculated transition ... saving inputs in ..." with a file name. You can grab the named file and replay the cluster logic to see why particular decisions were made. The file contains the live cluster configuration at that moment, so you can also look at it directly to see the value of node attributes, etc., at that time. The simplest usage is (replacing $FILENAME with the actual file name): .. topic:: Simulate cluster response to a given CIB .. code-block:: none # crm_simulate --simulate --xml-file $FILENAME That will show the cluster state when the process started, the actions that need to be taken ("Transition Summary"), and the resulting cluster state if the actions succeed. Most actions will have a brief description of why they were required. The transition inputs may be compressed. ``crm_simulate`` can handle these compressed files directly, though if you want to edit the file, you'll need to uncompress it first. You can do the same simulation for the live cluster configuration at the current moment. This is useful mainly when using ``crm_shadow`` to create a sandbox version of the CIB; the ``--live-check`` option will use the shadow CIB if one is in effect. .. topic:: Simulate cluster response to current live CIB or shadow CIB .. 
code-block:: none # crm_simulate --simulate --live-check Why decisions were made _______________________ To get further insight into the "why", it gets user-unfriendly very quickly. If you add the ``--show-scores`` option, you will also see all the scores that went into the decision-making. The node with the highest cumulative score for a resource will run it. You can look for ``-INFINITY`` scores in particular to see where complete bans came into effect. You can also add ``-VVVV`` to get more detailed messages about what's happening under the hood. You can add up to two more V's even, but that's usually useful only if you're a masochist or tracing through the source code. Visualizing the action sequence _______________________________ Another handy feature is the ability to generate a visual graph of the actions needed, using the ``--save-dotfile`` option. This relies on the separate Graphviz [#]_ project. .. topic:: Generate a visual graph of cluster actions from a saved CIB .. code-block:: none # crm_simulate --simulate --xml-file $FILENAME --save-dotfile $FILENAME.dot # dot $FILENAME.dot -Tsvg > $FILENAME.svg ``$FILENAME.dot`` will contain a GraphViz representation of the cluster's response to your changes, including all actions with their ordering dependencies. ``$FILENAME.svg`` will be the same information in a standard graphical format that you can view in your browser or other app of choice. You could, of course, use other ``dot`` options to generate other formats. How to interpret the graphical output: * Bubbles indicate actions, and arrows indicate ordering dependencies * Resource actions have text of the form ``__ `` indicating that the specified action will be executed for the specified resource on the specified node, once if interval is 0 or at specified recurring interval otherwise * Actions with black text will be sent to the executor (that is, the appropriate agent will be invoked) * Actions with orange text are "pseudo" actions that the cluster uses internally for ordering but require no real activity * Actions with a solid green border are part of the transition (that is, the cluster will attempt to execute them in the given order -- though a transition can be interrupted by action failure or new events) * Dashed arrows indicate dependencies that are not present in the transition graph * Actions with a dashed border will not be executed. If the dashed border is blue, the cluster does not feel the action needs to be executed. If the dashed border is red, the cluster would like to execute the action but cannot. Any actions depending on an action with a dashed border will not be able to execute. * Loops should not happen, and should be reported as a bug if found. .. topic:: Small Cluster Transition .. image:: ../shared/images/Policy-Engine-small.png :alt: An example transition graph as represented by Graphviz :align: center In the above example, it appears that a new node, ``pcmk-2``, has come online and that the cluster is checking to make sure ``rsc1``, ``rsc2`` and ``rsc3`` are not already running there (indicated by the ``rscN_monitor_0`` entries). Once it did that, and assuming the resources were not active there, it would have liked to stop ``rsc1`` and ``rsc2`` on ``pcmk-1`` and move them to ``pcmk-2``. However, there appears to be some problem and the cluster cannot or is not permitted to perform the stop actions which implies it also cannot perform the start actions. For some reason, the cluster does not want to start ``rsc3`` anywhere. .. 
topic:: Complex Cluster Transition .. image:: ../shared/images/Policy-Engine-big.png :alt: Complex transition graph that you're not expected to be able to read :align: center What-if scenarios _________________ You can make changes to the saved or shadow CIB and simulate it again, to see how Pacemaker would react differently. You can edit the XML by hand, use command-line tools such as ``cibadmin`` with either a shadow CIB or the ``CIB_file`` environment variable set to the filename, or use higher-level tool support (see the man pages of the specific tool you're using for how to perform actions on a saved CIB file rather than the live CIB). You can also inject node failures and/or action failures into the simulation; see the ``crm_simulate`` man page for more details. This capability is useful when using a shadow CIB to edit the configuration. Before committing the changes to the live cluster with ``crm_shadow --commit``, you can use ``crm_simulate`` to see how the cluster will react to the changes. .. _crm_attribute: .. index:: single: attrd_updater single: command-line tool; attrd_updater single: crm_attribute single: command-line tool; crm_attribute Manage Node Attributes, Cluster Options and Defaults with crm_attribute and attrd_updater ######################################################################################### ``crm_attribute`` and ``attrd_updater`` are confusingly similar tools with subtle differences. ``attrd_updater`` can query and update node attributes. ``crm_attribute`` can query and update not only node attributes, but also cluster options, resource defaults, and operation defaults. To understand the differences, it helps to understand the various types of node attribute. -.. table:: **Types of Node Attributes** +.. list-table:: **Types of Node Attributes** :widths: 20 16 16 16 16 16 - - +-----------+----------+-------------------+------------------+----------------+----------------+ - | Type | Recorded | Recorded in | Survive full | Manageable by | Manageable by | - | | in CIB? | attribute manager | cluster restart? | crm_attribute? | attrd_updater? | - | | | memory? | | | | - +===========+==========+===================+==================+================+================+ - | permanent | yes | no | yes | yes | no | - +-----------+----------+-------------------+------------------+----------------+----------------+ - | transient | yes | yes | no | yes | yes | - +-----------+----------+-------------------+------------------+----------------+----------------+ - | private | no | yes | no | no | yes | - +-----------+----------+-------------------+------------------+----------------+----------------+ + :header-rows: 1 + + * - Type + - Recorded in CIB? + - Recorded in attribute manager memory? + - Survive full cluster restart? + - Manageable by by crm_attribute? + - Manageable by attrd_updater? + * - permanent + - yes + - no + - yes + - yes + - no + * - transient + - yes + - yes + - no + - yes + - yes + * - private + - no + - yes + - no + - no + - yes As you can see from the table above, ``crm_attribute`` can manage permanent and transient node attributes, while ``attrd_updater`` can manage transient and private node attributes. 
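To make that division of labor concrete, here is a hedged sketch of typical invocations -- a permanent attribute set with ``crm_attribute``, a transient attribute set with either tool, and a private attribute set with ``attrd_updater`` (the node and attribute names are made up for the example):

.. code-block:: none

   # crm_attribute --node sles-1 --name site --update office --lifetime forever
   # crm_attribute --node sles-1 --name probe-ok --update 1 --lifetime reboot
   # attrd_updater --node sles-1 --name probe-ok --update 1
   # attrd_updater --node sles-1 --name scratch-value --update 1 --private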
The difference between the two tools lies mainly in *how* they update node attributes: ``attrd_updater`` always contacts the Pacemaker attribute manager directly, while ``crm_attribute`` will contact the attribute manager only for transient node attributes, and will instead modify the CIB directly for permanent node attributes (and for transient node attributes when unable to contact the attribute manager). By contacting the attribute manager directly, ``attrd_updater`` can change an attribute's "dampening" (whether changes are immediately flushed to the CIB or after a specified amount of time, to minimize disk writes for frequent changes), set private node attributes (which are never written to the CIB), and set attributes for nodes that don't yet exist. By modifying the CIB directly, ``crm_attribute`` can set permanent node attributes (which are only in the CIB and not managed by the attribute manager), and can be used with saved CIB files and shadow CIBs. However a transient node attribute is set, it is synchronized between the CIB and the attribute manager, on all nodes. .. index:: single: crm_failcount single: command-line tool; crm_failcount single: crm_node single: command-line tool; crm_node single: crm_report single: command-line tool; crm_report single: crm_standby single: command-line tool; crm_standby single: crm_verify single: command-line tool; crm_verify single: stonith_admin single: command-line tool; stonith_admin Other Commonly Used Tools ######################### Other command-line tools include: * ``crm_failcount``: query or delete resource fail counts * ``crm_node``: manage cluster nodes * ``crm_report``: generate a detailed cluster report for bug submissions * ``crm_resource``: manage cluster resources * ``crm_standby``: manage standby status of nodes * ``crm_verify``: validate a CIB * ``stonith_admin``: manage fencing devices See the manual pages for details. .. rubric:: Footnotes .. [#] Graph visualization software. See http://www.graphviz.org/ for details. diff --git a/doc/sphinx/Pacemaker_Administration/upgrading.rst b/doc/sphinx/Pacemaker_Administration/upgrading.rst index 1c9ea062cc..9d87bca571 100644 --- a/doc/sphinx/Pacemaker_Administration/upgrading.rst +++ b/doc/sphinx/Pacemaker_Administration/upgrading.rst @@ -1,566 +1,579 @@ .. index:: upgrade Upgrading a Pacemaker Cluster ----------------------------- .. index:: version Pacemaker Versioning #################### Pacemaker has an overall release version, plus separate version numbers for certain internal components. .. index:: single: version; release * **Pacemaker release version:** This version consists of three numbers (*x.y.z*). The major version number (the *x* in *x.y.z*) increases when at least some rolling upgrades are not possible from the previous major version. For example, a rolling upgrade from 1.0.8 to 1.1.15 should always be supported, but a rolling upgrade from 1.0.8 to 2.0.0 may not be possible. The minor version (the *y* in *x.y.z*) increases when there are significant changes in cluster default behavior, tool behavior, and/or the API interface (for software that utilizes Pacemaker libraries). The main benefit is to alert you to pay closer attention to the release notes, to see if you might be affected. The release counter (the *z* in *x.y.z*) is increased with all public releases of Pacemaker, which typically include both bug fixes and new features. .. 
index:: single: feature set single: version; feature set * **CRM feature set:** This version number applies to the communication between full cluster nodes, and is used to avoid problems in mixed-version clusters. The major version number increases when nodes with different versions would not work (rolling upgrades are not allowed). The minor version number increases when mixed-version clusters are allowed only during rolling upgrades. The minor-minor version number is ignored, but allows resource agents to detect cluster support for various features. [#]_ Pacemaker ensures that the longest-running node is the cluster's DC. This ensures new features are not enabled until all nodes are upgraded to support them. .. index:: single: version; Pacemaker Remote protocol * **Pacemaker Remote protocol version:** This version applies to communication between a Pacemaker Remote node and the cluster. It increases when an older cluster node would have problems hosting the connection to a newer Pacemaker Remote node. To avoid these problems, Pacemaker Remote nodes will accept connections only from cluster nodes with the same or newer Pacemaker Remote protocol version. Unlike with CRM feature set differences between full cluster nodes, mixed Pacemaker Remote protocol versions between Pacemaker Remote nodes and full cluster nodes are fine, as long as the Pacemaker Remote nodes have the older version. This can be useful, for example, to host a legacy application in an older operating system version used as a Pacemaker Remote node. .. index:: single: version; XML schema * **XML schema version:** Pacemaker’s configuration syntax — what's allowed in the Configuration Information Base (CIB) — has its own version. This allows the configuration syntax to evolve over time while still allowing clusters with older configurations to work without change. .. index:: single: upgrade; methods Upgrading Cluster Software ########################## There are three approaches to upgrading a cluster, each with advantages and disadvantages. -.. table:: **Upgrade Methods** +.. 
list-table:: **Upgrade Methods** :widths: 16 14 14 14 14 14 14 + :header-rows: 1 - +---------------------------------------------------+----------+----------+--------+---------+----------+----------+ - | Method | Available| Can be | Service| Service | Exercises| Allows | - | | between | used with| outage | recovery| failover | change of| - | | all | Pacemaker| during | during | logic | messaging| - | | versions | Remote | upgrade| upgrade | | layer | - | | | nodes | | | | [#]_ | - +===================================================+==========+==========+========+=========+==========+==========+ - | Complete cluster shutdown | yes | yes | always | N/A | no | yes | - +---------------------------------------------------+----------+----------+--------+---------+----------+----------+ - | Rolling (node by node) | no | yes | always | yes | yes | no | - | | | | [#]_ | | | | - +---------------------------------------------------+----------+----------+--------+---------+----------+----------+ - | Detach and reattach | yes | no | only | no | no | yes | - | | | | due to | | | | - | | | | failure| | | | - +---------------------------------------------------+----------+----------+--------+---------+----------+----------+ + * - Method + - Available between all versions + - Can be used with Pacemaker Remote nodes + - Service outage during upgrade + - Service recovery during upgrade + - Exercises failover logic + - Allows change of messaging layer [#]_ + * - Complete cluster shutdown + - yes + - yes + - always + - N/A + - no + - yes + * - Rolling (node by node) + - no + - yes + - always [#]_ + - yes + - yes + - no + * - Detach and reattach + - yes + - no + - only due to failure + - no + - no + - yes .. index:: single: upgrade; shutdown Complete Cluster Shutdown _________________________ In this scenario, one shuts down all cluster nodes and resources, then upgrades all the nodes before restarting the cluster. #. On each node: a. Shutdown the cluster software (pacemaker and the messaging layer). #. Upgrade the Pacemaker software. This may also include upgrading the messaging layer and/or the underlying operating system. #. Check the configuration with the ``crm_verify`` tool. #. On each node: a. Start the cluster software. Currently, only Corosync version 2 and greater is supported as the cluster layer, but if another stack is supported in the future, the stack does not need to be the same one before the upgrade. One variation of this approach is to build a new cluster on new hosts. This allows the new version to be tested beforehand, and minimizes downtime by having the new nodes ready to be placed in production as soon as the old nodes are shut down. .. index:: single: upgrade; rolling upgrade Rolling (node by node) ______________________ In this scenario, each node is removed from the cluster, upgraded, and then brought back online, until all nodes are running the newest version. Special considerations when planning a rolling upgrade: * If you plan to upgrade other cluster software -- such as the messaging layer -- at the same time, consult that software's documentation for its compatibility with a rolling upgrade. * If the major version number is changing in the Pacemaker version you are upgrading to, a rolling upgrade may not be possible. Read the new version's release notes (as well the information here) for what limitations may exist. * If the CRM feature set is changing in the Pacemaker version you are upgrading to, you should run a mixed-version cluster only during a small rolling upgrade window. 
If one of the older nodes drops out of the cluster for any reason, it will not be able to rejoin until it is upgraded. * If the Pacemaker Remote protocol version is changing, all cluster nodes should be upgraded before upgrading any Pacemaker Remote nodes. See the `Pacemaker release calendar `_ on the ClusterLabs wiki to figure out whether the CRM feature set and/or Pacemaker Remote protocol version changed between the Pacemaker release versions in your rolling upgrade. To perform a rolling upgrade, on each node in turn: #. Put the node into standby mode, and wait for any active resources to be moved cleanly to another node. (This step is optional, but allows you to deal with any resource issues before the upgrade.) #. Shut down Pacemaker or ``pacemaker-remoted``. #. If a cluster node, shut down the messaging layer. #. Upgrade the Pacemaker software. This may also include upgrading the messaging layer and/or the underlying operating system. #. If this is the first node to be upgraded, check the configuration with the ``crm_verify`` tool. #. If a cluster node, start the messaging layer. This must be the same messaging layer (currently only Corosync version 2 and greater is supported) that the rest of the cluster is using. #. Start Pacemaker or ``pacemaker-remoted``. .. note:: Even if a rolling upgrade from the current version of the cluster to the newest version is not directly possible, it may be possible to perform a rolling upgrade in multiple steps, by upgrading to an intermediate version first. The following table lists compatible versions for all other nodes in the cluster when upgrading a cluster node. .. list-table:: **Version Compatibility for Cluster Nodes** :class: longtable :widths: 50 50 :header-rows: 1 * - Version Being Installed - Minimum Compatible Version * - Pacemaker 3.y.z - Pacemaker 2.0.0 * - Pacemaker 2.y.z - Pacemaker 1.1.11 [#]_ * - Pacemaker 1.y.z - Pacemaker 1.0.0 * - Pacemaker 0.6.z to 0.7.z - Pacemaker 0.6.0 When upgrading a Pacemaker Remote node, all cluster nodes must be running at least the minimum version listed in the table below. .. list-table:: **Cluster Node Version Compatibility for Pacemaker Remote Nodes** :class: longtable :widths: 50 50 :header-rows: 1 * - Pacemaker Remote Version - Minimum Cluster Node Version * - Pacemaker 3.y.z - Pacemaker 2.0.0 * - Pacemaker 1.1.9 to 2.1.z - Pacemaker 1.1.9 [#]_ .. index:: single: upgrade; detach and reattach Detach and Reattach ___________________ The reattach method is a variant of a complete cluster shutdown, where the resources are left active and get re-detected when the cluster is restarted. This method may not be used if the cluster contains any Pacemaker Remote nodes. #. Tell the cluster to stop managing services. This is required to allow the services to remain active after the cluster shuts down. .. code-block:: none # crm_attribute --name maintenance-mode --update true #. On each node, shutdown the cluster software (pacemaker and the messaging layer), and upgrade the Pacemaker software. This may also include upgrading the messaging layer. While the underlying operating system may be upgraded at the same time, that will be more likely to cause outages in the detached services (certainly, if a reboot is required). #. Check the configuration with the ``crm_verify`` tool. #. On each node, start the cluster software. Currently, only Corosync version 2 and greater is supported as the cluster layer, but if another stack is supported in the future, the stack does not need to be the same one before the upgrade. 
#. Verify that the cluster re-detected all resources correctly. #. Allow the cluster to resume managing resources again: .. code-block:: none # crm_attribute --name maintenance-mode --delete .. note:: While the goal of the detach-and-reattach method is to avoid disturbing running services, resources may still move after the upgrade if any resource's location is governed by a rule based on transient node attributes. Transient node attributes are erased when the node leaves the cluster. A common example is using the ``ocf:pacemaker:ping`` resource to set a node attribute used to locate other resources. .. index:: pair: upgrade; CIB Upgrading the Configuration ########################### The CIB schema version can change from one Pacemaker version to another. After cluster software is upgraded, the cluster will continue to use the older schema version that it was previously using. This can be useful, for example, when administrators have written tools that modify the configuration, and are based on the older syntax. [#]_ However, when using an older syntax, new features may be unavailable, and there is a performance impact, since the cluster must do a non-persistent configuration upgrade before each transition. So while using the old syntax is possible, it is not advisable to continue using it indefinitely. Even if you wish to continue using the old syntax, it is a good idea to follow the upgrade procedure outlined below, except for the last step, to ensure that the new software has no problems with your existing configuration (since it will perform much the same task internally). If you are brave, it is sufficient simply to run ``cibadmin --upgrade``. A more cautious approach would proceed like this: #. Create a shadow copy of the configuration. The later commands will automatically operate on this copy, rather than the live configuration. .. code-block:: none # crm_shadow --create shadow .. index:: single: configuration; verify #. Verify the configuration is valid with the new software (which may be stricter about syntax mistakes, or may have dropped support for deprecated features): .. code-block:: none # crm_verify --live-check #. Fix any errors or warnings. #. Perform the upgrade: .. code-block:: none # cibadmin --upgrade #. If this step fails, there are three main possibilities: a. The configuration was not valid to start with (did you do steps 2 and 3?). #. The transformation failed; `report a bug `_. #. The transformation was successful but produced an invalid result. If the result of the transformation is invalid, you may see a number of errors from the validation library. If these are not helpful, try the manual upgrade procedure described below. #. Check the changes: .. code-block:: none # crm_shadow --diff If at this point there is anything about the upgrade that you wish to fine-tune (for example, to change some of the automatic IDs), now is the time to do so: .. code-block:: none # crm_shadow --edit This will open the configuration in your favorite editor (whichever is specified by the standard ``$EDITOR`` environment variable). #. Preview how the cluster will react: .. code-block:: none # crm_simulate --live-check --save-dotfile shadow.dot -S # dot -Tsvg shadow.dot -o shadow.svg You can then view shadow.svg with any compatible image viewer or web browser. Verify that either no resource actions will occur or that you are happy with any that are scheduled. 
If the output contains actions you do not expect (possibly due to changes to the score calculations), you may need to make further manual changes. See :ref:`crm_simulate` for further details on how to interpret the output of ``crm_simulate`` and ``dot``. #. Upload the changes: .. code-block:: none # crm_shadow --commit shadow --force In the unlikely event this step fails, please report a bug. .. note:: It is also possible to perform the configuration upgrade steps manually: #. Locate the ``upgrade*.xsl`` conversion scripts provided with the source code. These will often be installed in a location such as ``/usr/share/pacemaker``, or may be obtained from the `source repository `_. #. Run the conversion scripts that apply to your older version, for example: .. code-block:: none # xsltproc /path/to/upgrade06.xsl config06.xml > config10.xml #. Locate the ``pacemaker.rng`` script (from the same location as the xsl files). #. Check the XML validity: .. code-block:: none # xmllint --relaxng /path/to/pacemaker.rng config10.xml The advantage of this method is that it can be performed without the cluster running, and any validation errors are often more informative. What Changed in 2.1 ################### The Pacemaker 2.1 release is fully backward-compatible in both the CIB XML and the C API. Highlights: * Pacemaker now supports the **OCF Resource Agent API version 1.1**. Most notably, the ``Master`` and ``Slave`` role names have been renamed to ``Promoted`` and ``Unpromoted``. * Pacemaker now supports colocations where the dependent resource does not affect the primary resource's placement (via a new ``influence`` colocation constraint option and ``critical`` resource meta-attribute). This is intended for cases where a less-important resource must be colocated with an essential resource, but it is preferred to leave the less-important resource stopped if it fails, rather than move both resources. * If Pacemaker is built with libqb 2.0 or later, the detail log will use **millisecond-resolution timestamps**. * In addition to crm_mon and stonith_admin, the crmadmin, crm_resource, crm_simulate, and crm_verify commands now support the ``--output-as`` and ``--output-to`` options, including **XML output** (which scripts and higher-level tools are strongly recommended to use instead of trying to parse the text output, which may change from release to release). For a detailed list of changes, see the release notes and `Pacemaker 2.1 Changes `_ on the ClusterLabs wiki. What Changed in 2.0 ################### The main goal of the 2.0 release was to remove support for deprecated syntax, along with some small changes in default configuration behavior and tool behavior. Highlights: * Only Corosync version 2 and greater is now supported as the underlying cluster layer. Support for Heartbeat and Corosync 1 (including CMAN) is removed. * The Pacemaker detail log file is now stored in ``/var/log/pacemaker/pacemaker.log`` by default. * The record-pending cluster property now defaults to true, which allows status tools such as crm_mon to show operations that are in progress. * Support for a number of deprecated build options, environment variables, and configuration settings has been removed. * The ``master`` tag has been deprecated in favor of using the ``clone`` tag with the new ``promotable`` meta-attribute set to ``true``. "Master/slave" clone resources are now referred to as "promotable" clone resources. * The public API for Pacemaker libraries that software applications can use has changed significantly. 
For a detailed list of changes, see the release notes and `Pacemaker 2.0 Changes `_ on the ClusterLabs wiki. What Changed in 1.0 ################### New ___ * Failure timeouts. * New section for resource and operation defaults. * Tool for making offline configuration changes. * ``Rules``, ``instance_attributes``, ``meta_attributes`` and sets of operations can be defined once and referenced in multiple places. * The CIB now accepts XPath-based create/modify/delete operations. See ``cibadmin --help``. * Multi-dimensional colocation and ordering constraints. * The ability to connect to the CIB from non-cluster machines. * Allow recurring actions to be triggered at known times. Changed _______ * Syntax * All resource and cluster options now use dashes (-) instead of underscores (_) * ``master_slave`` was renamed to ``master`` * The ``attributes`` container tag was removed * The operation field ``pre-req`` has been renamed ``requires`` * All operations must have an ``interval``, ``start``/``stop`` must have it set to zero * The ``stonith-enabled`` option now defaults to true. * The cluster will refuse to start resources if ``stonith-enabled`` is true (or unset) and no STONITH resources have been defined * The attributes of colocation and ordering constraints were renamed for clarity. * ``resource-failure-stickiness`` has been replaced by ``migration-threshold``. * The parameters for command-line tools have been made consistent * Switched to 'RelaxNG' schema validation and 'libxml2' parser * id fields are now XML IDs which have the following limitations: * id's cannot contain colons (:) * id's cannot begin with a number * id's must be globally unique (not just unique for that tag) * Some fields (such as those in constraints that refer to resources) are IDREFs. This means that they must reference existing resources or objects in order for the configuration to be valid. Removing an object which is referenced elsewhere will therefore fail. * The CIB representation, from which a MD5 digest is calculated to verify CIBs on the nodes, has changed. This means that every CIB update will require a full refresh on any upgraded nodes until the cluster is fully upgraded to 1.0. This will result in significant performance degradation and it is therefore highly inadvisable to run a mixed 1.0/0.6 cluster for any longer than absolutely necessary. * Ping node information no longer needs to be added to ``ha.cf``. Simply include the lists of hosts in your ping resource(s). Removed _______ * Syntax * It is no longer possible to set resource meta options as top-level attributes. Use meta-attributes instead. * Resource and operation defaults are no longer read from ``crm_config``. .. rubric:: Footnotes .. [#] Before CRM feature set 3.1.0 (Pacemaker 2.0.0), the minor-minor version number was treated the same as the minor version. .. [#] Currently, Corosync version 2 and greater is the only supported cluster stack, but other stacks have been supported by past versions, and may be supported by future versions. .. [#] Any active resources will be moved off the node being upgraded, so there will be at least a brief outage unless all resources can be migrated "live". .. [#] Rolling upgrades from Pacemaker 1.1.z to 2.y.z are possible only if the cluster uses corosync version 2 or greater as its messaging layer, and the Cluster Information Base (CIB) uses schema 1.0 or higher in its ``validate-with`` property. .. [#] Pacemaker Remote versions 1.1.15 through 1.1.17 require cluster nodes to be at least version 1.1.15. 
Version 1.1.15 introduced an accidental remote protocol version bump, breaking rolling upgrade compatibility with older versions. This was fixed in 1.1.18. .. [#] As of Pacemaker 2.0.0, only schema versions pacemaker-1.0 and higher are supported (excluding pacemaker-1.1, which was a special case). diff --git a/doc/sphinx/Pacemaker_Explained/alerts.rst b/doc/sphinx/Pacemaker_Explained/alerts.rst index 720ea1b1cc..b73d57e346 100644 --- a/doc/sphinx/Pacemaker_Explained/alerts.rst +++ b/doc/sphinx/Pacemaker_Explained/alerts.rst @@ -1,297 +1,297 @@ .. _alerts: .. index:: single: alert single: resource; alert single: node; alert single: fencing; alert pair: XML element; alert pair: XML element; alerts Alerts ------ *Alerts* may be configured to take some external action when a cluster event occurs (node failure, resource starting or stopping, etc.). .. index:: pair: alert; agent Alert Agents ############ As with resource agents, the cluster calls an external program (an *alert agent*) to handle alerts. The cluster passes information about the event to the agent via environment variables. Agents can do anything desired with this information (send an e-mail, log to a file, update a monitoring system, etc.). .. topic:: Simple alert configuration .. code-block:: xml In the example above, the cluster will call ``my-script.sh`` for each event. Multiple alert agents may be configured; the cluster will call all of them for each event. Alert agents will be called only on cluster nodes. They will be called for events involving Pacemaker Remote nodes, but they will never be called *on* those nodes. For more information about sample alert agents provided by Pacemaker and about developing custom alert agents, see the *Pacemaker Administration* document. .. index:: single: alert; recipient pair: XML element; recipient Alert Recipients ################ Usually, alerts are directed towards a recipient. Thus, each alert may be additionally configured with one or more recipients. The cluster will call the agent separately for each recipient. .. topic:: Alert configuration with recipient .. code-block:: xml In the above example, the cluster will call ``my-script.sh`` for each event, passing the recipient ``some-address`` as an environment variable. The recipient may be anything the alert agent can recognize -- an IP address, an e-mail address, a file name, whatever the particular agent supports. .. index:: single: alert; meta-attributes single: meta-attribute; alert meta-attributes Alert Meta-Attributes ##################### As with resources, meta-attributes can be configured for alerts to change whether and how Pacemaker calls them. -.. table:: **Meta-Attributes of an Alert or Recipient** +.. list-table:: **Meta-Attributes of an Alert or Recipient** :class: longtable :widths: 20 20 60 + :header-rows: 1 - +------------------+---------------+-----------------------------------------------------+ - | Meta-Attribute | Default | Description | - +==================+===============+=====================================================+ - | description | | .. index:: | - | | | single: acl_permission; description (attribute) | - | | | single: description; acl_permission attribute | - | | | single: attribute; description (acl_permission) | - | | | | - | | | Arbitrary text for user's use (ignored by Pacemaker)| - +------------------+---------------+-----------------------------------------------------+ - | enabled | true | .. 
index:: | - | | | single: alert; meta-attribute, enabled | - | | | single: meta-attribute; enabled (alert) | - | | | single: enabled; alert meta-attribute | - | | | | - | | | If false for an alert, the alert will not be used. | - | | | If true for an alert and false for a particular | - | | | recipient of that alert, that recipient will not be | - | | | used. *(since 2.1.6)* | - +------------------+---------------+-----------------------------------------------------+ - | timestamp-format | %H:%M:%S.%6N | .. index:: | - | | | single: alert; meta-attribute, timestamp-format | - | | | single: meta-attribute; timestamp-format (alert) | - | | | single: timestamp-format; alert meta-attribute | - | | | | - | | | Format the cluster will use when sending the | - | | | event's timestamp to the agent. This is a string as | - | | | used with the ``date(1)`` command, with the | - | | | following extension. ``"%xN"``, where ``x`` is a | - | | | number with ``1 <= x <= 6``, prints the fractional | - | | | seconds component of the timestamp at ``10^(-x)`` | - | | | resolution, without a decimal point (``'.'``). | - | | | Values are truncated toward zero, not rounded. | - | | | | - | | | Note: This is implemented using ``strftime()`` with | - | | | a 128-character buffer. If any format specifier's | - | | | expansion requires more than 128 characters, or if | - | | | any specifier expands to an empty string, then the | - | | | timestamp is discarded. (Expanding to an empty | - | | | string is not an error, but there is no way to | - | | | distinguish this from a too-small buffer.) | - +------------------+---------------+-----------------------------------------------------+ - | timeout | 30s | .. index:: | - | | | single: alert; meta-attribute, timeout | - | | | single: meta-attribute; timeout (alert) | - | | | single: timeout; alert meta-attribute | - | | | | - | | | If the alert agent does not complete within this | - | | | amount of time, it will be terminated. | - +------------------+---------------+-----------------------------------------------------+ + * - Meta-Attribute + - Default + - Description + * - description + - + - .. index:: + single: acl_permission; description (attribute) + single: description; acl_permission attribute + single: attribute; description (acl_permission) + + Arbitrary text for user's use (ignored by Pacemaker) + * - enabled + - true + - .. index:: + single: alert; meta-attribute, enabled + single: meta-attribute; enabled (alert) + single: enabled; alert meta-attribute + + If false for an alert, the alert will not be used. If true for an alert + and false for a particular recipient of that alert, that recipient will + not be used. *(since 2.1.6)* + * - timestamp-format + - %H:%M:%S.%6N + - .. index:: + single: alert; meta-attribute, timestamp-format + single: meta-attribute; timestamp-format (alert) + single: timestamp-format; alert meta-attribute + + Format the cluster will use when sending the event's timestamp to the + agent. This is a string as used with the ``date(1)`` command, with the + following extension. ``"%xN"``, where ``x`` is a number with + ``1 <= x <= 6``, prints the fractional seconds component of the timestamp + at ``10^(-x)`` resolution, without a decimal point (``'.'``). Values are + truncated toward zero, not rounded. + + Note: This is implemented using ``strftime()`` with a 128-character + buffer. If any format specifier's expansion requires more than 128 + characters, or if any specifier expands to an empty string, then the + timestamp is discarded. 
(Expanding to an empty string is not an error, + but there is no way to distinguish this from a too-small buffer.) + * - timeout + - 30s + - .. index:: + single: alert; meta-attribute, timeout + single: meta-attribute; timeout (alert) + single: timeout; alert meta-attribute + + If the alert agent does not complete within this amount of time, it + will be terminated. Meta-attributes can be configured per alert and/or per recipient. .. topic:: Alert configuration with meta-attributes .. code-block:: xml In the above example, the ``my-script.sh`` will get called twice for each event, with each call using a 15-second timeout. One call will be passed the recipient ``someuser@example.com`` and a timestamp in the format ``%D %H:%M``, while the other call will be passed the recipient ``otheruser@example.com`` and a timestamp in the format ``%c``. .. index:: single: alert; instance attributes single: instance attribute; alert instance attributes Alert Instance Attributes ######################### As with resource agents, agent-specific configuration values may be configured as instance attributes. These will be passed to the agent as additional environment variables. The number, names and allowed values of these instance attributes are completely up to the particular agent. .. topic:: Alert configuration with instance attributes .. code-block:: xml .. index:: single: alert; filters pair: XML element; select pair: XML element; select_nodes pair: XML element; select_fencing pair: XML element; select_resources pair: XML element; select_attributes pair: XML element; attribute Alert Filters ############# By default, an alert agent will be called for node events, fencing events, and resource events. An agent may choose to ignore certain types of events, but there is still the overhead of calling it for those events. To eliminate that overhead, you may select which types of events the agent should receive. Alert filters are configured within a ``select`` element inside an ``alert`` element. .. list-table:: **Possible Alert Filters** :class: longtable :widths: 25 75 :header-rows: 1 * - Name - Events alerted * - select_nodes - A node joins or leaves the cluster (whether at the cluster layer for cluster nodes, or via a remote connection for Pacemaker Remote nodes). * - select_fencing - Fencing or unfencing of a node completes (whether successfully or not). * - select_resources - A resource action other than meta-data completes (whether successfully or not). * - select_attributes - A transient attribute value update is sent to the CIB. .. topic:: Alert configuration to receive only node events and fencing events .. code-block:: xml With ```` (the only event type not enabled by default), the agent will receive alerts when a node attribute changes. If you wish the agent to be called only when certain attributes change, you can configure that as well. .. topic:: Alert configuration to be called when certain node attributes change .. code-block:: xml Node attribute alerts are currently considered experimental. Alerts may be limited to attributes set via ``attrd_updater``, and agents may be called multiple times with the same attribute value. diff --git a/doc/sphinx/Pacemaker_Explained/collective.rst b/doc/sphinx/Pacemaker_Explained/collective.rst index 429390d132..36142a7132 100644 --- a/doc/sphinx/Pacemaker_Explained/collective.rst +++ b/doc/sphinx/Pacemaker_Explained/collective.rst @@ -1,1193 +1,1203 @@ .. 
index: single: collective resource single: resource; collective Collective Resources -------------------- Pacemaker supports several types of *collective* resources, which consist of multiple, related resource instances. .. index: single: group resource single: resource; group .. _group-resources: Groups - A Syntactic Shortcut ############################# One of the most common elements of a cluster is a set of resources that need to be located together, start sequentially, and stop in the reverse order. To simplify this configuration, we support the concept of groups. .. topic:: A group of two primitive resources .. code-block:: xml Although the example above contains only two resources, there is no limit to the number of resources a group can contain. The example is also sufficient to explain the fundamental properties of a group: * Resources are started in the order they appear in (**Public-IP** first, then **Email**) * Resources are stopped in the reverse order to which they appear in (**Email** first, then **Public-IP**) If a resource in the group can't run anywhere, then nothing after that is allowed to run, too. * If **Public-IP** can't run anywhere, neither can **Email**; * but if **Email** can't run anywhere, this does not affect **Public-IP** in any way The group above is logically equivalent to writing: .. topic:: How the cluster sees a group resource .. code-block:: xml Obviously as the group grows bigger, the reduced configuration effort can become significant. Another (typical) example of a group is a DRBD volume, the filesystem mount, an IP address, and an application that uses them. .. index:: pair: XML element; group Group Properties ________________ -.. table:: **Properties of a Group Resource** +.. list-table:: **Properties of a Group Resource** :widths: 25 75 + :header-rows: 1 - +-------------+------------------------------------------------------------------+ - | Field | Description | - +=============+==================================================================+ - | id | .. index:: | - | | single: group; property, id | - | | single: property; id (group) | - | | single: id; group property | - | | | - | | A unique name for the group | - +-------------+------------------------------------------------------------------+ - | description | .. index:: | - | | single: group; attribute, description | - | | single: attribute; description (group) | - | | single: description; group attribute | - | | | - | | Arbitrary text for user's use (ignored by Pacemaker) | - +-------------+------------------------------------------------------------------+ + * - Field + - Description + * - id + - .. index:: + single: group; property, id + single: property; id (group) + single: id; group property + + A unique name for the group + * - description + - .. index:: + single: group; attribute, description + single: attribute; description (group) + single: description; group attribute + + Arbitrary text for user's use (ignored by Pacemaker) Group Options _____________ Groups inherit the ``priority``, ``target-role``, and ``is-managed`` properties from primitive resources. See :ref:`resource_options` for information about those properties. Group Instance Attributes _________________________ Groups have no instance attributes. However, any that are set for the group object will be inherited by the group's children. Group Contents ______________ Groups may only contain a collection of cluster resources (see :ref:`primitive-resource`). 
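For illustration, a minimal group definition might look like the sketch below. The resource ids match the **Public-IP** and **Email** names used earlier in this section, but the agent types and the IP address are only illustrative, not a recommended configuration.

.. topic:: Sketch of a group containing two primitives

   .. code-block:: xml

      <group id="shortcut">
        <primitive id="Public-IP" class="ocf" provider="heartbeat" type="IPaddr2">
          <instance_attributes id="Public-IP-params">
            <nvpair id="Public-IP-params-ip" name="ip" value="192.0.2.10"/>
          </instance_attributes>
        </primitive>
        <primitive id="Email" class="lsb" type="exim"/>
      </group>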
To refer to a child of a group resource, just use the child's ``id`` instead of the group's. Group Constraints _________________ Although it is possible to reference a group's children in constraints, it is usually preferable to reference the group itself. .. topic:: Some constraints involving groups .. code-block:: xml .. index:: pair: resource-stickiness; group Group Stickiness ________________ Stickiness, the measure of how much a resource wants to stay where it is, is additive in groups. Every active resource of the group will contribute its stickiness value to the group's total. So if the default ``resource-stickiness`` is 100, and a group has seven members, five of which are active, then the group as a whole will prefer its current location with a score of 500. .. index:: single: clone single: resource; clone .. _s-resource-clone: Clones - Resources That Can Have Multiple Active Instances ########################################################## *Clone* resources are resources that can have more than one copy active at the same time. This allows you, for example, to run a copy of a daemon on every node. You can clone any primitive or group resource [#]_. Anonymous versus Unique Clones ______________________________ A clone resource is configured to be either *anonymous* or *globally unique*. Anonymous clones are the simplest. These behave completely identically everywhere they are running. Because of this, there can be only one instance of an anonymous clone active per node. The instances of globally unique clones are distinct entities. All instances are launched identically, but one instance of the clone is not identical to any other instance, whether running on the same node or a different node. As an example, a cloned IP address can use special kernel functionality such that each instance handles a subset of requests for the same IP address. .. index:: single: promotable clone single: resource; promotable .. _s-resource-promotable: Promotable clones _________________ If a clone is *promotable*, its instances can perform a special role that Pacemaker will manage via the ``promote`` and ``demote`` actions of the resource agent. Services that support such a special role have various terms for the special role and the default role: primary and secondary, master and replica, controller and worker, etc. Pacemaker uses the terms *promoted* and *unpromoted* to be agnostic to what the service calls them or what they do. All that Pacemaker cares about is that an instance comes up in the unpromoted role when started, and the resource agent supports the ``promote`` and ``demote`` actions to manage entering and exiting the promoted role. .. index:: pair: XML element; clone Clone Properties ________________ -.. table:: **Properties of a Clone Resource** +.. list-table:: **Properties of a Clone Resource** :widths: 25 75 + :header-rows: 1 + + * - Field + - Description + * - id + - .. index:: + single: clone; property, id + single: property; id (clone) + single: id; clone property + + A unique name for the clone + * - description + - .. index:: + single: clone; attribute, description + single: attribute; description (clone) + single: description; clone attribute - +-------------+------------------------------------------------------------------+ - | Field | Description | - +=============+==================================================================+ - | id | .. 
index:: | - | | single: clone; property, id | - | | single: property; id (clone) | - | | single: id; clone property | - | | | - | | A unique name for the clone | - +-------------+------------------------------------------------------------------+ - | description | .. index:: | - | | single: clone; attribute, description | - | | single: attribute; description (clone) | - | | single: description; clone attribute | - | | | - | | Arbitrary text for user's use (ignored by Pacemaker) | - +-------------+------------------------------------------------------------------+ + Arbitrary text for user's use (ignored by Pacemaker) .. index:: pair: options; clone Clone Options _____________ :ref:`Options ` inherited from primitive resources: ``priority, target-role, is-managed`` -.. table:: **Clone-Specific Configuration Options** +.. list-table:: **Clone-Specific Configuration Options** :class: longtable :widths: 20 20 60 - - +-------------------+-----------------+-------------------------------------------------------+ - | Field | Default | Description | - +===================+=================+=======================================================+ - | globally-unique | **true** if | .. index:: | - | | clone-node-max | single: clone; option, globally-unique | - | | is greater than | single: option; globally-unique (clone) | - | | 1 *(since* | single: globally-unique; clone option | - | | *3.0.0)*, | | - | | otherwise | If **true**, each clone instance performs a | - | | **false** | distinct function, such that a single node can run | - | | | more than one instance at the same time | - +-------------------+-----------------+-------------------------------------------------------+ - | clone-max | 0 | .. index:: | - | | | single: clone; option, clone-max | - | | | single: option; clone-max (clone) | - | | | single: clone-max; clone option | - | | | | - | | | The maximum number of clone instances that can | - | | | be started across the entire cluster. If 0, the | - | | | number of nodes in the cluster will be used. | - +-------------------+-----------------+-------------------------------------------------------+ - | clone-node-max | 1 | .. index:: | - | | | single: clone; option, clone-node-max | - | | | single: option; clone-node-max (clone) | - | | | single: clone-node-max; clone option | - | | | | - | | | If the clone is globally unique, this is the maximum | - | | | number of clone instances that can be started | - | | | on a single node | - +-------------------+-----------------+-------------------------------------------------------+ - | clone-min | 0 | .. index:: | - | | | single: clone; option, clone-min | - | | | single: option; clone-min (clone) | - | | | single: clone-min; clone option | - | | | | - | | | Require at least this number of clone instances | - | | | to be runnable before allowing resources | - | | | depending on the clone to be runnable. A value | - | | | of 0 means require all clone instances to be | - | | | runnable. | - +-------------------+-----------------+-------------------------------------------------------+ - | notify | false | .. index:: | - | | | single: clone; option, notify | - | | | single: option; notify (clone) | - | | | single: notify; clone option | - | | | | - | | | Call the resource agent's **notify** action for | - | | | all active instances, before and after starting | - | | | or stopping any clone instance. The resource | - | | | agent must support this action. 
| - | | | Allowed values: **false**, **true** | - +-------------------+-----------------+-------------------------------------------------------+ - | ordered | false | .. index:: | - | | | single: clone; option, ordered | - | | | single: option; ordered (clone) | - | | | single: ordered; clone option | - | | | | - | | | If **true**, clone instances must be started | - | | | sequentially instead of in parallel. | - | | | Allowed values: **false**, **true** | - +-------------------+-----------------+-------------------------------------------------------+ - | interleave | false | .. index:: | - | | | single: clone; option, interleave | - | | | single: option; interleave (clone) | - | | | single: interleave; clone option | - | | | | - | | | When this clone is ordered relative to another | - | | | clone, if this option is **false** (the default), | - | | | the ordering is relative to *all* instances of | - | | | the other clone, whereas if this option is | - | | | **true**, the ordering is relative only to | - | | | instances on the same node. | - | | | Allowed values: **false**, **true** | - +-------------------+-----------------+-------------------------------------------------------+ - | promotable | false | .. index:: | - | | | single: clone; option, promotable | - | | | single: option; promotable (clone) | - | | | single: promotable; clone option | - | | | | - | | | If **true**, clone instances can perform a | - | | | special role that Pacemaker will manage via the | - | | | resource agent's **promote** and **demote** | - | | | actions. The resource agent must support these | - | | | actions. | - | | | Allowed values: **false**, **true** | - +-------------------+-----------------+-------------------------------------------------------+ - | promoted-max | 1 | .. index:: | - | | | single: clone; option, promoted-max | - | | | single: option; promoted-max (clone) | - | | | single: promoted-max; clone option | - | | | | - | | | If ``promotable`` is **true**, the number of | - | | | instances that can be promoted at one time | - | | | across the entire cluster | - +-------------------+-----------------+-------------------------------------------------------+ - | promoted-node-max | 1 | .. index:: | - | | | single: clone; option, promoted-node-max | - | | | single: option; promoted-node-max (clone) | - | | | single: promoted-node-max; clone option | - | | | | - | | | If the clone is promotable and globally unique, this | - | | | is the number of instances that can be promoted at | - | | | one time on a single node (up to ``clone-node-max``) | - +-------------------+-----------------+-------------------------------------------------------+ + :header-rows: 1 + + * - Field + - Default + - Description + * - globally-unique + - **true** if clone-node-max is greater than 1 *(since 3.0.0)*, otherwise + **false** + - .. index:: + single: clone; option, globally-unique + single: option; globally-unique (clone) + single: globally-unique; clone option + + If **true**, each clone instance performs a distinct function, such that + a single node can run more than one instance at the same time + * - clone-max + - 0 + - .. index:: + single: clone; option, clone-max + single: option; clone-max (clone) + single: clone-max; clone option + + The maximum number of clone instances that can be started across the + entire cluster. If 0, the number of nodes in the cluster will be used. + * - clone-node-max + - 1 + - .. 
index:: + single: clone; option, clone-node-max + single: option; clone-node-max (clone) + single: clone-node-max; clone option + + If the clone is globally unique, this is the maximum number of clone + instances that can be started on a single node + * - clone-min + - 0 + - .. index:: + single: clone; option, clone-min + single: option; clone-min (clone) + single: clone-min; clone option + + Require at least this number of clone instances to be runnable before + allowing resources depending on the clone to be runnable. A value of + 0 means require all clone instances to be runnable. + * - notify + - false + - .. index:: + single: clone; option, notify + single: option; notify (clone) + single: notify; clone option + + Call the resource agent's **notify** action for all active instances, + before and after starting or stopping any clone instance. The + resource agent must support this action. Allowed values: **false**, + **true** + * - ordered + - false + - .. index:: + single: clone; option, ordered + single: option; ordered (clone) + single: ordered; clone option + + If **true**, clone instances must be started sequentially instead of + in parallel. Allowed values: **false**, **true** + * - interleave + - false + - .. index:: + single: clone; option, interleave + single: option; interleave (clone) + single: interleave; clone option + + When this clone is ordered relative to another clone, if this option is + **false** (the default), the ordering is relative to *all* instances of + the other clone, whereas if this option is **true**, the ordering is + relative only to instances on the same node. Allowed values: **false**, + **true** + * - promotable + - false + - .. index:: + single: clone; option, promotable + single: option; promotable (clone) + single: promotable; clone option + + If **true**, clone instances can perform a special role that Pacemaker + will manage via the resource agent's **promote** and **demote** actions. + The resource agent must support these actions. Allowed values: + **false**, **true** + * - promoted-max + - 1 + - .. index:: + single: clone; option, promoted-max + single: option; promoted-max (clone) + single: promoted-max; clone option + + If ``promotable`` is **true**, the number of instances that can be + promoted at one time across the entire cluster + * - promoted-node-max + - 1 + - .. index:: + single: clone; option, promoted-node-max + single: option; promoted-node-max (clone) + single: promoted-node-max; clone option + + If the clone is promotable and globally unique, this is the number of + instances that can be promoted at one time on a single node (up to + ``clone-node-max``) .. note:: **Deprecated Terminology** In older documentation and online examples, you may see promotable clones referred to as *multi-state*, *stateful*, or *master/slave*; these mean the same thing as *promotable*. Certain syntax is supported for backward compatibility, but is deprecated and will be removed in a future version: * Using the ``master-max`` meta-attribute instead of ``promoted-max`` * Using the ``master-node-max`` meta-attribute instead of ``promoted-node-max`` * Using ``Master`` as a role name instead of ``Promoted`` * Using ``Slave`` as a role name instead of ``Unpromoted`` Clone Contents ______________ Clones must contain exactly one primitive or group resource. .. topic:: A clone that runs a web server on all nodes .. code-block:: xml .. warning:: You should never reference the name of a clone's child (the primitive or group resource being cloned). 
If you think you need to do this, you probably need to re-evaluate your design. Clone Instance Attribute ________________________ Clones have no instance attributes; however, any that are set here will be inherited by the clone's child. .. index:: single: clone; constraint Clone Constraints _________________ In most cases, a clone will have a single instance on each active cluster node. If this is not the case, you can indicate which nodes the cluster should preferentially assign copies to with resource location constraints. These constraints are written no differently from those for primitive resources except that the clone's **id** is used. .. topic:: Some constraints involving clones .. code-block:: xml Ordering constraints behave slightly differently for clones. In the example above, ``apache-stats`` will wait until all copies of ``apache-clone`` that need to be started have done so before being started itself. Only if *no* copies can be started will ``apache-stats`` be prevented from being active. Additionally, the clone will wait for ``apache-stats`` to be stopped before stopping itself. Colocation of a primitive or group resource with a clone means that the resource can run on any node with an active instance of the clone. The cluster will choose an instance based on where the clone is running and the resource's own location preferences. Colocation between clones is also possible. If one clone **A** is colocated with another clone **B**, the set of allowed locations for **A** is limited to nodes on which **B** is (or will be) active. Placement is then performed normally. .. index:: single: promotable clone; constraint .. _promotable-clone-constraints: Promotable Clone Constraints ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ For promotable clone resources, the ``first-action`` and/or ``then-action`` fields for ordering constraints may be set to ``promote`` or ``demote`` to constrain the promoted role, and colocation constraints may contain ``rsc-role`` and/or ``with-rsc-role`` fields. .. topic:: Constraints involving promotable clone resources .. code-block:: xml In the example above, **myApp** will wait until one of the database copies has been started and promoted before being started itself on the same node. Only if no copies can be promoted will **myApp** be prevented from being active. Additionally, the cluster will wait for **myApp** to be stopped before demoting the database. Colocation of a primitive or group resource with a promotable clone resource means that it can run on any node with an active instance of the promotable clone resource that has the specified role (``Promoted`` or ``Unpromoted``). In the example above, the cluster will choose a location based on where database is running in the promoted role, and if there are multiple promoted instances it will also factor in **myApp**'s own location preferences when deciding which location to choose. Colocation with regular clones and other promotable clone resources is also possible. In such cases, the set of allowed locations for the **rsc** clone is (after role filtering) limited to nodes on which the ``with-rsc`` promotable clone resource is (or will be) in the specified role. Placement is then performed as normal. Using Promotable Clone Resources in Colocation Sets ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ When a promotable clone is used in a :ref:`resource set ` inside a colocation constraint, the resource set may take a ``role`` attribute. 
In the following example, an instance of **B** may be promoted only on a node where **A** is in the promoted role. Additionally, resources **C** and **D** must be located on a node where both **A** and **B** are promoted. .. topic:: Colocate C and D with A's and B's promoted instances .. code-block:: xml Using Promotable Clone Resources in Ordered Sets ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ When a promotable clone is used in a :ref:`resource set ` inside an ordering constraint, the resource set may take an ``action`` attribute. .. topic:: Start C and D after first promoting A and B .. code-block:: xml In the above example, **B** cannot be promoted until **A** has been promoted. Additionally, resources **C** and **D** must wait until **A** and **B** have been promoted before they can start. .. index:: pair: resource-stickiness; clone .. _s-clone-stickiness: Clone Stickiness ________________ To achieve stable assignments, clones are slightly sticky by default. If no value for ``resource-stickiness`` is provided, the clone will use a value of 1. Being a small value, it causes minimal disturbance to the score calculations of other resources but is enough to prevent Pacemaker from needlessly moving instances around the cluster. .. note:: For globally unique clones, this may result in multiple instances of the clone staying on a single node, even after another eligible node becomes active (for example, after being put into standby mode then made active again). If you do not want this behavior, specify a ``resource-stickiness`` of 0 for the clone temporarily and let the cluster adjust, then set it back to 1 if you want the default behavior to apply again. .. important:: If ``resource-stickiness`` is set in the ``rsc_defaults`` section, it will apply to clone instances as well. This means an explicit ``resource-stickiness`` of 0 in ``rsc_defaults`` works differently from the implicit default used when ``resource-stickiness`` is not specified. Monitoring Promotable Clone Resources _____________________________________ The usual monitor actions are insufficient to monitor a promotable clone resource, because Pacemaker needs to verify not only that the resource is active, but also that its actual role matches its intended one. Define two monitoring actions: the usual one will cover the unpromoted role, and an additional one with ``role="Promoted"`` will cover the promoted role. .. topic:: Monitoring both states of a promotable clone resource .. code-block:: xml .. important:: It is crucial that *every* monitor operation has a different interval! Pacemaker currently differentiates between operations only by resource and interval; so if (for example) a promotable clone resource had the same monitor interval for both roles, Pacemaker would ignore the role when checking the status -- which would cause unexpected return codes, and therefore unnecessary complications. .. _s-promotion-scores: Determining Which Instance is Promoted ______________________________________ Pacemaker can choose a promotable clone instance to be promoted in one of two ways: * Promotion scores: These are node attributes set via the ``crm_attribute`` command using the ``--promotion`` option, which generally would be called by the resource agent's start action if it supports promotable clones. This tool automatically detects both the resource and host, and should be used to set a preference for being promoted. 
Based on this, ``promoted-max``, and ``promoted-node-max``, the instance(s) with the highest preference will be promoted. * Constraints: Location constraints can indicate which nodes are most preferred to be promoted. .. topic:: Explicitly preferring node1 to be promoted .. code-block:: xml .. index: single: bundle single: resource; bundle pair: container; Docker pair: container; podman .. _s-resource-bundle: Bundles - Containerized Resources ################################# Pacemaker supports a special syntax for launching a service inside a `container `_ with any infrastructure it requires: the *bundle*. Pacemaker bundles support `Docker `_ and `podman `_ *(since 2.0.1)* container technologies. [#]_ .. topic:: A bundle for a containerized web server .. code-block:: xml Bundle Prerequisites ____________________ Before configuring a bundle in Pacemaker, the user must install the appropriate container launch technology (Docker or podman), and supply a fully configured container image, on every node allowed to run the bundle. Pacemaker will create an implicit resource of type **ocf:heartbeat:docker** or **ocf:heartbeat:podman** to manage a bundle's container. The user must ensure that the appropriate resource agent is installed on every node allowed to run the bundle. .. index:: pair: XML element; bundle Bundle Properties _________________ -.. table:: **XML Attributes of a bundle Element** +.. list-table:: **XML Attributes of a bundle Element** :widths: 25 75 + :header-rows: 1 + + * - Field + - Description + * - id + - .. index:: + single: bundle; attribute, id + single: attribute; id (bundle) + single: id; bundle attribute + + A unique name for the bundle (required) + * - description + - .. index:: + single: bundle; attribute, description + single: attribute; description (bundle) + single: description; bundle attribute - +-------------+------------------------------------------------------------------+ - | Field | Description | - +=============+==================================================================+ - | id | .. index:: | - | | single: bundle; attribute, id | - | | single: attribute; id (bundle) | - | | single: id; bundle attribute | - | | | - | | A unique name for the bundle (required) | - +-------------+------------------------------------------------------------------+ - | description | .. index:: | - | | single: bundle; attribute, description | - | | single: attribute; description (bundle) | - | | single: description; bundle attribute | - | | | - | | Arbitrary text for user's use (ignored by Pacemaker) | - +-------------+------------------------------------------------------------------+ + Arbitrary text for user's use (ignored by Pacemaker) A bundle must contain exactly one ``docker`` or ``podman`` element. .. index:: pair: XML element; docker pair: XML element; podman Bundle Container Properties ___________________________ -.. table:: **XML Attributes of a docker or podman Element** +.. list-table:: **XML Attributes of a docker or podman Element** :class: longtable :widths: 15 40 45 - - +-------------------+------------------------------------+---------------------------------------------------+ - | Attribute | Default | Description | - +===================+====================================+===================================================+ - | image | | .. 
index:: | - | | | single: docker; attribute, image | - | | | single: attribute; image (docker) | - | | | single: image; docker attribute | - | | | single: podman; attribute, image | - | | | single: attribute; image (podman) | - | | | single: image; podman attribute | - | | | | - | | | Container image tag (required) | - +-------------------+------------------------------------+---------------------------------------------------+ - | replicas | Value of ``promoted-max`` | .. index:: | - | | if that is positive, else 1 | single: docker; attribute, replicas | - | | | single: attribute; replicas (docker) | - | | | single: replicas; docker attribute | - | | | single: podman; attribute, replicas | - | | | single: attribute; replicas (podman) | - | | | single: replicas; podman attribute | - | | | | - | | | A positive integer specifying the number of | - | | | container instances to launch | - +-------------------+------------------------------------+---------------------------------------------------+ - | replicas-per-host | 1 | .. index:: | - | | | single: docker; attribute, replicas-per-host | - | | | single: attribute; replicas-per-host (docker) | - | | | single: replicas-per-host; docker attribute | - | | | single: podman; attribute, replicas-per-host | - | | | single: attribute; replicas-per-host (podman) | - | | | single: replicas-per-host; podman attribute | - | | | | - | | | A positive integer specifying the number of | - | | | container instances allowed to run on a | - | | | single node | - +-------------------+------------------------------------+---------------------------------------------------+ - | promoted-max | 0 | .. index:: | - | | | single: docker; attribute, promoted-max | - | | | single: attribute; promoted-max (docker) | - | | | single: promoted-max; docker attribute | - | | | single: podman; attribute, promoted-max | - | | | single: attribute; promoted-max (podman) | - | | | single: promoted-max; podman attribute | - | | | | - | | | A non-negative integer that, if positive, | - | | | indicates that the containerized service | - | | | should be treated as a promotable service, | - | | | with this many replicas allowed to run the | - | | | service in the promoted role | - +-------------------+------------------------------------+---------------------------------------------------+ - | network | | .. index:: | - | | | single: docker; attribute, network | - | | | single: attribute; network (docker) | - | | | single: network; docker attribute | - | | | single: podman; attribute, network | - | | | single: attribute; network (podman) | - | | | single: network; podman attribute | - | | | | - | | | If specified, this will be passed to the | - | | | ``docker run`` or ``podman run`` command as the | - | | | network setting for the container. | - +-------------------+------------------------------------+---------------------------------------------------+ - | run-command | ``/usr/sbin/pacemaker-remoted`` if | .. index:: | - | | bundle contains a **primitive**, | single: docker; attribute, run-command | - | | otherwise none | single: attribute; run-command (docker) | - | | | single: run-command; docker attribute | - | | | single: podman; attribute, run-command | - | | | single: attribute; run-command (podman) | - | | | single: run-command; podman attribute | - | | | | - | | | This command will be run inside the container | - | | | when launching it ("PID 1"). 
If the bundle | - | | | contains a **primitive**, this command *must* | - | | | start ``pacemaker-remoted`` (but could, for | - | | | example, be a script that does other stuff, too). | - +-------------------+------------------------------------+---------------------------------------------------+ - | options | | .. index:: | - | | | single: docker; attribute, options | - | | | single: attribute; options (docker) | - | | | single: options; docker attribute | - | | | single: podman; attribute, options | - | | | single: attribute; options (podman) | - | | | single: options; podman attribute | - | | | | - | | | Extra command-line options to pass to the | - | | | ``docker run`` or ``podman run`` command | - +-------------------+------------------------------------+---------------------------------------------------+ + :header-rows: 1 + + * - Attribute + - Default + - Description + * - image + - + - .. index:: + single: docker; attribute, image + single: attribute; image (docker) + single: image; docker attribute + single: podman; attribute, image + single: attribute; image (podman) + single: image; podman attribute + + Container image tag (required) + * - replicas + - Value of ``promoted-max`` if that is positive, else 1 + - .. index:: + single: docker; attribute, replicas + single: attribute; replicas (docker) + single: replicas; docker attribute + single: podman; attribute, replicas + single: attribute; replicas (podman) + single: replicas; podman attribute + + A positive integer specifying the number of container instances to launch + * - replicas-per-host + - 1 + - .. index:: + single: docker; attribute, replicas-per-host + single: attribute; replicas-per-host (docker) + single: replicas-per-host; docker attribute + single: podman; attribute, replicas-per-host + single: attribute; replicas-per-host (podman) + single: replicas-per-host; podman attribute + + A positive integer specifying the number of container instances allowed + to run on a single node + * - promoted-max + - 0 + - .. index:: + single: docker; attribute, promoted-max + single: attribute; promoted-max (docker) + single: promoted-max; docker attribute + single: podman; attribute, promoted-max + single: attribute; promoted-max (podman) + single: promoted-max; podman attribute + + A non-negative integer that, if positive, indicates that the containerized + service should be treated as a promotable service, with this many replicas + allowed to run the service in the promoted role + * - network + - + - .. index:: + single: docker; attribute, network + single: attribute; network (docker) + single: network; docker attribute + single: podman; attribute, network + single: attribute; network (podman) + single: network; podman attribute + + If specified, this will be passed to the ``docker run`` or ``podman run`` + command as the network setting for the container. + * - run-command + - ``/usr/sbin/pacemaker-remoted`` if bundle contains a **primitive**, + otherwise none + - .. index:: + single: docker; attribute, run-command + single: attribute; run-command (docker) + single: run-command; docker attribute + single: podman; attribute, run-command + single: attribute; run-command (podman) + single: run-command; podman attribute + + This command will be run inside the container when launching it ("PID 1"). + If the bundle contains a **primitive**, this command *must* start + ``pacemaker-remoted`` (but could, for example, be a script that does + other stuff, too). + * - options + - + - .. 
index:: + single: docker; attribute, options + single: attribute; options (docker) + single: options; docker attribute + single: podman; attribute, options + single: attribute; options (podman) + single: options; podman attribute + + Extra command-line options to pass to the ``docker run`` or + ``podman run`` command .. note:: Considerations when using cluster configurations or container images from Pacemaker 1.1: * If the container image has a pre-2.0.0 version of Pacemaker, set ``run-command`` to ``/usr/sbin/pacemaker_remoted`` (note the underbar instead of dash). * ``masters`` is accepted as an alias for ``promoted-max``, but is deprecated since 2.0.0, and support for it will be removed in a future version. Bundle Network Properties _________________________ A bundle may optionally contain one ```` element. .. index:: pair: XML element; network single: bundle; network -.. table:: **XML Attributes of a network Element** +.. list-table:: **XML Attributes of a network Element** + :class: longtable :widths: 20 20 60 - - +----------------+---------+------------------------------------------------------------+ - | Attribute | Default | Description | - +================+=========+============================================================+ - | add-host | TRUE | .. index:: | - | | | single: network; attribute, add-host | - | | | single: attribute; add-host (network) | - | | | single: add-host; network attribute | - | | | | - | | | If TRUE, and ``ip-range-start`` is used, Pacemaker will | - | | | automatically ensure that ``/etc/hosts`` inside the | - | | | containers has entries for each | - | | | :ref:`replica name ` | - | | | and its assigned IP. | - +----------------+---------+------------------------------------------------------------+ - | ip-range-start | | .. index:: | - | | | single: network; attribute, ip-range-start | - | | | single: attribute; ip-range-start (network) | - | | | single: ip-range-start; network attribute | - | | | | - | | | If specified, Pacemaker will create an implicit | - | | | ``ocf:heartbeat:IPaddr2`` resource for each container | - | | | instance, starting with this IP address, using up to | - | | | ``replicas`` sequential addresses. These addresses can be | - | | | used from the host's network to reach the service inside | - | | | the container, though it is not visible within the | - | | | container itself. Only IPv4 addresses are currently | - | | | supported. | - +----------------+---------+------------------------------------------------------------+ - | host-netmask | 32 | .. index:: | - | | | single: network; attribute; host-netmask | - | | | single: attribute; host-netmask (network) | - | | | single: host-netmask; network attribute | - | | | | - | | | If ``ip-range-start`` is specified, the IP addresses | - | | | are created with this CIDR netmask (as a number of bits). | - +----------------+---------+------------------------------------------------------------+ - | host-interface | | .. index:: | - | | | single: network; attribute; host-interface | - | | | single: attribute; host-interface (network) | - | | | single: host-interface; network attribute | - | | | | - | | | If ``ip-range-start`` is specified, the IP addresses are | - | | | created on this host interface (by default, it will be | - | | | determined from the IP address). | - +----------------+---------+------------------------------------------------------------+ - | control-port | 3121 | .. 
index:: | - | | | single: network; attribute; control-port | - | | | single: attribute; control-port (network) | - | | | single: control-port; network attribute | - | | | | - | | | If the bundle contains a ``primitive``, the cluster will | - | | | use this integer TCP port for communication with | - | | | Pacemaker Remote inside the container. Changing this is | - | | | useful when the container is unable to listen on the | - | | | default port, for example, when the container uses the | - | | | host's network rather than ``ip-range-start`` (in which | - | | | case ``replicas-per-host`` must be 1), or when the bundle | - | | | may run on a Pacemaker Remote node that is already | - | | | listening on the default port. Any ``PCMK_remote_port`` | - | | | environment variable set on the host or in the container | - | | | is ignored for bundle connections. | - +----------------+---------+------------------------------------------------------------+ + :header-rows: 1 + + * - Attribute + - Default + - Description + * - add-host + - TRUE + - .. index:: + single: network; attribute, add-host + single: attribute; add-host (network) + single: add-host; network attribute + + If TRUE, and ``ip-range-start`` is used, Pacemaker will automatically + ensure that ``/etc/hosts`` inside the containers has entries for each + :ref:`replica name ` and its + assigned IP. + * - ip-range-start + - + - .. index:: + single: network; attribute, ip-range-start + single: attribute; ip-range-start (network) + single: ip-range-start; network attribute + + If specified, Pacemaker will create an implicit ``ocf:heartbeat:IPaddr2`` + resource for each container instance, starting with this IP address, + using up to ``replicas`` sequential addresses. These addresses can be + used from the host's network to reach the service inside the container, + though it is not visible within the container itself. Only IPv4 + addresses are currently supported. + * - host-netmask + - 32 + - .. index:: + single: network; attribute; host-netmask + single: attribute; host-netmask (network) + single: host-netmask; network attribute + + If ``ip-range-start`` is specified, the IP addresses are created with + this CIDR netmask (as a number of bits). + * - host-interface + - + - .. index:: + single: network; attribute; host-interface + single: attribute; host-interface (network) + single: host-interface; network attribute + + If ``ip-range-start`` is specified, the IP addresses are created on this + host interface (by default, it will be determined from the IP address). + * - control-port + - 3121 + - .. index:: + single: network; attribute; control-port + single: attribute; control-port (network) + single: control-port; network attribute + + If the bundle contains a ``primitive``, the cluster will use this integer + TCP port for communication with Pacemaker Remote inside the container. + Changing this is useful when the container is unable to listen on the + default port, for example, when the container uses the host's network + rather than ``ip-range-start`` (in which case ``replicas-per-host`` must + be 1), or when the bundle may run on a Pacemaker Remote node that is + already listening on the default port. Any ``PCMK_remote_port`` + environment variable set on the host or in the container is ignored for + bundle connections. .. _s-resource-bundle-note-replica-names: .. note:: Replicas are named by the bundle id plus a dash and an integer counter starting with zero. 
For example, if a bundle named **httpd-bundle** has **replicas=2**, its containers will be named **httpd-bundle-0** and **httpd-bundle-1**. .. index:: pair: XML element; port-mapping Additionally, a ``network`` element may optionally contain one or more ``port-mapping`` elements. -.. table:: **Attributes of a port-mapping Element** +.. list-table:: **Attributes of a port-mapping Element** + :class: longtable :widths: 20 20 60 - - +---------------+-------------------+------------------------------------------------------+ - | Attribute | Default | Description | - +===============+===================+======================================================+ - | id | | .. index:: | - | | | single: port-mapping; attribute, id | - | | | single: attribute; id (port-mapping) | - | | | single: id; port-mapping attribute | - | | | | - | | | A unique name for the port mapping (required) | - +---------------+-------------------+------------------------------------------------------+ - | port | | .. index:: | - | | | single: port-mapping; attribute, port | - | | | single: attribute; port (port-mapping) | - | | | single: port; port-mapping attribute | - | | | | - | | | If this is specified, connections to this TCP port | - | | | number on the host network (on the container's | - | | | assigned IP address, if ``ip-range-start`` is | - | | | specified) will be forwarded to the container | - | | | network. Exactly one of ``port`` or ``range`` | - | | | must be specified in a ``port-mapping``. | - +---------------+-------------------+------------------------------------------------------+ - | internal-port | value of ``port`` | .. index:: | - | | | single: port-mapping; attribute, internal-port | - | | | single: attribute; internal-port (port-mapping) | - | | | single: internal-port; port-mapping attribute | - | | | | - | | | If ``port`` and this are specified, connections | - | | | to ``port`` on the host's network will be | - | | | forwarded to this port on the container network. | - +---------------+-------------------+------------------------------------------------------+ - | range | | .. index:: | - | | | single: port-mapping; attribute, range | - | | | single: attribute; range (port-mapping) | - | | | single: range; port-mapping attribute | - | | | | - | | | If this is specified, connections to these TCP | - | | | port numbers (expressed as *first_port*-*last_port*) | - | | | on the host network (on the container's assigned IP | - | | | address, if ``ip-range-start`` is specified) will | - | | | be forwarded to the same ports in the container | - | | | network. Exactly one of ``port`` or ``range`` | - | | | must be specified in a ``port-mapping``. | - +---------------+-------------------+------------------------------------------------------+ + :header-rows: 1 + + * - Attribute + - Default + - Description + * - id + - + - .. index:: + single: port-mapping; attribute, id + single: attribute; id (port-mapping) + single: id; port-mapping attribute + + A unique name for the port mapping (required) + * - port + - + - .. index:: + single: port-mapping; attribute, port + single: attribute; port (port-mapping) + single: port; port-mapping attribute + + If this is specified, connections to this TCP port number on the host + network (on the container's assigned IP address, if ``ip-range-start`` + is specified) will be forwarded to the container network. Exactly one + of ``port`` or ``range`` must be specified in a ``port-mapping``. + * - internal-port + - value of ``port`` + - .. 
index:: + single: port-mapping; attribute, internal-port + single: attribute; internal-port (port-mapping) + single: internal-port; port-mapping attribute + + If ``port`` and this are specified, connections to ``port`` on the host's + network will be forwarded to this port on the container network. + * - range + - + - .. index:: + single: port-mapping; attribute, range + single: attribute; range (port-mapping) + single: range; port-mapping attribute + + If this is specified, connections to these TCP port numbers (expressed as + *first_port*-*last_port*) on the host network (on the container's + assigned IP address, if ``ip-range-start`` is specified) will be forwarded + to the same ports in the container network. Exactly one of ``port`` or + ``range`` must be specified in a ``port-mapping``. .. note:: If the bundle contains a ``primitive``, Pacemaker will automatically map the ``control-port``, so it is not necessary to specify that port in a ``port-mapping``. .. index: pair: XML element; storage pair: XML element; storage-mapping single: bundle; storage .. _s-bundle-storage: Bundle Storage Properties _________________________ A bundle may optionally contain one ``storage`` element. A ``storage`` element has no properties of its own, but may contain one or more ``storage-mapping`` elements. -.. table:: **Attributes of a storage-mapping Element** +.. list-table:: **Attributes of a storage-mapping Element** + :class: longtable :widths: 20 20 60 - - +-----------------+---------+-------------------------------------------------------------+ - | Attribute | Default | Description | - +=================+=========+=============================================================+ - | id | | .. index:: | - | | | single: storage-mapping; attribute, id | - | | | single: attribute; id (storage-mapping) | - | | | single: id; storage-mapping attribute | - | | | | - | | | A unique name for the storage mapping (required) | - +-----------------+---------+-------------------------------------------------------------+ - | source-dir | | .. index:: | - | | | single: storage-mapping; attribute, source-dir | - | | | single: attribute; source-dir (storage-mapping) | - | | | single: source-dir; storage-mapping attribute | - | | | | - | | | The absolute path on the host's filesystem that will be | - | | | mapped into the container. Exactly one of ``source-dir`` | - | | | and ``source-dir-root`` must be specified in a | - | | | ``storage-mapping``. | - +-----------------+---------+-------------------------------------------------------------+ - | source-dir-root | | .. index:: | - | | | single: storage-mapping; attribute, source-dir-root | - | | | single: attribute; source-dir-root (storage-mapping) | - | | | single: source-dir-root; storage-mapping attribute | - | | | | - | | | The start of a path on the host's filesystem that will | - | | | be mapped into the container, using a different | - | | | subdirectory on the host for each container instance. | - | | | The subdirectory will be named the same as the | - | | | :ref:`replica name `. | - | | | Exactly one of ``source-dir`` and ``source-dir-root`` | - | | | must be specified in a ``storage-mapping``. | - +-----------------+---------+-------------------------------------------------------------+ - | target-dir | | .. 
index:: | - | | | single: storage-mapping; attribute, target-dir | - | | | single: attribute; target-dir (storage-mapping) | - | | | single: target-dir; storage-mapping attribute | - | | | | - | | | The path name within the container where the host | - | | | storage will be mapped (required) | - +-----------------+---------+-------------------------------------------------------------+ - | options | | .. index:: | - | | | single: storage-mapping; attribute, options | - | | | single: attribute; options (storage-mapping) | - | | | single: options; storage-mapping attribute | - | | | | - | | | A comma-separated list of file system mount | - | | | options to use when mapping the storage | - +-----------------+---------+-------------------------------------------------------------+ + :header-rows: 1 + + * - Attribute + - Default + - Description + * - id + - + - .. index:: + single: storage-mapping; attribute, id + single: attribute; id (storage-mapping) + single: id; storage-mapping attribute + + A unique name for the storage mapping (required) + * - source-dir + - + - .. index:: + single: storage-mapping; attribute, source-dir + single: attribute; source-dir (storage-mapping) + single: source-dir; storage-mapping attribute + + The absolute path on the host's filesystem that will be mapped into the + container. Exactly one of ``source-dir`` and ``source-dir-root`` must be + specified in a ``storage-mapping``. + * - source-dir-root + - + - .. index:: + single: storage-mapping; attribute, source-dir-root + single: attribute; source-dir-root (storage-mapping) + single: source-dir-root; storage-mapping attribute + + The start of a path on the host's filesystem that will be mapped into the + container, using a different subdirectory on the host for each container + instance. The subdirectory will be named the same as the + :ref:`replica name `. Exactly one + of ``source-dir`` and ``source-dir-root`` must be specified in a + ``storage-mapping``. + * - target-dir + - + - .. index:: + single: storage-mapping; attribute, target-dir + single: attribute; target-dir (storage-mapping) + single: target-dir; storage-mapping attribute + + The path name within the container where the host storage will be mapped + (required) + * - options + - + - .. index:: + single: storage-mapping; attribute, options + single: attribute; options (storage-mapping) + single: options; storage-mapping attribute + + A comma-separated list of file system mount options to use when mapping + the storage .. note:: Pacemaker does not define the behavior if the source directory does not already exist on the host. However, it is expected that the container technology and/or its resource agent will create the source directory in that case. .. note:: If the bundle contains a ``primitive``, Pacemaker will automatically map the equivalent of ``source-dir=/etc/pacemaker/authkey target-dir=/etc/pacemaker/authkey`` and ``source-dir-root=/var/log/pacemaker/bundles target-dir=/var/log`` into the container, so it is not necessary to specify those paths in a ``storage-mapping``. .. important:: The ``PCMK_authkey_location`` environment variable must not be set to anything other than the default of ``/etc/pacemaker/authkey`` on any node in the cluster. .. important:: If SELinux is used in enforcing mode on the host, you must ensure the container is allowed to use any storage you mount into it. For Docker and podman bundles, adding "Z" to the mount options will create a container-specific label for the mount that allows the container access. .. 
index:: single: bundle; primitive Bundle Primitive ________________ A bundle may optionally contain one :ref:`primitive ` resource. The primitive may have operations, instance attributes, and meta-attributes defined, as usual. If a bundle contains a primitive resource, the container image must include the Pacemaker Remote daemon, and at least one of ``ip-range-start`` or ``control-port`` must be configured in the bundle. Pacemaker will create an implicit **ocf:pacemaker:remote** resource for the connection, launch Pacemaker Remote within the container, and monitor and manage the primitive resource via Pacemaker Remote. If the bundle has more than one container instance (replica), the primitive resource will function as an implicit :ref:`clone ` -- a :ref:`promotable clone ` if the bundle has ``promoted-max`` greater than zero. .. note:: If you want to pass environment variables to a bundle's Pacemaker Remote connection or primitive, you have two options: * Environment variables whose value is the same regardless of the underlying host may be set using the container element's ``options`` attribute. * If you want variables to have host-specific values, you can use the :ref:`storage-mapping ` element to map a file on the host as ``/etc/pacemaker/pcmk-init.env`` in the container *(since 2.0.3)*. Pacemaker Remote will parse this file as a shell-like format, with variables set as NAME=VALUE, ignoring blank lines and comments starting with "#". .. important:: When a bundle has a ``primitive``, Pacemaker on all cluster nodes must be able to contact Pacemaker Remote inside the bundle's containers. * The containers must have an accessible network (for example, ``network`` should not be set to "none" with a ``primitive``). * The default, using a distinct network space inside the container, works in combination with ``ip-range-start``. Any firewall must allow access from all cluster nodes to the ``control-port`` on the container IPs. * If the container shares the host's network space (for example, by setting ``network`` to "host"), a unique ``control-port`` should be specified for each bundle. Any firewall must allow access from all cluster nodes to the ``control-port`` on all cluster and remote node IPs. .. index:: single: bundle; node attributes .. _s-bundle-attributes: Bundle Node Attributes ______________________ If the bundle has a ``primitive``, the primitive's resource agent may want to set node attributes such as :ref:`promotion scores `. However, with containers, it is not apparent which node should get the attribute. If the container uses shared storage that is the same no matter which node the container is hosted on, then it is appropriate to use the promotion score on the bundle node itself. On the other hand, if the container uses storage exported from the underlying host, then it may be more appropriate to use the promotion score on the underlying host. Since this depends on the particular situation, the ``container-attribute-target`` resource meta-attribute allows the user to specify which approach to use. If it is set to ``host``, then user-defined node attributes will be checked on the underlying host. If it is anything else, the local node (in this case the bundle node) is used as usual. This only applies to user-defined attributes; the cluster will always check the local node for cluster-defined attributes such as ``#uname``. 
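As a minimal sketch (the bundle name, agent, and address values here are illustrative, not taken from the examples above), the meta-attribute could be set on a bundle like this:

.. topic:: Checking user-defined node attributes on the underlying host

   .. code-block:: xml

      <bundle id="db-bundle">
        <meta_attributes id="db-bundle-meta_attributes">
          <nvpair id="db-bundle-meta_attributes-target"
                  name="container-attribute-target" value="host"/>
        </meta_attributes>
        <podman image="registry.example.com/db:latest" replicas="3" promoted-max="1"/>
        <network ip-range-start="192.0.2.131" host-netmask="24"/>
        <primitive id="db" class="ocf" provider="example" type="db-agent"/>
      </bundle>

With ``container-attribute-target`` set to ``host``, any promotion scores set by the ``db`` agent are recorded for the underlying cluster node rather than for the bundle node.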
If ``container-attribute-target`` is ``host``, the cluster will pass additional environment variables to the primitive's resource agent that allow it to set node attributes appropriately: ``CRM_meta_container_attribute_target`` (identical to the meta-attribute value) and ``CRM_meta_physical_host`` (the name of the underlying host). .. note:: When called by a resource agent, the ``attrd_updater`` and ``crm_attribute`` commands will automatically check those environment variables and set attributes appropriately. .. index:: single: bundle; meta-attributes Bundle Meta-Attributes ______________________ Any meta-attribute set on a bundle will be inherited by the bundle's primitive and any resources implicitly created by Pacemaker for the bundle. This includes options such as ``priority``, ``target-role``, and ``is-managed``. See :ref:`resource_options` for more information. Bundles support clone meta-attributes including ``notify``, ``ordered``, and ``interleave``. Limitations of Bundles ______________________ Restarting pacemaker while a bundle is unmanaged or the cluster is in maintenance mode may cause the bundle to fail. Bundles may not be explicitly cloned or included in groups. This includes the bundle's primitive and any resources implicitly created by Pacemaker for the bundle. (If ``replicas`` is greater than 1, the bundle will behave like a clone implicitly.) Bundles do not have instance attributes, utilization attributes, or operations, though a bundle's primitive may have them. A bundle with a primitive can run on a Pacemaker Remote node only if the bundle uses a distinct ``control-port``. .. [#] Of course, the service must support running multiple instances. .. [#] Docker is a trademark of Docker, Inc. No endorsement by or association with Docker, Inc. is implied. diff --git a/doc/sphinx/Pacemaker_Explained/constraints.rst b/doc/sphinx/Pacemaker_Explained/constraints.rst index 050c40598a..422fa451da 100644 --- a/doc/sphinx/Pacemaker_Explained/constraints.rst +++ b/doc/sphinx/Pacemaker_Explained/constraints.rst @@ -1,1142 +1,1147 @@ .. index:: single: constraint single: resource; constraint .. _constraints: Resource Constraints -------------------- .. _location-constraint: .. index:: single: location constraint single: constraint; location Deciding Which Nodes a Resource Can Run On ########################################## *Location constraints* tell the cluster which nodes a resource can run on. There are two alternative strategies. One way is to say that, by default, resources can run anywhere, and then the location constraints specify nodes that are not allowed (an *opt-out* cluster). The other way is to start with nothing able to run anywhere, and use location constraints to selectively enable allowed nodes (an *opt-in* cluster). Whether you should choose opt-in or opt-out depends on your personal preference and the make-up of your cluster. If most of your resources can run on most of the nodes, then an opt-out arrangement is likely to result in a simpler configuration. On the other-hand, if most resources can only run on a small subset of nodes, an opt-in configuration might be simpler. .. index:: pair: XML element; rsc_location single: constraint; rsc_location Location Properties ___________________ .. list-table:: **Attributes of a rsc_location Element** :class: longtable :widths: 15 15 10 60 :header-rows: 1 * - Name - Type - Default - Description * - .. rsc_location_id: .. 
index:: single: rsc_location; attribute, id single: attribute; id (rsc_location) single: id; rsc_location attribute id - :ref:`id ` - - A unique name for the constraint (required) * - .. rsc_location_rsc: .. index:: single: rsc_location; attribute, rsc single: attribute; rsc (rsc_location) single: rsc; rsc_location attribute rsc - :ref:`id ` - - The name of the resource to which this constraint applies. A location constraint must either have a ``rsc``, have a ``rsc-pattern``, or contain at least one resource set. * - .. rsc_pattern: .. index:: single: rsc_location; attribute, rsc-pattern single: attribute; rsc-pattern (rsc_location) single: rsc-pattern; rsc_location attribute rsc-pattern - :ref:`text ` - - A pattern matching the names of resources to which this constraint applies. The syntax is the same as `POSIX `_ extended regular expressions, with the addition of an initial ``!`` indicating that resources *not* matching the pattern are selected. If the regular expression contains submatches, and the constraint contains a :ref:`rule `, the submatches can be referenced as ``%1`` through ``%9`` in the rule's ``score-attribute`` or a rule expression's ``attribute`` (see :ref:`s-rsc-pattern-rules`). A location constraint must either have a ``rsc``, have a ``rsc-pattern``, or contain at least one resource set. * - .. rsc_location_node: .. index:: single: rsc_location; attribute, node single: attribute; node (rsc_location) single: node; rsc_location attribute node - :ref:`text ` - - The name of the node to which this constraint applies. A location constraint must either have a ``node`` and ``score``, or contain at least one rule. * - .. rsc_location_score: .. index:: single: rsc_location; attribute, score single: attribute; score (rsc_location) single: score; rsc_location attribute score - :ref:`score ` - - Positive values indicate a preference for running the affected resource(s) on ``node`` -- the higher the value, the stronger the preference. Negative values indicate the resource(s) should avoid this node (a value of **-INFINITY** changes "should" to "must"). A location constraint must either have a ``node`` and ``score``, or contain at least one rule. * - .. rsc_location_role: .. index:: single: rsc_location; attribute, role single: attribute; role (rsc_location) single: role; rsc_location attribute role - :ref:`enumeration ` - ``Started`` - This is significant only for :ref:`promotable clones `, is allowed only if ``rsc`` or ``rsc-pattern`` is set, and is ignored if the constraint contains a rule. Allowed values: * ``Started`` or ``Unpromoted``: The constraint affects the location of all instances of the resource. (A promoted instance must start in the unpromoted role before being promoted, so any location requirement for unpromoted instances also affects promoted instances.) * ``Promoted``: The constraint does not affect the location of instances, but instead affects which of the instances will be promoted. * - .. resource_discovery: .. index:: single: rsc_location; attribute, resource-discovery single: attribute; resource-discovery (rsc_location) single: resource-discovery; rsc_location attribute resource-discovery - :ref:`enumeration ` - always - Whether Pacemaker should perform resource discovery (that is, check whether the resource is already running) for this resource on this node. This should normally be left as the default, so that rogue instances of a service can be stopped when they are running where they are not supposed to be. 
However, there are two situations where disabling resource discovery is a good idea: when a service is not installed on a node, discovery might return an error (properly written OCF agents will not, so this is usually only seen with other agent types); and when Pacemaker Remote is used to scale a cluster to hundreds of nodes, limiting resource discovery to allowed nodes can significantly boost performance. Allowed values: * ``always:`` Always perform resource discovery for the specified resource on this node. * ``never:`` Never perform resource discovery for the specified resource on this node. This option should generally be used with a -INFINITY score, although that is not strictly required. * ``exclusive:`` Perform resource discovery for the specified resource only on this node (and other nodes similarly marked as ``exclusive``). Multiple location constraints using ``exclusive`` discovery for the same resource across different nodes creates a subset of nodes resource-discovery is exclusive to. If a resource is marked for ``exclusive`` discovery on one or more nodes, that resource is only allowed to be placed within that subset of nodes. .. warning:: Setting ``resource-discovery`` to ``never`` or ``exclusive`` removes Pacemaker's ability to detect and stop unwanted instances of a service running where it's not supposed to be. It is up to the system administrator (you!) to make sure that the service can *never* be active on nodes without ``resource-discovery`` (such as by leaving the relevant software uninstalled). .. index:: single: Asymmetrical Clusters single: Opt-In Clusters Asymmetrical "Opt-In" Clusters ______________________________ To create an opt-in cluster, start by preventing resources from running anywhere by default: .. code-block:: none # crm_attribute --name symmetric-cluster --update false Then start enabling nodes. The following fragment says that the web server prefers **sles-1**, the database prefers **sles-2** and both can fail over to **sles-3** if their most preferred node fails. .. topic:: Opt-in location constraints for two resources .. code-block:: xml .. index:: single: Symmetrical Clusters single: Opt-Out Clusters Symmetrical "Opt-Out" Clusters ______________________________ To create an opt-out cluster, start by allowing resources to run anywhere by default: .. code-block:: none # crm_attribute --name symmetric-cluster --update true Then start disabling nodes. The following fragment is the equivalent of the above opt-in configuration. .. topic:: Opt-out location constraints for two resources .. code-block:: xml .. _node-score-equal: What if Two Nodes Have the Same Score _____________________________________ If two nodes have the same score, then the cluster will choose one. This choice may seem random and may not be what was intended, however the cluster was not given enough information to know any better. .. topic:: Constraints where a resource prefers two nodes equally .. code-block:: xml In the example above, assuming no other constraints and an inactive cluster, **Webserver** would probably be placed on **sles-1** and **Database** on **sles-2**. It would likely have placed **Webserver** based on the node's uname and **Database** based on the desire to spread the resource load evenly across the cluster. However other factors can also be involved in more complex configurations. .. 
_s-rsc-pattern: Specifying locations using pattern matching ___________________________________________ A location constraint can affect all resources whose IDs match a given pattern. The following example bans resources named **ip-httpd**, **ip-asterisk**, **ip-gateway**, etc., from **node1**. .. topic:: Location constraint banning all resources matching a pattern from one node .. code-block:: xml .. index:: single: constraint; ordering single: resource; start order .. _s-resource-ordering: Specifying the Order in which Resources Should Start/Stop ######################################################### *Ordering constraints* tell the cluster the order in which certain resource actions should occur. .. important:: Ordering constraints affect *only* the ordering of resource actions; they do *not* require that the resources be placed on the same node. If you want resources to be started on the same node *and* in a specific order, you need both an ordering constraint *and* a colocation constraint (see :ref:`s-resource-colocation`), or alternatively, a group (see :ref:`group-resources`). .. index:: pair: XML element; rsc_order pair: constraint; rsc_order Ordering Properties ___________________ -.. table:: **Attributes of a rsc_order Element** +.. list-table:: **Attributes of a rsc_order Element** :class: longtable :widths: 15 30 55 + :header-rows: 1 + + * - Field + - Default + - Description + * - id + - + - .. index:: + single: rsc_order; attribute, id + single: attribute; id (rsc_order) + single: id; rsc_order attribute + + A unique name for the constraint + * - first + - + - .. index:: + single: rsc_order; attribute, first + single: attribute; first (rsc_order) + single: first; rsc_order attribute - +--------------+----------------------------+-------------------------------------------------------------------+ - | Field | Default | Description | - +==============+============================+===================================================================+ - | id | | .. index:: | - | | | single: rsc_order; attribute, id | - | | | single: attribute; id (rsc_order) | - | | | single: id; rsc_order attribute | - | | | | - | | | A unique name for the constraint | - +--------------+----------------------------+-------------------------------------------------------------------+ - | first | | .. index:: | - | | | single: rsc_order; attribute, first | - | | | single: attribute; first (rsc_order) | - | | | single: first; rsc_order attribute | - | | | | - | | | Name of the resource that the ``then`` resource | - | | | depends on | - +--------------+----------------------------+-------------------------------------------------------------------+ - | then | | .. index:: | - | | | single: rsc_order; attribute, then | - | | | single: attribute; then (rsc_order) | - | | | single: then; rsc_order attribute | - | | | | - | | | Name of the dependent resource | - +--------------+----------------------------+-------------------------------------------------------------------+ - | first-action | start | .. index:: | - | | | single: rsc_order; attribute, first-action | - | | | single: attribute; first-action (rsc_order) | - | | | single: first-action; rsc_order attribute | - | | | | - | | | The action that the ``first`` resource must complete | - | | | before ``then-action`` can be initiated for the ``then`` | - | | | resource. Allowed values: ``start``, ``stop``, | - | | | ``promote``, ``demote``. 
| - +--------------+----------------------------+-------------------------------------------------------------------+ - | then-action | value of ``first-action`` | .. index:: | - | | | single: rsc_order; attribute, then-action | - | | | single: attribute; then-action (rsc_order) | - | | | single: first-action; rsc_order attribute | - | | | | - | | | The action that the ``then`` resource can execute only | - | | | after the ``first-action`` on the ``first`` resource has | - | | | completed. Allowed values: ``start``, ``stop``, | - | | | ``promote``, ``demote``. | - +--------------+----------------------------+-------------------------------------------------------------------+ - | kind | Mandatory | .. index:: | - | | | single: rsc_order; attribute, kind | - | | | single: attribute; kind (rsc_order) | - | | | single: kind; rsc_order attribute | - | | | | - | | | How to enforce the constraint. Allowed values: | - | | | | - | | | * ``Mandatory:`` ``then-action`` will never be initiated | - | | | for the ``then`` resource unless and until ``first-action`` | - | | | successfully completes for the ``first`` resource. | - | | | | - | | | * ``Optional:`` The constraint applies only if both specified | - | | | resource actions are scheduled in the same transition | - | | | (that is, in response to the same cluster state). This | - | | | means that ``then-action`` is allowed on the ``then`` | - | | | resource regardless of the state of the ``first`` resource, | - | | | but if both actions happen to be scheduled at the same time, | - | | | they will be ordered. | - | | | | - | | | * ``Serialize:`` Ensure that the specified actions are never | - | | | performed concurrently for the specified resources. | - | | | ``First-action`` and ``then-action`` can be executed in either | - | | | order, but one must complete before the other can be initiated. | - | | | An example use case is when resource start-up puts a high load | - | | | on the host. | - +--------------+----------------------------+-------------------------------------------------------------------+ - | symmetrical | TRUE for ``Mandatory`` and | .. index:: | - | | ``Optional`` kinds. FALSE | single: rsc_order; attribute, symmetrical | - | | for ``Serialize`` kind. | single: attribute; symmetrical (rsc)order) | - | | | single: symmetrical; rsc_order attribute | - | | | | - | | | If true, the reverse of the constraint applies for the | - | | | opposite action (for example, if B starts after A starts, | - | | | then B stops before A stops). ``Serialize`` orders cannot | - | | | be symmetrical. | - +--------------+----------------------------+-------------------------------------------------------------------+ + Name of the resource that the ``then`` resource depends on + * - then + - + - .. index:: + single: rsc_order; attribute, then + single: attribute; then (rsc_order) + single: then; rsc_order attribute + + Name of the dependent resource + * - first-action + - start + - .. index:: + single: rsc_order; attribute, first-action + single: attribute; first-action (rsc_order) + single: first-action; rsc_order attribute + + The action that the ``first`` resource must complete before + ``then-action`` can be initiated for the ``then`` resource. Allowed + values: ``start``, ``stop``, ``promote``, ``demote``. + * - then-action + - value of ``first-action`` + - .. 
index:: + single: rsc_order; attribute, then-action + single: attribute; then-action (rsc_order) + single: then-action; rsc_order attribute + + The action that the ``then`` resource can execute only after the + ``first-action`` on the ``first`` resource has completed. Allowed + values: ``start``, ``stop``, ``promote``, ``demote``. + * - kind + - Mandatory + - .. index:: + single: rsc_order; attribute, kind + single: attribute; kind (rsc_order) + single: kind; rsc_order attribute + + How to enforce the constraint. Allowed values: + + * ``Mandatory:`` ``then-action`` will never be initiated for the + ``then`` resource unless and until ``first-action`` successfully + completes for the ``first`` resource. + + * ``Optional:`` The constraint applies only if both specified resource + actions are scheduled in the same transition (that is, in response to + the same cluster state). This means that ``then-action`` is allowed + on the ``then`` resource regardless of the state of the ``first`` + resource, but if both actions happen to be scheduled at the same time, + they will be ordered. + + * ``Serialize:`` Ensure that the specified actions are never performed + concurrently for the specified resources. ``First-action`` and + ``then-action`` can be executed in either order, but one must complete + before the other can be initiated. An example use case is when resource + start-up puts a high load on the host. + * - symmetrical + - TRUE for ``Mandatory`` and ``Optional`` kinds. FALSE for ``Serialize`` + kind. + - .. index:: + single: rsc_order; attribute, symmetrical + single: attribute; symmetrical (rsc)order) + single: symmetrical; rsc_order attribute + + If true, the reverse of the constraint applies for the opposite action (for + example, if B starts after A starts, then B stops before A stops). + ``Serialize`` orders cannot be symmetrical. ``Promote`` and ``demote`` apply to :ref:`promotable ` clone resources. Optional and mandatory ordering _______________________________ Here is an example of ordering constraints where **Database** *must* start before **Webserver**, and **IP** *should* start before **Webserver** if they both need to be started: .. topic:: Optional and mandatory ordering constraints .. code-block:: xml Because the above example lets ``symmetrical`` default to TRUE, **Webserver** must be stopped before **Database** can be stopped, and **Webserver** should be stopped before **IP** if they both need to be stopped. Symmetric and asymmetric ordering _________________________________ A mandatory symmetric ordering of "start A then start B" implies not only that the start actions must be ordered, but that B is not allowed to be active unless A is active. For example, if the ordering is added to the configuration when A is stopped (due to target-role, failure, etc.) and B is already active, then B will be stopped. By contrast, asymmetric ordering of "start A then start B" means the stops can occur in either order, which implies that B *can* remain active in the same situation. .. index:: single: colocation single: constraint; colocation single: resource; location relative to other resources .. _s-resource-colocation: Placing Resources Relative to other Resources ############################################# *Colocation constraints* tell the cluster that the location of one resource depends on the location of another one. Colocation has an important side-effect: it affects the order in which resources are assigned to a node. 
Think about it: You can't place A relative to B unless you know where B is [#]_. So when you are creating colocation constraints, it is important to consider whether you should colocate A with B, or B with A. .. important:: Colocation constraints affect *only* the placement of resources; they do *not* require that the resources be started in a particular order. If you want resources to be started on the same node *and* in a specific order, you need both an ordering constraint (see :ref:`s-resource-ordering`) *and* a colocation constraint, or alternatively, a group (see :ref:`group-resources`). .. index:: pair: XML element; rsc_colocation single: constraint; rsc_colocation Colocation Properties _____________________ -.. table:: **Attributes of a rsc_colocation Constraint** +.. list-table:: **Attributes of a rsc_colocation Constraint** :class: longtable :widths: 15 30 55 + :header-rows: 1 - +----------------+----------------+--------------------------------------------------------+ - | Field | Default | Description | - +================+================+========================================================+ - | id | | .. index:: | - | | | single: rsc_colocation; attribute, id | - | | | single: attribute; id (rsc_colocation) | - | | | single: id; rsc_colocation attribute | - | | | | - | | | A unique name for the constraint (required). | - +----------------+----------------+--------------------------------------------------------+ - | rsc | | .. index:: | - | | | single: rsc_colocation; attribute, rsc | - | | | single: attribute; rsc (rsc_colocation) | - | | | single: rsc; rsc_colocation attribute | - | | | | - | | | The name of a resource that should be located | - | | | relative to ``with-rsc``. A colocation constraint must | - | | | either contain at least one | - | | | :ref:`resource set `, or specify both | - | | | ``rsc`` and ``with-rsc``. | - +----------------+----------------+--------------------------------------------------------+ - | with-rsc | | .. index:: | - | | | single: rsc_colocation; attribute, with-rsc | - | | | single: attribute; with-rsc (rsc_colocation) | - | | | single: with-rsc; rsc_colocation attribute | - | | | | - | | | The name of the resource used as the colocation | - | | | target. The cluster will decide where to put this | - | | | resource first and then decide where to put ``rsc``. | - | | | A colocation constraint must either contain at least | - | | | one :ref:`resource set `, or specify | - | | | both ``rsc`` and ``with-rsc``. | - +----------------+----------------+--------------------------------------------------------+ - | node-attribute | #uname | .. index:: | - | | | single: rsc_colocation; attribute, node-attribute | - | | | single: attribute; node-attribute (rsc_colocation) | - | | | single: node-attribute; rsc_colocation attribute | - | | | | - | | | If ``rsc`` and ``with-rsc`` are specified, this node | - | | | attribute must be the same on the node running ``rsc`` | - | | | and the node running ``with-rsc`` for the constraint | - | | | to be satisfied. (For details, see | - | | | :ref:`s-coloc-attribute`.) | - +----------------+----------------+--------------------------------------------------------+ - | score | 0 | .. index:: | - | | | single: rsc_colocation; attribute, score | - | | | single: attribute; score (rsc_colocation) | - | | | single: score; rsc_colocation attribute | - | | | | - | | | Positive values indicate the resources should run on | - | | | the same node. 
Negative values indicate the resources | - | | | should run on different nodes. Values of | - | | | +/- ``INFINITY`` change "should" to "must". | - +----------------+----------------+--------------------------------------------------------+ - | rsc-role | Started | .. index:: | - | | | single: clone; ordering constraint, rsc-role | - | | | single: ordering constraint; rsc-role (clone) | - | | | single: rsc-role; clone ordering constraint | - | | | | - | | | If ``rsc`` and ``with-rsc`` are specified, and ``rsc`` | - | | | is a :ref:`promotable clone `, | - | | | the constraint applies only to ``rsc`` instances in | - | | | this role. Allowed values: ``Started``, ``Stopped``, | - | | | ``Promoted``, ``Unpromoted``. For details, see | - | | | :ref:`promotable-clone-constraints`. | - +----------------+----------------+--------------------------------------------------------+ - | with-rsc-role | Started | .. index:: | - | | | single: clone; ordering constraint, with-rsc-role | - | | | single: ordering constraint; with-rsc-role (clone) | - | | | single: with-rsc-role; clone ordering constraint | - | | | | - | | | If ``rsc`` and ``with-rsc`` are specified, and | - | | | ``with-rsc`` is a | - | | | :ref:`promotable clone `, the | - | | | constraint applies only to ``with-rsc`` instances in | - | | | this role. Allowed values: ``Started``, ``Stopped``, | - | | | ``Promoted``, ``Unpromoted``. For details, see | - | | | :ref:`promotable-clone-constraints`. | - +----------------+----------------+--------------------------------------------------------+ - | influence | value of | .. index:: | - | | ``critical`` | single: rsc_colocation; attribute, influence | - | | meta-attribute | single: attribute; influence (rsc_colocation) | - | | for ``rsc`` | single: influence; rsc_colocation attribute | - | | | | - | | | Whether to consider the location preferences of | - | | | ``rsc`` when ``with-rsc`` is already active. Allowed | - | | | values: ``true``, ``false``. For details, see | - | | | :ref:`s-coloc-influence`. *(since 2.1.0)* | - +----------------+----------------+--------------------------------------------------------+ + * - Field + - Default + - Description + * - id + - + - .. index:: + single: rsc_colocation; attribute, id + single: attribute; id (rsc_colocation) + single: id; rsc_colocation attribute + + A unique name for the constraint (required). + * - rsc + - + - .. index:: + single: rsc_colocation; attribute, rsc + single: attribute; rsc (rsc_colocation) + single: rsc; rsc_colocation attribute + + The name of a resource that should be located relative to ``with-rsc``. + A colocation constraint must either contain at least one :ref:`resource + set `, or specify both ``rsc`` and ``with-rsc``. + * - with-rsc + - + - .. index:: + single: rsc_colocation; attribute, with-rsc + single: attribute; with-rsc (rsc_colocation) + single: with-rsc; rsc_colocation attribute + + The name of the resource used as the colocation target. The cluster will + decide where to put this resource first and then decide where to put + ``rsc``. A colocation constraint must either contain at least one + :ref:`resource set `, or specify both ``rsc`` and + ``with-rsc``. + * - node-attribute + - #uname + - .. 
index:: + single: rsc_colocation; attribute, node-attribute + single: attribute; node-attribute (rsc_colocation) + single: node-attribute; rsc_colocation attribute + + If ``rsc`` and ``with-rsc`` are specified, this node attribute must be + the same on the node running ``rsc`` and the node running ``with-rsc`` + for the constraint to be satisfied. (For details, see + :ref:`s-coloc-attribute`.) + * - score + - 0 + - .. index:: + single: rsc_colocation; attribute, score + single: attribute; score (rsc_colocation) + single: score; rsc_colocation attribute + + Positive values indicate the resources should run on the same node. + Negative values indicate the resources should run on different nodes. + Values of +/- ``INFINITY`` change "should" to "must". + * - rsc-role + - Started + - .. index:: + single: clone; ordering constraint, rsc-role + single: ordering constraint; rsc-role (clone) + single: rsc-role; clone ordering constraint + + If ``rsc`` and ``with-rsc`` are specified, and ``rsc`` is a + :ref:`promotable clone `, the constraint applies + only to ``rsc`` instances in this role. Allowed values: ``Started``, + ``Stopped``, ``Promoted``, ``Unpromoted``. For details, see + :ref:`promotable-clone-constraints`. + * - with-rsc-role + - Started + - .. index:: + single: clone; ordering constraint, with-rsc-role + single: ordering constraint; with-rsc-role (clone) + single: with-rsc-role; clone ordering constraint + + If ``rsc`` and ``with-rsc`` are specified, and ``with-rsc`` is a + :ref:`promotable clone `, the constraint applies + only to ``with-rsc`` instances in this role. Allowed values: ``Started``, + ``Stopped``, ``Promoted``, ``Unpromoted``. For details, see + :ref:`promotable-clone-constraints`. + * - influence + - value of ``critical`` meta-attribute for ``rsc`` + - .. index:: + single: rsc_colocation; attribute, influence + single: attribute; influence (rsc_colocation) + single: influence; rsc_colocation attribute + + Whether to consider the location preferences of ``rsc`` when ``with-rsc`` + is already active. Allowed values: ``true``, ``false``. For details, + see :ref:`s-coloc-influence`. *(since 2.1.0)* Mandatory Placement ___________________ Mandatory placement occurs when the constraint's score is **+INFINITY** or **-INFINITY**. In such cases, if the constraint can't be satisfied, then the **rsc** resource is not permitted to run. For ``score=INFINITY``, this includes cases where the ``with-rsc`` resource is not active. If you need resource **A** to always run on the same machine as resource **B**, you would add the following constraint: .. topic:: Mandatory colocation constraint for two resources .. code-block:: xml Remember, because **INFINITY** was used, if **B** can't run on any of the cluster nodes (for whatever reason) then **A** will not be allowed to run. Whether **A** is running or not has no effect on **B**. Alternatively, you may want the opposite -- that **A** *cannot* run on the same machine as **B**. In this case, use ``score="-INFINITY"``. .. topic:: Mandatory anti-colocation constraint for two resources .. code-block:: xml Again, by specifying **-INFINITY**, the constraint is binding. So if the only place left to run is where **B** already is, then **A** may not run anywhere. As with **INFINITY**, **B** can run even if **A** is stopped. However, in this case **A** also can run if **B** is stopped, because it still meets the constraint of **A** and **B** not running on the same node. 
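For reference, minimal sketches of the two constraints discussed above, using
the placeholder resource names **A** and **B**, could look like the following.
The two constraints are alternatives and would not be used together:

.. topic:: Sketch of mandatory colocation and anti-colocation constraints

   .. code-block:: xml

      <!-- A must run on the same node as B -->
      <rsc_colocation id="colocate-A-with-B" rsc="A" with-rsc="B" score="INFINITY"/>

      <!-- Alternative: A must not run on the same node as B -->
      <rsc_colocation id="anti-colocate-A-from-B" rsc="A" with-rsc="B" score="-INFINITY"/>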
Advisory Placement __________________ If mandatory placement is about "must" and "must not", then advisory placement is the "I'd prefer if" alternative. For colocation constraints with scores greater than **-INFINITY** and less than **INFINITY**, the cluster will try to accommodate your wishes, but may ignore them if other factors outweigh the colocation score. Those factors might include other constraints, resource stickiness, failure thresholds, whether other resources would be prevented from being active, etc. .. topic:: Advisory colocation constraint for two resources .. code-block:: xml .. _s-coloc-attribute: Colocation by Node Attribute ____________________________ The ``node-attribute`` property of a colocation constraints allows you to express the requirement, "these resources must be on similar nodes". As an example, imagine that you have two Storage Area Networks (SANs) that are not controlled by the cluster, and each node is connected to one or the other. You may have two resources **r1** and **r2** such that **r2** needs to use the same SAN as **r1**, but doesn't necessarily have to be on the same exact node. In such a case, you could define a :ref:`node attribute ` named **san**, with the value **san1** or **san2** on each node as appropriate. Then, you could colocate **r2** with **r1** using ``node-attribute`` set to **san**. .. _s-coloc-influence: Colocation Influence ____________________ By default, if A is colocated with B, the cluster will take into account A's preferences when deciding where to place B, to maximize the chance that both resources can run. For a detailed look at exactly how this occurs, see `Colocation Explained `_. However, if ``influence`` is set to ``false`` in the colocation constraint, this will happen only if B is inactive and needing to be started. If B is already active, A's preferences will have no effect on placing B. An example of what effect this would have and when it would be desirable would be a nonessential reporting tool colocated with a resource-intensive service that takes a long time to start. If the reporting tool fails enough times to reach its migration threshold, by default the cluster will want to move both resources to another node if possible. Setting ``influence`` to ``false`` on the colocation constraint would mean that the reporting tool would be stopped in this situation instead, to avoid forcing the service to move. The ``critical`` resource meta-attribute is a convenient way to specify the default for all colocation constraints and groups involving a particular resource. .. note:: If a noncritical resource is a member of a group, all later members of the group will be treated as noncritical, even if they are marked as (or left to default to) critical. .. _s-resource-sets: Resource Sets ############# .. index:: single: constraint; resource set single: resource; resource set *Resource sets* allow multiple resources to be affected by a single constraint. .. topic:: A set of 3 resources .. code-block:: xml Resource sets are valid inside ``rsc_location``, ``rsc_order`` (see :ref:`s-resource-sets-ordering`), ``rsc_colocation`` (see :ref:`s-resource-sets-colocation`), and ``rsc_ticket`` (see :ref:`ticket-constraints`) constraints. A resource set has a number of properties that can be set, though not all have an effect in all contexts. .. index:: pair: XML element; resource_set -.. table:: **Attributes of a resource_set Element** +.. 
list-table:: **Attributes of a resource_set Element** :class: longtable :widths: 15 15 70 + :header-rows: 1 + + * - Field + - Default + - Description + * - id + - + - .. index:: + single: resource_set; attribute, id + single: attribute; id (resource_set) + single: id; resource_set attribute + + A unique name for the set (required) + * - sequential + - true + - .. index:: + single: resource_set; attribute, sequential + single: attribute; sequential (resource_set) + single: sequential; resource_set attribute + + Whether the members of the set must be acted on in order. Meaningful + within ``rsc_order`` and ``rsc_colocation``. + * - require-all + - true + - .. index:: + single: resource_set; attribute, require-all + single: attribute; require-all (resource_set) + single: require-all; resource_set attribute + + Whether all members of the set must be active before continuing. With + the current implementation, the cluster may continue even if only one + member of the set is started, but if more than one member of the set is + starting at the same time, the cluster will still wait until all of + those have started before continuing (this may change in future + versions). Meaningful within ``rsc_order``. + * - role + - + - .. index:: + single: resource_set; attribute, role + single: attribute; role (resource_set) + single: role; resource_set attribute + + The constraint applies only to resource set members that are + :ref:`s-resource-promotable` in this role. Meaningful within + ``rsc_location``, ``rsc_colocation`` and ``rsc_ticket``. Allowed + values: ``Started``, ``Promoted``, ``Unpromoted``. For details, see + :ref:`promotable-clone-constraints`. + * - action + - start + - .. index:: + single: resource_set; attribute, action + single: attribute; action (resource_set) + single: action; resource_set attribute + + The action that applies to *all members* of the set. Meaningful within + ``rsc_order``. Allowed values: ``start``, ``stop``, ``promote``, + ``demote``. + * - score + - + - .. index:: + single: resource_set; attribute, score + single: attribute; score (resource_set) + single: score; resource_set attribute + + *Advanced use only.* Use a specific score for this set. Meaningful + within ``rsc_location`` or ``rsc_colocation``. + * - kind + - + - .. index:: + single: resource_set; attribute, kind + single: attribute; kind (resource_set) + single: kind; resource_set attribute - +-------------+------------------+--------------------------------------------------------+ - | Field | Default | Description | - +=============+==================+========================================================+ - | id | | .. index:: | - | | | single: resource_set; attribute, id | - | | | single: attribute; id (resource_set) | - | | | single: id; resource_set attribute | - | | | | - | | | A unique name for the set (required) | - +-------------+------------------+--------------------------------------------------------+ - | sequential | true | .. index:: | - | | | single: resource_set; attribute, sequential | - | | | single: attribute; sequential (resource_set) | - | | | single: sequential; resource_set attribute | - | | | | - | | | Whether the members of the set must be acted on in | - | | | order. Meaningful within ``rsc_order`` and | - | | | ``rsc_colocation``. | - +-------------+------------------+--------------------------------------------------------+ - | require-all | true | .. 
index:: | - | | | single: resource_set; attribute, require-all | - | | | single: attribute; require-all (resource_set) | - | | | single: require-all; resource_set attribute | - | | | | - | | | Whether all members of the set must be active before | - | | | continuing. With the current implementation, the | - | | | cluster may continue even if only one member of the | - | | | set is started, but if more than one member of the set | - | | | is starting at the same time, the cluster will still | - | | | wait until all of those have started before continuing | - | | | (this may change in future versions). Meaningful | - | | | within ``rsc_order``. | - +-------------+------------------+--------------------------------------------------------+ - | role | | .. index:: | - | | | single: resource_set; attribute, role | - | | | single: attribute; role (resource_set) | - | | | single: role; resource_set attribute | - | | | | - | | | The constraint applies only to resource set members | - | | | that are :ref:`s-resource-promotable` in this | - | | | role. Meaningful within ``rsc_location``, | - | | | ``rsc_colocation`` and ``rsc_ticket``. | - | | | Allowed values: ``Started``, ``Promoted``, | - | | | ``Unpromoted``. For details, see | - | | | :ref:`promotable-clone-constraints`. | - +-------------+------------------+--------------------------------------------------------+ - | action | start | .. index:: | - | | | single: resource_set; attribute, action | - | | | single: attribute; action (resource_set) | - | | | single: action; resource_set attribute | - | | | | - | | | The action that applies to *all members* of the set. | - | | | Meaningful within ``rsc_order``. Allowed values: | - | | | ``start``, ``stop``, ``promote``, ``demote``. | - +-------------+------------------+--------------------------------------------------------+ - | score | | .. index:: | - | | | single: resource_set; attribute, score | - | | | single: attribute; score (resource_set) | - | | | single: score; resource_set attribute | - | | | | - | | | *Advanced use only.* Use a specific score for this | - | | | set. Meaningful within ``rsc_location`` or | - | | | ``rsc_colocation``. | - +-------------+------------------+--------------------------------------------------------+ - | kind | | .. index:: | - | | | single: resource_set; attribute, kind | - | | | single: attribute; kind (resource_set) | - | | | single: kind; resource_set attribute | - | | | | - | | | *Advanced use only.* Use a specific kind for this | - | | | set. Meaningful within ``rsc_order``. | - +-------------+------------------+--------------------------------------------------------+ + *Advanced use only.* Use a specific kind for this set. Meaningful within + ``rsc_order``. Anti-colocation Chains ______________________ Sometimes, you would like a set of resources to be anti-colocated with each other. For example, ``resource1``, ``resource2``, and ``resource3`` must all run on different nodes. A straightforward approach would be to configure either separate colocations or a resource set, with ``-INFINITY`` scores between all the resources. However, this will not work as expected. Resource sets may in the future gain new syntax for this specific situation, but for now, a workaround is to use :ref:`utilization ` instead of colocations to keep the resources apart. Create a utilization attribute for the anti-colocation, assign the same value to each resource, and give each node the capacity to run one resource. .. 
_s-resource-sets-ordering: Ordering Sets of Resources ########################## A common situation is for an administrator to create a chain of ordered resources, such as: .. topic:: A chain of ordered resources .. code-block:: xml .. topic:: Visual representation of the four resources' start order for the above constraints .. image:: images/resource-set.png :alt: Ordered set Ordered Set ___________ To simplify this situation, :ref:`s-resource-sets` can be used within ordering constraints: .. topic:: A chain of ordered resources expressed as a set .. code-block:: xml While the set-based format is not less verbose, it is significantly easier to get right and maintain. .. important:: If you use a higher-level tool, pay attention to how it exposes this functionality. Depending on the tool, creating a set **A B** may be equivalent to **A then B**, or **B then A**. Ordering Multiple Sets ______________________ The syntax can be expanded to allow sets of resources to be ordered relative to each other, where the members of each individual set may be ordered or unordered (controlled by the ``sequential`` property). In the example below, **A** and **B** can both start in parallel, as can **C** and **D**, however **C** and **D** can only start once *both* **A** *and* **B** are active. .. topic:: Ordered sets of unordered resources .. code-block:: xml .. topic:: Visual representation of the start order for two ordered sets of unordered resources .. image:: images/two-sets.png :alt: Two ordered sets Of course either set -- or both sets -- of resources can also be internally ordered (by setting ``sequential="true"``) and there is no limit to the number of sets that can be specified. .. topic:: Advanced use of set ordering - Three ordered sets, two of which are internally unordered .. code-block:: xml .. topic:: Visual representation of the start order for the three sets defined above .. image:: images/three-sets.png :alt: Three ordered sets .. important:: An ordered set with ``sequential=false`` makes sense only if there is another set in the constraint. Otherwise, the constraint has no effect. Resource Set OR Logic _____________________ The unordered set logic discussed so far has all been "AND" logic. To illustrate this take the 3 resource set figure in the previous section. Those sets can be expressed, **(A and B) then (C) then (D) then (E and F)**. Say for example we want to change the first set, **(A and B)**, to use "OR" logic so the sets look like this: **(A or B) then (C) then (D) then (E and F)**. This functionality can be achieved through the use of the ``require-all`` option. This option defaults to TRUE which is why the "AND" logic is used by default. Setting ``require-all=false`` means only one resource in the set needs to be started before continuing on to the next set. .. topic:: Resource Set "OR" logic: Three ordered sets, where the first set is internally unordered with "OR" logic .. code-block:: xml .. important:: An ordered set with ``require-all=false`` makes sense only in conjunction with ``sequential=false``. Think of it like this: ``sequential=false`` modifies the set to be an unordered set using "AND" logic by default, and adding ``require-all=false`` flips the unordered set's "AND" logic to "OR" logic. .. _s-resource-sets-colocation: Colocating Sets of Resources ############################ Another common situation is for an administrator to create a set of colocated resources. 
The simplest way to do this is to define a resource group (see :ref:`group-resources`), but that cannot always accurately express the desired relationships. For example, maybe the resources do not need to be ordered. Another way would be to define each relationship as an individual constraint, but that causes a difficult-to-follow constraint explosion as the number of resources and combinations grow. .. topic:: Colocation chain as individual constraints, where A is placed first, then B, then C, then D .. code-block:: xml To express complicated relationships with a simplified syntax [#]_, :ref:`resource sets ` can be used within colocation constraints. .. topic:: Equivalent colocation chain expressed using **resource_set** .. code-block:: xml .. note:: Within a ``resource_set``, the resources are listed in the order they are *placed*, which is the reverse of the order in which they are *colocated*. In the above example, resource **A** is placed before resource **B**, which is the same as saying resource **B** is colocated with resource **A**. As with individual constraints, a resource that can't be active prevents any resource that must be colocated with it from being active. In both of the two previous examples, if **B** is unable to run, then both **C** and by inference **D** must remain stopped. .. important:: If you use a higher-level tool, pay attention to how it exposes this functionality. Depending on the tool, creating a set **A B** may be equivalent to **A with B**, or **B with A**. Resource sets can also be used to tell the cluster that entire *sets* of resources must be colocated relative to each other, while the individual members within any one set may or may not be colocated relative to each other (determined by the set's ``sequential`` property). In the following example, resources **B**, **C**, and **D** will each be colocated with **A** (which will be placed first). **A** must be able to run in order for any of the resources to run, but any of **B**, **C**, or **D** may be stopped without affecting any of the others. .. topic:: Using colocated sets to specify a shared dependency .. code-block:: xml .. note:: Pay close attention to the order in which resources and sets are listed. While the members of any one sequential set are placed first to last (i.e., the colocation dependency is last with first), multiple sets are placed last to first (i.e. the colocation dependency is first with last). .. important:: A colocated set with ``sequential="false"`` makes sense only if there is another set in the constraint. Otherwise, the constraint has no effect. There is no inherent limit to the number and size of the sets used. The only thing that matters is that in order for any member of one set in the constraint to be active, all members of sets listed after it must also be active (and naturally on the same node); and if a set has ``sequential="true"``, then in order for one member of that set to be active, all members listed before it must also be active. If desired, you can restrict the dependency to instances of promotable clone resources that are in a specific role, using the set's ``role`` property. .. topic:: Colocation in which the members of the middle set have no interdependencies, and the last set listed applies only to promoted instances .. code-block:: xml .. topic:: Visual representation of the above example (resources are placed from left to right) .. image:: ../shared/images/pcmk-colocated-sets.png :alt: Colocation chain .. 
note:: Unlike ordered sets, colocated sets do not use the ``require-all`` option. External Resource Dependencies ############################## Sometimes, a resource will depend on services that are not managed by the cluster. An example might be a resource that requires a file system that is not managed by the cluster but mounted by systemd at boot time. To accommodate this, the pacemaker systemd service depends on a normally empty target called ``resource-agents-deps.target``. The system administrator may create a unit drop-in for that target specifying the dependencies, to ensure that the services are started before Pacemaker starts and stopped after Pacemaker stops. Typically, this is accomplished by placing a unit file in the ``/etc/systemd/system/resource-agents-deps.target.d`` directory, with directives such as ``Requires`` and ``After`` specifying the dependencies as needed. .. [#] While the human brain is sophisticated enough to read the constraint in any order and choose the correct one depending on the situation, the cluster is not quite so smart. Yet. .. [#] which is not the same as saying easy to follow diff --git a/doc/sphinx/Pacemaker_Explained/fencing.rst b/doc/sphinx/Pacemaker_Explained/fencing.rst index 7b16a6dc90..6ae836c258 100644 --- a/doc/sphinx/Pacemaker_Explained/fencing.rst +++ b/doc/sphinx/Pacemaker_Explained/fencing.rst @@ -1,1281 +1,1283 @@ .. index:: single: fencing single: STONITH .. _fencing: Fencing ------- What Is Fencing? ################ *Fencing* is the ability to make a node unable to run resources, even when that node is unresponsive to cluster commands. Fencing is also known as *STONITH*, an acronym for "Shoot The Other Node In The Head", since the most common fencing method is cutting power to the node. Another method is "fabric fencing", cutting the node's access to some capability required to run resources (such as network access or a shared disk). .. index:: single: fencing; why necessary Why Is Fencing Necessary? ######################### Fencing protects your data from being corrupted by malfunctioning nodes or unintentional concurrent access to shared resources. Fencing protects against the "split brain" failure scenario, where cluster nodes have lost the ability to reliably communicate with each other but are still able to run resources. If the cluster just assumed that uncommunicative nodes were down, then multiple instances of a resource could be started on different nodes. The effect of split brain depends on the resource type. For example, an IP address brought up on two hosts on a network will cause packets to randomly be sent to one or the other host, rendering the IP useless. For a database or clustered file system, the effect could be much more severe, causing data corruption or divergence. Fencing is also used when a resource cannot otherwise be stopped. If a resource fails to stop on a node, it cannot be started on a different node without risking the same type of conflict as split-brain. Fencing the original node ensures the resource can be safely started elsewhere. Users may also configure the ``on-fail`` property of :ref:`operation` or the ``loss-policy`` property of :ref:`ticket constraints ` to ``fence``, in which case the cluster will fence the resource's node if the operation fails or the ticket is lost. .. index:: single: fencing; device Fence Devices ############# A *fence device* or *fencing device* is a special type of resource that provides the means to fence a node. 
Examples of fencing devices include intelligent power switches and IPMI devices that accept SNMP commands to cut power to a node, and iSCSI controllers that allow SCSI reservations to be used to cut a node's access to a shared disk. Since fencing devices will be used to recover from loss of networking connectivity to other nodes, it is essential that they do not rely on the same network as the cluster itself, otherwise that network becomes a single point of failure. Since loss of a node due to power outage is indistinguishable from loss of network connectivity to that node, it is also essential that at least one fence device for a node does not share power with that node. For example, an on-board IPMI controller that shares power with its host should not be used as the sole fencing device for that host. Since fencing is used to isolate malfunctioning nodes, no fence device should rely on its target functioning properly. This includes, for example, devices that ssh into a node and issue a shutdown command (such devices might be suitable for testing, but never for production). .. index:: single: fencing; agent Fence Agents ############ A *fence agent* or *fencing agent* is a ``stonith``-class resource agent. The fence agent standard provides commands (such as ``off`` and ``reboot``) that the cluster can use to fence nodes. As with other resource agent classes, this allows a layer of abstraction so that Pacemaker doesn't need any knowledge about specific fencing technologies -- that knowledge is isolated in the agent. Pacemaker supports two fence agent standards, both inherited from no-longer-active projects: * Red Hat Cluster Suite (RHCS) style: These are typically installed in ``/usr/sbin`` with names starting with ``fence_``. * Linux-HA style: These typically have names starting with ``external/``. Pacemaker can support these agents using the **fence_legacy** RHCS-style agent as a wrapper, *if* support was enabled when Pacemaker was built, which requires the ``cluster-glue`` library. When a Fence Device Can Be Used ############################### Fencing devices do not actually "run" like most services. Typically, they just provide an interface for sending commands to an external device. Additionally, fencing may be initiated by Pacemaker, by other cluster-aware software such as DRBD or DLM, or manually by an administrator, at any point in the cluster life cycle, including before any resources have been started. To accommodate this, Pacemaker does not require the fence device resource to be "started" in order to be used. Whether a fence device is started or not determines whether a node runs any recurring monitor for the device, and gives the node a slight preference for being chosen to execute fencing using that device. By default, any node can execute any fencing device. If a fence device is disabled by setting its ``target-role`` to ``Stopped``, then no node can use that device. If a location constraint with a negative score prevents a specific node from "running" a fence device, then that node will never be chosen to execute fencing using the device. A node may fence itself, but the cluster will choose that only if no other nodes can do the fencing. A common configuration scenario is to have one fence device per target node. In such a case, users often configure anti-location constraints so that the target node does not monitor its own device. 
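As a rough sketch of that scenario (the agent type ``fence_ipmilan``, the node
name, and the device IDs are illustrative, and the device's own connection
parameters are omitted), the fence device targeting ``node1`` could be banned
from ``node1`` itself with a ``-INFINITY`` location constraint:

.. topic:: Sketch of a per-node fence device banned from its own target

   .. code-block:: xml

      <primitive id="fence-node1" class="stonith" type="fence_ipmilan">
        <instance_attributes id="fence-node1-params">
          <nvpair id="fence-node1-targets" name="pcmk_host_list" value="node1"/>
        </instance_attributes>
      </primitive>

      <rsc_location id="ban-fence-node1" rsc="fence-node1"
                    node="node1" score="-INFINITY"/>

With the location constraint in place, ``node1`` will not run the device's
recurring monitor and will never be chosen to execute fencing with it, as
described earlier.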
Limitations of Fencing Resources ################################ Fencing resources have certain limitations that other resource classes don't: * They may have only one set of meta-attributes and one set of instance attributes. * If :ref:`rules` are used to determine fencing resource options, these might be evaluated only when first read, meaning that later changes to the rules will have no effect. Therefore, it is better to avoid confusion and not use rules at all with fencing resources. These limitations could be revisited if there is sufficient user demand. .. index:: single: fencing; special instance attributes Special Meta-Attributes for Fencing Resources ############################################# The table below lists special resource meta-attributes that may be set for any fencing resource. -.. table:: **Additional Properties of Fencing Resources** +.. list-table:: **Additional Properties of Fencing Resources** :widths: 10 10 10 70 + :header-rows: 1 + * - Field + - Type + - Default + - Description + * - provides + - string + - + - .. index:: + single: provides - +----------------------+---------+--------------------+----------------------------------------+ - | Field | Type | Default | Description | - +======================+=========+====================+========================================+ - | provides | string | | .. index:: | - | | | | single: provides | - | | | | | - | | | | Any special capability provided by the | - | | | | fence device. Currently, only one such | - | | | | capability is meaningful: | - | | | | :ref:`unfencing `. | - +----------------------+---------+--------------------+----------------------------------------+ + Any special capability provided by the fence device. Currently, only one + such capability is meaningful: :ref:`unfencing `. .. _fencing-attributes: Special Instance Attributes for Fencing Resources ################################################# The table below lists special instance attributes that may be set for any fencing resource (*not* meta-attributes, even though they are interpreted by Pacemaker rather than the fence agent). These are also listed in the man page for ``pacemaker-fenced``. .. list-table:: **Additional Properties of Fencing Resources** :class: longtable :widths: 22 10 20 48 :header-rows: 1 * - Name - Type - Default - Description * - .. _primitive_stonith_timeout: .. index:: single: stonith-timeout (primitive instance attribute) stonith-timeout - :ref:`timeout ` - - This is not used by Pacemaker (see the ``pcmk_reboot_timeout``, ``pcmk_off_timeout``, etc., properties instead), but it may be used by Linux-HA fence agents. * - .. _pcmk_host_map: .. index:: single: pcmk_host_map pcmk_host_map - :ref:`text ` - - A mapping of node names to ports for devices that do not understand the node names. For example, ``node1:1;node2:2,3`` tells the cluster to use port 1 for ``node1`` and ports 2 and 3 for ``node2``. If ``pcmk_host_check`` is explicitly set to ``static-list``, either this or ``pcmk_host_list`` must be set. The port portion of the map may contain special characters such as spaces if preceded by a backslash *(since 2.1.2)*. * - .. _pcmk_host_list: .. index:: single: pcmk_host_list pcmk_host_list - :ref:`text ` - - Comma-separated list of nodes that can be targeted by this device (for example, ``node1,node2,node3``). If pcmk_host_check is ``static-list``, either this or ``pcmk_host_map`` must be set. * - .. _pcmk_host_check: .. 
index:: single: pcmk_host_check pcmk_host_check - :ref:`text ` - See :ref:`pcmk_host_check_default` - The method Pacemaker should use to determine which nodes can be targeted by this device. Allowed values: - * ``static-list:`` targets are listed in the ``pcmk_host_list`` or ``pcmk_host_map`` attribute + * ``static-list:`` targets are listed in the ``pcmk_host_list`` or + ``pcmk_host_map`` attribute * ``dynamic-list:`` query the device via the agent's ``list`` action * ``status:`` query the device via the agent's ``status`` action * ``none:`` assume the device can fence any node * - .. _pcmk_delay_max: .. index:: single: pcmk_delay_max pcmk_delay_max - :ref:`duration ` - 0s - Enable a delay of no more than the time specified before executing fencing actions. Pacemaker derives the overall delay by taking the value of pcmk_delay_base and adding a random delay value such that the sum is kept below this maximum. This is sometimes used in two-node clusters to ensure that the nodes don't fence each other at the same time. * - .. _pcmk_delay_base: .. index:: single: pcmk_delay_base pcmk_delay_base - :ref:`text ` - 0s - Enable a static delay before executing fencing actions. This can be used, for example, in two-node clusters to ensure that the nodes don't fence each other, by having separate fencing resources with different values. The node that is fenced with the shorter delay will lose a fencing race. The overall delay introduced by pacemaker is derived from this value plus a random delay such that the sum is kept below the maximum delay. A single device can have different delays per node using a host map *(since 2.1.2)*, for example ``node1:0s;node2:5s.`` * - .. _pcmk_action_limit: .. index:: single: pcmk_action_limit pcmk_action_limit - :ref:`integer ` - 1 - The maximum number of actions that can be performed in parallel on this device. A value of -1 means unlimited. Node fencing actions initiated by the cluster (as opposed to an administrator running the ``stonith_admin`` tool or the fencer running recurring device monitors and ``status`` and ``list`` commands) are additionally subject to the ``concurrent-fencing`` cluster property. * - .. _pcmk_host_argument: .. index:: single: pcmk_host_argument pcmk_host_argument - :ref:`text ` - ``port`` if the fence agent metadata advertises support for it, otherwise ``plug`` if supported, otherwise ``none`` - *Advanced use only.* Which parameter should be supplied to the fence agent to identify the node to be fenced. A value of ``none`` tells the cluster not to supply any additional parameters. * - .. _pcmk_reboot_action: .. index:: single: pcmk_reboot_action pcmk_reboot_action - :ref:`text ` - ``reboot`` - *Advanced use only.* The command to send to the resource agent in order to reboot a node. Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific command. * - .. _pcmk_reboot_timeout: .. index:: single: pcmk_reboot_timeout pcmk_reboot_timeout - :ref:`timeout ` - 60s - *Advanced use only.* Specify an alternate timeout (in seconds) to use for ``reboot`` actions instead of the value of ``stonith-timeout``. Some devices need much more or less time to complete than normal. Use this to specify an alternate, device-specific timeout. * - .. _pcmk_reboot_retries: .. index:: single: pcmk_reboot_retries pcmk_reboot_retries - :ref:`integer ` - 2 - *Advanced use only.* The maximum number of times to retry the ``reboot`` command within the timeout period. 
Some devices do not support multiple connections, and operations may fail if the device is busy with another task, so Pacemaker will automatically retry the operation, if there is time remaining. Use this option to alter the number of times Pacemaker retries before giving up. * - .. _pcmk_off_action: .. index:: single: pcmk_off_action pcmk_off_action - :ref:`text ` - ``off`` - *Advanced use only.* The command to send to the resource agent in order to shut down a node. Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific command. * - .. _pcmk_off_timeout: .. index:: single: pcmk_off_timeout pcmk_off_timeout - :ref:`timeout ` - 60s - *Advanced use only.* Specify an alternate timeout (in seconds) to use for ``off`` actions instead of the value of ``stonith-timeout``. Some devices need much more or less time to complete than normal. Use this to specify an alternate, device-specific timeout. * - .. _pcmk_off_retries: .. index:: single: pcmk_off_retries pcmk_off_retries - :ref:`integer ` - 2 - *Advanced use only.* The maximum number of times to retry the ``off`` command within the timeout period. Some devices do not support multiple connections, and operations may fail if the device is busy with another task, so Pacemaker will automatically retry the operation, if there is time remaining. Use this option to alter the number of times Pacemaker retries before giving up. * - .. _pcmk_list_action: .. index:: single: pcmk_list_action pcmk_list_action - :ref:`text ` - ``list`` - *Advanced use only.* The command to send to the resource agent in order to list nodes. Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific command. * - .. _pcmk_list_timeout: .. index:: single: pcmk_list_timeout pcmk_list_timeout - :ref:`timeout ` - 60s - *Advanced use only.* Specify an alternate timeout (in seconds) to use for ``list`` actions instead of the value of ``stonith-timeout``. Some devices need much more or less time to complete than normal. Use this to specify an alternate, device-specific timeout. * - .. _pcmk_list_retries: .. index:: single: pcmk_list_retries pcmk_list_retries - :ref:`integer ` - 2 - *Advanced use only.* The maximum number of times to retry the ``list`` command within the timeout period. Some devices do not support multiple connections, and operations may fail if the device is busy with another task, so Pacemaker will automatically retry the operation, if there is time remaining. Use this option to alter the number of times Pacemaker retries before giving up. * - .. _pcmk_monitor_action: .. index:: single: pcmk_monitor_action pcmk_monitor_action - :ref:`text ` - ``monitor`` - *Advanced use only.* The command to send to the resource agent in order to report extended status. Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific command. * - .. _pcmk_monitor_timeout: .. index:: single: pcmk_monitor_timeout pcmk_monitor_timeout - :ref:`timeout ` - 60s - *Advanced use only.* Specify an alternate timeout (in seconds) to use for ``monitor`` actions instead of the value of ``stonith-timeout``. Some devices need much more or less time to complete than normal. Use this to specify an alternate, device-specific timeout. * - .. _pcmk_monitor_retries: .. 
index:: single: pcmk_monitor_retries pcmk_monitor_retries - :ref:`integer ` - 2 - *Advanced use only.* The maximum number of times to retry the ``monitor`` command within the timeout period. Some devices do not support multiple connections, and operations may fail if the device is busy with another task, so Pacemaker will automatically retry the operation, if there is time remaining. Use this option to alter the number of times Pacemaker retries before giving up. * - .. _pcmk_status_action: .. index:: single: pcmk_status_action pcmk_status_action - :ref:`text ` - ``status`` - *Advanced use only.* The command to send to the resource agent in order to report status. Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific command. * - .. _pcmk_status_timeout: .. index:: single: pcmk_status_timeout pcmk_status_timeout - :ref:`timeout ` - 60s - *Advanced use only.* Specify an alternate timeout (in seconds) to use for ``status`` actions instead of the value of ``stonith-timeout``. Some devices need much more or less time to complete than normal. Use this to specify an alternate, device-specific timeout. * - .. _pcmk_status_retries: .. index:: single: pcmk_status_retries pcmk_status_retries - :ref:`integer ` - 2 - *Advanced use only.* The maximum number of times to retry the ``status`` command within the timeout period. Some devices do not support multiple connections, and operations may fail if the device is busy with another task, so Pacemaker will automatically retry the operation, if there is time remaining. Use this option to alter the number of times Pacemaker retries before giving up. .. _pcmk_host_check_default: Default Check Type ################## If the user does not explicitly configure ``pcmk_host_check`` for a fence device, a default value appropriate to other configured parameters will be used: * If either ``pcmk_host_list`` or ``pcmk_host_map`` is configured, ``static-list`` will be used; * otherwise, if the fence device supports the ``list`` action, and the first attempt at using ``list`` succeeds, ``dynamic-list`` will be used; * otherwise, if the fence device supports the ``status`` action, ``status`` will be used; * otherwise, ``none`` will be used. .. index:: single: unfencing single: fencing; unfencing .. _unfencing: Unfencing ######### With fabric fencing (such as cutting network or shared disk access rather than power), it is expected that the cluster will fence the node, and then a system administrator must manually investigate what went wrong, correct any issues found, then reboot (or restart the cluster services on) the node. Once the node reboots and rejoins the cluster, some fabric fencing devices require an explicit command to restore the node's access. This capability is called *unfencing* and is typically implemented as the fence agent's ``on`` command. If any cluster resource has ``requires`` set to ``unfencing``, then that resource will not be probed or started on a node until that node has been unfenced. Fencing and Quorum ################## In general, a cluster partition may execute fencing only if the partition has quorum, and the ``stonith-enabled`` cluster property is set to true. However, there are exceptions: * The requirements apply only to fencing initiated by Pacemaker. If an administrator initiates fencing using the ``stonith_admin`` command, or an external application such as DLM initiates fencing using Pacemaker's C API, the requirements do not apply. 
* A cluster partition without quorum is allowed to fence any active member of that partition. As a corollary, this allows a ``no-quorum-policy`` of ``suicide`` to work. * If the ``no-quorum-policy`` cluster property is set to ``ignore``, then quorum is not required to execute fencing of any node. Fencing Timeouts ################ Fencing timeouts are complicated, since a single fencing operation can involve many steps, each of which may have a separate timeout. Fencing may be initiated in one of several ways: * An administrator may initiate fencing using the ``stonith_admin`` tool, which has a ``--timeout`` option (defaulting to 2 minutes) that will be used as the fence operation timeout. * An external application such as DLM may initiate fencing using the Pacemaker C API. The application will specify the fence operation timeout in this case, which might or might not be configurable by the user. * The cluster may initiate fencing itself. In this case, the ``stonith-timeout`` cluster property (defaulting to 1 minute) will be used as the fence operation timeout. However fencing is initiated, the initiator contacts Pacemaker's fencer (``pacemaker-fenced``) to request fencing. This connection and request has its own timeout, separate from the fencing operation timeout, but usually happens very quickly. The fencer will contact all fencers in the cluster to ask what devices they have available to fence the target node. The fence operation timeout will be used as the timeout for each of these queries. Once a fencing device has been selected, the fencer will check whether any action-specific timeout has been configured for the device, to use instead of the fence operation timeout. For example, if ``stonith-timeout`` is 60 seconds, but the fencing device has ``pcmk_reboot_timeout`` configured as 90 seconds, then a timeout of 90 seconds will be used for reboot actions using that device. A device may have retries configured, in which case the timeout applies across all attempts. For example, if a device has ``pcmk_reboot_retries`` configured as 2, and the first reboot attempt fails, the second attempt will only have whatever time is remaining in the action timeout after subtracting how much time the first attempt used. This means that if the first attempt fails due to using the entire timeout, no further attempts will be made. There is currently no way to configure a per-attempt timeout. If more than one device is required to fence a target, whether due to failure of the first device or a fencing topology with multiple devices configured for the target, each device will have its own separate action timeout. For all of the above timeouts, the fencer will generally multiply the configured value by 1.2 to get an actual value to use, to account for time needed by the fencer's own processing. Separate from the fencer's timeouts, some fence agents have internal timeouts for individual steps of their fencing process. These agents often have parameters to configure these timeouts, such as ``login-timeout``, ``shell-timeout``, or ``power-timeout``. Many such agents also have a ``disable-timeout`` parameter to ignore their internal timeouts and just let Pacemaker handle the timeout. This causes a difference in retry behavior. If ``disable-timeout`` is not set, and the agent hits one of its internal timeouts, it will report that as a failure to Pacemaker, which can then retry. 
If ``disable-timeout`` is set, and Pacemaker hits a timeout for the agent, then there will be no time remaining, and no retry will be done. Fence Devices Dependent on Other Resources ########################################## In some cases, a fence device may require some other cluster resource (such as an IP address) to be active in order to function properly. This is obviously undesirable in general: fencing may be required when the depended-on resource is not active, or fencing may be required because the node running the depended-on resource is no longer responding. However, this may be acceptable under certain conditions: * The dependent fence device should not be able to target any node that is allowed to run the depended-on resource. * The depended-on resource should not be disabled during production operation. * The ``concurrent-fencing`` cluster property should be set to ``true``. Otherwise, if both the node running the depended-on resource and some node targeted by the dependent fence device need to be fenced, the fencing of the node running the depended-on resource might be ordered first, making the second fencing impossible and blocking further recovery. With concurrent fencing, the dependent fence device might fail at first due to the depended-on resource being unavailable, but it will be retried and eventually succeed once the resource is brought back up. Even under those conditions, there is one unlikely problem scenario. The DC always schedules fencing of itself after any other fencing needed, to avoid unnecessary repeated DC elections. If the dependent fence device targets the DC, and both the DC and a different node running the depended-on resource need to be fenced, the DC fencing will always fail and block further recovery. Note, however, that losing a DC node entirely causes some other node to become DC and schedule the fencing, so this is only a risk when a stop or other operation with ``on-fail`` set to ``fencing`` fails on the DC. .. index:: single: fencing; configuration Configuring Fencing ################### Higher-level tools can provide simpler interfaces to this process, but using Pacemaker command-line tools, this is how you could configure a fence device. #. Find the correct driver: .. code-block:: none # stonith_admin --list-installed .. note:: You may have to install packages to make fence agents available on your host. Searching your available packages for ``fence-`` is usually helpful. Ensure the packages providing the fence agents you require are installed on every cluster node. #. Find the required parameters associated with the device (replacing ``$AGENT_NAME`` with the name obtained from the previous step): .. code-block:: none # stonith_admin --metadata --agent $AGENT_NAME #. Create a file called ``stonith.xml`` containing a primitive resource with a class of ``stonith``, a type equal to the agent name obtained earlier, and a parameter for each of the values returned in the previous step. #. If the device does not know how to fence nodes based on their uname, you may also need to set the special ``pcmk_host_map`` parameter. See :ref:`fencing-attributes` for details. #. If the device does not support the ``list`` command, you may also need to set the special ``pcmk_host_list`` and/or ``pcmk_host_check`` parameters. See :ref:`fencing-attributes` for details. #. If the device does not expect the target to be specified with the ``port`` parameter, you may also need to set the special ``pcmk_host_argument`` parameter. 
See :ref:`fencing-attributes` for details. #. Upload it into the CIB using cibadmin: .. code-block:: none # cibadmin --create --scope resources --xml-file stonith.xml #. Set ``stonith-enabled`` to true: .. code-block:: none # crm_attribute --type crm_config --name stonith-enabled --update true #. Once the stonith resource is running, you can test it by executing the following, replacing ``$NODE_NAME`` with the name of the node to fence (although you might want to stop the cluster on that machine first): .. code-block:: none # stonith_admin --reboot $NODE_NAME Example Fencing Configuration _____________________________ For this example, we assume we have a cluster node, ``pcmk-1``, whose IPMI controller is reachable at the IP address 192.0.2.1. The IPMI controller uses the username ``testuser`` and the password ``abc123``. #. Looking at what's installed, we may see a variety of available agents: .. code-block:: none # stonith_admin --list-installed .. code-block:: none (... some output omitted ...) fence_idrac fence_ilo3 fence_ilo4 fence_ilo5 fence_imm fence_ipmilan (... some output omitted ...) Perhaps after reading some man pages and doing some Internet searches, we might decide ``fence_ipmilan`` is our best choice. #. Next, we would check what parameters ``fence_ipmilan`` provides: .. code-block:: none # stonith_admin --metadata -a fence_ipmilan .. code-block:: xml fence_ipmilan is an I/O Fencing agentwhich can be used with machines controlled by IPMI.This agent calls support software ipmitool (http://ipmitool.sf.net/). WARNING! This fence agent might report success before the node is powered off. You should use -m/method onoff if your fence device works correctly with that option. Fencing action IPMI Lan Auth type. Ciphersuite to use (same as ipmitool -C parameter) Hexadecimal-encoded Kg key for IPMIv2 authentication IP address or hostname of fencing device IP address or hostname of fencing device TCP/UDP port to use for connection with device Use Lanplus to improve security of connection Login name Method to fence Login password or passphrase Script to run to retrieve password Login password or passphrase Script to run to retrieve password IP address or hostname of fencing device (together with --port-as-ip) IP address or hostname of fencing device (together with --port-as-ip) Privilege level on IPMI device Bridge IPMI requests to the remote target address Login name Disable logging to stderr. Does not affect --verbose or --debug-file or logging to syslog. Verbose mode Write debug information to given file Write debug information to given file Display version information and exit Display help and exit Wait X seconds before fencing is started Path to ipmitool binary Wait X seconds for cmd prompt after login Make "port/plug" to be an alias to IP address Test X seconds for status change after ON/OFF Wait X seconds after issuing ON/OFF Wait X seconds for cmd prompt after issuing command Count of attempts to retry power on Use sudo (without password) when calling 3rd party software Use sudo (without password) when calling 3rd party software Path to sudo binary Once we've decided what parameter values we think we need, it is a good idea to run the fence agent's status action manually, to verify that our values work correctly: .. code-block:: none # fence_ipmilan --lanplus -a 192.0.2.1 -l testuser -p abc123 -o status Chassis Power is on #.
Based on that, we might create a fencing resource configuration like this in ``stonith.xml`` (or any file name, just use the same name with ``cibadmin`` later): .. code-block:: xml .. note:: Even though the man page shows that the ``action`` parameter is supported, we do not provide that in the resource configuration. Pacemaker will supply an appropriate action whenever the fence device must be used. #. In this case, we don't need to configure ``pcmk_host_map`` because ``fence_ipmilan`` ignores the target node name and instead uses its ``ip`` parameter to know how to contact the IPMI controller. #. We do need to let Pacemaker know which cluster node can be fenced by this device, since ``fence_ipmilan`` doesn't support the ``list`` action. Add a line like this to the agent's instance attributes: .. code-block:: xml #. We don't need to configure ``pcmk_host_argument`` since ``ip`` is all the fence agent needs (it ignores the target name). #. Make the configuration active: .. code-block:: none # cibadmin --create --scope resources --xml-file stonith.xml #. Set ``stonith-enabled`` to true (this only has to be done once): .. code-block:: none # crm_attribute --type crm_config --name stonith-enabled --update true #. Since our cluster is still in testing, we can reboot ``pcmk-1`` without bothering anyone, so we'll test our fencing configuration by running this from one of the other cluster nodes: .. code-block:: none # stonith_admin --reboot pcmk-1 Then we will verify that the node did, in fact, reboot. We can repeat that process to create a separate fencing resource for each node. With some other fence device types, a single fencing resource is able to be used for all nodes. In fact, we could do that with ``fence_ipmilan``, using the ``port-as-ip`` parameter along with ``pcmk_host_map``. Either approach is fine. .. index:: single: fencing; topology single: fencing-topology single: fencing-level Fencing Topologies ################## Pacemaker supports fencing nodes with multiple devices through a feature called *fencing topologies*. Fencing topologies may be used to provide alternative devices in case one fails, or to require multiple devices to all be executed successfully in order to consider the node successfully fenced, or even a combination of the two. Create the individual devices as you normally would, then define one or more ``fencing-level`` entries in the ``fencing-topology`` section of the configuration. * Each fencing level is attempted in order of ascending ``index``. Allowed values are 1 through 9. * If a device fails, processing terminates for the current level. No further devices in that level are exercised, and the next level is attempted instead. * If the operation succeeds for all the listed devices in a level, the level is deemed to have passed. * The operation is finished when a level has passed (success), or all levels have been attempted (failed). * If the operation failed, the next step is determined by the scheduler and/or the controller. Some possible uses of topologies include: * Try on-board IPMI, then an intelligent power switch if that fails * Try fabric fencing of both disk and network, then fall back to power fencing if either fails * Wait up to a certain time for a kernel dump to complete, then cut power to the node -.. table:: **Attributes of a fencing-level Element** +.. 
list-table:: **Attributes of a fencing-level Element** :class: longtable :widths: 25 75 + :header-rows: 1 - +------------------+-----------------------------------------------------------------------------------------+ - | Attribute | Description | - +==================+=========================================================================================+ - | id | .. index:: | - | | pair: fencing-level; id | - | | | - | | A unique name for this element (required) | - +------------------+-----------------------------------------------------------------------------------------+ - | target | .. index:: | - | | pair: fencing-level; target | - | | | - | | The name of a single node to which this level applies | - +------------------+-----------------------------------------------------------------------------------------+ - | target-pattern | .. index:: | - | | pair: fencing-level; target-pattern | - | | | - | | An extended regular expression (as defined in `POSIX | - | | `_) | - | | matching the names of nodes to which this level applies | - +------------------+-----------------------------------------------------------------------------------------+ - | target-attribute | .. index:: | - | | pair: fencing-level; target-attribute | - | | | - | | The name of a node attribute that is set (to ``target-value``) for nodes to which this | - | | level applies | - +------------------+-----------------------------------------------------------------------------------------+ - | target-value | .. index:: | - | | pair: fencing-level; target-value | - | | | - | | The node attribute value (of ``target-attribute``) that is set for nodes to which this | - | | level applies | - +------------------+-----------------------------------------------------------------------------------------+ - | index | .. index:: | - | | pair: fencing-level; index | - | | | - | | The order in which to attempt the levels. Levels are attempted in ascending order | - | | *until one succeeds*. Valid values are 1 through 9. | - +------------------+-----------------------------------------------------------------------------------------+ - | devices | .. index:: | - | | pair: fencing-level; devices | - | | | - | | A comma-separated list of devices that must all be tried for this level | - +------------------+-----------------------------------------------------------------------------------------+ + * - Attribute + - Description + * - id + - .. index:: + pair: fencing-level; id + + A unique name for this element (required) + * - target + - .. index:: + pair: fencing-level; target + + The name of a single node to which this level applies + * - target-pattern + - .. index:: + pair: fencing-level; target-pattern + + An extended regular expression (as defined in `POSIX + `_) + matching the names of nodes to which this level applies + * - target-attribute + - .. index:: + pair: fencing-level; target-attribute + + The name of a node attribute that is set (to ``target-value``) for nodes to which this + level applies + * - target-value + - .. index:: + pair: fencing-level; target-value + + The node attribute value (of ``target-attribute``) that is set for nodes to which this + level applies + * - index + - .. index:: + pair: fencing-level; index + + The order in which to attempt the levels. Levels are attempted in ascending order + *until one succeeds*. Valid values are 1 through 9. + * - devices + - .. index:: + pair: fencing-level; devices + + A comma-separated list of devices that must all be tried for this level .. 
note:: **Fencing topology with different devices for different nodes** .. code-block:: xml ... ... Example Dual-Layer, Dual-Device Fencing Topologies __________________________________________________ The following example illustrates an advanced use of ``fencing-topology`` in a cluster with the following properties: * 2 nodes (prod-mysql1 and prod-mysql2) * the nodes have IPMI controllers reachable at 192.0.2.1 and 192.0.2.2 * the nodes each have two independent Power Supply Units (PSUs) connected to two independent Power Distribution Units (PDUs) reachable at 198.51.100.1 (port 10 and port 11) and 203.0.113.1 (port 10 and port 11) * fencing via the IPMI controller uses the ``fence_ipmilan`` agent (1 fence device per controller, with each device targeting a separate node) * fencing via the PDUs uses the ``fence_apc_snmp`` agent (1 fence device per PDU, with both devices targeting both nodes) * a random delay is used to lessen the chance of a "death match" * fencing topology is set to try IPMI fencing first, then dual PDU fencing if that fails In a node failure scenario, Pacemaker will first select ``fence_ipmilan`` to try to kill the faulty node. Using the fencing topology, if that method fails, it will then move on to selecting ``fence_apc_snmp`` twice (once for the first PDU, then again for the second PDU). The fence action is considered successful only if both PDUs report the required status. If any of them fails, fencing loops back to the first fencing method, ``fence_ipmilan``, and so on, until the node is fenced or the fencing action is cancelled. .. note:: **First fencing method: single IPMI device per target** Each cluster node has its own dedicated IPMI controller that can be contacted for fencing using the following primitives: .. code-block:: xml .. note:: **Second fencing method: dual PDU devices** Each cluster node also has 2 distinct power supplies controlled by 2 distinct PDUs: * Node 1: PDU 1 port 10 and PDU 2 port 10 * Node 2: PDU 1 port 11 and PDU 2 port 11 The matching fencing agents are configured as follows: .. code-block:: xml .. note:: **Fencing topology** Now that all the fencing resources are defined, it's time to create the right topology. We want to first fence using IPMI and if that does not work, fence both PDUs to effectively and surely kill the node. .. code-block:: xml In ``fencing-topology``, the lowest ``index`` value for a target determines its first fencing method. Remapping Reboots ################# When the cluster needs to reboot a node, whether because ``stonith-action`` is ``reboot`` or because a reboot was requested externally (such as by ``stonith_admin --reboot``), it will remap that to other commands in two cases: * If the chosen fencing device does not support the ``reboot`` command, the cluster will ask it to perform ``off`` instead. * If a fencing topology level with multiple devices must be executed, the cluster will ask all the devices to perform ``off``, then ask the devices to perform ``on``. To understand the second case, consider the example of a node with redundant power supplies connected to intelligent power switches. Rebooting one switch and then the other would have no effect on the node. Turning both switches off, and then on, actually reboots the node. In such a case, the fencing operation will be treated as successful as long as the ``off`` commands succeed, because then it is safe for the cluster to recover any resources that were on the node. Timeouts and errors in the ``on`` phase will be logged but ignored.
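As a rough sketch of the second case (the device and node names are assumed for illustration), a topology level that triggers this remapping simply lists both power-switch devices for the target node:

.. code-block:: xml

   <fencing-topology>
     <!-- Both hypothetical PDU devices must succeed; a requested reboot of
          pcmk-1 is remapped to "off" on both devices, followed by "on" -->
     <fencing-level id="ft-pcmk-1-pdus" target="pcmk-1" index="1" devices="fence-pdu1,fence-pdu2"/>
   </fencing-topology>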
When a reboot operation is remapped, any action-specific timeout for the remapped action will be used (for example, ``pcmk_off_timeout`` will be used when executing the ``off`` command, not ``pcmk_reboot_timeout``). diff --git a/doc/sphinx/Pacemaker_Explained/nodes.rst b/doc/sphinx/Pacemaker_Explained/nodes.rst index fe0cfbb4ca..576ce7d391 100644 --- a/doc/sphinx/Pacemaker_Explained/nodes.rst +++ b/doc/sphinx/Pacemaker_Explained/nodes.rst @@ -1,613 +1,587 @@ .. index:: single: node Nodes ----- Pacemaker supports two basic types of nodes: *cluster nodes* and *Pacemaker Remote nodes*. .. index:: single: node; cluster node Cluster nodes _____________ Cluster nodes run Corosync and all Pacemaker components. They may run cluster resources, run all Pacemaker command-line tools, execute fencing actions, count toward cluster quorum, and serve as the cluster's Designated Controller (DC). Every cluster must have at least one cluster node. Scalability is limited by the cluster layer to around 32 cluster nodes. Host Clock Considerations ######################### In general, Pacemaker does not rely on time or time zones being synchronized across nodes. However, if the configuration uses date/time-based :ref:`rules `, synchronization is a good idea, otherwise the rules will evaluate differently depending on which node is the Designated Controller (DC). Also, synchronization is greatly helpful when comparing logs across multiple nodes for problem investigation. If a node's clock jumps forward, you may see relatively minor issues such as various timeouts suddenly being considered expired. If a node's clock jumps backward, more serious problems may occur, so this should be avoided. If the host clock is adjusted at boot, and Pacemaker is enabled at boot, Pacemaker's start should be ordered after the clock adjustment. When run under systemd, Pacemaker will automatically order itself after ``time-sync.target``. However, depending on the local setup, you may need to enable an additional service (for example, ``chronyd-wait.service``) for that to be effective, or write your own workaround (for example, see the discussion on `systemd issue#5097 `_. .. _pacemaker_remote: .. index:: pair: node; Pacemaker Remote Pacemaker Remote nodes ______________________ Pacemaker Remote nodes do not run Corosync or the usual Pacemaker components. Instead, they run only the *remote executor* (``pacemaker-remoted``), which waits for Pacemaker on a cluster node to give it instructions. They may run cluster resources and most command-line tools, but cannot perform other functions of full cluster nodes such as fencing execution, quorum voting, or DC eligibility. There is no hard limit on the number of Pacemaker Remote nodes. .. NOTE:: *Remote* in this document has nothing to do with physical proximity and instead refers to the node not being a member of the underlying Corosync cluster. Pacemaker Remote nodes are subject to the same latency requirements as cluster nodes, which means they are typically in the same data center. There are three types of Pacemaker Remote nodes: * A *remote node* boots outside Pacemaker control, and is typically a physical host. The connection to the remote node is managed as a :ref:`special type of resource ` configured by the user. * A *guest node* is a virtual machine or container configured to run Pacemaker's remote executor when launched, and is launched and managed by the cluster as a standard resource configured by the user with :ref:`special options `. 
* A *bundle node* is a guest node created for a container that is launched and managed by the cluster as part of a :ref:`bundle ` resource configured by the user. .. NOTE:: It is important to distinguish the various roles a virtual machine can serve in Pacemaker clusters: * A virtual machine can run the full cluster stack, in which case it is a cluster node and is not itself managed by the cluster. * A virtual machine can be managed by the cluster as a simple resource, without the cluster having any awareness of the services running within it. The virtual machine is *opaque* to the cluster. * A virtual machine can be a guest node, allowing the cluster to manage both the virtual machine and resources running within it. The virtual machine is *transparent* to the cluster. Defining a Node _______________ Each cluster node will have an entry in the ``nodes`` section containing at least an ID and a name. A cluster node's ID is defined by the cluster layer (Corosync). .. topic:: **Example Corosync cluster node entry** .. code-block:: xml Pacemaker Remote nodes are defined by a resource in the ``resources`` section. Remote nodes and guest nodes may optionally have an entry in the ``nodes`` section, primarily for permanent :ref:`node attributes `. Normally, the user should let the cluster populate the ``nodes`` section automatically. .. index:: single: node; name .. _node_name: Where Pacemaker Gets the Node Name ################################## The name that Pacemaker uses for a node in the configuration does not have to be the same as its local hostname. Pacemaker uses the following for a cluster node's name, in order of most preferred first: * The value of ``name`` in the ``nodelist`` section of ``corosync.conf`` (``nodeid`` must also be explicitly set there in order for Pacemaker to associate the name with the node) * The value of ``ring0_addr`` in the ``nodelist`` section of ``corosync.conf`` * The local hostname (value of ``uname -n``) A Pacemaker Remote node's name is defined in its resource configuration. If the cluster is running, the ``crm_node -n`` command will display the local node's name as used by the cluster. If a Corosync ``nodelist`` is used, ``crm_node --name-for-id`` with a Corosync node ID will display the name used by the node with the given Corosync ``nodeid``, for example: .. code-block:: none crm_node --name-for-id 2 .. index:: single: node; quorum-only single: quorum-only node Quorum-only Nodes _________________ One popular cluster design uses an even number of cluster nodes (often 2), with an additional lightweight host that contributes to providing quorum but cannot run resources. With Pacemaker, this can be achieved in either of two ways: * When Corosync is used as the underlying cluster layer, the lightweight host can run `qdevice `_ instead of Corosync and Pacemaker. * The lightweight host can be configured as a Pacemaker cluster node, and a :ref:`location constraint ` can be configured for the node with ``score`` set to ``-INFINITY``, ``rsc-pattern`` set to ``.*``, and ``resource-discovery`` set to ``never``. .. index:: single: node; attribute single: node attribute .. _node_attributes: Node Attributes _______________ Pacemaker allows node-specific values to be specified using *node attributes*. A node attribute has a name, and may have a distinct value for each node. Node attributes come in two types, *permanent* and *transient*. Permanent node attributes are kept within the ``node`` entry, and keep their values even if the cluster restarts on a node.
Transient node attributes are kept in the CIB's ``status`` section, and go away when the cluster stops on the node. While certain node attributes have specific meanings to the cluster, they are mainly intended to allow administrators and resource agents to track any information desired. For example, an administrator might choose to define node attributes for how much RAM and disk space each node has, which OS each uses, or which server room rack each node is in. Users can configure :ref:`rules` that use node attributes to affect where resources are placed. Setting and querying node attributes #################################### Node attributes can be set and queried using the ``crm_attribute`` and ``attrd_updater`` commands, so that the user does not have to deal with XML configuration directly. Here is an example command to set a permanent node attribute, and the XML configuration that would be generated: .. topic:: **Result of using crm_attribute to specify which kernel pcmk-1 is running** .. code-block:: none # crm_attribute --type nodes --node pcmk-1 --name kernel --update $(uname -r) .. code-block:: xml To read back the value that was just set: .. code-block:: none # crm_attribute --type nodes --node pcmk-1 --name kernel --query scope=nodes name=kernel value=3.10.0-862.14.4.el7.x86_64 The ``--type nodes`` indicates that this is a permanent node attribute; ``--type status`` would indicate a transient node attribute. .. warning:: Attribute values with newline or tab characters are currently displayed with newlines as ``"\n"`` and tabs as ``"\t"``, when ``crm_attribute`` or ``attrd_updater`` query commands use ``--output-as=text`` or leave ``--output-as`` unspecified: .. code-block:: none # crm_attribute -N node1 -n test_attr -v "$(echo -e "a\nb\tc")" -t status # crm_attribute -N node1 -n test_attr --query -t status scope=status name=test_attr value=a\nb\tc This format is deprecated. In a future release, the values will be displayed with literal whitespace characters: .. code-block:: none # crm_attribute -N node1 -n test_attr --query -t status scope=status name=test_attr value=a b c Users should either avoid attribute values with newlines and tabs, or ensure that they can handle both formats. However, it's best to use ``--output-as=xml`` when parsing attribute values from output. Newlines, tabs, and special characters are replaced with XML character references that a conforming XML processor can recognize and convert to literals *(since 2.1.8)*: .. code-block:: none # crm_attribute -N node1 -n test_attr --query -t status --output-as=xml Special node attributes ####################### Certain node attributes have special meaning to the cluster. Node attribute names beginning with ``#`` are considered reserved for these special attributes. Some special attributes do not start with ``#``, for historical reasons. Certain special attributes are set automatically by the cluster, should never be modified directly, and can be used only within :ref:`rules`; these are listed under :ref:`built-in node attributes `. For true/false values, the cluster considers a value of "1", "y", "yes", "on", or "true" (case-insensitively) to be true, "0", "n", "no", "off", "false", or unset to be false, and anything else to be an error. -.. table:: **Node Attributes With Special Significance** +.. 
list-table:: **Node Attributes With Special Significance** :class: longtable :widths: 30 70 - - +----------------------------+-----------------------------------------------------+ - | Name | Description | - +============================+=====================================================+ - | fail-count-* | .. index:: | - | | pair: node attribute; fail-count | - | | | - | | Attributes whose names start with | - | | ``fail-count-`` are managed by the cluster | - | | to track how many times particular resource | - | | operations have failed on this node. These | - | | should be queried and cleared via the | - | | ``crm_failcount`` or | - | | ``crm_resource --cleanup`` commands rather | - | | than directly. | - +----------------------------+-----------------------------------------------------+ - | last-failure-* | .. index:: | - | | pair: node attribute; last-failure | - | | | - | | Attributes whose names start with | - | | ``last-failure-`` are managed by the cluster | - | | to track when particular resource operations | - | | have most recently failed on this node. | - | | These should be cleared via the | - | | ``crm_failcount`` or | - | | ``crm_resource --cleanup`` commands rather | - | | than directly. | - +----------------------------+-----------------------------------------------------+ - | maintenance | .. _node_maintenance: | - | | | - | | .. index:: | - | | pair: node attribute; maintenance | - | | | - | | If true, the cluster will not start or stop any | - | | resources on this node. Any resources active on the | - | | node become unmanaged, and any recurring operations | - | | for those resources (except those specifying | - | | ``role`` as ``Stopped``) will be paused. The | - | | :ref:`maintenance-mode ` cluster | - | | option, if true, overrides this. If this attribute | - | | is true, it overrides the | - | | :ref:`is-managed ` and | - | | :ref:`maintenance ` | - | | meta-attributes of affected resources and | - | | :ref:`enabled ` meta-attribute for | - | | affected recurring actions. Pacemaker should not be | - | | restarted on a node that is in single-node | - | | maintenance mode. | - +----------------------------+-----------------------------------------------------+ - | probe_complete | .. index:: | - | | pair: node attribute; probe_complete | - | | | - | | This is managed by the cluster to detect | - | | when nodes need to be reprobed, and should | - | | never be used directly. | - +----------------------------+-----------------------------------------------------+ - | resource-discovery-enabled | .. index:: | - | | pair: node attribute; resource-discovery-enabled | - | | | - | | If the node is a remote node, fencing is enabled, | - | | and this attribute is explicitly set to false | - | | (unset means true in this case), resource discovery | - | | (probes) will not be done on this node. This is | - | | highly discouraged; the ``resource-discovery`` | - | | location constraint property is preferred for this | - | | purpose. | - +----------------------------+-----------------------------------------------------+ - | shutdown | .. index:: | - | | pair: node attribute; shutdown | - | | | - | | This is managed by the cluster to orchestrate the | - | | shutdown of a node, and should never be used | - | | directly. | - +----------------------------+-----------------------------------------------------+ - | site-name | .. 
index:: | - | | pair: node attribute; site-name | - | | | - | | If set, this will be used as the value of the | - | | ``#site-name`` node attribute used in rules. (If | - | | not set, the value of the ``cluster-name`` cluster | - | | option will be used as ``#site-name`` instead.) | - +----------------------------+-----------------------------------------------------+ - | standby | .. index:: | - | | pair: node attribute; standby | - | | | - | | If true, the node is in standby mode. This is | - | | typically set and queried via the ``crm_standby`` | - | | command rather than directly. | - +----------------------------+-----------------------------------------------------+ - | terminate | .. index:: | - | | pair: node attribute; terminate | - | | | - | | If the value is true or begins with any nonzero | - | | number, the node will be fenced. This is typically | - | | set by tools rather than directly. | - +----------------------------+-----------------------------------------------------+ - | #digests-* | .. index:: | - | | pair: node attribute; #digests | - | | | - | | Attributes whose names start with ``#digests-`` are | - | | managed by the cluster to detect when | - | | :ref:`unfencing` needs to be redone, and should | - | | never be used directly. | - +----------------------------+-----------------------------------------------------+ - | #node-unfenced | .. index:: | - | | pair: node attribute; #node-unfenced | - | | | - | | When the node was last unfenced (as seconds since | - | | the epoch). This is managed by the cluster and | - | | should never be used directly. | - +----------------------------+-----------------------------------------------------+ + :header-rows: 1 + + * - Name + - Description + * - fail-count-* + - .. index:: + pair: node attribute; fail-count + + Attributes whose names start with ``fail-count-`` are managed by the + cluster to track how many times particular resource operations have + failed on this node. These should be queried and cleared via the + ``crm_failcount`` or ``crm_resource --cleanup`` commands rather than + directly. + * - last-failure-* + - .. index:: + pair: node attribute; last-failure + + Attributes whose names start with ``last-failure-`` are managed by the + cluster to track when particular resource operations have most recently + failed on this node. These should be cleared via the ``crm_failcount`` + or ``crm_resource --cleanup`` commands rather than directly. + * - maintenance + - .. _node_maintenance: + + .. index:: + pair: node attribute; maintenance + + If true, the cluster will not start or stop any resources on this node. + Any resources active on the node become unmanaged, and any recurring + operations for those resources (except those specifying ``role`` as + ``Stopped``) will be paused. The :ref:`maintenance-mode ` + cluster option, if true, overrides this. If this attribute is true, it + overrides the :ref:`is-managed ` and + :ref:`maintenance ` meta-attributes of affected resources + and :ref:`enabled ` meta-attribute for affected recurring + actions. Pacemaker should not be restarted on a node that is in + single-node maintenance mode. + * - probe_complete + - .. index:: + pair: node attribute; probe_complete + + This is managed by the cluster to detect when nodes need to be reprobed, + and should never be used directly. + * - resource-discovery-enabled + - .. 
index:: + pair: node attribute; resource-discovery-enabled + + If the node is a remote node, fencing is enabled, and this attribute is + explicitly set to false (unset means true in this case), resource + discovery (probes) will not be done on this node. This is highly + discouraged; the ``resource-discovery`` location constraint property is + preferred for this purpose. + * - shutdown + - .. index:: + pair: node attribute; shutdown + + This is managed by the cluster to orchestrate the shutdown of a node, and + should never be used directly. + * - site-name + - .. index:: + pair: node attribute; site-name + + If set, this will be used as the value of the ``#site-name`` node + attribute used in rules. (If not set, the value of the ``cluster-name`` + cluster option will be used as ``#site-name`` instead.) + * - standby + - .. index:: + pair: node attribute; standby + + If true, the node is in standby mode. This is typically set and queried + via the ``crm_standby`` command rather than directly. + * - terminate + - .. index:: + pair: node attribute; terminate + + If the value is true or begins with any nonzero number, the node will be + fenced. This is typically set by tools rather than directly. + * - #digests-* + - .. index:: + pair: node attribute; #digests + + Attributes whose names start with ``#digests-`` are managed by the cluster + to detect when :ref:`unfencing` needs to be redone, and should never be + used directly. + * - #node-unfenced + - .. index:: + pair: node attribute; #node-unfenced + + When the node was last unfenced (as seconds since the epoch). This is + managed by the cluster and should never be used directly. .. index:: single: node; health .. _node-health: Tracking Node Health ____________________ A node may be functioning adequately as far as cluster membership is concerned, and yet be "unhealthy" in some respect that makes it an undesirable location for resources. For example, a disk drive may be reporting SMART errors, or the CPU may be highly loaded. Pacemaker offers a way to automatically move resources off unhealthy nodes. .. index:: single: node attribute; health Node Health Attributes ###################### Pacemaker will treat any node attribute whose name starts with ``#health`` as an indicator of node health. Node health attributes may have one of the following values: -.. table:: **Allowed Values for Node Health Attributes** +.. list-table:: **Allowed Values for Node Health Attributes** :widths: 25 75 - - +------------+--------------------------------------------------------------+ - | Value | Intended significance | - +============+==============================================================+ - | ``red`` | .. index:: | - | | single: red; node health attribute value | - | | single: node attribute; health (red) | - | | | - | | This indicator is unhealthy | - +------------+--------------------------------------------------------------+ - | ``yellow`` | .. index:: | - | | single: yellow; node health attribute value | - | | single: node attribute; health (yellow) | - | | | - | | This indicator is close to unhealthy (whether worsening or | - | | recovering) | - +------------+--------------------------------------------------------------+ - | ``green`` | .. index:: | - | | single: green; node health attribute value | - | | single: node attribute; health (green) | - | | | - | | This indicator is healthy | - +------------+--------------------------------------------------------------+ - | *integer* | .. 
index:: | - | | single: score; node health attribute value | - | | single: node attribute; health (score) | - | | | - | | A numeric score to apply to all resources on this node (0 or | - | | positive is healthy, negative is unhealthy) | - +------------+--------------------------------------------------------------+ + :header-rows: 1 + + * - Value + - Intended significance + * - ``red`` + - .. index:: + single: red; node health attribute value + single: node attribute; health (red) + + This indicator is unhealthy + * - ``yellow`` + - .. index:: + single: yellow; node health attribute value + single: node attribute; health (yellow) + + This indicator is close to unhealthy (whether worsening or recovering) + * - ``green`` + - .. index:: + single: green; node health attribute value + single: node attribute; health (green) + + This indicator is healthy + * - *integer* + - .. index:: + single: score; node health attribute value + single: node attribute; health (score) + + A numeric score to apply to all resources on this node (0 or positive is + healthy, negative is unhealthy) .. note:: A health attribute may technically be transient or permanent, but generally only transient makes sense. .. note:: ``red``, ``yellow``, and ``green`` function as aliases for particular numeric scores as described later. .. index:: pair: cluster option; node-health-strategy Node Health Strategy #################### Pacemaker assigns a node health score to each node, as the sum of the values of all its node health attributes. This score will be used as a location constraint applied to this node for all resources. The ``node-health-strategy`` cluster option controls how Pacemaker responds to changes in node health attributes, and how it translates ``red``, ``yellow``, and ``green`` to scores. Allowed values are: -.. table:: **Node Health Strategies** +.. list-table:: **Node Health Strategies** :widths: 25 75 - - +----------------+----------------------------------------------------------+ - | Value | Effect | - +================+==========================================================+ - | none | .. index:: | - | | single: node-health-strategy; none | - | | single: none; node-health-strategy value | - | | | - | | Do not track node health attributes at all. | - +----------------+----------------------------------------------------------+ - | migrate-on-red | .. index:: | - | | single: node-health-strategy; migrate-on-red | - | | single: migrate-on-red; node-health-strategy value | - | | | - | | Assign the value of ``-INFINITY`` to ``red``, and 0 to | - | | ``yellow`` and ``green``. This will cause all resources | - | | to move off the node if any attribute is ``red``. | - +----------------+----------------------------------------------------------+ - | only-green | .. index:: | - | | single: node-health-strategy; only-green | - | | single: only-green; node-health-strategy value | - | | | - | | Assign the value of ``-INFINITY`` to ``red`` and | - | | ``yellow``, and 0 to ``green``. This will cause all | - | | resources to move off the node if any attribute is | - | | ``red`` or ``yellow``. | - +----------------+----------------------------------------------------------+ - | progressive | .. index:: | - | | single: node-health-strategy; progressive | - | | single: progressive; node-health-strategy value | - | | | - | | Assign the value of the ``node-health-red`` cluster | - | | option to ``red``, the value of ``node-health-yellow`` | - | | to ``yellow``, and the value of ``node-health-green`` to | - | | ``green``. 
Each node is additionally assigned a score of | - | | ``node-health-base`` (this allows resources to start | - | | even if some attributes are ``yellow``). This strategy | - | | gives the administrator finer control over how important | - | | each value is. | - +----------------+----------------------------------------------------------+ - | custom | .. index:: | - | | single: node-health-strategy; custom | - | | single: custom; node-health-strategy value | - | | | - | | Track node health attributes using the same values as | - | | ``progressive`` for ``red``, ``yellow``, and ``green``, | - | | but do not take them into account. The administrator is | - | | expected to implement a policy by defining :ref:`rules` | - | | referencing node health attributes. | - +----------------+----------------------------------------------------------+ + :header-rows: 1 + + * - Value + - Effect + * - none + - .. index:: + single: node-health-strategy; none + single: none; node-health-strategy value + + Do not track node health attributes at all. + * - migrate-on-red + - .. index:: + single: node-health-strategy; migrate-on-red + single: migrate-on-red; node-health-strategy value + + Assign the value of ``-INFINITY`` to ``red``, and 0 to ``yellow`` and + ``green``. This will cause all resources to move off the node if any + attribute is ``red``. + * - only-green + - .. index:: + single: node-health-strategy; only-green + single: only-green; node-health-strategy value + + Assign the value of ``-INFINITY`` to ``red`` and ``yellow``, and 0 to + ``green``. This will cause all resources to move off the node if any + attribute is ``red`` or ``yellow``. + * - progressive + - .. index:: + single: node-health-strategy; progressive + single: progressive; node-health-strategy value + + Assign the value of the ``node-health-red`` cluster option to ``red``, + the value of ``node-health-yellow`` to ``yellow``, and the value of + ``node-health-green`` to ``green``. Each node is additionally assigned a + score of ``node-health-base`` (this allows resources to start even if + some attributes are ``yellow``). This strategy gives the administrator + finer control over how important each value is. + * - custom + - .. index:: + single: node-health-strategy; custom + single: custom; node-health-strategy value + + Track node health attributes using the same values as ``progressive`` for + ``red``, ``yellow``, and ``green``, but do not take them into account. + The administrator is expected to implement a policy by defining :ref:`rules` + referencing node health attributes. Exempting a Resource from Health Restrictions ############################################# If you want a resource to be able to run on a node even if its health score would otherwise prevent it, set the resource's ``allow-unhealthy-nodes`` meta-attribute to ``true`` *(available since 2.1.3)*. This is particularly useful for node health agents, to allow them to detect when the node becomes healthy again. If you configure a health agent without this setting, then the health agent will be banned from an unhealthy node, and you will have to investigate and clear the health attribute manually once it is healthy to allow resources on the node again. If you want the meta-attribute to apply to a clone, it must be set on the clone itself, not on the resource being cloned. Configuring Node Health Agents ############################## Since Pacemaker calculates node health based on node attributes, any method that sets node attributes may be used to measure node health. 
Configuring Node Health Agents
##############################

Since Pacemaker calculates node health based on node attributes, any method
that sets node attributes may be used to measure node health. The most common
are resource agents and custom daemons.

Pacemaker provides examples that can be used directly or as a basis for custom
code. The ``ocf:pacemaker:HealthCPU``, ``ocf:pacemaker:HealthIOWait``, and
``ocf:pacemaker:HealthSMART`` resource agents set node health attributes based
on CPU and disk status.

To take advantage of this feature, add the resource to your cluster (generally
as a cloned resource with a recurring monitor action, to continually check the
health of all nodes). For example:

.. topic:: Example HealthIOWait resource configuration

   .. code-block:: xml

      <!-- Illustrative reconstruction; the IDs, limits, and operation
           timings shown here are examples only -->
      <clone id="resHealthIOWait-clone">
        <primitive id="resHealthIOWait" class="ocf" provider="pacemaker" type="HealthIOWait">
          <instance_attributes id="resHealthIOWait-params">
            <nvpair id="resHealthIOWait-yellow_limit" name="yellow_limit" value="10"/>
            <nvpair id="resHealthIOWait-red_limit" name="red_limit" value="30"/>
          </instance_attributes>
          <operations>
            <op id="resHealthIOWait-monitor" name="monitor" interval="10s" timeout="5s"/>
          </operations>
        </primitive>
      </clone>

The resource agents use ``attrd_updater`` to set the proper status for each
node running the resource, as a node attribute whose name starts with
``#health`` (for ``HealthIOWait``, the node attribute is named
``#health-iowait``).

When a node is no longer faulty, you can force the cluster to make it
available to take resources without waiting for the next monitor, by setting
the node health attribute to green. For example:

.. topic:: **Force node1 to be marked as healthy**

   .. code-block:: none

      # attrd_updater --name "#health-iowait" --update "green" --node "node1"

diff --git a/doc/sphinx/Pacemaker_Explained/resources.rst b/doc/sphinx/Pacemaker_Explained/resources.rst
index 28c8e02ea0..6d812fe6a6 100644
--- a/doc/sphinx/Pacemaker_Explained/resources.rst
+++ b/doc/sphinx/Pacemaker_Explained/resources.rst
@@ -1,832 +1,832 @@

.. _resource:

Resources
---------

.. _s-resource-primitive:

.. index::
   single: resource

A *resource* is a service managed by Pacemaker. The simplest type of resource,
a *primitive*, is described in this chapter. More complex forms, such as
groups and clones, are described in later chapters.

Every primitive has a *resource agent* that provides Pacemaker a standardized
interface for managing the service. This allows Pacemaker to be agnostic about
the services it manages. Pacemaker doesn't need to understand how the service
works because it relies on the resource agent to do the right thing when
asked.

Every resource has a *standard* (also called *class*) specifying the interface
that its resource agent follows, and a *type* identifying the specific service
being managed.

.. _s-resource-supported:

.. index::
   single: resource; standard

Resource Standards
##################

Pacemaker can use resource agents complying with these standards, described in
more detail below:

* ocf
* lsb
* systemd
* service
* stonith

Support for some standards is controlled by build options and so might not be
available in any particular build of Pacemaker. The command
``crm_resource --list-standards`` will show which standards are supported by
the local build.

.. index::
   single: resource; OCF
   single: OCF; resources
   single: Open Cluster Framework; resources

Open Cluster Framework
______________________

The Open Cluster Framework (OCF) Resource Agent API is a ClusterLabs standard
for managing services. It is the most preferred standard, since it is
specifically designed for use in a Pacemaker cluster.

OCF agents are scripts that support a variety of actions including ``start``,
``stop``, and ``monitor``. They may accept parameters, making them more
flexible than agents of other standards. The number and purpose of parameters
is left to the agent, which advertises them via the ``meta-data`` action.

Unlike other standards, OCF agents have a *provider* as well as a standard and
type.

For more information, see the "Resource Agents" chapter of *Pacemaker
Administration* and the `OCF standard `_.
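To explore which OCF providers and agents are installed locally,
``crm_resource`` can list them (a usage sketch; output varies by
installation):

.. code-block:: none

   # crm_resource --list-ocf-providers
   # crm_resource --list-agents ocf:pacemaker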
.. _s-resource-supported-systemd:

.. index::
   single: Resource; Systemd
   single: Systemd; resources

Systemd
_______

Most Linux distributions use `Systemd `_ for system initialization and service
management.

*Unit files* specify how to manage services and are usually provided by the
distribution.

Pacemaker can manage systemd units of type service, socket, mount, timer, or
path. Simply create a resource with ``systemd`` as the resource standard and
the unit file name as the resource type. Do *not* run ``systemctl enable`` on
the unit.

.. important::

   Make sure that any systemd services to be controlled by the cluster are
   *not* enabled to start at boot.
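As a sketch, a resource managing a distribution-provided ``httpd.service``
unit could be defined as follows (the resource ID is arbitrary, and ``httpd``
assumes your distribution ships a unit by that name):

.. code-block:: xml

   <primitive id="web-server" class="systemd" type="httpd"/>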
.. index::
   single: resource; LSB
   single: LSB; resources
   single: Linux Standard Base; resources

Linux Standard Base
___________________

*LSB* resource agents, also known as `SysV-style `_, are scripts that provide
start, stop, and status actions for a service. They are provided by some
operating system distributions. If a full path is not given, they are assumed
to be located in a directory specified when your Pacemaker software was built
(usually ``/etc/init.d``).

In order to be used with Pacemaker, they must conform to the
`LSB specification `_ as it relates to init scripts.

.. warning::

   Some LSB scripts do not fully comply with the standard. For details on how
   to check whether your script is LSB-compatible, see the "Resource Agents"
   chapter of *Pacemaker Administration*. Common problems include:

   * Not implementing the ``status`` action
   * Not observing the correct exit status codes
   * Returning an error when starting an already-running resource
   * Returning an error when stopping an already-stopped resource

.. important::

   Make sure the host is *not* configured to start any LSB services at boot
   that will be controlled by the cluster.

.. index::
   single: Resource; System Services
   single: System Service; resources

System Services
_______________

Since there is more than one type of system service (``systemd`` and ``lsb``),
Pacemaker supports a special ``service`` alias which intelligently figures out
which one applies to a given cluster node. This is particularly useful when
the cluster contains a mix of ``systemd`` and ``lsb``.

If the ``service`` standard is specified, Pacemaker will try to find the named
service as an LSB init script, and if none exists, a systemd unit file.

.. index::
   single: Resource; STONITH
   single: STONITH; resources

STONITH
_______

The ``stonith`` standard is used for managing fencing devices, discussed later
in :ref:`fencing`.

.. _primitive-resource:

Resource Properties
###################

These values tell the cluster which resource agent to use for the resource,
where to find that resource agent, and what standards it conforms to.

.. list-table:: **Properties of a Primitive Resource**
   :widths: 25 75
   :header-rows: 1

   * - Field
     - Description
   * - id
     - .. index::
          single: id; resource
          single: resource; property, id

       Your name for the resource
   * - class
     - .. index::
          single: class; resource
          single: resource; property, class

       The standard the resource agent conforms to. Allowed values: ``lsb``,
       ``ocf``, ``service``, ``stonith``, and ``systemd``
   * - description
     - .. index::
          single: description; resource
          single: resource; property, description

       Arbitrary text for the user's use (ignored by Pacemaker)
   * - type
     - .. index::
          single: type; resource
          single: resource; property, type

       The name of the resource agent you wish to use, e.g. ``IPaddr`` or
       ``Filesystem``
   * - provider
     - .. index::
          single: provider; resource
          single: resource; property, provider

       The OCF spec allows multiple vendors to supply the same resource
       agent. To use the OCF resource agents supplied by the Heartbeat
       project, you would specify ``heartbeat`` here.

The XML definition of a resource can be queried with the **crm_resource**
tool. For example:

.. code-block:: none

   # crm_resource --resource Email --query-xml

might produce:

.. topic:: A system resource definition

   .. code-block:: xml

      <!-- Reconstructed example output; the service name is illustrative -->
      <primitive id="Email" class="service" type="exim"/>

.. note::

   One of the main drawbacks to system services (lsb and systemd) is that
   they do not allow parameters.

.. topic:: An OCF resource definition

   .. code-block:: xml

      <!-- Reconstructed example; the IDs and address are illustrative -->
      <primitive id="Public-IP" class="ocf" type="IPaddr" provider="heartbeat">
         <instance_attributes id="Public-IP-params">
            <nvpair id="Public-IP-ip" name="ip" value="192.0.2.2"/>
         </instance_attributes>
      </primitive>

.. _resource_options:

Resource Options
################

Resources have two types of options: *meta-attributes* and *instance
attributes*. Meta-attributes apply to any type of resource, while instance
attributes are specific to each resource agent.

Resource Meta-Attributes
________________________

Meta-attributes are used by the cluster to decide how a resource should behave
and can be easily set using the ``--meta`` option of the **crm_resource**
command.
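For example, to set and then read back a meta-attribute on the ``Email``
resource used in the examples above (``--get-parameter`` is the query
counterpart of ``--set-parameter``):

.. code-block:: none

   # crm_resource --meta --resource Email --set-parameter resource-stickiness --parameter-value 100
   # crm_resource --meta --resource Email --get-parameter resource-stickiness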
.. list-table:: **Meta-Attributes of a Primitive Resource**
   :class: longtable
   :widths: 20 15 20 45
   :header-rows: 1

   * - Name
     - Type
     - Default
     - Description
   * - .. _meta_priority:

       .. index::
          single: priority; resource option
          single: resource; option, priority

       priority
     - :ref:`score `
     - 0
     - If not all resources can be active, the cluster will stop
       lower-priority resources in order to keep higher-priority ones active.
   * - .. _meta_critical:

       .. index::
          single: critical; resource option
          single: resource; option, critical

       critical
     - :ref:`boolean `
     - true
     - Use this value as the default for ``influence`` in all
       :ref:`colocation constraints ` involving this resource, as well as in
       the implicit colocation constraints created if this resource is in a
       :ref:`group `. For details, see :ref:`s-coloc-influence`.
       *(since 2.1.0)*
   * - .. _meta_target_role:

       .. index::
          single: target-role; resource option
          single: resource; option, target-role

       target-role
     - :ref:`enumeration `
     - Started
     - What state should the cluster attempt to keep this resource in?
       Allowed values:

       * ``Stopped:`` Force the resource to be stopped
       * ``Started:`` Allow the resource to be started (and in the case of
         :ref:`promotable ` clone resources, promoted if appropriate)
       * ``Unpromoted:`` Allow the resource to be started, but only in the
         unpromoted role if the resource is :ref:`promotable `
       * ``Promoted:`` Equivalent to ``Started``
   * - .. _meta_is_managed:
       .. _is_managed:

       .. index::
          single: is-managed; resource option
          single: resource; option, is-managed

       is-managed
     - :ref:`boolean `
     - true
     - If false, the cluster will not start, stop, promote, or demote the
       resource on any node. Recurring actions for the resource are
       unaffected. Maintenance mode overrides this setting.
   * - .. _meta_maintenance:
       .. _rsc_maintenance:

       .. index::
          single: maintenance; resource option
          single: resource; option, maintenance

       maintenance
     - :ref:`boolean `
     - false
     - If true, the cluster will not start, stop, promote, or demote the
       resource on any node, and will pause any recurring monitors (except
       those specifying ``role`` as ``Stopped``). If true, the
       :ref:`maintenance-mode ` cluster option or
       :ref:`maintenance ` node attribute overrides this.
   * - .. _meta_resource_stickiness:
       .. _resource-stickiness:

       .. index::
          single: resource-stickiness; resource option
          single: resource; option, resource-stickiness

       resource-stickiness
     - :ref:`score `
     - 1 for individual clone instances, 0 for all other resources
     - A score that will be added to the current node when a resource is
       already active. This allows running resources to stay where they are,
       even if they would be placed elsewhere if they were being started from
       a stopped state.
   * - .. _meta_requires:
       .. _requires:

       .. index::
          single: requires; resource option
          single: resource; option, requires

       requires
     - :ref:`enumeration `
     - ``quorum`` for resources with a ``class`` of ``stonith``, otherwise
       ``unfencing`` if unfencing is active in the cluster, otherwise
       ``fencing`` if ``stonith-enabled`` is true, otherwise ``quorum``
     - Conditions under which the resource can be started. Allowed values:

       * ``nothing:`` The cluster can always start this resource.
       * ``quorum:`` The cluster can start this resource only if a majority
         of the configured nodes are active.
       * ``fencing:`` The cluster can start this resource only if a majority
         of the configured nodes are active *and* any failed or unknown nodes
         have been :ref:`fenced `.
       * ``unfencing:`` The cluster can start this resource only if a
         majority of the configured nodes are active *and* any failed or
         unknown nodes have been fenced, *and* only on nodes that have been
         :ref:`unfenced `.
   * - .. _meta_migration_threshold:

       .. index::
          single: migration-threshold; resource option
          single: resource; option, migration-threshold

       migration-threshold
     - :ref:`score `
     - INFINITY
     - How many failures may occur for this resource on a node before this
       node is marked ineligible to host this resource.
       A value of 0 indicates that this feature is disabled (the node will
       never be marked ineligible); by contrast, the cluster treats
       ``INFINITY`` (the default) as a very large but finite number. This
       option has an effect only if the failed operation specifies
       ``on-fail`` as ``restart`` (the default), and additionally for failed
       ``start`` operations, if the cluster property
       ``start-failure-is-fatal`` is ``false``.
   * - .. _meta_failure_timeout:

       .. index::
          single: failure-timeout; resource option
          single: resource; option, failure-timeout

       failure-timeout
     - :ref:`duration `
     - 0
     - Ignore previously failed resource actions after this much time has
       passed without new failures (potentially allowing the resource back to
       the node on which it failed, if it previously reached its
       ``migration-threshold`` there). A value of 0 indicates that failures
       do not expire. **WARNING:** If this value is low, and pending cluster
       activity prevents the cluster from responding to a failure within that
       time, then the failure will be ignored completely and will not cause
       recovery of the resource, even if a recurring action continues to
       report failure. This value should be greater than the longest
       :ref:`action timeout ` of any resource in the cluster. A value in
       hours or days is reasonable.
   * - .. _meta_multiple_active:

       .. index::
          single: multiple-active; resource option
          single: resource; option, multiple-active

       multiple-active
     - :ref:`enumeration `
     - stop_start
     - What should the cluster do if it ever finds the resource active on
       more than one node? Allowed values:

       * ``block``: mark the resource as unmanaged
       * ``stop_only``: stop all active instances and leave them that way
       * ``stop_start``: stop all active instances and start the resource in
         one location only
       * ``stop_unexpected``: stop all active instances except where the
         resource should be active (this should be used only when extra
         instances are not expected to disrupt existing instances, and the
         resource agent's monitor of an existing instance is capable of
         detecting any problems that could be caused; note that any resources
         ordered after this one will still need to be restarted)
         *(since 2.1.3)*
   * - .. _meta_allow_migrate:

       .. index::
          single: allow-migrate; resource option
          single: resource; option, allow-migrate

       allow-migrate
     - :ref:`boolean `
     - true for ``ocf:pacemaker:remote`` resources, false otherwise
     - Whether the cluster should try to "live migrate" this resource when it
       needs to be moved (see :ref:`live-migration`)
   * - .. _meta_allow_unhealthy_nodes:

       .. index::
          single: allow-unhealthy-nodes; resource option
          single: resource; option, allow-unhealthy-nodes

       allow-unhealthy-nodes
     - :ref:`boolean `
     - false
     - Whether the resource should be able to run on a node even if the
       node's health score would otherwise prevent it (see
       :ref:`node-health`) *(since 2.1.3)*
   * - .. _meta_container_attribute_target:

       .. index::
          single: container-attribute-target; resource option
          single: resource; option, container-attribute-target

       container-attribute-target
     - :ref:`enumeration `
     -
     - Specific to bundle resources; see :ref:`s-bundle-attributes`

As an example of setting resource options, if you performed the following
commands on an LSB Email resource:

.. code-block:: none

   # crm_resource --meta --resource Email --set-parameter priority --parameter-value 100
   # crm_resource -m -r Email -p multiple-active -v block

the resulting resource definition might be:
.. topic:: An LSB resource with cluster options

   .. code-block:: xml

      <!-- Reconstructed example; the IDs are illustrative -->
      <primitive id="Email" class="lsb" type="exim">
         <meta_attributes id="Email-meta_attributes">
            <nvpair id="Email-meta_attributes-priority" name="priority" value="100"/>
            <nvpair id="Email-meta_attributes-multiple-active" name="multiple-active" value="block"/>
         </meta_attributes>
      </primitive>

In addition to the cluster-defined meta-attributes described above, you may
also configure arbitrary meta-attributes of your own choosing. Most commonly,
this would be done for use in :ref:`rules `. For example, an IT department
might define a custom meta-attribute to indicate which company department each
resource is intended for. To reduce the chance of name collisions with
cluster-defined meta-attributes added in the future, it is recommended to use
a unique, organization-specific prefix for such attributes.

.. _s-resource-defaults:

Setting Global Defaults for Resource Meta-Attributes
____________________________________________________

To set a default value for a resource option, add it to the ``rsc_defaults``
section with ``crm_attribute``. For example,

.. code-block:: none

   # crm_attribute --type rsc_defaults --name is-managed --update false

would prevent the cluster from starting or stopping any of the resources in
the configuration (unless of course the individual resources were specifically
enabled by having their ``is-managed`` set to ``true``).

Resource Instance Attributes
____________________________

The resource agents of some resource standards (``lsb`` and ``systemd`` *not*
among them) can be given parameters which determine how they behave and which
instance of a service they control.

If your resource agent supports parameters, you can add them with the
``crm_resource`` command. For example,

.. code-block:: none

   # crm_resource --resource Public-IP --set-parameter ip --parameter-value 192.0.2.2

would create an entry in the resource like this:

.. topic:: An example OCF resource with instance attributes

   .. code-block:: xml

      <!-- Reconstructed example; the IDs are illustrative -->
      <primitive id="Public-IP" class="ocf" type="IPaddr" provider="heartbeat">
         <instance_attributes id="params-public-ip">
            <nvpair id="params-public-ip-ip" name="ip" value="192.0.2.2"/>
         </instance_attributes>
      </primitive>

For an OCF resource, the result would be an environment variable called
``OCF_RESKEY_ip`` with a value of ``192.0.2.2``.

The list of instance attributes supported by an OCF resource agent can be
found by calling the resource agent with the ``meta-data`` command. The output
contains an XML description of all the supported attributes, their purpose,
and default values.

.. topic:: Displaying the metadata for the Dummy resource agent template

   .. code-block:: none

      # export OCF_ROOT=/usr/lib/ocf
      # $OCF_ROOT/resource.d/pacemaker/Dummy meta-data

   .. code-block:: xml

      <?xml version="1.0"?>
      <!DOCTYPE resource-agent SYSTEM "ra-api-1.dtd">
      <!-- Reconstructed from the agent's descriptions; parameter attributes,
           defaults, and the action list are omitted here -->
      <resource-agent name="Dummy">
        <version>1.1</version>

        <longdesc lang="en">
          This is a dummy OCF resource agent. It does absolutely nothing
          except keep track of whether it is running or not, and can be
          configured so that actions fail or take a long time. Its purpose is
          primarily for testing, and to serve as a template for resource
          agent writers.
        </longdesc>
        <shortdesc lang="en">Example stateless resource agent</shortdesc>

        <parameters>
          <parameter name="state">
            <longdesc lang="en">Location to store the resource state in.</longdesc>
            <shortdesc lang="en">State file</shortdesc>
            <content type="string"/>
          </parameter>

          <parameter name="passwd">
            <longdesc lang="en">Fake password field</longdesc>
            <shortdesc lang="en">Password</shortdesc>
            <content type="string"/>
          </parameter>

          <parameter name="fake">
            <longdesc lang="en">Fake attribute that can be changed to cause a reload</longdesc>
            <shortdesc lang="en">Fake attribute that can be changed to cause a reload</shortdesc>
            <content type="string"/>
          </parameter>

          <parameter name="op_sleep">
            <longdesc lang="en">Number of seconds to sleep during operations. This can be used
            to test how the cluster reacts to operation timeouts.</longdesc>
            <shortdesc lang="en">Operation sleep duration in seconds.</shortdesc>
            <content type="string"/>
          </parameter>

          <parameter name="fail_start_on">
            <longdesc lang="en">Start, migrate_from, and reload-agent actions will return
            failure if running on the host specified here, but the resource
            will run successfully anyway (future monitor calls will find it
            running). This can be used to test on-fail=ignore.</longdesc>
            <shortdesc lang="en">Report bogus start failure on specified host</shortdesc>
            <content type="string"/>
          </parameter>

          <parameter name="envfile">
            <longdesc lang="en">If this is set, the environment will be dumped to this file for
            every call.</longdesc>
            <shortdesc lang="en">Environment dump file</shortdesc>
            <content type="string"/>
          </parameter>
        </parameters>
      </resource-agent>
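Alternatively, recent ``crm_resource`` builds can fetch the same metadata by
standard, provider, and type, without needing the agent's installation path
(a usage sketch):

.. code-block:: none

   # crm_resource --show-metadata ocf:pacemaker:Dummy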
Pacemaker Remote Resources
##########################

:ref:`Pacemaker Remote ` nodes are defined by resources.

.. _remote_nodes:

.. index::
   single: node; remote
   single: Pacemaker Remote; remote node
   single: remote node

Remote Nodes
____________

A remote node is defined by a connection resource using the special, built-in
**ocf:pacemaker:remote** resource agent.

.. list-table:: **ocf:pacemaker:remote Instance Attributes**
   :class: longtable
   :widths: 25 10 15 50
   :header-rows: 1

   * - Name
     - Type
     - Default
     - Description
   * - .. _remote_server:

       .. index::
          pair: remote node; server

       server
     - :ref:`text `
     - resource ID
     - Hostname or IP address used to connect to the remote node. The remote
       executor on the remote node must be configured to accept connections
       on this address.
   * - .. _remote_port:

       .. index::
          pair: remote node; port

       port
     - :ref:`port `
     - 3121
     - TCP port on the remote node used for its Pacemaker Remote connection.
       The remote executor on the remote node must be configured to listen on
       this port.
   * - .. _remote_reconnect_interval:

       .. index::
          pair: remote node; reconnect_interval

       reconnect_interval
     - :ref:`duration `
     - 0
     - If positive, the cluster will attempt to reconnect to a remote node at
       this interval after an active connection has been lost. Otherwise, the
       cluster will attempt to reconnect immediately (after any fencing, if
       needed).
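Putting it together, a minimal connection resource for a remote node might
look like this sketch (the node name and address are placeholders):

.. code-block:: xml

   <primitive id="remote1" class="ocf" provider="pacemaker" type="remote">
      <instance_attributes id="remote1-params">
         <nvpair id="remote1-params-server" name="server" value="192.0.2.20"/>
      </instance_attributes>
   </primitive>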
.. _guest_nodes:

.. index::
   single: node; guest
   single: Pacemaker Remote; guest node
   single: guest node

Guest Nodes
___________

When configuring a virtual machine as a guest node, the virtual machine is
created using one of the usual resource agents for that purpose (for example,
**ocf:heartbeat:VirtualDomain** or **ocf:heartbeat:Xen**), with additional
meta-attributes.

No restrictions are enforced on what agents may be used to create a guest
node, but obviously the agent must create a distinct environment capable of
running the remote executor and cluster resources. An additional requirement
is that fencing the node hosting the guest node resource must be sufficient
for ensuring the guest node is stopped. This means that not all hypervisors
supported by **VirtualDomain** may be used to create guest nodes; if the guest
can survive the hypervisor being fenced, it is unsuitable for use as a guest
node.

.. list-table:: **Guest Node Meta-Attributes**
   :class: longtable
   :widths: 25 10 20 45
   :header-rows: 1

   * - Name
     - Type
     - Default
     - Description
   * - .. _meta_remote_node:

       .. index::
          single: remote-node; resource option
          single: resource; option, remote-node

       remote-node
     - :ref:`text `
     -
     - If specified, this resource defines a guest node using this node name.
       The guest must be configured to run the remote executor when it is
       started. This value *must not* be the same as any resource or node ID.
   * - .. _meta_remote_addr:

       .. index::
          single: remote-addr; resource option
          single: resource; option, remote-addr

       remote-addr
     - :ref:`text `
     - value of ``remote-node``
     - If ``remote-node`` is specified, the hostname or IP address used to
       connect to the guest. The remote executor on the guest must be
       configured to accept connections on this address.
   * - .. _meta_remote_port:

       .. index::
          single: remote-port; resource option
          single: resource; option, remote-port

       remote-port
     - :ref:`port `
     - 3121
     - If ``remote-node`` is specified, the port on the guest used for its
       Pacemaker Remote connection. The remote executor on the guest must be
       configured to listen on this port.
   * - .. _meta_remote_connect_timeout:

       .. index::
          single: remote-connect-timeout; resource option
          single: resource; option, remote-connect-timeout

       remote-connect-timeout
     - :ref:`timeout `
     - 60s
     - If ``remote-node`` is specified, how long before a pending guest
       connection will time out.
   * - .. _meta_remote_allow_migrate:

       .. index::
          single: remote-allow-migrate; resource option
          single: resource; option, remote-allow-migrate

       remote-allow-migrate
     - :ref:`boolean `
     - true
     - If ``remote-node`` is specified, this acts as the ``allow-migrate``
       meta-attribute for its implicitly created remote connection resource
       (``ocf:pacemaker:remote``).

Removing Pacemaker Remote Nodes
_______________________________

If the resource creating a remote node connection or guest node is removed
from the configuration, status output may continue to show the affected node
(as offline). To remove the node from status output, run the following
command, replacing ``$NODE_NAME`` appropriately:

.. code-block:: none

   # crm_node --force --remove $NODE_NAME

.. warning::

   Be absolutely sure that there are no references to the node's resource in
   the configuration before running the above command.
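One way to check for lingering references first is to search the live CIB for
the node name (an illustrative sketch, not the only method):

.. code-block:: none

   # cibadmin --query | grep -w "$NODE_NAME"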