diff --git a/doc/sphinx/Pacemaker_Administration/configuring.rst b/doc/sphinx/Pacemaker_Administration/configuring.rst
index 415dd81967..16539020d1 100644
--- a/doc/sphinx/Pacemaker_Administration/configuring.rst
+++ b/doc/sphinx/Pacemaker_Administration/configuring.rst
@@ -1,278 +1,260 @@
.. index::
single: configuration
single: CIB
Configuring Pacemaker
---------------------
Pacemaker's configuration, the CIB, is stored in XML format. Cluster
administrators have multiple options for modifying the configuration either via
the XML, or at a more abstract (and easier for humans to understand) level.
Pacemaker reacts to configuration changes as soon as they are saved.
Pacemaker's command-line tools and most higher-level tools provide the ability
to batch changes together and commit them at once, rather than make a series of
small changes, which could cause unnecessary actions as Pacemaker responds to
each change individually.
Pacemaker tracks revisions to the configuration and will reject any update
older than the current revision. Thus, it is a good idea to serialize all
changes to the configuration. Avoid attempting simultaneous changes, whether on
the same node or different nodes, and whether manually or using some automated
configuration tool.
.. note::
It is not necessary to update the configuration on all cluster nodes.
Pacemaker immediately synchronizes changes to all active members of the
cluster. To reduce bandwidth, the cluster only broadcasts the incremental
updates that result from your changes and uses checksums to ensure that each
copy is consistent.
Configuration Using Higher-level Tools
######################################
Most users will benefit from using higher-level tools provided by projects
separate from Pacemaker. Some of the most commonly used include the crm shell,
hawk, and pcs. [#]_
See those projects' documentation for details on how to configure Pacemaker
using them.
Configuration Using Pacemaker's Command-Line Tools
##################################################
Pacemaker provides lower-level, command-line tools to manage the cluster. Most
configuration tasks can be performed with these tools, without needing any XML
knowledge.
To enable STONITH, for example, one could run:
.. code-block:: none
# crm_attribute --name stonith-enabled --update 1
Or, to check whether **node1** is allowed to run resources, one could run:
.. code-block:: none
# crm_standby --query --node node1
Or, to change the failure threshold of **my-test-rsc**, one can use:
.. code-block:: none
# crm_resource -r my-test-rsc --set-parameter migration-threshold --parameter-value 3 --meta
Examples of using these tools for specific cases will be given throughout this
document where appropriate. See the man pages for further details.
See :ref:`cibadmin` for how to edit the CIB using XML.
See :ref:`crm_shadow` for a way to make a series of changes, then commit them
all at once to the live cluster.
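As a minimal sketch of that shadow-CIB workflow (the shadow name **test** is
arbitrary), one might run:
.. code-block:: none
   # crm_shadow --create test
   # crm_attribute --name stonith-enabled --update false
   # crm_shadow --commit test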
.. index::
single: configuration; CIB properties
single: CIB; properties
single: CIB property
Working with CIB Properties
___________________________
Although these fields can be written to by the user, in most cases the cluster
will overwrite any user-specified values with the "correct" ones.
To change the ones that can be specified by the user, for example
``admin_epoch``, one should use:
.. code-block:: none
# cibadmin --modify --xml-text '<cib admin_epoch="42"/>'
A complete set of CIB properties will look something like this:
.. topic:: XML attributes set for a cib element
   .. code-block:: xml
      <cib crm_feature_set="3.0.7" validate-with="pacemaker-1.2"
         admin_epoch="42" epoch="116" num_updates="1"
         cib-last-written="Mon Jan 12 15:46:39 2015" updated-origin="rhel7-1"
         updated-client="crm_attribute" have-quorum="1" dc-uuid="1">
.. index::
single: configuration; cluster options
Querying and Setting Cluster Options
____________________________________
Cluster options can be queried and modified using the ``crm_attribute`` tool.
To get the current value of ``cluster-delay``, you can run:
.. code-block:: none
# crm_attribute --query --name cluster-delay
which is more simply written as:
.. code-block:: none
# crm_attribute -G -n cluster-delay
If a value is found, you'll see a result like this:
.. code-block:: none
# crm_attribute -G -n cluster-delay
scope=crm_config name=cluster-delay value=60s
If no value is found, the tool will display an error:
.. code-block:: none
# crm_attribute -G -n clusta-deway
scope=crm_config name=clusta-deway value=(null)
Error performing operation: No such device or address
To use a different value (for example, 30 seconds), simply run:
.. code-block:: none
# crm_attribute --name cluster-delay --update 30s
To go back to the cluster's default value, you can delete the value, for example:
.. code-block:: none
# crm_attribute --name cluster-delay --delete
Deleted crm_config option: id=cib-bootstrap-options-cluster-delay name=cluster-delay
When Options are Listed More Than Once
______________________________________
If you ever see something like the following, it means that the option you're
modifying is present more than once.
.. topic:: Deleting an option that is listed twice
.. code-block:: none
# crm_attribute --name batch-limit --delete
Please choose from one of the matches below and supply the 'id' with --id
Multiple attributes match name=batch-limit in crm_config:
Value: 50 (set=cib-bootstrap-options, id=cib-bootstrap-options-batch-limit)
Value: 100 (set=custom, id=custom-batch-limit)
In such cases, follow the on-screen instructions to perform the requested
action. To determine which value is currently being used by the cluster, refer
to the "Rules" chapter of *Pacemaker Explained*.
.. index::
single: configuration; remote
.. _remote_connection:
Connecting from a Remote Machine
################################
Provided Pacemaker is installed on a machine, it is possible to connect to the
cluster even if the machine itself is not part of the cluster. To do this, one
simply sets up a number of environment variables and runs the same commands as
when working on a cluster node.
.. table:: **Environment Variables Used to Connect to Remote Instances of the CIB**
+----------------------+-----------+------------------------------------------------+
| Environment Variable | Default | Description |
+======================+===========+================================================+
| CIB_user | $USER | .. index:: |
| | | single: CIB_user |
| | | single: environment variable; CIB_user |
| | | |
| | | The user to connect as. Needs to be |
| | | part of the ``haclient`` group on |
| | | the target host. |
+----------------------+-----------+------------------------------------------------+
| CIB_passwd | | .. index:: |
| | | single: CIB_passwd |
| | | single: environment variable; CIB_passwd |
| | | |
| | | The user's password. Read from the |
| | | command line if unset. |
+----------------------+-----------+------------------------------------------------+
| CIB_server | localhost | .. index:: |
| | | single: CIB_server |
| | | single: environment variable; CIB_server |
| | | |
| | | The host to contact |
+----------------------+-----------+------------------------------------------------+
| CIB_port | | .. index:: |
| | | single: CIB_port |
| | | single: environment variable; CIB_port |
| | | |
| | | The port on which to contact the server; |
| | | required. |
+----------------------+-----------+------------------------------------------------+
| CIB_encrypted | TRUE | .. index:: |
| | | single: CIB_encrypted |
| | | single: environment variable; CIB_encrypted |
| | | |
| | | Whether to encrypt network traffic |
+----------------------+-----------+------------------------------------------------+
So, if **c001n01** is an active cluster node and is listening on port 1234
for connections, and **someuser** is a member of the **haclient** group,
then the following would prompt for **someuser**'s password and return
the cluster's current configuration:
.. code-block:: none
# export CIB_port=1234; export CIB_server=c001n01; export CIB_user=someuser;
# cibadmin -Q
For security reasons, the cluster does not listen for remote connections by
default. If you wish to allow remote access, you need to set the
``remote-tls-port`` (encrypted) or ``remote-clear-port`` (unencrypted) CIB
properties (i.e., those kept in the ``cib`` tag, like ``num_updates`` and
-``epoch``).
-
-.. table:: **Extra top-level CIB properties for remote access**
-
- +----------------------+-----------+------------------------------------------------------+
- | CIB Property | Default | Description |
- +======================+===========+======================================================+
- | remote-tls-port | | .. index:: |
- | | | single: remote-tls-port |
- | | | single: CIB property; remote-tls-port |
- | | | |
- | | | Listen for encrypted remote connections |
- | | | on this port. |
- +----------------------+-----------+------------------------------------------------------+
- | remote-clear-port | | .. index:: |
- | | | single: remote-clear-port |
- | | | single: CIB property; remote-clear-port |
- | | | |
- | | | Listen for plaintext remote connections |
- | | | on this port. |
- +----------------------+-----------+------------------------------------------------------+
+``epoch``). The encrypted option uses anonymous (keyless) encryption, which is
+subject to man-in-the-middle attacks, so either option should be used only on
+protected networks.
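+For example, one way to allow encrypted remote connections on an arbitrarily
+chosen port 1234 would be:
+.. code-block:: none
+   # cibadmin --modify --xml-text '<cib remote-tls-port="1234"/>'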
.. important::
The Pacemaker version on the administration host must be the same as or
greater than the version(s) on the cluster nodes. Otherwise, it may not have the
schema files necessary to validate the CIB.
.. rubric:: Footnotes
.. [#] For a list, see "Configuration Tools" at
https://clusterlabs.org/components.html
diff --git a/doc/sphinx/Pacemaker_Explained/options.rst b/doc/sphinx/Pacemaker_Explained/options.rst
index ca7ea2a8a3..201b2f6f7e 100644
--- a/doc/sphinx/Pacemaker_Explained/options.rst
+++ b/doc/sphinx/Pacemaker_Explained/options.rst
@@ -1,631 +1,650 @@
Cluster-Wide Configuration
--------------------------
.. index::
pair: XML element; cib
pair: XML element; configuration
Configuration Layout
####################
The cluster is defined by the Cluster Information Base (CIB), which uses XML
notation. The simplest CIB, an empty one, looks like this:
.. topic:: An empty configuration
   .. code-block:: xml
      <cib crm_feature_set="3.6.0" validate-with="pacemaker-3.5" epoch="1"
          num_updates="0" admin_epoch="0">
        <configuration>
          <crm_config/>
          <nodes/>
          <resources/>
          <constraints/>
        </configuration>
        <status/>
      </cib>
The empty configuration above contains the major sections that make up a CIB:
* ``cib``: The entire CIB is enclosed with a ``cib`` element. Certain
fundamental settings are defined as attributes of this element.
* ``configuration``: This section -- the primary focus of this document --
contains traditional configuration information such as what resources the
cluster serves and the relationships among them.
* ``crm_config``: cluster-wide configuration options
* ``nodes``: the machines that host the cluster
* ``resources``: the services run by the cluster
* ``constraints``: indications of how resources should be placed
* ``status``: This section contains the history of each resource on each
node. Based on this data, the cluster can construct the complete current
state of the cluster. The authoritative source for this section is the
local executor (pacemaker-execd process) on each cluster node, and the
cluster will occasionally repopulate the entire section. For this reason,
it is never written to disk, and administrators are advised against
modifying it in any way.
In this document, configuration settings will be described as properties or
options based on how they are defined in the CIB:
* Properties are XML attributes of an XML element.
* Options are name-value pairs expressed as ``nvpair`` child elements of an XML
element.
Normally, you will use command-line tools that abstract the XML, so the
distinction will be unimportant; both properties and options are cluster
settings you can tweak.
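As an illustrative sketch (the ``id`` values are arbitrary), ``validate-with``
below is a property of the ``cib`` element, while ``stonith-enabled`` is an
option expressed as an ``nvpair``:
.. topic:: A property and an option
   .. code-block:: xml
      <cib validate-with="pacemaker-3.5">
        <configuration>
          <crm_config>
            <cluster_property_set id="cib-bootstrap-options">
              <nvpair id="cib-bootstrap-options-stonith-enabled"
                      name="stonith-enabled" value="true"/>
            </cluster_property_set>
          </crm_config>
        </configuration>
      </cib>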
CIB Properties
##############
Certain settings are defined by CIB properties (that is, attributes of the
``cib`` tag) rather than with the rest of the cluster configuration in the
``configuration`` section.
The reason is simply a matter of parsing. These options are used by the
configuration database, which is, by design, mostly ignorant of the content it
holds. So the decision was made to place them in an easy-to-find location.
.. table:: **CIB Properties**
:class: longtable
:widths: 1 3
- +------------------+-----------------------------------------------------------+
- | Attribute | Description |
- +==================+===========================================================+
- | admin_epoch | .. index:: |
- | | pair: admin_epoch; cib |
- | | |
- | | When a node joins the cluster, the cluster performs a |
- | | check to see which node has the best configuration. It |
- | | asks the node with the highest (``admin_epoch``, |
- | | ``epoch``, ``num_updates``) tuple to replace the |
- | | configuration on all the nodes -- which makes setting |
- | | them, and setting them correctly, very important. |
- | | ``admin_epoch`` is never modified by the cluster; you can |
- | | use this to make the configurations on any inactive nodes |
- | | obsolete. |
- | | |
- | | **Warning:** Never set this value to zero. In such cases, |
- | | the cluster cannot tell the difference between your |
- | | configuration and the "empty" one used when nothing is |
- | | found on disk. |
- +------------------+-----------------------------------------------------------+
- | epoch | .. index:: |
- | | pair: epoch; cib |
- | | |
- | | The cluster increments this every time the configuration |
- | | is updated (usually by the administrator). |
- +------------------+-----------------------------------------------------------+
- | num_updates | .. index:: |
- | | pair: num_updates; cib |
- | | |
- | | The cluster increments this every time the configuration |
- | | or status is updated (usually by the cluster) and resets |
- | | it to 0 when epoch changes. |
- +------------------+-----------------------------------------------------------+
- | validate-with | .. index:: |
- | | pair: validate-with; cib |
- | | |
- | | Determines the type of XML validation that will be done |
- | | on the configuration. If set to ``none``, the cluster |
- | | will not verify that updates conform to the DTD (nor |
- | | reject ones that don't). |
- +------------------+-----------------------------------------------------------+
- | cib-last-written | .. index:: |
- | | pair: cib-last-written; cib |
- | | |
- | | Indicates when the configuration was last written to |
- | | disk. Maintained by the cluster; for informational |
- | | purposes only. |
- +------------------+-----------------------------------------------------------+
- | have-quorum | .. index:: |
- | | pair: have-quorum; cib |
- | | |
- | | Indicates if the cluster has quorum. If false, this may |
- | | mean that the cluster cannot start resources or fence |
- | | other nodes (see ``no-quorum-policy`` below). Maintained |
- | | by the cluster. |
- +------------------+-----------------------------------------------------------+
- | dc-uuid | .. index:: |
- | | pair: dc-uuid; cib |
- | | |
- | | Indicates which cluster node is the current leader. Used |
- | | by the cluster when placing resources and determining the |
- | | order of some events. Maintained by the cluster. |
- +------------------+-----------------------------------------------------------+
+ +-------------------+-----------------------------------------------------------+
+ | Attribute | Description |
+ +===================+===========================================================+
+ | admin_epoch | .. index:: |
+ | | pair: admin_epoch; cib |
+ | | |
+ | | When a node joins the cluster, the cluster performs a |
+ | | check to see which node has the best configuration. It |
+ | | asks the node with the highest (``admin_epoch``, |
+ | | ``epoch``, ``num_updates``) tuple to replace the |
+ | | configuration on all the nodes -- which makes setting |
+ | | them, and setting them correctly, very important. |
+ | | ``admin_epoch`` is never modified by the cluster; you can |
+ | | use this to make the configurations on any inactive nodes |
+ | | obsolete. |
+ | | |
+ | | **Warning:** Never set this value to zero. In such cases, |
+ | | the cluster cannot tell the difference between your |
+ | | configuration and the "empty" one used when nothing is |
+ | | found on disk. |
+ +-------------------+-----------------------------------------------------------+
+ | epoch | .. index:: |
+ | | pair: epoch; cib |
+ | | |
+ | | The cluster increments this every time the configuration |
+ | | is updated (usually by the administrator). |
+ +-------------------+-----------------------------------------------------------+
+ | num_updates | .. index:: |
+ | | pair: num_updates; cib |
+ | | |
+ | | The cluster increments this every time the configuration |
+ | | or status is updated (usually by the cluster) and resets |
+ | | it to 0 when epoch changes. |
+ +-------------------+-----------------------------------------------------------+
+ | validate-with | .. index:: |
+ | | pair: validate-with; cib |
+ | | |
+ | | Determines the type of XML validation that will be done |
+ | | on the configuration. If set to ``none``, the cluster |
+ | | will not verify that updates conform to the DTD (nor |
+ | | reject ones that don't). |
+ +-------------------+-----------------------------------------------------------+
+ | remote-tls-port | .. index:: |
+ | | pair: remote-tls-port; cib |
+ | | |
+ | | If set to a TCP port number, the CIB manager will listen |
+ | | for anonymously encrypted remote connections on this |
+ | | port, to allow for CIB administration from hosts not in |
+ | | the cluster. No key is used, so this should be used only |
+ | | on a protected network where man-in-the-middle attacks |
+ | | can be avoided. |
+ +-------------------+-----------------------------------------------------------+
+ | remote-clear-port | .. index:: |
+ | | pair: remote-clear-port; cib |
+ | | |
+ | | If set to a TCP port number, the CIB manager will listen |
+ | | for remote connections on this port, to allow for CIB |
+ | | administration from hosts not in the cluster. No |
+ | | encryption is used, so this should be used only on a |
+ | | protected network. |
+ +-------------------+-----------------------------------------------------------+
+ | cib-last-written | .. index:: |
+ | | pair: cib-last-written; cib |
+ | | |
+ | | Indicates when the configuration was last written to |
+ | | disk. Maintained by the cluster; for informational |
+ | | purposes only. |
+ +-------------------+-----------------------------------------------------------+
+ | have-quorum | .. index:: |
+ | | pair: have-quorum; cib |
+ | | |
+ | | Indicates if the cluster has quorum. If false, this may |
+ | | mean that the cluster cannot start resources or fence |
+ | | other nodes (see ``no-quorum-policy`` below). Maintained |
+ | | by the cluster. |
+ +-------------------+-----------------------------------------------------------+
+ | dc-uuid | .. index:: |
+ | | pair: dc-uuid; cib |
+ | | |
+ | | Indicates which cluster node is the current leader. Used |
+ | | by the cluster when placing resources and determining the |
+ | | order of some events. Maintained by the cluster. |
+ +-------------------+-----------------------------------------------------------+
.. _cluster_options:
Cluster Options
###############
Cluster options, as you might expect, control how the cluster behaves when
confronted with various situations.
They are grouped into sets within the ``crm_config`` section. In advanced
configurations, there may be more than one set. (This will be described later
in the chapter on :ref:`rules` where we will show how to have the cluster use
different sets of options during working hours than during weekends.) For now,
we will describe the simple case where each option is present at most once.
You can obtain an up-to-date list of cluster options, including their default
values, by running the ``man pacemaker-schedulerd`` and
``man pacemaker-controld`` commands.
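For example, one way to set the ``no-quorum-policy`` option described below
would be:
.. code-block:: none
   # crm_attribute --name no-quorum-policy --update freeze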
.. table:: **Cluster Options**
:class: longtable
:widths: 2 1 4
+---------------------------+---------+----------------------------------------------------+
| Option | Default | Description |
+===========================+=========+====================================================+
| cluster-name | | .. index:: |
| | | pair: cluster option; cluster-name |
| | | |
| | | An (optional) name for the cluster as a whole. |
| | | This is mostly for users' convenience for use |
| | | as desired in administration, but this can be |
| | | used in the Pacemaker configuration in |
| | | :ref:`rules` (as the ``#cluster-name`` |
| | | :ref:`node attribute |
| | | `). It may |
| | | also be used by higher-level tools when |
| | | displaying cluster information, and by |
| | | certain resource agents (for example, the |
| | | ``ocf:heartbeat:GFS2`` agent stores the |
| | | cluster name in filesystem meta-data). |
+---------------------------+---------+----------------------------------------------------+
| dc-version | | .. index:: |
| | | pair: cluster option; dc-version |
| | | |
| | | Version of Pacemaker on the cluster's DC. |
| | | Determined automatically by the cluster. Often |
| | | includes the hash which identifies the exact |
| | | Git changeset it was built from. Used for |
| | | diagnostic purposes. |
+---------------------------+---------+----------------------------------------------------+
| cluster-infrastructure | | .. index:: |
| | | pair: cluster option; cluster-infrastructure |
| | | |
| | | The messaging stack on which Pacemaker is |
| | | currently running. Determined automatically by |
| | | the cluster. Used for informational and |
| | | diagnostic purposes. |
+---------------------------+---------+----------------------------------------------------+
| no-quorum-policy | stop | .. index:: |
| | | pair: cluster option; no-quorum-policy |
| | | |
| | | What to do when the cluster does not have |
| | | quorum. Allowed values: |
| | | |
| | | * ``ignore:`` continue all resource management |
| | | * ``freeze:`` continue resource management, but |
| | | don't recover resources from nodes not in the |
| | | affected partition |
| | | * ``stop:`` stop all resources in the affected |
| | | cluster partition |
| | | * ``demote:`` demote promotable resources and |
| | | stop all other resources in the affected |
| | | cluster partition *(since 2.0.5)* |
| | | * ``suicide:`` fence all nodes in the affected |
| | | cluster partition |
+---------------------------+---------+----------------------------------------------------+
| batch-limit | 0 | .. index:: |
| | | pair: cluster option; batch-limit |
| | | |
| | | The maximum number of actions that the cluster |
| | | may execute in parallel across all nodes. The |
| | | "correct" value will depend on the speed and |
| | | load of your network and cluster nodes. If zero, |
| | | the cluster will impose a dynamically calculated |
| | | limit only when any node has high load. If -1, the |
| | | cluster will not impose any limit. |
+---------------------------+---------+----------------------------------------------------+
| migration-limit | -1 | .. index:: |
| | | pair: cluster option; migration-limit |
| | | |
| | | The number of |
| | | :ref:`live migration ` actions |
| | | that the cluster is allowed to execute in |
| | | parallel on a node. A value of -1 means |
| | | unlimited. |
+---------------------------+---------+----------------------------------------------------+
| symmetric-cluster | true | .. index:: |
| | | pair: cluster option; symmetric-cluster |
| | | |
| | | Whether resources can run on any node by default |
| | | (if false, a resource is allowed to run on a |
| | | node only if a |
| | | :ref:`location constraint ` |
| | | enables it) |
+---------------------------+---------+----------------------------------------------------+
| stop-all-resources | false | .. index:: |
| | | pair: cluster option; stop-all-resources |
| | | |
| | | Whether all resources should be disallowed from |
| | | running (can be useful during maintenance) |
+---------------------------+---------+----------------------------------------------------+
| stop-orphan-resources | true | .. index:: |
| | | pair: cluster option; stop-orphan-resources |
| | | |
| | | Whether resources that have been deleted from |
| | | the configuration should be stopped. This value |
| | | takes precedence over ``is-managed`` (that is, |
| | | even unmanaged resources will be stopped when |
| | | orphaned if this value is ``true``) |
+---------------------------+---------+----------------------------------------------------+
| stop-orphan-actions | true | .. index:: |
| | | pair: cluster option; stop-orphan-actions |
| | | |
| | | Whether recurring :ref:`operations ` |
| | | that have been deleted from the configuration |
| | | should be cancelled |
+---------------------------+---------+----------------------------------------------------+
| start-failure-is-fatal | true | .. index:: |
| | | pair: cluster option; start-failure-is-fatal |
| | | |
| | | Whether a failure to start a resource on a |
| | | particular node prevents further start attempts |
| | | on that node. If ``false``, the cluster will |
| | | decide whether the node is still eligible based |
| | | on the resource's current failure count and |
| | | :ref:`migration-threshold `. |
+---------------------------+---------+----------------------------------------------------+
| enable-startup-probes | true | .. index:: |
| | | pair: cluster option; enable-startup-probes |
| | | |
| | | Whether the cluster should check the |
| | | pre-existing state of resources when the cluster |
| | | starts |
+---------------------------+---------+----------------------------------------------------+
| maintenance-mode | false | .. index:: |
| | | pair: cluster option; maintenance-mode |
| | | |
| | | Whether the cluster should refrain from |
| | | monitoring, starting and stopping resources |
+---------------------------+---------+----------------------------------------------------+
| stonith-enabled | true | .. index:: |
| | | pair: cluster option; stonith-enabled |
| | | |
| | | Whether the cluster is allowed to fence nodes |
| | | (for example, failed nodes and nodes with |
| | | resources that can't be stopped). |
| | | |
| | | If true, at least one fence device must be |
| | | configured before resources are allowed to run. |
| | | |
| | | If false, unresponsive nodes are immediately |
| | | assumed to be running no resources, and resource |
| | | recovery on online nodes starts without any |
| | | further protection (which can mean *data loss* |
| | | if the unresponsive node still accesses shared |
| | | storage, for example). See also the |
| | | :ref:`requires ` resource |
| | | meta-attribute. |
+---------------------------+---------+----------------------------------------------------+
| stonith-action | reboot | .. index:: |
| | | pair: cluster option; stonith-action |
| | | |
| | | Action the cluster should send to the fence agent |
| | | when a node must be fenced. Allowed values are |
| | | ``reboot``, ``off``, and (for legacy agents only) |
| | | ``poweroff``. |
+---------------------------+---------+----------------------------------------------------+
| stonith-timeout | 60s | .. index:: |
| | | pair: cluster option; stonith-timeout |
| | | |
| | | How long to wait for ``on``, ``off``, and |
| | | ``reboot`` fence actions to complete by default. |
+---------------------------+---------+----------------------------------------------------+
| stonith-max-attempts | 10 | .. index:: |
| | | pair: cluster option; stonith-max-attempts |
| | | |
| | | How many times fencing can fail for a target |
| | | before the cluster will no longer immediately |
| | | re-attempt it. |
+---------------------------+---------+----------------------------------------------------+
| stonith-watchdog-timeout | 0 | .. index:: |
| | | pair: cluster option; stonith-watchdog-timeout |
| | | |
| | | If nonzero, and the cluster detects |
| | | ``have-watchdog`` as ``true``, then watchdog-based |
| | | self-fencing will be performed via SBD when |
| | | fencing is required, without requiring a fencing |
| | | resource explicitly configured. |
| | | |
| | | If this is set to a positive value, unseen nodes |
| | | are assumed to self-fence within this much time. |
| | | |
| | | **Warning:** It must be ensured that this value is |
| | | larger than the ``SBD_WATCHDOG_TIMEOUT`` |
| | | environment variable on all nodes. Pacemaker |
| | | verifies the settings individually on all nodes, |
| | | and will prevent startup (or will shut down, if |
| | | the value changes at runtime) when they are |
| | | configured wrongly. It is strongly recommended |
| | | that ``SBD_WATCHDOG_TIMEOUT`` be set to the same |
| | | value on all nodes. |
| | | |
| | | If this is set to a negative value, and |
| | | ``SBD_WATCHDOG_TIMEOUT`` is set, twice that value |
| | | will be used. |
| | | |
| | | **Warning:** In this case, it is essential (and |
| | | currently not verified by pacemaker) that |
| | | ``SBD_WATCHDOG_TIMEOUT`` is set to the same |
| | | value on all nodes. |
+---------------------------+---------+----------------------------------------------------+
| concurrent-fencing | false | .. index:: |
| | | pair: cluster option; concurrent-fencing |
| | | |
| | | Whether the cluster is allowed to initiate |
| | | multiple fence actions concurrently. Fence actions |
| | | initiated externally, such as via the |
| | | ``stonith_admin`` tool or an application such as |
| | | DLM, or by the fencer itself such as recurring |
| | | device monitors and ``status`` and ``list`` |
| | | commands, are not limited by this option. |
+---------------------------+---------+----------------------------------------------------+
| fence-reaction | stop | .. index:: |
| | | pair: cluster option; fence-reaction |
| | | |
| | | How should a cluster node react if notified of its |
| | | own fencing? A cluster node may receive |
| | | notification of its own fencing if fencing is |
| | | misconfigured, or if fabric fencing is in use that |
| | | doesn't cut cluster communication. Allowed values |
| | | are ``stop`` to attempt to immediately stop |
| | | pacemaker and stay stopped, or ``panic`` to |
| | | attempt to immediately reboot the local node, |
| | | falling back to stop on failure. The default is |
| | | likely to be changed to ``panic`` in a future |
| | | release. *(since 2.0.3)* |
+---------------------------+---------+----------------------------------------------------+
| priority-fencing-delay | 0 | .. index:: |
| | | pair: cluster option; priority-fencing-delay |
| | | |
| | | Apply this delay to any fencing targeting the lost |
| | | nodes with the highest total resource priority in |
| | | case we don't have the majority of the nodes in |
| | | our cluster partition, so that the more |
| | | significant nodes potentially win any fencing |
| | | match (especially meaningful in a split-brain of a |
| | | 2-node cluster). A promoted resource instance |
| | | takes the resource's priority plus 1 if the |
| | | resource's priority is not 0. Any static or random |
| | | delays introduced by ``pcmk_delay_base`` and |
| | | ``pcmk_delay_max`` configured for the |
| | | corresponding fencing resources will be added to |
| | | this delay. This delay should be significantly |
| | | greater than (safely twice) the maximum delay from |
| | | those parameters. *(since 2.0.4)* |
+---------------------------+---------+----------------------------------------------------+
| node-pending-timeout | 10min | .. index:: |
| | | pair: cluster option; node-pending-timeout |
| | | |
| | | A node that has joined the cluster can still be |
| | | pending on joining the process group. The cluster |
| | | waits up to this much time for it; if the timeout |
| | | is reached, fencing targeting the node will be |
| | | issued if fencing is enabled. |
| | | *(since 2.1.7)* |
+---------------------------+---------+----------------------------------------------------+
| cluster-delay | 60s | .. index:: |
| | | pair: cluster option; cluster-delay |
| | | |
| | | Estimated maximum round-trip delay over the |
| | | network (excluding action execution). If the DC |
| | | requires an action to be executed on another node, |
| | | it will consider the action failed if it does not |
| | | get a response from the other node in this time |
| | | (after considering the action's own timeout). The |
| | | "correct" value will depend on the speed and load |
| | | of your network and cluster nodes. |
+---------------------------+---------+----------------------------------------------------+
| dc-deadtime | 20s | .. index:: |
| | | pair: cluster option; dc-deadtime |
| | | |
| | | How long to wait for a response from other nodes |
| | | during startup. The "correct" value will depend on |
| | | the speed/load of your network and the type of |
| | | switches used. |
+---------------------------+---------+----------------------------------------------------+
| cluster-ipc-limit | 500 | .. index:: |
| | | pair: cluster option; cluster-ipc-limit |
| | | |
| | | The maximum IPC message backlog before one cluster |
| | | daemon will disconnect another. This is of use in |
| | | large clusters, for which a good value is the |
| | | number of resources in the cluster multiplied by |
| | | the number of nodes. The default of 500 is also |
| | | the minimum. Raise this if you see |
| | | "Evicting client" messages for cluster daemon PIDs |
| | | in the logs. |
+---------------------------+---------+----------------------------------------------------+
| pe-error-series-max | -1 | .. index:: |
| | | pair: cluster option; pe-error-series-max |
| | | |
| | | The number of scheduler inputs resulting in errors |
| | | to save. Used when reporting problems. A value of |
| | | -1 means unlimited (report all), and 0 means none. |
+---------------------------+---------+----------------------------------------------------+
| pe-warn-series-max | 5000 | .. index:: |
| | | pair: cluster option; pe-warn-series-max |
| | | |
| | | The number of scheduler inputs resulting in |
| | | warnings to save. Used when reporting problems. A |
| | | value of -1 means unlimited (report all), and 0 |
| | | means none. |
+---------------------------+---------+----------------------------------------------------+
| pe-input-series-max | 4000 | .. index:: |
| | | pair: cluster option; pe-input-series-max |
| | | |
| | | The number of "normal" scheduler inputs to save. |
| | | Used when reporting problems. A value of -1 means |
| | | unlimited (report all), and 0 means none. |
+---------------------------+---------+----------------------------------------------------+
| enable-acl | false | .. index:: |
| | | pair: cluster option; enable-acl |
| | | |
| | | Whether :ref:`acl` should be used to authorize |
| | | modifications to the CIB |
+---------------------------+---------+----------------------------------------------------+
| placement-strategy | default | .. index:: |
| | | pair: cluster option; placement-strategy |
| | | |
| | | How the cluster should assign resources to nodes |
| | | (see :ref:`utilization`). Allowed values are |
| | | ``default``, ``utilization``, ``balanced``, and |
| | | ``minimal``. |
+---------------------------+---------+----------------------------------------------------+
| node-health-strategy | none | .. index:: |
| | | pair: cluster option; node-health-strategy |
| | | |
| | | How the cluster should react to node health |
| | | attributes (see :ref:`node-health`). Allowed values|
| | | are ``none``, ``migrate-on-red``, ``only-green``, |
| | | ``progressive``, and ``custom``. |
+---------------------------+---------+----------------------------------------------------+
| node-health-base | 0 | .. index:: |
| | | pair: cluster option; node-health-base |
| | | |
| | | The base health score assigned to a node. Only |
| | | used when ``node-health-strategy`` is |
| | | ``progressive``. |
+---------------------------+---------+----------------------------------------------------+
| node-health-green | 0 | .. index:: |
| | | pair: cluster option; node-health-green |
| | | |
| | | The score to use for a node health attribute whose |
| | | value is ``green``. Only used when |
| | | ``node-health-strategy`` is ``progressive`` or |
| | | ``custom``. |
+---------------------------+---------+----------------------------------------------------+
| node-health-yellow | 0 | .. index:: |
| | | pair: cluster option; node-health-yellow |
| | | |
| | | The score to use for a node health attribute whose |
| | | value is ``yellow``. Only used when |
| | | ``node-health-strategy`` is ``progressive`` or |
| | | ``custom``. |
+---------------------------+---------+----------------------------------------------------+
| node-health-red | 0 | .. index:: |
| | | pair: cluster option; node-health-red |
| | | |
| | | The score to use for a node health attribute whose |
| | | value is ``red``. Only used when |
| | | ``node-health-strategy`` is ``progressive`` or |
| | | ``custom``. |
+---------------------------+---------+----------------------------------------------------+
| cluster-recheck-interval | 15min | .. index:: |
| | | pair: cluster option; cluster-recheck-interval |
| | | |
| | | Pacemaker is primarily event-driven, and looks |
| | | ahead to know when to recheck the cluster for |
| | | failure timeouts and most time-based rules |
| | | *(since 2.0.3)*. However, it will also recheck the |
| | | cluster after this amount of inactivity. This has |
| | | two goals: rules with ``date_spec`` are only |
| | | guaranteed to be checked this often, and it also |
| | | serves as a fail-safe for some kinds of scheduler |
| | | bugs. A value of 0 disables this polling; positive |
| | | values are a time interval. |
+---------------------------+---------+----------------------------------------------------+
| shutdown-lock | false | .. index:: |
| | | pair: cluster option; shutdown-lock |
| | | |
| | | The default of false allows active resources to be |
| | | recovered elsewhere when their node is cleanly |
| | | shut down, which is what the vast majority of |
| | | users will want. However, some users prefer to |
| | | make resources highly available only for failures, |
| | | with no recovery for clean shutdowns. If this |
| | | option is true, resources active on a node when it |
| | | is cleanly shut down are kept "locked" to that |
| | | node (not allowed to run elsewhere) until they |
| | | start again on that node after it rejoins (or for |
| | | at most ``shutdown-lock-limit``, if set). Stonith |
| | | resources and Pacemaker Remote connections are |
| | | never locked. Clone and bundle instances and the |
| | | promoted role of promotable clones are currently |
| | | never locked, though support could be added in a |
| | | future release. Locks may be manually cleared |
| | | using the ``--refresh`` option of ``crm_resource`` |
| | | (both the resource and node must be specified; |
| | | this works with remote nodes if their connection |
| | | resource's ``target-role`` is set to ``Stopped``, |
| | | but not if Pacemaker Remote is stopped on the |
| | | remote node without disabling the connection |
| | | resource). *(since 2.0.4)* |
+---------------------------+---------+----------------------------------------------------+
| shutdown-lock-limit | 0 | .. index:: |
| | | pair: cluster option; shutdown-lock-limit |
| | | |
| | | If ``shutdown-lock`` is true, and this is set to a |
| | | nonzero time duration, locked resources will be |
| | | allowed to start after this much time has passed |
| | | since the node shutdown was initiated, even if the |
| | | node has not rejoined. (This works with remote |
| | | nodes only if their connection resource's |
| | | ``target-role`` is set to ``Stopped``.) |
| | | *(since 2.0.4)* |
+---------------------------+---------+----------------------------------------------------+
| remove-after-stop | false | .. index:: |
| | | pair: cluster option; remove-after-stop |
| | | |
| | | *Deprecated* Should the cluster remove |
| | | resources from Pacemaker's executor after they are |
| | | stopped? Values other than the default are, at |
| | | best, poorly tested and potentially dangerous. |
| | | This option is deprecated and will be removed in a |
| | | future release. |
+---------------------------+---------+----------------------------------------------------+
| startup-fencing | true | .. index:: |
| | | pair: cluster option; startup-fencing |
| | | |
| | | *Advanced Use Only:* Should the cluster fence |
| | | unseen nodes at start-up? Setting this to false is |
| | | unsafe, because the unseen nodes could be active |
| | | and running resources but unreachable. |
+---------------------------+---------+----------------------------------------------------+
| election-timeout | 2min | .. index:: |
| | | pair: cluster option; election-timeout |
| | | |
| | | *Advanced Use Only:* If you need to adjust this |
| | | value, it probably indicates the presence of a bug.|
+---------------------------+---------+----------------------------------------------------+
| shutdown-escalation | 20min | .. index:: |
| | | pair: cluster option; shutdown-escalation |
| | | |
| | | *Advanced Use Only:* If you need to adjust this |
| | | value, it probably indicates the presence of a bug.|
+---------------------------+---------+----------------------------------------------------+
| join-integration-timeout | 3min | .. index:: |
| | | pair: cluster option; join-integration-timeout |
| | | |
| | | *Advanced Use Only:* If you need to adjust this |
| | | value, it probably indicates the presence of a bug.|
+---------------------------+---------+----------------------------------------------------+
| join-finalization-timeout | 30min | .. index:: |
| | | pair: cluster option; join-finalization-timeout |
| | | |
| | | *Advanced Use Only:* If you need to adjust this |
| | | value, it probably indicates the presence of a bug.|
+---------------------------+---------+----------------------------------------------------+
| transition-delay | 0s | .. index:: |
| | | pair: cluster option; transition-delay |
| | | |
| | | *Advanced Use Only:* Delay cluster recovery for |
| | | the configured interval to allow for additional or |
| | | related events to occur. This can be useful if |
| | | your configuration is sensitive to the order in |
| | | which ping updates arrive. Enabling this option |
| | | will slow down cluster recovery under all |
| | | conditions. |
+---------------------------+---------+----------------------------------------------------+
diff --git a/maint/bumplibs.in b/maint/bumplibs.in
index a1426600d9..99698315b7 100644
--- a/maint/bumplibs.in
+++ b/maint/bumplibs.in
@@ -1,291 +1,290 @@
#!@BASH_PATH@
#
-# Copyright 2012-2021 the Pacemaker project contributors
+# Copyright 2012-2023 the Pacemaker project contributors
#
# The version control history for this file may have further details.
#
# This source code is licensed under the GNU General Public License version 2
# or later (GPLv2+) WITHOUT ANY WARRANTY.
#
# List regular expressions (not globs) that match all of a library's public API
# headers. Any files ending in "internal.h" will be excluded from matches.
declare -A HEADERS
HEADERS[cib]="include/crm/cib.h include/crm/cib/.*.h"
HEADERS[crmcommon]="include/crm/crm.h
include/crm/msg_xml.h
include/crm/common/.*.h"
HEADERS[crmcluster]="include/crm/cluster.h include/crm/cluster/.*.h"
HEADERS[crmservice]="include/crm/services.*.h"
HEADERS[lrmd]="include/crm/lrmd.*.h"
HEADERS[pacemaker]="include/pacemaker.*.h"
HEADERS[pe_rules]="include/crm/pengine/ru.*.h"
HEADERS[pe_status]="include/crm/pengine/[^r].*.h include/crm/pengine/r[^u].*.h"
HEADERS[stonithd]="include/crm/stonith-ng.h include/crm/fencing/.*.h"
yesno() {
local RESPONSE
read -p "$1 " RESPONSE
- case $(echo "$RESPONSE" | tr A-Z a-z) in
+ case $(echo "$RESPONSE" | tr '[:upper:]' '[:lower:]') in
y|yes|ano|ja|si|oui) return 0 ;;
*) return 1 ;;
esac
}
prompt_to_continue() {
yesno "Continue?" || exit 0
}
find_last_release() {
- if [ ! -z "$1" ]; then
+ if [ -n "$1" ]; then
echo "$1"
else
git tag -l | grep Pacemaker | grep -v rc | sort -Vr | head -n 1
fi
}
find_libs() {
find lib -name "*.am" -exec grep "lib.*_la_LDFLAGS.*version-info" \{\} \; \
| sed -e 's/lib\(.*\)_la_LDFLAGS.*/\1/'
}
find_makefile() {
find lib -name Makefile.am -exec grep -l "lib${1}_la.*version-info" \{\} \;
}
find_sources() {
local LIB="$1"
local AMFILE="$2"
local SOURCES
# Library makefiles should use "+=" to break up long sources lines rather
# than backslashed continuation lines, to allow this script to detect
# source files correctly. Warn if that's not the case.
if
- grep "lib${LIB}_la_SOURCES.*\\\\" $AMFILE
+ grep "lib${LIB}_la_SOURCES.*\\\\" "$AMFILE"
then
echo -e "\033[1;35m -- Sources list for lib$LIB is probably truncated! --\033[0m"
echo "Edit to use '+=' rather than backslashed continuation lines"
prompt_to_continue
fi
SOURCES=$(grep "^lib${LIB}_la_SOURCES" "$AMFILE" \
| sed -e 's/.*=//' -e 's/\\//' -e 's:\.\./gnu/:lib/gnu/:')
for SOURCE in $SOURCES; do
if
- echo $SOURCE | grep -q "/"
+ echo "$SOURCE" | grep -q "/"
then
echo "$SOURCE"
else
- echo "$(dirname $AMFILE)/$SOURCE"
+ echo "$(dirname "$AMFILE")/$SOURCE"
fi
done
}
find_headers_as_of() {
local TAG
local LIB
local FILE
local PATTERN
TAG="$1"
LIB="$2"
for FILE in $(git ls-tree -r --name-only "$TAG"); do
for PATTERN in ${HEADERS[$LIB]}; do
if [[ $FILE =~ $PATTERN ]] && [[ ! $FILE =~ internal.h$ ]]; then
echo "$FILE"
break
fi
done
done
}
extract_version() {
grep "lib${1}_la.*version-info" | sed -e 's/.*version-info\s*\(\S*\)/\1/'
}
shared_lib_name() {
local LIB="$1"
local VERSION="$2"
- echo "lib${LIB}.so.$(echo $VERSION | cut -d: -f 1)"
+ echo "lib${LIB}.so.$(echo "$VERSION" | cut -d: -f 1)"
}
process_lib() {
local LIB="$1"
local LAST_RELEASE="$2"
local AMFILE
local SOURCES
local HEADERS_LAST
local HEADERS_HEAD
local HEADERS_DIFF
local HEADERS_GONE
local HEADERS_ADDED
local CHANGE
local DEFAULT_CHANGE
if [ -z "${HEADERS[$LIB]}" ]; then
echo "Can't check lib$LIB until this script is updated with its headers"
prompt_to_continue
fi
AMFILE="$(find_makefile "$LIB")"
# Get current shared library version
- VER_NOW=$(cat $AMFILE | extract_version $LIB)
+ VER_NOW=$(extract_version "$LIB" < "$AMFILE")
# Check whether library existed at last release
- git cat-file -e $LAST_RELEASE:$AMFILE 2>/dev/null
- if [ $? -ne 0 ]; then
+ if ! git cat-file -e "$LAST_RELEASE:$AMFILE" 2>/dev/null; then
echo "lib$LIB is new, not changing version ($VER_NOW)"
prompt_to_continue
echo ""
return
fi
HEADERS_LAST="$(find_headers_as_of "$LAST_RELEASE" "$LIB")"
HEADERS_HEAD="$(find_headers_as_of "HEAD" "$LIB")"
HEADERS_DIFF="$(diff <(echo "$HEADERS_LAST") <(echo "$HEADERS_HEAD"))"
HEADERS_GONE="$(echo "$HEADERS_DIFF" | sed -n -e 's/^< //p')"
HEADERS_ADDED="$(echo "$HEADERS_DIFF" | sed -n -e 's/^> //p')"
# Check whether there were any changes to headers or sources
SOURCES="$(find_sources "$LIB" "$AMFILE")"
if [ -n "$HEADERS_GONE" ]; then
DEFAULT_CHANGE="i" # Removed public header is incompatible change
elif [ -n "$HEADERS_ADDED" ]; then
DEFAULT_CHANGE="c" # Additions are likely compatible
- elif git diff --quiet -w $LAST_RELEASE..HEAD $HEADERS_HEAD $SOURCES ; then
+ elif git diff --quiet -w "$LAST_RELEASE..HEAD" $HEADERS_HEAD $SOURCES ; then
echo "No changes to $LIB interface"
prompt_to_continue
echo ""
return
else
DEFAULT_CHANGE="f" # Sources changed, so it's at least a fix
fi
# Show all header changes since last release
echo "- Changes in lib$LIB public headers since $LAST_RELEASE:"
if [ -n "$HEADERS_GONE" ]; then
for HEADER in $HEADERS_GONE; do
echo "-- $HEADER was removed"
done
fi
if [ -n "$HEADERS_ADDED" ]; then
for HEADER in $HEADERS_ADDED; do
echo "++ $HEADER is new"
done
fi
- git --no-pager diff --color -w $LAST_RELEASE..HEAD $HEADERS_HEAD
+ git --no-pager diff --color -w "$LAST_RELEASE..HEAD" $HEADERS_HEAD
echo ""
if yesno "Show commits (minus refactor/build/merge) touching lib$LIB since $LAST_RELEASE [y/N]?"
then
- git log --color $LAST_RELEASE..HEAD -z $HEADERS_HEAD $SOURCES $AMFILE \
+ git log --color "$LAST_RELEASE..HEAD" -z $HEADERS_HEAD $SOURCES "$AMFILE" \
| grep -vzE "Refactor:|Build:|Merge pull request"
echo
prompt_to_continue
fi
# @TODO this seems broken ...
#echo ""
#if yesno "Show merged PRs touching lib$LIB since $LAST_RELEASE [y/N]?"
#then
# git log --merges $LAST_RELEASE..HEAD $HEADERS_HEAD $SOURCES $AMFILE
# echo
# prompt_to_continue
#fi
# Show summary of source changes since last release
echo ""
echo "- Headers: $HEADERS_HEAD"
echo "- Changed sources since $LAST_RELEASE:"
- git --no-pager diff --color -w $LAST_RELEASE..HEAD --stat $SOURCES
+ git --no-pager diff --color -w "$LAST_RELEASE..HEAD" --stat $SOURCES
echo ""
# Ask for human guidance
echo "Are the changes to lib$LIB:"
read -p "[c]ompatible additions, [i]ncompatible additions/removals or [f]ixes? [$DEFAULT_CHANGE]: " CHANGE
[ -z "$CHANGE" ] && CHANGE="$DEFAULT_CHANGE"
# Get (and show) shared library version at last release
- VER=$(git show $LAST_RELEASE:$AMFILE | extract_version $LIB)
- VER_1=$(echo $VER | awk -F: '{print $1}')
- VER_2=$(echo $VER | awk -F: '{print $2}')
- VER_3=$(echo $VER | awk -F: '{print $3}')
+ VER=$(git show "$LAST_RELEASE:$AMFILE" | extract_version "$LIB")
+ VER_1=$(echo "$VER" | awk -F: '{print $1}')
+ VER_2=$(echo "$VER" | awk -F: '{print $2}')
+ VER_3=$(echo "$VER" | awk -F: '{print $3}')
echo "lib$LIB version at $LAST_RELEASE: $VER"
# Show current shared library version if changed
- if [ $VER_NOW != $VER ]; then
+ if [ "$VER_NOW" != "$VER" ]; then
echo "lib$LIB version currently: $VER_NOW"
fi
# Calculate new library version
case $CHANGE in
i|I)
echo "New backwards-incompatible version: x+1:0:0"
- VER_1=$(expr $VER_1 + 1)
+ (( VER_1++ ))
VER_2=0
VER_3=0
# Some headers define constants for shared library names,
# update them if the name changed
for H in $HEADERS_HEAD; do
- sed -i -e "s/$(shared_lib_name "$LIB" "$VER_NOW")/$(shared_lib_name "$LIB" "$VER_1:0:0")/" $H
+ sed -i -e "s/$(shared_lib_name "$LIB" "$VER_NOW")/$(shared_lib_name "$LIB" "$VER_1:0:0")/" "$H"
done
;;
c|C)
echo "New version with backwards-compatible extensions: x+1:0:z+1"
- VER_1=$(expr $VER_1 + 1)
+ (( VER_1++ ))
VER_2=0
- VER_3=$(expr $VER_3 + 1)
+ (( VER_3++ ))
;;
F|f)
echo "Code changed though interfaces didn't: x:y+1:z"
- VER_2=$(expr $VER_2 + 1)
+ (( VER_2++ ))
;;
*)
echo "Not updating lib$LIB version"
prompt_to_continue
CHANGE=""
;;
esac
VER_NEW=$VER_1:$VER_2:$VER_3
- if [ ! -z $CHANGE ]; then
+ if [ -n "$CHANGE" ]; then
if [ "$VER_NEW" != "$VER_NOW" ]; then
echo "Updating lib$LIB version from $VER_NOW to $VER_NEW"
prompt_to_continue
- sed -i "s/version-info\s*$VER_NOW/version-info $VER_NEW/" $AMFILE
+ sed -i "s/version-info\s*$VER_NOW/version-info $VER_NEW/" "$AMFILE"
else
echo "No version change needed for lib$LIB"
prompt_to_continue
fi
fi
echo ""
}
echo "Definitions:"
echo "- Compatible additions: new public API functions, structs, etc."
echo "- Incompatible additions/removals: new arguments to public API functions,"
echo " new members added to the middle of public API structs,"
echo " removal of any public API, etc."
echo "- Fixes: any other code changes at all"
echo ""
echo "When possible, improve backward compatibility first:"
echo "- move new members to the end of structs"
echo "- use bitfields instead of booleans"
echo "- when adding arguments, create a new function that the old one can wrap"
echo ""
prompt_to_continue
LAST_RELEASE=$(find_last_release "$1")
for LIB in $(find_libs); do
process_lib "$LIB" "$LAST_RELEASE"
done
# Show all proposed changes
git --no-pager diff --color -w
diff --git a/rpm/Makefile.am b/rpm/Makefile.am
index c7975e4c81..956252efc4 100644
--- a/rpm/Makefile.am
+++ b/rpm/Makefile.am
@@ -1,282 +1,286 @@
#
-# Copyright 2003-2022 the Pacemaker project contributors
+# Copyright 2003-2023 the Pacemaker project contributors
#
# The version control history for this file may have further details.
#
# This source code is licensed under the GNU General Public License version 2
# or later (GPLv2+) WITHOUT ANY WARRANTY.
#
# We want to support the use case where this file is fed straight to make
# without running automake first, so define defaults for any automake variables
# used in this file.
top_srcdir ?= ..
abs_srcdir ?= $(shell pwd)
abs_builddir ?= $(abs_srcdir)
MAKE ?= make
PACKAGE ?= pacemaker
AM_V_at ?= @
MKDIR_P ?= mkdir -p
include $(top_srcdir)/mk/common.mk
include $(top_srcdir)/mk/release.mk
EXTRA_DIST = pacemaker.spec.in \
rpmlintrc
+# Extra options to pass to rpmbuild (this can be used to override the location
+# options this file normally passes, or to override macros used by the spec)
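+# (e.g. make RPM_EXTRA='--define "_topdir /tmp/pacemaker-build"' rpm)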
+RPM_EXTRA ?=
+
# Where to put RPM artifacts; possible values:
#
# - subtree (default): RPM sources (i.e. TARFILE) in top-level build directory,
# everything else in dedicated "rpm" subdirectory of build tree
#
# - toplevel (deprecated): RPM sources, spec, and source rpm in top-level build
# directory, everything else uses the usual rpmbuild defaults
#
# - anything else: The value will be treated as a directory path to be used for
# all RPM artifacts. WARNING: The entire directory will get removed with
# "make clean" or "make rpm-clean".
#
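# e.g. make RPMDEST=/tmp/pacemaker-artifacts rpm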
RPMDEST ?= subtree
RPM_SPEC_DIR_subtree = $(abs_builddir)/SPECS
RPM_SRCRPM_DIR_subtree = $(abs_builddir)/SRPMS
RPM_OPTS_subtree = --define "_sourcedir $(abs_builddir)/.." \
--define "_topdir $(abs_builddir)"
RPM_CLEAN_subtree = "$(abs_builddir)/BUILD" \
"$(abs_builddir)/BUILDROOT" \
"$(abs_builddir)/RPMS" \
"$(abs_builddir)/SPECS" \
"$(abs_builddir)/SRPMS"
RPM_SPEC_DIR_toplevel = $(abs_builddir)/..
RPM_SRCRPM_DIR_toplevel = $(abs_builddir)/..
RPM_OPTS_toplevel = --define "_sourcedir $(abs_builddir)/.." \
--define "_specdir $(RPM_SPEC_DIR_toplevel)" \
--define "_srcrpmdir $(RPM_SRCRPM_DIR_toplevel)"
RPM_CLEAN_toplevel =
RPM_SPEC_DIR_other = $(RPMDEST)/SPECS
RPM_SRCRPM_DIR_other = $(RPMDEST)/SRPMS
RPM_OPTS_other = --define "_sourcedir $(abs_builddir)/.." \
--define "_topdir $(RPMDEST)"
RPM_CLEAN_other = "$(RPMDEST)"
RPMTYPE = $(shell case "$(RPMDEST)" in \
toplevel$(rparen) echo toplevel ;; \
subtree$(rparen) echo subtree ;; \
*$(rparen) echo other ;; \
esac)
RPM_SPEC_DIR = $(RPM_SPEC_DIR_$(RPMTYPE))
RPM_SRCRPM_DIR = $(RPM_SRCRPM_DIR_$(RPMTYPE))
-RPM_OPTS = $(RPM_OPTS_$(RPMTYPE))
+RPM_OPTS = $(RPM_OPTS_$(RPMTYPE)) $(RPM_EXTRA)
RPM_CLEAN = $(RPM_CLEAN_$(RPMTYPE))
WITH ?= --without doc
# If $(BUILD_COUNTER) is an existing file, its contents will be used as the
# spec version in built RPMs, unless $(SPECVERSION) is set to override it,
# and the next increment will be written back to the file after building.
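# e.g. make SPECVERSION=42 rpm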
BUILD_COUNTER ?= $(shell test -e build.counter && echo build.counter || echo ../build.counter)
LAST_COUNT = $(shell test -e "$(BUILD_COUNTER)" && cat "$(BUILD_COUNTER)" || echo 0)
COUNT = $(shell expr 1 + $(LAST_COUNT))
SPECVERSION ?= $(COUNT)
# SPEC_COMMIT is identical to TAG for DIST and tagged releases, otherwise it is
# the short commit ID (which must be used in order for "make export" to use the
# same archive name as "make dist")
SPEC_COMMIT ?= $(shell \
case $(TAG) in \
Pacemaker-*|DIST$(rparen) \
echo '$(TAG)' ;; \
*$(rparen) \
git log --pretty=format:%h -n 1 '$(TAG)';; \
esac)$(DIRTY_EXT)
SPEC_ABBREV = $(shell printf %s '$(SPEC_COMMIT)' | wc -c)
SPEC_RELEASE = $(shell case "$(WITH)" in \
*pre_release*$(rparen) \
[ "$(LAST_RELEASE)" = "$(TAG)" ] \
&& echo "$(LAST_RELEASE)" \
|| echo "$(NEXT_RELEASE)" ;; \
*$(rparen) \
echo "$(LAST_RELEASE)" ;; \
esac)
SPEC_RELEASE_NO = $(shell echo $(SPEC_RELEASE) | sed -e s:Pacemaker-:: -e s:-.*::)
MOCK_DIR = $(abs_builddir)/mock
MOCK_OPTIONS ?= --resultdir="$(MOCK_DIR)" --no-cleanup-after
F ?= $(shell test ! -e /etc/fedora-release && echo 0; test -e /etc/fedora-release && rpm --eval %{fedora})
ARCH ?= $(shell test ! -e /etc/fedora-release && uname -m; test -e /etc/fedora-release && rpm --eval %{_arch})
MOCK_CFG ?= $(shell test -e /etc/fedora-release && echo fedora-$(F)-$(ARCH))
distdir = $(top_distdir)/rpm
TARFILE = $(abs_builddir)/../$(top_distdir).tar.gz
# Create a source distribution based on a git archive. (If we aren't in a git
# checkout, do a make dist instead.)
export:
cd $(abs_srcdir)/..; \
if [ -z "$(CHECKOUT)" ] && [ -f "$(TARFILE)" ]; then \
echo "`date`: Using existing tarball: $(TARFILE)"; \
elif [ -z "$(CHECKOUT)" ]; then \
$(MAKE) $(AM_MAKEFLAGS) dist; \
echo "`date`: Rebuilt tarball: $(TARFILE)"; \
elif [ -n "$(DIRTY_EXT)" ]; then \
git commit -m "DO-NOT-PUSH" -a; \
git archive --prefix=$(top_distdir)/ -o "$(TARFILE)" HEAD^{tree}; \
git reset --mixed HEAD^; \
echo "`date`: Rebuilt $(TARFILE)"; \
elif [ -f "$(TARFILE)" ]; then \
echo "`date`: Using existing tarball: $(TARFILE)"; \
else \
git archive --prefix=$(top_distdir)/ -o "$(TARFILE)" $(TAG)^{tree}; \
echo "`date`: Rebuilt $(TARFILE)"; \
fi
# Depend on spec-clean so the spec gets rebuilt every time
$(RPM_SPEC_DIR)/$(PACKAGE).spec: spec-clean pacemaker.spec.in
$(AM_V_at)$(MKDIR_P) "$(RPM_SPEC_DIR)"
$(AM_V_GEN)if [ x"`git ls-files -m pacemaker.spec.in 2>/dev/null`" != x ]; then \
cat "$(abs_srcdir)/pacemaker.spec.in"; \
elif git cat-file -e $(TAG):rpm/pacemaker.spec.in 2>/dev/null; then \
git show $(TAG):rpm/pacemaker.spec.in; \
elif git cat-file -e $(TAG):pacemaker.spec.in 2>/dev/null; then \
git show $(TAG):pacemaker.spec.in; \
else \
cat "$(abs_srcdir)/pacemaker.spec.in"; \
fi | sed \
-e 's/^\(%global pcmkversion \).*/\1$(SPEC_RELEASE_NO)/' \
-e 's/^\(%global specversion \).*/\1$(SPECVERSION)/' \
-e 's/^\(%global commit \).*/\1$(SPEC_COMMIT)/' \
-e 's/^\(%global commit_abbrev \).*/\1$(SPEC_ABBREV)/' \
-e "s/PACKAGE_DATE/$$(date +'%a %b %d %Y')/" \
-e 's/PACKAGE_VERSION/$(SPEC_RELEASE_NO)-$(SPECVERSION)/' \
> "$@"
.PHONY: spec $(PACKAGE).spec
spec $(PACKAGE).spec: $(RPM_SPEC_DIR)/$(PACKAGE).spec
spec-clean:
-rm -f "$(RPM_SPEC_DIR)/$(PACKAGE).spec"
.PHONY: srpm
srpm: export srpm-clean $(RPM_SPEC_DIR)/$(PACKAGE).spec
if [ -e "$(BUILD_COUNTER)" ]; then \
echo $(COUNT) > "$(BUILD_COUNTER)"; \
fi
rpmbuild -bs $(RPM_OPTS) $(WITH) "$(RPM_SPEC_DIR)/$(PACKAGE).spec"
.PHONY: srpm-clean
srpm-clean:
-rm -f "$(RPM_SRCRPM_DIR)"/*.src.rpm
# e.g. make WITH="--with pre_release" rpm
.PHONY: rpm
rpm: srpm
@echo To create custom builds, edit the flags and options in $(PACKAGE).spec first
rpmbuild $(RPM_OPTS) $(WITH) --rebuild "$(RPM_SRCRPM_DIR)"/*.src.rpm
.PHONY: rpm-clean
rpm-clean: spec-clean srpm-clean
-if [ -n "$(RPM_CLEAN)" ]; then rm -rf $(RPM_CLEAN); fi
.PHONY: rpmlint
rpmlint: $(RPM_SPEC_DIR)/$(PACKAGE).spec
rpmlint -f rpmlintrc "$<"
.PHONY: rpm-dep
rpm-dep: $(RPM_SPEC_DIR)/$(PACKAGE).spec
sudo yum-builddep "$(RPM_SPEC_DIR)/$(PACKAGE).spec"
.PHONY: release
release:
$(MAKE) $(AM_MAKEFLAGS) TAG=$(LAST_RELEASE) rpm
# Build the highest-versioned rc tag
.PHONY: rc
rc:
@if [ -z "$(CHECKOUT)" ]; then \
echo 'This target must be run from a git checkout'; \
exit 1; \
fi
$(MAKE) $(AM_MAKEFLAGS) TAG="$$(git tag -l 2>/dev/null \
| sed -n -e 's/^\(Pacemaker-[0-9.]*-rc[0-9]*\)$$/\1/p' \
| sort -Vr | head -n 1)" rpm
.PHONY: chroot
chroot: mock-$(MOCK_CFG) mock-install-$(MOCK_CFG) mock-sh-$(MOCK_CFG)
@echo Done
.PHONY: mock-next
mock-next:
$(MAKE) $(AM_MAKEFLAGS) F=$(shell expr 1 + $(F)) mock
.PHONY: mock-rawhide
mock-rawhide:
$(MAKE) $(AM_MAKEFLAGS) F=rawhide mock
mock-install-%:
@echo "Installing packages"
mock --root=$* $(MOCK_OPTIONS) --install "$(MOCK_DIR)"/*.rpm \
vi sudo valgrind lcov gdb fence-agents psmisc
.PHONY: mock-install
mock-install: mock-install-$(MOCK_CFG)
@echo Done
.PHONY: mock-sh
mock-sh: mock-sh-$(MOCK_CFG)
@echo Done
mock-sh-%:
@echo Connecting
mock --root=$* $(MOCK_OPTIONS) --shell
@echo Done
mock-%: srpm mock-clean
mock $(MOCK_OPTIONS) --root=$* --no-cleanup-after --rebuild \
$(WITH) "$(RPM_SRCRPM_DIR)"/*.src.rpm
.PHONY: mock
mock: mock-$(MOCK_CFG)
@echo Done
.PHONY: dirty
dirty:
$(MAKE) $(AM_MAKEFLAGS) DIRTY=yes mock
.PHONY: mock-clean
mock-clean:
-rm -rf "$(MOCK_DIR)"
# Make debugging makefile issues easier
vars:
@echo "CHECKOUT=$(CHECKOUT)"
@echo "VERSION=$(VERSION)"
@echo "COMMIT=$(COMMIT)"
@echo "TAG=$(TAG)"
@echo "DIRTY=$(DIRTY)"
@echo "DIRTY_EXT=$(DIRTY_EXT)"
@echo "LAST_RELEASE=$(LAST_RELEASE)"
@echo "NEXT_RELEASE=$(NEXT_RELEASE)"
@echo "top_distdir=$(top_distdir)"
@echo "RPMDEST=$(RPMDEST)"
@echo "RPMTYPE=$(RPMTYPE)"
@echo "RPM_SPEC_DIR=$(RPM_SPEC_DIR)"
@echo "RPM_SRCRPM_DIR=$(RPM_SRCRPM_DIR)"
@echo "RPM_OPTS=$(RPM_OPTS)"
@echo "RPM_CLEAN=$(RPM_CLEAN)"
@echo "WITH=$(WITH)"
@echo "BUILD_COUNTER=$(BUILD_COUNTER)"
@echo "LAST_COUNT=$(LAST_COUNT)"
@echo "COUNT=$(COUNT)"
@echo "SPECVERSION=$(SPECVERSION)"
@echo "SPEC_COMMIT=$(SPEC_COMMIT)"
@echo "SPEC_ABBREV=$(SPEC_ABBREV)"
@echo "SPEC_RELEASE=$(SPEC_RELEASE)"
@echo "SPEC_RELEASE_NO=$(SPEC_RELEASE_NO)"
@echo "TARFILE=$(TARFILE)"
clean-local: mock-clean rpm-clean
-rm -f "$(TARFILE)"