diff --git a/doc/sphinx/Pacemaker_Administration/index.rst b/doc/sphinx/Pacemaker_Administration/index.rst
index 27e950ec99..af893801b6 100644
--- a/doc/sphinx/Pacemaker_Administration/index.rst
+++ b/doc/sphinx/Pacemaker_Administration/index.rst
@@ -1,37 +1,38 @@
Pacemaker Administration
========================
*Managing Pacemaker Clusters*
Abstract
--------
This document has instructions and tips for system administrators who
manage high-availability clusters using Pacemaker.
Table of Contents
-----------------
.. toctree::
:maxdepth: 3
:numbered:
intro
installing
cluster
configuring
tools
administrative
+ moving
troubleshooting
upgrading
alerts
agents
pcs-crmsh
Index
-----
* :ref:`genindex`
* :ref:`search`
diff --git a/doc/sphinx/Pacemaker_Explained/advanced-options.rst b/doc/sphinx/Pacemaker_Administration/moving.rst
similarity index 94%
rename from doc/sphinx/Pacemaker_Explained/advanced-options.rst
rename to doc/sphinx/Pacemaker_Administration/moving.rst
index 666fcff6ae..3d6a92af51 100644
--- a/doc/sphinx/Pacemaker_Explained/advanced-options.rst
+++ b/doc/sphinx/Pacemaker_Administration/moving.rst
@@ -1,307 +1,305 @@
-Advanced Configuration
-----------------------
+Moving Resources
+----------------
.. index::
single: resource; move
-Moving Resources
-################
-
Moving Resources Manually
-_________________________
+#########################
There are primarily two occasions when you would want to move a resource from
its current location: when the whole node is under maintenance, and when a
single resource needs to be moved.
.. index::
single: standby mode
single: node; standby mode
Standby Mode
-~~~~~~~~~~~~
+____________
Since everything eventually comes down to a score, you could create constraints
for every resource to prevent them from running on one node. While Pacemaker
configuration can seem convoluted at times, not even we would require this of
administrators.
Instead, you can set a special node attribute which tells the cluster "don't
let anything run here". There is even a helpful tool, ``crm_standby``, to
query and set it. To check the standby status of the current machine,
run:
.. code-block:: none
# crm_standby -G
A value of ``on`` indicates that the node is *not* able to host any resources,
while a value of ``off`` says that it *can*.
You can also check the status of other nodes in the cluster by specifying the
``--node`` option:
.. code-block:: none
# crm_standby -G --node sles-2
To change the current node's standby status, use ``-v`` instead of ``-G``:
.. code-block:: none
# crm_standby -v on
Again, you can change another host's value by supplying a hostname with
``--node``.
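For example, to put ``sles-2`` into standby mode from any cluster node, you
could run something like:
.. code-block:: none
   # crm_standby -v on --node sles-2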
A cluster node in standby mode will not run resources, but still contributes to
quorum, and may fence or be fenced by other nodes.
Moving One Resource
-~~~~~~~~~~~~~~~~~~~
+___________________
When only a single resource needs to be moved, we could do this by creating
location constraints. However, once again we provide a user-friendly shortcut
as part of the ``crm_resource`` command, which creates and modifies the extra
constraints for you. If ``Email`` were running on ``sles-1`` and you wanted it
moved to a specific location, the command would look something like:
.. code-block:: none
# crm_resource -M -r Email -H sles-2
Behind the scenes, the tool will create a location constraint like this:
.. code-block:: xml
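   <!-- Illustrative sketch; crm_resource generates the constraint ID itself -->
   <rsc_location id="cli-prefer-Email" rsc="Email" node="sles-2" score="INFINITY"/>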
It is important to note that subsequent invocations of ``crm_resource -M`` are
not cumulative. So, if you ran these commands:
.. code-block:: none
# crm_resource -M -r Email -H sles-2
# crm_resource -M -r Email -H sles-3
then it is as if you had never performed the first command.
To allow the resource to move back again, use:
.. code-block:: none
# crm_resource -U -r Email
Note the use of the word *allow*. The resource *can* move back to its original
location, but depending on ``resource-stickiness``, location constraints, and
so forth, it might stay where it is.
To be absolutely certain that it moves back to ``sles-1``, move it there before
issuing the call to ``crm_resource -U``:
.. code-block:: none
# crm_resource -M -r Email -H sles-1
# crm_resource -U -r Email
Alternatively, if you only care that the resource should be moved from its
current location, try:
.. code-block:: none
# crm_resource -B -r Email
which will instead create a negative constraint, like:
.. code-block:: xml
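   <!-- Illustrative sketch; sles-1 is the node Email is currently running on,
        and crm_resource generates the constraint ID itself -->
   <rsc_location id="cli-ban-Email-on-sles-1" rsc="Email" node="sles-1" score="-INFINITY"/>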
This will achieve the desired effect, but will also have long-term
consequences. As the tool will warn you, the creation of a ``-INFINITY``
constraint will prevent the resource from running on that node until
``crm_resource -U`` is used. This includes the situation where every other
cluster node is no longer available!
In some cases, such as when ``resource-stickiness`` is set to ``INFINITY``, it
-is possible that you will end up with the problem described in
-:ref:`node-score-equal`. The tool can detect some of these cases and deals with
-them by creating both positive and negative constraints. For example:
+is possible that you will end up with nodes with the same score, forcing the
+cluster to choose one (which may not be the one you want). The tool can detect
+some of these cases and deal with them by creating both positive and negative
+constraints. For example:
.. code-block:: xml
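   <!-- Illustrative sketch of the pair of constraints the tool may create -->
   <rsc_location id="cli-ban-Email-on-sles-1" rsc="Email" node="sles-1" score="-INFINITY"/>
   <rsc_location id="cli-prefer-Email" rsc="Email" node="sles-2" score="INFINITY"/>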
which has the same long-term consequences as discussed earlier.
Moving Resources Due to Connectivity Changes
-____________________________________________
+############################################
You can configure the cluster to move resources when external connectivity is
lost. This requires two steps.
.. index::
single: ocf:pacemaker:ping resource
single: ping resource
Tell Pacemaker to Monitor Connectivity
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+______________________________________
First, add an ``ocf:pacemaker:ping`` resource to the cluster. The ``ping``
resource uses the system utility of the same name to test whether a list of
machines (specified by DNS hostname or IP address) are reachable, and uses the
results to maintain a node attribute.
The node attribute is called ``pingd`` by default, but is customizable in order
to allow multiple ping groups to be defined.
Normally, the ping resource should run on all cluster nodes, which means that
you'll need to create a clone. A template for this can be found below, along
with a description of the most interesting parameters.
.. table:: **Commonly Used ocf:pacemaker:ping Resource Parameters**
:widths: 1 4
+--------------------+--------------------------------------------------------------+
| Resource Parameter | Description |
+====================+==============================================================+
| dampen | .. index:: |
| | single: ocf:pacemaker:ping resource; dampen parameter |
| | single: dampen; ocf:pacemaker:ping resource parameter |
| | |
| | The time to wait (dampening) for further changes to occur. |
| | Use this to prevent a resource from bouncing around the |
| | cluster when cluster nodes notice the loss of connectivity |
| | at slightly different times. |
+--------------------+--------------------------------------------------------------+
| multiplier | .. index:: |
| | single: ocf:pacemaker:ping resource; multiplier parameter |
| | single: multiplier; ocf:pacemaker:ping resource parameter |
| | |
| | The number of connected ping nodes gets multiplied by this |
| | value to get a score. Useful when there are multiple ping |
| | nodes configured. |
+--------------------+--------------------------------------------------------------+
| host_list | .. index:: |
| | single: ocf:pacemaker:ping resource; host_list parameter |
| | single: host_list; ocf:pacemaker:ping resource parameter |
| | |
| | The machines to contact in order to determine the current |
| | connectivity status. Allowed values include resolvable DNS |
| | connectivity host names, IPv4 addresses, and IPv6 addresses. |
+--------------------+--------------------------------------------------------------+
.. topic:: Example ping resource that checks node connectivity once every minute
.. code-block:: xml
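   <!-- Illustrative sketch; IDs, host names, and values are examples -->
   <clone id="Connected">
      <primitive id="ping" class="ocf" provider="pacemaker" type="ping">
         <instance_attributes id="ping-attrs">
            <nvpair id="ping-dampen" name="dampen" value="5s"/>
            <nvpair id="ping-multiplier" name="multiplier" value="1000"/>
            <nvpair id="ping-hosts" name="host_list" value="192.0.2.1 192.0.2.2"/>
         </instance_attributes>
         <operations>
            <op id="ping-monitor-60s" interval="60s" name="monitor"/>
         </operations>
      </primitive>
   </clone>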
.. important::
You're only half done. The next section explains how to tell Pacemaker to use
the connectivity status that ``ocf:pacemaker:ping`` is recording.
Tell Pacemaker How to Interpret the Connectivity Data
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+_____________________________________________________
.. important::
- Before attempting the following, make sure you understand
- :ref:`rules`.
+ Before attempting the following, make sure you understand rules. See the
+ "Rules" chapter of the *Pacemaker Explained* document for details.
There are a number of ways to use the connectivity data.
The most common setup is for people to have a single ping target (for example,
the service network's default gateway), to prevent the cluster from running a
resource on any unconnected node.
.. topic:: Don't run a resource on unconnected nodes
.. code-block:: xml
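   <!-- Illustrative sketch; "WebServer" is a placeholder resource name. The
        rule bans any node whose pingd attribute is missing or zero. -->
   <rsc_location id="WebServer-no-connectivity" rsc="WebServer">
      <rule id="ping-exclude-rule" score="-INFINITY" boolean-op="or">
         <expression id="ping-exclude" attribute="pingd" operation="not_defined"/>
         <expression id="ping-exclude-zero" attribute="pingd" operation="lte" value="0"/>
      </rule>
   </rsc_location>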
A more complex setup is to have a number of ping targets configured. You can
require the cluster to only run resources on nodes that can connect to all (or
a minimum subset) of them.
.. topic:: Run only on nodes connected to three or more ping targets
.. code-block:: xml
...
...
...
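   <!-- Illustrative sketch; the elided sections above are the resource and
        ping-clone definitions, and "WebServer" is a placeholder resource name.
        Assuming multiplier=1000, a pingd value below 3000 means fewer than
        three reachable ping targets, so such nodes are banned. -->
   <rsc_location id="WebServer-connectivity" rsc="WebServer">
      <rule id="ping-exclude-rule" score="-INFINITY">
         <expression id="ping-exclude" attribute="pingd" operation="lt" value="3000"/>
      </rule>
   </rsc_location>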
Alternatively, you can tell the cluster only to *prefer* nodes with the best
connectivity, by using ``score-attribute`` in the rule. Just be sure to set
``multiplier`` to a value higher than that of ``resource-stickiness`` (and
don't set either of them to ``INFINITY``).
.. topic:: Prefer node with most connected ping nodes
.. code-block:: xml
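   <!-- Illustrative sketch; score-attribute="pingd" uses each node's pingd
        value as the constraint score, so better-connected nodes score higher -->
   <rsc_location id="WebServer-connectivity" rsc="WebServer">
      <rule id="ping-prefer-rule" score-attribute="pingd">
         <expression id="ping-prefer" attribute="pingd" operation="defined"/>
      </rule>
   </rsc_location>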
It is perhaps easier to think of this in terms of the simple constraints that
the cluster translates it into. For example, if ``sles-1`` is connected to all
five ping nodes but ``sles-2`` is only connected to two, then it would be as if
you instead had the following constraints in your configuration:
.. topic:: How the cluster translates the above location constraint
.. code-block:: xml
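   <!-- Illustrative sketch, assuming multiplier=1000: sles-1 reaches five ping
        targets (score 5000) and sles-2 only two (score 2000) -->
   <rsc_location id="ping-prefer-1" rsc="WebServer" node="sles-1" score="5000"/>
   <rsc_location id="ping-prefer-2" rsc="WebServer" node="sles-2" score="2000"/>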
The advantage is that you don't have to manually update any constraints
whenever your network connectivity changes.
You can also combine the concepts above into something even more complex. The
example below shows how you can prefer the node with the most connected ping
nodes provided they have connectivity to at least three (again assuming that
``multiplier`` is set to 1000).
.. topic:: More complex example of choosing location based on connectivity
.. code-block:: xml
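   <!-- Illustrative sketch combining both rules: ban nodes that reach fewer
        than three ping targets, and prefer the best-connected of the rest -->
   <rsc_location id="WebServer-connectivity" rsc="WebServer">
      <rule id="ping-exclude-rule" score="-INFINITY">
         <expression id="ping-exclude" attribute="pingd" operation="lt" value="3000"/>
      </rule>
      <rule id="ping-prefer-rule" score-attribute="pingd">
         <expression id="ping-prefer" attribute="pingd" operation="defined"/>
      </rule>
   </rsc_location>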
diff --git a/doc/sphinx/Pacemaker_Explained/index.rst b/doc/sphinx/Pacemaker_Explained/index.rst
index 4e24c2d328..e3b7e9e55e 100644
--- a/doc/sphinx/Pacemaker_Explained/index.rst
+++ b/doc/sphinx/Pacemaker_Explained/index.rst
@@ -1,42 +1,41 @@
Pacemaker Explained
===================
*Configuring Pacemaker Clusters*
Abstract
--------
This document definitively explains Pacemaker's features and capabilities,
particularly the XML syntax used in Pacemaker's Cluster Information Base (CIB).
Table of Contents
-----------------
.. toctree::
:maxdepth: 3
:numbered:
intro
options
nodes
resources
operations
constraints
fencing
alerts
rules
- advanced-options
collective
reusing-configuration
utilization
acls
status
multi-site-clusters
ap-samples
Index
-----
* :ref:`genindex`
* :ref:`search`