diff --git a/doc/sphinx/Pacemaker_Remote/baremetal-tutorial.rst b/doc/sphinx/Pacemaker_Remote/baremetal-tutorial.rst
index 8f432d50fb..86f44ff1d3 100644
--- a/doc/sphinx/Pacemaker_Remote/baremetal-tutorial.rst
+++ b/doc/sphinx/Pacemaker_Remote/baremetal-tutorial.rst
@@ -1,280 +1,287 @@
.. index::
single: remote node; walk-through
Remote Node Walk-through
------------------------
**What this tutorial is:** An in-depth walk-through of how to get Pacemaker to
integrate a remote node into the cluster as a node capable of running cluster
resources.
**What this tutorial is not:** A realistic deployment scenario. The steps shown
here are meant to get users familiar with the concept of remote nodes as
quickly as possible.
Configure Cluster Nodes
#######################
This walk-through assumes you already have a Pacemaker cluster configured. For
our examples, we will use a cluster with two cluster nodes named pcmk-1 and
pcmk-2. You can substitute your own node names, for however many nodes you have.
If you are not familiar with setting up basic Pacemaker clusters, follow the
walk-through in the Clusters from Scratch document before attempting this one.
Configure Remote Node
#####################
.. index::
single: remote node; firewall
Configure Firewall on Remote Node
_________________________________
Allow cluster-related services through the local firewall:
.. code-block:: none
# firewall-cmd --permanent --add-service=high-availability
success
# firewall-cmd --reload
success
.. NOTE::
If you are using some other firewall solution besides firewalld,
simply open the following ports, which can be used by various
clustering components: TCP ports 2224, 3121, and 21064.
If you run into any problems during testing, you might want to disable
the firewall and SELinux entirely until you have everything working.
This may create significant security issues and should not be performed on
machines that will be exposed to the outside world, but may be appropriate
during development and testing on a protected host.
To disable security measures:
.. code-block:: none
# setenforce 0
# sed -i.bak "s/SELINUX=enforcing/SELINUX=permissive/g" \
/etc/selinux/config
# systemctl mask firewalld.service
# systemctl stop firewalld.service
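If you are using a firewall other than firewalld, as the note above mentions, the
exact commands depend on your tooling. As a minimal sketch, assuming plain
``iptables`` (and that you persist the rule yourself, since this one is lost on
reboot), the required TCP ports could be opened like this:

.. code-block:: none

    # iptables -I INPUT -p tcp -m multiport \
        --dports 2224,3121,21064 -j ACCEPT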
Configure ``/etc/hosts``
________________________
You will need to add the remote node's hostname (we're using **remote1** in
this tutorial) to the cluster nodes' ``/etc/hosts`` files if you haven't already.
This is required unless you have DNS set up in a way where remote1's address can be
discovered.
For each remote node, execute the following on each cluster node and on the
remote nodes, replacing the IP address with the actual IP address of the remote
node.
.. code-block:: none
# cat << END >> /etc/hosts
192.168.122.10 remote1
END
Also add entries for each cluster node to the ``/etc/hosts`` file on each
remote node. For example:
.. code-block:: none
# cat << END >> /etc/hosts
192.168.122.101 pcmk-1
192.168.122.102 pcmk-2
END
Configure pacemaker_remote on Remote Node
_________________________________________
Install the pacemaker_remote daemon on the remote node.
.. code-block:: none
- # yum install -y pacemaker-remote resource-agents pcs
+ [root@remote1 ~]# dnf config-manager --set-enabled highavailability
+ [root@remote1 ~]# dnf install -y pacemaker-remote resource-agents pcs
Prepare ``pcsd``
________________
Now we need to prepare ``pcsd`` on the remote node so that we can use ``pcs``
commands to communicate with it.
Start and enable the ``pcsd`` daemon on the remote node.
.. code-block:: none
[root@remote1 ~]# systemctl start pcsd
[root@remote1 ~]# systemctl enable pcsd
Created symlink /etc/systemd/system/multi-user.target.wants/pcsd.service → /usr/lib/systemd/system/pcsd.service.
Next, set a password for the ``hacluster`` user on the remote node
.. code-block:: none
[root@remote1 ~]# echo MyPassword | passwd --stdin hacluster
Changing password for user hacluster.
passwd: all authentication tokens updated successfully.
Now authenticate the existing cluster nodes to ``pcsd`` on the remote node. The
below command only needs to be run from one cluster node.
.. code-block:: none
[root@pcmk-1 ~]# pcs host auth remote1 -u hacluster
Password:
remote1: Authorized
Integrate Remote Node into Cluster
__________________________________
Integrating a remote node into the cluster is achieved through the
creation of a remote node connection resource. The remote node connection
resource both establishes the connection to the remote node and defines that
the remote node exists. Note that this resource is actually internal to
Pacemaker's controller. The metadata for this resource can be found in
the ``/usr/lib/ocf/resource.d/pacemaker/remote`` file. The metadata in this file
describes what options are available, but there is no actual
**ocf:pacemaker:remote** resource agent script that performs any work.
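If you would like to see those options before creating anything, you can view the
connection resource's metadata from any cluster node (the options are also
described in the Configuration Explained section):

.. code-block:: none

    [root@pcmk-1 ~]# pcs resource describe remote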
Define the remote node connection resource to our remote node,
**remote1**, using the following command on any cluster node. This
command creates the ocf:pacemaker:remote resource; creates the authkey if it
does not exist already and distributes it to the remote node; and starts and
enables ``pacemaker-remoted`` on the remote node.
.. code-block:: none
[root@pcmk-1 ~]# pcs cluster node add-remote remote1
No addresses specified for host 'remote1', using 'remote1'
Sending 'pacemaker authkey' to 'remote1'
remote1: successful distribution of the file 'pacemaker authkey'
Requesting 'pacemaker_remote enable', 'pacemaker_remote start' on 'remote1'
remote1: successful run of 'pacemaker_remote enable'
remote1: successful run of 'pacemaker_remote start'
That's it. After a moment you should see the remote node come online. The final ``pcs status`` output should look something like this, and you can see that it
created the ocf:pacemaker:remote resource:
.. code-block:: none
- # pcs status
+ [root@pcmk-1 ~]# pcs status
Cluster name: mycluster
Cluster Summary:
* Stack: corosync
- * Current DC: pcmk-1 (version 2.0.5-8.el8-ba59be7122) - partition with quorum
- * Last updated: Wed Mar 3 11:02:03 2021
- * Last change: Wed Mar 3 11:01:57 2021 by root via cibadmin on pcmk-1
+ * Current DC: pcmk-1 (version 2.1.2-4.el9-ada5c3b36e2) - partition with quorum
+ * Last updated: Wed Aug 10 05:17:28 2022
+ * Last change: Wed Aug 10 05:17:26 2022 by root via cibadmin on pcmk-1
* 3 nodes configured
- * 1 resource instance configured
-
+ * 2 resource instances configured
+
Node List:
* Online: [ pcmk-1 pcmk-2 ]
* RemoteOnline: [ remote1 ]
-
+
Full List of Resources:
- * remote1 (ocf::pacemaker:remote): Started pcmk-1
+ * xvm (stonith:fence_xvm): Started pcmk-1
+ * remote1 (ocf:pacemaker:remote): Started pcmk-1
+
+ Daemon Status:
+ corosync: active/disabled
+ pacemaker: active/disabled
+ pcsd: active/enabled
How pcs Configures the Remote
#############################
Let's take a closer look at what the ``pcs cluster node add-remote`` command is
doing. There is no need to run any of the commands in this section.
First, ``pcs`` copies the Pacemaker authkey file to the machine that will become
the remote node. If an authkey is not already present on the cluster nodes, this
command creates one and distributes it to the existing nodes and to the remote
node.
If you want to do this manually, you can run a command like the following to
generate an authkey in ``/etc/pacemaker/authkey``, and then distribute the key
to the rest of the nodes and to the new remote node.
.. code-block:: none
[root@pcmk-1 ~]# dd if=/dev/urandom of=/etc/pacemaker/authkey bs=4096 count=1
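Distribution of the key is not shown above. One possible sketch, assuming root
SSH access from ``pcmk-1`` to the other nodes (an assumption of this example,
not something ``pcs`` requires), is the following; the key must end up readable
by the ``hacluster`` user or the ``haclient`` group on every node.

.. code-block:: none

    [root@pcmk-1 ~]# for host in pcmk-2 remote1; do
                         ssh root@${host} 'mkdir -p --mode=0750 /etc/pacemaker'
                         scp -p /etc/pacemaker/authkey root@${host}:/etc/pacemaker/
                         ssh root@${host} 'chgrp haclient /etc/pacemaker/authkey'
                     done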
Then ``pcs`` starts and enables the ``pacemaker_remote`` service on the remote
node. If you want to do this manually, run the following commands.
.. code-block:: none
[root@remote1 ~]# systemctl start pacemaker_remote
[root@remote1 ~]# systemctl enable pacemaker_remote
Starting Resources on Remote Node
#################################
Once the remote node is integrated into the cluster, starting and managing
resources on a remote node is the exact same as on cluster nodes. Refer to the
`Clusters from Scratch `_ document for examples of
resource creation.
.. WARNING::
Never involve a remote node connection resource in a resource group,
colocation constraint, or order constraint.
.. index::
single: remote node; fencing
Fencing Remote Nodes
####################
Remote nodes are fenced the same way as cluster nodes. No special
considerations are required. Configure fencing resources for use with
remote nodes the same as you would with cluster nodes.
Note, however, that remote nodes can never 'initiate' a fencing action. Only
cluster nodes are capable of actually executing a fencing operation against
another node.
Accessing Cluster Tools from a Remote Node
##########################################
Besides allowing the cluster to manage resources on a remote node,
pacemaker_remote has one other trick. The pacemaker_remote daemon allows
nearly all the pacemaker tools (``crm_resource``, ``crm_mon``,
``crm_attribute``, etc.) to work on remote nodes natively.
Try it: Run ``crm_mon`` on the remote node after pacemaker has
integrated it into the cluster. These tools just work. This means resource
agents such as promotable resources (which need access to tools like
``crm_attribute``) work seamlessly on the remote nodes.
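For example, a one-shot status check run directly on the remote node (the ``-1``
option tells ``crm_mon`` to display the cluster status once and exit):

.. code-block:: none

    [root@remote1 ~]# crm_mon -1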
Higher-level command shells such as ``pcs`` may have partial support
on remote nodes, but it is recommended to run them from a cluster node.
Troubleshooting a Remote Connection
###################################
Note: This section should not be performed while the remote node is connected to the cluster.
Should connectivity issues occur, it can be worth verifying that the cluster nodes
can contact the remote node on port 3121. Here's a trick you can use.
Connect using ssh from each of the cluster nodes. The connection will get
destroyed, but how it is destroyed tells you whether it worked or not.
If running the ssh command on one of the cluster nodes results in this
output before disconnecting, the connection works:
.. code-block:: none
# ssh -p 3121 remote1
ssh_exchange_identification: read: Connection reset by peer
If you see one of these, the connection is not working:
.. code-block:: none
# ssh -p 3121 remote1
ssh: connect to host remote1 port 3121: No route to host
.. code-block:: none
# ssh -p 3121 remote1
ssh: connect to host remote1 port 3121: Connection refused
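If you would rather probe the port without interpreting ssh error messages, here
is a minimal sketch that relies only on bash's built-in ``/dev/tcp`` support (no
extra packages assumed):

.. code-block:: none

    # timeout 5 bash -c 'cat < /dev/null > /dev/tcp/remote1/3121' \
        && echo "port 3121 is reachable" || echo "port 3121 is not reachable"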
Once you can successfully connect to the remote node from both
cluster nodes, you may move on to setting up Pacemaker on the
cluster nodes.
diff --git a/doc/sphinx/Pacemaker_Remote/kvm-tutorial.rst b/doc/sphinx/Pacemaker_Remote/kvm-tutorial.rst
index 8ef197bbe3..ba0151250b 100644
--- a/doc/sphinx/Pacemaker_Remote/kvm-tutorial.rst
+++ b/doc/sphinx/Pacemaker_Remote/kvm-tutorial.rst
@@ -1,577 +1,592 @@
.. index::
single: guest node; walk-through
Guest Node Walk-through
-----------------------
**What this tutorial is:** An in-depth walk-through of how to get Pacemaker to
manage a KVM guest instance and integrate that guest into the cluster as a
guest node.
**What this tutorial is not:** A realistic deployment scenario. The steps shown
here are meant to get users familiar with the concept of guest nodes as quickly
as possible.
Configure Cluster Nodes
#######################
This walk-through assumes you already have a Pacemaker cluster configured. For
our examples, we will use a cluster with two cluster nodes named pcmk-1 and
pcmk-2. You can substitute your own node names, for however many nodes you have.
If you are not familiar with setting up basic Pacemaker clusters, follow the
walk-through in the Clusters from Scratch document before attempting this one.
Install Virtualization Software
_______________________________
On each node within your cluster, install virt-install, libvirt, and qemu-kvm.
-Start and enable libvirtd.
+Start and enable ``virtnetworkd``.
.. code-block:: none
- # yum install -y virt-install libvirt qemu-kvm
- # systemctl start libvirtd
- # systemctl enable libvirtd
+ # dnf install -y virt-install libvirt qemu-kvm
+ # systemctl start virtnetworkd
+ # systemctl enable virtnetworkd
Reboot the host.
.. NOTE::
While KVM is used in this example, any virtualization platform with a Pacemaker
resource agent can be used to create a guest node. The resource agent needs
only to support usual commands (start, stop, etc.); Pacemaker implements the
**remote-node** meta-attribute, independent of the agent.
Configure the KVM guest
#######################
Create Guest
____________
Create a KVM guest to use as a guest node. Be sure to configure the guest with a
hostname and a static IP address (as an example here, we will use guest1 and 192.168.122.10).
Here's an example way to create a guest:
-* Download an .iso file from the `CentOS Mirrors List `_ into a directory on your cluster node.
+* Download an .iso file from the |REMOTE_DISTRO| |REMOTE_DISTRO_VER| `mirrors
+ list `_ into a directory on your
+ cluster node.
* Run the following command, using your own path for the **location** flag:
.. code-block:: none
- # virt-install \
- --name vm-guest1 \
- --ram 1024 \
- --disk path=./vm-guest1.qcow2,size=1 \
- --vcpus 2 \
- --os-type linux \
- --os-variant centos-stream8\
- --network bridge=virbr0 \
- --graphics none \
- --console pty,target_type=serial \
- --location \
- --extra-args 'console=ttyS0,115200n8 serial'
+ [root@pcmk-1 ~]# virt-install \
+ --name vm-guest1 \
+ --memory 1536 \
+ --disk path=/var/lib/libvirt/images/vm-guest1.qcow2,size=4 \
+ --vcpus 2 \
+ --os-variant almalinux9 \
+ --network bridge=virbr0 \
+ --graphics none \
+ --console pty,target_type=serial \
+ --location /tmp/AlmaLinux-9-latest-x86_64-dvd.iso \
+ --extra-args 'console=ttyS0,115200n8'
+
+ .. NOTE::
+
+ See the Clusters from Scratch document for more details about installing
+ |REMOTE_DISTRO| |REMOTE_DISTRO_VER|. The above command will perform a
+ text-based installation by default, but feel free to do a graphical
+ installation, which exposes more options.
.. index::
single: guest node; firewall
Configure Firewall on Guest
___________________________
On each guest, allow cluster-related services through the local firewall. If
you're using ``firewalld``, run the following commands.
.. code-block:: none
[root@guest1 ~]# firewall-cmd --permanent --add-service=high-availability
success
[root@guest1 ~]# firewall-cmd --reload
success
.. NOTE::
If you are using some other firewall solution besides firewalld,
simply open the following ports, which can be used by various
clustering components: TCP ports 2224, 3121, and 21064.
If you run into any problems during testing, you might want to disable
the firewall and SELinux entirely until you have everything working.
This may create significant security issues and should not be performed on
machines that will be exposed to the outside world, but may be appropriate
during development and testing on a protected host.
To disable security measures:
.. code-block:: none
[root@guest1 ~]# setenforce 0
[root@guest1 ~]# sed -i.bak "s/SELINUX=enforcing/SELINUX=permissive/g" \
/etc/selinux/config
[root@guest1 ~]# systemctl mask firewalld.service
[root@guest1 ~]# systemctl stop firewalld.service
Configure ``/etc/hosts``
________________________
You will need to add the guest node's hostname (we're using **guest1** in
this tutorial) to the cluster nodes' ``/etc/hosts`` files if you haven't already.
This is required unless you have DNS set up in a way where guest1's address can be
discovered.
For each guest, execute the following on each cluster node and on the guests,
replacing the IP address with the actual IP address of the guest node.
.. code-block:: none
# cat << END >> /etc/hosts
192.168.122.10 guest1
END
Also add entries for each cluster node to the ``/etc/hosts`` file on each guest.
For example:
.. code-block:: none
# cat << END >> /etc/hosts
192.168.122.101 pcmk-1
192.168.122.102 pcmk-2
END
Verify Connectivity
___________________
At this point, you should be able to ping and ssh into guests from hosts, and
vice versa.
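For example, a quick round-trip check using the hostnames from this walk-through
might look like this:

.. code-block:: none

    [root@pcmk-1 ~]# ping -c 3 guest1
    [root@guest1 ~]# ping -c 3 pcmk-1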
+Depending on your installation method, you may have to perform an additional
+step to make SSH work. The simplest approach is to open the
+``/etc/ssh/sshd_config`` file and set ``PermitRootLogin yes``. Then to make the
+change take effect, run the following command.
+
+.. code-block:: none
+
+ [root@guest1 ~]# systemctl restart sshd
+
Configure pacemaker_remote on Guest Node
________________________________________
Install the pacemaker_remote daemon on the guest node. We'll also install the
``pacemaker`` package. It isn't required for a guest node to run, but it
provides the ``crm_attribute`` tool, which many resource agents use.
.. code-block:: none
- # yum install -y pacemaker-remote resource-agents pcs pacemaker
+ [root@guest1 ~]# dnf config-manager --set-enabled highavailability
+ [root@guest1 ~]# dnf install -y pacemaker-remote resource-agents pcs \
+ pacemaker
Integrate Guest into Cluster
############################
Now the fun part, integrating the virtual machine you've just created into the
cluster. It is incredibly simple.
Start the Cluster
_________________
On the host, start Pacemaker if it's not already running.
.. code-block:: none
# pcs cluster start
Create a ``VirtualDomain`` Resource for the Guest VM
____________________________________________________
For this simple walk-through, we have created the VM and made its disk
available only on node ``pcmk-1``, so that's the only node where the VM is
capable of running. In a more realistic scenario, you'll probably want to have
multiple nodes that are capable of running the VM.
Next we'll assign an attribute to node 1 that denotes its eligibility to host
``vm-guest1``. If other nodes are capable of hosting your guest VM, then add the
attribute to each of those nodes as well.
.. code-block:: none
[root@pcmk-1 ~]# pcs node attribute pcmk-1 can-host-vm-guest1=1
Then we'll create a ``VirtualDomain`` resource so that Pacemaker can manage
``vm-guest1``. Be sure to replace the XML file path below with your own if it
differs. We'll also create a rule to prevent Pacemaker from trying to start the
resource or probe its status on any node that isn't capable of running the VM.
We'll save the CIB to a file, make both of these edits, and push them
simultaneously.
.. code-block:: none
[root@pcmk-1 ~]# pcs cluster cib vm_cfg
[root@pcmk-1 ~]# pcs -f vm_cfg resource create vm-guest1 VirtualDomain \
hypervisor="qemu:///system" config="/etc/libvirt/qemu/vm-guest1.xml"
Assumed agent name 'ocf:heartbeat:VirtualDomain' (deduced from 'VirtualDomain')
[root@pcmk-1 ~]# pcs -f vm_cfg constraint location vm-guest1 rule \
resource-discovery=never score=-INFINITY can-host-vm-guest1 ne 1
[root@pcmk-1 ~]# pcs cluster cib-push --config vm_cfg --wait
.. NOTE::
If all nodes in your cluster are capable of hosting the VM that you've
created, then you can skip the ``pcs node attribute`` and ``pcs constraint
location`` commands.
.. NOTE::
The ID of the resource managing the virtual machine (``vm-guest1`` in the
above example) **must** be different from the virtual machine's node name
(``guest1`` in the above example). Pacemaker will create an implicit
internal resource for the Pacemaker Remote connection to the guest. This
implicit resource will be named with the value of the ``VirtualDomain``
resource's ``remote-node`` meta attribute, which will be set by ``pcs`` to
the guest node's node name. Therefore, that value cannot be used as the name
of any other resource.
Now we can confirm that the ``VirtualDomain`` resource is running on ``pcmk-1``.
.. code-block:: none
[root@pcmk-1 ~]# pcs resource status
* vm-guest1 (ocf:heartbeat:VirtualDomain): Started pcmk-1
Prepare ``pcsd``
________________
Now we need to prepare ``pcsd`` on the guest so that we can use ``pcs`` commands
to communicate with it.
Start and enable the ``pcsd`` daemon on the guest.
.. code-block:: none
[root@guest1 ~]# systemctl start pcsd
[root@guest1 ~]# systemctl enable pcsd
Created symlink /etc/systemd/system/multi-user.target.wants/pcsd.service → /usr/lib/systemd/system/pcsd.service.
Next, set a password for the ``hacluster`` user on the guest.
.. code-block:: none
[root@guest1 ~]# echo MyPassword | passwd --stdin hacluster
Changing password for user hacluster.
passwd: all authentication tokens updated successfully.
Now authenticate the existing cluster nodes to ``pcsd`` on the guest. The below
command only needs to be run from one cluster node.
.. code-block:: none
[root@pcmk-1 ~]# pcs host auth guest1 -u hacluster
Password:
guest1: Authorized
Integrate Guest Node into Cluster
_________________________________
We're finally ready to integrate the VM into the cluster as a guest node. Run
the following command, which will create a guest node from the ``VirtualDomain``
resource and take care of all the remaining steps. Note that the format is
``pcs cluster node add-guest <guest_name> <vm_resource_name>``.
.. code-block:: none
[root@pcmk-1 ~]# pcs cluster node add-guest guest1 vm-guest1
No addresses specified for host 'guest1', using 'guest1'
Sending 'pacemaker authkey' to 'guest1'
guest1: successful distribution of the file 'pacemaker authkey'
Requesting 'pacemaker_remote enable', 'pacemaker_remote start' on 'guest1'
guest1: successful run of 'pacemaker_remote enable'
guest1: successful run of 'pacemaker_remote start'
You should soon see ``guest1`` appear in the ``pcs status`` output as a node.
The output should look something like this:
.. code-block:: none
- # pcs status
+ [root@pcmk-1 ~]# pcs status
Cluster name: mycluster
-
Cluster Summary:
* Stack: corosync
- * Current DC: pcmk-1 (version 2.0.5-8.el8-ba59be7122) - partition with quorum
- * Last updated: Wed Mar 17 08:37:37 2021
- * Last change: Wed Mar 17 08:31:01 2021 by root via cibadmin on pcmk-1
+ * Current DC: pcmk-1 (version 2.1.2-4.el9-ada5c3b36e2) - partition with quorum
+ * Last updated: Wed Aug 10 00:08:58 2022
+ * Last change: Wed Aug 10 00:02:37 2022 by root via cibadmin on pcmk-1
* 3 nodes configured
- * 2 resource instances configured
-
+ * 3 resource instances configured
+
Node List:
* Online: [ pcmk-1 pcmk-2 ]
* GuestOnline: [ guest1@pcmk-1 ]
Full List of Resources:
- * vm-guest1 (ocf::heartbeat:VirtualDomain): pcmk-1
+ * xvm (stonith:fence_xvm): Started pcmk-1
+ * vm-guest1 (ocf:heartbeat:VirtualDomain): Started pcmk-1
Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled
The resulting configuration should look something like the following:
.. code-block:: none
[root@pcmk-1 ~]# pcs resource config
Resource: vm-guest1 (class=ocf provider=heartbeat type=VirtualDomain)
Attributes: config=/etc/libvirt/qemu/vm-guest1.xml hypervisor=qemu:///system
Meta Attrs: remote-addr=guest1 remote-node=guest1
Operations: migrate_from interval=0s timeout=60s (vm-guest1-migrate_from-interval-0s)
migrate_to interval=0s timeout=120s (vm-guest1-migrate_to-interval-0s)
monitor interval=10s timeout=30s (vm-guest1-monitor-interval-10s)
start interval=0s timeout=90s (vm-guest1-start-interval-0s)
stop interval=0s timeout=90s (vm-guest1-stop-interval-0s)
How pcs Configures the Guest
____________________________
Let's take a closer look at what the ``pcs cluster node add-guest`` command is
doing. There is no need to run any of the commands in this section.
First, ``pcs`` copies the Pacemaker authkey file to the VM that will become the
guest. If an authkey is not already present on the cluster nodes, this command
creates one and distributes it to the existing nodes and to the guest.
If you want to do this manually, you can run a command like the following to
generate an authkey in ``/etc/pacemaker/authkey``, and then distribute the key
to the rest of the nodes and to the new guest.
.. code-block:: none
[root@pcmk-1 ~]# dd if=/dev/urandom of=/etc/pacemaker/authkey bs=4096 count=1
Then ``pcs`` starts and enables the ``pacemaker_remote`` service on the guest.
If you want to do this manually, run the following commands.
.. code-block:: none
[root@guest1 ~]# systemctl start pacemaker_remote
[root@guest1 ~]# systemctl enable pacemaker_remote
Finally, ``pcs`` creates a guest node from the ``VirtualDomain`` resource by
adding ``remote-addr`` and ``remote-node`` meta attributes to the resource. If
you want to do this manually, you can run the following command if you're using
``pcs``. Alternatively, run an equivalent command if you're using another
cluster shell, or edit the CIB manually.
.. code-block:: none
[root@pcmk-1 ~]# pcs resource update vm-guest1 meta remote-addr='guest1' \
remote-node='guest1' --force
Starting Resources on KVM Guest
###############################
The following example demonstrates that resources can be run on the guest node
in the exact same way as on the cluster nodes.
Create a few ``Dummy`` resources. A ``Dummy`` resource is a real resource that
actually executes operations on its assigned node. However, these operations are
trivial (creating, deleting, or checking the existence of an empty or small
file), so ``Dummy`` resources are ideal for testing purposes. ``Dummy``
resources use the ``ocf:heartbeat:Dummy`` or ``ocf:pacemaker:Dummy`` resource
agent.
.. code-block:: none
# for i in {1..5}; do pcs resource create FAKE${i} ocf:heartbeat:Dummy; done
-Now check your ``pcs status`` output. In the resource section, you should see
-something like the following, where some of the resources started on the
-cluster nodes, and some started on the guest node.
+Now run ``pcs resource status``. You should see something like the following,
+where some of the resources are started on the cluster nodes, and some are
+started on the guest node.
.. code-block:: none
- Full List of Resources:
- * vm-guest1 (ocf::heartbeat:VirtualDomain): Started pcmk-1
- * FAKE1 (ocf::heartbeat:Dummy): Started guest1
- * FAKE2 (ocf::heartbeat:Dummy): Started guest1
- * FAKE3 (ocf::heartbeat:Dummy): Started pcmk-1
- * FAKE4 (ocf::heartbeat:Dummy): Started guest1
- * FAKE5 (ocf::heartbeat:Dummy): Started pcmk-1
-
-The guest node, **guest1**, behaves just like any other node in the cluster with
+ [root@pcmk-1 ~]# pcs resource status
+ * vm-guest1 (ocf:heartbeat:VirtualDomain): Started pcmk-1
+ * FAKE1 (ocf:heartbeat:Dummy): Started guest1
+ * FAKE2 (ocf:heartbeat:Dummy): Started pcmk-2
+ * FAKE3 (ocf:heartbeat:Dummy): Started guest1
+ * FAKE4 (ocf:heartbeat:Dummy): Started pcmk-2
+ * FAKE5 (ocf:heartbeat:Dummy): Started guest1
+
+The guest node, ``guest1``, behaves just like any other node in the cluster with
respect to resources. For example, choose a resource that is running on one of
-your cluster nodes. We'll choose ``FAKE3`` from the output above. It's currently
-running on ``pcmk-1``. We can force ``FAKE3`` to run on ``guest1`` in the exact
+your cluster nodes. We'll choose ``FAKE2`` from the output above. It's currently
+running on ``pcmk-2``. We can force ``FAKE2`` to run on ``guest1`` in the exact
same way as we could force it to run on any particular cluster node. We do this
by creating a location constraint:
.. code-block:: none
- # pcs constraint location FAKE3 prefers guest1
+ # pcs constraint location FAKE2 prefers guest1
-Now, looking at the bottom of the `pcs status` output you'll see FAKE3 is on
-**guest1**.
+Now the ``pcs resource status`` output shows that ``FAKE2`` is on ``guest1``.
.. code-block:: none
- Full List of Resources:
- * vm-guest1 (ocf::heartbeat:VirtualDomain): Started pcmk-1
- * FAKE1 (ocf::heartbeat:Dummy): Started guest1
- * FAKE2 (ocf::heartbeat:Dummy): Started guest1
- * FAKE3 (ocf::heartbeat:Dummy): Started guest1
- * FAKE4 (ocf::heartbeat:Dummy): Started pcmk-1
- * FAKE5 (ocf::heartbeat:Dummy): Started pcmk-1
+ [root@pcmk-1 ~]# pcs resource status
+ * vm-guest1 (ocf:heartbeat:VirtualDomain): Started pcmk-1
+ * FAKE1 (ocf:heartbeat:Dummy): Started guest1
+ * FAKE2 (ocf:heartbeat:Dummy): Started guest1
+ * FAKE3 (ocf:heartbeat:Dummy): Started guest1
+ * FAKE4 (ocf:heartbeat:Dummy): Started pcmk-2
+ * FAKE5 (ocf:heartbeat:Dummy): Started guest1
Testing Recovery and Fencing
############################
Pacemaker's scheduler is smart enough to know that fencing a guest node means
shutting off or rebooting the virtual machine that hosts it. No special
configuration is necessary to make this happen. If you are interested in testing
this functionality out, try stopping the guest's pacemaker_remote daemon. This
is the equivalent of abruptly terminating a cluster node's corosync membership
without properly shutting it down.
SSH into the guest and run this command.
.. code-block:: none
- # kill -9 $(pidof pacemaker-remoted)
+ [root@guest1 ~]# kill -9 $(pidof pacemaker-remoted)
Within a few seconds, your ``pcs status`` output will show a monitor failure,
and the **guest1** node will not be shown while it is being recovered.
.. code-block:: none
- # pcs status
+ [root@pcmk-1 ~]# pcs status
Cluster name: mycluster
-
Cluster Summary:
* Stack: corosync
- * Current DC: pcmk-1 (version 2.0.5-8.el8-ba59be7122) - partition with quorum
- * Last updated: Wed Mar 17 08:37:37 2021
- * Last change: Wed Mar 17 08:31:01 2021 by root via cibadmin on pcmk-1
+ * Current DC: pcmk-1 (version 2.1.2-4.el9-ada5c3b36e2) - partition with quorum
+ * Last updated: Wed Aug 10 01:39:40 2022
+ * Last change: Wed Aug 10 01:34:55 2022 by root via cibadmin on pcmk-1
* 3 nodes configured
- * 7 resource instances configured
-
+ * 8 resource instances configured
+
Node List:
* Online: [ pcmk-1 pcmk-2 ]
Full List of Resources:
- * vm-guest1 (ocf::heartbeat:VirtualDomain): FAILED pcmk-1
- * FAKE1 (ocf::heartbeat:Dummy): FAILED guest1
- * FAKE2 (ocf::heartbeat:Dummy): FAILED guest1
- * FAKE3 (ocf::heartbeat:Dummy): FAILED guest1
- * FAKE4 (ocf::heartbeat:Dummy): Started pcmk-1
- * FAKE5 (ocf::heartbeat:Dummy): Started pcmk-1
+ * xvm (stonith:fence_xvm): Started pcmk-1
+ * vm-guest1 (ocf:heartbeat:VirtualDomain): FAILED pcmk-1
+ * FAKE1 (ocf:heartbeat:Dummy): FAILED guest1
+ * FAKE2 (ocf:heartbeat:Dummy): FAILED guest1
+ * FAKE3 (ocf:heartbeat:Dummy): FAILED guest1
+ * FAKE4 (ocf:heartbeat:Dummy): Started pcmk-2
+ * FAKE5 (ocf:heartbeat:Dummy): FAILED guest1
- Failed Actions:
- * guest1_monitor_30000 on pcmk-1 'unknown error' (1): call=8, status=Error, exitreason='none',
- last-rc-change='Wed Mar 17 08:32:01 2021', queued=0ms, exec=0ms
+ Failed Resource Actions:
+ * guest1 30s-interval monitor on pcmk-1 could not be executed (Error) because 'Lost connection to remote executor' at Wed Aug 10 01:39:38 2022
Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled
-
.. NOTE::
A guest node involves two resources: an explicitly configured resource that
you create, which manages the virtual machine (the ``VirtualDomain``
resource in our example); and an implicit resource that Pacemaker creates,
which manages the ``pacemaker-remoted`` connection to the guest. The
implicit resource's name is the value of the explicit resource's
``remote-node`` meta attribute. When we killed ``pacemaker-remoted``, the
**implicit** resource is what failed. That's why the failed action starts
with ``guest1`` and not ``vm-guest1``.
Once recovery of the guest is complete, you'll see it automatically get
re-integrated into the cluster. The final ``pcs status`` output should look
something like this.
.. code-block:: none
- # pcs status
+ [root@pcmk-1 ~]# pcs status
Cluster name: mycluster
-
Cluster Summary:
* Stack: corosync
- * Current DC: pcmk-1 (version 2.0.5-8.el8-ba59be7122) - partition with quorum
- * Last updated: Wed Mar 17 08:37:37 2021
- * Last change: Wed Mar 17 08:31:01 2021 by root via cibadmin on pcmk-1
+ * Current DC: pcmk-1 (version 2.1.2-4.el9-ada5c3b36e2) - partition with quorum
+ * Last updated: Wed Aug 10 01:40:05 2022
+ * Last change: Wed Aug 10 01:34:55 2022 by root via cibadmin on pcmk-1
* 3 nodes configured
- * 7 resource instances configured
-
+ * 8 resource instances configured
+
Node List:
* Online: [ pcmk-1 pcmk-2 ]
* GuestOnline: [ guest1@pcmk-1 ]
Full List of Resources:
- * vm-guest1 (ocf::heartbeat:VirtualDomain): pcmk-1
- * FAKE1 (ocf::heartbeat:Dummy): Stopped
- * FAKE2 (ocf::heartbeat:Dummy): Stopped
- * FAKE3 (ocf::heartbeat:Dummy): Stopped
- * FAKE4 (ocf::heartbeat:Dummy): Started pcmk-1
- * FAKE5 (ocf::heartbeat:Dummy): Started pcmk-1
-
- Failed Actions:
- * guest1_monitor_30000 on pcmk-1 'unknown error' (1): call=8, status=Error, exitreason='none',
- last-rc-change='Fri Jan 12 18:08:29 2018', queued=0ms, exec=0ms
+ * xvm (stonith:fence_xvm): Started pcmk-1
+ * vm-guest1 (ocf:heartbeat:VirtualDomain): Started pcmk-1
+ * FAKE1 (ocf:heartbeat:Dummy): Started guest1
+ * FAKE2 (ocf:heartbeat:Dummy): Started guest1
+ * FAKE3 (ocf:heartbeat:Dummy): Started pcmk-2
+ * FAKE4 (ocf:heartbeat:Dummy): Started pcmk-2
+ * FAKE5 (ocf:heartbeat:Dummy): Started guest1
+
+ Failed Resource Actions:
+ * guest1 30s-interval monitor on pcmk-1 could not be executed (Error) because 'Lost connection to remote executor' at Wed Aug 10 01:39:38 2022
Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled
Normally, once you've investigated and addressed a failed action, you can clear the
failure. However, Pacemaker does not yet support cleanup for the implicitly
created connection resource while the explicit resource is active. If you want
to clear the failed action from the status output, stop the guest resource before
clearing it. For example:
.. code-block:: none
# pcs resource disable vm-guest1 --wait
# pcs resource cleanup guest1
# pcs resource enable vm-guest1
Accessing Cluster Tools from Guest Node
#######################################
Besides allowing the cluster to manage resources on a guest node,
pacemaker_remote has one other trick. The pacemaker_remote daemon allows
nearly all the pacemaker tools (``crm_resource``, ``crm_mon``, ``crm_attribute``,
etc.) to work on guest nodes natively.
Try it: Run ``crm_mon`` on the guest after pacemaker has
integrated the guest node into the cluster. These tools just work. This
means resource agents such as promotable resources (which need access to tools
like ``crm_attribute``) work seamlessly on the guest nodes.
Higher-level command shells such as ``pcs`` may have partial support
on guest nodes, but it is recommended to run them from a cluster node.
Troubleshooting a Remote Connection
###################################
Note: This section should not be performed while the guest node is connected to the cluster.
Should connectivity issues occur, it can be worth verifying that the cluster nodes
can contact the remote node on port 3121. Here's a trick you can use.
Connect using ssh from each of the cluster nodes. The connection will get
destroyed, but how it is destroyed tells you whether it worked or not.
If running the ssh command on one of the cluster nodes results in this
output before disconnecting, the connection works:
.. code-block:: none
# ssh -p 3121 guest1
ssh_exchange_identification: read: Connection reset by peer
If you see one of these, the connection is not working:
.. code-block:: none
# ssh -p 3121 guest1
ssh: connect to host guest1 port 3121: No route to host
.. code-block:: none
# ssh -p 3121 guest1
ssh: connect to host guest1 port 3121: Connection refused
If you see this, then the connection is working, but port 3121 is attached
to SSH, which it should not be.
.. code-block:: none
# ssh -p 3121 guest1
kex_exchange_identification: banner line contains invalid characters
Once you can successfully connect to the guest from the host, you may
shut down the guest. Pacemaker will be managing the virtual machine from
this point forward.
diff --git a/doc/sphinx/Pacemaker_Remote/options.rst b/doc/sphinx/Pacemaker_Remote/options.rst
index 27baa77a1e..482182976e 100644
--- a/doc/sphinx/Pacemaker_Remote/options.rst
+++ b/doc/sphinx/Pacemaker_Remote/options.rst
@@ -1,171 +1,174 @@
.. index::
single: configuration
Configuration Explained
-----------------------
The walk-through examples use some of these options, but don't explain exactly
what they mean or do. This section is meant to be the go-to resource for all
the options available for configuring Pacemaker Remote.
.. index::
pair: configuration; guest node
single: guest node; meta-attribute
Resource Meta-Attributes for Guest Nodes
########################################
When configuring a virtual machine as a guest node, the virtual machine is
created using one of the usual resource agents for that purpose (for example,
**ocf:heartbeat:VirtualDomain** or **ocf:heartbeat:Xen**), with additional
meta-attributes.
No restrictions are enforced on what agents may be used to create a guest node,
but obviously the agent must create a distinct environment capable of running
the pacemaker_remote daemon and cluster resources. An additional requirement is
that fencing the host running the guest node resource must be sufficient for
ensuring the guest node is stopped. This means, for example, that not all
hypervisors supported by **VirtualDomain** may be used to create guest nodes;
if the guest can survive the hypervisor being fenced, it may not be used as a
guest node.
Below are the meta-attributes available to enable a resource as a guest node
and define its connection parameters.
.. table:: **Meta-attributes for configuring VM resources as guest nodes**
+------------------------+-----------------+-----------------------------------------------------------+
| Option | Default | Description |
+========================+=================+===========================================================+
| remote-node | none | The node name of the guest node this resource defines. |
| | | This both enables the resource as a guest node and |
| | | defines the unique name used to identify the guest node. |
| | | If no other parameters are set, this value will also be |
| | | assumed as the hostname to use when connecting to |
| | | pacemaker_remote on the VM. This value **must not** |
| | | overlap with any resource or node IDs. |
+------------------------+-----------------+-----------------------------------------------------------+
| remote-port | 3121 | The port on the virtual machine that the cluster will |
| | | use to connect to pacemaker_remote. |
+------------------------+-----------------+-----------------------------------------------------------+
| remote-addr | 'value of' | The IP address or hostname to use when connecting to |
| | ``remote-node`` | pacemaker_remote on the VM. |
+------------------------+-----------------+-----------------------------------------------------------+
| remote-connect-timeout | 60s | How long before a pending guest connection will time out. |
+------------------------+-----------------+-----------------------------------------------------------+
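As a concrete illustration (reusing the resource and node names from the guest
node walk-through, which are assumptions here rather than requirements), these
meta-attributes could be set on an existing VM resource as follows. Note that
``pcs cluster node add-guest`` normally sets them for you.

.. code-block:: none

    [root@pcmk-1 ~]# pcs resource update vm-guest1 meta remote-node=guest1 \
        remote-addr=192.168.122.10 remote-connect-timeout=60s --force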
.. index::
pair: configuration; remote node
Connection Resources for Remote Nodes
#####################################
A remote node is defined by a connection resource. That connection resource
has instance attributes that define where the remote node is located on the
network and how to communicate with it.
Descriptions of these instance attributes can be retrieved using the following
``pcs`` command:
.. code-block:: none
- # pcs resource describe remote
- ocf:pacemaker:remote - remote resource agent
+ [root@pcmk-1 ~]# pcs resource describe remote
+ Assumed agent name 'ocf:pacemaker:remote' (deduced from 'remote')
+ ocf:pacemaker:remote - Pacemaker Remote connection
Resource options:
- server: Server location to connect to (IP address or resolvable host name)
- port: TCP port at which to contact Pacemaker Remote executor
- reconnect_interval: If this is a positive time interval, the cluster will attempt to
- reconnect to a remote node after an active connection has been
- lost at this interval. Otherwise, the cluster will attempt to
- reconnect immediately (after any fencing needed).
-
+ server (unique group: address): Server location to connect to (IP address
+ or resolvable host name)
+ port (unique group: address): TCP port at which to contact Pacemaker
+ Remote executor
+ reconnect_interval: If this is a positive time interval, the cluster will
+ attempt to reconnect to a remote node after an active
+ connection has been lost at this interval. Otherwise,
+ the cluster will attempt to reconnect immediately
+ (after any fencing needed).
When defining a remote node's connection resource, it is common and recommended
to name the connection resource the same as the remote node's hostname. By
default, if no ``server`` option is provided, the cluster will attempt to contact
the remote node using the resource name as the hostname.
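For illustration, a connection resource for the walk-through's remote node could
be defined manually along these lines (a sketch only; ``pcs cluster node
add-remote`` is the recommended way and also handles the authkey and service
setup). The address is an assumption taken from the walk-through, and depending
on your ``pcs`` version you may need ``--force`` to create the resource this way.

.. code-block:: none

    [root@pcmk-1 ~]# pcs resource create remote1 ocf:pacemaker:remote \
        server=192.168.122.10 reconnect_interval=60 --force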
Environment Variables for Daemon Start-up
#########################################
Authentication and encryption of the connection between cluster nodes
and nodes running pacemaker_remote is achieved using
`TLS-PSK `_ encryption/authentication
over TCP (port 3121 by default). This means that both the cluster node and
remote node must share the same private key. By default, this
key is placed at ``/etc/pacemaker/authkey`` on each node.
You can change the default port and/or key location for Pacemaker and
``pacemaker_remoted`` via environment variables. How these variables are set
varies by OS, but usually they are set in the ``/etc/sysconfig/pacemaker`` or
``/etc/default/pacemaker`` file.
.. code-block:: none
#==#==# Pacemaker Remote
# Use the contents of this file as the authorization key to use with Pacemaker
# Remote connections. This file must be readable by Pacemaker daemons (that is,
# it must allow read permissions to either the hacluster user or the haclient
# group), and its contents must be identical on all nodes. The default is
# "/etc/pacemaker/authkey".
# PCMK_authkey_location=/etc/pacemaker/authkey
# If the Pacemaker Remote service is run on the local node, it will listen
# for connections on this address. The value may be a resolvable hostname or an
# IPv4 or IPv6 numeric address. When resolving names or using the default
# wildcard address (i.e. listen on all available addresses), IPv6 will be
# preferred if available. When listening on an IPv6 address, IPv4 clients will
# be supported (via IPv4-mapped IPv6 addresses).
# PCMK_remote_address="192.0.2.1"
-
+
# Use this TCP port number when connecting to a Pacemaker Remote node. This
# value must be the same on all nodes. The default is "3121".
# PCMK_remote_port=3121
-
+
# Use these GnuTLS cipher priorities for TLS connections. See:
#
# https://gnutls.org/manual/html_node/Priority-Strings.html
#
# Pacemaker will append ":+ANON-DH" for remote CIB access (when enabled) and
# ":+DHE-PSK:+PSK" for Pacemaker Remote connections, as they are required for
# the respective functionality.
# PCMK_tls_priorities="NORMAL"
-
+
# Set bounds on the bit length of the prime number generated for Diffie-Hellman
# parameters needed by TLS connections. The default is not to set any bounds.
#
# If these values are specified, the server (Pacemaker Remote daemon, or CIB
# manager configured to accept remote clients) will use these values to provide
# a floor and/or ceiling for the value recommended by the GnuTLS library. The
# library will only accept a limited number of specific values, which vary by
# library version, so setting these is recommended only when required for
# compatibility with specific client versions.
#
# If PCMK_dh_min_bits is specified, the client (connecting cluster node or
# remote CIB command) will require that the server use a prime of at least this
# size. This is only recommended when the value must be lowered in order for
# the client's GnuTLS library to accept a connection to an older server.
# The client side does not use PCMK_dh_max_bits.
#
# PCMK_dh_min_bits=1024
# PCMK_dh_max_bits=2048
Removing Remote Nodes and Guest Nodes
#####################################
If the resource creating a guest node, or the **ocf:pacemaker:remote** resource
creating a connection to a remote node, is removed from the configuration, the
affected node will continue to show up in output as an offline node.
If you want to get rid of that output, run (replacing ``$NODE_NAME``
appropriately):
.. code-block:: none
# crm_node --force --remove $NODE_NAME
.. WARNING::
Be absolutely sure that there are no references to the node's resource in the
configuration before running the above command.
diff --git a/doc/sphinx/conf.py.in b/doc/sphinx/conf.py.in
index 62139f45b3..498c873b8f 100644
--- a/doc/sphinx/conf.py.in
+++ b/doc/sphinx/conf.py.in
@@ -1,316 +1,316 @@
""" Sphinx configuration for Pacemaker documentation
"""
__copyright__ = "Copyright 2020-2022 the Pacemaker project contributors"
__license__ = "GNU General Public License version 2 or later (GPLv2+) WITHOUT ANY WARRANTY"
# This file is execfile()d with the current directory set to its containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import datetime
import os
import sys
# Variables that can be used later in this file
authors = "the Pacemaker project contributors"
year = datetime.datetime.now().year
doc_license = "Creative Commons Attribution-ShareAlike International Public License"
doc_license += " version 4.0 or later (CC-BY-SA v4.0+)"
# rST markup to insert at beginning of every document; mainly used for
#
# .. |NAME| replace:: TEXT
#
# where occurrences of |NAME| in the rST will be substituted with TEXT
rst_prolog="""
.. |CFS_DISTRO| replace:: AlmaLinux
.. |CFS_DISTRO_VER| replace:: 9
-.. |REMOTE_DISTRO| replace:: CentOS Stream
-.. |REMOTE_DISTRO_VER| replace:: 8
+.. |REMOTE_DISTRO| replace:: AlmaLinux
+.. |REMOTE_DISTRO_VER| replace:: 9
"""
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#sys.path.insert(0, os.path.abspath('.'))
# -- General configuration -----------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = []
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = '%BOOK_ID%'
copyright = "2009-%s %s. Released under the terms of the %s" % (year, authors, doc_license)
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The full version, including alpha/beta/rc tags.
release = '%VERSION%'
# The short X.Y version.
version = release.rsplit('.', 1)[0]
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['_build']
# The reST default role (used for this markup: `text`) to use for all documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'vs'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# -- Options for HTML output ---------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'pyramid'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []
html_style = 'pacemaker.css'
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
html_title = "%BOOK_TITLE%"
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = [ '%SRC_DIR%/_static' ]
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_domain_indices = True
# If false, no index is generated.
#html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'Pacemakerdoc'
# -- Options for LaTeX output --------------------------------------------------
latex_engine = "xelatex"
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#'preamble': '',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass [howto/manual]).
latex_documents = [
('index', '%BOOK_ID%.tex', '%BOOK_TITLE%', authors, 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# If true, show page references after internal links.
#latex_show_pagerefs = False
# If true, show URL addresses after external links.
#latex_show_urls = False
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_domain_indices = True
# -- Options for manual page output --------------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', '%BOOK_ID%', 'Part of the Pacemaker documentation set', [authors], 8)
]
# If true, show URL addresses after external links.
#man_show_urls = False
# -- Options for Texinfo output ------------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index', '%BOOK_ID%', '%BOOK_TITLE%', authors, '%BOOK_TITLE%',
'Pacemaker is an advanced, scalable high-availability cluster resource manager.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
#texinfo_appendices = []
# If false, no module index is generated.
#texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'
# -- Options for Epub output ---------------------------------------------------
# Bibliographic Dublin Core info.
epub_title = '%BOOK_TITLE%'
epub_author = authors
epub_publisher = 'ClusterLabs.org'
epub_copyright = copyright
# The language of the text. It defaults to the language option
# or en if the language is not set.
#epub_language = ''
# The scheme of the identifier. Typical schemes are ISBN or URL.
epub_scheme = 'URL'
# The unique identifier of the text. This can be an ISBN number
# or the project homepage.
epub_identifier = 'https://www.clusterlabs.org/pacemaker/doc/2.1/%BOOK_ID%/epub/%BOOK_ID%.epub'
# A unique identification for the text.
epub_uid = 'ClusterLabs.org-Pacemaker-%BOOK_ID%'
# A tuple containing the cover image and cover page html template filenames.
#epub_cover = ()
# HTML files that should be inserted before the pages created by sphinx.
# The format is a list of tuples containing the path and title.
#epub_pre_files = []
# HTML files that should be inserted after the pages created by sphinx.
# The format is a list of tuples containing the path and title.
#epub_post_files = []
# A list of files that should not be packed into the epub file.
epub_exclude_files = [
'_static/doctools.js',
'_static/jquery.js',
'_static/searchtools.js',
'_static/underscore.js',
'_static/basic.css',
'_static/websupport.js',
'search.html',
]
# The depth of the table of contents in toc.ncx.
#epub_tocdepth = 3
# Allow duplicate toc entries.
#epub_tocdup = True