diff --git a/doc/sphinx/Pacemaker_Administration/cluster.rst b/doc/sphinx/Pacemaker_Administration/cluster.rst
index 069121f042..3713733418 100644
--- a/doc/sphinx/Pacemaker_Administration/cluster.rst
+++ b/doc/sphinx/Pacemaker_Administration/cluster.rst
@@ -1,71 +1,21 @@
.. index::
   single: cluster layer
The Cluster Layer
-----------------
-Pacemaker and the Cluster Layer
-###############################
-
Pacemaker utilizes an underlying cluster layer for two purposes:
* obtaining quorum
* messaging between nodes
-Currently, only Corosync 2 and later is supported for this layer.
-
.. index::
   single: cluster layer; Corosync
   single: Corosync
-Managing Nodes in a Corosync-Based Cluster
-##########################################
-
-.. index::
- pair: Corosync; add cluster node
-
-Adding a New Corosync Node
-__________________________
-
-To add a new node:
-
-#. Install Corosync and Pacemaker on the new host.
-#. Copy ``/etc/corosync/corosync.conf`` and ``/etc/corosync/authkey`` (if it
- exists) from an existing node. You may need to modify the ``mcastaddr``
- option to match the new node's IP address.
-#. Start the cluster software on the new host. If a log message containing
- "Invalid digest" appears from Corosync, the keys are not consistent between
- the machines.
-
-.. index::
- pair: Corosync; remove cluster node
-
-Removing a Corosync Node
-________________________
-
-Because the messaging and membership layers are the authoritative
-source for cluster nodes, deleting them from the CIB is not a complete
-solution. First, one must arrange for corosync to forget about the
-node (**pcmk-1** in the example below).
-
-#. Stop the cluster on the host to be removed. How to do this will vary with
- your operating system and installed versions of cluster software, for example,
- ``pcs cluster stop`` if you are using pcs for cluster management.
-#. From one of the remaining active cluster nodes, tell Pacemaker to forget
- about the removed host, which will also delete the node from the CIB:
-
- .. code-block:: none
-
- # crm_node -R pcmk-1
-
-.. index::
- pair: Corosync; replace cluster node
-
-Replacing a Corosync Node
-_________________________
-
-To replace an existing cluster node:
+Currently, only Corosync 2 and later is supported for this layer.
-#. Make sure the old node is completely stopped.
-#. Give the new machine the same hostname and IP address as the old one.
-#. Follow the procedure above for adding a node.
+This document assumes you have already configured the cluster nodes in
+Corosync. High-level cluster management tools are available that can
+configure Corosync for you. If you want the lower-level details, see the
+`Corosync documentation <http://www.corosync.org/>`_.
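+
+Before configuring Pacemaker, you can verify that the cluster layer is
+healthy using Corosync's own status tools (an illustrative check, not a
+required step; the exact output varies by deployment):
+
+.. code-block:: none
+
+   # corosync-cfgtool -s
+   # corosync-quorumtool -s
+
+``corosync-cfgtool -s`` reports the status of the local node's rings, and
+``corosync-quorumtool -s`` shows current membership and quorum state.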
diff --git a/doc/sphinx/Pacemaker_Administration/installing.rst b/doc/sphinx/Pacemaker_Administration/installing.rst
index 179f4fe665..44a3f5f119 100644
--- a/doc/sphinx/Pacemaker_Administration/installing.rst
+++ b/doc/sphinx/Pacemaker_Administration/installing.rst
@@ -1,112 +1,9 @@
Installing Cluster Software
---------------------------
.. index:: installation
-Installing the Software
-#######################
-
Most major Linux distributions have pacemaker packages in their standard
package repositories, or the software can be built from source code.
See the `Install wiki page <https://wiki.clusterlabs.org/wiki/Install>`_
for details.
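+
+For example, on a distribution that uses RPM packages, installation might
+look like this (package names can vary by distribution and version):
+
+.. code-block:: none
+
+   # dnf install pacemaker corosync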
-
-Enabling Pacemaker
-##################
-
-.. index::
- pair: configuration; Corosync
-
-Enabling Pacemaker For Corosync version 2 and greater
-_____________________________________________________
-
-High-level cluster management tools are available that can configure
-corosync for you. This document focuses on the lower-level details
-if you want to configure corosync yourself.
-
-Corosync configuration is normally located in
-``/etc/corosync/corosync.conf``.
-
-.. topic:: Corosync configuration file for two nodes **myhost1** and **myhost2**
-
- .. code-block:: none
-
- totem {
- version: 2
- secauth: off
- cluster_name: mycluster
- transport: udpu
- }
-
- nodelist {
- node {
- ring0_addr: myhost1
- nodeid: 1
- }
- node {
- ring0_addr: myhost2
- nodeid: 2
- }
- }
-
- quorum {
- provider: corosync_votequorum
- two_node: 1
- }
-
- logging {
- to_syslog: yes
- }
-
-.. topic:: Corosync configuration file for three nodes **myhost1**, **myhost2** and **myhost3**
-
- .. code-block:: none
-
- totem {
- version: 2
- secauth: off
- cluster_name: mycluster
- transport: udpu
- }
-
- nodelist {
- node {
- ring0_addr: myhost1
- nodeid: 1
- }
- node {
- ring0_addr: myhost2
- nodeid: 2
- }
- node {
- ring0_addr: myhost3
- nodeid: 3
- }
- }
-
- quorum {
- provider: corosync_votequorum
- }
-
- logging {
- to_syslog: yes
- }
-
-In the above examples, the ``totem`` section defines what protocol version and
-options (including encryption) to use, [#]_
-and gives the cluster a unique name (``mycluster`` in these examples).
-
-The ``node`` section lists the nodes in this cluster.
-
-The ``quorum`` section defines how the cluster uses quorum. The important thing
-is that two-node clusters must be handled specially, so ``two_node: 1`` must be
-defined for two-node clusters (it will be ignored for clusters of any other
-size).
-
-The ``logging`` section should be self-explanatory.
-
-.. rubric:: Footnotes
-
-.. [#] Please consult the Corosync website (http://www.corosync.org/) and
- documentation for details on enabling encryption and peer authentication
- for the cluster.