diff --git a/cts/README.md b/cts/README.md
index 0e6eac148d..0ff1065bf4 100644
--- a/cts/README.md
+++ b/cts/README.md
@@ -1,370 +1,369 @@
 # Pacemaker Cluster Test Suite (CTS)
 
 The Cluster Test Suite (CTS) refers to all Pacemaker testing code that can be
 run in an installed environment. (Pacemaker also has unit tests that must be
 run from a source distribution.)
 
 CTS includes:
 
 * Regression tests: These test specific Pacemaker components individually (no
   integration tests). The primary front end is cts-regression in this
   directory. Run it with the --help option to see its usage.
 
   cts-regression is a wrapper for individual component regression tests also
   in this directory (cts-cli, cts-exec, cts-fencing, and cts-scheduler).
 
   The CLI and scheduler regression tests can also be run from a source
   distribution. The other regression tests can only run in an installed
   environment, and the cluster should not be running on the node running these
   tests.
 
 * The CTS lab: This is a cluster exerciser for intensively testing the behavior
   of an entire working cluster. It is primarily for developers and packagers of
   the Pacemaker source code, but it can be useful for users who wish to see how
   their cluster will react to various situations. In an installed deployment,
   the CTS lab is in the cts subdirectory of this directory; in a source
  distribution, it is in cts/lab.
 
  The CTS lab runs a randomized series of predefined tests on the cluster. It
  can be run against a pre-existing cluster configuration, or it can overwrite
  the existing configuration with a test configuration.
 
 * Helpers: Some of the component regression tests and the CTS lab require
   certain helpers to be installed as root. These include a dummy LSB init
   script, dummy systemd service, etc. In a source distribution, the source for
   these is in cts/support.
 
   The tests will install these as needed and uninstall them when done. This
   means that the cluster configuration created by the CTS lab will generate
   failures if started manually after the lab exits. However, the helper
   installer can be run manually to make the configuration usable, if you want
   to do your own further testing with it:
 
       /usr/libexec/pacemaker/cts-support install
 
   As you might expect, you can also remove the helpers with:
 
       /usr/libexec/pacemaker/cts-support uninstall
 
 * Cluster benchmark: The benchmark subdirectory of this directory contains some
   cluster test environment benchmarking code. It is not particularly useful for
   end users.
 
 * LXC generator: The lxc\_autogen.sh script can be used to create some guest
   nodes for testing using LXC containers. It is not particularly useful for end
   users. In an installed deployment, it is in the cts subdirectory of this
   directory; in a source distribution, it is in this directory.
 
 * Valgrind suppressions: When memory-testing Pacemaker code with valgrind,
   various bugs in non-Pacemaker libraries and such can clutter the results. The
   valgrind-pcmk.suppressions file in this directory can be used with valgrind's
   --suppressions option to eliminate many of these.
 
 
 ## Using the CTS lab
 
 ### Requirements
 
 * Three or more machines (one test exerciser and at least two cluster nodes).
 
 * The test cluster nodes should be on the same subnet and use journalling
   filesystems (ext4, xfs, etc.) for all of their filesystems other than
   /boot. You also need a number of free IP addresses on that subnet if you
   intend to test IP address takeover.
 
 * The test exerciser machine doesn't need to be on the same subnet as the test
   cluster machines. Minimal demands are made on the exerciser; it just has to
   stay up during the tests.
 
 * Tracking problems is easier if all machines' clocks are closely synchronized.
   NTP does this automatically, but you can do it by hand if you want.
 
 * The account on the exerciser used to run the CTS lab (which does not need to
   be root) must be able to ssh as root to the cluster nodes without a password
   challenge. See the Mini-HOWTO at the end of this file for details about how
   to configure ssh for this.
 
 * The exerciser needs to be able to resolve all cluster node names, whether by
   DNS or /etc/hosts.
 
 * CTS is not guaranteed to run on all platforms that Pacemaker itself does.
   It calls commands such as `service` that may not be provided by all OSes.
 
 
 ### Preparation
 
 * Install Pacemaker, including the testing code, on all machines. The testing
   code must be the same version as the rest of Pacemaker, and the Pacemaker
   version must be the same on the exerciser and all cluster nodes.
 
   You can install from source, although many distributions package the testing
   code (named pacemaker-cts or similar). Typically, everything needed by the
   CTS lab is installed in /usr/share/pacemaker/tests/cts.
 
 * Configure the cluster layer (Corosync) on the cluster machines (*not* the
   exerciser), and verify it works. Node names used in the cluster configuration
   *must* match the hosts' names as returned by `uname -n`; they do not have to
   match the machines' fully qualified domain names.
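 
  For example, a corosync.conf nodelist entry for one node might look like
  this (the address here is only a placeholder):
 
      nodelist {
          node {
              ring0_addr: 192.168.122.101
              name: pcmk-1
              nodeid: 1
          }
      }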
 
 
 ### Run
 
 The primary interface to the CTS lab is the CTSlab.py executable:
 
     /usr/share/pacemaker/tests/cts/CTSlab.py [options] <number-of-tests-to-run>
 
 As part of the options, specify the cluster nodes with --nodes, for example:
 
     --nodes "pcmk-1 pcmk-2 pcmk-3"
 
 Most people will want to save the output to a file, for example:
 
     --outputfile ~/cts.log
 
 Unless you want to test a pre-existing cluster configuration, you also want
 (*warning*: with these options, any existing configuration will be lost):
 
     --clobber-cib
     --populate-resources
 
 You can test floating IP addresses (*not* already used by any host), one per
 cluster node, by specifying the first, for example:
 
     --test-ip-base 192.168.9.100
 
 Configure some sort of fencing, for example to use fence\_xvm:
 
     --stonith xvm
 
 Putting all the above together, a command line might look like:
 
     /usr/share/pacemaker/tests/cts/CTSlab.py --nodes "pcmk-1 pcmk-2 pcmk-3" \
         --outputfile ~/cts.log --clobber-cib --populate-resources \
         --test-ip-base 192.168.9.100 --stonith xvm 50
 
 For more options, run with the --help option.
 
 There are also a couple of wrappers for CTSlab.py that some users may find more
 convenient: cts, which is typically installed in the same place as the rest of
 the testing code; and cluster\_test, which is in the source directory and
 typically not installed.
 
 To extract the result of a particular test, run:
 
     crm_report -T $test
 
 
 ### Optional: Memory testing
 
 Pacemaker has various options for testing memory management. On cluster nodes,
 Pacemaker components use various environment variables to control these
 options. How these variables are set varies by OS, but usually they are set in
 a file such as /etc/sysconfig/pacemaker or /etc/default/pacemaker.
 
 Valgrind is a program for detecting memory management problems such as
 use-after-free errors. If you have valgrind installed, you can enable it by
 setting the following environment variables on all cluster nodes:
 
     PCMK_valgrind_enabled=pacemaker-attrd,pacemaker-based,pacemaker-controld,pacemaker-execd,pacemaker-fenced,pacemaker-schedulerd
     VALGRIND_OPTS="--leak-check=full --trace-children=no --num-callers=25
         --log-file=/var/lib/pacemaker/valgrind-%p
         --suppressions=/usr/share/pacemaker/tests/valgrind-pcmk.suppressions
         --gen-suppressions=all"
 
 If running the CTS lab with valgrind enabled on the cluster nodes, add these
 options to CTSlab.py:
 
     --valgrind-tests --valgrind-procs "pacemaker-attrd pacemaker-based pacemaker-controld pacemaker-execd pacemaker-schedulerd pacemaker-fenced"
 
 These options should only be set while specifically testing memory management,
 because they may slow down the cluster significantly, and they will disable
 writes to the CIB. If desired, you can enable valgrind on a subset of pacemaker
 components rather than all of them as listed above.
 
 Valgrind will put a text file for each process in the location specified by
 valgrind's --log-file option. See
 https://www.valgrind.org/docs/manual/mc-manual.html for explanations of the
 messages valgrind generates.
 
 Separately, the G\_SLICE environment variable (used by GLib) and the
 MALLOC\_PERTURB\_ and MALLOC\_CHECK\_ environment variables (used by the GNU C
 library) can be set to affect those libraries' memory management functions.
 
 When using valgrind, G\_SLICE should be set to "always-malloc", which helps
 valgrind track memory by always using the malloc() and free() routines
 directly. When not using valgrind, G\_SLICE can be left unset, or set to
 "debug-blocks", which enables GLib to catch many memory errors but may
 impact performance.
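 
 Example (for use together with valgrind, per the recommendation above):
 
     G_SLICE=always-malloc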
 
 If the MALLOC\_PERTURB\_ environment variable is set to an 8-bit integer, the C
 library will initialize all newly allocated bytes of memory to the integer
 value, and will set all newly freed bytes of memory to the bitwise inverse of
 the integer value. This helps catch uses of uninitialized or freed memory
 blocks that might otherwise go unnoticed. Example:
 
     MALLOC_PERTURB_=221
 
 If the MALLOC\_CHECK\_ environment variable is set, the C library will check for
 certain heap corruption errors. The most useful value in testing is 3, which
 will cause the library to print a message to stderr and abort execution.
 Example:
 
     MALLOC_CHECK_=3
 
 Valgrind should be enabled for either all nodes or none when used with the CTS
 lab, but the C library variables may be set differently on different nodes.
 
 
 ### Optional: Remote node testing
 
 If the pacemaker-remoted daemon is installed on all cluster nodes, CTS will
 enable remote node tests.
 
 The remote node tests choose a random node, stop the cluster on it, start
 pacemaker-remoted on it, and add an ocf:pacemaker:remote resource to turn it
 into a remote node. When the test is done, CTS will turn the node back into
 a cluster node.
 
 To avoid conflicts, CTS will rename the node, prefixing the original node name
 with "remote-". For example, "pcmk-1" will become "remote-pcmk-1". These names
 do not need to be resolvable.
 
 The name change may require special fencing configuration, if the fence agent
 expects the node name to be the same as its hostname. A common approach is to
 specify the "remote-" names in pcmk\_host\_list. If you use
 pcmk\_host\_list=all, CTS will expand that to all cluster nodes and their
 "remote-" names.  You may additionally need a pcmk\_host\_map argument to map
 the "remote-" names to the hostnames. Example:
 
     --stonith xvm --stonith-args \
     pcmk_host_list=all,pcmk_host_map=remote-pcmk-1:pcmk-1;remote-pcmk-2:pcmk-2
 
 
 ### Optional: Remote node testing with valgrind
 
 When running the remote node tests, the Pacemaker components on the *cluster*
 nodes can be run under valgrind as described in the "Memory testing" section.
 However, pacemaker-remoted cannot be run under valgrind that way, because it is
 started by the OS's regular boot system and not by Pacemaker.
 
 Details vary by system, but the goal is to set the VALGRIND\_OPTS environment
 variable and then start pacemaker-remoted by prefixing it with the path to
 valgrind.
 
 The init script and systemd service file provided with pacemaker-remoted will
 load the pacemaker environment variables from the same location used by other
 Pacemaker components, so VALGRIND\_OPTS will be set correctly if using one of
 those.
 
 For an OS using systemd, you can override the ExecStart parameter to run
 valgrind. For example:
 
     mkdir /etc/systemd/system/pacemaker_remote.service.d
     cat >/etc/systemd/system/pacemaker_remote.service.d/valgrind.conf <<EOF
     [Service]
     ExecStart=
     ExecStart=/usr/bin/valgrind /usr/sbin/pacemaker-remoted
     EOF
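 
 After creating the drop-in file, reload systemd so that it notices the
 override (and then restart the pacemaker_remote service when you are ready
 to test):
 
     systemctl daemon-reload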
 
 
 ### Optional: Container testing
 
 If the --container-tests option is given to CTSlab.py, it will enable
 testing of LXC resources (currently only the RemoteLXC test,
 which starts a remote node using an LXC container).
 
 The container tests have additional package dependencies (see the toplevel
 INSTALL.md). Also, SELinux must be enabled (in either permissive or enforcing
 mode), libvirtd must be enabled and running, and root must be able to ssh
 without a password between all cluster nodes (not just from the exerciser).
 Before running the tests, you can verify your environment with:
 
     /usr/share/pacemaker/tests/cts/lxc_autogen.sh -v
 
 LXC tests will create two containers with hardcoded parameters: a NAT'ed bridge
 named virbr0 using the IP network 192.168.123.0/24 will be created on the
 cluster node hosting the containers; the host will be assigned
 52:54:00:A8:12:35 as the MAC address and 192.168.123.1 as the IP address.
 Each container will be assigned a random MAC address starting with 52:54:,
 the IP address 192.168.123.11 or 192.168.123.12, the hostname lxc1 or lxc2
 (which will be added to the host's /etc/hosts file), and 196MB RAM.
 
 The test will revert all of the configuration when it is done.
 
 
 ### Mini-HOWTO: Allow passwordless remote SSH connections
 
 The CTS scripts run "ssh -l root" so you don't have to do any of your testing
 logged in as root on the exerciser. Here is how to allow such connections
 without requiring a password to be entered each time:
 
 * On your test exerciser, create an SSH key if you do not already have one.
   Most commonly, SSH keys will be in your ~/.ssh directory, with the
   private key file not having an extension, and the public key file
   named the same with the extension ".pub" (for example, ~/.ssh/id\_rsa.pub).
 
   If you don't already have a key, you can create one with:
 
       ssh-keygen -t rsa
 
 * From your test exerciser, authorize your SSH public key for root on all test
   machines (both the exerciser and the cluster test machines):
 
       ssh-copy-id -i ~/.ssh/id_rsa.pub root@$MACHINE
 
   You will probably have to provide your password, and possibly say
   "yes" to some questions about accepting the identity of the test machines.
 
   The above assumes you have an RSA SSH key in the specified location;
   if you have some other type of key (DSA, ECDSA, etc.), use its file name
   in the -i option above.
 
 * To verify, try this command from the exerciser machine for each
   of your cluster machines, and for the exerciser machine itself.
 
       ssh -l root $MACHINE
 
   If this works without prompting for a password, you're in business.
   If not, look at the documentation for your version of ssh.
 
 
 ## Notes on maintenance
 
 ### Tests for scheduler
 
 The source `*.xml` files are preferably kept in sync with the newest
-major (and only major, which is enough) schema version, unless justified
-otherwise (e.g. testing a feature backed only in `pacemaker-next` special
-version of the schema), since these tests are not meant to double as
-schema upgrade ones (unless some cases expressly designated so).
+major (and only major, which is sufficient) schema version, since these
+tests are not meant to double as schema upgrade tests (except for cases
+expressly designated as such).
 
 Currently, unless something goes wrong, upgrading these tests en masse
 is as easy as:
 
     cd "$(git rev-parse --show-toplevel)/cts"  # if not already
     pushd "$(git rev-parse --show-toplevel)/xml"
     ./regression.sh cts_scheduler -G
     popd
     git add --interactive .
     git commit -m 'XML: upgrade-M.N.xsl: apply on scheduler CTS test cases'
     git reset HEAD && git checkout .  # if some differences still remain
     ./cts-scheduler  # absolutely vital to check nothing got broken!
 
 Now, sadly, there is no proven automated way to minimize instances like this:
 
     <primitive id="rsc1" class="ocf" provider="heartbeat" type="apache">
     </primitive>
 
 that may be left behind, into the more canonical form:
 
     <primitive id="rsc1" class="ocf" provider="heartbeat" type="apache"/>
 
 so manual editing is required, or perhaps the `--format` or `--c14n`
 options to `xmllint` will help (without introducing any other side effects).
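 
 For example (the file names are only placeholders; review the resulting
 diff, since `xmllint --format` may reindent more than intended):
 
     xmllint --format original.xml > formatted.xml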
 
 If the overall process gets stuck anywhere, apply common sense.
 The initial part of the above recipe can be repeated at any time to verify
 that there is nothing left to upgrade, which is the desired state.
 Note that the `regression.sh` script implicitly validates both the input
 and the output whenever an upgrade takes place, so there is no need to
 revalidate in the happy case.
diff --git a/doc/sphinx/Pacemaker_Administration/upgrading.rst b/doc/sphinx/Pacemaker_Administration/upgrading.rst
index 3cd13a5f52..1ca2a4e4b8 100644
--- a/doc/sphinx/Pacemaker_Administration/upgrading.rst
+++ b/doc/sphinx/Pacemaker_Administration/upgrading.rst
@@ -1,535 +1,534 @@
 .. index:: upgrade
 
 Upgrading a Pacemaker Cluster
 -----------------------------
 
 .. index:: version
 
 Pacemaker Versioning
 ####################
 
 Pacemaker has an overall release version, plus separate version numbers for
 certain internal components.
 
 .. index::
    single: version; release
 
 * **Pacemaker release version:** This version consists of three numbers
   (*x.y.z*).
 
   The major version number (the *x* in *x.y.z*) increases when at least some
   rolling upgrades are not possible from the previous major version. For example,
   a rolling upgrade from 1.0.8 to 1.1.15 should always be supported, but a
   rolling upgrade from 1.0.8 to 2.0.0 may not be possible.
 
   The minor version (the *y* in *x.y.z*) increases when there are significant
   changes in cluster default behavior, tool behavior, and/or the API interface
   (for software that utilizes Pacemaker libraries). The main benefit is to alert
   you to pay closer attention to the release notes, to see if you might be
   affected.
 
   The release counter (the *z* in *x.y.z*) is increased with all public releases
   of Pacemaker, which typically include both bug fixes and new features.
 
 .. index::
    single: feature set
    single: version; feature set
 
 * **CRM feature set:** This version number applies to the communication between
   full cluster nodes, and is used to avoid problems in mixed-version clusters.
 
   The major version number increases when nodes with different versions would not
   work (rolling upgrades are not allowed). The minor version number increases
   when mixed-version clusters are allowed only during rolling upgrades. The
   minor-minor version number is ignored, but allows resource agents to detect
   cluster support for various features. [#]_
 
   Pacemaker ensures that the longest-running node is the cluster's DC. This
   ensures new features are not enabled until all nodes are upgraded to support
   them.
 
 .. index::
    single: version; Pacemaker Remote protocol
 
 * **Pacemaker Remote protocol version:** This version applies to communication
   between a Pacemaker Remote node and the cluster. It increases when an older
   cluster node would have problems hosting the connection to a newer
   Pacemaker Remote node. To avoid these problems, Pacemaker Remote nodes will
   accept connections only from cluster nodes with the same or newer
   Pacemaker Remote protocol version.
 
   Unlike with CRM feature set differences between full cluster nodes,
   mixed Pacemaker Remote protocol versions between Pacemaker Remote nodes and
   full cluster nodes are fine, as long as the Pacemaker Remote nodes have the
   older version. This can be useful, for example, to host a legacy application
   in an older operating system version used as a Pacemaker Remote node.
 
 .. index::
    single: version; XML schema
 
 * **XML schema version:** Pacemaker’s configuration syntax — what's allowed in
   the Cluster Information Base (CIB) — has its own version. This allows
   the configuration syntax to evolve over time while still allowing clusters
   with older configurations to work without change.
 
 
 .. index::
    single: upgrade; methods
 
 Upgrading Cluster Software
 ##########################
 
 There are three approaches to upgrading a cluster, each with advantages and
 disadvantages.
 
 .. table:: **Upgrade Methods**
 
    +---------------------------------------------------+----------+----------+--------+---------+----------+----------+
    | Method                                            | Available| Can be   | Service| Service | Exercises| Allows   |
    |                                                   | between  | used with| outage | recovery| failover | change of|
    |                                                   | all      | Pacemaker| during | during  | logic    | messaging|
    |                                                   | versions | Remote   | upgrade| upgrade |          | layer    |
    |                                                   |          | nodes    |        |         |          | [#]_     |
    +===================================================+==========+==========+========+=========+==========+==========+
    | Complete cluster shutdown                         | yes      | yes      | always | N/A     | no       | yes      |
    +---------------------------------------------------+----------+----------+--------+---------+----------+----------+
    | Rolling (node by node)                            | no       | yes      | always | yes     | yes      | no       |
    |                                                   |          |          | [#]_   |         |          |          |
    +---------------------------------------------------+----------+----------+--------+---------+----------+----------+
    | Detach and reattach                               | yes      | no       | only   | no      | no       | yes      |
    |                                                   |          |          | due to |         |          |          |
    |                                                   |          |          | failure|         |          |          |
    +---------------------------------------------------+----------+----------+--------+---------+----------+----------+
 
 
 .. index::
    single: upgrade; shutdown
 
 Complete Cluster Shutdown
 _________________________
 
 In this scenario, one shuts down all cluster nodes and resources,
 then upgrades all the nodes before restarting the cluster.
 
 #. On each node:
 
    a. Shut down the cluster software (pacemaker and the messaging layer).
    #. Upgrade the Pacemaker software. This may also include upgrading the
       messaging layer and/or the underlying operating system.
    #. Check the configuration with the ``crm_verify`` tool.
 
 #. On each node:
 
    a. Start the cluster software.
 
 Currently, only Corosync version 2 and greater is supported as the cluster
 layer, but if another stack is supported in the future, the stack does not
 need to be the same one as before the upgrade.
 
 One variation of this approach is to build a new cluster on new hosts.
 This allows the new version to be tested beforehand, and minimizes downtime by
 having the new nodes ready to be placed in production as soon as the old nodes
 are shut down.
 
 
 .. index::
    single: upgrade; rolling upgrade
 
 Rolling (node by node)
 ______________________
 
 In this scenario, each node is removed from the cluster, upgraded, and then
 brought back online, until all nodes are running the newest version.
 
 Special considerations when planning a rolling upgrade:
 
 * If you plan to upgrade other cluster software -- such as the messaging layer --
   at the same time, consult that software's documentation for its compatibility
   with a rolling upgrade.
 
 * If the major version number is changing in the Pacemaker version you are
   upgrading to, a rolling upgrade may not be possible. Read the new version's
   release notes (as well as the information here) for what limitations may exist.
 
 * If the CRM feature set is changing in the Pacemaker version you are upgrading
   to, you should run a mixed-version cluster only during a small rolling
   upgrade window. If one of the older nodes drops out of the cluster for any
   reason, it will not be able to rejoin until it is upgraded.
 
 * If the Pacemaker Remote protocol version is changing, all cluster nodes
   should be upgraded before upgrading any Pacemaker Remote nodes.
 
 See the ClusterLabs wiki's
 `release calendar <https://wiki.clusterlabs.org/wiki/ReleaseCalendar>`_
 to figure out whether the CRM feature set and/or Pacemaker Remote protocol
 version changed between the Pacemaker release versions in your rolling
 upgrade.
 
 To perform a rolling upgrade, on each node in turn:
 
 #. Put the node into standby mode (see the example after this list), and
    wait for any active resources to be moved cleanly to another node.
    (This step is optional, but allows you to deal with any resource issues
    before the upgrade.)
 #. Shut down the cluster software (pacemaker and the messaging layer) on the node.
 #. Upgrade the Pacemaker software. This may also include upgrading the
    messaging layer and/or the underlying operating system.
 #. If this is the first node to be upgraded, check the configuration
    with the ``crm_verify`` tool.
 #. Start the messaging layer.
    This must be the same messaging layer (currently only Corosync version 2 and
    greater is supported) that the rest of the cluster is using.
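 
 As an example of the first (standby) step, assuming a node named pcmk-1, the
 node's ``standby`` attribute can be set before the upgrade and cleared again
 afterward with:
 
 .. code-block:: none
 
    # crm_attribute --node pcmk-1 --name standby --update on
    # crm_attribute --node pcmk-1 --name standby --delete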
 
 .. note::
 
    Even if a rolling upgrade from the current version of the cluster to the
    newest version is not directly possible, it may be possible to perform a
    rolling upgrade in multiple steps, by upgrading to an intermediate version
    first.
 
 .. table:: **Version Compatibility Table**
 
    +-------------------------+---------------------------+
    | Version being Installed | Oldest Compatible Version |
    +=========================+===========================+
    | Pacemaker 2.y.z         | Pacemaker 1.1.11 [#]_     |
    +-------------------------+---------------------------+
    | Pacemaker 1.y.z         | Pacemaker 1.0.0           |
    +-------------------------+---------------------------+
    | Pacemaker 0.7.z         | Pacemaker 0.6.z           |
    +-------------------------+---------------------------+
 
 .. index::
    single: upgrade; detach and reattach
 
 Detach and Reattach
 ___________________
 
 The reattach method is a variant of a complete cluster shutdown, where the
 resources are left active and get re-detected when the cluster is restarted.
 
 This method may not be used if the cluster contains any Pacemaker Remote nodes.
 
 #. Tell the cluster to stop managing services. This is required to allow the
    services to remain active after the cluster shuts down.
 
    .. code-block:: none
 
       # crm_attribute --name maintenance-mode --update true
 
 #. On each node, shut down the cluster software (pacemaker and the messaging
    layer), and upgrade the Pacemaker software. This may also include upgrading
    the messaging layer. While the underlying operating system may be upgraded
    at the same time, that is more likely to cause outages in the detached
    services (certainly if a reboot is required).
 #. Check the configuration with the ``crm_verify`` tool.
 #. On each node, start the cluster software.
    Currently, only Corosync version 2 and greater is supported as the cluster
    layer, but if another stack is supported in the future, the stack does not
    need to be the same one as before the upgrade.
 #. Verify that the cluster re-detected all resources correctly.
 #. Allow the cluster to resume managing resources again:
 
    .. code-block:: none
 
       # crm_attribute --name maintenance-mode --delete
 
 .. note::
 
    While the goal of the detach-and-reattach method is to avoid disturbing
    running services, resources may still move after the upgrade if any
    resource's location is governed by a rule based on transient node
    attributes. Transient node attributes are erased when the node leaves the
    cluster. A common example is using the ``ocf:pacemaker:ping`` resource to
    set a node attribute used to locate other resources.
 
 .. index::
    pair: upgrade; CIB
 
 Upgrading the Configuration
 ###########################
 
 The CIB schema version can change from one Pacemaker version to another.
 
 After cluster software is upgraded, the cluster will continue to use the older
 schema version that it was previously using. This can be useful, for example,
 when administrators have written tools that modify the configuration, and are
 based on the older syntax. [#]_
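 
 To check which schema version a configuration is currently using, one simple
 way (shown as a sketch; the whole CIB element will be printed) is to look at
 the ``validate-with`` attribute:
 
 .. code-block:: none
 
    # cibadmin --query | grep validate-with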
 
 However, when using an older syntax, new features may be unavailable, and there
 is a performance impact, since the cluster must do a non-persistent
 configuration upgrade before each transition. So while using the old syntax is
 possible, it is not advisable to continue using it indefinitely.
 
 Even if you wish to continue using the old syntax, it is a good idea to
 follow the upgrade procedure outlined below, except for the last step, to ensure
 that the new software has no problems with your existing configuration (since it
 will perform much the same task internally).
 
 If you are brave, it is sufficient simply to run ``cibadmin --upgrade``.
 
 A more cautious approach would proceed like this:
 
 #. Create a shadow copy of the configuration. The later commands will
    automatically operate on this copy, rather than the live configuration.
 
    .. code-block:: none
 
       # crm_shadow --create shadow
 
 .. index::
    single: configuration; verify
 
 #. Verify the configuration is valid with the new software (which may be
    stricter about syntax mistakes, or may have dropped support for deprecated
    features):
 
    .. code-block:: none
 
       # crm_verify --live-check
 
 #. Fix any errors or warnings.
 #. Perform the upgrade:
 
    .. code-block:: none
 
       # cibadmin --upgrade
 
 #. If this step fails, there are three main possibilities:
 
    a. The configuration was not valid to start with (did you do steps 2 and
       3?).
    #. The transformation failed; `report a bug <https://bugs.clusterlabs.org/>`_.
    #. The transformation was successful but produced an invalid result.
 
    If the result of the transformation is invalid, you may see a number of
    errors from the validation library. If these are not helpful, visit the
    `Validation FAQ wiki page <https://wiki.clusterlabs.org/wiki/Validation_FAQ>`_
    and/or try the manual upgrade procedure described below.
 
 #. Check the changes:
 
    .. code-block:: none
 
       # crm_shadow --diff
 
    If at this point there is anything about the upgrade that you wish to
    fine-tune (for example, to change some of the automatic IDs), now is the
    time to do so:
 
    .. code-block:: none
 
       # crm_shadow --edit
 
    This will open the configuration in your favorite editor (whichever is
    specified by the standard ``$EDITOR`` environment variable).
 
 #. Preview how the cluster will react:
 
    .. code-block:: none
 
       # crm_simulate --live-check --save-dotfile shadow.dot -S
       # dot -Tsvg shadow.dot -o shadow.svg
 
    You can then view shadow.svg with any compatible image viewer or web
    browser. Verify that either no resource actions will occur or that you are
    happy with any that are scheduled.  If the output contains actions you do
    not expect (possibly due to changes to the score calculations), you may need
    to make further manual changes. See :ref:`crm_simulate` for further details
    on how to interpret the output of ``crm_simulate`` and ``dot``.
 
 #. Upload the changes:
 
    .. code-block:: none
 
       # crm_shadow --commit shadow --force
 
    In the unlikely event this step fails, please report a bug.
 
 .. note::
 
    It is also possible to perform the configuration upgrade steps manually:
 
    #. Locate the ``upgrade*.xsl`` conversion scripts provided with the source
       code. These will often be installed in a location such as
       ``/usr/share/pacemaker``, or may be obtained from the
       `source repository <https://github.com/ClusterLabs/pacemaker/tree/main/xml>`_.
           
    #. Run the conversion scripts that apply to your older version, for example:
 
       .. code-block:: none
 
          # xsltproc /path/to/upgrade06.xsl config06.xml > config10.xml
 
    #. Locate the ``pacemaker.rng`` script (from the same location as the xsl
       files).
    #. Check the XML validity:
 
       .. code-block:: none
 
          # xmllint --relaxng /path/to/pacemaker.rng config10.xml
 
    The advantage of this method is that it can be performed without the cluster
    running, and any validation errors are often more informative.
 
 
 What Changed in 2.1
 ###################
 
 The Pacemaker 2.1 release is fully backward-compatible in both the CIB XML and
 the C API. Highlights:
 
 * Pacemaker now supports the **OCF Resource Agent API version 1.1**.
   Most notably, the ``Master`` and ``Slave`` role names have been renamed to
   ``Promoted`` and ``Unpromoted``.
 
 * Pacemaker now supports colocations where the dependent resource does not
   affect the primary resource's placement (via a new ``influence`` colocation
   constraint option and ``critical`` resource meta-attribute). This is intended
   for cases where a less-important resource must be colocated with an essential
   resource, but it is preferred to leave the less-important resource stopped if
   it fails, rather than move both resources.
 
 * If Pacemaker is built with libqb 2.0 or later, the detail log will use
   **millisecond-resolution timestamps**.
 
 * In addition to crm_mon and stonith_admin, the crmadmin, crm_resource,
   crm_simulate, and crm_verify commands now support the ``--output-as`` and
   ``--output-to`` options, including **XML output** (which scripts and
   higher-level tools are strongly recommended to use instead of trying to parse
   the text output, which may change from release to release).
 
 For a detailed list of changes, see the release notes and the
 `Pacemaker 2.1 Changes <https://wiki.clusterlabs.org/wiki/Pacemaker_2.1_Changes>`_
 page on the ClusterLabs wiki.
 
 
 What Changed in 2.0
 ###################
 
 The main goal of the 2.0 release was to remove support for deprecated syntax,
 along with some small changes in default configuration behavior and tool
 behavior. Highlights:
 
 * Only Corosync version 2 and greater is now supported as the underlying
   cluster layer. Support for Heartbeat and Corosync 1 (including CMAN) is
   removed.
 
 * The Pacemaker detail log file is now stored in
   ``/var/log/pacemaker/pacemaker.log`` by default.
 
 * The record-pending cluster property now defaults to true, which
   allows status tools such as crm_mon to show operations that are in
   progress.
 
 * Support for a number of deprecated build options, environment variables,
   and configuration settings has been removed.
 
 * The ``master`` tag has been deprecated in favor of using the ``clone`` tag
   with the new ``promotable`` meta-attribute set to ``true``. "Master/slave"
   clone resources are now referred to as "promotable" clone resources.
 
 * The public API for Pacemaker libraries that software applications can use
   has changed significantly.
 
 For a detailed list of changes, see the release notes and the
 `Pacemaker 2.0 Changes <https://wiki.clusterlabs.org/wiki/Pacemaker_2.0_Changes>`_
 page on the ClusterLabs wiki.
 
 
 What Changed in 1.0
 ###################
 
 New
 ___
 
 * Failure timeouts.
 * New section for resource and operation defaults.
 * Tool for making offline configuration changes.
 * ``Rules``, ``instance_attributes``, ``meta_attributes`` and sets of
   operations can be defined once and referenced in multiple places.
 * The CIB now accepts XPath-based create/modify/delete operations. See
   ``cibadmin --help``.
 * Multi-dimensional colocation and ordering constraints.
 * The ability to connect to the CIB from non-cluster machines.
 * Allow recurring actions to be triggered at known times.
 
 
 Changed
 _______
 
 * Syntax
 
   * All resource and cluster options now use dashes (-) instead of underscores
     (_)
   * ``master_slave`` was renamed to ``master``
   * The ``attributes`` container tag was removed
   * The operation field ``pre-req`` has been renamed ``requires``
   * All operations must have an ``interval``; ``start``/``stop`` must have it
     set to zero
 
 * The ``stonith-enabled`` option now defaults to true.
 * The cluster will refuse to start resources if ``stonith-enabled`` is true (or
   unset) and no STONITH resources have been defined.
 * The attributes of colocation and ordering constraints were renamed for
   clarity.
 * ``resource-failure-stickiness`` has been replaced by ``migration-threshold``.
 * The parameters for command-line tools have been made consistent.
 * Switched to 'RelaxNG' schema validation and 'libxml2' parser:
 
   * id fields are now XML IDs which have the following limitations:
 
     * id's cannot contain colons (:)
     * id's cannot begin with a number
     * id's must be globally unique (not just unique for that tag)
 
   * Some fields (such as those in constraints that refer to resources) are
     IDREFs.
 
     This means that they must reference existing resources or objects in
     order for the configuration to be valid.  Removing an object which is
     referenced elsewhere will therefore fail.
 
   * The CIB representation, from which an MD5 digest is calculated to verify
     CIBs on the nodes, has changed.
 
     This means that every CIB update will require a full refresh on any
     upgraded nodes until the cluster is fully upgraded to 1.0. This will result
     in significant performance degradation and it is therefore highly
     inadvisable to run a mixed 1.0/0.6 cluster for any longer than absolutely
     necessary.
 
 * Ping node information no longer needs to be added to ``ha.cf``. Simply
   include the lists of hosts in your ping resource(s).
 
 
 Removed
 _______
 
 
 * Syntax
 
   * It is no longer possible to set resource meta options as top-level
     attributes. Use meta-attributes instead.
   * Resource and operation defaults are no longer read from ``crm_config``.
 
 .. rubric:: Footnotes
 
 .. [#] Before CRM feature set 3.1.0 (Pacemaker 2.0.0), the minor-minor version
        number was treated the same as the minor version.
 
 .. [#] Currently, Corosync version 2 and greater is the only supported cluster
        stack, but other stacks have been supported by past versions, and may be
        supported by future versions.
 
 .. [#] Any active resources will be moved off the node being upgraded, so there
        will be at least a brief outage unless all resources can be migrated
        "live".
 
 .. [#] Rolling upgrades from Pacemaker 1.1.z to 2.y.z are possible only if the
        cluster uses corosync version 2 or greater as its messaging layer, and
        the Cluster Information Base (CIB) uses schema 1.0 or higher in its
        ``validate-with`` property.
 
 .. [#] As of Pacemaker 2.0.0, only schema versions pacemaker-1.0 and higher
-       are supported (excluding pacemaker-1.1, which was an experimental schema
-       now known as pacemaker-next).
+       are supported (excluding pacemaker-1.1, which was a special case).
diff --git a/xml/README.md b/xml/README.md
index a1bef4169d..4d74c67d56 100644
--- a/xml/README.md
+++ b/xml/README.md
@@ -1,148 +1,134 @@
 # Schema Reference
 
 Pacemaker's XML schema has a version of its own, independent of the version of
 Pacemaker itself.
 
 ## Versioned Schema Evolution
 
 A versioned schema offers transparent backward and forward compatibility.
 
 - It reflects the timeline of schema-backed features (introduction,
   changes to the syntax, possibly deprecation) through the versioned
   stable schema increments, while keeping schema versions used by default
   by older Pacemaker versions untouched.
 
 - Pacemaker internally uses the latest stable schema version, and relies on
   supplemental transformations to promote cluster configurations based on
   older, incompatible schema versions into the desired form.
 
-- It allows experimental features with a possibly unstable configuration
-  interface to be developed using the special `next` version of the schema.
-
 ## Mapping Pacemaker Versions to Schema Versions
 
 | Pacemaker | Latest Schema | Changed
 | --------- | ------------- | ----------------------------------------------
 | `2.1.3`   | `3.8`         | `acls`
 | `2.1.0`   | `3.7`         | `constraints`, `resources`
 | `2.0.5`   | `3.5`         | `api`, `resources`, `rule`
 | `2.0.4`   | `3.3`         | `tags`
 | `2.0.1`   | `3.2`         | `resources`
 | `2.0.0`   | `3.1`         | `constraints`, `resources`
 | `1.1.18`  | `2.10`        | `resources`, `alerts`
 | `1.1.17`  | `2.9`         | `resources`, `rule`
 | `1.1.16`  | `2.6`         | `constraints`
 | `1.1.15`  | `2.5`         | `alerts`
 | `1.1.14`  | `2.4`         | `fencing`
 | `1.1.13`  | `2.3`         | `constraints`
 | `1.1.12`  | `2.0`         | `nodes`, `nvset`, `resources`, `tags`, `acls`
 | `1.1.8`+  | `1.2`         |
 
 ## Schema generation
 
 Each logical portion of the schema goes into its own RNG file, named like
 `${base}-${X}.${Y}.rng`. `${base}` identifies the portion of the schema
 (e.g. constraints, resources); `${X}.${Y}` is the latest schema version that
 contained changes in this portion of the schema.
 
 The complete, overall schema, `pacemaker-${X}.${Y}.rng`, is automatically
 generated from the other files via the Makefile.
 
 # Updating schema files #
 
-## Experimental features ##
-
-Experimental features go into `${base}-next.rng` where `${base}` is the
-affected portion of the schema. If such a file does not already exist,
-create it by copying the most recent `${base}-${X}.${Y}.rng`.
-
-Pacemaker will not use the experimental schema by default; the cluster
-administrator must explicitly set the `validate-with` property appropriately to
-use it.
-
-## Stable features ##
+## New features ##
 
-The current stable version is determined at runtime when
+The current schema version is determined at runtime when
 crm\_schema\_init() scans the CRM\_SCHEMA\_DIRECTORY.
 
 It will have the form `pacemaker-${X}.${Y}` and the highest
 `${X}.${Y}` wins.
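 
 For example, on an installed system, the available schema versions can be
 listed with something like this (the directory may vary by distribution):
 
     ls /usr/share/pacemaker/pacemaker-*.rng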
 
 ### Simple Additions
 
 When the new syntax is a simple addition to the previous one, create a
 new entry, incrementing `${Y}`.
 
 ### Feature Removal or otherwise Incompatible Changes
 
 When the new syntax is not a simple addition to the previous one,
 create a new entry, incrementing `${X}` and setting `${Y} = 0`.
 
 An XSLT file is also required that converts an old syntax to the new
 one and must be named `upgrade-${Xold}.${Yold}.xsl`.
 
 See `xml/upgrade-1.3.xsl` for an example.
 
 Since `xml/upgrade-2.10.xsl`, a rather self-descriptive approach has been
 taken, separating the metadata of the replacements and other modifications
 to perform from the actual executive parts; this is leveraged, e.g., by the
 on-the-fly overview obtained with `./regression.sh -X test2to3`.
 This was also the first time that particular key names of `nvpair`s,
 i.e. below the granularity of the schemas so far, received attention,
 and consequently, names that are no longer expected became systematically
 banned in the post-upgrade schemas, using the `<except>` construct in the
 data type specification pertaining to the affected XML path.
 
 The implied complexity also resulted in establishing a new compound,
 stepwise transformation, which relieves the core upgrade recipe of some
 of the procedural burden.  In particular, the `id-ref` based syntactic
 simplification allowed in the CIB format introduces non-negligible
 internal "noise", because the extra indirection of such a scheme is
 generally non-bijective (its interpretation is context-dependent).
 To reduce this strain, a symmetric arrangement is introduced as a pair
 of _enter_/_leave_ (pre-upgrade/post-upgrade) transformations, where the
 latter reversibly restores what the former intentionally simplified
 (normalized) for the upgrade transformation's use.  Both are optional
 (the post-upgrade counterpart can also be omitted on its own) and are
 applied only when the suitable files are found alongside the upgrade
 transformation itself: e.g., for `upgrade-2.10.xsl`, such files are
 `upgrade-2.10-enter.xsl` and `upgrade-2.10-leave.xsl`.  Note that
 unfolding and refolding `id-ref` shortcuts is just one practically
 imposed case of how to reversibly make the configuration space tractable
 during the upgrade itself, allowing for more sophistication down the road.
 
 ### General Procedure
 
 1. Copy the most recent version of `${base}-*.rng` to `${base}-${X}.${Y}.rng`,
    such that the new file name increments the highest number of any schema file,
    not just the file being edited.
 2. Commit the copy, e.g. `"Low: xml: clone ${base} schema in preparation for
    changes"`. This way, the actual change will be obvious in the commit history.
 3. Modify `${base}-${X}.${Y}.rng` as required.
 4. If required, add an XSLT file, and update `xslt\_SCRIPTS` in `xml/Makefile.am`.
 5. Commit.
 6. Run `make -C xml clean; make -C xml` to rebuild the schemas in the local
    source directory.
 7. The CIB validity and upgrade regression tests will break after the schema is
    updated. Run `cts/cts-cli -s` to make the expected outputs reflect the
    changes made so far, and run `git diff` to ensure that these changes look
    sane. Finally, commit the changes.
 8. Similarly, when a new major version `${X}` is introduced, it is advisable
    to refresh the scheduler tests at some point. See the instructions in
    `cts/README.md`.
 
 ## Using a New Schema
 
 New features will not be available until the cluster administrator:
 
 1. Updates all the nodes
 2. Runs the equivalent of `cibadmin --upgrade --force`
 
 ## Random Notes
 
 From the source directory, run `make -C xml diff` to see the changes
-in the current schema (compared to the previous ones) and also the
-pending changes in `pacemaker-next`.
+in the current schema (compared to the previous ones).
 Alternatively, if the intention is to grok the overall historical schema
 evolution, use `make -C xml fulldiff`.