diff --git a/doc/sphinx/Pacemaker_Development/components.rst b/doc/sphinx/Pacemaker_Development/components.rst
index e14df26ad6..91862cd48d 100644
--- a/doc/sphinx/Pacemaker_Development/components.rst
+++ b/doc/sphinx/Pacemaker_Development/components.rst
@@ -1,489 +1,489 @@
 Coding Particular Pacemaker Components
 --------------------------------------
 
 The Pacemaker code can be intricate and difficult to follow. This chapter has
 some high-level descriptions of how individual components work.
 
 
 .. index::
    single: controller
    single: pacemaker-controld
 
 Controller
 ##########
 
 ``pacemaker-controld`` is the Pacemaker daemon that utilizes the other daemons
 to orchestrate actions that need to be taken in the cluster. It receives CIB
 change notifications from the CIB manager, passes the new CIB to the scheduler
 to determine whether anything needs to be done, uses the executor and fencer to
 execute any actions required, and sets failure counts (among other things) via
 the attribute manager.
 
 As might be expected, it has the most code of any of the daemons.
 
 .. index::
    single: join
 
 Join sequence
 _____________
 
 Most daemons track their cluster peers using Corosync's membership and CPG
 only. The controller additionally requires peers to `join`, which ensures they
 are ready to be assigned tasks. Joining proceeds through a series of phases
 referred to as the `join sequence` or `join process`.
 
 A node's current join phase is tracked by the ``join`` member of ``crm_node_t``
 (used in the peer cache). It is an ``enum crm_join_phase`` that (ideally)
 progresses from the DC's point of view as follows:
 
 * The node initially starts at ``crm_join_none``
 
 * The DC sends the node a `join offer` (``CRM_OP_JOIN_OFFER``), and the node
   proceeds to ``crm_join_welcomed``. This can happen in three ways:
   
   * The joining node will send a `join announce` (``CRM_OP_JOIN_ANNOUNCE``) at
     its controller startup, and the DC will reply to that with a join offer.
   * When the DC's peer status callback notices that the node has joined the
     messaging layer, it registers ``I_NODE_JOIN`` (which leads to
     ``A_DC_JOIN_OFFER_ONE`` -> ``do_dc_join_offer_one()`` ->
     ``join_make_offer()``).
   * After certain events (notably a new DC being elected), the DC will send all
   nodes join offers (via ``A_DC_JOIN_OFFER_ALL`` -> ``do_dc_join_offer_all()``).
 
   These can overlap. The DC can send a join offer and the node can send a join
   announce at nearly the same time, so the node responds to the original join
   offer while the DC responds to the join announce with a new join offer. The
   situation resolves itself after looping a bit.
 
 * The node responds to join offers with a `join request`
   (``CRM_OP_JOIN_REQUEST``, via ``do_cl_join_offer_respond()`` and
   ``join_query_callback()``). When the DC receives the request, the
   node proceeds to ``crm_join_integrated`` (via ``do_dc_join_filter_offer()``).
 
 * As each node is integrated, the current best CIB is sync'ed to each
   integrated node via ``do_dc_join_finalize()``. As each integrated node's CIB
   sync succeeds, the DC acks the node's join request (``CRM_OP_JOIN_ACKNAK``)
   and the node proceeds to ``crm_join_finalized`` (via
   ``finalize_sync_callback()`` + ``finalize_join_for()``).
 
 * Each node confirms the finalization ack (``CRM_OP_JOIN_CONFIRM`` via
   ``do_cl_join_finalize_respond()``), including its current resource operation
   history (via ``controld_query_executor_state()``). Once the DC receives this
   confirmation, the node proceeds to ``crm_join_confirmed`` via
   ``do_dc_join_ack()``.
 
 Once all nodes are confirmed, the DC calls ``do_dc_join_final()``, which checks
 for quorum and responds appropriately.
 
 When peers are lost, their join phase is reset to ``crm_join_none`` (in
 various places).
 
 ``crm_update_peer_join()`` updates a node's join phase.
 
 The DC increments the global ``current_join_id`` for each joining round, and
 rejects any (older) replies that don't match.
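 
 To illustrate, the phases roughly correspond to an enumeration like this
 (a simplified sketch; the real ``enum crm_join_phase`` is declared in the
 cluster headers and may include additional values for rejected joins):
 
 .. code-block:: c
 
    /* Sketch only -- see the actual declaration of enum crm_join_phase */
    enum crm_join_phase {
        crm_join_none       = 0,  // not yet joined (also reset here when lost)
        crm_join_welcomed,        // DC sent join offer
        crm_join_integrated,      // DC accepted join request
        crm_join_finalized,       // CIB synced and join request acked by DC
        crm_join_confirmed,       // node confirmed the ack with its history
    };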
 
 
 .. index::
    single: fencer
    single: pacemaker-fenced
 
 Fencer
 ######
 
 ``pacemaker-fenced`` is the Pacemaker daemon that handles fencing requests. In
 the broadest terms, fencing works like this:
 
 #. The initiator (an external program such as ``stonith_admin``, or the cluster
    itself via the controller) asks the local fencer, "Hey, could you please
    fence this node?"
 #. The local fencer asks all the fencers in the cluster (including itself),
    "Hey, what fencing devices do you have access to that can fence this node?"
 #. Each fencer in the cluster replies with a list of available devices that
    it knows about.
 #. Once the original fencer gets all the replies, it asks the most
    appropriate fencer peer to actually carry out the fencing. It may send
    out more than one such request if the target node must be fenced with
    multiple devices.
 #. The chosen fencer(s) call the appropriate fencing resource agent(s) to
    do the fencing, then reply to the original fencer with the result.
 #. The original fencer broadcasts the result to all fencers.
 #. Each fencer sends the result to each of its local clients (including, at
    some point, the initiator).
 
 A more detailed description follows.
 
 .. index::
    single: libstonithd
 
 Initiating a fencing request
 ____________________________
 
 A fencing request can be initiated by the cluster or externally, using the
 libstonithd API.
 
 * The cluster always initiates fencing via
   ``daemons/controld/controld_fencing.c:te_fence_node()`` (which calls the
   ``fence()`` API method). This occurs when a transition graph synapse contains
   a ``CRM_OP_FENCE`` XML operation.
 * The main external clients are ``stonith_admin`` and ``cts-fence-helper``.
   The ``DLM`` project also uses Pacemaker for fencing.
 
 Highlights of the fencing API:
 
 * ``stonith_api_new()`` creates and returns a new ``stonith_t`` object, whose
   ``cmds`` member has methods for connect, disconnect, fence, etc.
 * The ``fence()`` method creates and sends a ``STONITH_OP_FENCE`` XML request
   with the desired action and target node. Callers do not have to choose or
   even have any knowledge about particular fencing devices.
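 
 A minimal sketch of driving the API from a client (error handling trimmed;
 see ``include/crm/stonith-ng.h`` for the exact method signatures):
 
 .. code-block:: c
 
    #include <crm/stonith-ng.h>
 
    stonith_t *st = stonith_api_new();
 
    if ((st != NULL)
        && (st->cmds->connect(st, "example-client", NULL) == pcmk_ok)) {
 
        // Ask the fencer to turn off node1, waiting up to 120s for the result
        if (st->cmds->fence(st, st_opt_sync_call, "node1", "off",
                            120, 0) == pcmk_ok) {
            // node1 was successfully fenced
        }
        st->cmds->disconnect(st);
    }
    if (st != NULL) {
        stonith_api_delete(st);
    }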
 
 Fencing queries
 _______________
 
 The function calls for a fencing request go something like this:
 
 The local fencer receives the client's request via an IPC or messaging
 layer callback, which calls
 
 * ``stonith_command()``, which (for requests) calls
 
   * ``handle_request()``, which (for ``STONITH_OP_FENCE`` from a client) calls
 
     * ``initiate_remote_stonith_op()``, which creates a ``STONITH_OP_QUERY`` XML
       request with the target, desired action, timeout, etc. then broadcasts
       the operation to the cluster group (i.e. all fencer instances) and
       starts a timer. The query is broadcast because (1) location constraints
       might prevent the local node from accessing the stonith device directly,
       and (2) even if the local node does have direct access, another node
       might be preferred to carry out the fencing.
 
 Each fencer receives the original fencer's ``STONITH_OP_QUERY`` broadcast
 request via IPC or messaging layer callback, which calls:
 
 * ``stonith_command()``, which (for requests) calls
 
   * ``handle_request()``, which (for ``STONITH_OP_QUERY`` from a peer) calls
 
     * ``stonith_query()``, which calls
 
       * ``get_capable_devices()`` with ``stonith_query_capable_device_cb()`` to add
         device information to an XML reply and send it. (A message is
         considered a reply if it contains ``T_STONITH_REPLY``, which is only
         set by fencer peers, not clients.)
 
 The original fencer receives all peers' ``STONITH_OP_QUERY`` replies via IPC
 or messaging layer callback, which calls:
 
 * ``stonith_command()``, which (for replies) calls
 
   * ``handle_reply()`` which (for ``STONITH_OP_QUERY``) calls
 
     * ``process_remote_stonith_query()``, which allocates a new query result
       structure, parses device information into it, and adds it to the
       operation object. It then increments the number of replies received for
       this operation and compares that against the expected number of replies
       (i.e. the number of active peers); if this is the last expected reply,
       it calls
 
       * ``request_peer_fencing()``, which calculates the timeout and sends
         ``STONITH_OP_FENCE`` request(s) to carry out the fencing. If the target
         node has a fencing "topology" (which allows specifications such as
         "this node can be fenced either with device A, or devices B and C in
         combination"), it will choose the device(s), and send out as many
         requests as needed. If it chooses a device, it will choose the peer; a
         peer is preferred if it has "verified" access to the desired device,
         meaning that it has the device "running" on it and thus has a monitor
         operation ensuring reachability.
 
 Fencing operations
 __________________
 
 Each ``STONITH_OP_FENCE`` request goes something like this:
 
 The chosen peer fencer receives the ``STONITH_OP_FENCE`` request via IPC or
 messaging layer callback, which calls:
 
 * ``stonith_command()``, which (for requests) calls
 
   * ``handle_request()``, which (for ``STONITH_OP_FENCE`` from a peer) calls
 
     * ``stonith_fence()``, which calls
 
       * ``schedule_stonith_command()`` (using supplied device if
         ``F_STONITH_DEVICE`` was set, otherwise the highest-priority capable
         device obtained via ``get_capable_devices()`` with
         ``stonith_fence_get_devices_cb()``), which adds the operation to the
         device's pending operations list and triggers processing.
 
 The chosen peer fencer's mainloop is triggered and calls
 
 * ``stonith_device_dispatch()``, which calls
 
   * ``stonith_device_execute()``, which pops off the next item from the device's
     pending operations list. If acting as the (internally implemented) watchdog
     agent, it panics the node; otherwise, it calls
 
     * ``stonith_action_create()`` and ``stonith_action_execute_async()`` to
       call the fencing agent.
 
 The chosen peer fencer's mainloop is triggered again once the fencing agent
 returns, and calls
 
 * ``stonith_action_async_done()``, which adds the results to an action object
   then calls its
 
   * done callback (``st_child_done()``), which calls ``schedule_stonith_command()``
     for a new device if there are further required actions to execute or if the
     original action failed, then builds and sends an XML reply to the original
     fencer (via ``send_async_reply()``), then checks whether any
     pending actions are the same as the one just executed and merges them if so.
 
 Fencing replies
 _______________
 
 The original fencer receives the ``STONITH_OP_FENCE`` reply via IPC or
 messaging layer callback, which calls:
 
 * ``stonith_command()``, which (for replies) calls
 
   * ``handle_reply()``, which calls
 
     * ``fenced_process_fencing_reply()``, which calls either
       ``request_peer_fencing()`` (to retry a failed operation, or try the next
       device in a topology if appropriate, which issues a new
       ``STONITH_OP_FENCE`` request, proceeding as before) or
       ``finalize_op()`` (if the operation is definitively failed or
       successful).
 
       * ``finalize_op()`` broadcasts the result to all peers.
 
 Finally, all peers receive the broadcast result and call
 
 * ``finalize_op()``, which sends the result to all local clients.
 
 
 .. index::
    single: fence history
 
 Fencing History
 _______________
 
 The fencer keeps a running history of all fencing operations. The bulk of the
 relevant code is in ``fenced_history.c`` and ensures the history is
 synchronized across all nodes even if a node leaves and rejoins the cluster.
 
 In libstonithd, this information is represented by ``stonith_history_t`` and
 is queryable via the ``history()`` method of ``stonith_api_operations_t``.
 ``crm_mon`` and ``stonith_admin`` use this API to display the history.
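 
 A sketch of querying the history through the API, reusing the connected
 ``stonith_t *st`` from the earlier sketch (a NULL node argument requests
 history for all targets):
 
 .. code-block:: c
 
    stonith_history_t *history = NULL;
 
    if (st->cmds->history(st, st_opt_sync_call, NULL /* all targets */,
                          &history, 120) == pcmk_ok) {
        for (stonith_history_t *hp = history; hp != NULL; hp = hp->next) {
            /* examine hp->target, hp->action, hp->state, ... */
        }
        stonith_history_free(history);
    }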
 
 
 .. index::
    single: scheduler
    single: pacemaker-schedulerd
    single: libpe_status
    single: libpe_rules
    single: libpacemaker
 
 Scheduler
 #########
 
 ``pacemaker-schedulerd`` is the Pacemaker daemon that runs the Pacemaker
 scheduler for the controller, but "the scheduler" in general refers to related
 library code in ``libpe_status`` and ``libpe_rules`` (``lib/pengine/*.c``), and
 some of ``libpacemaker`` (``lib/pacemaker/pcmk_sched_*.c``).
 
 The purpose of the scheduler is to take a CIB as input and generate a
 transition graph (list of actions that need to be taken) as output.
 
 The controller invokes the scheduler by contacting the scheduler daemon via
 local IPC. Tools such as ``crm_simulate``, ``crm_mon``, and ``crm_resource``
 can also invoke the scheduler, but do so by calling the library functions
 directly. This allows them to run using a ``CIB_file`` without the cluster
 needing to be active.
 
 The main entry point for the scheduler code is
 ``lib/pacemaker/pcmk_sched_allocate.c:pcmk__schedule_actions()``. It sets
 defaults and calls a series of functions for the scheduling. Some key steps:
 
 * ``unpack_cib()`` parses most of the CIB XML into data structures, and
   determines the current cluster status.
 * ``apply_node_criteria()`` applies factors that make resources prefer certain
   nodes, such as shutdown locks, location constraints, and stickiness.
 * ``pcmk__create_internal_constraints()`` creates internal constraints, such as
   the implicit ordering for group members, or start actions being implicitly
   ordered before promote actions.
 * ``pcmk__handle_rsc_config_changes()`` processes resource history entries in
   the CIB status section. This is used to decide whether certain
   actions need to be done, such as deleting orphan resources, forcing a restart
   when a resource definition changes, etc.
-* ``allocate_resources()`` assigns resources to nodes.
+* ``assign_resources()`` assigns resources to nodes.
 * ``schedule_resource_actions()`` schedules resource-specific actions (which
   might or might not end up in the final graph).
 * ``pcmk__apply_orderings()`` processes ordering constraints in order to modify
   action attributes such as optional or required.
 * ``pcmk__create_graph()`` creates the transition graph.
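 
 A rough sketch of driving this entry point directly, as the library-based
 tools do (``pcmk__schedule_actions()`` is internal API, so its exact
 signature may vary between releases):
 
 .. code-block:: c
 
    pe_working_set_t *data_set = pe_new_working_set();
 
    // cib_xml is a previously parsed CIB document
    pcmk__schedule_actions(cib_xml, pe_flag_no_counts|pe_flag_no_compat,
                           data_set);
 
    /* data_set->graph now holds the transition graph XML */
 
    pe_free_working_set(data_set);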
 
 Challenges
 __________
 
 Working with the scheduler is difficult. Challenges include:
 
 * It is far too much code to keep more than a small portion in your head at one
   time.
 * Small changes can have large (and unexpected) effects. This is why we have a
   large number of regression tests (``cts/cts-scheduler``), which should be run
   after making code changes.
 * It produces an insane number of log messages at debug and trace levels.
   You can put resource ID(s) in the ``PCMK_trace_tags`` environment variable to
   enable trace-level messages only when related to specific resources.
 * Different parts of the main ``pe_working_set_t`` structure are finalized at
   different points in the scheduling process, so you have to keep in mind
   whether information you're using at one point of the code can possibly change
   later. For example, data unpacked from the CIB can safely be used anytime
   after ``unpack_cib()``, but actions may become optional or required anytime
   before ``pcmk__create_graph()``. There's no easy way to deal with this.
 * Many names of struct members, functions, etc., are suboptimal, but are part
   of the public API and cannot be changed until an API backward compatibility
   break.
 
 
 .. index::
    single: pe_working_set_t
 
 Cluster Working Set
 ___________________
 
 The main data object for the scheduler is ``pe_working_set_t``, which contains
 all information needed about nodes, resources, constraints, etc., both as the
 raw CIB XML and parsed into more usable data structures, plus the resulting
 transition graph XML. The variable name is usually ``data_set``.
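 
 A sketch of creating and populating a working set via the public
 ``libpe_status`` API:
 
 .. code-block:: c
 
    #include <crm/pengine/status.h>
 
    pe_working_set_t *data_set = pe_new_working_set();
 
    data_set->input = cib_xml;  // previously parsed CIB XML
    cluster_status(data_set);   // unpack it into nodes, resources, etc.
 
    /* examine data_set->nodes, data_set->resources, ... */
 
    pe_free_working_set(data_set);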
 
 .. index::
    single: pe_resource_t
 
 Resources
 _________
 
 ``pe_resource_t`` is the data object representing cluster resources. A resource
 has a variant: primitive (a.k.a. native), group, clone, or bundle.
 
 The resource object has members for two sets of methods,
 ``resource_object_functions_t`` from the ``libpe_status`` public API, and
 ``resource_alloc_functions_t`` whose implementation is internal to
 ``libpacemaker``. The actual functions vary by variant.
 
 The object functions have basic capabilities such as unpacking the resource
 XML, and determining the current or planned location of the resource.
 
 The allocation functions have more obscure capabilities needed for scheduling,
 such as processing location and ordering constraints. For example,
 ``pcmk__create_internal_constraints()`` simply calls the
 ``internal_constraints()`` method for each top-level resource in the cluster.
 
 .. index::
    single: pe_node_t
 
 Nodes
 _____
 
 Allocation of resources to nodes is done by choosing the node with the highest
 score for a given resource. The scheduler does a bunch of processing to
 generate the scores, then the actual allocation is straightforward.
 
 Node lists are frequently used. For example, ``pe_working_set_t`` has a
 ``nodes`` member which is a list of all nodes in the cluster, and
 ``pe_resource_t`` has a ``running_on`` member which is a list of all nodes on
 which the resource is (or might be) active. These are lists of ``pe_node_t``
 objects.
 
 The ``pe_node_t`` object contains a ``struct pe_node_shared_s *details`` member
 with all node information that is independent of resource allocation (the node
 name, etc.).
 
 The working set's ``nodes`` member contains the originals of this information.
 All other node lists contain copies of ``pe_node_t`` where only the ``details``
 member points to the originals in the working set's ``nodes`` list. In this
 way, the other members of ``pe_node_t`` (such as ``weight``, which is the node
 score) may vary by node list, while the common details are shared.
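 
 For example, a walk over the cluster node list might look like this (sketch):
 
 .. code-block:: c
 
    for (GList *iter = data_set->nodes; iter != NULL; iter = iter->next) {
        pe_node_t *node = (pe_node_t *) iter->data;
 
        /* weight may differ between copies in different lists, but details
         * always points at the shared information for this node
         */
        crm_trace("%s has score %d", node->details->uname, node->weight);
    }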
 
 .. index::
    single: pe_action_t
    single: pe_action_flags
 
 Actions
 _______
 
 ``pe_action_t`` is the data object representing actions that might need to be
 taken. These could be resource actions, cluster-wide actions such as fencing a
 node, or "pseudo-actions", which are abstractions used as convenient points
 for ordering other actions against.
 
 It has a ``flags`` member which is a bitmask of ``enum pe_action_flags``. The
 most important of these are ``pe_action_runnable`` (if not set, the action is
 "blocked" and cannot be added to the transition graph) and
 ``pe_action_optional`` (actions with this set will not be added to the
 transition graph; actions often start out as optional, and may become required
 later).
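 
 For example, scheduler code commonly guards on these flags along these lines
 (sketch):
 
 .. code-block:: c
 
    if (pcmk_is_set(action->flags, pe_action_runnable)
        && !pcmk_is_set(action->flags, pe_action_optional)) {
        /* action is required and runnable, so it belongs in the graph */
    }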
 
 
 .. index::
    single: pcmk__colocation_t
 
 Colocations
 ___________
 
 ``pcmk__colocation_t`` is the data object representing colocations.
 
 Colocation constraints come into play in these parts of the scheduler code:
 
 * When sorting resources for assignment, so resources with highest node score
   are assigned first (see ``cmp_resources()``)
 * When updating node scores for resource assignment or promotion priority
 * When assigning resources, so any resources to be colocated with can be
   assigned first, and so colocations affect where the resource is assigned
 * When choosing roles for promotable clone instances, so colocations involving
   a specific role can affect which instances are promoted
 
 The resource allocation functions have several methods related to colocations:
 
 * ``apply_coloc_score():`` This applies a colocation's score to either the
   dependent's allowed node scores (if called while resources are being
   assigned) or the dependent's priority (if called while choosing promotable
   instance roles). It can behave differently depending on whether it is being
   called as the primary's method or as the dependent's method.
 * ``add_colocated_node_scores():`` This updates a table of nodes for a given
   colocation attribute and score. It goes through colocations involving a given
   resource, and updates the scores of the nodes in the table with the best
   scores of nodes that match up according to the colocation criteria.
 * ``colocated_resources():`` This generates a list of all resources involved
   in mandatory colocations (directly or indirectly via colocation chains) with
   a given resource.
 
 
 .. index::
    single: pe__ordering_t
    single: pe_ordering
 
 Orderings
 _________
 
 Ordering constraints are simple in concept, but they are one of the most
 important, powerful, and difficult to follow aspects of the scheduler code.
 
 ``pe__ordering_t`` is the data object representing an ordering, better thought
 of as a relationship between two actions, since the relation can be more
 complex than just "this one runs after that one".
 
 For an ordering "A then B", the code generally refers to A as "first" or
 "before", and B as "then" or "after".
 
 Much of the power comes from ``enum pe_ordering``, which are flags that
 determine how an ordering behaves. There are many obscure flags with big
 effects. A few examples:
 
 * ``pe_order_none`` means the ordering is disabled and will be ignored. It's 0,
   meaning no flags set, so it must be compared with equality rather than
   ``pcmk_is_set()``.
 * ``pe_order_optional`` means the ordering does not make either action
   required, so it only applies if they both become required for other reasons.
 * ``pe_order_implies_first`` means that if action B becomes required for any
   reason, then action A will become required as well.
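 
 A sketch of how such flags are typically tested (this assumes the ordering's
 flag member is named ``type``, as in the internal headers):
 
 .. code-block:: c
 
    if (order->type == pe_order_none) {
        return;  // 0 means no flags set, so test equality, not pcmk_is_set()
    }
    if (pcmk_is_set(order->type, pe_order_implies_first)) {
        /* if the 'then' action becomes required, make 'first' required too */
    }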
diff --git a/lib/pacemaker/libpacemaker_private.h b/lib/pacemaker/libpacemaker_private.h
index dac081ee83..a6f840e339 100644
--- a/lib/pacemaker/libpacemaker_private.h
+++ b/lib/pacemaker/libpacemaker_private.h
@@ -1,1056 +1,1056 @@
 /*
  * Copyright 2021-2023 the Pacemaker project contributors
  *
  * The version control history for this file may have further details.
  *
  * This source code is licensed under the GNU Lesser General Public License
  * version 2.1 or later (LGPLv2.1+) WITHOUT ANY WARRANTY.
  */
 
 #ifndef PCMK__LIBPACEMAKER_PRIVATE__H
 #  define PCMK__LIBPACEMAKER_PRIVATE__H
 
 /* This header is for the sole use of libpacemaker, so that functions can be
  * declared with G_GNUC_INTERNAL for efficiency.
  */
 
 #include <crm/pengine/pe_types.h> // pe_action_t, pe_node_t, pe_working_set_t
 #include <crm/pengine/internal.h> // pe__location_t
 
 // Flags to modify the behavior of add_colocated_node_scores()
 enum pcmk__coloc_select {
     // With no other flags, apply all "with this" colocations
     pcmk__coloc_select_default      = 0,
 
     // Apply "this with" colocations instead of "with this" colocations
     pcmk__coloc_select_this_with    = (1 << 0),
 
     // Apply only colocations with non-negative scores
     pcmk__coloc_select_nonnegative  = (1 << 1),
 
     // Apply only colocations with at least one matching node
     pcmk__coloc_select_active       = (1 << 2),
 };
 
 // Flags the update_ordered_actions() method can return
 enum pcmk__updated {
     pcmk__updated_none      = 0,        // Nothing changed
     pcmk__updated_first     = (1 << 0), // First action was updated
     pcmk__updated_then      = (1 << 1), // Then action was updated
 };
 
 #define pcmk__set_updated_flags(au_flags, action, flags_to_set) do {        \
         au_flags = pcmk__set_flags_as(__func__, __LINE__,                   \
                                       LOG_TRACE, "Action update",           \
                                       (action)->uuid, au_flags,             \
                                       (flags_to_set), #flags_to_set);       \
     } while (0)
 
 #define pcmk__clear_updated_flags(au_flags, action, flags_to_clear) do {    \
         au_flags = pcmk__clear_flags_as(__func__, __LINE__,                 \
                                         LOG_TRACE, "Action update",         \
                                         (action)->uuid, au_flags,           \
                                         (flags_to_clear), #flags_to_clear); \
     } while (0)
 
-// Resource allocation methods
+// Resource assignment methods
 struct resource_alloc_functions_s {
     /*!
      * \internal
      * \brief Assign a resource to a node
      *
      * \param[in,out] rsc     Resource to assign to a node
      * \param[in]     prefer  Node to prefer, if all else is equal
      *
      * \return Node that \p rsc is assigned to, if assigned entirely to one node
      */
     pe_node_t *(*assign)(pe_resource_t *rsc, const pe_node_t *prefer);
 
     /*!
      * \internal
      * \brief Create all actions needed for a given resource
      *
      * \param[in,out] rsc  Resource to create actions for
      */
     void (*create_actions)(pe_resource_t *rsc);
 
     /*!
      * \internal
      * \brief Schedule any probes needed for a resource on a node
      *
      * \param[in,out] rsc   Resource to create probe for
      * \param[in,out] node  Node to create probe on
      *
      * \return true if any probe was created, otherwise false
      */
     bool (*create_probe)(pe_resource_t *rsc, pe_node_t *node);
 
     /*!
      * \internal
      * \brief Create implicit constraints needed for a resource
      *
      * \param[in,out] rsc  Resource to create implicit constraints for
      */
     void (*internal_constraints)(pe_resource_t *rsc);
 
     /*!
      * \internal
      * \brief Apply a colocation's score to node weights or resource priority
      *
      * Given a colocation constraint, apply its score to the dependent's
      * allowed node weights (if we are still placing resources) or priority (if
      * we are choosing promotable clone instance roles).
      *
      * \param[in,out] dependent      Dependent resource in colocation
      * \param[in,out] primary        Primary resource in colocation
      * \param[in]     colocation     Colocation constraint to apply
      * \param[in]     for_dependent  true if called on behalf of dependent
      */
     void (*apply_coloc_score)(pe_resource_t *dependent, pe_resource_t *primary,
                               const pcmk__colocation_t *colocation,
                               bool for_dependent);
 
     /*!
      * \internal
      * \brief Create list of all resources in colocations with a given resource
      *
      * Given a resource, create a list of all resources involved in mandatory
      * colocations with it, whether directly or indirectly via chained colocations.
      *
      * \param[in]     rsc             Resource to add to colocated list
      * \param[in]     orig_rsc        Resource originally requested
      * \param[in,out] colocated_rscs  Existing list
      *
      * \return List of given resource and all resources involved in colocations
      *
      * \note This function is recursive; top-level callers should pass NULL as
      *       \p colocated_rscs and \p orig_rsc, and the desired resource as
      *       \p rsc. The recursive calls will use other values.
      */
     GList *(*colocated_resources)(const pe_resource_t *rsc,
                                   const pe_resource_t *orig_rsc,
                                   GList *colocated_rscs);
 
     /*!
      * \internal
      * \brief Add colocations affecting a resource as primary to a list
      *
      * Given a resource being assigned (\p orig_rsc) and a resource somewhere in
      * its chain of ancestors (\p rsc, which may be \p orig_rsc), get
      * colocations that affect the ancestor as primary and should affect the
      * resource, and add them to a given list.
      *
      * \param[in]     rsc       Resource whose colocations should be added
      * \param[in]     orig_rsc  Affected resource (\p rsc or a descendant)
      * \param[in,out] list      List of colocations to add to
      *
      * \note All arguments should be non-NULL.
      * \note The pcmk__with_this_colocations() wrapper should usually be used
      *       instead of using this method directly.
      */
     void (*with_this_colocations)(const pe_resource_t *rsc,
                                   const pe_resource_t *orig_rsc, GList **list);
 
     /*!
      * \internal
      * \brief Add colocations affecting a resource as dependent to a list
      *
      * Given a resource being assigned (\p orig_rsc) and a resource somewhere in
      * its chain of ancestors (\p rsc, which may be \p orig_rsc), get
      * colocations that affect the ancestor as dependent and should affect the
      * resource, and add them to a given list.
  *
      * \param[in]     rsc       Resource whose colocations should be added
      * \param[in]     orig_rsc  Affected resource (\p rsc or a descendant)
      * \param[in,out] list      List of colocations to add to
      *
      * \note All arguments should be non-NULL.
      * \note The pcmk__this_with_colocations() wrapper should usually be used
      *       instead of using this method directly.
      */
     void (*this_with_colocations)(const pe_resource_t *rsc,
                                   const pe_resource_t *orig_rsc, GList **list);
 
     /*!
      * \internal
      * \brief Update nodes with scores of colocated resources' nodes
      *
      * Given a table of nodes and a resource, update the nodes' scores with the
      * scores of the best nodes matching the attribute used for each of the
      * resource's relevant colocations.
      *
      * \param[in,out] rsc         Resource to check colocations for
      * \param[in]     log_id      Resource ID for logs (if NULL, use \p rsc ID)
      * \param[in,out] nodes       Nodes to update (set initial contents to NULL
      *                            to copy \p rsc's allowed nodes)
      * \param[in]     colocation  Original colocation constraint (used to get
      *                            configured primary resource's stickiness, and
      *                            to get colocation node attribute; if NULL,
      *                            \p rsc's own matching node scores will not be
      *                            added, and *nodes must be NULL as well)
      * \param[in]     factor      Incorporate scores multiplied by this factor
      * \param[in]     flags       Bitmask of enum pcmk__coloc_select values
      *
      * \note NULL *nodes, NULL colocation, and the pcmk__coloc_select_this_with
      *       flag are used together (and only by cmp_resources()).
      * \note The caller remains responsible for freeing \p *nodes.
      */
     void (*add_colocated_node_scores)(pe_resource_t *rsc, const char *log_id,
                                       GHashTable **nodes,
                                       pcmk__colocation_t *colocation,
                                       float factor, uint32_t flags);
 
     /*!
      * \internal
      * \brief Apply a location constraint to a resource's allowed node scores
      *
      * \param[in,out] rsc       Resource to apply constraint to
      * \param[in,out] location  Location constraint to apply
      */
     void (*apply_location)(pe_resource_t *rsc, pe__location_t *location);
 
     /*!
      * \internal
      * \brief Return action flags for a given resource action
      *
      * \param[in,out] action  Action to get flags for
      * \param[in]     node    If not NULL, limit effects to this node
      *
      * \return Flags appropriate to \p action on \p node
      * \note For primitives, this will be the same as action->flags regardless
      *       of node. For collective resources, the flags can differ due to
      *       multiple instances possibly being involved.
      */
     uint32_t (*action_flags)(pe_action_t *action, const pe_node_t *node);
 
     /*!
      * \internal
      * \brief Update two actions according to an ordering between them
      *
      * Given information about an ordering of two actions, update the actions'
      * flags (and runnable_before members if appropriate) as appropriate for the
      * ordering. Effects may cascade to other orderings involving the actions as
      * well.
      *
      * \param[in,out] first     'First' action in an ordering
      * \param[in,out] then      'Then' action in an ordering
      * \param[in]     node      If not NULL, limit scope of ordering to this
      *                          node (only used when interleaving instances)
      * \param[in]     flags     Action flags for \p first for ordering purposes
      * \param[in]     filter    Action flags to limit scope of certain updates
      *                          (may include pe_action_optional to affect only
      *                          mandatory actions, and pe_action_runnable to
      *                          affect only runnable actions)
      * \param[in]     type      Group of enum pe_ordering flags to apply
      * \param[in,out] data_set  Cluster working set
      *
      * \return Group of enum pcmk__updated flags indicating what was updated
      */
     uint32_t (*update_ordered_actions)(pe_action_t *first, pe_action_t *then,
                                        const pe_node_t *node, uint32_t flags,
                                        uint32_t filter, uint32_t type,
                                        pe_working_set_t *data_set);
 
     /*!
      * \internal
      * \brief Output a summary of scheduled actions for a resource
      *
      * \param[in,out] rsc  Resource to output actions for
      */
     void (*output_actions)(pe_resource_t *rsc);
 
     /*!
      * \internal
      * \brief Add a resource's actions to the transition graph
      *
      * \param[in,out] rsc  Resource whose actions should be added
      */
     void (*add_actions_to_graph)(pe_resource_t *rsc);
 
     /*!
      * \internal
      * \brief Add meta-attributes relevant to transition graph actions to XML
      *
      * If a given resource supports variant-specific meta-attributes that are
      * needed for transition graph actions, add them to a given XML element.
      *
      * \param[in]     rsc  Resource whose meta-attributes should be added
      * \param[in,out] xml  Transition graph action attributes XML to add to
      */
     void (*add_graph_meta)(const pe_resource_t *rsc, xmlNode *xml);
 
     /*!
      * \internal
      * \brief Add a resource's utilization to a table of utilization values
      *
      * This function is used when summing the utilization of a resource and all
      * resources colocated with it, to determine whether a node has sufficient
      * capacity. Given a resource and a table of utilization values, it will add
      * the resource's utilization to the existing values, if the resource has
-     * not yet been allocated to a node.
+     * not yet been assigned to a node.
      *
      * \param[in]     rsc          Resource with utilization to add
-     * \param[in]     orig_rsc     Resource being allocated (for logging only)
+     * \param[in]     orig_rsc     Resource being assigned (for logging only)
      * \param[in]     all_rscs     List of all resources that will be summed
      * \param[in,out] utilization  Table of utilization values to add to
      */
     void (*add_utilization)(const pe_resource_t *rsc,
                             const pe_resource_t *orig_rsc, GList *all_rscs,
                             GHashTable *utilization);
 
     /*!
      * \internal
      * \brief Apply a shutdown lock for a resource, if appropriate
      *
      * \param[in,out] rsc       Resource to check for shutdown lock
      */
     void (*shutdown_lock)(pe_resource_t *rsc);
 };
 
 // Actions (pcmk_sched_actions.c)
 
 G_GNUC_INTERNAL
 void pcmk__update_action_for_orderings(pe_action_t *action,
                                        pe_working_set_t *data_set);
 
 G_GNUC_INTERNAL
 uint32_t pcmk__update_ordered_actions(pe_action_t *first, pe_action_t *then,
                                       const pe_node_t *node, uint32_t flags,
                                       uint32_t filter, uint32_t type,
                                       pe_working_set_t *data_set);
 
 G_GNUC_INTERNAL
 void pcmk__log_action(const char *pre_text, const pe_action_t *action,
                       bool details);
 
 G_GNUC_INTERNAL
 pe_action_t *pcmk__new_cancel_action(pe_resource_t *rsc, const char *name,
                                      guint interval_ms, const pe_node_t *node);
 
 G_GNUC_INTERNAL
 pe_action_t *pcmk__new_shutdown_action(pe_node_t *node);
 
 G_GNUC_INTERNAL
 bool pcmk__action_locks_rsc_to_node(const pe_action_t *action);
 
 G_GNUC_INTERNAL
 void pcmk__deduplicate_action_inputs(pe_action_t *action);
 
 G_GNUC_INTERNAL
 void pcmk__output_actions(pe_working_set_t *data_set);
 
 G_GNUC_INTERNAL
 bool pcmk__check_action_config(pe_resource_t *rsc, pe_node_t *node,
                                const xmlNode *xml_op);
 
 G_GNUC_INTERNAL
 void pcmk__handle_rsc_config_changes(pe_working_set_t *data_set);
 
 
 // Recurring actions (pcmk_sched_recurring.c)
 
 G_GNUC_INTERNAL
 void pcmk__create_recurring_actions(pe_resource_t *rsc);
 
 G_GNUC_INTERNAL
 void pcmk__schedule_cancel(pe_resource_t *rsc, const char *call_id,
                            const char *task, guint interval_ms,
                            const pe_node_t *node, const char *reason);
 
 G_GNUC_INTERNAL
 void pcmk__reschedule_recurring(pe_resource_t *rsc, const char *task,
                                 guint interval_ms, pe_node_t *node);
 
 G_GNUC_INTERNAL
 bool pcmk__action_is_recurring(const pe_action_t *action);
 
 
 // Producing transition graphs (pcmk_graph_producer.c)
 
 G_GNUC_INTERNAL
 bool pcmk__graph_has_loop(const pe_action_t *init_action,
                           const pe_action_t *action,
                           pe_action_wrapper_t *input);
 
 G_GNUC_INTERNAL
 void pcmk__add_rsc_actions_to_graph(pe_resource_t *rsc);
 
 G_GNUC_INTERNAL
 void pcmk__create_graph(pe_working_set_t *data_set);
 
 
 // Fencing (pcmk_sched_fencing.c)
 
 G_GNUC_INTERNAL
 void pcmk__order_vs_fence(pe_action_t *stonith_op, pe_working_set_t *data_set);
 
 G_GNUC_INTERNAL
 void pcmk__order_vs_unfence(const pe_resource_t *rsc, pe_node_t *node,
                             pe_action_t *action, enum pe_ordering order);
 
 G_GNUC_INTERNAL
 void pcmk__fence_guest(pe_node_t *node);
 
 G_GNUC_INTERNAL
 bool pcmk__node_unfenced(const pe_node_t *node);
 
 G_GNUC_INTERNAL
 void pcmk__order_restart_vs_unfence(gpointer data, gpointer user_data);
 
 
 // Injected scheduler inputs (pcmk_sched_injections.c)
 
 void pcmk__inject_scheduler_input(pe_working_set_t *data_set, cib_t *cib,
                                   const pcmk_injections_t *injections);
 
 
 // Constraints of any type (pcmk_sched_constraints.c)
 
 G_GNUC_INTERNAL
 pe_resource_t *pcmk__find_constraint_resource(GList *rsc_list, const char *id);
 
 G_GNUC_INTERNAL
 xmlNode *pcmk__expand_tags_in_sets(xmlNode *xml_obj,
                                    const pe_working_set_t *data_set);
 
 G_GNUC_INTERNAL
 bool pcmk__valid_resource_or_tag(const pe_working_set_t *data_set,
                                  const char *id, pe_resource_t **rsc,
                                  pe_tag_t **tag);
 
 G_GNUC_INTERNAL
 bool pcmk__tag_to_set(xmlNode *xml_obj, xmlNode **rsc_set, const char *attr,
                       bool convert_rsc, const pe_working_set_t *data_set);
 
 G_GNUC_INTERNAL
 void pcmk__create_internal_constraints(pe_working_set_t *data_set);
 
 
 // Location constraints
 
 G_GNUC_INTERNAL
 void pcmk__unpack_location(xmlNode *xml_obj, pe_working_set_t *data_set);
 
 G_GNUC_INTERNAL
 pe__location_t *pcmk__new_location(const char *id, pe_resource_t *rsc,
                                    int node_weight, const char *discover_mode,
                                    pe_node_t *foo_node,
                                    pe_working_set_t *data_set);
 
 G_GNUC_INTERNAL
 void pcmk__apply_locations(pe_working_set_t *data_set);
 
 G_GNUC_INTERNAL
 void pcmk__apply_location(pe_resource_t *rsc, pe__location_t *constraint);
 
 
 // Colocation constraints (pcmk_sched_colocation.c)
 
 enum pcmk__coloc_affects {
     pcmk__coloc_affects_nothing = 0,
     pcmk__coloc_affects_location,
     pcmk__coloc_affects_role,
 };
 
 G_GNUC_INTERNAL
 enum pcmk__coloc_affects pcmk__colocation_affects(const pe_resource_t *dependent,
                                                   const pe_resource_t *primary,
                                                   const pcmk__colocation_t *colocation,
                                                   bool preview);
 
 G_GNUC_INTERNAL
 void pcmk__apply_coloc_to_weights(pe_resource_t *dependent,
                                   const pe_resource_t *primary,
                                   const pcmk__colocation_t *colocation);
 
 G_GNUC_INTERNAL
 void pcmk__apply_coloc_to_priority(pe_resource_t *dependent,
                                    const pe_resource_t *primary,
                                    const pcmk__colocation_t *colocation);
 
 G_GNUC_INTERNAL
 void pcmk__add_colocated_node_scores(pe_resource_t *rsc, const char *log_id,
                                      GHashTable **nodes,
                                      pcmk__colocation_t *colocation,
                                      float factor, uint32_t flags);
 
 G_GNUC_INTERNAL
 void pcmk__add_dependent_scores(gpointer data, gpointer user_data);
 
 G_GNUC_INTERNAL
 void pcmk__unpack_colocation(xmlNode *xml_obj, pe_working_set_t *data_set);
 
 G_GNUC_INTERNAL
 void pcmk__add_this_with(GList **list, const pcmk__colocation_t *colocation);
 
 G_GNUC_INTERNAL
 void pcmk__add_this_with_list(GList **list, GList *addition);
 
 G_GNUC_INTERNAL
 void pcmk__add_with_this(GList **list, const pcmk__colocation_t *colocation);
 
 G_GNUC_INTERNAL
 void pcmk__add_with_this_list(GList **list, GList *addition);
 
 G_GNUC_INTERNAL
 void pcmk__new_colocation(const char *id, const char *node_attr, int score,
                           pe_resource_t *dependent, pe_resource_t *primary,
                           const char *dependent_role, const char *primary_role,
                           bool influence, pe_working_set_t *data_set);
 
 G_GNUC_INTERNAL
 void pcmk__block_colocation_dependents(pe_action_t *action,
                                        pe_working_set_t *data_set);
 
 /*!
  * \internal
  * \brief Check whether colocation's dependent preferences should be considered
  *
  * \param[in] colocation  Colocation constraint
  * \param[in] rsc         Primary instance (normally colocation->primary,
  *                        which is also what a NULL argument is treated as,
  *                        but for clones or bundles with multiple instances
  *                        this can be a particular instance)
  *
  * \return true if colocation influence should be effective, otherwise false
  */
 static inline bool
 pcmk__colocation_has_influence(const pcmk__colocation_t *colocation,
                                const pe_resource_t *rsc)
 {
     if (rsc == NULL) {
         rsc = colocation->primary;
     }
 
     /* A bundle replica colocates its remote connection with its container,
      * using a finite score so that the container can run on Pacemaker Remote
      * nodes.
      *
      * Moving a connection is lightweight and does not interrupt the service,
      * while moving a container is heavyweight and does interrupt the service,
      * so don't move a clean, active container based solely on the preferences
      * of its connection.
      *
      * This also avoids problematic scenarios where two containers want to
      * perpetually swap places.
      */
     if (pcmk_is_set(colocation->dependent->flags, pe_rsc_allow_remote_remotes)
         && !pcmk_is_set(rsc->flags, pe_rsc_failed)
         && pcmk__list_of_1(rsc->running_on)) {
         return false;
     }
 
     /* The dependent in a colocation influences the primary's location
      * if the influence option is true or the primary is not yet active.
      */
     return colocation->influence || (rsc->running_on == NULL);
 }
 
 
 // Ordering constraints (pcmk_sched_ordering.c)
 
 G_GNUC_INTERNAL
 void pcmk__new_ordering(pe_resource_t *first_rsc, char *first_task,
                         pe_action_t *first_action, pe_resource_t *then_rsc,
                         char *then_task, pe_action_t *then_action,
                         uint32_t flags, pe_working_set_t *data_set);
 
 G_GNUC_INTERNAL
 void pcmk__unpack_ordering(xmlNode *xml_obj, pe_working_set_t *data_set);
 
 G_GNUC_INTERNAL
 void pcmk__disable_invalid_orderings(pe_working_set_t *data_set);
 
 G_GNUC_INTERNAL
 void pcmk__order_stops_before_shutdown(pe_node_t *node,
                                        pe_action_t *shutdown_op);
 
 G_GNUC_INTERNAL
 void pcmk__apply_orderings(pe_working_set_t *data_set);
 
 G_GNUC_INTERNAL
 void pcmk__order_after_each(pe_action_t *after, GList *list);
 
 
 /*!
  * \internal
  * \brief Create a new ordering between two resource actions
  *
  * \param[in,out] first_rsc   Resource for 'first' action
  * \param[in,out] first_task  Action key for 'first' action
  * \param[in]     then_rsc    Resource for 'then' action
  * \param[in,out] then_task   Action key for 'then' action
  * \param[in]     flags       Bitmask of enum pe_ordering flags
  */
 #define pcmk__order_resource_actions(first_rsc, first_task,                 \
                                      then_rsc, then_task, flags)            \
     pcmk__new_ordering((first_rsc),                                         \
                        pcmk__op_key((first_rsc)->id, (first_task), 0),      \
                        NULL,                                                \
                        (then_rsc),                                          \
                        pcmk__op_key((then_rsc)->id, (then_task), 0),        \
                        NULL, (flags), (first_rsc)->cluster)
 
 #define pcmk__order_starts(rsc1, rsc2, flags)                \
     pcmk__order_resource_actions((rsc1), CRMD_ACTION_START,  \
                                  (rsc2), CRMD_ACTION_START, (flags))
 
 #define pcmk__order_stops(rsc1, rsc2, flags)                 \
     pcmk__order_resource_actions((rsc1), CRMD_ACTION_STOP,   \
                                  (rsc2), CRMD_ACTION_STOP, (flags))
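 
 /* Example (hypothetical resources rsc1 and rsc2): order rsc1's start before
  * rsc2's start, and make rsc2's start required whenever rsc1's start is:
  *
  *     pcmk__order_starts(rsc1, rsc2, pe_order_implies_then);
  */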
 
 
 // Ticket constraints (pcmk_sched_tickets.c)
 
 G_GNUC_INTERNAL
 void pcmk__unpack_rsc_ticket(xmlNode *xml_obj, pe_working_set_t *data_set);
 
 
 // Promotable clone resources (pcmk_sched_promotable.c)
 
 G_GNUC_INTERNAL
 void pcmk__add_promotion_scores(pe_resource_t *rsc);
 
 G_GNUC_INTERNAL
 void pcmk__require_promotion_tickets(pe_resource_t *rsc);
 
 G_GNUC_INTERNAL
 void pcmk__set_instance_roles(pe_resource_t *rsc);
 
 G_GNUC_INTERNAL
 void pcmk__create_promotable_actions(pe_resource_t *clone);
 
 G_GNUC_INTERNAL
 void pcmk__promotable_restart_ordering(pe_resource_t *rsc);
 
 G_GNUC_INTERNAL
 void pcmk__order_promotable_instances(pe_resource_t *clone);
 
 G_GNUC_INTERNAL
 void pcmk__update_dependent_with_promotable(const pe_resource_t *primary,
                                             pe_resource_t *dependent,
                                             const pcmk__colocation_t *colocation);
 
 G_GNUC_INTERNAL
 void pcmk__update_promotable_dependent_priority(const pe_resource_t *primary,
                                                 pe_resource_t *dependent,
                                                 const pcmk__colocation_t *colocation);
 
 
 // Pacemaker Remote nodes (pcmk_sched_remote.c)
 
 G_GNUC_INTERNAL
 bool pcmk__is_failed_remote_node(const pe_node_t *node);
 
 G_GNUC_INTERNAL
 void pcmk__order_remote_connection_actions(pe_working_set_t *data_set);
 
 G_GNUC_INTERNAL
 bool pcmk__rsc_corresponds_to_guest(const pe_resource_t *rsc,
                                     const pe_node_t *node);
 
 G_GNUC_INTERNAL
 pe_node_t *pcmk__connection_host_for_action(const pe_action_t *action);
 
 G_GNUC_INTERNAL
 void pcmk__substitute_remote_addr(pe_resource_t *rsc, GHashTable *params);
 
 G_GNUC_INTERNAL
 void pcmk__add_bundle_meta_to_xml(xmlNode *args_xml, const pe_action_t *action);
 
 
 // Primitives (pcmk_sched_primitive.c)
 
 G_GNUC_INTERNAL
 pe_node_t *pcmk__primitive_assign(pe_resource_t *rsc, const pe_node_t *prefer);
 
 G_GNUC_INTERNAL
 void pcmk__primitive_create_actions(pe_resource_t *rsc);
 
 G_GNUC_INTERNAL
 void pcmk__primitive_internal_constraints(pe_resource_t *rsc);
 
 G_GNUC_INTERNAL
 uint32_t pcmk__primitive_action_flags(pe_action_t *action,
                                       const pe_node_t *node);
 
 G_GNUC_INTERNAL
 void pcmk__primitive_apply_coloc_score(pe_resource_t *dependent,
                                        pe_resource_t *primary,
                                        const pcmk__colocation_t *colocation,
                                        bool for_dependent);
 
 G_GNUC_INTERNAL
 void pcmk__with_primitive_colocations(const pe_resource_t *rsc,
                                       const pe_resource_t *orig_rsc,
                                       GList **list);
 
 G_GNUC_INTERNAL
 void pcmk__primitive_with_colocations(const pe_resource_t *rsc,
                                       const pe_resource_t *orig_rsc,
                                       GList **list);
 
 G_GNUC_INTERNAL
 void pcmk__schedule_cleanup(pe_resource_t *rsc, const pe_node_t *node,
                             bool optional);
 
 G_GNUC_INTERNAL
 void pcmk__primitive_add_graph_meta(const pe_resource_t *rsc, xmlNode *xml);
 
 G_GNUC_INTERNAL
 void pcmk__primitive_add_utilization(const pe_resource_t *rsc,
                                      const pe_resource_t *orig_rsc,
                                      GList *all_rscs, GHashTable *utilization);
 
 G_GNUC_INTERNAL
 void pcmk__primitive_shutdown_lock(pe_resource_t *rsc);
 
 
 // Groups (pcmk_sched_group.c)
 
 G_GNUC_INTERNAL
 pe_node_t *pcmk__group_assign(pe_resource_t *rsc, const pe_node_t *prefer);
 
 G_GNUC_INTERNAL
 void pcmk__group_create_actions(pe_resource_t *rsc);
 
 G_GNUC_INTERNAL
 void pcmk__group_internal_constraints(pe_resource_t *rsc);
 
 G_GNUC_INTERNAL
 void pcmk__group_apply_coloc_score(pe_resource_t *dependent,
                                    pe_resource_t *primary,
                                    const pcmk__colocation_t *colocation,
                                    bool for_dependent);
 
 G_GNUC_INTERNAL
 void pcmk__with_group_colocations(const pe_resource_t *rsc,
                                   const pe_resource_t *orig_rsc, GList **list);
 
 G_GNUC_INTERNAL
 void pcmk__group_with_colocations(const pe_resource_t *rsc,
                                   const pe_resource_t *orig_rsc, GList **list);
 
 G_GNUC_INTERNAL
 void pcmk__group_add_colocated_node_scores(pe_resource_t *rsc,
                                            const char *log_id,
                                            GHashTable **nodes,
                                            pcmk__colocation_t *colocation,
                                            float factor, uint32_t flags);
 
 G_GNUC_INTERNAL
 void pcmk__group_apply_location(pe_resource_t *rsc, pe__location_t *location);
 
 G_GNUC_INTERNAL
 uint32_t pcmk__group_action_flags(pe_action_t *action, const pe_node_t *node);
 
 G_GNUC_INTERNAL
 uint32_t pcmk__group_update_ordered_actions(pe_action_t *first,
                                             pe_action_t *then,
                                             const pe_node_t *node,
                                             uint32_t flags, uint32_t filter,
                                             uint32_t type,
                                             pe_working_set_t *data_set);
 
 G_GNUC_INTERNAL
 GList *pcmk__group_colocated_resources(const pe_resource_t *rsc,
                                        const pe_resource_t *orig_rsc,
                                        GList *colocated_rscs);
 
 G_GNUC_INTERNAL
 void pcmk__group_add_utilization(const pe_resource_t *rsc,
                                  const pe_resource_t *orig_rsc, GList *all_rscs,
                                  GHashTable *utilization);
 
 G_GNUC_INTERNAL
 void pcmk__group_shutdown_lock(pe_resource_t *rsc);
 
 
 // Clones (pcmk_sched_clone.c)
 
 G_GNUC_INTERNAL
 pe_node_t *pcmk__clone_assign(pe_resource_t *rsc, const pe_node_t *prefer);
 
 G_GNUC_INTERNAL
 void pcmk__clone_create_actions(pe_resource_t *rsc);
 
 G_GNUC_INTERNAL
 bool pcmk__clone_create_probe(pe_resource_t *rsc, pe_node_t *node);
 
 G_GNUC_INTERNAL
 void pcmk__clone_internal_constraints(pe_resource_t *rsc);
 
 G_GNUC_INTERNAL
 void pcmk__clone_apply_coloc_score(pe_resource_t *dependent,
                                    pe_resource_t *primary,
                                    const pcmk__colocation_t *colocation,
                                    bool for_dependent);
 
 G_GNUC_INTERNAL
 void pcmk__with_clone_colocations(const pe_resource_t *rsc,
                                   const pe_resource_t *orig_rsc, GList **list);
 
 G_GNUC_INTERNAL
 void pcmk__clone_with_colocations(const pe_resource_t *rsc,
                                   const pe_resource_t *orig_rsc, GList **list);
 
 G_GNUC_INTERNAL
 void pcmk__clone_apply_location(pe_resource_t *rsc, pe__location_t *constraint);
 
 G_GNUC_INTERNAL
 uint32_t pcmk__clone_action_flags(pe_action_t *action, const pe_node_t *node);
 
 G_GNUC_INTERNAL
 void pcmk__clone_add_actions_to_graph(pe_resource_t *rsc);
 
 G_GNUC_INTERNAL
 void pcmk__clone_add_graph_meta(const pe_resource_t *rsc, xmlNode *xml);
 
 G_GNUC_INTERNAL
 void pcmk__clone_add_utilization(const pe_resource_t *rsc,
                                  const pe_resource_t *orig_rsc,
                                  GList *all_rscs, GHashTable *utilization);
 
 G_GNUC_INTERNAL
 void pcmk__clone_shutdown_lock(pe_resource_t *rsc);
 
 // Bundles (pcmk_sched_bundle.c)
 
 G_GNUC_INTERNAL
 pe_node_t *pcmk__bundle_assign(pe_resource_t *rsc, const pe_node_t *prefer);
 
 G_GNUC_INTERNAL
 void pcmk__bundle_create_actions(pe_resource_t *rsc);
 
 G_GNUC_INTERNAL
 bool pcmk__bundle_create_probe(pe_resource_t *rsc, pe_node_t *node);
 
 G_GNUC_INTERNAL
 void pcmk__bundle_internal_constraints(pe_resource_t *rsc);
 
 G_GNUC_INTERNAL
 void pcmk__bundle_apply_coloc_score(pe_resource_t *dependent,
                                     pe_resource_t *primary,
                                     const pcmk__colocation_t *colocation,
                                     bool for_dependent);
 
 G_GNUC_INTERNAL
 void pcmk__with_bundle_colocations(const pe_resource_t *rsc,
                                    const pe_resource_t *orig_rsc, GList **list);
 
 G_GNUC_INTERNAL
 void pcmk__bundle_with_colocations(const pe_resource_t *rsc,
                                    const pe_resource_t *orig_rsc, GList **list);
 
 G_GNUC_INTERNAL
 void pcmk__bundle_apply_location(pe_resource_t *rsc,
                                  pe__location_t *constraint);
 
 G_GNUC_INTERNAL
 uint32_t pcmk__bundle_action_flags(pe_action_t *action, const pe_node_t *node);
 
 G_GNUC_INTERNAL
 void pcmk__output_bundle_actions(pe_resource_t *rsc);
 
 G_GNUC_INTERNAL
 void pcmk__bundle_add_actions_to_graph(pe_resource_t *rsc);
 
 G_GNUC_INTERNAL
 void pcmk__bundle_add_utilization(const pe_resource_t *rsc,
                                   const pe_resource_t *orig_rsc,
                                   GList *all_rscs, GHashTable *utilization);
 
 G_GNUC_INTERNAL
 void pcmk__bundle_shutdown_lock(pe_resource_t *rsc);
 
 
 // Clone instances or bundle replica containers (pcmk_sched_instances.c)
 
 G_GNUC_INTERNAL
 void pcmk__assign_instances(pe_resource_t *collective, GList *instances,
                             int max_total, int max_per_node);
 
 G_GNUC_INTERNAL
 void pcmk__create_instance_actions(pe_resource_t *rsc, GList *instances);
 
 G_GNUC_INTERNAL
 bool pcmk__instance_matches(const pe_resource_t *instance,
                             const pe_node_t *node, enum rsc_role_e role,
                             bool current);
 
 G_GNUC_INTERNAL
 pe_resource_t *pcmk__find_compatible_instance(const pe_resource_t *match_rsc,
                                               const pe_resource_t *rsc,
                                               enum rsc_role_e role,
                                               bool current);
 
 G_GNUC_INTERNAL
 uint32_t pcmk__instance_update_ordered_actions(pe_action_t *first,
                                                pe_action_t *then,
                                                const pe_node_t *node,
                                                uint32_t flags, uint32_t filter,
                                                uint32_t type,
                                                pe_working_set_t *data_set);
 
 G_GNUC_INTERNAL
 uint32_t pcmk__collective_action_flags(pe_action_t *action,
                                        const GList *instances,
                                        const pe_node_t *node);
 
 G_GNUC_INTERNAL
 void pcmk__add_collective_constraints(GList **list,
                                       const pe_resource_t *instance,
                                       const pe_resource_t *collective,
                                       bool with_this);
 
 
 // Injections (pcmk_injections.c)
 
 G_GNUC_INTERNAL
 xmlNode *pcmk__inject_node(cib_t *cib_conn, const char *node, const char *uuid);
 
 G_GNUC_INTERNAL
 xmlNode *pcmk__inject_node_state_change(cib_t *cib_conn, const char *node,
                                         bool up);
 
 G_GNUC_INTERNAL
 xmlNode *pcmk__inject_resource_history(pcmk__output_t *out, xmlNode *cib_node,
                                        const char *resource,
                                        const char *lrm_name,
                                        const char *rclass,
                                        const char *rtype,
                                        const char *rprovider);
 
 G_GNUC_INTERNAL
 void pcmk__inject_failcount(pcmk__output_t *out, xmlNode *cib_node,
                             const char *resource, const char *task,
                             guint interval_ms, int rc);
 
 G_GNUC_INTERNAL
 xmlNode *pcmk__inject_action_result(xmlNode *cib_resource,
                                     lrmd_event_data_t *op, int target_rc);
 
 
 // Nodes (pcmk_sched_nodes.c)
 
 G_GNUC_INTERNAL
 bool pcmk__node_available(const pe_node_t *node, bool consider_score,
                           bool consider_guest);
 
 G_GNUC_INTERNAL
 bool pcmk__any_node_available(GHashTable *nodes);
 
 G_GNUC_INTERNAL
 GHashTable *pcmk__copy_node_table(GHashTable *nodes);
 
 G_GNUC_INTERNAL
 GList *pcmk__sort_nodes(GList *nodes, pe_node_t *active_node);
 
 G_GNUC_INTERNAL
 void pcmk__apply_node_health(pe_working_set_t *data_set);
 
 G_GNUC_INTERNAL
 pe_node_t *pcmk__top_allowed_node(const pe_resource_t *rsc,
                                   const pe_node_t *node);
 
 
 // Functions applying to more than one variant (pcmk_sched_resource.c)
 
 G_GNUC_INTERNAL
-void pcmk__set_allocation_methods(pe_working_set_t *data_set);
+void pcmk__set_assignment_methods(pe_working_set_t *data_set);
 
 G_GNUC_INTERNAL
 bool pcmk__rsc_agent_changed(pe_resource_t *rsc, pe_node_t *node,
                              const xmlNode *rsc_entry, bool active_on_node);
 
 G_GNUC_INTERNAL
 GList *pcmk__rscs_matching_id(const char *id, const pe_working_set_t *data_set);
 
 G_GNUC_INTERNAL
 GList *pcmk__colocated_resources(const pe_resource_t *rsc,
                                  const pe_resource_t *orig_rsc,
                                  GList *colocated_rscs);
 
 G_GNUC_INTERNAL
 void pcmk__noop_add_graph_meta(const pe_resource_t *rsc, xmlNode *xml);
 
 G_GNUC_INTERNAL
 void pcmk__output_resource_actions(pe_resource_t *rsc);
 
 G_GNUC_INTERNAL
 bool pcmk__finalize_assignment(pe_resource_t *rsc, pe_node_t *chosen,
                                bool force);
 
 G_GNUC_INTERNAL
 bool pcmk__assign_resource(pe_resource_t *rsc, pe_node_t *node, bool force);
 
 G_GNUC_INTERNAL
 void pcmk__unassign_resource(pe_resource_t *rsc);
 
 G_GNUC_INTERNAL
 bool pcmk__threshold_reached(pe_resource_t *rsc, const pe_node_t *node,
                              pe_resource_t **failed);
 
 G_GNUC_INTERNAL
 void pcmk__sort_resources(pe_working_set_t *data_set);
 
 G_GNUC_INTERNAL
 gint pcmk__cmp_instance(gconstpointer a, gconstpointer b);
 
 G_GNUC_INTERNAL
 gint pcmk__cmp_instance_number(gconstpointer a, gconstpointer b);
 
 
 // Functions related to probes (pcmk_sched_probes.c)
 
 G_GNUC_INTERNAL
 bool pcmk__probe_rsc_on_node(pe_resource_t *rsc, pe_node_t *node);
 
 G_GNUC_INTERNAL
 void pcmk__order_probes(pe_working_set_t *data_set);
 
 G_GNUC_INTERNAL
 bool pcmk__probe_resource_list(GList *rscs, pe_node_t *node);
 
 G_GNUC_INTERNAL
 void pcmk__schedule_probes(pe_working_set_t *data_set);
 
 
 // Functions related to live migration (pcmk_sched_migration.c)
 
 G_GNUC_INTERNAL
 void pcmk__create_migration_actions(pe_resource_t *rsc,
                                     const pe_node_t *current);
 
 G_GNUC_INTERNAL
 void pcmk__abort_dangling_migration(void *data, void *user_data);
 
 G_GNUC_INTERNAL
 bool pcmk__rsc_can_migrate(const pe_resource_t *rsc, const pe_node_t *current);
 
 G_GNUC_INTERNAL
 void pcmk__order_migration_equivalents(pe__ordering_t *order);
 
 
 // Functions related to node utilization (pcmk_sched_utilization.c)
 
 G_GNUC_INTERNAL
 int pcmk__compare_node_capacities(const pe_node_t *node1,
                                   const pe_node_t *node2);
 
 G_GNUC_INTERNAL
 void pcmk__consume_node_capacity(GHashTable *current_utilization,
                                  const pe_resource_t *rsc);
 
 G_GNUC_INTERNAL
 void pcmk__release_node_capacity(GHashTable *current_utilization,
                                  const pe_resource_t *rsc);
 
 G_GNUC_INTERNAL
 const pe_node_t *pcmk__ban_insufficient_capacity(pe_resource_t *rsc);
 
 G_GNUC_INTERNAL
 void pcmk__create_utilization_constraints(pe_resource_t *rsc,
                                           const GList *allowed_nodes);
 
 G_GNUC_INTERNAL
 void pcmk__show_node_capacities(const char *desc, pe_working_set_t *data_set);
 
 #endif // PCMK__LIBPACEMAKER_PRIVATE__H
diff --git a/lib/pacemaker/pcmk_graph_producer.c b/lib/pacemaker/pcmk_graph_producer.c
index 268829ec77..7386fe24e6 100644
--- a/lib/pacemaker/pcmk_graph_producer.c
+++ b/lib/pacemaker/pcmk_graph_producer.c
@@ -1,1078 +1,1078 @@
 /*
  * Copyright 2004-2023 the Pacemaker project contributors
  *
  * The version control history for this file may have further details.
  *
  * This source code is licensed under the GNU General Public License version 2
  * or later (GPLv2+) WITHOUT ANY WARRANTY.
  */
 
 #include <crm_internal.h>
 
 #include <sys/param.h>
 #include <crm/crm.h>
 #include <crm/cib.h>
 #include <crm/msg_xml.h>
 #include <crm/common/xml.h>
 
 #include <glib.h>
 
 #include <pacemaker-internal.h>
 
 #include "libpacemaker_private.h"
 
 // Convenience macros for logging action properties
 
 #define action_type_str(flags) \
     (pcmk_is_set((flags), pe_action_pseudo)? "pseudo-action" : "action")
 
 #define action_optional_str(flags) \
     (pcmk_is_set((flags), pe_action_optional)? "optional" : "required")
 
 #define action_runnable_str(flags) \
     (pcmk_is_set((flags), pe_action_runnable)? "runnable" : "unrunnable")
 
 #define action_node_str(a) \
     (((a)->node == NULL)? "no node" : (a)->node->details->uname)
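 
 /* A sketch of how these macros are meant to combine in trace messages
  * (for any pe_action_t *action):
  *
  *     crm_trace("Processing %s %s %s on %s",
  *               action_optional_str(action->flags),
  *               action_runnable_str(action->flags),
  *               action_type_str(action->flags),
  *               action_node_str(action));
  *
  * which might log, for example, "Processing required runnable action on node1"
  */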
 
 /*!
  * \internal
  * \brief Add an XML node tag for a specified ID
  *
  * \param[in]     id      Node UUID to add
  * \param[in,out] xml     Parent XML tag to add to
  *
  * \return Newly created XML tag for \p id
  */
 static xmlNode*
 add_node_to_xml_by_id(const char *id, xmlNode *xml)
 {
     xmlNode *node_xml;
 
     node_xml = create_xml_node(xml, XML_CIB_TAG_NODE);
     crm_xml_add(node_xml, XML_ATTR_ID, id);
 
     return node_xml;
 }
 
 /*!
  * \internal
  * \brief Add an XML node tag for a specified node
  *
  * \param[in]     node  Node to add
  * \param[in,out] xml   XML to add node to
  */
 static void
 add_node_to_xml(const pe_node_t *node, void *xml)
 {
     add_node_to_xml_by_id(node->details->id, (xmlNode *) xml);
 }
 
 /*!
  * \internal
  * \brief Add XML with nodes that need an update of their maintenance state
  *
  * \param[in,out] xml       Parent XML tag to add to (if NULL, count only)
  * \param[in]     data_set  Working set for cluster
  *
  * \return Number of nodes whose maintenance state needs an update
  */
 static int
 add_maintenance_nodes(xmlNode *xml, const pe_working_set_t *data_set)
 {
     GList *gIter = NULL;
     xmlNode *maintenance =
         xml? create_xml_node(xml, XML_GRAPH_TAG_MAINTENANCE) : NULL;
     int count = 0;
 
     for (gIter = data_set->nodes; gIter != NULL;
          gIter = gIter->next) {
         pe_node_t *node = (pe_node_t *) gIter->data;
         struct pe_node_shared_s *details = node->details;
 
         if (!pe__is_guest_or_remote_node(node)) {
             continue; /* only guest and remote nodes need to know for now */
         }
 
         if (details->maintenance != details->remote_maintenance) {
             if (maintenance) {
                 crm_xml_add(
                     add_node_to_xml_by_id(node->details->id, maintenance),
                     XML_NODE_IS_MAINTENANCE, details->maintenance?"1":"0");
             }
             count++;
         }
     }
     crm_trace("%s %d nodes to adjust maintenance-mode "
               "to transition", maintenance?"Added":"Counted", count);
     return count;
 }
 
 /*!
  * \internal
  * \brief Add pseudo action with nodes needing maintenance state update
  *
  * \param[in,out] data_set  Working set for cluster
  */
 static void
 add_maintenance_update(pe_working_set_t *data_set)
 {
     pe_action_t *action = NULL;
 
     if (add_maintenance_nodes(NULL, data_set)) {
         crm_trace("adding maintenance state update pseudo action");
         action = get_pseudo_op(CRM_OP_MAINTENANCE_NODES, data_set);
         pe__set_action_flags(action, pe_action_print_always);
     }
 }
 
 /*!
  * \internal
  * \brief Add XML with nodes that an action is expected to bring down
  *
  * If a specified action is expected to bring any nodes down, add an XML block
  * with their UUIDs. When a node is lost, this allows the controller to
  * determine whether it was expected.
  *
  * \param[in,out] xml       Parent XML tag to add to
  * \param[in]     action    Action to check for downed nodes
  * \param[in]     data_set  Working set for cluster
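  *
  * \note The added block looks roughly like
  *       <downed> <node id="NODE-UUID"/> </downed>
  *       (element and attribute names assumed from XML_GRAPH_TAG_DOWNED,
  *       XML_CIB_TAG_NODE, and XML_ATTR_ID).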
  */
 static void
 add_downed_nodes(xmlNode *xml, const pe_action_t *action,
                  const pe_working_set_t *data_set)
 {
     CRM_CHECK(xml && action && action->node && data_set, return);
 
     if (pcmk__str_eq(action->task, CRM_OP_SHUTDOWN, pcmk__str_casei)) {
 
         /* Shutdown makes the action's node down */
         xmlNode *downed = create_xml_node(xml, XML_GRAPH_TAG_DOWNED);
         add_node_to_xml_by_id(action->node->details->id, downed);
 
     } else if (pcmk__str_eq(action->task, CRM_OP_FENCE, pcmk__str_casei)) {
 
         /* Fencing makes the action's node and any hosted guest nodes down */
         const char *fence = g_hash_table_lookup(action->meta, "stonith_action");
 
         if (pcmk__is_fencing_action(fence)) {
             xmlNode *downed = create_xml_node(xml, XML_GRAPH_TAG_DOWNED);
             add_node_to_xml_by_id(action->node->details->id, downed);
             pe_foreach_guest_node(data_set, action->node, add_node_to_xml, downed);
         }
 
     } else if (action->rsc && action->rsc->is_remote_node
                && pcmk__str_eq(action->task, CRMD_ACTION_STOP, pcmk__str_casei)) {
 
         /* Stopping a remote connection resource makes the connected node
          * down, unless the stop is part of a migration
          */
         GList *iter;
         pe_action_t *input;
         bool migrating = false;
 
         for (iter = action->actions_before; iter != NULL; iter = iter->next) {
             input = ((pe_action_wrapper_t *) iter->data)->action;
             if (input->rsc && pcmk__str_eq(action->rsc->id, input->rsc->id, pcmk__str_casei)
                 && pcmk__str_eq(input->task, CRMD_ACTION_MIGRATED, pcmk__str_casei)) {
                 migrating = true;
                 break;
             }
         }
         if (!migrating) {
             xmlNode *downed = create_xml_node(xml, XML_GRAPH_TAG_DOWNED);
             add_node_to_xml_by_id(action->rsc->id, downed);
         }
     }
 }
 
 /*!
  * \internal
  * \brief Create a transition graph operation key for a clone action
  *
  * \param[in] action       Clone action
  * \param[in] interval_ms  Action interval in milliseconds
  *
  * \return Newly allocated string with transition graph operation key
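  *
  * \note For illustration (key formats assumed from the pcmk__op_key() and
  *       pcmk__notify_key() conventions), a non-recurring start for a clone
  *       instance would get a key like "myclone_start_0", while a
  *       post-notification for it would get something like
  *       "myclone_confirmed-post_notify_start_0".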
  */
 static char *
 clone_op_key(const pe_action_t *action, guint interval_ms)
 {
     if (pcmk__str_eq(action->task, RSC_NOTIFY, pcmk__str_none)) {
         const char *n_type = g_hash_table_lookup(action->meta, "notify_type");
         const char *n_task = g_hash_table_lookup(action->meta,
                                                  "notify_operation");
 
         CRM_LOG_ASSERT((n_type != NULL) && (n_task != NULL));
         return pcmk__notify_key(action->rsc->clone_name, n_type, n_task);
 
     } else if (action->cancel_task != NULL) {
         return pcmk__op_key(action->rsc->clone_name, action->cancel_task,
                             interval_ms);
     } else {
         return pcmk__op_key(action->rsc->clone_name, action->task, interval_ms);
     }
 }
 
 /*!
  * \internal
  * \brief Add node details to transition graph action XML
  *
  * \param[in]     action  Scheduled action
  * \param[in,out] xml     Transition graph action XML for \p action
  */
 static void
 add_node_details(const pe_action_t *action, xmlNode *xml)
 {
     pe_node_t *router_node = pcmk__connection_host_for_action(action);
 
     crm_xml_add(xml, XML_LRM_ATTR_TARGET, action->node->details->uname);
     crm_xml_add(xml, XML_LRM_ATTR_TARGET_UUID, action->node->details->id);
     if (router_node != NULL) {
         crm_xml_add(xml, XML_LRM_ATTR_ROUTER_NODE, router_node->details->uname);
     }
 }
 
 /*!
  * \internal
  * \brief Add resource details to transition graph action XML
  *
  * \param[in]     action      Scheduled action
  * \param[in,out] action_xml  Transition graph action XML for \p action
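  *
  * \note The added element is named after the resource's own XML element
  *       (for example, "primitive") and, roughly sketched, looks like
  *       <primitive id="myrsc" long-id="myrsc:0" class="ocf"
  *       provider="heartbeat" type="Dummy"/> (values illustrative; which of
  *       id/long-id appear depends on the clone logic below).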
  */
 static void
 add_resource_details(const pe_action_t *action, xmlNode *action_xml)
 {
     xmlNode *rsc_xml = NULL;
     const char *attr_list[] = {
         XML_AGENT_ATTR_CLASS,
         XML_AGENT_ATTR_PROVIDER,
         XML_ATTR_TYPE
     };
 
     /* If a resource is locked to a node via shutdown-lock, mark its actions
      * so the controller can preserve the lock when the action completes.
      */
     if (pcmk__action_locks_rsc_to_node(action)) {
         crm_xml_add_ll(action_xml, XML_CONFIG_ATTR_SHUTDOWN_LOCK,
                        (long long) action->rsc->lock_time);
     }
 
     // List affected resource
 
     rsc_xml = create_xml_node(action_xml, crm_element_name(action->rsc->xml));
     if (pcmk_is_set(action->rsc->flags, pe_rsc_orphan)
         && (action->rsc->clone_name != NULL)) {
         /* Use the numbered instance name here, because if there is more
          * than one instance on a node, we need to make sure the command
          * goes to the right one.
          *
          * This is important even for anonymous clones, because the clone's
          * unique meta-attribute might have just been toggled from on to
          * off.
          */
         crm_debug("Using orphan clone name %s instead of %s",
                   action->rsc->id, action->rsc->clone_name);
         crm_xml_add(rsc_xml, XML_ATTR_ID, action->rsc->clone_name);
         crm_xml_add(rsc_xml, XML_ATTR_ID_LONG, action->rsc->id);
 
     } else if (!pcmk_is_set(action->rsc->flags, pe_rsc_unique)) {
         const char *xml_id = ID(action->rsc->xml);
 
         crm_debug("Using anonymous clone name %s for %s (aka %s)",
                   xml_id, action->rsc->id, action->rsc->clone_name);
 
         /* ID is what we'd like the client to use
          * ID_LONG is what they might know it as instead
          *
          * ID_LONG is strictly needed /here/ only during the
          * transition period until all nodes in the cluster
          * are running the new software /and/ have rebooted
          * once (meaning that they've only ever spoken to a DC
          * supporting this feature).
          *
          * If anyone toggles the unique flag to 'on', the
          * 'instance-free' name will correspond to an orphan
          * and fall into the clause above instead
          */
         crm_xml_add(rsc_xml, XML_ATTR_ID, xml_id);
         if ((action->rsc->clone_name != NULL)
             && !pcmk__str_eq(xml_id, action->rsc->clone_name,
                              pcmk__str_none)) {
             crm_xml_add(rsc_xml, XML_ATTR_ID_LONG, action->rsc->clone_name);
         } else {
             crm_xml_add(rsc_xml, XML_ATTR_ID_LONG, action->rsc->id);
         }
 
     } else {
         CRM_ASSERT(action->rsc->clone_name == NULL);
         crm_xml_add(rsc_xml, XML_ATTR_ID, action->rsc->id);
     }
 
     for (int lpc = 0; lpc < PCMK__NELEM(attr_list); lpc++) {
         crm_xml_add(rsc_xml, attr_list[lpc],
                     g_hash_table_lookup(action->rsc->meta, attr_list[lpc]));
     }
 }
 
 /*!
  * \internal
  * \brief Add action attributes to transition graph action XML
  *
  * \param[in,out] action      Scheduled action
  * \param[in,out] action_xml  Transition graph action XML for \p action
  */
 static void
 add_action_attributes(pe_action_t *action, xmlNode *action_xml)
 {
     xmlNode *args_xml = NULL;
 
     /* We create free-standing XML to start, so we can sort the attributes
      * before adding it to action_xml, which keeps the scheduler regression
      * test graphs comparable.
      */
     args_xml = create_xml_node(NULL, XML_TAG_ATTRS);
 
     crm_xml_add(args_xml, XML_ATTR_CRM_VERSION, CRM_FEATURE_SET);
     g_hash_table_foreach(action->extra, hash2field, args_xml);
 
     if ((action->rsc != NULL) && (action->node != NULL)) {
         // Get the resource instance attributes, evaluated properly for node
         GHashTable *params = pe_rsc_params(action->rsc, action->node,
                                            action->rsc->cluster);
 
         pcmk__substitute_remote_addr(action->rsc, params);
 
         g_hash_table_foreach(params, hash2smartfield, args_xml);
 
     } else if ((action->rsc != NULL) && (action->rsc->variant <= pe_native)) {
         GHashTable *params = pe_rsc_params(action->rsc, NULL,
                                            action->rsc->cluster);
 
         g_hash_table_foreach(params, hash2smartfield, args_xml);
     }
 
     g_hash_table_foreach(action->meta, hash2metafield, args_xml);
     if (action->rsc != NULL) {
         pe_resource_t *parent = action->rsc;
 
         while (parent != NULL) {
             parent->cmds->add_graph_meta(parent, args_xml);
             parent = parent->parent;
         }
 
         pcmk__add_bundle_meta_to_xml(args_xml, action);
 
     } else if (pcmk__str_eq(action->task, CRM_OP_FENCE, pcmk__str_none)
                && (action->node != NULL)) {
         /* Pass the node's attributes as meta-attributes.
          *
          * @TODO: Determine whether it is still necessary to do this. It was
          * added in 33d99707, probably for the libfence-based implementation in
          * c9a90bd, which is no longer used.
          */
         g_hash_table_foreach(action->node->details->attrs, hash2metafield, args_xml);
     }
 
     sorted_xml(args_xml, action_xml, FALSE);
     free_xml(args_xml);
 }
 
 /*!
  * \internal
  * \brief Create the transition graph XML for a scheduled action
  *
  * \param[in,out] parent        Parent XML element to add action to
  * \param[in,out] action        Scheduled action
  * \param[in]     skip_details  If false, add action details as sub-elements
  * \param[in]     data_set      Cluster working set
  */
 static void
 create_graph_action(xmlNode *parent, pe_action_t *action, bool skip_details,
                     const pe_working_set_t *data_set)
 {
     bool needs_node_info = true;
     bool needs_maintenance_info = false;
     xmlNode *action_xml = NULL;
 
     if ((action == NULL) || (data_set == NULL)) {
         return;
     }
 
     // Create the top-level element based on task
 
     if (pcmk__str_eq(action->task, CRM_OP_FENCE, pcmk__str_casei)) {
         /* All fences need node info; guest node fences are pseudo-events */
         action_xml = create_xml_node(parent,
                                      pcmk_is_set(action->flags, pe_action_pseudo)?
                                      XML_GRAPH_TAG_PSEUDO_EVENT :
                                      XML_GRAPH_TAG_CRM_EVENT);
 
     } else if (pcmk__str_any_of(action->task,
                                 CRM_OP_SHUTDOWN,
                                 CRM_OP_CLEAR_FAILCOUNT, NULL)) {
         action_xml = create_xml_node(parent, XML_GRAPH_TAG_CRM_EVENT);
 
     } else if (pcmk__str_eq(action->task, CRM_OP_LRM_DELETE, pcmk__str_none)) {
         // CIB-only clean-up for shutdown locks
         action_xml = create_xml_node(parent, XML_GRAPH_TAG_CRM_EVENT);
         crm_xml_add(action_xml, PCMK__XA_MODE, XML_TAG_CIB);
 
     } else if (pcmk_is_set(action->flags, pe_action_pseudo)) {
         if (pcmk__str_eq(action->task, CRM_OP_MAINTENANCE_NODES,
                          pcmk__str_none)) {
             needs_maintenance_info = true;
         }
         action_xml = create_xml_node(parent, XML_GRAPH_TAG_PSEUDO_EVENT);
         needs_node_info = false;
 
     } else {
         action_xml = create_xml_node(parent, XML_GRAPH_TAG_RSC_OP);
     }
 
     crm_xml_add_int(action_xml, XML_ATTR_ID, action->id);
     crm_xml_add(action_xml, XML_LRM_ATTR_TASK, action->task);
 
     if ((action->rsc != NULL) && (action->rsc->clone_name != NULL)) {
         char *clone_key = NULL;
         guint interval_ms;
 
         if (pcmk__guint_from_hash(action->meta, XML_LRM_ATTR_INTERVAL_MS, 0,
                                   &interval_ms) != pcmk_rc_ok) {
             interval_ms = 0;
         }
         clone_key = clone_op_key(action, interval_ms);
         crm_xml_add(action_xml, XML_LRM_ATTR_TASK_KEY, clone_key);
         crm_xml_add(action_xml, "internal_" XML_LRM_ATTR_TASK_KEY, action->uuid);
         free(clone_key);
     } else {
         crm_xml_add(action_xml, XML_LRM_ATTR_TASK_KEY, action->uuid);
     }
 
     if (needs_node_info && (action->node != NULL)) {
         add_node_details(action, action_xml);
         g_hash_table_insert(action->meta, strdup(XML_LRM_ATTR_TARGET),
                             strdup(action->node->details->uname));
         g_hash_table_insert(action->meta, strdup(XML_LRM_ATTR_TARGET_UUID),
                             strdup(action->node->details->id));
     }
 
     if (skip_details) {
         return;
     }
 
     if ((action->rsc != NULL)
         && !pcmk_is_set(action->flags, pe_action_pseudo)) {
 
         // This is a real resource action, so add resource details
         add_resource_details(action, action_xml);
     }
 
     /* List any attributes in effect */
     add_action_attributes(action, action_xml);
 
     /* List any nodes this action is expected to bring down */
     if (needs_node_info && (action->node != NULL)) {
         add_downed_nodes(action_xml, action, data_set);
     }
 
     if (needs_maintenance_info) {
         add_maintenance_nodes(action_xml, data_set);
     }
 }
 
 /*!
  * \internal
  * \brief Check whether an action should be added to the transition graph
  *
  * \param[in] action  Action to check
  *
  * \return true if action should be added to graph, otherwise false
  */
 static bool
 should_add_action_to_graph(const pe_action_t *action)
 {
     if (!pcmk_is_set(action->flags, pe_action_runnable)) {
         crm_trace("Ignoring action %s (%d): unrunnable",
                   action->uuid, action->id);
         return false;
     }
 
     if (pcmk_is_set(action->flags, pe_action_optional)
         && !pcmk_is_set(action->flags, pe_action_print_always)) {
         crm_trace("Ignoring action %s (%d): optional",
                   action->uuid, action->id);
         return false;
     }
 
     /* Actions for unmanaged resources should be excluded from the graph,
      * with the exception of monitors and cancellation of recurring monitors.
      */
     if ((action->rsc != NULL)
         && !pcmk_is_set(action->rsc->flags, pe_rsc_managed)
         && !pcmk__str_eq(action->task, RSC_STATUS, pcmk__str_none)) {
         const char *interval_ms_s;
 
         /* A cancellation of a recurring monitor will get here because the task
          * is cancel rather than monitor, but the interval can still be used to
          * recognize it. The interval has been normalized to milliseconds by
          * this point, so a string comparison is sufficient.
          */
         interval_ms_s = g_hash_table_lookup(action->meta,
                                             XML_LRM_ATTR_INTERVAL_MS);
         if (pcmk__str_eq(interval_ms_s, "0", pcmk__str_null_matches)) {
             crm_trace("Ignoring action %s (%d): for unmanaged resource (%s)",
                       action->uuid, action->id, action->rsc->id);
             return false;
         }
     }
 
     /* Always add pseudo-actions, fence actions, and shutdown actions (already
      * determined to be required and runnable by this point)
      */
     if (pcmk_is_set(action->flags, pe_action_pseudo)
         || pcmk__strcase_any_of(action->task, CRM_OP_FENCE, CRM_OP_SHUTDOWN,
                                 NULL)) {
         return true;
     }
 
     if (action->node == NULL) {
         pe_err("Skipping action %s (%d) "
-               "because it was not allocated to a node (bug?)",
+               "because it was not assigned to a node (bug?)",
                action->uuid, action->id);
-        pcmk__log_action("Unallocated", action, false);
+        pcmk__log_action("Unassigned", action, false);
         return false;
     }
 
     if (pcmk_is_set(action->flags, pe_action_dc)) {
         crm_trace("Action %s (%d) should be dumped: "
                   "can run on DC instead of %s",
                   action->uuid, action->id, pe__node_name(action->node));
 
     } else if (pe__is_guest_node(action->node)
                && !action->node->details->remote_requires_reset) {
         crm_trace("Action %s (%d) should be dumped: "
                   "assuming will be runnable on guest %s",
                   action->uuid, action->id, pe__node_name(action->node));
 
     } else if (!action->node->details->online) {
         pe_err("Skipping action %s (%d) "
                "because it was scheduled for offline node (bug?)",
                action->uuid, action->id);
         pcmk__log_action("Offline node", action, false);
         return false;
 
     } else if (action->node->details->unclean) {
         pe_err("Skipping action %s (%d) "
                "because it was scheduled for unclean node (bug?)",
                action->uuid, action->id);
         pcmk__log_action("Unclean node", action, false);
         return false;
     }
     return true;
 }
 
 /*!
  * \internal
  * \brief Check whether an ordering's flags can change an action
  *
  * \param[in] ordering  Ordering to check
  *
  * \return true if ordering has flags that can change an action, false otherwise
  */
 static bool
 ordering_can_change_actions(const pe_action_wrapper_t *ordering)
 {
     return pcmk_any_flags_set(ordering->type, ~(pe_order_implies_first_printed
                                                 |pe_order_implies_then_printed
                                                 |pe_order_optional));
 }
 
 /*!
  * \internal
  * \brief Check whether an action input should be in the transition graph
  *
  * \param[in]     action  Action to check
  * \param[in,out] input   Action input to check
  *
  * \return true if input should be in graph, false otherwise
  * \note This function may not only check an input, but also disable it under
  *       certain circumstances (load or anti-colocation orderings that are
  *       not needed).
  */
 static bool
 should_add_input_to_graph(const pe_action_t *action, pe_action_wrapper_t *input)
 {
     if (input->state == pe_link_dumped) {
         return true;
     }
 
     if (input->type == pe_order_none) {
         crm_trace("Ignoring %s (%d) input %s (%d): "
                   "ordering disabled",
                   action->uuid, action->id,
                   input->action->uuid, input->action->id);
         return false;
 
     } else if (!pcmk_is_set(input->action->flags, pe_action_runnable)
                && !ordering_can_change_actions(input)) {
         crm_trace("Ignoring %s (%d) input %s (%d): "
                   "optional and input unrunnable",
                   action->uuid, action->id,
                   input->action->uuid, input->action->id);
         return false;
 
     } else if (!pcmk_is_set(input->action->flags, pe_action_runnable)
                && pcmk_is_set(input->type, pe_order_one_or_more)) {
         crm_trace("Ignoring %s (%d) input %s (%d): "
                   "one-or-more and input unrunnable",
                   action->uuid, action->id,
                   input->action->uuid, input->action->id);
         return false;
 
     } else if (pcmk_is_set(input->type, pe_order_implies_first_migratable)
                && !pcmk_is_set(input->action->flags, pe_action_runnable)) {
         crm_trace("Ignoring %s (%d) input %s (%d): "
                   "implies input migratable but input unrunnable",
                   action->uuid, action->id,
                   input->action->uuid, input->action->id);
         return false;
 
     } else if (pcmk_is_set(input->type, pe_order_apply_first_non_migratable)
                && pcmk_is_set(input->action->flags, pe_action_migrate_runnable)) {
         crm_trace("Ignoring %s (%d) input %s (%d): "
                   "only if input unmigratable but input unrunnable",
                   action->uuid, action->id,
                   input->action->uuid, input->action->id);
         return false;
 
     } else if ((input->type == pe_order_optional)
                && pcmk_is_set(input->action->flags, pe_action_migrate_runnable)
                && pcmk__ends_with(input->action->uuid, "_stop_0")) {
         crm_trace("Ignoring %s (%d) input %s (%d): "
                   "optional but stop in migration",
                   action->uuid, action->id,
                   input->action->uuid, input->action->id);
         return false;
 
     } else if (input->type == pe_order_load) {
         pe_node_t *input_node = input->action->node;
 
         // load orderings are relevant only if actions are for same node
 
         if (action->rsc && pcmk__str_eq(action->task, RSC_MIGRATE, pcmk__str_casei)) {
-            pe_node_t *allocated = action->rsc->allocated_to;
+            pe_node_t *assigned = action->rsc->allocated_to;
 
             /* For load_stopped -> migrate_to orderings, we care about where it
-             * has been allocated to, not where it will be executed.
+             * has been assigned to, not where it will be executed.
              */
-            if ((input_node == NULL) || (allocated == NULL)
-                || (input_node->details != allocated->details)) {
+            if ((input_node == NULL) || (assigned == NULL)
+                || (input_node->details != assigned->details)) {
                 crm_trace("Ignoring %s (%d) input %s (%d): "
                           "load ordering node mismatch %s vs %s",
                           action->uuid, action->id,
                           input->action->uuid, input->action->id,
-                          (allocated? allocated->details->uname : "<none>"),
+                          (assigned? assigned->details->uname : "<none>"),
                           (input_node? input_node->details->uname : "<none>"));
                 input->type = pe_order_none;
                 return false;
             }
 
         } else if ((input_node == NULL) || (action->node == NULL)
                    || (input_node->details != action->node->details)) {
             crm_trace("Ignoring %s (%d) input %s (%d): "
                       "load ordering node mismatch %s vs %s",
                       action->uuid, action->id,
                       input->action->uuid, input->action->id,
                       (action->node? action->node->details->uname : "<none>"),
                       (input_node? input_node->details->uname : "<none>"));
             input->type = pe_order_none;
             return false;
 
         } else if (pcmk_is_set(input->action->flags, pe_action_optional)) {
             crm_trace("Ignoring %s (%d) input %s (%d): "
                       "load ordering input optional",
                       action->uuid, action->id,
                       input->action->uuid, input->action->id);
             input->type = pe_order_none;
             return false;
         }
 
     } else if (input->type == pe_order_anti_colocation) {
         if (input->action->node && action->node
             && (input->action->node->details != action->node->details)) {
             crm_trace("Ignoring %s (%d) input %s (%d): "
                       "anti-colocation node mismatch %s vs %s",
                       action->uuid, action->id,
                       input->action->uuid, input->action->id,
                       pe__node_name(action->node),
                       pe__node_name(input->action->node));
             input->type = pe_order_none;
             return false;
 
         } else if (pcmk_is_set(input->action->flags, pe_action_optional)) {
             crm_trace("Ignoring %s (%d) input %s (%d): "
                       "anti-colocation input optional",
                       action->uuid, action->id,
                       input->action->uuid, input->action->id);
             input->type = pe_order_none;
             return false;
         }
 
     } else if (input->action->rsc
                && input->action->rsc != action->rsc
                && pcmk_is_set(input->action->rsc->flags, pe_rsc_failed)
                && !pcmk_is_set(input->action->rsc->flags, pe_rsc_managed)
                && pcmk__ends_with(input->action->uuid, "_stop_0")
                && action->rsc && pe_rsc_is_clone(action->rsc)) {
         crm_warn("Ignoring requirement that %s complete before %s:"
                  " unmanaged failed resources cannot prevent clone shutdown",
                  input->action->uuid, action->uuid);
         return false;
 
     } else if (pcmk_is_set(input->action->flags, pe_action_optional)
                && !pcmk_any_flags_set(input->action->flags,
                                       pe_action_print_always|pe_action_dumped)
                && !should_add_action_to_graph(input->action)) {
         crm_trace("Ignoring %s (%d) input %s (%d): "
                   "input optional",
                   action->uuid, action->id,
                   input->action->uuid, input->action->id);
         return false;
     }
 
     crm_trace("%s (%d) input %s %s (%d) on %s should be dumped: %s %s %#.6x",
               action->uuid, action->id, action_type_str(input->action->flags),
               input->action->uuid, input->action->id,
               action_node_str(input->action),
               action_runnable_str(input->action->flags),
               action_optional_str(input->action->flags), input->type);
     return true;
 }
 
 /*!
  * \internal
  * \brief Check whether an ordering creates an ordering loop
  *
  * \param[in]     init_action  "First" action in ordering
  * \param[in]     action       Callers should always set this the same as
  *                             \p init_action (this function may use a different
  *                             value for recursive calls)
  * \param[in,out] input        Action wrapper for "then" action in ordering
  *
  * \return true if the ordering creates a loop, otherwise false
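  *
  * \note This is a depth-first search that uses the pe_action_tracking flag
  *       as a "visited" marker: the flag is set before recursing into an
  *       input's own inputs and cleared afterward, so an action reached again
  *       along the current path is not re-walked.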
  */
 bool
 pcmk__graph_has_loop(const pe_action_t *init_action, const pe_action_t *action,
                      pe_action_wrapper_t *input)
 {
     bool has_loop = false;
 
     if (pcmk_is_set(input->action->flags, pe_action_tracking)) {
         crm_trace("Breaking tracking loop: %s@%s -> %s@%s (%#.6x)",
                   input->action->uuid,
                   input->action->node? input->action->node->details->uname : "",
                   action->uuid,
                   action->node? action->node->details->uname : "",
                   input->type);
         return false;
     }
 
     // Don't need to check inputs that won't be used
     if (!should_add_input_to_graph(action, input)) {
         return false;
     }
 
     if (input->action == init_action) {
         crm_debug("Input loop found in %s@%s ->...-> %s@%s",
                   action->uuid,
                   action->node? action->node->details->uname : "",
                   init_action->uuid,
                   init_action->node? init_action->node->details->uname : "");
         return true;
     }
 
     pe__set_action_flags(input->action, pe_action_tracking);
 
     crm_trace("Checking inputs of action %s@%s input %s@%s (%#.6x)"
               "for graph loop with %s@%s ",
               action->uuid,
               action->node? action->node->details->uname : "",
               input->action->uuid,
               input->action->node? input->action->node->details->uname : "",
               input->type,
               init_action->uuid,
               init_action->node? init_action->node->details->uname : "");
 
     // Recursively check input itself for loops
     for (GList *iter = input->action->actions_before;
          iter != NULL; iter = iter->next) {
 
         if (pcmk__graph_has_loop(init_action, input->action,
                                  (pe_action_wrapper_t *) iter->data)) {
             // Recursive call already logged a debug message
             has_loop = true;
             break;
         }
     }
 
     pe__clear_action_flags(input->action, pe_action_tracking);
 
     if (!has_loop) {
         crm_trace("No input loop found in %s@%s -> %s@%s (%#.6x)",
                   input->action->uuid,
                   input->action->node? input->action->node->details->uname : "",
                   action->uuid,
                   action->node? action->node->details->uname : "",
                   input->type);
     }
     return has_loop;
 }
 
 /*!
  * \internal
  * \brief Create a synapse XML element for a transition graph
  *
  * \param[in]     action    Action that synapse is for
  * \param[in,out] data_set  Cluster working set containing graph
  *
  * \return Newly added XML element for new graph synapse
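  *
  * \note The synapse priority is the higher of the action's priority and its
  *       resource's priority (if any), and is added as an XML attribute only
  *       when positive.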
  */
 static xmlNode *
 create_graph_synapse(const pe_action_t *action, pe_working_set_t *data_set)
 {
     int synapse_priority = 0;
     xmlNode *syn = create_xml_node(data_set->graph, "synapse");
 
     crm_xml_add_int(syn, XML_ATTR_ID, data_set->num_synapse);
     data_set->num_synapse++;
 
     if (action->rsc != NULL) {
         synapse_priority = action->rsc->priority;
     }
     if (action->priority > synapse_priority) {
         synapse_priority = action->priority;
     }
     if (synapse_priority > 0) {
         crm_xml_add_int(syn, XML_CIB_ATTR_PRIORITY, synapse_priority);
     }
     return syn;
 }
 
 /*!
  * \internal
  * \brief Add an action to the transition graph XML if appropriate
  *
  * \param[in,out] data       Action to possibly add
  * \param[in,out] user_data  Cluster working set
  *
  * \note This will de-duplicate the action inputs, meaning that the
  *       pe_action_wrapper_t:type flags can no longer be relied on to retain
  *       their original settings. That means this MUST be called after
  *       pcmk__apply_orderings() is complete, and nothing after this should rely
  *       on those type flags. (For example, some code looks for type equal to
  *       some flag rather than whether the flag is set, and some code looks for
  *       particular combinations of flags -- such code must be done before
  *       pcmk__create_graph().)
  */
 static void
 add_action_to_graph(gpointer data, gpointer user_data)
 {
     pe_action_t *action = (pe_action_t *) data;
     pe_working_set_t *data_set = (pe_working_set_t *) user_data;
 
     xmlNode *syn = NULL;
     xmlNode *set = NULL;
     xmlNode *in = NULL;
 
     /* If we haven't already, de-duplicate inputs (even if we won't be adding
      * the action to the graph, so that crm_simulate's dot graphs don't have
      * duplicates).
      */
     if (!pcmk_is_set(action->flags, pe_action_dedup)) {
         pcmk__deduplicate_action_inputs(action);
         pe__set_action_flags(action, pe_action_dedup);
     }
 
     if (pcmk_is_set(action->flags, pe_action_dumped)    // Already added, or
         || !should_add_action_to_graph(action)) {       // shouldn't be added
         return;
     }
     pe__set_action_flags(action, pe_action_dumped);
 
     crm_trace("Adding action %d (%s%s%s) to graph",
               action->id, action->uuid,
               ((action->node == NULL)? "" : " on "),
               ((action->node == NULL)? "" : action->node->details->uname));
 
     syn = create_graph_synapse(action, data_set);
     set = create_xml_node(syn, "action_set");
     in = create_xml_node(syn, "inputs");
 
     create_graph_action(set, action, false, data_set);
 
     for (GList *lpc = action->actions_before; lpc != NULL; lpc = lpc->next) {
         pe_action_wrapper_t *input = (pe_action_wrapper_t *) lpc->data;
 
         if (should_add_input_to_graph(action, input)) {
             xmlNode *input_xml = create_xml_node(in, "trigger");
 
             input->state = pe_link_dumped;
             create_graph_action(input_xml, input->action, true, data_set);
         }
     }
 }
 
 static int transition_id = -1;
 
 /*!
  * \internal
  * \brief Log a message after calculating a transition
  *
  * \param[in] filename  Where transition input is stored
  */
 void
 pcmk__log_transition_summary(const char *filename)
 {
     if (was_processing_error) {
         crm_err("Calculated transition %d (with errors)%s%s",
                 transition_id,
                 (filename == NULL)? "" : ", saving inputs in ",
                 (filename == NULL)? "" : filename);
 
     } else if (was_processing_warning) {
         crm_warn("Calculated transition %d (with warnings)%s%s",
                  transition_id,
                  (filename == NULL)? "" : ", saving inputs in ",
                  (filename == NULL)? "" : filename);
 
     } else {
         crm_notice("Calculated transition %d%s%s",
                    transition_id,
                    (filename == NULL)? "" : ", saving inputs in ",
                    (filename == NULL)? "" : filename);
     }
     if (crm_config_error) {
         crm_notice("Configuration errors found during scheduler processing,"
                    "  please run \"crm_verify -L\" to identify issues");
     }
 }
 
 /*!
  * \internal
  * \brief Add a resource's actions to the transition graph
  *
  * \param[in,out] rsc  Resource whose actions should be added
  */
 void
 pcmk__add_rsc_actions_to_graph(pe_resource_t *rsc)
 {
     GList *iter = NULL;
 
     CRM_ASSERT(rsc != NULL);
     pe_rsc_trace(rsc, "Adding actions for %s to graph", rsc->id);
 
     // First add the resource's own actions
     g_list_foreach(rsc->actions, add_action_to_graph, rsc->cluster);
 
     // Then recursively add its children's actions (appropriate to variant)
     for (iter = rsc->children; iter != NULL; iter = iter->next) {
         pe_resource_t *child_rsc = (pe_resource_t *) iter->data;
 
         child_rsc->cmds->add_actions_to_graph(child_rsc);
     }
 }
 
 /*!
  * \internal
  * \brief Create a transition graph with all cluster actions needed
  *
  * \param[in,out] data_set  Cluster working set
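  *
  * \note The result goes in data_set->graph: one XML_TAG_GRAPH element whose
  *       attributes carry cluster properties (such as cluster-delay and
  *       batch-limit), with one synapse child added per graph action.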
  */
 void
 pcmk__create_graph(pe_working_set_t *data_set)
 {
     GList *iter = NULL;
     const char *value = NULL;
     long long limit = 0LL;
 
     transition_id++;
     crm_trace("Creating transition graph %d", transition_id);
 
     data_set->graph = create_xml_node(NULL, XML_TAG_GRAPH);
 
     value = pe_pref(data_set->config_hash, "cluster-delay");
     crm_xml_add(data_set->graph, "cluster-delay", value);
 
     value = pe_pref(data_set->config_hash, "stonith-timeout");
     crm_xml_add(data_set->graph, "stonith-timeout", value);
 
     crm_xml_add(data_set->graph, "failed-stop-offset", "INFINITY");
 
     if (pcmk_is_set(data_set->flags, pe_flag_start_failure_fatal)) {
         crm_xml_add(data_set->graph, "failed-start-offset", "INFINITY");
     } else {
         crm_xml_add(data_set->graph, "failed-start-offset", "1");
     }
 
     value = pe_pref(data_set->config_hash, "batch-limit");
     crm_xml_add(data_set->graph, "batch-limit", value);
 
     crm_xml_add_int(data_set->graph, "transition_id", transition_id);
 
     value = pe_pref(data_set->config_hash, "migration-limit");
     if ((pcmk__scan_ll(value, &limit, 0LL) == pcmk_rc_ok) && (limit > 0)) {
         crm_xml_add(data_set->graph, "migration-limit", value);
     }
 
     if (data_set->recheck_by > 0) {
         char *recheck_epoch = NULL;
 
         recheck_epoch = crm_strdup_printf("%llu",
                                           (unsigned long long)
                                           data_set->recheck_by);
         crm_xml_add(data_set->graph, "recheck-by", recheck_epoch);
         free(recheck_epoch);
     }
 
     /* The following code will de-duplicate action inputs, so nothing past this
      * should rely on the action input type flags retaining their original
      * values.
      */
 
     // Add resource actions to graph
     for (iter = data_set->resources; iter != NULL; iter = iter->next) {
         pe_resource_t *rsc = (pe_resource_t *) iter->data;
 
         pe_rsc_trace(rsc, "Processing actions for %s", rsc->id);
         rsc->cmds->add_actions_to_graph(rsc);
     }
 
     // Add pseudo-action for list of nodes with maintenance state update
     add_maintenance_update(data_set);
 
     // Add non-resource (node) actions
     for (iter = data_set->actions; iter != NULL; iter = iter->next) {
         pe_action_t *action = (pe_action_t *) iter->data;
 
         if ((action->rsc != NULL)
             && (action->node != NULL)
             && action->node->details->shutdown
             && !pcmk_is_set(action->rsc->flags, pe_rsc_maintenance)
             && !pcmk_any_flags_set(action->flags,
                                    pe_action_optional|pe_action_runnable)
             && pcmk__str_eq(action->task, RSC_STOP, pcmk__str_none)) {
             /* Eventually we should just ignore the 'fence' case, but for now
              * it's the best way to detect (in CTS) when CIB resource updates
              * are being lost.
              */
             if (pcmk_is_set(data_set->flags, pe_flag_have_quorum)
                 || (data_set->no_quorum_policy == no_quorum_ignore)) {
                 crm_crit("Cannot %s %s because of %s:%s%s (%s)",
                          action->node->details->unclean? "fence" : "shut down",
                          pe__node_name(action->node), action->rsc->id,
                          pcmk_is_set(action->rsc->flags, pe_rsc_managed)? " blocked" : " unmanaged",
                          pcmk_is_set(action->rsc->flags, pe_rsc_failed)? " failed" : "",
                          action->uuid);
             }
         }
 
         add_action_to_graph((gpointer) action, (gpointer) data_set);
     }
 
     crm_log_xml_trace(data_set->graph, "graph");
 }
diff --git a/lib/pacemaker/pcmk_sched_actions.c b/lib/pacemaker/pcmk_sched_actions.c
index 13d0b4b4e6..5f2640268e 100644
--- a/lib/pacemaker/pcmk_sched_actions.c
+++ b/lib/pacemaker/pcmk_sched_actions.c
@@ -1,1918 +1,1918 @@
 /*
  * Copyright 2004-2023 the Pacemaker project contributors
  *
  * The version control history for this file may have further details.
  *
  * This source code is licensed under the GNU General Public License version 2
  * or later (GPLv2+) WITHOUT ANY WARRANTY.
  */
 
 #include <crm_internal.h>
 
 #include <stdio.h>
 #include <sys/param.h>
 #include <glib.h>
 
 #include <crm/lrmd_internal.h>
 #include <pacemaker-internal.h>
 #include "libpacemaker_private.h"
 
 /*!
  * \internal
  * \brief Get the action flags relevant to ordering constraints
  *
  * \param[in,out] action  Action to check
  * \param[in]     node    Node that *other* action in the ordering is on
  *                        (used only for clone resource actions)
  *
  * \return Action flags that should be used for orderings
  */
 static uint32_t
 action_flags_for_ordering(pe_action_t *action, const pe_node_t *node)
 {
     bool runnable = false;
     uint32_t flags;
 
     // For non-resource actions, return the action flags
     if (action->rsc == NULL) {
         return action->flags;
     }
 
     /* For non-clone resources, or a clone action not assigned to a node,
      * return the flags as determined by the resource method without a node
      * specified.
      */
     flags = action->rsc->cmds->action_flags(action, NULL);
     if ((node == NULL) || !pe_rsc_is_clone(action->rsc)) {
         return flags;
     }
 
     /* Otherwise (i.e., for clone resource actions on a specific node), first
      * remember whether the non-node-specific action is runnable.
      */
     runnable = pcmk_is_set(flags, pe_action_runnable);
 
     // Then recheck the resource method with the node
     flags = action->rsc->cmds->action_flags(action, node);
 
     /* For clones in ordering constraints, the node-specific "runnable" doesn't
      * matter, just the non-node-specific setting (i.e., is the action runnable
      * anywhere).
      *
      * This applies only to runnable, and only for ordering constraints. This
      * function shouldn't be used for other types of constraints without
      * changes. Not very satisfying, but it's logical and appears to work well.
      */
     if (runnable && !pcmk_is_set(flags, pe_action_runnable)) {
         pe__set_raw_action_flags(flags, action->rsc->id,
                                  pe_action_runnable);
     }
     return flags;
 }
 
 /*!
  * \internal
  * \brief Get action UUID that should be used with a resource ordering
  *
  * When an action is ordered relative to an action for a collective resource
  * (clone, group, or bundle), it actually needs to be ordered after all
  * instances of the collective have completed the relevant action (for example,
  * given "start CLONE then start RSC", RSC must wait until all instances of
  * CLONE have started). Given the UUID and resource of the first action in an
  * ordering, this returns the UUID of the action that should actually be used
  * for ordering (for example, "CLONE_started_0" instead of "CLONE_start_0").
  *
  * \param[in] first_uuid    UUID of first action in ordering
  * \param[in] first_rsc     Resource of first action in ordering
  *
  * \return Newly allocated copy of UUID to use with ordering
  * \note It is the caller's responsibility to free the return value.
  */
 static char *
 action_uuid_for_ordering(const char *first_uuid, const pe_resource_t *first_rsc)
 {
     guint interval_ms = 0;
     char *uuid = NULL;
     char *rid = NULL;
     char *first_task_str = NULL;
     enum action_tasks first_task = no_action;
     enum action_tasks remapped_task = no_action;
 
     // Only non-notify actions for collective resources need remapping
     if ((strstr(first_uuid, "notify") != NULL)
         || (first_rsc->variant < pe_group)) {
         goto done;
     }
 
     // Only non-recurring actions need remapping
     CRM_ASSERT(parse_op_key(first_uuid, &rid, &first_task_str, &interval_ms));
     if (interval_ms > 0) {
         goto done;
     }
 
     first_task = text2task(first_task_str);
     switch (first_task) {
         case stop_rsc:
         case start_rsc:
         case action_notify:
         case action_promote:
         case action_demote:
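             /* enum action_tasks lists each of these tasks immediately before
              * its completed counterpart (for example, start_rsc is followed
              * by started_rsc), so the remapped task is a simple increment
              */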
             remapped_task = first_task + 1;
             break;
         case stopped_rsc:
         case started_rsc:
         case action_notified:
         case action_promoted:
         case action_demoted:
             remapped_task = first_task;
             break;
         case monitor_rsc:
         case shutdown_crm:
         case stonith_node:
             break;
         default:
             crm_err("Unknown action '%s' in ordering", first_task_str);
             break;
     }
 
     if (remapped_task != no_action) {
         /* If a (clone) resource has notifications enabled, we want to order
          * relative to when all notifications have been sent for the remapped
          * task. Only outermost resources or those in bundles have
          * notifications.
          */
         if (pcmk_is_set(first_rsc->flags, pe_rsc_notify)
             && ((first_rsc->parent == NULL)
                 || (pe_rsc_is_clone(first_rsc)
                     && (first_rsc->parent->variant == pe_container)))) {
             uuid = pcmk__notify_key(rid, "confirmed-post",
                                     task2text(remapped_task));
         } else {
             uuid = pcmk__op_key(rid, task2text(remapped_task), 0);
         }
         pe_rsc_trace(first_rsc,
                      "Remapped action UUID %s to %s for ordering purposes",
                      first_uuid, uuid);
     }
 
 done:
     if (uuid == NULL) {
         uuid = strdup(first_uuid);
         CRM_ASSERT(uuid != NULL);
     }
     free(first_task_str);
     free(rid);
     return uuid;
 }
 
 /*!
  * \internal
  * \brief Get actual action that should be used with an ordering
  *
  * When an action is ordered relative to an action for a collective resource
  * (clone, group, or bundle), it actually needs to be ordered after all
  * instances of the collective have completed the relevant action (for example,
  * given "start CLONE then start RSC", RSC must wait until all instances of
  * CLONE have started). Given the first action in an ordering, this returns
  * the action that should actually be used for ordering (for example, the
  * started action instead of the start action).
  *
  * \param[in] action  First action in an ordering
  *
  * \return Actual action that should be used for the ordering
  */
 static pe_action_t *
 action_for_ordering(pe_action_t *action)
 {
     pe_action_t *result = action;
     pe_resource_t *rsc = action->rsc;
 
     if ((rsc != NULL) && (rsc->variant >= pe_group) && (action->uuid != NULL)) {
         char *uuid = action_uuid_for_ordering(action->uuid, rsc);
 
         result = find_first_action(rsc->actions, uuid, NULL, NULL);
         if (result == NULL) {
             crm_warn("Not remapping %s to %s because %s does not have "
                      "remapped action", action->uuid, uuid, rsc->id);
             result = action;
         }
         free(uuid);
     }
     return result;
 }
 
 /*!
  * \internal
  * \brief Update flags for ordering's actions appropriately for ordering's flags
  *
  * \param[in,out] first        First action in an ordering
  * \param[in,out] then         Then action in an ordering
  * \param[in]     first_flags  Action flags for \p first for ordering purposes
  * \param[in]     then_flags   Action flags for \p then for ordering purposes
  * \param[in,out] order        Action wrapper for \p first in ordering
  * \param[in,out] data_set     Cluster working set
  *
  * \return Group of enum pcmk__updated flags
  */
 static uint32_t
 update_action_for_ordering_flags(pe_action_t *first, pe_action_t *then,
                                  uint32_t first_flags, uint32_t then_flags,
                                  pe_action_wrapper_t *order,
                                  pe_working_set_t *data_set)
 {
     uint32_t changed = pcmk__updated_none;
 
     /* The node will only be used for clones. If interleaved, node will be NULL,
      * otherwise the ordering scope will be limited to the node. Normally, the
      * whole 'then' clone should restart if 'first' is restarted, so then->node
      * is needed.
      */
     pe_node_t *node = then->node;
 
     if (pcmk_is_set(order->type, pe_order_implies_then_on_node)) {
         /* For unfencing, only instances of 'then' on the same node as 'first'
          * (the unfencing operation) should restart, so reset node to
          * first->node, at which point this case is handled like a normal
          * pe_order_implies_then.
          */
         pe__clear_order_flags(order->type, pe_order_implies_then_on_node);
         pe__set_order_flags(order->type, pe_order_implies_then);
         node = first->node;
         pe_rsc_trace(then->rsc,
                      "%s then %s: mapped pe_order_implies_then_on_node to "
                      "pe_order_implies_then on %s",
                      first->uuid, then->uuid, pe__node_name(node));
     }
 
     if (pcmk_is_set(order->type, pe_order_implies_then)) {
         if (then->rsc != NULL) {
             changed |= then->rsc->cmds->update_ordered_actions(first, then,
                                                                node,
                                                                first_flags & pe_action_optional,
                                                                pe_action_optional,
                                                                pe_order_implies_then,
                                                                data_set);
         } else if (!pcmk_is_set(first_flags, pe_action_optional)
                    && pcmk_is_set(then->flags, pe_action_optional)) {
             pe__clear_action_flags(then, pe_action_optional);
             pcmk__set_updated_flags(changed, first, pcmk__updated_then);
         }
         pe_rsc_trace(then->rsc, "%s then %s: %s after pe_order_implies_then",
                      first->uuid, then->uuid,
                      (changed? "changed" : "unchanged"));
     }
 
     if (pcmk_is_set(order->type, pe_order_restart) && (then->rsc != NULL)) {
         enum pe_action_flags restart = pe_action_optional|pe_action_runnable;
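         /* Both optionality and runnability matter for restarts: a required
          * 'then' makes 'first' required, and an unrunnable 'first' blocks
          * 'then' (see handle_restart_ordering())
          */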
 
         changed |= then->rsc->cmds->update_ordered_actions(first, then, node,
                                                            first_flags, restart,
                                                            pe_order_restart,
                                                            data_set);
         pe_rsc_trace(then->rsc, "%s then %s: %s after pe_order_restart",
                      first->uuid, then->uuid,
                      (changed? "changed" : "unchanged"));
     }
 
     if (pcmk_is_set(order->type, pe_order_implies_first)) {
         if (first->rsc != NULL) {
             changed |= first->rsc->cmds->update_ordered_actions(first, then,
                                                                 node,
                                                                 first_flags,
                                                                 pe_action_optional,
                                                                 pe_order_implies_first,
                                                                 data_set);
         } else if (!pcmk_is_set(first_flags, pe_action_optional)
                    && pcmk_is_set(first->flags, pe_action_runnable)) {
             pe__clear_action_flags(first, pe_action_runnable);
             pcmk__set_updated_flags(changed, first, pcmk__updated_first);
         }
         pe_rsc_trace(then->rsc, "%s then %s: %s after pe_order_implies_first",
                      first->uuid, then->uuid,
                      (changed? "changed" : "unchanged"));
     }
 
     if (pcmk_is_set(order->type, pe_order_promoted_implies_first)) {
         if (then->rsc != NULL) {
             changed |= then->rsc->cmds->update_ordered_actions(first, then,
                                                                node,
                                                                first_flags & pe_action_optional,
                                                                pe_action_optional,
                                                                pe_order_promoted_implies_first,
                                                                data_set);
         }
         pe_rsc_trace(then->rsc,
                      "%s then %s: %s after pe_order_promoted_implies_first",
                      first->uuid, then->uuid,
                      (changed? "changed" : "unchanged"));
     }
 
     if (pcmk_is_set(order->type, pe_order_one_or_more)) {
         if (then->rsc != NULL) {
             changed |= then->rsc->cmds->update_ordered_actions(first, then,
                                                                node,
                                                                first_flags,
                                                                pe_action_runnable,
                                                                pe_order_one_or_more,
                                                                data_set);
 
         } else if (pcmk_is_set(first_flags, pe_action_runnable)) {
             // We have another runnable instance of "first"
             then->runnable_before++;
 
             /* Mark "then" as runnable if it requires a certain number of
              * "before" instances to be runnable, and they now are.
              */
             if ((then->runnable_before >= then->required_runnable_before)
                 && !pcmk_is_set(then->flags, pe_action_runnable)) {
 
                 pe__set_action_flags(then, pe_action_runnable);
                 pcmk__set_updated_flags(changed, first, pcmk__updated_then);
             }
         }
         pe_rsc_trace(then->rsc, "%s then %s: %s after pe_order_one_or_more",
                      first->uuid, then->uuid,
                      (changed? "changed" : "unchanged"));
     }
 
     if (pcmk_is_set(order->type, pe_order_probe) && (then->rsc != NULL)) {
         if (!pcmk_is_set(first_flags, pe_action_runnable)
             && (first->rsc->running_on != NULL)) {
 
             pe_rsc_trace(then->rsc,
                          "%s then %s: ignoring because first is stopping",
                          first->uuid, then->uuid);
             order->type = pe_order_none;
         } else {
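             // Otherwise, treat the probe like a pe_order_runnable_left ordering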
             changed |= then->rsc->cmds->update_ordered_actions(first, then,
                                                                node,
                                                                first_flags,
                                                                pe_action_runnable,
                                                                pe_order_runnable_left,
                                                                data_set);
         }
         pe_rsc_trace(then->rsc, "%s then %s: %s after pe_order_probe",
                      first->uuid, then->uuid,
                      (changed? "changed" : "unchanged"));
     }
 
     if (pcmk_is_set(order->type, pe_order_runnable_left)) {
         if (then->rsc != NULL) {
             changed |= then->rsc->cmds->update_ordered_actions(first, then,
                                                                node,
                                                                first_flags,
                                                                pe_action_runnable,
                                                                pe_order_runnable_left,
                                                                data_set);
 
         } else if (!pcmk_is_set(first_flags, pe_action_runnable)
                    && pcmk_is_set(then->flags, pe_action_runnable)) {
 
             pe__clear_action_flags(then, pe_action_runnable);
             pcmk__set_updated_flags(changed, first, pcmk__updated_then);
         }
         pe_rsc_trace(then->rsc, "%s then %s: %s after pe_order_runnable_left",
                      first->uuid, then->uuid,
                      (changed? "changed" : "unchanged"));
     }
 
     if (pcmk_is_set(order->type, pe_order_implies_first_migratable)) {
         if (then->rsc != NULL) {
             changed |= then->rsc->cmds->update_ordered_actions(first, then,
                                                                node,
                                                                first_flags,
                                                                pe_action_optional,
                                                                pe_order_implies_first_migratable,
                                                                data_set);
         }
         pe_rsc_trace(then->rsc, "%s then %s: %s after "
                      "pe_order_implies_first_migratable",
                      first->uuid, then->uuid,
                      (changed? "changed" : "unchanged"));
     }
 
     if (pcmk_is_set(order->type, pe_order_pseudo_left)) {
         if (then->rsc != NULL) {
             changed |= then->rsc->cmds->update_ordered_actions(first, then,
                                                                node,
                                                                first_flags,
                                                                pe_action_optional,
                                                                pe_order_pseudo_left,
                                                                data_set);
         }
         pe_rsc_trace(then->rsc, "%s then %s: %s after pe_order_pseudo_left",
                      first->uuid, then->uuid,
                      (changed? "changed" : "unchanged"));
     }
 
     if (pcmk_is_set(order->type, pe_order_optional)) {
         if (then->rsc != NULL) {
             changed |= then->rsc->cmds->update_ordered_actions(first, then,
                                                                node,
                                                                first_flags,
                                                                pe_action_runnable,
                                                                pe_order_optional,
                                                                data_set);
         }
         pe_rsc_trace(then->rsc, "%s then %s: %s after pe_order_optional",
                      first->uuid, then->uuid,
                      (changed? "changed" : "unchanged"));
     }
 
     if (pcmk_is_set(order->type, pe_order_asymmetrical)) {
         if (then->rsc != NULL) {
             changed |= then->rsc->cmds->update_ordered_actions(first, then,
                                                                node,
                                                                first_flags,
                                                                pe_action_runnable,
                                                                pe_order_asymmetrical,
                                                                data_set);
         }
         pe_rsc_trace(then->rsc, "%s then %s: %s after pe_order_asymmetrical",
                      first->uuid, then->uuid,
                      (changed? "changed" : "unchanged"));
     }
 
     if (pcmk_is_set(first->flags, pe_action_runnable)
         && pcmk_is_set(order->type, pe_order_implies_then_printed)
         && !pcmk_is_set(first_flags, pe_action_optional)) {
 
         pe_rsc_trace(then->rsc, "%s will be in graph because %s is required",
                      then->uuid, first->uuid);
         pe__set_action_flags(then, pe_action_print_always);
         // Don't bother marking 'then' as changed just for this
     }
 
     if (pcmk_is_set(order->type, pe_order_implies_first_printed)
         && !pcmk_is_set(then_flags, pe_action_optional)) {
 
         pe_rsc_trace(then->rsc, "%s will be in graph because %s is required",
                      first->uuid, then->uuid);
         pe__set_action_flags(first, pe_action_print_always);
         // Don't bother marking 'first' as changed just for this
     }
 
     if (pcmk_any_flags_set(order->type, pe_order_implies_then
                                         |pe_order_implies_first
                                         |pe_order_restart)
         && (first->rsc != NULL)
         && !pcmk_is_set(first->rsc->flags, pe_rsc_managed)
         && pcmk_is_set(first->rsc->flags, pe_rsc_block)
         && !pcmk_is_set(first->flags, pe_action_runnable)
         && pcmk__str_eq(first->task, RSC_STOP, pcmk__str_casei)) {
 
         if (pcmk_is_set(then->flags, pe_action_runnable)) {
             pe__clear_action_flags(then, pe_action_runnable);
             pcmk__set_updated_flags(changed, first, pcmk__updated_then);
         }
         pe_rsc_trace(then->rsc, "%s then %s: %s after checking whether first "
                      "is blocked, unmanaged, unrunnable stop",
                      first->uuid, then->uuid,
                      (changed? "changed" : "unchanged"));
     }
 
     return changed;
 }
 
 // Convenience macros for logging action properties
 
 #define action_type_str(flags) \
     (pcmk_is_set((flags), pe_action_pseudo)? "pseudo-action" : "action")
 
 #define action_optional_str(flags) \
     (pcmk_is_set((flags), pe_action_optional)? "optional" : "required")
 
 #define action_runnable_str(flags) \
     (pcmk_is_set((flags), pe_action_runnable)? "runnable" : "unrunnable")
 
 #define action_node_str(a) \
     (((a)->node == NULL)? "no node" : (a)->node->details->uname)
 
 /*!
  * \internal
  * \brief Update an action's flags for all orderings where it is "then"
  *
  * \param[in,out] then      Action to update
  * \param[in,out] data_set  Cluster working set
  */
 void
 pcmk__update_action_for_orderings(pe_action_t *then, pe_working_set_t *data_set)
 {
     GList *lpc = NULL;
     uint32_t changed = pcmk__updated_none;
     uint32_t last_flags = then->flags;
 
     pe_rsc_trace(then->rsc, "Updating %s %s (%s %s) on %s",
                  action_type_str(then->flags), then->uuid,
                  action_optional_str(then->flags),
                  action_runnable_str(then->flags), action_node_str(then));
 
     if (pcmk_is_set(then->flags, pe_action_requires_any)) {
         /* Initialize current known "runnable before" actions. As
          * update_action_for_ordering_flags() is called for each of then's
          * before actions, this number will increment as runnable 'first'
          * actions are encountered.
          */
         then->runnable_before = 0;
 
         if (then->required_runnable_before == 0) {
             /* @COMPAT This ordering constraint uses the deprecated
              * "require-all=false" attribute. Treat it like "clone-min=1".
              */
             then->required_runnable_before = 1;
         }
 
         /* The pe_order_one_or_more clause of update_action_for_ordering_flags()
          * (called below) will reset runnable if appropriate.
          */
         pe__clear_action_flags(then, pe_action_runnable);
     }
 
     for (lpc = then->actions_before; lpc != NULL; lpc = lpc->next) {
         pe_action_wrapper_t *other = (pe_action_wrapper_t *) lpc->data;
         pe_action_t *first = other->action;
 
         pe_node_t *then_node = then->node;
         pe_node_t *first_node = first->node;
 
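         /* Group start actions may not be assigned to a node themselves, so
          * use the group's location for the node-based checks below
          */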
         if ((first->rsc != NULL)
             && (first->rsc->variant == pe_group)
             && pcmk__str_eq(first->task, RSC_START, pcmk__str_casei)) {
 
             first_node = first->rsc->fns->location(first->rsc, NULL, FALSE);
             if (first_node != NULL) {
                 pe_rsc_trace(first->rsc, "Found %s for 'first' %s",
                              pe__node_name(first_node), first->uuid);
             }
         }
 
         if ((then->rsc != NULL)
             && (then->rsc->variant == pe_group)
             && pcmk__str_eq(then->task, RSC_START, pcmk__str_casei)) {
 
             then_node = then->rsc->fns->location(then->rsc, NULL, FALSE);
             if (then_node != NULL) {
                 pe_rsc_trace(then->rsc, "Found %s for 'then' %s",
                              pe__node_name(then_node), then->uuid);
             }
         }
 
         /* Disable the constraint if it applies only when the actions are on
          * the same node, but they are not
          */
         if (pcmk_is_set(other->type, pe_order_same_node)
             && (first_node != NULL) && (then_node != NULL)
             && (first_node->details != then_node->details)) {
 
             pe_rsc_trace(then->rsc,
                          "Disabled ordering %s on %s then %s on %s: not same node",
                          other->action->uuid, pe__node_name(first_node),
                          then->uuid, pe__node_name(then_node));
             other->type = pe_order_none;
             continue;
         }
 
         pcmk__clear_updated_flags(changed, then, pcmk__updated_first);
 
         if ((first->rsc != NULL)
             && pcmk_is_set(other->type, pe_order_then_cancels_first)
             && !pcmk_is_set(then->flags, pe_action_optional)) {
 
             /* 'then' is required, so we must abandon 'first'
              * (e.g. a required stop cancels any agent reload).
              */
             pe__set_action_flags(other->action, pe_action_optional);
             if (!strcmp(first->task, CRMD_ACTION_RELOAD_AGENT)) {
                 pe__clear_resource_flags(first->rsc, pe_rsc_reload);
             }
         }
 
         if ((first->rsc != NULL) && (then->rsc != NULL)
             && (first->rsc != then->rsc) && !is_parent(then->rsc, first->rsc)) {
             first = action_for_ordering(first);
         }
         if (first != other->action) {
             pe_rsc_trace(then->rsc, "Ordering %s after %s instead of %s",
                          then->uuid, first->uuid, other->action->uuid);
         }
 
         pe_rsc_trace(then->rsc,
                      "%s (%#.6x) then %s (%#.6x): type=%#.6x node=%s",
                      first->uuid, first->flags, then->uuid, then->flags,
                      other->type, action_node_str(first));
 
         if (first == other->action) {
             /* 'first' was not remapped (e.g. from 'start' to 'running'), which
              * could mean it is a non-resource action, a primitive resource
              * action, or already expanded.
              */
             uint32_t first_flags, then_flags;
 
             first_flags = action_flags_for_ordering(first, then_node);
             then_flags = action_flags_for_ordering(then, first_node);
 
             changed |= update_action_for_ordering_flags(first, then,
                                                         first_flags, then_flags,
                                                         other, data_set);
 
             /* 'first' was for a complex resource (clone, group, etc),
              * create a new dependency if necessary
              */
         } else if (order_actions(first, then, other->type)) {
             /* This was the first time 'first' and 'then' were associated,
              * start again to get the new actions_before list
              */
             pcmk__set_updated_flags(changed, then, pcmk__updated_then);
             pe_rsc_trace(then->rsc,
                          "Disabled ordering %s then %s in favor of %s then %s",
                          other->action->uuid, then->uuid, first->uuid,
                          then->uuid);
             other->type = pe_order_none;
         }
 
         if (pcmk_is_set(changed, pcmk__updated_first)) {
             crm_trace("Re-processing %s and its 'after' actions "
                       "because it changed", first->uuid);
             for (GList *lpc2 = first->actions_after; lpc2 != NULL;
                  lpc2 = lpc2->next) {
                 pe_action_wrapper_t *other = (pe_action_wrapper_t *) lpc2->data;
 
                 pcmk__update_action_for_orderings(other->action, data_set);
             }
             pcmk__update_action_for_orderings(first, data_set);
         }
     }
 
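     /* For "requires any" actions, runnability was recomputed from scratch
      * across all "first" actions, so judge whether the action changed by the
      * overall flag comparison rather than by the individual updates
      */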
     if (pcmk_is_set(then->flags, pe_action_requires_any)) {
         if (last_flags == then->flags) {
             pcmk__clear_updated_flags(changed, then, pcmk__updated_then);
         } else {
             pcmk__set_updated_flags(changed, then, pcmk__updated_then);
         }
     }
 
     if (pcmk_is_set(changed, pcmk__updated_then)) {
         crm_trace("Re-processing %s and its 'after' actions because it changed",
                   then->uuid);
         if (pcmk_is_set(last_flags, pe_action_runnable)
             && !pcmk_is_set(then->flags, pe_action_runnable)) {
             pcmk__block_colocation_dependents(then, data_set);
         }
         pcmk__update_action_for_orderings(then, data_set);
         for (lpc = then->actions_after; lpc != NULL; lpc = lpc->next) {
             pe_action_wrapper_t *other = (pe_action_wrapper_t *) lpc->data;
 
             pcmk__update_action_for_orderings(other->action, data_set);
         }
     }
 }
 
 static inline bool
 is_primitive_action(const pe_action_t *action)
 {
     return (action != NULL) && (action->rsc != NULL)
            && (action->rsc->variant == pe_native);
 }
 
 /*!
  * \internal
  * \brief Clear a single action flag and set reason text
  *
  * \param[in,out] action  Action whose flag should be cleared
  * \param[in]     flag    Action flag that should be cleared
  * \param[in]     reason  Action that is the reason why flag is being cleared
  */
 #define clear_action_flag_because(action, flag, reason) do {                \
         if (pcmk_is_set((action)->flags, (flag))) {                         \
             pe__clear_action_flags(action, flag);                           \
             if ((action)->rsc != (reason)->rsc) {                           \
                 char *reason_text = pe__action2reason((reason), (flag));    \
                 pe_action_set_reason((action), reason_text,                 \
                                    ((flag) == pe_action_migrate_runnable)); \
                 free(reason_text);                                          \
             }                                                               \
         }                                                                   \
     } while (0)
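 
 /* For example, given pe_action_t pointers stop and start, an unrunnable stop
  * that must block the start could be handled (recording the stop as the
  * reason) with:
  *
  *     clear_action_flag_because(start, pe_action_runnable, stop);
  */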
 
 /*!
  * \internal
  * \brief Update actions in an asymmetric ordering
  *
  * If the "first" action in an asymmetric ordering is unrunnable, make the
  * "second" action unrunnable as well, if appropriate.
  *
  * \param[in]     first  'First' action in an asymmetric ordering
  * \param[in,out] then   'Then' action in an asymmetric ordering
  */
 static void
 handle_asymmetric_ordering(const pe_action_t *first, pe_action_t *then)
 {
     /* Only resource actions after an unrunnable 'first' action need updates for
      * asymmetric ordering.
      */
     if ((then->rsc == NULL) || pcmk_is_set(first->flags, pe_action_runnable)) {
         return;
     }
 
     // Certain optional 'then' actions are unaffected by unrunnable 'first'
     if (pcmk_is_set(then->flags, pe_action_optional)) {
         enum rsc_role_e then_rsc_role = then->rsc->fns->state(then->rsc, TRUE);
 
         if ((then_rsc_role == RSC_ROLE_STOPPED)
             && pcmk__str_eq(then->task, RSC_STOP, pcmk__str_none)) {
             /* If 'then' should stop after 'first' but is already stopped, the
              * ordering is irrelevant.
              */
             return;
         } else if ((then_rsc_role >= RSC_ROLE_STARTED)
                    && pcmk__str_eq(then->task, RSC_START, pcmk__str_none)
                    && pe__rsc_running_on_only(then->rsc, then->node)) {
             /* Similarly if 'then' should start after 'first' but is already
              * started on a single node.
              */
             return;
         }
     }
 
     // 'First' can't run, so 'then' can't either
     clear_action_flag_because(then, pe_action_optional, first);
     clear_action_flag_because(then, pe_action_runnable, first);
 }
 
 /*!
  * \internal
  * \brief Set action bits appropriately when pe_order_restart is used
  *
  * \param[in,out] first   'First' action in an ordering with pe_order_restart
  * \param[in,out] then    'Then' action in an ordering with pe_order_restart
  * \param[in]     filter  What action flags to care about
  *
  * \note pe_order_restart is set for "stop resource before starting it" and
  *       "stop later group member before stopping earlier group member"
  */
 static void
 handle_restart_ordering(pe_action_t *first, pe_action_t *then, uint32_t filter)
 {
     const char *reason = NULL;
 
     CRM_ASSERT(is_primitive_action(first));
     CRM_ASSERT(is_primitive_action(then));
 
     // We need to update the action in two cases:
 
     // ... if 'then' is required
     if (pcmk_is_set(filter, pe_action_optional)
         && !pcmk_is_set(then->flags, pe_action_optional)) {
         reason = "restart";
     }
 
     /* ... if 'then' is an unrunnable action on the same resource (if a
      * resource should restart but can't start, we still want to stop)
      */
     if (pcmk_is_set(filter, pe_action_runnable)
         && !pcmk_is_set(then->flags, pe_action_runnable)
         && pcmk_is_set(then->rsc->flags, pe_rsc_managed)
         && (first->rsc == then->rsc)) {
         reason = "stop";
     }
 
     if (reason == NULL) {
         return;
     }
 
     pe_rsc_trace(first->rsc, "Handling %s -> %s for %s",
                  first->uuid, then->uuid, reason);
 
     // Make 'first' required if it is runnable
     if (pcmk_is_set(first->flags, pe_action_runnable)) {
         clear_action_flag_because(first, pe_action_optional, then);
     }
 
     // Make 'first' required if 'then' is required
     if (!pcmk_is_set(then->flags, pe_action_optional)) {
         clear_action_flag_because(first, pe_action_optional, then);
     }
 
     // Make 'first' unmigratable if 'then' is unmigratable
     if (!pcmk_is_set(then->flags, pe_action_migrate_runnable)) {
         clear_action_flag_because(first, pe_action_migrate_runnable, then);
     }
 
     // Make 'then' unrunnable if 'first' is required but unrunnable
     if (!pcmk_is_set(first->flags, pe_action_optional)
         && !pcmk_is_set(first->flags, pe_action_runnable)) {
         clear_action_flag_because(then, pe_action_runnable, first);
     }
 }
 
 /*!
  * \internal
  * \brief Update two actions according to an ordering between them
  *
  * Given information about an ordering of two actions, update the actions'
  * flags (and runnable_before members when relevant) as appropriate for the
  * ordering. Effects may cascade to other orderings involving the actions.
  *
  * \param[in,out] first     'First' action in an ordering
  * \param[in,out] then      'Then' action in an ordering
  * \param[in]     node      If not NULL, limit scope of ordering to this node
  *                          (ignored by this implementation)
  * \param[in]     flags     Action flags for \p first for ordering purposes
  * \param[in]     filter    Action flags to limit scope of certain updates (may
  *                          include pe_action_optional to affect only mandatory
  *                          actions, and pe_action_runnable to affect only
  *                          runnable actions)
  * \param[in]     type      Group of enum pe_ordering flags to apply
  * \param[in,out] data_set  Cluster working set
  *
  * \return Group of enum pcmk__updated flags indicating what was updated
  */
 uint32_t
 pcmk__update_ordered_actions(pe_action_t *first, pe_action_t *then,
                              const pe_node_t *node, uint32_t flags,
                              uint32_t filter, uint32_t type,
                              pe_working_set_t *data_set)
 {
     uint32_t changed = pcmk__updated_none;
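 
     // Snapshot the incoming flags to detect changes made by the checks below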
     uint32_t then_flags = then->flags;
     uint32_t first_flags = first->flags;
 
     if (pcmk_is_set(type, pe_order_asymmetrical)) {
         handle_asymmetric_ordering(first, then);
     }
 
     if (pcmk_is_set(type, pe_order_implies_first)
         && !pcmk_is_set(then_flags, pe_action_optional)) {
         // Then is required, and implies first should be, too
 
         if (pcmk_is_set(filter, pe_action_optional)
             && !pcmk_is_set(flags, pe_action_optional)
             && pcmk_is_set(first_flags, pe_action_optional)) {
             clear_action_flag_because(first, pe_action_optional, then);
         }
 
         if (pcmk_is_set(flags, pe_action_migrate_runnable)
             && !pcmk_is_set(then->flags, pe_action_migrate_runnable)) {
             clear_action_flag_because(first, pe_action_migrate_runnable, then);
         }
     }
 
     if (pcmk_is_set(type, pe_order_promoted_implies_first)
         && (then->rsc != NULL) && (then->rsc->role == RSC_ROLE_PROMOTED)
         && pcmk_is_set(filter, pe_action_optional)
         && !pcmk_is_set(then->flags, pe_action_optional)) {
 
         clear_action_flag_because(first, pe_action_optional, then);
 
         if (pcmk_is_set(first->flags, pe_action_migrate_runnable)
             && !pcmk_is_set(then->flags, pe_action_migrate_runnable)) {
             clear_action_flag_because(first, pe_action_migrate_runnable,
                                       then);
         }
     }
 
     if (pcmk_is_set(type, pe_order_implies_first_migratable)
         && pcmk_is_set(filter, pe_action_optional)) {
 
         if (!pcmk_all_flags_set(then->flags,
                                 pe_action_migrate_runnable|pe_action_runnable)) {
             clear_action_flag_because(first, pe_action_runnable, then);
         }
 
         if (!pcmk_is_set(then->flags, pe_action_optional)) {
             clear_action_flag_because(first, pe_action_optional, then);
         }
     }
 
     if (pcmk_is_set(type, pe_order_pseudo_left)
         && pcmk_is_set(filter, pe_action_optional)
         && !pcmk_is_set(first->flags, pe_action_runnable)) {
 
         clear_action_flag_because(then, pe_action_migrate_runnable, first);
         pe__clear_action_flags(then, pe_action_pseudo);
     }
 
     if (pcmk_is_set(type, pe_order_runnable_left)
         && pcmk_is_set(filter, pe_action_runnable)
         && pcmk_is_set(then->flags, pe_action_runnable)
         && !pcmk_is_set(flags, pe_action_runnable)) {
 
         clear_action_flag_because(then, pe_action_runnable, first);
         clear_action_flag_because(then, pe_action_migrate_runnable, first);
     }
 
     if (pcmk_is_set(type, pe_order_implies_then)
         && pcmk_is_set(filter, pe_action_optional)
         && pcmk_is_set(then->flags, pe_action_optional)
         && !pcmk_is_set(flags, pe_action_optional)
         && !pcmk_is_set(first->flags, pe_action_migrate_runnable)) {
 
         clear_action_flag_because(then, pe_action_optional, first);
     }
 
     if (pcmk_is_set(type, pe_order_restart)) {
         handle_restart_ordering(first, then, filter);
     }
 
     if (then_flags != then->flags) {
         pcmk__set_updated_flags(changed, first, pcmk__updated_then);
         pe_rsc_trace(then->rsc,
                      "%s on %s: flags are now %#.6x (was %#.6x) "
                      "because of 'first' %s (%#.6x)",
                      then->uuid, pe__node_name(then->node),
                      then->flags, then_flags, first->uuid, first->flags);
 
         if ((then->rsc != NULL) && (then->rsc->parent != NULL)) {
             // Required to handle "X_stop then X_start" for cloned groups
             pcmk__update_action_for_orderings(then, data_set);
         }
     }
 
     if (first_flags != first->flags) {
         pcmk__set_updated_flags(changed, first, pcmk__updated_first);
         pe_rsc_trace(first->rsc,
                      "%s on %s: flags are now %#.6x (was %#.6x) "
                      "because of 'then' %s (%#.6x)",
                      first->uuid, pe__node_name(first->node),
                      first->flags, first_flags, then->uuid, then->flags);
     }
 
     return changed;
 }
 
 /*!
  * \internal
  * \brief Trace-log an action (optionally with its dependent actions)
  *
  * \param[in] pre_text  If not NULL, prefix the log with this plus ": "
  * \param[in] action    Action to log
  * \param[in] details   If true, recursively log dependent actions
  */
 void
 pcmk__log_action(const char *pre_text, const pe_action_t *action, bool details)
 {
     const char *node_uname = NULL;
     const char *node_uuid = NULL;
     const char *desc = NULL;
 
     CRM_CHECK(action != NULL, return);
 
     if (!pcmk_is_set(action->flags, pe_action_pseudo)) {
         if (action->node != NULL) {
             node_uname = action->node->details->uname;
             node_uuid = action->node->details->id;
         } else {
             node_uname = "<none>";
         }
     }
 
     switch (text2task(action->task)) {
         case stonith_node:
         case shutdown_crm:
             if (pcmk_is_set(action->flags, pe_action_pseudo)) {
                 desc = "Pseudo ";
             } else if (pcmk_is_set(action->flags, pe_action_optional)) {
                 desc = "Optional ";
             } else if (!pcmk_is_set(action->flags, pe_action_runnable)) {
                 desc = "!!Non-Startable!! ";
             } else if (pcmk_is_set(action->flags, pe_action_processed)) {
                 desc = "";
             } else {
                 desc = "(Provisional) ";
             }
             crm_trace("%s%s%sAction %d: %s%s%s%s%s%s",
                       ((pre_text == NULL)? "" : pre_text),
                       ((pre_text == NULL)? "" : ": "),
                       desc, action->id, action->uuid,
                       (node_uname? "\ton " : ""), (node_uname? node_uname : ""),
                       (node_uuid? "\t\t(" : ""), (node_uuid? node_uuid : ""),
                       (node_uuid? ")" : ""));
             break;
         default:
             if (pcmk_is_set(action->flags, pe_action_optional)) {
                 desc = "Optional ";
             } else if (pcmk_is_set(action->flags, pe_action_pseudo)) {
                 desc = "Pseudo ";
             } else if (!pcmk_is_set(action->flags, pe_action_runnable)) {
                 desc = "!!Non-Startable!! ";
             } else if (pcmk_is_set(action->flags, pe_action_processed)) {
                 desc = "";
             } else {
                 desc = "(Provisional) ";
             }
             crm_trace("%s%s%sAction %d: %s %s%s%s%s%s%s",
                       ((pre_text == NULL)? "" : pre_text),
                       ((pre_text == NULL)? "" : ": "),
                       desc, action->id, action->uuid,
                       (action->rsc? action->rsc->id : "<none>"),
                       (node_uname? "\ton " : ""), (node_uname? node_uname : ""),
                       (node_uuid? "\t\t(" : ""), (node_uuid? node_uuid : ""),
                       (node_uuid? ")" : ""));
             break;
     }
 
     if (details) {
         const GList *iter = NULL;
         const pe_action_wrapper_t *other = NULL;
 
         crm_trace("\t\t====== Preceding Actions");
         for (iter = action->actions_before; iter != NULL; iter = iter->next) {
             other = (const pe_action_wrapper_t *) iter->data;
             pcmk__log_action("\t\t", other->action, false);
         }
         crm_trace("\t\t====== Subsequent Actions");
         for (iter = action->actions_after; iter != NULL; iter = iter->next) {
             other = (const pe_action_wrapper_t *) iter->data;
             pcmk__log_action("\t\t", other->action, false);
         }
         crm_trace("\t\t====== End");
 
     } else {
         crm_trace("\t\t(before=%d, after=%d)",
                   g_list_length(action->actions_before),
                   g_list_length(action->actions_after));
     }
 }
 
 /*!
  * \internal
  * \brief Create a new shutdown action for a node
  *
  * \param[in,out] node  Node being shut down
  *
  * \return Newly created shutdown action for \p node
  */
 pe_action_t *
 pcmk__new_shutdown_action(pe_node_t *node)
 {
     char *shutdown_id = NULL;
     pe_action_t *shutdown_op = NULL;
 
     CRM_ASSERT(node != NULL);
 
     shutdown_id = crm_strdup_printf("%s-%s", CRM_OP_SHUTDOWN,
                                     node->details->uname);
 
     shutdown_op = custom_action(NULL, shutdown_id, CRM_OP_SHUTDOWN, node, FALSE,
                                 TRUE, node->details->data_set);
 
     pcmk__order_stops_before_shutdown(node, shutdown_op);
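     // Mark the shutdown so the transition does not wait for its completion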
     add_hash_param(shutdown_op->meta, XML_ATTR_TE_NOWAIT, XML_BOOLEAN_TRUE);
     return shutdown_op;
 }
 
 /*!
  * \internal
  * \brief Calculate and add an operation digest to XML
  *
  * Calculate an operation digest, which enables us to later determine when a
  * restart is needed due to the resource's parameters being changed, and add it
  * to given XML.
  *
  * \param[in]     op      Operation result from executor
  * \param[in,out] update  XML to add digest to
  */
 static void
 add_op_digest_to_xml(const lrmd_event_data_t *op, xmlNode *update)
 {
     char *digest = NULL;
     xmlNode *args_xml = NULL;
 
     if (op->params == NULL) {
         return;
     }
     args_xml = create_xml_node(NULL, XML_TAG_PARAMS);
     g_hash_table_foreach(op->params, hash2field, args_xml);
     pcmk__filter_op_for_digest(args_xml);
     digest = calculate_operation_digest(args_xml, NULL);
     crm_xml_add(update, XML_LRM_ATTR_OP_DIGEST, digest);
     free_xml(args_xml);
     free(digest);
 }
 
 #define FAKE_TE_ID     "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
 
 /*!
  * \internal
  * \brief Create XML for resource operation history update
  *
  * \param[in,out] parent          Parent XML node to add to
  * \param[in,out] op              Operation event data
  * \param[in]     caller_version  DC feature set
  * \param[in]     target_rc       Expected result of operation
  * \param[in]     node            Name of node on which operation was performed
  * \param[in]     origin          Arbitrary description of update source
  *
  * \return Newly created XML node for history update
  */
 xmlNode *
 pcmk__create_history_xml(xmlNode *parent, lrmd_event_data_t *op,
                          const char *caller_version, int target_rc,
                          const char *node, const char *origin)
 {
     char *key = NULL;
     char *magic = NULL;
     char *op_id = NULL;
     char *op_id_additional = NULL;
     char *local_user_data = NULL;
     const char *exit_reason = NULL;
 
     xmlNode *xml_op = NULL;
     const char *task = NULL;
 
     CRM_CHECK(op != NULL, return NULL);
     crm_trace("Creating history XML for %s-interval %s action for %s on %s "
               "(DC version: %s, origin: %s)",
               pcmk__readable_interval(op->interval_ms), op->op_type, op->rsc_id,
               ((node == NULL)? "no node" : node), caller_version, origin);
 
     task = op->op_type;
 
     /* Record a successful agent reload as a start, and a failed one as a
      * monitor, to make life easier for the scheduler when determining the
      * current state.
      *
      * @COMPAT We should check "reload" here only if the operation was for a
      * pre-OCF-1.1 resource agent, but we don't know that here, and we should
      * only ever get results for actions scheduled by us, so we can reasonably
      * assume any "reload" is actually a pre-1.1 agent reload.
      */
     if (pcmk__str_any_of(task, CRMD_ACTION_RELOAD, CRMD_ACTION_RELOAD_AGENT,
                          NULL)) {
         if (op->op_status == PCMK_EXEC_DONE) {
             task = CRMD_ACTION_START;
         } else {
             task = CRMD_ACTION_STATUS;
         }
     }
 
     key = pcmk__op_key(op->rsc_id, task, op->interval_ms);
     if (pcmk__str_eq(task, CRMD_ACTION_NOTIFY, pcmk__str_none)) {
         const char *n_type = crm_meta_value(op->params, "notify_type");
         const char *n_task = crm_meta_value(op->params, "notify_operation");
 
         CRM_LOG_ASSERT(n_type != NULL);
         CRM_LOG_ASSERT(n_task != NULL);
         op_id = pcmk__notify_key(op->rsc_id, n_type, n_task);
 
         if (op->op_status != PCMK_EXEC_PENDING) {
             /* Ignore notify errors.
              *
              * @TODO It might be better to keep the correct result here, and
              * ignore it in process_graph_event().
              */
             lrmd__set_result(op, PCMK_OCF_OK, PCMK_EXEC_DONE, NULL);
         }
 
     /* Migration history is preserved separately, which usually matters for
      * multiple nodes and is important for future cluster transitions.
      */
     } else if (pcmk__str_any_of(op->op_type, CRMD_ACTION_MIGRATE,
                                 CRMD_ACTION_MIGRATED, NULL)) {
         op_id = strdup(key);
 
     } else if (did_rsc_op_fail(op, target_rc)) {
         op_id = pcmk__op_key(op->rsc_id, "last_failure", 0);
         if (op->interval_ms == 0) {
             // Ensure 'last' gets updated, in case record-pending is true
             op_id_additional = pcmk__op_key(op->rsc_id, "last", 0);
         }
         exit_reason = op->exit_reason;
 
     } else if (op->interval_ms > 0) {
         op_id = strdup(key);
 
     } else {
         op_id = pcmk__op_key(op->rsc_id, "last", 0);
     }
 
   again:
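     // Update an existing entry with this ID if present, else create one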
     xml_op = pcmk__xe_match(parent, XML_LRM_TAG_RSC_OP, XML_ATTR_ID, op_id);
     if (xml_op == NULL) {
         xml_op = create_xml_node(parent, XML_LRM_TAG_RSC_OP);
     }
 
     if (op->user_data == NULL) {
         crm_debug("Generating fake transition key for: " PCMK__OP_FMT
                   " %d from %s", op->rsc_id, op->op_type, op->interval_ms,
                   op->call_id, origin);
         local_user_data = pcmk__transition_key(-1, op->call_id, target_rc,
                                                FAKE_TE_ID);
         op->user_data = local_user_data;
     }
 
     if (magic == NULL) {
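         // Only computed once; reused if we loop via "again"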
         magic = crm_strdup_printf("%d:%d;%s", op->op_status, op->rc,
                                   (const char *) op->user_data);
     }
 
     crm_xml_add(xml_op, XML_ATTR_ID, op_id);
     crm_xml_add(xml_op, XML_LRM_ATTR_TASK_KEY, key);
     crm_xml_add(xml_op, XML_LRM_ATTR_TASK, task);
     crm_xml_add(xml_op, XML_ATTR_ORIGIN, origin);
     crm_xml_add(xml_op, XML_ATTR_CRM_VERSION, caller_version);
     crm_xml_add(xml_op, XML_ATTR_TRANSITION_KEY, op->user_data);
     crm_xml_add(xml_op, XML_ATTR_TRANSITION_MAGIC, magic);
     crm_xml_add(xml_op, XML_LRM_ATTR_EXIT_REASON,
                 ((exit_reason == NULL)? "" : exit_reason));
     crm_xml_add(xml_op, XML_LRM_ATTR_TARGET, node); /* For context during triage */
 
     crm_xml_add_int(xml_op, XML_LRM_ATTR_CALLID, op->call_id);
     crm_xml_add_int(xml_op, XML_LRM_ATTR_RC, op->rc);
     crm_xml_add_int(xml_op, XML_LRM_ATTR_OPSTATUS, op->op_status);
     crm_xml_add_ms(xml_op, XML_LRM_ATTR_INTERVAL_MS, op->interval_ms);
 
     if (compare_version("2.1", caller_version) <= 0) {
         if (op->t_run || op->t_rcchange || op->exec_time || op->queue_time) {
             crm_trace("Timing data (" PCMK__OP_FMT
                       "): last=%u change=%u exec=%u queue=%u",
                       op->rsc_id, op->op_type, op->interval_ms,
                       op->t_run, op->t_rcchange, op->exec_time, op->queue_time);
 
             if ((op->interval_ms != 0) && (op->t_rcchange != 0)) {
                 // Recurring ops may have changed rc after initial run
                 crm_xml_add_ll(xml_op, XML_RSC_OP_LAST_CHANGE,
                                (long long) op->t_rcchange);
             } else {
                 crm_xml_add_ll(xml_op, XML_RSC_OP_LAST_CHANGE,
                                (long long) op->t_run);
             }
 
             crm_xml_add_int(xml_op, XML_RSC_OP_T_EXEC, op->exec_time);
             crm_xml_add_int(xml_op, XML_RSC_OP_T_QUEUE, op->queue_time);
         }
     }
 
     if (pcmk__str_any_of(op->op_type, CRMD_ACTION_MIGRATE,
                          CRMD_ACTION_MIGRATED, NULL)) {
         // Always record migrate_source and migrate_target for migrate ops
         const char *name = XML_LRM_ATTR_MIGRATE_SOURCE;
 
         crm_xml_add(xml_op, name, crm_meta_value(op->params, name));
 
         name = XML_LRM_ATTR_MIGRATE_TARGET;
         crm_xml_add(xml_op, name, crm_meta_value(op->params, name));
     }
 
     add_op_digest_to_xml(op, xml_op);
 
     if (op_id_additional) {
         free(op_id);
         op_id = op_id_additional;
         op_id_additional = NULL;
         goto again;
     }
 
     if (local_user_data) {
         free(local_user_data);
         op->user_data = NULL;
     }
     free(magic);
     free(op_id);
     free(key);
     return xml_op;
 }
 
 /*!
  * \internal
  * \brief Check whether an action shutdown-locks a resource to a node
  *
  * If the shutdown-lock cluster property is set, resources will not be recovered
  * on a different node if cleanly stopped, and may start only on that same node.
  * This function checks whether that applies to a given action, so that the
  * transition graph can be marked appropriately.
  *
  * \param[in] action  Action to check
  *
  * \return true if \p action locks its resource to the action's node,
  *         otherwise false
  */
 bool
 pcmk__action_locks_rsc_to_node(const pe_action_t *action)
 {
     // Only resource actions on the resource's lock node are locked
     if ((action == NULL) || (action->rsc == NULL)
         || (action->rsc->lock_node == NULL) || (action->node == NULL)
         || (action->node->details != action->rsc->lock_node->details)) {
         return false;
     }
 
     /* During shutdown, only stops are locked (otherwise, another action such as
      * a demote would cause the controller to clear the lock)
      */
     if (action->node->details->shutdown && (action->task != NULL)
         && (strcmp(action->task, RSC_STOP) != 0)) {
         return false;
     }
 
     return true;
 }
 
 // Sort action wrappers by action ID, lowest to highest
 static gint
 sort_action_id(gconstpointer a, gconstpointer b)
 {
     const pe_action_wrapper_t *action_wrapper2 = (const pe_action_wrapper_t *)a;
     const pe_action_wrapper_t *action_wrapper1 = (const pe_action_wrapper_t *)b;
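     /* Note the reversed assignments above: "a" maps to action_wrapper2 and
      * "b" to action_wrapper1, so the ID comparisons below sort lowest first
      */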
 
     if (a == NULL) {
         return 1;
     }
     if (b == NULL) {
         return -1;
     }
     if (action_wrapper1->action->id < action_wrapper2->action->id) {
         return 1;
     }
     if (action_wrapper1->action->id > action_wrapper2->action->id) {
         return -1;
     }
     return 0;
 }
 
 /*!
  * \internal
  * \brief Remove any duplicate action inputs, merging action flags
  *
  * \param[in,out] action  Action whose inputs should be checked
  */
 void
 pcmk__deduplicate_action_inputs(pe_action_t *action)
 {
     GList *item = NULL;
     GList *next = NULL;
     pe_action_wrapper_t *last_input = NULL;
 
     action->actions_before = g_list_sort(action->actions_before,
                                          sort_action_id);
     for (item = action->actions_before; item != NULL; item = next) {
         pe_action_wrapper_t *input = (pe_action_wrapper_t *) item->data;
 
         next = item->next;
         if ((last_input != NULL)
             && (input->action->id == last_input->action->id)) {
             crm_trace("Input %s (%d) duplicate skipped for action %s (%d)",
                       input->action->uuid, input->action->id,
                       action->uuid, action->id);
 
             /* For the purposes of scheduling, the ordering flags no longer
              * matter, but crm_simulate looks at certain ones when creating a
              * dot graph. Combining the flags is sufficient for that purpose.
              */
             last_input->type |= input->type;
             if (input->state == pe_link_dumped) {
                 last_input->state = pe_link_dumped;
             }
 
             free(item->data);
             action->actions_before = g_list_delete_link(action->actions_before,
                                                         item);
         } else {
             last_input = input;
             input->state = pe_link_not_dumped;
         }
     }
 }
 
 /*!
  * \internal
  * \brief Output all scheduled actions
  *
  * \param[in,out] data_set  Cluster working set
  */
 void
 pcmk__output_actions(pe_working_set_t *data_set)
 {
     pcmk__output_t *out = data_set->priv;
 
     // Output node (non-resource) actions
     for (GList *iter = data_set->actions; iter != NULL; iter = iter->next) {
         char *node_name = NULL;
         char *task = NULL;
         pe_action_t *action = (pe_action_t *) iter->data;
 
         if (action->rsc != NULL) {
             continue; // Resource actions will be output later
 
         } else if (pcmk_is_set(action->flags, pe_action_optional)) {
             continue; // This action was not scheduled
         }
 
         if (pcmk__str_eq(action->task, CRM_OP_SHUTDOWN, pcmk__str_casei)) {
             task = strdup("Shutdown");
 
         } else if (pcmk__str_eq(action->task, CRM_OP_FENCE, pcmk__str_casei)) {
             const char *op = g_hash_table_lookup(action->meta, "stonith_action");
 
             task = crm_strdup_printf("Fence (%s)", op);
 
         } else {
             continue; // Don't display other node action types
         }
 
         if (pe__is_guest_node(action->node)) {
             node_name = crm_strdup_printf("%s (resource: %s)",
                                           pe__node_name(action->node),
                                           action->node->details->remote_rsc->container->id);
         } else if (action->node != NULL) {
             node_name = crm_strdup_printf("%s", pe__node_name(action->node));
         }
 
         out->message(out, "node-action", task, node_name, action->reason);
 
         free(node_name);
         free(task);
     }
 
     // Output resource actions
     for (GList *iter = data_set->resources; iter != NULL; iter = iter->next) {
         pe_resource_t *rsc = (pe_resource_t *) iter->data;
 
         rsc->cmds->output_actions(rsc);
     }
 }
 
 /*!
  * \internal
  * \brief Check whether action from resource history is still in configuration
  *
  * \param[in] rsc          Resource that action is for
  * \param[in] task         Action's name
  * \param[in] interval_ms  Action's interval (in milliseconds)
  *
  * \return true if action is still in resource configuration, otherwise false
  */
 static bool
 action_in_config(const pe_resource_t *rsc, const char *task, guint interval_ms)
 {
     char *key = pcmk__op_key(rsc->id, task, interval_ms);
     bool config = (find_rsc_op_entry(rsc, key) != NULL);
 
     free(key);
     return config;
 }
 
 /*!
  * \internal
  * \brief Get action name needed to compare digest for configuration changes
  *
  * \param[in] task         Action name from history
  * \param[in] interval_ms  Action interval (in milliseconds)
  *
  * \return Action name whose digest should be compared
  */
 static const char *
 task_for_digest(const char *task, guint interval_ms)
 {
     /* Certain actions need to be compared against the parameters used to start
      * the resource.
      */
     if ((interval_ms == 0)
         && pcmk__str_any_of(task, RSC_STATUS, RSC_MIGRATED, RSC_PROMOTE, NULL)) {
         task = RSC_START;
     }
     return task;
 }
 
 /*!
  * \internal
  * \brief Check whether only sanitized parameters to an action changed
  *
  * When collecting CIB files for troubleshooting, crm_report will mask
  * sensitive resource parameters. If simulations were run using such a CIB,
  * affected resources would appear to need a restart, which would complicate
  * troubleshooting. To avoid that, we save a "secure digest" of non-sensitive
  * parameters. This function uses that digest to check whether only masked
  * parameters are different.
  *
  * \param[in] xml_op       Resource history entry with secure digest
  * \param[in] digest_data  Operation digest information being compared
  * \param[in] data_set     Cluster working set
  *
  * \return true if only sanitized parameters changed, otherwise false
  */
 static bool
 only_sanitized_changed(const xmlNode *xml_op,
                        const op_digest_cache_t *digest_data,
                        const pe_working_set_t *data_set)
 {
     const char *digest_secure = NULL;
 
     if (!pcmk_is_set(data_set->flags, pe_flag_sanitized)) {
         // The scheduler is not being run as a simulation
         return false;
     }
 
     digest_secure = crm_element_value(xml_op, XML_LRM_ATTR_SECURE_DIGEST);
 
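     /* Only sanitized parameters changed if some digest differs but the
      * digest of non-sensitive parameters still matches
      */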
     return (digest_data->rc != RSC_DIGEST_MATCH) && (digest_secure != NULL)
            && (digest_data->digest_secure_calc != NULL)
            && (strcmp(digest_data->digest_secure_calc, digest_secure) == 0);
 }
 
 /*!
  * \internal
  * \brief Force a restart due to a configuration change
  *
  * \param[in,out] rsc          Resource that action is for
  * \param[in]     task         Name of action whose configuration changed
  * \param[in]     interval_ms  Action interval (in milliseconds)
  * \param[in,out] node         Node where resource should be restarted
  */
 static void
 force_restart(pe_resource_t *rsc, const char *task, guint interval_ms,
               pe_node_t *node)
 {
     char *key = pcmk__op_key(rsc->id, task, interval_ms);
     pe_action_t *required = custom_action(rsc, key, task, NULL, FALSE, TRUE,
                                           rsc->cluster);
 
     pe_action_set_reason(required, "resource definition change", true);
     trigger_unfencing(rsc, node, "Device parameters changed", NULL,
                       rsc->cluster);
 }
 
 /*!
  * \internal
  * \brief Schedule a reload of a resource on a node
  *
  * \param[in,out] rsc   Resource to reload
  * \param[in]     node  Where resource should be reloaded
  */
 static void
 schedule_reload(pe_resource_t *rsc, const pe_node_t *node)
 {
     pe_action_t *reload = NULL;
 
     // For collective resources, just call recursively for children
     if (rsc->variant > pe_native) {
         g_list_foreach(rsc->children, (GFunc) schedule_reload, (gpointer) node);
         return;
     }
 
     // Skip the reload in certain situations
     if ((node == NULL)
         || !pcmk_is_set(rsc->flags, pe_rsc_managed)
         || pcmk_is_set(rsc->flags, pe_rsc_failed)) {
         pe_rsc_trace(rsc, "Skip reload of %s:%s%s %s",
                      rsc->id,
                      pcmk_is_set(rsc->flags, pe_rsc_managed)? "" : " unmanaged",
                      pcmk_is_set(rsc->flags, pe_rsc_failed)? " failed" : "",
                      (node == NULL)? "inactive" : node->details->uname);
         return;
     }
 
     /* If a resource's configuration changed while a start was pending,
      * force a full restart instead of a reload.
      */
     if (pcmk_is_set(rsc->flags, pe_rsc_start_pending)) {
         pe_rsc_trace(rsc, "%s: preventing agent reload because start pending",
                      rsc->id);
         custom_action(rsc, stop_key(rsc), CRMD_ACTION_STOP, node, FALSE, TRUE,
                       rsc->cluster);
         return;
     }
 
     // Schedule the reload
     pe__set_resource_flags(rsc, pe_rsc_reload);
     reload = custom_action(rsc, reload_key(rsc), CRMD_ACTION_RELOAD_AGENT, node,
                            FALSE, TRUE, rsc->cluster);
     pe_action_set_reason(reload, "resource definition change", FALSE);
 
     // Set orderings so that a required stop or demote cancels the reload
     pcmk__new_ordering(NULL, NULL, reload, rsc, stop_key(rsc), NULL,
                        pe_order_optional|pe_order_then_cancels_first,
                        rsc->cluster);
     pcmk__new_ordering(NULL, NULL, reload, rsc, demote_key(rsc), NULL,
                        pe_order_optional|pe_order_then_cancels_first,
                        rsc->cluster);
 }
 
 /*!
  * \internal
  * \brief Handle any configuration change for an action
  *
  * Given an action from resource history, if the resource's configuration
  * changed since the action was done, schedule any actions needed (restart,
  * reload, unfencing, rescheduling recurring actions, etc.).
  *
  * \param[in,out] rsc     Resource that action is for
  * \param[in,out] node    Node that action was on
  * \param[in]     xml_op  Action XML from resource history
  *
  * \return true if action configuration changed, otherwise false
  */
 bool
 pcmk__check_action_config(pe_resource_t *rsc, pe_node_t *node,
                           const xmlNode *xml_op)
 {
     guint interval_ms = 0;
     const char *task = NULL;
     const op_digest_cache_t *digest_data = NULL;
 
     CRM_CHECK((rsc != NULL) && (node != NULL) && (xml_op != NULL),
               return false);
 
     task = crm_element_value(xml_op, XML_LRM_ATTR_TASK);
     CRM_CHECK(task != NULL, return false);
 
     crm_element_value_ms(xml_op, XML_LRM_ATTR_INTERVAL_MS, &interval_ms);
 
     // If this is a recurring action, check whether it has been orphaned
     if (interval_ms > 0) {
         if (action_in_config(rsc, task, interval_ms)) {
             pe_rsc_trace(rsc, "%s-interval %s for %s on %s is in configuration",
                          pcmk__readable_interval(interval_ms), task, rsc->id,
                          pe__node_name(node));
         } else if (pcmk_is_set(rsc->cluster->flags,
                                pe_flag_stop_action_orphans)) {
             pcmk__schedule_cancel(rsc,
                                   crm_element_value(xml_op, XML_LRM_ATTR_CALLID),
                                   task, interval_ms, node, "orphan");
             return true;
         } else {
             pe_rsc_debug(rsc, "%s-interval %s for %s on %s is orphaned",
                          pcmk__readable_interval(interval_ms), task, rsc->id,
                          pe__node_name(node));
             return true;
         }
     }
 
     crm_trace("Checking %s-interval %s for %s on %s for configuration changes",
               pcmk__readable_interval(interval_ms), task, rsc->id,
               pe__node_name(node));
     task = task_for_digest(task, interval_ms);
     digest_data = rsc_action_digest_cmp(rsc, xml_op, node, rsc->cluster);
 
     if (only_sanitized_changed(xml_op, digest_data, rsc->cluster)) {
         if (!pcmk__is_daemon && (rsc->cluster->priv != NULL)) {
             pcmk__output_t *out = rsc->cluster->priv;
 
             out->info(out,
                       "Only 'private' parameters to %s-interval %s for %s "
                       "on %s changed: %s",
                       pcmk__readable_interval(interval_ms), task, rsc->id,
                       pe__node_name(node),
                       crm_element_value(xml_op, XML_ATTR_TRANSITION_MAGIC));
         }
         return false;
     }
 
     switch (digest_data->rc) {
         case RSC_DIGEST_RESTART:
             crm_log_xml_debug(digest_data->params_restart, "params:restart");
             force_restart(rsc, task, interval_ms, node);
             return true;
 
         case RSC_DIGEST_ALL:
         case RSC_DIGEST_UNKNOWN:
             // Changes that can potentially be handled by an agent reload
 
             if (interval_ms > 0) {
                /* Recurring actions aren't reloaded per se; they are simply
                 * rescheduled so that the next run uses the new parameters.
                 * The old instance will be cancelled automatically.
                 */
                 crm_log_xml_debug(digest_data->params_all, "params:reschedule");
                 pcmk__reschedule_recurring(rsc, task, interval_ms, node);
 
             } else if (crm_element_value(xml_op,
                                          XML_LRM_ATTR_RESTART_DIGEST) != NULL) {
                 // Agent supports reload, so use it
                 trigger_unfencing(rsc, node,
                                   "Device parameters changed (reload)", NULL,
                                   rsc->cluster);
                 crm_log_xml_debug(digest_data->params_all, "params:reload");
                 schedule_reload(rsc, node);
 
             } else {
                 pe_rsc_trace(rsc,
                              "Restarting %s because agent doesn't support reload",
                              rsc->id);
                 crm_log_xml_debug(digest_data->params_restart,
                                   "params:restart");
                 force_restart(rsc, task, interval_ms, node);
             }
             return true;
 
         default:
             break;
     }
     return false;
 }
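
 /* Illustrative summary (a sketch, not exhaustive): if a hypothetical agent's
  * non-reloadable "port" parameter changes, rsc_action_digest_cmp() reports
  * RSC_DIGEST_RESTART and a full restart is forced using the restart
  * parameters; if only a reloadable parameter changes, the result is
  * RSC_DIGEST_ALL, and either a reload is scheduled (when the operation
  * history carries a restart digest, indicating agent reload support) or a
  * restart is forced otherwise; a recurring action is simply rescheduled so
  * that its next run uses the new parameters.
  */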
 
 /*!
  * \internal
  * \brief Create a list of a resource's action history entries, sorted by call ID
  *
  * \param[in]  rsc_entry    Resource's <lrm_resource> status XML
  * \param[out] start_index  Where to store index of start-like action, if any
  * \param[out] stop_index   Where to store index of stop action, if any
  *
  * \return Newly created list of <lrm_rsc_op> entries sorted by call ID
  */
 static GList *
 rsc_history_as_list(const xmlNode *rsc_entry, int *start_index, int *stop_index)
 {
     GList *ops = NULL;
 
     for (xmlNode *rsc_op = first_named_child(rsc_entry, XML_LRM_TAG_RSC_OP);
          rsc_op != NULL; rsc_op = crm_next_same_xml(rsc_op)) {
         ops = g_list_prepend(ops, rsc_op);
     }
     ops = g_list_sort(ops, sort_op_by_callid);
     calculate_active_ops(ops, start_index, stop_index);
     return ops;
 }
 
 /*!
  * \internal
  * \brief Process a resource's action history from the CIB status
  *
  * Given a resource's action history, if the resource's configuration
  * changed since the actions were done, schedule any actions needed (restart,
  * reload, unfencing, rescheduling recurring actions, clean-up, etc.).
  * (This also cancels recurring actions for maintenance mode, which is not
  * entirely related but convenient to do here.)
  *
  * \param[in]     rsc_entry  Resource's <lrm_resource> status XML
  * \param[in,out] rsc        Resource whose history is being processed
  * \param[in,out] node       Node whose history is being processed
  */
 static void
 process_rsc_history(const xmlNode *rsc_entry, pe_resource_t *rsc,
                     pe_node_t *node)
 {
     int offset = -1;
     int stop_index = 0;
     int start_index = 0;
     GList *sorted_op_list = NULL;
 
     if (pcmk_is_set(rsc->flags, pe_rsc_orphan)) {
         if (pe_rsc_is_anon_clone(pe__const_top_resource(rsc, false))) {
             pe_rsc_trace(rsc,
                          "Skipping configuration check "
                          "for orphaned clone instance %s",
                          rsc->id);
         } else {
             pe_rsc_trace(rsc,
                          "Skipping configuration check and scheduling clean-up "
                          "for orphaned resource %s", rsc->id);
             pcmk__schedule_cleanup(rsc, node, false);
         }
         return;
     }
 
     if (pe_find_node_id(rsc->running_on, node->details->id) == NULL) {
         if (pcmk__rsc_agent_changed(rsc, node, rsc_entry, false)) {
             pcmk__schedule_cleanup(rsc, node, false);
         }
         pe_rsc_trace(rsc,
                      "Skipping configuration check for %s "
                      "because no longer active on %s",
                      rsc->id, pe__node_name(node));
         return;
     }
 
     pe_rsc_trace(rsc, "Checking for configuration changes for %s on %s",
                  rsc->id, pe__node_name(node));
 
     if (pcmk__rsc_agent_changed(rsc, node, rsc_entry, true)) {
         pcmk__schedule_cleanup(rsc, node, false);
     }
 
     sorted_op_list = rsc_history_as_list(rsc_entry, &start_index, &stop_index);
     if (start_index < stop_index) {
         g_list_free(sorted_op_list);
         return; // Resource is stopped
     }
 
     for (GList *iter = sorted_op_list; iter != NULL; iter = iter->next) {
         xmlNode *rsc_op = (xmlNode *) iter->data;
         const char *task = NULL;
         guint interval_ms = 0;
 
         if (++offset < start_index) {
             // Skip actions that happened before a start
             continue;
         }
 
         task = crm_element_value(rsc_op, XML_LRM_ATTR_TASK);
         crm_element_value_ms(rsc_op, XML_LRM_ATTR_INTERVAL_MS, &interval_ms);
 
         if ((interval_ms > 0)
             && (pcmk_is_set(rsc->flags, pe_rsc_maintenance)
                 || node->details->maintenance)) {
             // Maintenance mode cancels recurring operations
             pcmk__schedule_cancel(rsc,
                                   crm_element_value(rsc_op, XML_LRM_ATTR_CALLID),
                                   task, interval_ms, node, "maintenance mode");
 
         } else if ((interval_ms > 0)
                    || pcmk__strcase_any_of(task, RSC_STATUS, RSC_START,
                                            RSC_PROMOTE, RSC_MIGRATED, NULL)) {
             /* If a resource operation failed and the operation's definition
              * has changed since then, clear any fail count so the operation
              * can be retried fresh.
              */
 
             if (pe__bundle_needs_remote_name(rsc)) {
-                /* We haven't allocated resources to nodes yet, so if the
+                /* We haven't assigned resources to nodes yet, so if the
                  * REMOTE_CONTAINER_HACK is used, we may calculate the digest
                  * based on the literal "#uname" value rather than the properly
                  * substituted value. That would mistakenly make the action
                  * definition appear to have been changed. Defer the check until
                  * later in this case.
                  */
                 pe__add_param_check(rsc_op, rsc, node, pe_check_active,
                                     rsc->cluster);
 
             } else if (pcmk__check_action_config(rsc, node, rsc_op)
                        && (pe_get_failcount(node, rsc, NULL, pe_fc_effective,
                                             NULL) != 0)) {
                 pe__clear_failcount(rsc, node, "action definition changed",
                                     rsc->cluster);
             }
         }
     }
     g_list_free(sorted_op_list);
 }
 
 /*!
  * \internal
  * \brief Process a node's action history from the CIB status
  *
  * Given a node's resource history, if the resource's configuration changed
  * since the actions were done, schedule any actions needed (restart,
  * reload, unfencing, rescheduling recurring actions, clean-up, etc.).
  * (This also cancels recurring actions for maintenance mode, which is not
  * entirely related but convenient to do here.)
  *
  * \param[in,out] node      Node whose history is being processed
  * \param[in]     lrm_rscs  Node's <lrm_resources> from CIB status XML
  */
 static void
 process_node_history(pe_node_t *node, const xmlNode *lrm_rscs)
 {
     crm_trace("Processing node history for %s", pe__node_name(node));
     for (const xmlNode *rsc_entry = first_named_child(lrm_rscs,
                                                       XML_LRM_TAG_RESOURCE);
          rsc_entry != NULL; rsc_entry = crm_next_same_xml(rsc_entry)) {
 
         if (xml_has_children(rsc_entry)) {
             GList *result = pcmk__rscs_matching_id(ID(rsc_entry),
                                                    node->details->data_set);
 
             for (GList *iter = result; iter != NULL; iter = iter->next) {
                 pe_resource_t *rsc = (pe_resource_t *) iter->data;
 
                 if (rsc->variant == pe_native) {
                     process_rsc_history(rsc_entry, rsc, node);
                 }
             }
             g_list_free(result);
         }
     }
 }
 
 // XPath to find a node's resource history
 #define XPATH_NODE_HISTORY "/" XML_TAG_CIB "/" XML_CIB_TAG_STATUS             \
                            "/" XML_CIB_TAG_STATE "[@" XML_ATTR_UNAME "='%s']" \
                            "/" XML_CIB_TAG_LRM "/" XML_LRM_TAG_RESOURCES
 
 /*!
  * \internal
  * \brief Process any resource configuration changes in the CIB status
  *
  * Go through all nodes' resource history, and if a resource's configuration
  * changed since its actions were done, schedule any actions needed (restart,
  * reload, unfencing, rescheduling recurring actions, clean-up, etc.).
  * (This also cancels recurring actions for maintenance mode, which is not
  * entirely related but convenient to do here.)
  *
  * \param[in,out] data_set  Cluster working set
  */
 void
 pcmk__handle_rsc_config_changes(pe_working_set_t *data_set)
 {
     crm_trace("Check resource and action configuration for changes");
 
     /* Rather than iterate through the status section, iterate through the nodes
      * and search for the appropriate status subsection for each. This skips
      * orphaned nodes and lets us eliminate some cases before searching the XML.
      */
     for (GList *iter = data_set->nodes; iter != NULL; iter = iter->next) {
         pe_node_t *node = (pe_node_t *) iter->data;
 
         /* Don't bother checking actions for a node that can't run actions ...
          * unless it's in maintenance mode, in which case we still need to
          * cancel any existing recurring monitors.
          */
         if (node->details->maintenance
             || pcmk__node_available(node, false, false)) {
 
             char *xpath = NULL;
             xmlNode *history = NULL;
 
             xpath = crm_strdup_printf(XPATH_NODE_HISTORY, node->details->uname);
             history = get_xpath_object(xpath, data_set->input, LOG_NEVER);
             free(xpath);
 
             process_node_history(node, history);
         }
     }
 }
diff --git a/lib/pacemaker/pcmk_sched_colocation.c b/lib/pacemaker/pcmk_sched_colocation.c
index a262633a92..0c76382736 100644
--- a/lib/pacemaker/pcmk_sched_colocation.c
+++ b/lib/pacemaker/pcmk_sched_colocation.c
@@ -1,1665 +1,1665 @@
 /*
  * Copyright 2004-2023 the Pacemaker project contributors
  *
  * The version control history for this file may have further details.
  *
  * This source code is licensed under the GNU General Public License version 2
  * or later (GPLv2+) WITHOUT ANY WARRANTY.
  */
 
 #include <crm_internal.h>
 
 #include <stdbool.h>
 #include <glib.h>
 
 #include <crm/crm.h>
 #include <crm/pengine/status.h>
 #include <pacemaker-internal.h>
 
 #include "crm/common/util.h"
 #include "crm/common/xml_internal.h"
 #include "crm/msg_xml.h"
 #include "libpacemaker_private.h"
 
 #define EXPAND_CONSTRAINT_IDREF(__set, __rsc, __name) do {                      \
         __rsc = pcmk__find_constraint_resource(data_set->resources, __name);    \
         if (__rsc == NULL) {                                                    \
             pcmk__config_err("%s: No resource found for %s", __set, __name);    \
             return;                                                             \
         }                                                                       \
     } while(0)
 
 // Used to temporarily mark a node as unusable
 #define INFINITY_HACK   (INFINITY * -100)
 
 static gint
 cmp_dependent_priority(gconstpointer a, gconstpointer b)
 {
     const pcmk__colocation_t *rsc_constraint1 = (const pcmk__colocation_t *) a;
     const pcmk__colocation_t *rsc_constraint2 = (const pcmk__colocation_t *) b;
 
     if (a == NULL) {
         return 1;
     }
     if (b == NULL) {
         return -1;
     }
 
     CRM_ASSERT(rsc_constraint1->dependent != NULL);
     CRM_ASSERT(rsc_constraint1->primary != NULL);
     CRM_ASSERT(rsc_constraint2->dependent != NULL);
     CRM_ASSERT(rsc_constraint2->primary != NULL);
 
     if (rsc_constraint1->dependent->priority > rsc_constraint2->dependent->priority) {
         return -1;
     }
 
     if (rsc_constraint1->dependent->priority < rsc_constraint2->dependent->priority) {
         return 1;
     }
 
     /* Process clones before primitives and groups */
     if (rsc_constraint1->dependent->variant > rsc_constraint2->dependent->variant) {
         return -1;
     }
     if (rsc_constraint1->dependent->variant < rsc_constraint2->dependent->variant) {
         return 1;
     }
 
     /* @COMPAT scheduler <2.0.0: Process promotable clones before nonpromotable
      * clones (probably unnecessary, but avoids having to update regression
      * tests)
      */
     if (rsc_constraint1->dependent->variant == pe_clone) {
         if (pcmk_is_set(rsc_constraint1->dependent->flags, pe_rsc_promotable)
             && !pcmk_is_set(rsc_constraint2->dependent->flags, pe_rsc_promotable)) {
             return -1;
         } else if (!pcmk_is_set(rsc_constraint1->dependent->flags, pe_rsc_promotable)
             && pcmk_is_set(rsc_constraint2->dependent->flags, pe_rsc_promotable)) {
             return 1;
         }
     }
 
     return strcmp(rsc_constraint1->dependent->id,
                   rsc_constraint2->dependent->id);
 }
 
 static gint
 cmp_primary_priority(gconstpointer a, gconstpointer b)
 {
     const pcmk__colocation_t *rsc_constraint1 = (const pcmk__colocation_t *) a;
     const pcmk__colocation_t *rsc_constraint2 = (const pcmk__colocation_t *) b;
 
     if (a == NULL) {
         return 1;
     }
     if (b == NULL) {
         return -1;
     }
 
     CRM_ASSERT(rsc_constraint1->dependent != NULL);
     CRM_ASSERT(rsc_constraint1->primary != NULL);
     CRM_ASSERT(rsc_constraint2->dependent != NULL);
     CRM_ASSERT(rsc_constraint2->primary != NULL);
 
     if (rsc_constraint1->primary->priority > rsc_constraint2->primary->priority) {
         return -1;
     }
 
     if (rsc_constraint1->primary->priority < rsc_constraint2->primary->priority) {
         return 1;
     }
 
     /* Process clones before primitives and groups */
     if (rsc_constraint1->primary->variant > rsc_constraint2->primary->variant) {
         return -1;
     } else if (rsc_constraint1->primary->variant < rsc_constraint2->primary->variant) {
         return 1;
     }
 
     /* @COMPAT scheduler <2.0.0: Process promotable clones before nonpromotable
      * clones (probably unnecessary, but avoids having to update regression
      * tests)
      */
     if (rsc_constraint1->primary->variant == pe_clone) {
         if (pcmk_is_set(rsc_constraint1->primary->flags, pe_rsc_promotable)
             && !pcmk_is_set(rsc_constraint2->primary->flags, pe_rsc_promotable)) {
             return -1;
         } else if (!pcmk_is_set(rsc_constraint1->primary->flags, pe_rsc_promotable)
             && pcmk_is_set(rsc_constraint2->primary->flags, pe_rsc_promotable)) {
             return 1;
         }
     }
 
     return strcmp(rsc_constraint1->primary->id, rsc_constraint2->primary->id);
 }
 
 /*!
  * \internal
  * \brief Add a "this with" colocation constraint to a sorted list
  *
  * \param[in,out] list        List of constraints to add \p colocation to
  * \param[in]     colocation  Colocation constraint to add to \p list
  *
  * \note The list will be sorted using cmp_primary_priority().
  */
 void
 pcmk__add_this_with(GList **list, const pcmk__colocation_t *colocation)
 {
     CRM_ASSERT((list != NULL) && (colocation != NULL));
 
     crm_trace("Adding colocation %s (%s with %s%s%s @%d) "
               "to 'this with' list",
               colocation->id, colocation->dependent->id,
               colocation->primary->id,
               (colocation->node_attribute == NULL)? "" : " using ",
               pcmk__s(colocation->node_attribute, ""),
               colocation->score);
     *list = g_list_insert_sorted(*list, (gpointer) colocation,
                                  cmp_primary_priority);
 }
 
 /*!
  * \internal
  * \brief Add a list of "this with" colocation constraints to a list
  *
  * \param[in,out] list      List of constraints to add \p addition to
  * \param[in]     addition  List of colocation constraints to add to \p list
  *
  * \note The lists must be pre-sorted by cmp_primary_priority().
  */
 void
 pcmk__add_this_with_list(GList **list, GList *addition)
 {
     CRM_CHECK((list != NULL), return);
 
     if (*list == NULL) { // Trivial case for efficiency
         crm_trace("Copying %u 'this with' colocations to new list",
                   g_list_length(addition));
         *list = g_list_copy(addition);
     } else {
         while (addition != NULL) {
             pcmk__add_this_with(list, addition->data);
             addition = addition->next;
         }
     }
 }
 
 /*!
  * \internal
  * \brief Add a "with this" colocation constraint to a sorted list
  *
  * \param[in,out] list        List of constraints to add \p colocation to
  * \param[in]     colocation  Colocation constraint to add to \p list
  *
  * \note The list will be sorted using cmp_dependent_priority().
  */
 void
 pcmk__add_with_this(GList **list, const pcmk__colocation_t *colocation)
 {
     CRM_ASSERT((list != NULL) && (colocation != NULL));
 
     crm_trace("Adding colocation %s (%s with %s%s%s @%d) "
               "to 'with this' list",
               colocation->id, colocation->dependent->id,
               colocation->primary->id,
               (colocation->node_attribute == NULL)? "" : " using ",
               pcmk__s(colocation->node_attribute, ""),
               colocation->score);
     *list = g_list_insert_sorted(*list, (gpointer) colocation,
                                  cmp_dependent_priority);
 }
 
 /*!
  * \internal
  * \brief Add a list of "with this" colocation constraints to a list
  *
  * \param[in,out] list      List of constraints to add \p addition to
  * \param[in]     addition  List of colocation constraints to add to \p list
  *
  * \note The lists must be pre-sorted by cmp_dependent_priority().
  */
 void
 pcmk__add_with_this_list(GList **list, GList *addition)
 {
     CRM_CHECK((list != NULL), return);
 
     if (*list == NULL) { // Trivial case for efficiency
         crm_trace("Copying %u 'with this' colocations to new list",
                   g_list_length(addition));
         *list = g_list_copy(addition);
     } else {
         while (addition != NULL) {
             pcmk__add_with_this(list, addition->data);
             addition = addition->next;
         }
     }
 }
 
 /*!
  * \internal
  * \brief Add orderings necessary for an anti-colocation constraint
  *
  * \param[in,out] first_rsc   One resource in an anti-colocation
  * \param[in]     first_role  Anti-colocation role of \p first_rsc
  * \param[in]     then_rsc    Other resource in the anti-colocation
  * \param[in]     then_role   Anti-colocation role of \p then_rsc
  */
 static void
 anti_colocation_order(pe_resource_t *first_rsc, int first_role,
                       pe_resource_t *then_rsc, int then_role)
 {
     const char *first_tasks[] = { NULL, NULL };
     const char *then_tasks[] = { NULL, NULL };
 
     /* Actions to make first_rsc lose first_role */
     if (first_role == RSC_ROLE_PROMOTED) {
         first_tasks[0] = CRMD_ACTION_DEMOTE;
 
     } else {
         first_tasks[0] = CRMD_ACTION_STOP;
 
         if (first_role == RSC_ROLE_UNPROMOTED) {
             first_tasks[1] = CRMD_ACTION_PROMOTE;
         }
     }
 
     /* Actions to make then_rsc gain then_role */
     if (then_role == RSC_ROLE_PROMOTED) {
         then_tasks[0] = CRMD_ACTION_PROMOTE;
 
     } else {
         then_tasks[0] = CRMD_ACTION_START;
 
         if (then_role == RSC_ROLE_UNPROMOTED) {
             then_tasks[1] = CRMD_ACTION_DEMOTE;
         }
     }
 
     for (int first_lpc = 0;
          (first_lpc <= 1) && (first_tasks[first_lpc] != NULL); first_lpc++) {
 
         for (int then_lpc = 0;
              (then_lpc <= 1) && (then_tasks[then_lpc] != NULL); then_lpc++) {
 
             pcmk__order_resource_actions(first_rsc, first_tasks[first_lpc],
                                          then_rsc, then_tasks[then_lpc],
                                          pe_order_anti_colocation);
         }
     }
 }
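
 /* For example (illustrative): anti-colocating a promoted first_rsc
  * (first_role == RSC_ROLE_PROMOTED) with a then_rsc that merely needs to be
  * started (then_role unset) yields a single ordering, first_rsc's demote
  * before then_rsc's start, flagged with pe_order_anti_colocation.
  */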
 
 /*!
  * \internal
  * \brief Add a new colocation constraint to a cluster working set
  *
  * \param[in]     id              XML ID for this constraint
  * \param[in]     node_attr       Colocate by this attribute (NULL for #uname)
  * \param[in]     score           Constraint score
  * \param[in,out] dependent       Resource to be colocated
  * \param[in,out] primary         Resource to colocate \p dependent with
  * \param[in]     dependent_role  Current role of \p dependent
  * \param[in]     primary_role    Current role of \p primary
  * \param[in]     influence       Whether colocation constraint has influence
  * \param[in,out] data_set        Cluster working set to add constraint to
  */
 void
 pcmk__new_colocation(const char *id, const char *node_attr, int score,
                      pe_resource_t *dependent, pe_resource_t *primary,
                      const char *dependent_role, const char *primary_role,
                      bool influence, pe_working_set_t *data_set)
 {
     pcmk__colocation_t *new_con = NULL;
 
     if (score == 0) {
         crm_trace("Ignoring colocation '%s' because score is 0", id);
         return;
     }
     if ((dependent == NULL) || (primary == NULL)) {
         pcmk__config_err("Ignoring colocation '%s' because resource "
                          "does not exist", id);
         return;
     }
 
     new_con = calloc(1, sizeof(pcmk__colocation_t));
     if (new_con == NULL) {
         return;
     }
 
     if (pcmk__str_eq(dependent_role, RSC_ROLE_STARTED_S,
                      pcmk__str_null_matches|pcmk__str_casei)) {
         dependent_role = RSC_ROLE_UNKNOWN_S;
     }
 
     if (pcmk__str_eq(primary_role, RSC_ROLE_STARTED_S,
                      pcmk__str_null_matches|pcmk__str_casei)) {
         primary_role = RSC_ROLE_UNKNOWN_S;
     }
 
     new_con->id = id;
     new_con->dependent = dependent;
     new_con->primary = primary;
     new_con->score = score;
     new_con->dependent_role = text2role(dependent_role);
     new_con->primary_role = text2role(primary_role);
     new_con->node_attribute = node_attr;
     new_con->influence = influence;
 
     if (node_attr == NULL) {
         node_attr = CRM_ATTR_UNAME;
     }
 
     pe_rsc_trace(dependent, "%s ==> %s (%s %d)",
                  dependent->id, primary->id, node_attr, score);
 
     pcmk__add_this_with(&(dependent->rsc_cons), new_con);
     pcmk__add_with_this(&(primary->rsc_cons_lhs), new_con);
 
     data_set->colocation_constraints = g_list_append(data_set->colocation_constraints,
                                                      new_con);
 
     if (score <= -INFINITY) {
         anti_colocation_order(dependent, new_con->dependent_role, primary,
                               new_con->primary_role);
         anti_colocation_order(primary, new_con->primary_role, dependent,
                               new_con->dependent_role);
     }
 }
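
 /* Usage sketch (rsc_A and rsc_B are hypothetical pe_resource_t pointers
  * already unpacked into data_set->resources): a mandatory colocation keeping
  * A on the same node as B could be registered with
  *
  *   pcmk__new_colocation("colocate-A-with-B", NULL, INFINITY,
  *                        rsc_A, rsc_B, NULL, NULL, true, data_set);
  *
  * NULL roles are treated as "Started" above, and influence=true lets A's
  * preferences affect where B is assigned.
  */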
 
 /*!
  * \internal
  * \brief Return the boolean influence corresponding to a configuration value
  *
  * \param[in] coloc_id     Colocation XML ID (for error logging)
  * \param[in] rsc          Resource involved in constraint (for default)
  * \param[in] influence_s  String value of influence option
  *
  * \return true if string evaluates true, false if string evaluates false,
  *         or value of resource's critical option if string is NULL or invalid
  */
 static bool
 unpack_influence(const char *coloc_id, const pe_resource_t *rsc,
                  const char *influence_s)
 {
     if (influence_s != NULL) {
         int influence_i = 0;
 
         if (crm_str_to_boolean(influence_s, &influence_i) < 0) {
             pcmk__config_err("Constraint '%s' has invalid value for "
                              XML_COLOC_ATTR_INFLUENCE " (using default)",
                              coloc_id);
         } else {
             return (influence_i != 0);
         }
     }
     return pcmk_is_set(rsc->flags, pe_rsc_critical);
 }
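
 /* For example, a constraint like (illustrative XML, standard attribute
  * names assumed)
  *   <rsc_colocation id="c1" rsc="A" with-rsc="B" score="100"
  *                   influence="false"/>
  * yields false here, while omitting influence (or giving an invalid value)
  * falls back to the dependent resource's critical flag.
  */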
 
 static void
 unpack_colocation_set(xmlNode *set, int score, const char *coloc_id,
                       const char *influence_s, pe_working_set_t *data_set)
 {
     xmlNode *xml_rsc = NULL;
     pe_resource_t *with = NULL;
     pe_resource_t *resource = NULL;
     const char *set_id = ID(set);
     const char *role = crm_element_value(set, "role");
     const char *ordering = crm_element_value(set, "ordering");
     int local_score = score;
     bool sequential = false;
 
     const char *score_s = crm_element_value(set, XML_RULE_ATTR_SCORE);
 
     if (score_s) {
         local_score = char2score(score_s);
     }
     if (local_score == 0) {
         crm_trace("Ignoring colocation '%s' for set '%s' because score is 0",
                   coloc_id, set_id);
         return;
     }
 
     if (ordering == NULL) {
         ordering = "group";
     }
 
     if (pcmk__xe_get_bool_attr(set, "sequential", &sequential) == pcmk_rc_ok && !sequential) {
         return;
 
     } else if ((local_score > 0)
                && pcmk__str_eq(ordering, "group", pcmk__str_casei)) {
         for (xml_rsc = first_named_child(set, XML_TAG_RESOURCE_REF);
              xml_rsc != NULL; xml_rsc = crm_next_same_xml(xml_rsc)) {
 
             EXPAND_CONSTRAINT_IDREF(set_id, resource, ID(xml_rsc));
             if (with != NULL) {
                 pe_rsc_trace(resource, "Colocating %s with %s", resource->id, with->id);
                 pcmk__new_colocation(set_id, NULL, local_score, resource,
                                      with, role, role,
                                      unpack_influence(coloc_id, resource,
                                                       influence_s), data_set);
             }
             with = resource;
         }
 
     } else if (local_score > 0) {
         pe_resource_t *last = NULL;
 
         for (xml_rsc = first_named_child(set, XML_TAG_RESOURCE_REF);
              xml_rsc != NULL; xml_rsc = crm_next_same_xml(xml_rsc)) {
 
             EXPAND_CONSTRAINT_IDREF(set_id, resource, ID(xml_rsc));
             if (last != NULL) {
                 pe_rsc_trace(resource, "Colocating %s with %s",
                              last->id, resource->id);
                 pcmk__new_colocation(set_id, NULL, local_score, last,
                                      resource, role, role,
                                      unpack_influence(coloc_id, last,
                                                       influence_s), data_set);
             }
 
             last = resource;
         }
 
     } else {
         /* Anti-colocating with every prior resource is the only way to
          * ensure the intuitive result (i.e., that no resource in the set
          * can run with any other resource in the set).
          */
 
         for (xml_rsc = first_named_child(set, XML_TAG_RESOURCE_REF);
              xml_rsc != NULL; xml_rsc = crm_next_same_xml(xml_rsc)) {
 
             xmlNode *xml_rsc_with = NULL;
             bool influence = true;
 
             EXPAND_CONSTRAINT_IDREF(set_id, resource, ID(xml_rsc));
             influence = unpack_influence(coloc_id, resource, influence_s);
 
             for (xml_rsc_with = first_named_child(set, XML_TAG_RESOURCE_REF);
                  xml_rsc_with != NULL;
                  xml_rsc_with = crm_next_same_xml(xml_rsc_with)) {
 
                 if (pcmk__str_eq(resource->id, ID(xml_rsc_with),
                                  pcmk__str_casei)) {
                     break;
                 }
                 EXPAND_CONSTRAINT_IDREF(set_id, with, ID(xml_rsc_with));
                 pe_rsc_trace(resource, "Anti-Colocating %s with %s", resource->id,
                              with->id);
                 pcmk__new_colocation(set_id, NULL, local_score,
                                      resource, with, role, role,
                                      influence, data_set);
             }
         }
     }
 }
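
 /* Illustrative set (standard schema names assumed): with a positive score,
  *   <resource_set id="s1" sequential="true">
  *     <resource_ref id="A"/> <resource_ref id="B"/> <resource_ref id="C"/>
  *   </resource_set>
  * produces the pairwise colocations B-with-A and C-with-B, whereas a
  * negative score anti-colocates each member with every earlier member.
  */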
 
 static void
 colocate_rsc_sets(const char *id, xmlNode *set1, xmlNode *set2, int score,
                   const char *influence_s, pe_working_set_t *data_set)
 {
     xmlNode *xml_rsc = NULL;
     pe_resource_t *rsc_1 = NULL;
     pe_resource_t *rsc_2 = NULL;
 
     const char *role_1 = crm_element_value(set1, "role");
     const char *role_2 = crm_element_value(set2, "role");
 
     int rc = pcmk_rc_ok;
     bool sequential = false;
 
     if (score == 0) {
         crm_trace("Ignoring colocation '%s' between sets because score is 0",
                   id);
         return;
     }
 
     rc = pcmk__xe_get_bool_attr(set1, "sequential", &sequential);
     if (rc != pcmk_rc_ok || sequential) {
         // Get the first member of set1
         xml_rsc = first_named_child(set1, XML_TAG_RESOURCE_REF);
         if (xml_rsc != NULL) {
             EXPAND_CONSTRAINT_IDREF(id, rsc_1, ID(xml_rsc));
         }
     }
 
     rc = pcmk__xe_get_bool_attr(set2, "sequential", &sequential);
     if (rc != pcmk_rc_ok || sequential) {
         // Get the last member of set2
         const char *rid = NULL;
 
         for (xml_rsc = first_named_child(set2, XML_TAG_RESOURCE_REF);
              xml_rsc != NULL; xml_rsc = crm_next_same_xml(xml_rsc)) {
 
             rid = ID(xml_rsc);
         }
         EXPAND_CONSTRAINT_IDREF(id, rsc_2, rid);
     }
 
     if ((rsc_1 != NULL) && (rsc_2 != NULL)) {
         pcmk__new_colocation(id, NULL, score, rsc_1, rsc_2, role_1, role_2,
                              unpack_influence(id, rsc_1, influence_s),
                              data_set);
 
     } else if (rsc_1 != NULL) {
         bool influence = unpack_influence(id, rsc_1, influence_s);
 
         for (xml_rsc = first_named_child(set2, XML_TAG_RESOURCE_REF);
              xml_rsc != NULL; xml_rsc = crm_next_same_xml(xml_rsc)) {
 
             EXPAND_CONSTRAINT_IDREF(id, rsc_2, ID(xml_rsc));
             pcmk__new_colocation(id, NULL, score, rsc_1, rsc_2, role_1,
                                  role_2, influence, data_set);
         }
 
     } else if (rsc_2 != NULL) {
         for (xml_rsc = first_named_child(set1, XML_TAG_RESOURCE_REF);
              xml_rsc != NULL; xml_rsc = crm_next_same_xml(xml_rsc)) {
 
             EXPAND_CONSTRAINT_IDREF(id, rsc_1, ID(xml_rsc));
             pcmk__new_colocation(id, NULL, score, rsc_1, rsc_2, role_1,
                                  role_2,
                                  unpack_influence(id, rsc_1, influence_s),
                                  data_set);
         }
 
     } else {
         for (xml_rsc = first_named_child(set1, XML_TAG_RESOURCE_REF);
              xml_rsc != NULL; xml_rsc = crm_next_same_xml(xml_rsc)) {
 
             xmlNode *xml_rsc_2 = NULL;
             bool influence = true;
 
             EXPAND_CONSTRAINT_IDREF(id, rsc_1, ID(xml_rsc));
             influence = unpack_influence(id, rsc_1, influence_s);
 
             for (xml_rsc_2 = first_named_child(set2, XML_TAG_RESOURCE_REF);
                  xml_rsc_2 != NULL;
                  xml_rsc_2 = crm_next_same_xml(xml_rsc_2)) {
 
                 EXPAND_CONSTRAINT_IDREF(id, rsc_2, ID(xml_rsc_2));
                 pcmk__new_colocation(id, NULL, score, rsc_1, rsc_2,
                                      role_1, role_2, influence,
                                      data_set);
             }
         }
     }
 }
 
 static void
 unpack_simple_colocation(xmlNode *xml_obj, const char *id,
                          const char *influence_s, pe_working_set_t *data_set)
 {
     int score_i = 0;
 
     const char *score = crm_element_value(xml_obj, XML_RULE_ATTR_SCORE);
     const char *dependent_id = crm_element_value(xml_obj,
                                                  XML_COLOC_ATTR_SOURCE);
     const char *primary_id = crm_element_value(xml_obj, XML_COLOC_ATTR_TARGET);
     const char *dependent_role = crm_element_value(xml_obj,
                                                    XML_COLOC_ATTR_SOURCE_ROLE);
     const char *primary_role = crm_element_value(xml_obj,
                                                  XML_COLOC_ATTR_TARGET_ROLE);
     const char *attr = crm_element_value(xml_obj, XML_COLOC_ATTR_NODE_ATTR);
 
     // @COMPAT: Deprecated since 2.1.5
     const char *dependent_instance = crm_element_value(xml_obj,
                                                        XML_COLOC_ATTR_SOURCE_INSTANCE);
     // @COMPAT: Deprecated since 2.1.5
     const char *primary_instance = crm_element_value(xml_obj,
                                                      XML_COLOC_ATTR_TARGET_INSTANCE);
 
     pe_resource_t *dependent = pcmk__find_constraint_resource(data_set->resources,
                                                               dependent_id);
     pe_resource_t *primary = pcmk__find_constraint_resource(data_set->resources,
                                                             primary_id);
 
     if (dependent_instance != NULL) {
         pe_warn_once(pe_wo_coloc_inst,
                      "Support for " XML_COLOC_ATTR_SOURCE_INSTANCE " is "
                      "deprecated and will be removed in a future release.");
     }
 
     if (primary_instance != NULL) {
         pe_warn_once(pe_wo_coloc_inst,
                      "Support for " XML_COLOC_ATTR_TARGET_INSTANCE " is "
                      "deprecated and will be removed in a future release.");
     }
 
     if (dependent == NULL) {
         pcmk__config_err("Ignoring constraint '%s' because resource '%s' "
                          "does not exist", id, dependent_id);
         return;
 
     } else if (primary == NULL) {
         pcmk__config_err("Ignoring constraint '%s' because resource '%s' "
                          "does not exist", id, primary_id);
         return;
 
     } else if ((dependent_instance != NULL) && !pe_rsc_is_clone(dependent)) {
         pcmk__config_err("Ignoring constraint '%s' because resource '%s' "
                          "is not a clone but instance '%s' was requested",
                          id, dependent_id, dependent_instance);
         return;
 
     } else if ((primary_instance != NULL) && !pe_rsc_is_clone(primary)) {
         pcmk__config_err("Ignoring constraint '%s' because resource '%s' "
                          "is not a clone but instance '%s' was requested",
                          id, primary_id, primary_instance);
         return;
     }
 
     if (dependent_instance != NULL) {
         dependent = find_clone_instance(dependent, dependent_instance);
         if (dependent == NULL) {
             pcmk__config_warn("Ignoring constraint '%s' because resource '%s' "
                               "does not have an instance '%s'",
                               id, dependent_id, dependent_instance);
             return;
         }
     }
 
     if (primary_instance != NULL) {
         primary = find_clone_instance(primary, primary_instance);
         if (primary == NULL) {
             pcmk__config_warn("Ignoring constraint '%s' because resource '%s' "
                               "does not have an instance '%s'",
                               "'%s'", id, primary_id, primary_instance);
             return;
         }
     }
 
     if (pcmk__xe_attr_is_true(xml_obj, XML_CONS_ATTR_SYMMETRICAL)) {
         pcmk__config_warn("The colocation constraint '"
                           XML_CONS_ATTR_SYMMETRICAL
                           "' attribute has been removed");
     }
 
     if (score) {
         score_i = char2score(score);
     }
 
     pcmk__new_colocation(id, attr, score_i, dependent, primary,
                          dependent_role, primary_role,
                          unpack_influence(id, dependent, influence_s), data_set);
 }
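
 /* A minimal example of the XML this unpacks (standard attribute names):
  *   <rsc_colocation id="colocate-A-with-B" rsc="A" with-rsc="B"
  *                   score="INFINITY"/>
  * colocates dependent A with primary B. The optional rsc-role and
  * with-rsc-role attributes restrict the roles involved, and node-attribute
  * colocates by a node attribute other than the node name.
  */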
 
 // \return Standard Pacemaker return code
 static int
 unpack_colocation_tags(xmlNode *xml_obj, xmlNode **expanded_xml,
                        pe_working_set_t *data_set)
 {
     const char *id = NULL;
     const char *dependent_id = NULL;
     const char *primary_id = NULL;
     const char *dependent_role = NULL;
     const char *primary_role = NULL;
 
     pe_resource_t *dependent = NULL;
     pe_resource_t *primary = NULL;
 
     pe_tag_t *dependent_tag = NULL;
     pe_tag_t *primary_tag = NULL;
 
     xmlNode *dependent_set = NULL;
     xmlNode *primary_set = NULL;
     bool any_sets = false;
 
     *expanded_xml = NULL;
 
     CRM_CHECK(xml_obj != NULL, return EINVAL);
 
     id = ID(xml_obj);
     if (id == NULL) {
         pcmk__config_err("Ignoring <%s> constraint without " XML_ATTR_ID,
                          crm_element_name(xml_obj));
         return pcmk_rc_unpack_error;
     }
 
     // Check whether there are any resource sets with template or tag references
     *expanded_xml = pcmk__expand_tags_in_sets(xml_obj, data_set);
     if (*expanded_xml != NULL) {
         crm_log_xml_trace(*expanded_xml, "Expanded rsc_colocation");
         return pcmk_rc_ok;
     }
 
     dependent_id = crm_element_value(xml_obj, XML_COLOC_ATTR_SOURCE);
     primary_id = crm_element_value(xml_obj, XML_COLOC_ATTR_TARGET);
     if ((dependent_id == NULL) || (primary_id == NULL)) {
         return pcmk_rc_ok;
     }
 
     if (!pcmk__valid_resource_or_tag(data_set, dependent_id, &dependent,
                                      &dependent_tag)) {
         pcmk__config_err("Ignoring constraint '%s' because '%s' is not a "
                          "valid resource or tag", id, dependent_id);
         return pcmk_rc_unpack_error;
     }
 
     if (!pcmk__valid_resource_or_tag(data_set, primary_id, &primary,
                                      &primary_tag)) {
         pcmk__config_err("Ignoring constraint '%s' because '%s' is not a "
                          "valid resource or tag", id, primary_id);
         return pcmk_rc_unpack_error;
     }
 
     if ((dependent != NULL) && (primary != NULL)) {
         /* Neither side references any template/tag. */
         return pcmk_rc_ok;
     }
 
     if ((dependent_tag != NULL) && (primary_tag != NULL)) {
         // A colocation constraint between two templates/tags makes no sense
         pcmk__config_err("Ignoring constraint '%s' because two templates or "
                          "tags cannot be colocated", id);
         return pcmk_rc_unpack_error;
     }
 
     dependent_role = crm_element_value(xml_obj, XML_COLOC_ATTR_SOURCE_ROLE);
     primary_role = crm_element_value(xml_obj, XML_COLOC_ATTR_TARGET_ROLE);
 
     *expanded_xml = copy_xml(xml_obj);
 
     // Convert template/tag reference in "rsc" into resource_set under constraint
     if (!pcmk__tag_to_set(*expanded_xml, &dependent_set, XML_COLOC_ATTR_SOURCE,
                           true, data_set)) {
         free_xml(*expanded_xml);
         *expanded_xml = NULL;
         return pcmk_rc_unpack_error;
     }
 
     if (dependent_set != NULL) {
         if (dependent_role != NULL) {
             // Move "rsc-role" into converted resource_set as "role"
             crm_xml_add(dependent_set, "role", dependent_role);
             xml_remove_prop(*expanded_xml, XML_COLOC_ATTR_SOURCE_ROLE);
         }
         any_sets = true;
     }
 
     // Convert template/tag reference in "with-rsc" into resource_set under constraint
     if (!pcmk__tag_to_set(*expanded_xml, &primary_set, XML_COLOC_ATTR_TARGET,
                           true, data_set)) {
         free_xml(*expanded_xml);
         *expanded_xml = NULL;
         return pcmk_rc_unpack_error;
     }
 
     if (primary_set != NULL) {
         if (primary_role != NULL) {
             // Move "with-rsc-role" into converted resource_set as "role"
             crm_xml_add(primary_set, "role", primary_role);
             xml_remove_prop(*expanded_xml, XML_COLOC_ATTR_TARGET_ROLE);
         }
         any_sets = true;
     }
 
     if (any_sets) {
         crm_log_xml_trace(*expanded_xml, "Expanded rsc_colocation");
     } else {
         free_xml(*expanded_xml);
         *expanded_xml = NULL;
     }
 
     return pcmk_rc_ok;
 }
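
 /* Sketch of the expansion (hypothetical tag "web" containing resources A
  * and B): a constraint such as
  *   <rsc_colocation id="c1" rsc="web" with-rsc="db" score="INFINITY"/>
  * is rewritten so that the "web" reference becomes a resource_set of A and
  * B under the constraint, after which the set-based unpacking applies.
  */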
 
 /*!
  * \internal
  * \brief Parse a colocation constraint from XML into a cluster working set
  *
  * \param[in,out] xml_obj   Colocation constraint XML to unpack
  * \param[in,out] data_set  Cluster working set to add constraint to
  */
 void
 pcmk__unpack_colocation(xmlNode *xml_obj, pe_working_set_t *data_set)
 {
     int score_i = 0;
     xmlNode *set = NULL;
     xmlNode *last = NULL;
 
     xmlNode *orig_xml = NULL;
     xmlNode *expanded_xml = NULL;
 
     const char *id = crm_element_value(xml_obj, XML_ATTR_ID);
     const char *score = crm_element_value(xml_obj, XML_RULE_ATTR_SCORE);
     const char *influence_s = crm_element_value(xml_obj,
                                                 XML_COLOC_ATTR_INFLUENCE);
 
     if (score) {
         score_i = char2score(score);
     }
 
     if (unpack_colocation_tags(xml_obj, &expanded_xml,
                                data_set) != pcmk_rc_ok) {
         return;
     }
     if (expanded_xml) {
         orig_xml = xml_obj;
         xml_obj = expanded_xml;
     }
 
     for (set = first_named_child(xml_obj, XML_CONS_TAG_RSC_SET); set != NULL;
          set = crm_next_same_xml(set)) {
 
         set = expand_idref(set, data_set->input);
         if (set == NULL) { // Configuration error, message already logged
             if (expanded_xml != NULL) {
                 free_xml(expanded_xml);
             }
             return;
         }
 
         unpack_colocation_set(set, score_i, id, influence_s, data_set);
 
         if (last != NULL) {
             colocate_rsc_sets(id, last, set, score_i, influence_s, data_set);
         }
         last = set;
     }
 
     if (expanded_xml) {
         free_xml(expanded_xml);
         xml_obj = orig_xml;
     }
 
     if (last == NULL) {
         unpack_simple_colocation(xml_obj, id, influence_s, data_set);
     }
 }
 
 /*!
  * \internal
  * \brief Make actions of a given type unrunnable for a given resource
  *
  * \param[in,out] rsc     Resource whose actions should be blocked
  * \param[in]     task    Name of action to block
  * \param[in]     reason  Resource whose unrunnable action is causing the block
  */
 static void
 mark_action_blocked(pe_resource_t *rsc, const char *task,
                     const pe_resource_t *reason)
 {
     char *reason_text = crm_strdup_printf("colocation with %s", reason->id);
 
     for (GList *gIter = rsc->actions; gIter != NULL; gIter = gIter->next) {
         pe_action_t *action = (pe_action_t *) gIter->data;
 
         if (pcmk_is_set(action->flags, pe_action_runnable)
             && pcmk__str_eq(action->task, task, pcmk__str_casei)) {
 
             pe__clear_action_flags(action, pe_action_runnable);
             pe_action_set_reason(action, reason_text, false);
             pcmk__block_colocation_dependents(action, rsc->cluster);
             pcmk__update_action_for_orderings(action, rsc->cluster);
         }
     }
 
     // If parent resource can't perform an action, neither can any children
     for (GList *iter = rsc->children; iter != NULL; iter = iter->next) {
         mark_action_blocked((pe_resource_t *) (iter->data), task, reason);
     }
     free(reason_text);
 }
 
 /*!
  * \internal
  * \brief If an action is unrunnable, block any relevant dependent actions
  *
  * If a given action is an unrunnable start or promote, block the start or
  * promote actions of resources colocated with it, as appropriate to the
  * colocations' configured roles.
  *
  * \param[in,out] action    Action to check
  * \param[in]     data_set  Cluster working set (ignored)
  */
 void
 pcmk__block_colocation_dependents(pe_action_t *action,
                                   pe_working_set_t *data_set)
 {
     GList *gIter = NULL;
     GList *colocations = NULL;
     pe_resource_t *rsc = NULL;
     bool is_start = false;
 
     if (pcmk_is_set(action->flags, pe_action_runnable)) {
         return; // Only unrunnable actions block dependents
     }
 
     is_start = pcmk__str_eq(action->task, RSC_START, pcmk__str_none);
     if (!is_start && !pcmk__str_eq(action->task, RSC_PROMOTE, pcmk__str_none)) {
         return; // Only unrunnable starts and promotes block dependents
     }
 
     CRM_ASSERT(action->rsc != NULL); // Start and promote are resource actions
 
     /* If this resource is part of a collective resource, dependents are blocked
      * only if all instances of the collective are unrunnable, so check the
      * collective resource.
      */
     rsc = uber_parent(action->rsc);
     if (rsc->parent != NULL) {
         rsc = rsc->parent; // Bundle
     }
 
     // Colocation fails only if entire primary can't reach desired role
     for (gIter = rsc->children; gIter != NULL; gIter = gIter->next) {
         pe_resource_t *child = (pe_resource_t *) gIter->data;
         pe_action_t *child_action = find_first_action(child->actions, NULL,
                                                       action->task, NULL);
 
         if ((child_action == NULL)
             || pcmk_is_set(child_action->flags, pe_action_runnable)) {
             crm_trace("Not blocking %s colocation dependents because "
                       "at least %s has runnable %s",
                       rsc->id, child->id, action->task);
             return; // At least one child can reach desired role
         }
     }
 
     crm_trace("Blocking %s colocation dependents due to unrunnable %s %s",
               rsc->id, action->rsc->id, action->task);
 
     // Check each colocation where this resource is primary
     colocations = pcmk__with_this_colocations(rsc);
     for (gIter = colocations; gIter != NULL; gIter = gIter->next) {
         pcmk__colocation_t *colocation = (pcmk__colocation_t *) gIter->data;
 
         if (colocation->score < INFINITY) {
             continue; // Only mandatory colocations block dependents
         }
 
         /* If the primary can't start, the dependent can't reach its colocated
          * role, regardless of what the primary or dependent colocation role is.
          *
          * If the primary can't be promoted, the dependent can't reach its
          * colocated role if the primary's colocation role is promoted.
          */
         if (!is_start && (colocation->primary_role != RSC_ROLE_PROMOTED)) {
             continue;
         }
 
         // Block the dependent from reaching its colocated role
         if (colocation->dependent_role == RSC_ROLE_PROMOTED) {
             mark_action_blocked(colocation->dependent, RSC_PROMOTE,
                                 action->rsc);
         } else {
             mark_action_blocked(colocation->dependent, RSC_START, action->rsc);
         }
     }
     g_list_free(colocations);
 }
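
 /* For instance (illustrative): if every instance of a cloned primary has an
  * unrunnable start, then any resource mandatorily colocated with it has its
  * own start (or promote, when the dependent role is promoted) marked
  * unrunnable as well, and the block propagates to that resource's children.
  */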
 
 /*!
  * \internal
  * \brief Determine how a colocation constraint should affect a resource
  *
  * Colocation constraints have different effects at different points in the
  * scheduler sequence. Initially, they affect a resource's location; once that
  * is determined, they can affect a promotable clone instance's role; after
  * both are determined, the constraints no longer matter.
  * Given a specific colocation constraint, check what has been done so far to
  * determine what should be affected at the current point in the scheduler.
  *
  * \param[in] dependent   Dependent resource in colocation
  * \param[in] primary     Primary resource in colocation
  * \param[in] colocation  Colocation constraint
- * \param[in] preview     If true, pretend resources have already been allocated
+ * \param[in] preview     If true, pretend resources have already been assigned
  *
  * \return How colocation constraint should be applied at this point
  */
 enum pcmk__coloc_affects
 pcmk__colocation_affects(const pe_resource_t *dependent,
                          const pe_resource_t *primary,
                          const pcmk__colocation_t *colocation, bool preview)
 {
     if (!preview && pcmk_is_set(primary->flags, pe_rsc_provisional)) {
-        // Primary resource has not been allocated yet, so we can't do anything
+        // Primary resource has not been assigned yet, so we can't do anything
         return pcmk__coloc_affects_nothing;
     }
 
     if ((colocation->dependent_role >= RSC_ROLE_UNPROMOTED)
         && (dependent->parent != NULL)
         && pcmk_is_set(dependent->parent->flags, pe_rsc_promotable)
         && !pcmk_is_set(dependent->flags, pe_rsc_provisional)) {
 
         /* This is a colocation by role, and the dependent is a promotable clone
-         * that has already been allocated, so the colocation should now affect
+         * that has already been assigned, so the colocation should now affect
          * the role.
          */
         return pcmk__coloc_affects_role;
     }
 
     if (!preview && !pcmk_is_set(dependent->flags, pe_rsc_provisional)) {
-        /* The dependent resource has already been through allocation, so the
+        /* The dependent resource has already been through assignment, so the
          * constraint no longer has any effect. Log an error if a mandatory
          * colocation constraint has been violated.
          */
 
         const pe_node_t *primary_node = primary->allocated_to;
 
         if (dependent->allocated_to == NULL) {
             crm_trace("Skipping colocation '%s': %s will not run anywhere",
                       colocation->id, dependent->id);
 
         } else if (colocation->score >= INFINITY) {
             // Dependent resource must colocate with primary resource
 
             if ((primary_node == NULL) ||
                 (primary_node->details != dependent->allocated_to->details)) {
                 crm_err("%s must be colocated with %s but is not (%s vs. %s)",
                         dependent->id, primary->id,
                         pe__node_name(dependent->allocated_to),
                         pe__node_name(primary_node));
             }
 
         } else if (colocation->score <= -CRM_SCORE_INFINITY) {
             // Dependent resource must anti-colocate with primary resource
 
             if ((primary_node != NULL) &&
                 (dependent->allocated_to->details == primary_node->details)) {
-                crm_err("%s and %s must be anti-colocated but are allocated "
+                crm_err("%s and %s must be anti-colocated but are assigned "
                         "to the same node (%s)",
                         dependent->id, primary->id, pe__node_name(primary_node));
             }
         }
         return pcmk__coloc_affects_nothing;
     }
 
     if ((colocation->score > 0)
         && (colocation->dependent_role != RSC_ROLE_UNKNOWN)
         && (colocation->dependent_role != dependent->next_role)) {
 
         crm_trace("Skipping colocation '%s': dependent limited to %s role "
                   "but %s next role is %s",
                   colocation->id, role2text(colocation->dependent_role),
                   dependent->id, role2text(dependent->next_role));
         return pcmk__coloc_affects_nothing;
     }
 
     if ((colocation->score > 0)
         && (colocation->primary_role != RSC_ROLE_UNKNOWN)
         && (colocation->primary_role != primary->next_role)) {
 
         crm_trace("Skipping colocation '%s': primary limited to %s role "
                   "but %s next role is %s",
                   colocation->id, role2text(colocation->primary_role),
                   primary->id, role2text(primary->next_role));
         return pcmk__coloc_affects_nothing;
     }
 
     if ((colocation->score < 0)
         && (colocation->dependent_role != RSC_ROLE_UNKNOWN)
         && (colocation->dependent_role == dependent->next_role)) {
         crm_trace("Skipping anti-colocation '%s': dependent role %s matches",
                   colocation->id, role2text(colocation->dependent_role));
         return pcmk__coloc_affects_nothing;
     }
 
     if ((colocation->score < 0)
         && (colocation->primary_role != RSC_ROLE_UNKNOWN)
         && (colocation->primary_role == primary->next_role)) {
         crm_trace("Skipping anti-colocation '%s': primary role %s matches",
                   colocation->id, role2text(colocation->primary_role));
         return pcmk__coloc_affects_nothing;
     }
 
     return pcmk__coloc_affects_location;
 }
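
 /* Illustrative timeline: while the primary is still provisional, the result
  * is pcmk__coloc_affects_nothing; a role-restricted colocation whose
  * dependent is an already-assigned promotable clone instance yields
  * pcmk__coloc_affects_role; once the dependent itself has been assigned,
  * nothing remains to affect; otherwise the constraint affects location.
  */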
 
 /*!
  * \internal
- * \brief Apply colocation to dependent for allocation purposes
+ * \brief Apply colocation to dependent for assignment purposes
  *
  * Update the allowed node weights of the dependent resource in a colocation,
- * for the purposes of allocating it to a node
+ * for the purposes of assigning it to a node.
  *
  * \param[in,out] dependent   Dependent resource in colocation
  * \param[in]     primary     Primary resource in colocation
  * \param[in]     colocation  Colocation constraint
  */
 void
 pcmk__apply_coloc_to_weights(pe_resource_t *dependent,
                              const pe_resource_t *primary,
                              const pcmk__colocation_t *colocation)
 {
     const char *attribute = CRM_ATTR_ID;
     const char *value = NULL;
     GHashTable *work = NULL;
     GHashTableIter iter;
     pe_node_t *node = NULL;
 
     if (colocation->node_attribute != NULL) {
         attribute = colocation->node_attribute;
     }
 
     if (primary->allocated_to != NULL) {
         value = pe_node_attribute_raw(primary->allocated_to, attribute);
 
     } else if (colocation->score < 0) {
         // Nothing to do (anti-colocation with something that is not running)
         return;
     }
 
     work = pcmk__copy_node_table(dependent->allowed_nodes);
 
     g_hash_table_iter_init(&iter, work);
     while (g_hash_table_iter_next(&iter, NULL, (void **)&node)) {
         if (primary->allocated_to == NULL) {
             node->weight = pcmk__add_scores(-colocation->score, node->weight);
             pe_rsc_trace(dependent,
                          "Applied %s to %s score on %s (now %s after "
                          "subtracting %s because primary %s inactive)",
                          colocation->id, dependent->id, pe__node_name(node),
                          pcmk_readable_score(node->weight),
                          pcmk_readable_score(colocation->score), primary->id);
 
         } else if (pcmk__str_eq(pe_node_attribute_raw(node, attribute), value,
                                 pcmk__str_casei)) {
             /* Add colocation score only if optional (or minus infinity). A
              * mandatory colocation is a requirement rather than a preference,
              * so we don't need to consider it for relative assignment purposes.
              * The resource will simply be forbidden from running on the node if
              * the primary isn't active there (via the condition above).
              */
             if (colocation->score < CRM_SCORE_INFINITY) {
                 node->weight = pcmk__add_scores(colocation->score,
                                                 node->weight);
                 pe_rsc_trace(dependent,
                              "Applied %s to %s score on %s (now %s after "
                              "adding %s)",
                              colocation->id, dependent->id, pe__node_name(node),
                              pcmk_readable_score(node->weight),
                              pcmk_readable_score(colocation->score));
             }
 
         } else if (colocation->score >= CRM_SCORE_INFINITY) {
             /* Only mandatory colocations are relevant when the colocation
              * attribute doesn't match, because an attribute not matching is not
              * a negative preference -- the colocation is simply relevant only
              * where it matches.
              */
             node->weight = -CRM_SCORE_INFINITY;
             pe_rsc_trace(dependent,
                          "Banned %s from %s because colocation %s attribute %s "
                          "does not match",
                          dependent->id, pe__node_name(node), colocation->id,
                          attribute);
         }
     }
 
     if ((colocation->score <= -INFINITY) || (colocation->score >= INFINITY)
         || pcmk__any_node_available(work)) {
 
         g_hash_table_destroy(dependent->allowed_nodes);
         dependent->allowed_nodes = work;
         work = NULL;
 
     } else {
         pe_rsc_info(dependent,
                     "%s: Rolling back scores from %s (no available nodes)",
                     dependent->id, primary->id);
     }
 
     if (work != NULL) {
         g_hash_table_destroy(work);
     }
 }
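
 /* Worked example (hypothetical scores): for an optional colocation with
  * score 100, a node whose attribute value matches the primary's gains 100
  * (e.g., 50 -> 150); if the primary is inactive, every node instead loses
  * 100; and for a mandatory (INFINITY) colocation, any node whose attribute
  * value does not match is banned with -INFINITY.
  */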
 
 /*!
  * \internal
  * \brief Apply colocation to dependent for role purposes
  *
  * Update the priority of the dependent resource in a colocation, for the
  * purposes of selecting its role
  *
  * \param[in,out] dependent   Dependent resource in colocation
  * \param[in]     primary     Primary resource in colocation
  * \param[in]     colocation  Colocation constraint
  */
 void
 pcmk__apply_coloc_to_priority(pe_resource_t *dependent,
                               const pe_resource_t *primary,
                               const pcmk__colocation_t *colocation)
 {
     const char *dependent_value = NULL;
     const char *primary_value = NULL;
     const char *attribute = CRM_ATTR_ID;
     int score_multiplier = 1;
 
     if ((primary->allocated_to == NULL) || (dependent->allocated_to == NULL)) {
         return;
     }
 
     if (colocation->node_attribute != NULL) {
         attribute = colocation->node_attribute;
     }
 
     dependent_value = pe_node_attribute_raw(dependent->allocated_to, attribute);
     primary_value = pe_node_attribute_raw(primary->allocated_to, attribute);
 
     if (!pcmk__str_eq(dependent_value, primary_value, pcmk__str_casei)) {
         if ((colocation->score == INFINITY)
             && (colocation->dependent_role == RSC_ROLE_PROMOTED)) {
             dependent->priority = -INFINITY;
         }
         return;
     }
 
     if ((colocation->primary_role != RSC_ROLE_UNKNOWN)
         && (colocation->primary_role != primary->next_role)) {
         return;
     }
 
     if (colocation->dependent_role == RSC_ROLE_UNPROMOTED) {
         score_multiplier = -1;
     }
 
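     // An unpromoted-role dependent has the score subtracted rather than added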
     dependent->priority = pcmk__add_scores(score_multiplier * colocation->score,
                                            dependent->priority);
     pe_rsc_trace(dependent,
                  "Applied %s to %s promotion priority (now %s after %s %s)",
                  colocation->id, dependent->id,
                  pcmk_readable_score(dependent->priority),
                  ((score_multiplier == 1)? "adding" : "subtracting"),
                  pcmk_readable_score(colocation->score));
 }
 
 /*!
  * \internal
  * \brief Find score of highest-scored node that matches colocation attribute
  *
  * \param[in] rsc    Resource whose allowed nodes should be searched
  * \param[in] attr   Colocation attribute name (must not be NULL)
  * \param[in] value  Colocation attribute value to require
  *
  * \return Score of the highest-scored allowed node that matches \p attr
  *         and \p value (or -INFINITY if no allowed node matches)
  */
 static int
 best_node_score_matching_attr(const pe_resource_t *rsc, const char *attr,
                               const char *value)
 {
     GHashTableIter iter;
     pe_node_t *node = NULL;
     int best_score = -INFINITY;
     const char *best_node = NULL;
 
     // Find best allowed node with matching attribute
     g_hash_table_iter_init(&iter, rsc->allowed_nodes);
     while (g_hash_table_iter_next(&iter, NULL, (void **) &node)) {
 
         if ((node->weight > best_score) && pcmk__node_available(node, false, false)
             && pcmk__str_eq(value, pe_node_attribute_raw(node, attr), pcmk__str_casei)) {
 
             best_score = node->weight;
             best_node = node->details->uname;
         }
     }
 
     if (!pcmk__str_eq(attr, CRM_ATTR_UNAME, pcmk__str_casei)) {
         if (best_node == NULL) {
             crm_info("No allowed node for %s matches node attribute %s=%s",
                      rsc->id, attr, value);
         } else {
             crm_info("Allowed node %s for %s had best score (%d) "
                      "of those matching node attribute %s=%s",
                      best_node, rsc->id, best_score, attr, value);
         }
     }
     return best_score;
 }
 
 /*!
  * \internal
  * \brief Check whether a resource is allowed only on a single node
  *
  * \param[in] rsc   Resource to check
  *
  * \return \c true if \p rsc is allowed only on one node, otherwise \c false
  */
 static bool
 allowed_on_one(const pe_resource_t *rsc)
 {
     GHashTableIter iter;
     pe_node_t *allowed_node = NULL;
     int allowed_nodes = 0;
 
     g_hash_table_iter_init(&iter, rsc->allowed_nodes);
     while (g_hash_table_iter_next(&iter, NULL, (gpointer *) &allowed_node)) {
         if ((allowed_node->weight >= 0) && (++allowed_nodes > 1)) {
             pe_rsc_trace(rsc, "%s is allowed on multiple nodes", rsc->id);
             return false;
         }
     }
     pe_rsc_trace(rsc, "%s is allowed %s", rsc->id,
                  ((allowed_nodes == 1)? "on a single node" : "nowhere"));
     return (allowed_nodes == 1);
 }
 
 /*!
  * \internal
- * \brief Add resource's colocation matches to current node allocation scores
+ * \brief Add resource's colocation matches to current node assignment scores
  *
  * For each node in a given table, if any of a given resource's allowed nodes
  * have a matching value for the colocation attribute, add the highest of those
  * nodes' scores to the node's score.
  *
  * \param[in,out] nodes          Table of nodes with assignment scores so far
  * \param[in]     rsc            Resource whose allowed nodes should be compared
  * \param[in]     colocation     Original colocation constraint (used to get
  *                               configured primary resource's stickiness, and
  *                               to get colocation node attribute; pass NULL to
  *                               ignore stickiness and use default attribute)
  * \param[in]     factor         Factor by which to multiply scores being added
  * \param[in]     only_positive  Whether to add only positive scores
  */
 static void
 add_node_scores_matching_attr(GHashTable *nodes, const pe_resource_t *rsc,
                               pcmk__colocation_t *colocation, float factor,
                               bool only_positive)
 {
     GHashTableIter iter;
     pe_node_t *node = NULL;
     const char *attr = CRM_ATTR_UNAME;
 
     if ((colocation != NULL) && (colocation->node_attribute != NULL)) {
         attr = colocation->node_attribute;
     }
 
     // Iterate through each node
     g_hash_table_iter_init(&iter, nodes);
     while (g_hash_table_iter_next(&iter, NULL, (void **)&node)) {
         float weight_f = 0;
         int weight = 0;
         int score = 0;
         int new_score = 0;
         const char *value = pe_node_attribute_raw(node, attr);
 
         score = best_node_score_matching_attr(rsc, attr, value);
 
         if ((factor < 0) && (score < 0)) {
             /* If the dependent is anti-colocated, we generally don't want the
              * primary to prefer nodes that the dependent avoids. That could
              * lead to unnecessary shuffling of the primary when the dependent
              * hits its migration threshold somewhere, for example.
              *
              * However, there are cases when it is desirable. If the dependent
              * can't run anywhere but where the primary is, it would be
              * worthwhile to move the primary for the sake of keeping the
              * dependent active.
              *
              * We can't know that exactly at this point since we don't know
              * where the primary will be assigned, but we can limit considering
              * the preference to when the dependent is allowed only on one node.
              * This is less than ideal for multiple reasons:
              *
              * - the dependent could be allowed on more than one node but have
              *   anti-colocation primaries on each;
              * - the dependent could be a clone or bundle with multiple
              *   instances, and the dependent as a whole is allowed on multiple
              *   nodes but some instance still can't run;
              * - the dependent has considered node-specific criteria such as
              *   location constraints and stickiness by this point, but might
              *   have other factors that end up disallowing a node;
              *
              * but the alternative is making the primary move when it doesn't
              * need to.
              *
              * We also consider the primary's stickiness and influence, so the
              * user has some say in the matter. (This is the configured primary,
              * not a particular instance of the primary, but that doesn't matter
              * unless stickiness uses a rule to vary by node, and that seems
              * acceptable to ignore.)
              */
             if ((colocation == NULL)
                 || (colocation->primary->stickiness >= -score)
                 || !pcmk__colocation_has_influence(colocation, NULL)
                 || !allowed_on_one(colocation->dependent)) {
                 crm_trace("%s: Filtering %d + %f * %d "
                           "(double negative disallowed)",
                           pe__node_name(node), node->weight, factor, score);
                 continue;
             }
         }
 
         if (node->weight == INFINITY_HACK) {
             crm_trace("%s: Filtering %d + %f * %d (node was marked unusable)",
                       pe__node_name(node), node->weight, factor, score);
             continue;
         }
 
         weight_f = factor * score;
 
         // Round the number; see http://c-faq.com/fp/round.html
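         // (e.g. 0.5 * 3 = 1.5 rounds to 2; -1.5 rounds to -2, away from zero)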
         weight = (int) ((weight_f < 0)? (weight_f - 0.5) : (weight_f + 0.5));
 
         /* Small factors can obliterate the small scores that are often actually
          * used in configurations. If the score and factor are nonzero, ensure
          * that the result is nonzero as well.
          */
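         /* Hypothetical example: factor 0.001 and score 100 give 0.1, which
          * rounds to 0 above and is bumped back to 1 here
          */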
         if ((weight == 0) && (score != 0)) {
             if (factor > 0.0) {
                 weight = 1;
             } else if (factor < 0.0) {
                 weight = -1;
             }
         }
 
         new_score = pcmk__add_scores(weight, node->weight);
 
         if (only_positive && (new_score < 0) && (node->weight > 0)) {
             crm_trace("%s: Filtering %d + %f * %d = %d "
                       "(negative disallowed, marking node unusable)",
                       pe__node_name(node), node->weight, factor, score,
                       new_score);
             node->weight = INFINITY_HACK;
             continue;
         }
 
         if (only_positive && (new_score < 0) && (node->weight == 0)) {
             crm_trace("%s: Filtering %d + %f * %d = %d (negative disallowed)",
                       pe__node_name(node), node->weight, factor, score,
                       new_score);
             continue;
         }
 
         crm_trace("%s: %d + %f * %d = %d", pe__node_name(node),
                   node->weight, factor, score, new_score);
         node->weight = new_score;
     }
 }
 
 /*!
  * \internal
  * \brief Update nodes with scores of colocated resources' nodes
  *
  * Given a table of nodes and a resource, update the nodes' scores with the
  * scores of the best nodes matching the attribute used for each of the
  * resource's relevant colocations.
  *
  * \param[in,out] rsc         Resource to check colocations for
  * \param[in]     log_id      Resource ID for logs (if NULL, use \p rsc ID)
  * \param[in,out] nodes       Nodes to update (set initial contents to NULL
  *                            to copy \p rsc's allowed nodes)
  * \param[in]     colocation  Original colocation constraint (used to get
  *                            configured primary resource's stickiness, and
  *                            to get colocation node attribute; if NULL,
  *                            \p rsc's own matching node scores will not be
  *                            added, and \p *nodes must be NULL as well)
  * \param[in]     factor      Incorporate scores multiplied by this factor
  * \param[in]     flags       Bitmask of enum pcmk__coloc_select values
  *
  * \note NULL *nodes, NULL colocation, and the pcmk__coloc_select_this_with
  *       flag are used together (and only by cmp_resources()).
  * \note The caller remains responsible for freeing \p *nodes.
  */
 void
 pcmk__add_colocated_node_scores(pe_resource_t *rsc, const char *log_id,
                                 GHashTable **nodes,
                                 pcmk__colocation_t *colocation,
                                 float factor, uint32_t flags)
 {
     GHashTable *work = NULL;
 
     CRM_ASSERT((rsc != NULL) && (nodes != NULL)
                && ((colocation != NULL) || (*nodes == NULL)));
 
     if (log_id == NULL) {
         log_id = rsc->id;
     }
 
     // Avoid infinite recursion
     if (pcmk_is_set(rsc->flags, pe_rsc_merging)) {
         pe_rsc_info(rsc, "%s: Breaking dependency loop at %s",
                     log_id, rsc->id);
         return;
     }
     pe__set_resource_flags(rsc, pe_rsc_merging);
 
     if (*nodes == NULL) {
         work = pcmk__copy_node_table(rsc->allowed_nodes);
     } else {
         pe_rsc_trace(rsc, "%s: Merging scores from %s (at %.6f)",
                      log_id, rsc->id, factor);
         work = pcmk__copy_node_table(*nodes);
         add_node_scores_matching_attr(work, rsc, colocation, factor,
                                       pcmk_is_set(flags,
                                                   pcmk__coloc_select_nonnegative));
     }
 
     if (work == NULL) {
         pe__clear_resource_flags(rsc, pe_rsc_merging);
         return;
     }
 
     if (pcmk__any_node_available(work)) {
         GList *colocations = NULL;
 
         if (pcmk_is_set(flags, pcmk__coloc_select_this_with)) {
             colocations = pcmk__this_with_colocations(rsc);
             pe_rsc_trace(rsc,
                          "Checking additional %d optional '%s with' constraints",
                          g_list_length(colocations), rsc->id);
         } else {
             colocations = pcmk__with_this_colocations(rsc);
             pe_rsc_trace(rsc,
                          "Checking additional %d optional 'with %s' constraints",
                          g_list_length(colocations), rsc->id);
         }
         flags |= pcmk__coloc_select_active;
 
         for (GList *iter = colocations; iter != NULL; iter = iter->next) {
             pcmk__colocation_t *constraint = (pcmk__colocation_t *) iter->data;
 
             pe_resource_t *other = NULL;
             float other_factor = factor * constraint->score / (float) INFINITY;
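             // (e.g. a hypothetical score of 500000, half of INFINITY, halves the factor)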
 
             if (pcmk_is_set(flags, pcmk__coloc_select_this_with)) {
                 other = constraint->primary;
             } else if (!pcmk__colocation_has_influence(constraint, NULL)) {
                 continue;
             } else {
                 other = constraint->dependent;
             }
 
             pe_rsc_trace(rsc, "Optionally merging score of '%s' constraint (%s with %s)",
                          constraint->id, constraint->dependent->id,
                          constraint->primary->id);
             other->cmds->add_colocated_node_scores(other, log_id, &work,
                                                    constraint,
                                                    other_factor, flags);
             pe__show_node_weights(true, NULL, log_id, work, rsc->cluster);
         }
         g_list_free(colocations);
 
     } else if (pcmk_is_set(flags, pcmk__coloc_select_active)) {
         pe_rsc_info(rsc, "%s: Rolling back optional scores from %s",
                     log_id, rsc->id);
         g_hash_table_destroy(work);
         pe__clear_resource_flags(rsc, pe_rsc_merging);
         return;
     }
 
     if (pcmk_is_set(flags, pcmk__coloc_select_nonnegative)) {
         pe_node_t *node = NULL;
         GHashTableIter iter;
 
         g_hash_table_iter_init(&iter, work);
         while (g_hash_table_iter_next(&iter, NULL, (void **)&node)) {
             if (node->weight == INFINITY_HACK) {
                 node->weight = 1;
             }
         }
     }
 
     if (*nodes != NULL) {
         g_hash_table_destroy(*nodes);
     }
     *nodes = work;
 
     pe__clear_resource_flags(rsc, pe_rsc_merging);
 }
 
 /*!
  * \internal
  * \brief Apply a "with this" colocation to a resource's allowed node scores
  *
  * \param[in,out] data       Colocation to apply
  * \param[in,out] user_data  Resource being assigned
  */
 void
 pcmk__add_dependent_scores(gpointer data, gpointer user_data)
 {
     pcmk__colocation_t *colocation = (pcmk__colocation_t *) data;
     pe_resource_t *rsc = (pe_resource_t *) user_data;
 
     pe_resource_t *other = colocation->dependent;
     const float factor = colocation->score / (float) INFINITY;
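     // Scale by colocation strength: a score of INFINITY yields a factor of 1.0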
     uint32_t flags = pcmk__coloc_select_active;
 
     if (!pcmk__colocation_has_influence(colocation, NULL)) {
         return;
     }
     if (rsc->variant == pe_clone) {
         flags |= pcmk__coloc_select_nonnegative;
     }
     pe_rsc_trace(rsc,
                  "%s: Incorporating attenuated %s assignment scores due "
                  "to colocation %s", rsc->id, other->id, colocation->id);
     other->cmds->add_colocated_node_scores(other, rsc->id, &rsc->allowed_nodes,
                                            colocation, factor, flags);
 }
 
 /*!
  * \internal
  * \brief Get all colocations affecting a resource as the primary
  *
  * \param[in] rsc  Resource to get colocations for
  *
  * \return Newly allocated list of colocations affecting \p rsc as primary
  *
  * \note This is a convenience wrapper for the with_this_colocations() method.
  */
 GList *
 pcmk__with_this_colocations(const pe_resource_t *rsc)
 {
     GList *list = NULL;
 
     rsc->cmds->with_this_colocations(rsc, rsc, &list);
     return list;
 }
 
 /*!
  * \internal
  * \brief Get all colocations affecting a resource as the dependent
  *
  * \param[in] rsc  Resource to get colocations for
  *
  * \return Newly allocated list of colocations affecting \p rsc as dependent
  *
  * \note This is a convenience wrapper for the this_with_colocations() method.
  */
 GList *
 pcmk__this_with_colocations(const pe_resource_t *rsc)
 {
     GList *list = NULL;
 
     rsc->cmds->this_with_colocations(rsc, rsc, &list);
     return list;
 }
diff --git a/lib/pacemaker/pcmk_sched_nodes.c b/lib/pacemaker/pcmk_sched_nodes.c
index d7d5ba4616..8eeebe4820 100644
--- a/lib/pacemaker/pcmk_sched_nodes.c
+++ b/lib/pacemaker/pcmk_sched_nodes.c
@@ -1,351 +1,351 @@
 /*
  * Copyright 2004-2023 the Pacemaker project contributors
  *
  * The version control history for this file may have further details.
  *
  * This source code is licensed under the GNU General Public License version 2
  * or later (GPLv2+) WITHOUT ANY WARRANTY.
  */
 
 #include <crm_internal.h>
 #include <crm/msg_xml.h>
 #include <crm/lrmd.h>       // lrmd_event_data_t
 #include <crm/common/xml_internal.h>
 #include <pacemaker-internal.h>
 #include <pacemaker.h>
 #include "libpacemaker_private.h"
 
 /*!
  * \internal
  * \brief Check whether a node is available to run resources
  *
  * \param[in] node            Node to check
  * \param[in] consider_score  If true, consider a negative score unavailable
  * \param[in] consider_guest  If true, consider a guest node unavailable if
  *                            its guest resource will not be active
  *
  * \return true if node is online and not shutting down, unclean, or in standby
  *         or maintenance mode, otherwise false
  */
 bool
 pcmk__node_available(const pe_node_t *node, bool consider_score,
                      bool consider_guest)
 {
     if ((node == NULL) || (node->details == NULL) || !node->details->online
             || node->details->shutdown || node->details->unclean
             || node->details->standby || node->details->maintenance) {
         return false;
     }
 
     if (consider_score && (node->weight < 0)) {
         return false;
     }
 
     // @TODO Go through all callers to see which should set consider_guest
     if (consider_guest && pe__is_guest_node(node)) {
         pe_resource_t *guest = node->details->remote_rsc->container;
 
         if (guest->fns->location(guest, NULL, FALSE) == NULL) {
             return false;
         }
     }
 
     return true;
 }
 
 /*!
  * \internal
  * \brief Copy a hash table of node objects
  *
  * \param[in] nodes  Hash table to copy
  *
  * \return New copy of nodes (or NULL if nodes is NULL)
  */
 GHashTable *
 pcmk__copy_node_table(GHashTable *nodes)
 {
     GHashTable *new_table = NULL;
     GHashTableIter iter;
     pe_node_t *node = NULL;
 
     if (nodes == NULL) {
         return NULL;
     }
     new_table = pcmk__strkey_table(NULL, free);
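     // Keys are node IDs owned by the shared node details, so only values are freed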
     g_hash_table_iter_init(&iter, nodes);
     while (g_hash_table_iter_next(&iter, NULL, (gpointer *) &node)) {
         pe_node_t *new_node = pe__copy_node(node);
 
         g_hash_table_insert(new_table, (gpointer) new_node->details->id,
                             new_node);
     }
     return new_table;
 }
 
 /*!
  * \internal
  * \brief Copy a list of node objects
  *
  * \param[in] list   List to copy
  * \param[in] reset  If true, set copies' scores to 0
  *
  * \return New list of shallow copies of nodes in original list
  */
 GList *
 pcmk__copy_node_list(const GList *list, bool reset)
 {
     GList *result = NULL;
 
     for (const GList *gIter = list; gIter != NULL; gIter = gIter->next) {
         pe_node_t *new_node = NULL;
         pe_node_t *this_node = (pe_node_t *) gIter->data;
 
         new_node = pe__copy_node(this_node);
         if (reset) {
             new_node->weight = 0;
         }
         result = g_list_prepend(result, new_node);
     }
     return result;
 }
 
 /*!
  * \internal
- * \brief Compare two nodes for allocation desirability
+ * \brief Compare two nodes for assignment preference
  *
- * Given two nodes, check which one is more preferred by allocation criteria
+ * Given two nodes, check which one is more preferred by assignment criteria
  * such as node weight and utilization.
  *
  * \param[in] a     First node to compare
  * \param[in] b     Second node to compare
  * \param[in] data  Node that resource being assigned is active on, if any
  *
  * \return -1 if \p a is preferred, +1 if \p b is preferred, or 0 if they are
  *         equally preferred
  */
 static gint
 compare_nodes(gconstpointer a, gconstpointer b, gpointer data)
 {
     const pe_node_t *node1 = (const pe_node_t *) a;
     const pe_node_t *node2 = (const pe_node_t *) b;
     const pe_node_t *active = (const pe_node_t *) data;
 
     int node1_weight = 0;
     int node2_weight = 0;
 
     int result = 0;
 
     if (a == NULL) {
         return 1;
     }
     if (b == NULL) {
         return -1;
     }
 
     // Compare node weights
 
     node1_weight = pcmk__node_available(node1, false, false)? node1->weight : -INFINITY;
     node2_weight = pcmk__node_available(node2, false, false)? node2->weight : -INFINITY;
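     // (an unavailable node sorts as -INFINITY regardless of its weight)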
 
     if (node1_weight > node2_weight) {
         crm_trace("%s (%d) > %s (%d) : weight",
                   pe__node_name(node1), node1_weight, pe__node_name(node2),
                   node2_weight);
         return -1;
     }
 
     if (node1_weight < node2_weight) {
         crm_trace("%s (%d) < %s (%d) : weight",
                   pe__node_name(node1), node1_weight, pe__node_name(node2),
                   node2_weight);
         return 1;
     }
 
     crm_trace("%s (%d) == %s (%d) : weight",
               pe__node_name(node1), node1_weight, pe__node_name(node2),
               node2_weight);
 
     // If appropriate, compare node utilization
 
     if (pcmk__str_eq(node1->details->data_set->placement_strategy, "minimal",
                      pcmk__str_casei)) {
         goto equal;
     }
 
     if (pcmk__str_eq(node1->details->data_set->placement_strategy, "balanced",
                      pcmk__str_casei)) {
         result = pcmk__compare_node_capacities(node1, node2);
         if (result < 0) {
             crm_trace("%s > %s : capacity (%d)",
                       pe__node_name(node1), pe__node_name(node2), result);
             return -1;
         } else if (result > 0) {
             crm_trace("%s < %s : capacity (%d)",
                       pe__node_name(node1), pe__node_name(node2), result);
             return 1;
         }
     }
 
-    // Compare number of allocated resources
+    // Compare number of resources already assigned to node
 
     if (node1->details->num_resources < node2->details->num_resources) {
         crm_trace("%s (%d) > %s (%d) : resources",
                   pe__node_name(node1), node1->details->num_resources,
                   pe__node_name(node2), node2->details->num_resources);
         return -1;
 
     } else if (node1->details->num_resources > node2->details->num_resources) {
         crm_trace("%s (%d) < %s (%d) : resources",
                   pe__node_name(node1), node1->details->num_resources,
                   pe__node_name(node2), node2->details->num_resources);
         return 1;
     }
 
     // Check whether one node is already running desired resource
 
     if (active != NULL) {
         if (active->details == node1->details) {
             crm_trace("%s (%d) > %s (%d) : active",
                       pe__node_name(node1), node1->details->num_resources,
                       pe__node_name(node2), node2->details->num_resources);
             return -1;
         } else if (active->details == node2->details) {
             crm_trace("%s (%d) < %s (%d) : active",
                       pe__node_name(node1), node1->details->num_resources,
                       pe__node_name(node2), node2->details->num_resources);
             return 1;
         }
     }
 
     // If all else is equal, prefer node with lowest-sorting name
 equal:
     crm_trace("%s = %s", pe__node_name(node1), pe__node_name(node2));
     return strcmp(node1->details->uname, node2->details->uname);
 }
 
 /*!
  * \internal
- * \brief Sort a list of nodes by allocation desirability
+ * \brief Sort a list of nodes by assignment preference
  *
  * \param[in,out] nodes        Node list to sort
  * \param[in]     active_node  Node where resource being assigned is active
  *
  * \return New head of sorted list
  */
 GList *
 pcmk__sort_nodes(GList *nodes, pe_node_t *active_node)
 {
     return g_list_sort_with_data(nodes, compare_nodes, active_node);
 }
 
 /*!
  * \internal
  * \brief Check whether any node is available to run resources
  *
  * \param[in] nodes  Nodes to check
  *
  * \return true if any node in \p nodes is available to run resources,
  *         otherwise false
  */
 bool
 pcmk__any_node_available(GHashTable *nodes)
 {
     GHashTableIter iter;
     const pe_node_t *node = NULL;
 
     if (nodes == NULL) {
         return false;
     }
     g_hash_table_iter_init(&iter, nodes);
     while (g_hash_table_iter_next(&iter, NULL, (void **) &node)) {
         if (pcmk__node_available(node, true, false)) {
             return true;
         }
     }
     return false;
 }
 
 /*!
  * \internal
  * \brief Apply node health values for all nodes in cluster
  *
  * \param[in,out] data_set  Cluster working set
  */
 void
 pcmk__apply_node_health(pe_working_set_t *data_set)
 {
     int base_health = 0;
     enum pcmk__health_strategy strategy;
     const char *strategy_str = pe_pref(data_set->config_hash,
                                        PCMK__OPT_NODE_HEALTH_STRATEGY);
 
     strategy = pcmk__parse_health_strategy(strategy_str);
     if (strategy == pcmk__health_strategy_none) {
         return;
     }
     crm_info("Applying node health strategy '%s'", strategy_str);
 
     // The progressive strategy can use a base health score
     if (strategy == pcmk__health_strategy_progressive) {
         base_health = pe__health_score(PCMK__OPT_NODE_HEALTH_BASE, data_set);
     }
 
     for (GList *iter = data_set->nodes; iter != NULL; iter = iter->next) {
         pe_node_t *node = (pe_node_t *) iter->data;
         int health = pe__sum_node_health_scores(node, base_health);
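         // (sum of the node's "#health*" attribute scores, plus any progressive base)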
 
         // An overall health score of 0 has no effect
         if (health == 0) {
             continue;
         }
         crm_info("Overall system health of %s is %d",
                  pe__node_name(node), health);
 
         // Use node health as a location score for each resource on the node
         for (GList *r = data_set->resources; r != NULL; r = r->next) {
             pe_resource_t *rsc = (pe_resource_t *) r->data;
 
             bool constrain = true;
 
             if (health < 0) {
                 /* Negative health scores do not apply to resources with
                  * allow-unhealthy-nodes=true.
                  */
                 constrain = !crm_is_true(g_hash_table_lookup(rsc->meta,
                                          PCMK__META_ALLOW_UNHEALTHY_NODES));
             }
             if (constrain) {
                 pcmk__new_location(strategy_str, rsc, health, NULL, node,
                                    data_set);
             } else {
                 pe_rsc_trace(rsc, "%s is immune from health ban on %s",
                              rsc->id, pe__node_name(node));
             }
         }
     }
 }
 
 /*!
  * \internal
  * \brief Check for a node in a resource's parent's allowed nodes
  *
  * \param[in] rsc   Resource whose parent should be checked
  * \param[in] node  Node to check for
  *
  * \return Equivalent of \p node from \p rsc's parent's allowed nodes if any,
  *         otherwise NULL
  */
 pe_node_t *
 pcmk__top_allowed_node(const pe_resource_t *rsc, const pe_node_t *node)
 {
     GHashTable *allowed_nodes = NULL;
 
     if ((rsc == NULL) || (node == NULL)) {
         return NULL;
     } else if (rsc->parent == NULL) {
         allowed_nodes = rsc->allowed_nodes;
     } else {
         allowed_nodes = rsc->parent->allowed_nodes;
     }
     return pe_hash_table_lookup(allowed_nodes, node->details->id);
 }
diff --git a/lib/pacemaker/pcmk_sched_promotable.c b/lib/pacemaker/pcmk_sched_promotable.c
index d08823e1b4..c90ec582ff 100644
--- a/lib/pacemaker/pcmk_sched_promotable.c
+++ b/lib/pacemaker/pcmk_sched_promotable.c
@@ -1,1284 +1,1284 @@
 /*
  * Copyright 2004-2023 the Pacemaker project contributors
  *
  * The version control history for this file may have further details.
  *
  * This source code is licensed under the GNU General Public License version 2
  * or later (GPLv2+) WITHOUT ANY WARRANTY.
  */
 
 #include <crm_internal.h>
 
 #include <crm/msg_xml.h>
 #include <pacemaker-internal.h>
 
 #include "libpacemaker_private.h"
 
 /*!
  * \internal
  * \brief Add implicit promotion ordering for a promotable instance
  *
  * \param[in,out] clone  Clone resource
  * \param[in,out] child  Instance of \p clone being ordered
  * \param[in,out] last   Previous instance ordered (NULL if \p child is first)
  */
 static void
 order_instance_promotion(pe_resource_t *clone, pe_resource_t *child,
                          pe_resource_t *last)
 {
     // "Promote clone" -> promote instance -> "clone promoted"
     pcmk__order_resource_actions(clone, RSC_PROMOTE, child, RSC_PROMOTE,
                                  pe_order_optional);
     pcmk__order_resource_actions(child, RSC_PROMOTE, clone, RSC_PROMOTED,
                                  pe_order_optional);
 
     // If clone is ordered, order this instance relative to last
     if ((last != NULL) && pe__clone_is_ordered(clone)) {
         pcmk__order_resource_actions(last, RSC_PROMOTE, child, RSC_PROMOTE,
                                      pe_order_optional);
     }
 }
 
 /*!
  * \internal
  * \brief Add implicit demotion ordering for a promotable instance
  *
  * \param[in,out] clone  Clone resource
  * \param[in,out] child  Instance of \p clone being ordered
  * \param[in]     last   Previous instance ordered (NULL if \p child is first)
  */
 static void
 order_instance_demotion(pe_resource_t *clone, pe_resource_t *child,
                         pe_resource_t *last)
 {
     // "Demote clone" -> demote instance -> "clone demoted"
     pcmk__order_resource_actions(clone, RSC_DEMOTE, child, RSC_DEMOTE,
                                  pe_order_implies_first_printed);
     pcmk__order_resource_actions(child, RSC_DEMOTE, clone, RSC_DEMOTED,
                                  pe_order_implies_then_printed);
 
     // If clone is ordered, order this instance relative to last
     if ((last != NULL) && pe__clone_is_ordered(clone)) {
         pcmk__order_resource_actions(child, RSC_DEMOTE, last, RSC_DEMOTE,
                                      pe_order_optional);
     }
 }
 
 /*!
  * \internal
  * \brief Check whether an instance will be promoted or demoted
  *
  * \param[in]  rsc        Instance to check
  * \param[out] demoting   If \p rsc will be demoted, this will be set to true
  * \param[out] promoting  If \p rsc will be promoted, this will be set to true
  */
 static void
 check_for_role_change(const pe_resource_t *rsc, bool *demoting, bool *promoting)
 {
     const GList *iter = NULL;
 
     // If this is a cloned group, check group members recursively
     if (rsc->children != NULL) {
         for (iter = rsc->children; iter != NULL; iter = iter->next) {
             check_for_role_change((const pe_resource_t *) iter->data,
                                   demoting, promoting);
         }
         return;
     }
 
     for (iter = rsc->actions; iter != NULL; iter = iter->next) {
         const pe_action_t *action = (const pe_action_t *) iter->data;
 
         if (*promoting && *demoting) {
             return;
 
         } else if (pcmk_is_set(action->flags, pe_action_optional)) {
             continue;
 
         } else if (pcmk__str_eq(RSC_DEMOTE, action->task, pcmk__str_none)) {
             *demoting = true;
 
         } else if (pcmk__str_eq(RSC_PROMOTE, action->task, pcmk__str_none)) {
             *promoting = true;
         }
     }
 }
 
 /*!
  * \internal
  * \brief Add promoted-role location constraint scores to an instance's priority
  *
  * Adjust a promotable clone instance's promotion priority by the scores of any
  * location constraints in a list that are both limited to the promoted role and
  * for the node where the instance will be placed.
  *
  * \param[in,out] child                 Promotable clone instance
  * \param[in]     location_constraints  List of location constraints to apply
  * \param[in]     chosen                Node where \p child will be placed
  */
 static void
 apply_promoted_locations(pe_resource_t *child,
                          const GList *location_constraints,
                          const pe_node_t *chosen)
 {
     for (const GList *iter = location_constraints; iter; iter = iter->next) {
         const pe__location_t *location = iter->data;
         pe_node_t *weighted_node = NULL;
 
         if (location->role_filter == RSC_ROLE_PROMOTED) {
             weighted_node = pe_find_node_id(location->node_list_rh,
                                             chosen->details->id);
         }
         if (weighted_node != NULL) {
             int new_priority = pcmk__add_scores(child->priority,
                                                 weighted_node->weight);
 
             pe_rsc_trace(child,
                          "Applying location %s to %s promotion priority on %s: "
                          "%s + %s = %s",
                          location->id, child->id, pe__node_name(weighted_node),
                          pcmk_readable_score(child->priority),
                          pcmk_readable_score(weighted_node->weight),
                          pcmk_readable_score(new_priority));
             child->priority = new_priority;
         }
     }
 }
 
 /*!
  * \internal
  * \brief Get the node that an instance will be promoted on
  *
  * \param[in] rsc  Promotable clone instance to check
  *
  * \return Node that \p rsc will be promoted on, or NULL if none
  */
 static pe_node_t *
 node_to_be_promoted_on(const pe_resource_t *rsc)
 {
     pe_node_t *node = NULL;
     pe_node_t *local_node = NULL;
     const pe_resource_t *parent = NULL;
 
     // If this is a cloned group, bail if any group member can't be promoted
     for (GList *iter = rsc->children; iter != NULL; iter = iter->next) {
         pe_resource_t *child = (pe_resource_t *) iter->data;
 
         if (node_to_be_promoted_on(child) == NULL) {
             pe_rsc_trace(rsc,
                          "%s can't be promoted because member %s can't",
                          rsc->id, child->id);
             return NULL;
         }
     }
 
     node = rsc->fns->location(rsc, NULL, FALSE);
     if (node == NULL) {
         pe_rsc_trace(rsc, "%s can't be promoted because it won't be active",
                      rsc->id);
         return NULL;
 
     } else if (!pcmk_is_set(rsc->flags, pe_rsc_managed)) {
         if (rsc->fns->state(rsc, TRUE) == RSC_ROLE_PROMOTED) {
             crm_notice("Unmanaged instance %s will be left promoted on %s",
                        rsc->id, pe__node_name(node));
         } else {
             pe_rsc_trace(rsc, "%s can't be promoted because it is unmanaged",
                          rsc->id);
             return NULL;
         }
 
     } else if (rsc->priority < 0) {
         pe_rsc_trace(rsc,
                      "%s can't be promoted because its promotion priority %d "
                      "is negative",
                      rsc->id, rsc->priority);
         return NULL;
 
     } else if (!pcmk__node_available(node, false, true)) {
         pe_rsc_trace(rsc, "%s can't be promoted because %s can't run resources",
                      rsc->id, pe__node_name(node));
         return NULL;
     }
 
     parent = pe__const_top_resource(rsc, false);
     local_node = pe_hash_table_lookup(parent->allowed_nodes, node->details->id);
 
     if (local_node == NULL) {
-        /* It should not be possible for the scheduler to have allocated the
+        /* It should not be possible for the scheduler to have assigned the
          * instance to a node where its parent is not allowed, but it's good to
          * have a fail-safe.
          */
         if (pcmk_is_set(rsc->flags, pe_rsc_managed)) {
             crm_warn("%s can't be promoted because %s is not allowed on %s "
                      "(scheduler bug?)",
                      rsc->id, parent->id, pe__node_name(node));
         } // else the instance is unmanaged and already promoted
         return NULL;
 
     } else if ((local_node->count >= pe__clone_promoted_node_max(parent))
                && pcmk_is_set(rsc->flags, pe_rsc_managed)) {
         pe_rsc_trace(rsc,
                      "%s can't be promoted because %s has "
                      "maximum promoted instances already",
                      rsc->id, pe__node_name(node));
         return NULL;
     }
 
     return local_node;
 }
 
 /*!
  * \internal
  * \brief Compare two promotable clone instances by promotion priority
  *
  * \param[in] a  First instance to compare
  * \param[in] b  Second instance to compare
  *
  * \return A negative number if \p a has higher promotion priority,
  *         a positive number if \p b has higher promotion priority,
  *         or 0 if promotion priorities are equal
  */
 static gint
 cmp_promotable_instance(gconstpointer a, gconstpointer b)
 {
     const pe_resource_t *rsc1 = (const pe_resource_t *) a;
     const pe_resource_t *rsc2 = (const pe_resource_t *) b;
 
     enum rsc_role_e role1 = RSC_ROLE_UNKNOWN;
     enum rsc_role_e role2 = RSC_ROLE_UNKNOWN;
 
     CRM_ASSERT((rsc1 != NULL) && (rsc2 != NULL));
 
     // Check sort index set by pcmk__set_instance_roles()
     if (rsc1->sort_index > rsc2->sort_index) {
         pe_rsc_trace(rsc1,
                      "%s has higher promotion priority than %s "
                      "(sort index %d > %d)",
                      rsc1->id, rsc2->id, rsc1->sort_index, rsc2->sort_index);
         return -1;
     } else if (rsc1->sort_index < rsc2->sort_index) {
         pe_rsc_trace(rsc1,
                      "%s has lower promotion priority than %s "
                      "(sort index %d < %d)",
                      rsc1->id, rsc2->id, rsc1->sort_index, rsc2->sort_index);
         return 1;
     }
 
     // If those are the same, prefer instance whose current role is higher
     role1 = rsc1->fns->state(rsc1, TRUE);
     role2 = rsc2->fns->state(rsc2, TRUE);
     if (role1 > role2) {
         pe_rsc_trace(rsc1,
                      "%s has higher promotion priority than %s "
                      "(higher current role)",
                      rsc1->id, rsc2->id);
         return -1;
     } else if (role1 < role2) {
         pe_rsc_trace(rsc1,
                      "%s has lower promotion priority than %s "
                      "(lower current role)",
                      rsc1->id, rsc2->id);
         return 1;
     }
 
     // Finally, do normal clone instance sorting
     return pcmk__cmp_instance(a, b);
 }
 
 /*!
  * \internal
  * \brief Add a promotable clone instance's sort index to its node's weight
  *
  * Add a promotable clone instance's sort index (which sums its promotion
  * preferences and scores of relevant location constraints for the promoted
- * role) to the node weight of the instance's allocated node.
+ * role) to the node weight of the instance's assigned node.
  *
  * \param[in]     data       Promotable clone instance
  * \param[in,out] user_data  Clone parent of \p data
  */
 static void
 add_sort_index_to_node_weight(gpointer data, gpointer user_data)
 {
     const pe_resource_t *child = (const pe_resource_t *) data;
     pe_resource_t *clone = (pe_resource_t *) user_data;
 
     pe_node_t *node = NULL;
     const pe_node_t *chosen = NULL;
 
     if (child->sort_index < 0) {
         pe_rsc_trace(clone, "Not adding sort index of %s: negative", child->id);
         return;
     }
 
     chosen = child->fns->location(child, NULL, FALSE);
     if (chosen == NULL) {
         pe_rsc_trace(clone, "Not adding sort index of %s: inactive", child->id);
         return;
     }
 
     node = (pe_node_t *) pe_hash_table_lookup(clone->allowed_nodes,
                                               chosen->details->id);
     CRM_ASSERT(node != NULL);
 
     node->weight = pcmk__add_scores(child->sort_index, node->weight);
     pe_rsc_trace(clone,
                  "Added cumulative priority of %s (%s) to score on %s (now %s)",
                  child->id, pcmk_readable_score(child->sort_index),
                  pe__node_name(node), pcmk_readable_score(node->weight));
 }
 
 /*!
  * \internal
  * \brief Apply colocation to dependent's node weights if for promoted role
  *
  * \param[in,out] data       Colocation constraint to apply
  * \param[in,out] user_data  Promotable clone that is constraint's dependent
  */
 static void
 apply_coloc_to_dependent(gpointer data, gpointer user_data)
 {
     pcmk__colocation_t *constraint = (pcmk__colocation_t *) data;
     pe_resource_t *clone = (pe_resource_t *) user_data;
     pe_resource_t *primary = constraint->primary;
     uint32_t flags = pcmk__coloc_select_default;
     float factor = constraint->score / (float) INFINITY;
 
     if (constraint->dependent_role != RSC_ROLE_PROMOTED) {
         return;
     }
     if (constraint->score < INFINITY) {
         flags = pcmk__coloc_select_active;
     }
     pe_rsc_trace(clone, "Applying colocation %s (promoted %s with %s) @%s",
                  constraint->id, constraint->dependent->id,
                  constraint->primary->id,
                  pcmk_readable_score(constraint->score));
     primary->cmds->add_colocated_node_scores(primary, clone->id,
                                              &clone->allowed_nodes,
                                              constraint, factor, flags);
 }
 
 /*!
  * \internal
  * \brief Apply colocation to primary's node weights if for promoted role
  *
  * \param[in,out] data       Colocation constraint to apply
  * \param[in,out] user_data  Promotable clone that is constraint's primary
  */
 static void
 apply_coloc_to_primary(gpointer data, gpointer user_data)
 {
     pcmk__colocation_t *constraint = (pcmk__colocation_t *) data;
     pe_resource_t *clone = (pe_resource_t *) user_data;
     pe_resource_t *dependent = constraint->dependent;
     const float factor = constraint->score / (float) INFINITY;
     const uint32_t flags = pcmk__coloc_select_active
                            |pcmk__coloc_select_nonnegative;
 
     if ((constraint->primary_role != RSC_ROLE_PROMOTED)
          || !pcmk__colocation_has_influence(constraint, NULL)) {
         return;
     }
 
     pe_rsc_trace(clone, "Applying colocation %s (%s with promoted %s) @%s",
                  constraint->id, constraint->dependent->id,
                  constraint->primary->id,
                  pcmk_readable_score(constraint->score));
     dependent->cmds->add_colocated_node_scores(dependent, clone->id,
                                                &clone->allowed_nodes,
                                                constraint, factor, flags);
 }
 
 /*!
  * \internal
  * \brief Set clone instance's sort index to its node's weight
  *
  * \param[in,out] data       Promotable clone instance
  * \param[in]     user_data  Parent clone of \p data
  */
 static void
 set_sort_index_to_node_weight(gpointer data, gpointer user_data)
 {
     pe_resource_t *child = (pe_resource_t *) data;
     const pe_resource_t *clone = (const pe_resource_t *) user_data;
 
     pe_node_t *chosen = child->fns->location(child, NULL, FALSE);
 
     if (!pcmk_is_set(child->flags, pe_rsc_managed)
         && (child->next_role == RSC_ROLE_PROMOTED)) {
         child->sort_index = INFINITY;
         pe_rsc_trace(clone,
                      "Final sort index for %s is INFINITY (unmanaged promoted)",
                      child->id);
 
     } else if ((chosen == NULL) || (child->sort_index < 0)) {
         pe_rsc_trace(clone,
                      "Final sort index for %s is %d (ignoring node weight)",
                      child->id, child->sort_index);
 
     } else {
         const pe_node_t *node = NULL;
 
         node = pe_hash_table_lookup(clone->allowed_nodes, chosen->details->id);
         CRM_ASSERT(node != NULL);
 
         child->sort_index = node->weight;
         pe_rsc_trace(clone,
                      "Merging weights for %s: final sort index for %s is %d",
                      clone->id, child->id, child->sort_index);
     }
 }
 
 /*!
  * \internal
  * \brief Sort a promotable clone's instances by descending promotion priority
  *
  * \param[in,out] clone  Promotable clone to sort
  */
 static void
 sort_promotable_instances(pe_resource_t *clone)
 {
     if (pe__set_clone_flag(clone, pe__clone_promotion_constrained)
             == pcmk_rc_already) {
         return;
     }
     pe__set_resource_flags(clone, pe_rsc_merging);
 
     for (GList *iter = clone->children; iter != NULL; iter = iter->next) {
         pe_resource_t *child = (pe_resource_t *) iter->data;
 
         pe_rsc_trace(clone,
                      "Merging weights for %s: initial sort index for %s is %d",
                      clone->id, child->id, child->sort_index);
     }
     pe__show_node_weights(true, clone, "Before", clone->allowed_nodes,
                           clone->cluster);
 
     /* Because the this_with_colocations() and with_this_colocations() methods
      * boil down to copies of rsc_cons and rsc_cons_lhs for clones, we can use
      * those here directly for efficiency.
      */
     g_list_foreach(clone->children, add_sort_index_to_node_weight, clone);
     g_list_foreach(clone->rsc_cons, apply_coloc_to_dependent, clone);
     g_list_foreach(clone->rsc_cons_lhs, apply_coloc_to_primary, clone);
 
     // Ban resource from all nodes if it needs a ticket but doesn't have it
     pcmk__require_promotion_tickets(clone);
 
     pe__show_node_weights(true, clone, "After", clone->allowed_nodes,
                           clone->cluster);
 
     // Reset sort indexes to final node weights
     g_list_foreach(clone->children, set_sort_index_to_node_weight, clone);
 
     // Finally, sort instances in descending order of promotion priority
     clone->children = g_list_sort(clone->children, cmp_promotable_instance);
     pe__clear_resource_flags(clone, pe_rsc_merging);
 }
 
 /*!
  * \internal
  * \brief Find the active instance (if any) of an anonymous clone on a node
  *
  * \param[in] clone  Anonymous clone to check
  * \param[in] id     Instance ID (without instance number) to check
  * \param[in] node   Node to check
  *
  * \return Active instance of \p clone with ID \p id on \p node if any,
  *         otherwise NULL
  */
 static pe_resource_t *
 find_active_anon_instance(const pe_resource_t *clone, const char *id,
                           const pe_node_t *node)
 {
     for (GList *iter = clone->children; iter; iter = iter->next) {
         pe_resource_t *child = iter->data;
         pe_resource_t *active = NULL;
 
         // Use ->find_rsc() in case this is a cloned group
         active = clone->fns->find_rsc(child, id, node,
                                       pe_find_clone|pe_find_current);
         if (active != NULL) {
             return active;
         }
     }
     return NULL;
 }
 
 /*!
  * \internal
  * \brief Check whether an anonymous clone instance is known on a node
  *
  * \param[in] clone  Anonymous clone to check
  * \param[in] id     Instance ID (without instance number) to check
  * \param[in] node   Node to check
  *
  * \return true if \p id instance of \p clone is known on \p node,
  *         otherwise false
  */
 static bool
 anonymous_known_on(const pe_resource_t *clone, const char *id,
                    const pe_node_t *node)
 {
     for (GList *iter = clone->children; iter; iter = iter->next) {
         pe_resource_t *child = iter->data;
 
         /* Use ->find_rsc() because this might be a cloned group, and knowing
          * that other members of the group are known here implies nothing.
          */
         child = clone->fns->find_rsc(child, id, NULL, pe_find_clone);
         CRM_LOG_ASSERT(child != NULL);
         if (child != NULL) {
             if (g_hash_table_lookup(child->known_on, node->details->id)) {
                 return true;
             }
         }
     }
     return false;
 }
 
 /*!
  * \internal
  * \brief Check whether a node is allowed to run a resource
  *
  * \param[in] rsc   Resource to check
  * \param[in] node  Node to check
  *
  * \return true if \p node is allowed to run \p rsc, otherwise false
  */
 static bool
 is_allowed(const pe_resource_t *rsc, const pe_node_t *node)
 {
     pe_node_t *allowed = pe_hash_table_lookup(rsc->allowed_nodes,
                                               node->details->id);
 
     return (allowed != NULL) && (allowed->weight >= 0);
 }
 
 /*!
  * \internal
  * \brief Check whether a clone instance's promotion score should be considered
  *
  * \param[in] rsc   Promotable clone instance to check
  * \param[in] node  Node where score would be applied
  *
  * \return true if \p rsc's promotion score should be considered on \p node,
  *         otherwise false
  */
 static bool
 promotion_score_applies(const pe_resource_t *rsc, const pe_node_t *node)
 {
     char *id = clone_strip(rsc->id);
     const pe_resource_t *parent = pe__const_top_resource(rsc, false);
     pe_resource_t *active = NULL;
     const char *reason = "allowed";
 
     // Some checks apply only to anonymous clone instances
     if (!pcmk_is_set(rsc->flags, pe_rsc_unique)) {
 
         // If instance is active on the node, its score definitely applies
         active = find_active_anon_instance(parent, id, node);
         if (active == rsc) {
             reason = "active";
             goto check_allowed;
         }
 
         /* If *no* instance is active on this node, this instance's score will
          * count if it has been probed on this node.
          */
         if ((active == NULL) && anonymous_known_on(parent, id, node)) {
             reason = "probed";
             goto check_allowed;
         }
     }
 
     /* If this clone's status is unknown on *all* nodes (e.g. cluster startup),
      * take all instances' scores into account, to make sure we use any
      * permanent promotion scores.
      */
     if ((rsc->running_on == NULL) && (g_hash_table_size(rsc->known_on) == 0)) {
         reason = "none probed";
         goto check_allowed;
     }
 
     /* Otherwise, we've probed and/or started the resource *somewhere*, so
      * consider promotion scores on nodes where we know the status.
      */
     if ((pe_hash_table_lookup(rsc->known_on, node->details->id) != NULL)
         || (pe_find_node_id(rsc->running_on, node->details->id) != NULL)) {
         reason = "known";
     } else {
         pe_rsc_trace(rsc,
                      "Ignoring %s promotion score (for %s) on %s: not probed",
                      rsc->id, id, pe__node_name(node));
         free(id);
         return false;
     }
 
 check_allowed:
     if (is_allowed(rsc, node)) {
         pe_rsc_trace(rsc, "Counting %s promotion score (for %s) on %s: %s",
                      rsc->id, id, pe__node_name(node), reason);
         free(id);
         return true;
     }
 
     pe_rsc_trace(rsc, "Ignoring %s promotion score (for %s) on %s: not allowed",
                  rsc->id, id, pe__node_name(node));
     free(id);
     return false;
 }
 
 /*!
  * \internal
  * \brief Get the value of a promotion score node attribute
  *
  * \param[in] rsc   Promotable clone instance to get promotion score for
  * \param[in] node  Node to get promotion score for
  * \param[in] name  Resource name to use in promotion score attribute name
  *
  * \return Value of promotion score node attribute for \p rsc on \p node
  */
 static const char *
 promotion_attr_value(const pe_resource_t *rsc, const pe_node_t *node,
                      const char *name)
 {
     char *attr_name = NULL;
     const char *attr_value = NULL;
 
     CRM_CHECK((rsc != NULL) && (node != NULL) && (name != NULL), return NULL);
 
     attr_name = pcmk_promotion_score_name(name);
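     // Typically "master-<name>" (e.g. hypothetical "master-myrsc"), kept for
     // historical compatibility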
     attr_value = pe_node_attribute_calculated(node, attr_name, rsc);
     free(attr_name);
     return attr_value;
 }
 
 /*!
  * \internal
  * \brief Get the promotion score for a clone instance on a node
  *
  * \param[in]  rsc         Promotable clone instance to get score for
  * \param[in]  node        Node to get score for
  * \param[out] is_default  If non-NULL, will be set true if no score available
  *
  * \return Promotion score for \p rsc on \p node (or 0 if none)
  */
 static int
 promotion_score(const pe_resource_t *rsc, const pe_node_t *node,
                 bool *is_default)
 {
     char *name = NULL;
     const char *attr_value = NULL;
 
     if (is_default != NULL) {
         *is_default = true;
     }
 
     CRM_CHECK((rsc != NULL) && (node != NULL), return 0);
 
     /* If this is an instance of a cloned group, the promotion score is the sum
      * of all members' promotion scores.
      */
     if (rsc->children != NULL) {
         int score = 0;
 
         for (const GList *iter = rsc->children;
              iter != NULL; iter = iter->next) {
 
             const pe_resource_t *child = (const pe_resource_t *) iter->data;
             bool child_default = false;
             int child_score = promotion_score(child, node, &child_default);
 
             if (!child_default && (is_default != NULL)) {
                 *is_default = false;
             }
             score += child_score;
         }
         return score;
     }
 
     if (!promotion_score_applies(rsc, node)) {
         return 0;
     }
 
     /* For the promotion score attribute name, use the name the resource is
      * known as in resource history, since that's what crm_attribute --promotion
      * would have used.
      */
     name = (rsc->clone_name == NULL)? rsc->id : rsc->clone_name;
 
     attr_value = promotion_attr_value(rsc, node, name);
     if (attr_value != NULL) {
         pe_rsc_trace(rsc, "Promotion score for %s on %s = %s",
                      name, pe__node_name(node), pcmk__s(attr_value, "(unset)"));
     } else if (!pcmk_is_set(rsc->flags, pe_rsc_unique)) {
         /* If we don't have any resource history yet, we won't have clone_name.
          * In that case, for anonymous clones, try the resource name without
          * any instance number.
          */
         name = clone_strip(rsc->id);
         if (strcmp(rsc->id, name) != 0) {
             attr_value = promotion_attr_value(rsc, node, name);
             pe_rsc_trace(rsc, "Promotion score for %s on %s (for %s) = %s",
                          name, pe__node_name(node), rsc->id,
                          pcmk__s(attr_value, "(unset)"));
         }
         free(name);
     }
 
     if (attr_value == NULL) {
         return 0;
     }
 
     if (is_default != NULL) {
         *is_default = false;
     }
     return char2score(attr_value);
 }
 
 /*!
  * \internal
  * \brief Include promotion scores in instances' node weights and priorities
  *
  * \param[in,out] rsc  Promotable clone resource to update
  */
 void
 pcmk__add_promotion_scores(pe_resource_t *rsc)
 {
     if (pe__set_clone_flag(rsc, pe__clone_promotion_added) == pcmk_rc_already) {
         return;
     }
 
     for (GList *iter = rsc->children; iter != NULL; iter = iter->next) {
         pe_resource_t *child_rsc = (pe_resource_t *) iter->data;
 
         GHashTableIter node_iter;
         pe_node_t *node = NULL;
         int score, new_score;
 
         g_hash_table_iter_init(&node_iter, child_rsc->allowed_nodes);
         while (g_hash_table_iter_next(&node_iter, NULL, (void **) &node)) {
             if (!pcmk__node_available(node, false, false)) {
                 /* This node will never be promoted, so don't apply the
                  * promotion score, as that may lead to clone shuffling.
                  */
                 continue;
             }
 
             score = promotion_score(child_rsc, node, NULL);
             if (score > 0) {
                 new_score = pcmk__add_scores(node->weight, score);
                 if (new_score != node->weight) { // Could remain INFINITY
                     node->weight = new_score;
                     pe_rsc_trace(rsc,
                                  "Added %s promotion priority (%s) to score "
                                  "on %s (now %s)",
                                  child_rsc->id, pcmk_readable_score(score),
                                  pe__node_name(node),
                                  pcmk_readable_score(new_score));
                 }
             }
 
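             // Also raise the instance's priority to its promotion score if higher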
             if (score > child_rsc->priority) {
                 pe_rsc_trace(rsc,
                              "Updating %s priority to promotion score (%d->%d)",
                              child_rsc->id, child_rsc->priority, score);
                 child_rsc->priority = score;
             }
         }
     }
 }
 
 /*!
  * \internal
  * \brief If a resource's current role is started, change it to unpromoted
  *
  * \param[in,out] data       Resource to update
  * \param[in]     user_data  Ignored
  */
 static void
 set_current_role_unpromoted(void *data, void *user_data)
 {
     pe_resource_t *rsc = (pe_resource_t *) data;
 
     if (rsc->role == RSC_ROLE_STARTED) {
         // Promotable clones should use unpromoted role instead of started
         rsc->role = RSC_ROLE_UNPROMOTED;
     }
     g_list_foreach(rsc->children, set_current_role_unpromoted, NULL);
 }
 
 /*!
  * \internal
  * \brief Set a resource's next role to unpromoted (or stopped if unassigned)
  *
  * \param[in,out] data       Resource to update
  * \param[in]     user_data  Ignored
  */
 static void
 set_next_role_unpromoted(void *data, void *user_data)
 {
     pe_resource_t *rsc = (pe_resource_t *) data;
     GList *assigned = NULL;
 
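     // Determine whether the resource has been assigned to any node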
     rsc->fns->location(rsc, &assigned, FALSE);
     if (assigned == NULL) {
         pe__set_next_role(rsc, RSC_ROLE_STOPPED, "stopped instance");
     } else {
         pe__set_next_role(rsc, RSC_ROLE_UNPROMOTED, "unpromoted instance");
         g_list_free(assigned);
     }
     g_list_foreach(rsc->children, set_next_role_unpromoted, NULL);
 }
 
 /*!
  * \internal
  * \brief Set a resource's next role to promoted if not already set
  *
  * \param[in,out] data       Resource to update
  * \param[in]     user_data  Ignored
  */
 static void
 set_next_role_promoted(void *data, gpointer user_data)
 {
     pe_resource_t *rsc = (pe_resource_t *) data;
 
     if (rsc->next_role == RSC_ROLE_UNKNOWN) {
         pe__set_next_role(rsc, RSC_ROLE_PROMOTED, "promoted instance");
     }
     g_list_foreach(rsc->children, set_next_role_promoted, NULL);
 }
 
 /*!
  * \internal
  * \brief Show instance's promotion score on node where it will be active
  *
  * \param[in,out] instance  Promotable clone instance to show
  */
 static void
 show_promotion_score(pe_resource_t *instance)
 {
     pe_node_t *chosen = instance->fns->location(instance, NULL, FALSE);
 
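     /* Send the score through the output system only when scores were
      * requested and we're running as a command-line tool rather than a daemon
      */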
     if (pcmk_is_set(instance->cluster->flags, pe_flag_show_scores)
         && !pcmk__is_daemon && (instance->cluster->priv != NULL)) {
 
         pcmk__output_t *out = instance->cluster->priv;
 
         out->message(out, "promotion-score", instance, chosen,
                      pcmk_readable_score(instance->sort_index));
     } else {
         pe_rsc_debug(pe__const_top_resource(instance, false),
                      "%s promotion score on %s: sort=%s priority=%s",
                      instance->id,
                      ((chosen == NULL)? "none" : pe__node_name(chosen)),
                      pcmk_readable_score(instance->sort_index),
                      pcmk_readable_score(instance->priority));
     }
 }
 
 /*!
  * \internal
  * \brief Set a clone instance's promotion priority
  *
  * \param[in,out] data       Promotable clone instance to update
  * \param[in]     user_data  Instance's parent clone
  */
 static void
 set_instance_priority(gpointer data, gpointer user_data)
 {
     pe_resource_t *instance = (pe_resource_t *) data;
     const pe_resource_t *clone = (const pe_resource_t *) user_data;
     const pe_node_t *chosen = NULL;
     enum rsc_role_e next_role = RSC_ROLE_UNKNOWN;
     GList *list = NULL;
 
     pe_rsc_trace(clone, "Assigning priority for %s: %s", instance->id,
                  role2text(instance->next_role));
 
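     // Normalize a current role of started to unpromoted for promotable clones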
     if (instance->fns->state(instance, TRUE) == RSC_ROLE_STARTED) {
         set_current_role_unpromoted(instance, NULL);
     }
 
     // Only an instance that will be active can be promoted
     chosen = instance->fns->location(instance, &list, FALSE);
     if (pcmk__list_of_multiple(list)) {
         pcmk__config_err("Cannot promote non-colocated child %s",
                          instance->id);
     }
     g_list_free(list);
     if (chosen == NULL) {
         return;
     }
 
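     // Check which role the instance is scheduled for (FALSE means next role)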
     next_role = instance->fns->state(instance, FALSE);
     switch (next_role) {
         case RSC_ROLE_STARTED:
         case RSC_ROLE_UNKNOWN:
             // Set instance priority to its promotion score (or -1 if none)
             {
                 bool is_default = false;
 
                 instance->priority = promotion_score(instance, chosen,
                                                       &is_default);
                 if (is_default) {
                     /* Default to -1 if no value is set. This allows
                      * instances eligible for promotion to be specified
                      * based solely on rsc_location constraints, but
                      * prevents any instance from being promoted if neither
                      * a constraint nor a promotion score is present.
                      */
                     instance->priority = -1;
                 }
             }
             break;
 
         case RSC_ROLE_UNPROMOTED:
         case RSC_ROLE_STOPPED:
             // Instance can't be promoted
             instance->priority = -INFINITY;
             break;
 
         case RSC_ROLE_PROMOTED:
             // Nothing needed (re-creating actions after scheduling fencing)
             break;
 
         default:
             CRM_CHECK(FALSE, crm_err("Unknown resource role %d for %s",
                                      next_role, instance->id));
     }
 
     // Add relevant location constraint scores for promoted role
     apply_promoted_locations(instance, instance->rsc_location, chosen);
     apply_promoted_locations(instance, clone->rsc_location, chosen);
 
     // Consider instance's role-based colocations with other resources
     list = pcmk__this_with_colocations(instance);
     for (GList *iter = list; iter != NULL; iter = iter->next) {
         pcmk__colocation_t *cons = (pcmk__colocation_t *) iter->data;
 
         instance->cmds->apply_coloc_score(instance, cons->primary, cons, true);
     }
     g_list_free(list);
 
     instance->sort_index = instance->priority;
     if (next_role == RSC_ROLE_PROMOTED) {
         instance->sort_index = INFINITY;
     }
     pe_rsc_trace(clone, "Assigning %s priority = %d",
                  instance->id, instance->priority);
 }
 
 /*!
  * \internal
  * \brief Set a promotable clone instance's role
  *
  * \param[in,out] data       Promotable clone instance to update
  * \param[in,out] user_data  Pointer to count of instances chosen for promotion
  */
 static void
 set_instance_role(gpointer data, gpointer user_data)
 {
     pe_resource_t *instance = (pe_resource_t *) data;
     int *count = (int *) user_data;
 
     const pe_resource_t *clone = pe__const_top_resource(instance, false);
     pe_node_t *chosen = NULL;
 
     show_promotion_score(instance);
 
     if (instance->sort_index < 0) {
         pe_rsc_trace(clone, "Not supposed to promote instance %s",
                      instance->id);
 
     } else if ((*count < pe__clone_promoted_max(instance))
                || !pcmk_is_set(clone->flags, pe_rsc_managed)) {
         chosen = node_to_be_promoted_on(instance);
     }
 
     if (chosen == NULL) {
         set_next_role_unpromoted(instance, NULL);
         return;
     }
 
     if ((instance->role < RSC_ROLE_PROMOTED)
         && !pcmk_is_set(instance->cluster->flags, pe_flag_have_quorum)
         && (instance->cluster->no_quorum_policy == no_quorum_freeze)) {
         crm_notice("Clone instance %s cannot be promoted without quorum",
                    instance->id);
         set_next_role_unpromoted(instance, NULL);
         return;
     }
 
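     // Promote this instance, counting it toward node and clone-wide totals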
     chosen->count++;
     pe_rsc_info(clone, "Choosing %s (%s) on %s for promotion",
                 instance->id, role2text(instance->role),
                 pe__node_name(chosen));
     set_next_role_promoted(instance, NULL);
     (*count)++;
 }
 
 /*!
  * \internal
  * \brief Set roles for all instances of a promotable clone
  *
  * \param[in,out] rsc  Promotable clone resource to update
  */
 void
 pcmk__set_instance_roles(pe_resource_t *rsc)
 {
     int promoted = 0;
     GHashTableIter iter;
     pe_node_t *node = NULL;
 
-    // Repurpose count to track the number of promoted instances allocated
+    // Repurpose count to track the number of promoted instances assigned
     g_hash_table_iter_init(&iter, rsc->allowed_nodes);
     while (g_hash_table_iter_next(&iter, NULL, (void **)&node)) {
         node->count = 0;
     }
 
     // Set instances' promotion priorities and sort by highest priority first
     g_list_foreach(rsc->children, set_instance_priority, rsc);
     sort_promotable_instances(rsc);
 
     // Choose the first N eligible instances to be promoted
     g_list_foreach(rsc->children, set_instance_role, &promoted);
     pe_rsc_info(rsc, "%s: Promoted %d instances of a possible %d",
                 rsc->id, promoted, pe__clone_promoted_max(rsc));
 }
 
 /*!
  * \internal
  * \brief Create actions for promotable clone instances
  *
  * \param[in,out] clone          Promotable clone to create actions for
  * \param[out]    any_promoting  Will be set true if any instance is promoting
  * \param[out]    any_demoting   Will be set true if any instance is demoting
  */
 static void
 create_promotable_instance_actions(pe_resource_t *clone,
                                    bool *any_promoting, bool *any_demoting)
 {
     for (GList *iter = clone->children; iter != NULL; iter = iter->next) {
         pe_resource_t *instance = (pe_resource_t *) iter->data;
 
         instance->cmds->create_actions(instance);
         check_for_role_change(instance, any_demoting, any_promoting);
     }
 }
 
 /*!
  * \internal
  * \brief Reset each promotable instance's resource priority
  *
  * Reset the priority of each instance of a promotable clone to the clone's
  * priority (after promotion actions are scheduled, when instance priorities
  * were repurposed as promotion scores).
  *
  * \param[in,out] clone  Promotable clone to reset
  */
 static void
 reset_instance_priorities(pe_resource_t *clone)
 {
     for (GList *iter = clone->children; iter != NULL; iter = iter->next) {
         pe_resource_t *instance = (pe_resource_t *) iter->data;
 
         instance->priority = clone->priority;
     }
 }
 
 /*!
  * \internal
  * \brief Create actions specific to promotable clones
  *
  * \param[in,out] clone  Promotable clone to create actions for
  */
 void
 pcmk__create_promotable_actions(pe_resource_t *clone)
 {
     bool any_promoting = false;
     bool any_demoting = false;
 
     // Create actions for each clone instance individually
     create_promotable_instance_actions(clone, &any_promoting, &any_demoting);
 
     // Create pseudo-actions for clone as a whole
     pe__create_promotable_pseudo_ops(clone, any_promoting, any_demoting);
 
     // Undo our temporary repurposing of resource priority for instances
     reset_instance_priorities(clone);
 }
 
 /*!
  * \internal
  * \brief Create internal orderings for a promotable clone's instances
  *
  * \param[in,out] clone  Promotable clone whose instances should be ordered
  */
 void
 pcmk__order_promotable_instances(pe_resource_t *clone)
 {
     pe_resource_t *previous = NULL; // Needed for ordered clones
 
     pcmk__promotable_restart_ordering(clone);
 
     for (GList *iter = clone->children; iter != NULL; iter = iter->next) {
         pe_resource_t *instance = (pe_resource_t *) iter->data;
 
         // Demote before promote
         pcmk__order_resource_actions(instance, RSC_DEMOTE,
                                      instance, RSC_PROMOTE,
                                      pe_order_optional);
 
         order_instance_promotion(clone, instance, previous);
         order_instance_demotion(clone, instance, previous);
         previous = instance;
     }
 }
 
 /*!
  * \internal
  * \brief Update dependent's allowed nodes for colocation with promotable
  *
  * \param[in,out] dependent     Dependent resource to update
  * \param[in]     primary_node  Node where an instance of the primary will be
  * \param[in]     colocation    Colocation constraint to apply
  */
 static void
 update_dependent_allowed_nodes(pe_resource_t *dependent,
                                const pe_node_t *primary_node,
                                const pcmk__colocation_t *colocation)
 {
     GHashTableIter iter;
     pe_node_t *node = NULL;
     const char *primary_value = NULL;
     const char *attr = NULL;
 
     if (colocation->score >= INFINITY) {
         return; // Colocation is mandatory, so allowed node scores don't matter
     }
 
     // Get value of primary's colocation node attribute
     attr = colocation->node_attribute;
     if (attr == NULL) {
         attr = CRM_ATTR_UNAME;
     }
     primary_value = pe_node_attribute_raw(primary_node, attr);
 
     pe_rsc_trace(colocation->primary,
                  "Applying %s (%s with %s on %s by %s @%d) to %s",
                  colocation->id, colocation->dependent->id,
                  colocation->primary->id, pe__node_name(primary_node), attr,
                  colocation->score, dependent->id);
 
     g_hash_table_iter_init(&iter, dependent->allowed_nodes);
     while (g_hash_table_iter_next(&iter, NULL, (void **) &node)) {
         const char *dependent_value = pe_node_attribute_raw(node, attr);
 
         if (pcmk__str_eq(primary_value, dependent_value, pcmk__str_casei)) {
             node->weight = pcmk__add_scores(node->weight, colocation->score);
             pe_rsc_trace(colocation->primary,
                          "Added %s score (%s) to %s (now %s)",
                          colocation->id, pcmk_readable_score(colocation->score),
                          pe__node_name(node),
                          pcmk_readable_score(node->weight));
         }
     }
 }
 
 /*!
  * \internal
  * \brief Update dependent for a colocation with a promotable clone
  *
  * \param[in]     primary     Primary resource in the colocation
  * \param[in,out] dependent   Dependent resource in the colocation
  * \param[in]     colocation  Colocation constraint to apply
  */
 void
 pcmk__update_dependent_with_promotable(const pe_resource_t *primary,
                                        pe_resource_t *dependent,
                                        const pcmk__colocation_t *colocation)
 {
     GList *affected_nodes = NULL;
 
     /* Build a list of all nodes where an instance of the primary will be, and
      * (for optional colocations) update the dependent's allowed node scores for
      * each one.
      */
     for (GList *iter = primary->children; iter != NULL; iter = iter->next) {
         pe_resource_t *instance = (pe_resource_t *) iter->data;
         pe_node_t *node = instance->fns->location(instance, NULL, FALSE);
 
         if (node == NULL) {
             continue;
         }
         if (instance->fns->state(instance, FALSE) == colocation->primary_role) {
             update_dependent_allowed_nodes(dependent, node, colocation);
             affected_nodes = g_list_prepend(affected_nodes, node);
         }
     }
 
     /* For mandatory colocations, add the primary's node weight to the
      * dependent's node weight for each affected node, and ban the dependent
      * from all other nodes.
      *
      * However, skip this for promoted-with-promoted colocations, otherwise
      * inactive dependent instances can't start (in the unpromoted role).
      */
     if ((colocation->score >= INFINITY)
         && ((colocation->dependent_role != RSC_ROLE_PROMOTED)
             || (colocation->primary_role != RSC_ROLE_PROMOTED))) {
 
         pe_rsc_trace(colocation->primary,
                      "Applying %s (mandatory %s with %s) to %s",
                      colocation->id, colocation->dependent->id,
                      colocation->primary->id, dependent->id);
         node_list_exclude(dependent->allowed_nodes, affected_nodes,
                           TRUE);
     }
     g_list_free(affected_nodes);
 }
 
 /*!
  * \internal
  * \brief Update dependent priority for colocation with promotable
  *
  * \param[in]     primary     Primary resource in the colocation
  * \param[in,out] dependent   Dependent resource in the colocation
  * \param[in]     colocation  Colocation constraint to apply
  */
 void
 pcmk__update_promotable_dependent_priority(const pe_resource_t *primary,
                                            pe_resource_t *dependent,
                                            const pcmk__colocation_t *colocation)
 {
     pe_resource_t *primary_instance = NULL;
 
     // Look for a primary instance where dependent will be
     primary_instance = pcmk__find_compatible_instance(dependent, primary,
                                                       colocation->primary_role,
                                                       false);
 
     if (primary_instance != NULL) {
         // Add primary instance's priority to dependent's
         int new_priority = pcmk__add_scores(dependent->priority,
                                             colocation->score);
 
         pe_rsc_trace(colocation->primary,
                      "Applying %s (%s with %s) to %s priority (%s + %s = %s)",
                      colocation->id, colocation->dependent->id,
                      colocation->primary->id, dependent->id,
                      pcmk_readable_score(dependent->priority),
                      pcmk_readable_score(colocation->score),
                      pcmk_readable_score(new_priority));
         dependent->priority = new_priority;
 
     } else if (colocation->score >= INFINITY) {
         // Mandatory colocation, but primary won't be here
         pe_rsc_trace(colocation->primary,
                      "Applying %s (%s with %s) to %s: can't be promoted",
                      colocation->id, colocation->dependent->id,
                      colocation->primary->id, dependent->id);
         dependent->priority = -INFINITY;
     }
 }
diff --git a/lib/pacemaker/pcmk_sched_remote.c b/lib/pacemaker/pcmk_sched_remote.c
index 6adb5d4d51..a5a3830761 100644
--- a/lib/pacemaker/pcmk_sched_remote.c
+++ b/lib/pacemaker/pcmk_sched_remote.c
@@ -1,729 +1,729 @@
 /*
  * Copyright 2004-2023 the Pacemaker project contributors
  *
  * The version control history for this file may have further details.
  *
  * This source code is licensed under the GNU General Public License version 2
  * or later (GPLv2+) WITHOUT ANY WARRANTY.
  */
 
 #include <crm_internal.h>
 
 #include <sys/param.h>
 
 #include <crm/crm.h>
 #include <crm/cib.h>
 #include <crm/msg_xml.h>
 #include <crm/common/xml.h>
 #include <crm/common/xml_internal.h>
 
 #include <glib.h>
 
 #include <crm/pengine/status.h>
 #include <pacemaker-internal.h>
 #include "libpacemaker_private.h"
 
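 /* Roughly: an alive connection is (or will be) up, a resting one is down but
  * recoverable without fencing, a failed one requires fencing its node, a
  * stopped one is cleanly down, and an unknown one can't be judged until a
  * reconnect is attempted
  */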
 enum remote_connection_state {
     remote_state_unknown = 0,
     remote_state_alive = 1,
     remote_state_resting = 2,
     remote_state_failed = 3,
     remote_state_stopped = 4
 };
 
 static const char *
 state2text(enum remote_connection_state state)
 {
     switch (state) {
         case remote_state_unknown:
             return "unknown";
         case remote_state_alive:
             return "alive";
         case remote_state_resting:
             return "resting";
         case remote_state_failed:
             return "failed";
         case remote_state_stopped:
             return "stopped";
     }
 
     return "impossible";
 }
 
 /* We always use pe_order_preserve with these convenience functions to exempt
  * internally generated constraints from the prohibition of user constraints
  * involving remote connection resources.
  *
  * The start ordering additionally uses pe_order_runnable_left so that the
  * specified action is not runnable if the start is not runnable.
  */
 
 static inline void
 order_start_then_action(pe_resource_t *first_rsc, pe_action_t *then_action,
                         uint32_t extra, pe_working_set_t *data_set)
 {
     if ((first_rsc != NULL) && (then_action != NULL) && (data_set != NULL)) {
         pcmk__new_ordering(first_rsc, start_key(first_rsc), NULL,
                            then_action->rsc, NULL, then_action,
                            pe_order_preserve|pe_order_runnable_left|extra,
                            data_set);
     }
 }
 
 static inline void
 order_action_then_stop(pe_action_t *first_action, pe_resource_t *then_rsc,
                        uint32_t extra, pe_working_set_t *data_set)
 {
     if ((first_action != NULL) && (then_rsc != NULL) && (data_set != NULL)) {
         pcmk__new_ordering(first_action->rsc, NULL, first_action,
                            then_rsc, stop_key(then_rsc), NULL,
                            pe_order_preserve|extra, data_set);
     }
 }
 
 static enum remote_connection_state
 get_remote_node_state(const pe_node_t *node)
 {
     const pe_resource_t *remote_rsc = NULL;
     const pe_node_t *cluster_node = NULL;
 
     CRM_ASSERT(node != NULL);
 
     remote_rsc = node->details->remote_rsc;
     CRM_ASSERT(remote_rsc != NULL);
 
     cluster_node = pe__current_node(remote_rsc);
 
     /* If the cluster node the remote connection resource resides on
      * is unclean or went offline, we can't process any operations
      * on that remote node until after it starts elsewhere.
      */
     if ((remote_rsc->next_role == RSC_ROLE_STOPPED)
         || (remote_rsc->allocated_to == NULL)) {
 
         // The connection resource is not going to run anywhere
 
         if ((cluster_node != NULL) && cluster_node->details->unclean) {
             /* The remote connection is failed because its resource is on a
              * failed node and can't be recovered elsewhere, so we must fence.
              */
             return remote_state_failed;
         }
 
         if (!pcmk_is_set(remote_rsc->flags, pe_rsc_failed)) {
             /* Connection resource is cleanly stopped */
             return remote_state_stopped;
         }
 
         /* Connection resource is failed */
 
         if ((remote_rsc->next_role == RSC_ROLE_STOPPED)
             && remote_rsc->remote_reconnect_ms
             && node->details->remote_was_fenced
             && !pe__shutdown_requested(node)) {
 
             /* We won't know whether the connection is recoverable until the
              * reconnect interval expires and we reattempt connection.
              */
             return remote_state_unknown;
         }
 
         /* The remote connection is in a failed state. If there are any
          * resources known to be active on it (stop) or in an unknown state
          * (probe), we must assume the worst and fence it.
          */
         return remote_state_failed;
 
     } else if (cluster_node == NULL) {
         /* Connection is recoverable but not currently running anywhere, so see
          * if we can recover it first
          */
         return remote_state_unknown;
 
     } else if (cluster_node->details->unclean
                || !(cluster_node->details->online)) {
         // Connection is running on a dead node, see if we can recover it first
         return remote_state_resting;
 
     } else if (pcmk__list_of_multiple(remote_rsc->running_on)
                && (remote_rsc->partial_migration_source != NULL)
                && (remote_rsc->partial_migration_target != NULL)) {
         /* We're in the middle of migrating a connection resource, so wait until
          * after the migration completes before performing any actions.
          */
         return remote_state_resting;
 
     }
     return remote_state_alive;
 }
 
 /*!
  * \internal
  * \brief Order actions on remote node relative to actions for the connection
  *
  * \param[in,out] action    An action scheduled on a Pacemaker Remote node
  */
 static void
 apply_remote_ordering(pe_action_t *action)
 {
     pe_resource_t *remote_rsc = NULL;
     enum action_tasks task = text2task(action->task);
     enum remote_connection_state state = get_remote_node_state(action->node);
 
     uint32_t order_opts = pe_order_none;
 
     if (action->rsc == NULL) {
         return;
     }
 
     CRM_ASSERT(pe__is_guest_or_remote_node(action->node));
 
     remote_rsc = action->node->details->remote_rsc;
     CRM_ASSERT(remote_rsc != NULL);
 
     crm_trace("Order %s action %s relative to %s%s (state: %s)",
               action->task, action->uuid,
               pcmk_is_set(remote_rsc->flags, pe_rsc_failed)? "failed " : "",
               remote_rsc->id, state2text(state));
 
     if (pcmk__strcase_any_of(action->task, CRMD_ACTION_MIGRATE,
                              CRMD_ACTION_MIGRATED, NULL)) {
         /* Migration ops map to "no_action", but we need to apply the same
          * ordering as for stop or demote (see get_router_node()).
          */
         task = stop_rsc;
     }
 
     switch (task) {
         case start_rsc:
         case action_promote:
             order_opts = pe_order_none;
 
             if (state == remote_state_failed) {
                 /* Force recovery, by making this action required */
                 pe__set_order_flags(order_opts, pe_order_implies_then);
             }
 
             /* Ensure connection is up before running this action */
             order_start_then_action(remote_rsc, action, order_opts,
                                     remote_rsc->cluster);
             break;
 
         case stop_rsc:
             if (state == remote_state_alive) {
                 order_action_then_stop(action, remote_rsc,
                                        pe_order_implies_first,
                                        remote_rsc->cluster);
 
             } else if (state == remote_state_failed) {
                 /* The resource is active on the node, but since we don't have a
                  * valid connection, the only way to stop the resource is by
                  * fencing the node. There is no need to order the stop relative
                  * to the remote connection, since the stop will become implied
                  * by the fencing.
                  */
                 pe_fence_node(remote_rsc->cluster, action->node,
                               "resources are active but connection is unrecoverable",
                               FALSE);
 
             } else if (remote_rsc->next_role == RSC_ROLE_STOPPED) {
                 /* State must be remote_state_unknown or remote_state_stopped.
                  * Since the connection is not coming back up in this
                  * transition, stop this resource first.
                  */
                 order_action_then_stop(action, remote_rsc,
                                        pe_order_implies_first,
                                        remote_rsc->cluster);
 
             } else {
                 /* The connection is going to be started somewhere else, so
                  * stop this resource after that completes.
                  */
                 order_start_then_action(remote_rsc, action, pe_order_none,
                                         remote_rsc->cluster);
             }
             break;
 
         case action_demote:
             /* Only order this demote relative to the connection start if the
              * connection isn't being torn down. Otherwise, the demote would be
              * blocked because the connection start would not be allowed.
              */
             if ((state == remote_state_resting)
                 || (state == remote_state_unknown)) {
 
                 order_start_then_action(remote_rsc, action, pe_order_none,
                                         remote_rsc->cluster);
             } /* Otherwise we can rely on the stop ordering */
             break;
 
         default:
             /* Wait for the connection resource to be up */
             if (pcmk__action_is_recurring(action)) {
                 /* In case we ever get the recovery logic wrong, force
                  * recurring monitors to be restarted, even if just
                  * the connection was re-established
                  */
                 order_start_then_action(remote_rsc, action,
                                         pe_order_implies_then,
                                         remote_rsc->cluster);
 
             } else {
                 pe_node_t *cluster_node = pe__current_node(remote_rsc);
 
                 if ((task == monitor_rsc) && (state == remote_state_failed)) {
                     /* We would only be here if we do not know the state of the
                      * resource on the remote node. Since we have no way to find
                      * out, it is necessary to fence the node.
                      */
                     pe_fence_node(remote_rsc->cluster, action->node,
                                   "resources are in unknown state "
                                   "and connection is unrecoverable", FALSE);
                 }
 
                 if ((cluster_node != NULL) && (state == remote_state_stopped)) {
                    /* The connection is currently up, but is going down
                     * permanently. Make sure we check that services are
                     * actually stopped _before_ we let the connection get
                     * closed.
                     */
                     order_action_then_stop(action, remote_rsc,
                                            pe_order_runnable_left,
                                            remote_rsc->cluster);
 
                 } else {
                     order_start_then_action(remote_rsc, action, pe_order_none,
                                             remote_rsc->cluster);
                 }
             }
             break;
     }
 }
 
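 /*!
  * \internal
  * \brief Order actions on a guest node relative to its container and connection
  *
  * \param[in,out] action    An action scheduled on a guest node
  * \param[in,out] data_set  Cluster working set
  */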
 static void
 apply_container_ordering(pe_action_t *action, pe_working_set_t *data_set)
 {
    /* VMs are also classified as containers for these purposes, in that both
     * involve a 'thing' running on a real or remote cluster node.
     *
     * This allows us to be smarter about the type and extent of recovery
     * actions required in various scenarios.
     */
     pe_resource_t *remote_rsc = NULL;
     pe_resource_t *container = NULL;
     enum action_tasks task = text2task(action->task);
 
     CRM_ASSERT(action->rsc != NULL);
     CRM_ASSERT(action->node != NULL);
     CRM_ASSERT(pe__is_guest_or_remote_node(action->node));
 
     remote_rsc = action->node->details->remote_rsc;
     CRM_ASSERT(remote_rsc != NULL);
 
     container = remote_rsc->container;
     CRM_ASSERT(container != NULL);
 
     if (pcmk_is_set(container->flags, pe_rsc_failed)) {
         pe_fence_node(data_set, action->node, "container failed", FALSE);
     }
 
     crm_trace("Order %s action %s relative to %s%s for %s%s",
               action->task, action->uuid,
               pcmk_is_set(remote_rsc->flags, pe_rsc_failed)? "failed " : "",
               remote_rsc->id,
               pcmk_is_set(container->flags, pe_rsc_failed)? "failed " : "",
               container->id);
 
     if (pcmk__strcase_any_of(action->task, CRMD_ACTION_MIGRATE,
                              CRMD_ACTION_MIGRATED, NULL)) {
         /* Migration ops map to "no_action", but we need to apply the same
          * ordering as for stop or demote (see get_router_node()).
          */
         task = stop_rsc;
     }
 
     switch (task) {
         case start_rsc:
         case action_promote:
             // Force resource recovery if the container is recovered
             order_start_then_action(container, action, pe_order_implies_then,
                                     data_set);
 
             // Wait for the connection resource to be up, too
             order_start_then_action(remote_rsc, action, pe_order_none,
                                     data_set);
             break;
 
         case stop_rsc:
         case action_demote:
             if (pcmk_is_set(container->flags, pe_rsc_failed)) {
                 /* When the container representing a guest node fails, any stop
                  * or demote actions for resources running on the guest node
                  * are implied by the container stopping. This is similar to
                  * how fencing operations work for cluster nodes and remote
                  * nodes.
                  */
             } else {
                 /* Ensure the operation happens before the connection is brought
                  * down.
                  *
                  * If we really wanted to, we could order these after the
                  * connection start, IFF the container's current role was
                  * stopped (otherwise we re-introduce an ordering loop when the
                  * connection is restarting).
                  */
                 order_action_then_stop(action, remote_rsc, pe_order_none,
                                        data_set);
             }
             break;
 
         default:
             /* Wait for the connection resource to be up */
             if (pcmk__action_is_recurring(action)) {
                 /* In case we ever get the recovery logic wrong, force
                  * recurring monitors to be restarted, even if just
                  * the connection was re-established
                  */
                 if (task != no_action) {
                     order_start_then_action(remote_rsc, action,
                                             pe_order_implies_then, data_set);
                 }
             } else {
                 order_start_then_action(remote_rsc, action, pe_order_none,
                                         data_set);
             }
             break;
     }
 }
 
 /*!
  * \internal
  * \brief Order all relevant actions relative to remote connection actions
  *
  * \param[in,out] data_set  Cluster working set
  */
 void
 pcmk__order_remote_connection_actions(pe_working_set_t *data_set)
 {
     if (!pcmk_is_set(data_set->flags, pe_flag_have_remote_nodes)) {
         return;
     }
 
     crm_trace("Creating remote connection orderings");
 
     for (GList *gIter = data_set->actions; gIter != NULL; gIter = gIter->next) {
         pe_action_t *action = (pe_action_t *) gIter->data;
         pe_resource_t *remote = NULL;
 
         // We are only interested in resource actions
         if (action->rsc == NULL) {
             continue;
         }
 
         /* Special case: If we are clearing the failcount of an actual
          * remote connection resource, then make sure this happens before
          * any start of the resource in this transition.
          */
         if (action->rsc->is_remote_node &&
             pcmk__str_eq(action->task, CRM_OP_CLEAR_FAILCOUNT, pcmk__str_casei)) {
 
             pcmk__new_ordering(action->rsc, NULL, action, action->rsc,
                                pcmk__op_key(action->rsc->id, RSC_START, 0),
                                NULL, pe_order_optional, data_set);
 
             continue;
         }
 
-        // We are only interested in actions allocated to a node
+        // We are only interested in actions assigned to a node
         if (action->node == NULL) {
             continue;
         }
 
         if (!pe__is_guest_or_remote_node(action->node)) {
             continue;
         }
 
         /* We are only interested in real actions.
          *
          * @TODO This is probably wrong; pseudo-actions might be converted to
          * real actions and vice versa later in update_actions() at the end of
          * pcmk__apply_orderings().
          */
         if (pcmk_is_set(action->flags, pe_action_pseudo)) {
             continue;
         }
 
         remote = action->node->details->remote_rsc;
         if (remote == NULL) {
             // Orphaned
             continue;
         }
 
         /* Another special case: if a resource is moving to a Pacemaker Remote
          * node, order the stop on the original node after any start of the
          * remote connection. This ensures that if the connection fails to
          * start, we leave the resource running on the original node.
          */
         if (pcmk__str_eq(action->task, RSC_START, pcmk__str_casei)) {
             for (GList *item = action->rsc->actions; item != NULL;
                  item = item->next) {
                 pe_action_t *rsc_action = item->data;
 
                 if ((rsc_action->node->details != action->node->details)
                     && pcmk__str_eq(rsc_action->task, RSC_STOP, pcmk__str_casei)) {
                     pcmk__new_ordering(remote, start_key(remote), NULL,
                                        action->rsc, NULL, rsc_action,
                                        pe_order_optional, data_set);
                 }
             }
         }
 
         /* The action occurs across a remote connection, so create
          * ordering constraints that guarantee the action occurs while the node
          * is active (after start, before stop ... things like that).
          *
          * This is somewhat brittle in that we need to make sure the results of
          * this ordering are compatible with the result of get_router_node().
          * It would probably be better to add XML_LRM_ATTR_ROUTER_NODE as part
          * of this logic rather than create_graph_action().
          */
         if (remote->container) {
             crm_trace("Container ordering for %s", action->uuid);
             apply_container_ordering(action, data_set);
 
         } else {
             crm_trace("Remote ordering for %s", action->uuid);
             apply_remote_ordering(action);
         }
     }
 }
 
 /*!
  * \internal
  * \brief Check whether a node is a failed remote node
  *
  * \param[in] node  Node to check
  *
  * \return true if \p node is a failed remote node, false otherwise
  */
 bool
 pcmk__is_failed_remote_node(const pe_node_t *node)
 {
     return pe__is_remote_node(node) && (node->details->remote_rsc != NULL)
            && (get_remote_node_state(node) == remote_state_failed);
 }
 
 /*!
  * \internal
  * \brief Check whether a given resource corresponds to a given node as guest
  *
  * \param[in] rsc   Resource to check
  * \param[in] node  Node to check
  *
  * \return true if \p node is a guest node and \p rsc is its containing
  *         resource, otherwise false
  */
 bool
 pcmk__rsc_corresponds_to_guest(const pe_resource_t *rsc, const pe_node_t *node)
 {
     return (rsc != NULL) && (rsc->fillers != NULL) && (node != NULL)
             && (node->details->remote_rsc != NULL)
             && (node->details->remote_rsc->container == rsc);
 }
 
 /*!
  * \internal
  * \brief Get proper connection host that a remote action must be routed through
  *
  * A remote connection resource might be starting, stopping, or migrating in
  * the same transition in which an action needs to be executed on its
  * Pacemaker Remote node. Determine the proper node that the remote action
  * should be routed through.
  *
  * \param[in] action  (Potentially remote) action to route
  *
  * \return Connection host that action should be routed through if remote,
  *         otherwise NULL
  */
 pe_node_t *
 pcmk__connection_host_for_action(const pe_action_t *action)
 {
     pe_node_t *began_on = NULL;
     pe_node_t *ended_on = NULL;
     bool partial_migration = false;
     const char *task = action->task;
 
     if (pcmk__str_eq(task, CRM_OP_FENCE, pcmk__str_casei)
         || !pe__is_guest_or_remote_node(action->node)) {
         return NULL;
     }
 
     CRM_ASSERT(action->node->details->remote_rsc != NULL);
 
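     /* Note where the connection is currently active and where it has been
      * assigned to run after this transition
      */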
     began_on = pe__current_node(action->node->details->remote_rsc);
     ended_on = action->node->details->remote_rsc->allocated_to;
     if (action->node->details->remote_rsc
         && (action->node->details->remote_rsc->container == NULL)
         && action->node->details->remote_rsc->partial_migration_target) {
         partial_migration = true;
     }
 
     if (began_on == NULL) {
         crm_trace("Routing %s for %s through remote connection's "
                   "next node %s (starting)%s",
                   action->task, (action->rsc? action->rsc->id : "no resource"),
                   (ended_on? ended_on->details->uname : "none"),
                   partial_migration? " (partial migration)" : "");
         return ended_on;
     }
 
     if (ended_on == NULL) {
         crm_trace("Routing %s for %s through remote connection's "
                   "current node %s (stopping)%s",
                   action->task, (action->rsc? action->rsc->id : "no resource"),
                   (began_on? began_on->details->uname : "none"),
                   partial_migration? " (partial migration)" : "");
         return began_on;
     }
 
     if (began_on->details == ended_on->details) {
         crm_trace("Routing %s for %s through remote connection's "
                   "current node %s (not moving)%s",
                   action->task, (action->rsc? action->rsc->id : "no resource"),
                   (began_on? began_on->details->uname : "none"),
                   partial_migration? " (partial migration)" : "");
         return began_on;
     }
 
     /* If we get here, the remote connection is moving during this transition.
      * This means some actions for resources behind the connection will get
      * routed through the cluster node the connection resource is currently on,
      * and others are routed through the cluster node the connection will end up
      * on.
      */
 
     if (pcmk__str_eq(task, "notify", pcmk__str_casei)) {
         task = g_hash_table_lookup(action->meta, "notify_operation");
     }
 
     /*
      * Stop, demote, and migration actions must occur before the connection can
      * move (these actions are required before the remote resource can stop). In
      * this case, we know these actions have to be routed through the initial
      * cluster node the connection resource lived on before the move takes
      * place.
      *
      * The exception is a partial migration of a (non-guest) remote connection
      * resource; in that case, all actions (even these) will be ordered after
      * the connection's pseudo-start on the migration target, so the target is
      * the router node.
      */
     if (pcmk__strcase_any_of(task, "cancel", "stop", "demote", "migrate_from",
                              "migrate_to", NULL) && !partial_migration) {
         crm_trace("Routing %s for %s through remote connection's "
                   "current node %s (moving)%s",
                   action->task, (action->rsc? action->rsc->id : "no resource"),
                   (began_on? began_on->details->uname : "none"),
                   partial_migration? " (partial migration)" : "");
         return began_on;
     }
 
     /* Everything else (start, promote, monitor, probe, refresh,
      * clear failcount, delete, ...) must occur after the connection starts on
      * the node it is moving to.
      */
     crm_trace("Routing %s for %s through remote connection's "
               "next node %s (moving)%s",
               action->task, (action->rsc? action->rsc->id : "no resource"),
               (ended_on? ended_on->details->uname : "none"),
               partial_migration? " (partial migration)" : "");
     return ended_on;
 }
 
 /*!
  * \internal
  * \brief Replace remote connection's addr="#uname" with actual address
  *
  * REMOTE_CONTAINER_HACK: If a given resource is a remote connection resource
  * with its "addr" parameter set to "#uname", pull the actual value from the
  * parameters evaluated without a node (which was put there earlier in
  * pcmk__create_graph() when the bundle's expand() method was called).
  *
  * \param[in,out] rsc     Resource to check
  * \param[in,out] params  Resource parameters evaluated per node
  */
 void
 pcmk__substitute_remote_addr(pe_resource_t *rsc, GHashTable *params)
 {
     const char *remote_addr = g_hash_table_lookup(params,
                                                   XML_RSC_ATTR_REMOTE_RA_ADDR);
 
     if (pcmk__str_eq(remote_addr, "#uname", pcmk__str_none)) {
         GHashTable *base = pe_rsc_params(rsc, NULL, rsc->cluster);
 
         remote_addr = g_hash_table_lookup(base, XML_RSC_ATTR_REMOTE_RA_ADDR);
         if (remote_addr != NULL) {
             g_hash_table_insert(params, strdup(XML_RSC_ATTR_REMOTE_RA_ADDR),
                                 strdup(remote_addr));
         }
     }
 }
 
 /*!
  * \internal
  * \brief Add special bundle meta-attributes to XML
  *
  * If a given action will be executed on a guest node (including a bundle),
  * add the special bundle meta-attribute "container-attribute-target" and
  * environment variable "physical_host" as XML attributes (using meta-attribute
  * naming).
  *
  * \param[in,out] args_xml  XML to add attributes to
  * \param[in]     action    Action to check
  */
 void
 pcmk__add_bundle_meta_to_xml(xmlNode *args_xml, const pe_action_t *action)
 {
     const pe_node_t *host = NULL;
     enum action_tasks task;
 
     if (!pe__is_guest_node(action->node)) {
         return;
     }
 
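     // For notifications, classify by the operation being notified about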
     task = text2task(action->task);
     if ((task == action_notify) || (task == action_notified)) {
         task = text2task(g_hash_table_lookup(action->meta, "notify_operation"));
     }
 
     switch (task) {
         case stop_rsc:
         case stopped_rsc:
         case action_demote:
         case action_demoted:
             // "Down" actions take place on guest's current host
             host = pe__current_node(action->node->details->remote_rsc->container);
             break;
 
         case start_rsc:
         case started_rsc:
         case monitor_rsc:
         case action_promote:
         case action_promoted:
             // "Up" actions take place on guest's next host
             host = action->node->details->remote_rsc->container->allocated_to;
             break;
 
         default:
             break;
     }
 
     if (host != NULL) {
         hash2metafield((gpointer) XML_RSC_ATTR_TARGET,
                        (gpointer) g_hash_table_lookup(action->rsc->meta,
                                                       XML_RSC_ATTR_TARGET),
                        (gpointer) args_xml);
         hash2metafield((gpointer) PCMK__ENV_PHYSICAL_HOST,
                        (gpointer) host->details->uname,
                        (gpointer) args_xml);
     }
 }
diff --git a/lib/pacemaker/pcmk_sched_resource.c b/lib/pacemaker/pcmk_sched_resource.c
index 11efbd1b19..e06a110ab6 100644
--- a/lib/pacemaker/pcmk_sched_resource.c
+++ b/lib/pacemaker/pcmk_sched_resource.c
@@ -1,722 +1,723 @@
 /*
  * Copyright 2014-2023 the Pacemaker project contributors
  *
  * The version control history for this file may have further details.
  *
  * This source code is licensed under the GNU General Public License version 2
  * or later (GPLv2+) WITHOUT ANY WARRANTY.
  */
 
 #include <crm_internal.h>
 
 #include <stdlib.h>
 #include <string.h>
 #include <crm/msg_xml.h>
 #include <pacemaker-internal.h>
 
 #include "libpacemaker_private.h"
 
-// Resource allocation methods that vary by resource variant
-static resource_alloc_functions_t allocation_methods[] = {
+// Resource assignment methods by resource variant
+static resource_alloc_functions_t assignment_methods[] = {
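+    // Entries are indexed by resource variant (enum pe_obj_types)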
     {
         pcmk__primitive_assign,
         pcmk__primitive_create_actions,
         pcmk__probe_rsc_on_node,
         pcmk__primitive_internal_constraints,
         pcmk__primitive_apply_coloc_score,
         pcmk__colocated_resources,
         pcmk__with_primitive_colocations,
         pcmk__primitive_with_colocations,
         pcmk__add_colocated_node_scores,
         pcmk__apply_location,
         pcmk__primitive_action_flags,
         pcmk__update_ordered_actions,
         pcmk__output_resource_actions,
         pcmk__add_rsc_actions_to_graph,
         pcmk__primitive_add_graph_meta,
         pcmk__primitive_add_utilization,
         pcmk__primitive_shutdown_lock,
     },
     {
         pcmk__group_assign,
         pcmk__group_create_actions,
         pcmk__probe_rsc_on_node,
         pcmk__group_internal_constraints,
         pcmk__group_apply_coloc_score,
         pcmk__group_colocated_resources,
         pcmk__with_group_colocations,
         pcmk__group_with_colocations,
         pcmk__group_add_colocated_node_scores,
         pcmk__group_apply_location,
         pcmk__group_action_flags,
         pcmk__group_update_ordered_actions,
         pcmk__output_resource_actions,
         pcmk__add_rsc_actions_to_graph,
         pcmk__noop_add_graph_meta,
         pcmk__group_add_utilization,
         pcmk__group_shutdown_lock,
     },
     {
         pcmk__clone_assign,
         pcmk__clone_create_actions,
         pcmk__clone_create_probe,
         pcmk__clone_internal_constraints,
         pcmk__clone_apply_coloc_score,
         pcmk__colocated_resources,
         pcmk__with_clone_colocations,
         pcmk__clone_with_colocations,
         pcmk__add_colocated_node_scores,
         pcmk__clone_apply_location,
         pcmk__clone_action_flags,
         pcmk__instance_update_ordered_actions,
         pcmk__output_resource_actions,
         pcmk__clone_add_actions_to_graph,
         pcmk__clone_add_graph_meta,
         pcmk__clone_add_utilization,
         pcmk__clone_shutdown_lock,
     },
     {
         pcmk__bundle_assign,
         pcmk__bundle_create_actions,
         pcmk__bundle_create_probe,
         pcmk__bundle_internal_constraints,
         pcmk__bundle_apply_coloc_score,
         pcmk__colocated_resources,
         pcmk__with_bundle_colocations,
         pcmk__bundle_with_colocations,
         pcmk__add_colocated_node_scores,
         pcmk__bundle_apply_location,
         pcmk__bundle_action_flags,
         pcmk__instance_update_ordered_actions,
         pcmk__output_bundle_actions,
         pcmk__bundle_add_actions_to_graph,
         pcmk__noop_add_graph_meta,
         pcmk__bundle_add_utilization,
         pcmk__bundle_shutdown_lock,
     }
 };
 
 /*!
  * \internal
  * \brief Check whether a resource's agent standard, provider, or type changed
  *
  * \param[in,out] rsc             Resource to check
  * \param[in,out] node            Node needing unfencing if agent changed
  * \param[in]     rsc_entry       XML with previously known agent information
  * \param[in]     active_on_node  Whether \p rsc is active on \p node
  *
  * \return true if agent for \p rsc changed, otherwise false
  */
 bool
 pcmk__rsc_agent_changed(pe_resource_t *rsc, pe_node_t *node,
                         const xmlNode *rsc_entry, bool active_on_node)
 {
     bool changed = false;
     const char *attr_list[] = {
         XML_ATTR_TYPE,
         XML_AGENT_ATTR_CLASS,
         XML_AGENT_ATTR_PROVIDER
     };
 
     for (int i = 0; i < PCMK__NELEM(attr_list); i++) {
         const char *value = crm_element_value(rsc->xml, attr_list[i]);
         const char *old_value = crm_element_value(rsc_entry, attr_list[i]);
 
         if (!pcmk__str_eq(value, old_value, pcmk__str_none)) {
             changed = true;
             trigger_unfencing(rsc, node, "Device definition changed", NULL,
                               rsc->cluster);
             if (active_on_node) {
                 crm_notice("Forcing restart of %s on %s "
                            "because %s changed from '%s' to '%s'",
                            rsc->id, pe__node_name(node), attr_list[i],
                            pcmk__s(old_value, ""), pcmk__s(value, ""));
             }
         }
     }
     if (changed && active_on_node) {
         // Make sure the resource is restarted
         custom_action(rsc, stop_key(rsc), CRMD_ACTION_STOP, node, FALSE, TRUE,
                       rsc->cluster);
         pe__set_resource_flags(rsc, pe_rsc_start_pending);
     }
     return changed;
 }
 
 /*!
  * \internal
  * \brief Add resource (and any matching children) to list if it matches ID
  *
  * \param[in] result  List to add resource to
  * \param[in] rsc     Resource to check
  * \param[in] id      ID to match
  *
  * \return (Possibly new) head of list
  */
 static GList *
 add_rsc_if_matching(GList *result, pe_resource_t *rsc, const char *id)
 {
     if ((strcmp(rsc->id, id) == 0)
         || ((rsc->clone_name != NULL) && (strcmp(rsc->clone_name, id) == 0))) {
         result = g_list_prepend(result, rsc);
     }
     for (GList *iter = rsc->children; iter != NULL; iter = iter->next) {
         pe_resource_t *child = (pe_resource_t *) iter->data;
 
         result = add_rsc_if_matching(result, child, id);
     }
     return result;
 }
 
 /*!
  * \internal
  * \brief Find all resources matching a given ID by either ID or clone name
  *
  * \param[in] id        Resource ID to check
  * \param[in] data_set  Cluster working set
  *
  * \return List of all resources that match \p id
  * \note The caller is responsible for freeing the return value with
  *       g_list_free().
  */
 GList *
 pcmk__rscs_matching_id(const char *id, const pe_working_set_t *data_set)
 {
     GList *result = NULL;
 
     CRM_CHECK((id != NULL) && (data_set != NULL), return NULL);
     for (GList *iter = data_set->resources; iter != NULL; iter = iter->next) {
         result = add_rsc_if_matching(result, (pe_resource_t *) iter->data, id);
     }
     return result;
 }
 
 /*!
  * \internal
- * \brief Set the variant-appropriate allocation methods for a resource
+ * \brief Set the variant-appropriate assignment methods for a resource
  *
- * \param[in,out] rsc      Resource to set allocation methods for
+ * \param[in,out] rsc      Resource to set assignment methods for
  * \param[in]     ignored  Here so function can be used with g_list_foreach()
  */
 static void
-set_allocation_methods_for_rsc(pe_resource_t *rsc, void *ignored)
+set_assignment_methods_for_rsc(pe_resource_t *rsc, void *ignored)
 {
-    rsc->cmds = &allocation_methods[rsc->variant];
-    g_list_foreach(rsc->children, (GFunc) set_allocation_methods_for_rsc, NULL);
+    rsc->cmds = &assignment_methods[rsc->variant];
+    g_list_foreach(rsc->children, (GFunc) set_assignment_methods_for_rsc, NULL);
 }
 
 /*!
  * \internal
- * \brief Set the variant-appropriate allocation methods for all resources
+ * \brief Set the variant-appropriate assignment methods for all resources
  *
  * \param[in,out] data_set  Cluster working set
  */
 void
-pcmk__set_allocation_methods(pe_working_set_t *data_set)
+pcmk__set_assignment_methods(pe_working_set_t *data_set)
 {
-    g_list_foreach(data_set->resources, (GFunc) set_allocation_methods_for_rsc,
+    g_list_foreach(data_set->resources, (GFunc) set_assignment_methods_for_rsc,
                    NULL);
 }
 
 // Shared implementation of resource_alloc_functions_t:colocated_resources()
 GList *
 pcmk__colocated_resources(const pe_resource_t *rsc, const pe_resource_t *orig_rsc,
                           GList *colocated_rscs)
 {
     const GList *iter = NULL;
     GList *colocations = NULL;
 
     if (orig_rsc == NULL) {
         orig_rsc = rsc;
     }
 
     if ((rsc == NULL) || (g_list_find(colocated_rscs, rsc) != NULL)) {
         return colocated_rscs;
     }
 
     pe_rsc_trace(orig_rsc, "%s is in colocation chain with %s",
                  rsc->id, orig_rsc->id);
     colocated_rscs = g_list_prepend(colocated_rscs, (gpointer) rsc);
 
     // Follow colocations where this resource is the dependent resource
     colocations = pcmk__this_with_colocations(rsc);
     for (iter = colocations; iter != NULL; iter = iter->next) {
         const pcmk__colocation_t *constraint = iter->data;
         const pe_resource_t *primary = constraint->primary;
 
         if (primary == orig_rsc) {
             continue; // Break colocation loop
         }
 
         if ((constraint->score == INFINITY) &&
             (pcmk__colocation_affects(rsc, primary, constraint,
                                       true) == pcmk__coloc_affects_location)) {
 
             colocated_rscs = primary->cmds->colocated_resources(primary,
                                                                 orig_rsc,
                                                                 colocated_rscs);
         }
     }
     g_list_free(colocations);
 
     // Follow colocations where this resource is the primary resource
     colocations = pcmk__with_this_colocations(rsc);
     for (iter = colocations; iter != NULL; iter = iter->next) {
         const pcmk__colocation_t *constraint = iter->data;
         const pe_resource_t *dependent = constraint->dependent;
 
         if (dependent == orig_rsc) {
             continue; // Break colocation loop
         }
 
         if (pe_rsc_is_clone(rsc) && !pe_rsc_is_clone(dependent)) {
             continue; // We can't be sure whether dependent will be colocated
         }
 
         if ((constraint->score == INFINITY) &&
             (pcmk__colocation_affects(dependent, rsc, constraint,
                                       true) == pcmk__coloc_affects_location)) {
 
             colocated_rscs = dependent->cmds->colocated_resources(dependent,
                                                                   orig_rsc,
                                                                   colocated_rscs);
         }
     }
     g_list_free(colocations);
 
     return colocated_rscs;
 }
 
 // No-op function for variants that don't need to implement add_graph_meta()
 void
 pcmk__noop_add_graph_meta(const pe_resource_t *rsc, xmlNode *xml)
 {
 }
 
 void
 pcmk__output_resource_actions(pe_resource_t *rsc)
 {
     pcmk__output_t *out = rsc->cluster->priv;
 
     pe_node_t *next = NULL;
     pe_node_t *current = NULL;
 
     if (rsc->children != NULL) {
         for (GList *iter = rsc->children; iter != NULL; iter = iter->next) {
             pe_resource_t *child = (pe_resource_t *) iter->data;
 
             child->cmds->output_actions(child);
         }
         return;
     }
 
     next = rsc->allocated_to;
     if (rsc->running_on) {
         current = pe__current_node(rsc);
         if (rsc->role == RSC_ROLE_STOPPED) {
             /* This can occur when resources are being recovered because
              * the current role can change in pcmk__primitive_create_actions()
              */
             rsc->role = RSC_ROLE_STARTED;
         }
     }
 
     if ((current == NULL) && pcmk_is_set(rsc->flags, pe_rsc_orphan)) {
         /* Don't log stopped orphans */
         return;
     }
 
     out->message(out, "rsc-action", rsc, current, next);
 }
 
 /*!
  * \internal
  * \brief Assign a specified primitive resource to a node
  *
  * Assign a specified primitive resource to a specified node, if the node can
  * run the resource (or unconditionally, if \p force is true). Mark the resource
  * as no longer provisional. If the primitive can't be assigned (or \p chosen is
  * NULL), unassign any previous assignment for it, set its next role to stopped,
  * and update any existing actions scheduled for it. This is not done
  * recursively for children, so it should be called only for primitives.
  *
  * \param[in,out] rsc     Resource to assign
  * \param[in,out] chosen  Node to assign \p rsc to
  * \param[in]     force   If true, assign to \p chosen even if unavailable
  *
  * \return true if \p rsc could be assigned, otherwise false
  *
  * \note Assigning a resource to the NULL node using this function is different
  *       from calling pcmk__unassign_resource(), in that it will also update any
  *       actions created for the resource.
  */
 bool
 pcmk__finalize_assignment(pe_resource_t *rsc, pe_node_t *chosen, bool force)
 {
     pcmk__output_t *out = rsc->cluster->priv;
 
     CRM_ASSERT(rsc->variant == pe_native);
 
     if (!force && (chosen != NULL)) {
         if ((chosen->weight < 0)
             // Allow the graph to assume that guest node connections will come up
             || (!pcmk__node_available(chosen, true, false)
                 && !pe__is_guest_node(chosen))) {
 
             crm_debug("All nodes for resource %s are unavailable, unclean or "
                       "shutting down (%s can%s run resources, with weight %d)",
                       rsc->id, pe__node_name(chosen),
                       (pcmk__node_available(chosen, true, false)? "" : "not"),
                       chosen->weight);
             pe__set_next_role(rsc, RSC_ROLE_STOPPED, "node availability");
             chosen = NULL;
         }
     }
 
     pcmk__unassign_resource(rsc);
     pe__clear_resource_flags(rsc, pe_rsc_provisional);
 
     if (chosen == NULL) {
-        crm_debug("Could not allocate a node for %s", rsc->id);
-        pe__set_next_role(rsc, RSC_ROLE_STOPPED, "unable to allocate");
+        crm_debug("Could not assign %s to a node", rsc->id);
+        pe__set_next_role(rsc, RSC_ROLE_STOPPED, "unable to assign");
 
         for (GList *iter = rsc->actions; iter != NULL; iter = iter->next) {
             pe_action_t *op = (pe_action_t *) iter->data;
 
-            crm_debug("Updating %s for allocation failure", op->uuid);
+            pe_rsc_debug(rsc, "Updating %s for %s assignment failure",
+                         op->uuid, rsc->id);
 
             if (pcmk__str_eq(op->task, RSC_STOP, pcmk__str_casei)) {
                 pe__clear_action_flags(op, pe_action_optional);
 
             } else if (pcmk__str_eq(op->task, RSC_START, pcmk__str_casei)) {
                 pe__clear_action_flags(op, pe_action_runnable);
                 //pe__set_resource_flags(rsc, pe_rsc_block);
 
             } else {
                 // Cancel recurring actions, unless for stopped state
                 const char *interval_ms_s = NULL;
                 const char *target_rc_s = NULL;
                 char *rc_stopped = pcmk__itoa(PCMK_OCF_NOT_RUNNING);
 
                 interval_ms_s = g_hash_table_lookup(op->meta,
                                                     XML_LRM_ATTR_INTERVAL_MS);
                 target_rc_s = g_hash_table_lookup(op->meta,
                                                   XML_ATTR_TE_TARGET_RC);
                 if ((interval_ms_s != NULL)
                     && !pcmk__str_eq(interval_ms_s, "0", pcmk__str_none)
                     && !pcmk__str_eq(rc_stopped, target_rc_s, pcmk__str_none)) {
                     pe__clear_action_flags(op, pe_action_runnable);
                 }
                 free(rc_stopped);
             }
         }
         return false;
     }
 
     crm_debug("Assigning %s to %s", rsc->id, pe__node_name(chosen));
     rsc->allocated_to = pe__copy_node(chosen);
 
     chosen->details->allocated_rsc = g_list_prepend(chosen->details->allocated_rsc,
                                                     rsc);
     chosen->details->num_resources++;
     chosen->count++;
     pcmk__consume_node_capacity(chosen->details->utilization, rsc);
 
     if (pcmk_is_set(rsc->cluster->flags, pe_flag_show_utilization)) {
         out->message(out, "resource-util", rsc, chosen, __func__);
     }
     return true;
 }
 
 /*!
  * \internal
  * \brief Assign a specified resource (of any variant) to a node
  *
  * Assign a specified resource and its children (if any) to a specified node, if
  * the node can run the resource (or unconditionally, if \p force is true). Mark
  * the resources as no longer provisional. If the resources can't be assigned
  * (or \p chosen is NULL), unassign any previous assignments, set next role to
  * stopped, and update any existing actions scheduled for them.
  *
  * \param[in,out] rsc     Resource to assign
  * \param[in,out] node    Node to assign \p rsc to
  * \param[in]     force   If true, assign to \p node even if unavailable
  *
  * \return true if the assignment replaced or cleared a previous node
  *         assignment for \p rsc or any of its descendants, otherwise false
  *
  * \note Assigning a resource to the NULL node using this function is different
  *       from calling pcmk__unassign_resource(), in that it will also update any
  *       actions created for the resource.
  */
 bool
 pcmk__assign_resource(pe_resource_t *rsc, pe_node_t *node, bool force)
 {
     bool changed = false;
 
     if (rsc->children == NULL) {
         if (rsc->allocated_to != NULL) {
             changed = true;
         }
         pcmk__finalize_assignment(rsc, node, force);
 
     } else {
         for (GList *iter = rsc->children; iter != NULL; iter = iter->next) {
             pe_resource_t *child_rsc = (pe_resource_t *) iter->data;
 
             changed |= pcmk__assign_resource(child_rsc, node, force);
         }
     }
     return changed;
 }
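 
 /* Example (hypothetical caller): force a resource and all of its children
  * onto one node regardless of availability:
  *
  *     pe_node_t *target = pe_find_node(data_set->nodes, "node1");
  *
  *     if (pcmk__assign_resource(rsc, target, true)) {
  *         // at least one previous assignment was replaced or cleared
  *     }
  */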
 
 /*!
  * \internal
  * \brief Remove any assignment of a specified resource to a node
  *
  * If a specified resource has been assigned to a node, remove that assignment
  * and mark the resource as provisional again. This is not done recursively for
  * children, so it should be called only for primitives.
  *
  * \param[in,out] rsc  Resource to unassign
  */
 void
 pcmk__unassign_resource(pe_resource_t *rsc)
 {
     pe_node_t *old = rsc->allocated_to;
 
     if (old == NULL) {
         return;
     }
 
     crm_info("Unassigning %s from %s", rsc->id, pe__node_name(old));
     pe__set_resource_flags(rsc, pe_rsc_provisional);
     rsc->allocated_to = NULL;
 
     /* We're going to free the pe_node_t, but its details member is shared and
      * will remain, so update that appropriately first.
      */
     old->details->allocated_rsc = g_list_remove(old->details->allocated_rsc,
                                                 rsc);
     old->details->num_resources--;
     pcmk__release_node_capacity(old->details->utilization, rsc);
     free(old);
 }
 
 /*!
  * \internal
  * \brief Check whether a resource has reached its migration threshold on a node
  *
  * \param[in,out] rsc       Resource to check
  * \param[in]     node      Node to check
  * \param[out]    failed    If threshold has been reached, this will be set to
  *                          the resource that failed (possibly an ancestor of
  *                          \p rsc)
  *
  * \return true if the migration threshold has been reached, false otherwise
  */
 bool
 pcmk__threshold_reached(pe_resource_t *rsc, const pe_node_t *node,
                         pe_resource_t **failed)
 {
     int fail_count, remaining_tries;
     pe_resource_t *rsc_to_ban = rsc;
 
     // Migration threshold of 0 means never force away
     if (rsc->migration_threshold == 0) {
         return false;
     }
 
     // If we're ignoring failures, also ignore the migration threshold
     if (pcmk_is_set(rsc->flags, pe_rsc_failure_ignored)) {
         return false;
     }
 
     // If there are no failures, there's no need to force away
     fail_count = pe_get_failcount(node, rsc, NULL,
                                   pe_fc_effective|pe_fc_fillers, NULL);
     if (fail_count <= 0) {
         return false;
     }
 
     // If failed resource is anonymous clone instance, we'll force clone away
     if (!pcmk_is_set(rsc->flags, pe_rsc_unique)) {
         rsc_to_ban = uber_parent(rsc);
     }
 
     // How many more times recovery will be tried on this node
     remaining_tries = rsc->migration_threshold - fail_count;
 
     if (remaining_tries <= 0) {
         crm_warn("%s cannot run on %s due to reaching migration threshold "
                  "(clean up resource to allow again)"
                  CRM_XS " failures=%d migration-threshold=%d",
                  rsc_to_ban->id, pe__node_name(node), fail_count,
                  rsc->migration_threshold);
         if (failed != NULL) {
             *failed = rsc_to_ban;
         }
         return true;
     }
 
     crm_info("%s can fail %d more time%s on "
              "%s before reaching migration threshold (%d)",
              rsc_to_ban->id, remaining_tries, pcmk__plural_s(remaining_tries),
              pe__node_name(node), rsc->migration_threshold);
     return false;
 }
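 
 /* Example (illustrative only): how the scheduler itself uses this check (see
  * check_failure_threshold() in pcmk_scheduler.c) to ban a failed resource:
  *
  *     pe_resource_t *failed = NULL;
  *
  *     if (pcmk__threshold_reached(rsc, node, &failed)) {
  *         resource_location(failed, node, -INFINITY, "__fail_limit__",
  *                           rsc->cluster);
  *     }
  */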
 
 static void *
 convert_const_pointer(const void *ptr)
 {
     /* Worst function ever */
     return (void *)ptr;
 }
 
 /*!
  * \internal
  * \brief Get a node's weight
  *
  * \param[in] node     Unweighted node to check (for node ID)
  * \param[in] nodes    Hash table of weighted nodes to look for \p node in
  *
  * \return Node's weight, or -INFINITY if not found
  */
 static int
 get_node_weight(const pe_node_t *node, GHashTable *nodes)
 {
     pe_node_t *weighted_node = NULL;
 
     if ((node != NULL) && (nodes != NULL)) {
         weighted_node = g_hash_table_lookup(nodes, node->details->id);
     }
     return (weighted_node == NULL)? -INFINITY : weighted_node->weight;
 }
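 
 /* For example (illustrative only), a node absent from a resource's weighted
  * node table yields -INFINITY here, so cmp_resources() below treats that
  * resource as the worse candidate on that node and sorts it later.
  */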
 
 /*!
  * \internal
- * \brief Compare two resources according to which should be allocated first
+ * \brief Compare two resources according to which should be assigned first
  *
  * \param[in] a     First resource to compare
  * \param[in] b     Second resource to compare
  * \param[in] data  Sorted list of all nodes in cluster
  *
- * \return -1 if \p a should be allocated before \b, 0 if they are equal,
- *         or +1 if \p a should be allocated after \b
+ * \return -1 if \p a should be assigned before \p b, 0 if they are equal,
+ *         or +1 if \p a should be assigned after \p b
  */
 static gint
 cmp_resources(gconstpointer a, gconstpointer b, gpointer data)
 {
     const pe_resource_t *resource1 = a;
     const pe_resource_t *resource2 = b;
     const GList *nodes = (const GList *) data;
 
     int rc = 0;
     int r1_weight = -INFINITY;
     int r2_weight = -INFINITY;
     pe_node_t *r1_node = NULL;
     pe_node_t *r2_node = NULL;
     GHashTable *r1_nodes = NULL;
     GHashTable *r2_nodes = NULL;
     const char *reason = NULL;
 
-    // Resources with highest priority should be allocated first
+    // Resources with highest priority should be assigned first
     reason = "priority";
     r1_weight = resource1->priority;
     r2_weight = resource2->priority;
     if (r1_weight > r2_weight) {
         rc = -1;
         goto done;
     }
     if (r1_weight < r2_weight) {
         rc = 1;
         goto done;
     }
 
     // We need nodes to make any other useful comparisons
     reason = "no node list";
     if (nodes == NULL) {
         goto done;
     }
 
     // Calculate and log node weights
     resource1->cmds->add_colocated_node_scores(convert_const_pointer(resource1),
                                                resource1->id, &r1_nodes, NULL,
                                                1, pcmk__coloc_select_this_with);
     resource2->cmds->add_colocated_node_scores(convert_const_pointer(resource2),
                                                resource2->id, &r2_nodes, NULL,
                                                1, pcmk__coloc_select_this_with);
     pe__show_node_weights(true, NULL, resource1->id, r1_nodes,
                           resource1->cluster);
     pe__show_node_weights(true, NULL, resource2->id, r2_nodes,
                           resource2->cluster);
 
     // The resource with highest score on its current node goes first
     reason = "current location";
     if (resource1->running_on != NULL) {
         r1_node = pe__current_node(resource1);
     }
     if (resource2->running_on != NULL) {
         r2_node = pe__current_node(resource2);
     }
     r1_weight = get_node_weight(r1_node, r1_nodes);
     r2_weight = get_node_weight(r2_node, r2_nodes);
     if (r1_weight > r2_weight) {
         rc = -1;
         goto done;
     }
     if (r1_weight < r2_weight) {
         rc = 1;
         goto done;
     }
 
     // Otherwise a higher weight on any node will do
     reason = "score";
     for (const GList *iter = nodes; iter != NULL; iter = iter->next) {
         const pe_node_t *node = (const pe_node_t *) iter->data;
 
         r1_weight = get_node_weight(node, r1_nodes);
         r2_weight = get_node_weight(node, r2_nodes);
         if (r1_weight > r2_weight) {
             rc = -1;
             goto done;
         }
         if (r1_weight < r2_weight) {
             rc = 1;
             goto done;
         }
     }
 
 done:
     crm_trace("%s (%d)%s%s %c %s (%d)%s%s: %s",
               resource1->id, r1_weight,
               ((r1_node == NULL)? "" : " on "),
               ((r1_node == NULL)? "" : r1_node->details->id),
               ((rc < 0)? '>' : ((rc > 0)? '<' : '=')),
               resource2->id, r2_weight,
               ((r2_node == NULL)? "" : " on "),
               ((r2_node == NULL)? "" : r2_node->details->id),
               reason);
     if (r1_nodes != NULL) {
         g_hash_table_destroy(r1_nodes);
     }
     if (r2_nodes != NULL) {
         g_hash_table_destroy(r2_nodes);
     }
     return rc;
 }
 
 /*!
  * \internal
- * \brief Sort resources in the order they should be allocated to nodes
+ * \brief Sort resources in the order they should be assigned to nodes
  *
  * \param[in,out] data_set  Cluster working set
  */
 void
 pcmk__sort_resources(pe_working_set_t *data_set)
 {
     GList *nodes = g_list_copy(data_set->nodes);
 
     nodes = pcmk__sort_nodes(nodes, NULL);
     data_set->resources = g_list_sort_with_data(data_set->resources,
                                                 cmp_resources, nodes);
     g_list_free(nodes);
 }
diff --git a/lib/pacemaker/pcmk_sched_utilization.c b/lib/pacemaker/pcmk_sched_utilization.c
index 0a4bec373b..c443ef80af 100644
--- a/lib/pacemaker/pcmk_sched_utilization.c
+++ b/lib/pacemaker/pcmk_sched_utilization.c
@@ -1,469 +1,469 @@
 /*
  * Copyright 2014-2023 the Pacemaker project contributors
  *
  * The version control history for this file may have further details.
  *
  * This source code is licensed under the GNU General Public License version 2
  * or later (GPLv2+) WITHOUT ANY WARRANTY.
  */
 
 #include <crm_internal.h>
 #include <crm/msg_xml.h>
 #include <pacemaker-internal.h>
 
 #include "libpacemaker_private.h"
 
 // Name for a pseudo-op to use in ordering constraints for utilization
 #define LOAD_STOPPED "load_stopped"
 
 /*!
  * \internal
  * \brief Get integer utilization from a string
  *
  * \param[in] s  String representation of a node utilization value
  *
  * \return Integer equivalent of \p s
  * \todo It would make sense to restrict utilization values to nonnegative
  *       integers, but the documentation just says "integers" and we didn't
  *       restrict them initially, so for backward compatibility, allow any
  *       integer.
  */
 static int
 utilization_value(const char *s)
 {
     int value = 0;
 
     if ((s != NULL) && (pcmk__scan_min_int(s, &value, INT_MIN) == EINVAL)) {
         pe_warn("Using 0 for utilization instead of invalid value '%s'", value);
         value = 0;
     }
     return value;
 }
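 
 /* For example (illustrative only): utilization_value("2") returns 2,
  * utilization_value(NULL) returns 0, and an unparsable string such as "two"
  * logs a warning and also yields 0.
  */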
 
 
 /*
  * Functions for comparing node capacities
  */
 
 struct compare_data {
     const pe_node_t *node1;
     const pe_node_t *node2;
     bool node2_only;
     int result;
 };
 
 /*!
  * \internal
  * \brief Compare a single utilization attribute for two nodes
  *
  * Compare one utilization attribute for two nodes, incrementing the result if
  * the first node has greater capacity, and decrementing it if the second node
  * has greater capacity.
  *
  * \param[in]     key        Utilization attribute name to compare
  * \param[in]     value      Utilization attribute value to compare
  * \param[in,out] user_data  Comparison data (as struct compare_data*)
  */
 static void
 compare_utilization_value(gpointer key, gpointer value, gpointer user_data)
 {
     int node1_capacity = 0;
     int node2_capacity = 0;
     struct compare_data *data = user_data;
     const char *node2_value = NULL;
 
     if (data->node2_only) {
         if (g_hash_table_lookup(data->node1->details->utilization, key)) {
             return; // We've already compared this attribute
         }
     } else {
         node1_capacity = utilization_value((const char *) value);
     }
 
     node2_value = g_hash_table_lookup(data->node2->details->utilization, key);
     node2_capacity = utilization_value(node2_value);
 
     if (node1_capacity > node2_capacity) {
         data->result--;
     } else if (node1_capacity < node2_capacity) {
         data->result++;
     }
 }
 
 /*!
  * \internal
  * \brief Compare utilization capacities of two nodes
  *
  * \param[in] node1  First node to compare
  * \param[in] node2  Second node to compare
  *
  * \return Negative integer if node1 has more free capacity,
  *         0 if the capacities are equal, or a positive integer
  *         if node2 has more free capacity
  */
 int
 pcmk__compare_node_capacities(const pe_node_t *node1, const pe_node_t *node2)
 {
     struct compare_data data = {
         .node1      = node1,
         .node2      = node2,
         .node2_only = false,
         .result     = 0,
     };
 
     // Compare utilization values that node1 and maybe node2 have
     g_hash_table_foreach(node1->details->utilization, compare_utilization_value,
                          &data);
 
     // Compare utilization values that only node2 has
     data.node2_only = true;
     g_hash_table_foreach(node2->details->utilization, compare_utilization_value,
                          &data);
 
     return data.result;
 }
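 
 /* Example (illustrative only): pick whichever of two nodes has more free
  * capacity remaining:
  *
  *     const pe_node_t *best = node1;
  *
  *     if (pcmk__compare_node_capacities(node1, node2) > 0) {
  *         best = node2; // node2 has more free capacity
  *     }
  */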
 
 
 /*
  * Functions for updating node capacities
  */
 
 struct calculate_data {
     GHashTable *current_utilization;
     bool plus;
 };
 
 /*!
  * \internal
  * \brief Update a single utilization attribute with a new value
  *
  * \param[in]     key        Name of utilization attribute to update
  * \param[in]     value      Value to add or subtract
  * \param[in,out] user_data  Calculation data (as struct calculate_data *)
  */
 static void
 update_utilization_value(gpointer key, gpointer value, gpointer user_data)
 {
     int result = 0;
     const char *current = NULL;
     struct calculate_data *data = user_data;
 
     current = g_hash_table_lookup(data->current_utilization, key);
     if (data->plus) {
         result = utilization_value(current) + utilization_value(value);
     } else if (current) {
         result = utilization_value(current) - utilization_value(value);
     }
     g_hash_table_replace(data->current_utilization,
                          strdup(key), pcmk__itoa(result));
 }
 
 /*!
  * \internal
  * \brief Subtract a resource's utilization from node capacity
  *
  * \param[in,out] current_utilization  Current node utilization attributes
  * \param[in]     rsc                  Resource with utilization to subtract
  */
 void
 pcmk__consume_node_capacity(GHashTable *current_utilization,
                             const pe_resource_t *rsc)
 {
     struct calculate_data data = {
         .current_utilization = current_utilization,
         .plus = false,
     };
 
     g_hash_table_foreach(rsc->utilization, update_utilization_value, &data);
 }
 
 /*!
  * \internal
  * \brief Add a resource's utilization to node capacity
  *
  * \param[in,out] current_utilization  Current node utilization attributes
  * \param[in]     rsc                  Resource with utilization to add
  */
 void
 pcmk__release_node_capacity(GHashTable *current_utilization,
                             const pe_resource_t *rsc)
 {
     struct calculate_data data = {
         .current_utilization = current_utilization,
         .plus = true,
     };
 
     g_hash_table_foreach(rsc->utilization, update_utilization_value, &data);
 }
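 
 /* Example (illustrative only): the two updates are symmetric, so moving a
  * resource's load between hypothetical nodes updates both sides:
  *
  *     pcmk__release_node_capacity(old_node->details->utilization, rsc);
  *     pcmk__consume_node_capacity(new_node->details->utilization, rsc);
  */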
 
 
 /*
  * Functions for checking for sufficient node capacity
  */
 
 struct capacity_data {
     const pe_node_t *node;
     const char *rsc_id;
     bool is_enough;
 };
 
 /*!
  * \internal
  * \brief Check whether a single utilization attribute has sufficient capacity
  *
  * \param[in]     key        Name of utilization attribute to check
  * \param[in]     value      Amount of utilization required
  * \param[in,out] user_data  Capacity data (as struct capacity_data *)
  */
 static void
 check_capacity(gpointer key, gpointer value, gpointer user_data)
 {
     int required = 0;
     int remaining = 0;
     const char *node_value_s = NULL;
     struct capacity_data *data = user_data;
 
     node_value_s = g_hash_table_lookup(data->node->details->utilization, key);
 
     required = utilization_value(value);
     remaining = utilization_value(node_value_s);
 
     if (required > remaining) {
         crm_debug("Remaining capacity for %s on %s (%d) is insufficient "
                   "for resource %s usage (%d)",
                   (const char *) key, pe__node_name(data->node), remaining,
                   data->rsc_id, required);
         data->is_enough = false;
     }
 }
 
 /*!
  * \internal
  * \brief Check whether a node has sufficient capacity for a resource
  *
  * \param[in] node         Node to check
  * \param[in] rsc_id       ID of resource to check (for debug logs only)
  * \param[in] utilization  Required utilization amounts
  *
  * \return true if node has sufficient capacity for resource, otherwise false
  */
 static bool
 have_enough_capacity(const pe_node_t *node, const char *rsc_id,
                      GHashTable *utilization)
 {
     struct capacity_data data = {
         .node = node,
         .rsc_id = rsc_id,
         .is_enough = true,
     };
 
     g_hash_table_foreach(utilization, check_capacity, &data);
     return data.is_enough;
 }
 
 /*!
  * \internal
  * \brief Sum the utilization requirements of a list of resources
  *
- * \param[in] orig_rsc  Resource being allocated (for logging purposes)
+ * \param[in] orig_rsc  Resource being assigned (for logging purposes)
  * \param[in] rscs      Resources whose utilization should be summed
  *
  * \return Newly allocated hash table with sum of all utilization values
  * \note It is the caller's responsibility to free the return value using
  *       g_hash_table_destroy().
  */
 static GHashTable *
 sum_resource_utilization(const pe_resource_t *orig_rsc, GList *rscs)
 {
     GHashTable *utilization = pcmk__strkey_table(free, free);
 
     for (GList *iter = rscs; iter != NULL; iter = iter->next) {
         pe_resource_t *rsc = (pe_resource_t *) iter->data;
 
         rsc->cmds->add_utilization(rsc, orig_rsc, rscs, utilization);
     }
     return utilization;
 }
 
 /*!
  * \internal
  * \brief Ban resource from nodes with insufficient utilization capacity
  *
  * \param[in,out] rsc  Resource to check
  *
  * \return Allowed node for \p rsc with most spare capacity, if there are no
  *         nodes with enough capacity for \p rsc and all its colocated
  *         resources, otherwise NULL
  */
 const pe_node_t *
 pcmk__ban_insufficient_capacity(pe_resource_t *rsc)
 {
     bool any_capable = false;
     char *rscs_id = NULL;
     pe_node_t *node = NULL;
     const pe_node_t *most_capable_node = NULL;
     GList *colocated_rscs = NULL;
-    GHashTable *unallocated_utilization = NULL;
+    GHashTable *unassigned_utilization = NULL;
     GHashTableIter iter;
 
     CRM_CHECK(rsc != NULL, return NULL);
 
     // The default placement strategy ignores utilization
     if (pcmk__str_eq(rsc->cluster->placement_strategy, "default",
                      pcmk__str_casei)) {
         return NULL;
     }
 
     // Check whether any resources are colocated with this one
     colocated_rscs = rsc->cmds->colocated_resources(rsc, NULL, NULL);
     if (colocated_rscs == NULL) {
         return NULL;
     }
 
     rscs_id = crm_strdup_printf("%s and its colocated resources", rsc->id);
 
     // If rsc isn't in the list, add it so we include its utilization
     if (g_list_find(colocated_rscs, rsc) == NULL) {
         colocated_rscs = g_list_append(colocated_rscs, rsc);
     }
 
-    // Sum utilization of colocated resources that haven't been allocated yet
-    unallocated_utilization = sum_resource_utilization(rsc, colocated_rscs);
+    // Sum utilization of colocated resources that haven't been assigned yet
+    unassigned_utilization = sum_resource_utilization(rsc, colocated_rscs);
 
     // Check whether any node has enough capacity for all the resources
     g_hash_table_iter_init(&iter, rsc->allowed_nodes);
     while (g_hash_table_iter_next(&iter, NULL, (void **) &node)) {
         if (!pcmk__node_available(node, true, false)) {
             continue;
         }
 
-        if (have_enough_capacity(node, rscs_id, unallocated_utilization)) {
+        if (have_enough_capacity(node, rscs_id, unassigned_utilization)) {
             any_capable = true;
         }
 
         // Keep track of node with most free capacity
         if ((most_capable_node == NULL)
             || (pcmk__compare_node_capacities(node, most_capable_node) < 0)) {
             most_capable_node = node;
         }
     }
 
     if (any_capable) {
         // If so, ban resource from any node with insufficient capacity
         g_hash_table_iter_init(&iter, rsc->allowed_nodes);
         while (g_hash_table_iter_next(&iter, NULL, (void **) &node)) {
             if (pcmk__node_available(node, true, false)
                 && !have_enough_capacity(node, rscs_id,
-                                         unallocated_utilization)) {
+                                         unassigned_utilization)) {
                 pe_rsc_debug(rsc, "%s does not have enough capacity for %s",
                              pe__node_name(node), rscs_id);
                 resource_location(rsc, node, -INFINITY, "__limit_utilization__",
                                   rsc->cluster);
             }
         }
         most_capable_node = NULL;
 
     } else {
         // Otherwise, ban from nodes with insufficient capacity for rsc alone
         g_hash_table_iter_init(&iter, rsc->allowed_nodes);
         while (g_hash_table_iter_next(&iter, NULL, (void **) &node)) {
             if (pcmk__node_available(node, true, false)
                 && !have_enough_capacity(node, rsc->id, rsc->utilization)) {
                 pe_rsc_debug(rsc, "%s does not have enough capacity for %s",
                              pe__node_name(node), rsc->id);
                 resource_location(rsc, node, -INFINITY, "__limit_utilization__",
                                   rsc->cluster);
             }
         }
     }
 
-    g_hash_table_destroy(unallocated_utilization);
+    g_hash_table_destroy(unassigned_utilization);
     g_list_free(colocated_rscs);
     free(rscs_id);
 
     pe__show_node_weights(true, rsc, "Post-utilization",
                           rsc->allowed_nodes, rsc->cluster);
     return most_capable_node;
 }
 
 /*!
  * \internal
  * \brief Create a new load_stopped pseudo-op for a node
  *
  * \param[in]     node      Node to create op for
  * \param[in,out] data_set  Cluster working set
  *
  * \return Newly created load_stopped op
  */
 static pe_action_t *
 new_load_stopped_op(const pe_node_t *node, pe_working_set_t *data_set)
 {
     char *load_stopped_task = crm_strdup_printf(LOAD_STOPPED "_%s",
                                                 node->details->uname);
     pe_action_t *load_stopped = get_pseudo_op(load_stopped_task, data_set);
 
     if (load_stopped->node == NULL) {
         load_stopped->node = pe__copy_node(node);
         pe__clear_action_flags(load_stopped, pe_action_optional);
     }
     free(load_stopped_task);
     return load_stopped;
 }
 
 /*!
  * \internal
  * \brief Create utilization-related internal constraints for a resource
  *
  * \param[in,out] rsc            Resource to create constraints for
  * \param[in]     allowed_nodes  List of allowed next nodes for \p rsc
  */
 void
 pcmk__create_utilization_constraints(pe_resource_t *rsc,
                                      const GList *allowed_nodes)
 {
     const GList *iter = NULL;
     const pe_node_t *node = NULL;
     pe_action_t *load_stopped = NULL;
 
     pe_rsc_trace(rsc, "Creating utilization constraints for %s - strategy: %s",
                  rsc->id, rsc->cluster->placement_strategy);
 
     // "stop rsc then load_stopped" constraints for current nodes
     for (iter = rsc->running_on; iter != NULL; iter = iter->next) {
         node = (const pe_node_t *) iter->data;
         load_stopped = new_load_stopped_op(node, rsc->cluster);
         pcmk__new_ordering(rsc, stop_key(rsc), NULL, NULL, NULL, load_stopped,
                            pe_order_load, rsc->cluster);
     }
 
     // "load_stopped then start/migrate_to rsc" constraints for allowed nodes
     for (iter = allowed_nodes; iter != NULL; iter = iter->next) {
         node = (const pe_node_t *) iter->data;
         load_stopped = new_load_stopped_op(node, rsc->cluster);
         pcmk__new_ordering(NULL, NULL, load_stopped, rsc, start_key(rsc), NULL,
                            pe_order_load, rsc->cluster);
         pcmk__new_ordering(NULL, NULL, load_stopped,
                            rsc, pcmk__op_key(rsc->id, RSC_MIGRATE, 0), NULL,
                            pe_order_load, rsc->cluster);
     }
 }
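 
 /* The orderings created above chain through the per-node pseudo-ops, for
  * example (illustrative only, one current node and one allowed node):
  *
  *     stop rsc on node1      -> load_stopped_node1
  *     load_stopped_node2     -> start (or migrate_to) rsc on node2
  *
  * so rsc's load is known to be released before new load is placed.
  */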
 
 /*!
  * \internal
  * \brief Output node capacities if enabled
  *
  * \param[in]     desc      Prefix for output
  * \param[in,out] data_set  Cluster working set
  */
 void
 pcmk__show_node_capacities(const char *desc, pe_working_set_t *data_set)
 {
     if (!pcmk_is_set(data_set->flags, pe_flag_show_utilization)) {
         return;
     }
     for (const GList *iter = data_set->nodes; iter != NULL; iter = iter->next) {
         const pe_node_t *node = (const pe_node_t *) iter->data;
         pcmk__output_t *out = data_set->priv;
 
         out->message(out, "node-capacity", node, desc);
     }
 }
diff --git a/lib/pacemaker/pcmk_scheduler.c b/lib/pacemaker/pcmk_scheduler.c
index b4e670d865..edd21180f6 100644
--- a/lib/pacemaker/pcmk_scheduler.c
+++ b/lib/pacemaker/pcmk_scheduler.c
@@ -1,811 +1,811 @@
 /*
  * Copyright 2004-2023 the Pacemaker project contributors
  *
  * The version control history for this file may have further details.
  *
  * This source code is licensed under the GNU General Public License version 2
  * or later (GPLv2+) WITHOUT ANY WARRANTY.
  */
 
 #include <crm_internal.h>
 
 #include <crm/crm.h>
 #include <crm/cib.h>
 #include <crm/msg_xml.h>
 #include <crm/common/xml.h>
 #include <crm/common/xml_internal.h>
 
 #include <glib.h>
 
 #include <crm/pengine/status.h>
 #include <pacemaker-internal.h>
 #include "libpacemaker_private.h"
 
 CRM_TRACE_INIT_DATA(pacemaker);
 
 /*!
  * \internal
- * \brief Do deferred action checks after allocation
+ * \brief Do deferred action checks after assignment
  *
  * When unpacking the resource history, the scheduler checks for resource
  * configurations that have changed since an action was run. However, at that
  * time, bundles using the REMOTE_CONTAINER_HACK don't have their final
  * parameter information, so instead they add a deferred check to a list. This
  * function processes one entry in that list.
  *
  * \param[in,out] rsc     Resource that action history is for
  * \param[in,out] node    Node that action history is for
  * \param[in]     rsc_op  Action history entry
  * \param[in]     check   Type of deferred check to do
  */
 static void
 check_params(pe_resource_t *rsc, pe_node_t *node, const xmlNode *rsc_op,
              enum pe_check_parameters check)
 {
     const char *reason = NULL;
     op_digest_cache_t *digest_data = NULL;
 
     switch (check) {
         case pe_check_active:
             if (pcmk__check_action_config(rsc, node, rsc_op)
                 && pe_get_failcount(node, rsc, NULL, pe_fc_effective, NULL)) {
                 reason = "action definition changed";
             }
             break;
 
         case pe_check_last_failure:
             digest_data = rsc_action_digest_cmp(rsc, rsc_op, node,
                                                 rsc->cluster);
             switch (digest_data->rc) {
                 case RSC_DIGEST_UNKNOWN:
                     crm_trace("Resource %s history entry %s on %s has "
                               "no digest to compare",
                               rsc->id, ID(rsc_op), node->details->id);
                     break;
                 case RSC_DIGEST_MATCH:
                     break;
                 default:
                     reason = "resource parameters have changed";
                     break;
             }
             break;
     }
     if (reason != NULL) {
         pe__clear_failcount(rsc, node, reason, rsc->cluster);
     }
 }
 
 /*!
  * \internal
  * \brief Check whether a resource has failcount clearing scheduled on a node
  *
  * \param[in] node  Node to check
  * \param[in] rsc   Resource to check
  *
  * \return true if \p rsc has failcount clearing scheduled on \p node,
  *         otherwise false
  */
 static bool
 failcount_clear_action_exists(const pe_node_t *node, const pe_resource_t *rsc)
 {
     GList *list = pe__resource_actions(rsc, node, CRM_OP_CLEAR_FAILCOUNT, TRUE);
 
     if (list != NULL) {
         g_list_free(list);
         return true;
     }
     return false;
 }
 
 /*!
  * \internal
  * \brief Ban a resource from a node if it reached its failure threshold there
  *
  * \param[in,out] rsc   Resource to check failure threshold for
  * \param[in]     node  Node to check \p rsc on
  */
 static void
 check_failure_threshold(pe_resource_t *rsc, const pe_node_t *node)
 {
     // If this is a collective resource, apply recursively to children instead
     if (rsc->children != NULL) {
         g_list_foreach(rsc->children, (GFunc) check_failure_threshold,
                        (gpointer) node);
         return;
 
     } else if (failcount_clear_action_exists(node, rsc)) {
         /* Don't force the resource away from this node due to a failcount
          * that's going to be cleared.
          *
          * @TODO Failcount clearing can be scheduled in
          * pcmk__handle_rsc_config_changes() via process_rsc_history(), or in
          * schedule_resource_actions() via check_params(). This runs well before
          * then, so it cannot detect those, meaning we might check the migration
          * threshold when we shouldn't. Worst case, we stop or move the
          * resource, then move it back in the next transition.
          */
         return;
 
     } else {
         pe_resource_t *failed = NULL;
 
         if (pcmk__threshold_reached(rsc, node, &failed)) {
             resource_location(failed, node, -INFINITY, "__fail_limit__",
                               rsc->cluster);
         }
     }
 }
 
 /*!
  * \internal
  * \brief If resource has exclusive discovery, ban node if not allowed
  *
  * Location constraints have a resource-discovery option that allows users to
  * specify where probes are done for the affected resource. If this is set to
  * exclusive, probes will only be done on nodes listed in exclusive constraints.
  * This function bans the resource from the node if the node is not listed.
  *
  * \param[in,out] rsc   Resource to check
  * \param[in]     node  Node to check \p rsc on
  */
 static void
 apply_exclusive_discovery(pe_resource_t *rsc, const pe_node_t *node)
 {
     if (rsc->exclusive_discover
         || pe__const_top_resource(rsc, false)->exclusive_discover) {
         pe_node_t *match = NULL;
 
         // If this is a collective resource, apply recursively to children
         g_list_foreach(rsc->children, (GFunc) apply_exclusive_discovery,
                        (gpointer) node);
 
         match = g_hash_table_lookup(rsc->allowed_nodes, node->details->id);
         if ((match != NULL)
             && (match->rsc_discover_mode != pe_discover_exclusive)) {
             match->weight = -INFINITY;
         }
     }
 }
 
 /*!
  * \internal
  * \brief Apply stickiness to a resource if appropriate
  *
  * \param[in,out] rsc       Resource to check for stickiness
  * \param[in,out] data_set  Cluster working set
  */
 static void
 apply_stickiness(pe_resource_t *rsc, pe_working_set_t *data_set)
 {
     pe_node_t *node = NULL;
 
     // If this is a collective resource, apply recursively to children instead
     if (rsc->children != NULL) {
         g_list_foreach(rsc->children, (GFunc) apply_stickiness, data_set);
         return;
     }
 
     /* A resource is sticky if it is managed, has stickiness configured, and is
      * active on a single node.
      */
     if (!pcmk_is_set(rsc->flags, pe_rsc_managed)
         || (rsc->stickiness < 1) || !pcmk__list_of_1(rsc->running_on)) {
         return;
     }
 
     node = rsc->running_on->data;
 
     /* In a symmetric cluster, stickiness can always be used. In an
      * asymmetric cluster, we have to check whether the resource is still
      * allowed on the node, so we don't keep the resource somewhere it is no
      * longer explicitly enabled.
      */
     if (!pcmk_is_set(rsc->cluster->flags, pe_flag_symmetric_cluster)
         && (pe_hash_table_lookup(rsc->allowed_nodes,
                                  node->details->id) == NULL)) {
         pe_rsc_debug(rsc,
                      "Ignoring %s stickiness because the cluster is "
                      "asymmetric and %s is not explicitly allowed",
                      rsc->id, pe__node_name(node));
         return;
     }
 
     pe_rsc_debug(rsc, "Resource %s has %d stickiness on %s",
                  rsc->id, rsc->stickiness, pe__node_name(node));
     resource_location(rsc, node, rsc->stickiness, "stickiness", data_set);
 }
 
 /*!
  * \internal
  * \brief Apply shutdown locks for all resources as appropriate
  *
  * \param[in,out] data_set  Cluster working set
  */
 static void
 apply_shutdown_locks(pe_working_set_t *data_set)
 {
     if (!pcmk_is_set(data_set->flags, pe_flag_shutdown_lock)) {
         return;
     }
     for (GList *iter = data_set->resources; iter != NULL; iter = iter->next) {
         pe_resource_t *rsc = (pe_resource_t *) iter->data;
 
         rsc->cmds->shutdown_lock(rsc);
     }
 }
 
 /*!
  * \internal
  * \brief Calculate the number of available nodes in the cluster
  *
  * \param[in,out] data_set  Cluster working set
  */
 static void
 count_available_nodes(pe_working_set_t *data_set)
 {
     if (pcmk_is_set(data_set->flags, pe_flag_no_compat)) {
         return;
     }
 
     // @COMPAT for API backward compatibility only (cluster does not use value)
     for (GList *iter = data_set->nodes; iter != NULL; iter = iter->next) {
         pe_node_t *node = (pe_node_t *) iter->data;
 
         if ((node != NULL) && (node->weight >= 0) && node->details->online
             && (node->details->type != node_ping)) {
             data_set->max_valid_nodes++;
         }
     }
     crm_trace("Online node count: %d", data_set->max_valid_nodes);
 }
 
 /*!
  * \internal
  * \brief Apply node-specific scheduling criteria
  *
  * After the CIB has been unpacked, process node-specific scheduling criteria
  * including shutdown locks, location constraints, resource stickiness,
  * migration thresholds, and exclusive resource discovery.
  *
  * \param[in,out] data_set  Cluster working set
  */
 static void
 apply_node_criteria(pe_working_set_t *data_set)
 {
     crm_trace("Applying node-specific scheduling criteria");
     apply_shutdown_locks(data_set);
     count_available_nodes(data_set);
     pcmk__apply_locations(data_set);
     g_list_foreach(data_set->resources, (GFunc) apply_stickiness, data_set);
 
     for (GList *node_iter = data_set->nodes; node_iter != NULL;
          node_iter = node_iter->next) {
         for (GList *rsc_iter = data_set->resources; rsc_iter != NULL;
              rsc_iter = rsc_iter->next) {
             pe_node_t *node = (pe_node_t *) node_iter->data;
             pe_resource_t *rsc = (pe_resource_t *) rsc_iter->data;
 
             check_failure_threshold(rsc, node);
             apply_exclusive_discovery(rsc, node);
         }
     }
 }
 
 /*!
  * \internal
- * \brief Allocate resources to nodes
+ * \brief Assign resources to nodes
  *
  * \param[in,out] data_set  Cluster working set
  */
 static void
-allocate_resources(pe_working_set_t *data_set)
+assign_resources(pe_working_set_t *data_set)
 {
     GList *iter = NULL;
 
-    crm_trace("Allocating resources to nodes");
+    crm_trace("Assigning resources to nodes");
 
     if (!pcmk__str_eq(data_set->placement_strategy, "default", pcmk__str_casei)) {
         pcmk__sort_resources(data_set);
     }
     pcmk__show_node_capacities("Original", data_set);
 
     if (pcmk_is_set(data_set->flags, pe_flag_have_remote_nodes)) {
-        /* Allocate remote connection resources first (which will also allocate
-         * any colocation dependencies). If the connection is migrating, always
+        /* Assign remote connection resources first (which will also assign any
+         * colocation dependencies). If the connection is migrating, always
          * prefer the partial migration target.
          */
         for (iter = data_set->resources; iter != NULL; iter = iter->next) {
             pe_resource_t *rsc = (pe_resource_t *) iter->data;
 
             if (rsc->is_remote_node) {
-                pe_rsc_trace(rsc, "Allocating remote connection resource '%s'",
+                pe_rsc_trace(rsc, "Assigning remote connection resource '%s'",
                              rsc->id);
                 rsc->cmds->assign(rsc, rsc->partial_migration_target);
             }
         }
     }
 
     /* now do the rest of the resources */
     for (iter = data_set->resources; iter != NULL; iter = iter->next) {
         pe_resource_t *rsc = (pe_resource_t *) iter->data;
 
         if (!rsc->is_remote_node) {
-            pe_rsc_trace(rsc, "Allocating %s resource '%s'",
+            pe_rsc_trace(rsc, "Assigning %s resource '%s'",
                          crm_element_name(rsc->xml), rsc->id);
             rsc->cmds->assign(rsc, NULL);
         }
     }
 
     pcmk__show_node_capacities("Remaining", data_set);
 }
 
 /*!
  * \internal
  * \brief Schedule fail count clearing on online nodes if resource is orphaned
  *
  * \param[in,out] rsc       Resource to check
  * \param[in,out] data_set  Cluster working set
  */
 static void
 clear_failcounts_if_orphaned(pe_resource_t *rsc, pe_working_set_t *data_set)
 {
     if (!pcmk_is_set(rsc->flags, pe_rsc_orphan)) {
         return;
     }
     crm_trace("Clear fail counts for orphaned resource %s", rsc->id);
 
     /* There's no need to recurse into rsc->children because those
-     * should just be unallocated clone instances.
+     * should just be unassigned clone instances.
      */
 
     for (GList *iter = data_set->nodes; iter != NULL; iter = iter->next) {
         pe_node_t *node = (pe_node_t *) iter->data;
         pe_action_t *clear_op = NULL;
 
         if (!node->details->online) {
             continue;
         }
         if (pe_get_failcount(node, rsc, NULL, pe_fc_effective, NULL) == 0) {
             continue;
         }
 
         clear_op = pe__clear_failcount(rsc, node, "it is orphaned", data_set);
 
         /* We can't use order_action_then_stop() here because its
          * pe_order_preserve breaks things
          */
         pcmk__new_ordering(clear_op->rsc, NULL, clear_op, rsc, stop_key(rsc),
                            NULL, pe_order_optional, data_set);
     }
 }
 
 /*!
  * \internal
  * \brief Schedule any resource actions needed
  *
  * \param[in,out] data_set  Cluster working set
  */
 static void
 schedule_resource_actions(pe_working_set_t *data_set)
 {
     // Process deferred action checks
     pe__foreach_param_check(data_set, check_params);
     pe__free_param_checks(data_set);
 
     if (pcmk_is_set(data_set->flags, pe_flag_startup_probes)) {
         crm_trace("Scheduling probes");
         pcmk__schedule_probes(data_set);
     }
 
     if (pcmk_is_set(data_set->flags, pe_flag_stop_rsc_orphans)) {
         g_list_foreach(data_set->resources,
                        (GFunc) clear_failcounts_if_orphaned, data_set);
     }
 
     crm_trace("Scheduling resource actions");
     for (GList *iter = data_set->resources; iter != NULL; iter = iter->next) {
         pe_resource_t *rsc = (pe_resource_t *) iter->data;
 
         rsc->cmds->create_actions(rsc);
     }
 }
 
 /*!
  * \internal
  * \brief Check whether a resource or any of its descendants are managed
  *
  * \param[in] rsc  Resource to check
  *
  * \return true if resource or any descendant is managed, otherwise false
  */
 static bool
 is_managed(const pe_resource_t *rsc)
 {
     if (pcmk_is_set(rsc->flags, pe_rsc_managed)) {
         return true;
     }
     for (GList *iter = rsc->children; iter != NULL; iter = iter->next) {
         if (is_managed((pe_resource_t *) iter->data)) {
             return true;
         }
     }
     return false;
 }
 
 /*!
  * \internal
  * \brief Check whether any resources in the cluster are managed
  *
  * \param[in] data_set  Cluster working set
  *
  * \return true if any resource is managed, otherwise false
  */
 static bool
 any_managed_resources(const pe_working_set_t *data_set)
 {
     for (const GList *iter = data_set->resources;
          iter != NULL; iter = iter->next) {
         if (is_managed((const pe_resource_t *) iter->data)) {
             return true;
         }
     }
     return false;
 }
 
 /*!
  * \internal
  * \brief Check whether a node requires fencing
  *
  * \param[in] node          Node to check
  * \param[in] have_managed  Whether any resource in cluster is managed
  * \param[in] data_set      Cluster working set
  *
  * \return true if \p node should be fenced, otherwise false
  */
 static bool
 needs_fencing(const pe_node_t *node, bool have_managed,
               const pe_working_set_t *data_set)
 {
     return have_managed && node->details->unclean
            && pe_can_fence(data_set, node);
 }
 
 /*!
  * \internal
  * \brief Check whether a node requires shutdown
  *
  * \param[in] node          Node to check
  *
  * \return true if \p node should be shut down, otherwise false
  */
 static bool
 needs_shutdown(const pe_node_t *node)
 {
     if (pe__is_guest_or_remote_node(node)) {
         /* Do not send shutdown actions for Pacemaker Remote nodes.
          * @TODO We might come up with a good use for this in the future.
          */
         return false;
     }
     return node->details->online && node->details->shutdown;
 }
 
 /*!
  * \internal
  * \brief Track and order non-DC fencing
  *
  * \param[in,out] list      List of existing non-DC fencing actions
  * \param[in,out] action    Fencing action to prepend to \p list
  * \param[in]     data_set  Cluster working set
  *
  * \return (Possibly new) head of \p list
  */
 static GList *
 add_nondc_fencing(GList *list, pe_action_t *action,
                   const pe_working_set_t *data_set)
 {
     if (!pcmk_is_set(data_set->flags, pe_flag_concurrent_fencing)
         && (list != NULL)) {
         /* Concurrent fencing is disabled, so order each non-DC
          * fencing in a chain. If there is any DC fencing or
          * shutdown, it will be ordered after the last action in the
          * chain later.
          */
         order_actions((pe_action_t *) list->data, action, pe_order_optional);
     }
     return g_list_prepend(list, action);
 }
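 
 /* With concurrent fencing disabled, successive calls here build a chain, for
  * example (illustrative only): fence node1 -> fence node2 -> fence node3,
  * where the list head is the most recently added (and thus last) action.
  */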
 
 /*!
  * \internal
  * \brief Schedule a node for fencing
  *
  * \param[in,out] node      Node that requires fencing
  * \param[in,out] data_set  Cluster working set
  *
  * \return Scheduled fencing action for \p node
  */
 static pe_action_t *
 schedule_fencing(pe_node_t *node, pe_working_set_t *data_set)
 {
     pe_action_t *fencing = pe_fence_op(node, NULL, FALSE, "node is unclean",
                                        FALSE, data_set);
 
     pe_warn("Scheduling node %s for fencing", pe__node_name(node));
     pcmk__order_vs_fence(fencing, data_set);
     return fencing;
 }
 
 /*!
  * \internal
  * \brief Create and order node fencing and shutdown actions
  *
  * \param[in,out] data_set  Cluster working set
  */
 static void
 schedule_fencing_and_shutdowns(pe_working_set_t *data_set)
 {
     pe_action_t *dc_down = NULL;
     bool integrity_lost = false;
     bool have_managed = any_managed_resources(data_set);
     GList *fencing_ops = NULL;
     GList *shutdown_ops = NULL;
 
     crm_trace("Scheduling fencing and shutdowns as needed");
     if (!have_managed) {
         crm_notice("No fencing will be done until there are resources to manage");
     }
 
     // Check each node for whether it needs fencing or shutdown
     for (GList *iter = data_set->nodes; iter != NULL; iter = iter->next) {
         pe_node_t *node = (pe_node_t *) iter->data;
         pe_action_t *fencing = NULL;
 
         /* Guest nodes are "fenced" by recovering their container resource,
          * so handle them separately.
          */
         if (pe__is_guest_node(node)) {
             if (node->details->remote_requires_reset && have_managed
                 && pe_can_fence(data_set, node)) {
                 pcmk__fence_guest(node);
             }
             continue;
         }
 
         if (needs_fencing(node, have_managed, data_set)) {
             fencing = schedule_fencing(node, data_set);
 
             // Track DC and non-DC fence actions separately
             if (node->details->is_dc) {
                 dc_down = fencing;
             } else {
                 fencing_ops = add_nondc_fencing(fencing_ops, fencing, data_set);
             }
 
         } else if (needs_shutdown(node)) {
             pe_action_t *down_op = pcmk__new_shutdown_action(node);
 
             // Track DC and non-DC shutdown actions separately
             if (node->details->is_dc) {
                 dc_down = down_op;
             } else {
                 shutdown_ops = g_list_prepend(shutdown_ops, down_op);
             }
         }
 
         if ((fencing == NULL) && node->details->unclean) {
             integrity_lost = true;
             pe_warn("Node %s is unclean but cannot be fenced",
                     pe__node_name(node));
         }
     }
 
     if (integrity_lost) {
         if (!pcmk_is_set(data_set->flags, pe_flag_stonith_enabled)) {
             pe_warn("Resource functionality and data integrity cannot be "
                     "guaranteed (configure, enable, and test fencing to "
                     "correct this)");
 
         } else if (!pcmk_is_set(data_set->flags, pe_flag_have_quorum)) {
             crm_notice("Unclean nodes will not be fenced until quorum is "
                        "attained or no-quorum-policy is set to ignore");
         }
     }
 
     if (dc_down != NULL) {
         /* Order any non-DC shutdowns before any DC shutdown, to avoid repeated
          * DC elections. However, we don't want to order non-DC shutdowns before
          * a DC *fencing*, because even though we don't want a node that's
          * shutting down to become DC, the DC fencing could be ordered before a
          * clone stop that's also ordered before the shutdowns, thus leading to
          * a graph loop.
          */
         if (pcmk__str_eq(dc_down->task, CRM_OP_SHUTDOWN, pcmk__str_none)) {
             pcmk__order_after_each(dc_down, shutdown_ops);
         }
 
         // Order any non-DC fencing before any DC fencing or shutdown
 
         if (pcmk_is_set(data_set->flags, pe_flag_concurrent_fencing)) {
             /* With concurrent fencing, order each non-DC fencing action
              * separately before any DC fencing or shutdown.
              */
             pcmk__order_after_each(dc_down, fencing_ops);
         } else if (fencing_ops != NULL) {
             /* Without concurrent fencing, the non-DC fencing actions are
              * already ordered relative to each other, so we just need to order
              * the DC fencing after the last action in the chain (which is the
              * first item in the list).
              */
             order_actions((pe_action_t *) fencing_ops->data, dc_down,
                           pe_order_optional);
         }
     }
     g_list_free(fencing_ops);
     g_list_free(shutdown_ops);
 }
 
 static void
 log_resource_details(pe_working_set_t *data_set)
 {
     pcmk__output_t *out = data_set->priv;
     GList *all = NULL;
 
     /* We need a list of nodes that we are allowed to output information for.
      * This is necessary because out->message for all the resource-related
      * messages expects such a list, due to the `crm_mon --node=` feature.  Here,
      * we just make it a list of all the nodes.
      */
     all = g_list_prepend(all, (gpointer) "*");
 
     for (GList *item = data_set->resources; item != NULL; item = item->next) {
         pe_resource_t *rsc = (pe_resource_t *) item->data;
 
         // Log all resources except inactive orphans
         if (!pcmk_is_set(rsc->flags, pe_rsc_orphan)
             || (rsc->role != RSC_ROLE_STOPPED)) {
             out->message(out, crm_map_element_name(rsc->xml), 0, rsc, all, all);
         }
     }
 
     g_list_free(all);
 }
 
 static void
 log_all_actions(pe_working_set_t *data_set)
 {
     /* This only ever outputs to the log, so ignore whatever output object was
      * previously set and just log instead.
      */
     pcmk__output_t *prev_out = data_set->priv;
     pcmk__output_t *out = NULL;
 
     if (pcmk__log_output_new(&out) != pcmk_rc_ok) {
         return;
     }
 
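+    // Register the message functions needed to log actions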
     pe__register_messages(out);
     pcmk__register_lib_messages(out);
     pcmk__output_set_log_level(out, LOG_NOTICE);
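+    // Temporarily redirect the working set's output to the new log object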
     data_set->priv = out;
 
     out->begin_list(out, NULL, NULL, "Actions");
     pcmk__output_actions(data_set);
     out->end_list(out);
     out->finish(out, CRM_EX_OK, true, NULL);
     pcmk__output_free(out);
 
     data_set->priv = prev_out;
 }
 
 /*!
  * \internal
  * \brief Log all required but unrunnable actions at trace level
  *
  * \param[in] data_set  Cluster working set
  */
 static void
 log_unrunnable_actions(const pe_working_set_t *data_set)
 {
-    const uint64_t flags = pe_action_optional|pe_action_runnable|pe_action_pseudo;
+    const uint64_t flags = pe_action_optional
+                           |pe_action_runnable
+                           |pe_action_pseudo;
 
     crm_trace("Required but unrunnable actions:");
     for (const GList *iter = data_set->actions;
          iter != NULL; iter = iter->next) {
 
         const pe_action_t *action = (const pe_action_t *) iter->data;
 
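+        /* Log only actions that are required (not optional), unrunnable, and
+         * real (not pseudo-actions)
+         */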
         if (!pcmk_any_flags_set(action->flags, flags)) {
             pcmk__log_action("\t", action, true);
         }
     }
 }
 
 /*!
  * \internal
  * \brief Unpack the CIB for scheduling
  *
  * \param[in,out] cib       CIB XML to unpack (may be NULL if already unpacked)
  * \param[in]     flags     Working set flags to set in addition to defaults
  * \param[in,out] data_set  Cluster working set
  */
 static void
 unpack_cib(xmlNode *cib, unsigned long long flags, pe_working_set_t *data_set)
 {
-    const char* localhost_save = NULL;
+    const char *localhost_save = NULL;
 
     if (pcmk_is_set(data_set->flags, pe_flag_have_status)) {
         crm_trace("Reusing previously calculated cluster status");
         pe__set_working_set_flags(data_set, flags);
         return;
     }
 
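+    /* Save the local node name, since set_working_set_defaults() below will
+     * zero the entire working set
+     */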
     if (data_set->localhost) {
         localhost_save = data_set->localhost;
     }
 
     CRM_ASSERT(cib != NULL);
     crm_trace("Calculating cluster status");
 
     /* This will zero the entire struct without freeing anything first, so
      * callers should never call pcmk__schedule_actions() with a populated data
      * set unless pe_flag_have_status is set (i.e. cluster_status() was
      * previously called, whether directly or via pcmk__schedule_actions()).
      */
     set_working_set_defaults(data_set);
 
     if (localhost_save) {
         data_set->localhost = localhost_save;
     }
 
     pe__set_working_set_flags(data_set, flags);
     data_set->input = cib;
     cluster_status(data_set); // Sets pe_flag_have_status
 }
 
 /*!
  * \internal
  * \brief Run the scheduler for a given CIB
  *
  * \param[in,out] cib       CIB XML to use as scheduler input
  * \param[in]     flags     Working set flags to set in addition to defaults
  * \param[in,out] data_set  Cluster working set
  */
 void
 pcmk__schedule_actions(xmlNode *cib, unsigned long long flags,
                        pe_working_set_t *data_set)
 {
     unpack_cib(cib, flags, data_set);
-    pcmk__set_allocation_methods(data_set);
+    pcmk__set_assignment_methods(data_set);
     pcmk__apply_node_health(data_set);
     pcmk__unpack_constraints(data_set);
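+
+    // Stop here if we are only validating the configuration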
     if (pcmk_is_set(data_set->flags, pe_flag_check_config)) {
         return;
     }
 
-    if (!pcmk_is_set(data_set->flags, pe_flag_quick_location) &&
-         pcmk__is_daemon) {
+    if (!pcmk_is_set(data_set->flags, pe_flag_quick_location)
+        && pcmk__is_daemon) {
         log_resource_details(data_set);
     }
 
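+    // Apply node-specific criteria, such as location constraints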
     apply_node_criteria(data_set);
 
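+    // Stop here if we are only interested in resource location information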
     if (pcmk_is_set(data_set->flags, pe_flag_quick_location)) {
         return;
     }
 
     pcmk__create_internal_constraints(data_set);
     pcmk__handle_rsc_config_changes(data_set);
-    allocate_resources(data_set);
+    assign_resources(data_set);
     schedule_resource_actions(data_set);
 
-    /* Remote ordering constraints need to happen prior to calculating fencing
-     * because it is one more place we can mark nodes as needing fencing.
-     */
+    /* Remote ordering constraints need to happen before fencing is calculated,
+     * because ordering remote connection actions is one more place where nodes
+     * can be marked as needing fencing.
+     */
     pcmk__order_remote_connection_actions(data_set);
 
     schedule_fencing_and_shutdowns(data_set);
     pcmk__apply_orderings(data_set);
     log_all_actions(data_set);
     pcmk__create_graph(data_set);
 
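+    // The extra pass through the actions is worthwhile only when tracing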
     if (get_crm_log_level() == LOG_TRACE) {
         log_unrunnable_actions(data_set);
     }
 }