diff --git a/cts/cli/regression.acls.exp b/cts/cli/regression.acls.exp
index be91b93455..17ef44847a 100644
--- a/cts/cli/regression.acls.exp
+++ b/cts/cli/regression.acls.exp
@@ -1,2875 +1,2875 @@
=#=#=#= Begin test: Configure some ACLs =#=#=#=
=#=#=#= Current cib after: Configure some ACLs =#=#=#=
=#=#=#= End test: Configure some ACLs - OK (0) =#=#=#=
* Passed: cibadmin - Configure some ACLs
=#=#=#= Begin test: Enable ACLs =#=#=#=
=#=#=#= Current cib after: Enable ACLs =#=#=#=
=#=#=#= End test: Enable ACLs - OK (0) =#=#=#=
* Passed: crm_attribute - Enable ACLs
=#=#=#= Begin test: Set cluster option =#=#=#=
=#=#=#= Current cib after: Set cluster option =#=#=#=
=#=#=#= End test: Set cluster option - OK (0) =#=#=#=
* Passed: crm_attribute - Set cluster option
=#=#=#= Begin test: New ACL role =#=#=#=
=#=#=#= Current cib after: New ACL role =#=#=#=
=#=#=#= End test: New ACL role - OK (0) =#=#=#=
* Passed: cibadmin - New ACL role
=#=#=#= Begin test: New ACL target =#=#=#=
=#=#=#= Current cib after: New ACL target =#=#=#=
=#=#=#= End test: New ACL target - OK (0) =#=#=#=
* Passed: cibadmin - New ACL target
=#=#=#= Begin test: Another ACL role =#=#=#=
=#=#=#= Current cib after: Another ACL role =#=#=#=
=#=#=#= End test: Another ACL role - OK (0) =#=#=#=
* Passed: cibadmin - Another ACL role
=#=#=#= Begin test: Another ACL target =#=#=#=
=#=#=#= Current cib after: Another ACL target =#=#=#=
=#=#=#= End test: Another ACL target - OK (0) =#=#=#=
* Passed: cibadmin - Another ACL target
=#=#=#= Begin test: Updated ACL =#=#=#=
=#=#=#= Current cib after: Updated ACL =#=#=#=
=#=#=#= End test: Updated ACL - OK (0) =#=#=#=
* Passed: cibadmin - Updated ACL
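The ACL objects exercised by the tests above live in the acls section of the CIB and are created with cibadmin. The exact roles and targets used by the test are not shown in this diff; the following is only an illustrative sketch, with a placeholder role id, permission xpath, and target user:

    # Hypothetical role granting write access to the resources section
    cibadmin --create --scope acls --xml-text \
        '<acl_role id="example-role">
           <acl_permission id="example-role-rsc" kind="write" xpath="//resources"/>
         </acl_role>'

    # Hypothetical target binding an existing system user to that role
    cibadmin --create --scope acls --xml-text \
        '<acl_target id="niceguy">
           <role id="example-role"/>
         </acl_target>'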
=#=#=#= Begin test: unknownguy: Query configuration =#=#=#=
Call failed: Permission denied
=#=#=#= End test: unknownguy: Query configuration - Insufficient privileges (4) =#=#=#=
* Passed: cibadmin - unknownguy: Query configuration
=#=#=#= Begin test: unknownguy: Set enable-acl =#=#=#=
crm_attribute: Error performing operation: Permission denied
=#=#=#= End test: unknownguy: Set enable-acl - Insufficient privileges (4) =#=#=#=
* Passed: crm_attribute - unknownguy: Set enable-acl
=#=#=#= Begin test: unknownguy: Set stonith-enabled =#=#=#=
crm_attribute: Error performing operation: Permission denied
=#=#=#= End test: unknownguy: Set stonith-enabled - Insufficient privileges (4) =#=#=#=
* Passed: crm_attribute - unknownguy: Set stonith-enabled
=#=#=#= Begin test: unknownguy: Create a resource =#=#=#=
pcmk__check_acl trace: User 'unknownguy' without ACLs denied read/write access to /cib/configuration/resources/primitive[@id='dummy']
pcmk__apply_creation_acl trace: ACLs disallow creation of <primitive> with id="dummy"
Call failed: Permission denied
=#=#=#= End test: unknownguy: Create a resource - Insufficient privileges (4) =#=#=#=
* Passed: cibadmin - unknownguy: Create a resource
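The per-user denials above can be reproduced manually. A minimal sketch, assuming the CIB_user environment variable that the Pacemaker CLI tools honor for ACL evaluation when invoked with sufficient privilege:

    # Query the CIB as a user with no matching ACL target; expect "Permission denied"
    CIB_user=unknownguy cibadmin --query

    # Attempt to change a cluster property as the same user; expect the same denial
    CIB_user=unknownguy crm_attribute --name enable-acl --update true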
=#=#=#= Begin test: l33t-haxor: Query configuration =#=#=#=
Call failed: Permission denied
=#=#=#= End test: l33t-haxor: Query configuration - Insufficient privileges (4) =#=#=#=
* Passed: cibadmin - l33t-haxor: Query configuration
=#=#=#= Begin test: l33t-haxor: Set enable-acl =#=#=#=
crm_attribute: Error performing operation: Permission denied
=#=#=#= End test: l33t-haxor: Set enable-acl - Insufficient privileges (4) =#=#=#=
* Passed: crm_attribute - l33t-haxor: Set enable-acl
=#=#=#= Begin test: l33t-haxor: Set stonith-enabled =#=#=#=
crm_attribute: Error performing operation: Permission denied
=#=#=#= End test: l33t-haxor: Set stonith-enabled - Insufficient privileges (4) =#=#=#=
* Passed: crm_attribute - l33t-haxor: Set stonith-enabled
=#=#=#= Begin test: l33t-haxor: Create a resource =#=#=#=
pcmk__check_acl trace: Parent ACL denies user 'l33t-haxor' read/write access to /cib/configuration/resources/primitive[@id='dummy']
pcmk__apply_creation_acl trace: ACLs disallow creation of <primitive> with id="dummy"
Call failed: Permission denied
=#=#=#= End test: l33t-haxor: Create a resource - Insufficient privileges (4) =#=#=#=
* Passed: cibadmin - l33t-haxor: Create a resource
=#=#=#= Begin test: niceguy: Query configuration =#=#=#=
=#=#=#= End test: niceguy: Query configuration - OK (0) =#=#=#=
* Passed: cibadmin - niceguy: Query configuration
=#=#=#= Begin test: niceguy: Set enable-acl =#=#=#=
pcmk__check_acl trace: Default ACL denies user 'niceguy' read/write access to /cib/configuration/crm_config/cluster_property_set[@id='cib-bootstrap-options']/nvpair[@id='cib-bootstrap-options-enable-acl'][@value]
Error setting enable-acl=false (section=crm_config, set=): Permission denied
crm_attribute: Error performing operation: Permission denied
=#=#=#= End test: niceguy: Set enable-acl - Insufficient privileges (4) =#=#=#=
* Passed: crm_attribute - niceguy: Set enable-acl
=#=#=#= Begin test: niceguy: Set stonith-enabled =#=#=#=
pcmk__apply_creation_acl trace: ACLs allow creation of <nvpair> with id="cib-bootstrap-options-stonith-enabled"
=#=#=#= Current cib after: niceguy: Set stonith-enabled =#=#=#=
=#=#=#= End test: niceguy: Set stonith-enabled - OK (0) =#=#=#=
* Passed: crm_attribute - niceguy: Set stonith-enabled
=#=#=#= Begin test: niceguy: Create a resource =#=#=#=
pcmk__check_acl trace: Default ACL denies user 'niceguy' read/write access to /cib/configuration/resources/primitive[@id='dummy']
pcmk__apply_creation_acl trace: ACLs disallow creation of <primitive> with id="dummy"
Call failed: Permission denied
=#=#=#= End test: niceguy: Create a resource - Insufficient privileges (4) =#=#=#=
* Passed: cibadmin - niceguy: Create a resource
=#=#=#= Begin test: root: Query configuration =#=#=#=
=#=#=#= End test: root: Query configuration - OK (0) =#=#=#=
* Passed: cibadmin - root: Query configuration
=#=#=#= Begin test: root: Set stonith-enabled =#=#=#=
=#=#=#= Current cib after: root: Set stonith-enabled =#=#=#=
=#=#=#= End test: root: Set stonith-enabled - OK (0) =#=#=#=
* Passed: crm_attribute - root: Set stonith-enabled
=#=#=#= Begin test: root: Create a resource =#=#=#=
=#=#=#= Current cib after: root: Create a resource =#=#=#=
=#=#=#= End test: root: Create a resource - OK (0) =#=#=#=
* Passed: cibadmin - root: Create a resource
=#=#=#= Begin test: root: Create another resource (with description) =#=#=#=
=#=#=#= Current cib after: root: Create another resource (with description) =#=#=#=
=#=#=#= End test: root: Create another resource (with description) - OK (0) =#=#=#=
* Passed: cibadmin - root: Create another resource (with description)
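For comparison with the denied attempts above, the root test cases create resources directly. A minimal sketch of the kind of command involved (the exact XML used by the test is not shown in this diff):

    # Create a Dummy resource in the resources section
    cibadmin --create --scope resources --xml-text \
        '<primitive id="dummy" class="ocf" provider="pacemaker" type="Dummy"/>'

    # Create a second one carrying a description attribute
    cibadmin --create --scope resources --xml-text \
        '<primitive id="dummy_desc" class="ocf" provider="pacemaker" type="Dummy" description="dummy with description"/>'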
=#=#=#= Begin test: l33t-haxor: Create a resource meta attribute =#=#=#=
Could not obtain the current CIB: Permission denied
crm_resource: Error performing operation: Insufficient privileges
=#=#=#= End test: l33t-haxor: Create a resource meta attribute - Insufficient privileges (4) =#=#=#=
* Passed: crm_resource - l33t-haxor: Create a resource meta attribute
=#=#=#= Begin test: l33t-haxor: Query a resource meta attribute =#=#=#=
Could not obtain the current CIB: Permission denied
crm_resource: Error performing operation: Insufficient privileges
=#=#=#= End test: l33t-haxor: Query a resource meta attribute - Insufficient privileges (4) =#=#=#=
* Passed: crm_resource - l33t-haxor: Query a resource meta attribute
=#=#=#= Begin test: l33t-haxor: Remove a resource meta attribute =#=#=#=
Could not obtain the current CIB: Permission denied
crm_resource: Error performing operation: Insufficient privileges
=#=#=#= End test: l33t-haxor: Remove a resource meta attribute - Insufficient privileges (4) =#=#=#=
* Passed: crm_resource - l33t-haxor: Remove a resource meta attribute
=#=#=#= Begin test: niceguy: Create a resource meta attribute =#=#=#=
-unpack_resources error: Resource start-up disabled since no STONITH resources have been defined
-unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option
-unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity
+error: Resource start-up disabled since no STONITH resources have been defined
+error: Either configure some or disable STONITH with the stonith-enabled option
+error: NOTE: Clusters with shared data need STONITH to ensure data integrity
pcmk__apply_creation_acl trace: Creation of <meta_attributes> scaffolding with id="dummy-meta_attributes" is implicitly allowed
pcmk__apply_creation_acl trace: ACLs allow creation of <nvpair> with id="dummy-meta_attributes-target-role"
Set 'dummy' option: id=dummy-meta_attributes-target-role set=dummy-meta_attributes name=target-role value=Stopped
=#=#=#= Current cib after: niceguy: Create a resource meta attribute =#=#=#=
=#=#=#= End test: niceguy: Create a resource meta attribute - OK (0) =#=#=#=
* Passed: crm_resource - niceguy: Create a resource meta attribute
=#=#=#= Begin test: niceguy: Query a resource meta attribute =#=#=#=
-unpack_resources error: Resource start-up disabled since no STONITH resources have been defined
-unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option
-unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity
+error: Resource start-up disabled since no STONITH resources have been defined
+error: Either configure some or disable STONITH with the stonith-enabled option
+error: NOTE: Clusters with shared data need STONITH to ensure data integrity
Stopped
=#=#=#= Current cib after: niceguy: Query a resource meta attribute =#=#=#=
=#=#=#= End test: niceguy: Query a resource meta attribute - OK (0) =#=#=#=
* Passed: crm_resource - niceguy: Query a resource meta attribute
=#=#=#= Begin test: niceguy: Remove a resource meta attribute =#=#=#=
-unpack_resources error: Resource start-up disabled since no STONITH resources have been defined
-unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option
-unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity
+error: Resource start-up disabled since no STONITH resources have been defined
+error: Either configure some or disable STONITH with the stonith-enabled option
+error: NOTE: Clusters with shared data need STONITH to ensure data integrity
Deleted 'dummy' option: id=dummy-meta_attributes-target-role name=target-role
=#=#=#= Current cib after: niceguy: Remove a resource meta attribute =#=#=#=
=#=#=#= End test: niceguy: Remove a resource meta attribute - OK (0) =#=#=#=
* Passed: crm_resource - niceguy: Remove a resource meta attribute
=#=#=#= Begin test: niceguy: Create a resource meta attribute =#=#=#=
-unpack_resources error: Resource start-up disabled since no STONITH resources have been defined
-unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option
-unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity
+error: Resource start-up disabled since no STONITH resources have been defined
+error: Either configure some or disable STONITH with the stonith-enabled option
+error: NOTE: Clusters with shared data need STONITH to ensure data integrity
pcmk__apply_creation_acl trace: ACLs allow creation of <nvpair> with id="dummy-meta_attributes-target-role"
Set 'dummy' option: id=dummy-meta_attributes-target-role set=dummy-meta_attributes name=target-role value=Started
=#=#=#= Current cib after: niceguy: Create a resource meta attribute =#=#=#=
=#=#=#= End test: niceguy: Create a resource meta attribute - OK (0) =#=#=#=
* Passed: crm_resource - niceguy: Create a resource meta attribute
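The meta-attribute tests above correspond to crm_resource's --meta interface. A sketch of the equivalent manual commands, run as an appropriately privileged or ACL-enabled user:

    # Create or update the target-role meta attribute on the 'dummy' resource
    crm_resource --resource dummy --meta --set-parameter target-role --parameter-value Stopped

    # Query it
    crm_resource --resource dummy --meta --get-parameter target-role

    # Remove it
    crm_resource --resource dummy --meta --delete-parameter target-role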
=#=#=#= Begin test: badidea: Query configuration - implied deny =#=#=#=
=#=#=#= End test: badidea: Query configuration - implied deny - OK (0) =#=#=#=
* Passed: cibadmin - badidea: Query configuration - implied deny
=#=#=#= Begin test: betteridea: Query configuration - explicit deny =#=#=#=
=#=#=#= End test: betteridea: Query configuration - explicit deny - OK (0) =#=#=#=
* Passed: cibadmin - betteridea: Query configuration - explicit deny
=#=#=#= Begin test: niceguy: Replace - remove acls =#=#=#=
pcmk__check_acl trace: Default ACL denies user 'niceguy' read/write access to /cib[@epoch]
pcmk__check_acl trace: Default ACL denies user 'niceguy' read/write access to /cib/configuration/acls
Call failed: Permission denied
=#=#=#= End test: niceguy: Replace - remove acls - Insufficient privileges (4) =#=#=#=
* Passed: cibadmin - niceguy: Replace - remove acls
=#=#=#= Begin test: niceguy: Replace - create resource =#=#=#=
pcmk__check_acl trace: Default ACL denies user 'niceguy' read/write access to /cib[@epoch]
pcmk__check_acl trace: Default ACL denies user 'niceguy' read/write access to /cib/configuration/resources/primitive[@id='dummy2']
pcmk__apply_creation_acl trace: ACLs disallow creation of <primitive> with id="dummy2"
Call failed: Permission denied
=#=#=#= End test: niceguy: Replace - create resource - Insufficient privileges (4) =#=#=#=
* Passed: cibadmin - niceguy: Replace - create resource
=#=#=#= Begin test: niceguy: Replace - modify attribute (deny) =#=#=#=
pcmk__check_acl trace: Default ACL denies user 'niceguy' read/write access to /cib[@epoch]
pcmk__check_acl trace: Default ACL denies user 'niceguy' read/write access to /cib/configuration/crm_config/cluster_property_set[@id='cib-bootstrap-options']/nvpair[@id='cib-bootstrap-options-enable-acl'][@value]
Call failed: Permission denied
=#=#=#= End test: niceguy: Replace - modify attribute (deny) - Insufficient privileges (4) =#=#=#=
* Passed: cibadmin - niceguy: Replace - modify attribute (deny)
=#=#=#= Begin test: niceguy: Replace - delete attribute (deny) =#=#=#=
pcmk__check_acl trace: Default ACL denies user 'niceguy' read/write access to /cib[@epoch]
pcmk__check_acl trace: Default ACL denies user 'niceguy' read/write access to /cib/configuration/resources/primitive[@id='dummy_desc']
Call failed: Permission denied
=#=#=#= End test: niceguy: Replace - delete attribute (deny) - Insufficient privileges (4) =#=#=#=
* Passed: cibadmin - niceguy: Replace - delete attribute (deny)
=#=#=#= Begin test: niceguy: Replace - create attribute (deny) =#=#=#=
pcmk__check_acl trace: Default ACL denies user 'niceguy' read/write access to /cib[@epoch]
pcmk__check_acl trace: Default ACL denies user 'niceguy' read/write access to /cib/configuration/resources/primitive[@id='dummy'][@description]
Call failed: Permission denied
=#=#=#= End test: niceguy: Replace - create attribute (deny) - Insufficient privileges (4) =#=#=#=
* Passed: cibadmin - niceguy: Replace - create attribute (deny)
=#=#=#= Begin test: bob: Replace - create attribute (direct allow) =#=#=#=
=#=#=#= End test: bob: Replace - create attribute (direct allow) - OK (0) =#=#=#=
* Passed: cibadmin - bob: Replace - create attribute (direct allow)
=#=#=#= Begin test: bob: Replace - modify attribute (direct allow) =#=#=#=
=#=#=#= End test: bob: Replace - modify attribute (direct allow) - OK (0) =#=#=#=
* Passed: cibadmin - bob: Replace - modify attribute (direct allow)
=#=#=#= Begin test: bob: Replace - delete attribute (direct allow) =#=#=#=
=#=#=#= End test: bob: Replace - delete attribute (direct allow) - OK (0) =#=#=#=
* Passed: cibadmin - bob: Replace - delete attribute (direct allow)
=#=#=#= Begin test: joe: Replace - create attribute (inherited allow) =#=#=#=
=#=#=#= End test: joe: Replace - create attribute (inherited allow) - OK (0) =#=#=#=
* Passed: cibadmin - joe: Replace - create attribute (inherited allow)
=#=#=#= Begin test: joe: Replace - modify attribute (inherited allow) =#=#=#=
=#=#=#= End test: joe: Replace - modify attribute (inherited allow) - OK (0) =#=#=#=
* Passed: cibadmin - joe: Replace - modify attribute (inherited allow)
=#=#=#= Begin test: joe: Replace - delete attribute (inherited allow) =#=#=#=
=#=#=#= End test: joe: Replace - delete attribute (inherited allow) - OK (0) =#=#=#=
* Passed: cibadmin - joe: Replace - delete attribute (inherited allow)
=#=#=#= Begin test: mike: Replace - create attribute (allow overrides deny) =#=#=#=
=#=#=#= End test: mike: Replace - create attribute (allow overrides deny) - OK (0) =#=#=#=
* Passed: cibadmin - mike: Replace - create attribute (allow overrides deny)
=#=#=#= Begin test: mike: Replace - modify attribute (allow overrides deny) =#=#=#=
=#=#=#= End test: mike: Replace - modify attribute (allow overrides deny) - OK (0) =#=#=#=
* Passed: cibadmin - mike: Replace - modify attribute (allow overrides deny)
=#=#=#= Begin test: mike: Replace - delete attribute (allow overrides deny) =#=#=#=
=#=#=#= End test: mike: Replace - delete attribute (allow overrides deny) - OK (0) =#=#=#=
* Passed: cibadmin - mike: Replace - delete attribute (allow overrides deny)
=#=#=#= Begin test: mike: Create another resource =#=#=#=
pcmk__apply_creation_acl trace: ACLs allow creation of <primitive> with id="dummy2"
=#=#=#= Current cib after: mike: Create another resource =#=#=#=
=#=#=#= End test: mike: Create another resource - OK (0) =#=#=#=
* Passed: cibadmin - mike: Create another resource
=#=#=#= Begin test: chris: Replace - create attribute (deny overrides allow) =#=#=#=
pcmk__check_acl trace: Parent ACL denies user 'chris' read/write access to /cib/configuration/resources/primitive[@id='dummy'][@description]
Call failed: Permission denied
=#=#=#= End test: chris: Replace - create attribute (deny overrides allow) - Insufficient privileges (4) =#=#=#=
* Passed: cibadmin - chris: Replace - create attribute (deny overrides allow)
=#=#=#= Begin test: chris: Replace - modify attribute (deny overrides allow) =#=#=#=
pcmk__check_acl trace: Parent ACL denies user 'chris' read/write access to /cib/configuration/resources/primitive[@id='dummy'][@description]
Call failed: Permission denied
=#=#=#= End test: chris: Replace - modify attribute (deny overrides allow) - Insufficient privileges (4) =#=#=#=
* Passed: cibadmin - chris: Replace - modify attribute (deny overrides allow)
=#=#=#= Begin test: chris: Replace - delete attribute (deny overrides allow) =#=#=#=
pcmk__check_acl trace: Parent ACL denies user 'chris' read/write access to /cib/configuration/resources/primitive[@id='dummy2']
Call failed: Permission denied
=#=#=#= End test: chris: Replace - delete attribute (deny overrides allow) - Insufficient privileges (4) =#=#=#=
* Passed: cibadmin - chris: Replace - delete attribute (deny overrides allow)
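The replace-based cases above (bob, joe, mike, chris) exercise whole-configuration replacement, where ACL checks are applied to the individual changes implied by the new configuration, as the denial traces show. A minimal sketch of that workflow, assuming an edited copy of the configuration in cib.xml:

    # Export the current configuration, edit it, then push it back as a replacement
    cibadmin --query > cib.xml
    # ... edit cib.xml (add, modify, or delete an attribute) ...
    cibadmin --replace --xml-file cib.xml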
diff --git a/cts/cli/regression.crm_attribute.exp b/cts/cli/regression.crm_attribute.exp
index b2005095ba..5d58115304 100644
--- a/cts/cli/regression.crm_attribute.exp
+++ b/cts/cli/regression.crm_attribute.exp
@@ -1,1899 +1,1899 @@
=#=#=#= Begin test: List all available options (invalid type) =#=#=#=
crm_attribute: Invalid --list-options value 'asdf'. Allowed values: cluster
=#=#=#= End test: List all available options (invalid type) - Incorrect usage (64) =#=#=#=
* Passed: crm_attribute - List all available options (invalid type)
=#=#=#= Begin test: List all available options (invalid type) (XML) =#=#=#=
crm_attribute: Invalid --list-options value 'asdf'. Allowed values: cluster
=#=#=#= End test: List all available options (invalid type) (XML) - Incorrect usage (64) =#=#=#=
* Passed: crm_attribute - List all available options (invalid type) (XML)
=#=#=#= Begin test: List non-advanced cluster options =#=#=#=
Pacemaker cluster options
Also known as properties, these are options that affect behavior across the entire cluster. They are configured within cluster_property_set elements inside the crm_config subsection of the CIB configuration section.
* dc-version: Pacemaker version on cluster node elected Designated Controller (DC)
* Includes a hash which identifies the exact revision the code was built from. Used for diagnostic purposes.
* Possible values (generated by Pacemaker): version (no default)
* cluster-infrastructure: The messaging layer on which Pacemaker is currently running
* Used for informational and diagnostic purposes.
* Possible values (generated by Pacemaker): string (no default)
* cluster-name: An arbitrary name for the cluster
* This optional value is mostly for users' convenience as desired in administration, but may also be used in Pacemaker configuration rules via the #cluster-name node attribute, and by higher-level tools and resource agents.
* Possible values: string (no default)
* dc-deadtime: How long to wait for a response from other nodes during start-up
* The optimal value will depend on the speed and load of your network and the type of switches used.
* Possible values: duration (default: )
* cluster-recheck-interval: Polling interval to recheck cluster state and evaluate rules with date specifications
* Pacemaker is primarily event-driven, and looks ahead to know when to recheck cluster state for failure-timeout settings and most time-based rules. However, it will also recheck the cluster after this amount of inactivity, to evaluate rules with date specifications and serve as a fail-safe for certain types of scheduler bugs. A value of 0 disables polling. A positive value sets an interval in seconds, unless other units are specified (for example, "5min").
* Possible values: duration (default: )
* fence-reaction: How a cluster node should react if notified of its own fencing
* A cluster node may receive notification of a "succeeded" fencing that targeted it if fencing is misconfigured, or if fabric fencing is in use that doesn't cut cluster communication. Use "stop" to attempt to immediately stop Pacemaker and stay stopped, or "panic" to attempt to immediately reboot the local node, falling back to stop on failure.
* Possible values: "stop" (default), "panic"
* no-quorum-policy: What to do when the cluster does not have quorum
* Possible values: "stop" (default), "freeze", "ignore", "demote", "fence", "suicide"
* shutdown-lock: Whether to lock resources to a cleanly shut down node
* When true, resources active on a node when it is cleanly shut down are kept "locked" to that node (not allowed to run elsewhere) until they start again on that node after it rejoins (or for at most shutdown-lock-limit, if set). Stonith resources and Pacemaker Remote connections are never locked. Clone and bundle instances and the promoted role of promotable clones are currently never locked, though support could be added in a future release.
* Possible values: boolean (default: )
* shutdown-lock-limit: Do not lock resources to a cleanly shut down node longer than this
* If shutdown-lock is true and this is set to a nonzero time duration, shutdown locks will expire after this much time has passed since the shutdown was initiated, even if the node has not rejoined.
* Possible values: duration (default: )
* enable-acl: Enable Access Control Lists (ACLs) for the CIB
* Possible values: boolean (default: )
* symmetric-cluster: Whether resources can run on any node by default
* Possible values: boolean (default: )
* maintenance-mode: Whether the cluster should refrain from monitoring, starting, and stopping resources
* Possible values: boolean (default: )
* start-failure-is-fatal: Whether a start failure should prevent a resource from being recovered on the same node
* When true, the cluster will immediately ban a resource from a node if it fails to start there. When false, the cluster will instead check the resource's fail count against its migration-threshold.
* Possible values: boolean (default: )
* enable-startup-probes: Whether the cluster should check for active resources during start-up
* Possible values: boolean (default: )
* stonith-action: Action to send to fence device when a node needs to be fenced
* Possible values: "reboot" (default), "off"
* stonith-timeout: How long to wait for on, off, and reboot fence actions to complete by default
* Possible values: duration (default: )
* have-watchdog: Whether watchdog integration is enabled
* This is set automatically by the cluster according to whether SBD is detected to be in use. User-configured values are ignored. The value `true` is meaningful if diskless SBD is used and `stonith-watchdog-timeout` is nonzero. In that case, if fencing is required, watchdog-based self-fencing will be performed via SBD without requiring a fencing resource explicitly configured.
* Possible values (generated by Pacemaker): boolean (default: )
* stonith-watchdog-timeout: How long before nodes can be assumed to be safely down when watchdog-based self-fencing via SBD is in use
* If this is set to a positive value, lost nodes are assumed to achieve self-fencing using watchdog-based SBD within this much time. This does not require a fencing resource to be explicitly configured, though a fence_watchdog resource can be configured, to limit use to specific nodes. If this is set to 0 (the default), the cluster will never assume watchdog-based self-fencing. If this is set to a negative value, the cluster will use twice the local value of the `SBD_WATCHDOG_TIMEOUT` environment variable if that is positive, or otherwise treat this as 0. WARNING: When used, this timeout must be larger than `SBD_WATCHDOG_TIMEOUT` on all nodes that use watchdog-based SBD, and Pacemaker will refuse to start on any of those nodes where this is not true for the local value or SBD is not active. When this is set to a negative value, `SBD_WATCHDOG_TIMEOUT` must be set to the same value on all nodes that use SBD, otherwise data corruption or loss could occur.
* Possible values: timeout (default: )
* stonith-max-attempts: How many times fencing can fail before it will no longer be immediately re-attempted on a target
* Possible values: score (default: )
* priority-fencing-delay: Apply fencing delay targeting the lost nodes with the highest total resource priority
* Apply specified delay for the fencings that are targeting the lost nodes with the highest total resource priority in case we don't have the majority of the nodes in our cluster partition, so that the more significant nodes potentially win any fencing match, which is especially meaningful under split-brain of 2-node cluster. A promoted resource instance takes the base priority + 1 on calculation if the base priority is not 0. Any static/random delays that are introduced by `pcmk_delay_base/max` configured for the corresponding fencing resources will be added to this delay. This delay should be significantly greater than, safely twice, the maximum `pcmk_delay_base/max`. By default, priority fencing delay is disabled.
* Possible values: duration (default: )
* node-pending-timeout: How long to wait for a node that has joined the cluster to join the controller process group
* Fence nodes that do not join the controller process group within this much time after joining the cluster, to allow the cluster to continue managing resources. A value of 0 means never fence pending nodes. Setting the value to 2h means fence nodes after 2 hours.
* Possible values: duration (default: )
* cluster-delay: Maximum time for node-to-node communication
* The node elected Designated Controller (DC) will consider an action failed if it does not get a response from the node executing the action within this time (after considering the action's own timeout). The "correct" value will depend on the speed and load of your network and cluster nodes.
* Possible values: duration (default: )
* load-threshold: Maximum amount of system load that should be used by cluster nodes
* The cluster will slow down its recovery process when the amount of system resources used (currently CPU) approaches this limit
* Possible values: percentage (default: )
* node-action-limit: Maximum number of jobs that can be scheduled per node (defaults to 2x cores)
* Possible values: integer (default: )
* batch-limit: Maximum number of jobs that the cluster may execute in parallel across all nodes
* The "correct" value will depend on the speed and load of your network and cluster nodes. If set to 0, the cluster will impose a dynamically calculated limit when any node has a high load.
* Possible values: integer (default: )
* migration-limit: The number of live migration actions that the cluster is allowed to execute in parallel on a node (-1 means no limit)
* Possible values: integer (default: )
* cluster-ipc-limit: Maximum IPC message backlog before disconnecting a cluster daemon
* Raise this if log has "Evicting client" messages for cluster daemon PIDs (a good value is the number of resources in the cluster multiplied by the number of nodes).
* Possible values: nonnegative_integer (default: )
* stop-all-resources: Whether the cluster should stop all active resources
* Possible values: boolean (default: )
* stop-orphan-resources: Whether to stop resources that were removed from the configuration
* Possible values: boolean (default: )
* stop-orphan-actions: Whether to cancel recurring actions removed from the configuration
* Possible values: boolean (default: )
* pe-error-series-max: The number of scheduler inputs resulting in errors to save
* Zero to disable, -1 to store unlimited.
* Possible values: integer (default: )
* pe-warn-series-max: The number of scheduler inputs resulting in warnings to save
* Zero to disable, -1 to store unlimited.
* Possible values: integer (default: )
* pe-input-series-max: The number of scheduler inputs without errors or warnings to save
* Zero to disable, -1 to store unlimited.
* Possible values: integer (default: )
* node-health-strategy: How cluster should react to node health attributes
* Requires external entities to create node attributes (named with the prefix "#health") with values "red", "yellow", or "green".
* Possible values: "none" (default), "migrate-on-red", "only-green", "progressive", "custom"
* node-health-base: Base health score assigned to a node
* Only used when "node-health-strategy" is set to "progressive".
* Possible values: score (default: )
* node-health-green: The score to use for a node health attribute whose value is "green"
* Only used when "node-health-strategy" is set to "custom" or "progressive".
* Possible values: score (default: )
* node-health-yellow: The score to use for a node health attribute whose value is "yellow"
* Only used when "node-health-strategy" is set to "custom" or "progressive".
* Possible values: score (default: )
* node-health-red: The score to use for a node health attribute whose value is "red"
* Only used when "node-health-strategy" is set to "custom" or "progressive".
* Possible values: score (default: )
* placement-strategy: How the cluster should allocate resources to nodes
* Possible values: "default" (default), "utilization", "minimal", "balanced"
=#=#=#= End test: List non-advanced cluster options - OK (0) =#=#=#=
* Passed: crm_attribute - List non-advanced cluster options
=#=#=#= Begin test: List non-advanced cluster options (XML) =#=#=#=
1.1Also known as properties, these are options that affect behavior across the entire cluster. They are configured within cluster_property_set elements inside the crm_config subsection of the CIB configuration section.Pacemaker cluster optionsIncludes a hash which identifies the exact revision the code was built from. Used for diagnostic purposes.Pacemaker version on cluster node elected Designated Controller (DC)Used for informational and diagnostic purposes.The messaging layer on which Pacemaker is currently runningThis optional value is mostly for users' convenience as desired in administration, but may also be used in Pacemaker configuration rules via the #cluster-name node attribute, and by higher-level tools and resource agents.An arbitrary name for the clusterThe optimal value will depend on the speed and load of your network and the type of switches used.How long to wait for a response from other nodes during start-upPacemaker is primarily event-driven, and looks ahead to know when to recheck cluster state for failure-timeout settings and most time-based rules. However, it will also recheck the cluster after this amount of inactivity, to evaluate rules with date specifications and serve as a fail-safe for certain types of scheduler bugs. A value of 0 disables polling. A positive value sets an interval in seconds, unless other units are specified (for example, "5min").Polling interval to recheck cluster state and evaluate rules with date specificationsA cluster node may receive notification of a "succeeded" fencing that targeted it if fencing is misconfigured, or if fabric fencing is in use that doesn't cut cluster communication. Use "stop" to attempt to immediately stop Pacemaker and stay stopped, or "panic" to attempt to immediately reboot the local node, falling back to stop on failure.How a cluster node should react if notified of its own fencingDeclare an election failed if it is not decided within this much time. If you need to adjust this value, it probably indicates the presence of a bug.Declare an election failed if it is not decided within this much time. If you need to adjust this value, it probably indicates the presence of a bug.Exit immediately if shutdown does not complete within this much time. If you need to adjust this value, it probably indicates the presence of a bug.Exit immediately if shutdown does not complete within this much time. If you need to adjust this value, it probably indicates the presence of a bug.If you need to adjust this value, it probably indicates the presence of a bug.If you need to adjust this value, it probably indicates the presence of a bug.If you need to adjust this value, it probably indicates the presence of a bug.If you need to adjust this value, it probably indicates the presence of a bug.Delay cluster recovery for this much time to allow for additional events to occur. Useful if your configuration is sensitive to the order in which ping updates arrive.Enabling this option will slow down cluster recovery under all conditionsWhat to do when the cluster does not have quorumWhat to do when the cluster does not have quorumWhen true, resources active on a node when it is cleanly shut down are kept "locked" to that node (not allowed to run elsewhere) until they start again on that node after it rejoins (or for at most shutdown-lock-limit, if set). Stonith resources and Pacemaker Remote connections are never locked. 
Clone and bundle instances and the promoted role of promotable clones are currently never locked, though support could be added in a future release.Whether to lock resources to a cleanly shut down nodeIf shutdown-lock is true and this is set to a nonzero time duration, shutdown locks will expire after this much time has passed since the shutdown was initiated, even if the node has not rejoined.Do not lock resources to a cleanly shut down node longer than thisEnable Access Control Lists (ACLs) for the CIBEnable Access Control Lists (ACLs) for the CIBWhether resources can run on any node by defaultWhether resources can run on any node by defaultWhether the cluster should refrain from monitoring, starting, and stopping resourcesWhether the cluster should refrain from monitoring, starting, and stopping resourcesWhen true, the cluster will immediately ban a resource from a node if it fails to start there. When false, the cluster will instead check the resource's fail count against its migration-threshold.Whether a start failure should prevent a resource from being recovered on the same nodeWhether the cluster should check for active resources during start-upWhether the cluster should check for active resources during start-upIf false, unresponsive nodes are immediately assumed to be harmless, and resources that were active on them may be recovered elsewhere. This can result in a "split-brain" situation, potentially leading to data loss and/or service unavailability.Whether nodes may be fenced as part of recoveryAction to send to fence device when a node needs to be fencedAction to send to fence device when a node needs to be fencedHow long to wait for on, off, and reboot fence actions to complete by defaultHow long to wait for on, off, and reboot fence actions to complete by defaultThis is set automatically by the cluster according to whether SBD is detected to be in use. User-configured values are ignored. The value `true` is meaningful if diskless SBD is used and `stonith-watchdog-timeout` is nonzero. In that case, if fencing is required, watchdog-based self-fencing will be performed via SBD without requiring a fencing resource explicitly configured.Whether watchdog integration is enabledIf this is set to a positive value, lost nodes are assumed to achieve self-fencing using watchdog-based SBD within this much time. This does not require a fencing resource to be explicitly configured, though a fence_watchdog resource can be configured, to limit use to specific nodes. If this is set to 0 (the default), the cluster will never assume watchdog-based self-fencing. If this is set to a negative value, the cluster will use twice the local value of the `SBD_WATCHDOG_TIMEOUT` environment variable if that is positive, or otherwise treat this as 0. WARNING: When used, this timeout must be larger than `SBD_WATCHDOG_TIMEOUT` on all nodes that use watchdog-based SBD, and Pacemaker will refuse to start on any of those nodes where this is not true for the local value or SBD is not active. 
When this is set to a negative value, `SBD_WATCHDOG_TIMEOUT` must be set to the same value on all nodes that use SBD, otherwise data corruption or loss could occur.How long before nodes can be assumed to be safely down when watchdog-based self-fencing via SBD is in useHow many times fencing can fail before it will no longer be immediately re-attempted on a targetHow many times fencing can fail before it will no longer be immediately re-attempted on a targetAllow performing fencing operations in parallelAllow performing fencing operations in parallelSetting this to false may lead to a "split-brain" situation, potentially leading to data loss and/or service unavailability.Whether to fence unseen nodes at start-upApply specified delay for the fencings that are targeting the lost nodes with the highest total resource priority in case we don't have the majority of the nodes in our cluster partition, so that the more significant nodes potentially win any fencing match, which is especially meaningful under split-brain of 2-node cluster. A promoted resource instance takes the base priority + 1 on calculation if the base priority is not 0. Any static/random delays that are introduced by `pcmk_delay_base/max` configured for the corresponding fencing resources will be added to this delay. This delay should be significantly greater than, safely twice, the maximum `pcmk_delay_base/max`. By default, priority fencing delay is disabled.Apply fencing delay targeting the lost nodes with the highest total resource priorityFence nodes that do not join the controller process group within this much time after joining the cluster, to allow the cluster to continue managing resources. A value of 0 means never fence pending nodes. Setting the value to 2h means fence nodes after 2 hours.How long to wait for a node that has joined the cluster to join the controller process groupThe node elected Designated Controller (DC) will consider an action failed if it does not get a response from the node executing the action within this time (after considering the action's own timeout). The "correct" value will depend on the speed and load of your network and cluster nodes.Maximum time for node-to-node communicationThe cluster will slow down its recovery process when the amount of system resources used (currently CPU) approaches this limitMaximum amount of system load that should be used by cluster nodesMaximum number of jobs that can be scheduled per node (defaults to 2x cores)Maximum number of jobs that can be scheduled per node (defaults to 2x cores)The "correct" value will depend on the speed and load of your network and cluster nodes. 
If set to 0, the cluster will impose a dynamically calculated limit when any node has a high load.Maximum number of jobs that the cluster may execute in parallel across all nodesThe number of live migration actions that the cluster is allowed to execute in parallel on a node (-1 means no limit)The number of live migration actions that the cluster is allowed to execute in parallel on a node (-1 means no limit)Raise this if log has "Evicting client" messages for cluster daemon PIDs (a good value is the number of resources in the cluster multiplied by the number of nodes).Maximum IPC message backlog before disconnecting a cluster daemonWhether the cluster should stop all active resourcesWhether the cluster should stop all active resourcesWhether to stop resources that were removed from the configurationWhether to stop resources that were removed from the configurationWhether to cancel recurring actions removed from the configurationWhether to cancel recurring actions removed from the configurationZero to disable, -1 to store unlimited.The number of scheduler inputs resulting in errors to saveZero to disable, -1 to store unlimited.The number of scheduler inputs resulting in warnings to saveZero to disable, -1 to store unlimited.The number of scheduler inputs without errors or warnings to saveRequires external entities to create node attributes (named with the prefix "#health") with values "red", "yellow", or "green".How cluster should react to node health attributesOnly used when "node-health-strategy" is set to "progressive".Base health score assigned to a nodeOnly used when "node-health-strategy" is set to "custom" or "progressive".The score to use for a node health attribute whose value is "green"Only used when "node-health-strategy" is set to "custom" or "progressive".The score to use for a node health attribute whose value is "yellow"Only used when "node-health-strategy" is set to "custom" or "progressive".The score to use for a node health attribute whose value is "red"How the cluster should allocate resources to nodesHow the cluster should allocate resources to nodes
=#=#=#= End test: List non-advanced cluster options (XML) - OK (0) =#=#=#=
* Passed: crm_attribute - List non-advanced cluster options (XML)
=#=#=#= Begin test: List all available cluster options =#=#=#=
Pacemaker cluster options
Also known as properties, these are options that affect behavior across the entire cluster. They are configured within cluster_property_set elements inside the crm_config subsection of the CIB configuration section.
* dc-version: Pacemaker version on cluster node elected Designated Controller (DC)
* Includes a hash which identifies the exact revision the code was built from. Used for diagnostic purposes.
* Possible values (generated by Pacemaker): version (no default)
* cluster-infrastructure: The messaging layer on which Pacemaker is currently running
* Used for informational and diagnostic purposes.
* Possible values (generated by Pacemaker): string (no default)
* cluster-name: An arbitrary name for the cluster
* This optional value is mostly for users' convenience as desired in administration, but may also be used in Pacemaker configuration rules via the #cluster-name node attribute, and by higher-level tools and resource agents.
* Possible values: string (no default)
* dc-deadtime: How long to wait for a response from other nodes during start-up
* The optimal value will depend on the speed and load of your network and the type of switches used.
* Possible values: duration (default: )
* cluster-recheck-interval: Polling interval to recheck cluster state and evaluate rules with date specifications
* Pacemaker is primarily event-driven, and looks ahead to know when to recheck cluster state for failure-timeout settings and most time-based rules. However, it will also recheck the cluster after this amount of inactivity, to evaluate rules with date specifications and serve as a fail-safe for certain types of scheduler bugs. A value of 0 disables polling. A positive value sets an interval in seconds, unless other units are specified (for example, "5min").
* Possible values: duration (default: )
* fence-reaction: How a cluster node should react if notified of its own fencing
* A cluster node may receive notification of a "succeeded" fencing that targeted it if fencing is misconfigured, or if fabric fencing is in use that doesn't cut cluster communication. Use "stop" to attempt to immediately stop Pacemaker and stay stopped, or "panic" to attempt to immediately reboot the local node, falling back to stop on failure.
* Possible values: "stop" (default), "panic"
* no-quorum-policy: What to do when the cluster does not have quorum
* Possible values: "stop" (default), "freeze", "ignore", "demote", "fence", "suicide"
* shutdown-lock: Whether to lock resources to a cleanly shut down node
* When true, resources active on a node when it is cleanly shut down are kept "locked" to that node (not allowed to run elsewhere) until they start again on that node after it rejoins (or for at most shutdown-lock-limit, if set). Stonith resources and Pacemaker Remote connections are never locked. Clone and bundle instances and the promoted role of promotable clones are currently never locked, though support could be added in a future release.
* Possible values: boolean (default: )
* shutdown-lock-limit: Do not lock resources to a cleanly shut down node longer than this
* If shutdown-lock is true and this is set to a nonzero time duration, shutdown locks will expire after this much time has passed since the shutdown was initiated, even if the node has not rejoined.
* Possible values: duration (default: )
* enable-acl: Enable Access Control Lists (ACLs) for the CIB
* Possible values: boolean (default: )
* symmetric-cluster: Whether resources can run on any node by default
* Possible values: boolean (default: )
* maintenance-mode: Whether the cluster should refrain from monitoring, starting, and stopping resources
* Possible values: boolean (default: )
* start-failure-is-fatal: Whether a start failure should prevent a resource from being recovered on the same node
* When true, the cluster will immediately ban a resource from a node if it fails to start there. When false, the cluster will instead check the resource's fail count against its migration-threshold.
* Possible values: boolean (default: )
* enable-startup-probes: Whether the cluster should check for active resources during start-up
* Possible values: boolean (default: )
* stonith-action: Action to send to fence device when a node needs to be fenced
* Possible values: "reboot" (default), "off"
* stonith-timeout: How long to wait for on, off, and reboot fence actions to complete by default
* Possible values: duration (default: )
* have-watchdog: Whether watchdog integration is enabled
* This is set automatically by the cluster according to whether SBD is detected to be in use. User-configured values are ignored. The value `true` is meaningful if diskless SBD is used and `stonith-watchdog-timeout` is nonzero. In that case, if fencing is required, watchdog-based self-fencing will be performed via SBD without requiring a fencing resource explicitly configured.
* Possible values (generated by Pacemaker): boolean (default: )
* stonith-watchdog-timeout: How long before nodes can be assumed to be safely down when watchdog-based self-fencing via SBD is in use
* If this is set to a positive value, lost nodes are assumed to achieve self-fencing using watchdog-based SBD within this much time. This does not require a fencing resource to be explicitly configured, though a fence_watchdog resource can be configured, to limit use to specific nodes. If this is set to 0 (the default), the cluster will never assume watchdog-based self-fencing. If this is set to a negative value, the cluster will use twice the local value of the `SBD_WATCHDOG_TIMEOUT` environment variable if that is positive, or otherwise treat this as 0. WARNING: When used, this timeout must be larger than `SBD_WATCHDOG_TIMEOUT` on all nodes that use watchdog-based SBD, and Pacemaker will refuse to start on any of those nodes where this is not true for the local value or SBD is not active. When this is set to a negative value, `SBD_WATCHDOG_TIMEOUT` must be set to the same value on all nodes that use SBD, otherwise data corruption or loss could occur.
* Possible values: timeout (default: )
* stonith-max-attempts: How many times fencing can fail before it will no longer be immediately re-attempted on a target
* Possible values: score (default: )
* priority-fencing-delay: Apply fencing delay targeting the lost nodes with the highest total resource priority
* Apply specified delay for the fencings that are targeting the lost nodes with the highest total resource priority in case we don't have the majority of the nodes in our cluster partition, so that the more significant nodes potentially win any fencing match, which is especially meaningful under split-brain of 2-node cluster. A promoted resource instance takes the base priority + 1 on calculation if the base priority is not 0. Any static/random delays that are introduced by `pcmk_delay_base/max` configured for the corresponding fencing resources will be added to this delay. This delay should be significantly greater than, safely twice, the maximum `pcmk_delay_base/max`. By default, priority fencing delay is disabled.
* Possible values: duration (default: )
* node-pending-timeout: How long to wait for a node that has joined the cluster to join the controller process group
* Fence nodes that do not join the controller process group within this much time after joining the cluster, to allow the cluster to continue managing resources. A value of 0 means never fence pending nodes. Setting the value to 2h means fence nodes after 2 hours.
* Possible values: duration (default: )
* cluster-delay: Maximum time for node-to-node communication
* The node elected Designated Controller (DC) will consider an action failed if it does not get a response from the node executing the action within this time (after considering the action's own timeout). The "correct" value will depend on the speed and load of your network and cluster nodes.
* Possible values: duration (default: )
* load-threshold: Maximum amount of system load that should be used by cluster nodes
* The cluster will slow down its recovery process when the amount of system resources used (currently CPU) approaches this limit
* Possible values: percentage (default: )
* node-action-limit: Maximum number of jobs that can be scheduled per node (defaults to 2x cores)
* Possible values: integer (default: )
* batch-limit: Maximum number of jobs that the cluster may execute in parallel across all nodes
* The "correct" value will depend on the speed and load of your network and cluster nodes. If set to 0, the cluster will impose a dynamically calculated limit when any node has a high load.
* Possible values: integer (default: )
* migration-limit: The number of live migration actions that the cluster is allowed to execute in parallel on a node (-1 means no limit)
* Possible values: integer (default: )
* cluster-ipc-limit: Maximum IPC message backlog before disconnecting a cluster daemon
* Raise this if log has "Evicting client" messages for cluster daemon PIDs (a good value is the number of resources in the cluster multiplied by the number of nodes).
* Possible values: nonnegative_integer (default: )
* stop-all-resources: Whether the cluster should stop all active resources
* Possible values: boolean (default: )
* stop-orphan-resources: Whether to stop resources that were removed from the configuration
* Possible values: boolean (default: )
* stop-orphan-actions: Whether to cancel recurring actions removed from the configuration
* Possible values: boolean (default: )
* pe-error-series-max: The number of scheduler inputs resulting in errors to save
* Zero to disable, -1 to store unlimited.
* Possible values: integer (default: )
* pe-warn-series-max: The number of scheduler inputs resulting in warnings to save
* Zero to disable, -1 to store unlimited.
* Possible values: integer (default: )
* pe-input-series-max: The number of scheduler inputs without errors or warnings to save
* Zero to disable, -1 to store unlimited.
* Possible values: integer (default: )
* node-health-strategy: How cluster should react to node health attributes
* Requires external entities to create node attributes (named with the prefix "#health") with values "red", "yellow", or "green".
* Possible values: "none" (default), "migrate-on-red", "only-green", "progressive", "custom"
* node-health-base: Base health score assigned to a node
* Only used when "node-health-strategy" is set to "progressive".
* Possible values: score (default: )
* node-health-green: The score to use for a node health attribute whose value is "green"
* Only used when "node-health-strategy" is set to "custom" or "progressive".
* Possible values: score (default: )
* node-health-yellow: The score to use for a node health attribute whose value is "yellow"
* Only used when "node-health-strategy" is set to "custom" or "progressive".
* Possible values: score (default: )
* node-health-red: The score to use for a node health attribute whose value is "red"
* Only used when "node-health-strategy" is set to "custom" or "progressive".
* Possible values: score (default: )
* placement-strategy: How the cluster should allocate resources to nodes
* Possible values: "default" (default), "utilization", "minimal", "balanced"
* ADVANCED OPTIONS:
* election-timeout: Declare an election failed if it is not decided within this much time. If you need to adjust this value, it probably indicates the presence of a bug.
* Possible values: duration (default: )
* shutdown-escalation: Exit immediately if shutdown does not complete within this much time. If you need to adjust this value, it probably indicates the presence of a bug.
* Possible values: duration (default: )
* join-integration-timeout: If you need to adjust this value, it probably indicates the presence of a bug.
* Possible values: duration (default: )
* join-finalization-timeout: If you need to adjust this value, it probably indicates the presence of a bug.
* Possible values: duration (default: )
* transition-delay: Enabling this option will slow down cluster recovery under all conditions
* Delay cluster recovery for this much time to allow for additional events to occur. Useful if your configuration is sensitive to the order in which ping updates arrive.
* Possible values: duration (default: )
* stonith-enabled: Whether nodes may be fenced as part of recovery
* If false, unresponsive nodes are immediately assumed to be harmless, and resources that were active on them may be recovered elsewhere. This can result in a "split-brain" situation, potentially leading to data loss and/or service unavailability.
* Possible values: boolean (default: )
* startup-fencing: Whether to fence unseen nodes at start-up
* Setting this to false may lead to a "split-brain" situation, potentially leading to data loss and/or service unavailability.
* Possible values: boolean (default: )
* DEPRECATED OPTIONS (will be removed in a future release):
* concurrent-fencing: Allow performing fencing operations in parallel
* Possible values: boolean (default: )
=#=#=#= End test: List all available cluster options - OK (0) =#=#=#=
* Passed: crm_attribute - List all available cluster options
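The option listings above come from crm_attribute's metadata interface, and the listed options are managed with the same tool. A sketch, assuming --all is the flag that extends the listing to the advanced and deprecated options shown in the second test:

    # List cluster options; with --all, also include advanced and deprecated ones
    crm_attribute --list-options=cluster
    crm_attribute --list-options=cluster --all

    # Set and query one of the listed options in the crm_config section
    crm_attribute --type crm_config --name no-quorum-policy --update stop
    crm_attribute --type crm_config --name no-quorum-policy --query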
=#=#=#= Begin test: List all available cluster options (XML) =#=#=#=
[XML option metadata, version 1.1: long and short descriptions for every Pacemaker cluster option, mirroring the plain-text listing above; the XML markup was not preserved in this capture]
=#=#=#= End test: List all available cluster options (XML) - OK (0) =#=#=#=
* Passed: crm_attribute - List all available cluster options (XML)
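For orientation, the two listings above come from crm_attribute's option-metadata mode. A minimal sketch of the kind of invocation involved (the exact flags used by the test harness, in particular --all, are assumptions here):
    # Plain-text listing of every cluster option, including advanced ones
    crm_attribute --list-options=cluster --all
    # Same metadata rendered as XML
    crm_attribute --list-options=cluster --all --output-as=xml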
=#=#=#= Begin test: Return usage error if both -p and OCF_RESOURCE_INSTANCE are empty strings =#=#=#=
crm_attribute: -p/--promotion must be called from an OCF resource agent or with a resource ID specified
=#=#=#= End test: Return usage error if both -p and OCF_RESOURCE_INSTANCE are empty strings - Incorrect usage (64) =#=#=#=
* Passed: crm_attribute - Return usage error if both -p and OCF_RESOURCE_INSTANCE are empty strings
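As the error text indicates, -p/--promotion needs a resource name either on the command line or from the OCF_RESOURCE_INSTANCE environment variable. A hedged sketch of both cases (the resource name promotable-rsc is inferred from the promotion-score tests later in this file):
    # Fails with usage error 64: neither source names a resource
    OCF_RESOURCE_INSTANCE="" crm_attribute -p ""
    # Falls back to the environment variable, as the later 'Try OCF_RESOURCE_INSTANCE' test shows
    OCF_RESOURCE_INSTANCE=promotable-rsc crm_attribute -p "" --query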
=#=#=#= Begin test: Query the value of an attribute that does not exist =#=#=#=
crm_attribute: Error performing operation: No such device or address
=#=#=#= End test: Query the value of an attribute that does not exist - No such object (105) =#=#=#=
* Passed: crm_attribute - Query the value of an attribute that does not exist
=#=#=#= Begin test: Configure something before erasing =#=#=#=
=#=#=#= Current cib after: Configure something before erasing =#=#=#=
=#=#=#= End test: Configure something before erasing - OK (0) =#=#=#=
* Passed: crm_attribute - Configure something before erasing
=#=#=#= Begin test: Test '++' XML attribute update syntax =#=#=#=
=#=#=#= Current cib after: Test '++' XML attribute update syntax =#=#=#=
=#=#=#= End test: Test '++' XML attribute update syntax - OK (0) =#=#=#=
* Passed: cibadmin - Test '++' XML attribute update syntax
=#=#=#= Begin test: Test '+=' XML attribute update syntax =#=#=#=
=#=#=#= Current cib after: Test '+=' XML attribute update syntax =#=#=#=
=#=#=#= End test: Test '+=' XML attribute update syntax - OK (0) =#=#=#=
* Passed: cibadmin - Test '+=' XML attribute update syntax
=#=#=#= Begin test: Test '++' nvpair value update syntax =#=#=#=
=#=#=#= Current cib after: Test '++' nvpair value update syntax =#=#=#=
=#=#=#= End test: Test '++' nvpair value update syntax - OK (0) =#=#=#=
* Passed: crm_attribute - Test '++' nvpair value update syntax
=#=#=#= Begin test: Test '++' nvpair value update syntax (XML) =#=#=#=
=#=#=#= Current cib after: Test '++' nvpair value update syntax (XML) =#=#=#=
=#=#=#= End test: Test '++' nvpair value update syntax (XML) - OK (0) =#=#=#=
* Passed: crm_attribute - Test '++' nvpair value update syntax (XML)
=#=#=#= Begin test: Test '+=' nvpair value update syntax =#=#=#=
=#=#=#= Current cib after: Test '+=' nvpair value update syntax =#=#=#=
=#=#=#= End test: Test '+=' nvpair value update syntax - OK (0) =#=#=#=
* Passed: crm_attribute - Test '+=' nvpair value update syntax
=#=#=#= Begin test: Test '+=' nvpair value update syntax (XML) =#=#=#=
=#=#=#= Current cib after: Test '+=' nvpair value update syntax (XML) =#=#=#=
=#=#=#= End test: Test '+=' nvpair value update syntax (XML) - OK (0) =#=#=#=
* Passed: crm_attribute - Test '+=' nvpair value update syntax (XML)
=#=#=#= Begin test: Test '++' XML attribute update syntax (--score not set) =#=#=#=
=#=#=#= Current cib after: Test '++' XML attribute update syntax (--score not set) =#=#=#=
=#=#=#= End test: Test '++' XML attribute update syntax (--score not set) - OK (0) =#=#=#=
* Passed: cibadmin - Test '++' XML attribute update syntax (--score not set)
=#=#=#= Begin test: Test '+=' XML attribute update syntax (--score not set) =#=#=#=
=#=#=#= Current cib after: Test '+=' XML attribute update syntax (--score not set) =#=#=#=
=#=#=#= End test: Test '+=' XML attribute update syntax (--score not set) - OK (0) =#=#=#=
* Passed: cibadmin - Test '+=' XML attribute update syntax (--score not set)
=#=#=#= Begin test: Test '++' nvpair value update syntax (--score not set) =#=#=#=
=#=#=#= Current cib after: Test '++' nvpair value update syntax (--score not set) =#=#=#=
=#=#=#= End test: Test '++' nvpair value update syntax (--score not set) - OK (0) =#=#=#=
* Passed: crm_attribute - Test '++' nvpair value update syntax (--score not set)
=#=#=#= Begin test: Test '++' nvpair value update syntax (--score not set) (XML) =#=#=#=
=#=#=#= Current cib after: Test '++' nvpair value update syntax (--score not set) (XML) =#=#=#=
=#=#=#= End test: Test '++' nvpair value update syntax (--score not set) (XML) - OK (0) =#=#=#=
* Passed: crm_attribute - Test '++' nvpair value update syntax (--score not set) (XML)
=#=#=#= Begin test: Test '+=' nvpair value update syntax (--score not set) =#=#=#=
=#=#=#= Current cib after: Test '+=' nvpair value update syntax (--score not set) =#=#=#=
=#=#=#= End test: Test '+=' nvpair value update syntax (--score not set) - OK (0) =#=#=#=
* Passed: crm_attribute - Test '+=' nvpair value update syntax (--score not set)
=#=#=#= Begin test: Test '+=' nvpair value update syntax (--score not set) (XML) =#=#=#=
=#=#=#= Current cib after: Test '+=' nvpair value update syntax (--score not set) (XML) =#=#=#=
=#=#=#= End test: Test '+=' nvpair value update syntax (--score not set) (XML) - OK (0) =#=#=#=
* Passed: crm_attribute - Test '+=' nvpair value update syntax (--score not set) (XML)
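The block of '++'/'+=' tests above exercises the score-addition update syntax, with and without the newer --score option that makes the intent explicit. A rough illustration (the attribute name custom-counter and the starting value are assumptions, not taken from the tests):
    # Giving a value of "<its-own-name>++" increments the attribute by 1;
    # "<its-own-name>+=N" adds N to it.
    crm_attribute -n custom-counter -v 5
    crm_attribute -n custom-counter -v custom-counter++      # 5 -> 6
    crm_attribute -n custom-counter -v custom-counter+=10    # 6 -> 16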
=#=#=#= Begin test: Set cluster option =#=#=#=
=#=#=#= Current cib after: Set cluster option =#=#=#=
=#=#=#= End test: Set cluster option - OK (0) =#=#=#=
* Passed: crm_attribute - Set cluster option
=#=#=#= Begin test: Query new cluster option =#=#=#=
=#=#=#= End test: Query new cluster option - OK (0) =#=#=#=
* Passed: cibadmin - Query new cluster option
=#=#=#= Begin test: Set no-quorum policy =#=#=#=
=#=#=#= Current cib after: Set no-quorum policy =#=#=#=
=#=#=#= End test: Set no-quorum policy - OK (0) =#=#=#=
* Passed: crm_attribute - Set no-quorum policy
=#=#=#= Begin test: Delete nvpair =#=#=#=
=#=#=#= Current cib after: Delete nvpair =#=#=#=
=#=#=#= End test: Delete nvpair - OK (0) =#=#=#=
* Passed: cibadmin - Delete nvpair
=#=#=#= Begin test: Create operation should fail =#=#=#=
Call failed: File exists
=#=#=#= Current cib after: Create operation should fail =#=#=#=
=#=#=#= End test: Create operation should fail - Requested item already exists (108) =#=#=#=
* Passed: cibadmin - Create operation should fail
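The 'File exists' failure above is cibadmin's normal behaviour when --create targets an element that already exists in the CIB; --modify is the idempotent alternative. A hedged sketch (the element id "opts" is illustrative, not from the tests):
    # Fails with "Requested item already exists" (exit 108) if an element with this id is present
    cibadmin --create -o crm_config --xml-text '<cluster_property_set id="opts"/>'
    # Updates the existing element in place instead of failing
    cibadmin --modify -o crm_config --xml-text '<cluster_property_set id="opts"/>'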
=#=#=#= Begin test: Modify cluster options section =#=#=#=
=#=#=#= Current cib after: Modify cluster options section =#=#=#=
=#=#=#= End test: Modify cluster options section - OK (0) =#=#=#=
* Passed: cibadmin - Modify cluster options section
=#=#=#= Begin test: Query updated cluster option =#=#=#=
=#=#=#= Current cib after: Query updated cluster option =#=#=#=
=#=#=#= End test: Query updated cluster option - OK (0) =#=#=#=
* Passed: cibadmin - Query updated cluster option
=#=#=#= Begin test: Set duplicate cluster option =#=#=#=
=#=#=#= Current cib after: Set duplicate cluster option =#=#=#=
=#=#=#= End test: Set duplicate cluster option - OK (0) =#=#=#=
* Passed: crm_attribute - Set duplicate cluster option
=#=#=#= Begin test: Setting multiply defined cluster option should fail =#=#=#=
crm_attribute: Please choose from one of the matches below and supply the 'id' with --attr-id
Multiple attributes match name=cluster-delay
Value: 60s (id=cib-bootstrap-options-cluster-delay)
Value: 40s (id=duplicate-cluster-delay)
=#=#=#= Current cib after: Setting multiply defined cluster option should fail =#=#=#=
=#=#=#= End test: Setting multiply defined cluster option should fail - Multiple items match request (109) =#=#=#=
* Passed: crm_attribute - Setting multiply defined cluster option should fail
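When more than one nvpair matches the requested name, crm_attribute refuses to guess and asks for the specific id, as shown above. A sketch using the two ids it printed (the value 30s is illustrative):
    # Ambiguous: two nvpairs named cluster-delay exist, so this exits with code 109
    crm_attribute -n cluster-delay -v 30s
    # Unambiguous: address the intended instance by id (-i / --attr-id)
    crm_attribute -n cluster-delay -v 30s -i duplicate-cluster-delay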
=#=#=#= Begin test: Set cluster option with -s =#=#=#=
=#=#=#= Current cib after: Set cluster option with -s =#=#=#=
=#=#=#= End test: Set cluster option with -s - OK (0) =#=#=#=
* Passed: crm_attribute - Set cluster option with -s
=#=#=#= Begin test: Delete cluster option with -i =#=#=#=
Deleted crm_config option: id=(null) name=cluster-delay
=#=#=#= Current cib after: Delete cluster option with -i =#=#=#=
=#=#=#= End test: Delete cluster option with -i - OK (0) =#=#=#=
* Passed: crm_attribute - Delete cluster option with -i
=#=#=#= Begin test: Create node1 and bring it online =#=#=#=
-unpack_resources error: Resource start-up disabled since no STONITH resources have been defined
-unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option
-unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity
-unpack_resources error: Resource start-up disabled since no STONITH resources have been defined
-unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option
-unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity
-unpack_resources error: Resource start-up disabled since no STONITH resources have been defined
-unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option
-unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity
+error: Resource start-up disabled since no STONITH resources have been defined
+error: Either configure some or disable STONITH with the stonith-enabled option
+error: NOTE: Clusters with shared data need STONITH to ensure data integrity
+error: Resource start-up disabled since no STONITH resources have been defined
+error: Either configure some or disable STONITH with the stonith-enabled option
+error: NOTE: Clusters with shared data need STONITH to ensure data integrity
+error: Resource start-up disabled since no STONITH resources have been defined
+error: Either configure some or disable STONITH with the stonith-enabled option
+error: NOTE: Clusters with shared data need STONITH to ensure data integrity
Current cluster status:
* Full List of Resources:
* No resources
Performing Requested Modifications:
* Bringing node node1 online
Transition Summary:
Executing Cluster Transition:
Revised Cluster Status:
* Node List:
* Online: [ node1 ]
* Full List of Resources:
* No resources
=#=#=#= Current cib after: Create node1 and bring it online =#=#=#=
=#=#=#= End test: Create node1 and bring it online - OK (0) =#=#=#=
* Passed: crm_simulate - Create node1 and bring it online
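For reference, bringing a node online in a simulation like the one above is typically done with crm_simulate's --node-up option; the CIB file name below is an assumption:
    # Load a CIB snapshot, simulate node1 joining, and show the resulting transition
    crm_simulate --xml-file ./cib.xml --node-up node1 --simulate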
=#=#=#= Begin test: Create node attribute =#=#=#=
=#=#=#= Current cib after: Create node attribute =#=#=#=
=#=#=#= End test: Create node attribute - OK (0) =#=#=#=
* Passed: crm_attribute - Create node attribute
=#=#=#= Begin test: Query new node attribute =#=#=#=
=#=#=#= Current cib after: Query new node attribute =#=#=#=
=#=#=#= End test: Query new node attribute - OK (0) =#=#=#=
* Passed: cibadmin - Query new node attribute
=#=#=#= Begin test: Create second node attribute =#=#=#=
=#=#=#= Current cib after: Create second node attribute =#=#=#=
=#=#=#= End test: Create second node attribute - OK (0) =#=#=#=
* Passed: crm_attribute - Create second node attribute
=#=#=#= Begin test: Query node attributes by pattern =#=#=#=
scope=nodes name=ram value=1024M
scope=nodes name=rattr value=XYZ
=#=#=#= End test: Query node attributes by pattern - OK (0) =#=#=#=
* Passed: crm_attribute - Query node attributes by pattern
=#=#=#= Begin test: Update node attributes by pattern =#=#=#=
=#=#=#= Current cib after: Update node attributes by pattern =#=#=#=
=#=#=#= End test: Update node attributes by pattern - OK (0) =#=#=#=
* Passed: crm_attribute - Update node attributes by pattern
=#=#=#= Begin test: Delete node attributes by pattern =#=#=#=
Deleted nodes attribute: id=nodes-node1-rattr name=rattr
=#=#=#= Current cib after: Delete node attributes by pattern =#=#=#=
=#=#=#= End test: Delete node attributes by pattern - OK (0) =#=#=#=
* Passed: crm_attribute - Delete node attributes by pattern
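The three pattern tests above operate on node attributes matched by a regular expression rather than an exact name. A hedged sketch (node1 and the attribute names come from the output above; the patterns themselves are illustrative):
    # Query node1 attributes whose names match a pattern instead of an exact name
    crm_attribute -N node1 -P '^ra' --query
    # Delete matching attributes the same way
    crm_attribute -N node1 -P 'rattr' -D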
=#=#=#= Begin test: Set a transient (fail-count) node attribute =#=#=#=
=#=#=#= Current cib after: Set a transient (fail-count) node attribute =#=#=#=
=#=#=#= End test: Set a transient (fail-count) node attribute - OK (0) =#=#=#=
* Passed: crm_attribute - Set a transient (fail-count) node attribute
=#=#=#= Begin test: Query a fail count =#=#=#=
scope=status name=fail-count-foo value=3
=#=#=#= Current cib after: Query a fail count =#=#=#=
=#=#=#= End test: Query a fail count - OK (0) =#=#=#=
* Passed: crm_failcount - Query a fail count
=#=#=#= Begin test: Show node attributes with crm_simulate =#=#=#=
-unpack_resources error: Resource start-up disabled since no STONITH resources have been defined
-unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option
-unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity
+error: Resource start-up disabled since no STONITH resources have been defined
+error: Either configure some or disable STONITH with the stonith-enabled option
+error: NOTE: Clusters with shared data need STONITH to ensure data integrity
Current cluster status:
* Node List:
* Online: [ node1 ]
* Full List of Resources:
* No resources
* Node Attributes:
* Node: node1:
* ram : 1024M
=#=#=#= End test: Show node attributes with crm_simulate - OK (0) =#=#=#=
* Passed: crm_simulate - Show node attributes with crm_simulate
=#=#=#= Begin test: Set a second transient node attribute =#=#=#=
=#=#=#= Current cib after: Set a second transient node attribute =#=#=#=
=#=#=#= End test: Set a second transient node attribute - OK (0) =#=#=#=
* Passed: crm_attribute - Set a second transient node attribute
=#=#=#= Begin test: Query transient node attributes by pattern =#=#=#=
scope=status name=fail-count-foo value=3
scope=status name=fail-count-bar value=5
=#=#=#= End test: Query transient node attributes by pattern - OK (0) =#=#=#=
* Passed: crm_attribute - Query transient node attributes by pattern
=#=#=#= Begin test: Update transient node attributes by pattern =#=#=#=
=#=#=#= Current cib after: Update transient node attributes by pattern =#=#=#=
=#=#=#= End test: Update transient node attributes by pattern - OK (0) =#=#=#=
* Passed: crm_attribute - Update transient node attributes by pattern
=#=#=#= Begin test: Delete transient node attributes by pattern =#=#=#=
Deleted status attribute: id=status-node1-fail-count-foo name=fail-count-foo
Deleted status attribute: id=status-node1-fail-count-bar name=fail-count-bar
=#=#=#= Current cib after: Delete transient node attributes by pattern =#=#=#=
=#=#=#= End test: Delete transient node attributes by pattern - OK (0) =#=#=#=
* Passed: crm_attribute - Delete transient node attributes by pattern
=#=#=#= Begin test: crm_attribute given invalid delete usage =#=#=#=
crm_attribute: Error: must specify attribute name or pattern to delete
=#=#=#= End test: crm_attribute given invalid delete usage - Incorrect usage (64) =#=#=#=
* Passed: crm_attribute - crm_attribute given invalid delete usage
=#=#=#= Begin test: Set a utilization node attribute =#=#=#=
=#=#=#= Current cib after: Set a utilization node attribute =#=#=#=
=#=#=#= End test: Set a utilization node attribute - OK (0) =#=#=#=
* Passed: crm_attribute - Set a utilization node attribute
=#=#=#= Begin test: Query utilization node attribute =#=#=#=
scope=nodes name=cpu value=1
=#=#=#= End test: Query utilization node attribute - OK (0) =#=#=#=
* Passed: crm_attribute - Query utilization node attribute
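Utilization attributes live in a separate section of the node entry and are addressed with crm_attribute's -z/--utilization switch. A short sketch, with the node and attribute names taken from the output above:
    # Set and then query a utilization attribute on node1
    crm_attribute -N node1 -z -n cpu -v 1
    crm_attribute -N node1 -z -n cpu --query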
=#=#=#= Begin test: Replace operation should fail =#=#=#=
Call failed: Update was older than existing configuration
=#=#=#= End test: Replace operation should fail - Update was older than existing configuration (103) =#=#=#=
* Passed: cibadmin - Replace operation should fail
=#=#=#= Begin test: Query a nonexistent promotable score attribute =#=#=#=
crm_attribute: Error performing operation: No such device or address
=#=#=#= End test: Query a nonexistent promotable score attribute - No such object (105) =#=#=#=
* Passed: crm_attribute - Query a nonexistent promotable score attribute
=#=#=#= Begin test: Query a nonexistent promotable score attribute (XML) =#=#=#=
crm_attribute: Error performing operation: No such device or address
=#=#=#= End test: Query a nonexistent promotable score attribute (XML) - No such object (105) =#=#=#=
* Passed: crm_attribute - Query a nonexistent promotable score attribute (XML)
=#=#=#= Begin test: Delete a nonexistent promotable score attribute =#=#=#=
=#=#=#= End test: Delete a nonexistent promotable score attribute - OK (0) =#=#=#=
* Passed: crm_attribute - Delete a nonexistent promotable score attribute
=#=#=#= Begin test: Delete a nonexistent promotable score attribute (XML) =#=#=#=
=#=#=#= End test: Delete a nonexistent promotable score attribute (XML) - OK (0) =#=#=#=
* Passed: crm_attribute - Delete a nonexistent promotable score attribute (XML)
=#=#=#= Begin test: Query after deleting a nonexistent promotable score attribute =#=#=#=
crm_attribute: Error performing operation: No such device or address
=#=#=#= End test: Query after deleting a nonexistent promotable score attribute - No such object (105) =#=#=#=
* Passed: crm_attribute - Query after deleting a nonexistent promotable score attribute
=#=#=#= Begin test: Query after deleting a nonexistent promotable score attribute (XML) =#=#=#=
crm_attribute: Error performing operation: No such device or address
=#=#=#= End test: Query after deleting a nonexistent promotable score attribute (XML) - No such object (105) =#=#=#=
* Passed: crm_attribute - Query after deleting a nonexistent promotable score attribute (XML)
=#=#=#= Begin test: Update a nonexistent promotable score attribute =#=#=#=
=#=#=#= End test: Update a nonexistent promotable score attribute - OK (0) =#=#=#=
* Passed: crm_attribute - Update a nonexistent promotable score attribute
=#=#=#= Begin test: Update a nonexistent promotable score attribute (XML) =#=#=#=
=#=#=#= End test: Update a nonexistent promotable score attribute (XML) - OK (0) =#=#=#=
* Passed: crm_attribute - Update a nonexistent promotable score attribute (XML)
=#=#=#= Begin test: Query after updating a nonexistent promotable score attribute =#=#=#=
scope=status name=master-promotable-rsc value=1
=#=#=#= End test: Query after updating a nonexistent promotable score attribute - OK (0) =#=#=#=
* Passed: crm_attribute - Query after updating a nonexistent promotable score attribute
=#=#=#= Begin test: Query after updating a nonexistent promotable score attribute (XML) =#=#=#=
=#=#=#= End test: Query after updating a nonexistent promotable score attribute (XML) - OK (0) =#=#=#=
* Passed: crm_attribute - Query after updating a nonexistent promotable score attribute (XML)
=#=#=#= Begin test: Update an existing promotable score attribute =#=#=#=
=#=#=#= End test: Update an existing promotable score attribute - OK (0) =#=#=#=
* Passed: crm_attribute - Update an existing promotable score attribute
=#=#=#= Begin test: Update an existing promotable score attribute (XML) =#=#=#=
=#=#=#= End test: Update an existing promotable score attribute (XML) - OK (0) =#=#=#=
* Passed: crm_attribute - Update an existing promotable score attribute (XML)
=#=#=#= Begin test: Query after updating an existing promotable score attribute =#=#=#=
scope=status name=master-promotable-rsc value=5
=#=#=#= End test: Query after updating an existing promotable score attribute - OK (0) =#=#=#=
* Passed: crm_attribute - Query after updating an existing promotable score attribute
=#=#=#= Begin test: Query after updating an existing promotable score attribute (XML) =#=#=#=
=#=#=#= End test: Query after updating an existing promotable score attribute (XML) - OK (0) =#=#=#=
* Passed: crm_attribute - Query after updating an existing promotable score attribute (XML)
=#=#=#= Begin test: Delete an existing promotable score attribute =#=#=#=
Deleted status attribute: id=status-1-master-promotable-rsc name=master-promotable-rsc
=#=#=#= End test: Delete an existing promotable score attribute - OK (0) =#=#=#=
* Passed: crm_attribute - Delete an existing promotable score attribute
=#=#=#= Begin test: Delete an existing promotable score attribute (XML) =#=#=#=
=#=#=#= End test: Delete an existing promotable score attribute (XML) - OK (0) =#=#=#=
* Passed: crm_attribute - Delete an existing promotable score attribute (XML)
=#=#=#= Begin test: Query after deleting an existing promotable score attribute =#=#=#=
crm_attribute: Error performing operation: No such device or address
=#=#=#= End test: Query after deleting an existing promotable score attribute - No such object (105) =#=#=#=
* Passed: crm_attribute - Query after deleting an existing promotable score attribute
=#=#=#= Begin test: Query after deleting an existing promotable score attribute (XML) =#=#=#=
crm_attribute: Error performing operation: No such device or address
=#=#=#= End test: Query after deleting an existing promotable score attribute (XML) - No such object (105) =#=#=#=
* Passed: crm_attribute - Query after deleting an existing promotable score attribute (XML)
=#=#=#= Begin test: Update a promotable score attribute to -INFINITY =#=#=#=
=#=#=#= End test: Update a promotable score attribute to -INFINITY - OK (0) =#=#=#=
* Passed: crm_attribute - Update a promotable score attribute to -INFINITY
=#=#=#= Begin test: Update a promotable score attribute to -INFINITY (XML) =#=#=#=
=#=#=#= End test: Update a promotable score attribute to -INFINITY (XML) - OK (0) =#=#=#=
* Passed: crm_attribute - Update a promotable score attribute to -INFINITY (XML)
=#=#=#= Begin test: Query after updating a promotable score attribute to -INFINITY =#=#=#=
scope=status name=master-promotable-rsc value=-INFINITY
=#=#=#= End test: Query after updating a promotable score attribute to -INFINITY - OK (0) =#=#=#=
* Passed: crm_attribute - Query after updating a promotable score attribute to -INFINITY
=#=#=#= Begin test: Query after updating a promotable score attribute to -INFINITY (XML) =#=#=#=
=#=#=#= End test: Query after updating a promotable score attribute to -INFINITY (XML) - OK (0) =#=#=#=
* Passed: crm_attribute - Query after updating a promotable score attribute to -INFINITY (XML)
=#=#=#= Begin test: Try OCF_RESOURCE_INSTANCE if -p is specified with an empty string =#=#=#=
scope=status name=master-promotable-rsc value=-INFINITY
=#=#=#= End test: Try OCF_RESOURCE_INSTANCE if -p is specified with an empty string - OK (0) =#=#=#=
* Passed: crm_attribute - Try OCF_RESOURCE_INSTANCE if -p is specified with an empty string
diff --git a/cts/cli/regression.crm_resource.exp b/cts/cli/regression.crm_resource.exp
index 9859fe316d..63280a1896 100644
--- a/cts/cli/regression.crm_resource.exp
+++ b/cts/cli/regression.crm_resource.exp
@@ -1,4049 +1,4086 @@
=#=#=#= Begin test: crm_resource run with extra arguments =#=#=#=
crm_resource: non-option ARGV-elements:
[1 of 2] foo
[2 of 2] bar
=#=#=#= End test: crm_resource run with extra arguments - Incorrect usage (64) =#=#=#=
* Passed: crm_resource - crm_resource run with extra arguments
=#=#=#= Begin test: List all available resource options (invalid type) =#=#=#=
crm_resource: Error parsing option --list-options
=#=#=#= End test: List all available resource options (invalid type) - Incorrect usage (64) =#=#=#=
* Passed: crm_resource - List all available resource options (invalid type)
=#=#=#= Begin test: List all available resource options (invalid type) =#=#=#=
crm_resource: Error parsing option --list-options
=#=#=#= End test: List all available resource options (invalid type) - Incorrect usage (64) =#=#=#=
* Passed: crm_resource - List all available resource options (invalid type)
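The two failures above come from passing an unrecognized type to --list-options; the listings that follow use a valid one. A hedged sketch (accepted type names, and the use of --all for the advanced listing, may vary by version and are assumptions here):
    # Rejected with usage error 64: not a known option type
    crm_resource --list-options=bogus
    # Valid: list the meta-attributes applicable to primitive resources
    crm_resource --list-options=primitive
    crm_resource --list-options=primitive --all    # include advanced meta-attributes as well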
=#=#=#= Begin test: List non-advanced primitive meta-attributes =#=#=#=
Primitive meta-attributes
Meta-attributes applicable to primitive resources
* priority: Resource assignment priority
* If not all resources can be active, the cluster will stop lower-priority resources in order to keep higher-priority ones active.
* Possible values: score (default: )
* critical: Default value for influence in colocation constraints
* Use this value as the default for influence in all colocation constraints involving this resource, as well as in the implicit colocation constraints created if this resource is in a group.
* Possible values: boolean (default: )
* target-role: State the cluster should attempt to keep this resource in
* "Stopped" forces the resource to be stopped. "Started" allows the resource to be started (and in the case of promotable clone resources, promoted if appropriate). "Unpromoted" allows the resource to be started, but only in the unpromoted role if the resource is promotable. "Promoted" is equivalent to "Started".
* Possible values: "Stopped", "Started" (default), "Unpromoted", "Promoted"
* is-managed: Whether the cluster is allowed to actively change the resource's state
* If false, the cluster will not start, stop, promote, or demote the resource on any node. Recurring actions for the resource are unaffected. If true, a true value for the maintenance-mode cluster option, the maintenance node attribute, or the maintenance resource meta-attribute overrides this.
* Possible values: boolean (default: )
* maintenance: If true, the cluster will not schedule any actions involving the resource
* If true, the cluster will not start, stop, promote, or demote the resource on any node, and will pause any recurring monitors (except those specifying role as "Stopped"). If false, a true value for the maintenance-mode cluster option or maintenance node attribute overrides this.
* Possible values: boolean (default: )
* resource-stickiness: Score to add to the current node when a resource is already active
* Score to add to the current node when a resource is already active. This allows running resources to stay where they are, even if they would be placed elsewhere if they were being started from a stopped state. The default is 1 for individual clone instances, and 0 for all other resources.
* Possible values: score (no default)
* requires: Conditions under which the resource can be started
* Conditions under which the resource can be started. "nothing" means the cluster can always start this resource. "quorum" means the cluster can start this resource only if a majority of the configured nodes are active. "fencing" means the cluster can start this resource only if a majority of the configured nodes are active and any failed or unknown nodes have been fenced. "unfencing" means the cluster can start this resource only if a majority of the configured nodes are active and any failed or unknown nodes have been fenced, and only on nodes that have been unfenced. The default is "quorum" for resources with a class of stonith; otherwise, "unfencing" if unfencing is active in the cluster; otherwise, "fencing" if the stonith-enabled cluster option is true; otherwise, "quorum".
* Possible values: "nothing", "quorum", "fencing", "unfencing"
* migration-threshold: Number of failures on a node before the resource becomes ineligible to run there.
* Number of failures that may occur for this resource on a node, before that node is marked ineligible to host this resource. A value of 0 indicates that this feature is disabled (the node will never be marked ineligible). By contrast, the cluster treats "INFINITY" (the default) as a very large but finite number. This option has an effect only if the failed operation specifies its on-fail attribute as "restart" (the default), and additionally for failed start operations, if the start-failure-is-fatal cluster property is set to false.
* Possible values: score (default: )
* failure-timeout: Number of seconds before acting as if a failure had not occurred
* Number of seconds after a failed action for this resource before acting as if the failure had not occurred, and potentially allowing the resource back to the node on which it failed. A value of 0 indicates that this feature is disabled.
* Possible values: duration (default: )
* multiple-active: What to do if the cluster finds the resource active on more than one node
* What to do if the cluster finds the resource active on more than one node. "block" means to mark the resource as unmanaged. "stop_only" means to stop all active instances of this resource and leave them stopped. "stop_start" means to stop all active instances of this resource and start the resource in one location only. "stop_unexpected" means to stop all active instances of this resource except where the resource should be active. (This should be used only when extra instances are not expected to disrupt existing instances, and the resource agent's monitor of an existing instance is capable of detecting any problems that could be caused. Note that any resources ordered after this one will still need to be restarted.)
* Possible values: "block", "stop_only", "stop_start" (default), "stop_unexpected"
* allow-migrate: Whether the cluster should try to "live migrate" this resource when it needs to be moved
* Whether the cluster should try to "live migrate" this resource when it needs to be moved. The default is true for ocf:pacemaker:remote resources, and false otherwise.
* Possible values: boolean (no default)
* allow-unhealthy-nodes: Whether the resource should be allowed to run on a node even if the node's health score would otherwise prevent it
* Possible values: boolean (default: )
* container-attribute-target: Where to check user-defined node attributes
* Whether to check user-defined node attributes on the physical host where a container is running or on the local node. This is usually set for a bundle resource and inherited by the bundle's primitive resource. A value of "host" means to check user-defined node attributes on the underlying physical host. Any other value means to check user-defined node attributes on the local node (for a bundled primitive resource, this is the bundle node).
* Possible values: string (no default)
* remote-node: Name of the Pacemaker Remote guest node this resource is associated with, if any
* Name of the Pacemaker Remote guest node this resource is associated with, if any. If specified, this both enables the resource as a guest node and defines the unique name used to identify the guest node. The guest must be configured to run the Pacemaker Remote daemon when it is started. WARNING: This value cannot overlap with any resource or node IDs.
* Possible values: string (no default)
* remote-addr: If remote-node is specified, the IP address or hostname used to connect to the guest via Pacemaker Remote
* If remote-node is specified, the IP address or hostname used to connect to the guest via Pacemaker Remote. The Pacemaker Remote daemon on the guest must be configured to accept connections on this address. The default is the value of the remote-node meta-attribute.
* Possible values: string (no default)
* remote-port: If remote-node is specified, port on the guest used for its Pacemaker Remote connection
* If remote-node is specified, the port on the guest used for its Pacemaker Remote connection. The Pacemaker Remote daemon on the guest must be configured to listen on this port.
* Possible values: port (default: )
* remote-connect-timeout: If remote-node is specified, how long before a pending Pacemaker Remote guest connection times out.
* Possible values: timeout (default: )
* remote-allow-migrate: If remote-node is specified, this acts as the allow-migrate meta-attribute for the implicit remote connection resource (ocf:pacemaker:remote).
* Possible values: boolean (default: )
=#=#=#= End test: List non-advanced primitive meta-attributes - OK (0) =#=#=#=
* Passed: crm_resource - List non-advanced primitive meta-attributes
=#=#=#= Begin test: List non-advanced primitive meta-attributes (XML) =#=#=#=
[XML option metadata, version 1.1: long and short descriptions for each primitive meta-attribute, mirroring the plain-text listing above; the XML markup was not preserved in this capture]
=#=#=#= End test: List non-advanced primitive meta-attributes (XML) - OK (0) =#=#=#=
* Passed: crm_resource - List non-advanced primitive meta-attributes (XML)
=#=#=#= Begin test: List all available primitive meta-attributes =#=#=#=
Primitive meta-attributes
Meta-attributes applicable to primitive resources
* priority: Resource assignment priority
* If not all resources can be active, the cluster will stop lower-priority resources in order to keep higher-priority ones active.
* Possible values: score (default: )
* critical: Default value for influence in colocation constraints
* Use this value as the default for influence in all colocation constraints involving this resource, as well as in the implicit colocation constraints created if this resource is in a group.
* Possible values: boolean (default: )
* target-role: State the cluster should attempt to keep this resource in
* "Stopped" forces the resource to be stopped. "Started" allows the resource to be started (and in the case of promotable clone resources, promoted if appropriate). "Unpromoted" allows the resource to be started, but only in the unpromoted role if the resource is promotable. "Promoted" is equivalent to "Started".
* Possible values: "Stopped", "Started" (default), "Unpromoted", "Promoted"
* is-managed: Whether the cluster is allowed to actively change the resource's state
* If false, the cluster will not start, stop, promote, or demote the resource on any node. Recurring actions for the resource are unaffected. If true, a true value for the maintenance-mode cluster option, the maintenance node attribute, or the maintenance resource meta-attribute overrides this.
* Possible values: boolean (default: )
* maintenance: If true, the cluster will not schedule any actions involving the resource
* If true, the cluster will not start, stop, promote, or demote the resource on any node, and will pause any recurring monitors (except those specifying role as "Stopped"). If false, a true value for the maintenance-mode cluster option or maintenance node attribute overrides this.
* Possible values: boolean (default: )
* resource-stickiness: Score to add to the current node when a resource is already active
* Score to add to the current node when a resource is already active. This allows running resources to stay where they are, even if they would be placed elsewhere if they were being started from a stopped state. The default is 1 for individual clone instances, and 0 for all other resources.
* Possible values: score (no default)
* requires: Conditions under which the resource can be started
* Conditions under which the resource can be started. "nothing" means the cluster can always start this resource. "quorum" means the cluster can start this resource only if a majority of the configured nodes are active. "fencing" means the cluster can start this resource only if a majority of the configured nodes are active and any failed or unknown nodes have been fenced. "unfencing" means the cluster can start this resource only if a majority of the configured nodes are active and any failed or unknown nodes have been fenced, and only on nodes that have been unfenced. The default is "quorum" for resources with a class of stonith; otherwise, "unfencing" if unfencing is active in the cluster; otherwise, "fencing" if the stonith-enabled cluster option is true; otherwise, "quorum".
* Possible values: "nothing", "quorum", "fencing", "unfencing"
* migration-threshold: Number of failures on a node before the resource becomes ineligible to run there.
* Number of failures that may occur for this resource on a node, before that node is marked ineligible to host this resource. A value of 0 indicates that this feature is disabled (the node will never be marked ineligible). By contrast, the cluster treats "INFINITY" (the default) as a very large but finite number. This option has an effect only if the failed operation specifies its on-fail attribute as "restart" (the default), and additionally for failed start operations, if the start-failure-is-fatal cluster property is set to false.
* Possible values: score (default: )
* failure-timeout: Number of seconds before acting as if a failure had not occurred
* Number of seconds after a failed action for this resource before acting as if the failure had not occurred, and potentially allowing the resource back to the node on which it failed. A value of 0 indicates that this feature is disabled.
* Possible values: duration (default: )
* multiple-active: What to do if the cluster finds the resource active on more than one node
* What to do if the cluster finds the resource active on more than one node. "block" means to mark the resource as unmanaged. "stop_only" means to stop all active instances of this resource and leave them stopped. "stop_start" means to stop all active instances of this resource and start the resource in one location only. "stop_unexpected" means to stop all active instances of this resource except where the resource should be active. (This should be used only when extra instances are not expected to disrupt existing instances, and the resource agent's monitor of an existing instance is capable of detecting any problems that could be caused. Note that any resources ordered after this one will still need to be restarted.)
* Possible values: "block", "stop_only", "stop_start" (default), "stop_unexpected"
* allow-migrate: Whether the cluster should try to "live migrate" this resource when it needs to be moved
* Whether the cluster should try to "live migrate" this resource when it needs to be moved. The default is true for ocf:pacemaker:remote resources, and false otherwise.
* Possible values: boolean (no default)
* allow-unhealthy-nodes: Whether the resource should be allowed to run on a node even if the node's health score would otherwise prevent it
* Possible values: boolean (default: )
* container-attribute-target: Where to check user-defined node attributes
* Whether to check user-defined node attributes on the physical host where a container is running or on the local node. This is usually set for a bundle resource and inherited by the bundle's primitive resource. A value of "host" means to check user-defined node attributes on the underlying physical host. Any other value means to check user-defined node attributes on the local node (for a bundled primitive resource, this is the bundle node).
* Possible values: string (no default)
* remote-node: Name of the Pacemaker Remote guest node this resource is associated with, if any
* Name of the Pacemaker Remote guest node this resource is associated with, if any. If specified, this both enables the resource as a guest node and defines the unique name used to identify the guest node. The guest must be configured to run the Pacemaker Remote daemon when it is started. WARNING: This value cannot overlap with any resource or node IDs.
* Possible values: string (no default)
* remote-addr: If remote-node is specified, the IP address or hostname used to connect to the guest via Pacemaker Remote
* If remote-node is specified, the IP address or hostname used to connect to the guest via Pacemaker Remote. The Pacemaker Remote daemon on the guest must be configured to accept connections on this address. The default is the value of the remote-node meta-attribute.
* Possible values: string (no default)
* remote-port: If remote-node is specified, port on the guest used for its Pacemaker Remote connection
* If remote-node is specified, the port on the guest used for its Pacemaker Remote connection. The Pacemaker Remote daemon on the guest must be configured to listen on this port.
* Possible values: port (default: )
* remote-connect-timeout: If remote-node is specified, how long before a pending Pacemaker Remote guest connection times out.
* Possible values: timeout (default: )
* remote-allow-migrate: If remote-node is specified, this acts as the allow-migrate meta-attribute for the implicit remote connection resource (ocf:pacemaker:remote).
* Possible values: boolean (default: )
=#=#=#= End test: List all available primitive meta-attributes - OK (0) =#=#=#=
* Passed: crm_resource - List all available primitive meta-attributes
=#=#=#= Begin test: List all available primitive meta-attributes (XML) =#=#=#=
[XML option metadata, version 1.1: long and short descriptions covering all available primitive meta-attributes, matching the plain-text listing above; the XML markup was not preserved in this capture]
=#=#=#= End test: List all available primitive meta-attributes (XML) - OK (0) =#=#=#=
* Passed: crm_resource - List all available primitive meta-attributes (XML)
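Listings like the two above come from crm_resource's option-listing mode. The regression driver's exact command line is not shown here, but a plausible equivalent, assuming a Pacemaker build whose crm_resource supports --list-options, is:
    # Text listing of non-advanced primitive meta-attributes
    crm_resource --list-options=primitive
    # Include advanced options and emit the XML metadata form instead
    crm_resource --list-options=primitive --all --output-as=xml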
=#=#=#= Begin test: List non-advanced fencing parameters =#=#=#=
Fencing resource common parameters
Special parameters that are available for all fencing resources, regardless of type. They are processed by Pacemaker, rather than by the fence agent or the fencing library.
* pcmk_host_map: A mapping of node names to port numbers for devices that do not support node names.
* For example, "node1:1;node2:2,3" would tell the cluster to use port 1 for node1 and ports 2 and 3 for node2.
* Possible values: string (no default)
* pcmk_host_list: Nodes targeted by this device
* Comma-separated list of nodes that can be targeted by this device (for example, "node1,node2,node3"). If pcmk_host_check is "static-list", either this or pcmk_host_map must be set.
* Possible values: string (no default)
* pcmk_host_check: How to determine which nodes can be targeted by the device
* Use "dynamic-list" to query the device via the 'list' command; "static-list" to check the pcmk_host_list attribute; "status" to query the device via the 'status' command; or "none" to assume every device can fence every node. The default value is "static-list" if pcmk_host_map or pcmk_host_list is set; otherwise "dynamic-list" if the device supports the list operation; otherwise "status" if the device supports the status operation; otherwise "none"
* Possible values: "dynamic-list", "static-list", "status", "none"
* pcmk_delay_max: Enable a delay of no more than the time specified before executing fencing actions.
* Enable a delay of no more than the time specified before executing fencing actions. Pacemaker derives the overall delay by taking the value of pcmk_delay_base and adding a random delay value such that the sum is kept below this maximum.
* Possible values: duration (default: )
* pcmk_delay_base: Enable a base delay for fencing actions and specify base delay value.
* This enables a static delay for fencing actions, which can help avoid "death matches" where two nodes try to fence each other at the same time. If pcmk_delay_max is also used, a random delay will be added such that the total delay is kept below that value. This can be set to a single time value to apply to any node targeted by this device (useful if a separate device is configured for each target), or to a node map (for example, "node1:1s;node2:5") to set a different value for each target.
* Possible values: string (default: )
* pcmk_action_limit: The maximum number of actions that can be performed in parallel on this device
* If the concurrent-fencing cluster property is "true", this specifies the maximum number of actions that can be performed in parallel on this device. A value of -1 means unlimited.
* Possible values: integer (default: )
=#=#=#= End test: List non-advanced fencing parameters - OK (0) =#=#=#=
* Passed: crm_resource - List non-advanced fencing parameters
=#=#=#= Begin test: List non-advanced fencing parameters (XML) =#=#=#=
1.1
Special parameters that are available for all fencing resources, regardless of type. They are processed by Pacemaker, rather than by the fence agent or the fencing library.
Fencing resource common parameters
If the fencing agent metadata advertises support for the "port" or "plug" parameter, that will be used as the default, otherwise "none" will be used, which tells the cluster not to supply any additional parameters.
Name of agent parameter that should be set to the fencing target
For example, "node1:1;node2:2,3" would tell the cluster to use port 1 for node1 and ports 2 and 3 for node2.
A mapping of node names to port numbers for devices that do not support node names.
Comma-separated list of nodes that can be targeted by this device (for example, "node1,node2,node3"). If pcmk_host_check is "static-list", either this or pcmk_host_map must be set.
Nodes targeted by this device
Use "dynamic-list" to query the device via the 'list' command; "static-list" to check the pcmk_host_list attribute; "status" to query the device via the 'status' command; or "none" to assume every device can fence every node. The default value is "static-list" if pcmk_host_map or pcmk_host_list is set; otherwise "dynamic-list" if the device supports the list operation; otherwise "status" if the device supports the status operation; otherwise "none"
How to determine which nodes can be targeted by the device
Enable a delay of no more than the time specified before executing fencing actions. Pacemaker derives the overall delay by taking the value of pcmk_delay_base and adding a random delay value such that the sum is kept below this maximum.
Enable a delay of no more than the time specified before executing fencing actions.
This enables a static delay for fencing actions, which can help avoid "death matches" where two nodes try to fence each other at the same time. If pcmk_delay_max is also used, a random delay will be added such that the total delay is kept below that value. This can be set to a single time value to apply to any node targeted by this device (useful if a separate device is configured for each target), or to a node map (for example, "node1:1s;node2:5") to set a different value for each target.
Enable a base delay for fencing actions and specify base delay value.
If the concurrent-fencing cluster property is "true", this specifies the maximum number of actions that can be performed in parallel on this device. A value of -1 means unlimited.
The maximum number of actions that can be performed in parallel on this device
Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'reboot' action.
An alternate command to run instead of 'reboot'
Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'reboot' actions.
Specify an alternate timeout to use for 'reboot' actions instead of stonith-timeout
Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries a 'reboot' action before giving up.
The maximum number of times to try the 'reboot' command within the timeout period
Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'off' action.
An alternate command to run instead of 'off'
Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'off' actions.
Specify an alternate timeout to use for 'off' actions instead of stonith-timeout
Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries an 'off' action before giving up.
The maximum number of times to try the 'off' command within the timeout period
Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'on' action.
An alternate command to run instead of 'on'
Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'on' actions.
Specify an alternate timeout to use for 'on' actions instead of stonith-timeout
Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries an 'on' action before giving up.
The maximum number of times to try the 'on' command within the timeout period
Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'list' action.
An alternate command to run instead of 'list'
Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'list' actions.
Specify an alternate timeout to use for 'list' actions instead of stonith-timeout
Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries a 'list' action before giving up.
The maximum number of times to try the 'list' command within the timeout period
Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'monitor' action.
An alternate command to run instead of 'monitor'
Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'monitor' actions.
Specify an alternate timeout to use for 'monitor' actions instead of stonith-timeout
Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries a 'monitor' action before giving up.
The maximum number of times to try the 'monitor' command within the timeout period
Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'status' action.
An alternate command to run instead of 'status'
Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'status' actions.
Specify an alternate timeout to use for 'status' actions instead of stonith-timeout
Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries a 'status' action before giving up.
The maximum number of times to try the 'status' command within the timeout period
=#=#=#= End test: List non-advanced fencing parameters (XML) - OK (0) =#=#=#=
* Passed: crm_resource - List non-advanced fencing parameters (XML)
=#=#=#= Begin test: List all available fencing parameters =#=#=#=
Fencing resource common parameters
Special parameters that are available for all fencing resources, regardless of type. They are processed by Pacemaker, rather than by the fence agent or the fencing library.
* pcmk_host_map: A mapping of node names to port numbers for devices that do not support node names.
* For example, "node1:1;node2:2,3" would tell the cluster to use port 1 for node1 and ports 2 and 3 for node2.
* Possible values: string (no default)
* pcmk_host_list: Nodes targeted by this device
* Comma-separated list of nodes that can be targeted by this device (for example, "node1,node2,node3"). If pcmk_host_check is "static-list", either this or pcmk_host_map must be set.
* Possible values: string (no default)
* pcmk_host_check: How to determine which nodes can be targeted by the device
* Use "dynamic-list" to query the device via the 'list' command; "static-list" to check the pcmk_host_list attribute; "status" to query the device via the 'status' command; or "none" to assume every device can fence every node. The default value is "static-list" if pcmk_host_map or pcmk_host_list is set; otherwise "dynamic-list" if the device supports the list operation; otherwise "status" if the device supports the status operation; otherwise "none"
* Possible values: "dynamic-list", "static-list", "status", "none"
* pcmk_delay_max: Enable a delay of no more than the time specified before executing fencing actions.
* Enable a delay of no more than the time specified before executing fencing actions. Pacemaker derives the overall delay by taking the value of pcmk_delay_base and adding a random delay value such that the sum is kept below this maximum.
* Possible values: duration (default: )
* pcmk_delay_base: Enable a base delay for fencing actions and specify base delay value.
* This enables a static delay for fencing actions, which can help avoid "death matches" where two nodes try to fence each other at the same time. If pcmk_delay_max is also used, a random delay will be added such that the total delay is kept below that value. This can be set to a single time value to apply to any node targeted by this device (useful if a separate device is configured for each target), or to a node map (for example, "node1:1s;node2:5") to set a different value for each target.
* Possible values: string (default: )
* pcmk_action_limit: The maximum number of actions that can be performed in parallel on this device
* If the concurrent-fencing cluster property is "true", this specifies the maximum number of actions that can be performed in parallel on this device. A value of -1 means unlimited.
* Possible values: integer (default: )
* ADVANCED OPTIONS:
* pcmk_host_argument: Name of agent parameter that should be set to the fencing target
* If the fencing agent metadata advertises support for the "port" or "plug" parameter, that will be used as the default, otherwise "none" will be used, which tells the cluster not to supply any additional parameters.
* Possible values: string (no default)
* pcmk_reboot_action: An alternate command to run instead of 'reboot'
* Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'reboot' action.
* Possible values: string (default: )
* pcmk_reboot_timeout: Specify an alternate timeout to use for 'reboot' actions instead of stonith-timeout
* Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'reboot' actions.
* Possible values: timeout (default: )
* pcmk_reboot_retries: The maximum number of times to try the 'reboot' command within the timeout period
* Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries a 'reboot' action before giving up.
* Possible values: integer (default: )
* pcmk_off_action: An alternate command to run instead of 'off'
* Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'off' action.
* Possible values: string (default: )
* pcmk_off_timeout: Specify an alternate timeout to use for 'off' actions instead of stonith-timeout
* Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'off' actions.
* Possible values: timeout (default: )
* pcmk_off_retries: The maximum number of times to try the 'off' command within the timeout period
* Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries an 'off' action before giving up.
* Possible values: integer (default: )
* pcmk_on_action: An alternate command to run instead of 'on'
* Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'on' action.
* Possible values: string (default: )
* pcmk_on_timeout: Specify an alternate timeout to use for 'on' actions instead of stonith-timeout
* Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'on' actions.
* Possible values: timeout (default: )
* pcmk_on_retries: The maximum number of times to try the 'on' command within the timeout period
* Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries an 'on' action before giving up.
* Possible values: integer (default: )
* pcmk_list_action: An alternate command to run instead of 'list'
* Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'list' action.
* Possible values: string (default: )
* pcmk_list_timeout: Specify an alternate timeout to use for 'list' actions instead of stonith-timeout
* Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'list' actions.
* Possible values: timeout (default: )
* pcmk_list_retries: The maximum number of times to try the 'list' command within the timeout period
* Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries a 'list' action before giving up.
* Possible values: integer (default: )
* pcmk_monitor_action: An alternate command to run instead of 'monitor'
* Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'monitor' action.
* Possible values: string (default: )
* pcmk_monitor_timeout: Specify an alternate timeout to use for 'monitor' actions instead of stonith-timeout
* Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'monitor' actions.
* Possible values: timeout (default: )
* pcmk_monitor_retries: The maximum number of times to try the 'monitor' command within the timeout period
* Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries a 'monitor' action before giving up.
* Possible values: integer (default: )
* pcmk_status_action: An alternate command to run instead of 'status'
* Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'status' action.
* Possible values: string (default: )
* pcmk_status_timeout: Specify an alternate timeout to use for 'status' actions instead of stonith-timeout
* Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'status' actions.
* Possible values: timeout (default: )
* pcmk_status_retries: The maximum number of times to try the 'status' command within the timeout period
* Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries a 'status' action before giving up.
* Possible values: integer (default: )
=#=#=#= End test: List all available fencing parameters - OK (0) =#=#=#=
* Passed: crm_resource - List all available fencing parameters
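The pcmk_* options described above are set as instance attributes of a fencing resource. A minimal sketch, assuming the fence_true dummy agent used elsewhere in these tests and hypothetical IDs and values:
    cibadmin --create --scope resources --xml-text '
      <primitive id="Fence" class="stonith" type="fence_true">
        <instance_attributes id="Fence-params">
          <nvpair id="Fence-params-host-list" name="pcmk_host_list" value="node1,node2,node3"/>
          <nvpair id="Fence-params-delay-base" name="pcmk_delay_base" value="node1:1s;node2:5"/>
          <nvpair id="Fence-params-delay-max" name="pcmk_delay_max" value="10s"/>
        </instance_attributes>
      </primitive>'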
=#=#=#= Begin test: List all available fencing parameters (XML) =#=#=#=
1.1
Special parameters that are available for all fencing resources, regardless of type. They are processed by Pacemaker, rather than by the fence agent or the fencing library.
Fencing resource common parameters
If the fencing agent metadata advertises support for the "port" or "plug" parameter, that will be used as the default, otherwise "none" will be used, which tells the cluster not to supply any additional parameters.
Name of agent parameter that should be set to the fencing target
For example, "node1:1;node2:2,3" would tell the cluster to use port 1 for node1 and ports 2 and 3 for node2.
A mapping of node names to port numbers for devices that do not support node names.
Comma-separated list of nodes that can be targeted by this device (for example, "node1,node2,node3"). If pcmk_host_check is "static-list", either this or pcmk_host_map must be set.
Nodes targeted by this device
Use "dynamic-list" to query the device via the 'list' command; "static-list" to check the pcmk_host_list attribute; "status" to query the device via the 'status' command; or "none" to assume every device can fence every node. The default value is "static-list" if pcmk_host_map or pcmk_host_list is set; otherwise "dynamic-list" if the device supports the list operation; otherwise "status" if the device supports the status operation; otherwise "none"
How to determine which nodes can be targeted by the device
Enable a delay of no more than the time specified before executing fencing actions. Pacemaker derives the overall delay by taking the value of pcmk_delay_base and adding a random delay value such that the sum is kept below this maximum.
Enable a delay of no more than the time specified before executing fencing actions.
This enables a static delay for fencing actions, which can help avoid "death matches" where two nodes try to fence each other at the same time. If pcmk_delay_max is also used, a random delay will be added such that the total delay is kept below that value. This can be set to a single time value to apply to any node targeted by this device (useful if a separate device is configured for each target), or to a node map (for example, "node1:1s;node2:5") to set a different value for each target.
Enable a base delay for fencing actions and specify base delay value.
If the concurrent-fencing cluster property is "true", this specifies the maximum number of actions that can be performed in parallel on this device. A value of -1 means unlimited.
The maximum number of actions that can be performed in parallel on this device
Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'reboot' action.
An alternate command to run instead of 'reboot'
Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'reboot' actions.
Specify an alternate timeout to use for 'reboot' actions instead of stonith-timeout
Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries a 'reboot' action before giving up.
The maximum number of times to try the 'reboot' command within the timeout period
Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'off' action.
An alternate command to run instead of 'off'
Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'off' actions.
Specify an alternate timeout to use for 'off' actions instead of stonith-timeout
Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries an 'off' action before giving up.
The maximum number of times to try the 'off' command within the timeout period
Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'on' action.
An alternate command to run instead of 'on'
Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'on' actions.
Specify an alternate timeout to use for 'on' actions instead of stonith-timeout
Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries an 'on' action before giving up.
The maximum number of times to try the 'on' command within the timeout period
Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'list' action.
An alternate command to run instead of 'list'
Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'list' actions.
Specify an alternate timeout to use for 'list' actions instead of stonith-timeout
Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries a 'list' action before giving up.
The maximum number of times to try the 'list' command within the timeout period
Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'monitor' action.
An alternate command to run instead of 'monitor'
Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'monitor' actions.
Specify an alternate timeout to use for 'monitor' actions instead of stonith-timeout
Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries a 'monitor' action before giving up.
The maximum number of times to try the 'monitor' command within the timeout period
Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'status' action.
An alternate command to run instead of 'status'
Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'status' actions.
Specify an alternate timeout to use for 'status' actions instead of stonith-timeout
Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries a 'status' action before giving up.
The maximum number of times to try the 'status' command within the timeout period
=#=#=#= End test: List all available fencing parameters (XML) - OK (0) =#=#=#=
* Passed: crm_resource - List all available fencing parameters (XML)
=#=#=#= Begin test: Create a resource =#=#=#=
=#=#=#= Current cib after: Create a resource =#=#=#=
=#=#=#= End test: Create a resource - OK (0) =#=#=#=
* Passed: cibadmin - Create a resource
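The "Create a resource" step is driven by cibadmin. One plausible equivalent for the dummy resource that appears later in this output (ocf:pacemaker:Dummy) is:
    cibadmin --create --scope resources \
        --xml-text '<primitive id="dummy" class="ocf" provider="pacemaker" type="Dummy"/>'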
=#=#=#= Begin test: crm_resource given both -r and resource config =#=#=#=
crm_resource: --resource cannot be used with --class, --agent, and --provider
=#=#=#= End test: crm_resource given both -r and resource config - Incorrect usage (64) =#=#=#=
* Passed: crm_resource - crm_resource given both -r and resource config
=#=#=#= Begin test: crm_resource given resource config with invalid action =#=#=#=
crm_resource: --class, --agent, and --provider can only be used with --validate and --force-*
=#=#=#= End test: crm_resource given resource config with invalid action - Incorrect usage (64) =#=#=#=
* Passed: crm_resource - crm_resource given resource config with invalid action
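The two usage errors above come from combining -r/--resource with an ad-hoc resource definition, and from using such a definition with an action other than --validate or --force-*. A valid combination, sketched with the same agent as an assumption, validates an agent that is not yet configured:
    crm_resource --validate --class ocf --provider pacemaker --agent Dummy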
=#=#=#= Begin test: Create a resource meta attribute =#=#=#=
-unpack_resources error: Resource start-up disabled since no STONITH resources have been defined
-unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option
-unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity
+error: Resource start-up disabled since no STONITH resources have been defined
+error: Either configure some or disable STONITH with the stonith-enabled option
+error: NOTE: Clusters with shared data need STONITH to ensure data integrity
Set 'dummy' option: id=dummy-meta_attributes-is-managed set=dummy-meta_attributes name=is-managed value=false
=#=#=#= Current cib after: Create a resource meta attribute =#=#=#=
=#=#=#= End test: Create a resource meta attribute - OK (0) =#=#=#=
* Passed: crm_resource - Create a resource meta attribute
=#=#=#= Begin test: Query a resource meta attribute =#=#=#=
-unpack_resources error: Resource start-up disabled since no STONITH resources have been defined
-unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option
-unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity
+error: Resource start-up disabled since no STONITH resources have been defined
+error: Either configure some or disable STONITH with the stonith-enabled option
+error: NOTE: Clusters with shared data need STONITH to ensure data integrity
false
=#=#=#= Current cib after: Query a resource meta attribute =#=#=#=
=#=#=#= End test: Query a resource meta attribute - OK (0) =#=#=#=
* Passed: crm_resource - Query a resource meta attribute
=#=#=#= Begin test: Remove a resource meta attribute =#=#=#=
-unpack_resources error: Resource start-up disabled since no STONITH resources have been defined
-unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option
-unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity
+error: Resource start-up disabled since no STONITH resources have been defined
+error: Either configure some or disable STONITH with the stonith-enabled option
+error: NOTE: Clusters with shared data need STONITH to ensure data integrity
Deleted 'dummy' option: id=dummy-meta_attributes-is-managed name=is-managed
=#=#=#= Current cib after: Remove a resource meta attribute =#=#=#=
=#=#=#= End test: Remove a resource meta attribute - OK (0) =#=#=#=
* Passed: crm_resource - Remove a resource meta attribute
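The three meta-attribute steps above (create, query, remove) map onto crm_resource's parameter options. A plausible sequence matching the is-managed output shown is:
    crm_resource --resource dummy --meta --set-parameter is-managed --parameter-value false
    crm_resource --resource dummy --meta --get-parameter is-managed
    crm_resource --resource dummy --meta --delete-parameter is-managed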
=#=#=#= Begin test: Create another resource meta attribute (XML) =#=#=#=
-unpack_resources error: Resource start-up disabled since no STONITH resources have been defined
-unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option
-unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity
-
+
+
+ error: Resource start-up disabled since no STONITH resources have been defined
+ error: Either configure some or disable STONITH with the stonith-enabled option
+ error: NOTE: Clusters with shared data need STONITH to ensure data integrity
+
+
=#=#=#= End test: Create another resource meta attribute (XML) - OK (0) =#=#=#=
* Passed: crm_resource - Create another resource meta attribute (XML)
=#=#=#= Begin test: Show why a resource is not running (XML) =#=#=#=
-unpack_resources error: Resource start-up disabled since no STONITH resources have been defined
-unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option
-unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity
-
+
+
+ error: Resource start-up disabled since no STONITH resources have been defined
+ error: Either configure some or disable STONITH with the stonith-enabled option
+ error: NOTE: Clusters with shared data need STONITH to ensure data integrity
+
+
=#=#=#= End test: Show why a resource is not running (XML) - OK (0) =#=#=#=
* Passed: crm_resource - Show why a resource is not running (XML)
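The "Show why a resource is not running" step corresponds to crm_resource's explanation mode. A plausible invocation, assuming the --why option is available in the installed build, is:
    crm_resource --why --resource dummy --output-as=xml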
=#=#=#= Begin test: Remove another resource meta attribute (XML) =#=#=#=
-unpack_resources error: Resource start-up disabled since no STONITH resources have been defined
-unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option
-unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity
-
+
+
+ error: Resource start-up disabled since no STONITH resources have been defined
+ error: Either configure some or disable STONITH with the stonith-enabled option
+ error: NOTE: Clusters with shared data need STONITH to ensure data integrity
+
+
=#=#=#= End test: Remove another resource meta attribute (XML) - OK (0) =#=#=#=
* Passed: crm_resource - Remove another resource meta attribute (XML)
=#=#=#= Begin test: Get a non-existent attribute from a resource element (XML) =#=#=#=
-unpack_resources error: Resource start-up disabled since no STONITH resources have been defined
-unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option
-unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity
+ error: Resource start-up disabled since no STONITH resources have been defined
+ error: Either configure some or disable STONITH with the stonith-enabled option
+ error: NOTE: Clusters with shared data need STONITH to ensure data integrityAttribute 'nonexistent' not found for 'dummy'
=#=#=#= End test: Get a non-existent attribute from a resource element (XML) - OK (0) =#=#=#=
* Passed: crm_resource - Get a non-existent attribute from a resource element (XML)
=#=#=#= Begin test: Get a non-existent attribute from a resource element =#=#=#=
-unpack_resources error: Resource start-up disabled since no STONITH resources have been defined
-unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option
-unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity
+error: Resource start-up disabled since no STONITH resources have been defined
+error: Either configure some or disable STONITH with the stonith-enabled option
+error: NOTE: Clusters with shared data need STONITH to ensure data integrity
Attribute 'nonexistent' not found for 'dummy'
=#=#=#= Current cib after: Get a non-existent attribute from a resource element =#=#=#=
=#=#=#= End test: Get a non-existent attribute from a resource element - OK (0) =#=#=#=
* Passed: crm_resource - Get a non-existent attribute from a resource element
=#=#=#= Begin test: Get a non-existent attribute from a resource element (XML) =#=#=#=
-unpack_resources error: Resource start-up disabled since no STONITH resources have been defined
-unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option
-unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity
+ error: Resource start-up disabled since no STONITH resources have been defined
+ error: Either configure some or disable STONITH with the stonith-enabled option
+ error: NOTE: Clusters with shared data need STONITH to ensure data integrityAttribute 'nonexistent' not found for 'dummy'
=#=#=#= Current cib after: Get a non-existent attribute from a resource element (XML) =#=#=#=
=#=#=#= End test: Get a non-existent attribute from a resource element (XML) - OK (0) =#=#=#=
* Passed: crm_resource - Get a non-existent attribute from a resource element (XML)
=#=#=#= Begin test: Get an existent attribute from a resource element =#=#=#=
-unpack_resources error: Resource start-up disabled since no STONITH resources have been defined
-unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option
-unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity
+error: Resource start-up disabled since no STONITH resources have been defined
+error: Either configure some or disable STONITH with the stonith-enabled option
+error: NOTE: Clusters with shared data need STONITH to ensure data integrity
ocf
=#=#=#= Current cib after: Get an existent attribute from a resource element =#=#=#=
=#=#=#= End test: Get an existent attribute from a resource element - OK (0) =#=#=#=
* Passed: crm_resource - Get an existent attribute from a resource element
=#=#=#= Begin test: Set a non-existent attribute for a resource element (XML) =#=#=#=
-unpack_resources error: Resource start-up disabled since no STONITH resources have been defined
-unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option
-unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity
-
+
+
+ error: Resource start-up disabled since no STONITH resources have been defined
+ error: Either configure some or disable STONITH with the stonith-enabled option
+ error: NOTE: Clusters with shared data need STONITH to ensure data integrity
+
+
=#=#=#= Current cib after: Set a non-existent attribute for a resource element (XML) =#=#=#=
=#=#=#= End test: Set a non-existent attribute for a resource element (XML) - OK (0) =#=#=#=
* Passed: crm_resource - Set a non-existent attribute for a resource element (XML)
=#=#=#= Begin test: Set an existent attribute for a resource element (XML) =#=#=#=
-unpack_resources error: Resource start-up disabled since no STONITH resources have been defined
-unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option
-unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity
-
+
+
+ error: Resource start-up disabled since no STONITH resources have been defined
+ error: Either configure some or disable STONITH with the stonith-enabled option
+ error: NOTE: Clusters with shared data need STONITH to ensure data integrity
+
+
=#=#=#= Current cib after: Set an existent attribute for a resource element (XML) =#=#=#=
=#=#=#= End test: Set an existent attribute for a resource element (XML) - OK (0) =#=#=#=
* Passed: crm_resource - Set an existent attribute for a resource element (XML)
=#=#=#= Begin test: Delete an existent attribute for a resource element (XML) =#=#=#=
-unpack_resources error: Resource start-up disabled since no STONITH resources have been defined
-unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option
-unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity
-
+
+
+ error: Resource start-up disabled since no STONITH resources have been defined
+ error: Either configure some or disable STONITH with the stonith-enabled option
+ error: NOTE: Clusters with shared data need STONITH to ensure data integrity
+
+
=#=#=#= Current cib after: Delete an existent attribute for a resource element (XML) =#=#=#=
=#=#=#= End test: Delete an existent attribute for a resource element (XML) - OK (0) =#=#=#=
* Passed: crm_resource - Delete an existent attribute for a resource element (XML)
=#=#=#= Begin test: Delete a non-existent attribute for a resource element (XML) =#=#=#=
-unpack_resources error: Resource start-up disabled since no STONITH resources have been defined
-unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option
-unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity
-
+
+
+ error: Resource start-up disabled since no STONITH resources have been defined
+ error: Either configure some or disable STONITH with the stonith-enabled option
+ error: NOTE: Clusters with shared data need STONITH to ensure data integrity
+
+
=#=#=#= Current cib after: Delete a non-existent attribute for a resource element (XML) =#=#=#=
=#=#=#= End test: Delete a non-existent attribute for a resource element (XML) - OK (0) =#=#=#=
* Passed: crm_resource - Delete a non-existent attribute for a resource element (XML)
=#=#=#= Begin test: Set a non-existent attribute for a resource element =#=#=#=
-unpack_resources error: Resource start-up disabled since no STONITH resources have been defined
-unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option
-unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity
+error: Resource start-up disabled since no STONITH resources have been defined
+error: Either configure some or disable STONITH with the stonith-enabled option
+error: NOTE: Clusters with shared data need STONITH to ensure data integrity
Set attribute: name=description value=test_description
=#=#=#= Current cib after: Set a non-existent attribute for a resource element =#=#=#=
=#=#=#= End test: Set a non-existent attribute for a resource element - OK (0) =#=#=#=
* Passed: crm_resource - Set a non-existent attribute for a resource element
=#=#=#= Begin test: Set an existent attribute for a resource element =#=#=#=
-unpack_resources error: Resource start-up disabled since no STONITH resources have been defined
-unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option
-unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity
+error: Resource start-up disabled since no STONITH resources have been defined
+error: Either configure some or disable STONITH with the stonith-enabled option
+error: NOTE: Clusters with shared data need STONITH to ensure data integrity
Set attribute: name=description value=test_description
=#=#=#= Current cib after: Set an existent attribute for a resource element =#=#=#=
=#=#=#= End test: Set an existent attribute for a resource element - OK (0) =#=#=#=
* Passed: crm_resource - Set an existent attribute for a resource element
=#=#=#= Begin test: Delete an existent attribute for a resource element =#=#=#=
-unpack_resources error: Resource start-up disabled since no STONITH resources have been defined
-unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option
-unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity
+error: Resource start-up disabled since no STONITH resources have been defined
+error: Either configure some or disable STONITH with the stonith-enabled option
+error: NOTE: Clusters with shared data need STONITH to ensure data integrity
Deleted attribute: description
=#=#=#= Current cib after: Delete an existent attribute for a resource element =#=#=#=
=#=#=#= End test: Delete an existent attribute for a resource element - OK (0) =#=#=#=
* Passed: crm_resource - Delete an existent attribute for a resource element
=#=#=#= Begin test: Delete a non-existent attribute for a resource element =#=#=#=
-unpack_resources error: Resource start-up disabled since no STONITH resources have been defined
-unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option
-unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity
+error: Resource start-up disabled since no STONITH resources have been defined
+error: Either configure some or disable STONITH with the stonith-enabled option
+error: NOTE: Clusters with shared data need STONITH to ensure data integrity
Deleted attribute: description
=#=#=#= Current cib after: Delete a non-existent attribute for a resource element =#=#=#=
=#=#=#= End test: Delete a non-existent attribute for a resource element - OK (0) =#=#=#=
* Passed: crm_resource - Delete a non-existent attribute for a resource element
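The element-attribute steps above operate on attributes of the resource element itself (here, description) rather than on instance or meta attributes. Assuming a crm_resource build that supports the --element modifier, a plausible sequence is:
    crm_resource --resource dummy --element --set-parameter description --parameter-value test_description
    crm_resource --resource dummy --element --get-parameter description
    crm_resource --resource dummy --element --delete-parameter description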
=#=#=#= Begin test: Create a resource attribute =#=#=#=
-unpack_resources error: Resource start-up disabled since no STONITH resources have been defined
-unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option
-unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity
+error: Resource start-up disabled since no STONITH resources have been defined
+error: Either configure some or disable STONITH with the stonith-enabled option
+error: NOTE: Clusters with shared data need STONITH to ensure data integrity
Set 'dummy' option: id=dummy-instance_attributes-delay set=dummy-instance_attributes name=delay value=10s
=#=#=#= Current cib after: Create a resource attribute =#=#=#=
=#=#=#= End test: Create a resource attribute - OK (0) =#=#=#=
* Passed: crm_resource - Create a resource attribute
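Without --meta, crm_resource operates on instance attributes, which is what produces the dummy-instance_attributes-delay entry above. A plausible equivalent is:
    crm_resource --resource dummy --set-parameter delay --parameter-value 10s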
=#=#=#= Begin test: List the configured resources =#=#=#=
-unpack_resources error: Resource start-up disabled since no STONITH resources have been defined
-unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option
-unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity
+error: Resource start-up disabled since no STONITH resources have been defined
+error: Either configure some or disable STONITH with the stonith-enabled option
+error: NOTE: Clusters with shared data need STONITH to ensure data integrity
Full List of Resources:
* dummy (ocf:pacemaker:Dummy): Stopped
=#=#=#= Current cib after: List the configured resources =#=#=#=
=#=#=#= End test: List the configured resources - OK (0) =#=#=#=
* Passed: crm_resource - List the configured resources
=#=#=#= Begin test: List the configured resources (XML) =#=#=#=
-unpack_resources error: Resource start-up disabled since no STONITH resources have been defined
-unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option
-unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity
-
+
+
+ error: Resource start-up disabled since no STONITH resources have been defined
+ error: Either configure some or disable STONITH with the stonith-enabled option
+ error: NOTE: Clusters with shared data need STONITH to ensure data integrity
+
+
=#=#=#= Current cib after: List the configured resources (XML) =#=#=#=
=#=#=#= End test: List the configured resources (XML) - OK (0) =#=#=#=
* Passed: crm_resource - List the configured resources (XML)
=#=#=#= Begin test: Implicitly list the configured resources =#=#=#=
-unpack_resources error: Resource start-up disabled since no STONITH resources have been defined
-unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option
-unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity
+error: Resource start-up disabled since no STONITH resources have been defined
+error: Either configure some or disable STONITH with the stonith-enabled option
+error: NOTE: Clusters with shared data need STONITH to ensure data integrity
Full List of Resources:
* dummy (ocf:pacemaker:Dummy): Stopped
=#=#=#= End test: Implicitly list the configured resources - OK (0) =#=#=#=
* Passed: crm_resource - Implicitly list the configured resources
=#=#=#= Begin test: List IDs of instantiated resources =#=#=#=
-unpack_resources error: Resource start-up disabled since no STONITH resources have been defined
-unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option
-unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity
+error: Resource start-up disabled since no STONITH resources have been defined
+error: Either configure some or disable STONITH with the stonith-enabled option
+error: NOTE: Clusters with shared data need STONITH to ensure data integrity
dummy
=#=#=#= End test: List IDs of instantiated resources - OK (0) =#=#=#=
* Passed: crm_resource - List IDs of instantiated resources
=#=#=#= Begin test: Show XML configuration of resource =#=#=#=
-unpack_resources error: Resource start-up disabled since no STONITH resources have been defined
-unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option
-unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity
+error: Resource start-up disabled since no STONITH resources have been defined
+error: Either configure some or disable STONITH with the stonith-enabled option
+error: NOTE: Clusters with shared data need STONITH to ensure data integrity
dummy (ocf:pacemaker:Dummy): Stopped
Resource XML:
=#=#=#= End test: Show XML configuration of resource - OK (0) =#=#=#=
* Passed: crm_resource - Show XML configuration of resource
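The resource's XML definition can be dumped with crm_resource's query mode; a plausible equivalent of the step above is:
    crm_resource --resource dummy --query-xml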
=#=#=#= Begin test: Show XML configuration of resource (XML) =#=#=#=
-unpack_resources error: Resource start-up disabled since no STONITH resources have been defined
-unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option
-unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity
]]>
-
+
+
+ error: Resource start-up disabled since no STONITH resources have been defined
+ error: Either configure some or disable STONITH with the stonith-enabled option
+ error: NOTE: Clusters with shared data need STONITH to ensure data integrity
+
+
=#=#=#= End test: Show XML configuration of resource (XML) - OK (0) =#=#=#=
* Passed: crm_resource - Show XML configuration of resource (XML)
=#=#=#= Begin test: Require a destination when migrating a resource that is stopped =#=#=#=
-unpack_resources error: Resource start-up disabled since no STONITH resources have been defined
-unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option
-unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity
+error: Resource start-up disabled since no STONITH resources have been defined
+error: Either configure some or disable STONITH with the stonith-enabled option
+error: NOTE: Clusters with shared data need STONITH to ensure data integrity
crm_resource: Resource 'dummy' not moved: active in 0 locations.
To prevent 'dummy' from running on a specific location, specify a node.
=#=#=#= Current cib after: Require a destination when migrating a resource that is stopped =#=#=#=
=#=#=#= End test: Require a destination when migrating a resource that is stopped - Incorrect usage (64) =#=#=#=
* Passed: crm_resource - Require a destination when migrating a resource that is stopped
=#=#=#= Begin test: Don't support migration to non-existent locations =#=#=#=
-unpack_resources error: Resource start-up disabled since no STONITH resources have been defined
-unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option
-unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity
+error: Resource start-up disabled since no STONITH resources have been defined
+error: Either configure some or disable STONITH with the stonith-enabled option
+error: NOTE: Clusters with shared data need STONITH to ensure data integrity
crm_resource: Node 'i.do.not.exist' not found
Error performing operation: No such object
=#=#=#= Current cib after: Don't support migration to non-existent locations =#=#=#=
=#=#=#= End test: Don't support migration to non-existent locations - No such object (105) =#=#=#=
* Passed: crm_resource - Don't support migration to non-existent locations
=#=#=#= Begin test: Create a fencing resource =#=#=#=
=#=#=#= Current cib after: Create a fencing resource =#=#=#=
=#=#=#= End test: Create a fencing resource - OK (0) =#=#=#=
* Passed: cibadmin - Create a fencing resource
=#=#=#= Begin test: Bring resources online =#=#=#=
Current cluster status:
* Node List:
* Online: [ node1 ]
* Full List of Resources:
* dummy (ocf:pacemaker:Dummy): Stopped
* Fence (stonith:fence_true): Stopped
Transition Summary:
* Start dummy ( node1 )
* Start Fence ( node1 )
Executing Cluster Transition:
* Resource action: dummy monitor on node1
* Resource action: Fence monitor on node1
* Resource action: dummy start on node1
* Resource action: Fence start on node1
Revised Cluster Status:
* Node List:
* Online: [ node1 ]
* Full List of Resources:
* dummy (ocf:pacemaker:Dummy): Started node1
* Fence (stonith:fence_true): Started node1
=#=#=#= Current cib after: Bring resources online =#=#=#=
=#=#=#= End test: Bring resources online - OK (0) =#=#=#=
* Passed: crm_simulate - Bring resources online
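The "Bring resources online" step is a crm_simulate run against the test CIB rather than a live cluster. A minimal sketch, assuming the CIB has been saved to a hypothetical cib.xml:
    # Execute the pending transition against the saved CIB
    crm_simulate --xml-file cib.xml --simulate
    # Simulate additional nodes joining, as in the later "two more nodes" step
    crm_simulate --xml-file cib.xml --simulate --node-up node2 --node-up node3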
=#=#=#= Begin test: Try to move a resource to its existing location =#=#=#=
crm_resource: Error performing operation: Requested item already exists
=#=#=#= Current cib after: Try to move a resource to its existing location =#=#=#=
=#=#=#= End test: Try to move a resource to its existing location - Requested item already exists (108) =#=#=#=
* Passed: crm_resource - Try to move a resource to its existing location
=#=#=#= Begin test: Try to move a resource that doesn't exist =#=#=#=
crm_resource: Resource 'xyz' not found
Error performing operation: No such object
=#=#=#= End test: Try to move a resource that doesn't exist - No such object (105) =#=#=#=
* Passed: crm_resource - Try to move a resource that doesn't exist
=#=#=#= Begin test: Move a resource from its existing location =#=#=#=
WARNING: Creating rsc_location constraint 'cli-ban-dummy-on-node1' with a score of -INFINITY for resource dummy on node1.
This will prevent dummy from running on node1 until the constraint is removed using the clear option or by editing the CIB with an appropriate tool.
This will be the case even if node1 is the last node in the cluster
=#=#=#= Current cib after: Move a resource from its existing location =#=#=#=
=#=#=#= End test: Move a resource from its existing location - OK (0) =#=#=#=
* Passed: crm_resource - Move a resource from its existing location
=#=#=#= Begin test: Clear out constraints generated by --move =#=#=#=
+warning: More than one node entry has name 'node1'
Removing constraint: cli-ban-dummy-on-node1
=#=#=#= Current cib after: Clear out constraints generated by --move =#=#=#=
=#=#=#= End test: Clear out constraints generated by --move - OK (0) =#=#=#=
* Passed: crm_resource - Clear out constraints generated by --move
=#=#=#= Begin test: Ban a resource on unknown node =#=#=#=
crm_resource: Node 'host1' not found
Error performing operation: No such object
=#=#=#= End test: Ban a resource on unknown node - No such object (105) =#=#=#=
* Passed: crm_resource - Ban a resource on unknown node
=#=#=#= Begin test: Create two more nodes and bring them online =#=#=#=
Current cluster status:
* Node List:
* Online: [ node1 ]
* Full List of Resources:
* dummy (ocf:pacemaker:Dummy): Started node1
* Fence (stonith:fence_true): Started node1
Performing Requested Modifications:
* Bringing node node2 online
* Bringing node node3 online
Transition Summary:
* Move Fence ( node1 -> node2 )
Executing Cluster Transition:
* Resource action: dummy monitor on node3
* Resource action: dummy monitor on node2
* Resource action: Fence stop on node1
* Resource action: Fence monitor on node3
* Resource action: Fence monitor on node2
* Resource action: Fence start on node2
Revised Cluster Status:
* Node List:
* Online: [ node1 node2 node3 ]
* Full List of Resources:
* dummy (ocf:pacemaker:Dummy): Started node1
* Fence (stonith:fence_true): Started node2
=#=#=#= Current cib after: Create two more nodes and bring them online =#=#=#=
=#=#=#= End test: Create two more nodes and bring them online - OK (0) =#=#=#=
* Passed: crm_simulate - Create two more nodes and bring them online
=#=#=#= Begin test: Ban dummy from node1 =#=#=#=
WARNING: Creating rsc_location constraint 'cli-ban-dummy-on-node1' with a score of -INFINITY for resource dummy on node1.
This will prevent dummy from running on node1 until the constraint is removed using the clear option or by editing the CIB with an appropriate tool.
This will be the case even if node1 is the last node in the cluster
=#=#=#= Current cib after: Ban dummy from node1 =#=#=#=
=#=#=#= End test: Ban dummy from node1 - OK (0) =#=#=#=
* Passed: crm_resource - Ban dummy from node1
=#=#=#= Begin test: Show where a resource is running =#=#=#=
resource dummy is running on: node1
=#=#=#= End test: Show where a resource is running - OK (0) =#=#=#=
* Passed: crm_resource - Show where a resource is running
=#=#=#= Begin test: Show constraints on a resource =#=#=#=
Locations:
* Node node1 (score=-INFINITY, id=cli-ban-dummy-on-node1, rsc=dummy)
=#=#=#= End test: Show constraints on a resource - OK (0) =#=#=#=
* Passed: crm_resource - Show constraints on a resource
=#=#=#= Begin test: Ban dummy from node2 (XML) =#=#=#=
=#=#=#= Current cib after: Ban dummy from node2 (XML) =#=#=#=
=#=#=#= End test: Ban dummy from node2 (XML) - OK (0) =#=#=#=
* Passed: crm_resource - Ban dummy from node2 (XML)
=#=#=#= Begin test: Relocate resources due to ban =#=#=#=
Current cluster status:
* Node List:
* Online: [ node1 node2 node3 ]
* Full List of Resources:
* dummy (ocf:pacemaker:Dummy): Started node1
* Fence (stonith:fence_true): Started node2
Transition Summary:
* Move dummy ( node1 -> node3 )
Executing Cluster Transition:
* Resource action: dummy stop on node1
* Resource action: dummy start on node3
Revised Cluster Status:
* Node List:
* Online: [ node1 node2 node3 ]
* Full List of Resources:
* dummy (ocf:pacemaker:Dummy): Started node3
* Fence (stonith:fence_true): Started node2
=#=#=#= Current cib after: Relocate resources due to ban =#=#=#=
=#=#=#= End test: Relocate resources due to ban - OK (0) =#=#=#=
* Passed: crm_simulate - Relocate resources due to ban
=#=#=#= Begin test: Move dummy to node1 (XML) =#=#=#=
=#=#=#= Current cib after: Move dummy to node1 (XML) =#=#=#=
=#=#=#= End test: Move dummy to node1 (XML) - OK (0) =#=#=#=
* Passed: crm_resource - Move dummy to node1 (XML)
=#=#=#= Begin test: Clear implicit constraints for dummy on node2 =#=#=#=
+warning: More than one node entry has name 'node1'
+warning: More than one node entry has name 'node2'
+warning: More than one node entry has name 'node3'
Removing constraint: cli-ban-dummy-on-node2
=#=#=#= Current cib after: Clear implicit constraints for dummy on node2 =#=#=#=
=#=#=#= End test: Clear implicit constraints for dummy on node2 - OK (0) =#=#=#=
* Passed: crm_resource - Clear implicit constraints for dummy on node2
=#=#=#= Begin test: Drop the status section =#=#=#=
=#=#=#= End test: Drop the status section - OK (0) =#=#=#=
* Passed: cibadmin - Drop the status section
=#=#=#= Begin test: Create a clone =#=#=#=
=#=#=#= End test: Create a clone - OK (0) =#=#=#=
* Passed: cibadmin - Create a clone
=#=#=#= Begin test: Create a resource meta attribute =#=#=#=
Performing update of 'is-managed' on 'test-clone', the parent of 'test-primitive'
Set 'test-clone' option: id=test-clone-meta_attributes-is-managed set=test-clone-meta_attributes name=is-managed value=false
=#=#=#= Current cib after: Create a resource meta attribute =#=#=#=
=#=#=#= End test: Create a resource meta attribute - OK (0) =#=#=#=
* Passed: crm_resource - Create a resource meta attribute
=#=#=#= Begin test: Create a resource meta attribute in the primitive =#=#=#=
Set 'test-primitive' option: id=test-primitive-meta_attributes-is-managed set=test-primitive-meta_attributes name=is-managed value=false
=#=#=#= Current cib after: Create a resource meta attribute in the primitive =#=#=#=
=#=#=#= End test: Create a resource meta attribute in the primitive - OK (0) =#=#=#=
* Passed: crm_resource - Create a resource meta attribute in the primitive
=#=#=#= Begin test: Update resource meta attribute with duplicates =#=#=#=
Multiple attributes match name=is-managed
Value: false (id=test-primitive-meta_attributes-is-managed)
Value: false (id=test-clone-meta_attributes-is-managed)
A value for 'is-managed' already exists in child 'test-primitive', performing update on that instead of 'test-clone'
Set 'test-primitive' option: id=test-primitive-meta_attributes-is-managed name=is-managed value=true
=#=#=#= Current cib after: Update resource meta attribute with duplicates =#=#=#=
=#=#=#= End test: Update resource meta attribute with duplicates - OK (0) =#=#=#=
* Passed: crm_resource - Update resource meta attribute with duplicates
=#=#=#= Begin test: Update resource meta attribute with duplicates (force clone) =#=#=#=
Set 'test-clone' option: id=test-clone-meta_attributes-is-managed name=is-managed value=true
=#=#=#= Current cib after: Update resource meta attribute with duplicates (force clone) =#=#=#=
=#=#=#= End test: Update resource meta attribute with duplicates (force clone) - OK (0) =#=#=#=
* Passed: crm_resource - Update resource meta attribute with duplicates (force clone)
=#=#=#= Begin test: Update child resource meta attribute with duplicates =#=#=#=
Multiple attributes match name=is-managed
Value: true (id=test-primitive-meta_attributes-is-managed)
Value: true (id=test-clone-meta_attributes-is-managed)
Set 'test-primitive' option: id=test-primitive-meta_attributes-is-managed name=is-managed value=false
=#=#=#= Current cib after: Update child resource meta attribute with duplicates =#=#=#=
=#=#=#= End test: Update child resource meta attribute with duplicates - OK (0) =#=#=#=
* Passed: crm_resource - Update child resource meta attribute with duplicates
=#=#=#= Begin test: Delete resource meta attribute with duplicates =#=#=#=
Multiple attributes match name=is-managed
Value: false (id=test-primitive-meta_attributes-is-managed)
Value: true (id=test-clone-meta_attributes-is-managed)
A value for 'is-managed' already exists in child 'test-primitive', performing delete on that instead of 'test-clone'
Deleted 'test-primitive' option: id=test-primitive-meta_attributes-is-managed name=is-managed
=#=#=#= Current cib after: Delete resource meta attribute with duplicates =#=#=#=
=#=#=#= End test: Delete resource meta attribute with duplicates - OK (0) =#=#=#=
* Passed: crm_resource - Delete resource meta attribute with duplicates
=#=#=#= Begin test: Delete resource meta attribute in parent =#=#=#=
Performing delete of 'is-managed' on 'test-clone', the parent of 'test-primitive'
Deleted 'test-clone' option: id=test-clone-meta_attributes-is-managed name=is-managed
=#=#=#= Current cib after: Delete resource meta attribute in parent =#=#=#=
=#=#=#= End test: Delete resource meta attribute in parent - OK (0) =#=#=#=
* Passed: crm_resource - Delete resource meta attribute in parent
=#=#=#= Begin test: Create a resource meta attribute in the primitive =#=#=#=
Set 'test-primitive' option: id=test-primitive-meta_attributes-is-managed set=test-primitive-meta_attributes name=is-managed value=false
=#=#=#= Current cib after: Create a resource meta attribute in the primitive =#=#=#=
=#=#=#= End test: Create a resource meta attribute in the primitive - OK (0) =#=#=#=
* Passed: crm_resource - Create a resource meta attribute in the primitive
=#=#=#= Begin test: Update existing resource meta attribute =#=#=#=
A value for 'is-managed' already exists in child 'test-primitive', performing update on that instead of 'test-clone'
Set 'test-primitive' option: id=test-primitive-meta_attributes-is-managed name=is-managed value=true
=#=#=#= Current cib after: Update existing resource meta attribute =#=#=#=
=#=#=#= End test: Update existing resource meta attribute - OK (0) =#=#=#=
* Passed: crm_resource - Update existing resource meta attribute
=#=#=#= Begin test: Create a resource meta attribute in the parent =#=#=#=
Set 'test-clone' option: id=test-clone-meta_attributes-is-managed set=test-clone-meta_attributes name=is-managed value=true
=#=#=#= Current cib after: Create a resource meta attribute in the parent =#=#=#=
=#=#=#= End test: Create a resource meta attribute in the parent - OK (0) =#=#=#=
* Passed: crm_resource - Create a resource meta attribute in the parent
=#=#=#= Begin test: Delete resource parent meta attribute (force) =#=#=#=
Deleted 'test-clone' option: id=test-clone-meta_attributes-is-managed name=is-managed
=#=#=#= Current cib after: Delete resource parent meta attribute (force) =#=#=#=
=#=#=#= End test: Delete resource parent meta attribute (force) - OK (0) =#=#=#=
* Passed: crm_resource - Delete resource parent meta attribute (force)
=#=#=#= Begin test: Delete resource child meta attribute =#=#=#=
Multiple attributes match name=is-managed
Value: true (id=test-primitive-meta_attributes-is-managed)
Value: true (id=test-clone-meta_attributes-is-managed)
Deleted 'test-primitive' option: id=test-primitive-meta_attributes-is-managed name=is-managed
=#=#=#= Current cib after: Delete resource child meta attribute =#=#=#=
=#=#=#= End test: Delete resource child meta attribute - OK (0) =#=#=#=
* Passed: crm_resource - Delete resource child meta attribute
=#=#=#= Begin test: Create the dummy-group resource group =#=#=#=
=#=#=#= Current cib after: Create the dummy-group resource group =#=#=#=
=#=#=#= End test: Create the dummy-group resource group - OK (0) =#=#=#=
* Passed: cibadmin - Create the dummy-group resource group
=#=#=#= Begin test: Create a resource meta attribute in dummy1 =#=#=#=
Set 'dummy1' option: id=dummy1-meta_attributes-is-managed set=dummy1-meta_attributes name=is-managed value=true
=#=#=#= Current cib after: Create a resource meta attribute in dummy1 =#=#=#=
=#=#=#= End test: Create a resource meta attribute in dummy1 - OK (0) =#=#=#=
* Passed: crm_resource - Create a resource meta attribute in dummy1
=#=#=#= Begin test: Create a resource meta attribute in dummy-group =#=#=#=
Set 'dummy1' option: id=dummy1-meta_attributes-is-managed name=is-managed value=false
Set 'dummy-group' option: id=dummy-group-meta_attributes-is-managed set=dummy-group-meta_attributes name=is-managed value=false
=#=#=#= Current cib after: Create a resource meta attribute in dummy-group =#=#=#=
=#=#=#= End test: Create a resource meta attribute in dummy-group - OK (0) =#=#=#=
* Passed: crm_resource - Create a resource meta attribute in dummy-group
=#=#=#= Begin test: Delete the dummy-group resource group =#=#=#=
=#=#=#= Current cib after: Delete the dummy-group resource group =#=#=#=
=#=#=#= End test: Delete the dummy-group resource group - OK (0) =#=#=#=
* Passed: cibadmin - Delete the dummy-group resource group
=#=#=#= Begin test: Specify a lifetime when moving a resource =#=#=#=
Migration will take effect until:
=#=#=#= Current cib after: Specify a lifetime when moving a resource =#=#=#=
=#=#=#= End test: Specify a lifetime when moving a resource - OK (0) =#=#=#=
* Passed: crm_resource - Specify a lifetime when moving a resource
=#=#=#= Begin test: Try to move a resource previously moved with a lifetime =#=#=#=
=#=#=#= Current cib after: Try to move a resource previously moved with a lifetime =#=#=#=
=#=#=#= End test: Try to move a resource previously moved with a lifetime - OK (0) =#=#=#=
* Passed: crm_resource - Try to move a resource previously moved with a lifetime
=#=#=#= Begin test: Ban dummy from node1 for a short time =#=#=#=
Migration will take effect until:
WARNING: Creating rsc_location constraint 'cli-ban-dummy-on-node1' with a score of -INFINITY for resource dummy on node1.
This will prevent dummy from running on node1 until the constraint is removed using the clear option or by editing the CIB with an appropriate tool.
This will be the case even if node1 is the last node in the cluster
=#=#=#= Current cib after: Ban dummy from node1 for a short time =#=#=#=
=#=#=#= End test: Ban dummy from node1 for a short time - OK (0) =#=#=#=
* Passed: crm_resource - Ban dummy from node1 for a short time
=#=#=#= Begin test: Remove expired constraints =#=#=#=
+warning: More than one node entry has name 'node1'
+warning: More than one node entry has name 'node2'
+warning: More than one node entry has name 'node3'
Removing constraint: cli-ban-dummy-on-node1
=#=#=#= Current cib after: Remove expired constraints =#=#=#=
=#=#=#= End test: Remove expired constraints - OK (0) =#=#=#=
* Passed: sleep - Remove expired constraints
=#=#=#= Begin test: Clear all implicit constraints for dummy =#=#=#=
+warning: More than one node entry has name 'node1'
+warning: More than one node entry has name 'node2'
+warning: More than one node entry has name 'node3'
Removing constraint: cli-prefer-dummy
=#=#=#= Current cib after: Clear all implicit constraints for dummy =#=#=#=
=#=#=#= End test: Clear all implicit constraints for dummy - OK (0) =#=#=#=
* Passed: crm_resource - Clear all implicit constraints for dummy
=#=#=#= Begin test: Set a node health strategy =#=#=#=
=#=#=#= Current cib after: Set a node health strategy =#=#=#=
=#=#=#= End test: Set a node health strategy - OK (0) =#=#=#=
* Passed: crm_attribute - Set a node health strategy
=#=#=#= Begin test: Set a node health attribute =#=#=#=
=#=#=#= Current cib after: Set a node health attribute =#=#=#=
=#=#=#= End test: Set a node health attribute - OK (0) =#=#=#=
* Passed: crm_attribute - Set a node health attribute
=#=#=#= Begin test: Show why a resource is not running on an unhealthy node (XML) =#=#=#=
=#=#=#= End test: Show why a resource is not running on an unhealthy node (XML) - OK (0) =#=#=#=
* Passed: crm_resource - Show why a resource is not running on an unhealthy node (XML)
=#=#=#= Begin test: Delete a resource =#=#=#=
=#=#=#= Current cib after: Delete a resource =#=#=#=
=#=#=#= End test: Delete a resource - OK (0) =#=#=#=
* Passed: crm_resource - Delete a resource
=#=#=#= Begin test: Check locations and constraints for prim1 =#=#=#=
=#=#=#= End test: Check locations and constraints for prim1 - OK (0) =#=#=#=
* Passed: crm_resource - Check locations and constraints for prim1
=#=#=#= Begin test: Check locations and constraints for prim1 (XML) =#=#=#=
=#=#=#= End test: Check locations and constraints for prim1 (XML) - OK (0) =#=#=#=
* Passed: crm_resource - Check locations and constraints for prim1 (XML)
=#=#=#= Begin test: Recursively check locations and constraints for prim1 =#=#=#=
=#=#=#= End test: Recursively check locations and constraints for prim1 - OK (0) =#=#=#=
* Passed: crm_resource - Recursively check locations and constraints for prim1
=#=#=#= Begin test: Recursively check locations and constraints for prim1 (XML) =#=#=#=
=#=#=#= End test: Recursively check locations and constraints for prim1 (XML) - OK (0) =#=#=#=
* Passed: crm_resource - Recursively check locations and constraints for prim1 (XML)
=#=#=#= Begin test: Check locations and constraints for prim2 =#=#=#=
Locations:
* Node cluster01 (score=INFINITY, id=prim2-on-cluster1, rsc=prim2)
Resources prim2 is colocated with:
* prim3 (score=INFINITY, id=colocation-prim2-prim3-INFINITY)
=#=#=#= End test: Check locations and constraints for prim2 - OK (0) =#=#=#=
* Passed: crm_resource - Check locations and constraints for prim2
=#=#=#= Begin test: Check locations and constraints for prim2 (XML) =#=#=#=
=#=#=#= End test: Check locations and constraints for prim2 (XML) - OK (0) =#=#=#=
* Passed: crm_resource - Check locations and constraints for prim2 (XML)
=#=#=#= Begin test: Recursively check locations and constraints for prim2 =#=#=#=
Locations:
* Node cluster01 (score=INFINITY, id=prim2-on-cluster1, rsc=prim2)
Resources prim2 is colocated with:
* prim3 (score=INFINITY, id=colocation-prim2-prim3-INFINITY)
* Resources prim3 is colocated with:
* prim4 (score=INFINITY, id=colocation-prim3-prim4-INFINITY)
* Locations:
* Node cluster02 (score=INFINITY, id=prim4-on-cluster2, rsc=prim4)
* Resources prim4 is colocated with:
* prim5 (score=INFINITY, id=colocation-prim4-prim5-INFINITY)
=#=#=#= End test: Recursively check locations and constraints for prim2 - OK (0) =#=#=#=
* Passed: crm_resource - Recursively check locations and constraints for prim2
=#=#=#= Begin test: Recursively check locations and constraints for prim2 (XML) =#=#=#=
=#=#=#= End test: Recursively check locations and constraints for prim2 (XML) - OK (0) =#=#=#=
* Passed: crm_resource - Recursively check locations and constraints for prim2 (XML)
=#=#=#= Begin test: Check locations and constraints for prim3 =#=#=#=
Resources colocated with prim3:
* prim2 (score=INFINITY, id=colocation-prim2-prim3-INFINITY)
* Locations:
* Node cluster01 (score=INFINITY, id=prim2-on-cluster1, rsc=prim2)
Resources prim3 is colocated with:
* prim4 (score=INFINITY, id=colocation-prim3-prim4-INFINITY)
* Locations:
* Node cluster02 (score=INFINITY, id=prim4-on-cluster2, rsc=prim4)
=#=#=#= End test: Check locations and constraints for prim3 - OK (0) =#=#=#=
* Passed: crm_resource - Check locations and constraints for prim3
=#=#=#= Begin test: Check locations and constraints for prim3 (XML) =#=#=#=
=#=#=#= End test: Check locations and constraints for prim3 (XML) - OK (0) =#=#=#=
* Passed: crm_resource - Check locations and constraints for prim3 (XML)
=#=#=#= Begin test: Recursively check locations and constraints for prim3 =#=#=#=
Resources colocated with prim3:
* prim2 (score=INFINITY, id=colocation-prim2-prim3-INFINITY)
* Locations:
* Node cluster01 (score=INFINITY, id=prim2-on-cluster1, rsc=prim2)
Resources prim3 is colocated with:
* prim4 (score=INFINITY, id=colocation-prim3-prim4-INFINITY)
* Locations:
* Node cluster02 (score=INFINITY, id=prim4-on-cluster2, rsc=prim4)
* Resources prim4 is colocated with:
* prim5 (score=INFINITY, id=colocation-prim4-prim5-INFINITY)
=#=#=#= End test: Recursively check locations and constraints for prim3 - OK (0) =#=#=#=
* Passed: crm_resource - Recursively check locations and constraints for prim3
=#=#=#= Begin test: Recursively check locations and constraints for prim3 (XML) =#=#=#=
=#=#=#= End test: Recursively check locations and constraints for prim3 (XML) - OK (0) =#=#=#=
* Passed: crm_resource - Recursively check locations and constraints for prim3 (XML)
=#=#=#= Begin test: Check locations and constraints for prim4 =#=#=#=
Locations:
* Node cluster02 (score=INFINITY, id=prim4-on-cluster2, rsc=prim4)
Resources colocated with prim4:
* prim10 (score=INFINITY, id=colocation-prim10-prim4-INFINITY)
* prim3 (score=INFINITY, id=colocation-prim3-prim4-INFINITY)
Resources prim4 is colocated with:
* prim5 (score=INFINITY, id=colocation-prim4-prim5-INFINITY)
=#=#=#= End test: Check locations and constraints for prim4 - OK (0) =#=#=#=
* Passed: crm_resource - Check locations and constraints for prim4
=#=#=#= Begin test: Check locations and constraints for prim4 (XML) =#=#=#=
=#=#=#= End test: Check locations and constraints for prim4 (XML) - OK (0) =#=#=#=
* Passed: crm_resource - Check locations and constraints for prim4 (XML)
=#=#=#= Begin test: Recursively check locations and constraints for prim4 =#=#=#=
Locations:
* Node cluster02 (score=INFINITY, id=prim4-on-cluster2, rsc=prim4)
Resources colocated with prim4:
* prim10 (score=INFINITY, id=colocation-prim10-prim4-INFINITY)
* prim3 (score=INFINITY, id=colocation-prim3-prim4-INFINITY)
* Resources colocated with prim3:
* prim2 (score=INFINITY, id=colocation-prim2-prim3-INFINITY)
* Locations:
* Node cluster01 (score=INFINITY, id=prim2-on-cluster1, rsc=prim2)
Resources prim4 is colocated with:
* prim5 (score=INFINITY, id=colocation-prim4-prim5-INFINITY)
=#=#=#= End test: Recursively check locations and constraints for prim4 - OK (0) =#=#=#=
* Passed: crm_resource - Recursively check locations and constraints for prim4
=#=#=#= Begin test: Recursively check locations and constraints for prim4 (XML) =#=#=#=
=#=#=#= End test: Recursively check locations and constraints for prim4 (XML) - OK (0) =#=#=#=
* Passed: crm_resource - Recursively check locations and constraints for prim4 (XML)
=#=#=#= Begin test: Check locations and constraints for prim5 =#=#=#=
Resources colocated with prim5:
* prim4 (score=INFINITY, id=colocation-prim4-prim5-INFINITY)
* Locations:
* Node cluster02 (score=INFINITY, id=prim4-on-cluster2, rsc=prim4)
=#=#=#= End test: Check locations and constraints for prim5 - OK (0) =#=#=#=
* Passed: crm_resource - Check locations and constraints for prim5
=#=#=#= Begin test: Check locations and constraints for prim5 (XML) =#=#=#=
=#=#=#= End test: Check locations and constraints for prim5 (XML) - OK (0) =#=#=#=
* Passed: crm_resource - Check locations and constraints for prim5 (XML)
=#=#=#= Begin test: Recursively check locations and constraints for prim5 =#=#=#=
Resources colocated with prim5:
* prim4 (score=INFINITY, id=colocation-prim4-prim5-INFINITY)
* Locations:
* Node cluster02 (score=INFINITY, id=prim4-on-cluster2, rsc=prim4)
* Resources colocated with prim4:
* prim10 (score=INFINITY, id=colocation-prim10-prim4-INFINITY)
* prim3 (score=INFINITY, id=colocation-prim3-prim4-INFINITY)
* Resources colocated with prim3:
* prim2 (score=INFINITY, id=colocation-prim2-prim3-INFINITY)
* Locations:
* Node cluster01 (score=INFINITY, id=prim2-on-cluster1, rsc=prim2)
=#=#=#= End test: Recursively check locations and constraints for prim5 - OK (0) =#=#=#=
* Passed: crm_resource - Recursively check locations and constraints for prim5
=#=#=#= Begin test: Recursively check locations and constraints for prim5 (XML) =#=#=#=
=#=#=#= End test: Recursively check locations and constraints for prim5 (XML) - OK (0) =#=#=#=
* Passed: crm_resource - Recursively check locations and constraints for prim5 (XML)
=#=#=#= Begin test: Check locations and constraints for prim6 =#=#=#=
Locations:
* Node cluster02 (score=-INFINITY, id=prim6-not-on-cluster2, rsc=prim6)
=#=#=#= End test: Check locations and constraints for prim6 - OK (0) =#=#=#=
* Passed: crm_resource - Check locations and constraints for prim6
=#=#=#= Begin test: Check locations and constraints for prim6 (XML) =#=#=#=
=#=#=#= End test: Check locations and constraints for prim6 (XML) - OK (0) =#=#=#=
* Passed: crm_resource - Check locations and constraints for prim6 (XML)
=#=#=#= Begin test: Recursively check locations and constraints for prim6 =#=#=#=
Locations:
* Node cluster02 (score=-INFINITY, id=prim6-not-on-cluster2, rsc=prim6)
=#=#=#= End test: Recursively check locations and constraints for prim6 - OK (0) =#=#=#=
* Passed: crm_resource - Recursively check locations and constraints for prim6
=#=#=#= Begin test: Recursively check locations and constraints for prim6 (XML) =#=#=#=
=#=#=#= End test: Recursively check locations and constraints for prim6 (XML) - OK (0) =#=#=#=
* Passed: crm_resource - Recursively check locations and constraints for prim6 (XML)
=#=#=#= Begin test: Check locations and constraints for prim7 =#=#=#=
Resources prim7 is colocated with:
* group (score=INFINITY, id=colocation-prim7-group-INFINITY)
=#=#=#= End test: Check locations and constraints for prim7 - OK (0) =#=#=#=
* Passed: crm_resource - Check locations and constraints for prim7
=#=#=#= Begin test: Check locations and constraints for prim7 (XML) =#=#=#=
=#=#=#= End test: Check locations and constraints for prim7 (XML) - OK (0) =#=#=#=
* Passed: crm_resource - Check locations and constraints for prim7 (XML)
=#=#=#= Begin test: Recursively check locations and constraints for prim7 =#=#=#=
Resources prim7 is colocated with:
* group (score=INFINITY, id=colocation-prim7-group-INFINITY)
=#=#=#= End test: Recursively check locations and constraints for prim7 - OK (0) =#=#=#=
* Passed: crm_resource - Recursively check locations and constraints for prim7
=#=#=#= Begin test: Recursively check locations and constraints for prim7 (XML) =#=#=#=
=#=#=#= End test: Recursively check locations and constraints for prim7 (XML) - OK (0) =#=#=#=
* Passed: crm_resource - Recursively check locations and constraints for prim7 (XML)
=#=#=#= Begin test: Check locations and constraints for prim8 =#=#=#=
Resources prim8 is colocated with:
* gr2 (score=INFINITY, id=colocation-prim8-gr2-INFINITY)
=#=#=#= End test: Check locations and constraints for prim8 - OK (0) =#=#=#=
* Passed: crm_resource - Check locations and constraints for prim8
=#=#=#= Begin test: Check locations and constraints for prim8 (XML) =#=#=#=
=#=#=#= End test: Check locations and constraints for prim8 (XML) - OK (0) =#=#=#=
* Passed: crm_resource - Check locations and constraints for prim8 (XML)
=#=#=#= Begin test: Recursively check locations and constraints for prim8 =#=#=#=
Resources prim8 is colocated with:
* gr2 (score=INFINITY, id=colocation-prim8-gr2-INFINITY)
=#=#=#= End test: Recursively check locations and constraints for prim8 - OK (0) =#=#=#=
* Passed: crm_resource - Recursively check locations and constraints for prim8
=#=#=#= Begin test: Recursively check locations and constraints for prim8 (XML) =#=#=#=
=#=#=#= End test: Recursively check locations and constraints for prim8 (XML) - OK (0) =#=#=#=
* Passed: crm_resource - Recursively check locations and constraints for prim8 (XML)
=#=#=#= Begin test: Check locations and constraints for prim9 =#=#=#=
Resources prim9 is colocated with:
* clone (score=INFINITY, id=colocation-prim9-clone-INFINITY)
=#=#=#= End test: Check locations and constraints for prim9 - OK (0) =#=#=#=
* Passed: crm_resource - Check locations and constraints for prim9
=#=#=#= Begin test: Check locations and constraints for prim9 (XML) =#=#=#=
=#=#=#= End test: Check locations and constraints for prim9 (XML) - OK (0) =#=#=#=
* Passed: crm_resource - Check locations and constraints for prim9 (XML)
=#=#=#= Begin test: Recursively check locations and constraints for prim9 =#=#=#=
Resources prim9 is colocated with:
* clone (score=INFINITY, id=colocation-prim9-clone-INFINITY)
=#=#=#= End test: Recursively check locations and constraints for prim9 - OK (0) =#=#=#=
* Passed: crm_resource - Recursively check locations and constraints for prim9
=#=#=#= Begin test: Recursively check locations and constraints for prim9 (XML) =#=#=#=
=#=#=#= End test: Recursively check locations and constraints for prim9 (XML) - OK (0) =#=#=#=
* Passed: crm_resource - Recursively check locations and constraints for prim9 (XML)
=#=#=#= Begin test: Check locations and constraints for prim10 =#=#=#=
Resources prim10 is colocated with:
* prim4 (score=INFINITY, id=colocation-prim10-prim4-INFINITY)
* Locations:
* Node cluster02 (score=INFINITY, id=prim4-on-cluster2, rsc=prim4)
=#=#=#= End test: Check locations and constraints for prim10 - OK (0) =#=#=#=
* Passed: crm_resource - Check locations and constraints for prim10
=#=#=#= Begin test: Check locations and constraints for prim10 (XML) =#=#=#=
=#=#=#= End test: Check locations and constraints for prim10 (XML) - OK (0) =#=#=#=
* Passed: crm_resource - Check locations and constraints for prim10 (XML)
=#=#=#= Begin test: Recursively check locations and constraints for prim10 =#=#=#=
Resources prim10 is colocated with:
* prim4 (score=INFINITY, id=colocation-prim10-prim4-INFINITY)
* Locations:
* Node cluster02 (score=INFINITY, id=prim4-on-cluster2, rsc=prim4)
* Resources prim4 is colocated with:
* prim5 (score=INFINITY, id=colocation-prim4-prim5-INFINITY)
=#=#=#= End test: Recursively check locations and constraints for prim10 - OK (0) =#=#=#=
* Passed: crm_resource - Recursively check locations and constraints for prim10
=#=#=#= Begin test: Recursively check locations and constraints for prim10 (XML) =#=#=#=
=#=#=#= End test: Recursively check locations and constraints for prim10 (XML) - OK (0) =#=#=#=
* Passed: crm_resource - Recursively check locations and constraints for prim10 (XML)
=#=#=#= Begin test: Check locations and constraints for prim11 =#=#=#=
Resources colocated with prim11:
* prim13 (score=INFINITY, id=colocation-prim13-prim11-INFINITY)
Resources prim11 is colocated with:
* prim12 (score=INFINITY, id=colocation-prim11-prim12-INFINITY)
=#=#=#= End test: Check locations and constraints for prim11 - OK (0) =#=#=#=
* Passed: crm_resource - Check locations and constraints for prim11
=#=#=#= Begin test: Check locations and constraints for prim11 (XML) =#=#=#=
=#=#=#= End test: Check locations and constraints for prim11 (XML) - OK (0) =#=#=#=
* Passed: crm_resource - Check locations and constraints for prim11 (XML)
=#=#=#= Begin test: Recursively check locations and constraints for prim11 =#=#=#=
Resources colocated with prim11:
* prim13 (score=INFINITY, id=colocation-prim13-prim11-INFINITY)
* Resources colocated with prim13:
* prim12 (score=INFINITY, id=colocation-prim12-prim13-INFINITY)
* Resources colocated with prim12:
* prim11 (id=colocation-prim11-prim12-INFINITY - loop)
Resources prim11 is colocated with:
* prim12 (score=INFINITY, id=colocation-prim11-prim12-INFINITY)
* Resources prim12 is colocated with:
* prim13 (score=INFINITY, id=colocation-prim12-prim13-INFINITY)
* Resources prim13 is colocated with:
* prim11 (id=colocation-prim13-prim11-INFINITY - loop)
=#=#=#= End test: Recursively check locations and constraints for prim11 - OK (0) =#=#=#=
* Passed: crm_resource - Recursively check locations and constraints for prim11
=#=#=#= Begin test: Recursively check locations and constraints for prim11 (XML) =#=#=#=
=#=#=#= End test: Recursively check locations and constraints for prim11 (XML) - OK (0) =#=#=#=
* Passed: crm_resource - Recursively check locations and constraints for prim11 (XML)
=#=#=#= Begin test: Check locations and constraints for prim12 =#=#=#=
Resources colocated with prim12:
* prim11 (score=INFINITY, id=colocation-prim11-prim12-INFINITY)
Resources prim12 is colocated with:
* prim13 (score=INFINITY, id=colocation-prim12-prim13-INFINITY)
=#=#=#= End test: Check locations and constraints for prim12 - OK (0) =#=#=#=
* Passed: crm_resource - Check locations and constraints for prim12
=#=#=#= Begin test: Check locations and constraints for prim12 (XML) =#=#=#=
=#=#=#= End test: Check locations and constraints for prim12 (XML) - OK (0) =#=#=#=
* Passed: crm_resource - Check locations and constraints for prim12 (XML)
=#=#=#= Begin test: Recursively check locations and constraints for prim12 =#=#=#=
Resources colocated with prim12:
* prim11 (score=INFINITY, id=colocation-prim11-prim12-INFINITY)
* Resources colocated with prim11:
* prim13 (score=INFINITY, id=colocation-prim13-prim11-INFINITY)
* Resources colocated with prim13:
* prim12 (id=colocation-prim12-prim13-INFINITY - loop)
Resources prim12 is colocated with:
* prim13 (score=INFINITY, id=colocation-prim12-prim13-INFINITY)
* Resources prim13 is colocated with:
* prim11 (score=INFINITY, id=colocation-prim13-prim11-INFINITY)
* Resources prim11 is colocated with:
* prim12 (id=colocation-prim11-prim12-INFINITY - loop)
=#=#=#= End test: Recursively check locations and constraints for prim12 - OK (0) =#=#=#=
* Passed: crm_resource - Recursively check locations and constraints for prim12
=#=#=#= Begin test: Recursively check locations and constraints for prim12 (XML) =#=#=#=
=#=#=#= End test: Recursively check locations and constraints for prim12 (XML) - OK (0) =#=#=#=
* Passed: crm_resource - Recursively check locations and constraints for prim12 (XML)
=#=#=#= Begin test: Check locations and constraints for prim13 =#=#=#=
Resources colocated with prim13:
* prim12 (score=INFINITY, id=colocation-prim12-prim13-INFINITY)
Resources prim13 is colocated with:
* prim11 (score=INFINITY, id=colocation-prim13-prim11-INFINITY)
=#=#=#= End test: Check locations and constraints for prim13 - OK (0) =#=#=#=
* Passed: crm_resource - Check locations and constraints for prim13
=#=#=#= Begin test: Check locations and constraints for prim13 (XML) =#=#=#=
=#=#=#= End test: Check locations and constraints for prim13 (XML) - OK (0) =#=#=#=
* Passed: crm_resource - Check locations and constraints for prim13 (XML)
=#=#=#= Begin test: Recursively check locations and constraints for prim13 =#=#=#=
Resources colocated with prim13:
* prim12 (score=INFINITY, id=colocation-prim12-prim13-INFINITY)
* Resources colocated with prim12:
* prim11 (score=INFINITY, id=colocation-prim11-prim12-INFINITY)
* Resources colocated with prim11:
* prim13 (id=colocation-prim13-prim11-INFINITY - loop)
Resources prim13 is colocated with:
* prim11 (score=INFINITY, id=colocation-prim13-prim11-INFINITY)
* Resources prim11 is colocated with:
* prim12 (score=INFINITY, id=colocation-prim11-prim12-INFINITY)
* Resources prim12 is colocated with:
* prim13 (id=colocation-prim12-prim13-INFINITY - loop)
=#=#=#= End test: Recursively check locations and constraints for prim13 - OK (0) =#=#=#=
* Passed: crm_resource - Recursively check locations and constraints for prim13
=#=#=#= Begin test: Recursively check locations and constraints for prim13 (XML) =#=#=#=
=#=#=#= End test: Recursively check locations and constraints for prim13 (XML) - OK (0) =#=#=#=
* Passed: crm_resource - Recursively check locations and constraints for prim13 (XML)
=#=#=#= Begin test: Check locations and constraints for group =#=#=#=
Resources colocated with group:
* prim7 (score=INFINITY, id=colocation-prim7-group-INFINITY)
=#=#=#= End test: Check locations and constraints for group - OK (0) =#=#=#=
* Passed: crm_resource - Check locations and constraints for group
=#=#=#= Begin test: Check locations and constraints for group (XML) =#=#=#=
=#=#=#= End test: Check locations and constraints for group (XML) - OK (0) =#=#=#=
* Passed: crm_resource - Check locations and constraints for group (XML)
=#=#=#= Begin test: Recursively check locations and constraints for group =#=#=#=
Resources colocated with group:
* prim7 (score=INFINITY, id=colocation-prim7-group-INFINITY)
=#=#=#= End test: Recursively check locations and constraints for group - OK (0) =#=#=#=
* Passed: crm_resource - Recursively check locations and constraints for group
=#=#=#= Begin test: Recursively check locations and constraints for group (XML) =#=#=#=
=#=#=#= End test: Recursively check locations and constraints for group (XML) - OK (0) =#=#=#=
* Passed: crm_resource - Recursively check locations and constraints for group (XML)
=#=#=#= Begin test: Check locations and constraints for clone =#=#=#=
Resources colocated with clone:
* prim9 (score=INFINITY, id=colocation-prim9-clone-INFINITY)
=#=#=#= End test: Check locations and constraints for clone - OK (0) =#=#=#=
* Passed: crm_resource - Check locations and constraints for clone
=#=#=#= Begin test: Check locations and constraints for clone (XML) =#=#=#=
=#=#=#= End test: Check locations and constraints for clone (XML) - OK (0) =#=#=#=
* Passed: crm_resource - Check locations and constraints for clone (XML)
=#=#=#= Begin test: Recursively check locations and constraints for clone =#=#=#=
Resources colocated with clone:
* prim9 (score=INFINITY, id=colocation-prim9-clone-INFINITY)
=#=#=#= End test: Recursively check locations and constraints for clone - OK (0) =#=#=#=
* Passed: crm_resource - Recursively check locations and constraints for clone
=#=#=#= Begin test: Recursively check locations and constraints for clone (XML) =#=#=#=
=#=#=#= End test: Recursively check locations and constraints for clone (XML) - OK (0) =#=#=#=
* Passed: crm_resource - Recursively check locations and constraints for clone (XML)
=#=#=#= Begin test: Check locations and constraints for group member (referring to group) =#=#=#=
Resources colocated with group:
* prim7 (score=INFINITY, id=colocation-prim7-group-INFINITY)
=#=#=#= End test: Check locations and constraints for group member (referring to group) - OK (0) =#=#=#=
* Passed: crm_resource - Check locations and constraints for group member (referring to group)
=#=#=#= Begin test: Check locations and constraints for group member (without referring to group) =#=#=#=
Resources colocated with gr2:
* prim8 (score=INFINITY, id=colocation-prim8-gr2-INFINITY)
=#=#=#= End test: Check locations and constraints for group member (without referring to group) - OK (0) =#=#=#=
* Passed: crm_resource - Check locations and constraints for group member (without referring to group)
=#=#=#= Begin test: Set a meta-attribute for primitive and resources colocated with it (XML) =#=#=#=
=#=#=#= End test: Set a meta-attribute for primitive and resources colocated with it (XML) - OK (0) =#=#=#=
* Passed: crm_resource - Set a meta-attribute for primitive and resources colocated with it (XML)
=#=#=#= Begin test: Set a meta-attribute for group and resource colocated with it =#=#=#=
Set 'group' option: id=group-meta_attributes-target-role set=group-meta_attributes name=target-role value=Stopped
Set 'prim7' option: id=prim7-meta_attributes-target-role set=prim7-meta_attributes name=target-role value=Stopped
=#=#=#= End test: Set a meta-attribute for group and resource colocated with it - OK (0) =#=#=#=
* Passed: crm_resource - Set a meta-attribute for group and resource colocated with it
=#=#=#= Begin test: Set a meta-attribute for clone and resource colocated with it (XML) =#=#=#=
=#=#=#= End test: Set a meta-attribute for clone and resource colocated with it (XML) - OK (0) =#=#=#=
* Passed: crm_resource - Set a meta-attribute for clone and resource colocated with it (XML)
=#=#=#= Begin test: Show resource digests (XML) =#=#=#=
=#=#=#= End test: Show resource digests (XML) - OK (0) =#=#=#=
* Passed: crm_resource - Show resource digests (XML)
=#=#=#= Begin test: Show resource digests with overrides =#=#=#=
=#=#=#= End test: Show resource digests with overrides - OK (0) =#=#=#=
* Passed: crm_resource - Show resource digests with overrides
=#=#=#= Begin test: Show resource operations =#=#=#=
rsc1 (ocf:pacemaker:Dummy): Started: rsc1_monitor_0 (node=node4, call=136, rc=7, exec=28ms): Done
Fencing (stonith:fence_xvm): Started: Fencing_monitor_0 (node=node4, call=5, rc=7, exec=2ms): Done
rsc1 (ocf:pacemaker:Dummy): Started: rsc1_monitor_0 (node=node2, call=101, rc=7, exec=45ms): Done
Fencing (stonith:fence_xvm): Started: Fencing_monitor_0 (node=node2, call=5, rc=7, exec=4ms): Done
Fencing (stonith:fence_xvm): Started: Fencing_monitor_0 (node=node3, call=5, rc=7, exec=24ms): Done
rsc1 (ocf:pacemaker:Dummy): Started: rsc1_monitor_0 (node=node5, call=99, rc=193, exec=27ms): Pending
Fencing (stonith:fence_xvm): Started: Fencing_monitor_0 (node=node5, call=5, rc=7, exec=14ms): Done
rsc1 (ocf:pacemaker:Dummy): Started: rsc1_start_0 (node=node1, call=104, rc=0, exec=22ms): Done
rsc1 (ocf:pacemaker:Dummy): Started: rsc1_monitor_10000 (node=node1, call=106, rc=0, exec=20ms): Done
Fencing (stonith:fence_xvm): Started: Fencing_start_0 (node=node1, call=10, rc=0, exec=59ms): Done
Fencing (stonith:fence_xvm): Started: Fencing_monitor_120000 (node=node1, call=12, rc=0, exec=70ms): Done
=#=#=#= End test: Show resource operations - OK (0) =#=#=#=
* Passed: crm_resource - Show resource operations
=#=#=#= Begin test: Show resource operations (XML) =#=#=#=
=#=#=#= End test: Show resource operations (XML) - OK (0) =#=#=#=
* Passed: crm_resource - Show resource operations (XML)
=#=#=#= Begin test: List a promotable clone resource =#=#=#=
resource promotable-clone is running on: cluster01
resource promotable-clone is running on: cluster02 Promoted
=#=#=#= End test: List a promotable clone resource - OK (0) =#=#=#=
* Passed: crm_resource - List a promotable clone resource
=#=#=#= Begin test: List a promotable clone resource (XML) =#=#=#=
cluster01cluster02
=#=#=#= End test: List a promotable clone resource (XML) - OK (0) =#=#=#=
* Passed: crm_resource - List a promotable clone resource (XML)
=#=#=#= Begin test: List the primitive of a promotable clone resource =#=#=#=
resource promotable-rsc is running on: cluster01
resource promotable-rsc is running on: cluster02 Promoted
=#=#=#= End test: List the primitive of a promotable clone resource - OK (0) =#=#=#=
* Passed: crm_resource - List the primitive of a promotable clone resource
=#=#=#= Begin test: List the primitive of a promotable clone resource (XML) =#=#=#=
cluster01cluster02
=#=#=#= End test: List the primitive of a promotable clone resource (XML) - OK (0) =#=#=#=
* Passed: crm_resource - List the primitive of a promotable clone resource (XML)
=#=#=#= Begin test: List a single instance of a promotable clone resource =#=#=#=
resource promotable-rsc:0 is running on: cluster02 Promoted
=#=#=#= End test: List a single instance of a promotable clone resource - OK (0) =#=#=#=
* Passed: crm_resource - List a single instance of a promotable clone resource
=#=#=#= Begin test: List a single instance of a promotable clone resource (XML) =#=#=#=
cluster02
=#=#=#= End test: List a single instance of a promotable clone resource (XML) - OK (0) =#=#=#=
* Passed: crm_resource - List a single instance of a promotable clone resource (XML)
=#=#=#= Begin test: List another instance of a promotable clone resource =#=#=#=
resource promotable-rsc:1 is running on: cluster01
=#=#=#= End test: List another instance of a promotable clone resource - OK (0) =#=#=#=
* Passed: crm_resource - List another instance of a promotable clone resource
=#=#=#= Begin test: List another instance of a promotable clone resource (XML) =#=#=#=
cluster01
=#=#=#= End test: List another instance of a promotable clone resource (XML) - OK (0) =#=#=#=
* Passed: crm_resource - List another instance of a promotable clone resource (XML)
=#=#=#= Begin test: Try to move an instance of a cloned resource =#=#=#=
crm_resource: Cannot operate on clone resource instance 'promotable-rsc:0'
Error performing operation: Invalid parameter
=#=#=#= End test: Try to move an instance of a cloned resource - Invalid parameter (2) =#=#=#=
* Passed: crm_resource - Try to move an instance of a cloned resource
=#=#=#= Begin test: Check that CIB_file="-" works - crm_resource (XML) =#=#=#=
=#=#=#= End test: Check that CIB_file="-" works - crm_resource (XML) - OK (0) =#=#=#=
* Passed: crm_resource - Check that CIB_file="-" works - crm_resource (XML)
diff --git a/cts/cli/regression.validity.exp b/cts/cli/regression.validity.exp
index c98b485ea2..3b70f24163 100644
--- a/cts/cli/regression.validity.exp
+++ b/cts/cli/regression.validity.exp
@@ -1,92 +1,97 @@
=#=#=#= Begin test: Try to set unrecognized validate-with =#=#=#=
Call failed: Update does not conform to the configured schema
=#=#=#= End test: Try to set unrecognized validate-with - Invalid configuration (78) =#=#=#=
* Passed: cibadmin - Try to set unrecognized validate-with
=#=#=#= Begin test: Try to remove validate-with attribute =#=#=#=
Call failed: Update does not conform to the configured schema
=#=#=#= End test: Try to remove validate-with attribute - Invalid configuration (78) =#=#=#=
* Passed: cibadmin - Try to remove validate-with attribute
=#=#=#= Begin test: Try to use rsc_order first-action value disallowed by schema =#=#=#=
Call failed: Update does not conform to the configured schema
=#=#=#= Current cib after: Try to use rsc_order first-action value disallowed by schema =#=#=#=
=#=#=#= End test: Try to use rsc_order first-action value disallowed by schema - Invalid configuration (78) =#=#=#=
* Passed: cibadmin - Try to use rsc_order first-action value disallowed by schema
=#=#=#= Begin test: Try to use configuration legal only with schema after configured one =#=#=#=
Call failed: Update does not conform to the configured schema
=#=#=#= Current cib after: Try to use configuration legal only with schema after configured one =#=#=#=
=#=#=#= End test: Try to use configuration legal only with schema after configured one - Invalid configuration (78) =#=#=#=
* Passed: cibadmin - Try to use configuration legal only with schema after configured one
=#=#=#= Begin test: Disable schema validation =#=#=#=
=#=#=#= End test: Disable schema validation - OK (0) =#=#=#=
* Passed: cibadmin - Disable schema validation
=#=#=#= Begin test: Set invalid rsc_order first-action value (schema validation disabled) =#=#=#=
=#=#=#= Current cib after: Set invalid rsc_order first-action value (schema validation disabled) =#=#=#=
=#=#=#= End test: Set invalid rsc_order first-action value (schema validation disabled) - OK (0) =#=#=#=
* Passed: cibadmin - Set invalid rsc_order first-action value (schema validation disabled)
=#=#=#= Begin test: Run crm_simulate with invalid rsc_order first-action (schema validation disabled) =#=#=#=
+warning: Support for validate-with='none' is deprecated and will be removed in a future release without the possibility of upgrades (manually edit to use a supported schema)
+warning: Support for validate-with='none' is deprecated and will be removed in a future release without the possibility of upgrades (manually edit to use a supported schema)
Schema validation of configuration is disabled (support for validate-with set to "none" is deprecated and will be removed in a future release)
-unpack_resources error: Resource start-up disabled since no STONITH resources have been defined
-unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option
-unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity
-invert_action warning: Unknown action 'break' specified in order constraint
-invert_action warning: Unknown action 'break' specified in order constraint
-unpack_resources error: Resource start-up disabled since no STONITH resources have been defined
-unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option
-unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity
+warning: Support for validate-with='none' is deprecated and will be removed in a future release without the possibility of upgrades (manually edit to use a supported schema)
+warning: Support for validate-with='none' is deprecated and will be removed in a future release without the possibility of upgrades (manually edit to use a supported schema)
+error: Resource start-up disabled since no STONITH resources have been defined
+error: Either configure some or disable STONITH with the stonith-enabled option
+error: NOTE: Clusters with shared data need STONITH to ensure data integrity
+warning: Unknown action 'break' specified in order constraint
+warning: Unknown action 'break' specified in order constraint
+warning: Cannot invert constraint 'ord_1-2' (please specify inverse manually)
+error: Resource start-up disabled since no STONITH resources have been defined
+error: Either configure some or disable STONITH with the stonith-enabled option
+error: NOTE: Clusters with shared data need STONITH to ensure data integrity
Current cluster status:
* Full List of Resources:
* dummy1 (ocf:pacemaker:Dummy): Stopped
* dummy2 (ocf:pacemaker:Dummy): Stopped
Transition Summary:
Executing Cluster Transition:
Revised Cluster Status:
* Full List of Resources:
* dummy1 (ocf:pacemaker:Dummy): Stopped
* dummy2 (ocf:pacemaker:Dummy): Stopped
=#=#=#= End test: Run crm_simulate with invalid rsc_order first-action (schema validation disabled) - OK (0) =#=#=#=
* Passed: crm_simulate - Run crm_simulate with invalid rsc_order first-action (schema validation disabled)
diff --git a/cts/scheduler/summary/797.summary b/cts/scheduler/summary/797.summary
index d31572ba3d..3618f487d6 100644
--- a/cts/scheduler/summary/797.summary
+++ b/cts/scheduler/summary/797.summary
@@ -1,73 +1,74 @@
Current cluster status:
* Node List:
* Node c001n08: UNCLEAN (offline)
* Online: [ c001n01 c001n02 c001n03 ]
* Full List of Resources:
* DcIPaddr (ocf:heartbeat:IPaddr): Started c001n03
* rsc_c001n08 (ocf:heartbeat:IPaddr): Started c001n02
* rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02
* rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n03
* rsc_c001n01 (ocf:heartbeat:IPaddr): Started c001n01
* Clone Set: DoFencing [child_DoFencing] (unique):
* child_DoFencing:0 (stonith:ssh): Started (Monitoring) [ c001n01 c001n03 ]
* child_DoFencing:1 (stonith:ssh): Started c001n02
* child_DoFencing:2 (stonith:ssh): Started c001n03
* child_DoFencing:3 (stonith:ssh): Stopped
+warning: Node c001n08 is unclean but cannot be fenced
Transition Summary:
* Stop DcIPaddr ( c001n03 ) due to no quorum
* Stop rsc_c001n08 ( c001n02 ) due to no quorum
* Stop rsc_c001n02 ( c001n02 ) due to no quorum
* Stop rsc_c001n03 ( c001n03 ) due to no quorum
* Stop rsc_c001n01 ( c001n01 ) due to no quorum
* Restart child_DoFencing:0 ( c001n01 )
* Stop child_DoFencing:1 ( c001n02 ) due to node availability
Executing Cluster Transition:
* Resource action: DcIPaddr monitor on c001n02
* Resource action: DcIPaddr monitor on c001n01
* Resource action: DcIPaddr stop on c001n03
* Resource action: rsc_c001n08 stop on c001n02
* Resource action: rsc_c001n08 monitor on c001n03
* Resource action: rsc_c001n08 monitor on c001n01
* Resource action: rsc_c001n02 stop on c001n02
* Resource action: rsc_c001n02 monitor on c001n03
* Resource action: rsc_c001n02 monitor on c001n01
* Resource action: rsc_c001n03 stop on c001n03
* Resource action: rsc_c001n03 monitor on c001n02
* Resource action: rsc_c001n03 monitor on c001n01
* Resource action: rsc_c001n01 stop on c001n01
* Resource action: rsc_c001n01 monitor on c001n03
* Resource action: child_DoFencing:2 monitor on c001n01
* Resource action: child_DoFencing:3 monitor on c001n03
* Resource action: child_DoFencing:3 monitor on c001n02
* Resource action: child_DoFencing:3 monitor on c001n01
* Pseudo action: DoFencing_stop_0
* Resource action: DcIPaddr delete on c001n03
* Resource action: child_DoFencing:0 stop on c001n03
* Resource action: child_DoFencing:0 stop on c001n01
* Resource action: child_DoFencing:1 stop on c001n02
* Pseudo action: DoFencing_stopped_0
* Pseudo action: DoFencing_start_0
* Cluster action: do_shutdown on c001n02
* Resource action: child_DoFencing:0 start on c001n01
* Resource action: child_DoFencing:0 monitor=5000 on c001n01
* Pseudo action: DoFencing_running_0
Revised Cluster Status:
* Node List:
* Node c001n08: UNCLEAN (offline)
* Online: [ c001n01 c001n02 c001n03 ]
* Full List of Resources:
* DcIPaddr (ocf:heartbeat:IPaddr): Stopped
* rsc_c001n08 (ocf:heartbeat:IPaddr): Stopped
* rsc_c001n02 (ocf:heartbeat:IPaddr): Stopped
* rsc_c001n03 (ocf:heartbeat:IPaddr): Stopped
* rsc_c001n01 (ocf:heartbeat:IPaddr): Stopped
* Clone Set: DoFencing [child_DoFencing] (unique):
* child_DoFencing:0 (stonith:ssh): Started c001n01
* child_DoFencing:1 (stonith:ssh): Stopped
* child_DoFencing:2 (stonith:ssh): Started c001n03
* child_DoFencing:3 (stonith:ssh): Stopped
diff --git a/cts/scheduler/summary/bug-1822.summary b/cts/scheduler/summary/bug-1822.summary
index 3890a02730..83b9677275 100644
--- a/cts/scheduler/summary/bug-1822.summary
+++ b/cts/scheduler/summary/bug-1822.summary
@@ -1,44 +1,48 @@
+warning: Support for the 'ordered' group meta-attribute is deprecated and will be removed in a future release (use a resource set instead)
Current cluster status:
* Node List:
* Online: [ process1a process2b ]
* Full List of Resources:
* Clone Set: ms-sf [ms-sf_group] (promotable, unique):
* Resource Group: ms-sf_group:0:
* promotable_Stateful:0 (ocf:heartbeat:Dummy-statful): Unpromoted process2b
* promotable_procdctl:0 (ocf:heartbeat:procdctl): Stopped
* Resource Group: ms-sf_group:1:
* promotable_Stateful:1 (ocf:heartbeat:Dummy-statful): Promoted process1a
* promotable_procdctl:1 (ocf:heartbeat:procdctl): Promoted process1a
+error: Resetting 'on-fail' for promotable_Stateful:0 stop action to default value because 'stop' is not allowed for stop
+error: Resetting 'on-fail' for promotable_Stateful:1 stop action to default value because 'stop' is not allowed for stop
+error: Resetting 'on-fail' for promotable_procdctl:1 stop action to default value because 'stop' is not allowed for stop
Transition Summary:
* Stop promotable_Stateful:1 ( Promoted process1a ) due to node availability
* Stop promotable_procdctl:1 ( Promoted process1a ) due to node availability
Executing Cluster Transition:
* Pseudo action: ms-sf_demote_0
* Pseudo action: ms-sf_group:1_demote_0
* Resource action: promotable_Stateful:1 demote on process1a
* Resource action: promotable_procdctl:1 demote on process1a
* Pseudo action: ms-sf_group:1_demoted_0
* Pseudo action: ms-sf_demoted_0
* Pseudo action: ms-sf_stop_0
* Pseudo action: ms-sf_group:1_stop_0
* Resource action: promotable_Stateful:1 stop on process1a
* Resource action: promotable_procdctl:1 stop on process1a
* Cluster action: do_shutdown on process1a
* Pseudo action: ms-sf_group:1_stopped_0
* Pseudo action: ms-sf_stopped_0
Revised Cluster Status:
* Node List:
* Online: [ process1a process2b ]
* Full List of Resources:
* Clone Set: ms-sf [ms-sf_group] (promotable, unique):
* Resource Group: ms-sf_group:0:
* promotable_Stateful:0 (ocf:heartbeat:Dummy-statful): Unpromoted process2b
* promotable_procdctl:0 (ocf:heartbeat:procdctl): Stopped
* Resource Group: ms-sf_group:1:
* promotable_Stateful:1 (ocf:heartbeat:Dummy-statful): Stopped
* promotable_procdctl:1 (ocf:heartbeat:procdctl): Stopped
diff --git a/cts/scheduler/summary/bug-cl-5212.summary b/cts/scheduler/summary/bug-cl-5212.summary
index 7cbe97558b..496c064989 100644
--- a/cts/scheduler/summary/bug-cl-5212.summary
+++ b/cts/scheduler/summary/bug-cl-5212.summary
@@ -1,69 +1,71 @@
Current cluster status:
* Node List:
* Node srv01: UNCLEAN (offline)
* Node srv02: UNCLEAN (offline)
* Online: [ srv03 ]
* Full List of Resources:
* Resource Group: grpStonith1:
* prmStonith1-1 (stonith:external/ssh): Started srv02 (UNCLEAN)
* Resource Group: grpStonith2:
* prmStonith2-1 (stonith:external/ssh): Started srv01 (UNCLEAN)
* Resource Group: grpStonith3:
* prmStonith3-1 (stonith:external/ssh): Started srv01 (UNCLEAN)
* Clone Set: msPostgresql [pgsql] (promotable):
* pgsql (ocf:pacemaker:Stateful): Unpromoted srv02 (UNCLEAN)
* pgsql (ocf:pacemaker:Stateful): Promoted srv01 (UNCLEAN)
* Unpromoted: [ srv03 ]
* Clone Set: clnPingd [prmPingd]:
* prmPingd (ocf:pacemaker:ping): Started srv02 (UNCLEAN)
* prmPingd (ocf:pacemaker:ping): Started srv01 (UNCLEAN)
* Started: [ srv03 ]
+warning: Node srv01 is unclean but cannot be fenced
+warning: Node srv02 is unclean but cannot be fenced
Transition Summary:
* Stop prmStonith1-1 ( srv02 ) blocked
* Stop prmStonith2-1 ( srv01 ) blocked
* Stop prmStonith3-1 ( srv01 ) due to node availability (blocked)
* Stop pgsql:0 ( Unpromoted srv02 ) due to node availability (blocked)
* Stop pgsql:1 ( Promoted srv01 ) due to node availability (blocked)
* Stop prmPingd:0 ( srv02 ) due to node availability (blocked)
* Stop prmPingd:1 ( srv01 ) due to node availability (blocked)
Executing Cluster Transition:
* Pseudo action: grpStonith1_stop_0
* Pseudo action: grpStonith1_start_0
* Pseudo action: grpStonith2_stop_0
* Pseudo action: grpStonith2_start_0
* Pseudo action: grpStonith3_stop_0
* Pseudo action: msPostgresql_pre_notify_stop_0
* Pseudo action: clnPingd_stop_0
* Resource action: pgsql notify on srv03
* Pseudo action: msPostgresql_confirmed-pre_notify_stop_0
* Pseudo action: msPostgresql_stop_0
* Pseudo action: clnPingd_stopped_0
* Pseudo action: msPostgresql_stopped_0
* Pseudo action: msPostgresql_post_notify_stopped_0
* Resource action: pgsql notify on srv03
* Pseudo action: msPostgresql_confirmed-post_notify_stopped_0
Revised Cluster Status:
* Node List:
* Node srv01: UNCLEAN (offline)
* Node srv02: UNCLEAN (offline)
* Online: [ srv03 ]
* Full List of Resources:
* Resource Group: grpStonith1:
* prmStonith1-1 (stonith:external/ssh): Started srv02 (UNCLEAN)
* Resource Group: grpStonith2:
* prmStonith2-1 (stonith:external/ssh): Started srv01 (UNCLEAN)
* Resource Group: grpStonith3:
* prmStonith3-1 (stonith:external/ssh): Started srv01 (UNCLEAN)
* Clone Set: msPostgresql [pgsql] (promotable):
* pgsql (ocf:pacemaker:Stateful): Unpromoted srv02 (UNCLEAN)
* pgsql (ocf:pacemaker:Stateful): Promoted srv01 (UNCLEAN)
* Unpromoted: [ srv03 ]
* Clone Set: clnPingd [prmPingd]:
* prmPingd (ocf:pacemaker:ping): Started srv02 (UNCLEAN)
* prmPingd (ocf:pacemaker:ping): Started srv01 (UNCLEAN)
* Started: [ srv03 ]
diff --git a/cts/scheduler/summary/bug-lf-1852.summary b/cts/scheduler/summary/bug-lf-1852.summary
index 26c73e166a..bc8239c763 100644
--- a/cts/scheduler/summary/bug-lf-1852.summary
+++ b/cts/scheduler/summary/bug-lf-1852.summary
@@ -1,40 +1,50 @@
+warning: Support for setting meta-attributes (such as target-role) to the explicit value '#default' is deprecated and will be removed in a future release
+warning: Support for setting meta-attributes (such as target-role) to the explicit value '#default' is deprecated and will be removed in a future release
+warning: Support for setting meta-attributes (such as target-role) to the explicit value '#default' is deprecated and will be removed in a future release
+warning: Support for setting meta-attributes (such as target-role) to the explicit value '#default' is deprecated and will be removed in a future release
+warning: Support for setting meta-attributes (such as target-role) to the explicit value '#default' is deprecated and will be removed in a future release
Current cluster status:
* Node List:
* Online: [ mysql-01 mysql-02 ]
* Full List of Resources:
* Clone Set: ms-drbd0 [drbd0] (promotable):
* Promoted: [ mysql-02 ]
* Stopped: [ mysql-01 ]
* Resource Group: fs_mysql_ip:
* fs0 (ocf:heartbeat:Filesystem): Started mysql-02
* mysqlid (lsb:mysql): Started mysql-02
* ip_resource (ocf:heartbeat:IPaddr2): Started mysql-02
Transition Summary:
* Start drbd0:1 ( mysql-01 )
Executing Cluster Transition:
* Pseudo action: ms-drbd0_pre_notify_start_0
* Resource action: drbd0:0 notify on mysql-02
* Pseudo action: ms-drbd0_confirmed-pre_notify_start_0
* Pseudo action: ms-drbd0_start_0
* Resource action: drbd0:1 start on mysql-01
* Pseudo action: ms-drbd0_running_0
* Pseudo action: ms-drbd0_post_notify_running_0
* Resource action: drbd0:0 notify on mysql-02
* Resource action: drbd0:1 notify on mysql-01
* Pseudo action: ms-drbd0_confirmed-post_notify_running_0
+warning: Support for setting meta-attributes (such as target-role) to the explicit value '#default' is deprecated and will be removed in a future release
+warning: Support for setting meta-attributes (such as target-role) to the explicit value '#default' is deprecated and will be removed in a future release
+warning: Support for setting meta-attributes (such as target-role) to the explicit value '#default' is deprecated and will be removed in a future release
+warning: Support for setting meta-attributes (such as target-role) to the explicit value '#default' is deprecated and will be removed in a future release
+warning: Support for setting meta-attributes (such as target-role) to the explicit value '#default' is deprecated and will be removed in a future release
Revised Cluster Status:
* Node List:
* Online: [ mysql-01 mysql-02 ]
* Full List of Resources:
* Clone Set: ms-drbd0 [drbd0] (promotable):
* Promoted: [ mysql-02 ]
* Unpromoted: [ mysql-01 ]
* Resource Group: fs_mysql_ip:
* fs0 (ocf:heartbeat:Filesystem): Started mysql-02
* mysqlid (lsb:mysql): Started mysql-02
* ip_resource (ocf:heartbeat:IPaddr2): Started mysql-02
diff --git a/cts/scheduler/summary/bug-lf-2171.summary b/cts/scheduler/summary/bug-lf-2171.summary
index 5117608a20..b1bd1b99c2 100644
--- a/cts/scheduler/summary/bug-lf-2171.summary
+++ b/cts/scheduler/summary/bug-lf-2171.summary
@@ -1,39 +1,41 @@
+warning: Support for the 'ordered' group meta-attribute is deprecated and will be removed in a future release (use a resource set instead)
+warning: Support for the 'collocated' group meta-attribute is deprecated and will be removed in a future release (use a resource set instead)
2 of 4 resource instances DISABLED and 0 BLOCKED from further action due to failure
Current cluster status:
* Node List:
* Online: [ xenserver1 xenserver2 ]
* Full List of Resources:
* Clone Set: cl_res_Dummy1 [res_Dummy1] (disabled):
* Started: [ xenserver1 xenserver2 ]
* Resource Group: gr_Dummy (disabled):
* res_Dummy2 (ocf:heartbeat:Dummy): Started xenserver1
* res_Dummy3 (ocf:heartbeat:Dummy): Started xenserver1
Transition Summary:
* Stop res_Dummy1:0 ( xenserver1 ) due to node availability
* Stop res_Dummy1:1 ( xenserver2 ) due to node availability
* Stop res_Dummy2 ( xenserver1 ) due to unrunnable cl_res_Dummy1 running
* Stop res_Dummy3 ( xenserver1 ) due to unrunnable cl_res_Dummy1 running
Executing Cluster Transition:
* Pseudo action: gr_Dummy_stop_0
* Resource action: res_Dummy2 stop on xenserver1
* Resource action: res_Dummy3 stop on xenserver1
* Pseudo action: gr_Dummy_stopped_0
* Pseudo action: cl_res_Dummy1_stop_0
* Resource action: res_Dummy1:1 stop on xenserver1
* Resource action: res_Dummy1:0 stop on xenserver2
* Pseudo action: cl_res_Dummy1_stopped_0
Revised Cluster Status:
* Node List:
* Online: [ xenserver1 xenserver2 ]
* Full List of Resources:
* Clone Set: cl_res_Dummy1 [res_Dummy1] (disabled):
* Stopped (disabled): [ xenserver1 xenserver2 ]
* Resource Group: gr_Dummy (disabled):
* res_Dummy2 (ocf:heartbeat:Dummy): Stopped
* res_Dummy3 (ocf:heartbeat:Dummy): Stopped
diff --git a/cts/scheduler/summary/bug-lf-2606.summary b/cts/scheduler/summary/bug-lf-2606.summary
index e0b7ebf0e6..9831385949 100644
--- a/cts/scheduler/summary/bug-lf-2606.summary
+++ b/cts/scheduler/summary/bug-lf-2606.summary
@@ -1,46 +1,54 @@
1 of 5 resource instances DISABLED and 0 BLOCKED from further action due to failure
Current cluster status:
* Node List:
* Node node2: UNCLEAN (online)
* Online: [ node1 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* rsc1 (ocf:pacemaker:Dummy): FAILED node2 (disabled)
* rsc2 (ocf:pacemaker:Dummy): Started node2
* Clone Set: ms3 [rsc3] (promotable):
* Promoted: [ node2 ]
* Unpromoted: [ node1 ]
+error: Operation rsc3-monitor-unpromoted-5 is duplicate of rsc3-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc3-monitor-unpromoted-5 is duplicate of rsc3-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc3-monitor-unpromoted-5 is duplicate of rsc3-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc3-monitor-unpromoted-5 is duplicate of rsc3-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc3-monitor-unpromoted-5 is duplicate of rsc3-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc3-monitor-unpromoted-5 is duplicate of rsc3-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc3-monitor-unpromoted-5 is duplicate of rsc3-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc3-monitor-unpromoted-5 is duplicate of rsc3-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
Transition Summary:
* Fence (reboot) node2 'rsc1 failed there'
* Stop rsc1 ( node2 ) due to node availability
* Move rsc2 ( node2 -> node1 )
* Stop rsc3:1 ( Promoted node2 ) due to node availability
Executing Cluster Transition:
* Pseudo action: ms3_demote_0
* Fencing node2 (reboot)
* Pseudo action: rsc1_stop_0
* Pseudo action: rsc2_stop_0
* Pseudo action: rsc3:1_demote_0
* Pseudo action: ms3_demoted_0
* Pseudo action: ms3_stop_0
* Resource action: rsc2 start on node1
* Pseudo action: rsc3:1_stop_0
* Pseudo action: ms3_stopped_0
* Resource action: rsc2 monitor=10000 on node1
Revised Cluster Status:
* Node List:
* Online: [ node1 ]
* OFFLINE: [ node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* rsc1 (ocf:pacemaker:Dummy): Stopped (disabled)
* rsc2 (ocf:pacemaker:Dummy): Started node1
* Clone Set: ms3 [rsc3] (promotable):
* Unpromoted: [ node1 ]
* Stopped: [ node2 ]
diff --git a/cts/scheduler/summary/bug-pm-11.summary b/cts/scheduler/summary/bug-pm-11.summary
index c3f8f5b3af..37f327fed9 100644
--- a/cts/scheduler/summary/bug-pm-11.summary
+++ b/cts/scheduler/summary/bug-pm-11.summary
@@ -1,48 +1,49 @@
+warning: Support for the 'ordered' group meta-attribute is deprecated and will be removed in a future release (use a resource set instead)
Current cluster status:
* Node List:
* Online: [ node-a node-b ]
* Full List of Resources:
* Clone Set: ms-sf [group] (promotable, unique):
* Resource Group: group:0:
* stateful-1:0 (ocf:heartbeat:Stateful): Unpromoted node-b
* stateful-2:0 (ocf:heartbeat:Stateful): Stopped
* Resource Group: group:1:
* stateful-1:1 (ocf:heartbeat:Stateful): Promoted node-a
* stateful-2:1 (ocf:heartbeat:Stateful): Stopped
Transition Summary:
* Start stateful-2:0 ( node-b )
* Promote stateful-2:1 ( Stopped -> Promoted node-a )
Executing Cluster Transition:
* Resource action: stateful-2:0 monitor on node-b
* Resource action: stateful-2:0 monitor on node-a
* Resource action: stateful-2:1 monitor on node-b
* Resource action: stateful-2:1 monitor on node-a
* Pseudo action: ms-sf_start_0
* Pseudo action: group:0_start_0
* Resource action: stateful-2:0 start on node-b
* Pseudo action: group:1_start_0
* Resource action: stateful-2:1 start on node-a
* Pseudo action: group:0_running_0
* Pseudo action: group:1_running_0
* Pseudo action: ms-sf_running_0
* Pseudo action: ms-sf_promote_0
* Pseudo action: group:1_promote_0
* Resource action: stateful-2:1 promote on node-a
* Pseudo action: group:1_promoted_0
* Pseudo action: ms-sf_promoted_0
Revised Cluster Status:
* Node List:
* Online: [ node-a node-b ]
* Full List of Resources:
* Clone Set: ms-sf [group] (promotable, unique):
* Resource Group: group:0:
* stateful-1:0 (ocf:heartbeat:Stateful): Unpromoted node-b
* stateful-2:0 (ocf:heartbeat:Stateful): Unpromoted node-b
* Resource Group: group:1:
* stateful-1:1 (ocf:heartbeat:Stateful): Promoted node-a
* stateful-2:1 (ocf:heartbeat:Stateful): Promoted node-a
diff --git a/cts/scheduler/summary/bug-pm-12.summary b/cts/scheduler/summary/bug-pm-12.summary
index 8defffe8d6..9f82560b3f 100644
--- a/cts/scheduler/summary/bug-pm-12.summary
+++ b/cts/scheduler/summary/bug-pm-12.summary
@@ -1,57 +1,58 @@
+warning: Support for the 'ordered' group meta-attribute is deprecated and will be removed in a future release (use a resource set instead)
Current cluster status:
* Node List:
* Online: [ node-a node-b ]
* Full List of Resources:
* Clone Set: ms-sf [group] (promotable, unique):
* Resource Group: group:0:
* stateful-1:0 (ocf:heartbeat:Stateful): Unpromoted node-b
* stateful-2:0 (ocf:heartbeat:Stateful): Unpromoted node-b
* Resource Group: group:1:
* stateful-1:1 (ocf:heartbeat:Stateful): Promoted node-a
* stateful-2:1 (ocf:heartbeat:Stateful): Promoted node-a
Transition Summary:
* Restart stateful-2:0 ( Unpromoted node-b ) due to resource definition change
* Restart stateful-2:1 ( Promoted node-a ) due to resource definition change
Executing Cluster Transition:
* Pseudo action: ms-sf_demote_0
* Pseudo action: group:1_demote_0
* Resource action: stateful-2:1 demote on node-a
* Pseudo action: group:1_demoted_0
* Pseudo action: ms-sf_demoted_0
* Pseudo action: ms-sf_stop_0
* Pseudo action: group:0_stop_0
* Resource action: stateful-2:0 stop on node-b
* Pseudo action: group:1_stop_0
* Resource action: stateful-2:1 stop on node-a
* Pseudo action: group:0_stopped_0
* Pseudo action: group:1_stopped_0
* Pseudo action: ms-sf_stopped_0
* Pseudo action: ms-sf_start_0
* Pseudo action: group:0_start_0
* Resource action: stateful-2:0 start on node-b
* Pseudo action: group:1_start_0
* Resource action: stateful-2:1 start on node-a
* Pseudo action: group:0_running_0
* Pseudo action: group:1_running_0
* Pseudo action: ms-sf_running_0
* Pseudo action: ms-sf_promote_0
* Pseudo action: group:1_promote_0
* Resource action: stateful-2:1 promote on node-a
* Pseudo action: group:1_promoted_0
* Pseudo action: ms-sf_promoted_0
Revised Cluster Status:
* Node List:
* Online: [ node-a node-b ]
* Full List of Resources:
* Clone Set: ms-sf [group] (promotable, unique):
* Resource Group: group:0:
* stateful-1:0 (ocf:heartbeat:Stateful): Unpromoted node-b
* stateful-2:0 (ocf:heartbeat:Stateful): Unpromoted node-b
* Resource Group: group:1:
* stateful-1:1 (ocf:heartbeat:Stateful): Promoted node-a
* stateful-2:1 (ocf:heartbeat:Stateful): Promoted node-a
diff --git a/cts/scheduler/summary/bug-rh-1097457.summary b/cts/scheduler/summary/bug-rh-1097457.summary
index f68a509609..0b0f14e122 100644
--- a/cts/scheduler/summary/bug-rh-1097457.summary
+++ b/cts/scheduler/summary/bug-rh-1097457.summary
@@ -1,126 +1,130 @@
2 of 26 resource instances DISABLED and 0 BLOCKED from further action due to failure
Current cluster status:
* Node List:
* Online: [ lama2 lama3 ]
* GuestOnline: [ lamaVM1 lamaVM2 lamaVM3 ]
* Full List of Resources:
* restofencelama2 (stonith:fence_ipmilan): Started lama3
* restofencelama3 (stonith:fence_ipmilan): Started lama2
* VM1 (ocf:heartbeat:VirtualDomain): Started lama2
* FSlun1 (ocf:heartbeat:Filesystem): Started lamaVM1
* FSlun2 (ocf:heartbeat:Filesystem): Started lamaVM1
* VM2 (ocf:heartbeat:VirtualDomain): FAILED lama3
* VM3 (ocf:heartbeat:VirtualDomain): Started lama3
* FSlun3 (ocf:heartbeat:Filesystem): FAILED lamaVM2
* FSlun4 (ocf:heartbeat:Filesystem): Started lamaVM3
* FAKE5-IP (ocf:heartbeat:IPaddr2): Stopped (disabled)
* FAKE6-IP (ocf:heartbeat:IPaddr2): Stopped (disabled)
* FAKE5 (ocf:heartbeat:Dummy): Started lamaVM3
* Resource Group: lamaVM1-G1:
* FAKE1 (ocf:heartbeat:Dummy): Started lamaVM1
* FAKE1-IP (ocf:heartbeat:IPaddr2): Started lamaVM1
* Resource Group: lamaVM1-G2:
* FAKE2 (ocf:heartbeat:Dummy): Started lamaVM1
* FAKE2-IP (ocf:heartbeat:IPaddr2): Started lamaVM1
* Resource Group: lamaVM1-G3:
* FAKE3 (ocf:heartbeat:Dummy): Started lamaVM1
* FAKE3-IP (ocf:heartbeat:IPaddr2): Started lamaVM1
* Resource Group: lamaVM2-G4:
* FAKE4 (ocf:heartbeat:Dummy): Started lamaVM2
* FAKE4-IP (ocf:heartbeat:IPaddr2): Started lamaVM2
* Clone Set: FAKE6-clone [FAKE6]:
* Started: [ lamaVM1 lamaVM2 lamaVM3 ]
+warning: Invalid ordering constraint between FSlun4 and VM3
+warning: Invalid ordering constraint between FSlun3 and VM2
+warning: Invalid ordering constraint between FSlun2 and VM1
+warning: Invalid ordering constraint between FSlun1 and VM1
Transition Summary:
* Fence (reboot) lamaVM2 (resource: VM2) 'guest is unclean'
* Recover VM2 ( lama3 )
* Recover FSlun3 ( lamaVM2 -> lama2 )
* Restart FAKE4 ( lamaVM2 ) due to required VM2 start
* Restart FAKE4-IP ( lamaVM2 ) due to required VM2 start
* Restart FAKE6:2 ( lamaVM2 ) due to required VM2 start
* Restart lamaVM2 ( lama3 ) due to required VM2 start
Executing Cluster Transition:
* Resource action: FSlun1 monitor on lamaVM3
* Resource action: FSlun2 monitor on lamaVM3
* Resource action: FSlun3 monitor on lamaVM3
* Resource action: FSlun3 monitor on lamaVM1
* Resource action: FSlun4 monitor on lamaVM1
* Resource action: FAKE5-IP monitor on lamaVM3
* Resource action: FAKE5-IP monitor on lamaVM1
* Resource action: FAKE6-IP monitor on lamaVM3
* Resource action: FAKE6-IP monitor on lamaVM1
* Resource action: FAKE5 monitor on lamaVM1
* Resource action: FAKE1 monitor on lamaVM3
* Resource action: FAKE1-IP monitor on lamaVM3
* Resource action: FAKE2 monitor on lamaVM3
* Resource action: FAKE2-IP monitor on lamaVM3
* Resource action: FAKE3 monitor on lamaVM3
* Resource action: FAKE3-IP monitor on lamaVM3
* Resource action: FAKE4 monitor on lamaVM3
* Resource action: FAKE4 monitor on lamaVM1
* Resource action: FAKE4-IP monitor on lamaVM3
* Resource action: FAKE4-IP monitor on lamaVM1
* Resource action: lamaVM2 stop on lama3
* Resource action: VM2 stop on lama3
* Pseudo action: stonith-lamaVM2-reboot on lamaVM2
* Resource action: VM2 start on lama3
* Resource action: VM2 monitor=10000 on lama3
* Pseudo action: lamaVM2-G4_stop_0
* Pseudo action: FAKE4-IP_stop_0
* Pseudo action: FAKE6-clone_stop_0
* Resource action: lamaVM2 start on lama3
* Resource action: lamaVM2 monitor=30000 on lama3
* Resource action: FSlun3 monitor=10000 on lamaVM2
* Pseudo action: FAKE4_stop_0
* Pseudo action: FAKE6_stop_0
* Pseudo action: FAKE6-clone_stopped_0
* Pseudo action: FAKE6-clone_start_0
* Pseudo action: lamaVM2-G4_stopped_0
* Resource action: FAKE6 start on lamaVM2
* Resource action: FAKE6 monitor=30000 on lamaVM2
* Pseudo action: FAKE6-clone_running_0
* Pseudo action: FSlun3_stop_0
* Resource action: FSlun3 start on lama2
* Pseudo action: lamaVM2-G4_start_0
* Resource action: FAKE4 start on lamaVM2
* Resource action: FAKE4 monitor=30000 on lamaVM2
* Resource action: FAKE4-IP start on lamaVM2
* Resource action: FAKE4-IP monitor=30000 on lamaVM2
* Resource action: FSlun3 monitor=10000 on lama2
* Pseudo action: lamaVM2-G4_running_0
Revised Cluster Status:
* Node List:
* Online: [ lama2 lama3 ]
* GuestOnline: [ lamaVM1 lamaVM2 lamaVM3 ]
* Full List of Resources:
* restofencelama2 (stonith:fence_ipmilan): Started lama3
* restofencelama3 (stonith:fence_ipmilan): Started lama2
* VM1 (ocf:heartbeat:VirtualDomain): Started lama2
* FSlun1 (ocf:heartbeat:Filesystem): Started lamaVM1
* FSlun2 (ocf:heartbeat:Filesystem): Started lamaVM1
* VM2 (ocf:heartbeat:VirtualDomain): FAILED lama3
* VM3 (ocf:heartbeat:VirtualDomain): Started lama3
* FSlun3 (ocf:heartbeat:Filesystem): FAILED [ lama2 lamaVM2 ]
* FSlun4 (ocf:heartbeat:Filesystem): Started lamaVM3
* FAKE5-IP (ocf:heartbeat:IPaddr2): Stopped (disabled)
* FAKE6-IP (ocf:heartbeat:IPaddr2): Stopped (disabled)
* FAKE5 (ocf:heartbeat:Dummy): Started lamaVM3
* Resource Group: lamaVM1-G1:
* FAKE1 (ocf:heartbeat:Dummy): Started lamaVM1
* FAKE1-IP (ocf:heartbeat:IPaddr2): Started lamaVM1
* Resource Group: lamaVM1-G2:
* FAKE2 (ocf:heartbeat:Dummy): Started lamaVM1
* FAKE2-IP (ocf:heartbeat:IPaddr2): Started lamaVM1
* Resource Group: lamaVM1-G3:
* FAKE3 (ocf:heartbeat:Dummy): Started lamaVM1
* FAKE3-IP (ocf:heartbeat:IPaddr2): Started lamaVM1
* Resource Group: lamaVM2-G4:
* FAKE4 (ocf:heartbeat:Dummy): Started lamaVM2
* FAKE4-IP (ocf:heartbeat:IPaddr2): Started lamaVM2
* Clone Set: FAKE6-clone [FAKE6]:
* Started: [ lamaVM1 lamaVM2 lamaVM3 ]
diff --git a/cts/scheduler/summary/cancel-behind-moving-remote.summary b/cts/scheduler/summary/cancel-behind-moving-remote.summary
index 945f3c81da..fd60a855d4 100644
--- a/cts/scheduler/summary/cancel-behind-moving-remote.summary
+++ b/cts/scheduler/summary/cancel-behind-moving-remote.summary
@@ -1,189 +1,381 @@
+warning: compute-0 requires fencing but fencing is disabled
+warning: compute-1 requires fencing but fencing is disabled
+warning: galera-bundle requires fencing but fencing is disabled
+warning: galera-bundle-master requires fencing but fencing is disabled
+warning: galera:0 requires fencing but fencing is disabled
+warning: galera:1 requires fencing but fencing is disabled
+warning: galera:2 requires fencing but fencing is disabled
+warning: galera-bundle-podman-0 requires fencing but fencing is disabled
+warning: galera-bundle-0 requires fencing but fencing is disabled
+warning: galera-bundle-podman-1 requires fencing but fencing is disabled
+warning: galera-bundle-1 requires fencing but fencing is disabled
+warning: galera-bundle-podman-2 requires fencing but fencing is disabled
+warning: galera-bundle-2 requires fencing but fencing is disabled
+warning: rabbitmq-bundle requires fencing but fencing is disabled
+warning: rabbitmq-bundle-clone requires fencing but fencing is disabled
+warning: rabbitmq:0 requires fencing but fencing is disabled
+warning: rabbitmq:1 requires fencing but fencing is disabled
+warning: rabbitmq:2 requires fencing but fencing is disabled
+warning: rabbitmq-bundle-podman-0 requires fencing but fencing is disabled
+warning: rabbitmq-bundle-0 requires fencing but fencing is disabled
+warning: rabbitmq-bundle-podman-1 requires fencing but fencing is disabled
+warning: rabbitmq-bundle-1 requires fencing but fencing is disabled
+warning: rabbitmq-bundle-podman-2 requires fencing but fencing is disabled
+warning: rabbitmq-bundle-2 requires fencing but fencing is disabled
+warning: redis-bundle requires fencing but fencing is disabled
+warning: redis-bundle-master requires fencing but fencing is disabled
+warning: redis:0 requires fencing but fencing is disabled
+warning: redis:1 requires fencing but fencing is disabled
+warning: redis:2 requires fencing but fencing is disabled
+warning: redis-bundle-podman-0 requires fencing but fencing is disabled
+warning: redis-bundle-0 requires fencing but fencing is disabled
+warning: redis-bundle-podman-1 requires fencing but fencing is disabled
+warning: redis-bundle-1 requires fencing but fencing is disabled
+warning: redis-bundle-podman-2 requires fencing but fencing is disabled
+warning: redis-bundle-2 requires fencing but fencing is disabled
+warning: ip-192.168.24.150 requires fencing but fencing is disabled
+warning: ip-10.0.0.150 requires fencing but fencing is disabled
+warning: ip-172.17.1.151 requires fencing but fencing is disabled
+warning: ip-172.17.1.150 requires fencing but fencing is disabled
+warning: ip-172.17.3.150 requires fencing but fencing is disabled
+warning: ip-172.17.4.150 requires fencing but fencing is disabled
+warning: haproxy-bundle requires fencing but fencing is disabled
+warning: haproxy-bundle-podman-0 requires fencing but fencing is disabled
+warning: haproxy-bundle-podman-1 requires fencing but fencing is disabled
+warning: haproxy-bundle-podman-2 requires fencing but fencing is disabled
+warning: ovn-dbs-bundle requires fencing but fencing is disabled
+warning: ovn-dbs-bundle-master requires fencing but fencing is disabled
+warning: ovndb_servers:0 requires fencing but fencing is disabled
+warning: ovndb_servers:1 requires fencing but fencing is disabled
+warning: ovndb_servers:2 requires fencing but fencing is disabled
+warning: ovn-dbs-bundle-podman-0 requires fencing but fencing is disabled
+warning: ovn-dbs-bundle-0 requires fencing but fencing is disabled
+warning: ovn-dbs-bundle-podman-1 requires fencing but fencing is disabled
+warning: ovn-dbs-bundle-1 requires fencing but fencing is disabled
+warning: ovn-dbs-bundle-podman-2 requires fencing but fencing is disabled
+warning: ovn-dbs-bundle-2 requires fencing but fencing is disabled
+warning: ip-172.17.1.87 requires fencing but fencing is disabled
+warning: stonith-fence_compute-fence-nova requires fencing but fencing is disabled
+warning: compute-unfence-trigger-clone requires fencing but fencing is disabled
+warning: Resetting "requires" for compute-unfence-trigger:0 to "quorum" because fencing is disabled
+warning: Resetting "requires" for compute-unfence-trigger:1 to "quorum" because fencing is disabled
+warning: Resetting "requires" for compute-unfence-trigger:2 to "quorum" because fencing is disabled
+warning: Resetting "requires" for compute-unfence-trigger:3 to "quorum" because fencing is disabled
+warning: Resetting "requires" for compute-unfence-trigger:4 to "quorum" because fencing is disabled
+warning: Resetting "requires" for compute-unfence-trigger:5 to "quorum" because fencing is disabled
+warning: Resetting "requires" for compute-unfence-trigger:6 to "quorum" because fencing is disabled
+warning: Resetting "requires" for compute-unfence-trigger:7 to "quorum" because fencing is disabled
+warning: Resetting "requires" for compute-unfence-trigger:8 to "quorum" because fencing is disabled
+warning: Resetting "requires" for compute-unfence-trigger:9 to "quorum" because fencing is disabled
+warning: Resetting "requires" for compute-unfence-trigger:10 to "quorum" because fencing is disabled
+warning: Resetting "requires" for compute-unfence-trigger:11 to "quorum" because fencing is disabled
+warning: Resetting "requires" for compute-unfence-trigger:12 to "quorum" because fencing is disabled
+warning: Resetting "requires" for compute-unfence-trigger:13 to "quorum" because fencing is disabled
+warning: Resetting "requires" for compute-unfence-trigger:14 to "quorum" because fencing is disabled
+warning: Resetting "requires" for compute-unfence-trigger:15 to "quorum" because fencing is disabled
+warning: Resetting "requires" for compute-unfence-trigger:16 to "quorum" because fencing is disabled
+warning: Resetting "requires" for compute-unfence-trigger:17 to "quorum" because fencing is disabled
+warning: Resetting "requires" for compute-unfence-trigger:18 to "quorum" because fencing is disabled
+warning: Resetting "requires" for compute-unfence-trigger:19 to "quorum" because fencing is disabled
+warning: Resetting "requires" for compute-unfence-trigger:20 to "quorum" because fencing is disabled
+warning: Resetting "requires" for compute-unfence-trigger:21 to "quorum" because fencing is disabled
+warning: Resetting "requires" for compute-unfence-trigger:22 to "quorum" because fencing is disabled
+warning: nova-evacuate requires fencing but fencing is disabled
+warning: stonith-fence_ipmilan-525400aa1373 requires fencing but fencing is disabled
+warning: stonith-fence_ipmilan-525400dc23e0 requires fencing but fencing is disabled
+warning: stonith-fence_ipmilan-52540040bb56 requires fencing but fencing is disabled
+warning: stonith-fence_ipmilan-525400addd38 requires fencing but fencing is disabled
+warning: stonith-fence_ipmilan-52540078fb07 requires fencing but fencing is disabled
+warning: stonith-fence_ipmilan-525400ea59b0 requires fencing but fencing is disabled
+warning: stonith-fence_ipmilan-525400066e50 requires fencing but fencing is disabled
+warning: stonith-fence_ipmilan-525400e1534e requires fencing but fencing is disabled
+warning: stonith-fence_ipmilan-52540060dbba requires fencing but fencing is disabled
+warning: stonith-fence_ipmilan-525400e018b6 requires fencing but fencing is disabled
+warning: stonith-fence_ipmilan-525400c87cdb requires fencing but fencing is disabled
+warning: openstack-cinder-volume requires fencing but fencing is disabled
+warning: openstack-cinder-volume-podman-0 requires fencing but fencing is disabled
Using the original execution date of: 2021-02-15 01:40:51Z
Current cluster status:
* Node List:
* Online: [ controller-0 controller-1 controller-2 database-0 database-1 database-2 messaging-0 messaging-2 ]
* OFFLINE: [ messaging-1 ]
* RemoteOnline: [ compute-0 compute-1 ]
* GuestOnline: [ galera-bundle-0 galera-bundle-1 galera-bundle-2 ovn-dbs-bundle-1 ovn-dbs-bundle-2 rabbitmq-bundle-0 rabbitmq-bundle-2 redis-bundle-0 redis-bundle-1 redis-bundle-2 ]
* Full List of Resources:
* compute-0 (ocf:pacemaker:remote): Started controller-1
* compute-1 (ocf:pacemaker:remote): Started controller-2
* Container bundle set: galera-bundle [cluster.common.tag/rhosp16-openstack-mariadb:pcmklatest]:
* galera-bundle-0 (ocf:heartbeat:galera): Promoted database-0
* galera-bundle-1 (ocf:heartbeat:galera): Promoted database-1
* galera-bundle-2 (ocf:heartbeat:galera): Promoted database-2
* Container bundle set: rabbitmq-bundle [cluster.common.tag/rhosp16-openstack-rabbitmq:pcmklatest]:
* rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Started messaging-0
* rabbitmq-bundle-1 (ocf:heartbeat:rabbitmq-cluster): Stopped
* rabbitmq-bundle-2 (ocf:heartbeat:rabbitmq-cluster): Started messaging-2
* Container bundle set: redis-bundle [cluster.common.tag/rhosp16-openstack-redis:pcmklatest]:
* redis-bundle-0 (ocf:heartbeat:redis): Promoted controller-2
* redis-bundle-1 (ocf:heartbeat:redis): Unpromoted controller-0
* redis-bundle-2 (ocf:heartbeat:redis): Unpromoted controller-1
* ip-192.168.24.150 (ocf:heartbeat:IPaddr2): Started controller-1
* ip-10.0.0.150 (ocf:heartbeat:IPaddr2): Started controller-2
* ip-172.17.1.151 (ocf:heartbeat:IPaddr2): Started controller-1
* ip-172.17.1.150 (ocf:heartbeat:IPaddr2): Started controller-1
* ip-172.17.3.150 (ocf:heartbeat:IPaddr2): Started controller-1
* ip-172.17.4.150 (ocf:heartbeat:IPaddr2): Started controller-2
* Container bundle set: haproxy-bundle [cluster.common.tag/rhosp16-openstack-haproxy:pcmklatest]:
* haproxy-bundle-podman-0 (ocf:heartbeat:podman): Started controller-2
* haproxy-bundle-podman-1 (ocf:heartbeat:podman): Started controller-0
* haproxy-bundle-podman-2 (ocf:heartbeat:podman): Started controller-1
* Container bundle set: ovn-dbs-bundle [cluster.common.tag/rhosp16-openstack-ovn-northd:pcmklatest]:
* ovn-dbs-bundle-0 (ocf:ovn:ovndb-servers): Stopped
* ovn-dbs-bundle-1 (ocf:ovn:ovndb-servers): Unpromoted controller-2
* ovn-dbs-bundle-2 (ocf:ovn:ovndb-servers): Unpromoted controller-1
* ip-172.17.1.87 (ocf:heartbeat:IPaddr2): Stopped
* stonith-fence_compute-fence-nova (stonith:fence_compute): Started database-1
* Clone Set: compute-unfence-trigger-clone [compute-unfence-trigger]:
* Started: [ compute-0 compute-1 ]
* Stopped: [ controller-0 controller-1 controller-2 database-0 database-1 database-2 messaging-0 messaging-1 messaging-2 ]
* nova-evacuate (ocf:openstack:NovaEvacuate): Started database-2
* stonith-fence_ipmilan-525400aa1373 (stonith:fence_ipmilan): Started messaging-0
* stonith-fence_ipmilan-525400dc23e0 (stonith:fence_ipmilan): Started messaging-2
* stonith-fence_ipmilan-52540040bb56 (stonith:fence_ipmilan): Started messaging-2
* stonith-fence_ipmilan-525400addd38 (stonith:fence_ipmilan): Started messaging-0
* stonith-fence_ipmilan-52540078fb07 (stonith:fence_ipmilan): Started database-0
* stonith-fence_ipmilan-525400ea59b0 (stonith:fence_ipmilan): Started database-1
* stonith-fence_ipmilan-525400066e50 (stonith:fence_ipmilan): Started database-2
* stonith-fence_ipmilan-525400e1534e (stonith:fence_ipmilan): Started database-1
* stonith-fence_ipmilan-52540060dbba (stonith:fence_ipmilan): Started database-2
* stonith-fence_ipmilan-525400e018b6 (stonith:fence_ipmilan): Started database-0
* stonith-fence_ipmilan-525400c87cdb (stonith:fence_ipmilan): Started messaging-0
* Container bundle: openstack-cinder-volume [cluster.common.tag/rhosp16-openstack-cinder-volume:pcmklatest]:
* openstack-cinder-volume-podman-0 (ocf:heartbeat:podman): Started controller-2
Transition Summary:
* Start rabbitmq-bundle-1 ( controller-0 ) due to unrunnable rabbitmq-bundle-podman-1 start (blocked)
* Start rabbitmq:1 ( rabbitmq-bundle-1 ) due to unrunnable rabbitmq-bundle-podman-1 start (blocked)
* Start ovn-dbs-bundle-podman-0 ( controller-0 )
* Start ovn-dbs-bundle-0 ( controller-0 )
* Start ovndb_servers:0 ( ovn-dbs-bundle-0 )
* Promote ovndb_servers:2 ( Unpromoted -> Promoted ovn-dbs-bundle-2 )
* Start ip-172.17.1.87 ( controller-1 )
* Move stonith-fence_ipmilan-52540040bb56 ( messaging-2 -> database-0 )
* Move stonith-fence_ipmilan-525400e1534e ( database-1 -> messaging-2 )
Executing Cluster Transition:
* Pseudo action: rabbitmq-bundle-clone_pre_notify_start_0
* Resource action: ovndb_servers cancel=30000 on ovn-dbs-bundle-2
* Pseudo action: ovn-dbs-bundle-master_pre_notify_start_0
* Cluster action: clear_failcount for stonith-fence_compute-fence-nova on messaging-0
* Cluster action: clear_failcount for nova-evacuate on messaging-0
* Cluster action: clear_failcount for stonith-fence_ipmilan-525400aa1373 on database-0
* Cluster action: clear_failcount for stonith-fence_ipmilan-525400dc23e0 on database-2
* Resource action: stonith-fence_ipmilan-52540040bb56 stop on messaging-2
* Cluster action: clear_failcount for stonith-fence_ipmilan-52540078fb07 on messaging-2
* Cluster action: clear_failcount for stonith-fence_ipmilan-525400ea59b0 on database-0
* Cluster action: clear_failcount for stonith-fence_ipmilan-525400066e50 on messaging-2
* Resource action: stonith-fence_ipmilan-525400e1534e stop on database-1
* Cluster action: clear_failcount for stonith-fence_ipmilan-525400e1534e on database-2
* Cluster action: clear_failcount for stonith-fence_ipmilan-52540060dbba on messaging-0
* Cluster action: clear_failcount for stonith-fence_ipmilan-525400e018b6 on database-0
* Cluster action: clear_failcount for stonith-fence_ipmilan-525400c87cdb on database-2
* Pseudo action: ovn-dbs-bundle_start_0
* Pseudo action: rabbitmq-bundle_start_0
* Pseudo action: rabbitmq-bundle-clone_confirmed-pre_notify_start_0
* Pseudo action: rabbitmq-bundle-clone_start_0
* Resource action: ovndb_servers notify on ovn-dbs-bundle-2
* Resource action: ovndb_servers notify on ovn-dbs-bundle-1
* Pseudo action: ovn-dbs-bundle-master_confirmed-pre_notify_start_0
* Pseudo action: ovn-dbs-bundle-master_start_0
* Resource action: ovn-dbs-bundle-podman-0 start on controller-0
* Resource action: ovn-dbs-bundle-0 start on controller-0
* Resource action: stonith-fence_ipmilan-52540040bb56 start on database-0
* Resource action: stonith-fence_ipmilan-525400e1534e start on messaging-2
* Pseudo action: rabbitmq-bundle-clone_running_0
* Resource action: ovndb_servers start on ovn-dbs-bundle-0
* Pseudo action: ovn-dbs-bundle-master_running_0
* Resource action: ovn-dbs-bundle-podman-0 monitor=60000 on controller-0
* Resource action: ovn-dbs-bundle-0 monitor=30000 on controller-0
* Resource action: stonith-fence_ipmilan-52540040bb56 monitor=60000 on database-0
* Resource action: stonith-fence_ipmilan-525400e1534e monitor=60000 on messaging-2
* Pseudo action: rabbitmq-bundle-clone_post_notify_running_0
* Pseudo action: ovn-dbs-bundle-master_post_notify_running_0
* Pseudo action: rabbitmq-bundle-clone_confirmed-post_notify_running_0
* Resource action: ovndb_servers notify on ovn-dbs-bundle-2
* Resource action: ovndb_servers notify on ovn-dbs-bundle-0
* Resource action: ovndb_servers notify on ovn-dbs-bundle-1
* Pseudo action: ovn-dbs-bundle-master_confirmed-post_notify_running_0
* Pseudo action: ovn-dbs-bundle_running_0
* Pseudo action: rabbitmq-bundle_running_0
* Pseudo action: ovn-dbs-bundle-master_pre_notify_promote_0
* Pseudo action: ovn-dbs-bundle_promote_0
* Resource action: ovndb_servers notify on ovn-dbs-bundle-2
* Resource action: ovndb_servers notify on ovn-dbs-bundle-0
* Resource action: ovndb_servers notify on ovn-dbs-bundle-1
* Pseudo action: ovn-dbs-bundle-master_confirmed-pre_notify_promote_0
* Pseudo action: ovn-dbs-bundle-master_promote_0
* Resource action: ip-172.17.1.87 start on controller-1
* Resource action: ovndb_servers promote on ovn-dbs-bundle-2
* Pseudo action: ovn-dbs-bundle-master_promoted_0
* Resource action: ip-172.17.1.87 monitor=10000 on controller-1
* Pseudo action: ovn-dbs-bundle-master_post_notify_promoted_0
* Resource action: ovndb_servers notify on ovn-dbs-bundle-2
* Resource action: ovndb_servers notify on ovn-dbs-bundle-0
* Resource action: ovndb_servers notify on ovn-dbs-bundle-1
* Pseudo action: ovn-dbs-bundle-master_confirmed-post_notify_promoted_0
* Pseudo action: ovn-dbs-bundle_promoted_0
* Resource action: ovndb_servers monitor=10000 on ovn-dbs-bundle-2
* Resource action: ovndb_servers monitor=30000 on ovn-dbs-bundle-0
+warning: compute-0 requires fencing but fencing is disabled
+warning: compute-1 requires fencing but fencing is disabled
+warning: galera-bundle requires fencing but fencing is disabled
+warning: galera-bundle-master requires fencing but fencing is disabled
+warning: galera:0 requires fencing but fencing is disabled
+warning: galera:1 requires fencing but fencing is disabled
+warning: galera:2 requires fencing but fencing is disabled
+warning: galera-bundle-podman-0 requires fencing but fencing is disabled
+warning: galera-bundle-0 requires fencing but fencing is disabled
+warning: galera-bundle-podman-1 requires fencing but fencing is disabled
+warning: galera-bundle-1 requires fencing but fencing is disabled
+warning: galera-bundle-podman-2 requires fencing but fencing is disabled
+warning: galera-bundle-2 requires fencing but fencing is disabled
+warning: rabbitmq-bundle requires fencing but fencing is disabled
+warning: rabbitmq-bundle-clone requires fencing but fencing is disabled
+warning: rabbitmq:0 requires fencing but fencing is disabled
+warning: rabbitmq:1 requires fencing but fencing is disabled
+warning: rabbitmq:2 requires fencing but fencing is disabled
+warning: rabbitmq-bundle-podman-0 requires fencing but fencing is disabled
+warning: rabbitmq-bundle-0 requires fencing but fencing is disabled
+warning: rabbitmq-bundle-podman-1 requires fencing but fencing is disabled
+warning: rabbitmq-bundle-1 requires fencing but fencing is disabled
+warning: rabbitmq-bundle-podman-2 requires fencing but fencing is disabled
+warning: rabbitmq-bundle-2 requires fencing but fencing is disabled
+warning: redis-bundle requires fencing but fencing is disabled
+warning: redis-bundle-master requires fencing but fencing is disabled
+warning: redis:0 requires fencing but fencing is disabled
+warning: redis:1 requires fencing but fencing is disabled
+warning: redis:2 requires fencing but fencing is disabled
+warning: redis-bundle-podman-0 requires fencing but fencing is disabled
+warning: redis-bundle-0 requires fencing but fencing is disabled
+warning: redis-bundle-podman-1 requires fencing but fencing is disabled
+warning: redis-bundle-1 requires fencing but fencing is disabled
+warning: redis-bundle-podman-2 requires fencing but fencing is disabled
+warning: redis-bundle-2 requires fencing but fencing is disabled
+warning: ip-192.168.24.150 requires fencing but fencing is disabled
+warning: ip-10.0.0.150 requires fencing but fencing is disabled
+warning: ip-172.17.1.151 requires fencing but fencing is disabled
+warning: ip-172.17.1.150 requires fencing but fencing is disabled
+warning: ip-172.17.3.150 requires fencing but fencing is disabled
+warning: ip-172.17.4.150 requires fencing but fencing is disabled
+warning: haproxy-bundle requires fencing but fencing is disabled
+warning: haproxy-bundle-podman-0 requires fencing but fencing is disabled
+warning: haproxy-bundle-podman-1 requires fencing but fencing is disabled
+warning: haproxy-bundle-podman-2 requires fencing but fencing is disabled
+warning: ovn-dbs-bundle requires fencing but fencing is disabled
+warning: ovn-dbs-bundle-master requires fencing but fencing is disabled
+warning: ovndb_servers:0 requires fencing but fencing is disabled
+warning: ovndb_servers:1 requires fencing but fencing is disabled
+warning: ovndb_servers:2 requires fencing but fencing is disabled
+warning: ovn-dbs-bundle-podman-0 requires fencing but fencing is disabled
+warning: ovn-dbs-bundle-0 requires fencing but fencing is disabled
+warning: ovn-dbs-bundle-podman-1 requires fencing but fencing is disabled
+warning: ovn-dbs-bundle-1 requires fencing but fencing is disabled
+warning: ovn-dbs-bundle-podman-2 requires fencing but fencing is disabled
+warning: ovn-dbs-bundle-2 requires fencing but fencing is disabled
+warning: ip-172.17.1.87 requires fencing but fencing is disabled
+warning: stonith-fence_compute-fence-nova requires fencing but fencing is disabled
+warning: compute-unfence-trigger-clone requires fencing but fencing is disabled
+warning: Resetting "requires" for compute-unfence-trigger:0 to "quorum" because fencing is disabled
+warning: Resetting "requires" for compute-unfence-trigger:1 to "quorum" because fencing is disabled
+warning: Resetting "requires" for compute-unfence-trigger:2 to "quorum" because fencing is disabled
+warning: Resetting "requires" for compute-unfence-trigger:3 to "quorum" because fencing is disabled
+warning: Resetting "requires" for compute-unfence-trigger:4 to "quorum" because fencing is disabled
+warning: Resetting "requires" for compute-unfence-trigger:5 to "quorum" because fencing is disabled
+warning: Resetting "requires" for compute-unfence-trigger:6 to "quorum" because fencing is disabled
+warning: Resetting "requires" for compute-unfence-trigger:7 to "quorum" because fencing is disabled
+warning: Resetting "requires" for compute-unfence-trigger:8 to "quorum" because fencing is disabled
+warning: Resetting "requires" for compute-unfence-trigger:9 to "quorum" because fencing is disabled
+warning: Resetting "requires" for compute-unfence-trigger:10 to "quorum" because fencing is disabled
+warning: Resetting "requires" for compute-unfence-trigger:11 to "quorum" because fencing is disabled
+warning: Resetting "requires" for compute-unfence-trigger:12 to "quorum" because fencing is disabled
+warning: Resetting "requires" for compute-unfence-trigger:13 to "quorum" because fencing is disabled
+warning: Resetting "requires" for compute-unfence-trigger:14 to "quorum" because fencing is disabled
+warning: Resetting "requires" for compute-unfence-trigger:15 to "quorum" because fencing is disabled
+warning: Resetting "requires" for compute-unfence-trigger:16 to "quorum" because fencing is disabled
+warning: Resetting "requires" for compute-unfence-trigger:17 to "quorum" because fencing is disabled
+warning: Resetting "requires" for compute-unfence-trigger:18 to "quorum" because fencing is disabled
+warning: Resetting "requires" for compute-unfence-trigger:19 to "quorum" because fencing is disabled
+warning: Resetting "requires" for compute-unfence-trigger:20 to "quorum" because fencing is disabled
+warning: Resetting "requires" for compute-unfence-trigger:21 to "quorum" because fencing is disabled
+warning: Resetting "requires" for compute-unfence-trigger:22 to "quorum" because fencing is disabled
+warning: nova-evacuate requires fencing but fencing is disabled
+warning: stonith-fence_ipmilan-525400aa1373 requires fencing but fencing is disabled
+warning: stonith-fence_ipmilan-525400dc23e0 requires fencing but fencing is disabled
+warning: stonith-fence_ipmilan-52540040bb56 requires fencing but fencing is disabled
+warning: stonith-fence_ipmilan-525400addd38 requires fencing but fencing is disabled
+warning: stonith-fence_ipmilan-52540078fb07 requires fencing but fencing is disabled
+warning: stonith-fence_ipmilan-525400ea59b0 requires fencing but fencing is disabled
+warning: stonith-fence_ipmilan-525400066e50 requires fencing but fencing is disabled
+warning: stonith-fence_ipmilan-525400e1534e requires fencing but fencing is disabled
+warning: stonith-fence_ipmilan-52540060dbba requires fencing but fencing is disabled
+warning: stonith-fence_ipmilan-525400e018b6 requires fencing but fencing is disabled
+warning: stonith-fence_ipmilan-525400c87cdb requires fencing but fencing is disabled
+warning: openstack-cinder-volume requires fencing but fencing is disabled
+warning: openstack-cinder-volume-podman-0 requires fencing but fencing is disabled
Using the original execution date of: 2021-02-15 01:40:51Z
Revised Cluster Status:
* Node List:
* Online: [ controller-0 controller-1 controller-2 database-0 database-1 database-2 messaging-0 messaging-2 ]
* OFFLINE: [ messaging-1 ]
* RemoteOnline: [ compute-0 compute-1 ]
* GuestOnline: [ galera-bundle-0 galera-bundle-1 galera-bundle-2 ovn-dbs-bundle-0 ovn-dbs-bundle-1 ovn-dbs-bundle-2 rabbitmq-bundle-0 rabbitmq-bundle-2 redis-bundle-0 redis-bundle-1 redis-bundle-2 ]
* Full List of Resources:
* compute-0 (ocf:pacemaker:remote): Started controller-1
* compute-1 (ocf:pacemaker:remote): Started controller-2
* Container bundle set: galera-bundle [cluster.common.tag/rhosp16-openstack-mariadb:pcmklatest]:
* galera-bundle-0 (ocf:heartbeat:galera): Promoted database-0
* galera-bundle-1 (ocf:heartbeat:galera): Promoted database-1
* galera-bundle-2 (ocf:heartbeat:galera): Promoted database-2
* Container bundle set: rabbitmq-bundle [cluster.common.tag/rhosp16-openstack-rabbitmq:pcmklatest]:
* rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Started messaging-0
* rabbitmq-bundle-1 (ocf:heartbeat:rabbitmq-cluster): Stopped
* rabbitmq-bundle-2 (ocf:heartbeat:rabbitmq-cluster): Started messaging-2
* Container bundle set: redis-bundle [cluster.common.tag/rhosp16-openstack-redis:pcmklatest]:
* redis-bundle-0 (ocf:heartbeat:redis): Promoted controller-2
* redis-bundle-1 (ocf:heartbeat:redis): Unpromoted controller-0
* redis-bundle-2 (ocf:heartbeat:redis): Unpromoted controller-1
* ip-192.168.24.150 (ocf:heartbeat:IPaddr2): Started controller-1
* ip-10.0.0.150 (ocf:heartbeat:IPaddr2): Started controller-2
* ip-172.17.1.151 (ocf:heartbeat:IPaddr2): Started controller-1
* ip-172.17.1.150 (ocf:heartbeat:IPaddr2): Started controller-1
* ip-172.17.3.150 (ocf:heartbeat:IPaddr2): Started controller-1
* ip-172.17.4.150 (ocf:heartbeat:IPaddr2): Started controller-2
* Container bundle set: haproxy-bundle [cluster.common.tag/rhosp16-openstack-haproxy:pcmklatest]:
* haproxy-bundle-podman-0 (ocf:heartbeat:podman): Started controller-2
* haproxy-bundle-podman-1 (ocf:heartbeat:podman): Started controller-0
* haproxy-bundle-podman-2 (ocf:heartbeat:podman): Started controller-1
* Container bundle set: ovn-dbs-bundle [cluster.common.tag/rhosp16-openstack-ovn-northd:pcmklatest]:
* ovn-dbs-bundle-0 (ocf:ovn:ovndb-servers): Unpromoted controller-0
* ovn-dbs-bundle-1 (ocf:ovn:ovndb-servers): Unpromoted controller-2
* ovn-dbs-bundle-2 (ocf:ovn:ovndb-servers): Promoted controller-1
* ip-172.17.1.87 (ocf:heartbeat:IPaddr2): Started controller-1
* stonith-fence_compute-fence-nova (stonith:fence_compute): Started database-1
* Clone Set: compute-unfence-trigger-clone [compute-unfence-trigger]:
* Started: [ compute-0 compute-1 ]
* Stopped: [ controller-0 controller-1 controller-2 database-0 database-1 database-2 messaging-0 messaging-1 messaging-2 ]
* nova-evacuate (ocf:openstack:NovaEvacuate): Started database-2
* stonith-fence_ipmilan-525400aa1373 (stonith:fence_ipmilan): Started messaging-0
* stonith-fence_ipmilan-525400dc23e0 (stonith:fence_ipmilan): Started messaging-2
* stonith-fence_ipmilan-52540040bb56 (stonith:fence_ipmilan): Started database-0
* stonith-fence_ipmilan-525400addd38 (stonith:fence_ipmilan): Started messaging-0
* stonith-fence_ipmilan-52540078fb07 (stonith:fence_ipmilan): Started database-0
* stonith-fence_ipmilan-525400ea59b0 (stonith:fence_ipmilan): Started database-1
* stonith-fence_ipmilan-525400066e50 (stonith:fence_ipmilan): Started database-2
* stonith-fence_ipmilan-525400e1534e (stonith:fence_ipmilan): Started messaging-2
* stonith-fence_ipmilan-52540060dbba (stonith:fence_ipmilan): Started database-2
* stonith-fence_ipmilan-525400e018b6 (stonith:fence_ipmilan): Started database-0
* stonith-fence_ipmilan-525400c87cdb (stonith:fence_ipmilan): Started messaging-0
* Container bundle: openstack-cinder-volume [cluster.common.tag/rhosp16-openstack-cinder-volume:pcmklatest]:
* openstack-cinder-volume-podman-0 (ocf:heartbeat:podman): Started controller-2
diff --git a/cts/scheduler/summary/clone-anon-failcount.summary b/cts/scheduler/summary/clone-anon-failcount.summary
index 8d4f369e3e..2b39b0b687 100644
--- a/cts/scheduler/summary/clone-anon-failcount.summary
+++ b/cts/scheduler/summary/clone-anon-failcount.summary
@@ -1,119 +1,124 @@
Current cluster status:
* Node List:
* Online: [ srv01 srv02 srv03 srv04 ]
* Full List of Resources:
* Resource Group: UMgroup01:
* UmVIPcheck (ocf:pacemaker:Dummy): Started srv01
* UmIPaddr (ocf:pacemaker:Dummy): Started srv01
* UmDummy01 (ocf:pacemaker:Dummy): Started srv01
* UmDummy02 (ocf:pacemaker:Dummy): Started srv01
* Resource Group: OVDBgroup02-1:
* prmExPostgreSQLDB1 (ocf:pacemaker:Dummy): Started srv01
* Resource Group: OVDBgroup02-2:
* prmExPostgreSQLDB2 (ocf:pacemaker:Dummy): Started srv02
* Resource Group: OVDBgroup02-3:
* prmExPostgreSQLDB3 (ocf:pacemaker:Dummy): Started srv03
* Resource Group: grpStonith1:
* prmStonithN1 (stonith:external/ssh): Started srv04
* Resource Group: grpStonith2:
* prmStonithN2 (stonith:external/ssh): Started srv01
* Resource Group: grpStonith3:
* prmStonithN3 (stonith:external/ssh): Started srv02
* Resource Group: grpStonith4:
* prmStonithN4 (stonith:external/ssh): Started srv03
* Clone Set: clnUMgroup01 [clnUmResource]:
* Resource Group: clnUmResource:0:
* clnUMdummy01 (ocf:pacemaker:Dummy): FAILED srv04
* clnUMdummy02 (ocf:pacemaker:Dummy): Started srv04
* Started: [ srv01 ]
* Stopped: [ srv02 srv03 ]
* Clone Set: clnPingd [clnPrmPingd]:
* Started: [ srv01 srv02 srv03 srv04 ]
* Clone Set: clnDiskd1 [clnPrmDiskd1]:
* Started: [ srv01 srv02 srv03 srv04 ]
* Clone Set: clnG3dummy1 [clnG3dummy01]:
* Started: [ srv01 srv02 srv03 srv04 ]
* Clone Set: clnG3dummy2 [clnG3dummy02]:
* Started: [ srv01 srv02 srv03 srv04 ]
+error: Resetting 'on-fail' for UmDummy01 stop action to default value because 'stop' is not allowed for stop
+error: Resetting 'on-fail' for clnG3dummy02:0 stop action to default value because 'stop' is not allowed for stop
+error: Resetting 'on-fail' for clnG3dummy02:1 stop action to default value because 'stop' is not allowed for stop
+error: Resetting 'on-fail' for clnG3dummy02:2 stop action to default value because 'stop' is not allowed for stop
+error: Resetting 'on-fail' for clnG3dummy02:3 stop action to default value because 'stop' is not allowed for stop
Transition Summary:
* Move UmVIPcheck ( srv01 -> srv04 )
* Move UmIPaddr ( srv01 -> srv04 )
* Move UmDummy01 ( srv01 -> srv04 )
* Move UmDummy02 ( srv01 -> srv04 )
* Recover clnUMdummy01:0 ( srv04 )
* Restart clnUMdummy02:0 ( srv04 ) due to required clnUMdummy01:0 start
* Stop clnUMdummy01:1 ( srv01 ) due to node availability
* Stop clnUMdummy02:1 ( srv01 ) due to node availability
Executing Cluster Transition:
* Pseudo action: UMgroup01_stop_0
* Resource action: UmDummy02 stop on srv01
* Resource action: UmDummy01 stop on srv01
* Resource action: UmIPaddr stop on srv01
* Resource action: UmVIPcheck stop on srv01
* Pseudo action: UMgroup01_stopped_0
* Pseudo action: clnUMgroup01_stop_0
* Pseudo action: clnUmResource:0_stop_0
* Resource action: clnUMdummy02:1 stop on srv04
* Pseudo action: clnUmResource:1_stop_0
* Resource action: clnUMdummy02:0 stop on srv01
* Resource action: clnUMdummy01:1 stop on srv04
* Resource action: clnUMdummy01:0 stop on srv01
* Pseudo action: clnUmResource:0_stopped_0
* Pseudo action: clnUmResource:1_stopped_0
* Pseudo action: clnUMgroup01_stopped_0
* Pseudo action: clnUMgroup01_start_0
* Pseudo action: clnUmResource:0_start_0
* Resource action: clnUMdummy01:1 start on srv04
* Resource action: clnUMdummy01:1 monitor=10000 on srv04
* Resource action: clnUMdummy02:1 start on srv04
* Resource action: clnUMdummy02:1 monitor=10000 on srv04
* Pseudo action: clnUmResource:0_running_0
* Pseudo action: clnUMgroup01_running_0
* Pseudo action: UMgroup01_start_0
* Resource action: UmVIPcheck start on srv04
* Resource action: UmIPaddr start on srv04
* Resource action: UmDummy01 start on srv04
* Resource action: UmDummy02 start on srv04
* Pseudo action: UMgroup01_running_0
* Resource action: UmIPaddr monitor=10000 on srv04
* Resource action: UmDummy01 monitor=10000 on srv04
* Resource action: UmDummy02 monitor=10000 on srv04
Revised Cluster Status:
* Node List:
* Online: [ srv01 srv02 srv03 srv04 ]
* Full List of Resources:
* Resource Group: UMgroup01:
* UmVIPcheck (ocf:pacemaker:Dummy): Started srv04
* UmIPaddr (ocf:pacemaker:Dummy): Started srv04
* UmDummy01 (ocf:pacemaker:Dummy): Started srv04
* UmDummy02 (ocf:pacemaker:Dummy): Started srv04
* Resource Group: OVDBgroup02-1:
* prmExPostgreSQLDB1 (ocf:pacemaker:Dummy): Started srv01
* Resource Group: OVDBgroup02-2:
* prmExPostgreSQLDB2 (ocf:pacemaker:Dummy): Started srv02
* Resource Group: OVDBgroup02-3:
* prmExPostgreSQLDB3 (ocf:pacemaker:Dummy): Started srv03
* Resource Group: grpStonith1:
* prmStonithN1 (stonith:external/ssh): Started srv04
* Resource Group: grpStonith2:
* prmStonithN2 (stonith:external/ssh): Started srv01
* Resource Group: grpStonith3:
* prmStonithN3 (stonith:external/ssh): Started srv02
* Resource Group: grpStonith4:
* prmStonithN4 (stonith:external/ssh): Started srv03
* Clone Set: clnUMgroup01 [clnUmResource]:
* Started: [ srv04 ]
* Stopped: [ srv01 srv02 srv03 ]
* Clone Set: clnPingd [clnPrmPingd]:
* Started: [ srv01 srv02 srv03 srv04 ]
* Clone Set: clnDiskd1 [clnPrmDiskd1]:
* Started: [ srv01 srv02 srv03 srv04 ]
* Clone Set: clnG3dummy1 [clnG3dummy01]:
* Started: [ srv01 srv02 srv03 srv04 ]
* Clone Set: clnG3dummy2 [clnG3dummy02]:
* Started: [ srv01 srv02 srv03 srv04 ]
diff --git a/cts/scheduler/summary/clone-anon-probe-1.summary b/cts/scheduler/summary/clone-anon-probe-1.summary
index 51cf914a00..5539042553 100644
--- a/cts/scheduler/summary/clone-anon-probe-1.summary
+++ b/cts/scheduler/summary/clone-anon-probe-1.summary
@@ -1,27 +1,33 @@
+warning: Support for setting meta-attributes (such as target-role) to the explicit value '#default' is deprecated and will be removed in a future release
+warning: Support for setting meta-attributes (such as target-role) to the explicit value '#default' is deprecated and will be removed in a future release
+warning: Support for setting meta-attributes (such as target-role) to the explicit value '#default' is deprecated and will be removed in a future release
Current cluster status:
* Node List:
* Online: [ mysql-01 mysql-02 ]
* Full List of Resources:
* Clone Set: ms-drbd0 [drbd0]:
* Stopped: [ mysql-01 mysql-02 ]
Transition Summary:
* Start drbd0:0 ( mysql-01 )
* Start drbd0:1 ( mysql-02 )
Executing Cluster Transition:
* Resource action: drbd0:0 monitor on mysql-01
* Resource action: drbd0:1 monitor on mysql-02
* Pseudo action: ms-drbd0_start_0
* Resource action: drbd0:0 start on mysql-01
* Resource action: drbd0:1 start on mysql-02
* Pseudo action: ms-drbd0_running_0
+warning: Support for setting meta-attributes (such as target-role) to the explicit value '#default' is deprecated and will be removed in a future release
+warning: Support for setting meta-attributes (such as target-role) to the explicit value '#default' is deprecated and will be removed in a future release
+warning: Support for setting meta-attributes (such as target-role) to the explicit value '#default' is deprecated and will be removed in a future release
Revised Cluster Status:
* Node List:
* Online: [ mysql-01 mysql-02 ]
* Full List of Resources:
* Clone Set: ms-drbd0 [drbd0]:
* Started: [ mysql-01 mysql-02 ]
diff --git a/cts/scheduler/summary/clone-anon-probe-2.summary b/cts/scheduler/summary/clone-anon-probe-2.summary
index 79a2fb8785..aa37f7a828 100644
--- a/cts/scheduler/summary/clone-anon-probe-2.summary
+++ b/cts/scheduler/summary/clone-anon-probe-2.summary
@@ -1,24 +1,30 @@
+warning: Support for setting meta-attributes (such as target-role) to the explicit value '#default' is deprecated and will be removed in a future release
+warning: Support for setting meta-attributes (such as target-role) to the explicit value '#default' is deprecated and will be removed in a future release
+warning: Support for setting meta-attributes (such as target-role) to the explicit value '#default' is deprecated and will be removed in a future release
Current cluster status:
* Node List:
* Online: [ mysql-01 mysql-02 ]
* Full List of Resources:
* Clone Set: ms-drbd0 [drbd0]:
* Started: [ mysql-02 ]
* Stopped: [ mysql-01 ]
Transition Summary:
* Start drbd0:1 ( mysql-01 )
Executing Cluster Transition:
* Pseudo action: ms-drbd0_start_0
* Resource action: drbd0:1 start on mysql-01
* Pseudo action: ms-drbd0_running_0
+warning: Support for setting meta-attributes (such as target-role) to the explicit value '#default' is deprecated and will be removed in a future release
+warning: Support for setting meta-attributes (such as target-role) to the explicit value '#default' is deprecated and will be removed in a future release
+warning: Support for setting meta-attributes (such as target-role) to the explicit value '#default' is deprecated and will be removed in a future release
Revised Cluster Status:
* Node List:
* Online: [ mysql-01 mysql-02 ]
* Full List of Resources:
* Clone Set: ms-drbd0 [drbd0]:
* Started: [ mysql-01 mysql-02 ]
diff --git a/cts/scheduler/summary/clone-require-all-1.summary b/cts/scheduler/summary/clone-require-all-1.summary
index 7037eb8caa..cf4274b2fb 100644
--- a/cts/scheduler/summary/clone-require-all-1.summary
+++ b/cts/scheduler/summary/clone-require-all-1.summary
@@ -1,36 +1,37 @@
Current cluster status:
* Node List:
* Online: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ]
* Full List of Resources:
* shooter (stonith:fence_xvm): Started rhel7-auto1
* Clone Set: A-clone [A]:
* Started: [ rhel7-auto1 rhel7-auto2 ]
* Stopped: [ rhel7-auto3 rhel7-auto4 ]
* Clone Set: B-clone [B]:
* Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ]
+warning: Support for require-all in ordering constraints is deprecated and will be removed in a future release (use clone-min clone meta-attribute instead)
Transition Summary:
* Start B:0 ( rhel7-auto3 )
* Start B:1 ( rhel7-auto4 )
Executing Cluster Transition:
* Pseudo action: B-clone_start_0
* Resource action: B start on rhel7-auto3
* Resource action: B start on rhel7-auto4
* Pseudo action: B-clone_running_0
* Resource action: B monitor=10000 on rhel7-auto3
* Resource action: B monitor=10000 on rhel7-auto4
Revised Cluster Status:
* Node List:
* Online: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ]
* Full List of Resources:
* shooter (stonith:fence_xvm): Started rhel7-auto1
* Clone Set: A-clone [A]:
* Started: [ rhel7-auto1 rhel7-auto2 ]
* Stopped: [ rhel7-auto3 rhel7-auto4 ]
* Clone Set: B-clone [B]:
* Started: [ rhel7-auto3 rhel7-auto4 ]
* Stopped: [ rhel7-auto1 rhel7-auto2 ]
diff --git a/cts/scheduler/summary/clone-require-all-2.summary b/cts/scheduler/summary/clone-require-all-2.summary
index 72d6f243f6..676810d22d 100644
--- a/cts/scheduler/summary/clone-require-all-2.summary
+++ b/cts/scheduler/summary/clone-require-all-2.summary
@@ -1,42 +1,43 @@
Current cluster status:
* Node List:
* Node rhel7-auto1: standby (with active resources)
* Node rhel7-auto2: standby (with active resources)
* Online: [ rhel7-auto3 rhel7-auto4 ]
* Full List of Resources:
* shooter (stonith:fence_xvm): Started rhel7-auto1
* Clone Set: A-clone [A]:
* Started: [ rhel7-auto1 rhel7-auto2 ]
* Stopped: [ rhel7-auto3 rhel7-auto4 ]
* Clone Set: B-clone [B]:
* Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ]
+warning: Support for require-all in ordering constraints is deprecated and will be removed in a future release (use clone-min clone meta-attribute instead)
Transition Summary:
* Move shooter ( rhel7-auto1 -> rhel7-auto3 )
* Stop A:0 ( rhel7-auto1 ) due to node availability
* Stop A:1 ( rhel7-auto2 ) due to node availability
* Start B:0 ( rhel7-auto4 ) due to unrunnable clone-one-or-more:order-A-clone-B-clone-mandatory (blocked)
* Start B:1 ( rhel7-auto3 ) due to unrunnable clone-one-or-more:order-A-clone-B-clone-mandatory (blocked)
Executing Cluster Transition:
* Resource action: shooter stop on rhel7-auto1
* Pseudo action: A-clone_stop_0
* Resource action: shooter start on rhel7-auto3
* Resource action: A stop on rhel7-auto1
* Resource action: A stop on rhel7-auto2
* Pseudo action: A-clone_stopped_0
* Resource action: shooter monitor=60000 on rhel7-auto3
Revised Cluster Status:
* Node List:
* Node rhel7-auto1: standby
* Node rhel7-auto2: standby
* Online: [ rhel7-auto3 rhel7-auto4 ]
* Full List of Resources:
* shooter (stonith:fence_xvm): Started rhel7-auto3
* Clone Set: A-clone [A]:
* Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ]
* Clone Set: B-clone [B]:
* Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ]
diff --git a/cts/scheduler/summary/clone-require-all-3.summary b/cts/scheduler/summary/clone-require-all-3.summary
index b828bffce2..485595407a 100644
--- a/cts/scheduler/summary/clone-require-all-3.summary
+++ b/cts/scheduler/summary/clone-require-all-3.summary
@@ -1,47 +1,48 @@
Current cluster status:
* Node List:
* Node rhel7-auto1: standby (with active resources)
* Node rhel7-auto2: standby (with active resources)
* Online: [ rhel7-auto3 rhel7-auto4 ]
* Full List of Resources:
* shooter (stonith:fence_xvm): Started rhel7-auto1
* Clone Set: A-clone [A]:
* Started: [ rhel7-auto1 rhel7-auto2 ]
* Stopped: [ rhel7-auto3 rhel7-auto4 ]
* Clone Set: B-clone [B]:
* Started: [ rhel7-auto3 rhel7-auto4 ]
* Stopped: [ rhel7-auto1 rhel7-auto2 ]
+warning: Support for require-all in ordering constraints is deprecated and will be removed in a future release (use clone-min clone meta-attribute instead)
Transition Summary:
* Move shooter ( rhel7-auto1 -> rhel7-auto3 )
* Stop A:0 ( rhel7-auto1 ) due to node availability
* Stop A:1 ( rhel7-auto2 ) due to node availability
* Stop B:0 ( rhel7-auto3 ) due to unrunnable clone-one-or-more:order-A-clone-B-clone-mandatory
* Stop B:1 ( rhel7-auto4 ) due to unrunnable clone-one-or-more:order-A-clone-B-clone-mandatory
Executing Cluster Transition:
* Resource action: shooter stop on rhel7-auto1
* Pseudo action: B-clone_stop_0
* Resource action: shooter start on rhel7-auto3
* Resource action: B stop on rhel7-auto3
* Resource action: B stop on rhel7-auto4
* Pseudo action: B-clone_stopped_0
* Resource action: shooter monitor=60000 on rhel7-auto3
* Pseudo action: A-clone_stop_0
* Resource action: A stop on rhel7-auto1
* Resource action: A stop on rhel7-auto2
* Pseudo action: A-clone_stopped_0
Revised Cluster Status:
* Node List:
* Node rhel7-auto1: standby
* Node rhel7-auto2: standby
* Online: [ rhel7-auto3 rhel7-auto4 ]
* Full List of Resources:
* shooter (stonith:fence_xvm): Started rhel7-auto3
* Clone Set: A-clone [A]:
* Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ]
* Clone Set: B-clone [B]:
* Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ]
diff --git a/cts/scheduler/summary/clone-require-all-4.summary b/cts/scheduler/summary/clone-require-all-4.summary
index ebd7b6bb46..2632aebbec 100644
--- a/cts/scheduler/summary/clone-require-all-4.summary
+++ b/cts/scheduler/summary/clone-require-all-4.summary
@@ -1,41 +1,42 @@
Current cluster status:
* Node List:
* Node rhel7-auto1: standby (with active resources)
* Online: [ rhel7-auto2 rhel7-auto3 rhel7-auto4 ]
* Full List of Resources:
* shooter (stonith:fence_xvm): Started rhel7-auto1
* Clone Set: A-clone [A]:
* Started: [ rhel7-auto1 rhel7-auto2 ]
* Stopped: [ rhel7-auto3 rhel7-auto4 ]
* Clone Set: B-clone [B]:
* Started: [ rhel7-auto3 rhel7-auto4 ]
* Stopped: [ rhel7-auto1 rhel7-auto2 ]
+warning: Support for require-all in ordering constraints is deprecated and will be removed in a future release (use clone-min clone meta-attribute instead)
Transition Summary:
* Move shooter ( rhel7-auto1 -> rhel7-auto2 )
* Stop A:0 ( rhel7-auto1 ) due to node availability
Executing Cluster Transition:
* Resource action: shooter stop on rhel7-auto1
* Pseudo action: A-clone_stop_0
* Resource action: shooter start on rhel7-auto2
* Resource action: A stop on rhel7-auto1
* Pseudo action: A-clone_stopped_0
* Pseudo action: A-clone_start_0
* Resource action: shooter monitor=60000 on rhel7-auto2
* Pseudo action: A-clone_running_0
Revised Cluster Status:
* Node List:
* Node rhel7-auto1: standby
* Online: [ rhel7-auto2 rhel7-auto3 rhel7-auto4 ]
* Full List of Resources:
* shooter (stonith:fence_xvm): Started rhel7-auto2
* Clone Set: A-clone [A]:
* Started: [ rhel7-auto2 ]
* Stopped: [ rhel7-auto1 rhel7-auto3 rhel7-auto4 ]
* Clone Set: B-clone [B]:
* Started: [ rhel7-auto3 rhel7-auto4 ]
* Stopped: [ rhel7-auto1 rhel7-auto2 ]
diff --git a/cts/scheduler/summary/clone-require-all-5.summary b/cts/scheduler/summary/clone-require-all-5.summary
index b47049e883..cae968b1eb 100644
--- a/cts/scheduler/summary/clone-require-all-5.summary
+++ b/cts/scheduler/summary/clone-require-all-5.summary
@@ -1,45 +1,46 @@
Current cluster status:
* Node List:
* Online: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ]
* Full List of Resources:
* shooter (stonith:fence_xvm): Started rhel7-auto1
* Clone Set: A-clone [A]:
* Started: [ rhel7-auto1 rhel7-auto2 ]
* Stopped: [ rhel7-auto3 rhel7-auto4 ]
* Clone Set: B-clone [B]:
* Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ]
+warning: Support for require-all in ordering constraints is deprecated and will be removed in a future release (use clone-min clone meta-attribute instead)
Transition Summary:
* Start A:2 ( rhel7-auto3 )
* Start B:0 ( rhel7-auto4 )
* Start B:1 ( rhel7-auto3 )
* Start B:2 ( rhel7-auto1 )
Executing Cluster Transition:
* Pseudo action: A-clone_start_0
* Resource action: A start on rhel7-auto3
* Pseudo action: A-clone_running_0
* Pseudo action: clone-one-or-more:order-A-clone-B-clone-mandatory
* Resource action: A monitor=10000 on rhel7-auto3
* Pseudo action: B-clone_start_0
* Resource action: B start on rhel7-auto4
* Resource action: B start on rhel7-auto3
* Resource action: B start on rhel7-auto1
* Pseudo action: B-clone_running_0
* Resource action: B monitor=10000 on rhel7-auto4
* Resource action: B monitor=10000 on rhel7-auto3
* Resource action: B monitor=10000 on rhel7-auto1
Revised Cluster Status:
* Node List:
* Online: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ]
* Full List of Resources:
* shooter (stonith:fence_xvm): Started rhel7-auto1
* Clone Set: A-clone [A]:
* Started: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 ]
* Stopped: [ rhel7-auto4 ]
* Clone Set: B-clone [B]:
* Started: [ rhel7-auto1 rhel7-auto3 rhel7-auto4 ]
* Stopped: [ rhel7-auto2 ]
diff --git a/cts/scheduler/summary/clone-require-all-6.summary b/cts/scheduler/summary/clone-require-all-6.summary
index 5bae20c728..ef1a99b2d3 100644
--- a/cts/scheduler/summary/clone-require-all-6.summary
+++ b/cts/scheduler/summary/clone-require-all-6.summary
@@ -1,37 +1,38 @@
Current cluster status:
* Node List:
* Online: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ]
* Full List of Resources:
* shooter (stonith:fence_xvm): Started rhel7-auto1
* Clone Set: A-clone [A]:
* Started: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 ]
* Stopped: [ rhel7-auto4 ]
* Clone Set: B-clone [B]:
* Started: [ rhel7-auto1 rhel7-auto3 rhel7-auto4 ]
* Stopped: [ rhel7-auto2 ]
+warning: Support for require-all in ordering constraints is deprecated and will be removed in a future release (use clone-min clone meta-attribute instead)
Transition Summary:
* Stop A:0 ( rhel7-auto1 ) due to node availability
* Stop A:2 ( rhel7-auto3 ) due to node availability
Executing Cluster Transition:
* Pseudo action: A-clone_stop_0
* Resource action: A stop on rhel7-auto1
* Resource action: A stop on rhel7-auto3
* Pseudo action: A-clone_stopped_0
* Pseudo action: A-clone_start_0
* Pseudo action: A-clone_running_0
Revised Cluster Status:
* Node List:
* Online: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ]
* Full List of Resources:
* shooter (stonith:fence_xvm): Started rhel7-auto1
* Clone Set: A-clone [A]:
* Started: [ rhel7-auto2 ]
* Stopped: [ rhel7-auto1 rhel7-auto3 rhel7-auto4 ]
* Clone Set: B-clone [B]:
* Started: [ rhel7-auto1 rhel7-auto3 rhel7-auto4 ]
* Stopped: [ rhel7-auto2 ]
diff --git a/cts/scheduler/summary/clone-require-all-7.summary b/cts/scheduler/summary/clone-require-all-7.summary
index f0f2820c26..ac4af30a84 100644
--- a/cts/scheduler/summary/clone-require-all-7.summary
+++ b/cts/scheduler/summary/clone-require-all-7.summary
@@ -1,48 +1,49 @@
Current cluster status:
* Node List:
* Online: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ]
* Full List of Resources:
* shooter (stonith:fence_xvm): Started rhel7-auto1
* Clone Set: A-clone [A]:
* Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ]
* Clone Set: B-clone [B]:
* Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ]
+warning: Support for require-all in ordering constraints is deprecated and will be removed in a future release (use clone-min clone meta-attribute instead)
Transition Summary:
* Start A:0 ( rhel7-auto2 )
* Start A:1 ( rhel7-auto1 )
* Start B:0 ( rhel7-auto3 )
* Start B:1 ( rhel7-auto4 )
Executing Cluster Transition:
* Resource action: A:0 monitor on rhel7-auto4
* Resource action: A:0 monitor on rhel7-auto3
* Resource action: A:0 monitor on rhel7-auto2
* Resource action: A:1 monitor on rhel7-auto1
* Pseudo action: A-clone_start_0
* Resource action: A:0 start on rhel7-auto2
* Resource action: A:1 start on rhel7-auto1
* Pseudo action: A-clone_running_0
* Pseudo action: clone-one-or-more:order-A-clone-B-clone-mandatory
* Resource action: A:0 monitor=10000 on rhel7-auto2
* Resource action: A:1 monitor=10000 on rhel7-auto1
* Pseudo action: B-clone_start_0
* Resource action: B start on rhel7-auto3
* Resource action: B start on rhel7-auto4
* Pseudo action: B-clone_running_0
* Resource action: B monitor=10000 on rhel7-auto3
* Resource action: B monitor=10000 on rhel7-auto4
Revised Cluster Status:
* Node List:
* Online: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ]
* Full List of Resources:
* shooter (stonith:fence_xvm): Started rhel7-auto1
* Clone Set: A-clone [A]:
* Started: [ rhel7-auto1 rhel7-auto2 ]
* Stopped: [ rhel7-auto3 rhel7-auto4 ]
* Clone Set: B-clone [B]:
* Started: [ rhel7-auto3 rhel7-auto4 ]
* Stopped: [ rhel7-auto1 rhel7-auto2 ]
diff --git a/cts/scheduler/summary/clone-require-all-no-interleave-1.summary b/cts/scheduler/summary/clone-require-all-no-interleave-1.summary
index 646bfa3ef5..50da4cc216 100644
--- a/cts/scheduler/summary/clone-require-all-no-interleave-1.summary
+++ b/cts/scheduler/summary/clone-require-all-no-interleave-1.summary
@@ -1,56 +1,57 @@
Current cluster status:
* Node List:
* Node rhel7-auto4: standby
* Online: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 ]
* Full List of Resources:
* shooter (stonith:fence_xvm): Started rhel7-auto1
* Clone Set: A-clone [A]:
* Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ]
* Clone Set: B-clone [B]:
* Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ]
* Clone Set: C-clone [C]:
* Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ]
+warning: Support for require-all in ordering constraints is deprecated and will be removed in a future release (use clone-min clone meta-attribute instead)
Transition Summary:
* Start A:0 ( rhel7-auto3 )
* Start B:0 ( rhel7-auto3 )
* Start C:0 ( rhel7-auto2 )
* Start C:1 ( rhel7-auto1 )
* Start C:2 ( rhel7-auto3 )
Executing Cluster Transition:
* Pseudo action: A-clone_start_0
* Resource action: A start on rhel7-auto3
* Pseudo action: A-clone_running_0
* Pseudo action: B-clone_start_0
* Resource action: A monitor=10000 on rhel7-auto3
* Resource action: B start on rhel7-auto3
* Pseudo action: B-clone_running_0
* Pseudo action: clone-one-or-more:order-B-clone-C-clone-mandatory
* Resource action: B monitor=10000 on rhel7-auto3
* Pseudo action: C-clone_start_0
* Resource action: C start on rhel7-auto2
* Resource action: C start on rhel7-auto1
* Resource action: C start on rhel7-auto3
* Pseudo action: C-clone_running_0
* Resource action: C monitor=10000 on rhel7-auto2
* Resource action: C monitor=10000 on rhel7-auto1
* Resource action: C monitor=10000 on rhel7-auto3
Revised Cluster Status:
* Node List:
* Node rhel7-auto4: standby
* Online: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 ]
* Full List of Resources:
* shooter (stonith:fence_xvm): Started rhel7-auto1
* Clone Set: A-clone [A]:
* Started: [ rhel7-auto3 ]
* Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto4 ]
* Clone Set: B-clone [B]:
* Started: [ rhel7-auto3 ]
* Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto4 ]
* Clone Set: C-clone [C]:
* Started: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 ]
* Stopped: [ rhel7-auto4 ]
diff --git a/cts/scheduler/summary/clone-require-all-no-interleave-2.summary b/cts/scheduler/summary/clone-require-all-no-interleave-2.summary
index e40230cb52..bbd012cec2 100644
--- a/cts/scheduler/summary/clone-require-all-no-interleave-2.summary
+++ b/cts/scheduler/summary/clone-require-all-no-interleave-2.summary
@@ -1,56 +1,57 @@
Current cluster status:
* Node List:
* Node rhel7-auto3: standby
* Online: [ rhel7-auto1 rhel7-auto2 rhel7-auto4 ]
* Full List of Resources:
* shooter (stonith:fence_xvm): Started rhel7-auto1
* Clone Set: A-clone [A]:
* Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ]
* Clone Set: B-clone [B]:
* Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ]
* Clone Set: C-clone [C]:
* Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ]
+warning: Support for require-all in ordering constraints is deprecated and will be removed in a future release (use clone-min clone meta-attribute instead)
Transition Summary:
* Start A:0 ( rhel7-auto4 )
* Start B:0 ( rhel7-auto4 )
* Start C:0 ( rhel7-auto2 )
* Start C:1 ( rhel7-auto1 )
* Start C:2 ( rhel7-auto4 )
Executing Cluster Transition:
* Pseudo action: A-clone_start_0
* Resource action: A start on rhel7-auto4
* Pseudo action: A-clone_running_0
* Pseudo action: B-clone_start_0
* Resource action: A monitor=10000 on rhel7-auto4
* Resource action: B start on rhel7-auto4
* Pseudo action: B-clone_running_0
* Pseudo action: clone-one-or-more:order-B-clone-C-clone-mandatory
* Resource action: B monitor=10000 on rhel7-auto4
* Pseudo action: C-clone_start_0
* Resource action: C start on rhel7-auto2
* Resource action: C start on rhel7-auto1
* Resource action: C start on rhel7-auto4
* Pseudo action: C-clone_running_0
* Resource action: C monitor=10000 on rhel7-auto2
* Resource action: C monitor=10000 on rhel7-auto1
* Resource action: C monitor=10000 on rhel7-auto4
Revised Cluster Status:
* Node List:
* Node rhel7-auto3: standby
* Online: [ rhel7-auto1 rhel7-auto2 rhel7-auto4 ]
* Full List of Resources:
* shooter (stonith:fence_xvm): Started rhel7-auto1
* Clone Set: A-clone [A]:
* Started: [ rhel7-auto4 ]
* Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 ]
* Clone Set: B-clone [B]:
* Started: [ rhel7-auto4 ]
* Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 ]
* Clone Set: C-clone [C]:
* Started: [ rhel7-auto1 rhel7-auto2 rhel7-auto4 ]
* Stopped: [ rhel7-auto3 ]
diff --git a/cts/scheduler/summary/clone-require-all-no-interleave-3.summary b/cts/scheduler/summary/clone-require-all-no-interleave-3.summary
index a22bf455b6..85a03a0b37 100644
--- a/cts/scheduler/summary/clone-require-all-no-interleave-3.summary
+++ b/cts/scheduler/summary/clone-require-all-no-interleave-3.summary
@@ -1,62 +1,63 @@
Current cluster status:
* Node List:
* Node rhel7-auto4: standby (with active resources)
* Online: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 ]
* Full List of Resources:
* shooter (stonith:fence_xvm): Started rhel7-auto1
* Clone Set: A-clone [A]:
* Started: [ rhel7-auto4 ]
* Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 ]
* Clone Set: B-clone [B]:
* Started: [ rhel7-auto4 ]
* Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 ]
* Clone Set: C-clone [C]:
* Started: [ rhel7-auto1 rhel7-auto2 rhel7-auto4 ]
* Stopped: [ rhel7-auto3 ]
+warning: Support for require-all in ordering constraints is deprecated and will be removed in a future release (use clone-min clone meta-attribute instead)
Transition Summary:
* Move A:0 ( rhel7-auto4 -> rhel7-auto3 )
* Move B:0 ( rhel7-auto4 -> rhel7-auto3 )
* Move C:0 ( rhel7-auto4 -> rhel7-auto3 )
Executing Cluster Transition:
* Pseudo action: C-clone_stop_0
* Resource action: C stop on rhel7-auto4
* Pseudo action: C-clone_stopped_0
* Pseudo action: B-clone_stop_0
* Resource action: B stop on rhel7-auto4
* Pseudo action: B-clone_stopped_0
* Pseudo action: A-clone_stop_0
* Resource action: A stop on rhel7-auto4
* Pseudo action: A-clone_stopped_0
* Pseudo action: A-clone_start_0
* Resource action: A start on rhel7-auto3
* Pseudo action: A-clone_running_0
* Pseudo action: B-clone_start_0
* Resource action: A monitor=10000 on rhel7-auto3
* Resource action: B start on rhel7-auto3
* Pseudo action: B-clone_running_0
* Pseudo action: clone-one-or-more:order-B-clone-C-clone-mandatory
* Resource action: B monitor=10000 on rhel7-auto3
* Pseudo action: C-clone_start_0
* Resource action: C start on rhel7-auto3
* Pseudo action: C-clone_running_0
* Resource action: C monitor=10000 on rhel7-auto3
Revised Cluster Status:
* Node List:
* Node rhel7-auto4: standby
* Online: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 ]
* Full List of Resources:
* shooter (stonith:fence_xvm): Started rhel7-auto1
* Clone Set: A-clone [A]:
* Started: [ rhel7-auto3 ]
* Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto4 ]
* Clone Set: B-clone [B]:
* Started: [ rhel7-auto3 ]
* Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto4 ]
* Clone Set: C-clone [C]:
* Started: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 ]
* Stopped: [ rhel7-auto4 ]
diff --git a/cts/scheduler/summary/coloc-clone-stays-active.summary b/cts/scheduler/summary/coloc-clone-stays-active.summary
index cb212e1cde..9e35a5d13a 100644
--- a/cts/scheduler/summary/coloc-clone-stays-active.summary
+++ b/cts/scheduler/summary/coloc-clone-stays-active.summary
@@ -1,209 +1,210 @@
+warning: Support for the 'ordered' group meta-attribute is deprecated and will be removed in a future release (use a resource set instead)
9 of 87 resource instances DISABLED and 0 BLOCKED from further action due to failure
Current cluster status:
* Node List:
* Online: [ s01-0 s01-1 ]
* Full List of Resources:
* stonith-s01-0 (stonith:external/ipmi): Started s01-1
* stonith-s01-1 (stonith:external/ipmi): Started s01-0
* Resource Group: iscsi-pool-0-target-all:
* iscsi-pool-0-target (ocf:vds-ok:iSCSITarget): Started s01-0
* iscsi-pool-0-lun-1 (ocf:vds-ok:iSCSILogicalUnit): Started s01-0
* Resource Group: iscsi-pool-0-vips:
* vip-235 (ocf:heartbeat:IPaddr2): Started s01-0
* vip-236 (ocf:heartbeat:IPaddr2): Started s01-0
* Resource Group: iscsi-pool-1-target-all:
* iscsi-pool-1-target (ocf:vds-ok:iSCSITarget): Started s01-1
* iscsi-pool-1-lun-1 (ocf:vds-ok:iSCSILogicalUnit): Started s01-1
* Resource Group: iscsi-pool-1-vips:
* vip-237 (ocf:heartbeat:IPaddr2): Started s01-1
* vip-238 (ocf:heartbeat:IPaddr2): Started s01-1
* Clone Set: ms-drbd-pool-0 [drbd-pool-0] (promotable):
* Promoted: [ s01-0 ]
* Unpromoted: [ s01-1 ]
* Clone Set: ms-drbd-pool-1 [drbd-pool-1] (promotable):
* Promoted: [ s01-1 ]
* Unpromoted: [ s01-0 ]
* Clone Set: ms-iscsi-pool-0-vips-fw [iscsi-pool-0-vips-fw] (promotable):
* Promoted: [ s01-0 ]
* Unpromoted: [ s01-1 ]
* Clone Set: ms-iscsi-pool-1-vips-fw [iscsi-pool-1-vips-fw] (promotable):
* Promoted: [ s01-1 ]
* Unpromoted: [ s01-0 ]
* Clone Set: cl-o2cb [o2cb] (disabled):
* Stopped (disabled): [ s01-0 s01-1 ]
* Clone Set: ms-drbd-s01-service [drbd-s01-service] (promotable):
* Promoted: [ s01-0 s01-1 ]
* Clone Set: cl-s01-service-fs [s01-service-fs]:
* Started: [ s01-0 s01-1 ]
* Clone Set: cl-ietd [ietd]:
* Started: [ s01-0 s01-1 ]
* Clone Set: cl-dhcpd [dhcpd] (disabled):
* Stopped (disabled): [ s01-0 s01-1 ]
* Resource Group: http-server:
* vip-233 (ocf:heartbeat:IPaddr2): Started s01-0
* nginx (lsb:nginx): Stopped (disabled)
* Clone Set: ms-drbd-s01-logs [drbd-s01-logs] (promotable):
* Promoted: [ s01-0 s01-1 ]
* Clone Set: cl-s01-logs-fs [s01-logs-fs]:
* Started: [ s01-0 s01-1 ]
* Resource Group: syslog-server:
* vip-234 (ocf:heartbeat:IPaddr2): Started s01-1
* syslog-ng (ocf:heartbeat:syslog-ng): Started s01-1
* Resource Group: tftp-server:
* vip-232 (ocf:heartbeat:IPaddr2): Stopped
* tftpd (ocf:heartbeat:Xinetd): Stopped
* Clone Set: cl-xinetd [xinetd]:
* Started: [ s01-0 s01-1 ]
* Clone Set: cl-ospf-routing [ospf-routing]:
* Started: [ s01-0 s01-1 ]
* Clone Set: connected-outer [ping-bmc-and-switch]:
* Started: [ s01-0 s01-1 ]
* Resource Group: iscsi-vds-dom0-stateless-0-target-all (disabled):
* iscsi-vds-dom0-stateless-0-target (ocf:vds-ok:iSCSITarget): Stopped (disabled)
* iscsi-vds-dom0-stateless-0-lun-1 (ocf:vds-ok:iSCSILogicalUnit): Stopped (disabled)
* Resource Group: iscsi-vds-dom0-stateless-0-vips:
* vip-227 (ocf:heartbeat:IPaddr2): Stopped
* vip-228 (ocf:heartbeat:IPaddr2): Stopped
* Clone Set: ms-drbd-vds-dom0-stateless-0 [drbd-vds-dom0-stateless-0] (promotable):
* Promoted: [ s01-0 ]
* Unpromoted: [ s01-1 ]
* Clone Set: ms-iscsi-vds-dom0-stateless-0-vips-fw [iscsi-vds-dom0-stateless-0-vips-fw] (promotable):
* Unpromoted: [ s01-0 s01-1 ]
* Clone Set: cl-dlm [dlm]:
* Started: [ s01-0 s01-1 ]
* Clone Set: ms-drbd-vds-tftpboot [drbd-vds-tftpboot] (promotable):
* Promoted: [ s01-0 s01-1 ]
* Clone Set: cl-vds-tftpboot-fs [vds-tftpboot-fs] (disabled):
* Stopped (disabled): [ s01-0 s01-1 ]
* Clone Set: cl-gfs2 [gfs2]:
* Started: [ s01-0 s01-1 ]
* Clone Set: ms-drbd-vds-http [drbd-vds-http] (promotable):
* Promoted: [ s01-0 s01-1 ]
* Clone Set: cl-vds-http-fs [vds-http-fs]:
* Started: [ s01-0 s01-1 ]
* Clone Set: cl-clvmd [clvmd]:
* Started: [ s01-0 s01-1 ]
* Clone Set: ms-drbd-s01-vm-data [drbd-s01-vm-data] (promotable):
* Promoted: [ s01-0 s01-1 ]
* Clone Set: cl-s01-vm-data-metadata-fs [s01-vm-data-metadata-fs]:
* Started: [ s01-0 s01-1 ]
* Clone Set: cl-vg-s01-vm-data [vg-s01-vm-data]:
* Started: [ s01-0 s01-1 ]
* mgmt-vm (ocf:vds-ok:VirtualDomain): Started s01-0
* Clone Set: cl-drbdlinks-s01-service [drbdlinks-s01-service]:
* Started: [ s01-0 s01-1 ]
* Clone Set: cl-libvirtd [libvirtd]:
* Started: [ s01-0 s01-1 ]
* Clone Set: cl-s01-vm-data-storage-pool [s01-vm-data-storage-pool]:
* Started: [ s01-0 s01-1 ]
Transition Summary:
* Migrate mgmt-vm ( s01-0 -> s01-1 )
Executing Cluster Transition:
* Resource action: mgmt-vm migrate_to on s01-0
* Resource action: mgmt-vm migrate_from on s01-1
* Resource action: mgmt-vm stop on s01-0
* Pseudo action: mgmt-vm_start_0
* Resource action: mgmt-vm monitor=10000 on s01-1
Revised Cluster Status:
* Node List:
* Online: [ s01-0 s01-1 ]
* Full List of Resources:
* stonith-s01-0 (stonith:external/ipmi): Started s01-1
* stonith-s01-1 (stonith:external/ipmi): Started s01-0
* Resource Group: iscsi-pool-0-target-all:
* iscsi-pool-0-target (ocf:vds-ok:iSCSITarget): Started s01-0
* iscsi-pool-0-lun-1 (ocf:vds-ok:iSCSILogicalUnit): Started s01-0
* Resource Group: iscsi-pool-0-vips:
* vip-235 (ocf:heartbeat:IPaddr2): Started s01-0
* vip-236 (ocf:heartbeat:IPaddr2): Started s01-0
* Resource Group: iscsi-pool-1-target-all:
* iscsi-pool-1-target (ocf:vds-ok:iSCSITarget): Started s01-1
* iscsi-pool-1-lun-1 (ocf:vds-ok:iSCSILogicalUnit): Started s01-1
* Resource Group: iscsi-pool-1-vips:
* vip-237 (ocf:heartbeat:IPaddr2): Started s01-1
* vip-238 (ocf:heartbeat:IPaddr2): Started s01-1
* Clone Set: ms-drbd-pool-0 [drbd-pool-0] (promotable):
* Promoted: [ s01-0 ]
* Unpromoted: [ s01-1 ]
* Clone Set: ms-drbd-pool-1 [drbd-pool-1] (promotable):
* Promoted: [ s01-1 ]
* Unpromoted: [ s01-0 ]
* Clone Set: ms-iscsi-pool-0-vips-fw [iscsi-pool-0-vips-fw] (promotable):
* Promoted: [ s01-0 ]
* Unpromoted: [ s01-1 ]
* Clone Set: ms-iscsi-pool-1-vips-fw [iscsi-pool-1-vips-fw] (promotable):
* Promoted: [ s01-1 ]
* Unpromoted: [ s01-0 ]
* Clone Set: cl-o2cb [o2cb] (disabled):
* Stopped (disabled): [ s01-0 s01-1 ]
* Clone Set: ms-drbd-s01-service [drbd-s01-service] (promotable):
* Promoted: [ s01-0 s01-1 ]
* Clone Set: cl-s01-service-fs [s01-service-fs]:
* Started: [ s01-0 s01-1 ]
* Clone Set: cl-ietd [ietd]:
* Started: [ s01-0 s01-1 ]
* Clone Set: cl-dhcpd [dhcpd] (disabled):
* Stopped (disabled): [ s01-0 s01-1 ]
* Resource Group: http-server:
* vip-233 (ocf:heartbeat:IPaddr2): Started s01-0
* nginx (lsb:nginx): Stopped (disabled)
* Clone Set: ms-drbd-s01-logs [drbd-s01-logs] (promotable):
* Promoted: [ s01-0 s01-1 ]
* Clone Set: cl-s01-logs-fs [s01-logs-fs]:
* Started: [ s01-0 s01-1 ]
* Resource Group: syslog-server:
* vip-234 (ocf:heartbeat:IPaddr2): Started s01-1
* syslog-ng (ocf:heartbeat:syslog-ng): Started s01-1
* Resource Group: tftp-server:
* vip-232 (ocf:heartbeat:IPaddr2): Stopped
* tftpd (ocf:heartbeat:Xinetd): Stopped
* Clone Set: cl-xinetd [xinetd]:
* Started: [ s01-0 s01-1 ]
* Clone Set: cl-ospf-routing [ospf-routing]:
* Started: [ s01-0 s01-1 ]
* Clone Set: connected-outer [ping-bmc-and-switch]:
* Started: [ s01-0 s01-1 ]
* Resource Group: iscsi-vds-dom0-stateless-0-target-all (disabled):
* iscsi-vds-dom0-stateless-0-target (ocf:vds-ok:iSCSITarget): Stopped (disabled)
* iscsi-vds-dom0-stateless-0-lun-1 (ocf:vds-ok:iSCSILogicalUnit): Stopped (disabled)
* Resource Group: iscsi-vds-dom0-stateless-0-vips:
* vip-227 (ocf:heartbeat:IPaddr2): Stopped
* vip-228 (ocf:heartbeat:IPaddr2): Stopped
* Clone Set: ms-drbd-vds-dom0-stateless-0 [drbd-vds-dom0-stateless-0] (promotable):
* Promoted: [ s01-0 ]
* Unpromoted: [ s01-1 ]
* Clone Set: ms-iscsi-vds-dom0-stateless-0-vips-fw [iscsi-vds-dom0-stateless-0-vips-fw] (promotable):
* Unpromoted: [ s01-0 s01-1 ]
* Clone Set: cl-dlm [dlm]:
* Started: [ s01-0 s01-1 ]
* Clone Set: ms-drbd-vds-tftpboot [drbd-vds-tftpboot] (promotable):
* Promoted: [ s01-0 s01-1 ]
* Clone Set: cl-vds-tftpboot-fs [vds-tftpboot-fs] (disabled):
* Stopped (disabled): [ s01-0 s01-1 ]
* Clone Set: cl-gfs2 [gfs2]:
* Started: [ s01-0 s01-1 ]
* Clone Set: ms-drbd-vds-http [drbd-vds-http] (promotable):
* Promoted: [ s01-0 s01-1 ]
* Clone Set: cl-vds-http-fs [vds-http-fs]:
* Started: [ s01-0 s01-1 ]
* Clone Set: cl-clvmd [clvmd]:
* Started: [ s01-0 s01-1 ]
* Clone Set: ms-drbd-s01-vm-data [drbd-s01-vm-data] (promotable):
* Promoted: [ s01-0 s01-1 ]
* Clone Set: cl-s01-vm-data-metadata-fs [s01-vm-data-metadata-fs]:
* Started: [ s01-0 s01-1 ]
* Clone Set: cl-vg-s01-vm-data [vg-s01-vm-data]:
* Started: [ s01-0 s01-1 ]
* mgmt-vm (ocf:vds-ok:VirtualDomain): Started s01-1
* Clone Set: cl-drbdlinks-s01-service [drbdlinks-s01-service]:
* Started: [ s01-0 s01-1 ]
* Clone Set: cl-libvirtd [libvirtd]:
* Started: [ s01-0 s01-1 ]
* Clone Set: cl-s01-vm-data-storage-pool [s01-vm-data-storage-pool]:
* Started: [ s01-0 s01-1 ]
diff --git a/cts/scheduler/summary/colocate-primitive-with-clone.summary b/cts/scheduler/summary/colocate-primitive-with-clone.summary
index e884428ee4..881ac31fb2 100644
--- a/cts/scheduler/summary/colocate-primitive-with-clone.summary
+++ b/cts/scheduler/summary/colocate-primitive-with-clone.summary
@@ -1,127 +1,130 @@
Current cluster status:
* Node List:
* Online: [ srv01 srv02 srv03 srv04 ]
* Full List of Resources:
* Resource Group: UMgroup01:
* UmVIPcheck (ocf:heartbeat:Dummy): Stopped
* UmIPaddr (ocf:heartbeat:Dummy): Stopped
* UmDummy01 (ocf:heartbeat:Dummy): Stopped
* UmDummy02 (ocf:heartbeat:Dummy): Stopped
* Resource Group: OVDBgroup02-1:
* prmExPostgreSQLDB1 (ocf:heartbeat:Dummy): Started srv04
* prmFsPostgreSQLDB1-1 (ocf:heartbeat:Dummy): Started srv04
* prmFsPostgreSQLDB1-2 (ocf:heartbeat:Dummy): Started srv04
* prmFsPostgreSQLDB1-3 (ocf:heartbeat:Dummy): Started srv04
* prmIpPostgreSQLDB1 (ocf:heartbeat:Dummy): Started srv04
* prmApPostgreSQLDB1 (ocf:heartbeat:Dummy): Started srv04
* Resource Group: OVDBgroup02-2:
* prmExPostgreSQLDB2 (ocf:heartbeat:Dummy): Started srv02
* prmFsPostgreSQLDB2-1 (ocf:heartbeat:Dummy): Started srv02
* prmFsPostgreSQLDB2-2 (ocf:heartbeat:Dummy): Started srv02
* prmFsPostgreSQLDB2-3 (ocf:heartbeat:Dummy): Started srv02
* prmIpPostgreSQLDB2 (ocf:heartbeat:Dummy): Started srv02
* prmApPostgreSQLDB2 (ocf:heartbeat:Dummy): Started srv02
* Resource Group: OVDBgroup02-3:
* prmExPostgreSQLDB3 (ocf:heartbeat:Dummy): Started srv03
* prmFsPostgreSQLDB3-1 (ocf:heartbeat:Dummy): Started srv03
* prmFsPostgreSQLDB3-2 (ocf:heartbeat:Dummy): Started srv03
* prmFsPostgreSQLDB3-3 (ocf:heartbeat:Dummy): Started srv03
* prmIpPostgreSQLDB3 (ocf:heartbeat:Dummy): Started srv03
* prmApPostgreSQLDB3 (ocf:heartbeat:Dummy): Started srv03
* Resource Group: grpStonith1:
* prmStonithN1 (stonith:external/ssh): Started srv04
* Resource Group: grpStonith2:
* prmStonithN2 (stonith:external/ssh): Started srv03
* Resource Group: grpStonith3:
* prmStonithN3 (stonith:external/ssh): Started srv02
* Resource Group: grpStonith4:
* prmStonithN4 (stonith:external/ssh): Started srv03
* Clone Set: clnUMgroup01 [clnUmResource]:
* Started: [ srv04 ]
* Stopped: [ srv01 srv02 srv03 ]
* Clone Set: clnPingd [clnPrmPingd]:
* Started: [ srv02 srv03 srv04 ]
* Stopped: [ srv01 ]
* Clone Set: clnDiskd1 [clnPrmDiskd1]:
* Started: [ srv02 srv03 srv04 ]
* Stopped: [ srv01 ]
* Clone Set: clnG3dummy1 [clnG3dummy01]:
* Started: [ srv02 srv03 srv04 ]
* Stopped: [ srv01 ]
* Clone Set: clnG3dummy2 [clnG3dummy02]:
* Started: [ srv02 srv03 srv04 ]
* Stopped: [ srv01 ]
+error: Resetting 'on-fail' for clnG3dummy02:0 stop action to default value because 'stop' is not allowed for stop
+error: Resetting 'on-fail' for clnG3dummy02:1 stop action to default value because 'stop' is not allowed for stop
+error: Resetting 'on-fail' for clnG3dummy02:2 stop action to default value because 'stop' is not allowed for stop
Transition Summary:
* Start UmVIPcheck ( srv04 )
* Start UmIPaddr ( srv04 )
* Start UmDummy01 ( srv04 )
* Start UmDummy02 ( srv04 )
Executing Cluster Transition:
* Pseudo action: UMgroup01_start_0
* Resource action: UmVIPcheck start on srv04
* Resource action: UmIPaddr start on srv04
* Resource action: UmDummy01 start on srv04
* Resource action: UmDummy02 start on srv04
* Cluster action: do_shutdown on srv01
* Pseudo action: UMgroup01_running_0
* Resource action: UmIPaddr monitor=10000 on srv04
* Resource action: UmDummy01 monitor=10000 on srv04
* Resource action: UmDummy02 monitor=10000 on srv04
Revised Cluster Status:
* Node List:
* Online: [ srv01 srv02 srv03 srv04 ]
* Full List of Resources:
* Resource Group: UMgroup01:
* UmVIPcheck (ocf:heartbeat:Dummy): Started srv04
* UmIPaddr (ocf:heartbeat:Dummy): Started srv04
* UmDummy01 (ocf:heartbeat:Dummy): Started srv04
* UmDummy02 (ocf:heartbeat:Dummy): Started srv04
* Resource Group: OVDBgroup02-1:
* prmExPostgreSQLDB1 (ocf:heartbeat:Dummy): Started srv04
* prmFsPostgreSQLDB1-1 (ocf:heartbeat:Dummy): Started srv04
* prmFsPostgreSQLDB1-2 (ocf:heartbeat:Dummy): Started srv04
* prmFsPostgreSQLDB1-3 (ocf:heartbeat:Dummy): Started srv04
* prmIpPostgreSQLDB1 (ocf:heartbeat:Dummy): Started srv04
* prmApPostgreSQLDB1 (ocf:heartbeat:Dummy): Started srv04
* Resource Group: OVDBgroup02-2:
* prmExPostgreSQLDB2 (ocf:heartbeat:Dummy): Started srv02
* prmFsPostgreSQLDB2-1 (ocf:heartbeat:Dummy): Started srv02
* prmFsPostgreSQLDB2-2 (ocf:heartbeat:Dummy): Started srv02
* prmFsPostgreSQLDB2-3 (ocf:heartbeat:Dummy): Started srv02
* prmIpPostgreSQLDB2 (ocf:heartbeat:Dummy): Started srv02
* prmApPostgreSQLDB2 (ocf:heartbeat:Dummy): Started srv02
* Resource Group: OVDBgroup02-3:
* prmExPostgreSQLDB3 (ocf:heartbeat:Dummy): Started srv03
* prmFsPostgreSQLDB3-1 (ocf:heartbeat:Dummy): Started srv03
* prmFsPostgreSQLDB3-2 (ocf:heartbeat:Dummy): Started srv03
* prmFsPostgreSQLDB3-3 (ocf:heartbeat:Dummy): Started srv03
* prmIpPostgreSQLDB3 (ocf:heartbeat:Dummy): Started srv03
* prmApPostgreSQLDB3 (ocf:heartbeat:Dummy): Started srv03
* Resource Group: grpStonith1:
* prmStonithN1 (stonith:external/ssh): Started srv04
* Resource Group: grpStonith2:
* prmStonithN2 (stonith:external/ssh): Started srv03
* Resource Group: grpStonith3:
* prmStonithN3 (stonith:external/ssh): Started srv02
* Resource Group: grpStonith4:
* prmStonithN4 (stonith:external/ssh): Started srv03
* Clone Set: clnUMgroup01 [clnUmResource]:
* Started: [ srv04 ]
* Stopped: [ srv01 srv02 srv03 ]
* Clone Set: clnPingd [clnPrmPingd]:
* Started: [ srv02 srv03 srv04 ]
* Stopped: [ srv01 ]
* Clone Set: clnDiskd1 [clnPrmDiskd1]:
* Started: [ srv02 srv03 srv04 ]
* Stopped: [ srv01 ]
* Clone Set: clnG3dummy1 [clnG3dummy01]:
* Started: [ srv02 srv03 srv04 ]
* Stopped: [ srv01 ]
* Clone Set: clnG3dummy2 [clnG3dummy02]:
* Started: [ srv02 srv03 srv04 ]
* Stopped: [ srv01 ]
diff --git a/cts/scheduler/summary/colocation-influence.summary b/cts/scheduler/summary/colocation-influence.summary
index e240003d92..2cd66b670d 100644
--- a/cts/scheduler/summary/colocation-influence.summary
+++ b/cts/scheduler/summary/colocation-influence.summary
@@ -1,170 +1,171 @@
Current cluster status:
* Node List:
* Online: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]
* GuestOnline: [ bundle10-0 bundle10-1 bundle11-0 ]
* Full List of Resources:
* Fencing (stonith:fence_xvm): Started rhel7-1
* rsc1a (ocf:pacemaker:Dummy): Started rhel7-2
* rsc1b (ocf:pacemaker:Dummy): Started rhel7-2
* rsc2a (ocf:pacemaker:Dummy): Started rhel7-4
* rsc2b (ocf:pacemaker:Dummy): Started rhel7-4
* rsc3a (ocf:pacemaker:Dummy): Stopped
* rsc3b (ocf:pacemaker:Dummy): Stopped
* rsc4a (ocf:pacemaker:Dummy): Started rhel7-3
* rsc4b (ocf:pacemaker:Dummy): Started rhel7-3
* rsc5a (ocf:pacemaker:Dummy): Started rhel7-1
* Resource Group: group5a:
* rsc5a1 (ocf:pacemaker:Dummy): Started rhel7-1
* rsc5a2 (ocf:pacemaker:Dummy): Started rhel7-1
* Resource Group: group6a:
* rsc6a1 (ocf:pacemaker:Dummy): Started rhel7-2
* rsc6a2 (ocf:pacemaker:Dummy): Started rhel7-2
* rsc6a (ocf:pacemaker:Dummy): Started rhel7-2
* Resource Group: group7a:
* rsc7a1 (ocf:pacemaker:Dummy): Started rhel7-3
* rsc7a2 (ocf:pacemaker:Dummy): Started rhel7-3
* Clone Set: rsc8a-clone [rsc8a]:
* Started: [ rhel7-1 rhel7-3 rhel7-4 ]
* Clone Set: rsc8b-clone [rsc8b]:
* Started: [ rhel7-1 rhel7-3 rhel7-4 ]
* rsc9a (ocf:pacemaker:Dummy): Started rhel7-4
* rsc9b (ocf:pacemaker:Dummy): Started rhel7-4
* rsc9c (ocf:pacemaker:Dummy): Started rhel7-4
* rsc10a (ocf:pacemaker:Dummy): Started rhel7-2
* rsc11a (ocf:pacemaker:Dummy): Started rhel7-1
* rsc12a (ocf:pacemaker:Dummy): Started rhel7-1
* rsc12b (ocf:pacemaker:Dummy): Started rhel7-1
* rsc12c (ocf:pacemaker:Dummy): Started rhel7-1
* Container bundle set: bundle10 [pcmktest:http]:
* bundle10-0 (192.168.122.131) (ocf:heartbeat:apache): Started rhel7-2
* bundle10-1 (192.168.122.132) (ocf:heartbeat:apache): Started rhel7-3
* Container bundle set: bundle11 [pcmktest:http]:
* bundle11-0 (192.168.122.134) (ocf:pacemaker:Dummy): Started rhel7-1
* bundle11-1 (192.168.122.135) (ocf:pacemaker:Dummy): Stopped
* rsc13a (ocf:pacemaker:Dummy): Started rhel7-3
* Clone Set: rsc13b-clone [rsc13b] (promotable):
* Promoted: [ rhel7-3 ]
* Unpromoted: [ rhel7-1 rhel7-2 rhel7-4 ]
* Stopped: [ rhel7-5 ]
* rsc14b (ocf:pacemaker:Dummy): Started rhel7-4
* Clone Set: rsc14a-clone [rsc14a] (promotable):
* Promoted: [ rhel7-4 ]
* Unpromoted: [ rhel7-1 rhel7-2 rhel7-3 ]
* Stopped: [ rhel7-5 ]
+error: Constraint 'colocation-rsc1a-rsc1b-INFINITY' has invalid value for influence (using default)
Transition Summary:
* Move rsc1a ( rhel7-2 -> rhel7-3 )
* Move rsc1b ( rhel7-2 -> rhel7-3 )
* Stop rsc2a ( rhel7-4 ) due to node availability
* Start rsc3a ( rhel7-2 )
* Start rsc3b ( rhel7-2 )
* Stop rsc4a ( rhel7-3 ) due to node availability
* Stop rsc5a ( rhel7-1 ) due to node availability
* Stop rsc6a1 ( rhel7-2 ) due to node availability
* Stop rsc6a2 ( rhel7-2 ) due to node availability
* Stop rsc7a2 ( rhel7-3 ) due to node availability
* Stop rsc8a:1 ( rhel7-4 ) due to node availability
* Stop rsc9c ( rhel7-4 ) due to node availability
* Move rsc10a ( rhel7-2 -> rhel7-3 )
* Stop rsc12b ( rhel7-1 ) due to node availability
* Start bundle11-1 ( rhel7-5 ) due to unrunnable bundle11-docker-1 start (blocked)
* Start bundle11a:1 ( bundle11-1 ) due to unrunnable bundle11-docker-1 start (blocked)
* Stop rsc13a ( rhel7-3 ) due to node availability
* Stop rsc14a:1 ( Promoted rhel7-4 ) due to node availability
Executing Cluster Transition:
* Resource action: rsc1a stop on rhel7-2
* Resource action: rsc1b stop on rhel7-2
* Resource action: rsc2a stop on rhel7-4
* Resource action: rsc3a start on rhel7-2
* Resource action: rsc3b start on rhel7-2
* Resource action: rsc4a stop on rhel7-3
* Resource action: rsc5a stop on rhel7-1
* Pseudo action: group6a_stop_0
* Resource action: rsc6a2 stop on rhel7-2
* Pseudo action: group7a_stop_0
* Resource action: rsc7a2 stop on rhel7-3
* Pseudo action: rsc8a-clone_stop_0
* Resource action: rsc9c stop on rhel7-4
* Resource action: rsc10a stop on rhel7-2
* Resource action: rsc12b stop on rhel7-1
* Resource action: rsc13a stop on rhel7-3
* Pseudo action: rsc14a-clone_demote_0
* Pseudo action: bundle11_start_0
* Resource action: rsc1a start on rhel7-3
* Resource action: rsc1b start on rhel7-3
* Resource action: rsc3a monitor=10000 on rhel7-2
* Resource action: rsc3b monitor=10000 on rhel7-2
* Resource action: rsc6a1 stop on rhel7-2
* Pseudo action: group7a_stopped_0
* Resource action: rsc8a stop on rhel7-4
* Pseudo action: rsc8a-clone_stopped_0
* Resource action: rsc10a start on rhel7-3
* Pseudo action: bundle11-clone_start_0
* Resource action: rsc14a demote on rhel7-4
* Pseudo action: rsc14a-clone_demoted_0
* Pseudo action: rsc14a-clone_stop_0
* Resource action: rsc1a monitor=10000 on rhel7-3
* Resource action: rsc1b monitor=10000 on rhel7-3
* Pseudo action: group6a_stopped_0
* Resource action: rsc10a monitor=10000 on rhel7-3
* Pseudo action: bundle11-clone_running_0
* Resource action: rsc14a stop on rhel7-4
* Pseudo action: rsc14a-clone_stopped_0
* Pseudo action: bundle11_running_0
Revised Cluster Status:
* Node List:
* Online: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]
* GuestOnline: [ bundle10-0 bundle10-1 bundle11-0 ]
* Full List of Resources:
* Fencing (stonith:fence_xvm): Started rhel7-1
* rsc1a (ocf:pacemaker:Dummy): Started rhel7-3
* rsc1b (ocf:pacemaker:Dummy): Started rhel7-3
* rsc2a (ocf:pacemaker:Dummy): Stopped
* rsc2b (ocf:pacemaker:Dummy): Started rhel7-4
* rsc3a (ocf:pacemaker:Dummy): Started rhel7-2
* rsc3b (ocf:pacemaker:Dummy): Started rhel7-2
* rsc4a (ocf:pacemaker:Dummy): Stopped
* rsc4b (ocf:pacemaker:Dummy): Started rhel7-3
* rsc5a (ocf:pacemaker:Dummy): Stopped
* Resource Group: group5a:
* rsc5a1 (ocf:pacemaker:Dummy): Started rhel7-1
* rsc5a2 (ocf:pacemaker:Dummy): Started rhel7-1
* Resource Group: group6a:
* rsc6a1 (ocf:pacemaker:Dummy): Stopped
* rsc6a2 (ocf:pacemaker:Dummy): Stopped
* rsc6a (ocf:pacemaker:Dummy): Started rhel7-2
* Resource Group: group7a:
* rsc7a1 (ocf:pacemaker:Dummy): Started rhel7-3
* rsc7a2 (ocf:pacemaker:Dummy): Stopped
* Clone Set: rsc8a-clone [rsc8a]:
* Started: [ rhel7-1 rhel7-3 ]
* Stopped: [ rhel7-2 rhel7-4 rhel7-5 ]
* Clone Set: rsc8b-clone [rsc8b]:
* Started: [ rhel7-1 rhel7-3 rhel7-4 ]
* rsc9a (ocf:pacemaker:Dummy): Started rhel7-4
* rsc9b (ocf:pacemaker:Dummy): Started rhel7-4
* rsc9c (ocf:pacemaker:Dummy): Stopped
* rsc10a (ocf:pacemaker:Dummy): Started rhel7-3
* rsc11a (ocf:pacemaker:Dummy): Started rhel7-1
* rsc12a (ocf:pacemaker:Dummy): Started rhel7-1
* rsc12b (ocf:pacemaker:Dummy): Stopped
* rsc12c (ocf:pacemaker:Dummy): Started rhel7-1
* Container bundle set: bundle10 [pcmktest:http]:
* bundle10-0 (192.168.122.131) (ocf:heartbeat:apache): Started rhel7-2
* bundle10-1 (192.168.122.132) (ocf:heartbeat:apache): Started rhel7-3
* Container bundle set: bundle11 [pcmktest:http]:
* bundle11-0 (192.168.122.134) (ocf:pacemaker:Dummy): Started rhel7-1
* bundle11-1 (192.168.122.135) (ocf:pacemaker:Dummy): Stopped
* rsc13a (ocf:pacemaker:Dummy): Stopped
* Clone Set: rsc13b-clone [rsc13b] (promotable):
* Promoted: [ rhel7-3 ]
* Unpromoted: [ rhel7-1 rhel7-2 rhel7-4 ]
* Stopped: [ rhel7-5 ]
* rsc14b (ocf:pacemaker:Dummy): Started rhel7-4
* Clone Set: rsc14a-clone [rsc14a] (promotable):
* Unpromoted: [ rhel7-1 rhel7-2 rhel7-3 ]
* Stopped: [ rhel7-4 rhel7-5 ]
diff --git a/cts/scheduler/summary/container-is-remote-node.summary b/cts/scheduler/summary/container-is-remote-node.summary
index c022e896f4..a33c9ed7db 100644
--- a/cts/scheduler/summary/container-is-remote-node.summary
+++ b/cts/scheduler/summary/container-is-remote-node.summary
@@ -1,59 +1,62 @@
3 of 19 resource instances DISABLED and 0 BLOCKED from further action due to failure
Current cluster status:
* Node List:
* Online: [ lama2 lama3 ]
* GuestOnline: [ RNVM1 ]
* Full List of Resources:
* restofencelama2 (stonith:fence_ipmilan): Started lama3
* restofencelama3 (stonith:fence_ipmilan): Started lama2
* Clone Set: dlm-clone [dlm]:
* Started: [ lama2 lama3 ]
* Stopped: [ RNVM1 ]
* Clone Set: clvmd-clone [clvmd]:
* Started: [ lama2 lama3 ]
* Stopped: [ RNVM1 ]
* Clone Set: gfs2-lv_1_1-clone [gfs2-lv_1_1]:
* Started: [ lama2 lama3 ]
* Stopped: [ RNVM1 ]
* Clone Set: gfs2-lv_1_2-clone [gfs2-lv_1_2] (disabled):
* Stopped (disabled): [ lama2 lama3 RNVM1 ]
* VM1 (ocf:heartbeat:VirtualDomain): Started lama2
* Resource Group: RES1:
* FSdata1 (ocf:heartbeat:Filesystem): Started RNVM1
* RES1-IP (ocf:heartbeat:IPaddr2): Started RNVM1
* res-rsyslog (ocf:heartbeat:rsyslog.test): Started RNVM1
+warning: Invalid ordering constraint between gfs2-lv_1_1:0 and VM1
+warning: Invalid ordering constraint between clvmd:0 and VM1
+warning: Invalid ordering constraint between dlm:0 and VM1
Transition Summary:
Executing Cluster Transition:
* Resource action: dlm monitor on RNVM1
* Resource action: clvmd monitor on RNVM1
* Resource action: gfs2-lv_1_1 monitor on RNVM1
* Resource action: gfs2-lv_1_2 monitor on RNVM1
Revised Cluster Status:
* Node List:
* Online: [ lama2 lama3 ]
* GuestOnline: [ RNVM1 ]
* Full List of Resources:
* restofencelama2 (stonith:fence_ipmilan): Started lama3
* restofencelama3 (stonith:fence_ipmilan): Started lama2
* Clone Set: dlm-clone [dlm]:
* Started: [ lama2 lama3 ]
* Stopped: [ RNVM1 ]
* Clone Set: clvmd-clone [clvmd]:
* Started: [ lama2 lama3 ]
* Stopped: [ RNVM1 ]
* Clone Set: gfs2-lv_1_1-clone [gfs2-lv_1_1]:
* Started: [ lama2 lama3 ]
* Stopped: [ RNVM1 ]
* Clone Set: gfs2-lv_1_2-clone [gfs2-lv_1_2] (disabled):
* Stopped (disabled): [ lama2 lama3 RNVM1 ]
* VM1 (ocf:heartbeat:VirtualDomain): Started lama2
* Resource Group: RES1:
* FSdata1 (ocf:heartbeat:Filesystem): Started RNVM1
* RES1-IP (ocf:heartbeat:IPaddr2): Started RNVM1
* res-rsyslog (ocf:heartbeat:rsyslog.test): Started RNVM1
diff --git a/cts/scheduler/summary/expire-non-blocked-failure.summary b/cts/scheduler/summary/expire-non-blocked-failure.summary
index 0ca6c54046..92ba7c8a82 100644
--- a/cts/scheduler/summary/expire-non-blocked-failure.summary
+++ b/cts/scheduler/summary/expire-non-blocked-failure.summary
@@ -1,24 +1,26 @@
+warning: Ignoring failure timeout (1m) for rsc1 because it conflicts with on-fail=block
0 of 3 resource instances DISABLED and 1 BLOCKED from further action due to failure
Current cluster status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* rsc1 (ocf:pacemaker:Dummy): FAILED node2 (blocked)
* rsc2 (ocf:pacemaker:Dummy): Started node1
Transition Summary:
Executing Cluster Transition:
* Cluster action: clear_failcount for rsc2 on node1
+warning: Ignoring failure timeout (1m) for rsc1 because it conflicts with on-fail=block
Revised Cluster Status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* rsc1 (ocf:pacemaker:Dummy): FAILED node2 (blocked)
* rsc2 (ocf:pacemaker:Dummy): Started node1
diff --git a/cts/scheduler/summary/failcount-block.summary b/cts/scheduler/summary/failcount-block.summary
index 646f76b400..179497942d 100644
--- a/cts/scheduler/summary/failcount-block.summary
+++ b/cts/scheduler/summary/failcount-block.summary
@@ -1,39 +1,44 @@
+error: Ignoring invalid node_state entry without id
+warning: Ignoring failure timeout (10s) for rsc_pcmk-2 because it conflicts with on-fail=block
+warning: Ignoring failure timeout (10s) for rsc_pcmk-4 because it conflicts with on-fail=block
0 of 5 resource instances DISABLED and 1 BLOCKED from further action due to failure
Current cluster status:
* Node List:
* Online: [ pcmk-1 ]
* OFFLINE: [ pcmk-4 ]
* Full List of Resources:
* rsc_pcmk-1 (ocf:heartbeat:IPaddr2): Started pcmk-1
* rsc_pcmk-2 (ocf:heartbeat:IPaddr2): FAILED pcmk-1 (blocked)
* rsc_pcmk-3 (ocf:heartbeat:IPaddr2): Stopped
* rsc_pcmk-4 (ocf:heartbeat:IPaddr2): Stopped
* rsc_pcmk-5 (ocf:heartbeat:IPaddr2): Started pcmk-1
Transition Summary:
* Start rsc_pcmk-3 ( pcmk-1 )
* Start rsc_pcmk-4 ( pcmk-1 )
Executing Cluster Transition:
* Resource action: rsc_pcmk-1 monitor=5000 on pcmk-1
* Cluster action: clear_failcount for rsc_pcmk-1 on pcmk-1
* Resource action: rsc_pcmk-3 start on pcmk-1
* Cluster action: clear_failcount for rsc_pcmk-3 on pcmk-1
* Resource action: rsc_pcmk-4 start on pcmk-1
* Cluster action: clear_failcount for rsc_pcmk-5 on pcmk-1
* Resource action: rsc_pcmk-3 monitor=5000 on pcmk-1
* Resource action: rsc_pcmk-4 monitor=5000 on pcmk-1
+error: Ignoring invalid node_state entry without id
+warning: Ignoring failure timeout (10s) for rsc_pcmk-2 because it conflicts with on-fail=block
Revised Cluster Status:
* Node List:
* Online: [ pcmk-1 ]
* OFFLINE: [ pcmk-4 ]
* Full List of Resources:
* rsc_pcmk-1 (ocf:heartbeat:IPaddr2): Started pcmk-1
* rsc_pcmk-2 (ocf:heartbeat:IPaddr2): FAILED pcmk-1 (blocked)
* rsc_pcmk-3 (ocf:heartbeat:IPaddr2): Started pcmk-1
* rsc_pcmk-4 (ocf:heartbeat:IPaddr2): Started pcmk-1
* rsc_pcmk-5 (ocf:heartbeat:IPaddr2): Started pcmk-1
diff --git a/cts/scheduler/summary/force-anon-clone-max.summary b/cts/scheduler/summary/force-anon-clone-max.summary
index d2320e9c57..2886410ab6 100644
--- a/cts/scheduler/summary/force-anon-clone-max.summary
+++ b/cts/scheduler/summary/force-anon-clone-max.summary
@@ -1,74 +1,89 @@
+warning: Ignoring globally-unique for clone1 because lsb resources such as lsb1:0 can be used only as anonymous clones
+warning: Ignoring globally-unique for clone1 because lsb resources such as lsb1:1 can be used only as anonymous clones
+warning: Ignoring globally-unique for clone2 because lsb resources such as lsb2:0 can be used only as anonymous clones
+warning: Ignoring globally-unique for clone2 because lsb resources such as lsb2:1 can be used only as anonymous clones
+warning: Ignoring globally-unique for clone2 because lsb resources such as lsb2:2 can be used only as anonymous clones
+warning: Ignoring globally-unique for clone3 because lsb resources such as lsb3:0 can be used only as anonymous clones
+warning: Ignoring globally-unique for clone3 because lsb resources such as lsb3:1 can be used only as anonymous clones
Current cluster status:
* Node List:
* Online: [ node1 node2 node3 ]
* Full List of Resources:
* Fencing (stonith:fence_imaginary): Stopped
* Clone Set: clone1 [lsb1]:
* Stopped: [ node1 node2 node3 ]
* Clone Set: clone2 [lsb2]:
* Stopped: [ node1 node2 node3 ]
* Clone Set: clone3 [group1]:
* Stopped: [ node1 node2 node3 ]
Transition Summary:
* Start Fencing ( node1 )
* Start lsb1:0 ( node2 )
* Start lsb1:1 ( node3 )
* Start lsb2:0 ( node1 )
* Start lsb2:1 ( node2 )
* Start lsb2:2 ( node3 )
* Start dummy1:0 ( node1 )
* Start dummy2:0 ( node1 )
* Start lsb3:0 ( node1 )
* Start dummy1:1 ( node2 )
* Start dummy2:1 ( node2 )
* Start lsb3:1 ( node2 )
Executing Cluster Transition:
* Resource action: Fencing start on node1
* Pseudo action: clone1_start_0
* Pseudo action: clone2_start_0
* Pseudo action: clone3_start_0
* Resource action: lsb1:0 start on node2
* Resource action: lsb1:1 start on node3
* Pseudo action: clone1_running_0
* Resource action: lsb2:0 start on node1
* Resource action: lsb2:1 start on node2
* Resource action: lsb2:2 start on node3
* Pseudo action: clone2_running_0
* Pseudo action: group1:0_start_0
* Resource action: dummy1:0 start on node1
* Resource action: dummy2:0 start on node1
* Resource action: lsb3:0 start on node1
* Pseudo action: group1:1_start_0
* Resource action: dummy1:1 start on node2
* Resource action: dummy2:1 start on node2
* Resource action: lsb3:1 start on node2
* Resource action: lsb1:0 monitor=5000 on node2
* Resource action: lsb1:1 monitor=5000 on node3
* Resource action: lsb2:0 monitor=5000 on node1
* Resource action: lsb2:1 monitor=5000 on node2
* Resource action: lsb2:2 monitor=5000 on node3
* Pseudo action: group1:0_running_0
* Resource action: dummy1:0 monitor=5000 on node1
* Resource action: dummy2:0 monitor=5000 on node1
* Resource action: lsb3:0 monitor=5000 on node1
* Pseudo action: group1:1_running_0
* Resource action: dummy1:1 monitor=5000 on node2
* Resource action: dummy2:1 monitor=5000 on node2
* Resource action: lsb3:1 monitor=5000 on node2
* Pseudo action: clone3_running_0
+warning: Ignoring globally-unique for clone1 because lsb resources such as lsb1:0 can be used only as anonymous clones
+warning: Ignoring globally-unique for clone1 because lsb resources such as lsb1:1 can be used only as anonymous clones
+warning: Ignoring globally-unique for clone2 because lsb resources such as lsb2:0 can be used only as anonymous clones
+warning: Ignoring globally-unique for clone2 because lsb resources such as lsb2:1 can be used only as anonymous clones
+warning: Ignoring globally-unique for clone2 because lsb resources such as lsb2:2 can be used only as anonymous clones
+warning: Ignoring globally-unique for clone3 because lsb resources such as lsb3:0 can be used only as anonymous clones
+warning: Ignoring globally-unique for clone3 because lsb resources such as lsb3:1 can be used only as anonymous clones
+warning: Ignoring globally-unique for clone3 because lsb resources such as lsb3:2 can be used only as anonymous clones
Revised Cluster Status:
* Node List:
* Online: [ node1 node2 node3 ]
* Full List of Resources:
* Fencing (stonith:fence_imaginary): Started node1
* Clone Set: clone1 [lsb1]:
* Started: [ node2 node3 ]
* Clone Set: clone2 [lsb2]:
* Started: [ node1 node2 node3 ]
* Clone Set: clone3 [group1]:
* Started: [ node1 node2 ]
diff --git a/cts/scheduler/summary/group-dependents.summary b/cts/scheduler/summary/group-dependents.summary
index 3365255547..a8ce9c2915 100644
--- a/cts/scheduler/summary/group-dependents.summary
+++ b/cts/scheduler/summary/group-dependents.summary
@@ -1,196 +1,197 @@
+warning: Support for the 'ordered' group meta-attribute is deprecated and will be removed in a future release (use a resource set instead)
Current cluster status:
* Node List:
* Online: [ asttest1 asttest2 ]
* Full List of Resources:
* Resource Group: voip:
* mysqld (lsb:mysql): Started asttest1
* dahdi (lsb:dahdi): Started asttest1
* fonulator (lsb:fonulator): Stopped
* asterisk (lsb:asterisk-11.0.1): Stopped
* iax2_mon (lsb:iax2_mon): Stopped
* httpd (lsb:apache2): Stopped
* tftp (lsb:tftp-srce): Stopped
* Resource Group: ip_voip_routes:
* ip_voip_route_test1 (ocf:heartbeat:Route): Started asttest1
* ip_voip_route_test2 (ocf:heartbeat:Route): Started asttest1
* Resource Group: ip_voip_addresses_p:
* ip_voip_vlan850 (ocf:heartbeat:IPaddr2): Started asttest1
* ip_voip_vlan998 (ocf:heartbeat:IPaddr2): Started asttest1
* ip_voip_vlan851 (ocf:heartbeat:IPaddr2): Started asttest1
* ip_voip_vlan852 (ocf:heartbeat:IPaddr2): Started asttest1
* ip_voip_vlan853 (ocf:heartbeat:IPaddr2): Started asttest1
* ip_voip_vlan854 (ocf:heartbeat:IPaddr2): Started asttest1
* ip_voip_vlan855 (ocf:heartbeat:IPaddr2): Started asttest1
* ip_voip_vlan856 (ocf:heartbeat:IPaddr2): Started asttest1
* Clone Set: cl_route [ip_voip_route_default]:
* Started: [ asttest1 asttest2 ]
* fs_drbd (ocf:heartbeat:Filesystem): Started asttest1
* Clone Set: ms_drbd [drbd] (promotable):
* Promoted: [ asttest1 ]
* Unpromoted: [ asttest2 ]
Transition Summary:
* Migrate mysqld ( asttest1 -> asttest2 )
* Migrate dahdi ( asttest1 -> asttest2 )
* Start fonulator ( asttest2 )
* Start asterisk ( asttest2 )
* Start iax2_mon ( asttest2 )
* Start httpd ( asttest2 )
* Start tftp ( asttest2 )
* Migrate ip_voip_route_test1 ( asttest1 -> asttest2 )
* Migrate ip_voip_route_test2 ( asttest1 -> asttest2 )
* Migrate ip_voip_vlan850 ( asttest1 -> asttest2 )
* Migrate ip_voip_vlan998 ( asttest1 -> asttest2 )
* Migrate ip_voip_vlan851 ( asttest1 -> asttest2 )
* Migrate ip_voip_vlan852 ( asttest1 -> asttest2 )
* Migrate ip_voip_vlan853 ( asttest1 -> asttest2 )
* Migrate ip_voip_vlan854 ( asttest1 -> asttest2 )
* Migrate ip_voip_vlan855 ( asttest1 -> asttest2 )
* Migrate ip_voip_vlan856 ( asttest1 -> asttest2 )
* Move fs_drbd ( asttest1 -> asttest2 )
* Demote drbd:0 ( Promoted -> Unpromoted asttest1 )
* Promote drbd:1 ( Unpromoted -> Promoted asttest2 )
Executing Cluster Transition:
* Pseudo action: voip_stop_0
* Resource action: mysqld migrate_to on asttest1
* Resource action: ip_voip_route_test1 migrate_to on asttest1
* Resource action: ip_voip_route_test2 migrate_to on asttest1
* Resource action: ip_voip_vlan850 migrate_to on asttest1
* Resource action: ip_voip_vlan998 migrate_to on asttest1
* Resource action: ip_voip_vlan851 migrate_to on asttest1
* Resource action: ip_voip_vlan852 migrate_to on asttest1
* Resource action: ip_voip_vlan853 migrate_to on asttest1
* Resource action: ip_voip_vlan854 migrate_to on asttest1
* Resource action: ip_voip_vlan855 migrate_to on asttest1
* Resource action: ip_voip_vlan856 migrate_to on asttest1
* Resource action: drbd:1 cancel=31000 on asttest2
* Pseudo action: ms_drbd_pre_notify_demote_0
* Resource action: mysqld migrate_from on asttest2
* Resource action: dahdi migrate_to on asttest1
* Resource action: ip_voip_route_test1 migrate_from on asttest2
* Resource action: ip_voip_route_test2 migrate_from on asttest2
* Resource action: ip_voip_vlan850 migrate_from on asttest2
* Resource action: ip_voip_vlan998 migrate_from on asttest2
* Resource action: ip_voip_vlan851 migrate_from on asttest2
* Resource action: ip_voip_vlan852 migrate_from on asttest2
* Resource action: ip_voip_vlan853 migrate_from on asttest2
* Resource action: ip_voip_vlan854 migrate_from on asttest2
* Resource action: ip_voip_vlan855 migrate_from on asttest2
* Resource action: ip_voip_vlan856 migrate_from on asttest2
* Resource action: drbd:0 notify on asttest1
* Resource action: drbd:1 notify on asttest2
* Pseudo action: ms_drbd_confirmed-pre_notify_demote_0
* Resource action: dahdi migrate_from on asttest2
* Resource action: dahdi stop on asttest1
* Resource action: mysqld stop on asttest1
* Pseudo action: voip_stopped_0
* Pseudo action: ip_voip_routes_stop_0
* Resource action: ip_voip_route_test1 stop on asttest1
* Resource action: ip_voip_route_test2 stop on asttest1
* Pseudo action: ip_voip_routes_stopped_0
* Pseudo action: ip_voip_addresses_p_stop_0
* Resource action: ip_voip_vlan850 stop on asttest1
* Resource action: ip_voip_vlan998 stop on asttest1
* Resource action: ip_voip_vlan851 stop on asttest1
* Resource action: ip_voip_vlan852 stop on asttest1
* Resource action: ip_voip_vlan853 stop on asttest1
* Resource action: ip_voip_vlan854 stop on asttest1
* Resource action: ip_voip_vlan855 stop on asttest1
* Resource action: ip_voip_vlan856 stop on asttest1
* Pseudo action: ip_voip_addresses_p_stopped_0
* Resource action: fs_drbd stop on asttest1
* Pseudo action: ms_drbd_demote_0
* Resource action: drbd:0 demote on asttest1
* Pseudo action: ms_drbd_demoted_0
* Pseudo action: ms_drbd_post_notify_demoted_0
* Resource action: drbd:0 notify on asttest1
* Resource action: drbd:1 notify on asttest2
* Pseudo action: ms_drbd_confirmed-post_notify_demoted_0
* Pseudo action: ms_drbd_pre_notify_promote_0
* Resource action: drbd:0 notify on asttest1
* Resource action: drbd:1 notify on asttest2
* Pseudo action: ms_drbd_confirmed-pre_notify_promote_0
* Pseudo action: ms_drbd_promote_0
* Resource action: drbd:1 promote on asttest2
* Pseudo action: ms_drbd_promoted_0
* Pseudo action: ms_drbd_post_notify_promoted_0
* Resource action: drbd:0 notify on asttest1
* Resource action: drbd:1 notify on asttest2
* Pseudo action: ms_drbd_confirmed-post_notify_promoted_0
* Resource action: fs_drbd start on asttest2
* Resource action: drbd:0 monitor=31000 on asttest1
* Pseudo action: ip_voip_addresses_p_start_0
* Pseudo action: ip_voip_vlan850_start_0
* Pseudo action: ip_voip_vlan998_start_0
* Pseudo action: ip_voip_vlan851_start_0
* Pseudo action: ip_voip_vlan852_start_0
* Pseudo action: ip_voip_vlan853_start_0
* Pseudo action: ip_voip_vlan854_start_0
* Pseudo action: ip_voip_vlan855_start_0
* Pseudo action: ip_voip_vlan856_start_0
* Resource action: fs_drbd monitor=1000 on asttest2
* Pseudo action: ip_voip_addresses_p_running_0
* Resource action: ip_voip_vlan850 monitor=1000 on asttest2
* Resource action: ip_voip_vlan998 monitor=1000 on asttest2
* Resource action: ip_voip_vlan851 monitor=1000 on asttest2
* Resource action: ip_voip_vlan852 monitor=1000 on asttest2
* Resource action: ip_voip_vlan853 monitor=1000 on asttest2
* Resource action: ip_voip_vlan854 monitor=1000 on asttest2
* Resource action: ip_voip_vlan855 monitor=1000 on asttest2
* Resource action: ip_voip_vlan856 monitor=1000 on asttest2
* Pseudo action: ip_voip_routes_start_0
* Pseudo action: ip_voip_route_test1_start_0
* Pseudo action: ip_voip_route_test2_start_0
* Pseudo action: ip_voip_routes_running_0
* Resource action: ip_voip_route_test1 monitor=1000 on asttest2
* Resource action: ip_voip_route_test2 monitor=1000 on asttest2
* Pseudo action: voip_start_0
* Pseudo action: mysqld_start_0
* Pseudo action: dahdi_start_0
* Resource action: fonulator start on asttest2
* Resource action: asterisk start on asttest2
* Resource action: iax2_mon start on asttest2
* Resource action: httpd start on asttest2
* Resource action: tftp start on asttest2
* Pseudo action: voip_running_0
* Resource action: mysqld monitor=1000 on asttest2
* Resource action: dahdi monitor=1000 on asttest2
* Resource action: fonulator monitor=1000 on asttest2
* Resource action: asterisk monitor=1000 on asttest2
* Resource action: iax2_mon monitor=60000 on asttest2
* Resource action: httpd monitor=1000 on asttest2
* Resource action: tftp monitor=60000 on asttest2
Revised Cluster Status:
* Node List:
* Online: [ asttest1 asttest2 ]
* Full List of Resources:
* Resource Group: voip:
* mysqld (lsb:mysql): Started asttest2
* dahdi (lsb:dahdi): Started asttest2
* fonulator (lsb:fonulator): Started asttest2
* asterisk (lsb:asterisk-11.0.1): Started asttest2
* iax2_mon (lsb:iax2_mon): Started asttest2
* httpd (lsb:apache2): Started asttest2
* tftp (lsb:tftp-srce): Started asttest2
* Resource Group: ip_voip_routes:
* ip_voip_route_test1 (ocf:heartbeat:Route): Started asttest2
* ip_voip_route_test2 (ocf:heartbeat:Route): Started asttest2
* Resource Group: ip_voip_addresses_p:
* ip_voip_vlan850 (ocf:heartbeat:IPaddr2): Started asttest2
* ip_voip_vlan998 (ocf:heartbeat:IPaddr2): Started asttest2
* ip_voip_vlan851 (ocf:heartbeat:IPaddr2): Started asttest2
* ip_voip_vlan852 (ocf:heartbeat:IPaddr2): Started asttest2
* ip_voip_vlan853 (ocf:heartbeat:IPaddr2): Started asttest2
* ip_voip_vlan854 (ocf:heartbeat:IPaddr2): Started asttest2
* ip_voip_vlan855 (ocf:heartbeat:IPaddr2): Started asttest2
* ip_voip_vlan856 (ocf:heartbeat:IPaddr2): Started asttest2
* Clone Set: cl_route [ip_voip_route_default]:
* Started: [ asttest1 asttest2 ]
* fs_drbd (ocf:heartbeat:Filesystem): Started asttest2
* Clone Set: ms_drbd [drbd] (promotable):
* Promoted: [ asttest2 ]
* Unpromoted: [ asttest1 ]
diff --git a/cts/scheduler/summary/guest-host-not-fenceable.summary b/cts/scheduler/summary/guest-host-not-fenceable.summary
index 9e3b5db405..8fe32428bc 100644
--- a/cts/scheduler/summary/guest-host-not-fenceable.summary
+++ b/cts/scheduler/summary/guest-host-not-fenceable.summary
@@ -1,91 +1,93 @@
Using the original execution date of: 2019-08-26 04:52:42Z
Current cluster status:
* Node List:
* Node node2: UNCLEAN (offline)
* Node node3: UNCLEAN (offline)
* Online: [ node1 ]
* GuestOnline: [ galera-bundle-0 rabbitmq-bundle-0 ]
* Full List of Resources:
* Container bundle set: rabbitmq-bundle [192.168.122.139:8787/rhosp13/openstack-rabbitmq:pcmklatest]:
* rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Started node1
* rabbitmq-bundle-1 (ocf:heartbeat:rabbitmq-cluster): FAILED node2 (UNCLEAN)
* rabbitmq-bundle-2 (ocf:heartbeat:rabbitmq-cluster): FAILED node3 (UNCLEAN)
* Container bundle set: galera-bundle [192.168.122.139:8787/rhosp13/openstack-mariadb:pcmklatest]:
* galera-bundle-0 (ocf:heartbeat:galera): FAILED Promoted node1
* galera-bundle-1 (ocf:heartbeat:galera): FAILED Promoted node2 (UNCLEAN)
* galera-bundle-2 (ocf:heartbeat:galera): FAILED Promoted node3 (UNCLEAN)
* stonith-fence_ipmilan-node1 (stonith:fence_ipmilan): Started node2 (UNCLEAN)
* stonith-fence_ipmilan-node3 (stonith:fence_ipmilan): Started node2 (UNCLEAN)
* stonith-fence_ipmilan-node2 (stonith:fence_ipmilan): Started node3 (UNCLEAN)
+warning: Node node2 is unclean but cannot be fenced
+warning: Node node3 is unclean but cannot be fenced
Transition Summary:
* Stop rabbitmq-bundle-docker-0 ( node1 ) due to no quorum
* Stop rabbitmq-bundle-0 ( node1 ) due to no quorum
* Stop rabbitmq:0 ( rabbitmq-bundle-0 ) due to no quorum
* Stop rabbitmq-bundle-docker-1 ( node2 ) due to node availability (blocked)
* Stop rabbitmq-bundle-1 ( node2 ) due to no quorum (blocked)
* Stop rabbitmq:1 ( rabbitmq-bundle-1 ) due to no quorum (blocked)
* Stop rabbitmq-bundle-docker-2 ( node3 ) due to node availability (blocked)
* Stop rabbitmq-bundle-2 ( node3 ) due to no quorum (blocked)
* Stop rabbitmq:2 ( rabbitmq-bundle-2 ) due to no quorum (blocked)
* Stop galera-bundle-docker-0 ( node1 ) due to no quorum
* Stop galera-bundle-0 ( node1 ) due to no quorum
* Stop galera:0 ( Promoted galera-bundle-0 ) due to no quorum
* Stop galera-bundle-docker-1 ( node2 ) due to node availability (blocked)
* Stop galera-bundle-1 ( node2 ) due to no quorum (blocked)
* Stop galera:1 ( Promoted galera-bundle-1 ) due to no quorum (blocked)
* Stop galera-bundle-docker-2 ( node3 ) due to node availability (blocked)
* Stop galera-bundle-2 ( node3 ) due to no quorum (blocked)
* Stop galera:2 ( Promoted galera-bundle-2 ) due to no quorum (blocked)
* Stop stonith-fence_ipmilan-node1 ( node2 ) due to node availability (blocked)
* Stop stonith-fence_ipmilan-node3 ( node2 ) due to no quorum (blocked)
* Stop stonith-fence_ipmilan-node2 ( node3 ) due to no quorum (blocked)
Executing Cluster Transition:
* Pseudo action: rabbitmq-bundle-clone_pre_notify_stop_0
* Pseudo action: galera-bundle_demote_0
* Pseudo action: rabbitmq-bundle_stop_0
* Resource action: rabbitmq notify on rabbitmq-bundle-0
* Pseudo action: rabbitmq-bundle-clone_confirmed-pre_notify_stop_0
* Pseudo action: rabbitmq-bundle-clone_stop_0
* Pseudo action: galera-bundle-master_demote_0
* Resource action: rabbitmq stop on rabbitmq-bundle-0
* Pseudo action: rabbitmq-bundle-clone_stopped_0
* Resource action: rabbitmq-bundle-0 stop on node1
* Resource action: rabbitmq-bundle-0 cancel=60000 on node1
* Resource action: galera demote on galera-bundle-0
* Pseudo action: galera-bundle-master_demoted_0
* Pseudo action: galera-bundle_demoted_0
* Pseudo action: galera-bundle_stop_0
* Pseudo action: rabbitmq-bundle-clone_post_notify_stopped_0
* Resource action: rabbitmq-bundle-docker-0 stop on node1
* Pseudo action: galera-bundle-master_stop_0
* Pseudo action: rabbitmq-bundle-clone_confirmed-post_notify_stopped_0
* Resource action: galera stop on galera-bundle-0
* Pseudo action: galera-bundle-master_stopped_0
* Resource action: galera-bundle-0 stop on node1
* Resource action: galera-bundle-0 cancel=60000 on node1
* Pseudo action: rabbitmq-bundle_stopped_0
* Resource action: galera-bundle-docker-0 stop on node1
* Pseudo action: galera-bundle_stopped_0
Using the original execution date of: 2019-08-26 04:52:42Z
Revised Cluster Status:
* Node List:
* Node node2: UNCLEAN (offline)
* Node node3: UNCLEAN (offline)
* Online: [ node1 ]
* Full List of Resources:
* Container bundle set: rabbitmq-bundle [192.168.122.139:8787/rhosp13/openstack-rabbitmq:pcmklatest]:
* rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Stopped
* rabbitmq-bundle-1 (ocf:heartbeat:rabbitmq-cluster): FAILED node2 (UNCLEAN)
* rabbitmq-bundle-2 (ocf:heartbeat:rabbitmq-cluster): FAILED node3 (UNCLEAN)
* Container bundle set: galera-bundle [192.168.122.139:8787/rhosp13/openstack-mariadb:pcmklatest]:
* galera-bundle-0 (ocf:heartbeat:galera): Stopped
* galera-bundle-1 (ocf:heartbeat:galera): FAILED Promoted node2 (UNCLEAN)
* galera-bundle-2 (ocf:heartbeat:galera): FAILED Promoted node3 (UNCLEAN)
* stonith-fence_ipmilan-node1 (stonith:fence_ipmilan): Started node2 (UNCLEAN)
* stonith-fence_ipmilan-node3 (stonith:fence_ipmilan): Started node2 (UNCLEAN)
* stonith-fence_ipmilan-node2 (stonith:fence_ipmilan): Started node3 (UNCLEAN)
diff --git a/cts/scheduler/summary/intervals.summary b/cts/scheduler/summary/intervals.summary
index f6dc2e4b7f..b4ebad3f69 100644
--- a/cts/scheduler/summary/intervals.summary
+++ b/cts/scheduler/summary/intervals.summary
@@ -1,52 +1,54 @@
Using the original execution date of: 2018-03-21 23:12:42Z
0 of 7 resource instances DISABLED and 1 BLOCKED from further action due to failure
Current cluster status:
* Node List:
* Online: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]
* Full List of Resources:
* Fencing (stonith:fence_xvm): Started rhel7-1
* rsc1 (ocf:pacemaker:Dummy): Started rhel7-2
* rsc2 (ocf:pacemaker:Dummy): Stopped
* rsc3 (ocf:pacemaker:Dummy): Started rhel7-4
* rsc4 (ocf:pacemaker:Dummy): FAILED rhel7-5 (blocked)
* rsc5 (ocf:pacemaker:Dummy): Started rhel7-1
* rsc6 (ocf:pacemaker:Dummy): Started rhel7-2
+error: Operation rsc3-monitor-interval-P40S is duplicate of rsc3-monitor-interval-40s (do not use same name and interval combination more than once per resource)
+error: Operation rsc3-monitor-interval-P40S is duplicate of rsc3-monitor-interval-40s (do not use same name and interval combination more than once per resource)
Transition Summary:
* Start rsc2 ( rhel7-3 )
* Move rsc5 ( rhel7-1 -> rhel7-2 )
* Move rsc6 ( rhel7-2 -> rhel7-1 )
Executing Cluster Transition:
* Resource action: rsc2 monitor on rhel7-5
* Resource action: rsc2 monitor on rhel7-4
* Resource action: rsc2 monitor on rhel7-3
* Resource action: rsc2 monitor on rhel7-2
* Resource action: rsc2 monitor on rhel7-1
* Resource action: rsc5 stop on rhel7-1
* Resource action: rsc5 cancel=25000 on rhel7-2
* Resource action: rsc6 stop on rhel7-2
* Resource action: rsc2 start on rhel7-3
* Resource action: rsc5 monitor=25000 on rhel7-1
* Resource action: rsc5 start on rhel7-2
* Resource action: rsc6 start on rhel7-1
* Resource action: rsc2 monitor=90000 on rhel7-3
* Resource action: rsc2 monitor=40000 on rhel7-3
* Resource action: rsc5 monitor=20000 on rhel7-2
* Resource action: rsc6 monitor=28000 on rhel7-1
Using the original execution date of: 2018-03-21 23:12:42Z
Revised Cluster Status:
* Node List:
* Online: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]
* Full List of Resources:
* Fencing (stonith:fence_xvm): Started rhel7-1
* rsc1 (ocf:pacemaker:Dummy): Started rhel7-2
* rsc2 (ocf:pacemaker:Dummy): Started rhel7-3
* rsc3 (ocf:pacemaker:Dummy): Started rhel7-4
* rsc4 (ocf:pacemaker:Dummy): FAILED rhel7-5 (blocked)
* rsc5 (ocf:pacemaker:Dummy): Started rhel7-2
* rsc6 (ocf:pacemaker:Dummy): Started rhel7-1
diff --git a/cts/scheduler/summary/leftover-pending-monitor.summary b/cts/scheduler/summary/leftover-pending-monitor.summary
index 04b03f29d8..d5e7e39f10 100644
--- a/cts/scheduler/summary/leftover-pending-monitor.summary
+++ b/cts/scheduler/summary/leftover-pending-monitor.summary
@@ -1,30 +1,31 @@
Using the original execution date of: 2022-12-02 17:04:52Z
Current cluster status:
* Node List:
* Node node-2: pending
* Online: [ node-1 node-3 ]
* Full List of Resources:
* st-sbd (stonith:external/sbd): Started node-1
* Clone Set: promotable-1 [stateful-1] (promotable):
* Promoted: [ node-3 ]
* Stopped: [ node-1 node-2 ]
+warning: Support for the Master role is deprecated and will be removed in a future release. Use Promoted instead.
Transition Summary:
* Start stateful-1:1 ( node-1 ) due to unrunnable stateful-1:0 monitor (blocked)
Executing Cluster Transition:
* Pseudo action: promotable-1_start_0
* Pseudo action: promotable-1_running_0
Using the original execution date of: 2022-12-02 17:04:52Z
Revised Cluster Status:
* Node List:
* Node node-2: pending
* Online: [ node-1 node-3 ]
* Full List of Resources:
* st-sbd (stonith:external/sbd): Started node-1
* Clone Set: promotable-1 [stateful-1] (promotable):
* Promoted: [ node-3 ]
* Stopped: [ node-1 node-2 ]
diff --git a/cts/scheduler/summary/novell-239079.summary b/cts/scheduler/summary/novell-239079.summary
index 0afbba5797..401ccd11d7 100644
--- a/cts/scheduler/summary/novell-239079.summary
+++ b/cts/scheduler/summary/novell-239079.summary
@@ -1,33 +1,42 @@
+warning: Support for setting meta-attributes (such as target-role) to the explicit value '#default' is deprecated and will be removed in a future release
+warning: Support for setting meta-attributes (such as target-role) to the explicit value '#default' is deprecated and will be removed in a future release
+warning: Support for setting meta-attributes (such as target-role) to the explicit value '#default' is deprecated and will be removed in a future release
+warning: Support for setting meta-attributes (such as target-role) to the explicit value '#default' is deprecated and will be removed in a future release
Current cluster status:
* Node List:
* Online: [ xen-1 xen-2 ]
* Full List of Resources:
* fs_1 (ocf:heartbeat:Filesystem): Stopped
* Clone Set: ms-drbd0 [drbd0] (promotable):
* Stopped: [ xen-1 xen-2 ]
Transition Summary:
* Start drbd0:0 ( xen-1 )
* Start drbd0:1 ( xen-2 )
Executing Cluster Transition:
* Pseudo action: ms-drbd0_pre_notify_start_0
* Pseudo action: ms-drbd0_confirmed-pre_notify_start_0
* Pseudo action: ms-drbd0_start_0
* Resource action: drbd0:0 start on xen-1
* Resource action: drbd0:1 start on xen-2
* Pseudo action: ms-drbd0_running_0
* Pseudo action: ms-drbd0_post_notify_running_0
* Resource action: drbd0:0 notify on xen-1
* Resource action: drbd0:1 notify on xen-2
* Pseudo action: ms-drbd0_confirmed-post_notify_running_0
+warning: Support for setting meta-attributes (such as target-role) to the explicit value '#default' is deprecated and will be removed in a future release
+warning: Support for setting meta-attributes (such as target-role) to the explicit value '#default' is deprecated and will be removed in a future release
+warning: Support for setting meta-attributes (such as target-role) to the explicit value '#default' is deprecated and will be removed in a future release
+warning: Support for setting meta-attributes (such as target-role) to the explicit value '#default' is deprecated and will be removed in a future release
+warning: Support for setting meta-attributes (such as target-role) to the explicit value '#default' is deprecated and will be removed in a future release
Revised Cluster Status:
* Node List:
* Online: [ xen-1 xen-2 ]
* Full List of Resources:
* fs_1 (ocf:heartbeat:Filesystem): Stopped
* Clone Set: ms-drbd0 [drbd0] (promotable):
* Unpromoted: [ xen-1 xen-2 ]
diff --git a/cts/scheduler/summary/novell-239082.summary b/cts/scheduler/summary/novell-239082.summary
index 051c0220e0..5d27e93076 100644
--- a/cts/scheduler/summary/novell-239082.summary
+++ b/cts/scheduler/summary/novell-239082.summary
@@ -1,59 +1,71 @@
+warning: Support for setting meta-attributes (such as target-role) to the explicit value '#default' is deprecated and will be removed in a future release
+warning: Support for setting meta-attributes (such as target-role) to the explicit value '#default' is deprecated and will be removed in a future release
+warning: Support for setting meta-attributes (such as target-role) to the explicit value '#default' is deprecated and will be removed in a future release
+warning: Support for setting meta-attributes (such as target-role) to the explicit value '#default' is deprecated and will be removed in a future release
+warning: Support for setting meta-attributes (such as target-role) to the explicit value '#default' is deprecated and will be removed in a future release
Current cluster status:
* Node List:
* Online: [ xen-1 xen-2 ]
* Full List of Resources:
* fs_1 (ocf:heartbeat:Filesystem): Started xen-1
* Clone Set: ms-drbd0 [drbd0] (promotable):
* Promoted: [ xen-1 ]
* Unpromoted: [ xen-2 ]
+warning: Support for setting meta-attributes (such as target_role) to the explicit value '#default' is deprecated and will be removed in a future release
+warning: Support for setting meta-attributes (such as target_role) to the explicit value '#default' is deprecated and will be removed in a future release
Transition Summary:
* Move fs_1 ( xen-1 -> xen-2 )
* Promote drbd0:0 ( Unpromoted -> Promoted xen-2 )
* Stop drbd0:1 ( Promoted xen-1 ) due to node availability
Executing Cluster Transition:
* Resource action: fs_1 stop on xen-1
* Pseudo action: ms-drbd0_pre_notify_demote_0
* Resource action: drbd0:0 notify on xen-2
* Resource action: drbd0:1 notify on xen-1
* Pseudo action: ms-drbd0_confirmed-pre_notify_demote_0
* Pseudo action: ms-drbd0_demote_0
* Resource action: drbd0:1 demote on xen-1
* Pseudo action: ms-drbd0_demoted_0
* Pseudo action: ms-drbd0_post_notify_demoted_0
* Resource action: drbd0:0 notify on xen-2
* Resource action: drbd0:1 notify on xen-1
* Pseudo action: ms-drbd0_confirmed-post_notify_demoted_0
* Pseudo action: ms-drbd0_pre_notify_stop_0
* Resource action: drbd0:0 notify on xen-2
* Resource action: drbd0:1 notify on xen-1
* Pseudo action: ms-drbd0_confirmed-pre_notify_stop_0
* Pseudo action: ms-drbd0_stop_0
* Resource action: drbd0:1 stop on xen-1
* Pseudo action: ms-drbd0_stopped_0
* Cluster action: do_shutdown on xen-1
* Pseudo action: ms-drbd0_post_notify_stopped_0
* Resource action: drbd0:0 notify on xen-2
* Pseudo action: ms-drbd0_confirmed-post_notify_stopped_0
* Pseudo action: ms-drbd0_pre_notify_promote_0
* Resource action: drbd0:0 notify on xen-2
* Pseudo action: ms-drbd0_confirmed-pre_notify_promote_0
* Pseudo action: ms-drbd0_promote_0
* Resource action: drbd0:0 promote on xen-2
* Pseudo action: ms-drbd0_promoted_0
* Pseudo action: ms-drbd0_post_notify_promoted_0
* Resource action: drbd0:0 notify on xen-2
* Pseudo action: ms-drbd0_confirmed-post_notify_promoted_0
* Resource action: fs_1 start on xen-2
+warning: Support for setting meta-attributes (such as target-role) to the explicit value '#default' is deprecated and will be removed in a future release
+warning: Support for setting meta-attributes (such as target-role) to the explicit value '#default' is deprecated and will be removed in a future release
+warning: Support for setting meta-attributes (such as target-role) to the explicit value '#default' is deprecated and will be removed in a future release
+warning: Support for setting meta-attributes (such as target-role) to the explicit value '#default' is deprecated and will be removed in a future release
+warning: Support for setting meta-attributes (such as target-role) to the explicit value '#default' is deprecated and will be removed in a future release
Revised Cluster Status:
* Node List:
* Online: [ xen-1 xen-2 ]
* Full List of Resources:
* fs_1 (ocf:heartbeat:Filesystem): Started xen-2
* Clone Set: ms-drbd0 [drbd0] (promotable):
* Promoted: [ xen-2 ]
* Stopped: [ xen-1 ]
diff --git a/cts/scheduler/summary/novell-239087.summary b/cts/scheduler/summary/novell-239087.summary
index 0c158d3873..df2db7abfb 100644
--- a/cts/scheduler/summary/novell-239087.summary
+++ b/cts/scheduler/summary/novell-239087.summary
@@ -1,23 +1,33 @@
+warning: Support for setting meta-attributes (such as target-role) to the explicit value '#default' is deprecated and will be removed in a future release
+warning: Support for setting meta-attributes (such as target-role) to the explicit value '#default' is deprecated and will be removed in a future release
+warning: Support for setting meta-attributes (such as target-role) to the explicit value '#default' is deprecated and will be removed in a future release
+warning: Support for setting meta-attributes (such as target-role) to the explicit value '#default' is deprecated and will be removed in a future release
Current cluster status:
* Node List:
* Online: [ xen-1 xen-2 ]
* Full List of Resources:
* fs_1 (ocf:heartbeat:Filesystem): Started xen-1
* Clone Set: ms-drbd0 [drbd0] (promotable):
* Promoted: [ xen-1 ]
* Unpromoted: [ xen-2 ]
+warning: Support for setting meta-attributes (such as target_role) to the explicit value '#default' is deprecated and will be removed in a future release
+warning: Support for setting meta-attributes (such as target_role) to the explicit value '#default' is deprecated and will be removed in a future release
+warning: Support for setting meta-attributes (such as target-role) to the explicit value '#default' is deprecated and will be removed in a future release
+warning: Support for setting meta-attributes (such as target-role) to the explicit value '#default' is deprecated and will be removed in a future release
+warning: Support for setting meta-attributes (such as target-role) to the explicit value '#default' is deprecated and will be removed in a future release
+warning: Support for setting meta-attributes (such as target-role) to the explicit value '#default' is deprecated and will be removed in a future release
Transition Summary:
Executing Cluster Transition:
Revised Cluster Status:
* Node List:
* Online: [ xen-1 xen-2 ]
* Full List of Resources:
* fs_1 (ocf:heartbeat:Filesystem): Started xen-1
* Clone Set: ms-drbd0 [drbd0] (promotable):
* Promoted: [ xen-1 ]
* Unpromoted: [ xen-2 ]
diff --git a/cts/scheduler/summary/one-or-more-unrunnable-instances.summary b/cts/scheduler/summary/one-or-more-unrunnable-instances.summary
index 58c572d199..13eeacbffe 100644
--- a/cts/scheduler/summary/one-or-more-unrunnable-instances.summary
+++ b/cts/scheduler/summary/one-or-more-unrunnable-instances.summary
@@ -1,736 +1,737 @@
Current cluster status:
* Node List:
* Online: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
* RemoteOnline: [ mrg-07 mrg-08 mrg-09 ]
* Full List of Resources:
* fence1 (stonith:fence_xvm): Started rdo7-node2
* fence2 (stonith:fence_xvm): Started rdo7-node1
* fence3 (stonith:fence_xvm): Started rdo7-node3
* Clone Set: lb-haproxy-clone [lb-haproxy]:
* Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
* Stopped: [ mrg-07 mrg-08 mrg-09 ]
* vip-db (ocf:heartbeat:IPaddr2): Started rdo7-node3
* vip-rabbitmq (ocf:heartbeat:IPaddr2): Started rdo7-node1
* vip-keystone (ocf:heartbeat:IPaddr2): Started rdo7-node2
* vip-glance (ocf:heartbeat:IPaddr2): Started rdo7-node3
* vip-cinder (ocf:heartbeat:IPaddr2): Started rdo7-node1
* vip-swift (ocf:heartbeat:IPaddr2): Started rdo7-node2
* vip-neutron (ocf:heartbeat:IPaddr2): Started rdo7-node2
* vip-nova (ocf:heartbeat:IPaddr2): Started rdo7-node1
* vip-horizon (ocf:heartbeat:IPaddr2): Started rdo7-node3
* vip-heat (ocf:heartbeat:IPaddr2): Started rdo7-node1
* vip-ceilometer (ocf:heartbeat:IPaddr2): Started rdo7-node2
* vip-qpid (ocf:heartbeat:IPaddr2): Started rdo7-node3
* vip-node (ocf:heartbeat:IPaddr2): Started rdo7-node1
* Clone Set: galera-master [galera] (promotable):
* Promoted: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
* Stopped: [ mrg-07 mrg-08 mrg-09 ]
* Clone Set: rabbitmq-server-clone [rabbitmq-server]:
* Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
* Stopped: [ mrg-07 mrg-08 mrg-09 ]
* Clone Set: memcached-clone [memcached]:
* Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
* Stopped: [ mrg-07 mrg-08 mrg-09 ]
* Clone Set: mongodb-clone [mongodb]:
* Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
* Stopped: [ mrg-07 mrg-08 mrg-09 ]
* Clone Set: keystone-clone [keystone]:
* Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
* Clone Set: glance-fs-clone [glance-fs]:
* Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
* Stopped: [ mrg-07 mrg-08 mrg-09 ]
* Clone Set: glance-registry-clone [glance-registry]:
* Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
* Clone Set: glance-api-clone [glance-api]:
* Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
* Clone Set: cinder-api-clone [cinder-api]:
* Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
* Clone Set: cinder-scheduler-clone [cinder-scheduler]:
* Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
* cinder-volume (systemd:openstack-cinder-volume): Stopped
* Clone Set: swift-fs-clone [swift-fs]:
* Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
* Stopped: [ mrg-07 mrg-08 mrg-09 ]
* Clone Set: swift-account-clone [swift-account]:
* Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
* Clone Set: swift-container-clone [swift-container]:
* Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
* Clone Set: swift-object-clone [swift-object]:
* Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
* Clone Set: swift-proxy-clone [swift-proxy]:
* Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
* swift-object-expirer (systemd:openstack-swift-object-expirer): Stopped
* Clone Set: neutron-server-clone [neutron-server]:
* Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
* Clone Set: neutron-scale-clone [neutron-scale] (unique):
* neutron-scale:0 (ocf:neutron:NeutronScale): Stopped
* neutron-scale:1 (ocf:neutron:NeutronScale): Stopped
* neutron-scale:2 (ocf:neutron:NeutronScale): Stopped
* Clone Set: neutron-ovs-cleanup-clone [neutron-ovs-cleanup]:
* Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
* Clone Set: neutron-netns-cleanup-clone [neutron-netns-cleanup]:
* Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
* Clone Set: neutron-openvswitch-agent-clone [neutron-openvswitch-agent]:
* Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
* Clone Set: neutron-dhcp-agent-clone [neutron-dhcp-agent]:
* Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
* Clone Set: neutron-l3-agent-clone [neutron-l3-agent]:
* Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
* Clone Set: neutron-metadata-agent-clone [neutron-metadata-agent]:
* Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
* Clone Set: nova-consoleauth-clone [nova-consoleauth]:
* Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
* Clone Set: nova-novncproxy-clone [nova-novncproxy]:
* Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
* Clone Set: nova-api-clone [nova-api]:
* Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
* Clone Set: nova-scheduler-clone [nova-scheduler]:
* Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
* Clone Set: nova-conductor-clone [nova-conductor]:
* Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
* Clone Set: redis-master [redis] (promotable):
* Promoted: [ rdo7-node1 ]
* Unpromoted: [ rdo7-node2 rdo7-node3 ]
* Stopped: [ mrg-07 mrg-08 mrg-09 ]
* vip-redis (ocf:heartbeat:IPaddr2): Started rdo7-node1
* Clone Set: ceilometer-central-clone [ceilometer-central]:
* Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
* Clone Set: ceilometer-collector-clone [ceilometer-collector]:
* Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
* Clone Set: ceilometer-api-clone [ceilometer-api]:
* Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
* Clone Set: ceilometer-delay-clone [ceilometer-delay]:
* Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
* Clone Set: ceilometer-alarm-evaluator-clone [ceilometer-alarm-evaluator]:
* Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
* Clone Set: ceilometer-alarm-notifier-clone [ceilometer-alarm-notifier]:
* Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
* Clone Set: ceilometer-notification-clone [ceilometer-notification]:
* Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
* Clone Set: heat-api-clone [heat-api]:
* Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
* Clone Set: heat-api-cfn-clone [heat-api-cfn]:
* Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
* Clone Set: heat-api-cloudwatch-clone [heat-api-cloudwatch]:
* Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
* Clone Set: heat-engine-clone [heat-engine]:
* Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
* Clone Set: horizon-clone [horizon]:
* Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
* Stopped: [ mrg-07 mrg-08 mrg-09 ]
* Clone Set: neutron-openvswitch-agent-compute-clone [neutron-openvswitch-agent-compute]:
* Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
* Clone Set: libvirtd-compute-clone [libvirtd-compute]:
* Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
* Clone Set: ceilometer-compute-clone [ceilometer-compute]:
* Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
* Clone Set: nova-compute-clone [nova-compute]:
* Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
* fence-nova (stonith:fence_compute): Stopped
* fence-compute (stonith:fence_apc_snmp): Started rdo7-node3
* mrg-07 (ocf:pacemaker:remote): Started rdo7-node1
* mrg-08 (ocf:pacemaker:remote): Started rdo7-node2
* mrg-09 (ocf:pacemaker:remote): Started rdo7-node3
+warning: Support for require-all in ordering constraints is deprecated and will be removed in a future release (use clone-min clone meta-attribute instead)
Transition Summary:
* Start keystone:0 ( rdo7-node2 )
* Start keystone:1 ( rdo7-node3 )
* Start keystone:2 ( rdo7-node1 )
* Start glance-registry:0 ( rdo7-node2 )
* Start glance-registry:1 ( rdo7-node3 )
* Start glance-registry:2 ( rdo7-node1 )
* Start glance-api:0 ( rdo7-node2 )
* Start glance-api:1 ( rdo7-node3 )
* Start glance-api:2 ( rdo7-node1 )
* Start cinder-api:0 ( rdo7-node2 )
* Start cinder-api:1 ( rdo7-node3 )
* Start cinder-api:2 ( rdo7-node1 )
* Start cinder-scheduler:0 ( rdo7-node2 )
* Start cinder-scheduler:1 ( rdo7-node3 )
* Start cinder-scheduler:2 ( rdo7-node1 )
* Start cinder-volume ( rdo7-node2 )
* Start swift-account:0 ( rdo7-node3 )
* Start swift-account:1 ( rdo7-node1 )
* Start swift-account:2 ( rdo7-node2 )
* Start swift-container:0 ( rdo7-node3 )
* Start swift-container:1 ( rdo7-node1 )
* Start swift-container:2 ( rdo7-node2 )
* Start swift-object:0 ( rdo7-node3 )
* Start swift-object:1 ( rdo7-node1 )
* Start swift-object:2 ( rdo7-node2 )
* Start swift-proxy:0 ( rdo7-node3 )
* Start swift-proxy:1 ( rdo7-node1 )
* Start swift-proxy:2 ( rdo7-node2 )
* Start swift-object-expirer ( rdo7-node3 )
* Start neutron-server:0 ( rdo7-node1 )
* Start neutron-server:1 ( rdo7-node2 )
* Start neutron-server:2 ( rdo7-node3 )
* Start neutron-scale:0 ( rdo7-node1 )
* Start neutron-scale:1 ( rdo7-node2 )
* Start neutron-scale:2 ( rdo7-node3 )
* Start neutron-ovs-cleanup:0 ( rdo7-node1 )
* Start neutron-ovs-cleanup:1 ( rdo7-node2 )
* Start neutron-ovs-cleanup:2 ( rdo7-node3 )
* Start neutron-netns-cleanup:0 ( rdo7-node1 )
* Start neutron-netns-cleanup:1 ( rdo7-node2 )
* Start neutron-netns-cleanup:2 ( rdo7-node3 )
* Start neutron-openvswitch-agent:0 ( rdo7-node1 )
* Start neutron-openvswitch-agent:1 ( rdo7-node2 )
* Start neutron-openvswitch-agent:2 ( rdo7-node3 )
* Start neutron-dhcp-agent:0 ( rdo7-node1 )
* Start neutron-dhcp-agent:1 ( rdo7-node2 )
* Start neutron-dhcp-agent:2 ( rdo7-node3 )
* Start neutron-l3-agent:0 ( rdo7-node1 )
* Start neutron-l3-agent:1 ( rdo7-node2 )
* Start neutron-l3-agent:2 ( rdo7-node3 )
* Start neutron-metadata-agent:0 ( rdo7-node1 )
* Start neutron-metadata-agent:1 ( rdo7-node2 )
* Start neutron-metadata-agent:2 ( rdo7-node3 )
* Start nova-consoleauth:0 ( rdo7-node1 )
* Start nova-consoleauth:1 ( rdo7-node2 )
* Start nova-consoleauth:2 ( rdo7-node3 )
* Start nova-novncproxy:0 ( rdo7-node1 )
* Start nova-novncproxy:1 ( rdo7-node2 )
* Start nova-novncproxy:2 ( rdo7-node3 )
* Start nova-api:0 ( rdo7-node1 )
* Start nova-api:1 ( rdo7-node2 )
* Start nova-api:2 ( rdo7-node3 )
* Start nova-scheduler:0 ( rdo7-node1 )
* Start nova-scheduler:1 ( rdo7-node2 )
* Start nova-scheduler:2 ( rdo7-node3 )
* Start nova-conductor:0 ( rdo7-node1 )
* Start nova-conductor:1 ( rdo7-node2 )
* Start nova-conductor:2 ( rdo7-node3 )
* Start ceilometer-central:0 ( rdo7-node2 )
* Start ceilometer-central:1 ( rdo7-node3 )
* Start ceilometer-central:2 ( rdo7-node1 )
* Start ceilometer-collector:0 ( rdo7-node2 )
* Start ceilometer-collector:1 ( rdo7-node3 )
* Start ceilometer-collector:2 ( rdo7-node1 )
* Start ceilometer-api:0 ( rdo7-node2 )
* Start ceilometer-api:1 ( rdo7-node3 )
* Start ceilometer-api:2 ( rdo7-node1 )
* Start ceilometer-delay:0 ( rdo7-node2 )
* Start ceilometer-delay:1 ( rdo7-node3 )
* Start ceilometer-delay:2 ( rdo7-node1 )
* Start ceilometer-alarm-evaluator:0 ( rdo7-node2 )
* Start ceilometer-alarm-evaluator:1 ( rdo7-node3 )
* Start ceilometer-alarm-evaluator:2 ( rdo7-node1 )
* Start ceilometer-alarm-notifier:0 ( rdo7-node2 )
* Start ceilometer-alarm-notifier:1 ( rdo7-node3 )
* Start ceilometer-alarm-notifier:2 ( rdo7-node1 )
* Start ceilometer-notification:0 ( rdo7-node2 )
* Start ceilometer-notification:1 ( rdo7-node3 )
* Start ceilometer-notification:2 ( rdo7-node1 )
* Start heat-api:0 ( rdo7-node2 )
* Start heat-api:1 ( rdo7-node3 )
* Start heat-api:2 ( rdo7-node1 )
* Start heat-api-cfn:0 ( rdo7-node2 )
* Start heat-api-cfn:1 ( rdo7-node3 )
* Start heat-api-cfn:2 ( rdo7-node1 )
* Start heat-api-cloudwatch:0 ( rdo7-node2 )
* Start heat-api-cloudwatch:1 ( rdo7-node3 )
* Start heat-api-cloudwatch:2 ( rdo7-node1 )
* Start heat-engine:0 ( rdo7-node2 )
* Start heat-engine:1 ( rdo7-node3 )
* Start heat-engine:2 ( rdo7-node1 )
* Start neutron-openvswitch-agent-compute:0 ( mrg-07 )
* Start neutron-openvswitch-agent-compute:1 ( mrg-08 )
* Start neutron-openvswitch-agent-compute:2 ( mrg-09 )
* Start libvirtd-compute:0 ( mrg-07 )
* Start libvirtd-compute:1 ( mrg-08 )
* Start libvirtd-compute:2 ( mrg-09 )
* Start ceilometer-compute:0 ( mrg-07 )
* Start ceilometer-compute:1 ( mrg-08 )
* Start ceilometer-compute:2 ( mrg-09 )
* Start nova-compute:0 ( mrg-07 )
* Start nova-compute:1 ( mrg-08 )
* Start nova-compute:2 ( mrg-09 )
* Start fence-nova ( rdo7-node2 )
Executing Cluster Transition:
* Resource action: galera monitor=10000 on rdo7-node2
* Pseudo action: keystone-clone_start_0
* Pseudo action: nova-compute-clone_pre_notify_start_0
* Resource action: keystone start on rdo7-node2
* Resource action: keystone start on rdo7-node3
* Resource action: keystone start on rdo7-node1
* Pseudo action: keystone-clone_running_0
* Pseudo action: glance-registry-clone_start_0
* Pseudo action: cinder-api-clone_start_0
* Pseudo action: swift-account-clone_start_0
* Pseudo action: neutron-server-clone_start_0
* Pseudo action: nova-consoleauth-clone_start_0
* Pseudo action: ceilometer-central-clone_start_0
* Pseudo action: nova-compute-clone_confirmed-pre_notify_start_0
* Resource action: keystone monitor=60000 on rdo7-node2
* Resource action: keystone monitor=60000 on rdo7-node3
* Resource action: keystone monitor=60000 on rdo7-node1
* Resource action: glance-registry start on rdo7-node2
* Resource action: glance-registry start on rdo7-node3
* Resource action: glance-registry start on rdo7-node1
* Pseudo action: glance-registry-clone_running_0
* Pseudo action: glance-api-clone_start_0
* Resource action: cinder-api start on rdo7-node2
* Resource action: cinder-api start on rdo7-node3
* Resource action: cinder-api start on rdo7-node1
* Pseudo action: cinder-api-clone_running_0
* Pseudo action: cinder-scheduler-clone_start_0
* Resource action: swift-account start on rdo7-node3
* Resource action: swift-account start on rdo7-node1
* Resource action: swift-account start on rdo7-node2
* Pseudo action: swift-account-clone_running_0
* Pseudo action: swift-container-clone_start_0
* Pseudo action: swift-proxy-clone_start_0
* Resource action: neutron-server start on rdo7-node1
* Resource action: neutron-server start on rdo7-node2
* Resource action: neutron-server start on rdo7-node3
* Pseudo action: neutron-server-clone_running_0
* Pseudo action: neutron-scale-clone_start_0
* Resource action: nova-consoleauth start on rdo7-node1
* Resource action: nova-consoleauth start on rdo7-node2
* Resource action: nova-consoleauth start on rdo7-node3
* Pseudo action: nova-consoleauth-clone_running_0
* Pseudo action: nova-novncproxy-clone_start_0
* Resource action: ceilometer-central start on rdo7-node2
* Resource action: ceilometer-central start on rdo7-node3
* Resource action: ceilometer-central start on rdo7-node1
* Pseudo action: ceilometer-central-clone_running_0
* Pseudo action: ceilometer-collector-clone_start_0
* Pseudo action: clone-one-or-more:order-neutron-server-clone-neutron-openvswitch-agent-compute-clone-mandatory
* Resource action: glance-registry monitor=60000 on rdo7-node2
* Resource action: glance-registry monitor=60000 on rdo7-node3
* Resource action: glance-registry monitor=60000 on rdo7-node1
* Resource action: glance-api start on rdo7-node2
* Resource action: glance-api start on rdo7-node3
* Resource action: glance-api start on rdo7-node1
* Pseudo action: glance-api-clone_running_0
* Resource action: cinder-api monitor=60000 on rdo7-node2
* Resource action: cinder-api monitor=60000 on rdo7-node3
* Resource action: cinder-api monitor=60000 on rdo7-node1
* Resource action: cinder-scheduler start on rdo7-node2
* Resource action: cinder-scheduler start on rdo7-node3
* Resource action: cinder-scheduler start on rdo7-node1
* Pseudo action: cinder-scheduler-clone_running_0
* Resource action: cinder-volume start on rdo7-node2
* Resource action: swift-account monitor=60000 on rdo7-node3
* Resource action: swift-account monitor=60000 on rdo7-node1
* Resource action: swift-account monitor=60000 on rdo7-node2
* Resource action: swift-container start on rdo7-node3
* Resource action: swift-container start on rdo7-node1
* Resource action: swift-container start on rdo7-node2
* Pseudo action: swift-container-clone_running_0
* Pseudo action: swift-object-clone_start_0
* Resource action: swift-proxy start on rdo7-node3
* Resource action: swift-proxy start on rdo7-node1
* Resource action: swift-proxy start on rdo7-node2
* Pseudo action: swift-proxy-clone_running_0
* Resource action: swift-object-expirer start on rdo7-node3
* Resource action: neutron-server monitor=60000 on rdo7-node1
* Resource action: neutron-server monitor=60000 on rdo7-node2
* Resource action: neutron-server monitor=60000 on rdo7-node3
* Resource action: neutron-scale:0 start on rdo7-node1
* Resource action: neutron-scale:1 start on rdo7-node2
* Resource action: neutron-scale:2 start on rdo7-node3
* Pseudo action: neutron-scale-clone_running_0
* Pseudo action: neutron-ovs-cleanup-clone_start_0
* Resource action: nova-consoleauth monitor=60000 on rdo7-node1
* Resource action: nova-consoleauth monitor=60000 on rdo7-node2
* Resource action: nova-consoleauth monitor=60000 on rdo7-node3
* Resource action: nova-novncproxy start on rdo7-node1
* Resource action: nova-novncproxy start on rdo7-node2
* Resource action: nova-novncproxy start on rdo7-node3
* Pseudo action: nova-novncproxy-clone_running_0
* Pseudo action: nova-api-clone_start_0
* Resource action: ceilometer-central monitor=60000 on rdo7-node2
* Resource action: ceilometer-central monitor=60000 on rdo7-node3
* Resource action: ceilometer-central monitor=60000 on rdo7-node1
* Resource action: ceilometer-collector start on rdo7-node2
* Resource action: ceilometer-collector start on rdo7-node3
* Resource action: ceilometer-collector start on rdo7-node1
* Pseudo action: ceilometer-collector-clone_running_0
* Pseudo action: ceilometer-api-clone_start_0
* Pseudo action: neutron-openvswitch-agent-compute-clone_start_0
* Resource action: glance-api monitor=60000 on rdo7-node2
* Resource action: glance-api monitor=60000 on rdo7-node3
* Resource action: glance-api monitor=60000 on rdo7-node1
* Resource action: cinder-scheduler monitor=60000 on rdo7-node2
* Resource action: cinder-scheduler monitor=60000 on rdo7-node3
* Resource action: cinder-scheduler monitor=60000 on rdo7-node1
* Resource action: cinder-volume monitor=60000 on rdo7-node2
* Resource action: swift-container monitor=60000 on rdo7-node3
* Resource action: swift-container monitor=60000 on rdo7-node1
* Resource action: swift-container monitor=60000 on rdo7-node2
* Resource action: swift-object start on rdo7-node3
* Resource action: swift-object start on rdo7-node1
* Resource action: swift-object start on rdo7-node2
* Pseudo action: swift-object-clone_running_0
* Resource action: swift-proxy monitor=60000 on rdo7-node3
* Resource action: swift-proxy monitor=60000 on rdo7-node1
* Resource action: swift-proxy monitor=60000 on rdo7-node2
* Resource action: swift-object-expirer monitor=60000 on rdo7-node3
* Resource action: neutron-scale:0 monitor=10000 on rdo7-node1
* Resource action: neutron-scale:1 monitor=10000 on rdo7-node2
* Resource action: neutron-scale:2 monitor=10000 on rdo7-node3
* Resource action: neutron-ovs-cleanup start on rdo7-node1
* Resource action: neutron-ovs-cleanup start on rdo7-node2
* Resource action: neutron-ovs-cleanup start on rdo7-node3
* Pseudo action: neutron-ovs-cleanup-clone_running_0
* Pseudo action: neutron-netns-cleanup-clone_start_0
* Resource action: nova-novncproxy monitor=60000 on rdo7-node1
* Resource action: nova-novncproxy monitor=60000 on rdo7-node2
* Resource action: nova-novncproxy monitor=60000 on rdo7-node3
* Resource action: nova-api start on rdo7-node1
* Resource action: nova-api start on rdo7-node2
* Resource action: nova-api start on rdo7-node3
* Pseudo action: nova-api-clone_running_0
* Pseudo action: nova-scheduler-clone_start_0
* Resource action: ceilometer-collector monitor=60000 on rdo7-node2
* Resource action: ceilometer-collector monitor=60000 on rdo7-node3
* Resource action: ceilometer-collector monitor=60000 on rdo7-node1
* Resource action: ceilometer-api start on rdo7-node2
* Resource action: ceilometer-api start on rdo7-node3
* Resource action: ceilometer-api start on rdo7-node1
* Pseudo action: ceilometer-api-clone_running_0
* Pseudo action: ceilometer-delay-clone_start_0
* Resource action: neutron-openvswitch-agent-compute start on mrg-07
* Resource action: neutron-openvswitch-agent-compute start on mrg-08
* Resource action: neutron-openvswitch-agent-compute start on mrg-09
* Pseudo action: neutron-openvswitch-agent-compute-clone_running_0
* Pseudo action: libvirtd-compute-clone_start_0
* Resource action: swift-object monitor=60000 on rdo7-node3
* Resource action: swift-object monitor=60000 on rdo7-node1
* Resource action: swift-object monitor=60000 on rdo7-node2
* Resource action: neutron-ovs-cleanup monitor=10000 on rdo7-node1
* Resource action: neutron-ovs-cleanup monitor=10000 on rdo7-node2
* Resource action: neutron-ovs-cleanup monitor=10000 on rdo7-node3
* Resource action: neutron-netns-cleanup start on rdo7-node1
* Resource action: neutron-netns-cleanup start on rdo7-node2
* Resource action: neutron-netns-cleanup start on rdo7-node3
* Pseudo action: neutron-netns-cleanup-clone_running_0
* Pseudo action: neutron-openvswitch-agent-clone_start_0
* Resource action: nova-api monitor=60000 on rdo7-node1
* Resource action: nova-api monitor=60000 on rdo7-node2
* Resource action: nova-api monitor=60000 on rdo7-node3
* Resource action: nova-scheduler start on rdo7-node1
* Resource action: nova-scheduler start on rdo7-node2
* Resource action: nova-scheduler start on rdo7-node3
* Pseudo action: nova-scheduler-clone_running_0
* Pseudo action: nova-conductor-clone_start_0
* Resource action: ceilometer-api monitor=60000 on rdo7-node2
* Resource action: ceilometer-api monitor=60000 on rdo7-node3
* Resource action: ceilometer-api monitor=60000 on rdo7-node1
* Resource action: ceilometer-delay start on rdo7-node2
* Resource action: ceilometer-delay start on rdo7-node3
* Resource action: ceilometer-delay start on rdo7-node1
* Pseudo action: ceilometer-delay-clone_running_0
* Pseudo action: ceilometer-alarm-evaluator-clone_start_0
* Resource action: neutron-openvswitch-agent-compute monitor=60000 on mrg-07
* Resource action: neutron-openvswitch-agent-compute monitor=60000 on mrg-08
* Resource action: neutron-openvswitch-agent-compute monitor=60000 on mrg-09
* Resource action: libvirtd-compute start on mrg-07
* Resource action: libvirtd-compute start on mrg-08
* Resource action: libvirtd-compute start on mrg-09
* Pseudo action: libvirtd-compute-clone_running_0
* Resource action: neutron-netns-cleanup monitor=10000 on rdo7-node1
* Resource action: neutron-netns-cleanup monitor=10000 on rdo7-node2
* Resource action: neutron-netns-cleanup monitor=10000 on rdo7-node3
* Resource action: neutron-openvswitch-agent start on rdo7-node1
* Resource action: neutron-openvswitch-agent start on rdo7-node2
* Resource action: neutron-openvswitch-agent start on rdo7-node3
* Pseudo action: neutron-openvswitch-agent-clone_running_0
* Pseudo action: neutron-dhcp-agent-clone_start_0
* Resource action: nova-scheduler monitor=60000 on rdo7-node1
* Resource action: nova-scheduler monitor=60000 on rdo7-node2
* Resource action: nova-scheduler monitor=60000 on rdo7-node3
* Resource action: nova-conductor start on rdo7-node1
* Resource action: nova-conductor start on rdo7-node2
* Resource action: nova-conductor start on rdo7-node3
* Pseudo action: nova-conductor-clone_running_0
* Resource action: ceilometer-delay monitor=10000 on rdo7-node2
* Resource action: ceilometer-delay monitor=10000 on rdo7-node3
* Resource action: ceilometer-delay monitor=10000 on rdo7-node1
* Resource action: ceilometer-alarm-evaluator start on rdo7-node2
* Resource action: ceilometer-alarm-evaluator start on rdo7-node3
* Resource action: ceilometer-alarm-evaluator start on rdo7-node1
* Pseudo action: ceilometer-alarm-evaluator-clone_running_0
* Pseudo action: ceilometer-alarm-notifier-clone_start_0
* Resource action: libvirtd-compute monitor=60000 on mrg-07
* Resource action: libvirtd-compute monitor=60000 on mrg-08
* Resource action: libvirtd-compute monitor=60000 on mrg-09
* Resource action: fence-nova start on rdo7-node2
* Pseudo action: clone-one-or-more:order-nova-conductor-clone-nova-compute-clone-mandatory
* Resource action: neutron-openvswitch-agent monitor=60000 on rdo7-node1
* Resource action: neutron-openvswitch-agent monitor=60000 on rdo7-node2
* Resource action: neutron-openvswitch-agent monitor=60000 on rdo7-node3
* Resource action: neutron-dhcp-agent start on rdo7-node1
* Resource action: neutron-dhcp-agent start on rdo7-node2
* Resource action: neutron-dhcp-agent start on rdo7-node3
* Pseudo action: neutron-dhcp-agent-clone_running_0
* Pseudo action: neutron-l3-agent-clone_start_0
* Resource action: nova-conductor monitor=60000 on rdo7-node1
* Resource action: nova-conductor monitor=60000 on rdo7-node2
* Resource action: nova-conductor monitor=60000 on rdo7-node3
* Resource action: ceilometer-alarm-evaluator monitor=60000 on rdo7-node2
* Resource action: ceilometer-alarm-evaluator monitor=60000 on rdo7-node3
* Resource action: ceilometer-alarm-evaluator monitor=60000 on rdo7-node1
* Resource action: ceilometer-alarm-notifier start on rdo7-node2
* Resource action: ceilometer-alarm-notifier start on rdo7-node3
* Resource action: ceilometer-alarm-notifier start on rdo7-node1
* Pseudo action: ceilometer-alarm-notifier-clone_running_0
* Pseudo action: ceilometer-notification-clone_start_0
* Resource action: fence-nova monitor=60000 on rdo7-node2
* Resource action: neutron-dhcp-agent monitor=60000 on rdo7-node1
* Resource action: neutron-dhcp-agent monitor=60000 on rdo7-node2
* Resource action: neutron-dhcp-agent monitor=60000 on rdo7-node3
* Resource action: neutron-l3-agent start on rdo7-node1
* Resource action: neutron-l3-agent start on rdo7-node2
* Resource action: neutron-l3-agent start on rdo7-node3
* Pseudo action: neutron-l3-agent-clone_running_0
* Pseudo action: neutron-metadata-agent-clone_start_0
* Resource action: ceilometer-alarm-notifier monitor=60000 on rdo7-node2
* Resource action: ceilometer-alarm-notifier monitor=60000 on rdo7-node3
* Resource action: ceilometer-alarm-notifier monitor=60000 on rdo7-node1
* Resource action: ceilometer-notification start on rdo7-node2
* Resource action: ceilometer-notification start on rdo7-node3
* Resource action: ceilometer-notification start on rdo7-node1
* Pseudo action: ceilometer-notification-clone_running_0
* Pseudo action: heat-api-clone_start_0
* Pseudo action: clone-one-or-more:order-ceilometer-notification-clone-ceilometer-compute-clone-mandatory
* Resource action: neutron-l3-agent monitor=60000 on rdo7-node1
* Resource action: neutron-l3-agent monitor=60000 on rdo7-node2
* Resource action: neutron-l3-agent monitor=60000 on rdo7-node3
* Resource action: neutron-metadata-agent start on rdo7-node1
* Resource action: neutron-metadata-agent start on rdo7-node2
* Resource action: neutron-metadata-agent start on rdo7-node3
* Pseudo action: neutron-metadata-agent-clone_running_0
* Resource action: ceilometer-notification monitor=60000 on rdo7-node2
* Resource action: ceilometer-notification monitor=60000 on rdo7-node3
* Resource action: ceilometer-notification monitor=60000 on rdo7-node1
* Resource action: heat-api start on rdo7-node2
* Resource action: heat-api start on rdo7-node3
* Resource action: heat-api start on rdo7-node1
* Pseudo action: heat-api-clone_running_0
* Pseudo action: heat-api-cfn-clone_start_0
* Pseudo action: ceilometer-compute-clone_start_0
* Resource action: neutron-metadata-agent monitor=60000 on rdo7-node1
* Resource action: neutron-metadata-agent monitor=60000 on rdo7-node2
* Resource action: neutron-metadata-agent monitor=60000 on rdo7-node3
* Resource action: heat-api monitor=60000 on rdo7-node2
* Resource action: heat-api monitor=60000 on rdo7-node3
* Resource action: heat-api monitor=60000 on rdo7-node1
* Resource action: heat-api-cfn start on rdo7-node2
* Resource action: heat-api-cfn start on rdo7-node3
* Resource action: heat-api-cfn start on rdo7-node1
* Pseudo action: heat-api-cfn-clone_running_0
* Pseudo action: heat-api-cloudwatch-clone_start_0
* Resource action: ceilometer-compute start on mrg-07
* Resource action: ceilometer-compute start on mrg-08
* Resource action: ceilometer-compute start on mrg-09
* Pseudo action: ceilometer-compute-clone_running_0
* Pseudo action: nova-compute-clone_start_0
* Resource action: heat-api-cfn monitor=60000 on rdo7-node2
* Resource action: heat-api-cfn monitor=60000 on rdo7-node3
* Resource action: heat-api-cfn monitor=60000 on rdo7-node1
* Resource action: heat-api-cloudwatch start on rdo7-node2
* Resource action: heat-api-cloudwatch start on rdo7-node3
* Resource action: heat-api-cloudwatch start on rdo7-node1
* Pseudo action: heat-api-cloudwatch-clone_running_0
* Pseudo action: heat-engine-clone_start_0
* Resource action: ceilometer-compute monitor=60000 on mrg-07
* Resource action: ceilometer-compute monitor=60000 on mrg-08
* Resource action: ceilometer-compute monitor=60000 on mrg-09
* Resource action: nova-compute start on mrg-07
* Resource action: nova-compute start on mrg-08
* Resource action: nova-compute start on mrg-09
* Pseudo action: nova-compute-clone_running_0
* Resource action: heat-api-cloudwatch monitor=60000 on rdo7-node2
* Resource action: heat-api-cloudwatch monitor=60000 on rdo7-node3
* Resource action: heat-api-cloudwatch monitor=60000 on rdo7-node1
* Resource action: heat-engine start on rdo7-node2
* Resource action: heat-engine start on rdo7-node3
* Resource action: heat-engine start on rdo7-node1
* Pseudo action: heat-engine-clone_running_0
* Pseudo action: nova-compute-clone_post_notify_running_0
* Resource action: heat-engine monitor=60000 on rdo7-node2
* Resource action: heat-engine monitor=60000 on rdo7-node3
* Resource action: heat-engine monitor=60000 on rdo7-node1
* Resource action: nova-compute notify on mrg-07
* Resource action: nova-compute notify on mrg-08
* Resource action: nova-compute notify on mrg-09
* Pseudo action: nova-compute-clone_confirmed-post_notify_running_0
* Resource action: nova-compute monitor=10000 on mrg-07
* Resource action: nova-compute monitor=10000 on mrg-08
* Resource action: nova-compute monitor=10000 on mrg-09
Revised Cluster Status:
* Node List:
* Online: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
* RemoteOnline: [ mrg-07 mrg-08 mrg-09 ]
* Full List of Resources:
* fence1 (stonith:fence_xvm): Started rdo7-node2
* fence2 (stonith:fence_xvm): Started rdo7-node1
* fence3 (stonith:fence_xvm): Started rdo7-node3
* Clone Set: lb-haproxy-clone [lb-haproxy]:
* Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
* Stopped: [ mrg-07 mrg-08 mrg-09 ]
* vip-db (ocf:heartbeat:IPaddr2): Started rdo7-node3
* vip-rabbitmq (ocf:heartbeat:IPaddr2): Started rdo7-node1
* vip-keystone (ocf:heartbeat:IPaddr2): Started rdo7-node2
* vip-glance (ocf:heartbeat:IPaddr2): Started rdo7-node3
* vip-cinder (ocf:heartbeat:IPaddr2): Started rdo7-node1
* vip-swift (ocf:heartbeat:IPaddr2): Started rdo7-node2
* vip-neutron (ocf:heartbeat:IPaddr2): Started rdo7-node2
* vip-nova (ocf:heartbeat:IPaddr2): Started rdo7-node1
* vip-horizon (ocf:heartbeat:IPaddr2): Started rdo7-node3
* vip-heat (ocf:heartbeat:IPaddr2): Started rdo7-node1
* vip-ceilometer (ocf:heartbeat:IPaddr2): Started rdo7-node2
* vip-qpid (ocf:heartbeat:IPaddr2): Started rdo7-node3
* vip-node (ocf:heartbeat:IPaddr2): Started rdo7-node1
* Clone Set: galera-master [galera] (promotable):
* Promoted: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
* Stopped: [ mrg-07 mrg-08 mrg-09 ]
* Clone Set: rabbitmq-server-clone [rabbitmq-server]:
* Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
* Stopped: [ mrg-07 mrg-08 mrg-09 ]
* Clone Set: memcached-clone [memcached]:
* Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
* Stopped: [ mrg-07 mrg-08 mrg-09 ]
* Clone Set: mongodb-clone [mongodb]:
* Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
* Stopped: [ mrg-07 mrg-08 mrg-09 ]
* Clone Set: keystone-clone [keystone]:
* Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
* Stopped: [ mrg-07 mrg-08 mrg-09 ]
* Clone Set: glance-fs-clone [glance-fs]:
* Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
* Stopped: [ mrg-07 mrg-08 mrg-09 ]
* Clone Set: glance-registry-clone [glance-registry]:
* Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
* Stopped: [ mrg-07 mrg-08 mrg-09 ]
* Clone Set: glance-api-clone [glance-api]:
* Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
* Stopped: [ mrg-07 mrg-08 mrg-09 ]
* Clone Set: cinder-api-clone [cinder-api]:
* Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
* Stopped: [ mrg-07 mrg-08 mrg-09 ]
* Clone Set: cinder-scheduler-clone [cinder-scheduler]:
* Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
* Stopped: [ mrg-07 mrg-08 mrg-09 ]
* cinder-volume (systemd:openstack-cinder-volume): Started rdo7-node2
* Clone Set: swift-fs-clone [swift-fs]:
* Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
* Stopped: [ mrg-07 mrg-08 mrg-09 ]
* Clone Set: swift-account-clone [swift-account]:
* Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
* Stopped: [ mrg-07 mrg-08 mrg-09 ]
* Clone Set: swift-container-clone [swift-container]:
* Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
* Stopped: [ mrg-07 mrg-08 mrg-09 ]
* Clone Set: swift-object-clone [swift-object]:
* Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
* Stopped: [ mrg-07 mrg-08 mrg-09 ]
* Clone Set: swift-proxy-clone [swift-proxy]:
* Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
* Stopped: [ mrg-07 mrg-08 mrg-09 ]
* swift-object-expirer (systemd:openstack-swift-object-expirer): Started rdo7-node3
* Clone Set: neutron-server-clone [neutron-server]:
* Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
* Stopped: [ mrg-07 mrg-08 mrg-09 ]
* Clone Set: neutron-scale-clone [neutron-scale] (unique):
* neutron-scale:0 (ocf:neutron:NeutronScale): Started rdo7-node1
* neutron-scale:1 (ocf:neutron:NeutronScale): Started rdo7-node2
* neutron-scale:2 (ocf:neutron:NeutronScale): Started rdo7-node3
* Clone Set: neutron-ovs-cleanup-clone [neutron-ovs-cleanup]:
* Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
* Stopped: [ mrg-07 mrg-08 mrg-09 ]
* Clone Set: neutron-netns-cleanup-clone [neutron-netns-cleanup]:
* Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
* Stopped: [ mrg-07 mrg-08 mrg-09 ]
* Clone Set: neutron-openvswitch-agent-clone [neutron-openvswitch-agent]:
* Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
* Stopped: [ mrg-07 mrg-08 mrg-09 ]
* Clone Set: neutron-dhcp-agent-clone [neutron-dhcp-agent]:
* Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
* Stopped: [ mrg-07 mrg-08 mrg-09 ]
* Clone Set: neutron-l3-agent-clone [neutron-l3-agent]:
* Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
* Stopped: [ mrg-07 mrg-08 mrg-09 ]
* Clone Set: neutron-metadata-agent-clone [neutron-metadata-agent]:
* Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
* Stopped: [ mrg-07 mrg-08 mrg-09 ]
* Clone Set: nova-consoleauth-clone [nova-consoleauth]:
* Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
* Stopped: [ mrg-07 mrg-08 mrg-09 ]
* Clone Set: nova-novncproxy-clone [nova-novncproxy]:
* Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
* Stopped: [ mrg-07 mrg-08 mrg-09 ]
* Clone Set: nova-api-clone [nova-api]:
* Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
* Stopped: [ mrg-07 mrg-08 mrg-09 ]
* Clone Set: nova-scheduler-clone [nova-scheduler]:
* Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
* Stopped: [ mrg-07 mrg-08 mrg-09 ]
* Clone Set: nova-conductor-clone [nova-conductor]:
* Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
* Stopped: [ mrg-07 mrg-08 mrg-09 ]
* Clone Set: redis-master [redis] (promotable):
* Promoted: [ rdo7-node1 ]
* Unpromoted: [ rdo7-node2 rdo7-node3 ]
* Stopped: [ mrg-07 mrg-08 mrg-09 ]
* vip-redis (ocf:heartbeat:IPaddr2): Started rdo7-node1
* Clone Set: ceilometer-central-clone [ceilometer-central]:
* Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
* Stopped: [ mrg-07 mrg-08 mrg-09 ]
* Clone Set: ceilometer-collector-clone [ceilometer-collector]:
* Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
* Stopped: [ mrg-07 mrg-08 mrg-09 ]
* Clone Set: ceilometer-api-clone [ceilometer-api]:
* Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
* Stopped: [ mrg-07 mrg-08 mrg-09 ]
* Clone Set: ceilometer-delay-clone [ceilometer-delay]:
* Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
* Stopped: [ mrg-07 mrg-08 mrg-09 ]
* Clone Set: ceilometer-alarm-evaluator-clone [ceilometer-alarm-evaluator]:
* Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
* Stopped: [ mrg-07 mrg-08 mrg-09 ]
* Clone Set: ceilometer-alarm-notifier-clone [ceilometer-alarm-notifier]:
* Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
* Stopped: [ mrg-07 mrg-08 mrg-09 ]
* Clone Set: ceilometer-notification-clone [ceilometer-notification]:
* Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
* Stopped: [ mrg-07 mrg-08 mrg-09 ]
* Clone Set: heat-api-clone [heat-api]:
* Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
* Stopped: [ mrg-07 mrg-08 mrg-09 ]
* Clone Set: heat-api-cfn-clone [heat-api-cfn]:
* Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
* Stopped: [ mrg-07 mrg-08 mrg-09 ]
* Clone Set: heat-api-cloudwatch-clone [heat-api-cloudwatch]:
* Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
* Stopped: [ mrg-07 mrg-08 mrg-09 ]
* Clone Set: heat-engine-clone [heat-engine]:
* Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
* Stopped: [ mrg-07 mrg-08 mrg-09 ]
* Clone Set: horizon-clone [horizon]:
* Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
* Stopped: [ mrg-07 mrg-08 mrg-09 ]
* Clone Set: neutron-openvswitch-agent-compute-clone [neutron-openvswitch-agent-compute]:
* Started: [ mrg-07 mrg-08 mrg-09 ]
* Stopped: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
* Clone Set: libvirtd-compute-clone [libvirtd-compute]:
* Started: [ mrg-07 mrg-08 mrg-09 ]
* Stopped: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
* Clone Set: ceilometer-compute-clone [ceilometer-compute]:
* Started: [ mrg-07 mrg-08 mrg-09 ]
* Stopped: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
* Clone Set: nova-compute-clone [nova-compute]:
* Started: [ mrg-07 mrg-08 mrg-09 ]
* Stopped: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
* fence-nova (stonith:fence_compute): Started rdo7-node2
* fence-compute (stonith:fence_apc_snmp): Started rdo7-node3
* mrg-07 (ocf:pacemaker:remote): Started rdo7-node1
* mrg-08 (ocf:pacemaker:remote): Started rdo7-node2
* mrg-09 (ocf:pacemaker:remote): Started rdo7-node3
diff --git a/cts/scheduler/summary/order-serialize-set.summary b/cts/scheduler/summary/order-serialize-set.summary
index b0b759b51c..54fd7b18d4 100644
--- a/cts/scheduler/summary/order-serialize-set.summary
+++ b/cts/scheduler/summary/order-serialize-set.summary
@@ -1,73 +1,75 @@
Current cluster status:
* Node List:
* Node xen-a: standby (with active resources)
* Online: [ xen-b ]
* Full List of Resources:
* xen-a-fencing (stonith:external/ipmi): Started xen-b
* xen-b-fencing (stonith:external/ipmi): Started xen-a
* db (ocf:heartbeat:Xen): Started xen-a
* dbreplica (ocf:heartbeat:Xen): Started xen-b
* core-101 (ocf:heartbeat:Xen): Started xen-a
* core-200 (ocf:heartbeat:Xen): Started xen-a
* sysadmin (ocf:heartbeat:Xen): Started xen-b
* edge (ocf:heartbeat:Xen): Started xen-a
* base (ocf:heartbeat:Xen): Started xen-a
* Email_Alerting (ocf:heartbeat:MailTo): Started xen-b
+warning: Ignoring symmetrical for 'serialize-xen' because not valid with kind of 'Serialize'
+warning: Ignoring symmetrical for 'xen-set' because not valid with kind of 'Serialize'
Transition Summary:
* Restart xen-a-fencing ( xen-b ) due to resource definition change
* Stop xen-b-fencing ( xen-a ) due to node availability
* Migrate db ( xen-a -> xen-b )
* Migrate core-101 ( xen-a -> xen-b )
* Migrate core-200 ( xen-a -> xen-b )
* Migrate edge ( xen-a -> xen-b )
* Migrate base ( xen-a -> xen-b )
Executing Cluster Transition:
* Resource action: xen-a-fencing stop on xen-b
* Resource action: xen-a-fencing start on xen-b
* Resource action: xen-a-fencing monitor=60000 on xen-b
* Resource action: xen-b-fencing stop on xen-a
* Resource action: db migrate_to on xen-a
* Resource action: db migrate_from on xen-b
* Resource action: db stop on xen-a
* Resource action: core-101 migrate_to on xen-a
* Pseudo action: db_start_0
* Resource action: core-101 migrate_from on xen-b
* Resource action: core-101 stop on xen-a
* Resource action: core-200 migrate_to on xen-a
* Resource action: db monitor=10000 on xen-b
* Pseudo action: core-101_start_0
* Resource action: core-200 migrate_from on xen-b
* Resource action: core-200 stop on xen-a
* Resource action: edge migrate_to on xen-a
* Resource action: core-101 monitor=10000 on xen-b
* Pseudo action: core-200_start_0
* Resource action: edge migrate_from on xen-b
* Resource action: edge stop on xen-a
* Resource action: base migrate_to on xen-a
* Resource action: core-200 monitor=10000 on xen-b
* Pseudo action: edge_start_0
* Resource action: base migrate_from on xen-b
* Resource action: base stop on xen-a
* Resource action: edge monitor=10000 on xen-b
* Pseudo action: base_start_0
* Resource action: base monitor=10000 on xen-b
Revised Cluster Status:
* Node List:
* Node xen-a: standby
* Online: [ xen-b ]
* Full List of Resources:
* xen-a-fencing (stonith:external/ipmi): Started xen-b
* xen-b-fencing (stonith:external/ipmi): Stopped
* db (ocf:heartbeat:Xen): Started xen-b
* dbreplica (ocf:heartbeat:Xen): Started xen-b
* core-101 (ocf:heartbeat:Xen): Started xen-b
* core-200 (ocf:heartbeat:Xen): Started xen-b
* sysadmin (ocf:heartbeat:Xen): Started xen-b
* edge (ocf:heartbeat:Xen): Started xen-b
* base (ocf:heartbeat:Xen): Started xen-b
* Email_Alerting (ocf:heartbeat:MailTo): Started xen-b
diff --git a/cts/scheduler/summary/order-wrong-kind.summary b/cts/scheduler/summary/order-wrong-kind.summary
index 903a25c723..48c3454621 100644
--- a/cts/scheduler/summary/order-wrong-kind.summary
+++ b/cts/scheduler/summary/order-wrong-kind.summary
@@ -1,29 +1,36 @@
+warning: Support for validate-with='none' is deprecated and will be removed in a future release without the possibility of upgrades (manually edit to use a supported schema)
Schema validation of configuration is disabled (support for validate-with set to "none" is deprecated and will be removed in a future release)
+warning: Support for validate-with='none' is deprecated and will be removed in a future release without the possibility of upgrades (manually edit to use a supported schema)
+warning: Support for validate-with='none' is deprecated and will be removed in a future release without the possibility of upgrades (manually edit to use a supported schema)
Current cluster status:
* Node List:
* Online: [ node1 ]
* Full List of Resources:
* rsc1 (ocf:heartbeat:apache): Stopped
* rsc2 (ocf:heartbeat:apache): Started node1
* rsc3 (ocf:heartbeat:apache): Stopped
* rsc4 (ocf:heartbeat:apache): Started node1
+error: Resetting 'kind' for constraint order1 to 'Mandatory' because 'foo' is not valid
+error: Resetting 'kind' for constraint order1 to 'Mandatory' because 'foo' is not valid
+error: Resetting 'kind' for constraint order1 to 'Mandatory' because 'foo' is not valid
+error: Resetting 'kind' for constraint order1 to 'Mandatory' because 'foo' is not valid
Transition Summary:
* Start rsc1 ( node1 )
* Restart rsc2 ( node1 ) due to required rsc1 start
Executing Cluster Transition:
* Resource action: rsc1 start on node1
* Resource action: rsc2 stop on node1
* Resource action: rsc2 start on node1
Revised Cluster Status:
* Node List:
* Online: [ node1 ]
* Full List of Resources:
* rsc1 (ocf:heartbeat:apache): Started node1
* rsc2 (ocf:heartbeat:apache): Started node1
* rsc3 (ocf:heartbeat:apache): Stopped
* rsc4 (ocf:heartbeat:apache): Started node1
diff --git a/cts/scheduler/summary/ordered-set-natural.summary b/cts/scheduler/summary/ordered-set-natural.summary
index b944e0d6f4..bf96e250f7 100644
--- a/cts/scheduler/summary/ordered-set-natural.summary
+++ b/cts/scheduler/summary/ordered-set-natural.summary
@@ -1,55 +1,56 @@
3 of 15 resource instances DISABLED and 0 BLOCKED from further action due to failure
Current cluster status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* Resource Group: rgroup:
* dummy1-1 (ocf:heartbeat:Dummy): Stopped
* dummy1-2 (ocf:heartbeat:Dummy): Stopped
* dummy1-3 (ocf:heartbeat:Dummy): Stopped (disabled)
* dummy1-4 (ocf:heartbeat:Dummy): Stopped
* dummy1-5 (ocf:heartbeat:Dummy): Stopped
* dummy2-1 (ocf:heartbeat:Dummy): Stopped
* dummy2-2 (ocf:heartbeat:Dummy): Stopped
* dummy2-3 (ocf:heartbeat:Dummy): Stopped (disabled)
* dummy3-1 (ocf:heartbeat:Dummy): Stopped
* dummy3-2 (ocf:heartbeat:Dummy): Stopped
* dummy3-3 (ocf:heartbeat:Dummy): Stopped (disabled)
* dummy3-4 (ocf:heartbeat:Dummy): Stopped
* dummy3-5 (ocf:heartbeat:Dummy): Stopped
* dummy2-4 (ocf:heartbeat:Dummy): Stopped
* dummy2-5 (ocf:heartbeat:Dummy): Stopped
+warning: Support for 'ordering' other than 'group' in resource_set (such as pcs_rsc_set_dummy3-1_dummy3-2_dummy3-3_dummy3-4_dummy3-5-1) is deprecated and will be removed in a future release
Transition Summary:
* Start dummy1-1 ( node1 ) due to no quorum (blocked)
* Start dummy1-2 ( node1 ) due to no quorum (blocked)
* Start dummy2-1 ( node2 ) due to no quorum (blocked)
* Start dummy2-2 ( node2 ) due to no quorum (blocked)
* Start dummy3-4 ( node1 ) due to no quorum (blocked)
* Start dummy3-5 ( node1 ) due to no quorum (blocked)
Executing Cluster Transition:
Revised Cluster Status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* Resource Group: rgroup:
* dummy1-1 (ocf:heartbeat:Dummy): Stopped
* dummy1-2 (ocf:heartbeat:Dummy): Stopped
* dummy1-3 (ocf:heartbeat:Dummy): Stopped (disabled)
* dummy1-4 (ocf:heartbeat:Dummy): Stopped
* dummy1-5 (ocf:heartbeat:Dummy): Stopped
* dummy2-1 (ocf:heartbeat:Dummy): Stopped
* dummy2-2 (ocf:heartbeat:Dummy): Stopped
* dummy2-3 (ocf:heartbeat:Dummy): Stopped (disabled)
* dummy3-1 (ocf:heartbeat:Dummy): Stopped
* dummy3-2 (ocf:heartbeat:Dummy): Stopped
* dummy3-3 (ocf:heartbeat:Dummy): Stopped (disabled)
* dummy3-4 (ocf:heartbeat:Dummy): Stopped
* dummy3-5 (ocf:heartbeat:Dummy): Stopped
* dummy2-4 (ocf:heartbeat:Dummy): Stopped
* dummy2-5 (ocf:heartbeat:Dummy): Stopped
diff --git a/cts/scheduler/summary/priority-fencing-delay.summary b/cts/scheduler/summary/priority-fencing-delay.summary
index ce5aff2562..0c6bc702f2 100644
--- a/cts/scheduler/summary/priority-fencing-delay.summary
+++ b/cts/scheduler/summary/priority-fencing-delay.summary
@@ -1,104 +1,110 @@
Current cluster status:
* Node List:
* Node kiff-01: UNCLEAN (offline)
* Online: [ kiff-02 ]
* GuestOnline: [ lxc-01_kiff-02 lxc-02_kiff-02 ]
* Full List of Resources:
* vm-fs (ocf:heartbeat:Filesystem): FAILED lxc-01_kiff-01
* R-lxc-01_kiff-02 (ocf:heartbeat:VirtualDomain): Started kiff-02
* fence-kiff-01 (stonith:fence_ipmilan): Started kiff-02
* fence-kiff-02 (stonith:fence_ipmilan): Started kiff-01 (UNCLEAN)
* Clone Set: dlm-clone [dlm]:
* dlm (ocf:pacemaker:controld): Started kiff-01 (UNCLEAN)
* Started: [ kiff-02 ]
* Stopped: [ lxc-01_kiff-01 lxc-01_kiff-02 lxc-02_kiff-01 lxc-02_kiff-02 ]
* Clone Set: clvmd-clone [clvmd]:
* clvmd (ocf:heartbeat:clvm): Started kiff-01 (UNCLEAN)
* Started: [ kiff-02 ]
* Stopped: [ lxc-01_kiff-01 lxc-01_kiff-02 lxc-02_kiff-01 lxc-02_kiff-02 ]
* Clone Set: shared0-clone [shared0]:
* shared0 (ocf:heartbeat:Filesystem): Started kiff-01 (UNCLEAN)
* Started: [ kiff-02 ]
* Stopped: [ lxc-01_kiff-01 lxc-01_kiff-02 lxc-02_kiff-01 lxc-02_kiff-02 ]
* R-lxc-01_kiff-01 (ocf:heartbeat:VirtualDomain): FAILED kiff-01 (UNCLEAN)
* R-lxc-02_kiff-01 (ocf:heartbeat:VirtualDomain): Started kiff-01 (UNCLEAN)
* R-lxc-02_kiff-02 (ocf:heartbeat:VirtualDomain): Started kiff-02
+warning: Invalid ordering constraint between shared0:0 and R-lxc-02_kiff-02
+warning: Invalid ordering constraint between clvmd:0 and R-lxc-02_kiff-02
+warning: Invalid ordering constraint between dlm:0 and R-lxc-02_kiff-02
+warning: Invalid ordering constraint between shared0:0 and R-lxc-01_kiff-02
+warning: Invalid ordering constraint between clvmd:0 and R-lxc-01_kiff-02
+warning: Invalid ordering constraint between dlm:0 and R-lxc-01_kiff-02
Transition Summary:
* Fence (reboot) lxc-02_kiff-01 (resource: R-lxc-02_kiff-01) 'guest is unclean'
* Fence (reboot) lxc-01_kiff-01 (resource: R-lxc-01_kiff-01) 'guest is unclean'
* Fence (reboot) kiff-01 'peer is no longer part of the cluster'
* Recover vm-fs ( lxc-01_kiff-01 )
* Move fence-kiff-02 ( kiff-01 -> kiff-02 )
* Stop dlm:0 ( kiff-01 ) due to node availability
* Stop clvmd:0 ( kiff-01 ) due to node availability
* Stop shared0:0 ( kiff-01 ) due to node availability
* Recover R-lxc-01_kiff-01 ( kiff-01 -> kiff-02 )
* Move R-lxc-02_kiff-01 ( kiff-01 -> kiff-02 )
* Move lxc-01_kiff-01 ( kiff-01 -> kiff-02 )
* Move lxc-02_kiff-01 ( kiff-01 -> kiff-02 )
Executing Cluster Transition:
* Resource action: vm-fs monitor on lxc-02_kiff-02
* Resource action: vm-fs monitor on lxc-01_kiff-02
* Pseudo action: fence-kiff-02_stop_0
* Resource action: dlm monitor on lxc-02_kiff-02
* Resource action: dlm monitor on lxc-01_kiff-02
* Resource action: clvmd monitor on lxc-02_kiff-02
* Resource action: clvmd monitor on lxc-01_kiff-02
* Resource action: shared0 monitor on lxc-02_kiff-02
* Resource action: shared0 monitor on lxc-01_kiff-02
* Pseudo action: lxc-01_kiff-01_stop_0
* Pseudo action: lxc-02_kiff-01_stop_0
* Fencing kiff-01 (reboot)
* Pseudo action: R-lxc-01_kiff-01_stop_0
* Pseudo action: R-lxc-02_kiff-01_stop_0
* Pseudo action: stonith-lxc-02_kiff-01-reboot on lxc-02_kiff-01
* Pseudo action: stonith-lxc-01_kiff-01-reboot on lxc-01_kiff-01
* Pseudo action: vm-fs_stop_0
* Resource action: fence-kiff-02 start on kiff-02
* Pseudo action: shared0-clone_stop_0
* Resource action: R-lxc-01_kiff-01 start on kiff-02
* Resource action: R-lxc-02_kiff-01 start on kiff-02
* Resource action: lxc-01_kiff-01 start on kiff-02
* Resource action: lxc-02_kiff-01 start on kiff-02
* Resource action: vm-fs start on lxc-01_kiff-01
* Resource action: fence-kiff-02 monitor=60000 on kiff-02
* Pseudo action: shared0_stop_0
* Pseudo action: shared0-clone_stopped_0
* Resource action: R-lxc-01_kiff-01 monitor=10000 on kiff-02
* Resource action: R-lxc-02_kiff-01 monitor=10000 on kiff-02
* Resource action: lxc-01_kiff-01 monitor=30000 on kiff-02
* Resource action: lxc-02_kiff-01 monitor=30000 on kiff-02
* Resource action: vm-fs monitor=20000 on lxc-01_kiff-01
* Pseudo action: clvmd-clone_stop_0
* Pseudo action: clvmd_stop_0
* Pseudo action: clvmd-clone_stopped_0
* Pseudo action: dlm-clone_stop_0
* Pseudo action: dlm_stop_0
* Pseudo action: dlm-clone_stopped_0
Revised Cluster Status:
* Node List:
* Online: [ kiff-02 ]
* OFFLINE: [ kiff-01 ]
* GuestOnline: [ lxc-01_kiff-01 lxc-01_kiff-02 lxc-02_kiff-01 lxc-02_kiff-02 ]
* Full List of Resources:
* vm-fs (ocf:heartbeat:Filesystem): Started lxc-01_kiff-01
* R-lxc-01_kiff-02 (ocf:heartbeat:VirtualDomain): Started kiff-02
* fence-kiff-01 (stonith:fence_ipmilan): Started kiff-02
* fence-kiff-02 (stonith:fence_ipmilan): Started kiff-02
* Clone Set: dlm-clone [dlm]:
* Started: [ kiff-02 ]
* Stopped: [ kiff-01 lxc-01_kiff-01 lxc-01_kiff-02 lxc-02_kiff-01 lxc-02_kiff-02 ]
* Clone Set: clvmd-clone [clvmd]:
* Started: [ kiff-02 ]
* Stopped: [ kiff-01 lxc-01_kiff-01 lxc-01_kiff-02 lxc-02_kiff-01 lxc-02_kiff-02 ]
* Clone Set: shared0-clone [shared0]:
* Started: [ kiff-02 ]
* Stopped: [ kiff-01 lxc-01_kiff-01 lxc-01_kiff-02 lxc-02_kiff-01 lxc-02_kiff-02 ]
* R-lxc-01_kiff-01 (ocf:heartbeat:VirtualDomain): Started kiff-02
* R-lxc-02_kiff-01 (ocf:heartbeat:VirtualDomain): Started kiff-02
* R-lxc-02_kiff-02 (ocf:heartbeat:VirtualDomain): Started kiff-02
diff --git a/cts/scheduler/summary/promoted-9.summary b/cts/scheduler/summary/promoted-9.summary
index 69dab46a2c..7be9cf7c72 100644
--- a/cts/scheduler/summary/promoted-9.summary
+++ b/cts/scheduler/summary/promoted-9.summary
@@ -1,100 +1,102 @@
Current cluster status:
* Node List:
* Node sgi2: UNCLEAN (offline)
* Node test02: UNCLEAN (offline)
* Online: [ ibm1 va1 ]
* Full List of Resources:
* DcIPaddr (ocf:heartbeat:IPaddr): Stopped
* Resource Group: group-1:
* ocf_127.0.0.11 (ocf:heartbeat:IPaddr): Stopped
* heartbeat_127.0.0.12 (ocf:heartbeat:IPaddr): Stopped
* ocf_127.0.0.13 (ocf:heartbeat:IPaddr): Stopped
* lsb_dummy (lsb:/usr/lib64/heartbeat/cts/LSBDummy): Stopped
* rsc_sgi2 (ocf:heartbeat:IPaddr): Stopped
* rsc_ibm1 (ocf:heartbeat:IPaddr): Stopped
* rsc_va1 (ocf:heartbeat:IPaddr): Stopped
* rsc_test02 (ocf:heartbeat:IPaddr): Stopped
* Clone Set: DoFencing [child_DoFencing] (unique):
* child_DoFencing:0 (stonith:ssh): Started va1
* child_DoFencing:1 (stonith:ssh): Started ibm1
* child_DoFencing:2 (stonith:ssh): Stopped
* child_DoFencing:3 (stonith:ssh): Stopped
* Clone Set: master_rsc_1 [ocf_msdummy] (promotable, unique):
* ocf_msdummy:0 (ocf:heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped
* ocf_msdummy:1 (ocf:heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped
* ocf_msdummy:2 (ocf:heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped
* ocf_msdummy:3 (ocf:heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped
* ocf_msdummy:4 (ocf:heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped
* ocf_msdummy:5 (ocf:heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped
* ocf_msdummy:6 (ocf:heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped
* ocf_msdummy:7 (ocf:heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped
+warning: Node sgi2 is unclean but cannot be fenced
+warning: Node test02 is unclean but cannot be fenced
Transition Summary:
* Start DcIPaddr ( va1 ) due to no quorum (blocked)
* Start ocf_127.0.0.11 ( va1 ) due to no quorum (blocked)
* Start heartbeat_127.0.0.12 ( va1 ) due to no quorum (blocked)
* Start ocf_127.0.0.13 ( va1 ) due to no quorum (blocked)
* Start lsb_dummy ( va1 ) due to no quorum (blocked)
* Start rsc_sgi2 ( va1 ) due to no quorum (blocked)
* Start rsc_ibm1 ( va1 ) due to no quorum (blocked)
* Start rsc_va1 ( va1 ) due to no quorum (blocked)
* Start rsc_test02 ( va1 ) due to no quorum (blocked)
* Stop child_DoFencing:1 ( ibm1 ) due to node availability
* Promote ocf_msdummy:0 ( Stopped -> Promoted va1 ) blocked
* Start ocf_msdummy:1 ( va1 ) due to no quorum (blocked)
Executing Cluster Transition:
* Resource action: child_DoFencing:1 monitor on va1
* Resource action: child_DoFencing:2 monitor on va1
* Resource action: child_DoFencing:2 monitor on ibm1
* Resource action: child_DoFencing:3 monitor on va1
* Resource action: child_DoFencing:3 monitor on ibm1
* Pseudo action: DoFencing_stop_0
* Resource action: ocf_msdummy:2 monitor on va1
* Resource action: ocf_msdummy:2 monitor on ibm1
* Resource action: ocf_msdummy:3 monitor on va1
* Resource action: ocf_msdummy:3 monitor on ibm1
* Resource action: ocf_msdummy:4 monitor on va1
* Resource action: ocf_msdummy:4 monitor on ibm1
* Resource action: ocf_msdummy:5 monitor on va1
* Resource action: ocf_msdummy:5 monitor on ibm1
* Resource action: ocf_msdummy:6 monitor on va1
* Resource action: ocf_msdummy:6 monitor on ibm1
* Resource action: ocf_msdummy:7 monitor on va1
* Resource action: ocf_msdummy:7 monitor on ibm1
* Resource action: child_DoFencing:1 stop on ibm1
* Pseudo action: DoFencing_stopped_0
* Cluster action: do_shutdown on ibm1
Revised Cluster Status:
* Node List:
* Node sgi2: UNCLEAN (offline)
* Node test02: UNCLEAN (offline)
* Online: [ ibm1 va1 ]
* Full List of Resources:
* DcIPaddr (ocf:heartbeat:IPaddr): Stopped
* Resource Group: group-1:
* ocf_127.0.0.11 (ocf:heartbeat:IPaddr): Stopped
* heartbeat_127.0.0.12 (ocf:heartbeat:IPaddr): Stopped
* ocf_127.0.0.13 (ocf:heartbeat:IPaddr): Stopped
* lsb_dummy (lsb:/usr/lib64/heartbeat/cts/LSBDummy): Stopped
* rsc_sgi2 (ocf:heartbeat:IPaddr): Stopped
* rsc_ibm1 (ocf:heartbeat:IPaddr): Stopped
* rsc_va1 (ocf:heartbeat:IPaddr): Stopped
* rsc_test02 (ocf:heartbeat:IPaddr): Stopped
* Clone Set: DoFencing [child_DoFencing] (unique):
* child_DoFencing:0 (stonith:ssh): Started va1
* child_DoFencing:1 (stonith:ssh): Stopped
* child_DoFencing:2 (stonith:ssh): Stopped
* child_DoFencing:3 (stonith:ssh): Stopped
* Clone Set: master_rsc_1 [ocf_msdummy] (promotable, unique):
* ocf_msdummy:0 (ocf:heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped
* ocf_msdummy:1 (ocf:heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped
* ocf_msdummy:2 (ocf:heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped
* ocf_msdummy:3 (ocf:heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped
* ocf_msdummy:4 (ocf:heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped
* ocf_msdummy:5 (ocf:heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped
* ocf_msdummy:6 (ocf:heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped
* ocf_msdummy:7 (ocf:heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped
diff --git a/cts/scheduler/summary/promoted-asymmetrical-order.summary b/cts/scheduler/summary/promoted-asymmetrical-order.summary
index 591ff18a04..1702272f72 100644
--- a/cts/scheduler/summary/promoted-asymmetrical-order.summary
+++ b/cts/scheduler/summary/promoted-asymmetrical-order.summary
@@ -1,37 +1,53 @@
2 of 4 resource instances DISABLED and 0 BLOCKED from further action due to failure
Current cluster status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* Clone Set: ms1 [rsc1] (promotable, disabled):
* Promoted: [ node1 ]
* Unpromoted: [ node2 ]
* Clone Set: ms2 [rsc2] (promotable):
* Promoted: [ node2 ]
* Unpromoted: [ node1 ]
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc2-monitor-unpromoted-5 is duplicate of rsc2-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc2-monitor-unpromoted-5 is duplicate of rsc2-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc2-monitor-unpromoted-5 is duplicate of rsc2-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc2-monitor-unpromoted-5 is duplicate of rsc2-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc2-monitor-unpromoted-5 is duplicate of rsc2-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc2-monitor-unpromoted-5 is duplicate of rsc2-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc2-monitor-unpromoted-5 is duplicate of rsc2-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc2-monitor-unpromoted-5 is duplicate of rsc2-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
Transition Summary:
* Stop rsc1:0 ( Promoted node1 ) due to node availability
* Stop rsc1:1 ( Unpromoted node2 ) due to node availability
Executing Cluster Transition:
* Pseudo action: ms1_demote_0
* Resource action: rsc1:0 demote on node1
* Pseudo action: ms1_demoted_0
* Pseudo action: ms1_stop_0
* Resource action: rsc1:0 stop on node1
* Resource action: rsc1:1 stop on node2
* Pseudo action: ms1_stopped_0
Revised Cluster Status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* Clone Set: ms1 [rsc1] (promotable, disabled):
* Stopped (disabled): [ node1 node2 ]
* Clone Set: ms2 [rsc2] (promotable):
* Promoted: [ node2 ]
* Unpromoted: [ node1 ]
diff --git a/cts/scheduler/summary/promoted-failed-demote-2.summary b/cts/scheduler/summary/promoted-failed-demote-2.summary
index 3f317fabea..02f3ee7e67 100644
--- a/cts/scheduler/summary/promoted-failed-demote-2.summary
+++ b/cts/scheduler/summary/promoted-failed-demote-2.summary
@@ -1,47 +1,50 @@
Current cluster status:
* Node List:
* Online: [ dl380g5a dl380g5b ]
* Full List of Resources:
* Clone Set: ms-sf [group] (promotable, unique):
* Resource Group: group:0:
* stateful-1:0 (ocf:heartbeat:Stateful): FAILED dl380g5b
* stateful-2:0 (ocf:heartbeat:Stateful): Stopped
* Resource Group: group:1:
* stateful-1:1 (ocf:heartbeat:Stateful): Unpromoted dl380g5a
* stateful-2:1 (ocf:heartbeat:Stateful): Unpromoted dl380g5a
+error: Resetting 'on-fail' for stateful-1:0 stop action to default value because 'stop' is not allowed for stop
+error: Resetting 'on-fail' for stateful-1:1 stop action to default value because 'stop' is not allowed for stop
+error: Resetting 'on-fail' for stateful-2:1 stop action to default value because 'stop' is not allowed for stop
Transition Summary:
* Stop stateful-1:0 ( Unpromoted dl380g5b ) due to node availability
* Promote stateful-1:1 ( Unpromoted -> Promoted dl380g5a )
* Promote stateful-2:1 ( Unpromoted -> Promoted dl380g5a )
Executing Cluster Transition:
* Resource action: stateful-1:1 cancel=20000 on dl380g5a
* Resource action: stateful-2:1 cancel=20000 on dl380g5a
* Pseudo action: ms-sf_stop_0
* Pseudo action: group:0_stop_0
* Resource action: stateful-1:0 stop on dl380g5b
* Pseudo action: group:0_stopped_0
* Pseudo action: ms-sf_stopped_0
* Pseudo action: ms-sf_promote_0
* Pseudo action: group:1_promote_0
* Resource action: stateful-1:1 promote on dl380g5a
* Resource action: stateful-2:1 promote on dl380g5a
* Pseudo action: group:1_promoted_0
* Resource action: stateful-1:1 monitor=10000 on dl380g5a
* Resource action: stateful-2:1 monitor=10000 on dl380g5a
* Pseudo action: ms-sf_promoted_0
Revised Cluster Status:
* Node List:
* Online: [ dl380g5a dl380g5b ]
* Full List of Resources:
* Clone Set: ms-sf [group] (promotable, unique):
* Resource Group: group:0:
* stateful-1:0 (ocf:heartbeat:Stateful): Stopped
* stateful-2:0 (ocf:heartbeat:Stateful): Stopped
* Resource Group: group:1:
* stateful-1:1 (ocf:heartbeat:Stateful): Promoted dl380g5a
* stateful-2:1 (ocf:heartbeat:Stateful): Promoted dl380g5a
diff --git a/cts/scheduler/summary/promoted-failed-demote.summary b/cts/scheduler/summary/promoted-failed-demote.summary
index 70b3e1b2cf..e9f1a1baa9 100644
--- a/cts/scheduler/summary/promoted-failed-demote.summary
+++ b/cts/scheduler/summary/promoted-failed-demote.summary
@@ -1,64 +1,67 @@
Current cluster status:
* Node List:
* Online: [ dl380g5a dl380g5b ]
* Full List of Resources:
* Clone Set: ms-sf [group] (promotable, unique):
* Resource Group: group:0:
* stateful-1:0 (ocf:heartbeat:Stateful): FAILED dl380g5b
* stateful-2:0 (ocf:heartbeat:Stateful): Stopped
* Resource Group: group:1:
* stateful-1:1 (ocf:heartbeat:Stateful): Unpromoted dl380g5a
* stateful-2:1 (ocf:heartbeat:Stateful): Unpromoted dl380g5a
+error: Resetting 'on-fail' for stateful-1:0 stop action to default value because 'stop' is not allowed for stop
+error: Resetting 'on-fail' for stateful-1:1 stop action to default value because 'stop' is not allowed for stop
+error: Resetting 'on-fail' for stateful-2:1 stop action to default value because 'stop' is not allowed for stop
Transition Summary:
* Stop stateful-1:0 ( Unpromoted dl380g5b ) due to node availability
* Promote stateful-1:1 ( Unpromoted -> Promoted dl380g5a )
* Promote stateful-2:1 ( Unpromoted -> Promoted dl380g5a )
Executing Cluster Transition:
* Resource action: stateful-1:1 cancel=20000 on dl380g5a
* Resource action: stateful-2:1 cancel=20000 on dl380g5a
* Pseudo action: ms-sf_pre_notify_stop_0
* Resource action: stateful-1:0 notify on dl380g5b
* Resource action: stateful-1:1 notify on dl380g5a
* Resource action: stateful-2:1 notify on dl380g5a
* Pseudo action: ms-sf_confirmed-pre_notify_stop_0
* Pseudo action: ms-sf_stop_0
* Pseudo action: group:0_stop_0
* Resource action: stateful-1:0 stop on dl380g5b
* Pseudo action: group:0_stopped_0
* Pseudo action: ms-sf_stopped_0
* Pseudo action: ms-sf_post_notify_stopped_0
* Resource action: stateful-1:1 notify on dl380g5a
* Resource action: stateful-2:1 notify on dl380g5a
* Pseudo action: ms-sf_confirmed-post_notify_stopped_0
* Pseudo action: ms-sf_pre_notify_promote_0
* Resource action: stateful-1:1 notify on dl380g5a
* Resource action: stateful-2:1 notify on dl380g5a
* Pseudo action: ms-sf_confirmed-pre_notify_promote_0
* Pseudo action: ms-sf_promote_0
* Pseudo action: group:1_promote_0
* Resource action: stateful-1:1 promote on dl380g5a
* Resource action: stateful-2:1 promote on dl380g5a
* Pseudo action: group:1_promoted_0
* Pseudo action: ms-sf_promoted_0
* Pseudo action: ms-sf_post_notify_promoted_0
* Resource action: stateful-1:1 notify on dl380g5a
* Resource action: stateful-2:1 notify on dl380g5a
* Pseudo action: ms-sf_confirmed-post_notify_promoted_0
* Resource action: stateful-1:1 monitor=10000 on dl380g5a
* Resource action: stateful-2:1 monitor=10000 on dl380g5a
Revised Cluster Status:
* Node List:
* Online: [ dl380g5a dl380g5b ]
* Full List of Resources:
* Clone Set: ms-sf [group] (promotable, unique):
* Resource Group: group:0:
* stateful-1:0 (ocf:heartbeat:Stateful): Stopped
* stateful-2:0 (ocf:heartbeat:Stateful): Stopped
* Resource Group: group:1:
* stateful-1:1 (ocf:heartbeat:Stateful): Promoted dl380g5a
* stateful-2:1 (ocf:heartbeat:Stateful): Promoted dl380g5a
diff --git a/cts/scheduler/summary/promoted-group.summary b/cts/scheduler/summary/promoted-group.summary
index 44b380c25b..03a7f79afa 100644
--- a/cts/scheduler/summary/promoted-group.summary
+++ b/cts/scheduler/summary/promoted-group.summary
@@ -1,37 +1,42 @@
+warning: Support for the 'ordered' group meta-attribute is deprecated and will be removed in a future release (use a resource set instead)
+error: Resetting 'on-fail' for monitor of resource_1 to 'stop' because 'fence' is not valid when fencing is disabled
Current cluster status:
* Node List:
* Online: [ rh44-1 rh44-2 ]
* Full List of Resources:
* Resource Group: test:
* resource_1 (ocf:heartbeat:IPaddr): Started rh44-1
* Clone Set: ms-sf [grp_ms_sf] (promotable, unique):
* Resource Group: grp_ms_sf:0:
* promotable_Stateful:0 (ocf:heartbeat:Stateful): Unpromoted rh44-2
* Resource Group: grp_ms_sf:1:
* promotable_Stateful:1 (ocf:heartbeat:Stateful): Unpromoted rh44-1
+error: Resetting 'on-fail' for stop of resource_1 to 'stop' because 'fence' is not valid when fencing is disabled
+error: Resetting 'on-fail' for start of resource_1 to 'stop' because 'fence' is not valid when fencing is disabled
Transition Summary:
* Promote promotable_Stateful:1 ( Unpromoted -> Promoted rh44-1 )
Executing Cluster Transition:
* Resource action: promotable_Stateful:1 cancel=5000 on rh44-1
* Pseudo action: ms-sf_promote_0
* Pseudo action: grp_ms_sf:1_promote_0
* Resource action: promotable_Stateful:1 promote on rh44-1
* Pseudo action: grp_ms_sf:1_promoted_0
* Resource action: promotable_Stateful:1 monitor=6000 on rh44-1
* Pseudo action: ms-sf_promoted_0
+error: Resetting 'on-fail' for monitor of resource_1 to 'stop' because 'fence' is not valid when fencing is disabled
Revised Cluster Status:
* Node List:
* Online: [ rh44-1 rh44-2 ]
* Full List of Resources:
* Resource Group: test:
* resource_1 (ocf:heartbeat:IPaddr): Started rh44-1
* Clone Set: ms-sf [grp_ms_sf] (promotable, unique):
* Resource Group: grp_ms_sf:0:
* promotable_Stateful:0 (ocf:heartbeat:Stateful): Unpromoted rh44-2
* Resource Group: grp_ms_sf:1:
* promotable_Stateful:1 (ocf:heartbeat:Stateful): Promoted rh44-1
diff --git a/cts/scheduler/summary/promoted-notify.summary b/cts/scheduler/summary/promoted-notify.summary
index f0fb04027d..098e945dce 100644
--- a/cts/scheduler/summary/promoted-notify.summary
+++ b/cts/scheduler/summary/promoted-notify.summary
@@ -1,36 +1,48 @@
Current cluster status:
* Node List:
* Online: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 ]
* Full List of Resources:
* shooter (stonith:fence_xvm): Started rhel7-auto1
* Clone Set: fake-master [fake] (promotable):
* Unpromoted: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 ]
+error: Operation fake-monitor-interval-10-role-Unpromoted is duplicate of fake-monitor-interval-10-role-Promoted (do not use same name and interval combination more than once per resource)
+error: Operation fake-monitor-interval-10-role-Unpromoted is duplicate of fake-monitor-interval-10-role-Promoted (do not use same name and interval combination more than once per resource)
+error: Operation fake-monitor-interval-10-role-Unpromoted is duplicate of fake-monitor-interval-10-role-Promoted (do not use same name and interval combination more than once per resource)
+error: Operation fake-monitor-interval-10-role-Unpromoted is duplicate of fake-monitor-interval-10-role-Promoted (do not use same name and interval combination more than once per resource)
+error: Operation fake-monitor-interval-10-role-Unpromoted is duplicate of fake-monitor-interval-10-role-Promoted (do not use same name and interval combination more than once per resource)
+error: Operation fake-monitor-interval-10-role-Unpromoted is duplicate of fake-monitor-interval-10-role-Promoted (do not use same name and interval combination more than once per resource)
+error: Operation fake-monitor-interval-10-role-Unpromoted is duplicate of fake-monitor-interval-10-role-Promoted (do not use same name and interval combination more than once per resource)
+error: Operation fake-monitor-interval-10-role-Unpromoted is duplicate of fake-monitor-interval-10-role-Promoted (do not use same name and interval combination more than once per resource)
+error: Operation fake-monitor-interval-10-role-Unpromoted is duplicate of fake-monitor-interval-10-role-Promoted (do not use same name and interval combination more than once per resource)
+error: Operation fake-monitor-interval-10-role-Unpromoted is duplicate of fake-monitor-interval-10-role-Promoted (do not use same name and interval combination more than once per resource)
+error: Operation fake-monitor-interval-10-role-Unpromoted is duplicate of fake-monitor-interval-10-role-Promoted (do not use same name and interval combination more than once per resource)
+error: Operation fake-monitor-interval-10-role-Unpromoted is duplicate of fake-monitor-interval-10-role-Promoted (do not use same name and interval combination more than once per resource)
Transition Summary:
* Promote fake:0 ( Unpromoted -> Promoted rhel7-auto1 )
Executing Cluster Transition:
* Pseudo action: fake-master_pre_notify_promote_0
* Resource action: fake notify on rhel7-auto1
* Resource action: fake notify on rhel7-auto3
* Resource action: fake notify on rhel7-auto2
* Pseudo action: fake-master_confirmed-pre_notify_promote_0
* Pseudo action: fake-master_promote_0
* Resource action: fake promote on rhel7-auto1
* Pseudo action: fake-master_promoted_0
* Pseudo action: fake-master_post_notify_promoted_0
* Resource action: fake notify on rhel7-auto1
* Resource action: fake notify on rhel7-auto3
* Resource action: fake notify on rhel7-auto2
* Pseudo action: fake-master_confirmed-post_notify_promoted_0
Revised Cluster Status:
* Node List:
* Online: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 ]
* Full List of Resources:
* shooter (stonith:fence_xvm): Started rhel7-auto1
* Clone Set: fake-master [fake] (promotable):
* Promoted: [ rhel7-auto1 ]
* Unpromoted: [ rhel7-auto2 rhel7-auto3 ]
diff --git a/cts/scheduler/summary/promoted-ordering.summary b/cts/scheduler/summary/promoted-ordering.summary
index 0ef1bd89e8..84158af223 100644
--- a/cts/scheduler/summary/promoted-ordering.summary
+++ b/cts/scheduler/summary/promoted-ordering.summary
@@ -1,96 +1,108 @@
+warning: Ignoring globally-unique for clone_webservice because lsb resources such as mysql-proxy:0 can be used only as anonymous clones
+warning: Ignoring globally-unique for clone_webservice because lsb resources such as mysql-proxy:1 can be used only as anonymous clones
Current cluster status:
* Node List:
* Online: [ webcluster01 ]
* OFFLINE: [ webcluster02 ]
* Full List of Resources:
* mysql-server (ocf:heartbeat:mysql): Stopped
* extip_1 (ocf:heartbeat:IPaddr2): Stopped
* extip_2 (ocf:heartbeat:IPaddr2): Stopped
* Resource Group: group_main:
* intip_0_main (ocf:heartbeat:IPaddr2): Stopped
* intip_1_active (ocf:heartbeat:IPaddr2): Stopped
* intip_2_passive (ocf:heartbeat:IPaddr2): Stopped
* Clone Set: ms_drbd_www [drbd_www] (promotable):
* Stopped: [ webcluster01 webcluster02 ]
* Clone Set: clone_ocfs2_www [ocfs2_www] (unique):
* ocfs2_www:0 (ocf:heartbeat:Filesystem): Stopped
* ocfs2_www:1 (ocf:heartbeat:Filesystem): Stopped
* Clone Set: clone_webservice [group_webservice]:
* Stopped: [ webcluster01 webcluster02 ]
* Clone Set: ms_drbd_mysql [drbd_mysql] (promotable):
* Stopped: [ webcluster01 webcluster02 ]
* fs_mysql (ocf:heartbeat:Filesystem): Stopped
+warning: No resource, template, or tag named 'drbd_mysql'
+error: Ignoring constraint 'colo_drbd_mysql_ip0' because 'drbd_mysql' is not a valid resource or tag
+warning: No resource, template, or tag named 'drbd_mysql'
+error: Ignoring constraint 'colo_drbd_mysql_ip1' because 'drbd_mysql' is not a valid resource or tag
+warning: No resource, template, or tag named 'drbd_www'
+error: Ignoring constraint 'colo_drbd_www_ip0' because 'drbd_www' is not a valid resource or tag
+warning: No resource, template, or tag named 'drbd_www'
+error: Ignoring constraint 'colo_drbd_www_ip1' because 'drbd_www' is not a valid resource or tag
Transition Summary:
* Start extip_1 ( webcluster01 )
* Start extip_2 ( webcluster01 )
* Start intip_1_active ( webcluster01 )
* Start intip_2_passive ( webcluster01 )
* Start drbd_www:0 ( webcluster01 )
* Start drbd_mysql:0 ( webcluster01 )
Executing Cluster Transition:
* Resource action: mysql-server monitor on webcluster01
* Resource action: extip_1 monitor on webcluster01
* Resource action: extip_2 monitor on webcluster01
* Resource action: intip_0_main monitor on webcluster01
* Resource action: intip_1_active monitor on webcluster01
* Resource action: intip_2_passive monitor on webcluster01
* Resource action: drbd_www:0 monitor on webcluster01
* Pseudo action: ms_drbd_www_pre_notify_start_0
* Resource action: ocfs2_www:0 monitor on webcluster01
* Resource action: ocfs2_www:1 monitor on webcluster01
* Resource action: apache2:0 monitor on webcluster01
* Resource action: mysql-proxy:0 monitor on webcluster01
* Resource action: drbd_mysql:0 monitor on webcluster01
* Pseudo action: ms_drbd_mysql_pre_notify_start_0
* Resource action: fs_mysql monitor on webcluster01
* Resource action: extip_1 start on webcluster01
* Resource action: extip_2 start on webcluster01
* Resource action: intip_1_active start on webcluster01
* Resource action: intip_2_passive start on webcluster01
* Pseudo action: ms_drbd_www_confirmed-pre_notify_start_0
* Pseudo action: ms_drbd_www_start_0
* Pseudo action: ms_drbd_mysql_confirmed-pre_notify_start_0
* Pseudo action: ms_drbd_mysql_start_0
* Resource action: extip_1 monitor=30000 on webcluster01
* Resource action: extip_2 monitor=30000 on webcluster01
* Resource action: intip_1_active monitor=30000 on webcluster01
* Resource action: intip_2_passive monitor=30000 on webcluster01
* Resource action: drbd_www:0 start on webcluster01
* Pseudo action: ms_drbd_www_running_0
* Resource action: drbd_mysql:0 start on webcluster01
* Pseudo action: ms_drbd_mysql_running_0
* Pseudo action: ms_drbd_www_post_notify_running_0
* Pseudo action: ms_drbd_mysql_post_notify_running_0
* Resource action: drbd_www:0 notify on webcluster01
* Pseudo action: ms_drbd_www_confirmed-post_notify_running_0
* Resource action: drbd_mysql:0 notify on webcluster01
* Pseudo action: ms_drbd_mysql_confirmed-post_notify_running_0
+warning: Ignoring globally-unique for clone_webservice because lsb resources such as mysql-proxy:0 can be used only as anonymous clones
+warning: Ignoring globally-unique for clone_webservice because lsb resources such as mysql-proxy:1 can be used only as anonymous clones
Revised Cluster Status:
* Node List:
* Online: [ webcluster01 ]
* OFFLINE: [ webcluster02 ]
* Full List of Resources:
* mysql-server (ocf:heartbeat:mysql): Stopped
* extip_1 (ocf:heartbeat:IPaddr2): Started webcluster01
* extip_2 (ocf:heartbeat:IPaddr2): Started webcluster01
* Resource Group: group_main:
* intip_0_main (ocf:heartbeat:IPaddr2): Stopped
* intip_1_active (ocf:heartbeat:IPaddr2): Started webcluster01
* intip_2_passive (ocf:heartbeat:IPaddr2): Started webcluster01
* Clone Set: ms_drbd_www [drbd_www] (promotable):
* Unpromoted: [ webcluster01 ]
* Stopped: [ webcluster02 ]
* Clone Set: clone_ocfs2_www [ocfs2_www] (unique):
* ocfs2_www:0 (ocf:heartbeat:Filesystem): Stopped
* ocfs2_www:1 (ocf:heartbeat:Filesystem): Stopped
* Clone Set: clone_webservice [group_webservice]:
* Stopped: [ webcluster01 webcluster02 ]
* Clone Set: ms_drbd_mysql [drbd_mysql] (promotable):
* Unpromoted: [ webcluster01 ]
* Stopped: [ webcluster02 ]
* fs_mysql (ocf:heartbeat:Filesystem): Stopped
diff --git a/cts/scheduler/summary/promoted-with-blocked.summary b/cts/scheduler/summary/promoted-with-blocked.summary
index 82177a9a6a..c38b1ce49f 100644
--- a/cts/scheduler/summary/promoted-with-blocked.summary
+++ b/cts/scheduler/summary/promoted-with-blocked.summary
@@ -1,59 +1,60 @@
1 of 8 resource instances DISABLED and 0 BLOCKED from further action due to failure
Current cluster status:
* Node List:
* Online: [ node1 node2 node3 node4 node5 ]
* Full List of Resources:
* Fencing (stonith:fence_xvm): Started node1
* rsc1 (ocf:pacemaker:Dummy): Stopped
* Clone Set: rsc2-clone [rsc2] (promotable):
* Stopped: [ node1 node2 node3 node4 node5 ]
* rsc3 (ocf:pacemaker:Dummy): Stopped (disabled)
+warning: Support for the Promoted role is deprecated and will be removed in a future release. Use Promoted instead.
Transition Summary:
* Start rsc1 ( node2 ) due to unrunnable rsc3 start (blocked)
* Start rsc2:0 ( node3 )
* Start rsc2:1 ( node4 )
* Start rsc2:2 ( node5 )
* Start rsc2:3 ( node1 )
* Promote rsc2:4 ( Stopped -> Promoted node2 ) due to colocation with rsc1 (blocked)
Executing Cluster Transition:
* Resource action: rsc1 monitor on node5
* Resource action: rsc1 monitor on node4
* Resource action: rsc1 monitor on node3
* Resource action: rsc1 monitor on node2
* Resource action: rsc1 monitor on node1
* Resource action: rsc2:0 monitor on node3
* Resource action: rsc2:1 monitor on node4
* Resource action: rsc2:2 monitor on node5
* Resource action: rsc2:3 monitor on node1
* Resource action: rsc2:4 monitor on node2
* Pseudo action: rsc2-clone_start_0
* Resource action: rsc3 monitor on node5
* Resource action: rsc3 monitor on node4
* Resource action: rsc3 monitor on node3
* Resource action: rsc3 monitor on node2
* Resource action: rsc3 monitor on node1
* Resource action: rsc2:0 start on node3
* Resource action: rsc2:1 start on node4
* Resource action: rsc2:2 start on node5
* Resource action: rsc2:3 start on node1
* Resource action: rsc2:4 start on node2
* Pseudo action: rsc2-clone_running_0
* Resource action: rsc2:0 monitor=10000 on node3
* Resource action: rsc2:1 monitor=10000 on node4
* Resource action: rsc2:2 monitor=10000 on node5
* Resource action: rsc2:3 monitor=10000 on node1
Revised Cluster Status:
* Node List:
* Online: [ node1 node2 node3 node4 node5 ]
* Full List of Resources:
* Fencing (stonith:fence_xvm): Started node1
* rsc1 (ocf:pacemaker:Dummy): Stopped
* Clone Set: rsc2-clone [rsc2] (promotable):
* Unpromoted: [ node1 node2 node3 node4 node5 ]
* rsc3 (ocf:pacemaker:Dummy): Stopped (disabled)
diff --git a/cts/scheduler/summary/quorum-4.summary b/cts/scheduler/summary/quorum-4.summary
index 3d0c88e81f..0132adc92b 100644
--- a/cts/scheduler/summary/quorum-4.summary
+++ b/cts/scheduler/summary/quorum-4.summary
@@ -1,25 +1,27 @@
Current cluster status:
* Node List:
* Node hadev1: UNCLEAN (offline)
* Node hadev3: UNCLEAN (offline)
* Online: [ hadev2 ]
* Full List of Resources:
* child_DoFencing (stonith:ssh): Stopped
+warning: Node hadev1 is unclean but cannot be fenced
+warning: Node hadev3 is unclean but cannot be fenced
Transition Summary:
* Start child_DoFencing ( hadev2 )
Executing Cluster Transition:
* Resource action: child_DoFencing monitor on hadev2
* Resource action: child_DoFencing start on hadev2
* Resource action: child_DoFencing monitor=5000 on hadev2
Revised Cluster Status:
* Node List:
* Node hadev1: UNCLEAN (offline)
* Node hadev3: UNCLEAN (offline)
* Online: [ hadev2 ]
* Full List of Resources:
* child_DoFencing (stonith:ssh): Started hadev2
diff --git a/cts/scheduler/summary/quorum-5.summary b/cts/scheduler/summary/quorum-5.summary
index 1e7abf38ee..407dad631d 100644
--- a/cts/scheduler/summary/quorum-5.summary
+++ b/cts/scheduler/summary/quorum-5.summary
@@ -1,35 +1,37 @@
Current cluster status:
* Node List:
* Node hadev1: UNCLEAN (offline)
* Node hadev3: UNCLEAN (offline)
* Online: [ hadev2 ]
* Full List of Resources:
* Resource Group: group1:
* child_DoFencing_1 (stonith:ssh): Stopped
* child_DoFencing_2 (stonith:ssh): Stopped
+warning: Node hadev1 is unclean but cannot be fenced
+warning: Node hadev3 is unclean but cannot be fenced
Transition Summary:
* Start child_DoFencing_1 ( hadev2 )
* Start child_DoFencing_2 ( hadev2 )
Executing Cluster Transition:
* Pseudo action: group1_start_0
* Resource action: child_DoFencing_1 monitor on hadev2
* Resource action: child_DoFencing_2 monitor on hadev2
* Resource action: child_DoFencing_1 start on hadev2
* Resource action: child_DoFencing_2 start on hadev2
* Pseudo action: group1_running_0
* Resource action: child_DoFencing_1 monitor=5000 on hadev2
* Resource action: child_DoFencing_2 monitor=5000 on hadev2
Revised Cluster Status:
* Node List:
* Node hadev1: UNCLEAN (offline)
* Node hadev3: UNCLEAN (offline)
* Online: [ hadev2 ]
* Full List of Resources:
* Resource Group: group1:
* child_DoFencing_1 (stonith:ssh): Started hadev2
* child_DoFencing_2 (stonith:ssh): Started hadev2
diff --git a/cts/scheduler/summary/quorum-6.summary b/cts/scheduler/summary/quorum-6.summary
index 321410d5b5..04f41803b4 100644
--- a/cts/scheduler/summary/quorum-6.summary
+++ b/cts/scheduler/summary/quorum-6.summary
@@ -1,50 +1,52 @@
Current cluster status:
* Node List:
* Node hadev1: UNCLEAN (offline)
* Node hadev3: UNCLEAN (offline)
* Online: [ hadev2 ]
* Full List of Resources:
* Clone Set: DoFencing [child_DoFencing] (unique):
* child_DoFencing:0 (stonith:ssh): Stopped
* child_DoFencing:1 (stonith:ssh): Stopped
* child_DoFencing:2 (stonith:ssh): Stopped
* child_DoFencing:3 (stonith:ssh): Stopped
* child_DoFencing:4 (stonith:ssh): Stopped
* child_DoFencing:5 (stonith:ssh): Stopped
* child_DoFencing:6 (stonith:ssh): Stopped
* child_DoFencing:7 (stonith:ssh): Stopped
+warning: Node hadev1 is unclean but cannot be fenced
+warning: Node hadev3 is unclean but cannot be fenced
Transition Summary:
* Start child_DoFencing:0 ( hadev2 )
Executing Cluster Transition:
* Resource action: child_DoFencing:0 monitor on hadev2
* Resource action: child_DoFencing:1 monitor on hadev2
* Resource action: child_DoFencing:2 monitor on hadev2
* Resource action: child_DoFencing:3 monitor on hadev2
* Resource action: child_DoFencing:4 monitor on hadev2
* Resource action: child_DoFencing:5 monitor on hadev2
* Resource action: child_DoFencing:6 monitor on hadev2
* Resource action: child_DoFencing:7 monitor on hadev2
* Pseudo action: DoFencing_start_0
* Resource action: child_DoFencing:0 start on hadev2
* Pseudo action: DoFencing_running_0
* Resource action: child_DoFencing:0 monitor=5000 on hadev2
Revised Cluster Status:
* Node List:
* Node hadev1: UNCLEAN (offline)
* Node hadev3: UNCLEAN (offline)
* Online: [ hadev2 ]
* Full List of Resources:
* Clone Set: DoFencing [child_DoFencing] (unique):
* child_DoFencing:0 (stonith:ssh): Started hadev2
* child_DoFencing:1 (stonith:ssh): Stopped
* child_DoFencing:2 (stonith:ssh): Stopped
* child_DoFencing:3 (stonith:ssh): Stopped
* child_DoFencing:4 (stonith:ssh): Stopped
* child_DoFencing:5 (stonith:ssh): Stopped
* child_DoFencing:6 (stonith:ssh): Stopped
* child_DoFencing:7 (stonith:ssh): Stopped
diff --git a/cts/scheduler/summary/rec-node-10.summary b/cts/scheduler/summary/rec-node-10.summary
index a77b2a14ee..2df3f57eb8 100644
--- a/cts/scheduler/summary/rec-node-10.summary
+++ b/cts/scheduler/summary/rec-node-10.summary
@@ -1,29 +1,30 @@
Current cluster status:
* Node List:
* Node node1: UNCLEAN (offline)
* Online: [ node2 ]
* Full List of Resources:
* stonith-1 (stonith:dummy): Stopped
* rsc1 (ocf:heartbeat:apache): Started node1 (UNCLEAN)
* rsc2 (ocf:heartbeat:apache): Started node1 (UNCLEAN)
+warning: Node node1 is unclean but cannot be fenced
Transition Summary:
* Start stonith-1 ( node2 ) due to no quorum (blocked)
* Stop rsc1 ( node1 ) due to no quorum (blocked)
* Stop rsc2 ( node1 ) due to no quorum (blocked)
Executing Cluster Transition:
* Resource action: stonith-1 monitor on node2
* Resource action: rsc1 monitor on node2
* Resource action: rsc2 monitor on node2
Revised Cluster Status:
* Node List:
* Node node1: UNCLEAN (offline)
* Online: [ node2 ]
* Full List of Resources:
* stonith-1 (stonith:dummy): Stopped
* rsc1 (ocf:heartbeat:apache): Started node1 (UNCLEAN)
* rsc2 (ocf:heartbeat:apache): Started node1 (UNCLEAN)
diff --git a/cts/scheduler/summary/rec-node-5.summary b/cts/scheduler/summary/rec-node-5.summary
index a4128ca167..9ed88580a6 100644
--- a/cts/scheduler/summary/rec-node-5.summary
+++ b/cts/scheduler/summary/rec-node-5.summary
@@ -1,27 +1,29 @@
Current cluster status:
* Node List:
* Node node1: UNCLEAN (offline)
* Online: [ node2 ]
* Full List of Resources:
* rsc1 (ocf:heartbeat:apache): Stopped
* rsc2 (ocf:heartbeat:apache): Stopped
+warning: Node node1 is unclean but cannot be fenced
+warning: Resource functionality and data integrity cannot be guaranteed (configure, enable, and test fencing to correct this)
Transition Summary:
* Start rsc1 ( node2 )
* Start rsc2 ( node2 )
Executing Cluster Transition:
* Resource action: rsc1 monitor on node2
* Resource action: rsc2 monitor on node2
* Resource action: rsc1 start on node2
* Resource action: rsc2 start on node2
Revised Cluster Status:
* Node List:
* Node node1: UNCLEAN (offline)
* Online: [ node2 ]
* Full List of Resources:
* rsc1 (ocf:heartbeat:apache): Started node2
* rsc2 (ocf:heartbeat:apache): Started node2
diff --git a/cts/scheduler/summary/rec-node-8.summary b/cts/scheduler/summary/rec-node-8.summary
index 226e333dfc..c20908be57 100644
--- a/cts/scheduler/summary/rec-node-8.summary
+++ b/cts/scheduler/summary/rec-node-8.summary
@@ -1,33 +1,34 @@
Current cluster status:
* Node List:
* Node node1: UNCLEAN (offline)
* Online: [ node2 ]
* Full List of Resources:
* stonith-1 (stonith:dummy): Stopped
* rsc1 (ocf:heartbeat:apache): Started node1 (UNCLEAN)
* rsc2 (ocf:heartbeat:apache): Started node1 (UNCLEAN)
* rsc3 (ocf:heartbeat:apache): Stopped
+warning: Node node1 is unclean but cannot be fenced
Transition Summary:
* Start stonith-1 ( node2 ) due to quorum freeze (blocked)
* Stop rsc1 ( node1 ) blocked
* Stop rsc2 ( node1 ) blocked
* Start rsc3 ( node2 ) due to quorum freeze (blocked)
Executing Cluster Transition:
* Resource action: stonith-1 monitor on node2
* Resource action: rsc1 monitor on node2
* Resource action: rsc2 monitor on node2
* Resource action: rsc3 monitor on node2
Revised Cluster Status:
* Node List:
* Node node1: UNCLEAN (offline)
* Online: [ node2 ]
* Full List of Resources:
* stonith-1 (stonith:dummy): Stopped
* rsc1 (ocf:heartbeat:apache): Started node1 (UNCLEAN)
* rsc2 (ocf:heartbeat:apache): Started node1 (UNCLEAN)
* rsc3 (ocf:heartbeat:apache): Stopped
diff --git a/cts/scheduler/summary/remote-orphaned2.summary b/cts/scheduler/summary/remote-orphaned2.summary
index 9b0091467b..f9e0c03242 100644
--- a/cts/scheduler/summary/remote-orphaned2.summary
+++ b/cts/scheduler/summary/remote-orphaned2.summary
@@ -1,29 +1,38 @@
+error: Resource start-up disabled since no STONITH resources have been defined
+error: Either configure some or disable STONITH with the stonith-enabled option
+error: NOTE: Clusters with shared data need STONITH to ensure data integrity
Current cluster status:
* Node List:
* RemoteNode mrg-02: UNCLEAN (offline)
* RemoteNode mrg-03: UNCLEAN (offline)
* RemoteNode mrg-04: UNCLEAN (offline)
* Online: [ host-026 host-027 host-028 ]
* Full List of Resources:
* neutron-openvswitch-agent-compute (ocf:heartbeat:Dummy): ORPHANED Started [ mrg-03 mrg-02 mrg-04 ]
* libvirtd-compute (systemd:libvirtd): ORPHANED Started [ mrg-03 mrg-02 mrg-04 ]
* ceilometer-compute (systemd:openstack-ceilometer-compute): ORPHANED Started [ mrg-03 mrg-02 mrg-04 ]
* nova-compute (systemd:openstack-nova-compute): ORPHANED Started [ mrg-03 mrg-02 mrg-04 ]
+warning: Node mrg-02 is unclean but cannot be fenced
+warning: Node mrg-03 is unclean but cannot be fenced
+warning: Node mrg-04 is unclean but cannot be fenced
+error: Resource start-up disabled since no STONITH resources have been defined
+error: Either configure some or disable STONITH with the stonith-enabled option
+error: NOTE: Clusters with shared data need STONITH to ensure data integrity
Transition Summary:
Executing Cluster Transition:
Revised Cluster Status:
* Node List:
* RemoteNode mrg-02: UNCLEAN (offline)
* RemoteNode mrg-03: UNCLEAN (offline)
* RemoteNode mrg-04: UNCLEAN (offline)
* Online: [ host-026 host-027 host-028 ]
* Full List of Resources:
* neutron-openvswitch-agent-compute (ocf:heartbeat:Dummy): ORPHANED Started [ mrg-03 mrg-02 mrg-04 ]
* libvirtd-compute (systemd:libvirtd): ORPHANED Started [ mrg-03 mrg-02 mrg-04 ]
* ceilometer-compute (systemd:openstack-ceilometer-compute): ORPHANED Started [ mrg-03 mrg-02 mrg-04 ]
* nova-compute (systemd:openstack-nova-compute): ORPHANED Started [ mrg-03 mrg-02 mrg-04 ]
diff --git a/cts/scheduler/summary/rsc-discovery-per-node.summary b/cts/scheduler/summary/rsc-discovery-per-node.summary
index 3c34ced4ff..150799f577 100644
--- a/cts/scheduler/summary/rsc-discovery-per-node.summary
+++ b/cts/scheduler/summary/rsc-discovery-per-node.summary
@@ -1,130 +1,135 @@
+warning: Ignoring resource-discovery-enabled attribute for 18node1 because disabling resource discovery is not allowed for cluster nodes
+warning: Ignoring resource-discovery-enabled attribute for 18node2 because disabling resource discovery is not allowed for cluster nodes
+warning: Support for the resource-discovery-enabled node attribute is deprecated and will be removed (and behave as 'true') in a future release.
Current cluster status:
* Node List:
* Online: [ 18builder 18node1 18node2 18node3 18node4 ]
* RemoteOFFLINE: [ remote1 ]
* Full List of Resources:
* shooter (stonith:fence_xvm): Started 18node1
* remote1 (ocf:pacemaker:remote): Stopped
* FAKE1 (ocf:heartbeat:Dummy): Stopped
* FAKE2 (ocf:heartbeat:Dummy): Started 18node2
* FAKE3 (ocf:heartbeat:Dummy): Started 18builder
* FAKE4 (ocf:heartbeat:Dummy): Started 18node1
* FAKE5 (ocf:heartbeat:Dummy): Stopped
* Clone Set: FAKECLONE1-clone [FAKECLONE1]:
* Stopped: [ 18builder 18node1 18node2 18node3 18node4 remote1 ]
* Clone Set: FAKECLONE2-clone [FAKECLONE2]:
* Stopped: [ 18builder 18node1 18node2 18node3 18node4 remote1 ]
Transition Summary:
* Start remote1 ( 18builder )
* Start FAKE1 ( 18node2 )
* Move FAKE2 ( 18node2 -> 18node3 )
* Move FAKE3 ( 18builder -> 18node4 )
* Move FAKE4 ( 18node1 -> remote1 )
* Start FAKE5 ( 18builder )
* Start FAKECLONE1:0 ( 18node1 )
* Start FAKECLONE1:1 ( 18node2 )
* Start FAKECLONE1:2 ( 18node3 )
* Start FAKECLONE1:3 ( 18node4 )
* Start FAKECLONE1:4 ( remote1 )
* Start FAKECLONE1:5 ( 18builder )
* Start FAKECLONE2:0 ( 18node1 )
* Start FAKECLONE2:1 ( 18node2 )
* Start FAKECLONE2:2 ( 18node3 )
* Start FAKECLONE2:3 ( 18node4 )
* Start FAKECLONE2:4 ( remote1 )
* Start FAKECLONE2:5 ( 18builder )
Executing Cluster Transition:
* Resource action: shooter monitor on 18node4
* Resource action: shooter monitor on 18node3
* Resource action: remote1 monitor on 18node4
* Resource action: remote1 monitor on 18node3
* Resource action: FAKE1 monitor on 18node4
* Resource action: FAKE1 monitor on 18node3
* Resource action: FAKE1 monitor on 18node2
* Resource action: FAKE1 monitor on 18node1
* Resource action: FAKE1 monitor on 18builder
* Resource action: FAKE2 stop on 18node2
* Resource action: FAKE2 monitor on 18node4
* Resource action: FAKE2 monitor on 18node3
* Resource action: FAKE3 stop on 18builder
* Resource action: FAKE3 monitor on 18node4
* Resource action: FAKE3 monitor on 18node3
* Resource action: FAKE4 monitor on 18node4
* Resource action: FAKE4 monitor on 18node3
* Resource action: FAKE5 monitor on 18node4
* Resource action: FAKE5 monitor on 18node3
* Resource action: FAKE5 monitor on 18node2
* Resource action: FAKE5 monitor on 18node1
* Resource action: FAKE5 monitor on 18builder
* Resource action: FAKECLONE1:0 monitor on 18node1
* Resource action: FAKECLONE1:1 monitor on 18node2
* Resource action: FAKECLONE1:2 monitor on 18node3
* Resource action: FAKECLONE1:3 monitor on 18node4
* Resource action: FAKECLONE1:5 monitor on 18builder
* Pseudo action: FAKECLONE1-clone_start_0
* Resource action: FAKECLONE2:0 monitor on 18node1
* Resource action: FAKECLONE2:1 monitor on 18node2
* Resource action: FAKECLONE2:2 monitor on 18node3
* Resource action: FAKECLONE2:3 monitor on 18node4
* Resource action: FAKECLONE2:5 monitor on 18builder
* Pseudo action: FAKECLONE2-clone_start_0
* Resource action: remote1 start on 18builder
* Resource action: FAKE1 start on 18node2
* Resource action: FAKE2 start on 18node3
* Resource action: FAKE3 start on 18node4
* Resource action: FAKE4 stop on 18node1
* Resource action: FAKE5 start on 18builder
* Resource action: FAKECLONE1:0 start on 18node1
* Resource action: FAKECLONE1:1 start on 18node2
* Resource action: FAKECLONE1:2 start on 18node3
* Resource action: FAKECLONE1:3 start on 18node4
* Resource action: FAKECLONE1:4 start on remote1
* Resource action: FAKECLONE1:5 start on 18builder
* Pseudo action: FAKECLONE1-clone_running_0
* Resource action: FAKECLONE2:0 start on 18node1
* Resource action: FAKECLONE2:1 start on 18node2
* Resource action: FAKECLONE2:2 start on 18node3
* Resource action: FAKECLONE2:3 start on 18node4
* Resource action: FAKECLONE2:4 start on remote1
* Resource action: FAKECLONE2:5 start on 18builder
* Pseudo action: FAKECLONE2-clone_running_0
* Resource action: remote1 monitor=60000 on 18builder
* Resource action: FAKE1 monitor=60000 on 18node2
* Resource action: FAKE2 monitor=60000 on 18node3
* Resource action: FAKE3 monitor=60000 on 18node4
* Resource action: FAKE4 start on remote1
* Resource action: FAKE5 monitor=60000 on 18builder
* Resource action: FAKECLONE1:0 monitor=60000 on 18node1
* Resource action: FAKECLONE1:1 monitor=60000 on 18node2
* Resource action: FAKECLONE1:2 monitor=60000 on 18node3
* Resource action: FAKECLONE1:3 monitor=60000 on 18node4
* Resource action: FAKECLONE1:4 monitor=60000 on remote1
* Resource action: FAKECLONE1:5 monitor=60000 on 18builder
* Resource action: FAKECLONE2:0 monitor=60000 on 18node1
* Resource action: FAKECLONE2:1 monitor=60000 on 18node2
* Resource action: FAKECLONE2:2 monitor=60000 on 18node3
* Resource action: FAKECLONE2:3 monitor=60000 on 18node4
* Resource action: FAKECLONE2:4 monitor=60000 on remote1
* Resource action: FAKECLONE2:5 monitor=60000 on 18builder
* Resource action: FAKE4 monitor=60000 on remote1
+warning: Ignoring resource-discovery-enabled attribute for 18node1 because disabling resource discovery is not allowed for cluster nodes
+warning: Ignoring resource-discovery-enabled attribute for 18node2 because disabling resource discovery is not allowed for cluster nodes
Revised Cluster Status:
* Node List:
* Online: [ 18builder 18node1 18node2 18node3 18node4 ]
* RemoteOnline: [ remote1 ]
* Full List of Resources:
* shooter (stonith:fence_xvm): Started 18node1
* remote1 (ocf:pacemaker:remote): Started 18builder
* FAKE1 (ocf:heartbeat:Dummy): Started 18node2
* FAKE2 (ocf:heartbeat:Dummy): Started 18node3
* FAKE3 (ocf:heartbeat:Dummy): Started 18node4
* FAKE4 (ocf:heartbeat:Dummy): Started remote1
* FAKE5 (ocf:heartbeat:Dummy): Started 18builder
* Clone Set: FAKECLONE1-clone [FAKECLONE1]:
* Started: [ 18builder 18node1 18node2 18node3 18node4 remote1 ]
* Clone Set: FAKECLONE2-clone [FAKECLONE2]:
* Started: [ 18builder 18node1 18node2 18node3 18node4 remote1 ]
diff --git a/cts/scheduler/summary/stop-failure-no-fencing.summary b/cts/scheduler/summary/stop-failure-no-fencing.summary
index bb164fd5be..9d7cd66ff5 100644
--- a/cts/scheduler/summary/stop-failure-no-fencing.summary
+++ b/cts/scheduler/summary/stop-failure-no-fencing.summary
@@ -1,27 +1,35 @@
+error: Resource start-up disabled since no STONITH resources have been defined
+error: Either configure some or disable STONITH with the stonith-enabled option
+error: NOTE: Clusters with shared data need STONITH to ensure data integrity
0 of 9 resource instances DISABLED and 1 BLOCKED from further action due to failure
Current cluster status:
* Node List:
* Node pcmk-3: UNCLEAN (offline)
* Node pcmk-4: UNCLEAN (offline)
* Online: [ pcmk-1 pcmk-2 ]
* Full List of Resources:
* Clone Set: dlm-clone [dlm]:
* Stopped: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ]
* ClusterIP (ocf:heartbeat:IPaddr2): Stopped
+warning: Node pcmk-3 is unclean but cannot be fenced
+warning: Node pcmk-4 is unclean but cannot be fenced
+error: Resource start-up disabled since no STONITH resources have been defined
+error: Either configure some or disable STONITH with the stonith-enabled option
+error: NOTE: Clusters with shared data need STONITH to ensure data integrity
Transition Summary:
Executing Cluster Transition:
Revised Cluster Status:
* Node List:
* Node pcmk-3: UNCLEAN (offline)
* Node pcmk-4: UNCLEAN (offline)
* Online: [ pcmk-1 pcmk-2 ]
* Full List of Resources:
* Clone Set: dlm-clone [dlm]:
* Stopped: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ]
* ClusterIP (ocf:heartbeat:IPaddr2): Stopped
diff --git a/cts/scheduler/summary/stop-failure-no-quorum.summary b/cts/scheduler/summary/stop-failure-no-quorum.summary
index e76827ddfc..a516415c28 100644
--- a/cts/scheduler/summary/stop-failure-no-quorum.summary
+++ b/cts/scheduler/summary/stop-failure-no-quorum.summary
@@ -1,45 +1,47 @@
0 of 10 resource instances DISABLED and 1 BLOCKED from further action due to failure
Current cluster status:
* Node List:
* Node pcmk-2: UNCLEAN (online)
* Node pcmk-3: UNCLEAN (offline)
* Node pcmk-4: UNCLEAN (offline)
* Online: [ pcmk-1 ]
* Full List of Resources:
* Clone Set: dlm-clone [dlm]:
* Stopped: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ]
* Clone Set: clvm-clone [clvm]:
* clvm (lsb:clvmd): FAILED pcmk-2
* clvm (lsb:clvmd): FAILED pcmk-3 (UNCLEAN, blocked)
* Stopped: [ pcmk-1 pcmk-3 pcmk-4 ]
* ClusterIP (ocf:heartbeat:IPaddr2): Stopped
* Fencing (stonith:fence_xvm): Stopped
+warning: Node pcmk-3 is unclean but cannot be fenced
+warning: Node pcmk-4 is unclean but cannot be fenced
Transition Summary:
* Fence (reboot) pcmk-2 'clvm:0 failed there'
* Start dlm:0 ( pcmk-1 ) due to no quorum (blocked)
* Stop clvm:0 ( pcmk-2 ) due to node availability
* Start clvm:2 ( pcmk-1 ) due to no quorum (blocked)
* Start ClusterIP ( pcmk-1 ) due to no quorum (blocked)
* Start Fencing ( pcmk-1 ) due to no quorum (blocked)
Executing Cluster Transition:
* Fencing pcmk-2 (reboot)
* Pseudo action: clvm-clone_stop_0
* Pseudo action: clvm_stop_0
* Pseudo action: clvm-clone_stopped_0
Revised Cluster Status:
* Node List:
* Node pcmk-3: UNCLEAN (offline)
* Node pcmk-4: UNCLEAN (offline)
* Online: [ pcmk-1 ]
* OFFLINE: [ pcmk-2 ]
* Full List of Resources:
* Clone Set: dlm-clone [dlm]:
* Stopped: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ]
* ClusterIP (ocf:heartbeat:IPaddr2): Stopped
* Fencing (stonith:fence_xvm): Stopped
diff --git a/cts/scheduler/summary/stop-failure-with-fencing.summary b/cts/scheduler/summary/stop-failure-with-fencing.summary
index 437708ef2e..9048b95ba6 100644
--- a/cts/scheduler/summary/stop-failure-with-fencing.summary
+++ b/cts/scheduler/summary/stop-failure-with-fencing.summary
@@ -1,45 +1,47 @@
Current cluster status:
* Node List:
* Node pcmk-2: UNCLEAN (online)
* Node pcmk-3: UNCLEAN (offline)
* Node pcmk-4: UNCLEAN (offline)
* Online: [ pcmk-1 ]
* Full List of Resources:
* Clone Set: dlm-clone [dlm]:
* Stopped: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ]
* Clone Set: clvm-clone [clvm]:
* clvm (lsb:clvmd): FAILED pcmk-2
* Stopped: [ pcmk-1 pcmk-3 pcmk-4 ]
* ClusterIP (ocf:heartbeat:IPaddr2): Stopped
* Fencing (stonith:fence_xvm): Stopped
+warning: Node pcmk-3 is unclean but cannot be fenced
+warning: Node pcmk-4 is unclean but cannot be fenced
Transition Summary:
* Fence (reboot) pcmk-2 'clvm:0 failed there'
* Start dlm:0 ( pcmk-1 ) due to no quorum (blocked)
* Stop clvm:0 ( pcmk-2 ) due to node availability
* Start clvm:1 ( pcmk-1 ) due to no quorum (blocked)
* Start ClusterIP ( pcmk-1 ) due to no quorum (blocked)
* Start Fencing ( pcmk-1 ) due to no quorum (blocked)
Executing Cluster Transition:
* Resource action: Fencing monitor on pcmk-1
* Fencing pcmk-2 (reboot)
* Pseudo action: clvm-clone_stop_0
* Pseudo action: clvm_stop_0
* Pseudo action: clvm-clone_stopped_0
Revised Cluster Status:
* Node List:
* Node pcmk-3: UNCLEAN (offline)
* Node pcmk-4: UNCLEAN (offline)
* Online: [ pcmk-1 ]
* OFFLINE: [ pcmk-2 ]
* Full List of Resources:
* Clone Set: dlm-clone [dlm]:
* Stopped: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ]
* Clone Set: clvm-clone [clvm]:
* Stopped: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ]
* ClusterIP (ocf:heartbeat:IPaddr2): Stopped
* Fencing (stonith:fence_xvm): Stopped
diff --git a/cts/scheduler/summary/target-1.summary b/cts/scheduler/summary/target-1.summary
index edc1daf32b..0c9572b366 100644
--- a/cts/scheduler/summary/target-1.summary
+++ b/cts/scheduler/summary/target-1.summary
@@ -1,43 +1,50 @@
+error: Ignoring 'target-role' for rsc_c001n02 because 'Unpromoted' only makes sense for promotable clones
+error: Ignoring 'target-role' for rsc_c001n02 because 'Unpromoted' only makes sense for promotable clones
1 of 5 resource instances DISABLED and 0 BLOCKED from further action due to failure
Current cluster status:
* Node List:
* Online: [ c001n01 c001n02 c001n03 c001n08 ]
* Full List of Resources:
* DcIPaddr (ocf:heartbeat:IPaddr): Started c001n02
* rsc_c001n08 (ocf:heartbeat:IPaddr): Started c001n08 (disabled)
* rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02
* Clone Set: promoteme [rsc_c001n03] (promotable):
* Unpromoted: [ c001n03 ]
* rsc_c001n01 (ocf:heartbeat:IPaddr): Started c001n01
Transition Summary:
* Stop rsc_c001n08 ( c001n08 ) due to node availability
Executing Cluster Transition:
* Resource action: DcIPaddr monitor on c001n08
* Resource action: DcIPaddr monitor on c001n03
* Resource action: DcIPaddr monitor on c001n01
* Resource action: rsc_c001n08 stop on c001n08
* Resource action: rsc_c001n08 monitor on c001n03
* Resource action: rsc_c001n08 monitor on c001n02
* Resource action: rsc_c001n08 monitor on c001n01
* Resource action: rsc_c001n02 monitor on c001n08
* Resource action: rsc_c001n02 monitor on c001n03
* Resource action: rsc_c001n02 monitor on c001n01
* Resource action: rsc_c001n01 monitor on c001n08
* Resource action: rsc_c001n01 monitor on c001n03
* Resource action: rsc_c001n01 monitor on c001n02
+error: Ignoring 'target-role' for rsc_c001n02 because 'Unpromoted' only makes sense for promotable clones
+error: Ignoring 'target-role' for rsc_c001n02 because 'Unpromoted' only makes sense for promotable clones
+error: Ignoring 'target-role' for rsc_c001n02 because 'Unpromoted' only makes sense for promotable clones
+error: Ignoring 'target-role' for rsc_c001n02 because 'Unpromoted' only makes sense for promotable clones
+error: Ignoring 'target-role' for rsc_c001n02 because 'Unpromoted' only makes sense for promotable clones
Revised Cluster Status:
* Node List:
* Online: [ c001n01 c001n02 c001n03 c001n08 ]
* Full List of Resources:
* DcIPaddr (ocf:heartbeat:IPaddr): Started c001n02
* rsc_c001n08 (ocf:heartbeat:IPaddr): Stopped (disabled)
* rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02
* Clone Set: promoteme [rsc_c001n03] (promotable):
* Unpromoted: [ c001n03 ]
* rsc_c001n01 (ocf:heartbeat:IPaddr): Started c001n01
diff --git a/cts/scheduler/summary/target-2.summary b/cts/scheduler/summary/target-2.summary
index a6194ae01e..c39a2aa6b2 100644
--- a/cts/scheduler/summary/target-2.summary
+++ b/cts/scheduler/summary/target-2.summary
@@ -1,44 +1,58 @@
+error: Ignoring 'target-role' for rsc_c001n02 because 'Unpromoted' only makes sense for promotable clones
+error: Ignoring 'target-role' for rsc_c001n03 because 'Promoted' only makes sense for promotable clones
+error: Ignoring 'target-role' for rsc_c001n02 because 'Unpromoted' only makes sense for promotable clones
+error: Ignoring 'target-role' for rsc_c001n03 because 'Promoted' only makes sense for promotable clones
1 of 5 resource instances DISABLED and 0 BLOCKED from further action due to failure
Current cluster status:
* Node List:
* Online: [ c001n01 c001n02 c001n03 c001n08 ]
* Full List of Resources:
* DcIPaddr (ocf:heartbeat:IPaddr): Started c001n02
* rsc_c001n08 (ocf:heartbeat:IPaddr): Started c001n08 (disabled)
* rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02
* rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n03
* rsc_c001n01 (ocf:heartbeat:IPaddr): Started c001n01
Transition Summary:
* Stop rsc_c001n08 ( c001n08 ) due to node availability
Executing Cluster Transition:
* Resource action: DcIPaddr monitor on c001n08
* Resource action: DcIPaddr monitor on c001n03
* Resource action: DcIPaddr monitor on c001n01
* Resource action: rsc_c001n08 stop on c001n08
* Resource action: rsc_c001n08 monitor on c001n03
* Resource action: rsc_c001n08 monitor on c001n02
* Resource action: rsc_c001n08 monitor on c001n01
* Resource action: rsc_c001n02 monitor on c001n08
* Resource action: rsc_c001n02 monitor on c001n03
* Resource action: rsc_c001n02 monitor on c001n01
* Resource action: rsc_c001n03 monitor on c001n08
* Resource action: rsc_c001n03 monitor on c001n02
* Resource action: rsc_c001n03 monitor on c001n01
* Resource action: rsc_c001n01 monitor on c001n08
* Resource action: rsc_c001n01 monitor on c001n03
* Resource action: rsc_c001n01 monitor on c001n02
+error: Ignoring 'target-role' for rsc_c001n02 because 'Unpromoted' only makes sense for promotable clones
+error: Ignoring 'target-role' for rsc_c001n03 because 'Promoted' only makes sense for promotable clones
+error: Ignoring 'target-role' for rsc_c001n02 because 'Unpromoted' only makes sense for promotable clones
+error: Ignoring 'target-role' for rsc_c001n03 because 'Promoted' only makes sense for promotable clones
+error: Ignoring 'target-role' for rsc_c001n02 because 'Unpromoted' only makes sense for promotable clones
+error: Ignoring 'target-role' for rsc_c001n03 because 'Promoted' only makes sense for promotable clones
+error: Ignoring 'target-role' for rsc_c001n03 because 'Promoted' only makes sense for promotable clones
+error: Ignoring 'target-role' for rsc_c001n02 because 'Unpromoted' only makes sense for promotable clones
+error: Ignoring 'target-role' for rsc_c001n02 because 'Unpromoted' only makes sense for promotable clones
+error: Ignoring 'target-role' for rsc_c001n03 because 'Promoted' only makes sense for promotable clones
Revised Cluster Status:
* Node List:
* Online: [ c001n01 c001n02 c001n03 c001n08 ]
* Full List of Resources:
* DcIPaddr (ocf:heartbeat:IPaddr): Started c001n02
* rsc_c001n08 (ocf:heartbeat:IPaddr): Stopped (disabled)
* rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02
* rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n03
* rsc_c001n01 (ocf:heartbeat:IPaddr): Started c001n01
diff --git a/cts/scheduler/summary/template-coloc-3.summary b/cts/scheduler/summary/template-coloc-3.summary
index a7ff63e8de..b26ffea9b1 100644
--- a/cts/scheduler/summary/template-coloc-3.summary
+++ b/cts/scheduler/summary/template-coloc-3.summary
@@ -1,51 +1,52 @@
Current cluster status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc1 (ocf:pacemaker:Dummy): Stopped
* rsc2 (ocf:pacemaker:Dummy): Stopped
* rsc3 (ocf:pacemaker:Dummy): Stopped
* rsc4 (ocf:pacemaker:Dummy): Stopped
* rsc5 (ocf:pacemaker:Dummy): Stopped
* rsc6 (ocf:pacemaker:Dummy): Stopped
+error: Ignoring constraint 'template1-colo-template2' because two templates or tags cannot be colocated
Transition Summary:
* Start rsc1 ( node1 )
* Start rsc2 ( node2 )
* Start rsc3 ( node1 )
* Start rsc4 ( node2 )
* Start rsc5 ( node1 )
* Start rsc6 ( node2 )
Executing Cluster Transition:
* Resource action: rsc1 monitor on node2
* Resource action: rsc1 monitor on node1
* Resource action: rsc2 monitor on node2
* Resource action: rsc2 monitor on node1
* Resource action: rsc3 monitor on node2
* Resource action: rsc3 monitor on node1
* Resource action: rsc4 monitor on node2
* Resource action: rsc4 monitor on node1
* Resource action: rsc5 monitor on node2
* Resource action: rsc5 monitor on node1
* Resource action: rsc6 monitor on node2
* Resource action: rsc6 monitor on node1
* Resource action: rsc1 start on node1
* Resource action: rsc2 start on node2
* Resource action: rsc3 start on node1
* Resource action: rsc4 start on node2
* Resource action: rsc5 start on node1
* Resource action: rsc6 start on node2
Revised Cluster Status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc1 (ocf:pacemaker:Dummy): Started node1
* rsc2 (ocf:pacemaker:Dummy): Started node2
* rsc3 (ocf:pacemaker:Dummy): Started node1
* rsc4 (ocf:pacemaker:Dummy): Started node2
* rsc5 (ocf:pacemaker:Dummy): Started node1
* rsc6 (ocf:pacemaker:Dummy): Started node2
diff --git a/cts/scheduler/summary/ticket-promoted-1.summary b/cts/scheduler/summary/ticket-promoted-1.summary
index 6bc13645df..5bd56c510a 100644
--- a/cts/scheduler/summary/ticket-promoted-1.summary
+++ b/cts/scheduler/summary/ticket-promoted-1.summary
@@ -1,23 +1,31 @@
Current cluster status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* Clone Set: ms1 [rsc1] (promotable):
* Stopped: [ node1 node2 ]
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
Transition Summary:
Executing Cluster Transition:
* Resource action: rsc1:0 monitor on node2
* Resource action: rsc1:0 monitor on node1
Revised Cluster Status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* Clone Set: ms1 [rsc1] (promotable):
* Stopped: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-promoted-10.summary b/cts/scheduler/summary/ticket-promoted-10.summary
index eab3d91008..c9133fe985 100644
--- a/cts/scheduler/summary/ticket-promoted-10.summary
+++ b/cts/scheduler/summary/ticket-promoted-10.summary
@@ -1,29 +1,37 @@
Current cluster status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* Clone Set: ms1 [rsc1] (promotable):
* Stopped: [ node1 node2 ]
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
Transition Summary:
* Start rsc1:0 ( node2 )
* Start rsc1:1 ( node1 )
Executing Cluster Transition:
* Resource action: rsc1:0 monitor on node2
* Resource action: rsc1:1 monitor on node1
* Pseudo action: ms1_start_0
* Resource action: rsc1:0 start on node2
* Resource action: rsc1:1 start on node1
* Pseudo action: ms1_running_0
Revised Cluster Status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* Clone Set: ms1 [rsc1] (promotable):
* Unpromoted: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-promoted-11.summary b/cts/scheduler/summary/ticket-promoted-11.summary
index 381603997e..9bd1f55eb9 100644
--- a/cts/scheduler/summary/ticket-promoted-11.summary
+++ b/cts/scheduler/summary/ticket-promoted-11.summary
@@ -1,26 +1,34 @@
Current cluster status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* Clone Set: ms1 [rsc1] (promotable):
* Unpromoted: [ node1 node2 ]
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
Transition Summary:
* Promote rsc1:0 ( Unpromoted -> Promoted node1 )
Executing Cluster Transition:
* Pseudo action: ms1_promote_0
* Resource action: rsc1:1 promote on node1
* Pseudo action: ms1_promoted_0
Revised Cluster Status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* Clone Set: ms1 [rsc1] (promotable):
* Promoted: [ node1 ]
* Unpromoted: [ node2 ]
diff --git a/cts/scheduler/summary/ticket-promoted-12.summary b/cts/scheduler/summary/ticket-promoted-12.summary
index b51c277faf..68768df73b 100644
--- a/cts/scheduler/summary/ticket-promoted-12.summary
+++ b/cts/scheduler/summary/ticket-promoted-12.summary
@@ -1,23 +1,27 @@
Current cluster status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* Clone Set: ms1 [rsc1] (promotable):
* Promoted: [ node1 ]
* Unpromoted: [ node2 ]
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
Transition Summary:
Executing Cluster Transition:
Revised Cluster Status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* Clone Set: ms1 [rsc1] (promotable):
* Promoted: [ node1 ]
* Unpromoted: [ node2 ]
diff --git a/cts/scheduler/summary/ticket-promoted-13.summary b/cts/scheduler/summary/ticket-promoted-13.summary
index 6b5d14a64d..821da14178 100644
--- a/cts/scheduler/summary/ticket-promoted-13.summary
+++ b/cts/scheduler/summary/ticket-promoted-13.summary
@@ -1,21 +1,29 @@
Current cluster status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* Clone Set: ms1 [rsc1] (promotable):
* Stopped: [ node1 node2 ]
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
Transition Summary:
Executing Cluster Transition:
Revised Cluster Status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* Clone Set: ms1 [rsc1] (promotable):
* Stopped: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-promoted-14.summary b/cts/scheduler/summary/ticket-promoted-14.summary
index ee8912b2e9..31c16b5b4d 100644
--- a/cts/scheduler/summary/ticket-promoted-14.summary
+++ b/cts/scheduler/summary/ticket-promoted-14.summary
@@ -1,31 +1,39 @@
Current cluster status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* Clone Set: ms1 [rsc1] (promotable):
* Promoted: [ node1 ]
* Unpromoted: [ node2 ]
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
Transition Summary:
* Stop rsc1:0 ( Promoted node1 ) due to node availability
* Stop rsc1:1 ( Unpromoted node2 ) due to node availability
Executing Cluster Transition:
* Pseudo action: ms1_demote_0
* Resource action: rsc1:1 demote on node1
* Pseudo action: ms1_demoted_0
* Pseudo action: ms1_stop_0
* Resource action: rsc1:1 stop on node1
* Resource action: rsc1:0 stop on node2
* Pseudo action: ms1_stopped_0
Revised Cluster Status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* Clone Set: ms1 [rsc1] (promotable):
* Stopped: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-promoted-15.summary b/cts/scheduler/summary/ticket-promoted-15.summary
index ee8912b2e9..31c16b5b4d 100644
--- a/cts/scheduler/summary/ticket-promoted-15.summary
+++ b/cts/scheduler/summary/ticket-promoted-15.summary
@@ -1,31 +1,39 @@
Current cluster status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* Clone Set: ms1 [rsc1] (promotable):
* Promoted: [ node1 ]
* Unpromoted: [ node2 ]
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
Transition Summary:
* Stop rsc1:0 ( Promoted node1 ) due to node availability
* Stop rsc1:1 ( Unpromoted node2 ) due to node availability
Executing Cluster Transition:
* Pseudo action: ms1_demote_0
* Resource action: rsc1:1 demote on node1
* Pseudo action: ms1_demoted_0
* Pseudo action: ms1_stop_0
* Resource action: rsc1:1 stop on node1
* Resource action: rsc1:0 stop on node2
* Pseudo action: ms1_stopped_0
Revised Cluster Status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* Clone Set: ms1 [rsc1] (promotable):
* Stopped: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-promoted-16.summary b/cts/scheduler/summary/ticket-promoted-16.summary
index 851e54ebd5..a71fb4a7f8 100644
--- a/cts/scheduler/summary/ticket-promoted-16.summary
+++ b/cts/scheduler/summary/ticket-promoted-16.summary
@@ -1,21 +1,29 @@
Current cluster status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* Clone Set: ms1 [rsc1] (promotable):
* Unpromoted: [ node1 node2 ]
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
Transition Summary:
Executing Cluster Transition:
Revised Cluster Status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* Clone Set: ms1 [rsc1] (promotable):
* Unpromoted: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-promoted-17.summary b/cts/scheduler/summary/ticket-promoted-17.summary
index ee25f92c4e..3ff57a331e 100644
--- a/cts/scheduler/summary/ticket-promoted-17.summary
+++ b/cts/scheduler/summary/ticket-promoted-17.summary
@@ -1,26 +1,34 @@
Current cluster status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* Clone Set: ms1 [rsc1] (promotable):
* Promoted: [ node1 ]
* Unpromoted: [ node2 ]
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
Transition Summary:
* Demote rsc1:0 ( Promoted -> Unpromoted node1 )
Executing Cluster Transition:
* Pseudo action: ms1_demote_0
* Resource action: rsc1:1 demote on node1
* Pseudo action: ms1_demoted_0
Revised Cluster Status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* Clone Set: ms1 [rsc1] (promotable):
* Unpromoted: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-promoted-18.summary b/cts/scheduler/summary/ticket-promoted-18.summary
index ee25f92c4e..3ff57a331e 100644
--- a/cts/scheduler/summary/ticket-promoted-18.summary
+++ b/cts/scheduler/summary/ticket-promoted-18.summary
@@ -1,26 +1,34 @@
Current cluster status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* Clone Set: ms1 [rsc1] (promotable):
* Promoted: [ node1 ]
* Unpromoted: [ node2 ]
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
Transition Summary:
* Demote rsc1:0 ( Promoted -> Unpromoted node1 )
Executing Cluster Transition:
* Pseudo action: ms1_demote_0
* Resource action: rsc1:1 demote on node1
* Pseudo action: ms1_demoted_0
Revised Cluster Status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* Clone Set: ms1 [rsc1] (promotable):
* Unpromoted: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-promoted-19.summary b/cts/scheduler/summary/ticket-promoted-19.summary
index 851e54ebd5..a71fb4a7f8 100644
--- a/cts/scheduler/summary/ticket-promoted-19.summary
+++ b/cts/scheduler/summary/ticket-promoted-19.summary
@@ -1,21 +1,29 @@
Current cluster status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* Clone Set: ms1 [rsc1] (promotable):
* Unpromoted: [ node1 node2 ]
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
Transition Summary:
Executing Cluster Transition:
Revised Cluster Status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* Clone Set: ms1 [rsc1] (promotable):
* Unpromoted: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-promoted-2.summary b/cts/scheduler/summary/ticket-promoted-2.summary
index dc67f96156..1c5370a680 100644
--- a/cts/scheduler/summary/ticket-promoted-2.summary
+++ b/cts/scheduler/summary/ticket-promoted-2.summary
@@ -1,31 +1,39 @@
Current cluster status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* Clone Set: ms1 [rsc1] (promotable):
* Stopped: [ node1 node2 ]
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
Transition Summary:
* Start rsc1:0 ( node2 )
* Promote rsc1:1 ( Stopped -> Promoted node1 )
Executing Cluster Transition:
* Pseudo action: ms1_start_0
* Resource action: rsc1:0 start on node2
* Resource action: rsc1:1 start on node1
* Pseudo action: ms1_running_0
* Pseudo action: ms1_promote_0
* Resource action: rsc1:1 promote on node1
* Pseudo action: ms1_promoted_0
Revised Cluster Status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* Clone Set: ms1 [rsc1] (promotable):
* Promoted: [ node1 ]
* Unpromoted: [ node2 ]
diff --git a/cts/scheduler/summary/ticket-promoted-20.summary b/cts/scheduler/summary/ticket-promoted-20.summary
index ee25f92c4e..3ff57a331e 100644
--- a/cts/scheduler/summary/ticket-promoted-20.summary
+++ b/cts/scheduler/summary/ticket-promoted-20.summary
@@ -1,26 +1,34 @@
Current cluster status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* Clone Set: ms1 [rsc1] (promotable):
* Promoted: [ node1 ]
* Unpromoted: [ node2 ]
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
Transition Summary:
* Demote rsc1:0 ( Promoted -> Unpromoted node1 )
Executing Cluster Transition:
* Pseudo action: ms1_demote_0
* Resource action: rsc1:1 demote on node1
* Pseudo action: ms1_demoted_0
Revised Cluster Status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* Clone Set: ms1 [rsc1] (promotable):
* Unpromoted: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-promoted-21.summary b/cts/scheduler/summary/ticket-promoted-21.summary
index f116a2eea0..c4b3a55fb4 100644
--- a/cts/scheduler/summary/ticket-promoted-21.summary
+++ b/cts/scheduler/summary/ticket-promoted-21.summary
@@ -1,36 +1,44 @@
Current cluster status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* Clone Set: ms1 [rsc1] (promotable):
* Promoted: [ node1 ]
* Unpromoted: [ node2 ]
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
Transition Summary:
* Fence (reboot) node1 'deadman ticket was lost'
* Move rsc_stonith ( node1 -> node2 )
* Stop rsc1:0 ( Promoted node1 ) due to node availability
Executing Cluster Transition:
* Pseudo action: rsc_stonith_stop_0
* Pseudo action: ms1_demote_0
* Fencing node1 (reboot)
* Resource action: rsc_stonith start on node2
* Pseudo action: rsc1:1_demote_0
* Pseudo action: ms1_demoted_0
* Pseudo action: ms1_stop_0
* Pseudo action: rsc1:1_stop_0
* Pseudo action: ms1_stopped_0
Revised Cluster Status:
* Node List:
* Online: [ node2 ]
* OFFLINE: [ node1 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node2
* Clone Set: ms1 [rsc1] (promotable):
* Unpromoted: [ node2 ]
* Stopped: [ node1 ]
diff --git a/cts/scheduler/summary/ticket-promoted-22.summary b/cts/scheduler/summary/ticket-promoted-22.summary
index 851e54ebd5..a71fb4a7f8 100644
--- a/cts/scheduler/summary/ticket-promoted-22.summary
+++ b/cts/scheduler/summary/ticket-promoted-22.summary
@@ -1,21 +1,29 @@
Current cluster status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* Clone Set: ms1 [rsc1] (promotable):
* Unpromoted: [ node1 node2 ]
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
Transition Summary:
Executing Cluster Transition:
Revised Cluster Status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* Clone Set: ms1 [rsc1] (promotable):
* Unpromoted: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-promoted-23.summary b/cts/scheduler/summary/ticket-promoted-23.summary
index ee25f92c4e..3ff57a331e 100644
--- a/cts/scheduler/summary/ticket-promoted-23.summary
+++ b/cts/scheduler/summary/ticket-promoted-23.summary
@@ -1,26 +1,34 @@
Current cluster status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* Clone Set: ms1 [rsc1] (promotable):
* Promoted: [ node1 ]
* Unpromoted: [ node2 ]
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
Transition Summary:
* Demote rsc1:0 ( Promoted -> Unpromoted node1 )
Executing Cluster Transition:
* Pseudo action: ms1_demote_0
* Resource action: rsc1:1 demote on node1
* Pseudo action: ms1_demoted_0
Revised Cluster Status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* Clone Set: ms1 [rsc1] (promotable):
* Unpromoted: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-promoted-24.summary b/cts/scheduler/summary/ticket-promoted-24.summary
index b51c277faf..68768df73b 100644
--- a/cts/scheduler/summary/ticket-promoted-24.summary
+++ b/cts/scheduler/summary/ticket-promoted-24.summary
@@ -1,23 +1,27 @@
Current cluster status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* Clone Set: ms1 [rsc1] (promotable):
* Promoted: [ node1 ]
* Unpromoted: [ node2 ]
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
Transition Summary:
Executing Cluster Transition:
Revised Cluster Status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* Clone Set: ms1 [rsc1] (promotable):
* Promoted: [ node1 ]
* Unpromoted: [ node2 ]
diff --git a/cts/scheduler/summary/ticket-promoted-3.summary b/cts/scheduler/summary/ticket-promoted-3.summary
index ee8912b2e9..31c16b5b4d 100644
--- a/cts/scheduler/summary/ticket-promoted-3.summary
+++ b/cts/scheduler/summary/ticket-promoted-3.summary
@@ -1,31 +1,39 @@
Current cluster status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* Clone Set: ms1 [rsc1] (promotable):
* Promoted: [ node1 ]
* Unpromoted: [ node2 ]
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
Transition Summary:
* Stop rsc1:0 ( Promoted node1 ) due to node availability
* Stop rsc1:1 ( Unpromoted node2 ) due to node availability
Executing Cluster Transition:
* Pseudo action: ms1_demote_0
* Resource action: rsc1:1 demote on node1
* Pseudo action: ms1_demoted_0
* Pseudo action: ms1_stop_0
* Resource action: rsc1:1 stop on node1
* Resource action: rsc1:0 stop on node2
* Pseudo action: ms1_stopped_0
Revised Cluster Status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* Clone Set: ms1 [rsc1] (promotable):
* Stopped: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-promoted-4.summary b/cts/scheduler/summary/ticket-promoted-4.summary
index eab3d91008..c9133fe985 100644
--- a/cts/scheduler/summary/ticket-promoted-4.summary
+++ b/cts/scheduler/summary/ticket-promoted-4.summary
@@ -1,29 +1,37 @@
Current cluster status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* Clone Set: ms1 [rsc1] (promotable):
* Stopped: [ node1 node2 ]
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
Transition Summary:
* Start rsc1:0 ( node2 )
* Start rsc1:1 ( node1 )
Executing Cluster Transition:
* Resource action: rsc1:0 monitor on node2
* Resource action: rsc1:1 monitor on node1
* Pseudo action: ms1_start_0
* Resource action: rsc1:0 start on node2
* Resource action: rsc1:1 start on node1
* Pseudo action: ms1_running_0
Revised Cluster Status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* Clone Set: ms1 [rsc1] (promotable):
* Unpromoted: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-promoted-5.summary b/cts/scheduler/summary/ticket-promoted-5.summary
index 381603997e..9bd1f55eb9 100644
--- a/cts/scheduler/summary/ticket-promoted-5.summary
+++ b/cts/scheduler/summary/ticket-promoted-5.summary
@@ -1,26 +1,34 @@
Current cluster status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* Clone Set: ms1 [rsc1] (promotable):
* Unpromoted: [ node1 node2 ]
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
Transition Summary:
* Promote rsc1:0 ( Unpromoted -> Promoted node1 )
Executing Cluster Transition:
* Pseudo action: ms1_promote_0
* Resource action: rsc1:1 promote on node1
* Pseudo action: ms1_promoted_0
Revised Cluster Status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* Clone Set: ms1 [rsc1] (promotable):
* Promoted: [ node1 ]
* Unpromoted: [ node2 ]
diff --git a/cts/scheduler/summary/ticket-promoted-6.summary b/cts/scheduler/summary/ticket-promoted-6.summary
index ee25f92c4e..3ff57a331e 100644
--- a/cts/scheduler/summary/ticket-promoted-6.summary
+++ b/cts/scheduler/summary/ticket-promoted-6.summary
@@ -1,26 +1,34 @@
Current cluster status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* Clone Set: ms1 [rsc1] (promotable):
* Promoted: [ node1 ]
* Unpromoted: [ node2 ]
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
Transition Summary:
* Demote rsc1:0 ( Promoted -> Unpromoted node1 )
Executing Cluster Transition:
* Pseudo action: ms1_demote_0
* Resource action: rsc1:1 demote on node1
* Pseudo action: ms1_demoted_0
Revised Cluster Status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* Clone Set: ms1 [rsc1] (promotable):
* Unpromoted: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-promoted-7.summary b/cts/scheduler/summary/ticket-promoted-7.summary
index eab3d91008..c9133fe985 100644
--- a/cts/scheduler/summary/ticket-promoted-7.summary
+++ b/cts/scheduler/summary/ticket-promoted-7.summary
@@ -1,29 +1,37 @@
Current cluster status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* Clone Set: ms1 [rsc1] (promotable):
* Stopped: [ node1 node2 ]
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
Transition Summary:
* Start rsc1:0 ( node2 )
* Start rsc1:1 ( node1 )
Executing Cluster Transition:
* Resource action: rsc1:0 monitor on node2
* Resource action: rsc1:1 monitor on node1
* Pseudo action: ms1_start_0
* Resource action: rsc1:0 start on node2
* Resource action: rsc1:1 start on node1
* Pseudo action: ms1_running_0
Revised Cluster Status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* Clone Set: ms1 [rsc1] (promotable):
* Unpromoted: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-promoted-8.summary b/cts/scheduler/summary/ticket-promoted-8.summary
index 381603997e..9bd1f55eb9 100644
--- a/cts/scheduler/summary/ticket-promoted-8.summary
+++ b/cts/scheduler/summary/ticket-promoted-8.summary
@@ -1,26 +1,34 @@
Current cluster status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* Clone Set: ms1 [rsc1] (promotable):
* Unpromoted: [ node1 node2 ]
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
Transition Summary:
* Promote rsc1:0 ( Unpromoted -> Promoted node1 )
Executing Cluster Transition:
* Pseudo action: ms1_promote_0
* Resource action: rsc1:1 promote on node1
* Pseudo action: ms1_promoted_0
Revised Cluster Status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* Clone Set: ms1 [rsc1] (promotable):
* Promoted: [ node1 ]
* Unpromoted: [ node2 ]
diff --git a/cts/scheduler/summary/ticket-promoted-9.summary b/cts/scheduler/summary/ticket-promoted-9.summary
index f116a2eea0..c4b3a55fb4 100644
--- a/cts/scheduler/summary/ticket-promoted-9.summary
+++ b/cts/scheduler/summary/ticket-promoted-9.summary
@@ -1,36 +1,44 @@
Current cluster status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* Clone Set: ms1 [rsc1] (promotable):
* Promoted: [ node1 ]
* Unpromoted: [ node2 ]
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc1-monitor-unpromoted-5 is duplicate of rsc1-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
Transition Summary:
* Fence (reboot) node1 'deadman ticket was lost'
* Move rsc_stonith ( node1 -> node2 )
* Stop rsc1:0 ( Promoted node1 ) due to node availability
Executing Cluster Transition:
* Pseudo action: rsc_stonith_stop_0
* Pseudo action: ms1_demote_0
* Fencing node1 (reboot)
* Resource action: rsc_stonith start on node2
* Pseudo action: rsc1:1_demote_0
* Pseudo action: ms1_demoted_0
* Pseudo action: ms1_stop_0
* Pseudo action: rsc1:1_stop_0
* Pseudo action: ms1_stopped_0
Revised Cluster Status:
* Node List:
* Online: [ node2 ]
* OFFLINE: [ node1 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node2
* Clone Set: ms1 [rsc1] (promotable):
* Unpromoted: [ node2 ]
* Stopped: [ node1 ]
diff --git a/cts/scheduler/summary/ticket-rsc-sets-1.summary b/cts/scheduler/summary/ticket-rsc-sets-1.summary
index d119ce5176..e7a300c5a2 100644
--- a/cts/scheduler/summary/ticket-rsc-sets-1.summary
+++ b/cts/scheduler/summary/ticket-rsc-sets-1.summary
@@ -1,49 +1,57 @@
Current cluster status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* rsc1 (ocf:pacemaker:Dummy): Stopped
* Resource Group: group2:
* rsc2 (ocf:pacemaker:Dummy): Stopped
* rsc3 (ocf:pacemaker:Dummy): Stopped
* Clone Set: clone4 [rsc4]:
* Stopped: [ node1 node2 ]
* Clone Set: ms5 [rsc5] (promotable):
* Stopped: [ node1 node2 ]
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
Transition Summary:
* Start rsc5:0 ( node2 )
* Start rsc5:1 ( node1 )
Executing Cluster Transition:
* Resource action: rsc1 monitor on node2
* Resource action: rsc1 monitor on node1
* Resource action: rsc2 monitor on node2
* Resource action: rsc2 monitor on node1
* Resource action: rsc3 monitor on node2
* Resource action: rsc3 monitor on node1
* Resource action: rsc4:0 monitor on node2
* Resource action: rsc4:0 monitor on node1
* Resource action: rsc5:0 monitor on node2
* Resource action: rsc5:1 monitor on node1
* Pseudo action: ms5_start_0
* Resource action: rsc5:0 start on node2
* Resource action: rsc5:1 start on node1
* Pseudo action: ms5_running_0
Revised Cluster Status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* rsc1 (ocf:pacemaker:Dummy): Stopped
* Resource Group: group2:
* rsc2 (ocf:pacemaker:Dummy): Stopped
* rsc3 (ocf:pacemaker:Dummy): Stopped
* Clone Set: clone4 [rsc4]:
* Stopped: [ node1 node2 ]
* Clone Set: ms5 [rsc5] (promotable):
* Unpromoted: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-rsc-sets-10.summary b/cts/scheduler/summary/ticket-rsc-sets-10.summary
index 3bc9d648ac..f8612ba8a2 100644
--- a/cts/scheduler/summary/ticket-rsc-sets-10.summary
+++ b/cts/scheduler/summary/ticket-rsc-sets-10.summary
@@ -1,52 +1,60 @@
Current cluster status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* rsc1 (ocf:pacemaker:Dummy): Started node2
* Resource Group: group2:
* rsc2 (ocf:pacemaker:Dummy): Started node1
* rsc3 (ocf:pacemaker:Dummy): Started node1
* Clone Set: clone4 [rsc4]:
* Started: [ node1 node2 ]
* Clone Set: ms5 [rsc5] (promotable):
* Promoted: [ node1 ]
* Unpromoted: [ node2 ]
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
Transition Summary:
* Stop rsc1 ( node2 ) due to node availability
* Stop rsc2 ( node1 ) due to node availability
* Stop rsc3 ( node1 ) due to node availability
* Stop rsc4:0 ( node1 ) due to node availability
* Stop rsc4:1 ( node2 ) due to node availability
* Demote rsc5:0 ( Promoted -> Unpromoted node1 )
Executing Cluster Transition:
* Resource action: rsc1 stop on node2
* Pseudo action: group2_stop_0
* Resource action: rsc3 stop on node1
* Pseudo action: clone4_stop_0
* Pseudo action: ms5_demote_0
* Resource action: rsc2 stop on node1
* Resource action: rsc4:1 stop on node1
* Resource action: rsc4:0 stop on node2
* Pseudo action: clone4_stopped_0
* Resource action: rsc5:1 demote on node1
* Pseudo action: ms5_demoted_0
* Pseudo action: group2_stopped_0
Revised Cluster Status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* rsc1 (ocf:pacemaker:Dummy): Stopped
* Resource Group: group2:
* rsc2 (ocf:pacemaker:Dummy): Stopped
* rsc3 (ocf:pacemaker:Dummy): Stopped
* Clone Set: clone4 [rsc4]:
* Stopped: [ node1 node2 ]
* Clone Set: ms5 [rsc5] (promotable):
* Unpromoted: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-rsc-sets-11.summary b/cts/scheduler/summary/ticket-rsc-sets-11.summary
index 03153aa264..2775ac6930 100644
--- a/cts/scheduler/summary/ticket-rsc-sets-11.summary
+++ b/cts/scheduler/summary/ticket-rsc-sets-11.summary
@@ -1,33 +1,41 @@
Current cluster status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* rsc1 (ocf:pacemaker:Dummy): Stopped
* Resource Group: group2:
* rsc2 (ocf:pacemaker:Dummy): Stopped
* rsc3 (ocf:pacemaker:Dummy): Stopped
* Clone Set: clone4 [rsc4]:
* Stopped: [ node1 node2 ]
* Clone Set: ms5 [rsc5] (promotable):
* Unpromoted: [ node1 node2 ]
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
Transition Summary:
Executing Cluster Transition:
Revised Cluster Status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* rsc1 (ocf:pacemaker:Dummy): Stopped
* Resource Group: group2:
* rsc2 (ocf:pacemaker:Dummy): Stopped
* rsc3 (ocf:pacemaker:Dummy): Stopped
* Clone Set: clone4 [rsc4]:
* Stopped: [ node1 node2 ]
* Clone Set: ms5 [rsc5] (promotable):
* Unpromoted: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-rsc-sets-12.summary b/cts/scheduler/summary/ticket-rsc-sets-12.summary
index 68e0827f78..b387a94fcd 100644
--- a/cts/scheduler/summary/ticket-rsc-sets-12.summary
+++ b/cts/scheduler/summary/ticket-rsc-sets-12.summary
@@ -1,41 +1,49 @@
Current cluster status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* rsc1 (ocf:pacemaker:Dummy): Started node2
* Resource Group: group2:
* rsc2 (ocf:pacemaker:Dummy): Started node1
* rsc3 (ocf:pacemaker:Dummy): Started node1
* Clone Set: clone4 [rsc4]:
* Stopped: [ node1 node2 ]
* Clone Set: ms5 [rsc5] (promotable):
* Unpromoted: [ node1 node2 ]
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
Transition Summary:
* Stop rsc1 ( node2 ) due to node availability
* Stop rsc2 ( node1 ) due to node availability
* Stop rsc3 ( node1 ) due to node availability
Executing Cluster Transition:
* Resource action: rsc1 stop on node2
* Pseudo action: group2_stop_0
* Resource action: rsc3 stop on node1
* Resource action: rsc2 stop on node1
* Pseudo action: group2_stopped_0
Revised Cluster Status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* rsc1 (ocf:pacemaker:Dummy): Stopped
* Resource Group: group2:
* rsc2 (ocf:pacemaker:Dummy): Stopped
* rsc3 (ocf:pacemaker:Dummy): Stopped
* Clone Set: clone4 [rsc4]:
* Stopped: [ node1 node2 ]
* Clone Set: ms5 [rsc5] (promotable):
* Unpromoted: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-rsc-sets-13.summary b/cts/scheduler/summary/ticket-rsc-sets-13.summary
index 3bc9d648ac..f8612ba8a2 100644
--- a/cts/scheduler/summary/ticket-rsc-sets-13.summary
+++ b/cts/scheduler/summary/ticket-rsc-sets-13.summary
@@ -1,52 +1,60 @@
Current cluster status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* rsc1 (ocf:pacemaker:Dummy): Started node2
* Resource Group: group2:
* rsc2 (ocf:pacemaker:Dummy): Started node1
* rsc3 (ocf:pacemaker:Dummy): Started node1
* Clone Set: clone4 [rsc4]:
* Started: [ node1 node2 ]
* Clone Set: ms5 [rsc5] (promotable):
* Promoted: [ node1 ]
* Unpromoted: [ node2 ]
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
Transition Summary:
* Stop rsc1 ( node2 ) due to node availability
* Stop rsc2 ( node1 ) due to node availability
* Stop rsc3 ( node1 ) due to node availability
* Stop rsc4:0 ( node1 ) due to node availability
* Stop rsc4:1 ( node2 ) due to node availability
* Demote rsc5:0 ( Promoted -> Unpromoted node1 )
Executing Cluster Transition:
* Resource action: rsc1 stop on node2
* Pseudo action: group2_stop_0
* Resource action: rsc3 stop on node1
* Pseudo action: clone4_stop_0
* Pseudo action: ms5_demote_0
* Resource action: rsc2 stop on node1
* Resource action: rsc4:1 stop on node1
* Resource action: rsc4:0 stop on node2
* Pseudo action: clone4_stopped_0
* Resource action: rsc5:1 demote on node1
* Pseudo action: ms5_demoted_0
* Pseudo action: group2_stopped_0
Revised Cluster Status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* rsc1 (ocf:pacemaker:Dummy): Stopped
* Resource Group: group2:
* rsc2 (ocf:pacemaker:Dummy): Stopped
* rsc3 (ocf:pacemaker:Dummy): Stopped
* Clone Set: clone4 [rsc4]:
* Stopped: [ node1 node2 ]
* Clone Set: ms5 [rsc5] (promotable):
* Unpromoted: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-rsc-sets-14.summary b/cts/scheduler/summary/ticket-rsc-sets-14.summary
index 3bc9d648ac..f8612ba8a2 100644
--- a/cts/scheduler/summary/ticket-rsc-sets-14.summary
+++ b/cts/scheduler/summary/ticket-rsc-sets-14.summary
@@ -1,52 +1,60 @@
Current cluster status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* rsc1 (ocf:pacemaker:Dummy): Started node2
* Resource Group: group2:
* rsc2 (ocf:pacemaker:Dummy): Started node1
* rsc3 (ocf:pacemaker:Dummy): Started node1
* Clone Set: clone4 [rsc4]:
* Started: [ node1 node2 ]
* Clone Set: ms5 [rsc5] (promotable):
* Promoted: [ node1 ]
* Unpromoted: [ node2 ]
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
Transition Summary:
* Stop rsc1 ( node2 ) due to node availability
* Stop rsc2 ( node1 ) due to node availability
* Stop rsc3 ( node1 ) due to node availability
* Stop rsc4:0 ( node1 ) due to node availability
* Stop rsc4:1 ( node2 ) due to node availability
* Demote rsc5:0 ( Promoted -> Unpromoted node1 )
Executing Cluster Transition:
* Resource action: rsc1 stop on node2
* Pseudo action: group2_stop_0
* Resource action: rsc3 stop on node1
* Pseudo action: clone4_stop_0
* Pseudo action: ms5_demote_0
* Resource action: rsc2 stop on node1
* Resource action: rsc4:1 stop on node1
* Resource action: rsc4:0 stop on node2
* Pseudo action: clone4_stopped_0
* Resource action: rsc5:1 demote on node1
* Pseudo action: ms5_demoted_0
* Pseudo action: group2_stopped_0
Revised Cluster Status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* rsc1 (ocf:pacemaker:Dummy): Stopped
* Resource Group: group2:
* rsc2 (ocf:pacemaker:Dummy): Stopped
* rsc3 (ocf:pacemaker:Dummy): Stopped
* Clone Set: clone4 [rsc4]:
* Stopped: [ node1 node2 ]
* Clone Set: ms5 [rsc5] (promotable):
* Unpromoted: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-rsc-sets-2.summary b/cts/scheduler/summary/ticket-rsc-sets-2.summary
index fccf3cad1b..5e6c47b66f 100644
--- a/cts/scheduler/summary/ticket-rsc-sets-2.summary
+++ b/cts/scheduler/summary/ticket-rsc-sets-2.summary
@@ -1,57 +1,65 @@
Current cluster status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* rsc1 (ocf:pacemaker:Dummy): Stopped
* Resource Group: group2:
* rsc2 (ocf:pacemaker:Dummy): Stopped
* rsc3 (ocf:pacemaker:Dummy): Stopped
* Clone Set: clone4 [rsc4]:
* Stopped: [ node1 node2 ]
* Clone Set: ms5 [rsc5] (promotable):
* Unpromoted: [ node1 node2 ]
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
Transition Summary:
* Start rsc1 ( node2 )
* Start rsc2 ( node1 )
* Start rsc3 ( node1 )
* Start rsc4:0 ( node2 )
* Start rsc4:1 ( node1 )
* Promote rsc5:0 ( Unpromoted -> Promoted node1 )
Executing Cluster Transition:
* Resource action: rsc1 start on node2
* Pseudo action: group2_start_0
* Resource action: rsc2 start on node1
* Resource action: rsc3 start on node1
* Pseudo action: clone4_start_0
* Pseudo action: ms5_promote_0
* Resource action: rsc1 monitor=10000 on node2
* Pseudo action: group2_running_0
* Resource action: rsc2 monitor=5000 on node1
* Resource action: rsc3 monitor=5000 on node1
* Resource action: rsc4:0 start on node2
* Resource action: rsc4:1 start on node1
* Pseudo action: clone4_running_0
* Resource action: rsc5:1 promote on node1
* Pseudo action: ms5_promoted_0
* Resource action: rsc4:0 monitor=5000 on node2
* Resource action: rsc4:1 monitor=5000 on node1
Revised Cluster Status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* rsc1 (ocf:pacemaker:Dummy): Started node2
* Resource Group: group2:
* rsc2 (ocf:pacemaker:Dummy): Started node1
* rsc3 (ocf:pacemaker:Dummy): Started node1
* Clone Set: clone4 [rsc4]:
* Started: [ node1 node2 ]
* Clone Set: ms5 [rsc5] (promotable):
* Promoted: [ node1 ]
* Unpromoted: [ node2 ]
diff --git a/cts/scheduler/summary/ticket-rsc-sets-3.summary b/cts/scheduler/summary/ticket-rsc-sets-3.summary
index 3bc9d648ac..f8612ba8a2 100644
--- a/cts/scheduler/summary/ticket-rsc-sets-3.summary
+++ b/cts/scheduler/summary/ticket-rsc-sets-3.summary
@@ -1,52 +1,60 @@
Current cluster status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* rsc1 (ocf:pacemaker:Dummy): Started node2
* Resource Group: group2:
* rsc2 (ocf:pacemaker:Dummy): Started node1
* rsc3 (ocf:pacemaker:Dummy): Started node1
* Clone Set: clone4 [rsc4]:
* Started: [ node1 node2 ]
* Clone Set: ms5 [rsc5] (promotable):
* Promoted: [ node1 ]
* Unpromoted: [ node2 ]
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
Transition Summary:
* Stop rsc1 ( node2 ) due to node availability
* Stop rsc2 ( node1 ) due to node availability
* Stop rsc3 ( node1 ) due to node availability
* Stop rsc4:0 ( node1 ) due to node availability
* Stop rsc4:1 ( node2 ) due to node availability
* Demote rsc5:0 ( Promoted -> Unpromoted node1 )
Executing Cluster Transition:
* Resource action: rsc1 stop on node2
* Pseudo action: group2_stop_0
* Resource action: rsc3 stop on node1
* Pseudo action: clone4_stop_0
* Pseudo action: ms5_demote_0
* Resource action: rsc2 stop on node1
* Resource action: rsc4:1 stop on node1
* Resource action: rsc4:0 stop on node2
* Pseudo action: clone4_stopped_0
* Resource action: rsc5:1 demote on node1
* Pseudo action: ms5_demoted_0
* Pseudo action: group2_stopped_0
Revised Cluster Status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* rsc1 (ocf:pacemaker:Dummy): Stopped
* Resource Group: group2:
* rsc2 (ocf:pacemaker:Dummy): Stopped
* rsc3 (ocf:pacemaker:Dummy): Stopped
* Clone Set: clone4 [rsc4]:
* Stopped: [ node1 node2 ]
* Clone Set: ms5 [rsc5] (promotable):
* Unpromoted: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-rsc-sets-4.summary b/cts/scheduler/summary/ticket-rsc-sets-4.summary
index d119ce5176..e7a300c5a2 100644
--- a/cts/scheduler/summary/ticket-rsc-sets-4.summary
+++ b/cts/scheduler/summary/ticket-rsc-sets-4.summary
@@ -1,49 +1,57 @@
Current cluster status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* rsc1 (ocf:pacemaker:Dummy): Stopped
* Resource Group: group2:
* rsc2 (ocf:pacemaker:Dummy): Stopped
* rsc3 (ocf:pacemaker:Dummy): Stopped
* Clone Set: clone4 [rsc4]:
* Stopped: [ node1 node2 ]
* Clone Set: ms5 [rsc5] (promotable):
* Stopped: [ node1 node2 ]
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
Transition Summary:
* Start rsc5:0 ( node2 )
* Start rsc5:1 ( node1 )
Executing Cluster Transition:
* Resource action: rsc1 monitor on node2
* Resource action: rsc1 monitor on node1
* Resource action: rsc2 monitor on node2
* Resource action: rsc2 monitor on node1
* Resource action: rsc3 monitor on node2
* Resource action: rsc3 monitor on node1
* Resource action: rsc4:0 monitor on node2
* Resource action: rsc4:0 monitor on node1
* Resource action: rsc5:0 monitor on node2
* Resource action: rsc5:1 monitor on node1
* Pseudo action: ms5_start_0
* Resource action: rsc5:0 start on node2
* Resource action: rsc5:1 start on node1
* Pseudo action: ms5_running_0
Revised Cluster Status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* rsc1 (ocf:pacemaker:Dummy): Stopped
* Resource Group: group2:
* rsc2 (ocf:pacemaker:Dummy): Stopped
* rsc3 (ocf:pacemaker:Dummy): Stopped
* Clone Set: clone4 [rsc4]:
* Stopped: [ node1 node2 ]
* Clone Set: ms5 [rsc5] (promotable):
* Unpromoted: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-rsc-sets-5.summary b/cts/scheduler/summary/ticket-rsc-sets-5.summary
index 217243a7b2..9d808a2ebd 100644
--- a/cts/scheduler/summary/ticket-rsc-sets-5.summary
+++ b/cts/scheduler/summary/ticket-rsc-sets-5.summary
@@ -1,44 +1,52 @@
Current cluster status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* rsc1 (ocf:pacemaker:Dummy): Stopped
* Resource Group: group2:
* rsc2 (ocf:pacemaker:Dummy): Stopped
* rsc3 (ocf:pacemaker:Dummy): Stopped
* Clone Set: clone4 [rsc4]:
* Stopped: [ node1 node2 ]
* Clone Set: ms5 [rsc5] (promotable):
* Unpromoted: [ node1 node2 ]
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
Transition Summary:
* Start rsc1 ( node2 )
* Start rsc2 ( node1 )
* Start rsc3 ( node1 )
Executing Cluster Transition:
* Resource action: rsc1 start on node2
* Pseudo action: group2_start_0
* Resource action: rsc2 start on node1
* Resource action: rsc3 start on node1
* Resource action: rsc1 monitor=10000 on node2
* Pseudo action: group2_running_0
* Resource action: rsc2 monitor=5000 on node1
* Resource action: rsc3 monitor=5000 on node1
Revised Cluster Status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* rsc1 (ocf:pacemaker:Dummy): Started node2
* Resource Group: group2:
* rsc2 (ocf:pacemaker:Dummy): Started node1
* rsc3 (ocf:pacemaker:Dummy): Started node1
* Clone Set: clone4 [rsc4]:
* Stopped: [ node1 node2 ]
* Clone Set: ms5 [rsc5] (promotable):
* Unpromoted: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-rsc-sets-6.summary b/cts/scheduler/summary/ticket-rsc-sets-6.summary
index 7336f70db3..4d446693ea 100644
--- a/cts/scheduler/summary/ticket-rsc-sets-6.summary
+++ b/cts/scheduler/summary/ticket-rsc-sets-6.summary
@@ -1,46 +1,54 @@
Current cluster status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* rsc1 (ocf:pacemaker:Dummy): Started node2
* Resource Group: group2:
* rsc2 (ocf:pacemaker:Dummy): Started node1
* rsc3 (ocf:pacemaker:Dummy): Started node1
* Clone Set: clone4 [rsc4]:
* Stopped: [ node1 node2 ]
* Clone Set: ms5 [rsc5] (promotable):
* Unpromoted: [ node1 node2 ]
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
Transition Summary:
* Start rsc4:0 ( node2 )
* Start rsc4:1 ( node1 )
* Promote rsc5:0 ( Unpromoted -> Promoted node1 )
Executing Cluster Transition:
* Pseudo action: clone4_start_0
* Pseudo action: ms5_promote_0
* Resource action: rsc4:0 start on node2
* Resource action: rsc4:1 start on node1
* Pseudo action: clone4_running_0
* Resource action: rsc5:1 promote on node1
* Pseudo action: ms5_promoted_0
* Resource action: rsc4:0 monitor=5000 on node2
* Resource action: rsc4:1 monitor=5000 on node1
Revised Cluster Status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* rsc1 (ocf:pacemaker:Dummy): Started node2
* Resource Group: group2:
* rsc2 (ocf:pacemaker:Dummy): Started node1
* rsc3 (ocf:pacemaker:Dummy): Started node1
* Clone Set: clone4 [rsc4]:
* Started: [ node1 node2 ]
* Clone Set: ms5 [rsc5] (promotable):
* Promoted: [ node1 ]
* Unpromoted: [ node2 ]
diff --git a/cts/scheduler/summary/ticket-rsc-sets-7.summary b/cts/scheduler/summary/ticket-rsc-sets-7.summary
index 3bc9d648ac..f8612ba8a2 100644
--- a/cts/scheduler/summary/ticket-rsc-sets-7.summary
+++ b/cts/scheduler/summary/ticket-rsc-sets-7.summary
@@ -1,52 +1,60 @@
Current cluster status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* rsc1 (ocf:pacemaker:Dummy): Started node2
* Resource Group: group2:
* rsc2 (ocf:pacemaker:Dummy): Started node1
* rsc3 (ocf:pacemaker:Dummy): Started node1
* Clone Set: clone4 [rsc4]:
* Started: [ node1 node2 ]
* Clone Set: ms5 [rsc5] (promotable):
* Promoted: [ node1 ]
* Unpromoted: [ node2 ]
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
Transition Summary:
* Stop rsc1 ( node2 ) due to node availability
* Stop rsc2 ( node1 ) due to node availability
* Stop rsc3 ( node1 ) due to node availability
* Stop rsc4:0 ( node1 ) due to node availability
* Stop rsc4:1 ( node2 ) due to node availability
* Demote rsc5:0 ( Promoted -> Unpromoted node1 )
Executing Cluster Transition:
* Resource action: rsc1 stop on node2
* Pseudo action: group2_stop_0
* Resource action: rsc3 stop on node1
* Pseudo action: clone4_stop_0
* Pseudo action: ms5_demote_0
* Resource action: rsc2 stop on node1
* Resource action: rsc4:1 stop on node1
* Resource action: rsc4:0 stop on node2
* Pseudo action: clone4_stopped_0
* Resource action: rsc5:1 demote on node1
* Pseudo action: ms5_demoted_0
* Pseudo action: group2_stopped_0
Revised Cluster Status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* rsc1 (ocf:pacemaker:Dummy): Stopped
* Resource Group: group2:
* rsc2 (ocf:pacemaker:Dummy): Stopped
* rsc3 (ocf:pacemaker:Dummy): Stopped
* Clone Set: clone4 [rsc4]:
* Stopped: [ node1 node2 ]
* Clone Set: ms5 [rsc5] (promotable):
* Unpromoted: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-rsc-sets-8.summary b/cts/scheduler/summary/ticket-rsc-sets-8.summary
index 03153aa264..2775ac6930 100644
--- a/cts/scheduler/summary/ticket-rsc-sets-8.summary
+++ b/cts/scheduler/summary/ticket-rsc-sets-8.summary
@@ -1,33 +1,41 @@
Current cluster status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* rsc1 (ocf:pacemaker:Dummy): Stopped
* Resource Group: group2:
* rsc2 (ocf:pacemaker:Dummy): Stopped
* rsc3 (ocf:pacemaker:Dummy): Stopped
* Clone Set: clone4 [rsc4]:
* Stopped: [ node1 node2 ]
* Clone Set: ms5 [rsc5] (promotable):
* Unpromoted: [ node1 node2 ]
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
Transition Summary:
Executing Cluster Transition:
Revised Cluster Status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* rsc1 (ocf:pacemaker:Dummy): Stopped
* Resource Group: group2:
* rsc2 (ocf:pacemaker:Dummy): Stopped
* rsc3 (ocf:pacemaker:Dummy): Stopped
* Clone Set: clone4 [rsc4]:
* Stopped: [ node1 node2 ]
* Clone Set: ms5 [rsc5] (promotable):
* Unpromoted: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-rsc-sets-9.summary b/cts/scheduler/summary/ticket-rsc-sets-9.summary
index 3bc9d648ac..f8612ba8a2 100644
--- a/cts/scheduler/summary/ticket-rsc-sets-9.summary
+++ b/cts/scheduler/summary/ticket-rsc-sets-9.summary
@@ -1,52 +1,60 @@
Current cluster status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* rsc1 (ocf:pacemaker:Dummy): Started node2
* Resource Group: group2:
* rsc2 (ocf:pacemaker:Dummy): Started node1
* rsc3 (ocf:pacemaker:Dummy): Started node1
* Clone Set: clone4 [rsc4]:
* Started: [ node1 node2 ]
* Clone Set: ms5 [rsc5] (promotable):
* Promoted: [ node1 ]
* Unpromoted: [ node2 ]
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
+error: Operation rsc5-monitor-unpromoted-5 is duplicate of rsc5-monitor-promoted-5 (do not use same name and interval combination more than once per resource)
Transition Summary:
* Stop rsc1 ( node2 ) due to node availability
* Stop rsc2 ( node1 ) due to node availability
* Stop rsc3 ( node1 ) due to node availability
* Stop rsc4:0 ( node1 ) due to node availability
* Stop rsc4:1 ( node2 ) due to node availability
* Demote rsc5:0 ( Promoted -> Unpromoted node1 )
Executing Cluster Transition:
* Resource action: rsc1 stop on node2
* Pseudo action: group2_stop_0
* Resource action: rsc3 stop on node1
* Pseudo action: clone4_stop_0
* Pseudo action: ms5_demote_0
* Resource action: rsc2 stop on node1
* Resource action: rsc4:1 stop on node1
* Resource action: rsc4:0 stop on node2
* Pseudo action: clone4_stopped_0
* Resource action: rsc5:1 demote on node1
* Pseudo action: ms5_demoted_0
* Pseudo action: group2_stopped_0
Revised Cluster Status:
* Node List:
* Online: [ node1 node2 ]
* Full List of Resources:
* rsc_stonith (stonith:null): Started node1
* rsc1 (ocf:pacemaker:Dummy): Stopped
* Resource Group: group2:
* rsc2 (ocf:pacemaker:Dummy): Stopped
* rsc3 (ocf:pacemaker:Dummy): Stopped
* Clone Set: clone4 [rsc4]:
* Stopped: [ node1 node2 ]
* Clone Set: ms5 [rsc5] (promotable):
* Unpromoted: [ node1 node2 ]
diff --git a/cts/scheduler/summary/unrunnable-1.summary b/cts/scheduler/summary/unrunnable-1.summary
index 75fda23856..9ba6f2ecf5 100644
--- a/cts/scheduler/summary/unrunnable-1.summary
+++ b/cts/scheduler/summary/unrunnable-1.summary
@@ -1,67 +1,68 @@
Current cluster status:
* Node List:
* Node c001n02: UNCLEAN (offline)
* Online: [ c001n03 ]
* Full List of Resources:
* DcIPaddr (ocf:heartbeat:IPaddr): Stopped
* Resource Group: group-1:
* child_192.168.100.181 (ocf:heartbeat:IPaddr): Stopped
* child_192.168.100.182 (ocf:heartbeat:IPaddr): Stopped
* child_192.168.100.183 (ocf:heartbeat:IPaddr): Stopped
* rsc_c001n08 (ocf:heartbeat:IPaddr): Stopped
* rsc_c001n02 (ocf:heartbeat:IPaddr): Stopped
* rsc_c001n03 (ocf:heartbeat:IPaddr): Stopped
* rsc_c001n01 (ocf:heartbeat:IPaddr): Stopped
* Clone Set: DoFencing [child_DoFencing] (unique):
* child_DoFencing:0 (stonith:ssh): Started c001n03
* child_DoFencing:1 (stonith:ssh): Started c001n02 (UNCLEAN)
* child_DoFencing:2 (stonith:ssh): Stopped
* child_DoFencing:3 (stonith:ssh): Stopped
+warning: Node c001n02 is unclean but cannot be fenced
Transition Summary:
* Start DcIPaddr ( c001n03 ) due to no quorum (blocked)
* Start child_192.168.100.181 ( c001n03 ) due to no quorum (blocked)
* Start child_192.168.100.182 ( c001n03 ) due to no quorum (blocked)
* Start child_192.168.100.183 ( c001n03 ) due to no quorum (blocked)
* Start rsc_c001n08 ( c001n03 ) due to no quorum (blocked)
* Start rsc_c001n02 ( c001n03 ) due to no quorum (blocked)
* Start rsc_c001n03 ( c001n03 ) due to no quorum (blocked)
* Start rsc_c001n01 ( c001n03 ) due to no quorum (blocked)
* Stop child_DoFencing:1 ( c001n02 ) due to node availability (blocked)
Executing Cluster Transition:
* Resource action: DcIPaddr monitor on c001n03
* Resource action: child_192.168.100.181 monitor on c001n03
* Resource action: child_192.168.100.182 monitor on c001n03
* Resource action: child_192.168.100.183 monitor on c001n03
* Resource action: rsc_c001n08 monitor on c001n03
* Resource action: rsc_c001n02 monitor on c001n03
* Resource action: rsc_c001n03 monitor on c001n03
* Resource action: rsc_c001n01 monitor on c001n03
* Resource action: child_DoFencing:1 monitor on c001n03
* Resource action: child_DoFencing:2 monitor on c001n03
* Resource action: child_DoFencing:3 monitor on c001n03
* Pseudo action: DoFencing_stop_0
* Pseudo action: DoFencing_stopped_0
Revised Cluster Status:
* Node List:
* Node c001n02: UNCLEAN (offline)
* Online: [ c001n03 ]
* Full List of Resources:
* DcIPaddr (ocf:heartbeat:IPaddr): Stopped
* Resource Group: group-1:
* child_192.168.100.181 (ocf:heartbeat:IPaddr): Stopped
* child_192.168.100.182 (ocf:heartbeat:IPaddr): Stopped
* child_192.168.100.183 (ocf:heartbeat:IPaddr): Stopped
* rsc_c001n08 (ocf:heartbeat:IPaddr): Stopped
* rsc_c001n02 (ocf:heartbeat:IPaddr): Stopped
* rsc_c001n03 (ocf:heartbeat:IPaddr): Stopped
* rsc_c001n01 (ocf:heartbeat:IPaddr): Stopped
* Clone Set: DoFencing [child_DoFencing] (unique):
* child_DoFencing:0 (stonith:ssh): Started c001n03
* child_DoFencing:1 (stonith:ssh): Started c001n02 (UNCLEAN)
* child_DoFencing:2 (stonith:ssh): Stopped
* child_DoFencing:3 (stonith:ssh): Stopped
diff --git a/cts/scheduler/summary/unrunnable-2.summary b/cts/scheduler/summary/unrunnable-2.summary
index 26c6351078..0c0ee882ad 100644
--- a/cts/scheduler/summary/unrunnable-2.summary
+++ b/cts/scheduler/summary/unrunnable-2.summary
@@ -1,178 +1,179 @@
6 of 117 resource instances DISABLED and 0 BLOCKED from further action due to failure
Current cluster status:
* Node List:
* Online: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Full List of Resources:
* ip-192.0.2.12 (ocf:heartbeat:IPaddr2): Started overcloud-controller-0
* Clone Set: haproxy-clone [haproxy]:
* Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: galera-master [galera] (promotable):
* Promoted: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: memcached-clone [memcached]:
* Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: rabbitmq-clone [rabbitmq]:
* Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: openstack-core-clone [openstack-core] (disabled):
* Stopped (disabled): [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: redis-master [redis] (promotable):
* Promoted: [ overcloud-controller-1 ]
* Unpromoted: [ overcloud-controller-0 overcloud-controller-2 ]
* ip-192.0.2.11 (ocf:heartbeat:IPaddr2): Started overcloud-controller-1
* Clone Set: mongod-clone [mongod]:
* Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: openstack-aodh-evaluator-clone [openstack-aodh-evaluator]:
* Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: openstack-nova-scheduler-clone [openstack-nova-scheduler]:
* Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: neutron-l3-agent-clone [neutron-l3-agent]:
* Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: neutron-netns-cleanup-clone [neutron-netns-cleanup]:
* Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: neutron-ovs-cleanup-clone [neutron-ovs-cleanup]:
* Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* openstack-cinder-volume (systemd:openstack-cinder-volume): Stopped
* Clone Set: openstack-heat-engine-clone [openstack-heat-engine]:
* Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: openstack-ceilometer-api-clone [openstack-ceilometer-api]:
* Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: openstack-aodh-listener-clone [openstack-aodh-listener]:
* Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: neutron-metadata-agent-clone [neutron-metadata-agent]:
* Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: openstack-aodh-notifier-clone [openstack-aodh-notifier]:
* Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: openstack-heat-api-clone [openstack-heat-api]:
* Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: openstack-ceilometer-collector-clone [openstack-ceilometer-collector]:
* Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: openstack-glance-api-clone [openstack-glance-api]:
* Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: openstack-cinder-scheduler-clone [openstack-cinder-scheduler]:
* Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: openstack-nova-api-clone [openstack-nova-api]:
* Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: openstack-nova-consoleauth-clone [openstack-nova-consoleauth]:
* Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: openstack-sahara-api-clone [openstack-sahara-api]:
* Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: openstack-heat-api-cloudwatch-clone [openstack-heat-api-cloudwatch]:
* Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: openstack-sahara-engine-clone [openstack-sahara-engine]:
* Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: openstack-glance-registry-clone [openstack-glance-registry]:
* Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: openstack-ceilometer-notification-clone [openstack-ceilometer-notification]:
* Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: openstack-cinder-api-clone [openstack-cinder-api]:
* Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: neutron-dhcp-agent-clone [neutron-dhcp-agent]:
* Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: neutron-openvswitch-agent-clone [neutron-openvswitch-agent]:
* Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: openstack-nova-novncproxy-clone [openstack-nova-novncproxy]:
* Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: delay-clone [delay]:
* Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: neutron-server-clone [neutron-server]:
* Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: openstack-ceilometer-central-clone [openstack-ceilometer-central]:
* Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: httpd-clone [httpd] (disabled):
* Stopped (disabled): [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: openstack-heat-api-cfn-clone [openstack-heat-api-cfn]:
* Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: openstack-nova-conductor-clone [openstack-nova-conductor]:
* Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+warning: Support for require-all in ordering constraints is deprecated and will be removed in a future release (use clone-min clone meta-attribute instead)
Transition Summary:
* Start openstack-cinder-volume ( overcloud-controller-2 ) due to unrunnable openstack-cinder-scheduler-clone running (blocked)
Executing Cluster Transition:
Revised Cluster Status:
* Node List:
* Online: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Full List of Resources:
* ip-192.0.2.12 (ocf:heartbeat:IPaddr2): Started overcloud-controller-0
* Clone Set: haproxy-clone [haproxy]:
* Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: galera-master [galera] (promotable):
* Promoted: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: memcached-clone [memcached]:
* Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: rabbitmq-clone [rabbitmq]:
* Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: openstack-core-clone [openstack-core] (disabled):
* Stopped (disabled): [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: redis-master [redis] (promotable):
* Promoted: [ overcloud-controller-1 ]
* Unpromoted: [ overcloud-controller-0 overcloud-controller-2 ]
* ip-192.0.2.11 (ocf:heartbeat:IPaddr2): Started overcloud-controller-1
* Clone Set: mongod-clone [mongod]:
* Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: openstack-aodh-evaluator-clone [openstack-aodh-evaluator]:
* Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: openstack-nova-scheduler-clone [openstack-nova-scheduler]:
* Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: neutron-l3-agent-clone [neutron-l3-agent]:
* Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: neutron-netns-cleanup-clone [neutron-netns-cleanup]:
* Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: neutron-ovs-cleanup-clone [neutron-ovs-cleanup]:
* Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* openstack-cinder-volume (systemd:openstack-cinder-volume): Stopped
* Clone Set: openstack-heat-engine-clone [openstack-heat-engine]:
* Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: openstack-ceilometer-api-clone [openstack-ceilometer-api]:
* Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: openstack-aodh-listener-clone [openstack-aodh-listener]:
* Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: neutron-metadata-agent-clone [neutron-metadata-agent]:
* Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: openstack-aodh-notifier-clone [openstack-aodh-notifier]:
* Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: openstack-heat-api-clone [openstack-heat-api]:
* Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: openstack-ceilometer-collector-clone [openstack-ceilometer-collector]:
* Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: openstack-glance-api-clone [openstack-glance-api]:
* Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: openstack-cinder-scheduler-clone [openstack-cinder-scheduler]:
* Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: openstack-nova-api-clone [openstack-nova-api]:
* Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: openstack-nova-consoleauth-clone [openstack-nova-consoleauth]:
* Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: openstack-sahara-api-clone [openstack-sahara-api]:
* Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: openstack-heat-api-cloudwatch-clone [openstack-heat-api-cloudwatch]:
* Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: openstack-sahara-engine-clone [openstack-sahara-engine]:
* Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: openstack-glance-registry-clone [openstack-glance-registry]:
* Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: openstack-ceilometer-notification-clone [openstack-ceilometer-notification]:
* Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: openstack-cinder-api-clone [openstack-cinder-api]:
* Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: neutron-dhcp-agent-clone [neutron-dhcp-agent]:
* Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: neutron-openvswitch-agent-clone [neutron-openvswitch-agent]:
* Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: openstack-nova-novncproxy-clone [openstack-nova-novncproxy]:
* Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: delay-clone [delay]:
* Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: neutron-server-clone [neutron-server]:
* Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: openstack-ceilometer-central-clone [openstack-ceilometer-central]:
* Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: httpd-clone [httpd] (disabled):
* Stopped (disabled): [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: openstack-heat-api-cfn-clone [openstack-heat-api-cfn]:
* Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
* Clone Set: openstack-nova-conductor-clone [openstack-nova-conductor]:
* Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
diff --git a/cts/scheduler/summary/whitebox-imply-stop-on-fence.summary b/cts/scheduler/summary/whitebox-imply-stop-on-fence.summary
index 78506c5354..79c058252d 100644
--- a/cts/scheduler/summary/whitebox-imply-stop-on-fence.summary
+++ b/cts/scheduler/summary/whitebox-imply-stop-on-fence.summary
@@ -1,104 +1,110 @@
Current cluster status:
* Node List:
* Node kiff-01: UNCLEAN (offline)
* Online: [ kiff-02 ]
* GuestOnline: [ lxc-01_kiff-02 lxc-02_kiff-02 ]
* Full List of Resources:
* fence-kiff-01 (stonith:fence_ipmilan): Started kiff-02
* fence-kiff-02 (stonith:fence_ipmilan): Started kiff-01 (UNCLEAN)
* Clone Set: dlm-clone [dlm]:
* dlm (ocf:pacemaker:controld): Started kiff-01 (UNCLEAN)
* Started: [ kiff-02 ]
* Stopped: [ lxc-01_kiff-01 lxc-01_kiff-02 lxc-02_kiff-01 lxc-02_kiff-02 ]
* Clone Set: clvmd-clone [clvmd]:
* clvmd (ocf:heartbeat:clvm): Started kiff-01 (UNCLEAN)
* Started: [ kiff-02 ]
* Stopped: [ lxc-01_kiff-01 lxc-01_kiff-02 lxc-02_kiff-01 lxc-02_kiff-02 ]
* Clone Set: shared0-clone [shared0]:
* shared0 (ocf:heartbeat:Filesystem): Started kiff-01 (UNCLEAN)
* Started: [ kiff-02 ]
* Stopped: [ lxc-01_kiff-01 lxc-01_kiff-02 lxc-02_kiff-01 lxc-02_kiff-02 ]
* R-lxc-01_kiff-01 (ocf:heartbeat:VirtualDomain): FAILED kiff-01 (UNCLEAN)
* R-lxc-02_kiff-01 (ocf:heartbeat:VirtualDomain): Started kiff-01 (UNCLEAN)
* R-lxc-01_kiff-02 (ocf:heartbeat:VirtualDomain): Started kiff-02
* R-lxc-02_kiff-02 (ocf:heartbeat:VirtualDomain): Started kiff-02
* vm-fs (ocf:heartbeat:Filesystem): FAILED lxc-01_kiff-01
+warning: Invalid ordering constraint between shared0:0 and R-lxc-02_kiff-02
+warning: Invalid ordering constraint between clvmd:0 and R-lxc-02_kiff-02
+warning: Invalid ordering constraint between dlm:0 and R-lxc-02_kiff-02
+warning: Invalid ordering constraint between shared0:0 and R-lxc-01_kiff-02
+warning: Invalid ordering constraint between clvmd:0 and R-lxc-01_kiff-02
+warning: Invalid ordering constraint between dlm:0 and R-lxc-01_kiff-02
Transition Summary:
* Fence (reboot) lxc-02_kiff-01 (resource: R-lxc-02_kiff-01) 'guest is unclean'
* Fence (reboot) lxc-01_kiff-01 (resource: R-lxc-01_kiff-01) 'guest is unclean'
* Fence (reboot) kiff-01 'peer is no longer part of the cluster'
* Move fence-kiff-02 ( kiff-01 -> kiff-02 )
* Stop dlm:0 ( kiff-01 ) due to node availability
* Stop clvmd:0 ( kiff-01 ) due to node availability
* Stop shared0:0 ( kiff-01 ) due to node availability
* Recover R-lxc-01_kiff-01 ( kiff-01 -> kiff-02 )
* Move R-lxc-02_kiff-01 ( kiff-01 -> kiff-02 )
* Recover vm-fs ( lxc-01_kiff-01 )
* Move lxc-01_kiff-01 ( kiff-01 -> kiff-02 )
* Move lxc-02_kiff-01 ( kiff-01 -> kiff-02 )
Executing Cluster Transition:
* Pseudo action: fence-kiff-02_stop_0
* Resource action: dlm monitor on lxc-02_kiff-02
* Resource action: dlm monitor on lxc-01_kiff-02
* Resource action: clvmd monitor on lxc-02_kiff-02
* Resource action: clvmd monitor on lxc-01_kiff-02
* Resource action: shared0 monitor on lxc-02_kiff-02
* Resource action: shared0 monitor on lxc-01_kiff-02
* Resource action: vm-fs monitor on lxc-02_kiff-02
* Resource action: vm-fs monitor on lxc-01_kiff-02
* Pseudo action: lxc-01_kiff-01_stop_0
* Pseudo action: lxc-02_kiff-01_stop_0
* Fencing kiff-01 (reboot)
* Pseudo action: R-lxc-01_kiff-01_stop_0
* Pseudo action: R-lxc-02_kiff-01_stop_0
* Pseudo action: stonith-lxc-02_kiff-01-reboot on lxc-02_kiff-01
* Pseudo action: stonith-lxc-01_kiff-01-reboot on lxc-01_kiff-01
* Resource action: fence-kiff-02 start on kiff-02
* Pseudo action: shared0-clone_stop_0
* Resource action: R-lxc-01_kiff-01 start on kiff-02
* Resource action: R-lxc-02_kiff-01 start on kiff-02
* Pseudo action: vm-fs_stop_0
* Resource action: lxc-01_kiff-01 start on kiff-02
* Resource action: lxc-02_kiff-01 start on kiff-02
* Resource action: fence-kiff-02 monitor=60000 on kiff-02
* Pseudo action: shared0_stop_0
* Pseudo action: shared0-clone_stopped_0
* Resource action: R-lxc-01_kiff-01 monitor=10000 on kiff-02
* Resource action: R-lxc-02_kiff-01 monitor=10000 on kiff-02
* Resource action: vm-fs start on lxc-01_kiff-01
* Resource action: lxc-01_kiff-01 monitor=30000 on kiff-02
* Resource action: lxc-02_kiff-01 monitor=30000 on kiff-02
* Pseudo action: clvmd-clone_stop_0
* Resource action: vm-fs monitor=20000 on lxc-01_kiff-01
* Pseudo action: clvmd_stop_0
* Pseudo action: clvmd-clone_stopped_0
* Pseudo action: dlm-clone_stop_0
* Pseudo action: dlm_stop_0
* Pseudo action: dlm-clone_stopped_0
Revised Cluster Status:
* Node List:
* Online: [ kiff-02 ]
* OFFLINE: [ kiff-01 ]
* GuestOnline: [ lxc-01_kiff-01 lxc-01_kiff-02 lxc-02_kiff-01 lxc-02_kiff-02 ]
* Full List of Resources:
* fence-kiff-01 (stonith:fence_ipmilan): Started kiff-02
* fence-kiff-02 (stonith:fence_ipmilan): Started kiff-02
* Clone Set: dlm-clone [dlm]:
* Started: [ kiff-02 ]
* Stopped: [ kiff-01 lxc-01_kiff-01 lxc-01_kiff-02 lxc-02_kiff-01 lxc-02_kiff-02 ]
* Clone Set: clvmd-clone [clvmd]:
* Started: [ kiff-02 ]
* Stopped: [ kiff-01 lxc-01_kiff-01 lxc-01_kiff-02 lxc-02_kiff-01 lxc-02_kiff-02 ]
* Clone Set: shared0-clone [shared0]:
* Started: [ kiff-02 ]
* Stopped: [ kiff-01 lxc-01_kiff-01 lxc-01_kiff-02 lxc-02_kiff-01 lxc-02_kiff-02 ]
* R-lxc-01_kiff-01 (ocf:heartbeat:VirtualDomain): Started kiff-02
* R-lxc-02_kiff-01 (ocf:heartbeat:VirtualDomain): Started kiff-02
* R-lxc-01_kiff-02 (ocf:heartbeat:VirtualDomain): Started kiff-02
* R-lxc-02_kiff-02 (ocf:heartbeat:VirtualDomain): Started kiff-02
* vm-fs (ocf:heartbeat:Filesystem): Started lxc-01_kiff-01