diff --git a/cts/cli/regression.daemons.exp b/cts/cli/regression.daemons.exp index d530c4ac98..74eedee957 100644 --- a/cts/cli/regression.daemons.exp +++ b/cts/cli/regression.daemons.exp @@ -1,750 +1,751 @@ =#=#=#= Begin test: Get CIB manager metadata =#=#=#= 1.1 Cluster options used by Pacemaker's Cluster Information Base manager Cluster Information Base manager options Enable Access Control Lists (ACLs) for the CIB Enable Access Control Lists (ACLs) for the CIB Raise this if log has "Evicting client" messages for cluster daemon PIDs (a good value is the number of resources in the cluster multiplied by the number of nodes). Maximum IPC message backlog before disconnecting a cluster daemon =#=#=#= End test: Get CIB manager metadata - OK (0) =#=#=#= * Passed: pacemaker-based - Get CIB manager metadata =#=#=#= Begin test: Get controller metadata =#=#=#= 1.1 Cluster options used by Pacemaker's controller Pacemaker controller options Includes a hash which identifies the exact revision the code was built from. Used for diagnostic purposes. Pacemaker version on cluster node elected Designated Controller (DC) Used for informational and diagnostic purposes. The messaging layer on which Pacemaker is currently running This optional value is mostly for users' convenience as desired in administration, but may also be used in Pacemaker configuration rules via the #cluster-name node attribute, and by higher-level tools and resource agents. An arbitrary name for the cluster The optimal value will depend on the speed and load of your network and the type of switches used. How long to wait for a response from other nodes during start-up Pacemaker is primarily event-driven, and looks ahead to know when to recheck cluster state for failure-timeout settings and most time-based rules. However, it will also recheck the cluster after this amount of inactivity, to evaluate rules with date specifications and serve as a fail-safe for certain types of scheduler bugs. A value of 0 disables polling. A positive value sets an interval in seconds, unless other units are specified (for example, "5min"). Polling interval to recheck cluster state and evaluate rules with date specifications A cluster node may receive notification of a "succeeded" fencing that targeted it if fencing is misconfigured, or if fabric fencing is in use that doesn't cut cluster communication. Use "stop" to attempt to immediately stop Pacemaker and stay stopped, or "panic" to attempt to immediately reboot the local node, falling back to stop on failure. Allowed values: stop, panic How a cluster node should react if notified of its own fencing Declare an election failed if it is not decided within this much time. If you need to adjust this value, it probably indicates the presence of a bug. *** Advanced Use Only *** Exit immediately if shutdown does not complete within this much time. If you need to adjust this value, it probably indicates the presence of a bug. *** Advanced Use Only *** If you need to adjust this value, it probably indicates the presence of a bug. *** Advanced Use Only *** If you need to adjust this value, it probably indicates the presence of a bug. *** Advanced Use Only *** Delay cluster recovery for this much time to allow for additional events to occur. Useful if your configuration is sensitive to the order in which ping updates arrive. 
*** Advanced Use Only *** Enabling this option will slow down cluster recovery under all conditions If this is set to a positive value, lost nodes are assumed to achieve self-fencing using watchdog-based SBD within this much time. This does not require a fencing resource to be explicitly configured, though a fence_watchdog resource can be configured, to limit use to specific nodes. If this is set to 0 (the default), the cluster will never assume watchdog-based self-fencing. If this is set to a negative value, the cluster will use twice the local value of the `SBD_WATCHDOG_TIMEOUT` environment variable if that is positive, or otherwise treat this as 0. WARNING: When used, this timeout must be larger than `SBD_WATCHDOG_TIMEOUT` on all nodes that use watchdog-based SBD, and Pacemaker will refuse to start on any of those nodes where this is not true for the local value or SBD is not active. When this is set to a negative value, `SBD_WATCHDOG_TIMEOUT` must be set to the same value on all nodes that use SBD, otherwise data corruption or loss could occur. How long before nodes can be assumed to be safely down when watchdog-based self-fencing via SBD is in use How many times fencing can fail before it will no longer be immediately re-attempted on a target How many times fencing can fail before it will no longer be immediately re-attempted on a target The cluster will slow down its recovery process when the amount of system resources used (currently CPU) approaches this limit Maximum amount of system load that should be used by cluster nodes Maximum number of jobs that can be scheduled per node (defaults to 2x cores) Maximum number of jobs that can be scheduled per node (defaults to 2x cores) =#=#=#= End test: Get controller metadata - OK (0) =#=#=#= * Passed: pacemaker-controld - Get controller metadata =#=#=#= Begin test: Get fencer metadata =#=#=#= 1.1 Instance attributes available for all "stonith"-class resources and used by Pacemaker's fence daemon, formerly known as stonithd Instance attributes available for all "stonith"-class resources Some devices do not support the standard 'port' parameter or may provide additional ones. Use this to specify an alternate, device-specific, parameter that should indicate the machine to be fenced. A value of "none" can be used to tell the cluster not to supply any additional parameters. *** Advanced Use Only *** An alternate parameter to supply instead of 'port' For example, "node1:1;node2:2,3" would tell the cluster to use port 1 for node1 and ports 2 and 3 for node2. A mapping of node names to port numbers for devices that do not support node names. Comma-separated list of nodes that can be targeted by this device (for example, "node1,node2,node3"). If pcmk_host_check is "static-list", either this or pcmk_host_map must be set. Nodes targeted by this device Use "dynamic-list" to query the device via the 'list' command; "static-list" to check the pcmk_host_list attribute; "status" to query the device via the 'status' command; or "none" to assume every device can fence every node. The default value is "static-list" if pcmk_host_map or pcmk_host_list is set; otherwise "dynamic-list" if the device supports the list operation; otherwise "status" if the device supports the status operation; otherwise "none" Allowed values: dynamic-list, static-list, status, none How to determine which nodes can be targeted by the device Enable a delay of no more than the time specified before executing fencing actions. 
Pacemaker derives the overall delay by taking the value of pcmk_delay_base and adding a random delay value such that the sum is kept below this maximum. Enable a delay of no more than the time specified before executing fencing actions. This enables a static delay for fencing actions, which can help avoid "death matches" where two nodes try to fence each other at the same time. If pcmk_delay_max is also used, a random delay will be added such that the total delay is kept below that value. This can be set to a single time value to apply to any node targeted by this device (useful if a separate device is configured for each target), or to a node map (for example, "node1:1s;node2:5") to set a different value for each target. Enable a base delay for fencing actions and specify base delay value. Cluster property concurrent-fencing="true" needs to be configured first. Then use this to specify the maximum number of actions can be performed in parallel on this device. A value of -1 means an unlimited number of actions can be performed in parallel. The maximum number of actions can be performed in parallel on this device Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'reboot' action. *** Advanced Use Only *** An alternate command to run instead of 'reboot' Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'reboot' actions. *** Advanced Use Only *** Specify an alternate timeout to use for 'reboot' actions instead of stonith-timeout Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries a 'reboot' action before giving up. *** Advanced Use Only *** The maximum number of times to try the 'reboot' command within the timeout period Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'off' action. *** Advanced Use Only *** An alternate command to run instead of 'off' Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'off' actions. *** Advanced Use Only *** Specify an alternate timeout to use for 'off' actions instead of stonith-timeout Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries a 'off' action before giving up. *** Advanced Use Only *** The maximum number of times to try the 'off' command within the timeout period Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'on' action. *** Advanced Use Only *** An alternate command to run instead of 'on' Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'on' actions. *** Advanced Use Only *** Specify an alternate timeout to use for 'on' actions instead of stonith-timeout Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. 
In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries a 'on' action before giving up. *** Advanced Use Only *** The maximum number of times to try the 'on' command within the timeout period Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'list' action. *** Advanced Use Only *** An alternate command to run instead of 'list' Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'list' actions. *** Advanced Use Only *** Specify an alternate timeout to use for 'list' actions instead of stonith-timeout Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries a 'list' action before giving up. *** Advanced Use Only *** The maximum number of times to try the 'list' command within the timeout period Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'monitor' action. *** Advanced Use Only *** An alternate command to run instead of 'monitor' Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'monitor' actions. *** Advanced Use Only *** Specify an alternate timeout to use for 'monitor' actions instead of stonith-timeout Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries a 'monitor' action before giving up. *** Advanced Use Only *** The maximum number of times to try the 'monitor' command within the timeout period Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'status' action. *** Advanced Use Only *** An alternate command to run instead of 'status' Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'status' actions. *** Advanced Use Only *** Specify an alternate timeout to use for 'status' actions instead of stonith-timeout Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries a 'status' action before giving up. 
*** Advanced Use Only *** The maximum number of times to try the 'status' command within the timeout period =#=#=#= End test: Get fencer metadata - OK (0) =#=#=#= * Passed: pacemaker-fenced - Get fencer metadata =#=#=#= Begin test: Get scheduler metadata =#=#=#= 1.1 Cluster options used by Pacemaker's scheduler Pacemaker scheduler options - What to do when the cluster does not have quorum Allowed values: stop, freeze, ignore, demote, suicide + What to do when the cluster does not have quorum Allowed values: stop, freeze, ignore, demote, fence, suicide What to do when the cluster does not have quorum When true, resources active on a node when it is cleanly shut down are kept "locked" to that node (not allowed to run elsewhere) until they start again on that node after it rejoins (or for at most shutdown-lock-limit, if set). Stonith resources and Pacemaker Remote connections are never locked. Clone and bundle instances and the promoted role of promotable clones are currently never locked, though support could be added in a future release. Whether to lock resources to a cleanly shut down node If shutdown-lock is true and this is set to a nonzero time duration, shutdown locks will expire after this much time has passed since the shutdown was initiated, even if the node has not rejoined. Do not lock resources to a cleanly shut down node longer than this Whether resources can run on any node by default Whether resources can run on any node by default Whether the cluster should refrain from monitoring, starting, and stopping resources Whether the cluster should refrain from monitoring, starting, and stopping resources When true, the cluster will immediately ban a resource from a node if it fails to start there. When false, the cluster will instead check the resource's fail count against its migration-threshold. Whether a start failure should prevent a resource from being recovered on the same node Whether the cluster should check for active resources during start-up Whether the cluster should check for active resources during start-up If false, unresponsive nodes are immediately assumed to be harmless, and resources that were active on them may be recovered elsewhere. This can result in a "split-brain" situation, potentially leading to data loss and/or service unavailability. *** Advanced Use Only *** Whether nodes may be fenced as part of recovery Action to send to fence device when a node needs to be fenced ("poweroff" is a deprecated alias for "off") Allowed values: reboot, off, poweroff Action to send to fence device when a node needs to be fenced ("poweroff" is a deprecated alias for "off") How long to wait for on, off, and reboot fence actions to complete by default How long to wait for on, off, and reboot fence actions to complete by default This is set automatically by the cluster according to whether SBD is detected to be in use. User-configured values are ignored. The value `true` is meaningful if diskless SBD is used and `stonith-watchdog-timeout` is nonzero. In that case, if fencing is required, watchdog-based self-fencing will be performed via SBD without requiring a fencing resource explicitly configured. Whether watchdog integration is enabled Allow performing fencing operations in parallel Allow performing fencing operations in parallel Setting this to false may lead to a "split-brain" situation, potentially leading to data loss and/or service unavailability. 
*** Advanced Use Only *** Whether to fence unseen nodes at start-up Apply specified delay for the fencings that are targeting the lost nodes with the highest total resource priority in case we don't have the majority of the nodes in our cluster partition, so that the more significant nodes potentially win any fencing match, which is especially meaningful under split-brain of 2-node cluster. A promoted resource instance takes the base priority + 1 on calculation if the base priority is not 0. Any static/random delays that are introduced by `pcmk_delay_base/max` configured for the corresponding fencing resources will be added to this delay. This delay should be significantly greater than, safely twice, the maximum `pcmk_delay_base/max`. By default, priority fencing delay is disabled. Apply fencing delay targeting the lost nodes with the highest total resource priority Fence nodes that do not join the controller process group within this much time after joining the cluster, to allow the cluster to continue managing resources. A value of 0 means never fence pending nodes. Setting the value to 2h means fence nodes after 2 hours. How long to wait for a node that has joined the cluster to join the controller process group The node elected Designated Controller (DC) will consider an action failed if it does not get a response from the node executing the action within this time (after considering the action's own timeout). The "correct" value will depend on the speed and load of your network and cluster nodes. Maximum time for node-to-node communication The "correct" value will depend on the speed and load of your network and cluster nodes. If set to 0, the cluster will impose a dynamically calculated limit when any node has a high load. Maximum number of jobs that the cluster may execute in parallel across all nodes The number of live migration actions that the cluster is allowed to execute in parallel on a node (-1 means no limit) The number of live migration actions that the cluster is allowed to execute in parallel on a node (-1 means no limit) Whether the cluster should stop all active resources Whether the cluster should stop all active resources Whether to stop resources that were removed from the configuration Whether to stop resources that were removed from the configuration Whether to cancel recurring actions removed from the configuration Whether to cancel recurring actions removed from the configuration Values other than default are poorly tested and potentially dangerous. *** Deprecated *** Whether to remove stopped resources from the executor Zero to disable, -1 to store unlimited. The number of scheduler inputs resulting in errors to save Zero to disable, -1 to store unlimited. The number of scheduler inputs resulting in warnings to save Zero to disable, -1 to store unlimited. The number of scheduler inputs without errors or warnings to save Requires external entities to create node attributes (named with the prefix "#health") with values "red", "yellow", or "green". Allowed values: none, migrate-on-red, only-green, progressive, custom How cluster should react to node health attributes Only used when "node-health-strategy" is set to "progressive". Base health score assigned to a node Only used when "node-health-strategy" is set to "custom" or "progressive". The score to use for a node health attribute whose value is "green" Only used when "node-health-strategy" is set to "custom" or "progressive". 
The score to use for a node health attribute whose value is "yellow" Only used when "node-health-strategy" is set to "custom" or "progressive". The score to use for a node health attribute whose value is "red" How the cluster should allocate resources to nodes Allowed values: default, utilization, minimal, balanced How the cluster should allocate resources to nodes =#=#=#= End test: Get scheduler metadata - OK (0) =#=#=#= * Passed: pacemaker-schedulerd - Get scheduler metadata diff --git a/cts/cli/regression.tools.exp b/cts/cli/regression.tools.exp index b1bfc3c451..8c946fa664 100644 --- a/cts/cli/regression.tools.exp +++ b/cts/cli/regression.tools.exp @@ -1,10381 +1,10383 @@ Created new pacemaker configuration A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= Begin test: Validate CIB =#=#=#= =#=#=#= Current cib after: Validate CIB =#=#=#= =#=#=#= End test: Validate CIB - OK (0) =#=#=#= * Passed: cibadmin - Validate CIB =#=#=#= Begin test: List all available options (invalid type) =#=#=#= crm_attribute: Invalid --list-options value 'asdf'. Allowed values: cluster =#=#=#= End test: List all available options (invalid type) - Incorrect usage (64) =#=#=#= * Passed: crm_attribute - List all available options (invalid type) =#=#=#= Begin test: List all available options (invalid type) (XML) =#=#=#= crm_attribute: Invalid --list-options value 'asdf'. Allowed values: cluster =#=#=#= End test: List all available options (invalid type) (XML) - Incorrect usage (64) =#=#=#= * Passed: crm_attribute - List all available options (invalid type) (XML) =#=#=#= Begin test: List non-advanced cluster options =#=#=#= Pacemaker cluster options Also known as properties, these are options that affect behavior across the entire cluster. They are configured within cluster_property_set elements inside the crm_config subsection of the CIB configuration section. * dc-version: Pacemaker version on cluster node elected Designated Controller (DC) * Includes a hash which identifies the exact revision the code was built from. Used for diagnostic purposes. * Possible values (generated by Pacemaker): version (no default) * cluster-infrastructure: The messaging layer on which Pacemaker is currently running * Used for informational and diagnostic purposes. * Possible values (generated by Pacemaker): string (no default) * cluster-name: An arbitrary name for the cluster * This optional value is mostly for users' convenience as desired in administration, but may also be used in Pacemaker configuration rules via the #cluster-name node attribute, and by higher-level tools and resource agents. * Possible values: string (no default) * dc-deadtime: How long to wait for a response from other nodes during start-up * The optimal value will depend on the speed and load of your network and the type of switches used. * Possible values: duration (default: ) * cluster-recheck-interval: Polling interval to recheck cluster state and evaluate rules with date specifications * Pacemaker is primarily event-driven, and looks ahead to know when to recheck cluster state for failure-timeout settings and most time-based rules. However, it will also recheck the cluster after this amount of inactivity, to evaluate rules with date specifications and serve as a fail-safe for certain types of scheduler bugs. A value of 0 disables polling. A positive value sets an interval in seconds, unless other units are specified (for example, "5min"). 
* Possible values: duration (default: ) * fence-reaction: How a cluster node should react if notified of its own fencing * A cluster node may receive notification of a "succeeded" fencing that targeted it if fencing is misconfigured, or if fabric fencing is in use that doesn't cut cluster communication. Use "stop" to attempt to immediately stop Pacemaker and stay stopped, or "panic" to attempt to immediately reboot the local node, falling back to stop on failure. * Possible values: "stop" (default), "panic" * no-quorum-policy: What to do when the cluster does not have quorum - * Possible values: "stop" (default), "freeze", "ignore", "demote", "suicide" + * Possible values: "stop" (default), "freeze", "ignore", "demote", "fence", "suicide" * shutdown-lock: Whether to lock resources to a cleanly shut down node * When true, resources active on a node when it is cleanly shut down are kept "locked" to that node (not allowed to run elsewhere) until they start again on that node after it rejoins (or for at most shutdown-lock-limit, if set). Stonith resources and Pacemaker Remote connections are never locked. Clone and bundle instances and the promoted role of promotable clones are currently never locked, though support could be added in a future release. * Possible values: boolean (default: ) * shutdown-lock-limit: Do not lock resources to a cleanly shut down node longer than this * If shutdown-lock is true and this is set to a nonzero time duration, shutdown locks will expire after this much time has passed since the shutdown was initiated, even if the node has not rejoined. * Possible values: duration (default: ) * enable-acl: Enable Access Control Lists (ACLs) for the CIB * Possible values: boolean (default: ) * symmetric-cluster: Whether resources can run on any node by default * Possible values: boolean (default: ) * maintenance-mode: Whether the cluster should refrain from monitoring, starting, and stopping resources * Possible values: boolean (default: ) * start-failure-is-fatal: Whether a start failure should prevent a resource from being recovered on the same node * When true, the cluster will immediately ban a resource from a node if it fails to start there. When false, the cluster will instead check the resource's fail count against its migration-threshold. * Possible values: boolean (default: ) * enable-startup-probes: Whether the cluster should check for active resources during start-up * Possible values: boolean (default: ) * stonith-action: Action to send to fence device when a node needs to be fenced ("poweroff" is a deprecated alias for "off") * Possible values: "reboot" (default), "off", "poweroff" * stonith-timeout: How long to wait for on, off, and reboot fence actions to complete by default * Possible values: duration (default: ) * have-watchdog: Whether watchdog integration is enabled * This is set automatically by the cluster according to whether SBD is detected to be in use. User-configured values are ignored. The value `true` is meaningful if diskless SBD is used and `stonith-watchdog-timeout` is nonzero. In that case, if fencing is required, watchdog-based self-fencing will be performed via SBD without requiring a fencing resource explicitly configured. 
* Possible values (generated by Pacemaker): boolean (default: ) * stonith-watchdog-timeout: How long before nodes can be assumed to be safely down when watchdog-based self-fencing via SBD is in use * If this is set to a positive value, lost nodes are assumed to achieve self-fencing using watchdog-based SBD within this much time. This does not require a fencing resource to be explicitly configured, though a fence_watchdog resource can be configured, to limit use to specific nodes. If this is set to 0 (the default), the cluster will never assume watchdog-based self-fencing. If this is set to a negative value, the cluster will use twice the local value of the `SBD_WATCHDOG_TIMEOUT` environment variable if that is positive, or otherwise treat this as 0. WARNING: When used, this timeout must be larger than `SBD_WATCHDOG_TIMEOUT` on all nodes that use watchdog-based SBD, and Pacemaker will refuse to start on any of those nodes where this is not true for the local value or SBD is not active. When this is set to a negative value, `SBD_WATCHDOG_TIMEOUT` must be set to the same value on all nodes that use SBD, otherwise data corruption or loss could occur. * Possible values: timeout (default: ) * stonith-max-attempts: How many times fencing can fail before it will no longer be immediately re-attempted on a target * Possible values: score (default: ) * concurrent-fencing: Allow performing fencing operations in parallel * Possible values: boolean (default: ) * priority-fencing-delay: Apply fencing delay targeting the lost nodes with the highest total resource priority * Apply specified delay for the fencings that are targeting the lost nodes with the highest total resource priority in case we don't have the majority of the nodes in our cluster partition, so that the more significant nodes potentially win any fencing match, which is especially meaningful under split-brain of 2-node cluster. A promoted resource instance takes the base priority + 1 on calculation if the base priority is not 0. Any static/random delays that are introduced by `pcmk_delay_base/max` configured for the corresponding fencing resources will be added to this delay. This delay should be significantly greater than, safely twice, the maximum `pcmk_delay_base/max`. By default, priority fencing delay is disabled. * Possible values: duration (default: ) * node-pending-timeout: How long to wait for a node that has joined the cluster to join the controller process group * Fence nodes that do not join the controller process group within this much time after joining the cluster, to allow the cluster to continue managing resources. A value of 0 means never fence pending nodes. Setting the value to 2h means fence nodes after 2 hours. * Possible values: duration (default: ) * cluster-delay: Maximum time for node-to-node communication * The node elected Designated Controller (DC) will consider an action failed if it does not get a response from the node executing the action within this time (after considering the action's own timeout). The "correct" value will depend on the speed and load of your network and cluster nodes. 
* Possible values: duration (default: ) * load-threshold: Maximum amount of system load that should be used by cluster nodes * The cluster will slow down its recovery process when the amount of system resources used (currently CPU) approaches this limit * Possible values: percentage (default: ) * node-action-limit: Maximum number of jobs that can be scheduled per node (defaults to 2x cores) * Possible values: integer (default: ) * batch-limit: Maximum number of jobs that the cluster may execute in parallel across all nodes * The "correct" value will depend on the speed and load of your network and cluster nodes. If set to 0, the cluster will impose a dynamically calculated limit when any node has a high load. * Possible values: integer (default: ) * migration-limit: The number of live migration actions that the cluster is allowed to execute in parallel on a node (-1 means no limit) * Possible values: integer (default: ) * cluster-ipc-limit: Maximum IPC message backlog before disconnecting a cluster daemon * Raise this if log has "Evicting client" messages for cluster daemon PIDs (a good value is the number of resources in the cluster multiplied by the number of nodes). * Possible values: nonnegative_integer (default: ) * stop-all-resources: Whether the cluster should stop all active resources * Possible values: boolean (default: ) * stop-orphan-resources: Whether to stop resources that were removed from the configuration * Possible values: boolean (default: ) * stop-orphan-actions: Whether to cancel recurring actions removed from the configuration * Possible values: boolean (default: ) * pe-error-series-max: The number of scheduler inputs resulting in errors to save * Zero to disable, -1 to store unlimited. * Possible values: integer (default: ) * pe-warn-series-max: The number of scheduler inputs resulting in warnings to save * Zero to disable, -1 to store unlimited. * Possible values: integer (default: ) * pe-input-series-max: The number of scheduler inputs without errors or warnings to save * Zero to disable, -1 to store unlimited. * Possible values: integer (default: ) * node-health-strategy: How cluster should react to node health attributes * Requires external entities to create node attributes (named with the prefix "#health") with values "red", "yellow", or "green". * Possible values: "none" (default), "migrate-on-red", "only-green", "progressive", "custom" * node-health-base: Base health score assigned to a node * Only used when "node-health-strategy" is set to "progressive". * Possible values: score (default: ) * node-health-green: The score to use for a node health attribute whose value is "green" * Only used when "node-health-strategy" is set to "custom" or "progressive". * Possible values: score (default: ) * node-health-yellow: The score to use for a node health attribute whose value is "yellow" * Only used when "node-health-strategy" is set to "custom" or "progressive". * Possible values: score (default: ) * node-health-red: The score to use for a node health attribute whose value is "red" * Only used when "node-health-strategy" is set to "custom" or "progressive". 
* Possible values: score (default: ) * placement-strategy: How the cluster should allocate resources to nodes * Possible values: "default" (default), "utilization", "minimal", "balanced" =#=#=#= End test: List non-advanced cluster options - OK (0) =#=#=#= * Passed: crm_attribute - List non-advanced cluster options =#=#=#= Begin test: List non-advanced cluster options (XML) (shows all) =#=#=#= 1.1 Also known as properties, these are options that affect behavior across the entire cluster. They are configured within cluster_property_set elements inside the crm_config subsection of the CIB configuration section. Pacemaker cluster options Includes a hash which identifies the exact revision the code was built from. Used for diagnostic purposes. Pacemaker version on cluster node elected Designated Controller (DC) Used for informational and diagnostic purposes. The messaging layer on which Pacemaker is currently running This optional value is mostly for users' convenience as desired in administration, but may also be used in Pacemaker configuration rules via the #cluster-name node attribute, and by higher-level tools and resource agents. An arbitrary name for the cluster The optimal value will depend on the speed and load of your network and the type of switches used. How long to wait for a response from other nodes during start-up Pacemaker is primarily event-driven, and looks ahead to know when to recheck cluster state for failure-timeout settings and most time-based rules. However, it will also recheck the cluster after this amount of inactivity, to evaluate rules with date specifications and serve as a fail-safe for certain types of scheduler bugs. A value of 0 disables polling. A positive value sets an interval in seconds, unless other units are specified (for example, "5min"). Polling interval to recheck cluster state and evaluate rules with date specifications A cluster node may receive notification of a "succeeded" fencing that targeted it if fencing is misconfigured, or if fabric fencing is in use that doesn't cut cluster communication. Use "stop" to attempt to immediately stop Pacemaker and stay stopped, or "panic" to attempt to immediately reboot the local node, falling back to stop on failure. How a cluster node should react if notified of its own fencing Declare an election failed if it is not decided within this much time. If you need to adjust this value, it probably indicates the presence of a bug. Declare an election failed if it is not decided within this much time. If you need to adjust this value, it probably indicates the presence of a bug. Exit immediately if shutdown does not complete within this much time. If you need to adjust this value, it probably indicates the presence of a bug. Exit immediately if shutdown does not complete within this much time. If you need to adjust this value, it probably indicates the presence of a bug. If you need to adjust this value, it probably indicates the presence of a bug. If you need to adjust this value, it probably indicates the presence of a bug. If you need to adjust this value, it probably indicates the presence of a bug. If you need to adjust this value, it probably indicates the presence of a bug. Delay cluster recovery for this much time to allow for additional events to occur. Useful if your configuration is sensitive to the order in which ping updates arrive. 
Enabling this option will slow down cluster recovery under all conditions What to do when the cluster does not have quorum What to do when the cluster does not have quorum When true, resources active on a node when it is cleanly shut down are kept "locked" to that node (not allowed to run elsewhere) until they start again on that node after it rejoins (or for at most shutdown-lock-limit, if set). Stonith resources and Pacemaker Remote connections are never locked. Clone and bundle instances and the promoted role of promotable clones are currently never locked, though support could be added in a future release. Whether to lock resources to a cleanly shut down node If shutdown-lock is true and this is set to a nonzero time duration, shutdown locks will expire after this much time has passed since the shutdown was initiated, even if the node has not rejoined. Do not lock resources to a cleanly shut down node longer than this Enable Access Control Lists (ACLs) for the CIB Enable Access Control Lists (ACLs) for the CIB Whether resources can run on any node by default Whether resources can run on any node by default Whether the cluster should refrain from monitoring, starting, and stopping resources Whether the cluster should refrain from monitoring, starting, and stopping resources When true, the cluster will immediately ban a resource from a node if it fails to start there. When false, the cluster will instead check the resource's fail count against its migration-threshold. Whether a start failure should prevent a resource from being recovered on the same node Whether the cluster should check for active resources during start-up Whether the cluster should check for active resources during start-up If false, unresponsive nodes are immediately assumed to be harmless, and resources that were active on them may be recovered elsewhere. This can result in a "split-brain" situation, potentially leading to data loss and/or service unavailability. Whether nodes may be fenced as part of recovery Action to send to fence device when a node needs to be fenced ("poweroff" is a deprecated alias for "off") Action to send to fence device when a node needs to be fenced ("poweroff" is a deprecated alias for "off") How long to wait for on, off, and reboot fence actions to complete by default How long to wait for on, off, and reboot fence actions to complete by default This is set automatically by the cluster according to whether SBD is detected to be in use. User-configured values are ignored. The value `true` is meaningful if diskless SBD is used and `stonith-watchdog-timeout` is nonzero. In that case, if fencing is required, watchdog-based self-fencing will be performed via SBD without requiring a fencing resource explicitly configured. Whether watchdog integration is enabled If this is set to a positive value, lost nodes are assumed to achieve self-fencing using watchdog-based SBD within this much time. This does not require a fencing resource to be explicitly configured, though a fence_watchdog resource can be configured, to limit use to specific nodes. If this is set to 0 (the default), the cluster will never assume watchdog-based self-fencing. If this is set to a negative value, the cluster will use twice the local value of the `SBD_WATCHDOG_TIMEOUT` environment variable if that is positive, or otherwise treat this as 0. 
WARNING: When used, this timeout must be larger than `SBD_WATCHDOG_TIMEOUT` on all nodes that use watchdog-based SBD, and Pacemaker will refuse to start on any of those nodes where this is not true for the local value or SBD is not active. When this is set to a negative value, `SBD_WATCHDOG_TIMEOUT` must be set to the same value on all nodes that use SBD, otherwise data corruption or loss could occur. How long before nodes can be assumed to be safely down when watchdog-based self-fencing via SBD is in use How many times fencing can fail before it will no longer be immediately re-attempted on a target How many times fencing can fail before it will no longer be immediately re-attempted on a target Allow performing fencing operations in parallel Allow performing fencing operations in parallel Setting this to false may lead to a "split-brain" situation, potentially leading to data loss and/or service unavailability. Whether to fence unseen nodes at start-up Apply specified delay for the fencings that are targeting the lost nodes with the highest total resource priority in case we don't have the majority of the nodes in our cluster partition, so that the more significant nodes potentially win any fencing match, which is especially meaningful under split-brain of 2-node cluster. A promoted resource instance takes the base priority + 1 on calculation if the base priority is not 0. Any static/random delays that are introduced by `pcmk_delay_base/max` configured for the corresponding fencing resources will be added to this delay. This delay should be significantly greater than, safely twice, the maximum `pcmk_delay_base/max`. By default, priority fencing delay is disabled. Apply fencing delay targeting the lost nodes with the highest total resource priority Fence nodes that do not join the controller process group within this much time after joining the cluster, to allow the cluster to continue managing resources. A value of 0 means never fence pending nodes. Setting the value to 2h means fence nodes after 2 hours. How long to wait for a node that has joined the cluster to join the controller process group The node elected Designated Controller (DC) will consider an action failed if it does not get a response from the node executing the action within this time (after considering the action's own timeout). The "correct" value will depend on the speed and load of your network and cluster nodes. Maximum time for node-to-node communication The cluster will slow down its recovery process when the amount of system resources used (currently CPU) approaches this limit Maximum amount of system load that should be used by cluster nodes Maximum number of jobs that can be scheduled per node (defaults to 2x cores) Maximum number of jobs that can be scheduled per node (defaults to 2x cores) The "correct" value will depend on the speed and load of your network and cluster nodes. If set to 0, the cluster will impose a dynamically calculated limit when any node has a high load. Maximum number of jobs that the cluster may execute in parallel across all nodes The number of live migration actions that the cluster is allowed to execute in parallel on a node (-1 means no limit) The number of live migration actions that the cluster is allowed to execute in parallel on a node (-1 means no limit) Raise this if log has "Evicting client" messages for cluster daemon PIDs (a good value is the number of resources in the cluster multiplied by the number of nodes). 
Maximum IPC message backlog before disconnecting a cluster daemon Whether the cluster should stop all active resources Whether the cluster should stop all active resources Whether to stop resources that were removed from the configuration Whether to stop resources that were removed from the configuration Whether to cancel recurring actions removed from the configuration Whether to cancel recurring actions removed from the configuration Values other than default are poorly tested and potentially dangerous. Whether to remove stopped resources from the executor Zero to disable, -1 to store unlimited. The number of scheduler inputs resulting in errors to save Zero to disable, -1 to store unlimited. The number of scheduler inputs resulting in warnings to save Zero to disable, -1 to store unlimited. The number of scheduler inputs without errors or warnings to save Requires external entities to create node attributes (named with the prefix "#health") with values "red", "yellow", or "green". How cluster should react to node health attributes Only used when "node-health-strategy" is set to "progressive". Base health score assigned to a node Only used when "node-health-strategy" is set to "custom" or "progressive". The score to use for a node health attribute whose value is "green" Only used when "node-health-strategy" is set to "custom" or "progressive". The score to use for a node health attribute whose value is "yellow" Only used when "node-health-strategy" is set to "custom" or "progressive". The score to use for a node health attribute whose value is "red" How the cluster should allocate resources to nodes How the cluster should allocate resources to nodes =#=#=#= End test: List non-advanced cluster options (XML) (shows all) - OK (0) =#=#=#= * Passed: crm_attribute - List non-advanced cluster options (XML) (shows all) =#=#=#= Begin test: List all available cluster options =#=#=#= Pacemaker cluster options Also known as properties, these are options that affect behavior across the entire cluster. They are configured within cluster_property_set elements inside the crm_config subsection of the CIB configuration section. * dc-version: Pacemaker version on cluster node elected Designated Controller (DC) * Includes a hash which identifies the exact revision the code was built from. Used for diagnostic purposes. * Possible values (generated by Pacemaker): version (no default) * cluster-infrastructure: The messaging layer on which Pacemaker is currently running * Used for informational and diagnostic purposes. * Possible values (generated by Pacemaker): string (no default) * cluster-name: An arbitrary name for the cluster * This optional value is mostly for users' convenience as desired in administration, but may also be used in Pacemaker configuration rules via the #cluster-name node attribute, and by higher-level tools and resource agents. * Possible values: string (no default) * dc-deadtime: How long to wait for a response from other nodes during start-up * The optimal value will depend on the speed and load of your network and the type of switches used. * Possible values: duration (default: ) * cluster-recheck-interval: Polling interval to recheck cluster state and evaluate rules with date specifications * Pacemaker is primarily event-driven, and looks ahead to know when to recheck cluster state for failure-timeout settings and most time-based rules. 
However, it will also recheck the cluster after this amount of inactivity, to evaluate rules with date specifications and serve as a fail-safe for certain types of scheduler bugs. A value of 0 disables polling. A positive value sets an interval in seconds, unless other units are specified (for example, "5min"). * Possible values: duration (default: ) * fence-reaction: How a cluster node should react if notified of its own fencing * A cluster node may receive notification of a "succeeded" fencing that targeted it if fencing is misconfigured, or if fabric fencing is in use that doesn't cut cluster communication. Use "stop" to attempt to immediately stop Pacemaker and stay stopped, or "panic" to attempt to immediately reboot the local node, falling back to stop on failure. * Possible values: "stop" (default), "panic" * no-quorum-policy: What to do when the cluster does not have quorum - * Possible values: "stop" (default), "freeze", "ignore", "demote", "suicide" + * Possible values: "stop" (default), "freeze", "ignore", "demote", "fence", "suicide" * shutdown-lock: Whether to lock resources to a cleanly shut down node * When true, resources active on a node when it is cleanly shut down are kept "locked" to that node (not allowed to run elsewhere) until they start again on that node after it rejoins (or for at most shutdown-lock-limit, if set). Stonith resources and Pacemaker Remote connections are never locked. Clone and bundle instances and the promoted role of promotable clones are currently never locked, though support could be added in a future release. * Possible values: boolean (default: ) * shutdown-lock-limit: Do not lock resources to a cleanly shut down node longer than this * If shutdown-lock is true and this is set to a nonzero time duration, shutdown locks will expire after this much time has passed since the shutdown was initiated, even if the node has not rejoined. * Possible values: duration (default: ) * enable-acl: Enable Access Control Lists (ACLs) for the CIB * Possible values: boolean (default: ) * symmetric-cluster: Whether resources can run on any node by default * Possible values: boolean (default: ) * maintenance-mode: Whether the cluster should refrain from monitoring, starting, and stopping resources * Possible values: boolean (default: ) * start-failure-is-fatal: Whether a start failure should prevent a resource from being recovered on the same node * When true, the cluster will immediately ban a resource from a node if it fails to start there. When false, the cluster will instead check the resource's fail count against its migration-threshold. * Possible values: boolean (default: ) * enable-startup-probes: Whether the cluster should check for active resources during start-up * Possible values: boolean (default: ) * stonith-action: Action to send to fence device when a node needs to be fenced ("poweroff" is a deprecated alias for "off") * Possible values: "reboot" (default), "off", "poweroff" * stonith-timeout: How long to wait for on, off, and reboot fence actions to complete by default * Possible values: duration (default: ) * have-watchdog: Whether watchdog integration is enabled * This is set automatically by the cluster according to whether SBD is detected to be in use. User-configured values are ignored. The value `true` is meaningful if diskless SBD is used and `stonith-watchdog-timeout` is nonzero. In that case, if fencing is required, watchdog-based self-fencing will be performed via SBD without requiring a fencing resource explicitly configured. 
* Possible values (generated by Pacemaker): boolean (default: )
* stonith-watchdog-timeout: How long before nodes can be assumed to be safely down when watchdog-based self-fencing via SBD is in use
* If this is set to a positive value, lost nodes are assumed to achieve self-fencing using watchdog-based SBD within this much time. This does not require a fencing resource to be explicitly configured, though a fence_watchdog resource can be configured to limit use to specific nodes. If this is set to 0 (the default), the cluster will never assume watchdog-based self-fencing. If this is set to a negative value, the cluster will use twice the local value of the `SBD_WATCHDOG_TIMEOUT` environment variable if that is positive, or otherwise treat this as 0. WARNING: When used, this timeout must be larger than `SBD_WATCHDOG_TIMEOUT` on all nodes that use watchdog-based SBD, and Pacemaker will refuse to start on any of those nodes where this is not true for the local value or SBD is not active. When this is set to a negative value, `SBD_WATCHDOG_TIMEOUT` must be set to the same value on all nodes that use SBD, otherwise data corruption or loss could occur.
* Possible values: timeout (default: )
* stonith-max-attempts: How many times fencing can fail before it will no longer be immediately re-attempted on a target
* Possible values: score (default: )
* concurrent-fencing: Allow performing fencing operations in parallel
* Possible values: boolean (default: )
* priority-fencing-delay: Apply fencing delay targeting the lost nodes with the highest total resource priority
* Apply the specified delay to fencing actions targeting the lost nodes with the highest total resource priority when our cluster partition does not hold a majority of nodes, so that the more significant nodes are more likely to win any fencing match; this is especially meaningful under split-brain in a 2-node cluster. A promoted resource instance adds 1 to its base priority in this calculation if the base priority is not 0. Any static/random delays introduced by `pcmk_delay_base/max` configured for the corresponding fencing resources will be added to this delay. This delay should be significantly greater than (safely, twice) the maximum `pcmk_delay_base/max`. By default, priority fencing delay is disabled.
* Possible values: duration (default: )
* node-pending-timeout: How long to wait for a node that has joined the cluster to join the controller process group
* Fence nodes that do not join the controller process group within this much time after joining the cluster, to allow the cluster to continue managing resources. A value of 0 means never fence pending nodes. Setting the value to 2h means fence nodes after 2 hours.
* Possible values: duration (default: )
* cluster-delay: Maximum time for node-to-node communication
* The node elected Designated Controller (DC) will consider an action failed if it does not get a response from the node executing the action within this time (after considering the action's own timeout). The "correct" value will depend on the speed and load of your network and cluster nodes.
* Possible values: duration (default: )
* load-threshold: Maximum amount of system load that should be used by cluster nodes
* The cluster will slow down its recovery process when the amount of system resources used (currently CPU) approaches this limit
* Possible values: percentage (default: )
* node-action-limit: Maximum number of jobs that can be scheduled per node (defaults to 2x cores)
* Possible values: integer (default: )
* batch-limit: Maximum number of jobs that the cluster may execute in parallel across all nodes
* The "correct" value will depend on the speed and load of your network and cluster nodes. If set to 0, the cluster will impose a dynamically calculated limit when any node has a high load.
* Possible values: integer (default: )
* migration-limit: The number of live migration actions that the cluster is allowed to execute in parallel on a node (-1 means no limit)
* Possible values: integer (default: )
* cluster-ipc-limit: Maximum IPC message backlog before disconnecting a cluster daemon
* Raise this if log has "Evicting client" messages for cluster daemon PIDs (a good value is the number of resources in the cluster multiplied by the number of nodes).
* Possible values: nonnegative_integer (default: )
* stop-all-resources: Whether the cluster should stop all active resources
* Possible values: boolean (default: )
* stop-orphan-resources: Whether to stop resources that were removed from the configuration
* Possible values: boolean (default: )
* stop-orphan-actions: Whether to cancel recurring actions removed from the configuration
* Possible values: boolean (default: )
* pe-error-series-max: The number of scheduler inputs resulting in errors to save
* Zero to disable, -1 to store unlimited.
* Possible values: integer (default: )
* pe-warn-series-max: The number of scheduler inputs resulting in warnings to save
* Zero to disable, -1 to store unlimited.
* Possible values: integer (default: )
* pe-input-series-max: The number of scheduler inputs without errors or warnings to save
* Zero to disable, -1 to store unlimited.
* Possible values: integer (default: )
* node-health-strategy: How the cluster should react to node health attributes
* Requires external entities to create node attributes (named with the prefix "#health") with values "red", "yellow", or "green".
* Possible values: "none" (default), "migrate-on-red", "only-green", "progressive", "custom"
* node-health-base: Base health score assigned to a node
* Only used when "node-health-strategy" is set to "progressive".
* Possible values: score (default: )
* node-health-green: The score to use for a node health attribute whose value is "green"
* Only used when "node-health-strategy" is set to "custom" or "progressive".
* Possible values: score (default: )
* node-health-yellow: The score to use for a node health attribute whose value is "yellow"
* Only used when "node-health-strategy" is set to "custom" or "progressive".
* Possible values: score (default: )
* node-health-red: The score to use for a node health attribute whose value is "red"
* Only used when "node-health-strategy" is set to "custom" or "progressive".
* Possible values: score (default: )
* placement-strategy: How the cluster should allocate resources to nodes
* Possible values: "default" (default), "utilization", "minimal", "balanced"
* ADVANCED OPTIONS:
* election-timeout: Declare an election failed if it is not decided within this much time. If you need to adjust this value, it probably indicates the presence of a bug.
* Possible values: duration (default: )
* shutdown-escalation: Exit immediately if shutdown does not complete within this much time. If you need to adjust this value, it probably indicates the presence of a bug.
* Possible values: duration (default: )
* join-integration-timeout: If you need to adjust this value, it probably indicates the presence of a bug.
* Possible values: duration (default: )
* join-finalization-timeout: If you need to adjust this value, it probably indicates the presence of a bug.
* Possible values: duration (default: )
* transition-delay: Enabling this option will slow down cluster recovery under all conditions
* Delay cluster recovery for this much time to allow for additional events to occur. Useful if your configuration is sensitive to the order in which ping updates arrive.
* Possible values: duration (default: )
* stonith-enabled: Whether nodes may be fenced as part of recovery
* If false, unresponsive nodes are immediately assumed to be harmless, and resources that were active on them may be recovered elsewhere. This can result in a "split-brain" situation, potentially leading to data loss and/or service unavailability.
* Possible values: boolean (default: )
* startup-fencing: Whether to fence unseen nodes at start-up
* Setting this to false may lead to a "split-brain" situation, potentially leading to data loss and/or service unavailability.
* Possible values: boolean (default: )
* DEPRECATED OPTIONS (will be removed in a future release):
* remove-after-stop: Whether to remove stopped resources from the executor
* Values other than default are poorly tested and potentially dangerous.
* Possible values: boolean (default: )
=#=#=#= End test: List all available cluster options - OK (0) =#=#=#=
* Passed: crm_attribute - List all available cluster options
=#=#=#= Begin test: List all available cluster options (XML) =#=#=#=
1.1 Also known as properties, these are options that affect behavior across the entire cluster. They are configured within cluster_property_set elements inside the crm_config subsection of the CIB configuration section. Pacemaker cluster options Includes a hash which identifies the exact revision the code was built from. Used for diagnostic purposes. Pacemaker version on cluster node elected Designated Controller (DC) Used for informational and diagnostic purposes. The messaging layer on which Pacemaker is currently running This optional value is mostly for users' convenience as desired in administration, but may also be used in Pacemaker configuration rules via the #cluster-name node attribute, and by higher-level tools and resource agents. An arbitrary name for the cluster The optimal value will depend on the speed and load of your network and the type of switches used. How long to wait for a response from other nodes during start-up Pacemaker is primarily event-driven, and looks ahead to know when to recheck cluster state for failure-timeout settings and most time-based rules. However, it will also recheck the cluster after this amount of inactivity, to evaluate rules with date specifications and serve as a fail-safe for certain types of scheduler bugs. A value of 0 disables polling. A positive value sets an interval in seconds, unless other units are specified (for example, "5min").
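As noted just above, these options are configured within cluster_property_set elements inside the crm_config section, and crm_attribute is the usual way to manage them. A minimal sketch (the option names are real, the values here are illustrative):
$ crm_attribute --type crm_config --name cluster-recheck-interval --update 5min
$ crm_attribute --type crm_config --name stonith-max-attempts --update 5
$ crm_attribute --type crm_config --name cluster-delay --query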
Polling interval to recheck cluster state and evaluate rules with date specifications A cluster node may receive notification of a "succeeded" fencing that targeted it if fencing is misconfigured, or if fabric fencing is in use that doesn't cut cluster communication. Use "stop" to attempt to immediately stop Pacemaker and stay stopped, or "panic" to attempt to immediately reboot the local node, falling back to stop on failure. How a cluster node should react if notified of its own fencing Declare an election failed if it is not decided within this much time. If you need to adjust this value, it probably indicates the presence of a bug. Declare an election failed if it is not decided within this much time. If you need to adjust this value, it probably indicates the presence of a bug. Exit immediately if shutdown does not complete within this much time. If you need to adjust this value, it probably indicates the presence of a bug. Exit immediately if shutdown does not complete within this much time. If you need to adjust this value, it probably indicates the presence of a bug. If you need to adjust this value, it probably indicates the presence of a bug. If you need to adjust this value, it probably indicates the presence of a bug. If you need to adjust this value, it probably indicates the presence of a bug. If you need to adjust this value, it probably indicates the presence of a bug. Delay cluster recovery for this much time to allow for additional events to occur. Useful if your configuration is sensitive to the order in which ping updates arrive. Enabling this option will slow down cluster recovery under all conditions What to do when the cluster does not have quorum What to do when the cluster does not have quorum When true, resources active on a node when it is cleanly shut down are kept "locked" to that node (not allowed to run elsewhere) until they start again on that node after it rejoins (or for at most shutdown-lock-limit, if set). Stonith resources and Pacemaker Remote connections are never locked. Clone and bundle instances and the promoted role of promotable clones are currently never locked, though support could be added in a future release. Whether to lock resources to a cleanly shut down node If shutdown-lock is true and this is set to a nonzero time duration, shutdown locks will expire after this much time has passed since the shutdown was initiated, even if the node has not rejoined. Do not lock resources to a cleanly shut down node longer than this Enable Access Control Lists (ACLs) for the CIB Enable Access Control Lists (ACLs) for the CIB Whether resources can run on any node by default Whether resources can run on any node by default Whether the cluster should refrain from monitoring, starting, and stopping resources Whether the cluster should refrain from monitoring, starting, and stopping resources When true, the cluster will immediately ban a resource from a node if it fails to start there. When false, the cluster will instead check the resource's fail count against its migration-threshold. Whether a start failure should prevent a resource from being recovered on the same node Whether the cluster should check for active resources during start-up Whether the cluster should check for active resources during start-up If false, unresponsive nodes are immediately assumed to be harmless, and resources that were active on them may be recovered elsewhere. This can result in a "split-brain" situation, potentially leading to data loss and/or service unavailability. 
Whether nodes may be fenced as part of recovery Action to send to fence device when a node needs to be fenced ("poweroff" is a deprecated alias for "off") Action to send to fence device when a node needs to be fenced ("poweroff" is a deprecated alias for "off") How long to wait for on, off, and reboot fence actions to complete by default How long to wait for on, off, and reboot fence actions to complete by default This is set automatically by the cluster according to whether SBD is detected to be in use. User-configured values are ignored. The value `true` is meaningful if diskless SBD is used and `stonith-watchdog-timeout` is nonzero. In that case, if fencing is required, watchdog-based self-fencing will be performed via SBD without requiring a fencing resource explicitly configured. Whether watchdog integration is enabled If this is set to a positive value, lost nodes are assumed to achieve self-fencing using watchdog-based SBD within this much time. This does not require a fencing resource to be explicitly configured, though a fence_watchdog resource can be configured, to limit use to specific nodes. If this is set to 0 (the default), the cluster will never assume watchdog-based self-fencing. If this is set to a negative value, the cluster will use twice the local value of the `SBD_WATCHDOG_TIMEOUT` environment variable if that is positive, or otherwise treat this as 0. WARNING: When used, this timeout must be larger than `SBD_WATCHDOG_TIMEOUT` on all nodes that use watchdog-based SBD, and Pacemaker will refuse to start on any of those nodes where this is not true for the local value or SBD is not active. When this is set to a negative value, `SBD_WATCHDOG_TIMEOUT` must be set to the same value on all nodes that use SBD, otherwise data corruption or loss could occur. How long before nodes can be assumed to be safely down when watchdog-based self-fencing via SBD is in use How many times fencing can fail before it will no longer be immediately re-attempted on a target How many times fencing can fail before it will no longer be immediately re-attempted on a target Allow performing fencing operations in parallel Allow performing fencing operations in parallel Setting this to false may lead to a "split-brain" situation, potentially leading to data loss and/or service unavailability. Whether to fence unseen nodes at start-up Apply specified delay for the fencings that are targeting the lost nodes with the highest total resource priority in case we don't have the majority of the nodes in our cluster partition, so that the more significant nodes potentially win any fencing match, which is especially meaningful under split-brain of 2-node cluster. A promoted resource instance takes the base priority + 1 on calculation if the base priority is not 0. Any static/random delays that are introduced by `pcmk_delay_base/max` configured for the corresponding fencing resources will be added to this delay. This delay should be significantly greater than, safely twice, the maximum `pcmk_delay_base/max`. By default, priority fencing delay is disabled. Apply fencing delay targeting the lost nodes with the highest total resource priority Fence nodes that do not join the controller process group within this much time after joining the cluster, to allow the cluster to continue managing resources. A value of 0 means never fence pending nodes. Setting the value to 2h means fence nodes after 2 hours. 
How long to wait for a node that has joined the cluster to join the controller process group The node elected Designated Controller (DC) will consider an action failed if it does not get a response from the node executing the action within this time (after considering the action's own timeout). The "correct" value will depend on the speed and load of your network and cluster nodes. Maximum time for node-to-node communication The cluster will slow down its recovery process when the amount of system resources used (currently CPU) approaches this limit Maximum amount of system load that should be used by cluster nodes Maximum number of jobs that can be scheduled per node (defaults to 2x cores) Maximum number of jobs that can be scheduled per node (defaults to 2x cores) The "correct" value will depend on the speed and load of your network and cluster nodes. If set to 0, the cluster will impose a dynamically calculated limit when any node has a high load. Maximum number of jobs that the cluster may execute in parallel across all nodes The number of live migration actions that the cluster is allowed to execute in parallel on a node (-1 means no limit) The number of live migration actions that the cluster is allowed to execute in parallel on a node (-1 means no limit) Raise this if log has "Evicting client" messages for cluster daemon PIDs (a good value is the number of resources in the cluster multiplied by the number of nodes). Maximum IPC message backlog before disconnecting a cluster daemon Whether the cluster should stop all active resources Whether the cluster should stop all active resources Whether to stop resources that were removed from the configuration Whether to stop resources that were removed from the configuration Whether to cancel recurring actions removed from the configuration Whether to cancel recurring actions removed from the configuration Values other than default are poorly tested and potentially dangerous. Whether to remove stopped resources from the executor Zero to disable, -1 to store unlimited. The number of scheduler inputs resulting in errors to save Zero to disable, -1 to store unlimited. The number of scheduler inputs resulting in warnings to save Zero to disable, -1 to store unlimited. The number of scheduler inputs without errors or warnings to save Requires external entities to create node attributes (named with the prefix "#health") with values "red", "yellow", or "green". How cluster should react to node health attributes Only used when "node-health-strategy" is set to "progressive". Base health score assigned to a node Only used when "node-health-strategy" is set to "custom" or "progressive". The score to use for a node health attribute whose value is "green" Only used when "node-health-strategy" is set to "custom" or "progressive". The score to use for a node health attribute whose value is "yellow" Only used when "node-health-strategy" is set to "custom" or "progressive". 
The score to use for a node health attribute whose value is "red" How the cluster should allocate resources to nodes How the cluster should allocate resources to nodes =#=#=#= End test: List all available cluster options (XML) - OK (0) =#=#=#= * Passed: crm_attribute - List all available cluster options (XML) =#=#=#= Begin test: Query the value of an attribute that does not exist =#=#=#= crm_attribute: Error performing operation: No such device or address =#=#=#= End test: Query the value of an attribute that does not exist - No such object (105) =#=#=#= * Passed: crm_attribute - Query the value of an attribute that does not exist =#=#=#= Begin test: Configure something before erasing =#=#=#= =#=#=#= Current cib after: Configure something before erasing =#=#=#= =#=#=#= End test: Configure something before erasing - OK (0) =#=#=#= * Passed: crm_attribute - Configure something before erasing =#=#=#= Begin test: Test '++' XML attribute update syntax =#=#=#= =#=#=#= Current cib after: Test '++' XML attribute update syntax =#=#=#= =#=#=#= End test: Test '++' XML attribute update syntax - OK (0) =#=#=#= * Passed: cibadmin - Test '++' XML attribute update syntax =#=#=#= Begin test: Test '+=' XML attribute update syntax =#=#=#= =#=#=#= Current cib after: Test '+=' XML attribute update syntax =#=#=#= =#=#=#= End test: Test '+=' XML attribute update syntax - OK (0) =#=#=#= * Passed: cibadmin - Test '+=' XML attribute update syntax =#=#=#= Begin test: Test '++' nvpair value update syntax =#=#=#= =#=#=#= Current cib after: Test '++' nvpair value update syntax =#=#=#= =#=#=#= End test: Test '++' nvpair value update syntax - OK (0) =#=#=#= * Passed: crm_attribute - Test '++' nvpair value update syntax =#=#=#= Begin test: Test '++' nvpair value update syntax (XML) =#=#=#= =#=#=#= Current cib after: Test '++' nvpair value update syntax (XML) =#=#=#= =#=#=#= End test: Test '++' nvpair value update syntax (XML) - OK (0) =#=#=#= * Passed: crm_attribute - Test '++' nvpair value update syntax (XML) =#=#=#= Begin test: Test '+=' nvpair value update syntax =#=#=#= =#=#=#= Current cib after: Test '+=' nvpair value update syntax =#=#=#= =#=#=#= End test: Test '+=' nvpair value update syntax - OK (0) =#=#=#= * Passed: crm_attribute - Test '+=' nvpair value update syntax =#=#=#= Begin test: Test '+=' nvpair value update syntax (XML) =#=#=#= =#=#=#= Current cib after: Test '+=' nvpair value update syntax (XML) =#=#=#= =#=#=#= End test: Test '+=' nvpair value update syntax (XML) - OK (0) =#=#=#= * Passed: crm_attribute - Test '+=' nvpair value update syntax (XML) =#=#=#= Begin test: Test '++' XML attribute update syntax (--score not set) =#=#=#= =#=#=#= Current cib after: Test '++' XML attribute update syntax (--score not set) =#=#=#= =#=#=#= End test: Test '++' XML attribute update syntax (--score not set) - OK (0) =#=#=#= * Passed: cibadmin - Test '++' XML attribute update syntax (--score not set) =#=#=#= Begin test: Test '+=' XML attribute update syntax (--score not set) =#=#=#= =#=#=#= Current cib after: Test '+=' XML attribute update syntax (--score not set) =#=#=#= =#=#=#= End test: Test '+=' XML attribute update syntax (--score not set) - OK (0) =#=#=#= * Passed: cibadmin - Test '+=' XML attribute update syntax (--score not set) =#=#=#= Begin test: Test '++' nvpair value update syntax (--score not set) =#=#=#= =#=#=#= Current cib after: Test '++' nvpair value update syntax (--score not set) =#=#=#= =#=#=#= End test: Test '++' nvpair value update syntax (--score not set) - OK (0) =#=#=#= * Passed: crm_attribute 
- Test '++' nvpair value update syntax (--score not set) =#=#=#= Begin test: Test '++' nvpair value update syntax (--score not set) (XML) =#=#=#= =#=#=#= Current cib after: Test '++' nvpair value update syntax (--score not set) (XML) =#=#=#= =#=#=#= End test: Test '++' nvpair value update syntax (--score not set) (XML) - OK (0) =#=#=#= * Passed: crm_attribute - Test '++' nvpair value update syntax (--score not set) (XML) =#=#=#= Begin test: Test '+=' nvpair value update syntax (--score not set) =#=#=#= =#=#=#= Current cib after: Test '+=' nvpair value update syntax (--score not set) =#=#=#= =#=#=#= End test: Test '+=' nvpair value update syntax (--score not set) - OK (0) =#=#=#= * Passed: crm_attribute - Test '+=' nvpair value update syntax (--score not set) =#=#=#= Begin test: Test '+=' nvpair value update syntax (--score not set) (XML) =#=#=#= =#=#=#= Current cib after: Test '+=' nvpair value update syntax (--score not set) (XML) =#=#=#= =#=#=#= End test: Test '+=' nvpair value update syntax (--score not set) (XML) - OK (0) =#=#=#= * Passed: crm_attribute - Test '+=' nvpair value update syntax (--score not set) (XML) =#=#=#= Begin test: Require --force for CIB erasure =#=#=#= cibadmin: The supplied command is considered dangerous. To prevent accidental destruction of the cluster, the --force flag is required in order to proceed. =#=#=#= Current cib after: Require --force for CIB erasure =#=#=#= =#=#=#= End test: Require --force for CIB erasure - Operation not safe (107) =#=#=#= * Passed: cibadmin - Require --force for CIB erasure =#=#=#= Begin test: Allow CIB erasure with --force =#=#=#= =#=#=#= End test: Allow CIB erasure with --force - OK (0) =#=#=#= * Passed: cibadmin - Allow CIB erasure with --force =#=#=#= Begin test: Query CIB =#=#=#= =#=#=#= Current cib after: Query CIB =#=#=#= =#=#=#= End test: Query CIB - OK (0) =#=#=#= * Passed: cibadmin - Query CIB =#=#=#= Begin test: Set cluster option =#=#=#= =#=#=#= Current cib after: Set cluster option =#=#=#= =#=#=#= End test: Set cluster option - OK (0) =#=#=#= * Passed: crm_attribute - Set cluster option =#=#=#= Begin test: Query new cluster option =#=#=#= =#=#=#= Current cib after: Query new cluster option =#=#=#= =#=#=#= End test: Query new cluster option - OK (0) =#=#=#= * Passed: cibadmin - Query new cluster option =#=#=#= Begin test: Query cluster options =#=#=#= =#=#=#= Current cib after: Query cluster options =#=#=#= =#=#=#= End test: Query cluster options - OK (0) =#=#=#= * Passed: cibadmin - Query cluster options =#=#=#= Begin test: Set no-quorum policy =#=#=#= =#=#=#= Current cib after: Set no-quorum policy =#=#=#= =#=#=#= End test: Set no-quorum policy - OK (0) =#=#=#= * Passed: crm_attribute - Set no-quorum policy =#=#=#= Begin test: Delete nvpair =#=#=#= =#=#=#= Current cib after: Delete nvpair =#=#=#= =#=#=#= End test: Delete nvpair - OK (0) =#=#=#= * Passed: cibadmin - Delete nvpair =#=#=#= Begin test: Create operation should fail =#=#=#= Call failed: File exists =#=#=#= Current cib after: Create operation should fail =#=#=#= =#=#=#= End test: Create operation should fail - Requested item already exists (108) =#=#=#= * Passed: cibadmin - Create operation should fail =#=#=#= Begin test: Modify cluster options section =#=#=#= =#=#=#= Current cib after: Modify cluster options section =#=#=#= =#=#=#= End test: Modify cluster options section - OK (0) =#=#=#= * Passed: cibadmin - Modify cluster options section =#=#=#= Begin test: Query updated cluster option =#=#=#= =#=#=#= Current cib after: Query updated cluster option 
=#=#=#= =#=#=#= End test: Query updated cluster option - OK (0) =#=#=#= * Passed: cibadmin - Query updated cluster option =#=#=#= Begin test: Set duplicate cluster option =#=#=#= =#=#=#= Current cib after: Set duplicate cluster option =#=#=#= =#=#=#= End test: Set duplicate cluster option - OK (0) =#=#=#= * Passed: crm_attribute - Set duplicate cluster option =#=#=#= Begin test: Setting multiply defined cluster option should fail =#=#=#= crm_attribute: Please choose from one of the matches below and supply the 'id' with --attr-id Multiple attributes match name=cluster-delay Value: 60s (id=cib-bootstrap-options-cluster-delay) Value: 40s (id=duplicate-cluster-delay) =#=#=#= Current cib after: Setting multiply defined cluster option should fail =#=#=#= =#=#=#= End test: Setting multiply defined cluster option should fail - Multiple items match request (109) =#=#=#= * Passed: crm_attribute - Setting multiply defined cluster option should fail =#=#=#= Begin test: Set cluster option with -s =#=#=#= =#=#=#= Current cib after: Set cluster option with -s =#=#=#= =#=#=#= End test: Set cluster option with -s - OK (0) =#=#=#= * Passed: crm_attribute - Set cluster option with -s =#=#=#= Begin test: Delete cluster option with -i =#=#=#= Deleted crm_config option: id=(null) name=cluster-delay =#=#=#= Current cib after: Delete cluster option with -i =#=#=#= =#=#=#= End test: Delete cluster option with -i - OK (0) =#=#=#= * Passed: crm_attribute - Delete cluster option with -i =#=#=#= Begin test: Create node1 and bring it online =#=#=#= unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity Current cluster status: * Full List of Resources: * No resources Performing Requested Modifications: * Bringing node node1 online Transition Summary: Executing Cluster Transition: Revised Cluster Status: * Node List: * Online: [ node1 ] * Full List of Resources: * No resources =#=#=#= Current cib after: Create node1 and bring it online =#=#=#= =#=#=#= End test: Create node1 and bring it online - OK (0) =#=#=#= * Passed: crm_simulate - Create node1 and bring it online =#=#=#= Begin test: Create node attribute =#=#=#= =#=#=#= Current cib after: Create node attribute =#=#=#= =#=#=#= End test: Create node attribute - OK (0) =#=#=#= * Passed: crm_attribute - Create node attribute =#=#=#= Begin test: Query new node attribute =#=#=#= =#=#=#= Current cib after: Query new node attribute =#=#=#= =#=#=#= End test: Query new node attribute - OK (0) =#=#=#= * Passed: cibadmin - Query new node attribute =#=#=#= Begin test: Create second node attribute =#=#=#= =#=#=#= Current cib after: Create second node attribute =#=#=#= =#=#=#= End test: Create second node attribute - OK (0) =#=#=#= * Passed: crm_attribute - Create second node attribute 
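The node-attribute tests above and below boil down to crm_attribute invocations along these lines (node1, ram, 1024M, and fail-count-foo mirror the recorded values; the exact command forms are an illustrative sketch):
$ crm_attribute --node node1 --name ram --update 1024M                      # permanent attribute (scope=nodes)
$ crm_attribute --node node1 --name ram --query
$ crm_attribute --node node1 --name fail-count-foo --update 3 --lifetime reboot   # transient attribute (scope=status)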
=#=#=#= Begin test: Query node attributes by pattern =#=#=#= scope=nodes name=ram value=1024M scope=nodes name=rattr value=XYZ =#=#=#= End test: Query node attributes by pattern - OK (0) =#=#=#= * Passed: crm_attribute - Query node attributes by pattern =#=#=#= Begin test: Update node attributes by pattern =#=#=#= =#=#=#= Current cib after: Update node attributes by pattern =#=#=#= =#=#=#= End test: Update node attributes by pattern - OK (0) =#=#=#= * Passed: crm_attribute - Update node attributes by pattern =#=#=#= Begin test: Delete node attributes by pattern =#=#=#= Deleted nodes attribute: id=nodes-node1-rattr name=rattr =#=#=#= Current cib after: Delete node attributes by pattern =#=#=#= =#=#=#= End test: Delete node attributes by pattern - OK (0) =#=#=#= * Passed: crm_attribute - Delete node attributes by pattern =#=#=#= Begin test: Set a transient (fail-count) node attribute =#=#=#= =#=#=#= Current cib after: Set a transient (fail-count) node attribute =#=#=#= =#=#=#= End test: Set a transient (fail-count) node attribute - OK (0) =#=#=#= * Passed: crm_attribute - Set a transient (fail-count) node attribute =#=#=#= Begin test: Query a fail count =#=#=#= scope=status name=fail-count-foo value=3 =#=#=#= Current cib after: Query a fail count =#=#=#= =#=#=#= End test: Query a fail count - OK (0) =#=#=#= * Passed: crm_failcount - Query a fail count =#=#=#= Begin test: Show node attributes with crm_simulate =#=#=#= unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity Current cluster status: * Node List: * Online: [ node1 ] * Full List of Resources: * No resources * Node Attributes: * Node: node1: * ram : 1024M =#=#=#= End test: Show node attributes with crm_simulate - OK (0) =#=#=#= * Passed: crm_simulate - Show node attributes with crm_simulate =#=#=#= Begin test: Set a second transient node attribute =#=#=#= =#=#=#= Current cib after: Set a second transient node attribute =#=#=#= =#=#=#= End test: Set a second transient node attribute - OK (0) =#=#=#= * Passed: crm_attribute - Set a second transient node attribute =#=#=#= Begin test: Query transient node attributes by pattern =#=#=#= scope=status name=fail-count-foo value=3 scope=status name=fail-count-bar value=5 =#=#=#= End test: Query transient node attributes by pattern - OK (0) =#=#=#= * Passed: crm_attribute - Query transient node attributes by pattern =#=#=#= Begin test: Update transient node attributes by pattern =#=#=#= =#=#=#= Current cib after: Update transient node attributes by pattern =#=#=#= =#=#=#= End test: Update transient node attributes by pattern - OK (0) =#=#=#= * Passed: crm_attribute - Update transient node attributes by pattern =#=#=#= Begin test: Delete transient node attributes by pattern =#=#=#= Deleted status attribute: id=status-node1-fail-count-foo name=fail-count-foo Deleted status attribute: id=status-node1-fail-count-bar name=fail-count-bar =#=#=#= Current cib after: Delete transient node attributes by pattern =#=#=#= =#=#=#= End test: Delete transient node attributes by pattern - OK (0) =#=#=#= * Passed: crm_attribute - Delete transient node attributes by pattern =#=#=#= Begin test: crm_attribute given invalid delete usage =#=#=#= crm_attribute: Error: must specify attribute name or pattern to delete =#=#=#= End test: crm_attribute given invalid delete usage - Incorrect 
usage (64) =#=#=#= * Passed: crm_attribute - crm_attribute given invalid delete usage =#=#=#= Begin test: Set a utilization node attribute =#=#=#= =#=#=#= Current cib after: Set a utilization node attribute =#=#=#= =#=#=#= End test: Set a utilization node attribute - OK (0) =#=#=#= * Passed: crm_attribute - Set a utilization node attribute =#=#=#= Begin test: Query utilization node attribute =#=#=#= scope=nodes name=cpu value=1 =#=#=#= End test: Query utilization node attribute - OK (0) =#=#=#= * Passed: crm_attribute - Query utilization node attribute =#=#=#= Begin test: Digest calculation =#=#=#= Digest: =#=#=#= Current cib after: Digest calculation =#=#=#= =#=#=#= End test: Digest calculation - OK (0) =#=#=#= * Passed: cibadmin - Digest calculation =#=#=#= Begin test: Replace operation should fail =#=#=#= Call failed: Update was older than existing configuration =#=#=#= Current cib after: Replace operation should fail =#=#=#= =#=#=#= End test: Replace operation should fail - Update was older than existing configuration (103) =#=#=#= * Passed: cibadmin - Replace operation should fail =#=#=#= Begin test: Default standby value =#=#=#= scope=status name=standby value=off =#=#=#= Current cib after: Default standby value =#=#=#= =#=#=#= End test: Default standby value - OK (0) =#=#=#= * Passed: crm_standby - Default standby value =#=#=#= Begin test: Set standby status =#=#=#= =#=#=#= Current cib after: Set standby status =#=#=#= =#=#=#= End test: Set standby status - OK (0) =#=#=#= * Passed: crm_standby - Set standby status =#=#=#= Begin test: Query standby value =#=#=#= scope=nodes name=standby value=true =#=#=#= Current cib after: Query standby value =#=#=#= =#=#=#= End test: Query standby value - OK (0) =#=#=#= * Passed: crm_standby - Query standby value =#=#=#= Begin test: Delete standby value =#=#=#= Deleted nodes attribute: id=nodes-node1-standby name=standby =#=#=#= Current cib after: Delete standby value =#=#=#= =#=#=#= End test: Delete standby value - OK (0) =#=#=#= * Passed: crm_standby - Delete standby value =#=#=#= Begin test: Create a resource =#=#=#= =#=#=#= Current cib after: Create a resource =#=#=#= =#=#=#= End test: Create a resource - OK (0) =#=#=#= * Passed: cibadmin - Create a resource =#=#=#= Begin test: crm_resource run with extra arguments =#=#=#= crm_resource: non-option ARGV-elements: [1 of 2] foo [2 of 2] bar =#=#=#= End test: crm_resource run with extra arguments - Incorrect usage (64) =#=#=#= * Passed: crm_resource - crm_resource run with extra arguments =#=#=#= Begin test: List all available resource options (invalid type) =#=#=#= crm_resource: Error parsing option --list-options =#=#=#= End test: List all available resource options (invalid type) - Incorrect usage (64) =#=#=#= * Passed: crm_resource - List all available resource options (invalid type) =#=#=#= Begin test: List all available resource options (invalid type) (XML) =#=#=#= crm_resource: Error parsing option --list-options =#=#=#= End test: List all available resource options (invalid type) (XML) - Incorrect usage (64) =#=#=#= * Passed: crm_resource - List all available resource options (invalid type) (XML) =#=#=#= Begin test: List non-advanced primitive meta-attributes =#=#=#= Primitive meta-attributes Meta-attributes applicable to primitive resources * priority: Resource assignment priority * If not all resources can be active, the cluster will stop lower-priority resources in order to keep higher-priority ones active. 
* Possible values: score (default: ) * critical: Default value for influence in colocation constraints * Use this value as the default for influence in all colocation constraints involving this resource, as well as in the implicit colocation constraints created if this resource is in a group. * Possible values: boolean (default: ) * target-role: State the cluster should attempt to keep this resource in * "Stopped" forces the resource to be stopped. "Started" allows the resource to be started (and in the case of promotable clone resources, promoted if appropriate). "Unpromoted" allows the resource to be started, but only in the unpromoted role if the resource is promotable. "Promoted" is equivalent to "Started". * Possible values: "Stopped", "Started" (default), "Unpromoted", "Promoted" * is-managed: Whether the cluster is allowed to actively change the resource's state * If false, the cluster will not start, stop, promote, or demote the resource on any node. Recurring actions for the resource are unaffected. If true, a true value for the maintenance-mode cluster option, the maintenance node attribute, or the maintenance resource meta-attribute overrides this. * Possible values: boolean (default: ) * maintenance: If true, the cluster will not schedule any actions involving the resource * If true, the cluster will not start, stop, promote, or demote the resource on any node, and will pause any recurring monitors (except those specifying role as "Stopped"). If false, a true value for the maintenance-mode cluster option or maintenance node attribute overrides this. * Possible values: boolean (default: ) * resource-stickiness: Score to add to the current node when a resource is already active * Score to add to the current node when a resource is already active. This allows running resources to stay where they are, even if they would be placed elsewhere if they were being started from a stopped state. The default is 1 for individual clone instances, and 0 for all other resources. * Possible values: score (no default) * requires: Conditions under which the resource can be started * Conditions under which the resource can be started. "nothing" means the cluster can always start this resource. "quorum" means the cluster can start this resource only if a majority of the configured nodes are active. "fencing" means the cluster can start this resource only if a majority of the configured nodes are active and any failed or unknown nodes have been fenced. "unfencing" means the cluster can start this resource only if a majority of the configured nodes are active and any failed or unknown nodes have been fenced, and only on nodes that have been unfenced. The default is "quorum" for resources with a class of stonith; otherwise, "unfencing" if unfencing is active in the cluster; otherwise, "fencing" if the stonith-enabled cluster option is true; otherwise, "quorum". * Possible values: "nothing", "quorum", "fencing", "unfencing" * migration-threshold: Number of failures on a node before the resource becomes ineligible to run there. * Number of failures that may occur for this resource on a node, before that node is marked ineligible to host this resource. A value of 0 indicates that this feature is disabled (the node will never be marked ineligible). By contrast, the cluster treats "INFINITY" (the default) as a very large but finite number. 
This option has an effect only if the failed operation specifies its on-fail attribute as "restart" (the default), and additionally for failed start operations, if the start-failure-is-fatal cluster property is set to false. * Possible values: score (default: ) * failure-timeout: Number of seconds before acting as if a failure had not occurred * Number of seconds after a failed action for this resource before acting as if the failure had not occurred, and potentially allowing the resource back to the node on which it failed. A value of 0 indicates that this feature is disabled. * Possible values: duration (default: ) * multiple-active: What to do if the cluster finds the resource active on more than one node * What to do if the cluster finds the resource active on more than one node. "block" means to mark the resource as unmanaged. "stop_only" means to stop all active instances of this resource and leave them stopped. "stop_start" means to stop all active instances of this resource and start the resource in one location only. "stop_unexpected" means to stop all active instances of this resource except where the resource should be active. (This should be used only when extra instances are not expected to disrupt existing instances, and the resource agent's monitor of an existing instance is capable of detecting any problems that could be caused. Note that any resources ordered after this one will still need to be restarted.) * Possible values: "block", "stop_only", "stop_start" (default), "stop_unexpected" * allow-migrate: Whether the cluster should try to "live migrate" this resource when it needs to be moved * Whether the cluster should try to "live migrate" this resource when it needs to be moved. The default is true for ocf:pacemaker:remote resources, and false otherwise. * Possible values: boolean (no default) * allow-unhealthy-nodes: Whether the resource should be allowed to run on a node even if the node's health score would otherwise prevent it * Possible values: boolean (default: ) * container-attribute-target: Where to check user-defined node attributes * Whether to check user-defined node attributes on the physical host where a container is running or on the local node. This is usually set for a bundle resource and inherited by the bundle's primitive resource. A value of "host" means to check user-defined node attributes on the underlying physical host. Any other value means to check user-defined node attributes on the local node (for a bundled primitive resource, this is the bundle node). * Possible values: string (no default) * remote-node: Name of the Pacemaker Remote guest node this resource is associated with, if any * Name of the Pacemaker Remote guest node this resource is associated with, if any. If specified, this both enables the resource as a guest node and defines the unique name used to identify the guest node. The guest must be configured to run the Pacemaker Remote daemon when it is started. WARNING: This value cannot overlap with any resource or node IDs. * Possible values: string (no default) * remote-addr: If remote-node is specified, the IP address or hostname used to connect to the guest via Pacemaker Remote * If remote-node is specified, the IP address or hostname used to connect to the guest via Pacemaker Remote. The Pacemaker Remote daemon on the guest must be configured to accept connections on this address. The default is the value of the remote-node meta-attribute. 
* Possible values: string (no default) * remote-port: If remote-node is specified, port on the guest used for its Pacemaker Remote connection * If remote-node is specified, the port on the guest used for its Pacemaker Remote connection. The Pacemaker Remote daemon on the guest must be configured to listen on this port. * Possible values: port (default: ) * remote-connect-timeout: If remote-node is specified, how long before a pending Pacemaker Remote guest connection times out. * Possible values: timeout (default: ) * remote-allow-migrate: If remote-node is specified, this acts as the allow-migrate meta-attribute for the implicit remote connection resource (ocf:pacemaker:remote). * Possible values: boolean (default: ) =#=#=#= End test: List non-advanced primitive meta-attributes - OK (0) =#=#=#= * Passed: crm_resource - List non-advanced primitive meta-attributes =#=#=#= Begin test: List non-advanced primitive meta-attributes (XML) (shows all) =#=#=#= 1.1 Meta-attributes applicable to primitive resources Primitive meta-attributes If not all resources can be active, the cluster will stop lower-priority resources in order to keep higher-priority ones active. Resource assignment priority Use this value as the default for influence in all colocation constraints involving this resource, as well as in the implicit colocation constraints created if this resource is in a group. Default value for influence in colocation constraints "Stopped" forces the resource to be stopped. "Started" allows the resource to be started (and in the case of promotable clone resources, promoted if appropriate). "Unpromoted" allows the resource to be started, but only in the unpromoted role if the resource is promotable. "Promoted" is equivalent to "Started". State the cluster should attempt to keep this resource in If false, the cluster will not start, stop, promote, or demote the resource on any node. Recurring actions for the resource are unaffected. If true, a true value for the maintenance-mode cluster option, the maintenance node attribute, or the maintenance resource meta-attribute overrides this. Whether the cluster is allowed to actively change the resource's state If true, the cluster will not start, stop, promote, or demote the resource on any node, and will pause any recurring monitors (except those specifying role as "Stopped"). If false, a true value for the maintenance-mode cluster option or maintenance node attribute overrides this. If true, the cluster will not schedule any actions involving the resource Score to add to the current node when a resource is already active. This allows running resources to stay where they are, even if they would be placed elsewhere if they were being started from a stopped state. The default is 1 for individual clone instances, and 0 for all other resources. Score to add to the current node when a resource is already active Conditions under which the resource can be started. "nothing" means the cluster can always start this resource. "quorum" means the cluster can start this resource only if a majority of the configured nodes are active. "fencing" means the cluster can start this resource only if a majority of the configured nodes are active and any failed or unknown nodes have been fenced. "unfencing" means the cluster can start this resource only if a majority of the configured nodes are active and any failed or unknown nodes have been fenced, and only on nodes that have been unfenced. 
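The meta-attributes listed above are typically managed per resource with crm_resource; a brief sketch, assuming a hypothetical primitive named dummy:
$ crm_resource --resource dummy --meta --set-parameter target-role --parameter-value Stopped
$ crm_resource --resource dummy --meta --get-parameter is-managed
$ crm_resource --resource dummy --meta --delete-parameter target-role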
The default is "quorum" for resources with a class of stonith; otherwise, "unfencing" if unfencing is active in the cluster; otherwise, "fencing" if the stonith-enabled cluster option is true; otherwise, "quorum". Conditions under which the resource can be started Number of failures that may occur for this resource on a node, before that node is marked ineligible to host this resource. A value of 0 indicates that this feature is disabled (the node will never be marked ineligible). By contrast, the cluster treats "INFINITY" (the default) as a very large but finite number. This option has an effect only if the failed operation specifies its on-fail attribute as "restart" (the default), and additionally for failed start operations, if the start-failure-is-fatal cluster property is set to false. Number of failures on a node before the resource becomes ineligible to run there. Number of seconds after a failed action for this resource before acting as if the failure had not occurred, and potentially allowing the resource back to the node on which it failed. A value of 0 indicates that this feature is disabled. Number of seconds before acting as if a failure had not occurred What to do if the cluster finds the resource active on more than one node. "block" means to mark the resource as unmanaged. "stop_only" means to stop all active instances of this resource and leave them stopped. "stop_start" means to stop all active instances of this resource and start the resource in one location only. "stop_unexpected" means to stop all active instances of this resource except where the resource should be active. (This should be used only when extra instances are not expected to disrupt existing instances, and the resource agent's monitor of an existing instance is capable of detecting any problems that could be caused. Note that any resources ordered after this one will still need to be restarted.) What to do if the cluster finds the resource active on more than one node Whether the cluster should try to "live migrate" this resource when it needs to be moved. The default is true for ocf:pacemaker:remote resources, and false otherwise. Whether the cluster should try to "live migrate" this resource when it needs to be moved Whether the resource should be allowed to run on a node even if the node's health score would otherwise prevent it Whether the resource should be allowed to run on a node even if the node's health score would otherwise prevent it Whether to check user-defined node attributes on the physical host where a container is running or on the local node. This is usually set for a bundle resource and inherited by the bundle's primitive resource. A value of "host" means to check user-defined node attributes on the underlying physical host. Any other value means to check user-defined node attributes on the local node (for a bundled primitive resource, this is the bundle node). Where to check user-defined node attributes Name of the Pacemaker Remote guest node this resource is associated with, if any. If specified, this both enables the resource as a guest node and defines the unique name used to identify the guest node. The guest must be configured to run the Pacemaker Remote daemon when it is started. WARNING: This value cannot overlap with any resource or node IDs. Name of the Pacemaker Remote guest node this resource is associated with, if any If remote-node is specified, the IP address or hostname used to connect to the guest via Pacemaker Remote. 
The Pacemaker Remote daemon on the guest must be configured to accept connections on this address. The default is the value of the remote-node meta-attribute. If remote-node is specified, the IP address or hostname used to connect to the guest via Pacemaker Remote If remote-node is specified, the port on the guest used for its Pacemaker Remote connection. The Pacemaker Remote daemon on the guest must be configured to listen on this port. If remote-node is specified, port on the guest used for its Pacemaker Remote connection If remote-node is specified, how long before a pending Pacemaker Remote guest connection times out. If remote-node is specified, how long before a pending Pacemaker Remote guest connection times out. If remote-node is specified, this acts as the allow-migrate meta-attribute for the implicit remote connection resource (ocf:pacemaker:remote). If remote-node is specified, this acts as the allow-migrate meta-attribute for the implicit remote connection resource (ocf:pacemaker:remote). =#=#=#= End test: List non-advanced primitive meta-attributes (XML) (shows all) - OK (0) =#=#=#= * Passed: crm_resource - List non-advanced primitive meta-attributes (XML) (shows all) =#=#=#= Begin test: List all available primitive meta-attributes =#=#=#= Primitive meta-attributes Meta-attributes applicable to primitive resources * priority: Resource assignment priority * If not all resources can be active, the cluster will stop lower-priority resources in order to keep higher-priority ones active. * Possible values: score (default: ) * critical: Default value for influence in colocation constraints * Use this value as the default for influence in all colocation constraints involving this resource, as well as in the implicit colocation constraints created if this resource is in a group. * Possible values: boolean (default: ) * target-role: State the cluster should attempt to keep this resource in * "Stopped" forces the resource to be stopped. "Started" allows the resource to be started (and in the case of promotable clone resources, promoted if appropriate). "Unpromoted" allows the resource to be started, but only in the unpromoted role if the resource is promotable. "Promoted" is equivalent to "Started". * Possible values: "Stopped", "Started" (default), "Unpromoted", "Promoted" * is-managed: Whether the cluster is allowed to actively change the resource's state * If false, the cluster will not start, stop, promote, or demote the resource on any node. Recurring actions for the resource are unaffected. If true, a true value for the maintenance-mode cluster option, the maintenance node attribute, or the maintenance resource meta-attribute overrides this. * Possible values: boolean (default: ) * maintenance: If true, the cluster will not schedule any actions involving the resource * If true, the cluster will not start, stop, promote, or demote the resource on any node, and will pause any recurring monitors (except those specifying role as "Stopped"). If false, a true value for the maintenance-mode cluster option or maintenance node attribute overrides this. * Possible values: boolean (default: ) * resource-stickiness: Score to add to the current node when a resource is already active * Score to add to the current node when a resource is already active. This allows running resources to stay where they are, even if they would be placed elsewhere if they were being started from a stopped state. The default is 1 for individual clone instances, and 0 for all other resources. 
* Possible values: score (no default) * requires: Conditions under which the resource can be started * Conditions under which the resource can be started. "nothing" means the cluster can always start this resource. "quorum" means the cluster can start this resource only if a majority of the configured nodes are active. "fencing" means the cluster can start this resource only if a majority of the configured nodes are active and any failed or unknown nodes have been fenced. "unfencing" means the cluster can start this resource only if a majority of the configured nodes are active and any failed or unknown nodes have been fenced, and only on nodes that have been unfenced. The default is "quorum" for resources with a class of stonith; otherwise, "unfencing" if unfencing is active in the cluster; otherwise, "fencing" if the stonith-enabled cluster option is true; otherwise, "quorum". * Possible values: "nothing", "quorum", "fencing", "unfencing" * migration-threshold: Number of failures on a node before the resource becomes ineligible to run there. * Number of failures that may occur for this resource on a node, before that node is marked ineligible to host this resource. A value of 0 indicates that this feature is disabled (the node will never be marked ineligible). By contrast, the cluster treats "INFINITY" (the default) as a very large but finite number. This option has an effect only if the failed operation specifies its on-fail attribute as "restart" (the default), and additionally for failed start operations, if the start-failure-is-fatal cluster property is set to false. * Possible values: score (default: ) * failure-timeout: Number of seconds before acting as if a failure had not occurred * Number of seconds after a failed action for this resource before acting as if the failure had not occurred, and potentially allowing the resource back to the node on which it failed. A value of 0 indicates that this feature is disabled. * Possible values: duration (default: ) * multiple-active: What to do if the cluster finds the resource active on more than one node * What to do if the cluster finds the resource active on more than one node. "block" means to mark the resource as unmanaged. "stop_only" means to stop all active instances of this resource and leave them stopped. "stop_start" means to stop all active instances of this resource and start the resource in one location only. "stop_unexpected" means to stop all active instances of this resource except where the resource should be active. (This should be used only when extra instances are not expected to disrupt existing instances, and the resource agent's monitor of an existing instance is capable of detecting any problems that could be caused. Note that any resources ordered after this one will still need to be restarted.) * Possible values: "block", "stop_only", "stop_start" (default), "stop_unexpected" * allow-migrate: Whether the cluster should try to "live migrate" this resource when it needs to be moved * Whether the cluster should try to "live migrate" this resource when it needs to be moved. The default is true for ocf:pacemaker:remote resources, and false otherwise. 
* Possible values: boolean (no default) * allow-unhealthy-nodes: Whether the resource should be allowed to run on a node even if the node's health score would otherwise prevent it * Possible values: boolean (default: ) * container-attribute-target: Where to check user-defined node attributes * Whether to check user-defined node attributes on the physical host where a container is running or on the local node. This is usually set for a bundle resource and inherited by the bundle's primitive resource. A value of "host" means to check user-defined node attributes on the underlying physical host. Any other value means to check user-defined node attributes on the local node (for a bundled primitive resource, this is the bundle node). * Possible values: string (no default) * remote-node: Name of the Pacemaker Remote guest node this resource is associated with, if any * Name of the Pacemaker Remote guest node this resource is associated with, if any. If specified, this both enables the resource as a guest node and defines the unique name used to identify the guest node. The guest must be configured to run the Pacemaker Remote daemon when it is started. WARNING: This value cannot overlap with any resource or node IDs. * Possible values: string (no default) * remote-addr: If remote-node is specified, the IP address or hostname used to connect to the guest via Pacemaker Remote * If remote-node is specified, the IP address or hostname used to connect to the guest via Pacemaker Remote. The Pacemaker Remote daemon on the guest must be configured to accept connections on this address. The default is the value of the remote-node meta-attribute. * Possible values: string (no default) * remote-port: If remote-node is specified, port on the guest used for its Pacemaker Remote connection * If remote-node is specified, the port on the guest used for its Pacemaker Remote connection. The Pacemaker Remote daemon on the guest must be configured to listen on this port. * Possible values: port (default: ) * remote-connect-timeout: If remote-node is specified, how long before a pending Pacemaker Remote guest connection times out. * Possible values: timeout (default: ) * remote-allow-migrate: If remote-node is specified, this acts as the allow-migrate meta-attribute for the implicit remote connection resource (ocf:pacemaker:remote). * Possible values: boolean (default: ) =#=#=#= End test: List all available primitive meta-attributes - OK (0) =#=#=#= * Passed: crm_resource - List all available primitive meta-attributes =#=#=#= Begin test: List all available primitive meta-attributes (XML) =#=#=#= 1.1 Meta-attributes applicable to primitive resources Primitive meta-attributes If not all resources can be active, the cluster will stop lower-priority resources in order to keep higher-priority ones active. Resource assignment priority Use this value as the default for influence in all colocation constraints involving this resource, as well as in the implicit colocation constraints created if this resource is in a group. Default value for influence in colocation constraints "Stopped" forces the resource to be stopped. "Started" allows the resource to be started (and in the case of promotable clone resources, promoted if appropriate). "Unpromoted" allows the resource to be started, but only in the unpromoted role if the resource is promotable. "Promoted" is equivalent to "Started". 
State the cluster should attempt to keep this resource in If false, the cluster will not start, stop, promote, or demote the resource on any node. Recurring actions for the resource are unaffected. If true, a true value for the maintenance-mode cluster option, the maintenance node attribute, or the maintenance resource meta-attribute overrides this. Whether the cluster is allowed to actively change the resource's state If true, the cluster will not start, stop, promote, or demote the resource on any node, and will pause any recurring monitors (except those specifying role as "Stopped"). If false, a true value for the maintenance-mode cluster option or maintenance node attribute overrides this. If true, the cluster will not schedule any actions involving the resource Score to add to the current node when a resource is already active. This allows running resources to stay where they are, even if they would be placed elsewhere if they were being started from a stopped state. The default is 1 for individual clone instances, and 0 for all other resources. Score to add to the current node when a resource is already active Conditions under which the resource can be started. "nothing" means the cluster can always start this resource. "quorum" means the cluster can start this resource only if a majority of the configured nodes are active. "fencing" means the cluster can start this resource only if a majority of the configured nodes are active and any failed or unknown nodes have been fenced. "unfencing" means the cluster can start this resource only if a majority of the configured nodes are active and any failed or unknown nodes have been fenced, and only on nodes that have been unfenced. The default is "quorum" for resources with a class of stonith; otherwise, "unfencing" if unfencing is active in the cluster; otherwise, "fencing" if the stonith-enabled cluster option is true; otherwise, "quorum". Conditions under which the resource can be started Number of failures that may occur for this resource on a node, before that node is marked ineligible to host this resource. A value of 0 indicates that this feature is disabled (the node will never be marked ineligible). By contrast, the cluster treats "INFINITY" (the default) as a very large but finite number. This option has an effect only if the failed operation specifies its on-fail attribute as "restart" (the default), and additionally for failed start operations, if the start-failure-is-fatal cluster property is set to false. Number of failures on a node before the resource becomes ineligible to run there. Number of seconds after a failed action for this resource before acting as if the failure had not occurred, and potentially allowing the resource back to the node on which it failed. A value of 0 indicates that this feature is disabled. Number of seconds before acting as if a failure had not occurred What to do if the cluster finds the resource active on more than one node. "block" means to mark the resource as unmanaged. "stop_only" means to stop all active instances of this resource and leave them stopped. "stop_start" means to stop all active instances of this resource and start the resource in one location only. "stop_unexpected" means to stop all active instances of this resource except where the resource should be active. (This should be used only when extra instances are not expected to disrupt existing instances, and the resource agent's monitor of an existing instance is capable of detecting any problems that could be caused. 
Note that any resources ordered after this one will still need to be restarted.) What to do if the cluster finds the resource active on more than one node Whether the cluster should try to "live migrate" this resource when it needs to be moved. The default is true for ocf:pacemaker:remote resources, and false otherwise. Whether the cluster should try to "live migrate" this resource when it needs to be moved Whether the resource should be allowed to run on a node even if the node's health score would otherwise prevent it Whether the resource should be allowed to run on a node even if the node's health score would otherwise prevent it Whether to check user-defined node attributes on the physical host where a container is running or on the local node. This is usually set for a bundle resource and inherited by the bundle's primitive resource. A value of "host" means to check user-defined node attributes on the underlying physical host. Any other value means to check user-defined node attributes on the local node (for a bundled primitive resource, this is the bundle node). Where to check user-defined node attributes Name of the Pacemaker Remote guest node this resource is associated with, if any. If specified, this both enables the resource as a guest node and defines the unique name used to identify the guest node. The guest must be configured to run the Pacemaker Remote daemon when it is started. WARNING: This value cannot overlap with any resource or node IDs. Name of the Pacemaker Remote guest node this resource is associated with, if any If remote-node is specified, the IP address or hostname used to connect to the guest via Pacemaker Remote. The Pacemaker Remote daemon on the guest must be configured to accept connections on this address. The default is the value of the remote-node meta-attribute. If remote-node is specified, the IP address or hostname used to connect to the guest via Pacemaker Remote If remote-node is specified, the port on the guest used for its Pacemaker Remote connection. The Pacemaker Remote daemon on the guest must be configured to listen on this port. If remote-node is specified, port on the guest used for its Pacemaker Remote connection If remote-node is specified, how long before a pending Pacemaker Remote guest connection times out. If remote-node is specified, how long before a pending Pacemaker Remote guest connection times out. If remote-node is specified, this acts as the allow-migrate meta-attribute for the implicit remote connection resource (ocf:pacemaker:remote). If remote-node is specified, this acts as the allow-migrate meta-attribute for the implicit remote connection resource (ocf:pacemaker:remote). =#=#=#= End test: List all available primitive meta-attributes (XML) - OK (0) =#=#=#= * Passed: crm_resource - List all available primitive meta-attributes (XML) =#=#=#= Begin test: List non-advanced fencing parameters =#=#=#= Fencing resource common parameters Special parameters that are available for all fencing resources, regardless of type. They are processed by Pacemaker, rather than by the fence agent or the fencing library. * pcmk_host_map: A mapping of node names to port numbers for devices that do not support node names. * For example, "node1:1;node2:2,3" would tell the cluster to use port 1 for node1 and ports 2 and 3 for node2. * Possible values: string (no default) * pcmk_host_list: Nodes targeted by this device * Comma-separated list of nodes that can be targeted by this device (for example, "node1,node2,node3"). 
If pcmk_host_check is "static-list", either this or pcmk_host_map must be set. * Possible values: string (no default) * pcmk_host_check: How to determine which nodes can be targeted by the device * Use "dynamic-list" to query the device via the 'list' command; "static-list" to check the pcmk_host_list attribute; "status" to query the device via the 'status' command; or "none" to assume every device can fence every node. The default value is "static-list" if pcmk_host_map or pcmk_host_list is set; otherwise "dynamic-list" if the device supports the list operation; otherwise "status" if the device supports the status operation; otherwise "none" * Possible values: "dynamic-list", "static-list", "status", "none" * pcmk_delay_max: Enable a delay of no more than the time specified before executing fencing actions. * Enable a delay of no more than the time specified before executing fencing actions. Pacemaker derives the overall delay by taking the value of pcmk_delay_base and adding a random delay value such that the sum is kept below this maximum. * Possible values: duration (default: ) * pcmk_delay_base: Enable a base delay for fencing actions and specify base delay value. * This enables a static delay for fencing actions, which can help avoid "death matches" where two nodes try to fence each other at the same time. If pcmk_delay_max is also used, a random delay will be added such that the total delay is kept below that value. This can be set to a single time value to apply to any node targeted by this device (useful if a separate device is configured for each target), or to a node map (for example, "node1:1s;node2:5") to set a different value for each target. * Possible values: string (default: ) * pcmk_action_limit: The maximum number of actions that can be performed in parallel on this device * Cluster property concurrent-fencing="true" needs to be configured first. Then use this to specify the maximum number of actions that can be performed in parallel on this device. A value of -1 means an unlimited number of actions can be performed in parallel. * Possible values: integer (default: ) =#=#=#= End test: List non-advanced fencing parameters - OK (0) =#=#=#= * Passed: crm_resource - List non-advanced fencing parameters =#=#=#= Begin test: List non-advanced fencing parameters (XML) (shows all) =#=#=#= 1.1 Special parameters that are available for all fencing resources, regardless of type. They are processed by Pacemaker, rather than by the fence agent or the fencing library. Fencing resource common parameters Some devices do not support the standard 'port' parameter or may provide additional ones. Use this to specify an alternate, device-specific, parameter that should indicate the machine to be fenced. A value of "none" can be used to tell the cluster not to supply any additional parameters. An alternate parameter to supply instead of 'port' For example, "node1:1;node2:2,3" would tell the cluster to use port 1 for node1 and ports 2 and 3 for node2. A mapping of node names to port numbers for devices that do not support node names. Comma-separated list of nodes that can be targeted by this device (for example, "node1,node2,node3"). If pcmk_host_check is "static-list", either this or pcmk_host_map must be set. Nodes targeted by this device Use "dynamic-list" to query the device via the 'list' command; "static-list" to check the pcmk_host_list attribute; "status" to query the device via the 'status' command; or "none" to assume every device can fence every node. 
The default value is "static-list" if pcmk_host_map or pcmk_host_list is set; otherwise "dynamic-list" if the device supports the list operation; otherwise "status" if the device supports the status operation; otherwise "none" How to determine which nodes can be targeted by the device Enable a delay of no more than the time specified before executing fencing actions. Pacemaker derives the overall delay by taking the value of pcmk_delay_base and adding a random delay value such that the sum is kept below this maximum. Enable a delay of no more than the time specified before executing fencing actions. This enables a static delay for fencing actions, which can help avoid "death matches" where two nodes try to fence each other at the same time. If pcmk_delay_max is also used, a random delay will be added such that the total delay is kept below that value. This can be set to a single time value to apply to any node targeted by this device (useful if a separate device is configured for each target), or to a node map (for example, "node1:1s;node2:5") to set a different value for each target. Enable a base delay for fencing actions and specify base delay value. Cluster property concurrent-fencing="true" needs to be configured first. Then use this to specify the maximum number of actions that can be performed in parallel on this device. A value of -1 means an unlimited number of actions can be performed in parallel. The maximum number of actions that can be performed in parallel on this device Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'reboot' action. An alternate command to run instead of 'reboot' Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'reboot' actions. Specify an alternate timeout to use for 'reboot' actions instead of stonith-timeout Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries a 'reboot' action before giving up. The maximum number of times to try the 'reboot' command within the timeout period Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'off' action. An alternate command to run instead of 'off' Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'off' actions. Specify an alternate timeout to use for 'off' actions instead of stonith-timeout Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries an 'off' action before giving up. The maximum number of times to try the 'off' command within the timeout period Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'on' action. An alternate command to run instead of 'on' Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'on' actions. 
Specify an alternate timeout to use for 'on' actions instead of stonith-timeout Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries an 'on' action before giving up. The maximum number of times to try the 'on' command within the timeout period Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'list' action. An alternate command to run instead of 'list' Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'list' actions. Specify an alternate timeout to use for 'list' actions instead of stonith-timeout Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries a 'list' action before giving up. The maximum number of times to try the 'list' command within the timeout period Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'monitor' action. An alternate command to run instead of 'monitor' Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'monitor' actions. Specify an alternate timeout to use for 'monitor' actions instead of stonith-timeout Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries a 'monitor' action before giving up. The maximum number of times to try the 'monitor' command within the timeout period Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'status' action. An alternate command to run instead of 'status' Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'status' actions. Specify an alternate timeout to use for 'status' actions instead of stonith-timeout Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries a 'status' action before giving up. The maximum number of times to try the 'status' command within the timeout period =#=#=#= End test: List non-advanced fencing parameters (XML) (shows all) - OK (0) =#=#=#= * Passed: crm_resource - List non-advanced fencing parameters (XML) (shows all) =#=#=#= Begin test: List all available fencing parameters =#=#=#= Fencing resource common parameters Special parameters that are available for all fencing resources, regardless of type. They are processed by Pacemaker, rather than by the fence agent or the fencing library. * pcmk_host_map: A mapping of node names to port numbers for devices that do not support node names. 
* For example, "node1:1;node2:2,3" would tell the cluster to use port 1 for node1 and ports 2 and 3 for node2. * Possible values: string (no default) * pcmk_host_list: Nodes targeted by this device * Comma-separated list of nodes that can be targeted by this device (for example, "node1,node2,node3"). If pcmk_host_check is "static-list", either this or pcmk_host_map must be set. * Possible values: string (no default) * pcmk_host_check: How to determine which nodes can be targeted by the device * Use "dynamic-list" to query the device via the 'list' command; "static-list" to check the pcmk_host_list attribute; "status" to query the device via the 'status' command; or "none" to assume every device can fence every node. The default value is "static-list" if pcmk_host_map or pcmk_host_list is set; otherwise "dynamic-list" if the device supports the list operation; otherwise "status" if the device supports the status operation; otherwise "none" * Possible values: "dynamic-list", "static-list", "status", "none" * pcmk_delay_max: Enable a delay of no more than the time specified before executing fencing actions. * Enable a delay of no more than the time specified before executing fencing actions. Pacemaker derives the overall delay by taking the value of pcmk_delay_base and adding a random delay value such that the sum is kept below this maximum. * Possible values: duration (default: ) * pcmk_delay_base: Enable a base delay for fencing actions and specify base delay value. * This enables a static delay for fencing actions, which can help avoid "death matches" where two nodes try to fence each other at the same time. If pcmk_delay_max is also used, a random delay will be added such that the total delay is kept below that value. This can be set to a single time value to apply to any node targeted by this device (useful if a separate device is configured for each target), or to a node map (for example, "node1:1s;node2:5") to set a different value for each target. * Possible values: string (default: ) * pcmk_action_limit: The maximum number of actions that can be performed in parallel on this device * Cluster property concurrent-fencing="true" needs to be configured first. Then use this to specify the maximum number of actions that can be performed in parallel on this device. A value of -1 means an unlimited number of actions can be performed in parallel. * Possible values: integer (default: ) * ADVANCED OPTIONS: * pcmk_host_argument: An alternate parameter to supply instead of 'port' * Some devices do not support the standard 'port' parameter or may provide additional ones. Use this to specify an alternate, device-specific, parameter that should indicate the machine to be fenced. A value of "none" can be used to tell the cluster not to supply any additional parameters. * Possible values: string (default: ) * pcmk_reboot_action: An alternate command to run instead of 'reboot' * Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'reboot' action. * Possible values: string (default: ) * pcmk_reboot_timeout: Specify an alternate timeout to use for 'reboot' actions instead of stonith-timeout * Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'reboot' actions. 
* Possible values: timeout (default: ) * pcmk_reboot_retries: The maximum number of times to try the 'reboot' command within the timeout period * Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries a 'reboot' action before giving up. * Possible values: integer (default: ) * pcmk_off_action: An alternate command to run instead of 'off' * Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'off' action. * Possible values: string (default: ) * pcmk_off_timeout: Specify an alternate timeout to use for 'off' actions instead of stonith-timeout * Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'off' actions. * Possible values: timeout (default: ) * pcmk_off_retries: The maximum number of times to try the 'off' command within the timeout period * Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries an 'off' action before giving up. * Possible values: integer (default: ) * pcmk_on_action: An alternate command to run instead of 'on' * Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'on' action. * Possible values: string (default: ) * pcmk_on_timeout: Specify an alternate timeout to use for 'on' actions instead of stonith-timeout * Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'on' actions. * Possible values: timeout (default: ) * pcmk_on_retries: The maximum number of times to try the 'on' command within the timeout period * Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries an 'on' action before giving up. * Possible values: integer (default: ) * pcmk_list_action: An alternate command to run instead of 'list' * Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'list' action. * Possible values: string (default: ) * pcmk_list_timeout: Specify an alternate timeout to use for 'list' actions instead of stonith-timeout * Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'list' actions. * Possible values: timeout (default: ) * pcmk_list_retries: The maximum number of times to try the 'list' command within the timeout period * Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries a 'list' action before giving up. 
* Possible values: integer (default: ) * pcmk_monitor_action: An alternate command to run instead of 'monitor' * Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'monitor' action. * Possible values: string (default: ) * pcmk_monitor_timeout: Specify an alternate timeout to use for 'monitor' actions instead of stonith-timeout * Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'monitor' actions. * Possible values: timeout (default: ) * pcmk_monitor_retries: The maximum number of times to try the 'monitor' command within the timeout period * Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries a 'monitor' action before giving up. * Possible values: integer (default: ) * pcmk_status_action: An alternate command to run instead of 'status' * Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'status' action. * Possible values: string (default: ) * pcmk_status_timeout: Specify an alternate timeout to use for 'status' actions instead of stonith-timeout * Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'status' actions. * Possible values: timeout (default: ) * pcmk_status_retries: The maximum number of times to try the 'status' command within the timeout period * Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries a 'status' action before giving up. * Possible values: integer (default: ) =#=#=#= End test: List all available fencing parameters - OK (0) =#=#=#= * Passed: crm_resource - List all available fencing parameters =#=#=#= Begin test: List all available fencing parameters (XML) =#=#=#= 1.1 Special parameters that are available for all fencing resources, regardless of type. They are processed by Pacemaker, rather than by the fence agent or the fencing library. Fencing resource common parameters Some devices do not support the standard 'port' parameter or may provide additional ones. Use this to specify an alternate, device-specific, parameter that should indicate the machine to be fenced. A value of "none" can be used to tell the cluster not to supply any additional parameters. An alternate parameter to supply instead of 'port' For example, "node1:1;node2:2,3" would tell the cluster to use port 1 for node1 and ports 2 and 3 for node2. A mapping of node names to port numbers for devices that do not support node names. Comma-separated list of nodes that can be targeted by this device (for example, "node1,node2,node3"). If pcmk_host_check is "static-list", either this or pcmk_host_map must be set. Nodes targeted by this device Use "dynamic-list" to query the device via the 'list' command; "static-list" to check the pcmk_host_list attribute; "status" to query the device via the 'status' command; or "none" to assume every device can fence every node. 
The default value is "static-list" if pcmk_host_map or pcmk_host_list is set; otherwise "dynamic-list" if the device supports the list operation; otherwise "status" if the device supports the status operation; otherwise "none" How to determine which nodes can be targeted by the device Enable a delay of no more than the time specified before executing fencing actions. Pacemaker derives the overall delay by taking the value of pcmk_delay_base and adding a random delay value such that the sum is kept below this maximum. Enable a delay of no more than the time specified before executing fencing actions. This enables a static delay for fencing actions, which can help avoid "death matches" where two nodes try to fence each other at the same time. If pcmk_delay_max is also used, a random delay will be added such that the total delay is kept below that value. This can be set to a single time value to apply to any node targeted by this device (useful if a separate device is configured for each target), or to a node map (for example, "node1:1s;node2:5") to set a different value for each target. Enable a base delay for fencing actions and specify base delay value. Cluster property concurrent-fencing="true" needs to be configured first. Then use this to specify the maximum number of actions that can be performed in parallel on this device. A value of -1 means an unlimited number of actions can be performed in parallel. The maximum number of actions that can be performed in parallel on this device Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'reboot' action. An alternate command to run instead of 'reboot' Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'reboot' actions. Specify an alternate timeout to use for 'reboot' actions instead of stonith-timeout Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries a 'reboot' action before giving up. The maximum number of times to try the 'reboot' command within the timeout period Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'off' action. An alternate command to run instead of 'off' Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'off' actions. Specify an alternate timeout to use for 'off' actions instead of stonith-timeout Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries an 'off' action before giving up. The maximum number of times to try the 'off' command within the timeout period Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'on' action. An alternate command to run instead of 'on' Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'on' actions. 
Specify an alternate timeout to use for 'on' actions instead of stonith-timeout Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries an 'on' action before giving up. The maximum number of times to try the 'on' command within the timeout period Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'list' action. An alternate command to run instead of 'list' Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'list' actions. Specify an alternate timeout to use for 'list' actions instead of stonith-timeout Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries a 'list' action before giving up. The maximum number of times to try the 'list' command within the timeout period Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'monitor' action. An alternate command to run instead of 'monitor' Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'monitor' actions. Specify an alternate timeout to use for 'monitor' actions instead of stonith-timeout Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries a 'monitor' action before giving up. The maximum number of times to try the 'monitor' command within the timeout period Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'status' action. An alternate command to run instead of 'status' Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'status' actions. Specify an alternate timeout to use for 'status' actions instead of stonith-timeout Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries a 'status' action before giving up. 
The maximum number of times to try the 'status' command within the timeout period =#=#=#= End test: List all available fencing parameters (XML) - OK (0) =#=#=#= * Passed: crm_resource - List all available fencing parameters (XML) =#=#=#= Begin test: crm_resource given both -r and resource config =#=#=#= crm_resource: --resource cannot be used with --class, --agent, and --provider =#=#=#= End test: crm_resource given both -r and resource config - Incorrect usage (64) =#=#=#= * Passed: crm_resource - crm_resource given both -r and resource config =#=#=#= Begin test: crm_resource given resource config with invalid action =#=#=#= crm_resource: --class, --agent, and --provider can only be used with --validate and --force-* =#=#=#= End test: crm_resource given resource config with invalid action - Incorrect usage (64) =#=#=#= * Passed: crm_resource - crm_resource given resource config with invalid action =#=#=#= Begin test: Create a resource meta attribute =#=#=#= unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity Set 'dummy' option: id=dummy-meta_attributes-is-managed set=dummy-meta_attributes name=is-managed value=false =#=#=#= Current cib after: Create a resource meta attribute =#=#=#= =#=#=#= End test: Create a resource meta attribute - OK (0) =#=#=#= * Passed: crm_resource - Create a resource meta attribute =#=#=#= Begin test: Query a resource meta attribute =#=#=#= unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity false =#=#=#= Current cib after: Query a resource meta attribute =#=#=#= =#=#=#= End test: Query a resource meta attribute - OK (0) =#=#=#= * Passed: crm_resource - Query a resource meta attribute =#=#=#= Begin test: Remove a resource meta attribute =#=#=#= unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity Deleted 'dummy' option: id=dummy-meta_attributes-is-managed name=is-managed =#=#=#= Current cib after: Remove a resource meta attribute =#=#=#= =#=#=#= End test: Remove a resource meta attribute - OK (0) =#=#=#= * Passed: crm_resource - Remove a resource meta attribute =#=#=#= Begin test: Create another resource meta attribute =#=#=#= unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity =#=#=#= End test: Create another resource meta attribute - OK (0) =#=#=#= * Passed: crm_resource - Create another resource meta attribute =#=#=#= Begin test: Show why a resource is not running =#=#=#= unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure 
data integrity =#=#=#= End test: Show why a resource is not running - OK (0) =#=#=#= * Passed: crm_resource - Show why a resource is not running =#=#=#= Begin test: Remove another resource meta attribute =#=#=#= unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity =#=#=#= End test: Remove another resource meta attribute - OK (0) =#=#=#= * Passed: crm_resource - Remove another resource meta attribute =#=#=#= Begin test: Get a non-existent attribute from a resource element with output-as=xml =#=#=#= unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity Attribute 'nonexistent' not found for 'dummy' =#=#=#= End test: Get a non-existent attribute from a resource element with output-as=xml - OK (0) =#=#=#= * Passed: crm_resource - Get a non-existent attribute from a resource element with output-as=xml =#=#=#= Begin test: Get a non-existent attribute from a resource element without output-as=xml =#=#=#= unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity Attribute 'nonexistent' not found for 'dummy' =#=#=#= Current cib after: Get a non-existent attribute from a resource element without output-as=xml =#=#=#= =#=#=#= End test: Get a non-existent attribute from a resource element without output-as=xml - OK (0) =#=#=#= * Passed: crm_resource - Get a non-existent attribute from a resource element without output-as=xml =#=#=#= Begin test: Get an existent attribute from a resource element with output-as=xml =#=#=#= unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity ocf =#=#=#= End test: Get an existent attribute from a resource element with output-as=xml - OK (0) =#=#=#= * Passed: crm_resource - Get an existent attribute from a resource element with output-as=xml =#=#=#= Begin test: Get an existent attribute from a resource element without output-as=xml =#=#=#= unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity ocf =#=#=#= Current cib after: Get an existent attribute from a resource element without output-as=xml =#=#=#= =#=#=#= End test: Get an existent attribute from a resource element without output-as=xml - OK (0) =#=#=#= * Passed: crm_resource - Get an existent attribute from a resource element without output-as=xml =#=#=#= Begin test: Set a non-existent attribute for a resource element with output-as=xml =#=#=#= unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure 
some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity =#=#=#= Current cib after: Set a non-existent attribute for a resource element with output-as=xml =#=#=#= =#=#=#= End test: Set a non-existent attribute for a resource element with output-as=xml - OK (0) =#=#=#= * Passed: crm_resource - Set a non-existent attribute for a resource element with output-as=xml =#=#=#= Begin test: Set an existent attribute for a resource element with output-as=xml =#=#=#= unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity =#=#=#= Current cib after: Set an existent attribute for a resource element with output-as=xml =#=#=#= =#=#=#= End test: Set an existent attribute for a resource element with output-as=xml - OK (0) =#=#=#= * Passed: crm_resource - Set an existent attribute for a resource element with output-as=xml =#=#=#= Begin test: Delete an existent attribute for a resource element with output-as=xml =#=#=#= unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity =#=#=#= Current cib after: Delete an existent attribute for a resource element with output-as=xml =#=#=#= =#=#=#= End test: Delete an existent attribute for a resource element with output-as=xml - OK (0) =#=#=#= * Passed: crm_resource - Delete an existent attribute for a resource element with output-as=xml =#=#=#= Begin test: Delete a non-existent attribute for a resource element with output-as=xml =#=#=#= unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity =#=#=#= Current cib after: Delete a non-existent attribute for a resource element with output-as=xml =#=#=#= =#=#=#= End test: Delete a non-existent attribute for a resource element with output-as=xml - OK (0) =#=#=#= * Passed: crm_resource - Delete a non-existent attribute for a resource element with output-as=xml =#=#=#= Begin test: Set a non-existent attribute for a resource element without output-as=xml =#=#=#= unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity Set attribute: name=description value=test_description =#=#=#= Current cib after: Set a non-existent attribute for a resource element without output-as=xml =#=#=#= =#=#=#= End test: Set a non-existent attribute for a resource element without output-as=xml - OK (0) =#=#=#= * Passed: crm_resource - Set a non-existent attribute for a resource element without output-as=xml =#=#=#= Begin test: Set an existent attribute for a resource element without output-as=xml =#=#=#= unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable 
STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity Set attribute: name=description value=test_description =#=#=#= Current cib after: Set an existent attribute for a resource element without output-as=xml =#=#=#= =#=#=#= End test: Set an existent attribute for a resource element without output-as=xml - OK (0) =#=#=#= * Passed: crm_resource - Set an existent attribute for a resource element without output-as=xml =#=#=#= Begin test: Delete an existent attribute for a resource element without output-as=xml =#=#=#= unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity Deleted attribute: description =#=#=#= Current cib after: Delete an existent attribute for a resource element without output-as=xml =#=#=#= =#=#=#= End test: Delete an existent attribute for a resource element without output-as=xml - OK (0) =#=#=#= * Passed: crm_resource - Delete an existent attribute for a resource element without output-as=xml =#=#=#= Begin test: Delete a non-existent attribute for a resource element without output-as=xml =#=#=#= unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity Deleted attribute: description =#=#=#= Current cib after: Delete a non-existent attribute for a resource element without output-as=xml =#=#=#= =#=#=#= End test: Delete a non-existent attribute for a resource element without output-as=xml - OK (0) =#=#=#= * Passed: crm_resource - Delete a non-existent attribute for a resource element without output-as=xml =#=#=#= Begin test: Create a resource attribute =#=#=#= unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity Set 'dummy' option: id=dummy-instance_attributes-delay set=dummy-instance_attributes name=delay value=10s =#=#=#= Current cib after: Create a resource attribute =#=#=#= =#=#=#= End test: Create a resource attribute - OK (0) =#=#=#= * Passed: crm_resource - Create a resource attribute =#=#=#= Begin test: List the configured resources =#=#=#= unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity Full List of Resources: * dummy (ocf:pacemaker:Dummy): Stopped =#=#=#= Current cib after: List the configured resources =#=#=#= =#=#=#= End test: List the configured resources - OK (0) =#=#=#= * Passed: crm_resource - List the configured resources =#=#=#= Begin test: List the configured resources in XML =#=#=#= unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data 
integrity =#=#=#= End test: List the configured resources in XML - OK (0) =#=#=#= * Passed: crm_resource - List the configured resources in XML =#=#=#= Begin test: Implicitly list the configured resources =#=#=#= unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity Full List of Resources: * dummy (ocf:pacemaker:Dummy): Stopped =#=#=#= End test: Implicitly list the configured resources - OK (0) =#=#=#= * Passed: crm_resource - Implicitly list the configured resources =#=#=#= Begin test: List IDs of instantiated resources =#=#=#= unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity dummy =#=#=#= End test: List IDs of instantiated resources - OK (0) =#=#=#= * Passed: crm_resource - List IDs of instantiated resources =#=#=#= Begin test: Show XML configuration of resource =#=#=#= unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity dummy (ocf:pacemaker:Dummy): Stopped Resource XML: =#=#=#= End test: Show XML configuration of resource - OK (0) =#=#=#= * Passed: crm_resource - Show XML configuration of resource =#=#=#= Begin test: Show XML configuration of resource, output as XML =#=#=#= unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity ]]> =#=#=#= End test: Show XML configuration of resource, output as XML - OK (0) =#=#=#= * Passed: crm_resource - Show XML configuration of resource, output as XML =#=#=#= Begin test: Require a destination when migrating a resource that is stopped =#=#=#= unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity crm_resource: Resource 'dummy' not moved: active in 0 locations. To prevent 'dummy' from running on a specific location, specify a node. 
=#=#=#= Current cib after: Require a destination when migrating a resource that is stopped =#=#=#= =#=#=#= End test: Require a destination when migrating a resource that is stopped - Incorrect usage (64) =#=#=#= * Passed: crm_resource - Require a destination when migrating a resource that is stopped =#=#=#= Begin test: Don't support migration to non-existent locations =#=#=#= unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity crm_resource: Node 'i.do.not.exist' not found Error performing operation: No such object =#=#=#= Current cib after: Don't support migration to non-existent locations =#=#=#= =#=#=#= End test: Don't support migration to non-existent locations - No such object (105) =#=#=#= * Passed: crm_resource - Don't support migration to non-existent locations =#=#=#= Begin test: Create a fencing resource =#=#=#= =#=#=#= Current cib after: Create a fencing resource =#=#=#= =#=#=#= End test: Create a fencing resource - OK (0) =#=#=#= * Passed: cibadmin - Create a fencing resource =#=#=#= Begin test: Bring resources online =#=#=#= Current cluster status: * Node List: * Online: [ node1 ] * Full List of Resources: * dummy (ocf:pacemaker:Dummy): Stopped * Fence (stonith:fence_true): Stopped Transition Summary: * Start dummy ( node1 ) * Start Fence ( node1 ) Executing Cluster Transition: * Resource action: dummy monitor on node1 * Resource action: Fence monitor on node1 * Resource action: dummy start on node1 * Resource action: Fence start on node1 Revised Cluster Status: * Node List: * Online: [ node1 ] * Full List of Resources: * dummy (ocf:pacemaker:Dummy): Started node1 * Fence (stonith:fence_true): Started node1 =#=#=#= Current cib after: Bring resources online =#=#=#= =#=#=#= End test: Bring resources online - OK (0) =#=#=#= * Passed: crm_simulate - Bring resources online =#=#=#= Begin test: Try to move a resource to its existing location =#=#=#= crm_resource: Error performing operation: Requested item already exists =#=#=#= Current cib after: Try to move a resource to its existing location =#=#=#= =#=#=#= End test: Try to move a resource to its existing location - Requested item already exists (108) =#=#=#= * Passed: crm_resource - Try to move a resource to its existing location =#=#=#= Begin test: Try to move a resource that doesn't exist =#=#=#= crm_resource: Resource 'xyz' not found Error performing operation: No such object =#=#=#= End test: Try to move a resource that doesn't exist - No such object (105) =#=#=#= * Passed: crm_resource - Try to move a resource that doesn't exist =#=#=#= Begin test: Move a resource from its existing location =#=#=#= WARNING: Creating rsc_location constraint 'cli-ban-dummy-on-node1' with a score of -INFINITY for resource dummy on node1. This will prevent dummy from running on node1 until the constraint is removed using the clear option or by editing the CIB with an appropriate tool. 
This will be the case even if node1 is the last node in the cluster =#=#=#= Current cib after: Move a resource from its existing location =#=#=#= =#=#=#= End test: Move a resource from its existing location - OK (0) =#=#=#= * Passed: crm_resource - Move a resource from its existing location =#=#=#= Begin test: Clear out constraints generated by --move =#=#=#= Removing constraint: cli-ban-dummy-on-node1 =#=#=#= Current cib after: Clear out constraints generated by --move =#=#=#= =#=#=#= End test: Clear out constraints generated by --move - OK (0) =#=#=#= * Passed: crm_resource - Clear out constraints generated by --move =#=#=#= Begin test: Default ticket granted state =#=#=#= false =#=#=#= Current cib after: Default ticket granted state =#=#=#= =#=#=#= End test: Default ticket granted state - OK (0) =#=#=#= * Passed: crm_ticket - Default ticket granted state =#=#=#= Begin test: Set ticket granted state =#=#=#= =#=#=#= Current cib after: Set ticket granted state =#=#=#= =#=#=#= End test: Set ticket granted state - OK (0) =#=#=#= * Passed: crm_ticket - Set ticket granted state =#=#=#= Begin test: List ticket IDs =#=#=#= ticketA =#=#=#= End test: List ticket IDs - OK (0) =#=#=#= * Passed: crm_ticket - List ticket IDs =#=#=#= Begin test: List ticket IDs, outputting in XML =#=#=#= =#=#=#= End test: List ticket IDs, outputting in XML - OK (0) =#=#=#= * Passed: crm_ticket - List ticket IDs, outputting in XML =#=#=#= Begin test: Query ticket state =#=#=#= State XML: =#=#=#= End test: Query ticket state - OK (0) =#=#=#= * Passed: crm_ticket - Query ticket state =#=#=#= Begin test: Query ticket state, outputting as xml =#=#=#= =#=#=#= End test: Query ticket state, outputting as xml - OK (0) =#=#=#= * Passed: crm_ticket - Query ticket state, outputting as xml =#=#=#= Begin test: Query ticket granted state =#=#=#= false =#=#=#= Current cib after: Query ticket granted state =#=#=#= =#=#=#= End test: Query ticket granted state - OK (0) =#=#=#= * Passed: crm_ticket - Query ticket granted state =#=#=#= Begin test: Query ticket granted state, outputting as xml =#=#=#= =#=#=#= End test: Query ticket granted state, outputting as xml - OK (0) =#=#=#= * Passed: crm_ticket - Query ticket granted state, outputting as xml =#=#=#= Begin test: Delete ticket granted state =#=#=#= =#=#=#= Current cib after: Delete ticket granted state =#=#=#= =#=#=#= End test: Delete ticket granted state - OK (0) =#=#=#= * Passed: crm_ticket - Delete ticket granted state =#=#=#= Begin test: Make a ticket standby =#=#=#= =#=#=#= Current cib after: Make a ticket standby =#=#=#= =#=#=#= End test: Make a ticket standby - OK (0) =#=#=#= * Passed: crm_ticket - Make a ticket standby =#=#=#= Begin test: Query ticket standby state =#=#=#= true =#=#=#= Current cib after: Query ticket standby state =#=#=#= =#=#=#= End test: Query ticket standby state - OK (0) =#=#=#= * Passed: crm_ticket - Query ticket standby state =#=#=#= Begin test: Activate a ticket =#=#=#= =#=#=#= Current cib after: Activate a ticket =#=#=#= =#=#=#= End test: Activate a ticket - OK (0) =#=#=#= * Passed: crm_ticket - Activate a ticket =#=#=#= Begin test: List ticket details =#=#=#= ticketA revoked (standby=false) =#=#=#= End test: List ticket details - OK (0) =#=#=#= * Passed: crm_ticket - List ticket details =#=#=#= Begin test: List ticket details, outputting as XML =#=#=#= =#=#=#= End test: List ticket details, outputting as XML - OK (0) =#=#=#= * Passed: crm_ticket - List ticket details, outputting as XML =#=#=#= Begin test: Add a second ticket =#=#=#= false =#=#=#= 
Current cib after: Add a second ticket =#=#=#= =#=#=#= End test: Add a second ticket - OK (0) =#=#=#= * Passed: crm_ticket - Add a second ticket =#=#=#= Begin test: Set second ticket granted state =#=#=#= =#=#=#= Current cib after: Set second ticket granted state =#=#=#= =#=#=#= End test: Set second ticket granted state - OK (0) =#=#=#= * Passed: crm_ticket - Set second ticket granted state =#=#=#= Begin test: List tickets =#=#=#= ticketA revoked ticketB revoked =#=#=#= End test: List tickets - OK (0) =#=#=#= * Passed: crm_ticket - List tickets =#=#=#= Begin test: List tickets, outputting as XML =#=#=#= =#=#=#= End test: List tickets, outputting as XML - OK (0) =#=#=#= * Passed: crm_ticket - List tickets, outputting as XML =#=#=#= Begin test: Delete second ticket =#=#=#= =#=#=#= Current cib after: Delete second ticket =#=#=#= =#=#=#= End test: Delete second ticket - OK (0) =#=#=#= * Passed: cibadmin - Delete second ticket =#=#=#= Begin test: Delete ticket standby state =#=#=#= =#=#=#= Current cib after: Delete ticket standby state =#=#=#= =#=#=#= End test: Delete ticket standby state - OK (0) =#=#=#= * Passed: crm_ticket - Delete ticket standby state =#=#=#= Begin test: Delete ticket standby state =#=#=#= =#=#=#= Current cib after: Delete ticket standby state =#=#=#= =#=#=#= End test: Delete ticket standby state - OK (0) =#=#=#= * Passed: cibadmin - Delete ticket standby state =#=#=#= Begin test: Query ticket constraints =#=#=#= Constraints XML: =#=#=#= End test: Query ticket constraints - OK (0) =#=#=#= * Passed: crm_ticket - Query ticket constraints =#=#=#= Begin test: Query ticket constraints, outputting as xml =#=#=#= =#=#=#= End test: Query ticket constraints, outputting as xml - OK (0) =#=#=#= * Passed: crm_ticket - Query ticket constraints, outputting as xml =#=#=#= Begin test: Delete ticket constraint =#=#=#= =#=#=#= Current cib after: Delete ticket constraint =#=#=#= =#=#=#= End test: Delete ticket constraint - OK (0) =#=#=#= * Passed: cibadmin - Delete ticket constraint =#=#=#= Begin test: Ban a resource on unknown node =#=#=#= crm_resource: Node 'host1' not found Error performing operation: No such object =#=#=#= Current cib after: Ban a resource on unknown node =#=#=#= =#=#=#= End test: Ban a resource on unknown node - No such object (105) =#=#=#= * Passed: crm_resource - Ban a resource on unknown node =#=#=#= Begin test: Create two more nodes and bring them online =#=#=#= Current cluster status: * Node List: * Online: [ node1 ] * Full List of Resources: * dummy (ocf:pacemaker:Dummy): Started node1 * Fence (stonith:fence_true): Started node1 Performing Requested Modifications: * Bringing node node2 online * Bringing node node3 online Transition Summary: * Move Fence ( node1 -> node2 ) Executing Cluster Transition: * Resource action: dummy monitor on node3 * Resource action: dummy monitor on node2 * Resource action: Fence stop on node1 * Resource action: Fence monitor on node3 * Resource action: Fence monitor on node2 * Resource action: Fence start on node2 Revised Cluster Status: * Node List: * Online: [ node1 node2 node3 ] * Full List of Resources: * dummy (ocf:pacemaker:Dummy): Started node1 * Fence (stonith:fence_true): Started node2 =#=#=#= Current cib after: Create two more nodes and bring them online =#=#=#= =#=#=#= End test: Create two more nodes and bring them online - OK (0) =#=#=#= * Passed: crm_simulate - Create two more nodes and bring them online =#=#=#= Begin test: Ban dummy from node1 =#=#=#= WARNING: Creating rsc_location constraint 'cli-ban-dummy-on-node1' with 
a score of -INFINITY for resource dummy on node1. This will prevent dummy from running on node1 until the constraint is removed using the clear option or by editing the CIB with an appropriate tool. This will be the case even if node1 is the last node in the cluster =#=#=#= Current cib after: Ban dummy from node1 =#=#=#= =#=#=#= End test: Ban dummy from node1 - OK (0) =#=#=#= * Passed: crm_resource - Ban dummy from node1 =#=#=#= Begin test: Show where a resource is running =#=#=#= resource dummy is running on: node1 =#=#=#= End test: Show where a resource is running - OK (0) =#=#=#= * Passed: crm_resource - Show where a resource is running =#=#=#= Begin test: Show constraints on a resource =#=#=#= Locations: * Node node1 (score=-INFINITY, id=cli-ban-dummy-on-node1, rsc=dummy) =#=#=#= End test: Show constraints on a resource - OK (0) =#=#=#= * Passed: crm_resource - Show constraints on a resource =#=#=#= Begin test: Ban dummy from node2 =#=#=#= =#=#=#= Current cib after: Ban dummy from node2 =#=#=#= =#=#=#= End test: Ban dummy from node2 - OK (0) =#=#=#= * Passed: crm_resource - Ban dummy from node2 =#=#=#= Begin test: Relocate resources due to ban =#=#=#= Current cluster status: * Node List: * Online: [ node1 node2 node3 ] * Full List of Resources: * dummy (ocf:pacemaker:Dummy): Started node1 * Fence (stonith:fence_true): Started node2 Transition Summary: * Move dummy ( node1 -> node3 ) Executing Cluster Transition: * Resource action: dummy stop on node1 * Resource action: dummy start on node3 Revised Cluster Status: * Node List: * Online: [ node1 node2 node3 ] * Full List of Resources: * dummy (ocf:pacemaker:Dummy): Started node3 * Fence (stonith:fence_true): Started node2 =#=#=#= Current cib after: Relocate resources due to ban =#=#=#= =#=#=#= End test: Relocate resources due to ban - OK (0) =#=#=#= * Passed: crm_simulate - Relocate resources due to ban =#=#=#= Begin test: Move dummy to node1 =#=#=#= =#=#=#= Current cib after: Move dummy to node1 =#=#=#= =#=#=#= End test: Move dummy to node1 - OK (0) =#=#=#= * Passed: crm_resource - Move dummy to node1 =#=#=#= Begin test: Clear implicit constraints for dummy on node2 =#=#=#= Removing constraint: cli-ban-dummy-on-node2 =#=#=#= Current cib after: Clear implicit constraints for dummy on node2 =#=#=#= =#=#=#= End test: Clear implicit constraints for dummy on node2 - OK (0) =#=#=#= * Passed: crm_resource - Clear implicit constraints for dummy on node2 =#=#=#= Begin test: Drop the status section =#=#=#= =#=#=#= End test: Drop the status section - OK (0) =#=#=#= * Passed: cibadmin - Drop the status section =#=#=#= Begin test: Create a clone =#=#=#= =#=#=#= End test: Create a clone - OK (0) =#=#=#= * Passed: cibadmin - Create a clone =#=#=#= Begin test: Create a resource meta attribute =#=#=#= Performing update of 'is-managed' on 'test-clone', the parent of 'test-primitive' Set 'test-clone' option: id=test-clone-meta_attributes-is-managed set=test-clone-meta_attributes name=is-managed value=false =#=#=#= Current cib after: Create a resource meta attribute =#=#=#= =#=#=#= End test: Create a resource meta attribute - OK (0) =#=#=#= * Passed: crm_resource - Create a resource meta attribute =#=#=#= Begin test: Create a resource meta attribute in the primitive =#=#=#= Set 'test-primitive' option: id=test-primitive-meta_attributes-is-managed set=test-primitive-meta_attributes name=is-managed value=false =#=#=#= Current cib after: Create a resource meta attribute in the primitive =#=#=#= =#=#=#= End test: Create a resource meta attribute in the 
primitive - OK (0) =#=#=#= * Passed: crm_resource - Create a resource meta attribute in the primitive =#=#=#= Begin test: Update resource meta attribute with duplicates =#=#=#= Multiple attributes match name=is-managed Value: false (id=test-primitive-meta_attributes-is-managed) Value: false (id=test-clone-meta_attributes-is-managed) A value for 'is-managed' already exists in child 'test-primitive', performing update on that instead of 'test-clone' Set 'test-primitive' option: id=test-primitive-meta_attributes-is-managed name=is-managed value=true =#=#=#= Current cib after: Update resource meta attribute with duplicates =#=#=#= =#=#=#= End test: Update resource meta attribute with duplicates - OK (0) =#=#=#= * Passed: crm_resource - Update resource meta attribute with duplicates =#=#=#= Begin test: Update resource meta attribute with duplicates (force clone) =#=#=#= Set 'test-clone' option: id=test-clone-meta_attributes-is-managed name=is-managed value=true =#=#=#= Current cib after: Update resource meta attribute with duplicates (force clone) =#=#=#= =#=#=#= End test: Update resource meta attribute with duplicates (force clone) - OK (0) =#=#=#= * Passed: crm_resource - Update resource meta attribute with duplicates (force clone) =#=#=#= Begin test: Update child resource meta attribute with duplicates =#=#=#= Multiple attributes match name=is-managed Value: true (id=test-primitive-meta_attributes-is-managed) Value: true (id=test-clone-meta_attributes-is-managed) Set 'test-primitive' option: id=test-primitive-meta_attributes-is-managed name=is-managed value=false =#=#=#= Current cib after: Update child resource meta attribute with duplicates =#=#=#= =#=#=#= End test: Update child resource meta attribute with duplicates - OK (0) =#=#=#= * Passed: crm_resource - Update child resource meta attribute with duplicates =#=#=#= Begin test: Delete resource meta attribute with duplicates =#=#=#= Multiple attributes match name=is-managed Value: false (id=test-primitive-meta_attributes-is-managed) Value: true (id=test-clone-meta_attributes-is-managed) A value for 'is-managed' already exists in child 'test-primitive', performing delete on that instead of 'test-clone' Deleted 'test-primitive' option: id=test-primitive-meta_attributes-is-managed name=is-managed =#=#=#= Current cib after: Delete resource meta attribute with duplicates =#=#=#= =#=#=#= End test: Delete resource meta attribute with duplicates - OK (0) =#=#=#= * Passed: crm_resource - Delete resource meta attribute with duplicates =#=#=#= Begin test: Delete resource meta attribute in parent =#=#=#= Performing delete of 'is-managed' on 'test-clone', the parent of 'test-primitive' Deleted 'test-clone' option: id=test-clone-meta_attributes-is-managed name=is-managed =#=#=#= Current cib after: Delete resource meta attribute in parent =#=#=#= =#=#=#= End test: Delete resource meta attribute in parent - OK (0) =#=#=#= * Passed: crm_resource - Delete resource meta attribute in parent =#=#=#= Begin test: Create a resource meta attribute in the primitive =#=#=#= Set 'test-primitive' option: id=test-primitive-meta_attributes-is-managed set=test-primitive-meta_attributes name=is-managed value=false =#=#=#= Current cib after: Create a resource meta attribute in the primitive =#=#=#= =#=#=#= End test: Create a resource meta attribute in the primitive - OK (0) =#=#=#= * Passed: crm_resource - Create a resource meta attribute in the primitive =#=#=#= Begin test: Update existing resource meta attribute =#=#=#= A value for 'is-managed' already exists in child 
'test-primitive', performing update on that instead of 'test-clone' Set 'test-primitive' option: id=test-primitive-meta_attributes-is-managed name=is-managed value=true =#=#=#= Current cib after: Update existing resource meta attribute =#=#=#= =#=#=#= End test: Update existing resource meta attribute - OK (0) =#=#=#= * Passed: crm_resource - Update existing resource meta attribute =#=#=#= Begin test: Create a resource meta attribute in the parent =#=#=#= Set 'test-clone' option: id=test-clone-meta_attributes-is-managed set=test-clone-meta_attributes name=is-managed value=true =#=#=#= Current cib after: Create a resource meta attribute in the parent =#=#=#= =#=#=#= End test: Create a resource meta attribute in the parent - OK (0) =#=#=#= * Passed: crm_resource - Create a resource meta attribute in the parent =#=#=#= Begin test: Copy resources =#=#=#= =#=#=#= End test: Copy resources - OK (0) =#=#=#= * Passed: cibadmin - Copy resources =#=#=#= Begin test: Delete resource parent meta attribute (force) =#=#=#= Deleted 'test-clone' option: id=test-clone-meta_attributes-is-managed name=is-managed =#=#=#= Current cib after: Delete resource parent meta attribute (force) =#=#=#= =#=#=#= End test: Delete resource parent meta attribute (force) - OK (0) =#=#=#= * Passed: crm_resource - Delete resource parent meta attribute (force) =#=#=#= Begin test: Restore duplicates =#=#=#= =#=#=#= Current cib after: Restore duplicates =#=#=#= =#=#=#= End test: Restore duplicates - OK (0) =#=#=#= * Passed: cibadmin - Restore duplicates =#=#=#= Begin test: Delete resource child meta attribute =#=#=#= Multiple attributes match name=is-managed Value: true (id=test-primitive-meta_attributes-is-managed) Value: true (id=test-clone-meta_attributes-is-managed) Deleted 'test-primitive' option: id=test-primitive-meta_attributes-is-managed name=is-managed =#=#=#= Current cib after: Delete resource child meta attribute =#=#=#= =#=#=#= End test: Delete resource child meta attribute - OK (0) =#=#=#= * Passed: crm_resource - Delete resource child meta attribute =#=#=#= Begin test: Create the dummy-group resource group =#=#=#= =#=#=#= Current cib after: Create the dummy-group resource group =#=#=#= =#=#=#= End test: Create the dummy-group resource group - OK (0) =#=#=#= * Passed: cibadmin - Create the dummy-group resource group =#=#=#= Begin test: Create a resource meta attribute in dummy1 =#=#=#= Set 'dummy1' option: id=dummy1-meta_attributes-is-managed set=dummy1-meta_attributes name=is-managed value=true =#=#=#= Current cib after: Create a resource meta attribute in dummy1 =#=#=#= =#=#=#= End test: Create a resource meta attribute in dummy1 - OK (0) =#=#=#= * Passed: crm_resource - Create a resource meta attribute in dummy1 =#=#=#= Begin test: Create a resource meta attribute in dummy-group =#=#=#= Set 'dummy1' option: id=dummy1-meta_attributes-is-managed name=is-managed value=false Set 'dummy-group' option: id=dummy-group-meta_attributes-is-managed set=dummy-group-meta_attributes name=is-managed value=false =#=#=#= Current cib after: Create a resource meta attribute in dummy-group =#=#=#= =#=#=#= End test: Create a resource meta attribute in dummy-group - OK (0) =#=#=#= * Passed: crm_resource - Create a resource meta attribute in dummy-group =#=#=#= Begin test: Delete the dummy-group resource group =#=#=#= =#=#=#= Current cib after: Delete the dummy-group resource group =#=#=#= =#=#=#= End test: Delete the dummy-group resource group - OK (0) =#=#=#= * Passed: cibadmin - Delete the dummy-group resource group =#=#=#= Begin test: 
Specify a lifetime when moving a resource =#=#=#= Migration will take effect until: =#=#=#= Current cib after: Specify a lifetime when moving a resource =#=#=#= =#=#=#= End test: Specify a lifetime when moving a resource - OK (0) =#=#=#= * Passed: crm_resource - Specify a lifetime when moving a resource =#=#=#= Begin test: Try to move a resource previously moved with a lifetime =#=#=#= =#=#=#= Current cib after: Try to move a resource previously moved with a lifetime =#=#=#= =#=#=#= End test: Try to move a resource previously moved with a lifetime - OK (0) =#=#=#= * Passed: crm_resource - Try to move a resource previously moved with a lifetime =#=#=#= Begin test: Ban dummy from node1 for a short time =#=#=#= Migration will take effect until: WARNING: Creating rsc_location constraint 'cli-ban-dummy-on-node1' with a score of -INFINITY for resource dummy on node1. This will prevent dummy from running on node1 until the constraint is removed using the clear option or by editing the CIB with an appropriate tool. This will be the case even if node1 is the last node in the cluster =#=#=#= Current cib after: Ban dummy from node1 for a short time =#=#=#= =#=#=#= End test: Ban dummy from node1 for a short time - OK (0) =#=#=#= * Passed: crm_resource - Ban dummy from node1 for a short time =#=#=#= Begin test: Remove expired constraints =#=#=#= Removing constraint: cli-ban-dummy-on-node1 =#=#=#= Current cib after: Remove expired constraints =#=#=#= =#=#=#= End test: Remove expired constraints - OK (0) =#=#=#= * Passed: crm_resource - Remove expired constraints =#=#=#= Begin test: Clear all implicit constraints for dummy =#=#=#= Removing constraint: cli-prefer-dummy =#=#=#= Current cib after: Clear all implicit constraints for dummy =#=#=#= =#=#=#= End test: Clear all implicit constraints for dummy - OK (0) =#=#=#= * Passed: crm_resource - Clear all implicit constraints for dummy =#=#=#= Begin test: Set a node health strategy =#=#=#= =#=#=#= Current cib after: Set a node health strategy =#=#=#= =#=#=#= End test: Set a node health strategy - OK (0) =#=#=#= * Passed: crm_attribute - Set a node health strategy =#=#=#= Begin test: Set a node health attribute =#=#=#= =#=#=#= Current cib after: Set a node health attribute =#=#=#= =#=#=#= End test: Set a node health attribute - OK (0) =#=#=#= * Passed: crm_attribute - Set a node health attribute =#=#=#= Begin test: Show why a resource is not running on an unhealthy node =#=#=#= =#=#=#= End test: Show why a resource is not running on an unhealthy node - OK (0) =#=#=#= * Passed: crm_resource - Show why a resource is not running on an unhealthy node =#=#=#= Begin test: Delete a resource =#=#=#= =#=#=#= Current cib after: Delete a resource =#=#=#= =#=#=#= End test: Delete a resource - OK (0) =#=#=#= * Passed: crm_resource - Delete a resource =#=#=#= Begin test: Create an XML patchset =#=#=#= =#=#=#= End test: Create an XML patchset - Error occurred (1) =#=#=#= * Passed: crm_diff - Create an XML patchset =#=#=#= Begin test: Check locations and constraints for prim1 =#=#=#= =#=#=#= End test: Check locations and constraints for prim1 - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for prim1 =#=#=#= Begin test: Recursively check locations and constraints for prim1 =#=#=#= =#=#=#= End test: Recursively check locations and constraints for prim1 - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints for prim1 =#=#=#= Begin test: Check locations and constraints for prim1 in XML =#=#=#= =#=#=#= End test: Check 
locations and constraints for prim1 in XML - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for prim1 in XML =#=#=#= Begin test: Recursively check locations and constraints for prim1 in XML =#=#=#= =#=#=#= End test: Recursively check locations and constraints for prim1 in XML - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints for prim1 in XML =#=#=#= Begin test: Check locations and constraints for prim2 =#=#=#= Locations: * Node cluster01 (score=INFINITY, id=prim2-on-cluster1, rsc=prim2) Resources prim2 is colocated with: * prim3 (score=INFINITY, id=colocation-prim2-prim3-INFINITY) =#=#=#= End test: Check locations and constraints for prim2 - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for prim2 =#=#=#= Begin test: Recursively check locations and constraints for prim2 =#=#=#= Locations: * Node cluster01 (score=INFINITY, id=prim2-on-cluster1, rsc=prim2) Resources prim2 is colocated with: * prim3 (score=INFINITY, id=colocation-prim2-prim3-INFINITY) * Resources prim3 is colocated with: * prim4 (score=INFINITY, id=colocation-prim3-prim4-INFINITY) * Locations: * Node cluster02 (score=INFINITY, id=prim4-on-cluster2, rsc=prim4) * Resources prim4 is colocated with: * prim5 (score=INFINITY, id=colocation-prim4-prim5-INFINITY) =#=#=#= End test: Recursively check locations and constraints for prim2 - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints for prim2 =#=#=#= Begin test: Check locations and constraints for prim2 in XML =#=#=#= =#=#=#= End test: Check locations and constraints for prim2 in XML - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for prim2 in XML =#=#=#= Begin test: Recursively check locations and constraints for prim2 in XML =#=#=#= =#=#=#= End test: Recursively check locations and constraints for prim2 in XML - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints for prim2 in XML =#=#=#= Begin test: Check locations and constraints for prim3 =#=#=#= Resources colocated with prim3: * prim2 (score=INFINITY, id=colocation-prim2-prim3-INFINITY) * Locations: * Node cluster01 (score=INFINITY, id=prim2-on-cluster1, rsc=prim2) Resources prim3 is colocated with: * prim4 (score=INFINITY, id=colocation-prim3-prim4-INFINITY) * Locations: * Node cluster02 (score=INFINITY, id=prim4-on-cluster2, rsc=prim4) =#=#=#= End test: Check locations and constraints for prim3 - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for prim3 =#=#=#= Begin test: Recursively check locations and constraints for prim3 =#=#=#= Resources colocated with prim3: * prim2 (score=INFINITY, id=colocation-prim2-prim3-INFINITY) * Locations: * Node cluster01 (score=INFINITY, id=prim2-on-cluster1, rsc=prim2) Resources prim3 is colocated with: * prim4 (score=INFINITY, id=colocation-prim3-prim4-INFINITY) * Locations: * Node cluster02 (score=INFINITY, id=prim4-on-cluster2, rsc=prim4) * Resources prim4 is colocated with: * prim5 (score=INFINITY, id=colocation-prim4-prim5-INFINITY) =#=#=#= End test: Recursively check locations and constraints for prim3 - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints for prim3 =#=#=#= Begin test: Check locations and constraints for prim3 in XML =#=#=#= =#=#=#= End test: Check locations and constraints for prim3 in XML - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for prim3 in XML =#=#=#= Begin test: Recursively check locations and 
constraints for prim3 in XML =#=#=#= =#=#=#= End test: Recursively check locations and constraints for prim3 in XML - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints for prim3 in XML =#=#=#= Begin test: Check locations and constraints for prim4 =#=#=#= Locations: * Node cluster02 (score=INFINITY, id=prim4-on-cluster2, rsc=prim4) Resources colocated with prim4: * prim10 (score=INFINITY, id=colocation-prim10-prim4-INFINITY) * prim3 (score=INFINITY, id=colocation-prim3-prim4-INFINITY) Resources prim4 is colocated with: * prim5 (score=INFINITY, id=colocation-prim4-prim5-INFINITY) =#=#=#= End test: Check locations and constraints for prim4 - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for prim4 =#=#=#= Begin test: Recursively check locations and constraints for prim4 =#=#=#= Locations: * Node cluster02 (score=INFINITY, id=prim4-on-cluster2, rsc=prim4) Resources colocated with prim4: * prim10 (score=INFINITY, id=colocation-prim10-prim4-INFINITY) * prim3 (score=INFINITY, id=colocation-prim3-prim4-INFINITY) * Resources colocated with prim3: * prim2 (score=INFINITY, id=colocation-prim2-prim3-INFINITY) * Locations: * Node cluster01 (score=INFINITY, id=prim2-on-cluster1, rsc=prim2) Resources prim4 is colocated with: * prim5 (score=INFINITY, id=colocation-prim4-prim5-INFINITY) =#=#=#= End test: Recursively check locations and constraints for prim4 - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints for prim4 =#=#=#= Begin test: Check locations and constraints for prim4 in XML =#=#=#= =#=#=#= End test: Check locations and constraints for prim4 in XML - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for prim4 in XML =#=#=#= Begin test: Recursively check locations and constraints for prim4 in XML =#=#=#= =#=#=#= End test: Recursively check locations and constraints for prim4 in XML - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints for prim4 in XML =#=#=#= Begin test: Check locations and constraints for prim5 =#=#=#= Resources colocated with prim5: * prim4 (score=INFINITY, id=colocation-prim4-prim5-INFINITY) * Locations: * Node cluster02 (score=INFINITY, id=prim4-on-cluster2, rsc=prim4) =#=#=#= End test: Check locations and constraints for prim5 - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for prim5 =#=#=#= Begin test: Recursively check locations and constraints for prim5 =#=#=#= Resources colocated with prim5: * prim4 (score=INFINITY, id=colocation-prim4-prim5-INFINITY) * Locations: * Node cluster02 (score=INFINITY, id=prim4-on-cluster2, rsc=prim4) * Resources colocated with prim4: * prim10 (score=INFINITY, id=colocation-prim10-prim4-INFINITY) * prim3 (score=INFINITY, id=colocation-prim3-prim4-INFINITY) * Resources colocated with prim3: * prim2 (score=INFINITY, id=colocation-prim2-prim3-INFINITY) * Locations: * Node cluster01 (score=INFINITY, id=prim2-on-cluster1, rsc=prim2) =#=#=#= End test: Recursively check locations and constraints for prim5 - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints for prim5 =#=#=#= Begin test: Check locations and constraints for prim5 in XML =#=#=#= =#=#=#= End test: Check locations and constraints for prim5 in XML - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for prim5 in XML =#=#=#= Begin test: Recursively check locations and constraints for prim5 in XML =#=#=#= =#=#=#= End test: Recursively check locations and constraints 
for prim5 in XML - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints for prim5 in XML =#=#=#= Begin test: Check locations and constraints for prim6 =#=#=#= Locations: * Node cluster02 (score=-INFINITY, id=prim6-not-on-cluster2, rsc=prim6) =#=#=#= End test: Check locations and constraints for prim6 - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for prim6 =#=#=#= Begin test: Recursively check locations and constraints for prim6 =#=#=#= Locations: * Node cluster02 (score=-INFINITY, id=prim6-not-on-cluster2, rsc=prim6) =#=#=#= End test: Recursively check locations and constraints for prim6 - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints for prim6 =#=#=#= Begin test: Check locations and constraints for prim6 in XML =#=#=#= =#=#=#= End test: Check locations and constraints for prim6 in XML - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for prim6 in XML =#=#=#= Begin test: Recursively check locations and constraints for prim6 in XML =#=#=#= =#=#=#= End test: Recursively check locations and constraints for prim6 in XML - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints for prim6 in XML =#=#=#= Begin test: Check locations and constraints for prim7 =#=#=#= Resources prim7 is colocated with: * group (score=INFINITY, id=colocation-prim7-group-INFINITY) =#=#=#= End test: Check locations and constraints for prim7 - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for prim7 =#=#=#= Begin test: Recursively check locations and constraints for prim7 =#=#=#= Resources prim7 is colocated with: * group (score=INFINITY, id=colocation-prim7-group-INFINITY) =#=#=#= End test: Recursively check locations and constraints for prim7 - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints for prim7 =#=#=#= Begin test: Check locations and constraints for prim7 in XML =#=#=#= =#=#=#= End test: Check locations and constraints for prim7 in XML - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for prim7 in XML =#=#=#= Begin test: Recursively check locations and constraints for prim7 in XML =#=#=#= =#=#=#= End test: Recursively check locations and constraints for prim7 in XML - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints for prim7 in XML =#=#=#= Begin test: Check locations and constraints for prim8 =#=#=#= Resources prim8 is colocated with: * gr2 (score=INFINITY, id=colocation-prim8-gr2-INFINITY) =#=#=#= End test: Check locations and constraints for prim8 - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for prim8 =#=#=#= Begin test: Recursively check locations and constraints for prim8 =#=#=#= Resources prim8 is colocated with: * gr2 (score=INFINITY, id=colocation-prim8-gr2-INFINITY) =#=#=#= End test: Recursively check locations and constraints for prim8 - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints for prim8 =#=#=#= Begin test: Check locations and constraints for prim8 in XML =#=#=#= =#=#=#= End test: Check locations and constraints for prim8 in XML - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for prim8 in XML =#=#=#= Begin test: Recursively check locations and constraints for prim8 in XML =#=#=#= =#=#=#= End test: Recursively check locations and constraints for prim8 in XML - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints 
for prim8 in XML =#=#=#= Begin test: Check locations and constraints for prim9 =#=#=#= Resources prim9 is colocated with: * clone (score=INFINITY, id=colocation-prim9-clone-INFINITY) =#=#=#= End test: Check locations and constraints for prim9 - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for prim9 =#=#=#= Begin test: Recursively check locations and constraints for prim9 =#=#=#= Resources prim9 is colocated with: * clone (score=INFINITY, id=colocation-prim9-clone-INFINITY) =#=#=#= End test: Recursively check locations and constraints for prim9 - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints for prim9 =#=#=#= Begin test: Check locations and constraints for prim9 in XML =#=#=#= =#=#=#= End test: Check locations and constraints for prim9 in XML - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for prim9 in XML =#=#=#= Begin test: Recursively check locations and constraints for prim9 in XML =#=#=#= =#=#=#= End test: Recursively check locations and constraints for prim9 in XML - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints for prim9 in XML =#=#=#= Begin test: Check locations and constraints for prim10 =#=#=#= Resources prim10 is colocated with: * prim4 (score=INFINITY, id=colocation-prim10-prim4-INFINITY) * Locations: * Node cluster02 (score=INFINITY, id=prim4-on-cluster2, rsc=prim4) =#=#=#= End test: Check locations and constraints for prim10 - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for prim10 =#=#=#= Begin test: Recursively check locations and constraints for prim10 =#=#=#= Resources prim10 is colocated with: * prim4 (score=INFINITY, id=colocation-prim10-prim4-INFINITY) * Locations: * Node cluster02 (score=INFINITY, id=prim4-on-cluster2, rsc=prim4) * Resources prim4 is colocated with: * prim5 (score=INFINITY, id=colocation-prim4-prim5-INFINITY) =#=#=#= End test: Recursively check locations and constraints for prim10 - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints for prim10 =#=#=#= Begin test: Check locations and constraints for prim10 in XML =#=#=#= =#=#=#= End test: Check locations and constraints for prim10 in XML - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for prim10 in XML =#=#=#= Begin test: Recursively check locations and constraints for prim10 in XML =#=#=#= =#=#=#= End test: Recursively check locations and constraints for prim10 in XML - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints for prim10 in XML =#=#=#= Begin test: Check locations and constraints for prim11 =#=#=#= Resources colocated with prim11: * prim13 (score=INFINITY, id=colocation-prim13-prim11-INFINITY) Resources prim11 is colocated with: * prim12 (score=INFINITY, id=colocation-prim11-prim12-INFINITY) =#=#=#= End test: Check locations and constraints for prim11 - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for prim11 =#=#=#= Begin test: Recursively check locations and constraints for prim11 =#=#=#= Resources colocated with prim11: * prim13 (score=INFINITY, id=colocation-prim13-prim11-INFINITY) * Resources colocated with prim13: * prim12 (score=INFINITY, id=colocation-prim12-prim13-INFINITY) * Resources colocated with prim12: * prim11 (id=colocation-prim11-prim12-INFINITY - loop) Resources prim11 is colocated with: * prim12 (score=INFINITY, id=colocation-prim11-prim12-INFINITY) * Resources prim12 is colocated with: * prim13 
(score=INFINITY, id=colocation-prim12-prim13-INFINITY) * Resources prim13 is colocated with: * prim11 (id=colocation-prim13-prim11-INFINITY - loop) =#=#=#= End test: Recursively check locations and constraints for prim11 - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints for prim11 =#=#=#= Begin test: Check locations and constraints for prim11 in XML =#=#=#= =#=#=#= End test: Check locations and constraints for prim11 in XML - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for prim11 in XML =#=#=#= Begin test: Recursively check locations and constraints for prim11 in XML =#=#=#= =#=#=#= End test: Recursively check locations and constraints for prim11 in XML - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints for prim11 in XML =#=#=#= Begin test: Check locations and constraints for prim12 =#=#=#= Resources colocated with prim12: * prim11 (score=INFINITY, id=colocation-prim11-prim12-INFINITY) Resources prim12 is colocated with: * prim13 (score=INFINITY, id=colocation-prim12-prim13-INFINITY) =#=#=#= End test: Check locations and constraints for prim12 - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for prim12 =#=#=#= Begin test: Recursively check locations and constraints for prim12 =#=#=#= Resources colocated with prim12: * prim11 (score=INFINITY, id=colocation-prim11-prim12-INFINITY) * Resources colocated with prim11: * prim13 (score=INFINITY, id=colocation-prim13-prim11-INFINITY) * Resources colocated with prim13: * prim12 (id=colocation-prim12-prim13-INFINITY - loop) Resources prim12 is colocated with: * prim13 (score=INFINITY, id=colocation-prim12-prim13-INFINITY) * Resources prim13 is colocated with: * prim11 (score=INFINITY, id=colocation-prim13-prim11-INFINITY) * Resources prim11 is colocated with: * prim12 (id=colocation-prim11-prim12-INFINITY - loop) =#=#=#= End test: Recursively check locations and constraints for prim12 - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints for prim12 =#=#=#= Begin test: Check locations and constraints for prim12 in XML =#=#=#= =#=#=#= End test: Check locations and constraints for prim12 in XML - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for prim12 in XML =#=#=#= Begin test: Recursively check locations and constraints for prim12 in XML =#=#=#= =#=#=#= End test: Recursively check locations and constraints for prim12 in XML - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints for prim12 in XML =#=#=#= Begin test: Check locations and constraints for prim13 =#=#=#= Resources colocated with prim13: * prim12 (score=INFINITY, id=colocation-prim12-prim13-INFINITY) Resources prim13 is colocated with: * prim11 (score=INFINITY, id=colocation-prim13-prim11-INFINITY) =#=#=#= End test: Check locations and constraints for prim13 - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for prim13 =#=#=#= Begin test: Recursively check locations and constraints for prim13 =#=#=#= Resources colocated with prim13: * prim12 (score=INFINITY, id=colocation-prim12-prim13-INFINITY) * Resources colocated with prim12: * prim11 (score=INFINITY, id=colocation-prim11-prim12-INFINITY) * Resources colocated with prim11: * prim13 (id=colocation-prim13-prim11-INFINITY - loop) Resources prim13 is colocated with: * prim11 (score=INFINITY, id=colocation-prim13-prim11-INFINITY) * Resources prim11 is colocated with: * prim12 (score=INFINITY, 
id=colocation-prim11-prim12-INFINITY) * Resources prim12 is colocated with: * prim13 (id=colocation-prim12-prim13-INFINITY - loop) =#=#=#= End test: Recursively check locations and constraints for prim13 - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints for prim13 =#=#=#= Begin test: Check locations and constraints for prim13 in XML =#=#=#= =#=#=#= End test: Check locations and constraints for prim13 in XML - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for prim13 in XML =#=#=#= Begin test: Recursively check locations and constraints for prim13 in XML =#=#=#= =#=#=#= End test: Recursively check locations and constraints for prim13 in XML - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints for prim13 in XML =#=#=#= Begin test: Check locations and constraints for group =#=#=#= Resources colocated with group: * prim7 (score=INFINITY, id=colocation-prim7-group-INFINITY) =#=#=#= End test: Check locations and constraints for group - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for group =#=#=#= Begin test: Recursively check locations and constraints for group =#=#=#= Resources colocated with group: * prim7 (score=INFINITY, id=colocation-prim7-group-INFINITY) =#=#=#= End test: Recursively check locations and constraints for group - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints for group =#=#=#= Begin test: Check locations and constraints for group in XML =#=#=#= =#=#=#= End test: Check locations and constraints for group in XML - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for group in XML =#=#=#= Begin test: Recursively check locations and constraints for group in XML =#=#=#= =#=#=#= End test: Recursively check locations and constraints for group in XML - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints for group in XML =#=#=#= Begin test: Check locations and constraints for clone =#=#=#= Resources colocated with clone: * prim9 (score=INFINITY, id=colocation-prim9-clone-INFINITY) =#=#=#= End test: Check locations and constraints for clone - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for clone =#=#=#= Begin test: Recursively check locations and constraints for clone =#=#=#= Resources colocated with clone: * prim9 (score=INFINITY, id=colocation-prim9-clone-INFINITY) =#=#=#= End test: Recursively check locations and constraints for clone - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints for clone =#=#=#= Begin test: Check locations and constraints for clone in XML =#=#=#= =#=#=#= End test: Check locations and constraints for clone in XML - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for clone in XML =#=#=#= Begin test: Recursively check locations and constraints for clone in XML =#=#=#= =#=#=#= End test: Recursively check locations and constraints for clone in XML - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints for clone in XML =#=#=#= Begin test: Check locations and constraints for group member (referring to group) =#=#=#= Resources colocated with group: * prim7 (score=INFINITY, id=colocation-prim7-group-INFINITY) =#=#=#= End test: Check locations and constraints for group member (referring to group) - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for group member (referring to group) =#=#=#= Begin test: Check locations 
and constraints for group member (without referring to group) =#=#=#= Resources colocated with gr2: * prim8 (score=INFINITY, id=colocation-prim8-gr2-INFINITY) =#=#=#= End test: Check locations and constraints for group member (without referring to group) - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for group member (without referring to group) A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= Begin test: Set a meta-attribute for primitive and resources colocated with it =#=#=#= =#=#=#= End test: Set a meta-attribute for primitive and resources colocated with it - OK (0) =#=#=#= * Passed: crm_resource - Set a meta-attribute for primitive and resources colocated with it =#=#=#= Begin test: Set a meta-attribute for group and resource colocated with it =#=#=#= Set 'group' option: id=group-meta_attributes-target-role set=group-meta_attributes name=target-role value=Stopped Set 'prim7' option: id=prim7-meta_attributes-target-role set=prim7-meta_attributes name=target-role value=Stopped =#=#=#= End test: Set a meta-attribute for group and resource colocated with it - OK (0) =#=#=#= * Passed: crm_resource - Set a meta-attribute for group and resource colocated with it =#=#=#= Begin test: Set a meta-attribute for clone and resource colocated with it =#=#=#= =#=#=#= End test: Set a meta-attribute for clone and resource colocated with it - OK (0) =#=#=#= * Passed: crm_resource - Set a meta-attribute for clone and resource colocated with it =#=#=#= Begin test: Show resource digests =#=#=#= =#=#=#= End test: Show resource digests - OK (0) =#=#=#= * Passed: crm_resource - Show resource digests =#=#=#= Begin test: Show resource digests with overrides =#=#=#= =#=#=#= End test: Show resource digests with overrides - OK (0) =#=#=#= * Passed: crm_resource - Show resource digests with overrides =#=#=#= Begin test: Show resource operations =#=#=#= rsc1 (ocf:pacemaker:Dummy): Started: rsc1_monitor_0 (node=node4, call=136, rc=7, exec=28ms): complete Fencing (stonith:fence_xvm): Started: Fencing_monitor_0 (node=node4, call=5, rc=7, exec=2ms): complete rsc1 (ocf:pacemaker:Dummy): Started: rsc1_monitor_0 (node=node2, call=101, rc=7, exec=45ms): complete Fencing (stonith:fence_xvm): Started: Fencing_monitor_0 (node=node2, call=5, rc=7, exec=4ms): complete Fencing (stonith:fence_xvm): Started: Fencing_monitor_0 (node=node3, call=5, rc=7, exec=24ms): complete rsc1 (ocf:pacemaker:Dummy): Started: rsc1_monitor_0 (node=node5, call=99, rc=193, exec=27ms): pending Fencing (stonith:fence_xvm): Started: Fencing_monitor_0 (node=node5, call=5, rc=7, exec=14ms): complete rsc1 (ocf:pacemaker:Dummy): Started: rsc1_start_0 (node=node1, call=104, rc=0, exec=22ms): complete rsc1 (ocf:pacemaker:Dummy): Started: rsc1_monitor_10000 (node=node1, call=106, rc=0, exec=20ms): complete Fencing (stonith:fence_xvm): Started: Fencing_start_0 (node=node1, call=10, rc=0, exec=59ms): complete Fencing (stonith:fence_xvm): Started: Fencing_monitor_120000 (node=node1, call=12, rc=0, exec=70ms): complete =#=#=#= End test: Show resource operations - OK (0) =#=#=#= * Passed: crm_resource - Show resource operations =#=#=#= Begin test: Show resource operations (XML) =#=#=#= =#=#=#= End test: Show resource operations (XML) - OK (0) =#=#=#= * Passed: crm_resource - Show resource operations (XML) =#=#=#= Begin test: List all nodes =#=#=#= cluster node: overcloud-controller-0 (1) cluster node: overcloud-controller-1 (2) cluster node: overcloud-controller-2 
(3) cluster node: overcloud-galera-0 (4) cluster node: overcloud-galera-1 (5) cluster node: overcloud-galera-2 (6) guest node: lxc1 (lxc1) guest node: lxc2 (lxc2) remote node: overcloud-rabbit-0 (overcloud-rabbit-0) remote node: overcloud-rabbit-1 (overcloud-rabbit-1) remote node: overcloud-rabbit-2 (overcloud-rabbit-2) =#=#=#= End test: List all nodes - OK (0) =#=#=#= * Passed: crmadmin - List all nodes =#=#=#= Begin test: Minimally list all nodes =#=#=#= overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 overcloud-galera-0 overcloud-galera-1 overcloud-galera-2 lxc1 lxc2 overcloud-rabbit-0 overcloud-rabbit-1 overcloud-rabbit-2 =#=#=#= End test: Minimally list all nodes - OK (0) =#=#=#= * Passed: crmadmin - Minimally list all nodes =#=#=#= Begin test: List all nodes as bash exports =#=#=#= export overcloud-controller-0=1 export overcloud-controller-1=2 export overcloud-controller-2=3 export overcloud-galera-0=4 export overcloud-galera-1=5 export overcloud-galera-2=6 export lxc1=lxc1 export lxc2=lxc2 export overcloud-rabbit-0=overcloud-rabbit-0 export overcloud-rabbit-1=overcloud-rabbit-1 export overcloud-rabbit-2=overcloud-rabbit-2 =#=#=#= End test: List all nodes as bash exports - OK (0) =#=#=#= * Passed: crmadmin - List all nodes as bash exports =#=#=#= Begin test: List cluster nodes =#=#=#= 6 =#=#=#= End test: List cluster nodes - OK (0) =#=#=#= * Passed: crmadmin - List cluster nodes =#=#=#= Begin test: List guest nodes =#=#=#= 2 =#=#=#= End test: List guest nodes - OK (0) =#=#=#= * Passed: crmadmin - List guest nodes =#=#=#= Begin test: List remote nodes =#=#=#= 3 =#=#=#= End test: List remote nodes - OK (0) =#=#=#= * Passed: crmadmin - List remote nodes =#=#=#= Begin test: List cluster,remote nodes =#=#=#= 9 =#=#=#= End test: List cluster,remote nodes - OK (0) =#=#=#= * Passed: crmadmin - List cluster,remote nodes =#=#=#= Begin test: List guest,remote nodes =#=#=#= 5 =#=#=#= End test: List guest,remote nodes - OK (0) =#=#=#= * Passed: crmadmin - List guest,remote nodes =#=#=#= Begin test: Show allocation scores with crm_simulate =#=#=#= =#=#=#= End test: Show allocation scores with crm_simulate - OK (0) =#=#=#= * Passed: crm_simulate - Show allocation scores with crm_simulate =#=#=#= Begin test: Show utilization with crm_simulate =#=#=#= 4 of 32 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: * Node List: * Online: [ cluster01 cluster02 ] * GuestOnline: [ httpd-bundle-0 httpd-bundle-1 ] * Full List of Resources: * Clone Set: ping-clone [ping]: * Started: [ cluster01 cluster02 ] * Fencing (stonith:fence_xvm): Started cluster01 * dummy (ocf:pacemaker:Dummy): Started cluster02 * Clone Set: inactive-clone [inactive-dhcpd] (disabled): * Stopped (disabled): [ cluster01 cluster02 ] * Resource Group: inactive-group (disabled): * inactive-dummy-1 (ocf:pacemaker:Dummy): Stopped (disabled) * inactive-dummy-2 (ocf:pacemaker:Dummy): Stopped (disabled) * Container bundle set: httpd-bundle [pcmk:http]: * httpd-bundle-0 (192.168.122.131) (ocf:heartbeat:apache): Started cluster01 * httpd-bundle-1 (192.168.122.132) (ocf:heartbeat:apache): Started cluster02 * httpd-bundle-2 (192.168.122.133) (ocf:heartbeat:apache): Stopped * Resource Group: exim-group: * Public-IP (ocf:heartbeat:IPaddr): Started cluster02 * Email (lsb:exim): Started cluster02 * Clone Set: mysql-clone-group [mysql-group]: * Started: [ cluster01 cluster02 ] * Clone Set: promotable-clone [promotable-rsc] (promotable): * Promoted: [ cluster02 ] * Unpromoted: [ cluster01 
] Utilization Information: Only 'private' parameters to 1m-interval monitor for dummy on cluster02 changed: 0:0;16:2:0:4a9e64d6-e1dd-4395-917c-1596312eafe4 * Original: cluster01 capacity: * Original: cluster02 capacity: * Original: httpd-bundle-0 capacity: * Original: httpd-bundle-1 capacity: * Original: httpd-bundle-2 capacity: * pcmk__assign_resource: ping:0 utilization on cluster02: * pcmk__assign_resource: ping:1 utilization on cluster01: * pcmk__assign_resource: Fencing utilization on cluster01: * pcmk__assign_resource: dummy utilization on cluster02: * pcmk__assign_resource: httpd-bundle-docker-0 utilization on cluster01: * pcmk__assign_resource: httpd-bundle-docker-1 utilization on cluster02: * pcmk__assign_resource: httpd-bundle-ip-192.168.122.131 utilization on cluster01: * pcmk__assign_resource: httpd-bundle-0 utilization on cluster01: * pcmk__assign_resource: httpd:0 utilization on httpd-bundle-0: * pcmk__assign_resource: httpd-bundle-ip-192.168.122.132 utilization on cluster02: * pcmk__assign_resource: httpd-bundle-1 utilization on cluster02: * pcmk__assign_resource: httpd:1 utilization on httpd-bundle-1: * pcmk__assign_resource: httpd-bundle-2 utilization on cluster01: * pcmk__assign_resource: httpd:2 utilization on httpd-bundle-2: * pcmk__assign_resource: Public-IP utilization on cluster02: * pcmk__assign_resource: Email utilization on cluster02: * pcmk__assign_resource: mysql-proxy:0 utilization on cluster02: * pcmk__assign_resource: mysql-proxy:1 utilization on cluster01: * pcmk__assign_resource: promotable-rsc:0 utilization on cluster02: * pcmk__assign_resource: promotable-rsc:1 utilization on cluster01: * Remaining: cluster01 capacity: * Remaining: cluster02 capacity: * Remaining: httpd-bundle-0 capacity: * Remaining: httpd-bundle-1 capacity: * Remaining: httpd-bundle-2 capacity: Transition Summary: * Start httpd-bundle-2 ( cluster01 ) due to unrunnable httpd-bundle-docker-2 start (blocked) * Start httpd:2 ( httpd-bundle-2 ) due to unrunnable httpd-bundle-docker-2 start (blocked) =#=#=#= End test: Show utilization with crm_simulate - OK (0) =#=#=#= * Passed: crm_simulate - Show utilization with crm_simulate =#=#=#= Begin test: Simulate injecting a failure =#=#=#= 4 of 32 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: * Node List: * Online: [ cluster01 cluster02 ] * GuestOnline: [ httpd-bundle-0 httpd-bundle-1 ] * Full List of Resources: * Clone Set: ping-clone [ping]: * Started: [ cluster01 cluster02 ] * Fencing (stonith:fence_xvm): Started cluster01 * dummy (ocf:pacemaker:Dummy): Started cluster02 * Clone Set: inactive-clone [inactive-dhcpd] (disabled): * Stopped (disabled): [ cluster01 cluster02 ] * Resource Group: inactive-group (disabled): * inactive-dummy-1 (ocf:pacemaker:Dummy): Stopped (disabled) * inactive-dummy-2 (ocf:pacemaker:Dummy): Stopped (disabled) * Container bundle set: httpd-bundle [pcmk:http]: * httpd-bundle-0 (192.168.122.131) (ocf:heartbeat:apache): Started cluster01 * httpd-bundle-1 (192.168.122.132) (ocf:heartbeat:apache): Started cluster02 * httpd-bundle-2 (192.168.122.133) (ocf:heartbeat:apache): Stopped * Resource Group: exim-group: * Public-IP (ocf:heartbeat:IPaddr): Started cluster02 * Email (lsb:exim): Started cluster02 * Clone Set: mysql-clone-group [mysql-group]: * Started: [ cluster01 cluster02 ] * Clone Set: promotable-clone [promotable-rsc] (promotable): * Promoted: [ cluster02 ] * Unpromoted: [ cluster01 ] Performing Requested Modifications: * Injecting ping_monitor_10000@cluster02=1 
into the configuration * Injecting attribute fail-count-ping#monitor_10000=1 into /node_state '2' * Injecting attribute last-failure-ping#monitor_10000= into /node_state '2' Transition Summary: * Recover ping:0 ( cluster02 ) * Start httpd-bundle-2 ( cluster01 ) due to unrunnable httpd-bundle-docker-2 start (blocked) * Start httpd:2 ( httpd-bundle-2 ) due to unrunnable httpd-bundle-docker-2 start (blocked) Executing Cluster Transition: * Cluster action: clear_failcount for ping on cluster02 * Pseudo action: ping-clone_stop_0 * Pseudo action: httpd-bundle_start_0 * Resource action: ping stop on cluster02 * Pseudo action: ping-clone_stopped_0 * Pseudo action: ping-clone_start_0 * Pseudo action: httpd-bundle-clone_start_0 * Resource action: ping start on cluster02 * Resource action: ping monitor=10000 on cluster02 * Pseudo action: ping-clone_running_0 * Pseudo action: httpd-bundle-clone_running_0 * Pseudo action: httpd-bundle_running_0 Revised Cluster Status: * Node List: * Online: [ cluster01 cluster02 ] * GuestOnline: [ httpd-bundle-0 httpd-bundle-1 ] * Full List of Resources: * Clone Set: ping-clone [ping]: * Started: [ cluster01 cluster02 ] * Fencing (stonith:fence_xvm): Started cluster01 * dummy (ocf:pacemaker:Dummy): Started cluster02 * Clone Set: inactive-clone [inactive-dhcpd] (disabled): * Stopped (disabled): [ cluster01 cluster02 ] * Resource Group: inactive-group (disabled): * inactive-dummy-1 (ocf:pacemaker:Dummy): Stopped (disabled) * inactive-dummy-2 (ocf:pacemaker:Dummy): Stopped (disabled) * Container bundle set: httpd-bundle [pcmk:http]: * httpd-bundle-0 (192.168.122.131) (ocf:heartbeat:apache): Started cluster01 * httpd-bundle-1 (192.168.122.132) (ocf:heartbeat:apache): Started cluster02 * httpd-bundle-2 (192.168.122.133) (ocf:heartbeat:apache): Stopped * Resource Group: exim-group: * Public-IP (ocf:heartbeat:IPaddr): Started cluster02 * Email (lsb:exim): Started cluster02 * Clone Set: mysql-clone-group [mysql-group]: * Started: [ cluster01 cluster02 ] * Clone Set: promotable-clone [promotable-rsc] (promotable): * Promoted: [ cluster02 ] * Unpromoted: [ cluster01 ] =#=#=#= End test: Simulate injecting a failure - OK (0) =#=#=#= * Passed: crm_simulate - Simulate injecting a failure =#=#=#= Begin test: Simulate bringing a node down =#=#=#= 4 of 32 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: * Node List: * Online: [ cluster01 cluster02 ] * GuestOnline: [ httpd-bundle-0 httpd-bundle-1 ] * Full List of Resources: * Clone Set: ping-clone [ping]: * Started: [ cluster01 cluster02 ] * Fencing (stonith:fence_xvm): Started cluster01 * dummy (ocf:pacemaker:Dummy): Started cluster02 * Clone Set: inactive-clone [inactive-dhcpd] (disabled): * Stopped (disabled): [ cluster01 cluster02 ] * Resource Group: inactive-group (disabled): * inactive-dummy-1 (ocf:pacemaker:Dummy): Stopped (disabled) * inactive-dummy-2 (ocf:pacemaker:Dummy): Stopped (disabled) * Container bundle set: httpd-bundle [pcmk:http]: * httpd-bundle-0 (192.168.122.131) (ocf:heartbeat:apache): Started cluster01 * httpd-bundle-1 (192.168.122.132) (ocf:heartbeat:apache): Started cluster02 * httpd-bundle-2 (192.168.122.133) (ocf:heartbeat:apache): Stopped * Resource Group: exim-group: * Public-IP (ocf:heartbeat:IPaddr): Started cluster02 * Email (lsb:exim): Started cluster02 * Clone Set: mysql-clone-group [mysql-group]: * Started: [ cluster01 cluster02 ] * Clone Set: promotable-clone [promotable-rsc] (promotable): * Promoted: [ cluster02 ] * Unpromoted: [ cluster01 ] 
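The failure-injection and node-outage results in this part of the suite come from crm_simulate runs against a saved CIB. A minimal sketch of the kind of invocations involved, assuming the --op-inject, --node-down and --node-fail long options (inferred from the "Injecting", "Taking node ... offline" and "Failing node" messages) and a hypothetical input file cts-cli.xml:

  # Inject a failed 10s ping monitor on cluster02 and show the resulting transition
  crm_simulate --simulate --xml-file cts-cli.xml --op-inject ping_monitor_10000@cluster02=1
  # Recompute placement with a node taken down cleanly, or failed abruptly
  crm_simulate --simulate --xml-file cts-cli.xml --node-down cluster01
  crm_simulate --simulate --xml-file cts-cli.xml --node-fail cluster02

The earlier "Show allocation scores" and "Show utilization" cases are driven by the same tool, presumably via its score- and utilization-display options.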
Performing Requested Modifications: * Taking node cluster01 offline Transition Summary: * Fence (off) httpd-bundle-0 (resource: httpd-bundle-docker-0) 'guest is unclean' * Start Fencing ( cluster02 ) * Start httpd-bundle-0 ( cluster02 ) due to unrunnable httpd-bundle-docker-0 start (blocked) * Stop httpd:0 ( httpd-bundle-0 ) due to unrunnable httpd-bundle-docker-0 start * Start httpd-bundle-2 ( cluster02 ) due to unrunnable httpd-bundle-docker-2 start (blocked) * Start httpd:2 ( httpd-bundle-2 ) due to unrunnable httpd-bundle-docker-2 start (blocked) Executing Cluster Transition: * Resource action: Fencing start on cluster02 * Pseudo action: stonith-httpd-bundle-0-off on httpd-bundle-0 * Pseudo action: httpd-bundle_stop_0 * Pseudo action: httpd-bundle_start_0 * Resource action: Fencing monitor=60000 on cluster02 * Pseudo action: httpd-bundle-clone_stop_0 * Pseudo action: httpd_stop_0 * Pseudo action: httpd-bundle-clone_stopped_0 * Pseudo action: httpd-bundle-clone_start_0 * Pseudo action: httpd-bundle_stopped_0 * Pseudo action: httpd-bundle-clone_running_0 * Pseudo action: httpd-bundle_running_0 Revised Cluster Status: * Node List: * Online: [ cluster02 ] * OFFLINE: [ cluster01 ] * GuestOnline: [ httpd-bundle-1 ] * Full List of Resources: * Clone Set: ping-clone [ping]: * Started: [ cluster02 ] * Stopped: [ cluster01 ] * Fencing (stonith:fence_xvm): Started cluster02 * dummy (ocf:pacemaker:Dummy): Started cluster02 * Clone Set: inactive-clone [inactive-dhcpd] (disabled): * Stopped (disabled): [ cluster01 cluster02 ] * Resource Group: inactive-group (disabled): * inactive-dummy-1 (ocf:pacemaker:Dummy): Stopped (disabled) * inactive-dummy-2 (ocf:pacemaker:Dummy): Stopped (disabled) * Container bundle set: httpd-bundle [pcmk:http]: * httpd-bundle-0 (192.168.122.131) (ocf:heartbeat:apache): FAILED * httpd-bundle-1 (192.168.122.132) (ocf:heartbeat:apache): Started cluster02 * httpd-bundle-2 (192.168.122.133) (ocf:heartbeat:apache): Stopped * Resource Group: exim-group: * Public-IP (ocf:heartbeat:IPaddr): Started cluster02 * Email (lsb:exim): Started cluster02 * Clone Set: mysql-clone-group [mysql-group]: * Started: [ cluster02 ] * Stopped: [ cluster01 ] * Clone Set: promotable-clone [promotable-rsc] (promotable): * Promoted: [ cluster02 ] * Stopped: [ cluster01 ] =#=#=#= End test: Simulate bringing a node down - OK (0) =#=#=#= * Passed: crm_simulate - Simulate bringing a node down =#=#=#= Begin test: Simulate a node failing =#=#=#= 4 of 32 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: * Node List: * Online: [ cluster01 cluster02 ] * GuestOnline: [ httpd-bundle-0 httpd-bundle-1 ] * Full List of Resources: * Clone Set: ping-clone [ping]: * Started: [ cluster01 cluster02 ] * Fencing (stonith:fence_xvm): Started cluster01 * dummy (ocf:pacemaker:Dummy): Started cluster02 * Clone Set: inactive-clone [inactive-dhcpd] (disabled): * Stopped (disabled): [ cluster01 cluster02 ] * Resource Group: inactive-group (disabled): * inactive-dummy-1 (ocf:pacemaker:Dummy): Stopped (disabled) * inactive-dummy-2 (ocf:pacemaker:Dummy): Stopped (disabled) * Container bundle set: httpd-bundle [pcmk:http]: * httpd-bundle-0 (192.168.122.131) (ocf:heartbeat:apache): Started cluster01 * httpd-bundle-1 (192.168.122.132) (ocf:heartbeat:apache): Started cluster02 * httpd-bundle-2 (192.168.122.133) (ocf:heartbeat:apache): Stopped * Resource Group: exim-group: * Public-IP (ocf:heartbeat:IPaddr): Started cluster02 * Email (lsb:exim): Started cluster02 * Clone Set: 
mysql-clone-group [mysql-group]: * Started: [ cluster01 cluster02 ] * Clone Set: promotable-clone [promotable-rsc] (promotable): * Promoted: [ cluster02 ] * Unpromoted: [ cluster01 ] Performing Requested Modifications: * Failing node cluster02 Transition Summary: * Fence (off) httpd-bundle-1 (resource: httpd-bundle-docker-1) 'guest is unclean' * Fence (reboot) cluster02 'peer is no longer part of the cluster' * Stop ping:0 ( cluster02 ) due to node availability * Stop dummy ( cluster02 ) due to node availability * Stop httpd-bundle-ip-192.168.122.132 ( cluster02 ) due to node availability * Stop httpd-bundle-docker-1 ( cluster02 ) due to node availability * Stop httpd-bundle-1 ( cluster02 ) due to unrunnable httpd-bundle-docker-1 start * Stop httpd:1 ( httpd-bundle-1 ) due to unrunnable httpd-bundle-docker-1 start * Start httpd-bundle-2 ( cluster01 ) due to unrunnable httpd-bundle-docker-2 start (blocked) * Start httpd:2 ( httpd-bundle-2 ) due to unrunnable httpd-bundle-docker-2 start (blocked) * Move Public-IP ( cluster02 -> cluster01 ) * Move Email ( cluster02 -> cluster01 ) * Stop mysql-proxy:0 ( cluster02 ) due to node availability * Stop promotable-rsc:0 ( Promoted cluster02 ) due to node availability Executing Cluster Transition: * Pseudo action: httpd-bundle-1_stop_0 * Pseudo action: promotable-clone_demote_0 * Pseudo action: httpd-bundle_stop_0 * Pseudo action: httpd-bundle_start_0 * Fencing cluster02 (reboot) * Pseudo action: ping-clone_stop_0 * Pseudo action: dummy_stop_0 * Pseudo action: httpd-bundle-docker-1_stop_0 * Pseudo action: exim-group_stop_0 * Pseudo action: Email_stop_0 * Pseudo action: mysql-clone-group_stop_0 * Pseudo action: promotable-rsc_demote_0 * Pseudo action: promotable-clone_demoted_0 * Pseudo action: promotable-clone_stop_0 * Pseudo action: stonith-httpd-bundle-1-off on httpd-bundle-1 * Pseudo action: ping_stop_0 * Pseudo action: ping-clone_stopped_0 * Pseudo action: httpd-bundle-clone_stop_0 * Pseudo action: httpd-bundle-ip-192.168.122.132_stop_0 * Pseudo action: Public-IP_stop_0 * Pseudo action: mysql-group:0_stop_0 * Pseudo action: mysql-proxy_stop_0 * Pseudo action: promotable-rsc_stop_0 * Pseudo action: promotable-clone_stopped_0 * Pseudo action: httpd_stop_0 * Pseudo action: httpd-bundle-clone_stopped_0 * Pseudo action: httpd-bundle-clone_start_0 * Pseudo action: exim-group_stopped_0 * Pseudo action: exim-group_start_0 * Resource action: Public-IP start on cluster01 * Resource action: Email start on cluster01 * Pseudo action: mysql-group:0_stopped_0 * Pseudo action: mysql-clone-group_stopped_0 * Pseudo action: httpd-bundle_stopped_0 * Pseudo action: httpd-bundle-clone_running_0 * Pseudo action: exim-group_running_0 * Pseudo action: httpd-bundle_running_0 Revised Cluster Status: * Node List: * Online: [ cluster01 ] * OFFLINE: [ cluster02 ] * GuestOnline: [ httpd-bundle-0 ] * Full List of Resources: * Clone Set: ping-clone [ping]: * Started: [ cluster01 ] * Stopped: [ cluster02 ] * Fencing (stonith:fence_xvm): Started cluster01 * dummy (ocf:pacemaker:Dummy): Stopped * Clone Set: inactive-clone [inactive-dhcpd] (disabled): * Stopped (disabled): [ cluster01 cluster02 ] * Resource Group: inactive-group (disabled): * inactive-dummy-1 (ocf:pacemaker:Dummy): Stopped (disabled) * inactive-dummy-2 (ocf:pacemaker:Dummy): Stopped (disabled) * Container bundle set: httpd-bundle [pcmk:http]: * httpd-bundle-0 (192.168.122.131) (ocf:heartbeat:apache): Started cluster01 * httpd-bundle-1 (192.168.122.132) (ocf:heartbeat:apache): FAILED * httpd-bundle-2 (192.168.122.133) 
(ocf:heartbeat:apache): Stopped * Resource Group: exim-group: * Public-IP (ocf:heartbeat:IPaddr): Started cluster01 * Email (lsb:exim): Started cluster01 * Clone Set: mysql-clone-group [mysql-group]: * Started: [ cluster01 ] * Stopped: [ cluster02 ] * Clone Set: promotable-clone [promotable-rsc] (promotable): * Unpromoted: [ cluster01 ] * Stopped: [ cluster02 ] =#=#=#= End test: Simulate a node failing - OK (0) =#=#=#= * Passed: crm_simulate - Simulate a node failing =#=#=#= Begin test: List a promotable clone resource =#=#=#= resource promotable-clone is running on: cluster01 resource promotable-clone is running on: cluster02 Promoted =#=#=#= End test: List a promotable clone resource - OK (0) =#=#=#= * Passed: crm_resource - List a promotable clone resource =#=#=#= Begin test: List the primitive of a promotable clone resource =#=#=#= resource promotable-rsc is running on: cluster01 resource promotable-rsc is running on: cluster02 Promoted =#=#=#= End test: List the primitive of a promotable clone resource - OK (0) =#=#=#= * Passed: crm_resource - List the primitive of a promotable clone resource =#=#=#= Begin test: List a single instance of a promotable clone resource =#=#=#= resource promotable-rsc:0 is running on: cluster02 Promoted =#=#=#= End test: List a single instance of a promotable clone resource - OK (0) =#=#=#= * Passed: crm_resource - List a single instance of a promotable clone resource =#=#=#= Begin test: List another instance of a promotable clone resource =#=#=#= resource promotable-rsc:1 is running on: cluster01 =#=#=#= End test: List another instance of a promotable clone resource - OK (0) =#=#=#= * Passed: crm_resource - List another instance of a promotable clone resource =#=#=#= Begin test: List a promotable clone resource in XML =#=#=#= cluster01 cluster02 =#=#=#= End test: List a promotable clone resource in XML - OK (0) =#=#=#= * Passed: crm_resource - List a promotable clone resource in XML =#=#=#= Begin test: List the primitive of a promotable clone resource in XML =#=#=#= cluster01 cluster02 =#=#=#= End test: List the primitive of a promotable clone resource in XML - OK (0) =#=#=#= * Passed: crm_resource - List the primitive of a promotable clone resource in XML =#=#=#= Begin test: List a single instance of a promotable clone resource in XML =#=#=#= cluster02 =#=#=#= End test: List a single instance of a promotable clone resource in XML - OK (0) =#=#=#= * Passed: crm_resource - List a single instance of a promotable clone resource in XML =#=#=#= Begin test: List another instance of a promotable clone resource in XML =#=#=#= cluster01 =#=#=#= End test: List another instance of a promotable clone resource in XML - OK (0) =#=#=#= * Passed: crm_resource - List another instance of a promotable clone resource in XML =#=#=#= Begin test: Try to move an instance of a cloned resource =#=#=#= crm_resource: Cannot operate on clone resource instance 'promotable-rsc:0' Error performing operation: Invalid parameter =#=#=#= End test: Try to move an instance of a cloned resource - Invalid parameter (2) =#=#=#= * Passed: crm_resource - Try to move an instance of a cloned resource =#=#=#= Begin test: Query a nonexistent promotable score attribute =#=#=#= crm_attribute: Error performing operation: No such device or address =#=#=#= End test: Query a nonexistent promotable score attribute - No such object (105) =#=#=#= * Passed: crm_attribute - Query a nonexistent promotable score attribute =#=#=#= Begin test: Query a nonexistent promotable score attribute (XML) =#=#=#= 
crm_attribute: Error performing operation: No such device or address =#=#=#= End test: Query a nonexistent promotable score attribute (XML) - No such object (105) =#=#=#= * Passed: crm_attribute - Query a nonexistent promotable score attribute (XML) =#=#=#= Begin test: Delete a nonexistent promotable score attribute =#=#=#= =#=#=#= End test: Delete a nonexistent promotable score attribute - OK (0) =#=#=#= * Passed: crm_attribute - Delete a nonexistent promotable score attribute =#=#=#= Begin test: Delete a nonexistent promotable score attribute (XML) =#=#=#= =#=#=#= End test: Delete a nonexistent promotable score attribute (XML) - OK (0) =#=#=#= * Passed: crm_attribute - Delete a nonexistent promotable score attribute (XML) =#=#=#= Begin test: Query after deleting a nonexistent promotable score attribute =#=#=#= crm_attribute: Error performing operation: No such device or address =#=#=#= End test: Query after deleting a nonexistent promotable score attribute - No such object (105) =#=#=#= * Passed: crm_attribute - Query after deleting a nonexistent promotable score attribute =#=#=#= Begin test: Query after deleting a nonexistent promotable score attribute (XML) =#=#=#= crm_attribute: Error performing operation: No such device or address =#=#=#= End test: Query after deleting a nonexistent promotable score attribute (XML) - No such object (105) =#=#=#= * Passed: crm_attribute - Query after deleting a nonexistent promotable score attribute (XML) =#=#=#= Begin test: Update a nonexistent promotable score attribute =#=#=#= =#=#=#= End test: Update a nonexistent promotable score attribute - OK (0) =#=#=#= * Passed: crm_attribute - Update a nonexistent promotable score attribute =#=#=#= Begin test: Update a nonexistent promotable score attribute (XML) =#=#=#= =#=#=#= End test: Update a nonexistent promotable score attribute (XML) - OK (0) =#=#=#= * Passed: crm_attribute - Update a nonexistent promotable score attribute (XML) =#=#=#= Begin test: Query after updating a nonexistent promotable score attribute =#=#=#= scope=status name=master-promotable-rsc value=1 =#=#=#= End test: Query after updating a nonexistent promotable score attribute - OK (0) =#=#=#= * Passed: crm_attribute - Query after updating a nonexistent promotable score attribute =#=#=#= Begin test: Query after updating a nonexistent promotable score attribute (XML) =#=#=#= =#=#=#= End test: Query after updating a nonexistent promotable score attribute (XML) - OK (0) =#=#=#= * Passed: crm_attribute - Query after updating a nonexistent promotable score attribute (XML) =#=#=#= Begin test: Update an existing promotable score attribute =#=#=#= =#=#=#= End test: Update an existing promotable score attribute - OK (0) =#=#=#= * Passed: crm_attribute - Update an existing promotable score attribute =#=#=#= Begin test: Update an existing promotable score attribute (XML) =#=#=#= =#=#=#= End test: Update an existing promotable score attribute (XML) - OK (0) =#=#=#= * Passed: crm_attribute - Update an existing promotable score attribute (XML) =#=#=#= Begin test: Query after updating an existing promotable score attribute =#=#=#= scope=status name=master-promotable-rsc value=5 =#=#=#= End test: Query after updating an existing promotable score attribute - OK (0) =#=#=#= * Passed: crm_attribute - Query after updating an existing promotable score attribute =#=#=#= Begin test: Query after updating an existing promotable score attribute (XML) =#=#=#= =#=#=#= End test: Query after updating an existing promotable score attribute (XML) - OK (0) =#=#=#= * 
Passed: crm_attribute - Query after updating an existing promotable score attribute (XML) =#=#=#= Begin test: Delete an existing promotable score attribute =#=#=#= Deleted status attribute: id=status-1-master-promotable-rsc name=master-promotable-rsc =#=#=#= End test: Delete an existing promotable score attribute - OK (0) =#=#=#= * Passed: crm_attribute - Delete an existing promotable score attribute =#=#=#= Begin test: Delete an existing promotable score attribute (XML) =#=#=#= =#=#=#= End test: Delete an existing promotable score attribute (XML) - OK (0) =#=#=#= * Passed: crm_attribute - Delete an existing promotable score attribute (XML) =#=#=#= Begin test: Query after deleting an existing promotable score attribute =#=#=#= crm_attribute: Error performing operation: No such device or address =#=#=#= End test: Query after deleting an existing promotable score attribute - No such object (105) =#=#=#= * Passed: crm_attribute - Query after deleting an existing promotable score attribute =#=#=#= Begin test: Query after deleting an existing promotable score attribute (XML) =#=#=#= crm_attribute: Error performing operation: No such device or address =#=#=#= End test: Query after deleting an existing promotable score attribute (XML) - No such object (105) =#=#=#= * Passed: crm_attribute - Query after deleting an existing promotable score attribute (XML) =#=#=#= Begin test: Update a promotable score attribute to -INFINITY =#=#=#= =#=#=#= End test: Update a promotable score attribute to -INFINITY - OK (0) =#=#=#= * Passed: crm_attribute - Update a promotable score attribute to -INFINITY =#=#=#= Begin test: Update a promotable score attribute to -INFINITY (XML) =#=#=#= =#=#=#= End test: Update a promotable score attribute to -INFINITY (XML) - OK (0) =#=#=#= * Passed: crm_attribute - Update a promotable score attribute to -INFINITY (XML) =#=#=#= Begin test: Query after updating a promotable score attribute to -INFINITY =#=#=#= scope=status name=master-promotable-rsc value=-INFINITY =#=#=#= End test: Query after updating a promotable score attribute to -INFINITY - OK (0) =#=#=#= * Passed: crm_attribute - Query after updating a promotable score attribute to -INFINITY =#=#=#= Begin test: Query after updating a promotable score attribute to -INFINITY (XML) =#=#=#= =#=#=#= End test: Query after updating a promotable score attribute to -INFINITY (XML) - OK (0) =#=#=#= * Passed: crm_attribute - Query after updating a promotable score attribute to -INFINITY (XML) =#=#=#= Begin test: Try OCF_RESOURCE_INSTANCE if -p is specified with an empty string =#=#=#= scope=status name=master-promotable-rsc value=-INFINITY =#=#=#= End test: Try OCF_RESOURCE_INSTANCE if -p is specified with an empty string - OK (0) =#=#=#= * Passed: crm_attribute - Try OCF_RESOURCE_INSTANCE if -p is specified with an empty string =#=#=#= Begin test: Return usage error if both -p and OCF_RESOURCE_INSTANCE are empty strings =#=#=#= crm_attribute: -p/--promotion must be called from an OCF resource agent or with a resource ID specified =#=#=#= End test: Return usage error if both -p and OCF_RESOURCE_INSTANCE are empty strings - Incorrect usage (64) =#=#=#= * Passed: crm_attribute - Return usage error if both -p and OCF_RESOURCE_INSTANCE are empty strings =#=#=#= Begin test: Check that CIB_file="-" works - crm_mon =#=#=#= Cluster Summary: * Stack: corosync * Current DC: cluster02 (version) - partition with quorum * Last updated: * Last change: * 5 nodes configured * 32 resource instances configured (4 DISABLED) Node List: * Online: [ 
cluster01 cluster02 ] * GuestOnline: [ httpd-bundle-0 httpd-bundle-1 ] Active Resources: * Clone Set: ping-clone [ping]: * Started: [ cluster01 cluster02 ] * Fencing (stonith:fence_xvm): Started cluster01 * dummy (ocf:pacemaker:Dummy): Started cluster02 * Container bundle set: httpd-bundle [pcmk:http]: * httpd-bundle-0 (192.168.122.131) (ocf:heartbeat:apache): Started cluster01 * httpd-bundle-1 (192.168.122.132) (ocf:heartbeat:apache): Started cluster02 * httpd-bundle-2 (192.168.122.133) (ocf:heartbeat:apache): Stopped * Resource Group: exim-group: * Public-IP (ocf:heartbeat:IPaddr): Started cluster02 * Email (lsb:exim): Started cluster02 * Clone Set: mysql-clone-group [mysql-group]: * Started: [ cluster01 cluster02 ] * Clone Set: promotable-clone [promotable-rsc] (promotable): * Promoted: [ cluster02 ] * Unpromoted: [ cluster01 ] =#=#=#= End test: Check that CIB_file="-" works - crm_mon - OK (0) =#=#=#= * Passed: cat - Check that CIB_file="-" works - crm_mon =#=#=#= Begin test: Check that CIB_file="-" works - crm_resource =#=#=#= =#=#=#= End test: Check that CIB_file="-" works - crm_resource - OK (0) =#=#=#= * Passed: cat - Check that CIB_file="-" works - crm_resource =#=#=#= Begin test: Check that CIB_file="-" works - crmadmin =#=#=#= 11 =#=#=#= End test: Check that CIB_file="-" works - crmadmin - OK (0) =#=#=#= * Passed: cat - Check that CIB_file="-" works - crmadmin =#=#=#= Begin test: Get active shadow instance (no active instance) =#=#=#= crm_shadow: No active shadow configuration defined =#=#=#= End test: Get active shadow instance (no active instance) - No such object (105) =#=#=#= * Passed: crm_shadow - Get active shadow instance (no active instance) =#=#=#= Begin test: Get active shadow instance (no active instance) (XML) =#=#=#= crm_shadow: No active shadow configuration defined =#=#=#= End test: Get active shadow instance (no active instance) (XML) - No such object (105) =#=#=#= * Passed: crm_shadow - Get active shadow instance (no active instance) (XML) =#=#=#= Begin test: Get active shadow instance's file name (no active instance) =#=#=#= crm_shadow: No active shadow configuration defined =#=#=#= End test: Get active shadow instance's file name (no active instance) - No such object (105) =#=#=#= * Passed: crm_shadow - Get active shadow instance's file name (no active instance) =#=#=#= Begin test: Get active shadow instance's file name (no active instance) (XML) =#=#=#= crm_shadow: No active shadow configuration defined =#=#=#= End test: Get active shadow instance's file name (no active instance) (XML) - No such object (105) =#=#=#= * Passed: crm_shadow - Get active shadow instance's file name (no active instance) (XML) =#=#=#= Begin test: Get active shadow instance's contents (no active instance) =#=#=#= crm_shadow: No active shadow configuration defined =#=#=#= End test: Get active shadow instance's contents (no active instance) - No such object (105) =#=#=#= * Passed: crm_shadow - Get active shadow instance's contents (no active instance) =#=#=#= Begin test: Get active shadow instance's contents (no active instance) (XML) =#=#=#= crm_shadow: No active shadow configuration defined =#=#=#= End test: Get active shadow instance's contents (no active instance) (XML) - No such object (105) =#=#=#= * Passed: crm_shadow - Get active shadow instance's contents (no active instance) (XML) =#=#=#= Begin test: Get active shadow instance's diff (no active instance) =#=#=#= crm_shadow: No active shadow configuration defined =#=#=#= End test: Get active shadow instance's diff (no active 
instance) - No such object (105) =#=#=#= * Passed: crm_shadow - Get active shadow instance's diff (no active instance) =#=#=#= Begin test: Get active shadow instance's diff (no active instance) (XML) =#=#=#= crm_shadow: No active shadow configuration defined =#=#=#= End test: Get active shadow instance's diff (no active instance) (XML) - No such object (105) =#=#=#= * Passed: crm_shadow - Get active shadow instance's diff (no active instance) (XML) =#=#=#= Begin test: Create copied shadow instance =#=#=#= A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= End test: Create copied shadow instance - OK (0) =#=#=#= * Passed: crm_shadow - Create copied shadow instance =#=#=#= Begin test: Create copied shadow instance (XML) =#=#=#= A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= End test: Create copied shadow instance (XML) - OK (0) =#=#=#= * Passed: crm_shadow - Create copied shadow instance (XML) =#=#=#= Begin test: Get active shadow instance (copied) =#=#=#= cts-cli =#=#=#= End test: Get active shadow instance (copied) - OK (0) =#=#=#= * Passed: crm_shadow - Get active shadow instance (copied) =#=#=#= Begin test: Get active shadow instance (copied) (XML) =#=#=#= =#=#=#= End test: Get active shadow instance (copied) (XML) - OK (0) =#=#=#= * Passed: crm_shadow - Get active shadow instance (copied) (XML) =#=#=#= Begin test: Get active shadow instance's file name (copied) =#=#=#= /tmp/cts-cli.shadow/shadow.cts-cli =#=#=#= End test: Get active shadow instance's file name (copied) - OK (0) =#=#=#= * Passed: crm_shadow - Get active shadow instance's file name (copied) =#=#=#= Begin test: Get active shadow instance's file name (copied) (XML) =#=#=#= =#=#=#= End test: Get active shadow instance's file name (copied) (XML) - OK (0) =#=#=#= * Passed: crm_shadow - Get active shadow instance's file name (copied) (XML) =#=#=#= Begin test: Get active shadow instance's contents (copied) =#=#=#= =#=#=#= End test: Get active shadow instance's contents (copied) - OK (0) =#=#=#= * Passed: crm_shadow - Get active shadow instance's contents (copied) =#=#=#= Begin test: Get active shadow instance's contents (copied) (XML) =#=#=#= ]]> =#=#=#= End test: Get active shadow instance's contents (copied) (XML) - OK (0) =#=#=#= * Passed: crm_shadow - Get active shadow instance's contents (copied) (XML) =#=#=#= Begin test: Get active shadow instance's diff (copied) =#=#=#= =#=#=#= End test: Get active shadow instance's diff (copied) - OK (0) =#=#=#= * Passed: crm_shadow - Get active shadow instance's diff (copied) =#=#=#= Begin test: Get active shadow instance's diff (copied) (XML) =#=#=#= =#=#=#= End test: Get active shadow instance's diff (copied) (XML) - OK (0) =#=#=#= * Passed: crm_shadow - Get active shadow instance's diff (copied) (XML) =#=#=#= Begin test: Get active shadow instance's diff (after changes) =#=#=#= Diff: --- 1.1.173 2 Diff: +++ 1.4.1 (null) -- /cib/configuration/op_defaults + /cib: @epoch=4, @num_updates=1 + /cib/configuration/resources/primitive[@id='dummy']: @description=desc ++ /cib/configuration/resources: ++ /cib/status: =#=#=#= End test: Get active shadow instance's diff (after changes) - Error occurred (1) =#=#=#= * Passed: crm_shadow - Get active shadow instance's diff (after changes) =#=#=#= Begin test: Get active shadow instance's diff (after changes) (XML) =#=#=#= ]]> =#=#=#= End test: Get active shadow instance's diff (after changes) (XML) - 
Error occurred (1) =#=#=#= * Passed: crm_shadow - Get active shadow instance's diff (after changes) (XML) =#=#=#= Begin test: Commit shadow instance =#=#=#= crm_shadow: The commit command overwrites the active cluster configuration. To prevent accidental destruction of the cluster, the --force flag is required in order to proceed. =#=#=#= End test: Commit shadow instance - Incorrect usage (64) =#=#=#= * Passed: crm_shadow - Commit shadow instance =#=#=#= Begin test: Commit shadow instance (force) =#=#=#= =#=#=#= End test: Commit shadow instance (force) - OK (0) =#=#=#= * Passed: crm_shadow - Commit shadow instance (force) =#=#=#= Begin test: Get active shadow instance's diff (after commit) =#=#=#= Diff: --- 1.2.0 2 Diff: +++ 1.4.1 (null) + /cib: @epoch=4, @num_updates=1 ++ /cib/status: =#=#=#= End test: Get active shadow instance's diff (after commit) - Error occurred (1) =#=#=#= * Passed: crm_shadow - Get active shadow instance's diff (after commit) =#=#=#= Begin test: Commit shadow instance (force) (all) =#=#=#= =#=#=#= End test: Commit shadow instance (force) (all) - OK (0) =#=#=#= * Passed: crm_shadow - Commit shadow instance (force) (all) =#=#=#= Begin test: Get active shadow instance's diff (after commit all) =#=#=#= Diff: --- 1.4.2 2 Diff: +++ 1.4.1 (null) + /cib: @num_updates=1 =#=#=#= End test: Get active shadow instance's diff (after commit all) - Error occurred (1) =#=#=#= * Passed: crm_shadow - Get active shadow instance's diff (after commit all) =#=#=#= Begin test: Commit shadow instance (XML) =#=#=#= crm_shadow: The commit command overwrites the active cluster configuration. To prevent accidental destruction of the cluster, the --force flag is required in order to proceed. =#=#=#= End test: Commit shadow instance (XML) - Incorrect usage (64) =#=#=#= * Passed: crm_shadow - Commit shadow instance (XML) =#=#=#= Begin test: Commit shadow instance (force) (XML) =#=#=#= =#=#=#= End test: Commit shadow instance (force) (XML) - OK (0) =#=#=#= * Passed: crm_shadow - Commit shadow instance (force) (XML) =#=#=#= Begin test: Get active shadow instance's diff (after commit) (XML) =#=#=#= ]]> =#=#=#= End test: Get active shadow instance's diff (after commit) (XML) - Error occurred (1) =#=#=#= * Passed: crm_shadow - Get active shadow instance's diff (after commit) (XML) =#=#=#= Begin test: Commit shadow instance (force) (all) (XML) =#=#=#= =#=#=#= End test: Commit shadow instance (force) (all) (XML) - OK (0) =#=#=#= * Passed: crm_shadow - Commit shadow instance (force) (all) (XML) =#=#=#= Begin test: Get active shadow instance's diff (after commit all) (XML) =#=#=#= ]]> =#=#=#= End test: Get active shadow instance's diff (after commit all) (XML) - Error occurred (1) =#=#=#= * Passed: crm_shadow - Get active shadow instance's diff (after commit all) (XML) =#=#=#= Begin test: Commit shadow instance (no active instance) =#=#=#= crm_shadow: The commit command overwrites the active cluster configuration. To prevent accidental destruction of the cluster, the --force flag is required in order to proceed. 
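The commit tests above are the tail end of the shadow-CIB workflow this fixture walks through: copy the live CIB into a scratch instance, point the tools at it, make changes, review the diff, and only then write it back. A minimal sketch using the cts-cli instance name from these tests; the cibadmin edit mirrors the @description=desc change recorded in the "after changes" diff above, and the option spellings are the usual crm_shadow/cibadmin long options:

    crm_shadow --create cts-cli            # copy the live CIB into a new shadow file
    export CIB_shadow=cts-cli              # CIB-aware tools now read and write the shadow, not the cluster
    cibadmin --modify --xml-text '<primitive id="dummy" description="desc"/>'
    crm_shadow --diff                      # exits 1 and prints a diff while changes are pending
    crm_shadow --commit cts-cli --force    # --force acknowledges that commit overwrites the live CIB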
=#=#=#= End test: Commit shadow instance (no active instance) - Incorrect usage (64) =#=#=#= * Passed: crm_shadow - Commit shadow instance (no active instance) =#=#=#= Begin test: Commit shadow instance (no active instance) (force) =#=#=#= =#=#=#= End test: Commit shadow instance (no active instance) (force) - OK (0) =#=#=#= * Passed: crm_shadow - Commit shadow instance (no active instance) (force) =#=#=#= Begin test: Commit shadow instance (no active instance) (XML) =#=#=#= crm_shadow: The commit command overwrites the active cluster configuration. To prevent accidental destruction of the cluster, the --force flag is required in order to proceed. =#=#=#= End test: Commit shadow instance (no active instance) (XML) - Incorrect usage (64) =#=#=#= * Passed: crm_shadow - Commit shadow instance (no active instance) (XML) =#=#=#= Begin test: Commit shadow instance (no active instance) (force) (XML) =#=#=#= =#=#=#= End test: Commit shadow instance (no active instance) (force) (XML) - OK (0) =#=#=#= * Passed: crm_shadow - Commit shadow instance (no active instance) (force) (XML) =#=#=#= Begin test: Commit shadow instance (mismatch) =#=#=#= crm_shadow: The commit command overwrites the active cluster configuration. Additionally, the supplied shadow instance (cts-cli) is not the same as the active one (nonexistent_shadow). To prevent accidental destruction of the cluster, the --force flag is required in order to proceed. =#=#=#= End test: Commit shadow instance (mismatch) - Incorrect usage (64) =#=#=#= * Passed: crm_shadow - Commit shadow instance (mismatch) =#=#=#= Begin test: Commit shadow instance (mismatch) (force) =#=#=#= =#=#=#= End test: Commit shadow instance (mismatch) (force) - OK (0) =#=#=#= * Passed: crm_shadow - Commit shadow instance (mismatch) (force) =#=#=#= Begin test: Commit shadow instance (mismatch) (XML) =#=#=#= crm_shadow: The commit command overwrites the active cluster configuration. Additionally, the supplied shadow instance (cts-cli) is not the same as the active one (nonexistent_shadow). To prevent accidental destruction of the cluster, the --force flag is required in order to proceed. =#=#=#= End test: Commit shadow instance (mismatch) (XML) - Incorrect usage (64) =#=#=#= * Passed: crm_shadow - Commit shadow instance (mismatch) (XML) =#=#=#= Begin test: Commit shadow instance (mismatch) (force) (XML) =#=#=#= =#=#=#= End test: Commit shadow instance (mismatch) (force) (XML) - OK (0) =#=#=#= * Passed: crm_shadow - Commit shadow instance (mismatch) (force) (XML) =#=#=#= Begin test: Commit shadow instance (nonexistent shadow file) =#=#=#= crm_shadow: The commit command overwrites the active cluster configuration. To prevent accidental destruction of the cluster, the --force flag is required in order to proceed. 
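The "mismatch" refusals above boil down to CIB_shadow naming one instance while --commit is handed another; without --force, crm_shadow declines. A short sketch of the checks those tests imply (the example file path is the one printed earlier in this fixture):

    crm_shadow --which                          # prints the active instance, e.g. cts-cli
    crm_shadow --file                           # prints the backing file, e.g. /tmp/cts-cli.shadow/shadow.cts-cli
    crm_shadow --commit "$CIB_shadow" --force   # commit the instance you are actually pointed at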
=#=#=#= End test: Commit shadow instance (nonexistent shadow file) - Incorrect usage (64) =#=#=#= * Passed: crm_shadow - Commit shadow instance (nonexistent shadow file) =#=#=#= Begin test: Commit shadow instance (nonexistent shadow file) (force) =#=#=#= crm_shadow: Could not access shadow instance 'nonexistent_shadow': No such file or directory =#=#=#= End test: Commit shadow instance (nonexistent shadow file) (force) - No such object (105) =#=#=#= * Passed: crm_shadow - Commit shadow instance (nonexistent shadow file) (force) =#=#=#= Begin test: Get active shadow instance's diff (nonexistent shadow file) =#=#=#= crm_shadow: Could not access shadow instance 'nonexistent_shadow': No such file or directory =#=#=#= End test: Get active shadow instance's diff (nonexistent shadow file) - No such object (105) =#=#=#= * Passed: crm_shadow - Get active shadow instance's diff (nonexistent shadow file) =#=#=#= Begin test: Commit shadow instance (nonexistent shadow file) (XML) =#=#=#= crm_shadow: The commit command overwrites the active cluster configuration. To prevent accidental destruction of the cluster, the --force flag is required in order to proceed. =#=#=#= End test: Commit shadow instance (nonexistent shadow file) (XML) - Incorrect usage (64) =#=#=#= * Passed: crm_shadow - Commit shadow instance (nonexistent shadow file) (XML) =#=#=#= Begin test: Commit shadow instance (nonexistent shadow file) (force) (XML) =#=#=#= crm_shadow: Could not access shadow instance 'nonexistent_shadow': No such file or directory =#=#=#= End test: Commit shadow instance (nonexistent shadow file) (force) (XML) - No such object (105) =#=#=#= * Passed: crm_shadow - Commit shadow instance (nonexistent shadow file) (force) (XML) =#=#=#= Begin test: Get active shadow instance's diff (nonexistent shadow file) (XML) =#=#=#= crm_shadow: Could not access shadow instance 'nonexistent_shadow': No such file or directory =#=#=#= End test: Get active shadow instance's diff (nonexistent shadow file) (XML) - No such object (105) =#=#=#= * Passed: crm_shadow - Get active shadow instance's diff (nonexistent shadow file) (XML) =#=#=#= Begin test: Commit shadow instance (nonexistent CIB file) =#=#=#= crm_shadow: The commit command overwrites the active cluster configuration. To prevent accidental destruction of the cluster, the --force flag is required in order to proceed. =#=#=#= End test: Commit shadow instance (nonexistent CIB file) - Incorrect usage (64) =#=#=#= * Passed: crm_shadow - Commit shadow instance (nonexistent CIB file) =#=#=#= Begin test: Commit shadow instance (nonexistent CIB file) (force) =#=#=#= crm_shadow: Could not connect to CIB: No such device or address =#=#=#= End test: Commit shadow instance (nonexistent CIB file) (force) - No such object (105) =#=#=#= * Passed: crm_shadow - Commit shadow instance (nonexistent CIB file) (force) =#=#=#= Begin test: Get active shadow instance's diff (nonexistent CIB file) =#=#=#= crm_shadow: Could not connect to CIB: No such device or address =#=#=#= End test: Get active shadow instance's diff (nonexistent CIB file) - No such object (105) =#=#=#= * Passed: crm_shadow - Get active shadow instance's diff (nonexistent CIB file) =#=#=#= Begin test: Commit shadow instance (nonexistent CIB file) (XML) =#=#=#= crm_shadow: The commit command overwrites the active cluster configuration. To prevent accidental destruction of the cluster, the --force flag is required in order to proceed. 
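The "nonexistent CIB file" failures above ("Could not connect to CIB: No such device or address") come from pointing the CIB_file environment variable at a path that does not exist, and the earlier CIB_file="-" tests rely on the same variable to read a CIB from standard input. A sketch with hypothetical file names:

    CIB_file=/var/lib/pacemaker/cib/cib.xml crm_mon -1           # one-shot status read from a CIB file
    cat some-cib.xml | CIB_file="-" crm_mon -1                   # "-" means take the CIB from stdin
    CIB_file=/no/such/file crm_shadow --commit cts-cli --force   # fails as in the tests above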
=#=#=#= End test: Commit shadow instance (nonexistent CIB file) (XML) - Incorrect usage (64) =#=#=#= * Passed: crm_shadow - Commit shadow instance (nonexistent CIB file) (XML) =#=#=#= Begin test: Commit shadow instance (nonexistent CIB file) (force) (XML) =#=#=#= crm_shadow: Could not connect to CIB: No such device or address =#=#=#= End test: Commit shadow instance (nonexistent CIB file) (force) (XML) - No such object (105) =#=#=#= * Passed: crm_shadow - Commit shadow instance (nonexistent CIB file) (force) (XML) =#=#=#= Begin test: Get active shadow instance's diff (nonexistent CIB file) (XML) =#=#=#= crm_shadow: Could not connect to CIB: No such device or address =#=#=#= End test: Get active shadow instance's diff (nonexistent CIB file) (XML) - No such object (105) =#=#=#= * Passed: crm_shadow - Get active shadow instance's diff (nonexistent CIB file) (XML) =#=#=#= Begin test: Delete shadow instance =#=#=#= crm_shadow: The delete command removes the specified shadow file. To prevent accidental destruction of the shadow file, the --force flag is required in order to proceed. =#=#=#= End test: Delete shadow instance - Incorrect usage (64) =#=#=#= * Passed: crm_shadow - Delete shadow instance =#=#=#= Begin test: Delete shadow instance (force) =#=#=#= Remember to unset the CIB_shadow variable by entering the following into your shell: unset CIB_shadow =#=#=#= End test: Delete shadow instance (force) - OK (0) =#=#=#= * Passed: crm_shadow - Delete shadow instance (force) A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= Begin test: Delete shadow instance (XML) =#=#=#= crm_shadow: The delete command removes the specified shadow file. To prevent accidental destruction of the shadow file, the --force flag is required in order to proceed. =#=#=#= End test: Delete shadow instance (XML) - Incorrect usage (64) =#=#=#= * Passed: crm_shadow - Delete shadow instance (XML) =#=#=#= Begin test: Delete shadow instance (force) (XML) =#=#=#= Remember to unset the CIB_shadow variable by entering the following into your shell: unset CIB_shadow =#=#=#= End test: Delete shadow instance (force) (XML) - OK (0) =#=#=#= * Passed: crm_shadow - Delete shadow instance (force) (XML) A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= Begin test: Delete shadow instance (no active instance) =#=#=#= crm_shadow: The delete command removes the specified shadow file. To prevent accidental destruction of the shadow file, the --force flag is required in order to proceed. =#=#=#= End test: Delete shadow instance (no active instance) - Incorrect usage (64) =#=#=#= * Passed: crm_shadow - Delete shadow instance (no active instance) =#=#=#= Begin test: Delete shadow instance (no active instance) (force) =#=#=#= =#=#=#= End test: Delete shadow instance (no active instance) (force) - OK (0) =#=#=#= * Passed: crm_shadow - Delete shadow instance (no active instance) (force) A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= Begin test: Delete shadow instance (no active instance) (XML) =#=#=#= crm_shadow: The delete command removes the specified shadow file. To prevent accidental destruction of the shadow file, the --force flag is required in order to proceed. 
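Deletion is gated the same way: the shadow file is only removed with --force, and crm_shadow then reminds you to drop the now-dangling CIB_shadow variable so later commands go back to the live CIB. Sketch:

    crm_shadow --delete cts-cli --force    # remove the shadow file itself
    unset CIB_shadow                       # stop pointing CIB-aware tools at the deleted instance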
=#=#=#= End test: Delete shadow instance (no active instance) (XML) - Incorrect usage (64) =#=#=#= * Passed: crm_shadow - Delete shadow instance (no active instance) (XML) =#=#=#= Begin test: Delete shadow instance (no active instance) (force) (XML) =#=#=#= =#=#=#= End test: Delete shadow instance (no active instance) (force) (XML) - OK (0) =#=#=#= * Passed: crm_shadow - Delete shadow instance (no active instance) (force) (XML) A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= Begin test: Delete shadow instance (mismatch) =#=#=#= crm_shadow: The delete command removes the specified shadow file. Additionally, the supplied shadow instance (cts-cli) is not the same as the active one (nonexistent_shadow). To prevent accidental destruction of the shadow file, the --force flag is required in order to proceed. =#=#=#= End test: Delete shadow instance (mismatch) - Incorrect usage (64) =#=#=#= * Passed: crm_shadow - Delete shadow instance (mismatch) =#=#=#= Begin test: Delete shadow instance (mismatch) (force) =#=#=#= =#=#=#= End test: Delete shadow instance (mismatch) (force) - OK (0) =#=#=#= * Passed: crm_shadow - Delete shadow instance (mismatch) (force) A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= Begin test: Delete shadow instance (mismatch) (XML) =#=#=#= crm_shadow: The delete command removes the specified shadow file. Additionally, the supplied shadow instance (cts-cli) is not the same as the active one (nonexistent_shadow). To prevent accidental destruction of the shadow file, the --force flag is required in order to proceed. =#=#=#= End test: Delete shadow instance (mismatch) (XML) - Incorrect usage (64) =#=#=#= * Passed: crm_shadow - Delete shadow instance (mismatch) (XML) =#=#=#= Begin test: Delete shadow instance (mismatch) (force) (XML) =#=#=#= =#=#=#= End test: Delete shadow instance (mismatch) (force) (XML) - OK (0) =#=#=#= * Passed: crm_shadow - Delete shadow instance (mismatch) (force) (XML) =#=#=#= Begin test: Delete shadow instance (nonexistent shadow file) =#=#=#= crm_shadow: The delete command removes the specified shadow file. To prevent accidental destruction of the shadow file, the --force flag is required in order to proceed. =#=#=#= End test: Delete shadow instance (nonexistent shadow file) - Incorrect usage (64) =#=#=#= * Passed: crm_shadow - Delete shadow instance (nonexistent shadow file) =#=#=#= Begin test: Delete shadow instance (nonexistent shadow file) (force) =#=#=#= Remember to unset the CIB_shadow variable by entering the following into your shell: unset CIB_shadow =#=#=#= End test: Delete shadow instance (nonexistent shadow file) (force) - OK (0) =#=#=#= * Passed: crm_shadow - Delete shadow instance (nonexistent shadow file) (force) =#=#=#= Begin test: Delete shadow instance (nonexistent shadow file) (XML) =#=#=#= crm_shadow: The delete command removes the specified shadow file. To prevent accidental destruction of the shadow file, the --force flag is required in order to proceed. 
=#=#=#= End test: Delete shadow instance (nonexistent shadow file) (XML) - Incorrect usage (64) =#=#=#= * Passed: crm_shadow - Delete shadow instance (nonexistent shadow file) (XML) =#=#=#= Begin test: Delete shadow instance (nonexistent shadow file) (force) (XML) =#=#=#= Remember to unset the CIB_shadow variable by entering the following into your shell: unset CIB_shadow =#=#=#= End test: Delete shadow instance (nonexistent shadow file) (force) (XML) - OK (0) =#=#=#= * Passed: crm_shadow - Delete shadow instance (nonexistent shadow file) (force) (XML) A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= Begin test: Delete shadow instance (nonexistent CIB file) =#=#=#= crm_shadow: The delete command removes the specified shadow file. To prevent accidental destruction of the shadow file, the --force flag is required in order to proceed. =#=#=#= End test: Delete shadow instance (nonexistent CIB file) - Incorrect usage (64) =#=#=#= * Passed: crm_shadow - Delete shadow instance (nonexistent CIB file) =#=#=#= Begin test: Delete shadow instance (nonexistent CIB file) (force) =#=#=#= Remember to unset the CIB_shadow variable by entering the following into your shell: unset CIB_shadow =#=#=#= End test: Delete shadow instance (nonexistent CIB file) (force) - OK (0) =#=#=#= * Passed: crm_shadow - Delete shadow instance (nonexistent CIB file) (force) A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= Begin test: Delete shadow instance (nonexistent CIB file) (XML) =#=#=#= crm_shadow: The delete command removes the specified shadow file. To prevent accidental destruction of the shadow file, the --force flag is required in order to proceed. =#=#=#= End test: Delete shadow instance (nonexistent CIB file) (XML) - Incorrect usage (64) =#=#=#= * Passed: crm_shadow - Delete shadow instance (nonexistent CIB file) (XML) =#=#=#= Begin test: Delete shadow instance (nonexistent CIB file) (force) (XML) =#=#=#= Remember to unset the CIB_shadow variable by entering the following into your shell: unset CIB_shadow =#=#=#= End test: Delete shadow instance (nonexistent CIB file) (force) (XML) - OK (0) =#=#=#= * Passed: crm_shadow - Delete shadow instance (nonexistent CIB file) (force) (XML) =#=#=#= Begin test: Create copied shadow instance (no active instance) =#=#=#= A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= End test: Create copied shadow instance (no active instance) - OK (0) =#=#=#= * Passed: crm_shadow - Create copied shadow instance (no active instance) =#=#=#= Begin test: Create copied shadow instance (no active instance) (XML) =#=#=#= A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= End test: Create copied shadow instance (no active instance) (XML) - OK (0) =#=#=#= * Passed: crm_shadow - Create copied shadow instance (no active instance) (XML) =#=#=#= Begin test: Create copied shadow instance (mismatch) =#=#=#= A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= End test: Create copied shadow instance (mismatch) - OK (0) =#=#=#= * Passed: crm_shadow - Create copied shadow instance (mismatch) =#=#=#= Begin test: Create copied shadow instance (mismatch) (XML) =#=#=#= A new shadow instance was created. 
To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= End test: Create copied shadow instance (mismatch) (XML) - OK (0) =#=#=#= * Passed: crm_shadow - Create copied shadow instance (mismatch) (XML) =#=#=#= Begin test: Create copied shadow instance (file already exists) =#=#=#= crm_shadow: A shadow instance 'cts-cli' already exists. To prevent accidental destruction of the shadow file, the --force flag is required in order to proceed. =#=#=#= End test: Create copied shadow instance (file already exists) - Cannot create output file (73) =#=#=#= * Passed: crm_shadow - Create copied shadow instance (file already exists) =#=#=#= Begin test: Create copied shadow instance (file already exists) (force) =#=#=#= A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= End test: Create copied shadow instance (file already exists) (force) - OK (0) =#=#=#= * Passed: crm_shadow - Create copied shadow instance (file already exists) (force) =#=#=#= Begin test: Create copied shadow instance (file already exists) (XML) =#=#=#= crm_shadow: A shadow instance 'cts-cli' already exists. To prevent accidental destruction of the shadow file, the --force flag is required in order to proceed. =#=#=#= End test: Create copied shadow instance (file already exists) (XML) - Cannot create output file (73) =#=#=#= * Passed: crm_shadow - Create copied shadow instance (file already exists) (XML) =#=#=#= Begin test: Create copied shadow instance (file already exists) (force) (XML) =#=#=#= A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= End test: Create copied shadow instance (file already exists) (force) (XML) - OK (0) =#=#=#= * Passed: crm_shadow - Create copied shadow instance (file already exists) (force) (XML) =#=#=#= Begin test: Create copied shadow instance (nonexistent CIB file) (force) =#=#=#= crm_shadow: Could not connect to CIB: No such device or address =#=#=#= End test: Create copied shadow instance (nonexistent CIB file) (force) - No such object (105) =#=#=#= * Passed: crm_shadow - Create copied shadow instance (nonexistent CIB file) (force) =#=#=#= Begin test: Create copied shadow instance (nonexistent CIB file) (force) (XML) =#=#=#= crm_shadow: Could not connect to CIB: No such device or address =#=#=#= End test: Create copied shadow instance (nonexistent CIB file) (force) (XML) - No such object (105) =#=#=#= * Passed: crm_shadow - Create copied shadow instance (nonexistent CIB file) (force) (XML) =#=#=#= Begin test: Create empty shadow instance =#=#=#= Created new pacemaker configuration A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= End test: Create empty shadow instance - OK (0) =#=#=#= * Passed: crm_shadow - Create empty shadow instance =#=#=#= Begin test: Create empty shadow instance (XML) =#=#=#= A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= End test: Create empty shadow instance (XML) - OK (0) =#=#=#= * Passed: crm_shadow - Create empty shadow instance (XML) =#=#=#= Begin test: Create empty shadow instance (no active instance) =#=#=#= Created new pacemaker configuration A new shadow instance was created. 
To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= End test: Create empty shadow instance (no active instance) - OK (0) =#=#=#= * Passed: crm_shadow - Create empty shadow instance (no active instance) =#=#=#= Begin test: Create empty shadow instance (no active instance) (XML) =#=#=#= A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= End test: Create empty shadow instance (no active instance) (XML) - OK (0) =#=#=#= * Passed: crm_shadow - Create empty shadow instance (no active instance) (XML) =#=#=#= Begin test: Create empty shadow instance (mismatch) =#=#=#= Created new pacemaker configuration A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= End test: Create empty shadow instance (mismatch) - OK (0) =#=#=#= * Passed: crm_shadow - Create empty shadow instance (mismatch) =#=#=#= Begin test: Create empty shadow instance (mismatch) (XML) =#=#=#= A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= End test: Create empty shadow instance (mismatch) (XML) - OK (0) =#=#=#= * Passed: crm_shadow - Create empty shadow instance (mismatch) (XML) =#=#=#= Begin test: Create empty shadow instance (nonexistent CIB file) =#=#=#= Created new pacemaker configuration A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= End test: Create empty shadow instance (nonexistent CIB file) - OK (0) =#=#=#= * Passed: crm_shadow - Create empty shadow instance (nonexistent CIB file) =#=#=#= Begin test: Create empty shadow instance (nonexistent CIB file) (XML) =#=#=#= A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= End test: Create empty shadow instance (nonexistent CIB file) (XML) - OK (0) =#=#=#= * Passed: crm_shadow - Create empty shadow instance (nonexistent CIB file) (XML) =#=#=#= Begin test: Create empty shadow instance (file already exists) =#=#=#= crm_shadow: A shadow instance 'cts-cli' already exists. To prevent accidental destruction of the shadow file, the --force flag is required in order to proceed. =#=#=#= End test: Create empty shadow instance (file already exists) - Cannot create output file (73) =#=#=#= * Passed: crm_shadow - Create empty shadow instance (file already exists) =#=#=#= Begin test: Create empty shadow instance (file already exists) (force) =#=#=#= Created new pacemaker configuration A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= End test: Create empty shadow instance (file already exists) (force) - OK (0) =#=#=#= * Passed: crm_shadow - Create empty shadow instance (file already exists) (force) =#=#=#= Begin test: Create empty shadow instance (file already exists) (XML) =#=#=#= crm_shadow: A shadow instance 'cts-cli' already exists. To prevent accidental destruction of the shadow file, the --force flag is required in order to proceed. =#=#=#= End test: Create empty shadow instance (file already exists) (XML) - Cannot create output file (73) =#=#=#= * Passed: crm_shadow - Create empty shadow instance (file already exists) (XML) =#=#=#= Begin test: Create empty shadow instance (file already exists) (force) (XML) =#=#=#= A new shadow instance was created. 
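The create tests above contrast two starting points: a copied instance begins as a snapshot of the current CIB, while an empty instance starts from a freshly generated configuration (hence the extra "Created new pacemaker configuration" line), and either refuses to clobber an existing shadow file unless forced. Sketch, assuming crm_shadow's usual --create/--create-empty spellings:

    crm_shadow --create cts-cli                 # new shadow is a copy of the live CIB
    crm_shadow --create-empty cts-cli --force   # start over from an empty configuration, overwriting the old shadow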
To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= End test: Create empty shadow instance (file already exists) (force) (XML) - OK (0) =#=#=#= * Passed: crm_shadow - Create empty shadow instance (file already exists) (force) (XML) =#=#=#= Begin test: Get active shadow instance's contents (empty CIB) =#=#=#= =#=#=#= End test: Get active shadow instance's contents (empty CIB) - OK (0) =#=#=#= * Passed: crm_shadow - Get active shadow instance's contents (empty CIB) =#=#=#= Begin test: Get active shadow instance's contents (empty CIB) (XML) =#=#=#= ]]> =#=#=#= End test: Get active shadow instance's contents (empty CIB) (XML) - OK (0) =#=#=#= * Passed: crm_shadow - Get active shadow instance's contents (empty CIB) (XML) =#=#=#= Begin test: Get active shadow instance's diff (empty CIB) =#=#=#= Diff: --- 1.1.173 2 Diff: +++ 0.1.0 (null) -- /cib/configuration/crm_config/cluster_property_set[@id='cib-bootstrap-options'] -- /cib/configuration/nodes/node[@id='1'] -- /cib/configuration/nodes/node[@id='2'] -- /cib/configuration/resources/clone[@id='ping-clone'] -- /cib/configuration/resources/primitive[@id='Fencing'] -- /cib/configuration/resources/primitive[@id='dummy'] -- /cib/configuration/resources/clone[@id='inactive-clone'] -- /cib/configuration/resources/group[@id='inactive-group'] -- /cib/configuration/resources/bundle[@id='httpd-bundle'] -- /cib/configuration/resources/group[@id='exim-group'] -- /cib/configuration/resources/clone[@id='mysql-clone-group'] -- /cib/configuration/resources/clone[@id='promotable-clone'] -- /cib/configuration/constraints/rsc_location[@id='not-on-cluster1'] -- /cib/configuration/constraints/rsc_location[@id='loc-promotable-clone'] -- /cib/configuration/tags -- /cib/configuration/op_defaults -- /cib/status/node_state[@id='2'] -- /cib/status/node_state[@id='1'] -- /cib/status/node_state[@id='httpd-bundle-0'] -- /cib/status/node_state[@id='httpd-bundle-1'] + /cib: @validate-with=pacemaker-X, @num_updates=0, @admin_epoch=0 -- /cib: @cib-last-written, @update-origin, @update-client, @update-user, @have-quorum, @dc-uuid =#=#=#= End test: Get active shadow instance's diff (empty CIB) - Error occurred (1) =#=#=#= * Passed: crm_shadow - Get active shadow instance's diff (empty CIB) =#=#=#= Begin test: Get active shadow instance's diff (empty CIB) (XML) =#=#=#= ]]> =#=#=#= End test: Get active shadow instance's diff (empty CIB) (XML) - Error occurred (1) =#=#=#= * Passed: crm_shadow - Get active shadow instance's diff (empty CIB) (XML) =#=#=#= Begin test: Reset shadow instance =#=#=#= A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= End test: Reset shadow instance - OK (0) =#=#=#= * Passed: crm_shadow - Reset shadow instance =#=#=#= Begin test: Get active shadow instance's diff (after reset) =#=#=#= =#=#=#= End test: Get active shadow instance's diff (after reset) - OK (0) =#=#=#= * Passed: crm_shadow - Get active shadow instance's diff (after reset) Created new pacemaker configuration A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= Begin test: Reset shadow instance (XML) =#=#=#= A new shadow instance was created. 
To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= End test: Reset shadow instance (XML) - OK (0) =#=#=#= * Passed: crm_shadow - Reset shadow instance (XML) =#=#=#= Begin test: Get active shadow instance's diff (after reset) (XML) =#=#=#= =#=#=#= End test: Get active shadow instance's diff (after reset) (XML) - OK (0) =#=#=#= * Passed: crm_shadow - Get active shadow instance's diff (after reset) (XML) =#=#=#= Begin test: Reset shadow instance (no active instance) =#=#=#= A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= End test: Reset shadow instance (no active instance) - OK (0) =#=#=#= * Passed: crm_shadow - Reset shadow instance (no active instance) Created new pacemaker configuration A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= Begin test: Reset shadow instance (no active instance) (XML) =#=#=#= A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= End test: Reset shadow instance (no active instance) (XML) - OK (0) =#=#=#= * Passed: crm_shadow - Reset shadow instance (no active instance) (XML) =#=#=#= Begin test: Reset shadow instance (mismatch) =#=#=#= crm_shadow: The supplied shadow instance (cts-cli) is not the same as the active one (nonexistent_shadow). To prevent accidental destruction of the shadow file, the --force flag is required in order to proceed. =#=#=#= End test: Reset shadow instance (mismatch) - Incorrect usage (64) =#=#=#= * Passed: crm_shadow - Reset shadow instance (mismatch) =#=#=#= Begin test: Reset shadow instance (mismatch) (force) =#=#=#= A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= End test: Reset shadow instance (mismatch) (force) - OK (0) =#=#=#= * Passed: crm_shadow - Reset shadow instance (mismatch) (force) Created new pacemaker configuration A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= Begin test: Reset shadow instance (mismatch) (XML) =#=#=#= crm_shadow: The supplied shadow instance (cts-cli) is not the same as the active one (nonexistent_shadow). To prevent accidental destruction of the shadow file, the --force flag is required in order to proceed. =#=#=#= End test: Reset shadow instance (mismatch) (XML) - Incorrect usage (64) =#=#=#= * Passed: crm_shadow - Reset shadow instance (mismatch) (XML) =#=#=#= Begin test: Reset shadow instance (mismatch) (force) (XML) =#=#=#= A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= End test: Reset shadow instance (mismatch) (force) (XML) - OK (0) =#=#=#= * Passed: crm_shadow - Reset shadow instance (mismatch) (force) (XML) Created new pacemaker configuration A new shadow instance was created. 
To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= Begin test: Reset shadow instance (nonexistent CIB file) =#=#=#= crm_shadow: Could not connect to CIB: No such device or address =#=#=#= End test: Reset shadow instance (nonexistent CIB file) - No such object (105) =#=#=#= * Passed: crm_shadow - Reset shadow instance (nonexistent CIB file) =#=#=#= Begin test: Reset shadow instance (nonexistent CIB file) (XML) =#=#=#= crm_shadow: Could not connect to CIB: No such device or address =#=#=#= End test: Reset shadow instance (nonexistent CIB file) (XML) - No such object (105) =#=#=#= * Passed: crm_shadow - Reset shadow instance (nonexistent CIB file) (XML) =#=#=#= Begin test: Reset shadow instance (nonexistent CIB file) (force) =#=#=#= crm_shadow: Could not connect to CIB: No such device or address =#=#=#= End test: Reset shadow instance (nonexistent CIB file) (force) - No such object (105) =#=#=#= * Passed: crm_shadow - Reset shadow instance (nonexistent CIB file) (force) =#=#=#= Begin test: Reset shadow instance (nonexistent CIB file) (force) (XML) =#=#=#= crm_shadow: Could not connect to CIB: No such device or address =#=#=#= End test: Reset shadow instance (nonexistent CIB file) (force) (XML) - No such object (105) =#=#=#= * Passed: crm_shadow - Reset shadow instance (nonexistent CIB file) (force) (XML) =#=#=#= Begin test: Reset shadow instance (nonexistent shadow file) =#=#=#= crm_shadow: Could not access shadow instance 'cts-cli': No such file or directory =#=#=#= End test: Reset shadow instance (nonexistent shadow file) - No such object (105) =#=#=#= * Passed: crm_shadow - Reset shadow instance (nonexistent shadow file) =#=#=#= Begin test: Reset shadow instance (nonexistent shadow file) (force) =#=#=#= A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= End test: Reset shadow instance (nonexistent shadow file) (force) - OK (0) =#=#=#= * Passed: crm_shadow - Reset shadow instance (nonexistent shadow file) (force) =#=#=#= Begin test: Reset shadow instance (nonexistent shadow file) (XML) =#=#=#= crm_shadow: Could not access shadow instance 'cts-cli': No such file or directory =#=#=#= End test: Reset shadow instance (nonexistent shadow file) (XML) - No such object (105) =#=#=#= * Passed: crm_shadow - Reset shadow instance (nonexistent shadow file) (XML) =#=#=#= Begin test: Reset shadow instance (nonexistent shadow file) (force) (XML) =#=#=#= A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= End test: Reset shadow instance (nonexistent shadow file) (force) (XML) - OK (0) =#=#=#= * Passed: crm_shadow - Reset shadow instance (nonexistent shadow file) (force) (XML) Created new pacemaker configuration A new shadow instance was created. 
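The reset tests above re-initialize an existing shadow from the CIB it shadows, which is why the "diff (after reset)" checks come back clean (exit 0, no output); a missing shadow file or an unreachable CIB makes reset fail, and a CIB_shadow mismatch again needs --force. Sketch:

    crm_shadow --reset cts-cli             # discard shadow edits; re-copy from the live (or CIB_file) CIB
    crm_shadow --diff                      # exits 0 now: shadow and source match again
    crm_shadow --reset cts-cli --force     # required when CIB_shadow names a different instance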
To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= Begin test: Switch to new shadow instance =#=#=#= To switch to the named shadow instance, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= End test: Switch to new shadow instance - OK (0) =#=#=#= * Passed: crm_shadow - Switch to new shadow instance =#=#=#= Begin test: Switch to new shadow instance (XML) =#=#=#= To switch to the named shadow instance, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= End test: Switch to new shadow instance (XML) - OK (0) =#=#=#= * Passed: crm_shadow - Switch to new shadow instance (XML) =#=#=#= Begin test: Switch to nonexistent shadow instance =#=#=#= crm_shadow: Could not access shadow instance 'cts-cli': No such file or directory =#=#=#= End test: Switch to nonexistent shadow instance - No such object (105) =#=#=#= * Passed: crm_shadow - Switch to nonexistent shadow instance =#=#=#= Begin test: Switch to nonexistent shadow instance (force) =#=#=#= crm_shadow: Could not access shadow instance 'cts-cli': No such file or directory =#=#=#= End test: Switch to nonexistent shadow instance (force) - No such object (105) =#=#=#= * Passed: crm_shadow - Switch to nonexistent shadow instance (force) =#=#=#= Begin test: Switch to nonexistent shadow instance (XML) =#=#=#= crm_shadow: Could not access shadow instance 'cts-cli': No such file or directory =#=#=#= End test: Switch to nonexistent shadow instance (XML) - No such object (105) =#=#=#= * Passed: crm_shadow - Switch to nonexistent shadow instance (XML) =#=#=#= Begin test: Switch to nonexistent shadow instance (force) (XML) =#=#=#= crm_shadow: Could not access shadow instance 'cts-cli': No such file or directory =#=#=#= End test: Switch to nonexistent shadow instance (force) (XML) - No such object (105) =#=#=#= * Passed: crm_shadow - Switch to nonexistent shadow instance (force) (XML) =#=#=#= Begin test: Verbosely verify a file-specified configuration with an unallowed fencing level ID =#=#=#= warning: Ignoring topology registration with invalid level 10 Warnings found during check: config not valid =#=#=#= End test: Verbosely verify a file-specified configuration with an unallowed fencing level ID - Invalid configuration (78) =#=#=#= * Passed: crm_verify - Verbosely verify a file-specified configuration with an unallowed fencing level ID =#=#=#= Begin test: Verify a file-specified invalid configuration (text output) =#=#=#= Errors found during check: config not valid -V may provide more details =#=#=#= End test: Verify a file-specified invalid configuration (text output) - Invalid configuration (78) =#=#=#= * Passed: crm_verify - Verify a file-specified invalid configuration (text output) =#=#=#= Begin test: Verify a file-specified invalid configuration (verbose text output) =#=#=#= unpack_config warning: Blind faith: not fencing unseen nodes error: Resource test2:0 is of type systemd and therefore cannot be used as a promotable clone resource error: Ignoring resource 'test2-clone' because configuration is invalid error: CIB did not pass schema validation Errors found during check: config not valid =#=#=#= End test: Verify a file-specified invalid configuration (verbose text output) - Invalid configuration (78) =#=#=#= * Passed: crm_verify - Verify a file-specified invalid configuration (verbose text output) =#=#=#= Begin test: Verify a file-specified invalid configuration (quiet text output) =#=#=#= =#=#=#= End test: Verify a file-specified invalid configuration (quiet text 
output) - Invalid configuration (78) =#=#=#= * Passed: crm_verify - Verify a file-specified invalid configuration (quiet text output) =#=#=#= Begin test: Verify a file-specified invalid configuration (XML output) =#=#=#= error: Resource test2:0 is of type systemd and therefore cannot be used as a promotable clone resource error: Ignoring <clone> resource 'test2-clone' because configuration is invalid error: CIB did not pass schema validation Errors found during check: config not valid =#=#=#= End test: Verify a file-specified invalid configuration (XML output) - Invalid configuration (78) =#=#=#= * Passed: crm_verify - Verify a file-specified invalid configuration (XML output) =#=#=#= Begin test: Verify a file-specified invalid configuration (verbose XML output) =#=#=#= unpack_config warning: Blind faith: not fencing unseen nodes error: Resource test2:0 is of type systemd and therefore cannot be used as a promotable clone resource error: Ignoring <clone> resource 'test2-clone' because configuration is invalid error: CIB did not pass schema validation Errors found during check: config not valid =#=#=#= End test: Verify a file-specified invalid configuration (verbose XML output) - Invalid configuration (78) =#=#=#= * Passed: crm_verify - Verify a file-specified invalid configuration (verbose XML output) =#=#=#= Begin test: Verify a file-specified invalid configuration (quiet XML output) =#=#=#= error: Resource test2:0 is of type systemd and therefore cannot be used as a promotable clone resource error: Ignoring <clone> resource 'test2-clone' because configuration is invalid error: CIB did not pass schema validation =#=#=#= End test: Verify a file-specified invalid configuration (quiet XML output) - Invalid configuration (78) =#=#=#= * Passed: crm_verify - Verify a file-specified invalid configuration (quiet XML output) =#=#=#= Begin test: Verify another file-specified invalid configuration (XML output) =#=#=#= error: Resource start-up disabled since no STONITH resources have been defined error: Either configure some or disable STONITH with the stonith-enabled option error: NOTE: Clusters with shared data need STONITH to ensure data integrity warning: Node pcmk-1 is unclean but cannot be fenced warning: Node pcmk-2 is unclean but cannot be fenced error: CIB did not pass schema validation Errors found during check: config not valid =#=#=#= End test: Verify another file-specified invalid configuration (XML output) - Invalid configuration (78) =#=#=#= * Passed: crm_verify - Verify another file-specified invalid configuration (XML output) =#=#=#= Begin test: Verify a file-specified valid configuration, outputting as xml =#=#=#= =#=#=#= End test: Verify a file-specified valid configuration, outputting as xml - OK (0) =#=#=#= * Passed: crm_verify - Verify a file-specified valid configuration, outputting as xml =#=#=#= Begin test: Verify a piped-in valid configuration, outputting as xml =#=#=#= =#=#=#= End test: Verify a piped-in valid configuration, outputting as xml - OK (0) =#=#=#= * Passed: cat - Verify a piped-in valid configuration, outputting as xml =#=#=#= Begin test: Verbosely verify a file-specified valid configuration, outputting as xml =#=#=#= =#=#=#= End test: Verbosely verify a file-specified valid configuration, outputting as xml - OK (0) =#=#=#= * Passed: crm_verify - Verbosely verify a file-specified valid configuration, outputting as xml =#=#=#= Begin test: Verbosely verify a piped-in valid configuration, outputting as xml =#=#=#= =#=#=#= End test: Verbosely verify a piped-in valid 
configuration, outputting as xml - OK (0) =#=#=#= * Passed: cat - Verbosely verify a piped-in valid configuration, outputting as xml =#=#=#= Begin test: Verify a string-supplied valid configuration, outputting as xml =#=#=#= =#=#=#= End test: Verify a string-supplied valid configuration, outputting as xml - OK (0) =#=#=#= * Passed: crm_verify - Verify a string-supplied valid configuration, outputting as xml =#=#=#= Begin test: Verbosely verify a string-supplied valid configuration, outputting as xml =#=#=#= =#=#=#= End test: Verbosely verify a string-supplied valid configuration, outputting as xml - OK (0) =#=#=#= * Passed: crm_verify - Verbosely verify a string-supplied valid configuration, outputting as xml diff --git a/daemons/controld/controld_control.c b/daemons/controld/controld_control.c index a6c7103fdd..c4b65949c5 100644 --- a/daemons/controld/controld_control.c +++ b/daemons/controld/controld_control.c @@ -1,689 +1,690 @@ /* * Copyright 2004-2024 the Pacemaker project contributors * * The version control history for this file may have further details. * * This source code is licensed under the GNU General Public License version 2 * or later (GPLv2+) WITHOUT ANY WARRANTY. */ #include #include #include #include #include #include #include #include #include #include #include static qb_ipcs_service_t *ipcs = NULL; static crm_trigger_t *config_read_trigger = NULL; #if SUPPORT_COROSYNC extern gboolean crm_connect_corosync(pcmk_cluster_t *cluster); #endif static void crm_shutdown(int nsig); static gboolean crm_read_options(gpointer user_data); /* A_HA_CONNECT */ void do_ha_control(long long action, enum crmd_fsa_cause cause, enum crmd_fsa_state cur_state, enum crmd_fsa_input current_input, fsa_data_t * msg_data) { gboolean registered = FALSE; static pcmk_cluster_t *cluster = NULL; if (cluster == NULL) { cluster = pcmk_cluster_new(); } if (action & A_HA_DISCONNECT) { pcmk_cluster_disconnect(cluster); crm_info("Disconnected from the cluster"); controld_set_fsa_input_flags(R_HA_DISCONNECTED); } if (action & A_HA_CONNECT) { pcmk__cluster_set_status_callback(&peer_update_callback); pcmk__cluster_set_autoreap(false); #if SUPPORT_COROSYNC if (pcmk_get_cluster_layer() == pcmk_cluster_layer_corosync) { registered = crm_connect_corosync(cluster); } #endif // SUPPORT_COROSYNC if (registered) { controld_election_init(cluster->uname); controld_globals.our_nodename = cluster->uname; controld_globals.our_uuid = cluster->uuid; if(cluster->uuid == NULL) { crm_err("Could not obtain local uuid"); registered = FALSE; } } if (!registered) { controld_set_fsa_input_flags(R_HA_DISCONNECTED); register_fsa_error(C_FSA_INTERNAL, I_ERROR, NULL); return; } populate_cib_nodes(node_update_none, __func__); controld_clear_fsa_input_flags(R_HA_DISCONNECTED); crm_info("Connected to the cluster"); } if (action & ~(A_HA_CONNECT | A_HA_DISCONNECT)) { crm_err("Unexpected action %s in %s", fsa_action2string(action), __func__); } } /* A_SHUTDOWN */ void do_shutdown(long long action, enum crmd_fsa_cause cause, enum crmd_fsa_state cur_state, enum crmd_fsa_input current_input, fsa_data_t * msg_data) { /* just in case */ controld_set_fsa_input_flags(R_SHUTDOWN); controld_disconnect_fencer(FALSE); } /* A_SHUTDOWN_REQ */ void do_shutdown_req(long long action, enum crmd_fsa_cause cause, enum crmd_fsa_state cur_state, enum crmd_fsa_input current_input, fsa_data_t * msg_data) { xmlNode *msg = NULL; controld_set_fsa_input_flags(R_SHUTDOWN); //controld_set_fsa_input_flags(R_STAYDOWN); crm_info("Sending shutdown request to all peers (DC is 
%s)", pcmk__s(controld_globals.dc_name, "not set")); msg = create_request(CRM_OP_SHUTDOWN_REQ, NULL, NULL, CRM_SYSTEM_CRMD, CRM_SYSTEM_CRMD, NULL); if (!pcmk__cluster_send_message(NULL, crm_msg_crmd, msg)) { register_fsa_error(C_FSA_INTERNAL, I_ERROR, NULL); } free_xml(msg); } void crmd_fast_exit(crm_exit_t exit_code) { if (pcmk_is_set(controld_globals.fsa_input_register, R_STAYDOWN)) { crm_warn("Inhibiting respawn "CRM_XS" remapping exit code %d to %d", exit_code, CRM_EX_FATAL); exit_code = CRM_EX_FATAL; } else if ((exit_code == CRM_EX_OK) && pcmk_is_set(controld_globals.fsa_input_register, R_IN_RECOVERY)) { crm_err("Could not recover from internal error"); exit_code = CRM_EX_ERROR; } if (controld_globals.logger_out != NULL) { controld_globals.logger_out->finish(controld_globals.logger_out, exit_code, true, NULL); pcmk__output_free(controld_globals.logger_out); controld_globals.logger_out = NULL; } crm_exit(exit_code); } crm_exit_t crmd_exit(crm_exit_t exit_code) { GMainLoop *mloop = controld_globals.mainloop; static bool in_progress = FALSE; if (in_progress && (exit_code == CRM_EX_OK)) { crm_debug("Exit is already in progress"); return exit_code; } else if(in_progress) { crm_notice("Error during shutdown process, exiting now with status %d (%s)", exit_code, crm_exit_str(exit_code)); crm_write_blackbox(SIGTRAP, NULL); crmd_fast_exit(exit_code); } in_progress = TRUE; crm_trace("Preparing to exit with status %d (%s)", exit_code, crm_exit_str(exit_code)); /* Suppress secondary errors resulting from us disconnecting everything */ controld_set_fsa_input_flags(R_HA_DISCONNECTED); /* Close all IPC servers and clients to ensure any and all shared memory files are cleaned up */ if(ipcs) { crm_trace("Closing IPC server"); mainloop_del_ipc_server(ipcs); ipcs = NULL; } controld_close_attrd_ipc(); controld_shutdown_schedulerd_ipc(); controld_disconnect_fencer(TRUE); if ((exit_code == CRM_EX_OK) && (controld_globals.mainloop == NULL)) { crm_debug("No mainloop detected"); exit_code = CRM_EX_ERROR; } /* On an error, just get out. * * Otherwise, make the effort to have mainloop exit gracefully so * that it (mostly) cleans up after itself and valgrind has less * to report on - allowing real errors stand out */ if (exit_code != CRM_EX_OK) { crm_notice("Forcing immediate exit with status %d (%s)", exit_code, crm_exit_str(exit_code)); crm_write_blackbox(SIGTRAP, NULL); crmd_fast_exit(exit_code); } /* Clean up as much memory as possible for valgrind */ for (GList *iter = controld_globals.fsa_message_queue; iter != NULL; iter = iter->next) { fsa_data_t *fsa_data = (fsa_data_t *) iter->data; crm_info("Dropping %s: [ state=%s cause=%s origin=%s ]", fsa_input2string(fsa_data->fsa_input), fsa_state2string(controld_globals.fsa_state), fsa_cause2string(fsa_data->fsa_cause), fsa_data->origin); delete_fsa_input(fsa_data); } controld_clear_fsa_input_flags(R_MEMBERSHIP); g_list_free(controld_globals.fsa_message_queue); controld_globals.fsa_message_queue = NULL; controld_free_node_pending_timers(); controld_election_fini(); /* Tear down the CIB manager connection, but don't free it yet -- it could * be used when we drain the mainloop later. 
*/ controld_disconnect_cib_manager(); verify_stopped(controld_globals.fsa_state, LOG_WARNING); controld_clear_fsa_input_flags(R_LRM_CONNECTED); lrm_state_destroy_all(); mainloop_destroy_trigger(config_read_trigger); config_read_trigger = NULL; controld_destroy_fsa_trigger(); controld_destroy_transition_trigger(); pcmk__client_cleanup(); pcmk__cluster_destroy_node_caches(); controld_free_fsa_timers(); te_cleanup_stonith_history_sync(NULL, TRUE); controld_free_sched_timer(); free(controld_globals.our_nodename); controld_globals.our_nodename = NULL; free(controld_globals.our_uuid); controld_globals.our_uuid = NULL; free(controld_globals.dc_name); controld_globals.dc_name = NULL; free(controld_globals.dc_version); controld_globals.dc_version = NULL; free(controld_globals.cluster_name); controld_globals.cluster_name = NULL; free(controld_globals.te_uuid); controld_globals.te_uuid = NULL; free_max_generation(); controld_destroy_failed_sync_table(); controld_destroy_outside_events_table(); mainloop_destroy_signal(SIGPIPE); mainloop_destroy_signal(SIGUSR1); mainloop_destroy_signal(SIGTERM); mainloop_destroy_signal(SIGTRAP); /* leave SIGCHLD engaged as we might still want to drain some service-actions */ if (mloop) { GMainContext *ctx = g_main_loop_get_context(controld_globals.mainloop); /* Don't re-enter this block */ controld_globals.mainloop = NULL; /* no signals on final draining anymore */ mainloop_destroy_signal(SIGCHLD); crm_trace("Draining mainloop %d %d", g_main_loop_is_running(mloop), g_main_context_pending(ctx)); { int lpc = 0; while((g_main_context_pending(ctx) && lpc < 10)) { lpc++; crm_trace("Iteration %d", lpc); g_main_context_dispatch(ctx); } } crm_trace("Closing mainloop %d %d", g_main_loop_is_running(mloop), g_main_context_pending(ctx)); g_main_loop_quit(mloop); /* Won't do anything yet, since we're inside it now */ g_main_loop_unref(mloop); } else { mainloop_destroy_signal(SIGCHLD); } cib_delete(controld_globals.cib_conn); controld_globals.cib_conn = NULL; throttle_fini(); /* Graceful */ crm_trace("Done preparing for exit with status %d (%s)", exit_code, crm_exit_str(exit_code)); return exit_code; } /* A_EXIT_0, A_EXIT_1 */ void do_exit(long long action, enum crmd_fsa_cause cause, enum crmd_fsa_state cur_state, enum crmd_fsa_input current_input, fsa_data_t * msg_data) { crm_exit_t exit_code = CRM_EX_OK; if (pcmk_is_set(action, A_EXIT_1)) { exit_code = CRM_EX_ERROR; crm_err("Exiting now due to errors"); } verify_stopped(cur_state, LOG_ERR); crmd_exit(exit_code); } static void sigpipe_ignore(int nsig) { return; } /* A_STARTUP */ void do_startup(long long action, enum crmd_fsa_cause cause, enum crmd_fsa_state cur_state, enum crmd_fsa_input current_input, fsa_data_t * msg_data) { crm_debug("Registering Signal Handlers"); mainloop_add_signal(SIGTERM, crm_shutdown); mainloop_add_signal(SIGPIPE, sigpipe_ignore); config_read_trigger = mainloop_add_trigger(G_PRIORITY_HIGH, crm_read_options, NULL); controld_init_fsa_trigger(); controld_init_transition_trigger(); crm_debug("Creating CIB manager and executor objects"); controld_globals.cib_conn = cib_new(); lrm_state_init_local(); if (controld_init_fsa_timers() == FALSE) { register_fsa_error(C_FSA_INTERNAL, I_ERROR, NULL); } } // \return libqb error code (0 on success, -errno on error) static int32_t accept_controller_client(qb_ipcs_connection_t *c, uid_t uid, gid_t gid) { crm_trace("Accepting new IPC client connection"); if (pcmk__new_client(c, uid, gid) == NULL) { return -ENOMEM; } return 0; } // \return libqb error code (0 on success, 
-errno on error) static int32_t dispatch_controller_ipc(qb_ipcs_connection_t * c, void *data, size_t size) { uint32_t id = 0; uint32_t flags = 0; pcmk__client_t *client = pcmk__find_client(c); xmlNode *msg = pcmk__client_data2xml(client, data, &id, &flags); if (msg == NULL) { pcmk__ipc_send_ack(client, id, flags, PCMK__XE_ACK, NULL, CRM_EX_PROTOCOL); return 0; } pcmk__ipc_send_ack(client, id, flags, PCMK__XE_ACK, NULL, CRM_EX_INDETERMINATE); CRM_ASSERT(client->user != NULL); pcmk__update_acl_user(msg, PCMK__XA_CRM_USER, client->user); crm_xml_add(msg, PCMK__XA_CRM_SYS_FROM, client->id); if (controld_authorize_ipc_message(msg, client, NULL)) { crm_trace("Processing IPC message from client %s", pcmk__client_name(client)); route_message(C_IPC_MESSAGE, msg); } controld_trigger_fsa(); free_xml(msg); return 0; } static int32_t ipc_client_disconnected(qb_ipcs_connection_t *c) { pcmk__client_t *client = pcmk__find_client(c); if (client) { crm_trace("Disconnecting %sregistered client %s (%p/%p)", (client->userdata? "" : "un"), pcmk__client_name(client), c, client); free(client->userdata); pcmk__free_client(client); controld_trigger_fsa(); } return 0; } static void ipc_connection_destroyed(qb_ipcs_connection_t *c) { crm_trace("Connection %p", c); ipc_client_disconnected(c); } /* A_STOP */ void do_stop(long long action, enum crmd_fsa_cause cause, enum crmd_fsa_state cur_state, enum crmd_fsa_input current_input, fsa_data_t * msg_data) { crm_trace("Closing IPC server"); mainloop_del_ipc_server(ipcs); ipcs = NULL; register_fsa_input(C_FSA_INTERNAL, I_TERMINATE, NULL); } /* A_STARTED */ void do_started(long long action, enum crmd_fsa_cause cause, enum crmd_fsa_state cur_state, enum crmd_fsa_input current_input, fsa_data_t * msg_data) { static struct qb_ipcs_service_handlers crmd_callbacks = { .connection_accept = accept_controller_client, .connection_created = NULL, .msg_process = dispatch_controller_ipc, .connection_closed = ipc_client_disconnected, .connection_destroyed = ipc_connection_destroyed }; if (cur_state != S_STARTING) { crm_err("Start cancelled... 
%s", fsa_state2string(cur_state)); return; } else if (!pcmk_is_set(controld_globals.fsa_input_register, R_MEMBERSHIP)) { crm_info("Delaying start, no membership data (%.16llx)", R_MEMBERSHIP); crmd_fsa_stall(TRUE); return; } else if (!pcmk_is_set(controld_globals.fsa_input_register, R_LRM_CONNECTED)) { crm_info("Delaying start, not connected to executor (%.16llx)", R_LRM_CONNECTED); crmd_fsa_stall(TRUE); return; } else if (!pcmk_is_set(controld_globals.fsa_input_register, R_CIB_CONNECTED)) { crm_info("Delaying start, CIB not connected (%.16llx)", R_CIB_CONNECTED); crmd_fsa_stall(TRUE); return; } else if (!pcmk_is_set(controld_globals.fsa_input_register, R_READ_CONFIG)) { crm_info("Delaying start, Config not read (%.16llx)", R_READ_CONFIG); crmd_fsa_stall(TRUE); return; } else if (!pcmk_is_set(controld_globals.fsa_input_register, R_PEER_DATA)) { crm_info("Delaying start, No peer data (%.16llx)", R_PEER_DATA); crmd_fsa_stall(TRUE); return; } crm_debug("Init server comms"); ipcs = pcmk__serve_controld_ipc(&crmd_callbacks); if (ipcs == NULL) { crm_err("Failed to create IPC server: shutting down and inhibiting respawn"); register_fsa_error(C_FSA_INTERNAL, I_ERROR, NULL); } else { crm_notice("Pacemaker controller successfully started and accepting connections"); } controld_set_fsa_input_flags(R_ST_REQUIRED); controld_timer_fencer_connect(GINT_TO_POINTER(TRUE)); controld_clear_fsa_input_flags(R_STARTING); register_fsa_input(msg_data->fsa_cause, I_PENDING, NULL); } /* A_RECOVER */ void do_recover(long long action, enum crmd_fsa_cause cause, enum crmd_fsa_state cur_state, enum crmd_fsa_input current_input, fsa_data_t * msg_data) { controld_set_fsa_input_flags(R_IN_RECOVERY); crm_warn("Fast-tracking shutdown in response to errors"); register_fsa_input(C_FSA_INTERNAL, I_TERMINATE, NULL); } static void config_query_callback(xmlNode * msg, int call_id, int rc, xmlNode * output, void *user_data) { const char *value = NULL; GHashTable *config_hash = NULL; crm_time_t *now = crm_time_new(NULL); xmlNode *crmconfig = NULL; xmlNode *alerts = NULL; if (rc != pcmk_ok) { fsa_data_t *msg_data = NULL; crm_err("Local CIB query resulted in an error: %s", pcmk_strerror(rc)); register_fsa_error(C_FSA_INTERNAL, I_ERROR, NULL); if (rc == -EACCES || rc == -pcmk_err_schema_validation) { crm_err("The cluster is mis-configured - shutting down and staying down"); controld_set_fsa_input_flags(R_STAYDOWN); } goto bail; } crmconfig = output; if ((crmconfig != NULL) && !pcmk__xe_is(crmconfig, PCMK_XE_CRM_CONFIG)) { crmconfig = pcmk__xe_first_child(crmconfig, PCMK_XE_CRM_CONFIG, NULL, NULL); } if (!crmconfig) { fsa_data_t *msg_data = NULL; crm_err("Local CIB query for " PCMK_XE_CRM_CONFIG " section failed"); register_fsa_error(C_FSA_INTERNAL, I_ERROR, NULL); goto bail; } crm_debug("Call %d : Parsing CIB options", call_id); config_hash = pcmk__strkey_table(free, free); pe_unpack_nvpairs(crmconfig, crmconfig, PCMK_XE_CLUSTER_PROPERTY_SET, NULL, config_hash, PCMK_VALUE_CIB_BOOTSTRAP_OPTIONS, FALSE, now, NULL); // Validate all options, and use defaults if not already present in hash pcmk__validate_cluster_options(config_hash); /* Validate the watchdog timeout in the context of the local node * environment. If invalid, the controller will exit with a fatal error. * * We do this via a wrapper in the controller, so that we call * pcmk__valid_stonith_watchdog_timeout() only if watchdog fencing is * enabled for the local node. Otherwise, we may exit unnecessarily. 
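 *
 * As a hedged sketch only (pseudocode, not the actual controller code),
 * the intent of that wrapper is roughly:
 *
 *     if (watchdog fencing is enabled for the local node) {
 *         if (!pcmk__valid_stonith_watchdog_timeout(value)) {
 *             crmd_exit(CRM_EX_FATAL);   // an invalid value is fatal
 *         }
 *     }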
* * A validator function in libcrmcommon can't act as such a wrapper, because * it doesn't have a stonith API connection or the local node name. */ value = g_hash_table_lookup(config_hash, PCMK_OPT_STONITH_WATCHDOG_TIMEOUT); controld_verify_stonith_watchdog_timeout(value); value = g_hash_table_lookup(config_hash, PCMK_OPT_NO_QUORUM_POLICY); - if (pcmk__str_eq(value, PCMK_VALUE_FENCE_LEGACY, pcmk__str_casei) + if (pcmk__strcase_any_of(value, PCMK_VALUE_FENCE, PCMK_VALUE_FENCE_LEGACY, + NULL) && (pcmk__locate_sbd() != 0)) { controld_set_global_flags(controld_no_quorum_panic); } value = g_hash_table_lookup(config_hash, PCMK_OPT_SHUTDOWN_LOCK); if (crm_is_true(value)) { controld_set_global_flags(controld_shutdown_lock_enabled); } else { controld_clear_global_flags(controld_shutdown_lock_enabled); } value = g_hash_table_lookup(config_hash, PCMK_OPT_SHUTDOWN_LOCK_LIMIT); pcmk_parse_interval_spec(value, &controld_globals.shutdown_lock_limit); controld_globals.shutdown_lock_limit /= 1000; value = g_hash_table_lookup(config_hash, PCMK_OPT_NODE_PENDING_TIMEOUT); pcmk_parse_interval_spec(value, &controld_globals.node_pending_timeout); controld_globals.node_pending_timeout /= 1000; value = g_hash_table_lookup(config_hash, PCMK_OPT_CLUSTER_NAME); pcmk__str_update(&(controld_globals.cluster_name), value); // Let subcomponents initialize their own static variables controld_configure_election(config_hash); controld_configure_fencing(config_hash); controld_configure_fsa_timers(config_hash); controld_configure_throttle(config_hash); alerts = pcmk__xe_first_child(output, PCMK_XE_ALERTS, NULL, NULL); crmd_unpack_alerts(alerts); controld_set_fsa_input_flags(R_READ_CONFIG); controld_trigger_fsa(); g_hash_table_destroy(config_hash); bail: crm_time_free(now); } /*! * \internal * \brief Trigger read and processing of the configuration * * \param[in] fn Calling function name * \param[in] line Line number where call occurred */ void controld_trigger_config_as(const char *fn, int line) { if (config_read_trigger != NULL) { crm_trace("%s:%d - Triggered config processing", fn, line); mainloop_set_trigger(config_read_trigger); } } gboolean crm_read_options(gpointer user_data) { cib_t *cib_conn = controld_globals.cib_conn; int call_id = cib_conn->cmds->query(cib_conn, "//" PCMK_XE_CRM_CONFIG " | //" PCMK_XE_ALERTS, NULL, cib_xpath|cib_scope_local); fsa_register_cib_callback(call_id, NULL, config_query_callback); crm_trace("Querying the CIB... call %d", call_id); return TRUE; } /* A_READCONFIG */ void do_read_config(long long action, enum crmd_fsa_cause cause, enum crmd_fsa_state cur_state, enum crmd_fsa_input current_input, fsa_data_t * msg_data) { throttle_init(); controld_trigger_config(); } static void crm_shutdown(int nsig) { const char *value = NULL; guint default_period_ms = 0; if ((controld_globals.mainloop == NULL) || !g_main_loop_is_running(controld_globals.mainloop)) { crmd_exit(CRM_EX_OK); return; } if (pcmk_is_set(controld_globals.fsa_input_register, R_SHUTDOWN)) { crm_err("Escalating shutdown"); register_fsa_input_before(C_SHUTDOWN, I_ERROR, NULL); return; } controld_set_fsa_input_flags(R_SHUTDOWN); register_fsa_input(C_SHUTDOWN, I_SHUTDOWN, NULL); /* If shutdown timer doesn't have a period set, use the default * * @TODO: Evaluate whether this is still necessary. As long as * config_query_callback() has been run at least once, it doesn't look like * anything could have changed the timer period since then. 
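 *
 * For orientation (a hedged illustration, not upstream logic): the value
 * looked up below is an interval specification such as "20min", which
 * pcmk_parse_interval_spec() converts to milliseconds, along these lines:
 *
 *     guint period_ms = 0;
 *
 *     pcmk_parse_interval_spec("20min", &period_ms);
 *     // period_ms is now 1200000 (20 minutes in milliseconds)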
*/ value = pcmk__cluster_option(NULL, PCMK_OPT_SHUTDOWN_ESCALATION); pcmk_parse_interval_spec(value, &default_period_ms); controld_shutdown_start_countdown(default_period_ms); } diff --git a/doc/sphinx/Pacemaker_Explained/cluster-options.rst b/doc/sphinx/Pacemaker_Explained/cluster-options.rst index 042ed0bafe..9f1b0214e3 100644 --- a/doc/sphinx/Pacemaker_Explained/cluster-options.rst +++ b/doc/sphinx/Pacemaker_Explained/cluster-options.rst @@ -1,839 +1,841 @@ Cluster-Wide Configuration -------------------------- .. index:: pair: XML element; cib pair: XML element; configuration Configuration Layout #################### The cluster is defined by the Cluster Information Base (CIB), which uses XML notation. The simplest CIB, an empty one, looks like this: .. topic:: An empty configuration .. code-block:: xml The empty configuration above contains the major sections that make up a CIB: * ``cib``: The entire CIB is enclosed with a ``cib`` element. Certain fundamental settings are defined as attributes of this element. * ``configuration``: This section -- the primary focus of this document -- contains traditional configuration information such as what resources the cluster serves and the relationships among them. * ``crm_config``: cluster-wide configuration options * ``nodes``: the machines that host the cluster * ``resources``: the services run by the cluster * ``constraints``: indications of how resources should be placed * ``status``: This section contains the history of each resource on each node. Based on this data, the cluster can construct the complete current state of the cluster. The authoritative source for this section is the local executor (pacemaker-execd process) on each cluster node, and the cluster will occasionally repopulate the entire section. For this reason, it is never written to disk, and administrators are advised against modifying it in any way. In this document, configuration settings will be described as properties or options based on how they are defined in the CIB: * Properties are XML attributes of an XML element. * Options are name-value pairs expressed as ``nvpair`` child elements of an XML element. Normally, you will use command-line tools that abstract the XML, so the distinction will be unimportant; both properties and options are cluster settings you can tweak. CIB Properties ############## Certain settings are defined by CIB properties (that is, attributes of the ``cib`` tag) rather than with the rest of the cluster configuration in the ``configuration`` section. The reason is simply a matter of parsing. These options are used by the configuration database which is, by design, mostly ignorant of the content it holds. So the decision was made to place them in an easy-to-find location. .. list-table:: **CIB Properties** :class: longtable :widths: 2 2 2 5 :header-rows: 1 * - Name - Type - Default - Description * - .. _admin_epoch: .. index:: pair: admin_epoch; cib admin_epoch - :ref:`nonnegative integer ` - 0 - When a node joins the cluster, the cluster asks the node with the highest (``admin_epoch``, ``epoch``, ``num_updates``) tuple to replace the configuration on all the nodes -- which makes setting them correctly very important. ``admin_epoch`` is never modified by the cluster; you can use this to make the configurations on any inactive nodes obsolete. * - .. _epoch: .. index:: pair: epoch; cib epoch - :ref:`nonnegative integer ` - 0 - The cluster increments this every time the CIB's configuration section is updated. * - .. _num_updates: .. 
index:: pair: num_updates; cib num_updates - :ref:`nonnegative integer ` - 0 - The cluster increments this every time the CIB's configuration or status sections are updated, and resets it to 0 when epoch changes. * - .. _validate_with: .. index:: pair: validate-with; cib validate-with - :ref:`enumeration ` - - Determines the type of XML validation that will be done on the configuration. Allowed values are ``none`` (in which case the cluster will not require that updates conform to expected syntax) and the base names of schema files installed on the local machine (for example, "pacemaker-3.9") * - .. _remote_tls_port: .. index:: pair: remote-tls-port; cib remote-tls-port - :ref:`port ` - - If set, the CIB manager will listen for anonymously encrypted remote connections on this port, to allow CIB administration from hosts not in the cluster. No key is used, so this should be used only on a protected network where man-in-the-middle attacks can be avoided. * - .. _remote_clear_port: .. index:: pair: remote-clear-port; cib remote-clear-port - :ref:`port ` - - If set to a TCP port number, the CIB manager will listen for remote connections on this port, to allow for CIB administration from hosts not in the cluster. No encryption is used, so this should be used only on a protected network. * - .. _cib_last_written: .. index:: pair: cib-last-written; cib cib-last-written - :ref:`date/time ` - - Indicates when the configuration was last written to disk. Maintained by the cluster; for informational purposes only. * - .. _have_quorum: .. index:: pair: have-quorum; cib have-quorum - :ref:`boolean ` - - Indicates whether the cluster has quorum. If false, the cluster's response is determined by ``no-quorum-policy`` (see below). Maintained by the cluster. * - .. _dc_uuid: .. index:: pair: dc-uuid; cib dc-uuid - :ref:`text ` - - Node ID of the cluster's current designated controller (DC). Used and maintained by the cluster. * - .. _execution_date: .. index:: pair: execution-date; cib execution-date - :ref:`epoch time ` - - Time to use when evaluating rules. .. _cluster_options: Cluster Options ############### Cluster options, as you might expect, control how the cluster behaves when confronted with various situations. They are grouped into sets within the ``crm_config`` section. In advanced configurations, there may be more than one set. (This will be described later in the chapter on :ref:`rules` where we will show how to have the cluster use different sets of options during working hours than during weekends.) For now, we will describe the simple case where each option is present at most once. You can obtain an up-to-date list of cluster options, including their default values, by running the ``man pacemaker-schedulerd`` and ``man pacemaker-controld`` commands. .. list-table:: **Cluster Options** :class: longtable :widths: 2 2 2 5 :header-rows: 1 * - Name - Type - Default - Description * - .. _cluster_name: .. index:: pair: cluster option; cluster-name cluster-name - :ref:`text ` - - An (optional) name for the cluster as a whole. This is mostly for users' convenience for use as desired in administration, but can be used in the Pacemaker configuration in :ref:`rules` (as the ``#cluster-name`` :ref:`node attribute `). It may also be used by higher-level tools when displaying cluster information, and by certain resource agents (for example, the ``ocf:heartbeat:GFS2`` agent stores the cluster name in filesystem meta-data). * - .. _dc_version: .. 
index:: pair: cluster option; dc-version dc-version - :ref:`version ` - *detected* - Version of Pacemaker on the cluster's designated controller (DC). Maintained by the cluster, and intended for diagnostic purposes. * - .. _cluster_infrastructure: .. index:: pair: cluster option; cluster-infrastructure cluster-infrastructure - :ref:`text ` - *detected* - The messaging layer with which Pacemaker is currently running. Maintained by the cluster, and intended for informational and diagnostic purposes. * - .. _no_quorum_policy: .. index:: pair: cluster option; no-quorum-policy no-quorum-policy - :ref:`enumeration ` - stop - What to do when the cluster does not have quorum. Allowed values: * ``ignore:`` continue all resource management * ``freeze:`` continue resource management, but don't recover resources from nodes not in the affected partition * ``stop:`` stop all resources in the affected cluster partition * ``demote:`` demote promotable resources and stop all other resources in the affected cluster partition *(since 2.0.5)* - * ``suicide:`` fence all nodes in the affected cluster partition + * ``fence:`` fence all nodes in the affected cluster partition + *(since 2.1.9)* + * ``suicide:`` same as ``fence`` *(deprecated since 2.1.9)* * - .. _batch_limit: .. index:: pair: cluster option; batch-limit batch-limit - :ref:`integer ` - 0 - The maximum number of actions that the cluster may execute in parallel across all nodes. The ideal value will depend on the speed and load of your network and cluster nodes. If zero, the cluster will impose a dynamically calculated limit only when any node has high load. If -1, the cluster will not impose any limit. * - .. _migration_limit: .. index:: pair: cluster option; migration-limit migration-limit - :ref:`integer ` - -1 - The number of :ref:`live migration ` actions that the cluster is allowed to execute in parallel on a node. A value of -1 means unlimited. * - .. _load_threshold: .. index:: pair: cluster option; load-threshold load-threshold - :ref:`percentage ` - 80% - Maximum amount of system load that should be used by cluster nodes. The cluster will slow down its recovery process when the amount of system resources used (currently CPU) approaches this limit. * - .. _node_action_limit: .. index:: pair: cluster option; node-action-limit node-action-limit - :ref:`integer ` - 0 - Maximum number of jobs that can be scheduled per node. If nonpositive or invalid, double the number of cores is used as the maximum number of jobs per node. :ref:`PCMK_node_action_limit ` overrides this option on a per-node basis. * - .. _symmetric_cluster: .. index:: pair: cluster option; symmetric-cluster symmetric-cluster - :ref:`boolean ` - true - If true, resources can run on any node by default. If false, a resource is allowed to run on a node only if a :ref:`location constraint ` enables it. * - .. _stop_all_resources: .. index:: pair: cluster option; stop-all-resources stop-all-resources - :ref:`boolean ` - false - Whether all resources should be disallowed from running (can be useful during maintenance or troubleshooting) * - .. _stop_orphan_resources: .. index:: pair: cluster option; stop-orphan-resources stop-orphan-resources - :ref:`boolean ` - true - Whether resources that have been deleted from the configuration should be stopped. This value takes precedence over :ref:`is-managed ` (that is, even unmanaged resources will be stopped when orphaned if this value is ``true``). * - .. _stop_orphan_actions: .. 
index:: pair: cluster option; stop-orphan-actions stop-orphan-actions - :ref:`boolean ` - true - Whether recurring :ref:`operations ` that have been deleted from the configuration should be cancelled * - .. _start_failure_is_fatal: .. index:: pair: cluster option; start-failure-is-fatal start-failure-is-fatal - :ref:`boolean ` - true - Whether a failure to start a resource on a particular node prevents further start attempts on that node. If ``false``, the cluster will decide whether the node is still eligible based on the resource's current failure count and ``migration-threshold``. * - .. _enable_startup_probes: .. index:: pair: cluster option; enable-startup-probes enable-startup-probes - :ref:`boolean ` - true - Whether the cluster should check the pre-existing state of resources when the cluster starts * - .. _maintenance_mode: .. index:: pair: cluster option; maintenance-mode maintenance-mode - :ref:`boolean ` - false - If true, the cluster will not start or stop any resource in the cluster, and any recurring operations (except those specifying ``role`` as ``Stopped``) will be paused. If true, this overrides the :ref:`maintenance ` node attribute, :ref:`is-managed ` and :ref:`maintenance ` resource meta-attributes, and :ref:`enabled ` operation meta-attribute. * - .. _stonith_enabled: .. index:: pair: cluster option; stonith-enabled stonith-enabled - :ref:`boolean ` - true - Whether the cluster is allowed to fence nodes (for example, failed nodes and nodes with resources that can't be stopped). If true, at least one fence device must be configured before resources are allowed to run. If false, unresponsive nodes are immediately assumed to be running no resources, and resource recovery on online nodes starts without any further protection (which can mean *data loss* if the unresponsive node still accesses shared storage, for example). See also the :ref:`requires ` resource meta-attribute. * - .. _stonith_action: .. index:: pair: cluster option; stonith-action stonith-action - :ref:`enumeration ` - reboot - Action the cluster should send to the fence agent when a node must be fenced. Allowed values are ``reboot``, ``off``, and (for legacy agents only) ``poweroff``. * - .. _stonith_timeout: .. index:: pair: cluster option; stonith-timeout stonith-timeout - :ref:`duration ` - 60s - How long to wait for ``on``, ``off``, and ``reboot`` fence actions to complete by default. * - .. _stonith_max_attempts: .. index:: pair: cluster option; stonith-max-attempts stonith-max-attempts - :ref:`score ` - 10 - How many times fencing can fail for a target before the cluster will no longer immediately re-attempt it. Any value below 1 will be ignored, and the default will be used instead. * - .. _have_watchdog: .. index:: pair: cluster option; have-watchdog have-watchdog - :ref:`boolean ` - *detected* - Whether watchdog integration is enabled. This is set automatically by the cluster according to whether SBD is detected to be in use. User-configured values are ignored. The value `true` is meaningful if diskless SBD is used and :ref:`stonith-watchdog-timeout ` is nonzero. In that case, if fencing is required, watchdog-based self-fencing will be performed via SBD without requiring a fencing resource explicitly configured. * - .. _stonith_watchdog_timeout: ..
index:: pair: cluster option; stonith-watchdog-timeout stonith-watchdog-timeout - :ref:`timeout ` - 0 - If nonzero, and the cluster detects ``have-watchdog`` as ``true``, then watchdog-based self-fencing will be performed via SBD when fencing is required. If this is set to a positive value, lost nodes are assumed to achieve self-fencing within this much time. This does not require a fencing resource to be explicitly configured, though a fence_watchdog resource can be configured, to limit use to specific nodes. If this is set to 0 (the default), the cluster will never assume watchdog-based self-fencing. If this is set to a negative value, the cluster will use twice the local value of the ``SBD_WATCHDOG_TIMEOUT`` environment variable if that is positive, or otherwise treat this as 0. **Warning:** When used, this timeout must be larger than ``SBD_WATCHDOG_TIMEOUT`` on all nodes that use watchdog-based SBD, and Pacemaker will refuse to start on any of those nodes where this is not true for the local value or SBD is not active. When this is set to a negative value, ``SBD_WATCHDOG_TIMEOUT`` must be set to the same value on all nodes that use SBD, otherwise data corruption or loss could occur. * - .. _concurrent-fencing: .. index:: pair: cluster option; concurrent-fencing concurrent-fencing - :ref:`boolean ` - false - Whether the cluster is allowed to initiate multiple fence actions concurrently. Fence actions initiated externally, such as via the ``stonith_admin`` tool or an application such as DLM, or by the fencer itself such as recurring device monitors and ``status`` and ``list`` commands, are not limited by this option. * - .. _fence_reaction: .. index:: pair: cluster option; fence-reaction fence-reaction - :ref:`enumeration ` - stop - How should a cluster node react if notified of its own fencing? A cluster node may receive notification of a "succeeded" fencing that targeted it if fencing is misconfigured, or if fabric fencing is in use that doesn't cut cluster communication. Allowed values are ``stop`` to attempt to immediately stop Pacemaker and stay stopped, or ``panic`` to attempt to immediately reboot the local node, falling back to stop on failure. The default is likely to be changed to ``panic`` in a future release. *(since 2.0.3)* * - .. _priority_fencing_delay: .. index:: pair: cluster option; priority-fencing-delay priority-fencing-delay - :ref:`duration ` - 0 - Apply this delay to any fencing targeting the lost nodes with the highest total resource priority in case we don't have the majority of the nodes in our cluster partition, so that the more significant nodes potentially win any fencing match (especially meaningful in a split-brain of a 2-node cluster). A promoted resource instance takes the resource's priority plus 1 if the resource's priority is not 0. Any static or random delays introduced by ``pcmk_delay_base`` and ``pcmk_delay_max`` configured for the corresponding fencing resources will be added to this delay. This delay should be significantly greater than (safely twice) the maximum delay from those parameters. *(since 2.0.4)* * - .. _node_pending_timeout: .. index:: pair: cluster option; node-pending-timeout node-pending-timeout - :ref:`duration ` - 0 - Fence nodes that do not join the controller process group within this much time after joining the cluster, to allow the cluster to continue managing resources. A value of 0 means never fence pending nodes. Setting the value to 2h means fence nodes after 2 hours. *(since 2.1.7)* * - .. _cluster_delay: .. 
index:: pair: cluster option; cluster-delay cluster-delay - :ref:`duration ` - 60s - If the DC requires an action to be executed on another node, it will consider the action failed if it does not get a response from the other node within this time (beyond the action's own timeout). The ideal value will depend on the speed and load of your network and cluster nodes. * - .. _dc_deadtime: .. index:: pair: cluster option; dc-deadtime dc-deadtime - :ref:`duration ` - 20s - How long to wait for a response from other nodes when electing a DC. The ideal value will depend on the speed and load of your network and cluster nodes. * - .. _cluster_ipc_limit: .. index:: pair: cluster option; cluster-ipc-limit cluster-ipc-limit - :ref:`nonnegative integer ` - 500 - The maximum IPC message backlog before one cluster daemon will disconnect another. This is of use in large clusters, for which a good value is the number of resources in the cluster multiplied by the number of nodes. The default of 500 is also the minimum. Raise this if you see "Evicting client" log messages for cluster daemon process IDs. * - .. _pe_error_series_max: .. index:: pair: cluster option; pe-error-series-max pe-error-series-max - :ref:`integer ` - -1 - The number of scheduler inputs resulting in errors to save. These inputs can be helpful during troubleshooting and when reporting issues. A negative value means save all inputs, and 0 means save none. * - .. _pe_warn_series_max: .. index:: pair: cluster option; pe-warn-series-max pe-warn-series-max - :ref:`integer ` - 5000 - The number of scheduler inputs resulting in warnings to save. These inputs can be helpful during troubleshooting and when reporting issues. A negative value means save all inputs, and 0 means save none. * - .. _pe_input_series_max: .. index:: pair: cluster option; pe-input-series-max pe-input-series-max - :ref:`integer ` - 4000 - The number of "normal" scheduler inputs to save. These inputs can be helpful during troubleshooting and when reporting issues. A negative value means save all inputs, and 0 means save none. * - .. _enable_acl: .. index:: pair: cluster option; enable-acl enable-acl - :ref:`boolean ` - false - Whether :ref:`access control lists ` should be used to authorize CIB modifications * - .. _placement_strategy: .. index:: pair: cluster option; placement-strategy placement-strategy - :ref:`enumeration ` - default - How the cluster should assign resources to nodes (see :ref:`utilization`). Allowed values are ``default``, ``utilization``, ``balanced``, and ``minimal``. * - .. _node_health_strategy: .. index:: pair: cluster option; node-health-strategy node-health-strategy - :ref:`enumeration ` - none - How the cluster should react to :ref:`node health ` attributes. Allowed values are ``none``, ``migrate-on-red``, ``only-green``, ``progressive``, and ``custom``. * - .. _node_health_base: .. index:: pair: cluster option; node-health-base node-health-base - :ref:`score ` - 0 - The base health score assigned to a node. Only used when ``node-health-strategy`` is ``progressive``. * - .. _node_health_green: .. index:: pair: cluster option; node-health-green node-health-green - :ref:`score ` - 0 - The score to use for a node health attribute whose value is ``green``. Only used when ``node-health-strategy`` is ``progressive`` or ``custom``. * - .. _node_health_yellow: .. index:: pair: cluster option; node-health-yellow node-health-yellow - :ref:`score ` - 0 - The score to use for a node health attribute whose value is ``yellow``. 
Only used when ``node-health-strategy`` is ``progressive`` or ``custom``. * - .. _node_health_red: .. index:: pair: cluster option; node-health-red node-health-red - :ref:`score ` - -INFINITY - The score to use for a node health attribute whose value is ``red``. Only used when ``node-health-strategy`` is ``progressive`` or ``custom``. * - .. _cluster_recheck_interval: .. index:: pair: cluster option; cluster-recheck-interval cluster-recheck-interval - :ref:`duration ` - 15min - Pacemaker is primarily event-driven, and looks ahead to know when to recheck the cluster for failure-timeout settings and most time-based rules *(since 2.0.3)*. However, it will also recheck the cluster after this amount of inactivity. This has two goals: rules with ``date_spec`` are only guaranteed to be checked this often, and it also serves as a fail-safe for some kinds of scheduler bugs. A value of 0 disables this polling. * - .. _shutdown_lock: .. index:: pair: cluster option; shutdown-lock shutdown-lock - :ref:`boolean ` - false - The default of false allows active resources to be recovered elsewhere when their node is cleanly shut down, which is what the vast majority of users will want. However, some users prefer to make resources highly available only for failures, with no recovery for clean shutdowns. If this option is true, resources active on a node when it is cleanly shut down are kept "locked" to that node (not allowed to run elsewhere) until they start again on that node after it rejoins (or for at most ``shutdown-lock-limit``, if set). Stonith resources and Pacemaker Remote connections are never locked. Clone and bundle instances and the promoted role of promotable clones are currently never locked, though support could be added in a future release. Locks may be manually cleared using the ``--refresh`` option of ``crm_resource`` (both the resource and node must be specified; this works with remote nodes if their connection resource's ``target-role`` is set to ``Stopped``, but not if Pacemaker Remote is stopped on the remote node without disabling the connection resource). *(since 2.0.4)* * - .. _shutdown_lock_limit: .. index:: pair: cluster option; shutdown-lock-limit shutdown-lock-limit - :ref:`duration ` - 0 - If ``shutdown-lock`` is true, and this is set to a nonzero time duration, locked resources will be allowed to start after this much time has passed since the node shutdown was initiated, even if the node has not rejoined. (This works with remote nodes only if their connection resource's ``target-role`` is set to ``Stopped``.) *(since 2.0.4)* * - .. _remove_after_stop: .. index:: pair: cluster option; remove-after-stop remove-after-stop - :ref:`boolean ` - false - *Deprecated* Whether the cluster should remove resources from Pacemaker's executor after they are stopped. Values other than the default are, at best, poorly tested and potentially dangerous. This option is deprecated and will be removed in a future release. * - .. _startup_fencing: .. index:: pair: cluster option; startup-fencing startup-fencing - :ref:`boolean ` - true - *Advanced Use Only:* Whether the cluster should fence unseen nodes at start-up. Setting this to false is unsafe, because the unseen nodes could be active and running resources but unreachable. ``dc-deadtime`` acts as a grace period before this fencing, since a DC must be elected to schedule fencing. * - .. _election_timeout: ..
index:: pair: cluster option; election-timeout election-timeout - :ref:`duration ` - 2min - *Advanced Use Only:* If a winner is not declared within this much time of starting an election, the node that initiated the election will declare itself the winner. * - .. _shutdown_escalation: .. index:: pair: cluster option; shutdown-escalation shutdown-escalation - :ref:`duration ` - 20min - *Advanced Use Only:* The controller will exit immediately if a shutdown does not complete within this much time. * - .. _join_integration_timeout: .. index:: pair: cluster option; join-integration-timeout join-integration-timeout - :ref:`duration ` - 3min - *Advanced Use Only:* If you need to adjust this value, it probably indicates the presence of a bug. * - .. _join_finalization_timeout: .. index:: pair: cluster option; join-finalization-timeout join-finalization-timeout - :ref:`duration ` - 30min - *Advanced Use Only:* If you need to adjust this value, it probably indicates the presence of a bug. * - .. _transition_delay: .. index:: pair: cluster option; transition-delay transition-delay - :ref:`duration ` - 0s - *Advanced Use Only:* Delay cluster recovery for the configured interval to allow for additional or related events to occur. This can be useful if your configuration is sensitive to the order in which ping updates arrive. Enabling this option will slow down cluster recovery under all conditions.
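
For orientation, here is a hedged, minimal sketch of how such an option ends
up in the CIB (the ``id`` values are arbitrary examples): each cluster option
is an ``nvpair`` inside a ``cluster_property_set`` in the ``crm_config``
section. Selecting the ``fence`` no-quorum policy, for instance, could look
like this:

.. code-block:: xml

   <crm_config>
     <cluster_property_set id="cib-bootstrap-options">
       <nvpair id="cib-bootstrap-options-no-quorum-policy"
               name="no-quorum-policy" value="fence"/>
     </cluster_property_set>
   </crm_config>

In practice, higher-level tools such as ``crm_attribute`` are normally used
to set these values rather than editing the XML by hand.
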
diff --git a/lib/common/options.c b/lib/common/options.c index ba64959c8a..1b64c4d8d6 100644 --- a/lib/common/options.c +++ b/lib/common/options.c @@ -1,1565 +1,1567 @@ /* * Copyright 2004-2024 the Pacemaker project contributors * * The version control history for this file may have further details. * * This source code is licensed under the GNU Lesser General Public License * version 2.1 or later (LGPLv2.1+) WITHOUT ANY WARRANTY. */ #ifndef _GNU_SOURCE # define _GNU_SOURCE #endif #include #include #include #include #include #include #include #include void pcmk__cli_help(char cmd) { if (cmd == 'v' || cmd == '$') { printf("Pacemaker %s\n", PACEMAKER_VERSION); printf("Written by Andrew Beekhof and " "the Pacemaker project contributors\n"); } else if (cmd == '!') { printf("Pacemaker %s (Build: %s): %s\n", PACEMAKER_VERSION, BUILD_VERSION, CRM_FEATURES); } crm_exit(CRM_EX_OK); while(1); // above does not return } /* * Option metadata */ static const pcmk__cluster_option_t cluster_options[] = { /* name, old name, type, allowed values, * default value, validator, * flags, * short description, * long description */ { PCMK_OPT_DC_VERSION, NULL, PCMK_VALUE_VERSION, NULL, NULL, NULL, pcmk__opt_controld|pcmk__opt_generated, N_("Pacemaker version on cluster node elected Designated Controller " "(DC)"), N_("Includes a hash which identifies the exact revision the code was " "built from.
Used for diagnostic purposes."), }, { PCMK_OPT_CLUSTER_INFRASTRUCTURE, NULL, PCMK_VALUE_STRING, NULL, NULL, NULL, pcmk__opt_controld|pcmk__opt_generated, N_("The messaging layer on which Pacemaker is currently running"), N_("Used for informational and diagnostic purposes."), }, { PCMK_OPT_CLUSTER_NAME, NULL, PCMK_VALUE_STRING, NULL, NULL, NULL, pcmk__opt_controld, N_("An arbitrary name for the cluster"), N_("This optional value is mostly for users' convenience as desired " "in administration, but may also be used in Pacemaker " "configuration rules via the #cluster-name node attribute, and " "by higher-level tools and resource agents."), }, { PCMK_OPT_DC_DEADTIME, NULL, PCMK_VALUE_DURATION, NULL, "20s", pcmk__valid_interval_spec, pcmk__opt_controld, N_("How long to wait for a response from other nodes during start-up"), N_("The optimal value will depend on the speed and load of your " "network and the type of switches used."), }, { PCMK_OPT_CLUSTER_RECHECK_INTERVAL, NULL, PCMK_VALUE_DURATION, NULL, "15min", pcmk__valid_interval_spec, pcmk__opt_controld, N_("Polling interval to recheck cluster state and evaluate rules " "with date specifications"), N_("Pacemaker is primarily event-driven, and looks ahead to know when " "to recheck cluster state for failure-timeout settings and most " "time-based rules. However, it will also recheck the cluster after " "this amount of inactivity, to evaluate rules with date " "specifications and serve as a fail-safe for certain types of " "scheduler bugs. A value of 0 disables polling. A positive value " "sets an interval in seconds, unless other units are specified " "(for example, \"5min\")."), }, { PCMK_OPT_FENCE_REACTION, NULL, PCMK_VALUE_SELECT, PCMK_VALUE_STOP ", " PCMK_VALUE_PANIC, PCMK_VALUE_STOP, NULL, pcmk__opt_controld, N_("How a cluster node should react if notified of its own fencing"), N_("A cluster node may receive notification of a \"succeeded\" " "fencing that targeted it if fencing is misconfigured, or if " "fabric fencing is in use that doesn't cut cluster communication. " "Use \"stop\" to attempt to immediately stop Pacemaker and stay " "stopped, or \"panic\" to attempt to immediately reboot the local " "node, falling back to stop on failure."), }, { PCMK_OPT_ELECTION_TIMEOUT, NULL, PCMK_VALUE_DURATION, NULL, "2min", pcmk__valid_interval_spec, pcmk__opt_controld|pcmk__opt_advanced, N_("Declare an election failed if it is not decided within this much " "time. If you need to adjust this value, it probably indicates " "the presence of a bug."), NULL, }, { PCMK_OPT_SHUTDOWN_ESCALATION, NULL, PCMK_VALUE_DURATION, NULL, "20min", pcmk__valid_interval_spec, pcmk__opt_controld|pcmk__opt_advanced, N_("Exit immediately if shutdown does not complete within this much " "time. 
If you need to adjust this value, it probably indicates " "the presence of a bug."), NULL, }, { PCMK_OPT_JOIN_INTEGRATION_TIMEOUT, "crmd-integration-timeout", PCMK_VALUE_DURATION, NULL, "3min", pcmk__valid_interval_spec, pcmk__opt_controld|pcmk__opt_advanced, N_("If you need to adjust this value, it probably indicates " "the presence of a bug."), NULL, }, { PCMK_OPT_JOIN_FINALIZATION_TIMEOUT, "crmd-finalization-timeout", PCMK_VALUE_DURATION, NULL, "30min", pcmk__valid_interval_spec, pcmk__opt_controld|pcmk__opt_advanced, N_("If you need to adjust this value, it probably indicates " "the presence of a bug."), NULL, }, { PCMK_OPT_TRANSITION_DELAY, "crmd-transition-delay", PCMK_VALUE_DURATION, NULL, "0s", pcmk__valid_interval_spec, pcmk__opt_controld|pcmk__opt_advanced, N_("Enabling this option will slow down cluster recovery under all " "conditions"), N_("Delay cluster recovery for this much time to allow for additional " "events to occur. Useful if your configuration is sensitive to " "the order in which ping updates arrive."), }, { PCMK_OPT_NO_QUORUM_POLICY, NULL, PCMK_VALUE_SELECT, PCMK_VALUE_STOP ", " PCMK_VALUE_FREEZE ", " PCMK_VALUE_IGNORE - ", " PCMK_VALUE_DEMOTE ", " PCMK_VALUE_FENCE_LEGACY, + ", " PCMK_VALUE_DEMOTE ", " PCMK_VALUE_FENCE ", " + PCMK_VALUE_FENCE_LEGACY, PCMK_VALUE_STOP, pcmk__valid_no_quorum_policy, pcmk__opt_schedulerd, N_("What to do when the cluster does not have quorum"), NULL, }, { PCMK_OPT_SHUTDOWN_LOCK, NULL, PCMK_VALUE_BOOLEAN, NULL, PCMK_VALUE_FALSE, pcmk__valid_boolean, pcmk__opt_schedulerd, N_("Whether to lock resources to a cleanly shut down node"), N_("When true, resources active on a node when it is cleanly shut down " "are kept \"locked\" to that node (not allowed to run elsewhere) " "until they start again on that node after it rejoins (or for at " "most shutdown-lock-limit, if set). Stonith resources and " "Pacemaker Remote connections are never locked. Clone and bundle " "instances and the promoted role of promotable clones are " "currently never locked, though support could be added in a future " "release."), }, { PCMK_OPT_SHUTDOWN_LOCK_LIMIT, NULL, PCMK_VALUE_DURATION, NULL, "0", pcmk__valid_interval_spec, pcmk__opt_schedulerd, N_("Do not lock resources to a cleanly shut down node longer than " "this"), N_("If shutdown-lock is true and this is set to a nonzero time " "duration, shutdown locks will expire after this much time has " "passed since the shutdown was initiated, even if the node has not " "rejoined."), }, { PCMK_OPT_ENABLE_ACL, NULL, PCMK_VALUE_BOOLEAN, NULL, PCMK_VALUE_FALSE, pcmk__valid_boolean, pcmk__opt_based, N_("Enable Access Control Lists (ACLs) for the CIB"), NULL, }, { PCMK_OPT_SYMMETRIC_CLUSTER, NULL, PCMK_VALUE_BOOLEAN, NULL, PCMK_VALUE_TRUE, pcmk__valid_boolean, pcmk__opt_schedulerd, N_("Whether resources can run on any node by default"), NULL, }, { PCMK_OPT_MAINTENANCE_MODE, NULL, PCMK_VALUE_BOOLEAN, NULL, PCMK_VALUE_FALSE, pcmk__valid_boolean, pcmk__opt_schedulerd, N_("Whether the cluster should refrain from monitoring, starting, and " "stopping resources"), NULL, }, { PCMK_OPT_START_FAILURE_IS_FATAL, NULL, PCMK_VALUE_BOOLEAN, NULL, PCMK_VALUE_TRUE, pcmk__valid_boolean, pcmk__opt_schedulerd, N_("Whether a start failure should prevent a resource from being " "recovered on the same node"), N_("When true, the cluster will immediately ban a resource from a node " "if it fails to start there. 
When false, the cluster will instead " "check the resource's fail count against its migration-threshold.") }, { PCMK_OPT_ENABLE_STARTUP_PROBES, NULL, PCMK_VALUE_BOOLEAN, NULL, PCMK_VALUE_TRUE, pcmk__valid_boolean, pcmk__opt_schedulerd, N_("Whether the cluster should check for active resources during " "start-up"), NULL, }, // Fencing-related options { PCMK_OPT_STONITH_ENABLED, NULL, PCMK_VALUE_BOOLEAN, NULL, PCMK_VALUE_TRUE, pcmk__valid_boolean, pcmk__opt_schedulerd|pcmk__opt_advanced, N_("Whether nodes may be fenced as part of recovery"), N_("If false, unresponsive nodes are immediately assumed to be " "harmless, and resources that were active on them may be recovered " "elsewhere. This can result in a \"split-brain\" situation, " "potentially leading to data loss and/or service unavailability."), }, { PCMK_OPT_STONITH_ACTION, NULL, PCMK_VALUE_SELECT, PCMK_ACTION_REBOOT ", " PCMK_ACTION_OFF ", " PCMK__ACTION_POWEROFF, PCMK_ACTION_REBOOT, pcmk__is_fencing_action, pcmk__opt_schedulerd, N_("Action to send to fence device when a node needs to be fenced " "(\"poweroff\" is a deprecated alias for \"off\")"), NULL, }, { PCMK_OPT_STONITH_TIMEOUT, NULL, PCMK_VALUE_DURATION, NULL, "60s", pcmk__valid_interval_spec, pcmk__opt_schedulerd, N_("How long to wait for on, off, and reboot fence actions to complete " "by default"), NULL, }, { PCMK_OPT_HAVE_WATCHDOG, NULL, PCMK_VALUE_BOOLEAN, NULL, PCMK_VALUE_FALSE, pcmk__valid_boolean, pcmk__opt_schedulerd|pcmk__opt_generated, N_("Whether watchdog integration is enabled"), N_("This is set automatically by the cluster according to whether SBD " "is detected to be in use. User-configured values are ignored. " "The value `true` is meaningful if diskless SBD is used and " "`stonith-watchdog-timeout` is nonzero. In that case, if fencing " "is required, watchdog-based self-fencing will be performed via " "SBD without requiring a fencing resource explicitly configured."), }, { /* @COMPAT Currently, unparsable values default to -1 (auto-calculate), * while missing values default to 0 (disable). All values are accepted * (unless the controller finds that the value conflicts with the * SBD_WATCHDOG_TIMEOUT). * * At a compatibility break: properly validate as a timeout, let * either negative values or a particular string like "auto" mean auto- * calculate, and use 0 as the single default for when the option either * is unset or fails to validate. */ PCMK_OPT_STONITH_WATCHDOG_TIMEOUT, NULL, PCMK_VALUE_TIMEOUT, NULL, "0", NULL, pcmk__opt_controld, N_("How long before nodes can be assumed to be safely down when " "watchdog-based self-fencing via SBD is in use"), N_("If this is set to a positive value, lost nodes are assumed to " "achieve self-fencing using watchdog-based SBD within this much " "time. This does not require a fencing resource to be explicitly " "configured, though a fence_watchdog resource can be configured, to " "limit use to specific nodes. If this is set to 0 (the default), " "the cluster will never assume watchdog-based self-fencing. If this " "is set to a negative value, the cluster will use twice the local " "value of the `SBD_WATCHDOG_TIMEOUT` environment variable if that " "is positive, or otherwise treat this as 0. WARNING: When used, " "this timeout must be larger than `SBD_WATCHDOG_TIMEOUT` on all " "nodes that use watchdog-based SBD, and Pacemaker will refuse to " "start on any of those nodes where this is not true for the local " "value or SBD is not active. 
When this is set to a negative value, " "`SBD_WATCHDOG_TIMEOUT` must be set to the same value on all nodes " "that use SBD, otherwise data corruption or loss could occur."), }, { PCMK_OPT_STONITH_MAX_ATTEMPTS, NULL, PCMK_VALUE_SCORE, NULL, "10", pcmk__valid_positive_int, pcmk__opt_controld, N_("How many times fencing can fail before it will no longer be " "immediately re-attempted on a target"), NULL, }, { PCMK_OPT_CONCURRENT_FENCING, NULL, PCMK_VALUE_BOOLEAN, NULL, PCMK__CONCURRENT_FENCING_DEFAULT, pcmk__valid_boolean, pcmk__opt_schedulerd, N_("Allow performing fencing operations in parallel"), NULL, }, { PCMK_OPT_STARTUP_FENCING, NULL, PCMK_VALUE_BOOLEAN, NULL, PCMK_VALUE_TRUE, pcmk__valid_boolean, pcmk__opt_schedulerd|pcmk__opt_advanced, N_("Whether to fence unseen nodes at start-up"), N_("Setting this to false may lead to a \"split-brain\" situation, " "potentially leading to data loss and/or service unavailability."), }, { PCMK_OPT_PRIORITY_FENCING_DELAY, NULL, PCMK_VALUE_DURATION, NULL, "0", pcmk__valid_interval_spec, pcmk__opt_schedulerd, N_("Apply fencing delay targeting the lost nodes with the highest " "total resource priority"), N_("Apply specified delay for the fencings that are targeting the lost " "nodes with the highest total resource priority in case we don't " "have the majority of the nodes in our cluster partition, so that " "the more significant nodes potentially win any fencing match, " "which is especially meaningful under split-brain of 2-node " "cluster. A promoted resource instance takes the base priority + 1 " "on calculation if the base priority is not 0. Any static/random " "delays that are introduced by `pcmk_delay_base/max` configured " "for the corresponding fencing resources will be added to this " "delay. This delay should be significantly greater than, safely " "twice, the maximum `pcmk_delay_base/max`. By default, priority " "fencing delay is disabled."), }, { PCMK_OPT_NODE_PENDING_TIMEOUT, NULL, PCMK_VALUE_DURATION, NULL, "0", pcmk__valid_interval_spec, pcmk__opt_schedulerd, N_("How long to wait for a node that has joined the cluster to join " "the controller process group"), N_("Fence nodes that do not join the controller process group within " "this much time after joining the cluster, to allow the cluster " "to continue managing resources. A value of 0 means never fence " "pending nodes. Setting the value to 2h means fence nodes after " "2 hours."), }, { PCMK_OPT_CLUSTER_DELAY, NULL, PCMK_VALUE_DURATION, NULL, "60s", pcmk__valid_interval_spec, pcmk__opt_schedulerd, N_("Maximum time for node-to-node communication"), N_("The node elected Designated Controller (DC) will consider an action " "failed if it does not get a response from the node executing the " "action within this time (after considering the action's own " "timeout). 
The \"correct\" value will depend on the speed and " "load of your network and cluster nodes.") }, // Limits { PCMK_OPT_LOAD_THRESHOLD, NULL, PCMK_VALUE_PERCENTAGE, NULL, "80%", pcmk__valid_percentage, pcmk__opt_controld, N_("Maximum amount of system load that should be used by cluster " "nodes"), N_("The cluster will slow down its recovery process when the amount of " "system resources used (currently CPU) approaches this limit"), }, { PCMK_OPT_NODE_ACTION_LIMIT, NULL, PCMK_VALUE_INTEGER, NULL, "0", pcmk__valid_int, pcmk__opt_controld, N_("Maximum number of jobs that can be scheduled per node (defaults to " "2x cores)"), NULL, }, { PCMK_OPT_BATCH_LIMIT, NULL, PCMK_VALUE_INTEGER, NULL, "0", pcmk__valid_int, pcmk__opt_schedulerd, N_("Maximum number of jobs that the cluster may execute in parallel " "across all nodes"), N_("The \"correct\" value will depend on the speed and load of your " "network and cluster nodes. If set to 0, the cluster will " "impose a dynamically calculated limit when any node has a " "high load."), }, { PCMK_OPT_MIGRATION_LIMIT, NULL, PCMK_VALUE_INTEGER, NULL, "-1", pcmk__valid_int, pcmk__opt_schedulerd, N_("The number of live migration actions that the cluster is allowed " "to execute in parallel on a node (-1 means no limit)"), NULL, }, { /* @TODO This is actually ignored if not strictly positive. We should * overhaul value types in Pacemaker Explained. There are lots of * inaccurate ranges (assumptions of 32-bit width, "nonnegative" when * positive is required, etc.). * * Maybe a single integer type with the allowed range specified would be * better. * * Drop the PCMK_VALUE_NONNEGATIVE_INTEGER constant if we do this before * a release. */ PCMK_OPT_CLUSTER_IPC_LIMIT, NULL, PCMK_VALUE_NONNEGATIVE_INTEGER, NULL, "500", pcmk__valid_positive_int, pcmk__opt_based, N_("Maximum IPC message backlog before disconnecting a cluster daemon"), N_("Raise this if log has \"Evicting client\" messages for cluster " "daemon PIDs (a good value is the number of resources in the " "cluster multiplied by the number of nodes)."), }, // Orphans and stopping { PCMK_OPT_STOP_ALL_RESOURCES, NULL, PCMK_VALUE_BOOLEAN, NULL, PCMK_VALUE_FALSE, pcmk__valid_boolean, pcmk__opt_schedulerd, N_("Whether the cluster should stop all active resources"), NULL, }, { PCMK_OPT_STOP_ORPHAN_RESOURCES, NULL, PCMK_VALUE_BOOLEAN, NULL, PCMK_VALUE_TRUE, pcmk__valid_boolean, pcmk__opt_schedulerd, N_("Whether to stop resources that were removed from the " "configuration"), NULL, }, { PCMK_OPT_STOP_ORPHAN_ACTIONS, NULL, PCMK_VALUE_BOOLEAN, NULL, PCMK_VALUE_TRUE, pcmk__valid_boolean, pcmk__opt_schedulerd, N_("Whether to cancel recurring actions removed from the " "configuration"), NULL, }, { PCMK__OPT_REMOVE_AFTER_STOP, NULL, PCMK_VALUE_BOOLEAN, NULL, PCMK_VALUE_FALSE, pcmk__valid_boolean, pcmk__opt_schedulerd|pcmk__opt_deprecated, N_("Whether to remove stopped resources from the executor"), N_("Values other than default are poorly tested and potentially " "dangerous."), }, // Storing inputs { PCMK_OPT_PE_ERROR_SERIES_MAX, NULL, PCMK_VALUE_INTEGER, NULL, "-1", pcmk__valid_int, pcmk__opt_schedulerd, N_("The number of scheduler inputs resulting in errors to save"), N_("Zero to disable, -1 to store unlimited."), }, { PCMK_OPT_PE_WARN_SERIES_MAX, NULL, PCMK_VALUE_INTEGER, NULL, "5000", pcmk__valid_int, pcmk__opt_schedulerd, N_("The number of scheduler inputs resulting in warnings to save"), N_("Zero to disable, -1 to store unlimited."), }, { PCMK_OPT_PE_INPUT_SERIES_MAX, NULL, PCMK_VALUE_INTEGER, NULL, "4000", 
pcmk__valid_int, pcmk__opt_schedulerd, N_("The number of scheduler inputs without errors or warnings to save"), N_("Zero to disable, -1 to store unlimited."), }, // Node health { PCMK_OPT_NODE_HEALTH_STRATEGY, NULL, PCMK_VALUE_SELECT, PCMK_VALUE_NONE ", " PCMK_VALUE_MIGRATE_ON_RED ", " PCMK_VALUE_ONLY_GREEN ", " PCMK_VALUE_PROGRESSIVE ", " PCMK_VALUE_CUSTOM, PCMK_VALUE_NONE, pcmk__validate_health_strategy, pcmk__opt_schedulerd, N_("How cluster should react to node health attributes"), N_("Requires external entities to create node attributes (named with " "the prefix \"#health\") with values \"red\", \"yellow\", or " "\"green\".") }, { PCMK_OPT_NODE_HEALTH_BASE, NULL, PCMK_VALUE_SCORE, NULL, "0", pcmk__valid_int, pcmk__opt_schedulerd, N_("Base health score assigned to a node"), N_("Only used when \"node-health-strategy\" is set to " "\"progressive\"."), }, { PCMK_OPT_NODE_HEALTH_GREEN, NULL, PCMK_VALUE_SCORE, NULL, "0", pcmk__valid_int, pcmk__opt_schedulerd, N_("The score to use for a node health attribute whose value is " "\"green\""), N_("Only used when \"node-health-strategy\" is set to \"custom\" or " "\"progressive\"."), }, { PCMK_OPT_NODE_HEALTH_YELLOW, NULL, PCMK_VALUE_SCORE, NULL, "0", pcmk__valid_int, pcmk__opt_schedulerd, N_("The score to use for a node health attribute whose value is " "\"yellow\""), N_("Only used when \"node-health-strategy\" is set to \"custom\" or " "\"progressive\"."), }, { PCMK_OPT_NODE_HEALTH_RED, NULL, PCMK_VALUE_SCORE, NULL, "-INFINITY", pcmk__valid_int, pcmk__opt_schedulerd, N_("The score to use for a node health attribute whose value is " "\"red\""), N_("Only used when \"node-health-strategy\" is set to \"custom\" or " "\"progressive\".") }, // Placement strategy { PCMK_OPT_PLACEMENT_STRATEGY, NULL, PCMK_VALUE_SELECT, PCMK_VALUE_DEFAULT ", " PCMK_VALUE_UTILIZATION ", " PCMK_VALUE_MINIMAL ", " PCMK_VALUE_BALANCED, PCMK_VALUE_DEFAULT, pcmk__valid_placement_strategy, pcmk__opt_schedulerd, N_("How the cluster should allocate resources to nodes"), NULL, }, { NULL, }, }; static const pcmk__cluster_option_t fencing_params[] = { /* name, old name, type, allowed values, * default value, validator, * flags, * short description, * long description */ { PCMK_STONITH_HOST_ARGUMENT, NULL, PCMK_VALUE_STRING, NULL, "port", NULL, pcmk__opt_advanced, N_("An alternate parameter to supply instead of 'port'"), N_("Some devices do not support the standard 'port' parameter or may " "provide additional ones. Use this to specify an alternate, device-" "specific, parameter that should indicate the machine to be " "fenced. A value of \"none\" can be used to tell the cluster not " "to supply any additional parameters."), }, { PCMK_STONITH_HOST_MAP, NULL, PCMK_VALUE_STRING, NULL, NULL, NULL, pcmk__opt_none, N_("A mapping of node names to port numbers for devices that do not " "support node names."), N_("For example, \"node1:1;node2:2,3\" would tell the cluster to use " "port 1 for node1 and ports 2 and 3 for node2."), }, { PCMK_STONITH_HOST_LIST, NULL, PCMK_VALUE_STRING, NULL, NULL, NULL, pcmk__opt_none, N_("Nodes targeted by this device"), N_("Comma-separated list of nodes that can be targeted by this device " "(for example, \"node1,node2,node3\"). 
If pcmk_host_check is " "\"static-list\", either this or pcmk_host_map must be set."), }, { PCMK_STONITH_HOST_CHECK, NULL, PCMK_VALUE_SELECT, PCMK_VALUE_DYNAMIC_LIST ", " PCMK_VALUE_STATIC_LIST ", " PCMK_VALUE_STATUS ", " PCMK_VALUE_NONE, NULL, NULL, pcmk__opt_none, N_("How to determine which nodes can be targeted by the device"), N_("Use \"dynamic-list\" to query the device via the 'list' command; " "\"static-list\" to check the pcmk_host_list attribute; " "\"status\" to query the device via the 'status' command; or " "\"none\" to assume every device can fence every node. " "The default value is \"static-list\" if pcmk_host_map or " "pcmk_host_list is set; otherwise \"dynamic-list\" if the device " "supports the list operation; otherwise \"status\" if the device " "supports the status operation; otherwise \"none\""), }, { PCMK_STONITH_DELAY_MAX, NULL, PCMK_VALUE_DURATION, NULL, "0s", NULL, pcmk__opt_none, N_("Enable a delay of no more than the time specified before executing " "fencing actions."), N_("Enable a delay of no more than the time specified before executing " "fencing actions. Pacemaker derives the overall delay by taking " "the value of pcmk_delay_base and adding a random delay value such " "that the sum is kept below this maximum."), }, { PCMK_STONITH_DELAY_BASE, NULL, PCMK_VALUE_STRING, NULL, "0s", NULL, pcmk__opt_none, N_("Enable a base delay for fencing actions and specify base delay " "value."), N_("This enables a static delay for fencing actions, which can help " "avoid \"death matches\" where two nodes try to fence each other " "at the same time. If pcmk_delay_max is also used, a random delay " "will be added such that the total delay is kept below that value. " "This can be set to a single time value to apply to any node " "targeted by this device (useful if a separate device is " "configured for each target), or to a node map (for example, " "\"node1:1s;node2:5\") to set a different value for each target."), }, { PCMK_STONITH_ACTION_LIMIT, NULL, PCMK_VALUE_INTEGER, NULL, "1", NULL, pcmk__opt_none, N_("The maximum number of actions that can be performed in parallel on " "this device"), N_("Cluster property concurrent-fencing=\"true\" needs to be " "configured first. Then use this to specify the maximum number of " "actions that can be performed in parallel on this device. A value " "of -1 means an unlimited number of actions can be performed in " "parallel."), }, { "pcmk_reboot_action", NULL, PCMK_VALUE_STRING, NULL, PCMK_ACTION_REBOOT, NULL, pcmk__opt_advanced, N_("An alternate command to run instead of 'reboot'"), N_("Some devices do not support the standard commands or may provide " "additional ones. Use this to specify an alternate, device-" "specific, command that implements the 'reboot' action."), }, { "pcmk_reboot_timeout", NULL, PCMK_VALUE_TIMEOUT, NULL, "60s", NULL, pcmk__opt_advanced, N_("Specify an alternate timeout to use for 'reboot' actions instead " "of stonith-timeout"), N_("Some devices need much more/less time to complete than normal. " "Use this to specify an alternate, device-specific, timeout for " "'reboot' actions."), }, { "pcmk_reboot_retries", NULL, PCMK_VALUE_INTEGER, NULL, "2", NULL, pcmk__opt_advanced, N_("The maximum number of times to try the 'reboot' command within the " "timeout period"), N_("Some devices do not support multiple connections. Operations may " "\"fail\" if the device is busy with another task. In that case, " "Pacemaker will automatically retry the operation if there is time " "remaining. 
Use this option to alter the number of times Pacemaker " "tries a 'reboot' action before giving up."), }, { "pcmk_off_action", NULL, PCMK_VALUE_STRING, NULL, PCMK_ACTION_OFF, NULL, pcmk__opt_advanced, N_("An alternate command to run instead of 'off'"), N_("Some devices do not support the standard commands or may provide " "additional ones. Use this to specify an alternate, device-" "specific, command that implements the 'off' action."), }, { "pcmk_off_timeout", NULL, PCMK_VALUE_TIMEOUT, NULL, "60s", NULL, pcmk__opt_advanced, N_("Specify an alternate timeout to use for 'off' actions instead of " "stonith-timeout"), N_("Some devices need much more/less time to complete than normal. " "Use this to specify an alternate, device-specific, timeout for " "'off' actions."), }, { "pcmk_off_retries", NULL, PCMK_VALUE_INTEGER, NULL, "2", NULL, pcmk__opt_advanced, N_("The maximum number of times to try the 'off' command within the " "timeout period"), N_("Some devices do not support multiple connections. Operations may " "\"fail\" if the device is busy with another task. In that case, " "Pacemaker will automatically retry the operation if there is time " "remaining. Use this option to alter the number of times Pacemaker " "tries an 'off' action before giving up."), }, { "pcmk_on_action", NULL, PCMK_VALUE_STRING, NULL, PCMK_ACTION_ON, NULL, pcmk__opt_advanced, N_("An alternate command to run instead of 'on'"), N_("Some devices do not support the standard commands or may provide " "additional ones. Use this to specify an alternate, device-" "specific, command that implements the 'on' action."), }, { "pcmk_on_timeout", NULL, PCMK_VALUE_TIMEOUT, NULL, "60s", NULL, pcmk__opt_advanced, N_("Specify an alternate timeout to use for 'on' actions instead of " "stonith-timeout"), N_("Some devices need much more/less time to complete than normal. " "Use this to specify an alternate, device-specific, timeout for " "'on' actions."), }, { "pcmk_on_retries", NULL, PCMK_VALUE_INTEGER, NULL, "2", NULL, pcmk__opt_advanced, N_("The maximum number of times to try the 'on' command within the " "timeout period"), N_("Some devices do not support multiple connections. Operations may " "\"fail\" if the device is busy with another task. In that case, " "Pacemaker will automatically retry the operation if there is time " "remaining. Use this option to alter the number of times Pacemaker " "tries an 'on' action before giving up."), }, { "pcmk_list_action", NULL, PCMK_VALUE_STRING, NULL, PCMK_ACTION_LIST, NULL, pcmk__opt_advanced, N_("An alternate command to run instead of 'list'"), N_("Some devices do not support the standard commands or may provide " "additional ones. Use this to specify an alternate, device-" "specific, command that implements the 'list' action."), }, { "pcmk_list_timeout", NULL, PCMK_VALUE_TIMEOUT, NULL, "60s", NULL, pcmk__opt_advanced, N_("Specify an alternate timeout to use for 'list' actions instead of " "stonith-timeout"), N_("Some devices need much more/less time to complete than normal. " "Use this to specify an alternate, device-specific, timeout for " "'list' actions."), }, { "pcmk_list_retries", NULL, PCMK_VALUE_INTEGER, NULL, "2", NULL, pcmk__opt_advanced, N_("The maximum number of times to try the 'list' command within the " "timeout period"), N_("Some devices do not support multiple connections. Operations may " "\"fail\" if the device is busy with another task. In that case, " "Pacemaker will automatically retry the operation if there is time " "remaining. 
Use this option to alter the number of times Pacemaker " "tries a 'list' action before giving up."), }, { "pcmk_monitor_action", NULL, PCMK_VALUE_STRING, NULL, PCMK_ACTION_MONITOR, NULL, pcmk__opt_advanced, N_("An alternate command to run instead of 'monitor'"), N_("Some devices do not support the standard commands or may provide " "additional ones. Use this to specify an alternate, device-" "specific, command that implements the 'monitor' action."), }, { "pcmk_monitor_timeout", NULL, PCMK_VALUE_TIMEOUT, NULL, "60s", NULL, pcmk__opt_advanced, N_("Specify an alternate timeout to use for 'monitor' actions instead " "of stonith-timeout"), N_("Some devices need much more/less time to complete than normal. " "Use this to specify an alternate, device-specific, timeout for " "'monitor' actions."), }, { "pcmk_monitor_retries", NULL, PCMK_VALUE_INTEGER, NULL, "2", NULL, pcmk__opt_advanced, N_("The maximum number of times to try the 'monitor' command within " "the timeout period"), N_("Some devices do not support multiple connections. Operations may " "\"fail\" if the device is busy with another task. In that case, " "Pacemaker will automatically retry the operation if there is time " "remaining. Use this option to alter the number of times Pacemaker " "tries a 'monitor' action before giving up."), }, { "pcmk_status_action", NULL, PCMK_VALUE_STRING, NULL, PCMK_ACTION_STATUS, NULL, pcmk__opt_advanced, N_("An alternate command to run instead of 'status'"), N_("Some devices do not support the standard commands or may provide " "additional ones. Use this to specify an alternate, device-" "specific, command that implements the 'status' action."), }, { "pcmk_status_timeout", NULL, PCMK_VALUE_TIMEOUT, NULL, "60s", NULL, pcmk__opt_advanced, N_("Specify an alternate timeout to use for 'status' actions instead " "of stonith-timeout"), N_("Some devices need much more/less time to complete than normal. " "Use this to specify an alternate, device-specific, timeout for " "'status' actions."), }, { "pcmk_status_retries", NULL, PCMK_VALUE_INTEGER, NULL, "2", NULL, pcmk__opt_advanced, N_("The maximum number of times to try the 'status' command within " "the timeout period"), N_("Some devices do not support multiple connections. Operations may " "\"fail\" if the device is busy with another task. In that case, " "Pacemaker will automatically retry the operation if there is time " "remaining. 
Use this option to alter the number of times Pacemaker " "tries a 'status' action before giving up."), }, { NULL, }, }; static const pcmk__cluster_option_t primitive_meta[] = { /* name, old name, type, allowed values, * default value, validator, * flags, * short description, * long description */ { PCMK_META_PRIORITY, NULL, PCMK_VALUE_SCORE, NULL, "0", NULL, pcmk__opt_none, N_("Resource assignment priority"), N_("If not all resources can be active, the cluster will stop " "lower-priority resources in order to keep higher-priority ones " "active."), }, { PCMK_META_CRITICAL, NULL, PCMK_VALUE_BOOLEAN, NULL, PCMK_VALUE_TRUE, NULL, pcmk__opt_none, N_("Default value for influence in colocation constraints"), N_("Use this value as the default for influence in all colocation " "constraints involving this resource, as well as in the implicit " "colocation constraints created if this resource is in a group."), }, { PCMK_META_TARGET_ROLE, NULL, PCMK_VALUE_SELECT, PCMK_ROLE_STOPPED ", " PCMK_ROLE_STARTED ", " PCMK_ROLE_UNPROMOTED ", " PCMK_ROLE_PROMOTED, PCMK_ROLE_STARTED, NULL, pcmk__opt_none, N_("State the cluster should attempt to keep this resource in"), N_("\"Stopped\" forces the resource to be stopped. " "\"Started\" allows the resource to be started (and in the case of " "promotable clone resources, promoted if appropriate). " "\"Unpromoted\" allows the resource to be started, but only in the " "unpromoted role if the resource is promotable. " "\"Promoted\" is equivalent to \"Started\"."), }, { PCMK_META_IS_MANAGED, NULL, PCMK_VALUE_BOOLEAN, NULL, PCMK_VALUE_TRUE, NULL, pcmk__opt_none, N_("Whether the cluster is allowed to actively change the resource's " "state"), N_("If false, the cluster will not start, stop, promote, or demote the " "resource on any node. Recurring actions for the resource are " "unaffected. If true, a true value for the maintenance-mode " "cluster option, the maintenance node attribute, or the " "maintenance resource meta-attribute overrides this."), }, { PCMK_META_MAINTENANCE, NULL, PCMK_VALUE_BOOLEAN, NULL, PCMK_VALUE_FALSE, NULL, pcmk__opt_none, N_("If true, the cluster will not schedule any actions involving the " "resource"), N_("If true, the cluster will not start, stop, promote, or demote the " "resource on any node, and will pause any recurring monitors " "(except those specifying role as \"Stopped\"). If false, a true " "value for the maintenance-mode cluster option or maintenance node " "attribute overrides this."), }, { PCMK_META_RESOURCE_STICKINESS, NULL, PCMK_VALUE_SCORE, NULL, NULL, NULL, pcmk__opt_none, N_("Score to add to the current node when a resource is already " "active"), N_("Score to add to the current node when a resource is already " "active. This allows running resources to stay where they are, " "even if they would be placed elsewhere if they were being started " "from a stopped state. " "The default is 1 for individual clone instances, and 0 for all " "other resources."), }, { PCMK_META_REQUIRES, NULL, PCMK_VALUE_SELECT, PCMK_VALUE_NOTHING ", " PCMK_VALUE_QUORUM ", " PCMK_VALUE_FENCING ", " PCMK_VALUE_UNFENCING, NULL, NULL, pcmk__opt_none, N_("Conditions under which the resource can be started"), N_("Conditions under which the resource can be started. " "\"nothing\" means the cluster can always start this resource. " "\"quorum\" means the cluster can start this resource only if a " "majority of the configured nodes are active. 
" "\"fencing\" means the cluster can start this resource only if a " "majority of the configured nodes are active and any failed or " "unknown nodes have been fenced. " "\"unfencing\" means the cluster can start this resource only if " "a majority of the configured nodes are active and any failed or " "unknown nodes have been fenced, and only on nodes that have been " "unfenced. " "The default is \"quorum\" for resources with a class of stonith; " "otherwise, \"unfencing\" if unfencing is active in the cluster; " "otherwise, \"fencing\" if the stonith-enabled cluster option is " "true; " "otherwise, \"quorum\"."), }, { PCMK_META_MIGRATION_THRESHOLD, NULL, PCMK_VALUE_SCORE, NULL, PCMK_VALUE_INFINITY, NULL, pcmk__opt_none, N_("Number of failures on a node before the resource becomes " "ineligible to run there."), N_("Number of failures that may occur for this resource on a node, " "before that node is marked ineligible to host this resource. A " "value of 0 indicates that this feature is disabled (the node will " "never be marked ineligible). By contrast, the cluster treats " "\"INFINITY\" (the default) as a very large but finite number. " "This option has an effect only if the failed operation specifies " "its on-fail attribute as \"restart\" (the default), and " "additionally for failed start operations, if the " "start-failure-is-fatal cluster property is set to false."), }, { PCMK_META_FAILURE_TIMEOUT, NULL, PCMK_VALUE_DURATION, NULL, "0", NULL, pcmk__opt_none, N_("Number of seconds before acting as if a failure had not occurred"), N_("Number of seconds after a failed action for this resource before " "acting as if the failure had not occurred, and potentially " "allowing the resource back to the node on which it failed. " "A value of 0 indicates that this feature is disabled."), }, { PCMK_META_MULTIPLE_ACTIVE, NULL, PCMK_VALUE_SELECT, PCMK_VALUE_BLOCK ", " PCMK_VALUE_STOP_ONLY ", " PCMK_VALUE_STOP_START ", " PCMK_VALUE_STOP_UNEXPECTED, PCMK_VALUE_STOP_START, NULL, pcmk__opt_none, N_("What to do if the cluster finds the resource active on more than " "one node"), N_("What to do if the cluster finds the resource active on more than " "one node. " "\"block\" means to mark the resource as unmanaged. " "\"stop_only\" means to stop all active instances of this resource " "and leave them stopped. " "\"stop_start\" means to stop all active instances of this " "resource and start the resource in one location only. " "\"stop_unexpected\" means to stop all active instances of this " "resource except where the resource should be active. (This should " "be used only when extra instances are not expected to disrupt " "existing instances, and the resource agent's monitor of an " "existing instance is capable of detecting any problems that could " "be caused. Note that any resources ordered after this one will " "still need to be restarted.)"), }, { PCMK_META_ALLOW_MIGRATE, NULL, PCMK_VALUE_BOOLEAN, NULL, NULL, NULL, pcmk__opt_none, N_("Whether the cluster should try to \"live migrate\" this resource " "when it needs to be moved"), N_("Whether the cluster should try to \"live migrate\" this resource " "when it needs to be moved. 
" "The default is true for ocf:pacemaker:remote resources, and false " "otherwise."), }, { PCMK_META_ALLOW_UNHEALTHY_NODES, NULL, PCMK_VALUE_BOOLEAN, NULL, PCMK_VALUE_FALSE, NULL, pcmk__opt_none, N_("Whether the resource should be allowed to run on a node even if " "the node's health score would otherwise prevent it"), NULL, }, { PCMK_META_CONTAINER_ATTRIBUTE_TARGET, NULL, PCMK_VALUE_STRING, NULL, NULL, NULL, pcmk__opt_none, N_("Where to check user-defined node attributes"), N_("Whether to check user-defined node attributes on the physical host " "where a container is running or on the local node. This is " "usually set for a bundle resource and inherited by the bundle's " "primitive resource. " "A value of \"host\" means to check user-defined node attributes " "on the underlying physical host. Any other value means to check " "user-defined node attributes on the local node (for a bundled " "primitive resource, this is the bundle node)."), }, { PCMK_META_REMOTE_NODE, NULL, PCMK_VALUE_STRING, NULL, NULL, NULL, pcmk__opt_none, N_("Name of the Pacemaker Remote guest node this resource is " "associated with, if any"), N_("Name of the Pacemaker Remote guest node this resource is " "associated with, if any. If specified, this both enables the " "resource as a guest node and defines the unique name used to " "identify the guest node. The guest must be configured to run the " "Pacemaker Remote daemon when it is started. " "WARNING: This value cannot overlap with any resource or node " "IDs."), }, { PCMK_META_REMOTE_ADDR, NULL, PCMK_VALUE_STRING, NULL, NULL, NULL, pcmk__opt_none, N_("If remote-node is specified, the IP address or hostname used to " "connect to the guest via Pacemaker Remote"), N_("If remote-node is specified, the IP address or hostname used to " "connect to the guest via Pacemaker Remote. The Pacemaker Remote " "daemon on the guest must be configured to accept connections on " "this address. " "The default is the value of the remote-node meta-attribute."), }, { PCMK_META_REMOTE_PORT, NULL, PCMK_VALUE_PORT, NULL, "3121", NULL, pcmk__opt_none, N_("If remote-node is specified, port on the guest used for its " "Pacemaker Remote connection"), N_("If remote-node is specified, the port on the guest used for its " "Pacemaker Remote connection. The Pacemaker Remote daemon on the " "guest must be configured to listen on this port."), }, { PCMK_META_REMOTE_CONNECT_TIMEOUT, NULL, PCMK_VALUE_TIMEOUT, NULL, "60s", NULL, pcmk__opt_none, N_("If remote-node is specified, how long before a pending Pacemaker " "Remote guest connection times out."), NULL, }, { PCMK_META_REMOTE_ALLOW_MIGRATE, NULL, PCMK_VALUE_BOOLEAN, NULL, PCMK_VALUE_TRUE, NULL, pcmk__opt_none, N_("If remote-node is specified, this acts as the allow-migrate " "meta-attribute for the implicit remote connection resource " "(ocf:pacemaker:remote)."), NULL, }, { NULL, }, }; /* * Environment variable option handling */ /*! * \internal * \brief Get the value of a Pacemaker environment variable option * * If an environment variable option is set, with either a PCMK_ or (for * backward compatibility) HA_ prefix, log and return the value. 
* * \param[in] option Environment variable name (without prefix) * * \return Value of environment variable option, or NULL if the option * name is too long or no value was found */ const char * pcmk__env_option(const char *option) { const char *const prefixes[] = {"PCMK_", "HA_"}; char env_name[NAME_MAX]; const char *value = NULL; CRM_CHECK(!pcmk__str_empty(option), return NULL); /* Try the PCMK_ prefix first, then fall back to the legacy HA_ prefix */ for (int i = 0; i < PCMK__NELEM(prefixes); i++) { int rv = snprintf(env_name, NAME_MAX, "%s%s", prefixes[i], option); if (rv < 0) { crm_err("Failed to write %s%s to buffer: %s", prefixes[i], option, strerror(errno)); return NULL; } if (rv >= sizeof(env_name)) { crm_trace("\"%s%s\" is too long", prefixes[i], option); continue; } value = getenv(env_name); if (value != NULL) { crm_trace("Found %s = %s", env_name, value); return value; } } crm_trace("Nothing found for %s", option); return NULL; } /*! * \brief Set or unset a Pacemaker environment variable option * * Set an environment variable option with a \c "PCMK_" prefix and optionally * an \c "HA_" prefix for backward compatibility. * * \param[in] option Environment variable name (without prefix) * \param[in] value New value (or NULL to unset) * \param[in] compat If false and \p value is not \c NULL, set only * \c "PCMK_