+ <error>crm_attribute: Invalid --list-options value 'asdf'. Allowed values: cluster</error>
+ </errors>
+ </status>
+</pacemaker-result>
+=#=#=#= End test: List all available options (invalid type) (XML) - Incorrect usage (64) =#=#=#=
+* Passed: crm_attribute - List all available options (invalid type) (XML)
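(Editorial aside, not part of the recorded test output: the listings below come from crm_attribute's option-listing mode. A rough sketch of the invocations these tests appear to exercise follows; the exact flag spellings, in particular --all and --output-as, are assumptions to verify against crm_attribute --help-all.)

    # Non-advanced cluster options, human-readable text
    crm_attribute --list-options=cluster

    # The same listing as XML metadata
    crm_attribute --list-options=cluster --output-as=xml

    # Include advanced and deprecated options as well
    crm_attribute --list-options=cluster --all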
+=#=#=#= Begin test: List non-advanced cluster options =#=#=#=
+Pacemaker cluster options
+
+Also known as properties, these are options that affect behavior across the entire cluster. They are configured within cluster_property_set elements inside the crm_config subsection of the CIB configuration section.
+
+ * dc-version: Pacemaker version on cluster node elected Designated Controller (DC)
+ * Includes a hash which identifies the exact revision the code was built from. Used for diagnostic purposes.
+ * Possible values (generated by Pacemaker): version (no default)
+
+ * cluster-infrastructure: The messaging layer on which Pacemaker is currently running
+ * Used for informational and diagnostic purposes.
+ * Possible values (generated by Pacemaker): string (no default)
+
+ * cluster-name: An arbitrary name for the cluster
+ * This optional value is mostly for users' convenience as desired in administration, but may also be used in Pacemaker configuration rules via the #cluster-name node attribute, and by higher-level tools and resource agents.
+ * Possible values: string (no default)
+
+ * dc-deadtime: How long to wait for a response from other nodes during start-up
+ * The optimal value will depend on the speed and load of your network and the type of switches used.
+ * Possible values: duration (default: )
+
+ * cluster-recheck-interval: Polling interval to recheck cluster state and evaluate rules with date specifications
+ * Pacemaker is primarily event-driven, and looks ahead to know when to recheck cluster state for failure-timeout settings and most time-based rules. However, it will also recheck the cluster after this amount of inactivity, to evaluate rules with date specifications and serve as a fail-safe for certain types of scheduler bugs. A value of 0 disables polling. A positive value sets an interval in seconds, unless other units are specified (for example, "5min").
+ * Possible values: duration (default: )
+
+ * fence-reaction: How a cluster node should react if notified of its own fencing
+ * A cluster node may receive notification of a "succeeded" fencing that targeted it if fencing is misconfigured, or if fabric fencing is in use that doesn't cut cluster communication. Use "stop" to attempt to immediately stop Pacemaker and stay stopped, or "panic" to attempt to immediately reboot the local node, falling back to stop on failure.
+ * Possible values: "stop" (default), "panic"
+
+ * no-quorum-policy: What to do when the cluster does not have quorum
+ * Possible values: "stop" (default), "freeze", "ignore", "demote", "suicide"
+
+ * shutdown-lock: Whether to lock resources to a cleanly shut down node
+ * When true, resources active on a node when it is cleanly shut down are kept "locked" to that node (not allowed to run elsewhere) until they start again on that node after it rejoins (or for at most shutdown-lock-limit, if set). Stonith resources and Pacemaker Remote connections are never locked. Clone and bundle instances and the promoted role of promotable clones are currently never locked, though support could be added in a future release.
+ * Possible values: boolean (default: )
+
+ * shutdown-lock-limit: Do not lock resources to a cleanly shut down node longer than this
+ * If shutdown-lock is true and this is set to a nonzero time duration, shutdown locks will expire after this much time has passed since the shutdown was initiated, even if the node has not rejoined.
+ * Possible values: duration (default: )
+
+ * enable-acl: Enable Access Control Lists (ACLs) for the CIB
+ * Possible values: boolean (default: )
+
+ * symmetric-cluster: Whether resources can run on any node by default
+ * Possible values: boolean (default: )
+
+ * maintenance-mode: Whether the cluster should refrain from monitoring, starting, and stopping resources
+ * Possible values: boolean (default: )
+
+ * start-failure-is-fatal: Whether a start failure should prevent a resource from being recovered on the same node
+ * When true, the cluster will immediately ban a resource from a node if it fails to start there. When false, the cluster will instead check the resource's fail count against its migration-threshold.
+ * Possible values: boolean (default: )
+
+ * enable-startup-probes: Whether the cluster should check for active resources during start-up
+ * Possible values: boolean (default: )
+
+ * stonith-action: Action to send to fence device when a node needs to be fenced ("poweroff" is a deprecated alias for "off")
+ * Possible values: "reboot" (default), "off", "poweroff"
+
+ * stonith-timeout: How long to wait for on, off, and reboot fence actions to complete by default
+ * Possible values: duration (default: )
+
+ * have-watchdog: Whether watchdog integration is enabled
+ * This is set automatically by the cluster according to whether SBD is detected to be in use. User-configured values are ignored. The value `true` is meaningful if diskless SBD is used and `stonith-watchdog-timeout` is nonzero. In that case, if fencing is required, watchdog-based self-fencing will be performed via SBD without requiring a fencing resource explicitly configured.
+ * Possible values (generated by Pacemaker): boolean (default: )
+
+ * stonith-watchdog-timeout: How long before nodes can be assumed to be safely down when watchdog-based self-fencing via SBD is in use
+ * If this is set to a positive value, lost nodes are assumed to achieve self-fencing using watchdog-based SBD within this much time. This does not require a fencing resource to be explicitly configured, though a fence_watchdog resource can be configured, to limit use to specific nodes. If this is set to 0 (the default), the cluster will never assume watchdog-based self-fencing. If this is set to a negative value, the cluster will use twice the local value of the `SBD_WATCHDOG_TIMEOUT` environment variable if that is positive, or otherwise treat this as 0. WARNING: When used, this timeout must be larger than `SBD_WATCHDOG_TIMEOUT` on all nodes that use watchdog-based SBD, and Pacemaker will refuse to start on any of those nodes where this is not true for the local value or SBD is not active. When this is set to a negative value, `SBD_WATCHDOG_TIMEOUT` must be set to the same value on all nodes that use SBD, otherwise data corruption or loss could occur.
+ * Possible values: timeout (default: )
+
+ * stonith-max-attempts: How many times fencing can fail before it will no longer be immediately re-attempted on a target
+ * Possible values: score (default: )
+
+ * concurrent-fencing: Allow performing fencing operations in parallel
+ * Possible values: boolean (default: )
+
+ * priority-fencing-delay: Apply fencing delay targeting the lost nodes with the highest total resource priority
+ * Apply specified delay for the fencings that are targeting the lost nodes with the highest total resource priority in case we don't have the majority of the nodes in our cluster partition, so that the more significant nodes potentially win any fencing match, which is especially meaningful under split-brain of 2-node cluster. A promoted resource instance takes the base priority + 1 on calculation if the base priority is not 0. Any static/random delays that are introduced by `pcmk_delay_base/max` configured for the corresponding fencing resources will be added to this delay. This delay should be significantly greater than, safely twice, the maximum `pcmk_delay_base/max`. By default, priority fencing delay is disabled.
+ * Possible values: duration (default: )
+
+ * node-pending-timeout: How long to wait for a node that has joined the cluster to join the controller process group
+ * Fence nodes that do not join the controller process group within this much time after joining the cluster, to allow the cluster to continue managing resources. A value of 0 means never fence pending nodes. Setting the value to 2h means fence nodes after 2 hours.
+ * Possible values: duration (default: )
+
+ * cluster-delay: Maximum time for node-to-node communication
+ * The node elected Designated Controller (DC) will consider an action failed if it does not get a response from the node executing the action within this time (after considering the action's own timeout). The "correct" value will depend on the speed and load of your network and cluster nodes.
+ * Possible values: duration (default: )
+
+ * load-threshold: Maximum amount of system load that should be used by cluster nodes
+ * The cluster will slow down its recovery process when the amount of system resources used (currently CPU) approaches this limit
+ * Possible values: percentage (default: )
+
+ * node-action-limit: Maximum number of jobs that can be scheduled per node (defaults to 2x cores)
+ * Possible values: integer (default: )
+
+ * batch-limit: Maximum number of jobs that the cluster may execute in parallel across all nodes
+ * The "correct" value will depend on the speed and load of your network and cluster nodes. If set to 0, the cluster will impose a dynamically calculated limit when any node has a high load.
+ * Possible values: integer (default: )
+
+ * migration-limit: The number of live migration actions that the cluster is allowed to execute in parallel on a node (-1 means no limit)
+ * Possible values: integer (default: )
+
+ * cluster-ipc-limit: Maximum IPC message backlog before disconnecting a cluster daemon
+ * Raise this if log has "Evicting client" messages for cluster daemon PIDs (a good value is the number of resources in the cluster multiplied by the number of nodes).
+ * Possible values: nonnegative_integer (default: )
+
+ * stop-all-resources: Whether the cluster should stop all active resources
+ * Possible values: boolean (default: )
+
+ * stop-orphan-resources: Whether to stop resources that were removed from the configuration
+ * Possible values: boolean (default: )
+
+ * stop-orphan-actions: Whether to cancel recurring actions removed from the configuration
+ * Possible values: boolean (default: )
+
+ * pe-error-series-max: The number of scheduler inputs resulting in errors to save
+ * Zero to disable, -1 to store unlimited.
+ * Possible values: integer (default: )
+
+ * pe-warn-series-max: The number of scheduler inputs resulting in warnings to save
+ * Zero to disable, -1 to store unlimited.
+ * Possible values: integer (default: )
+
+ * pe-input-series-max: The number of scheduler inputs without errors or warnings to save
+ * Zero to disable, -1 to store unlimited.
+ * Possible values: integer (default: )
+
+ * node-health-strategy: How cluster should react to node health attributes
+ * Requires external entities to create node attributes (named with the prefix "#health") with values "red", "yellow", or "green".
+ * Possible values: "none" (default), "migrate-on-red", "only-green", "progressive", "custom"
+
+ * node-health-base: Base health score assigned to a node
+ * Only used when "node-health-strategy" is set to "progressive".
+ * Possible values: score (default: )
+
+ * node-health-green: The score to use for a node health attribute whose value is "green"
+ * Only used when "node-health-strategy" is set to "custom" or "progressive".
+ * Possible values: score (default: )
+
+ * node-health-yellow: The score to use for a node health attribute whose value is "yellow"
+ * Only used when "node-health-strategy" is set to "custom" or "progressive".
+ * Possible values: score (default: )
+
+ * node-health-red: The score to use for a node health attribute whose value is "red"
+ * Only used when "node-health-strategy" is set to "custom" or "progressive".
+ * Possible values: score (default: )
+
+ * placement-strategy: How the cluster should allocate resources to nodes
+ * Possible values: "default" (default), "utilization", "minimal", "balanced"
+=#=#=#= End test: List non-advanced cluster options - OK (0) =#=#=#=
+* Passed: crm_attribute - List non-advanced cluster options
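(Editorial aside, not part of the recorded test output: the options listed above are ordinary CIB attributes in the crm_config section, so they can be read and written with crm_attribute itself. A minimal sketch, using maintenance-mode purely as an example:)

    # Read the current value of a cluster option
    crm_attribute --type crm_config --name maintenance-mode --query

    # Set it, creating or updating the corresponding nvpair in a
    # cluster_property_set element
    crm_attribute --type crm_config --name maintenance-mode --update true

    # Remove it again so the built-in default applies
    crm_attribute --type crm_config --name maintenance-mode --delete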
+=#=#=#= Begin test: List non-advanced cluster options (XML) =#=#=#=
+ <longdesc lang="en">Also known as properties, these are options that affect behavior across the entire cluster. They are configured within cluster_property_set elements inside the crm_config subsection of the CIB configuration section.</longdesc>
+ <longdesc lang="en">This optional value is mostly for users' convenience as desired in administration, but may also be used in Pacemaker configuration rules via the #cluster-name node attribute, and by higher-level tools and resource agents.</longdesc>
+ <shortdesc lang="en">An arbitrary name for the cluster</shortdesc>
+ <longdesc lang="en">Pacemaker is primarily event-driven, and looks ahead to know when to recheck cluster state for failure-timeout settings and most time-based rules. However, it will also recheck the cluster after this amount of inactivity, to evaluate rules with date specifications and serve as a fail-safe for certain types of scheduler bugs. A value of 0 disables polling. A positive value sets an interval in seconds, unless other units are specified (for example, "5min").</longdesc>
+ <shortdesc lang="en">Polling interval to recheck cluster state and evaluate rules with date specifications</shortdesc>
+ <longdesc lang="en">A cluster node may receive notification of a "succeeded" fencing that targeted it if fencing is misconfigured, or if fabric fencing is in use that doesn't cut cluster communication. Use "stop" to attempt to immediately stop Pacemaker and stay stopped, or "panic" to attempt to immediately reboot the local node, falling back to stop on failure.</longdesc>
+ <shortdesc lang="en">How a cluster node should react if notified of its own fencing</shortdesc>
+ <longdesc lang="en">Declare an election failed if it is not decided within this much time. If you need to adjust this value, it probably indicates the presence of a bug.</longdesc>
+ <shortdesc lang="en">Declare an election failed if it is not decided within this much time. If you need to adjust this value, it probably indicates the presence of a bug.</shortdesc>
+ <longdesc lang="en">Exit immediately if shutdown does not complete within this much time. If you need to adjust this value, it probably indicates the presence of a bug.</longdesc>
+ <shortdesc lang="en">Exit immediately if shutdown does not complete within this much time. If you need to adjust this value, it probably indicates the presence of a bug.</shortdesc>
+ <longdesc lang="en">Delay cluster recovery for this much time to allow for additional events to occur. Useful if your configuration is sensitive to the order in which ping updates arrive.</longdesc>
+ <shortdesc lang="en">Enabling this option will slow down cluster recovery under all conditions</shortdesc>
+ <longdesc lang="en">When true, resources active on a node when it is cleanly shut down are kept "locked" to that node (not allowed to run elsewhere) until they start again on that node after it rejoins (or for at most shutdown-lock-limit, if set). Stonith resources and Pacemaker Remote connections are never locked. Clone and bundle instances and the promoted role of promotable clones are currently never locked, though support could be added in a future release.</longdesc>
+ <shortdesc lang="en">Whether to lock resources to a cleanly shut down node</shortdesc>
+ <longdesc lang="en">If shutdown-lock is true and this is set to a nonzero time duration, shutdown locks will expire after this much time has passed since the shutdown was initiated, even if the node has not rejoined.</longdesc>
+ <shortdesc lang="en">Do not lock resources to a cleanly shut down node longer than this</shortdesc>
+ <longdesc lang="en">When true, the cluster will immediately ban a resource from a node if it fails to start there. When false, the cluster will instead check the resource's fail count against its migration-threshold.</longdesc>
+ <shortdesc lang="en">Whether a start failure should prevent a resource from being recovered on the same node</shortdesc>
+ <longdesc lang="en">If false, unresponsive nodes are immediately assumed to be harmless, and resources that were active on them may be recovered elsewhere. This can result in a "split-brain" situation, potentially leading to data loss and/or service unavailability.</longdesc>
+ <shortdesc lang="en">Whether nodes may be fenced as part of recovery</shortdesc>
+ <longdesc lang="en">This is set automatically by the cluster according to whether SBD is detected to be in use. User-configured values are ignored. The value `true` is meaningful if diskless SBD is used and `stonith-watchdog-timeout` is nonzero. In that case, if fencing is required, watchdog-based self-fencing will be performed via SBD without requiring a fencing resource explicitly configured.</longdesc>
+ <shortdesc lang="en">Whether watchdog integration is enabled</shortdesc>
+ <longdesc lang="en">If this is set to a positive value, lost nodes are assumed to achieve self-fencing using watchdog-based SBD within this much time. This does not require a fencing resource to be explicitly configured, though a fence_watchdog resource can be configured, to limit use to specific nodes. If this is set to 0 (the default), the cluster will never assume watchdog-based self-fencing. If this is set to a negative value, the cluster will use twice the local value of the `SBD_WATCHDOG_TIMEOUT` environment variable if that is positive, or otherwise treat this as 0. WARNING: When used, this timeout must be larger than `SBD_WATCHDOG_TIMEOUT` on all nodes that use watchdog-based SBD, and Pacemaker will refuse to start on any of those nodes where this is not true for the local value or SBD is not active. When this is set to a negative value, `SBD_WATCHDOG_TIMEOUT` must be set to the same value on all nodes that use SBD, otherwise data corruption or loss could occur.</longdesc>
+ <shortdesc lang="en">How long before nodes can be assumed to be safely down when watchdog-based self-fencing via SBD is in use</shortdesc>
+ <longdesc lang="en">Setting this to false may lead to a "split-brain" situation, potentially leading to data loss and/or service unavailability.</longdesc>
+ <shortdesc lang="en">Whether to fence unseen nodes at start-up</shortdesc>
+ <longdesc lang="en">Apply specified delay for the fencings that are targeting the lost nodes with the highest total resource priority in case we don't have the majority of the nodes in our cluster partition, so that the more significant nodes potentially win any fencing match, which is especially meaningful under split-brain of 2-node cluster. A promoted resource instance takes the base priority + 1 on calculation if the base priority is not 0. Any static/random delays that are introduced by `pcmk_delay_base/max` configured for the corresponding fencing resources will be added to this delay. This delay should be significantly greater than, safely twice, the maximum `pcmk_delay_base/max`. By default, priority fencing delay is disabled.</longdesc>
+ <shortdesc lang="en">Apply fencing delay targeting the lost nodes with the highest total resource priority</shortdesc>
+ <longdesc lang="en">Fence nodes that do not join the controller process group within this much time after joining the cluster, to allow the cluster to continue managing resources. A value of 0 means never fence pending nodes. Setting the value to 2h means fence nodes after 2 hours.</longdesc>
+ <shortdesc lang="en">How long to wait for a node that has joined the cluster to join the controller process group</shortdesc>
+ <longdesc lang="en">The node elected Designated Controller (DC) will consider an action failed if it does not get a response from the node executing the action within this time (after considering the action's own timeout). The "correct" value will depend on the speed and load of your network and cluster nodes.</longdesc>
+ <shortdesc lang="en">Maximum time for node-to-node communication</shortdesc>
+ <longdesc lang="en">The cluster will slow down its recovery process when the amount of system resources used (currently CPU) approaches this limit</longdesc>
+ <shortdesc lang="en">Maximum amount of system load that should be used by cluster nodes</shortdesc>
+ <longdesc lang="en">The "correct" value will depend on the speed and load of your network and cluster nodes. If set to 0, the cluster will impose a dynamically calculated limit when any node has a high load.</longdesc>
+ <shortdesc lang="en">Maximum number of jobs that the cluster may execute in parallel across all nodes</shortdesc>
+ <longdesc lang="en">The number of live migration actions that the cluster is allowed to execute in parallel on a node (-1 means no limit)</longdesc>
+ <shortdesc lang="en">The number of live migration actions that the cluster is allowed to execute in parallel on a node (-1 means no limit)</shortdesc>
+ <longdesc lang="en">Raise this if log has "Evicting client" messages for cluster daemon PIDs (a good value is the number of resources in the cluster multiplied by the number of nodes).</longdesc>
+ <shortdesc lang="en">Maximum IPC message backlog before disconnecting a cluster daemon</shortdesc>
+ <longdesc lang="en">Requires external entities to create node attributes (named with the prefix "#health") with values "red", "yellow", or "green".</longdesc>
+ <shortdesc lang="en">How cluster should react to node health attributes</shortdesc>
+ <longdesc lang="en">How the cluster should allocate resources to nodes</longdesc>
+ <shortdesc lang="en">How the cluster should allocate resources to nodes</shortdesc>
+ <content type="select" default="">
+ <option value="default"/>
+ <option value="utilization"/>
+ <option value="minimal"/>
+ <option value="balanced"/>
+ </content>
+ </parameter>
+ </parameters>
+ </resource-agent>
+ <status code="0" message="OK"/>
+</pacemaker-result>
+=#=#=#= End test: List non-advanced cluster options (XML) - OK (0) =#=#=#=
+* Passed: crm_attribute - List non-advanced cluster options (XML)
+=#=#=#= Begin test: List all available cluster options =#=#=#=
+Pacemaker cluster options
+
+Also known as properties, these are options that affect behavior across the entire cluster. They are configured within cluster_property_set elements inside the crm_config subsection of the CIB configuration section.
+
+ * dc-version: Pacemaker version on cluster node elected Designated Controller (DC)
+ * Includes a hash which identifies the exact revision the code was built from. Used for diagnostic purposes.
+ * Possible values (generated by Pacemaker): version (no default)
+
+ * cluster-infrastructure: The messaging layer on which Pacemaker is currently running
+ * Used for informational and diagnostic purposes.
+ * Possible values (generated by Pacemaker): string (no default)
+
+ * cluster-name: An arbitrary name for the cluster
+ * This optional value is mostly for users' convenience as desired in administration, but may also be used in Pacemaker configuration rules via the #cluster-name node attribute, and by higher-level tools and resource agents.
+ * Possible values: string (no default)
+
+ * dc-deadtime: How long to wait for a response from other nodes during start-up
+ * The optimal value will depend on the speed and load of your network and the type of switches used.
+ * Possible values: duration (default: )
+
+ * cluster-recheck-interval: Polling interval to recheck cluster state and evaluate rules with date specifications
+ * Pacemaker is primarily event-driven, and looks ahead to know when to recheck cluster state for failure-timeout settings and most time-based rules. However, it will also recheck the cluster after this amount of inactivity, to evaluate rules with date specifications and serve as a fail-safe for certain types of scheduler bugs. A value of 0 disables polling. A positive value sets an interval in seconds, unless other units are specified (for example, "5min").
+ * Possible values: duration (default: )
+
+ * fence-reaction: How a cluster node should react if notified of its own fencing
+ * A cluster node may receive notification of a "succeeded" fencing that targeted it if fencing is misconfigured, or if fabric fencing is in use that doesn't cut cluster communication. Use "stop" to attempt to immediately stop Pacemaker and stay stopped, or "panic" to attempt to immediately reboot the local node, falling back to stop on failure.
+ * Possible values: "stop" (default), "panic"
+
+ * no-quorum-policy: What to do when the cluster does not have quorum
+ * Possible values: "stop" (default), "freeze", "ignore", "demote", "suicide"
+
+ * shutdown-lock: Whether to lock resources to a cleanly shut down node
+ * When true, resources active on a node when it is cleanly shut down are kept "locked" to that node (not allowed to run elsewhere) until they start again on that node after it rejoins (or for at most shutdown-lock-limit, if set). Stonith resources and Pacemaker Remote connections are never locked. Clone and bundle instances and the promoted role of promotable clones are currently never locked, though support could be added in a future release.
+ * Possible values: boolean (default: )
+
+ * shutdown-lock-limit: Do not lock resources to a cleanly shut down node longer than this
+ * If shutdown-lock is true and this is set to a nonzero time duration, shutdown locks will expire after this much time has passed since the shutdown was initiated, even if the node has not rejoined.
+ * Possible values: duration (default: )
+
+ * enable-acl: Enable Access Control Lists (ACLs) for the CIB
+ * Possible values: boolean (default: )
+
+ * symmetric-cluster: Whether resources can run on any node by default
+ * Possible values: boolean (default: )
+
+ * maintenance-mode: Whether the cluster should refrain from monitoring, starting, and stopping resources
+ * Possible values: boolean (default: )
+
+ * start-failure-is-fatal: Whether a start failure should prevent a resource from being recovered on the same node
+ * When true, the cluster will immediately ban a resource from a node if it fails to start there. When false, the cluster will instead check the resource's fail count against its migration-threshold.
+ * Possible values: boolean (default: )
+
+ * enable-startup-probes: Whether the cluster should check for active resources during start-up
+ * Possible values: boolean (default: )
+
+ * stonith-action: Action to send to fence device when a node needs to be fenced ("poweroff" is a deprecated alias for "off")
+ * Possible values: "reboot" (default), "off", "poweroff"
+
+ * stonith-timeout: How long to wait for on, off, and reboot fence actions to complete by default
+ * Possible values: duration (default: )
+
+ * have-watchdog: Whether watchdog integration is enabled
+ * This is set automatically by the cluster according to whether SBD is detected to be in use. User-configured values are ignored. The value `true` is meaningful if diskless SBD is used and `stonith-watchdog-timeout` is nonzero. In that case, if fencing is required, watchdog-based self-fencing will be performed via SBD without requiring a fencing resource explicitly configured.
+ * Possible values (generated by Pacemaker): boolean (default: )
+
+ * stonith-watchdog-timeout: How long before nodes can be assumed to be safely down when watchdog-based self-fencing via SBD is in use
+ * If this is set to a positive value, lost nodes are assumed to achieve self-fencing using watchdog-based SBD within this much time. This does not require a fencing resource to be explicitly configured, though a fence_watchdog resource can be configured, to limit use to specific nodes. If this is set to 0 (the default), the cluster will never assume watchdog-based self-fencing. If this is set to a negative value, the cluster will use twice the local value of the `SBD_WATCHDOG_TIMEOUT` environment variable if that is positive, or otherwise treat this as 0. WARNING: When used, this timeout must be larger than `SBD_WATCHDOG_TIMEOUT` on all nodes that use watchdog-based SBD, and Pacemaker will refuse to start on any of those nodes where this is not true for the local value or SBD is not active. When this is set to a negative value, `SBD_WATCHDOG_TIMEOUT` must be set to the same value on all nodes that use SBD, otherwise data corruption or loss could occur.
+ * Possible values: timeout (default: )
+
+ * stonith-max-attempts: How many times fencing can fail before it will no longer be immediately re-attempted on a target
+ * Possible values: score (default: )
+
+ * concurrent-fencing: Allow performing fencing operations in parallel
+ * Possible values: boolean (default: )
+
+ * priority-fencing-delay: Apply fencing delay targeting the lost nodes with the highest total resource priority
+ * Apply specified delay for the fencings that are targeting the lost nodes with the highest total resource priority in case we don't have the majority of the nodes in our cluster partition, so that the more significant nodes potentially win any fencing match, which is especially meaningful under split-brain of 2-node cluster. A promoted resource instance takes the base priority + 1 on calculation if the base priority is not 0. Any static/random delays that are introduced by `pcmk_delay_base/max` configured for the corresponding fencing resources will be added to this delay. This delay should be significantly greater than, safely twice, the maximum `pcmk_delay_base/max`. By default, priority fencing delay is disabled.
+ * Possible values: duration (default: )
+
+ * node-pending-timeout: How long to wait for a node that has joined the cluster to join the controller process group
+ * Fence nodes that do not join the controller process group within this much time after joining the cluster, to allow the cluster to continue managing resources. A value of 0 means never fence pending nodes. Setting the value to 2h means fence nodes after 2 hours.
+ * Possible values: duration (default: )
+
+ * cluster-delay: Maximum time for node-to-node communication
+ * The node elected Designated Controller (DC) will consider an action failed if it does not get a response from the node executing the action within this time (after considering the action's own timeout). The "correct" value will depend on the speed and load of your network and cluster nodes.
+ * Possible values: duration (default: )
+
+ * load-threshold: Maximum amount of system load that should be used by cluster nodes
+ * The cluster will slow down its recovery process when the amount of system resources used (currently CPU) approaches this limit
+ * Possible values: percentage (default: )
+
+ * node-action-limit: Maximum number of jobs that can be scheduled per node (defaults to 2x cores)
+ * Possible values: integer (default: )
+
+ * batch-limit: Maximum number of jobs that the cluster may execute in parallel across all nodes
+ * The "correct" value will depend on the speed and load of your network and cluster nodes. If set to 0, the cluster will impose a dynamically calculated limit when any node has a high load.
+ * Possible values: integer (default: )
+
+ * migration-limit: The number of live migration actions that the cluster is allowed to execute in parallel on a node (-1 means no limit)
+ * Possible values: integer (default: )
+
+ * cluster-ipc-limit: Maximum IPC message backlog before disconnecting a cluster daemon
+ * Raise this if log has "Evicting client" messages for cluster daemon PIDs (a good value is the number of resources in the cluster multiplied by the number of nodes).
+ * Possible values: nonnegative_integer (default: )
+
+ * stop-all-resources: Whether the cluster should stop all active resources
+ * Possible values: boolean (default: )
+
+ * stop-orphan-resources: Whether to stop resources that were removed from the configuration
+ * Possible values: boolean (default: )
+
+ * stop-orphan-actions: Whether to cancel recurring actions removed from the configuration
+ * Possible values: boolean (default: )
+
+ * pe-error-series-max: The number of scheduler inputs resulting in errors to save
+ * Zero to disable, -1 to store unlimited.
+ * Possible values: integer (default: )
+
+ * pe-warn-series-max: The number of scheduler inputs resulting in warnings to save
+ * Zero to disable, -1 to store unlimited.
+ * Possible values: integer (default: )
+
+ * pe-input-series-max: The number of scheduler inputs without errors or warnings to save
+ * Zero to disable, -1 to store unlimited.
+ * Possible values: integer (default: )
+
+ * node-health-strategy: How cluster should react to node health attributes
+ * Requires external entities to create node attributes (named with the prefix "#health") with values "red", "yellow", or "green".
+ * Possible values: "none" (default), "migrate-on-red", "only-green", "progressive", "custom"
+
+ * node-health-base: Base health score assigned to a node
+ * Only used when "node-health-strategy" is set to "progressive".
+ * Possible values: score (default: )
+
+ * node-health-green: The score to use for a node health attribute whose value is "green"
+ * Only used when "node-health-strategy" is set to "custom" or "progressive".
+ * Possible values: score (default: )
+
+ * node-health-yellow: The score to use for a node health attribute whose value is "yellow"
+ * Only used when "node-health-strategy" is set to "custom" or "progressive".
+ * Possible values: score (default: )
+
+ * node-health-red: The score to use for a node health attribute whose value is "red"
+ * Only used when "node-health-strategy" is set to "custom" or "progressive".
+ * Possible values: score (default: )
+
+ * placement-strategy: How the cluster should allocate resources to nodes
+ * Possible values: "default" (default), "utilization", "minimal", "balanced"
+
+ * ADVANCED OPTIONS:
+
+ * election-timeout: Declare an election failed if it is not decided within this much time. If you need to adjust this value, it probably indicates the presence of a bug.
+ * Possible values: duration (default: )
+
+ * shutdown-escalation: Exit immediately if shutdown does not complete within this much time. If you need to adjust this value, it probably indicates the presence of a bug.
+ * Possible values: duration (default: )
+
+ * join-integration-timeout: If you need to adjust this value, it probably indicates the presence of a bug.
+ * Possible values: duration (default: )
+
+ * join-finalization-timeout: If you need to adjust this value, it probably indicates the presence of a bug.
+ * Possible values: duration (default: )
+
+ * transition-delay: Enabling this option will slow down cluster recovery under all conditions
+ * Delay cluster recovery for this much time to allow for additional events to occur. Useful if your configuration is sensitive to the order in which ping updates arrive.
+ * Possible values: duration (default: )
+
+ * stonith-enabled: Whether nodes may be fenced as part of recovery
+ * If false, unresponsive nodes are immediately assumed to be harmless, and resources that were active on them may be recovered elsewhere. This can result in a "split-brain" situation, potentially leading to data loss and/or service unavailability.
+ * Possible values: boolean (default: )
+
+ * startup-fencing: Whether to fence unseen nodes at start-up
+ * Setting this to false may lead to a "split-brain" situation, potentially leading to data loss and/or service unavailability.
+ * Possible values: boolean (default: )
+
+ * DEPRECATED OPTIONS (will be removed in a future release):
+
+ * remove-after-stop: Whether to remove stopped resources from the executor
+ * Values other than default are poorly tested and potentially dangerous.
+ * Possible values: boolean (default: )
+=#=#=#= End test: List all available cluster options - OK (0) =#=#=#=
+* Passed: crm_attribute - List all available cluster options
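(Editorial aside, not part of the recorded test output: as the long description above notes, these options live in cluster_property_set elements inside the crm_config subsection of the CIB configuration section. A sketch of that layout; the id attributes and values shown are illustrative only:)

    <crm_config>
      <cluster_property_set id="cib-bootstrap-options">
        <!-- each nvpair name matches one of the cluster options listed above -->
        <nvpair id="cib-bootstrap-options-no-quorum-policy"
                name="no-quorum-policy" value="stop"/>
        <nvpair id="cib-bootstrap-options-stonith-enabled"
                name="stonith-enabled" value="true"/>
      </cluster_property_set>
    </crm_config>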
+=#=#=#= Begin test: List all available cluster options (XML) =#=#=#=
+ <longdesc lang="en">Also known as properties, these are options that affect behavior across the entire cluster. They are configured within cluster_property_set elements inside the crm_config subsection of the CIB configuration section.</longdesc>
+ <longdesc lang="en">This optional value is mostly for users' convenience as desired in administration, but may also be used in Pacemaker configuration rules via the #cluster-name node attribute, and by higher-level tools and resource agents.</longdesc>
+ <shortdesc lang="en">An arbitrary name for the cluster</shortdesc>
+ <longdesc lang="en">Pacemaker is primarily event-driven, and looks ahead to know when to recheck cluster state for failure-timeout settings and most time-based rules. However, it will also recheck the cluster after this amount of inactivity, to evaluate rules with date specifications and serve as a fail-safe for certain types of scheduler bugs. A value of 0 disables polling. A positive value sets an interval in seconds, unless other units are specified (for example, "5min").</longdesc>
+ <shortdesc lang="en">Polling interval to recheck cluster state and evaluate rules with date specifications</shortdesc>
+ <longdesc lang="en">A cluster node may receive notification of a "succeeded" fencing that targeted it if fencing is misconfigured, or if fabric fencing is in use that doesn't cut cluster communication. Use "stop" to attempt to immediately stop Pacemaker and stay stopped, or "panic" to attempt to immediately reboot the local node, falling back to stop on failure.</longdesc>
+ <shortdesc lang="en">How a cluster node should react if notified of its own fencing</shortdesc>
+ <longdesc lang="en">Declare an election failed if it is not decided within this much time. If you need to adjust this value, it probably indicates the presence of a bug.</longdesc>
+ <shortdesc lang="en">Declare an election failed if it is not decided within this much time. If you need to adjust this value, it probably indicates the presence of a bug.</shortdesc>
+ <longdesc lang="en">Exit immediately if shutdown does not complete within this much time. If you need to adjust this value, it probably indicates the presence of a bug.</longdesc>
+ <shortdesc lang="en">Exit immediately if shutdown does not complete within this much time. If you need to adjust this value, it probably indicates the presence of a bug.</shortdesc>
+ <longdesc lang="en">Delay cluster recovery for this much time to allow for additional events to occur. Useful if your configuration is sensitive to the order in which ping updates arrive.</longdesc>
+ <shortdesc lang="en">Enabling this option will slow down cluster recovery under all conditions</shortdesc>
+ <longdesc lang="en">When true, resources active on a node when it is cleanly shut down are kept "locked" to that node (not allowed to run elsewhere) until they start again on that node after it rejoins (or for at most shutdown-lock-limit, if set). Stonith resources and Pacemaker Remote connections are never locked. Clone and bundle instances and the promoted role of promotable clones are currently never locked, though support could be added in a future release.</longdesc>
+ <shortdesc lang="en">Whether to lock resources to a cleanly shut down node</shortdesc>
+ <longdesc lang="en">If shutdown-lock is true and this is set to a nonzero time duration, shutdown locks will expire after this much time has passed since the shutdown was initiated, even if the node has not rejoined.</longdesc>
+ <shortdesc lang="en">Do not lock resources to a cleanly shut down node longer than this</shortdesc>
+ <longdesc lang="en">When true, the cluster will immediately ban a resource from a node if it fails to start there. When false, the cluster will instead check the resource's fail count against its migration-threshold.</longdesc>
+ <shortdesc lang="en">Whether a start failure should prevent a resource from being recovered on the same node</shortdesc>
+ <longdesc lang="en">If false, unresponsive nodes are immediately assumed to be harmless, and resources that were active on them may be recovered elsewhere. This can result in a "split-brain" situation, potentially leading to data loss and/or service unavailability.</longdesc>
+ <shortdesc lang="en">Whether nodes may be fenced as part of recovery</shortdesc>
+ <longdesc lang="en">This is set automatically by the cluster according to whether SBD is detected to be in use. User-configured values are ignored. The value `true` is meaningful if diskless SBD is used and `stonith-watchdog-timeout` is nonzero. In that case, if fencing is required, watchdog-based self-fencing will be performed via SBD without requiring a fencing resource explicitly configured.</longdesc>
+ <shortdesc lang="en">Whether watchdog integration is enabled</shortdesc>
+ <longdesc lang="en">If this is set to a positive value, lost nodes are assumed to achieve self-fencing using watchdog-based SBD within this much time. This does not require a fencing resource to be explicitly configured, though a fence_watchdog resource can be configured, to limit use to specific nodes. If this is set to 0 (the default), the cluster will never assume watchdog-based self-fencing. If this is set to a negative value, the cluster will use twice the local value of the `SBD_WATCHDOG_TIMEOUT` environment variable if that is positive, or otherwise treat this as 0. WARNING: When used, this timeout must be larger than `SBD_WATCHDOG_TIMEOUT` on all nodes that use watchdog-based SBD, and Pacemaker will refuse to start on any of those nodes where this is not true for the local value or SBD is not active. When this is set to a negative value, `SBD_WATCHDOG_TIMEOUT` must be set to the same value on all nodes that use SBD, otherwise data corruption or loss could occur.</longdesc>
+ <shortdesc lang="en">How long before nodes can be assumed to be safely down when watchdog-based self-fencing via SBD is in use</shortdesc>
+ <longdesc lang="en">Setting this to false may lead to a "split-brain" situation, potentially leading to data loss and/or service unavailability.</longdesc>
+ <shortdesc lang="en">Whether to fence unseen nodes at start-up</shortdesc>
+ <longdesc lang="en">Apply specified delay for the fencings that are targeting the lost nodes with the highest total resource priority in case we don't have the majority of the nodes in our cluster partition, so that the more significant nodes potentially win any fencing match, which is especially meaningful under split-brain of 2-node cluster. A promoted resource instance takes the base priority + 1 on calculation if the base priority is not 0. Any static/random delays that are introduced by `pcmk_delay_base/max` configured for the corresponding fencing resources will be added to this delay. This delay should be significantly greater than, safely twice, the maximum `pcmk_delay_base/max`. By default, priority fencing delay is disabled.</longdesc>
+ <shortdesc lang="en">Apply fencing delay targeting the lost nodes with the highest total resource priority</shortdesc>
+ <longdesc lang="en">Fence nodes that do not join the controller process group within this much time after joining the cluster, to allow the cluster to continue managing resources. A value of 0 means never fence pending nodes. Setting the value to 2h means fence nodes after 2 hours.</longdesc>
+ <shortdesc lang="en">How long to wait for a node that has joined the cluster to join the controller process group</shortdesc>
+ <longdesc lang="en">The node elected Designated Controller (DC) will consider an action failed if it does not get a response from the node executing the action within this time (after considering the action's own timeout). The "correct" value will depend on the speed and load of your network and cluster nodes.</longdesc>
+ <shortdesc lang="en">Maximum time for node-to-node communication</shortdesc>
+ <longdesc lang="en">The cluster will slow down its recovery process when the amount of system resources used (currently CPU) approaches this limit</longdesc>
+ <shortdesc lang="en">Maximum amount of system load that should be used by cluster nodes</shortdesc>
+ <longdesc lang="en">The "correct" value will depend on the speed and load of your network and cluster nodes. If set to 0, the cluster will impose a dynamically calculated limit when any node has a high load.</longdesc>
+ <shortdesc lang="en">Maximum number of jobs that the cluster may execute in parallel across all nodes</shortdesc>
+ <longdesc lang="en">The number of live migration actions that the cluster is allowed to execute in parallel on a node (-1 means no limit)</longdesc>
+ <shortdesc lang="en">The number of live migration actions that the cluster is allowed to execute in parallel on a node (-1 means no limit)</shortdesc>
+ <longdesc lang="en">Raise this if log has "Evicting client" messages for cluster daemon PIDs (a good value is the number of resources in the cluster multiplied by the number of nodes).</longdesc>
+ <shortdesc lang="en">Maximum IPC message backlog before disconnecting a cluster daemon</shortdesc>
+ <longdesc lang="en">Requires external entities to create node attributes (named with the prefix "#health") with values "red", "yellow", or "green".</longdesc>
+ <shortdesc lang="en">How cluster should react to node health attributes</shortdesc>
+=#=#=#= End test: Test '+=' nvpair value update syntax (--score not set) (XML) - OK (0) =#=#=#=
+* Passed: crm_attribute - Test '+=' nvpair value update syntax (--score not set) (XML)
+=#=#=#= Begin test: Require --force for CIB erasure =#=#=#=
+cibadmin: The supplied command is considered dangerous. To prevent accidental destruction of the cluster, the --force flag is required in order to proceed.
+=#=#=#= Current cib after: Require --force for CIB erasure =#=#=#=