diff --git a/SECURITY.md b/SECURITY.md
new file mode 100644
index 0000000000..a0088851a7
--- /dev/null
+++ b/SECURITY.md
@@ -0,0 +1,18 @@
+# Security Policy
+
+## Supported Versions
+
+Pacemaker's 2.1 and 3.0 release series are actively developed and receive
+security fixes.
+
+## Reporting a Vulnerability
+
+If you have a support contract with an operating system vendor such as Red Hat
+or SUSE, please submit potentially security-related reports via the vendor's
+usual method. Otherwise, please submit a report via:
+
+   https://github.com/ClusterLabs/pacemaker/security
+
+## Past Vulnerabilities
+
+See https://projects.clusterlabs.org/w/cluster_administration/cves/
diff --git a/cts/cli/regression.tools.exp b/cts/cli/regression.tools.exp
index af3a788cc7..6eef178681
--- a/cts/cli/regression.tools.exp
+++ b/cts/cli/regression.tools.exp
@@ -1,10364 +1,10364 @@
Created new pacemaker configuration
A new shadow instance was created. To begin using it, enter the following into your shell:
    export CIB_shadow=cts-cli
=#=#=#= Begin test: Validate CIB =#=#=#=
=#=#=#= Current cib after: Validate CIB =#=#=#=
=#=#=#= End test: Validate CIB - OK (0) =#=#=#=
* Passed: cibadmin - Validate CIB
=#=#=#= Begin test: List all available options (invalid type) =#=#=#=
crm_attribute: Invalid --list-options value 'asdf'. Allowed values: cluster
=#=#=#= End test: List all available options (invalid type) - Incorrect usage (64) =#=#=#=
* Passed: crm_attribute - List all available options (invalid type)
=#=#=#= Begin test: List all available options (invalid type) (XML) =#=#=#=
crm_attribute: Invalid --list-options value 'asdf'. Allowed values: cluster
=#=#=#= End test: List all available options (invalid type) (XML) - Incorrect usage (64) =#=#=#=
* Passed: crm_attribute - List all available options (invalid type) (XML)
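For reference, the option-listing tests in this file can be reproduced by hand. A minimal sketch, assuming crm_attribute's --list-options flag as exercised above (whether an additional flag such as --all distinguishes the "all available" variants below is an assumption, not confirmed by this file):

    # Plain-text listing of cluster options
    crm_attribute --list-options=cluster
    # Same metadata rendered as XML, as in the (XML) test variants
    crm_attribute --list-options=cluster --output-as=xml
    # Any other value is rejected as incorrect usage (exit code 64)
    crm_attribute --list-options=asdf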
=#=#=#= Begin test: List non-advanced cluster options =#=#=#=
Pacemaker cluster options
Also known as properties, these are options that affect behavior across the entire cluster. They are configured within cluster_property_set elements inside the crm_config subsection of the CIB configuration section.
* dc-version: Pacemaker version on cluster node elected Designated Controller (DC)
  * Includes a hash which identifies the exact revision the code was built from. Used for diagnostic purposes.
  * Possible values (generated by Pacemaker): version (no default)
* cluster-infrastructure: The messaging layer on which Pacemaker is currently running
  * Used for informational and diagnostic purposes.
  * Possible values (generated by Pacemaker): string (no default)
* cluster-name: An arbitrary name for the cluster
  * This optional value is mostly for users' convenience as desired in administration, but may also be used in Pacemaker configuration rules via the #cluster-name node attribute, and by higher-level tools and resource agents.
  * Possible values: string (no default)
* dc-deadtime: How long to wait for a response from other nodes during start-up
  * The optimal value will depend on the speed and load of your network and the type of switches used.
  * Possible values: duration (default: )
* cluster-recheck-interval: Polling interval to recheck cluster state and evaluate rules with date specifications
  * Pacemaker is primarily event-driven, and looks ahead to know when to recheck cluster state for failure-timeout settings and most time-based rules. However, it will also recheck the cluster after this amount of inactivity, to evaluate rules with date specifications and serve as a fail-safe for certain types of scheduler bugs. A value of 0 disables polling. A positive value sets an interval in seconds, unless other units are specified (for example, "5min").
  * Possible values: duration (default: )
* fence-reaction: How a cluster node should react if notified of its own fencing
  * A cluster node may receive notification of a "succeeded" fencing that targeted it if fencing is misconfigured, or if fabric fencing is in use that doesn't cut cluster communication. Use "stop" to attempt to immediately stop Pacemaker and stay stopped, or "panic" to attempt to immediately reboot the local node, falling back to stop on failure.
  * Possible values: "stop" (default), "panic"
* no-quorum-policy: What to do when the cluster does not have quorum
  * Possible values: "stop" (default), "freeze", "ignore", "demote", "suicide"
* shutdown-lock: Whether to lock resources to a cleanly shut down node
  * When true, resources active on a node when it is cleanly shut down are kept "locked" to that node (not allowed to run elsewhere) until they start again on that node after it rejoins (or for at most shutdown-lock-limit, if set). Stonith resources and Pacemaker Remote connections are never locked. Clone and bundle instances and the promoted role of promotable clones are currently never locked, though support could be added in a future release.
  * Possible values: boolean (default: )
* shutdown-lock-limit: Do not lock resources to a cleanly shut down node longer than this
  * If shutdown-lock is true and this is set to a nonzero time duration, shutdown locks will expire after this much time has passed since the shutdown was initiated, even if the node has not rejoined.
  * Possible values: duration (default: )
* enable-acl: Enable Access Control Lists (ACLs) for the CIB
  * Possible values: boolean (default: )
* symmetric-cluster: Whether resources can run on any node by default
  * Possible values: boolean (default: )
* maintenance-mode: Whether the cluster should refrain from monitoring, starting, and stopping resources
  * Possible values: boolean (default: )
* start-failure-is-fatal: Whether a start failure should prevent a resource from being recovered on the same node
  * When true, the cluster will immediately ban a resource from a node if it fails to start there. When false, the cluster will instead check the resource's fail count against its migration-threshold.
  * Possible values: boolean (default: )
* enable-startup-probes: Whether the cluster should check for active resources during start-up
  * Possible values: boolean (default: )
* stonith-action: Action to send to fence device when a node needs to be fenced ("poweroff" is a deprecated alias for "off")
  * Possible values: "reboot" (default), "off", "poweroff"
* stonith-timeout: How long to wait for on, off, and reboot fence actions to complete by default
  * Possible values: duration (default: )
* have-watchdog: Whether watchdog integration is enabled
  * This is set automatically by the cluster according to whether SBD is detected to be in use. User-configured values are ignored. The value `true` is meaningful if diskless SBD is used and `stonith-watchdog-timeout` is nonzero. In that case, if fencing is required, watchdog-based self-fencing will be performed via SBD without requiring a fencing resource explicitly configured.
  * Possible values (generated by Pacemaker): boolean (default: )
* stonith-watchdog-timeout: How long before nodes can be assumed to be safely down when watchdog-based self-fencing via SBD is in use
  * If this is set to a positive value, lost nodes are assumed to achieve self-fencing using watchdog-based SBD within this much time. This does not require a fencing resource to be explicitly configured, though a fence_watchdog resource can be configured, to limit use to specific nodes. If this is set to 0 (the default), the cluster will never assume watchdog-based self-fencing. If this is set to a negative value, the cluster will use twice the local value of the `SBD_WATCHDOG_TIMEOUT` environment variable if that is positive, or otherwise treat this as 0. WARNING: When used, this timeout must be larger than `SBD_WATCHDOG_TIMEOUT` on all nodes that use watchdog-based SBD, and Pacemaker will refuse to start on any of those nodes where this is not true for the local value or SBD is not active. When this is set to a negative value, `SBD_WATCHDOG_TIMEOUT` must be set to the same value on all nodes that use SBD, otherwise data corruption or loss could occur.
  * Possible values: timeout (default: )
* stonith-max-attempts: How many times fencing can fail before it will no longer be immediately re-attempted on a target
  * Possible values: score (default: )
* concurrent-fencing: Allow performing fencing operations in parallel
  * Possible values: boolean (default: )
* priority-fencing-delay: Apply fencing delay targeting the lost nodes with the highest total resource priority
  * Apply specified delay for the fencings that are targeting the lost nodes with the highest total resource priority in case we don't have the majority of the nodes in our cluster partition, so that the more significant nodes potentially win any fencing match, which is especially meaningful under split-brain of 2-node cluster. A promoted resource instance takes the base priority + 1 on calculation if the base priority is not 0. Any static/random delays that are introduced by `pcmk_delay_base/max` configured for the corresponding fencing resources will be added to this delay. This delay should be significantly greater than, safely twice, the maximum `pcmk_delay_base/max`. By default, priority fencing delay is disabled.
  * Possible values: duration (default: )
* node-pending-timeout: How long to wait for a node that has joined the cluster to join the controller process group
  * Fence nodes that do not join the controller process group within this much time after joining the cluster, to allow the cluster to continue managing resources. A value of 0 means never fence pending nodes. Setting the value to 2h means fence nodes after 2 hours.
  * Possible values: duration (default: )
* cluster-delay: Maximum time for node-to-node communication
  * The node elected Designated Controller (DC) will consider an action failed if it does not get a response from the node executing the action within this time (after considering the action's own timeout). The "correct" value will depend on the speed and load of your network and cluster nodes.
  * Possible values: duration (default: )
* load-threshold: Maximum amount of system load that should be used by cluster nodes
  * The cluster will slow down its recovery process when the amount of system resources used (currently CPU) approaches this limit
  * Possible values: percentage (default: )
* node-action-limit: Maximum number of jobs that can be scheduled per node (defaults to 2x cores)
  * Possible values: integer (default: )
* batch-limit: Maximum number of jobs that the cluster may execute in parallel across all nodes
  * The "correct" value will depend on the speed and load of your network and cluster nodes. If set to 0, the cluster will impose a dynamically calculated limit when any node has a high load.
  * Possible values: integer (default: )
* migration-limit: The number of live migration actions that the cluster is allowed to execute in parallel on a node (-1 means no limit)
  * Possible values: integer (default: )
* cluster-ipc-limit: Maximum IPC message backlog before disconnecting a cluster daemon
  * Raise this if log has "Evicting client" messages for cluster daemon PIDs (a good value is the number of resources in the cluster multiplied by the number of nodes).
  * Possible values: nonnegative_integer (default: )
* stop-all-resources: Whether the cluster should stop all active resources
  * Possible values: boolean (default: )
* stop-orphan-resources: Whether to stop resources that were removed from the configuration
  * Possible values: boolean (default: )
* stop-orphan-actions: Whether to cancel recurring actions removed from the configuration
  * Possible values: boolean (default: )
* pe-error-series-max: The number of scheduler inputs resulting in errors to save
  * Zero to disable, -1 to store unlimited.
  * Possible values: integer (default: )
* pe-warn-series-max: The number of scheduler inputs resulting in warnings to save
  * Zero to disable, -1 to store unlimited.
  * Possible values: integer (default: )
* pe-input-series-max: The number of scheduler inputs without errors or warnings to save
  * Zero to disable, -1 to store unlimited.
  * Possible values: integer (default: )
* node-health-strategy: How cluster should react to node health attributes
  * Requires external entities to create node attributes (named with the prefix "#health") with values "red", "yellow", or "green".
  * Possible values: "none" (default), "migrate-on-red", "only-green", "progressive", "custom"
* node-health-base: Base health score assigned to a node
  * Only used when "node-health-strategy" is set to "progressive".
  * Possible values: score (default: )
* node-health-green: The score to use for a node health attribute whose value is "green"
  * Only used when "node-health-strategy" is set to "custom" or "progressive".
  * Possible values: score (default: )
* node-health-yellow: The score to use for a node health attribute whose value is "yellow"
  * Only used when "node-health-strategy" is set to "custom" or "progressive".
  * Possible values: score (default: )
* node-health-red: The score to use for a node health attribute whose value is "red"
  * Only used when "node-health-strategy" is set to "custom" or "progressive".
  * Possible values: score (default: )
* placement-strategy: How the cluster should allocate resources to nodes
  * Possible values: "default" (default), "utilization", "minimal", "balanced"
=#=#=#= End test: List non-advanced cluster options - OK (0) =#=#=#=
* Passed: crm_attribute - List non-advanced cluster options
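Any property in the listing above can be managed with crm_attribute's standard long options; a brief sketch (the value is illustrative only):

    # Set, read back, and remove a property in the crm_config section
    crm_attribute --type crm_config --name cluster-recheck-interval --update 5min
    crm_attribute --type crm_config --name cluster-recheck-interval --query
    crm_attribute --type crm_config --name cluster-recheck-interval --delete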
=#=#=#= Begin test: List non-advanced cluster options (XML) (shows all) =#=#=#=
1.1
Also known as properties, these are options that affect behavior across the entire cluster. They are configured within cluster_property_set elements inside the crm_config subsection of the CIB configuration section.
Pacemaker cluster options
Includes a hash which identifies the exact revision the code was built from. Used for diagnostic purposes.
Pacemaker version on cluster node elected Designated Controller (DC)
Used for informational and diagnostic purposes.
The messaging layer on which Pacemaker is currently running
This optional value is mostly for users' convenience as desired in administration, but may also be used in Pacemaker configuration rules via the #cluster-name node attribute, and by higher-level tools and resource agents.
An arbitrary name for the cluster
The optimal value will depend on the speed and load of your network and the type of switches used.
How long to wait for a response from other nodes during start-up
Pacemaker is primarily event-driven, and looks ahead to know when to recheck cluster state for failure-timeout settings and most time-based rules. However, it will also recheck the cluster after this amount of inactivity, to evaluate rules with date specifications and serve as a fail-safe for certain types of scheduler bugs. A value of 0 disables polling. A positive value sets an interval in seconds, unless other units are specified (for example, "5min").
Polling interval to recheck cluster state and evaluate rules with date specifications
A cluster node may receive notification of a "succeeded" fencing that targeted it if fencing is misconfigured, or if fabric fencing is in use that doesn't cut cluster communication. Use "stop" to attempt to immediately stop Pacemaker and stay stopped, or "panic" to attempt to immediately reboot the local node, falling back to stop on failure.
How a cluster node should react if notified of its own fencing
Declare an election failed if it is not decided within this much time. If you need to adjust this value, it probably indicates the presence of a bug.
Declare an election failed if it is not decided within this much time. If you need to adjust this value, it probably indicates the presence of a bug.
Exit immediately if shutdown does not complete within this much time. If you need to adjust this value, it probably indicates the presence of a bug.
Exit immediately if shutdown does not complete within this much time. If you need to adjust this value, it probably indicates the presence of a bug.
If you need to adjust this value, it probably indicates the presence of a bug.
If you need to adjust this value, it probably indicates the presence of a bug.
If you need to adjust this value, it probably indicates the presence of a bug.
If you need to adjust this value, it probably indicates the presence of a bug.
Delay cluster recovery for this much time to allow for additional events to occur. Useful if your configuration is sensitive to the order in which ping updates arrive.
Enabling this option will slow down cluster recovery under all conditions
What to do when the cluster does not have quorum
What to do when the cluster does not have quorum
When true, resources active on a node when it is cleanly shut down are kept "locked" to that node (not allowed to run elsewhere) until they start again on that node after it rejoins (or for at most shutdown-lock-limit, if set). Stonith resources and Pacemaker Remote connections are never locked. Clone and bundle instances and the promoted role of promotable clones are currently never locked, though support could be added in a future release.
Whether to lock resources to a cleanly shut down node
If shutdown-lock is true and this is set to a nonzero time duration, shutdown locks will expire after this much time has passed since the shutdown was initiated, even if the node has not rejoined.
Do not lock resources to a cleanly shut down node longer than this
Enable Access Control Lists (ACLs) for the CIB
Enable Access Control Lists (ACLs) for the CIB
Whether resources can run on any node by default
Whether resources can run on any node by default
Whether the cluster should refrain from monitoring, starting, and stopping resources
Whether the cluster should refrain from monitoring, starting, and stopping resources
When true, the cluster will immediately ban a resource from a node if it fails to start there. When false, the cluster will instead check the resource's fail count against its migration-threshold.
Whether a start failure should prevent a resource from being recovered on the same node
Whether the cluster should check for active resources during start-up
Whether the cluster should check for active resources during start-up
If false, unresponsive nodes are immediately assumed to be harmless, and resources that were active on them may be recovered elsewhere. This can result in a "split-brain" situation, potentially leading to data loss and/or service unavailability.
Whether nodes may be fenced as part of recovery
Action to send to fence device when a node needs to be fenced ("poweroff" is a deprecated alias for "off")
Action to send to fence device when a node needs to be fenced ("poweroff" is a deprecated alias for "off")
How long to wait for on, off, and reboot fence actions to complete by default
How long to wait for on, off, and reboot fence actions to complete by default
This is set automatically by the cluster according to whether SBD is detected to be in use. User-configured values are ignored. The value `true` is meaningful if diskless SBD is used and `stonith-watchdog-timeout` is nonzero. In that case, if fencing is required, watchdog-based self-fencing will be performed via SBD without requiring a fencing resource explicitly configured.
Whether watchdog integration is enabled
If this is set to a positive value, lost nodes are assumed to achieve self-fencing using watchdog-based SBD within this much time. This does not require a fencing resource to be explicitly configured, though a fence_watchdog resource can be configured, to limit use to specific nodes. If this is set to 0 (the default), the cluster will never assume watchdog-based self-fencing. If this is set to a negative value, the cluster will use twice the local value of the `SBD_WATCHDOG_TIMEOUT` environment variable if that is positive, or otherwise treat this as 0. WARNING: When used, this timeout must be larger than `SBD_WATCHDOG_TIMEOUT` on all nodes that use watchdog-based SBD, and Pacemaker will refuse to start on any of those nodes where this is not true for the local value or SBD is not active. When this is set to a negative value, `SBD_WATCHDOG_TIMEOUT` must be set to the same value on all nodes that use SBD, otherwise data corruption or loss could occur.
How long before nodes can be assumed to be safely down when watchdog-based self-fencing via SBD is in use
How many times fencing can fail before it will no longer be immediately re-attempted on a target
How many times fencing can fail before it will no longer be immediately re-attempted on a target
Allow performing fencing operations in parallel
Allow performing fencing operations in parallel
Setting this to false may lead to a "split-brain" situation, potentially leading to data loss and/or service unavailability.
Whether to fence unseen nodes at start-up
Apply specified delay for the fencings that are targeting the lost nodes with the highest total resource priority in case we don't have the majority of the nodes in our cluster partition, so that the more significant nodes potentially win any fencing match, which is especially meaningful under split-brain of 2-node cluster. A promoted resource instance takes the base priority + 1 on calculation if the base priority is not 0. Any static/random delays that are introduced by `pcmk_delay_base/max` configured for the corresponding fencing resources will be added to this delay. This delay should be significantly greater than, safely twice, the maximum `pcmk_delay_base/max`. By default, priority fencing delay is disabled.
Apply fencing delay targeting the lost nodes with the highest total resource priority
Fence nodes that do not join the controller process group within this much time after joining the cluster, to allow the cluster to continue managing resources. A value of 0 means never fence pending nodes. Setting the value to 2h means fence nodes after 2 hours.
How long to wait for a node that has joined the cluster to join the controller process group
The node elected Designated Controller (DC) will consider an action failed if it does not get a response from the node executing the action within this time (after considering the action's own timeout). The "correct" value will depend on the speed and load of your network and cluster nodes.
Maximum time for node-to-node communication
The cluster will slow down its recovery process when the amount of system resources used (currently CPU) approaches this limit
Maximum amount of system load that should be used by cluster nodes
Maximum number of jobs that can be scheduled per node (defaults to 2x cores)
Maximum number of jobs that can be scheduled per node (defaults to 2x cores)
The "correct" value will depend on the speed and load of your network and cluster nodes. If set to 0, the cluster will impose a dynamically calculated limit when any node has a high load.
Maximum number of jobs that the cluster may execute in parallel across all nodes
The number of live migration actions that the cluster is allowed to execute in parallel on a node (-1 means no limit)
The number of live migration actions that the cluster is allowed to execute in parallel on a node (-1 means no limit)
Raise this if log has "Evicting client" messages for cluster daemon PIDs (a good value is the number of resources in the cluster multiplied by the number of nodes).
Maximum IPC message backlog before disconnecting a cluster daemon
Whether the cluster should stop all active resources
Whether the cluster should stop all active resources
Whether to stop resources that were removed from the configuration
Whether to stop resources that were removed from the configuration
Whether to cancel recurring actions removed from the configuration
Whether to cancel recurring actions removed from the configuration
Values other than default are poorly tested and potentially dangerous.
Whether to remove stopped resources from the executor
Zero to disable, -1 to store unlimited.
The number of scheduler inputs resulting in errors to save
Zero to disable, -1 to store unlimited.
The number of scheduler inputs resulting in warnings to save
Zero to disable, -1 to store unlimited.
The number of scheduler inputs without errors or warnings to save
Requires external entities to create node attributes (named with the prefix "#health") with values "red", "yellow", or "green".
How cluster should react to node health attributes
Only used when "node-health-strategy" is set to "progressive".
Base health score assigned to a node
Only used when "node-health-strategy" is set to "custom" or "progressive".
The score to use for a node health attribute whose value is "green"
Only used when "node-health-strategy" is set to "custom" or "progressive".
The score to use for a node health attribute whose value is "yellow"
Only used when "node-health-strategy" is set to "custom" or "progressive".
The score to use for a node health attribute whose value is "red"
How the cluster should allocate resources to nodes
How the cluster should allocate resources to nodes
=#=#=#= End test: List non-advanced cluster options (XML) (shows all) - OK (0) =#=#=#=
* Passed: crm_attribute - List non-advanced cluster options (XML) (shows all)
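Each option in the XML listings above appears as a pair of long and short descriptions in OCF-style metadata; a sketch of pulling a single field out with xmllint (the parameter/longdesc element names are an assumption about the schema, since the tags themselves are not shown here):

    crm_attribute --list-options=cluster --output-as=xml \
        | xmllint --xpath 'string(//parameter[@name="no-quorum-policy"]/longdesc)' -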
=#=#=#= Begin test: List all available cluster options =#=#=#=
Pacemaker cluster options
Also known as properties, these are options that affect behavior across the entire cluster. They are configured within cluster_property_set elements inside the crm_config subsection of the CIB configuration section.
* dc-version: Pacemaker version on cluster node elected Designated Controller (DC)
  * Includes a hash which identifies the exact revision the code was built from. Used for diagnostic purposes.
  * Possible values (generated by Pacemaker): version (no default)
* cluster-infrastructure: The messaging layer on which Pacemaker is currently running
  * Used for informational and diagnostic purposes.
  * Possible values (generated by Pacemaker): string (no default)
* cluster-name: An arbitrary name for the cluster
  * This optional value is mostly for users' convenience as desired in administration, but may also be used in Pacemaker configuration rules via the #cluster-name node attribute, and by higher-level tools and resource agents.
  * Possible values: string (no default)
* dc-deadtime: How long to wait for a response from other nodes during start-up
  * The optimal value will depend on the speed and load of your network and the type of switches used.
  * Possible values: duration (default: )
* cluster-recheck-interval: Polling interval to recheck cluster state and evaluate rules with date specifications
  * Pacemaker is primarily event-driven, and looks ahead to know when to recheck cluster state for failure-timeout settings and most time-based rules. However, it will also recheck the cluster after this amount of inactivity, to evaluate rules with date specifications and serve as a fail-safe for certain types of scheduler bugs. A value of 0 disables polling. A positive value sets an interval in seconds, unless other units are specified (for example, "5min").
  * Possible values: duration (default: )
* fence-reaction: How a cluster node should react if notified of its own fencing
  * A cluster node may receive notification of a "succeeded" fencing that targeted it if fencing is misconfigured, or if fabric fencing is in use that doesn't cut cluster communication. Use "stop" to attempt to immediately stop Pacemaker and stay stopped, or "panic" to attempt to immediately reboot the local node, falling back to stop on failure.
  * Possible values: "stop" (default), "panic"
* no-quorum-policy: What to do when the cluster does not have quorum
  * Possible values: "stop" (default), "freeze", "ignore", "demote", "suicide"
* shutdown-lock: Whether to lock resources to a cleanly shut down node
  * When true, resources active on a node when it is cleanly shut down are kept "locked" to that node (not allowed to run elsewhere) until they start again on that node after it rejoins (or for at most shutdown-lock-limit, if set). Stonith resources and Pacemaker Remote connections are never locked. Clone and bundle instances and the promoted role of promotable clones are currently never locked, though support could be added in a future release.
  * Possible values: boolean (default: )
* shutdown-lock-limit: Do not lock resources to a cleanly shut down node longer than this
  * If shutdown-lock is true and this is set to a nonzero time duration, shutdown locks will expire after this much time has passed since the shutdown was initiated, even if the node has not rejoined.
  * Possible values: duration (default: )
* enable-acl: Enable Access Control Lists (ACLs) for the CIB
  * Possible values: boolean (default: )
* symmetric-cluster: Whether resources can run on any node by default
  * Possible values: boolean (default: )
* maintenance-mode: Whether the cluster should refrain from monitoring, starting, and stopping resources
  * Possible values: boolean (default: )
* start-failure-is-fatal: Whether a start failure should prevent a resource from being recovered on the same node
  * When true, the cluster will immediately ban a resource from a node if it fails to start there. When false, the cluster will instead check the resource's fail count against its migration-threshold.
  * Possible values: boolean (default: )
* enable-startup-probes: Whether the cluster should check for active resources during start-up
  * Possible values: boolean (default: )
* stonith-action: Action to send to fence device when a node needs to be fenced ("poweroff" is a deprecated alias for "off")
  * Possible values: "reboot" (default), "off", "poweroff"
* stonith-timeout: How long to wait for on, off, and reboot fence actions to complete by default
  * Possible values: duration (default: )
* have-watchdog: Whether watchdog integration is enabled
  * This is set automatically by the cluster according to whether SBD is detected to be in use. User-configured values are ignored. The value `true` is meaningful if diskless SBD is used and `stonith-watchdog-timeout` is nonzero. In that case, if fencing is required, watchdog-based self-fencing will be performed via SBD without requiring a fencing resource explicitly configured.
  * Possible values (generated by Pacemaker): boolean (default: )
* stonith-watchdog-timeout: How long before nodes can be assumed to be safely down when watchdog-based self-fencing via SBD is in use
  * If this is set to a positive value, lost nodes are assumed to achieve self-fencing using watchdog-based SBD within this much time. This does not require a fencing resource to be explicitly configured, though a fence_watchdog resource can be configured, to limit use to specific nodes. If this is set to 0 (the default), the cluster will never assume watchdog-based self-fencing. If this is set to a negative value, the cluster will use twice the local value of the `SBD_WATCHDOG_TIMEOUT` environment variable if that is positive, or otherwise treat this as 0. WARNING: When used, this timeout must be larger than `SBD_WATCHDOG_TIMEOUT` on all nodes that use watchdog-based SBD, and Pacemaker will refuse to start on any of those nodes where this is not true for the local value or SBD is not active. When this is set to a negative value, `SBD_WATCHDOG_TIMEOUT` must be set to the same value on all nodes that use SBD, otherwise data corruption or loss could occur.
  * Possible values: timeout (default: )
* stonith-max-attempts: How many times fencing can fail before it will no longer be immediately re-attempted on a target
  * Possible values: score (default: )
* concurrent-fencing: Allow performing fencing operations in parallel
  * Possible values: boolean (default: )
* priority-fencing-delay: Apply fencing delay targeting the lost nodes with the highest total resource priority
  * Apply specified delay for the fencings that are targeting the lost nodes with the highest total resource priority in case we don't have the majority of the nodes in our cluster partition, so that the more significant nodes potentially win any fencing match, which is especially meaningful under split-brain of 2-node cluster. A promoted resource instance takes the base priority + 1 on calculation if the base priority is not 0. Any static/random delays that are introduced by `pcmk_delay_base/max` configured for the corresponding fencing resources will be added to this delay. This delay should be significantly greater than, safely twice, the maximum `pcmk_delay_base/max`. By default, priority fencing delay is disabled.
  * Possible values: duration (default: )
* node-pending-timeout: How long to wait for a node that has joined the cluster to join the controller process group
  * Fence nodes that do not join the controller process group within this much time after joining the cluster, to allow the cluster to continue managing resources. A value of 0 means never fence pending nodes. Setting the value to 2h means fence nodes after 2 hours.
  * Possible values: duration (default: )
* cluster-delay: Maximum time for node-to-node communication
  * The node elected Designated Controller (DC) will consider an action failed if it does not get a response from the node executing the action within this time (after considering the action's own timeout). The "correct" value will depend on the speed and load of your network and cluster nodes.
  * Possible values: duration (default: )
* load-threshold: Maximum amount of system load that should be used by cluster nodes
  * The cluster will slow down its recovery process when the amount of system resources used (currently CPU) approaches this limit
  * Possible values: percentage (default: )
* node-action-limit: Maximum number of jobs that can be scheduled per node (defaults to 2x cores)
  * Possible values: integer (default: )
* batch-limit: Maximum number of jobs that the cluster may execute in parallel across all nodes
  * The "correct" value will depend on the speed and load of your network and cluster nodes. If set to 0, the cluster will impose a dynamically calculated limit when any node has a high load.
  * Possible values: integer (default: )
* migration-limit: The number of live migration actions that the cluster is allowed to execute in parallel on a node (-1 means no limit)
  * Possible values: integer (default: )
* cluster-ipc-limit: Maximum IPC message backlog before disconnecting a cluster daemon
  * Raise this if log has "Evicting client" messages for cluster daemon PIDs (a good value is the number of resources in the cluster multiplied by the number of nodes).
  * Possible values: nonnegative_integer (default: )
* stop-all-resources: Whether the cluster should stop all active resources
  * Possible values: boolean (default: )
* stop-orphan-resources: Whether to stop resources that were removed from the configuration
  * Possible values: boolean (default: )
* stop-orphan-actions: Whether to cancel recurring actions removed from the configuration
  * Possible values: boolean (default: )
* pe-error-series-max: The number of scheduler inputs resulting in errors to save
  * Zero to disable, -1 to store unlimited.
  * Possible values: integer (default: )
* pe-warn-series-max: The number of scheduler inputs resulting in warnings to save
  * Zero to disable, -1 to store unlimited.
  * Possible values: integer (default: )
* pe-input-series-max: The number of scheduler inputs without errors or warnings to save
  * Zero to disable, -1 to store unlimited.
  * Possible values: integer (default: )
* node-health-strategy: How cluster should react to node health attributes
  * Requires external entities to create node attributes (named with the prefix "#health") with values "red", "yellow", or "green".
  * Possible values: "none" (default), "migrate-on-red", "only-green", "progressive", "custom"
* node-health-base: Base health score assigned to a node
  * Only used when "node-health-strategy" is set to "progressive".
  * Possible values: score (default: )
* node-health-green: The score to use for a node health attribute whose value is "green"
  * Only used when "node-health-strategy" is set to "custom" or "progressive".
  * Possible values: score (default: )
* node-health-yellow: The score to use for a node health attribute whose value is "yellow"
  * Only used when "node-health-strategy" is set to "custom" or "progressive".
  * Possible values: score (default: )
* node-health-red: The score to use for a node health attribute whose value is "red"
  * Only used when "node-health-strategy" is set to "custom" or "progressive".
  * Possible values: score (default: )
* placement-strategy: How the cluster should allocate resources to nodes
  * Possible values: "default" (default), "utilization", "minimal", "balanced"
* ADVANCED OPTIONS:
* election-timeout: Declare an election failed if it is not decided within this much time. If you need to adjust this value, it probably indicates the presence of a bug.
  * Possible values: duration (default: )
* shutdown-escalation: Exit immediately if shutdown does not complete within this much time. If you need to adjust this value, it probably indicates the presence of a bug.
  * Possible values: duration (default: )
* join-integration-timeout: If you need to adjust this value, it probably indicates the presence of a bug.
  * Possible values: duration (default: )
* join-finalization-timeout: If you need to adjust this value, it probably indicates the presence of a bug.
  * Possible values: duration (default: )
* transition-delay: Enabling this option will slow down cluster recovery under all conditions
  * Delay cluster recovery for this much time to allow for additional events to occur. Useful if your configuration is sensitive to the order in which ping updates arrive.
  * Possible values: duration (default: )
* stonith-enabled: Whether nodes may be fenced as part of recovery
  * If false, unresponsive nodes are immediately assumed to be harmless, and resources that were active on them may be recovered elsewhere. This can result in a "split-brain" situation, potentially leading to data loss and/or service unavailability.
  * Possible values: boolean (default: )
* startup-fencing: Whether to fence unseen nodes at start-up
  * Setting this to false may lead to a "split-brain" situation, potentially leading to data loss and/or service unavailability.
  * Possible values: boolean (default: )
* DEPRECATED OPTIONS (will be removed in a future release):
* remove-after-stop: Whether to remove stopped resources from the executor
  * Values other than default are poorly tested and potentially dangerous.
  * Possible values: boolean (default: )
=#=#=#= End test: List all available cluster options - OK (0) =#=#=#=
* Passed: crm_attribute - List all available cluster options
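The advanced and deprecated options above are set through the same interface as any other property; a hedged sketch (the value is illustrative, and the descriptions above suggest leaving these at their defaults):

    # Advanced timer; normally only touched when chasing a bug
    crm_attribute --type crm_config --name election-timeout --update 2min
    # Remove the override to return to the built-in default
    crm_attribute --type crm_config --name election-timeout --delete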
=#=#=#= Begin test: List all available cluster options (XML) =#=#=#=
1.1
Also known as properties, these are options that affect behavior across the entire cluster. They are configured within cluster_property_set elements inside the crm_config subsection of the CIB configuration section.
Pacemaker cluster options
Includes a hash which identifies the exact revision the code was built from. Used for diagnostic purposes.
Pacemaker version on cluster node elected Designated Controller (DC)
Used for informational and diagnostic purposes.
The messaging layer on which Pacemaker is currently running
This optional value is mostly for users' convenience as desired in administration, but may also be used in Pacemaker configuration rules via the #cluster-name node attribute, and by higher-level tools and resource agents.
An arbitrary name for the cluster
The optimal value will depend on the speed and load of your network and the type of switches used.
How long to wait for a response from other nodes during start-up
Pacemaker is primarily event-driven, and looks ahead to know when to recheck cluster state for failure-timeout settings and most time-based rules. However, it will also recheck the cluster after this amount of inactivity, to evaluate rules with date specifications and serve as a fail-safe for certain types of scheduler bugs. A value of 0 disables polling. A positive value sets an interval in seconds, unless other units are specified (for example, "5min").
Polling interval to recheck cluster state and evaluate rules with date specifications
A cluster node may receive notification of a "succeeded" fencing that targeted it if fencing is misconfigured, or if fabric fencing is in use that doesn't cut cluster communication. Use "stop" to attempt to immediately stop Pacemaker and stay stopped, or "panic" to attempt to immediately reboot the local node, falling back to stop on failure.
How a cluster node should react if notified of its own fencing
Declare an election failed if it is not decided within this much time. If you need to adjust this value, it probably indicates the presence of a bug.
Declare an election failed if it is not decided within this much time. If you need to adjust this value, it probably indicates the presence of a bug.
Exit immediately if shutdown does not complete within this much time. If you need to adjust this value, it probably indicates the presence of a bug.
Exit immediately if shutdown does not complete within this much time. If you need to adjust this value, it probably indicates the presence of a bug.
If you need to adjust this value, it probably indicates the presence of a bug.
If you need to adjust this value, it probably indicates the presence of a bug.
If you need to adjust this value, it probably indicates the presence of a bug.
If you need to adjust this value, it probably indicates the presence of a bug.
Delay cluster recovery for this much time to allow for additional events to occur. Useful if your configuration is sensitive to the order in which ping updates arrive.
Enabling this option will slow down cluster recovery under all conditions
What to do when the cluster does not have quorum
What to do when the cluster does not have quorum
When true, resources active on a node when it is cleanly shut down are kept "locked" to that node (not allowed to run elsewhere) until they start again on that node after it rejoins (or for at most shutdown-lock-limit, if set). Stonith resources and Pacemaker Remote connections are never locked. Clone and bundle instances and the promoted role of promotable clones are currently never locked, though support could be added in a future release.
Whether to lock resources to a cleanly shut down node
If shutdown-lock is true and this is set to a nonzero time duration, shutdown locks will expire after this much time has passed since the shutdown was initiated, even if the node has not rejoined.
Do not lock resources to a cleanly shut down node longer than this
Enable Access Control Lists (ACLs) for the CIB
Enable Access Control Lists (ACLs) for the CIB
Whether resources can run on any node by default
Whether resources can run on any node by default
Whether the cluster should refrain from monitoring, starting, and stopping resources
Whether the cluster should refrain from monitoring, starting, and stopping resources
When true, the cluster will immediately ban a resource from a node if it fails to start there. When false, the cluster will instead check the resource's fail count against its migration-threshold.
Whether a start failure should prevent a resource from being recovered on the same node
Whether the cluster should check for active resources during start-up
Whether the cluster should check for active resources during start-up
If false, unresponsive nodes are immediately assumed to be harmless, and resources that were active on them may be recovered elsewhere. This can result in a "split-brain" situation, potentially leading to data loss and/or service unavailability.
Whether nodes may be fenced as part of recovery
Action to send to fence device when a node needs to be fenced ("poweroff" is a deprecated alias for "off")
Action to send to fence device when a node needs to be fenced ("poweroff" is a deprecated alias for "off")
How long to wait for on, off, and reboot fence actions to complete by default
How long to wait for on, off, and reboot fence actions to complete by default
This is set automatically by the cluster according to whether SBD is detected to be in use. User-configured values are ignored. The value `true` is meaningful if diskless SBD is used and `stonith-watchdog-timeout` is nonzero. In that case, if fencing is required, watchdog-based self-fencing will be performed via SBD without requiring a fencing resource explicitly configured.
Whether watchdog integration is enabled
If this is set to a positive value, lost nodes are assumed to achieve self-fencing using watchdog-based SBD within this much time. This does not require a fencing resource to be explicitly configured, though a fence_watchdog resource can be configured, to limit use to specific nodes. If this is set to 0 (the default), the cluster will never assume watchdog-based self-fencing. If this is set to a negative value, the cluster will use twice the local value of the `SBD_WATCHDOG_TIMEOUT` environment variable if that is positive, or otherwise treat this as 0. WARNING: When used, this timeout must be larger than `SBD_WATCHDOG_TIMEOUT` on all nodes that use watchdog-based SBD, and Pacemaker will refuse to start on any of those nodes where this is not true for the local value or SBD is not active. When this is set to a negative value, `SBD_WATCHDOG_TIMEOUT` must be set to the same value on all nodes that use SBD, otherwise data corruption or loss could occur.
How long before nodes can be assumed to be safely down when watchdog-based self-fencing via SBD is in use
How many times fencing can fail before it will no longer be immediately re-attempted on a target
How many times fencing can fail before it will no longer be immediately re-attempted on a target
Allow performing fencing operations in parallel
Allow performing fencing operations in parallel
Setting this to false may lead to a "split-brain" situation, potentially leading to data loss and/or service unavailability.
Whether to fence unseen nodes at start-up
Apply specified delay for the fencings that are targeting the lost nodes with the highest total resource priority in case we don't have the majority of the nodes in our cluster partition, so that the more significant nodes potentially win any fencing match, which is especially meaningful under split-brain of 2-node cluster. A promoted resource instance takes the base priority + 1 on calculation if the base priority is not 0. Any static/random delays that are introduced by `pcmk_delay_base/max` configured for the corresponding fencing resources will be added to this delay. This delay should be significantly greater than, safely twice, the maximum `pcmk_delay_base/max`. By default, priority fencing delay is disabled.
Apply fencing delay targeting the lost nodes with the highest total resource priority
Fence nodes that do not join the controller process group within this much time after joining the cluster, to allow the cluster to continue managing resources. A value of 0 means never fence pending nodes. Setting the value to 2h means fence nodes after 2 hours.
How long to wait for a node that has joined the cluster to join the controller process group
The node elected Designated Controller (DC) will consider an action failed if it does not get a response from the node executing the action within this time (after considering the action's own timeout). The "correct" value will depend on the speed and load of your network and cluster nodes.
Maximum time for node-to-node communication
The cluster will slow down its recovery process when the amount of system resources used (currently CPU) approaches this limit
Maximum amount of system load that should be used by cluster nodes
Maximum number of jobs that can be scheduled per node (defaults to 2x cores)
Maximum number of jobs that can be scheduled per node (defaults to 2x cores)
The "correct" value will depend on the speed and load of your network and cluster nodes. If set to 0, the cluster will impose a dynamically calculated limit when any node has a high load.
Maximum number of jobs that the cluster may execute in parallel across all nodes
The number of live migration actions that the cluster is allowed to execute in parallel on a node (-1 means no limit)
The number of live migration actions that the cluster is allowed to execute in parallel on a node (-1 means no limit)
Raise this if log has "Evicting client" messages for cluster daemon PIDs (a good value is the number of resources in the cluster multiplied by the number of nodes).
Maximum IPC message backlog before disconnecting a cluster daemon
Whether the cluster should stop all active resources
Whether the cluster should stop all active resources
Whether to stop resources that were removed from the configuration
Whether to stop resources that were removed from the configuration
Whether to cancel recurring actions removed from the configuration
Whether to cancel recurring actions removed from the configuration
Values other than default are poorly tested and potentially dangerous.
Whether to remove stopped resources from the executor
Zero to disable, -1 to store unlimited.
The number of scheduler inputs resulting in errors to save
Zero to disable, -1 to store unlimited.
The number of scheduler inputs resulting in warnings to save
Zero to disable, -1 to store unlimited.
The number of scheduler inputs without errors or warnings to save
Requires external entities to create node attributes (named with the prefix "#health") with values "red", "yellow", or "green".
How cluster should react to node health attributes
Only used when "node-health-strategy" is set to "progressive".
Base health score assigned to a node
Only used when "node-health-strategy" is set to "custom" or "progressive".
The score to use for a node health attribute whose value is "green"
Only used when "node-health-strategy" is set to "custom" or "progressive".
The score to use for a node health attribute whose value is "yellow"
Only used when "node-health-strategy" is set to "custom" or "progressive".
The score to use for a node health attribute whose value is "red"
How the cluster should allocate resources to nodes
How the cluster should allocate resources to nodes
=#=#=#= End test: List all available cluster options (XML) - OK (0) =#=#=#=
* Passed: crm_attribute - List all available cluster options (XML)
=#=#=#= Begin test: Query the value of an attribute that does not exist =#=#=#=
crm_attribute: Error performing operation: No such device or address
=#=#=#= End test: Query the value of an attribute that does not exist - No such object (105) =#=#=#=
* Passed: crm_attribute - Query the value of an attribute that does not exist
=#=#=#= Begin test: Configure something before erasing =#=#=#=
=#=#=#= Current cib after: Configure something before erasing =#=#=#=
=#=#=#= End test: Configure something before erasing - OK (0) =#=#=#=
* Passed: crm_attribute - Configure something before erasing
=#=#=#= Begin test: Test '++' XML attribute update syntax =#=#=#=
=#=#=#= Current cib after: Test '++' XML attribute update syntax =#=#=#=
=#=#=#= End test: Test '++' XML attribute update syntax - OK (0) =#=#=#=
* Passed: cibadmin - Test '++' XML attribute update syntax
=#=#=#= Begin test: Test '+=' XML attribute update syntax =#=#=#=
=#=#=#= Current cib after: Test '+=' XML attribute update syntax =#=#=#=
=#=#=#= End test: Test '+=' XML attribute update syntax - OK (0) =#=#=#=
* Passed: cibadmin - Test '+=' XML attribute update syntax
=#=#=#= Begin test: Test '++' nvpair value update syntax =#=#=#=
=#=#=#= Current cib after: Test '++' nvpair value update syntax =#=#=#=
=#=#=#= End test: Test '++' nvpair value update syntax - OK (0) =#=#=#=
* Passed: crm_attribute - Test '++' nvpair value update syntax
=#=#=#= Begin test: Test '++' nvpair value update syntax (XML) =#=#=#=
=#=#=#= Current cib after: Test '++' nvpair value update syntax (XML) =#=#=#=
=#=#=#= End test: Test '++' nvpair value update syntax (XML) - OK (0) =#=#=#=
* Passed: crm_attribute - Test '++' nvpair value update syntax (XML)
=#=#=#= Begin test: Test '+=' nvpair value update syntax =#=#=#=
=#=#=#= Current cib after: Test '+=' nvpair value update syntax =#=#=#=
=#=#=#= End test: Test '+=' nvpair value update syntax - OK (0) =#=#=#=
* Passed: crm_attribute - Test '+=' nvpair value update syntax
=#=#=#= Begin test: Test '+=' nvpair value update syntax (XML) =#=#=#=
=#=#=#= Current cib after: Test '+=' nvpair value update syntax (XML) =#=#=#=
=#=#=#= End test: Test '+=' nvpair value update syntax (XML) - OK (0) =#=#=#=
* Passed: crm_attribute - Test '+=' nvpair value update syntax (XML)
=#=#=#= Begin test: Test '++' XML attribute update syntax (--score not set) =#=#=#=
=#=#=#= Current cib after: Test '++' XML attribute update syntax (--score not set) =#=#=#=
=#=#=#= End test: Test '++' XML attribute update syntax (--score not set) - OK (0) =#=#=#=
* Passed: cibadmin - Test '++' XML attribute update syntax (--score not set)
=#=#=#= Begin test: Test '+=' XML attribute update syntax (--score not set) =#=#=#=
=#=#=#= Current cib after: Test '+=' XML attribute update syntax (--score not set) =#=#=#=
=#=#=#= End test: Test '+=' XML attribute update syntax (--score not set) - OK (0) =#=#=#=
* Passed: cibadmin - Test '+=' XML attribute update syntax (--score not set)
=#=#=#= Begin test: Test '++' nvpair value update syntax (--score not set) =#=#=#=
=#=#=#= Current cib after: Test '++' nvpair value update syntax (--score not set) =#=#=#=
=#=#=#= End test: Test '++' nvpair value update syntax (--score not set) - OK (0) =#=#=#=
* Passed: crm_attribute - Test '++' nvpair value update syntax (--score not set)
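The '++' and '+=' tests above and below pin down score-style value expansion, where an update value referencing the attribute's current value is expanded rather than stored literally; a sketch of the assumed syntax (the exact expansion rules, including behavior when --score is not set, are precisely what these tests capture):

    crm_attribute --name test-attr --update 5
    crm_attribute --name test-attr --update test-attr++   # assumed: increment by 1
    crm_attribute --name test-attr --update test-attr+=2  # assumed: add 2
    crm_attribute --name test-attr --query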
=#=#=#= Begin test: Test '++' nvpair value update syntax (--score not set) (XML) =#=#=#=
=#=#=#= Current cib after: Test '++' nvpair value update syntax (--score not set) (XML) =#=#=#=
=#=#=#= End test: Test '++' nvpair value update syntax (--score not set) (XML) - OK (0) =#=#=#=
* Passed: crm_attribute - Test '++' nvpair value update syntax (--score not set) (XML)
=#=#=#= Begin test: Test '+=' nvpair value update syntax (--score not set) =#=#=#=
=#=#=#= Current cib after: Test '+=' nvpair value update syntax (--score not set) =#=#=#=
=#=#=#= End test: Test '+=' nvpair value update syntax (--score not set) - OK (0) =#=#=#=
* Passed: crm_attribute - Test '+=' nvpair value update syntax (--score not set)
=#=#=#= Begin test: Test '+=' nvpair value update syntax (--score not set) (XML) =#=#=#=
=#=#=#= Current cib after: Test '+=' nvpair value update syntax (--score not set) (XML) =#=#=#=
=#=#=#= End test: Test '+=' nvpair value update syntax (--score not set) (XML) - OK (0) =#=#=#=
* Passed: crm_attribute - Test '+=' nvpair value update syntax (--score not set) (XML)
=#=#=#= Begin test: Require --force for CIB erasure =#=#=#=
cibadmin: The supplied command is considered dangerous. To prevent accidental destruction of the cluster, the --force flag is required in order to proceed.
=#=#=#= Current cib after: Require --force for CIB erasure =#=#=#=
=#=#=#= End test: Require --force for CIB erasure - Operation not safe (107) =#=#=#=
* Passed: cibadmin - Require --force for CIB erasure
=#=#=#= Begin test: Allow CIB erasure with --force =#=#=#=
=#=#=#= End test: Allow CIB erasure with --force - OK (0) =#=#=#=
* Passed: cibadmin - Allow CIB erasure with --force
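cibadmin treats erasure as dangerous and demands --force, as the two tests above verify; a sketch against a shadow CIB rather than a live cluster:

    export CIB_shadow=cts-cli
    cibadmin --erase         # refused: Operation not safe (107)
    cibadmin --erase --force # wipes the shadow CIB's configuration
    cibadmin --query         # inspect what remains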
=#=#=#= =#=#=#= End test: Query updated cluster option - OK (0) =#=#=#= * Passed: cibadmin - Query updated cluster option =#=#=#= Begin test: Set duplicate cluster option =#=#=#= =#=#=#= Current cib after: Set duplicate cluster option =#=#=#= =#=#=#= End test: Set duplicate cluster option - OK (0) =#=#=#= * Passed: crm_attribute - Set duplicate cluster option =#=#=#= Begin test: Setting multiply defined cluster option should fail =#=#=#= crm_attribute: Please choose from one of the matches below and supply the 'id' with --attr-id Multiple attributes match name=cluster-delay Value: 60s (id=cib-bootstrap-options-cluster-delay) Value: 40s (id=duplicate-cluster-delay) =#=#=#= Current cib after: Setting multiply defined cluster option should fail =#=#=#= =#=#=#= End test: Setting multiply defined cluster option should fail - Multiple items match request (109) =#=#=#= * Passed: crm_attribute - Setting multiply defined cluster option should fail =#=#=#= Begin test: Set cluster option with -s =#=#=#= =#=#=#= Current cib after: Set cluster option with -s =#=#=#= =#=#=#= End test: Set cluster option with -s - OK (0) =#=#=#= * Passed: crm_attribute - Set cluster option with -s =#=#=#= Begin test: Delete cluster option with -i =#=#=#= Deleted crm_config option: id=(null) name=cluster-delay =#=#=#= Current cib after: Delete cluster option with -i =#=#=#= =#=#=#= End test: Delete cluster option with -i - OK (0) =#=#=#= * Passed: crm_attribute - Delete cluster option with -i =#=#=#= Begin test: Create node1 and bring it online =#=#=#= unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity Current cluster status: * Full List of Resources: * No resources Performing Requested Modifications: * Bringing node node1 online Transition Summary: Executing Cluster Transition: Revised Cluster Status: * Node List: * Online: [ node1 ] * Full List of Resources: * No resources =#=#=#= Current cib after: Create node1 and bring it online =#=#=#= =#=#=#= End test: Create node1 and bring it online - OK (0) =#=#=#= * Passed: crm_simulate - Create node1 and bring it online =#=#=#= Begin test: Create node attribute =#=#=#= =#=#=#= Current cib after: Create node attribute =#=#=#= =#=#=#= End test: Create node attribute - OK (0) =#=#=#= * Passed: crm_attribute - Create node attribute =#=#=#= Begin test: Query new node attribute =#=#=#= =#=#=#= Current cib after: Query new node attribute =#=#=#= =#=#=#= End test: Query new node attribute - OK (0) =#=#=#= * Passed: cibadmin - Query new node attribute =#=#=#= Begin test: Create second node attribute =#=#=#= =#=#=#= Current cib after: Create second node attribute =#=#=#= =#=#=#= End test: Create second node attribute - OK (0) =#=#=#= * Passed: crm_attribute - Create second node attribute 
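The node-attribute tests here and the pattern-based tests that follow correspond to crm_attribute invocations along these lines (node and attribute names are the test fixture's own; see crm_attribute(8) for the authoritative option list):

    crm_attribute -N node1 -n ram -v 1024M    # set a permanent node attribute
    crm_attribute -N node1 -n ram -G          # query it: scope=nodes name=ram value=1024M
    crm_attribute -N node1 -P fail-count -D -l reboot   # delete transient attributes matching a pattern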
=#=#=#= Begin test: Query node attributes by pattern =#=#=#= scope=nodes name=ram value=1024M scope=nodes name=rattr value=XYZ =#=#=#= End test: Query node attributes by pattern - OK (0) =#=#=#= * Passed: crm_attribute - Query node attributes by pattern =#=#=#= Begin test: Update node attributes by pattern =#=#=#= =#=#=#= Current cib after: Update node attributes by pattern =#=#=#= =#=#=#= End test: Update node attributes by pattern - OK (0) =#=#=#= * Passed: crm_attribute - Update node attributes by pattern =#=#=#= Begin test: Delete node attributes by pattern =#=#=#= Deleted nodes attribute: id=nodes-node1-rattr name=rattr =#=#=#= Current cib after: Delete node attributes by pattern =#=#=#= =#=#=#= End test: Delete node attributes by pattern - OK (0) =#=#=#= * Passed: crm_attribute - Delete node attributes by pattern =#=#=#= Begin test: Set a transient (fail-count) node attribute =#=#=#= =#=#=#= Current cib after: Set a transient (fail-count) node attribute =#=#=#= =#=#=#= End test: Set a transient (fail-count) node attribute - OK (0) =#=#=#= * Passed: crm_attribute - Set a transient (fail-count) node attribute =#=#=#= Begin test: Query a fail count =#=#=#= scope=status name=fail-count-foo value=3 =#=#=#= Current cib after: Query a fail count =#=#=#= =#=#=#= End test: Query a fail count - OK (0) =#=#=#= * Passed: crm_failcount - Query a fail count =#=#=#= Begin test: Show node attributes with crm_simulate =#=#=#= unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity Current cluster status: * Node List: * Online: [ node1 ] * Full List of Resources: * No resources * Node Attributes: * Node: node1: * ram : 1024M =#=#=#= End test: Show node attributes with crm_simulate - OK (0) =#=#=#= * Passed: crm_simulate - Show node attributes with crm_simulate =#=#=#= Begin test: Set a second transient node attribute =#=#=#= =#=#=#= Current cib after: Set a second transient node attribute =#=#=#= =#=#=#= End test: Set a second transient node attribute - OK (0) =#=#=#= * Passed: crm_attribute - Set a second transient node attribute =#=#=#= Begin test: Query transient node attributes by pattern =#=#=#= scope=status name=fail-count-foo value=3 scope=status name=fail-count-bar value=5 =#=#=#= End test: Query transient node attributes by pattern - OK (0) =#=#=#= * Passed: crm_attribute - Query transient node attributes by pattern =#=#=#= Begin test: Update transient node attributes by pattern =#=#=#= =#=#=#= Current cib after: Update transient node attributes by pattern =#=#=#= =#=#=#= End test: Update transient node attributes by pattern - OK (0) =#=#=#= * Passed: crm_attribute - Update transient node attributes by pattern =#=#=#= Begin test: Delete transient node attributes by pattern =#=#=#= Deleted status attribute: id=status-node1-fail-count-foo name=fail-count-foo Deleted status attribute: id=status-node1-fail-count-bar name=fail-count-bar =#=#=#= Current cib after: Delete transient node attributes by pattern =#=#=#= =#=#=#= End test: Delete transient node attributes by pattern - OK (0) =#=#=#= * Passed: crm_attribute - Delete transient node attributes by pattern =#=#=#= Begin test: crm_attribute given invalid delete usage =#=#=#= crm_attribute: Error: must specify attribute name or pattern to delete =#=#=#= End test: crm_attribute given invalid delete usage - Incorrect 
usage (64) =#=#=#= * Passed: crm_attribute - crm_attribute given invalid delete usage =#=#=#= Begin test: Set a utilization node attribute =#=#=#= =#=#=#= Current cib after: Set a utilization node attribute =#=#=#= =#=#=#= End test: Set a utilization node attribute - OK (0) =#=#=#= * Passed: crm_attribute - Set a utilization node attribute =#=#=#= Begin test: Query utilization node attribute =#=#=#= scope=nodes name=cpu value=1 =#=#=#= End test: Query utilization node attribute - OK (0) =#=#=#= * Passed: crm_attribute - Query utilization node attribute =#=#=#= Begin test: Digest calculation =#=#=#= Digest: =#=#=#= Current cib after: Digest calculation =#=#=#= =#=#=#= End test: Digest calculation - OK (0) =#=#=#= * Passed: cibadmin - Digest calculation =#=#=#= Begin test: Replace operation should fail =#=#=#= Call failed: Update was older than existing configuration =#=#=#= Current cib after: Replace operation should fail =#=#=#= =#=#=#= End test: Replace operation should fail - Update was older than existing configuration (103) =#=#=#= * Passed: cibadmin - Replace operation should fail =#=#=#= Begin test: Default standby value =#=#=#= scope=status name=standby value=off =#=#=#= Current cib after: Default standby value =#=#=#= =#=#=#= End test: Default standby value - OK (0) =#=#=#= * Passed: crm_standby - Default standby value =#=#=#= Begin test: Set standby status =#=#=#= =#=#=#= Current cib after: Set standby status =#=#=#= =#=#=#= End test: Set standby status - OK (0) =#=#=#= * Passed: crm_standby - Set standby status =#=#=#= Begin test: Query standby value =#=#=#= scope=nodes name=standby value=true =#=#=#= Current cib after: Query standby value =#=#=#= =#=#=#= End test: Query standby value - OK (0) =#=#=#= * Passed: crm_standby - Query standby value =#=#=#= Begin test: Delete standby value =#=#=#= Deleted nodes attribute: id=nodes-node1-standby name=standby =#=#=#= Current cib after: Delete standby value =#=#=#= =#=#=#= End test: Delete standby value - OK (0) =#=#=#= * Passed: crm_standby - Delete standby value =#=#=#= Begin test: Create a resource =#=#=#= =#=#=#= Current cib after: Create a resource =#=#=#= =#=#=#= End test: Create a resource - OK (0) =#=#=#= * Passed: cibadmin - Create a resource =#=#=#= Begin test: crm_resource run with extra arguments =#=#=#= crm_resource: non-option ARGV-elements: [1 of 2] foo [2 of 2] bar =#=#=#= End test: crm_resource run with extra arguments - Incorrect usage (64) =#=#=#= * Passed: crm_resource - crm_resource run with extra arguments =#=#=#= Begin test: List all available resource options (invalid type) =#=#=#= crm_resource: Error parsing option --list-options =#=#=#= End test: List all available resource options (invalid type) - Incorrect usage (64) =#=#=#= * Passed: crm_resource - List all available resource options (invalid type) =#=#=#= Begin test: List all available resource options (invalid type) (XML) =#=#=#= crm_resource: Error parsing option --list-options =#=#=#= End test: List all available resource options (invalid type) (XML) - Incorrect usage (64) =#=#=#= * Passed: crm_resource - List all available resource options (invalid type) (XML) =#=#=#= Begin test: List non-advanced primitive meta-attributes =#=#=#= Primitive meta-attributes Meta-attributes applicable to primitive resources * priority: Resource assignment priority * If not all resources can be active, the cluster will stop lower-priority resources in order to keep higher-priority ones active. 
* Possible values: score (default: ) * critical: Default value for influence in colocation constraints * Use this value as the default for influence in all colocation constraints involving this resource, as well as in the implicit colocation constraints created if this resource is in a group. * Possible values: boolean (default: ) * target-role: State the cluster should attempt to keep this resource in * "Stopped" forces the resource to be stopped. "Started" allows the resource to be started (and in the case of promotable clone resources, promoted if appropriate). "Unpromoted" allows the resource to be started, but only in the unpromoted role if the resource is promotable. "Promoted" is equivalent to "Started". * Possible values: "Stopped", "Started" (default), "Unpromoted", "Promoted" * is-managed: Whether the cluster is allowed to actively change the resource's state * If false, the cluster will not start, stop, promote, or demote the resource on any node. Recurring actions for the resource are unaffected. If true, a true value for the maintenance-mode cluster option, the maintenance node attribute, or the maintenance resource meta-attribute overrides this. * Possible values: boolean (default: ) * maintenance: If true, the cluster will not schedule any actions involving the resource * If true, the cluster will not start, stop, promote, or demote the resource on any node, and will pause any recurring monitors (except those specifying role as "Stopped"). If false, a true value for the maintenance-mode cluster option or maintenance node attribute overrides this. * Possible values: boolean (default: ) * resource-stickiness: Score to add to the current node when a resource is already active * Score to add to the current node when a resource is already active. This allows running resources to stay where they are, even if they would be placed elsewhere if they were being started from a stopped state. The default is 1 for individual clone instances, and 0 for all other resources. * Possible values: score (no default) * requires: Conditions under which the resource can be started * Conditions under which the resource can be started. "nothing" means the cluster can always start this resource. "quorum" means the cluster can start this resource only if a majority of the configured nodes are active. "fencing" means the cluster can start this resource only if a majority of the configured nodes are active and any failed or unknown nodes have been fenced. "unfencing" means the cluster can start this resource only if a majority of the configured nodes are active and any failed or unknown nodes have been fenced, and only on nodes that have been unfenced. The default is "quorum" for resources with a class of stonith; otherwise, "unfencing" if unfencing is active in the cluster; otherwise, "fencing" if the stonith-enabled cluster option is true; otherwise, "quorum". * Possible values: "nothing", "quorum", "fencing", "unfencing" * migration-threshold: Number of failures on a node before the resource becomes ineligible to run there. * Number of failures that may occur for this resource on a node, before that node is marked ineligible to host this resource. A value of 0 indicates that this feature is disabled (the node will never be marked ineligible). By contrast, the cluster treats "INFINITY" (the default) as a very large but finite number. 
This option has an effect only if the failed operation specifies its on-fail attribute as "restart" (the default), and additionally for failed start operations, if the start-failure-is-fatal cluster property is set to false. * Possible values: score (default: ) * failure-timeout: Number of seconds before acting as if a failure had not occurred * Number of seconds after a failed action for this resource before acting as if the failure had not occurred, and potentially allowing the resource back to the node on which it failed. A value of 0 indicates that this feature is disabled. * Possible values: duration (default: ) * multiple-active: What to do if the cluster finds the resource active on more than one node * What to do if the cluster finds the resource active on more than one node. "block" means to mark the resource as unmanaged. "stop_only" means to stop all active instances of this resource and leave them stopped. "stop_start" means to stop all active instances of this resource and start the resource in one location only. "stop_unexpected" means to stop all active instances of this resource except where the resource should be active. (This should be used only when extra instances are not expected to disrupt existing instances, and the resource agent's monitor of an existing instance is capable of detecting any problems that could be caused. Note that any resources ordered after this one will still need to be restarted.) * Possible values: "block", "stop_only", "stop_start" (default), "stop_unexpected" * allow-migrate: Whether the cluster should try to "live migrate" this resource when it needs to be moved * Whether the cluster should try to "live migrate" this resource when it needs to be moved. The default is true for ocf:pacemaker:remote resources, and false otherwise. * Possible values: boolean (no default) * allow-unhealthy-nodes: Whether the resource should be allowed to run on a node even if the node's health score would otherwise prevent it * Possible values: boolean (default: ) * container-attribute-target: Where to check user-defined node attributes * Whether to check user-defined node attributes on the physical host where a container is running or on the local node. This is usually set for a bundle resource and inherited by the bundle's primitive resource. A value of "host" means to check user-defined node attributes on the underlying physical host. Any other value means to check user-defined node attributes on the local node (for a bundled primitive resource, this is the bundle node). * Possible values: string (no default) * remote-node: Name of the Pacemaker Remote guest node this resource is associated with, if any * Name of the Pacemaker Remote guest node this resource is associated with, if any. If specified, this both enables the resource as a guest node and defines the unique name used to identify the guest node. The guest must be configured to run the Pacemaker Remote daemon when it is started. WARNING: This value cannot overlap with any resource or node IDs. * Possible values: string (no default) * remote-addr: If remote-node is specified, the IP address or hostname used to connect to the guest via Pacemaker Remote * If remote-node is specified, the IP address or hostname used to connect to the guest via Pacemaker Remote. The Pacemaker Remote daemon on the guest must be configured to accept connections on this address. The default is the value of the remote-node meta-attribute. * Possible values: string (no default) * remote-port: If remote-node is specified, port on the guest used for its Pacemaker Remote connection * If remote-node is specified, the port on the guest used for its Pacemaker Remote connection. The Pacemaker Remote daemon on the guest must be configured to listen on this port. * Possible values: port (default: ) * remote-connect-timeout: If remote-node is specified, how long before a pending Pacemaker Remote guest connection times out. * Possible values: timeout (default: ) * remote-allow-migrate: If remote-node is specified, this acts as the allow-migrate meta-attribute for the implicit remote connection resource (ocf:pacemaker:remote). * Possible values: boolean (default: ) =#=#=#= End test: List non-advanced primitive meta-attributes - OK (0) =#=#=#= * Passed: crm_resource - List non-advanced primitive meta-attributes
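In practice, the meta-attributes listed above are usually set either with crm_resource or as nvpairs in the resource's meta_attributes block in the CIB. A minimal sketch, with a placeholder resource name:

    # Unmanage a single resource by setting its is-managed meta-attribute
    crm_resource --resource myResource --meta --set-parameter is-managed --parameter-value false
    # Query it back
    crm_resource --resource myResource --meta --get-parameter is-managed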
=#=#=#= Begin test: List non-advanced primitive meta-attributes (XML) (shows all) =#=#=#= 1.1 Meta-attributes applicable to primitive resources Primitive meta-attributes If not all resources can be active, the cluster will stop lower-priority resources in order to keep higher-priority ones active. Resource assignment priority Use this value as the default for influence in all colocation constraints involving this resource, as well as in the implicit colocation constraints created if this resource is in a group. Default value for influence in colocation constraints "Stopped" forces the resource to be stopped. "Started" allows the resource to be started (and in the case of promotable clone resources, promoted if appropriate). "Unpromoted" allows the resource to be started, but only in the unpromoted role if the resource is promotable. "Promoted" is equivalent to "Started". State the cluster should attempt to keep this resource in If false, the cluster will not start, stop, promote, or demote the resource on any node. Recurring actions for the resource are unaffected. If true, a true value for the maintenance-mode cluster option, the maintenance node attribute, or the maintenance resource meta-attribute overrides this. Whether the cluster is allowed to actively change the resource's state If true, the cluster will not start, stop, promote, or demote the resource on any node, and will pause any recurring monitors (except those specifying role as "Stopped"). If false, a true value for the maintenance-mode cluster option or maintenance node attribute overrides this. If true, the cluster will not schedule any actions involving the resource Score to add to the current node when a resource is already active. This allows running resources to stay where they are, even if they would be placed elsewhere if they were being started from a stopped state. The default is 1 for individual clone instances, and 0 for all other resources. Score to add to the current node when a resource is already active Conditions under which the resource can be started. "nothing" means the cluster can always start this resource. "quorum" means the cluster can start this resource only if a majority of the configured nodes are active. "fencing" means the cluster can start this resource only if a majority of the configured nodes are active and any failed or unknown nodes have been fenced. "unfencing" means the cluster can start this resource only if a majority of the configured nodes are active and any failed or unknown nodes have been fenced, and only on nodes that have been unfenced.
The default is "quorum" for resources with a class of stonith; otherwise, "unfencing" if unfencing is active in the cluster; otherwise, "fencing" if the stonith-enabled cluster option is true; otherwise, "quorum". Conditions under which the resource can be started Number of failures that may occur for this resource on a node, before that node is marked ineligible to host this resource. A value of 0 indicates that this feature is disabled (the node will never be marked ineligible). By contrast, the cluster treats "INFINITY" (the default) as a very large but finite number. This option has an effect only if the failed operation specifies its on-fail attribute as "restart" (the default), and additionally for failed start operations, if the start-failure-is-fatal cluster property is set to false. Number of failures on a node before the resource becomes ineligible to run there. Number of seconds after a failed action for this resource before acting as if the failure had not occurred, and potentially allowing the resource back to the node on which it failed. A value of 0 indicates that this feature is disabled. Number of seconds before acting as if a failure had not occurred What to do if the cluster finds the resource active on more than one node. "block" means to mark the resource as unmanaged. "stop_only" means to stop all active instances of this resource and leave them stopped. "stop_start" means to stop all active instances of this resource and start the resource in one location only. "stop_unexpected" means to stop all active instances of this resource except where the resource should be active. (This should be used only when extra instances are not expected to disrupt existing instances, and the resource agent's monitor of an existing instance is capable of detecting any problems that could be caused. Note that any resources ordered after this one will still need to be restarted.) What to do if the cluster finds the resource active on more than one node Whether the cluster should try to "live migrate" this resource when it needs to be moved. The default is true for ocf:pacemaker:remote resources, and false otherwise. Whether the cluster should try to "live migrate" this resource when it needs to be moved Whether the resource should be allowed to run on a node even if the node's health score would otherwise prevent it Whether the resource should be allowed to run on a node even if the node's health score would otherwise prevent it Whether to check user-defined node attributes on the physical host where a container is running or on the local node. This is usually set for a bundle resource and inherited by the bundle's primitive resource. A value of "host" means to check user-defined node attributes on the underlying physical host. Any other value means to check user-defined node attributes on the local node (for a bundled primitive resource, this is the bundle node). Where to check user-defined node attributes Name of the Pacemaker Remote guest node this resource is associated with, if any. If specified, this both enables the resource as a guest node and defines the unique name used to identify the guest node. The guest must be configured to run the Pacemaker Remote daemon when it is started. WARNING: This value cannot overlap with any resource or node IDs. Name of the Pacemaker Remote guest node this resource is associated with, if any If remote-node is specified, the IP address or hostname used to connect to the guest via Pacemaker Remote. 
The Pacemaker Remote daemon on the guest must be configured to accept connections on this address. The default is the value of the remote-node meta-attribute. If remote-node is specified, the IP address or hostname used to connect to the guest via Pacemaker Remote If remote-node is specified, the port on the guest used for its Pacemaker Remote connection. The Pacemaker Remote daemon on the guest must be configured to listen on this port. If remote-node is specified, port on the guest used for its Pacemaker Remote connection If remote-node is specified, how long before a pending Pacemaker Remote guest connection times out. If remote-node is specified, how long before a pending Pacemaker Remote guest connection times out. If remote-node is specified, this acts as the allow-migrate meta-attribute for the implicit remote connection resource (ocf:pacemaker:remote). If remote-node is specified, this acts as the allow-migrate meta-attribute for the implicit remote connection resource (ocf:pacemaker:remote). =#=#=#= End test: List non-advanced primitive meta-attributes (XML) (shows all) - OK (0) =#=#=#= * Passed: crm_resource - List non-advanced primitive meta-attributes (XML) (shows all) =#=#=#= Begin test: List all available primitive meta-attributes =#=#=#= Primitive meta-attributes Meta-attributes applicable to primitive resources * priority: Resource assignment priority * If not all resources can be active, the cluster will stop lower-priority resources in order to keep higher-priority ones active. * Possible values: score (default: ) * critical: Default value for influence in colocation constraints * Use this value as the default for influence in all colocation constraints involving this resource, as well as in the implicit colocation constraints created if this resource is in a group. * Possible values: boolean (default: ) * target-role: State the cluster should attempt to keep this resource in * "Stopped" forces the resource to be stopped. "Started" allows the resource to be started (and in the case of promotable clone resources, promoted if appropriate). "Unpromoted" allows the resource to be started, but only in the unpromoted role if the resource is promotable. "Promoted" is equivalent to "Started". * Possible values: "Stopped", "Started" (default), "Unpromoted", "Promoted" * is-managed: Whether the cluster is allowed to actively change the resource's state * If false, the cluster will not start, stop, promote, or demote the resource on any node. Recurring actions for the resource are unaffected. If true, a true value for the maintenance-mode cluster option, the maintenance node attribute, or the maintenance resource meta-attribute overrides this. * Possible values: boolean (default: ) * maintenance: If true, the cluster will not schedule any actions involving the resource * If true, the cluster will not start, stop, promote, or demote the resource on any node, and will pause any recurring monitors (except those specifying role as "Stopped"). If false, a true value for the maintenance-mode cluster option or maintenance node attribute overrides this. * Possible values: boolean (default: ) * resource-stickiness: Score to add to the current node when a resource is already active * Score to add to the current node when a resource is already active. This allows running resources to stay where they are, even if they would be placed elsewhere if they were being started from a stopped state. The default is 1 for individual clone instances, and 0 for all other resources. 
* Possible values: score (no default) * requires: Conditions under which the resource can be started * Conditions under which the resource can be started. "nothing" means the cluster can always start this resource. "quorum" means the cluster can start this resource only if a majority of the configured nodes are active. "fencing" means the cluster can start this resource only if a majority of the configured nodes are active and any failed or unknown nodes have been fenced. "unfencing" means the cluster can start this resource only if a majority of the configured nodes are active and any failed or unknown nodes have been fenced, and only on nodes that have been unfenced. The default is "quorum" for resources with a class of stonith; otherwise, "unfencing" if unfencing is active in the cluster; otherwise, "fencing" if the stonith-enabled cluster option is true; otherwise, "quorum". * Possible values: "nothing", "quorum", "fencing", "unfencing" * migration-threshold: Number of failures on a node before the resource becomes ineligible to run there. * Number of failures that may occur for this resource on a node, before that node is marked ineligible to host this resource. A value of 0 indicates that this feature is disabled (the node will never be marked ineligible). By contrast, the cluster treats "INFINITY" (the default) as a very large but finite number. This option has an effect only if the failed operation specifies its on-fail attribute as "restart" (the default), and additionally for failed start operations, if the start-failure-is-fatal cluster property is set to false. * Possible values: score (default: ) * failure-timeout: Number of seconds before acting as if a failure had not occurred * Number of seconds after a failed action for this resource before acting as if the failure had not occurred, and potentially allowing the resource back to the node on which it failed. A value of 0 indicates that this feature is disabled. * Possible values: duration (default: ) * multiple-active: What to do if the cluster finds the resource active on more than one node * What to do if the cluster finds the resource active on more than one node. "block" means to mark the resource as unmanaged. "stop_only" means to stop all active instances of this resource and leave them stopped. "stop_start" means to stop all active instances of this resource and start the resource in one location only. "stop_unexpected" means to stop all active instances of this resource except where the resource should be active. (This should be used only when extra instances are not expected to disrupt existing instances, and the resource agent's monitor of an existing instance is capable of detecting any problems that could be caused. Note that any resources ordered after this one will still need to be restarted.) * Possible values: "block", "stop_only", "stop_start" (default), "stop_unexpected" * allow-migrate: Whether the cluster should try to "live migrate" this resource when it needs to be moved * Whether the cluster should try to "live migrate" this resource when it needs to be moved. The default is true for ocf:pacemaker:remote resources, and false otherwise. 
* Possible values: boolean (no default) * allow-unhealthy-nodes: Whether the resource should be allowed to run on a node even if the node's health score would otherwise prevent it * Possible values: boolean (default: ) * container-attribute-target: Where to check user-defined node attributes * Whether to check user-defined node attributes on the physical host where a container is running or on the local node. This is usually set for a bundle resource and inherited by the bundle's primitive resource. A value of "host" means to check user-defined node attributes on the underlying physical host. Any other value means to check user-defined node attributes on the local node (for a bundled primitive resource, this is the bundle node). * Possible values: string (no default) * remote-node: Name of the Pacemaker Remote guest node this resource is associated with, if any * Name of the Pacemaker Remote guest node this resource is associated with, if any. If specified, this both enables the resource as a guest node and defines the unique name used to identify the guest node. The guest must be configured to run the Pacemaker Remote daemon when it is started. WARNING: This value cannot overlap with any resource or node IDs. * Possible values: string (no default) * remote-addr: If remote-node is specified, the IP address or hostname used to connect to the guest via Pacemaker Remote * If remote-node is specified, the IP address or hostname used to connect to the guest via Pacemaker Remote. The Pacemaker Remote daemon on the guest must be configured to accept connections on this address. The default is the value of the remote-node meta-attribute. * Possible values: string (no default) * remote-port: If remote-node is specified, port on the guest used for its Pacemaker Remote connection * If remote-node is specified, the port on the guest used for its Pacemaker Remote connection. The Pacemaker Remote daemon on the guest must be configured to listen on this port. * Possible values: port (default: ) * remote-connect-timeout: If remote-node is specified, how long before a pending Pacemaker Remote guest connection times out. * Possible values: timeout (default: ) * remote-allow-migrate: If remote-node is specified, this acts as the allow-migrate meta-attribute for the implicit remote connection resource (ocf:pacemaker:remote). * Possible values: boolean (default: ) =#=#=#= End test: List all available primitive meta-attributes - OK (0) =#=#=#= * Passed: crm_resource - List all available primitive meta-attributes =#=#=#= Begin test: List all available primitive meta-attributes (XML) =#=#=#= 1.1 Meta-attributes applicable to primitive resources Primitive meta-attributes If not all resources can be active, the cluster will stop lower-priority resources in order to keep higher-priority ones active. Resource assignment priority Use this value as the default for influence in all colocation constraints involving this resource, as well as in the implicit colocation constraints created if this resource is in a group. Default value for influence in colocation constraints "Stopped" forces the resource to be stopped. "Started" allows the resource to be started (and in the case of promotable clone resources, promoted if appropriate). "Unpromoted" allows the resource to be started, but only in the unpromoted role if the resource is promotable. "Promoted" is equivalent to "Started". 
State the cluster should attempt to keep this resource in If false, the cluster will not start, stop, promote, or demote the resource on any node. Recurring actions for the resource are unaffected. If true, a true value for the maintenance-mode cluster option, the maintenance node attribute, or the maintenance resource meta-attribute overrides this. Whether the cluster is allowed to actively change the resource's state If true, the cluster will not start, stop, promote, or demote the resource on any node, and will pause any recurring monitors (except those specifying role as "Stopped"). If false, a true value for the maintenance-mode cluster option or maintenance node attribute overrides this. If true, the cluster will not schedule any actions involving the resource Score to add to the current node when a resource is already active. This allows running resources to stay where they are, even if they would be placed elsewhere if they were being started from a stopped state. The default is 1 for individual clone instances, and 0 for all other resources. Score to add to the current node when a resource is already active Conditions under which the resource can be started. "nothing" means the cluster can always start this resource. "quorum" means the cluster can start this resource only if a majority of the configured nodes are active. "fencing" means the cluster can start this resource only if a majority of the configured nodes are active and any failed or unknown nodes have been fenced. "unfencing" means the cluster can start this resource only if a majority of the configured nodes are active and any failed or unknown nodes have been fenced, and only on nodes that have been unfenced. The default is "quorum" for resources with a class of stonith; otherwise, "unfencing" if unfencing is active in the cluster; otherwise, "fencing" if the stonith-enabled cluster option is true; otherwise, "quorum". Conditions under which the resource can be started Number of failures that may occur for this resource on a node, before that node is marked ineligible to host this resource. A value of 0 indicates that this feature is disabled (the node will never be marked ineligible). By contrast, the cluster treats "INFINITY" (the default) as a very large but finite number. This option has an effect only if the failed operation specifies its on-fail attribute as "restart" (the default), and additionally for failed start operations, if the start-failure-is-fatal cluster property is set to false. Number of failures on a node before the resource becomes ineligible to run there. Number of seconds after a failed action for this resource before acting as if the failure had not occurred, and potentially allowing the resource back to the node on which it failed. A value of 0 indicates that this feature is disabled. Number of seconds before acting as if a failure had not occurred What to do if the cluster finds the resource active on more than one node. "block" means to mark the resource as unmanaged. "stop_only" means to stop all active instances of this resource and leave them stopped. "stop_start" means to stop all active instances of this resource and start the resource in one location only. "stop_unexpected" means to stop all active instances of this resource except where the resource should be active. (This should be used only when extra instances are not expected to disrupt existing instances, and the resource agent's monitor of an existing instance is capable of detecting any problems that could be caused. 
Note that any resources ordered after this one will still need to be restarted.) What to do if the cluster finds the resource active on more than one node Whether the cluster should try to "live migrate" this resource when it needs to be moved. The default is true for ocf:pacemaker:remote resources, and false otherwise. Whether the cluster should try to "live migrate" this resource when it needs to be moved Whether the resource should be allowed to run on a node even if the node's health score would otherwise prevent it Whether the resource should be allowed to run on a node even if the node's health score would otherwise prevent it Whether to check user-defined node attributes on the physical host where a container is running or on the local node. This is usually set for a bundle resource and inherited by the bundle's primitive resource. A value of "host" means to check user-defined node attributes on the underlying physical host. Any other value means to check user-defined node attributes on the local node (for a bundled primitive resource, this is the bundle node). Where to check user-defined node attributes Name of the Pacemaker Remote guest node this resource is associated with, if any. If specified, this both enables the resource as a guest node and defines the unique name used to identify the guest node. The guest must be configured to run the Pacemaker Remote daemon when it is started. WARNING: This value cannot overlap with any resource or node IDs. Name of the Pacemaker Remote guest node this resource is associated with, if any If remote-node is specified, the IP address or hostname used to connect to the guest via Pacemaker Remote. The Pacemaker Remote daemon on the guest must be configured to accept connections on this address. The default is the value of the remote-node meta-attribute. If remote-node is specified, the IP address or hostname used to connect to the guest via Pacemaker Remote If remote-node is specified, the port on the guest used for its Pacemaker Remote connection. The Pacemaker Remote daemon on the guest must be configured to listen on this port. If remote-node is specified, port on the guest used for its Pacemaker Remote connection If remote-node is specified, how long before a pending Pacemaker Remote guest connection times out. If remote-node is specified, how long before a pending Pacemaker Remote guest connection times out. If remote-node is specified, this acts as the allow-migrate meta-attribute for the implicit remote connection resource (ocf:pacemaker:remote). If remote-node is specified, this acts as the allow-migrate meta-attribute for the implicit remote connection resource (ocf:pacemaker:remote). =#=#=#= End test: List all available primitive meta-attributes (XML) - OK (0) =#=#=#= * Passed: crm_resource - List all available primitive meta-attributes (XML) =#=#=#= Begin test: List non-advanced fencing parameters =#=#=#= Fencing resource common parameters Special parameters that are available for all fencing resources, regardless of type. They are processed by Pacemaker, rather than by the fence agent or the fencing library. * pcmk_host_map: A mapping of node names to port numbers for devices that do not support node names. * For example, "node1:1;node2:2,3" would tell the cluster to use port 1 for node1 and ports 2 and 3 for node2. * Possible values: string (no default) * pcmk_host_list: Nodes targeted by this device * Comma-separated list of nodes that can be targeted by this device (for example, "node1,node2,node3"). If pcmk_host_check is "static-list", either this or pcmk_host_map must be set. * Possible values: string (no default) * pcmk_host_check: How to determine which nodes can be targeted by the device * Use "dynamic-list" to query the device via the 'list' command; "static-list" to check the pcmk_host_list attribute; "status" to query the device via the 'status' command; or "none" to assume every device can fence every node. The default value is "static-list" if pcmk_host_map or pcmk_host_list is set; otherwise "dynamic-list" if the device supports the list operation; otherwise "status" if the device supports the status operation; otherwise "none" * Possible values: "dynamic-list", "static-list", "status", "none" * pcmk_delay_max: Enable a delay of no more than the time specified before executing fencing actions. * Enable a delay of no more than the time specified before executing fencing actions. Pacemaker derives the overall delay by taking the value of pcmk_delay_base and adding a random delay value such that the sum is kept below this maximum. * Possible values: duration (default: ) * pcmk_delay_base: Enable a base delay for fencing actions and specify base delay value. * This enables a static delay for fencing actions, which can help avoid "death matches" where two nodes try to fence each other at the same time. If pcmk_delay_max is also used, a random delay will be added such that the total delay is kept below that value. This can be set to a single time value to apply to any node targeted by this device (useful if a separate device is configured for each target), or to a node map (for example, "node1:1s;node2:5") to set a different value for each target. * Possible values: string (default: ) * pcmk_action_limit: The maximum number of actions that can be performed in parallel on this device * Cluster property concurrent-fencing="true" needs to be configured first. Then use this to specify the maximum number of actions that can be performed in parallel on this device. A value of -1 means an unlimited number of actions can be performed in parallel. * Possible values: integer (default: ) =#=#=#= End test: List non-advanced fencing parameters - OK (0) =#=#=#= * Passed: crm_resource - List non-advanced fencing parameters
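These common parameters are configured as ordinary instance attributes of a fencing resource. A minimal sketch of creating such a resource with cibadmin (the device type, IDs, and values here are illustrative only):

    cibadmin --create --scope resources --xml-text \
        '<primitive id="Fencing" class="stonith" type="fence_xvm">
           <instance_attributes id="Fencing-params">
             <nvpair id="Fencing-params-host-map" name="pcmk_host_map" value="node1:1;node2:2,3"/>
             <nvpair id="Fencing-params-delay-base" name="pcmk_delay_base" value="1s"/>
           </instance_attributes>
         </primitive>'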
=#=#=#= Begin test: List non-advanced fencing parameters (XML) (shows all) =#=#=#= 1.1 Special parameters that are available for all fencing resources, regardless of type. They are processed by Pacemaker, rather than by the fence agent or the fencing library. Fencing resource common parameters Some devices do not support the standard 'port' parameter or may provide additional ones. Use this to specify an alternate, device-specific, parameter that should indicate the machine to be fenced. A value of "none" can be used to tell the cluster not to supply any additional parameters. An alternate parameter to supply instead of 'port' For example, "node1:1;node2:2,3" would tell the cluster to use port 1 for node1 and ports 2 and 3 for node2. A mapping of node names to port numbers for devices that do not support node names. Comma-separated list of nodes that can be targeted by this device (for example, "node1,node2,node3"). If pcmk_host_check is "static-list", either this or pcmk_host_map must be set. Nodes targeted by this device Use "dynamic-list" to query the device via the 'list' command; "static-list" to check the pcmk_host_list attribute; "status" to query the device via the 'status' command; or "none" to assume every device can fence every node.
The default value is "static-list" if pcmk_host_map or pcmk_host_list is set; otherwise "dynamic-list" if the device supports the list operation; otherwise "status" if the device supports the status operation; otherwise "none" How to determine which nodes can be targeted by the device Enable a delay of no more than the time specified before executing fencing actions. Pacemaker derives the overall delay by taking the value of pcmk_delay_base and adding a random delay value such that the sum is kept below this maximum. Enable a delay of no more than the time specified before executing fencing actions. This enables a static delay for fencing actions, which can help avoid "death matches" where two nodes try to fence each other at the same time. If pcmk_delay_max is also used, a random delay will be added such that the total delay is kept below that value. This can be set to a single time value to apply to any node targeted by this device (useful if a separate device is configured for each target), or to a node map (for example, "node1:1s;node2:5") to set a different value for each target. Enable a base delay for fencing actions and specify base delay value. Cluster property concurrent-fencing="true" needs to be configured first. Then use this to specify the maximum number of actions that can be performed in parallel on this device. A value of -1 means an unlimited number of actions can be performed in parallel. The maximum number of actions that can be performed in parallel on this device Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'reboot' action. An alternate command to run instead of 'reboot' Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'reboot' actions. Specify an alternate timeout to use for 'reboot' actions instead of stonith-timeout Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries a 'reboot' action before giving up. The maximum number of times to try the 'reboot' command within the timeout period Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'off' action. An alternate command to run instead of 'off' Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'off' actions. Specify an alternate timeout to use for 'off' actions instead of stonith-timeout Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries an 'off' action before giving up. The maximum number of times to try the 'off' command within the timeout period Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'on' action. An alternate command to run instead of 'on' Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'on' actions.
Specify an alternate timeout to use for 'on' actions instead of stonith-timeout Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries an 'on' action before giving up. The maximum number of times to try the 'on' command within the timeout period Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'list' action. An alternate command to run instead of 'list' Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'list' actions. Specify an alternate timeout to use for 'list' actions instead of stonith-timeout Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries a 'list' action before giving up. The maximum number of times to try the 'list' command within the timeout period Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'monitor' action. An alternate command to run instead of 'monitor' Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'monitor' actions. Specify an alternate timeout to use for 'monitor' actions instead of stonith-timeout Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries a 'monitor' action before giving up. The maximum number of times to try the 'monitor' command within the timeout period Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'status' action. An alternate command to run instead of 'status' Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'status' actions. Specify an alternate timeout to use for 'status' actions instead of stonith-timeout Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries a 'status' action before giving up. The maximum number of times to try the 'status' command within the timeout period =#=#=#= End test: List non-advanced fencing parameters (XML) (shows all) - OK (0) =#=#=#= * Passed: crm_resource - List non-advanced fencing parameters (XML) (shows all) =#=#=#= Begin test: List all available fencing parameters =#=#=#= Fencing resource common parameters Special parameters that are available for all fencing resources, regardless of type. They are processed by Pacemaker, rather than by the fence agent or the fencing library. * pcmk_host_map: A mapping of node names to port numbers for devices that do not support node names.
* For example, "node1:1;node2:2,3" would tell the cluster to use port 1 for node1 and ports 2 and 3 for node2. * Possible values: string (no default) * pcmk_host_list: Nodes targeted by this device * Comma-separated list of nodes that can be targeted by this device (for example, "node1,node2,node3"). If pcmk_host_check is "static-list", either this or pcmk_host_map must be set. * Possible values: string (no default) * pcmk_host_check: How to determine which nodes can be targeted by the device * Use "dynamic-list" to query the device via the 'list' command; "static-list" to check the pcmk_host_list attribute; "status" to query the device via the 'status' command; or "none" to assume every device can fence every node. The default value is "static-list" if pcmk_host_map or pcmk_host_list is set; otherwise "dynamic-list" if the device supports the list operation; otherwise "status" if the device supports the status operation; otherwise "none" * Possible values: "dynamic-list", "static-list", "status", "none" * pcmk_delay_max: Enable a delay of no more than the time specified before executing fencing actions. * Enable a delay of no more than the time specified before executing fencing actions. Pacemaker derives the overall delay by taking the value of pcmk_delay_base and adding a random delay value such that the sum is kept below this maximum. * Possible values: duration (default: ) * pcmk_delay_base: Enable a base delay for fencing actions and specify base delay value. * This enables a static delay for fencing actions, which can help avoid "death matches" where two nodes try to fence each other at the same time. If pcmk_delay_max is also used, a random delay will be added such that the total delay is kept below that value. This can be set to a single time value to apply to any node targeted by this device (useful if a separate device is configured for each target), or to a node map (for example, "node1:1s;node2:5") to set a different value for each target. * Possible values: string (default: ) * pcmk_action_limit: The maximum number of actions that can be performed in parallel on this device * Cluster property concurrent-fencing="true" needs to be configured first. Then use this to specify the maximum number of actions that can be performed in parallel on this device. A value of -1 means an unlimited number of actions can be performed in parallel. * Possible values: integer (default: ) * ADVANCED OPTIONS: * pcmk_host_argument: An alternate parameter to supply instead of 'port' * Some devices do not support the standard 'port' parameter or may provide additional ones. Use this to specify an alternate, device-specific, parameter that should indicate the machine to be fenced. A value of "none" can be used to tell the cluster not to supply any additional parameters. * Possible values: string (default: ) * pcmk_reboot_action: An alternate command to run instead of 'reboot' * Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'reboot' action. * Possible values: string (default: ) * pcmk_reboot_timeout: Specify an alternate timeout to use for 'reboot' actions instead of stonith-timeout * Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'reboot' actions.
* Possible values: timeout (default: ) * pcmk_reboot_retries: The maximum number of times to try the 'reboot' command within the timeout period * Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries a 'reboot' action before giving up. * Possible values: integer (default: ) * pcmk_off_action: An alternate command to run instead of 'off' * Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'off' action. * Possible values: string (default: ) * pcmk_off_timeout: Specify an alternate timeout to use for 'off' actions instead of stonith-timeout * Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'off' actions. * Possible values: timeout (default: ) * pcmk_off_retries: The maximum number of times to try the 'off' command within the timeout period * Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries an 'off' action before giving up. * Possible values: integer (default: ) * pcmk_on_action: An alternate command to run instead of 'on' * Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'on' action. * Possible values: string (default: ) * pcmk_on_timeout: Specify an alternate timeout to use for 'on' actions instead of stonith-timeout * Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'on' actions. * Possible values: timeout (default: ) * pcmk_on_retries: The maximum number of times to try the 'on' command within the timeout period * Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries an 'on' action before giving up. * Possible values: integer (default: ) * pcmk_list_action: An alternate command to run instead of 'list' * Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'list' action. * Possible values: string (default: ) * pcmk_list_timeout: Specify an alternate timeout to use for 'list' actions instead of stonith-timeout * Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'list' actions. * Possible values: timeout (default: ) * pcmk_list_retries: The maximum number of times to try the 'list' command within the timeout period * Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries a 'list' action before giving up.
* pcmk_list_retries: The maximum number of times to try the 'list' command within the timeout period
* Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries a 'list' action before giving up.
* Possible values: integer (default: )
* pcmk_monitor_action: An alternate command to run instead of 'monitor'
* Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'monitor' action.
* Possible values: string (default: )
* pcmk_monitor_timeout: Specify an alternate timeout to use for 'monitor' actions instead of stonith-timeout
* Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'monitor' actions.
* Possible values: timeout (default: )
* pcmk_monitor_retries: The maximum number of times to try the 'monitor' command within the timeout period
* Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries a 'monitor' action before giving up.
* Possible values: integer (default: )
* pcmk_status_action: An alternate command to run instead of 'status'
* Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'status' action.
* Possible values: string (default: )
* pcmk_status_timeout: Specify an alternate timeout to use for 'status' actions instead of stonith-timeout
* Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'status' actions.
* Possible values: timeout (default: )
* pcmk_status_retries: The maximum number of times to try the 'status' command within the timeout period
* Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries a 'status' action before giving up.
* Possible values: integer (default: )
=#=#=#= End test: List all available fencing parameters - OK (0) =#=#=#=
* Passed: crm_resource - List all available fencing parameters
=#=#=#= Begin test: List all available fencing parameters (XML) =#=#=#=
1.1 Special parameters that are available for all fencing resources, regardless of type. They are processed by Pacemaker, rather than by the fence agent or the fencing library. Fencing resource common parameters Some devices do not support the standard 'port' parameter or may provide additional ones. Use this to specify an alternate, device-specific, parameter that should indicate the machine to be fenced. A value of "none" can be used to tell the cluster not to supply any additional parameters. An alternate parameter to supply instead of 'port' For example, "node1:1;node2:2,3" would tell the cluster to use port 1 for node1 and ports 2 and 3 for node2. A mapping of node names to port numbers for devices that do not support node names. Comma-separated list of nodes that can be targeted by this device (for example, "node1,node2,node3"). If pcmk_host_check is "static-list", either this or pcmk_host_map must be set. Nodes targeted by this device Use "dynamic-list" to query the device via the 'list' command; "static-list" to check the pcmk_host_list attribute; "status" to query the device via the 'status' command; or "none" to assume every device can fence every node.
The default value is "static-list" if pcmk_host_map or pcmk_host_list is set; otherwise "dynamic-list" if the device supports the list operation; otherwise "status" if the device supports the status operation; otherwise "none" How to determine which nodes can be targeted by the device Enable a delay of no more than the time specified before executing fencing actions. Pacemaker derives the overall delay by taking the value of pcmk_delay_base and adding a random delay value such that the sum is kept below this maximum. Enable a delay of no more than the time specified before executing fencing actions. This enables a static delay for fencing actions, which can help avoid "death matches" where two nodes try to fence each other at the same time. If pcmk_delay_max is also used, a random delay will be added such that the total delay is kept below that value. This can be set to a single time value to apply to any node targeted by this device (useful if a separate device is configured for each target), or to a node map (for example, "node1:1s;node2:5") to set a different value for each target. Enable a base delay for fencing actions and specify a base delay value. Cluster property concurrent-fencing="true" needs to be configured first. Then use this to specify the maximum number of actions that can be performed in parallel on this device. A value of -1 means an unlimited number of actions can be performed in parallel. The maximum number of actions that can be performed in parallel on this device Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'reboot' action. An alternate command to run instead of 'reboot' Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'reboot' actions. Specify an alternate timeout to use for 'reboot' actions instead of stonith-timeout Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries a 'reboot' action before giving up. The maximum number of times to try the 'reboot' command within the timeout period Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'off' action. An alternate command to run instead of 'off' Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'off' actions. Specify an alternate timeout to use for 'off' actions instead of stonith-timeout Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries an 'off' action before giving up. The maximum number of times to try the 'off' command within the timeout period Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'on' action. An alternate command to run instead of 'on' Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'on' actions.
Specify an alternate timeout to use for 'on' actions instead of stonith-timeout Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries an 'on' action before giving up. The maximum number of times to try the 'on' command within the timeout period Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'list' action. An alternate command to run instead of 'list' Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'list' actions. Specify an alternate timeout to use for 'list' actions instead of stonith-timeout Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries a 'list' action before giving up. The maximum number of times to try the 'list' command within the timeout period Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'monitor' action. An alternate command to run instead of 'monitor' Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'monitor' actions. Specify an alternate timeout to use for 'monitor' actions instead of stonith-timeout Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries a 'monitor' action before giving up. The maximum number of times to try the 'monitor' command within the timeout period Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'status' action. An alternate command to run instead of 'status' Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'status' actions. Specify an alternate timeout to use for 'status' actions instead of stonith-timeout Some devices do not support multiple connections. Operations may "fail" if the device is busy with another task. In that case, Pacemaker will automatically retry the operation if there is time remaining. Use this option to alter the number of times Pacemaker tries a 'status' action before giving up.
The maximum number of times to try the 'status' command within the timeout period =#=#=#= End test: List all available fencing parameters (XML) - OK (0) =#=#=#= * Passed: crm_resource - List all available fencing parameters (XML) =#=#=#= Begin test: crm_resource given both -r and resource config =#=#=#= crm_resource: --resource cannot be used with --class, --agent, and --provider =#=#=#= End test: crm_resource given both -r and resource config - Incorrect usage (64) =#=#=#= * Passed: crm_resource - crm_resource given both -r and resource config =#=#=#= Begin test: crm_resource given resource config with invalid action =#=#=#= crm_resource: --class, --agent, and --provider can only be used with --validate and --force-* =#=#=#= End test: crm_resource given resource config with invalid action - Incorrect usage (64) =#=#=#= * Passed: crm_resource - crm_resource given resource config with invalid action =#=#=#= Begin test: Create a resource meta attribute =#=#=#= unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity Set 'dummy' option: id=dummy-meta_attributes-is-managed set=dummy-meta_attributes name=is-managed value=false =#=#=#= Current cib after: Create a resource meta attribute =#=#=#= =#=#=#= End test: Create a resource meta attribute - OK (0) =#=#=#= * Passed: crm_resource - Create a resource meta attribute =#=#=#= Begin test: Query a resource meta attribute =#=#=#= unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity false =#=#=#= Current cib after: Query a resource meta attribute =#=#=#= =#=#=#= End test: Query a resource meta attribute - OK (0) =#=#=#= * Passed: crm_resource - Query a resource meta attribute =#=#=#= Begin test: Remove a resource meta attribute =#=#=#= unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity Deleted 'dummy' option: id=dummy-meta_attributes-is-managed name=is-managed =#=#=#= Current cib after: Remove a resource meta attribute =#=#=#= =#=#=#= End test: Remove a resource meta attribute - OK (0) =#=#=#= * Passed: crm_resource - Remove a resource meta attribute =#=#=#= Begin test: Create another resource meta attribute =#=#=#= unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity =#=#=#= End test: Create another resource meta attribute - OK (0) =#=#=#= * Passed: crm_resource - Create another resource meta attribute =#=#=#= Begin test: Show why a resource is not running =#=#=#= unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure 
data integrity =#=#=#= End test: Show why a resource is not running - OK (0) =#=#=#= * Passed: crm_resource - Show why a resource is not running =#=#=#= Begin test: Remove another resource meta attribute =#=#=#= unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity =#=#=#= End test: Remove another resource meta attribute - OK (0) =#=#=#= * Passed: crm_resource - Remove another resource meta attribute =#=#=#= Begin test: Get a non-existent attribute from a resource element with output-as=xml =#=#=#= unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity Attribute 'nonexistent' not found for 'dummy' =#=#=#= End test: Get a non-existent attribute from a resource element with output-as=xml - OK (0) =#=#=#= * Passed: crm_resource - Get a non-existent attribute from a resource element with output-as=xml =#=#=#= Begin test: Get a non-existent attribute from a resource element without output-as=xml =#=#=#= unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity Attribute 'nonexistent' not found for 'dummy' =#=#=#= Current cib after: Get a non-existent attribute from a resource element without output-as=xml =#=#=#= =#=#=#= End test: Get a non-existent attribute from a resource element without output-as=xml - OK (0) =#=#=#= * Passed: crm_resource - Get a non-existent attribute from a resource element without output-as=xml =#=#=#= Begin test: Get an existent attribute from a resource element with output-as=xml =#=#=#= unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity ocf =#=#=#= End test: Get an existent attribute from a resource element with output-as=xml - OK (0) =#=#=#= * Passed: crm_resource - Get an existent attribute from a resource element with output-as=xml =#=#=#= Begin test: Get an existent attribute from a resource element without output-as=xml =#=#=#= unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity ocf =#=#=#= Current cib after: Get an existent attribute from a resource element without output-as=xml =#=#=#= =#=#=#= End test: Get an existent attribute from a resource element without output-as=xml - OK (0) =#=#=#= * Passed: crm_resource - Get an existent attribute from a resource element without output-as=xml =#=#=#= Begin test: Set a non-existent attribute for a resource element with output-as=xml =#=#=#= unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure 
some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity =#=#=#= Current cib after: Set a non-existent attribute for a resource element with output-as=xml =#=#=#= =#=#=#= End test: Set a non-existent attribute for a resource element with output-as=xml - OK (0) =#=#=#= * Passed: crm_resource - Set a non-existent attribute for a resource element with output-as=xml =#=#=#= Begin test: Set an existent attribute for a resource element with output-as=xml =#=#=#= unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity =#=#=#= Current cib after: Set an existent attribute for a resource element with output-as=xml =#=#=#= =#=#=#= End test: Set an existent attribute for a resource element with output-as=xml - OK (0) =#=#=#= * Passed: crm_resource - Set an existent attribute for a resource element with output-as=xml =#=#=#= Begin test: Delete an existent attribute for a resource element with output-as=xml =#=#=#= unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity =#=#=#= Current cib after: Delete an existent attribute for a resource element with output-as=xml =#=#=#= =#=#=#= End test: Delete an existent attribute for a resource element with output-as=xml - OK (0) =#=#=#= * Passed: crm_resource - Delete an existent attribute for a resource element with output-as=xml =#=#=#= Begin test: Delete a non-existent attribute for a resource element with output-as=xml =#=#=#= unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity =#=#=#= Current cib after: Delete a non-existent attribute for a resource element with output-as=xml =#=#=#= =#=#=#= End test: Delete a non-existent attribute for a resource element with output-as=xml - OK (0) =#=#=#= * Passed: crm_resource - Delete a non-existent attribute for a resource element with output-as=xml =#=#=#= Begin test: Set a non-existent attribute for a resource element without output-as=xml =#=#=#= unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity Set attribute: name=description value=test_description =#=#=#= Current cib after: Set a non-existent attribute for a resource element without output-as=xml =#=#=#= =#=#=#= End test: Set a non-existent attribute for a resource element without output-as=xml - OK (0) =#=#=#= * Passed: crm_resource - Set a non-existent attribute for a resource element without output-as=xml =#=#=#= Begin test: Set an existent attribute for a resource element without output-as=xml =#=#=#= unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable 
STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity Set attribute: name=description value=test_description =#=#=#= Current cib after: Set an existent attribute for a resource element without output-as=xml =#=#=#= =#=#=#= End test: Set an existent attribute for a resource element without output-as=xml - OK (0) =#=#=#= * Passed: crm_resource - Set an existent attribute for a resource element without output-as=xml =#=#=#= Begin test: Delete an existent attribute for a resource element without output-as=xml =#=#=#= unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity Deleted attribute: description =#=#=#= Current cib after: Delete an existent attribute for a resource element without output-as=xml =#=#=#= =#=#=#= End test: Delete an existent attribute for a resource element without output-as=xml - OK (0) =#=#=#= * Passed: crm_resource - Delete an existent attribute for a resource element without output-as=xml =#=#=#= Begin test: Delete a non-existent attribute for a resource element without output-as=xml =#=#=#= unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity Deleted attribute: description =#=#=#= Current cib after: Delete a non-existent attribute for a resource element without output-as=xml =#=#=#= =#=#=#= End test: Delete a non-existent attribute for a resource element without output-as=xml - OK (0) =#=#=#= * Passed: crm_resource - Delete a non-existent attribute for a resource element without output-as=xml =#=#=#= Begin test: Create a resource attribute =#=#=#= unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity Set 'dummy' option: id=dummy-instance_attributes-delay set=dummy-instance_attributes name=delay value=10s =#=#=#= Current cib after: Create a resource attribute =#=#=#= =#=#=#= End test: Create a resource attribute - OK (0) =#=#=#= * Passed: crm_resource - Create a resource attribute =#=#=#= Begin test: List the configured resources =#=#=#= unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity Full List of Resources: * dummy (ocf:pacemaker:Dummy): Stopped =#=#=#= Current cib after: List the configured resources =#=#=#= =#=#=#= End test: List the configured resources - OK (0) =#=#=#= * Passed: crm_resource - List the configured resources =#=#=#= Begin test: List the configured resources in XML =#=#=#= unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data 
integrity =#=#=#= End test: List the configured resources in XML - OK (0) =#=#=#= * Passed: crm_resource - List the configured resources in XML =#=#=#= Begin test: Implicitly list the configured resources =#=#=#= unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity Full List of Resources: * dummy (ocf:pacemaker:Dummy): Stopped =#=#=#= End test: Implicitly list the configured resources - OK (0) =#=#=#= * Passed: crm_resource - Implicitly list the configured resources =#=#=#= Begin test: List IDs of instantiated resources =#=#=#= unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity dummy =#=#=#= End test: List IDs of instantiated resources - OK (0) =#=#=#= * Passed: crm_resource - List IDs of instantiated resources =#=#=#= Begin test: Show XML configuration of resource =#=#=#= unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity dummy (ocf:pacemaker:Dummy): Stopped Resource XML: =#=#=#= End test: Show XML configuration of resource - OK (0) =#=#=#= * Passed: crm_resource - Show XML configuration of resource =#=#=#= Begin test: Show XML configuration of resource, output as XML =#=#=#= unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity ]]> =#=#=#= End test: Show XML configuration of resource, output as XML - OK (0) =#=#=#= * Passed: crm_resource - Show XML configuration of resource, output as XML =#=#=#= Begin test: Require a destination when migrating a resource that is stopped =#=#=#= unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity crm_resource: Resource 'dummy' not moved: active in 0 locations. To prevent 'dummy' from running on a specific location, specify a node. 
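For orientation, the attribute and move tests in this area correspond to crm_resource invocations of roughly the following shape. This is a sketch only: 'dummy' and the node names come from the fixture, while the exact options the test harness passes are not shown in this output:

    # Set, query, and delete a resource meta-attribute, as exercised above.
    crm_resource --resource dummy --meta --set-parameter is-managed --parameter-value false
    crm_resource --resource dummy --meta --get-parameter is-managed
    crm_resource --resource dummy --meta --delete-parameter is-managed
    # Moving or banning creates a cli-prefer-*/cli-ban-* location constraint;
    # --clear removes those constraints again.
    crm_resource --resource dummy --move --node node1
    crm_resource --resource dummy --ban --node node1
    crm_resource --resource dummy --clear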
=#=#=#= Current cib after: Require a destination when migrating a resource that is stopped =#=#=#= =#=#=#= End test: Require a destination when migrating a resource that is stopped - Incorrect usage (64) =#=#=#= * Passed: crm_resource - Require a destination when migrating a resource that is stopped =#=#=#= Begin test: Don't support migration to non-existent locations =#=#=#= unpack_resources error: Resource start-up disabled since no STONITH resources have been defined unpack_resources error: Either configure some or disable STONITH with the stonith-enabled option unpack_resources error: NOTE: Clusters with shared data need STONITH to ensure data integrity crm_resource: Node 'i.do.not.exist' not found Error performing operation: No such object =#=#=#= Current cib after: Don't support migration to non-existent locations =#=#=#= =#=#=#= End test: Don't support migration to non-existent locations - No such object (105) =#=#=#= * Passed: crm_resource - Don't support migration to non-existent locations =#=#=#= Begin test: Create a fencing resource =#=#=#= =#=#=#= Current cib after: Create a fencing resource =#=#=#= =#=#=#= End test: Create a fencing resource - OK (0) =#=#=#= * Passed: cibadmin - Create a fencing resource =#=#=#= Begin test: Bring resources online =#=#=#= Current cluster status: * Node List: * Online: [ node1 ] * Full List of Resources: * dummy (ocf:pacemaker:Dummy): Stopped * Fence (stonith:fence_true): Stopped Transition Summary: * Start dummy ( node1 ) * Start Fence ( node1 ) Executing Cluster Transition: * Resource action: dummy monitor on node1 * Resource action: Fence monitor on node1 * Resource action: dummy start on node1 * Resource action: Fence start on node1 Revised Cluster Status: * Node List: * Online: [ node1 ] * Full List of Resources: * dummy (ocf:pacemaker:Dummy): Started node1 * Fence (stonith:fence_true): Started node1 =#=#=#= Current cib after: Bring resources online =#=#=#= =#=#=#= End test: Bring resources online - OK (0) =#=#=#= * Passed: crm_simulate - Bring resources online =#=#=#= Begin test: Try to move a resource to its existing location =#=#=#= crm_resource: Error performing operation: Requested item already exists =#=#=#= Current cib after: Try to move a resource to its existing location =#=#=#= =#=#=#= End test: Try to move a resource to its existing location - Requested item already exists (108) =#=#=#= * Passed: crm_resource - Try to move a resource to its existing location =#=#=#= Begin test: Try to move a resource that doesn't exist =#=#=#= crm_resource: Resource 'xyz' not found Error performing operation: No such object =#=#=#= End test: Try to move a resource that doesn't exist - No such object (105) =#=#=#= * Passed: crm_resource - Try to move a resource that doesn't exist =#=#=#= Begin test: Move a resource from its existing location =#=#=#= WARNING: Creating rsc_location constraint 'cli-ban-dummy-on-node1' with a score of -INFINITY for resource dummy on node1. This will prevent dummy from running on node1 until the constraint is removed using the clear option or by editing the CIB with an appropriate tool. 
This will be the case even if node1 is the last node in the cluster =#=#=#= Current cib after: Move a resource from its existing location =#=#=#= =#=#=#= End test: Move a resource from its existing location - OK (0) =#=#=#= * Passed: crm_resource - Move a resource from its existing location =#=#=#= Begin test: Clear out constraints generated by --move =#=#=#= Removing constraint: cli-ban-dummy-on-node1 =#=#=#= Current cib after: Clear out constraints generated by --move =#=#=#= =#=#=#= End test: Clear out constraints generated by --move - OK (0) =#=#=#= * Passed: crm_resource - Clear out constraints generated by --move =#=#=#= Begin test: Default ticket granted state =#=#=#= false =#=#=#= Current cib after: Default ticket granted state =#=#=#= =#=#=#= End test: Default ticket granted state - OK (0) =#=#=#= * Passed: crm_ticket - Default ticket granted state =#=#=#= Begin test: Set ticket granted state =#=#=#= =#=#=#= Current cib after: Set ticket granted state =#=#=#= =#=#=#= End test: Set ticket granted state - OK (0) =#=#=#= * Passed: crm_ticket - Set ticket granted state =#=#=#= Begin test: List ticket IDs =#=#=#= ticketA =#=#=#= End test: List ticket IDs - OK (0) =#=#=#= * Passed: crm_ticket - List ticket IDs =#=#=#= Begin test: List ticket IDs, outputting in XML =#=#=#= =#=#=#= End test: List ticket IDs, outputting in XML - OK (0) =#=#=#= * Passed: crm_ticket - List ticket IDs, outputting in XML =#=#=#= Begin test: Query ticket state =#=#=#= State XML: =#=#=#= End test: Query ticket state - OK (0) =#=#=#= * Passed: crm_ticket - Query ticket state =#=#=#= Begin test: Query ticket state, outputting as xml =#=#=#= =#=#=#= End test: Query ticket state, outputting as xml - OK (0) =#=#=#= * Passed: crm_ticket - Query ticket state, outputting as xml =#=#=#= Begin test: Query ticket granted state =#=#=#= false =#=#=#= Current cib after: Query ticket granted state =#=#=#= =#=#=#= End test: Query ticket granted state - OK (0) =#=#=#= * Passed: crm_ticket - Query ticket granted state =#=#=#= Begin test: Query ticket granted state, outputting as xml =#=#=#= =#=#=#= End test: Query ticket granted state, outputting as xml - OK (0) =#=#=#= * Passed: crm_ticket - Query ticket granted state, outputting as xml =#=#=#= Begin test: Delete ticket granted state =#=#=#= =#=#=#= Current cib after: Delete ticket granted state =#=#=#= =#=#=#= End test: Delete ticket granted state - OK (0) =#=#=#= * Passed: crm_ticket - Delete ticket granted state =#=#=#= Begin test: Make a ticket standby =#=#=#= =#=#=#= Current cib after: Make a ticket standby =#=#=#= =#=#=#= End test: Make a ticket standby - OK (0) =#=#=#= * Passed: crm_ticket - Make a ticket standby =#=#=#= Begin test: Query ticket standby state =#=#=#= true =#=#=#= Current cib after: Query ticket standby state =#=#=#= =#=#=#= End test: Query ticket standby state - OK (0) =#=#=#= * Passed: crm_ticket - Query ticket standby state =#=#=#= Begin test: Activate a ticket =#=#=#= =#=#=#= Current cib after: Activate a ticket =#=#=#= =#=#=#= End test: Activate a ticket - OK (0) =#=#=#= * Passed: crm_ticket - Activate a ticket =#=#=#= Begin test: List ticket details =#=#=#= ticketA revoked (standby=false) =#=#=#= End test: List ticket details - OK (0) =#=#=#= * Passed: crm_ticket - List ticket details =#=#=#= Begin test: List ticket details, outputting as XML =#=#=#= =#=#=#= End test: List ticket details, outputting as XML - OK (0) =#=#=#= * Passed: crm_ticket - List ticket details, outputting as XML =#=#=#= Begin test: Add a second ticket =#=#=#= false =#=#=#= 
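Similarly, the ticket operations exercised here map onto crm_ticket's long options, roughly as sketched below; ticketA comes from the fixture, and the precise harness invocations are assumptions:

    # Grant a ticket, put it in standby, activate it again, then revoke it.
    crm_ticket --ticket ticketA --grant
    crm_ticket --ticket ticketA --standby
    crm_ticket --ticket ticketA --activate
    crm_ticket --ticket ticketA --revoke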
Current cib after: Add a second ticket =#=#=#= =#=#=#= End test: Add a second ticket - OK (0) =#=#=#= * Passed: crm_ticket - Add a second ticket =#=#=#= Begin test: Set second ticket granted state =#=#=#= =#=#=#= Current cib after: Set second ticket granted state =#=#=#= =#=#=#= End test: Set second ticket granted state - OK (0) =#=#=#= * Passed: crm_ticket - Set second ticket granted state =#=#=#= Begin test: List tickets =#=#=#= ticketA revoked ticketB revoked =#=#=#= End test: List tickets - OK (0) =#=#=#= * Passed: crm_ticket - List tickets =#=#=#= Begin test: List tickets, outputting as XML =#=#=#= =#=#=#= End test: List tickets, outputting as XML - OK (0) =#=#=#= * Passed: crm_ticket - List tickets, outputting as XML =#=#=#= Begin test: Delete second ticket =#=#=#= =#=#=#= Current cib after: Delete second ticket =#=#=#= =#=#=#= End test: Delete second ticket - OK (0) =#=#=#= * Passed: cibadmin - Delete second ticket =#=#=#= Begin test: Delete ticket standby state =#=#=#= =#=#=#= Current cib after: Delete ticket standby state =#=#=#= =#=#=#= End test: Delete ticket standby state - OK (0) =#=#=#= * Passed: crm_ticket - Delete ticket standby state =#=#=#= Begin test: Delete ticket standby state =#=#=#= =#=#=#= Current cib after: Delete ticket standby state =#=#=#= =#=#=#= End test: Delete ticket standby state - OK (0) =#=#=#= * Passed: cibadmin - Delete ticket standby state =#=#=#= Begin test: Query ticket constraints =#=#=#= Constraints XML: =#=#=#= End test: Query ticket constraints - OK (0) =#=#=#= * Passed: crm_ticket - Query ticket constraints =#=#=#= Begin test: Query ticket constraints, outputting as xml =#=#=#= =#=#=#= End test: Query ticket constraints, outputting as xml - OK (0) =#=#=#= * Passed: crm_ticket - Query ticket constraints, outputting as xml =#=#=#= Begin test: Delete ticket constraint =#=#=#= =#=#=#= Current cib after: Delete ticket constraint =#=#=#= =#=#=#= End test: Delete ticket constraint - OK (0) =#=#=#= * Passed: cibadmin - Delete ticket constraint =#=#=#= Begin test: Ban a resource on unknown node =#=#=#= crm_resource: Node 'host1' not found Error performing operation: No such object =#=#=#= Current cib after: Ban a resource on unknown node =#=#=#= =#=#=#= End test: Ban a resource on unknown node - No such object (105) =#=#=#= * Passed: crm_resource - Ban a resource on unknown node =#=#=#= Begin test: Create two more nodes and bring them online =#=#=#= Current cluster status: * Node List: * Online: [ node1 ] * Full List of Resources: * dummy (ocf:pacemaker:Dummy): Started node1 * Fence (stonith:fence_true): Started node1 Performing Requested Modifications: * Bringing node node2 online * Bringing node node3 online Transition Summary: * Move Fence ( node1 -> node2 ) Executing Cluster Transition: * Resource action: dummy monitor on node3 * Resource action: dummy monitor on node2 * Resource action: Fence stop on node1 * Resource action: Fence monitor on node3 * Resource action: Fence monitor on node2 * Resource action: Fence start on node2 Revised Cluster Status: * Node List: * Online: [ node1 node2 node3 ] * Full List of Resources: * dummy (ocf:pacemaker:Dummy): Started node1 * Fence (stonith:fence_true): Started node2 =#=#=#= Current cib after: Create two more nodes and bring them online =#=#=#= =#=#=#= End test: Create two more nodes and bring them online - OK (0) =#=#=#= * Passed: crm_simulate - Create two more nodes and bring them online =#=#=#= Begin test: Ban dummy from node1 =#=#=#= WARNING: Creating rsc_location constraint 'cli-ban-dummy-on-node1' with 
a score of -INFINITY for resource dummy on node1. This will prevent dummy from running on node1 until the constraint is removed using the clear option or by editing the CIB with an appropriate tool. This will be the case even if node1 is the last node in the cluster =#=#=#= Current cib after: Ban dummy from node1 =#=#=#= =#=#=#= End test: Ban dummy from node1 - OK (0) =#=#=#= * Passed: crm_resource - Ban dummy from node1 =#=#=#= Begin test: Show where a resource is running =#=#=#= resource dummy is running on: node1 =#=#=#= End test: Show where a resource is running - OK (0) =#=#=#= * Passed: crm_resource - Show where a resource is running =#=#=#= Begin test: Show constraints on a resource =#=#=#= Locations: * Node node1 (score=-INFINITY, id=cli-ban-dummy-on-node1, rsc=dummy) =#=#=#= End test: Show constraints on a resource - OK (0) =#=#=#= * Passed: crm_resource - Show constraints on a resource =#=#=#= Begin test: Ban dummy from node2 =#=#=#= =#=#=#= Current cib after: Ban dummy from node2 =#=#=#= =#=#=#= End test: Ban dummy from node2 - OK (0) =#=#=#= * Passed: crm_resource - Ban dummy from node2 =#=#=#= Begin test: Relocate resources due to ban =#=#=#= Current cluster status: * Node List: * Online: [ node1 node2 node3 ] * Full List of Resources: * dummy (ocf:pacemaker:Dummy): Started node1 * Fence (stonith:fence_true): Started node2 Transition Summary: * Move dummy ( node1 -> node3 ) Executing Cluster Transition: * Resource action: dummy stop on node1 * Resource action: dummy start on node3 Revised Cluster Status: * Node List: * Online: [ node1 node2 node3 ] * Full List of Resources: * dummy (ocf:pacemaker:Dummy): Started node3 * Fence (stonith:fence_true): Started node2 =#=#=#= Current cib after: Relocate resources due to ban =#=#=#= =#=#=#= End test: Relocate resources due to ban - OK (0) =#=#=#= * Passed: crm_simulate - Relocate resources due to ban =#=#=#= Begin test: Move dummy to node1 =#=#=#= =#=#=#= Current cib after: Move dummy to node1 =#=#=#= =#=#=#= End test: Move dummy to node1 - OK (0) =#=#=#= * Passed: crm_resource - Move dummy to node1 =#=#=#= Begin test: Clear implicit constraints for dummy on node2 =#=#=#= Removing constraint: cli-ban-dummy-on-node2 =#=#=#= Current cib after: Clear implicit constraints for dummy on node2 =#=#=#= =#=#=#= End test: Clear implicit constraints for dummy on node2 - OK (0) =#=#=#= * Passed: crm_resource - Clear implicit constraints for dummy on node2 =#=#=#= Begin test: Drop the status section =#=#=#= =#=#=#= End test: Drop the status section - OK (0) =#=#=#= * Passed: cibadmin - Drop the status section =#=#=#= Begin test: Create a clone =#=#=#= =#=#=#= End test: Create a clone - OK (0) =#=#=#= * Passed: cibadmin - Create a clone =#=#=#= Begin test: Create a resource meta attribute =#=#=#= Performing update of 'is-managed' on 'test-clone', the parent of 'test-primitive' Set 'test-clone' option: id=test-clone-meta_attributes-is-managed set=test-clone-meta_attributes name=is-managed value=false =#=#=#= Current cib after: Create a resource meta attribute =#=#=#= =#=#=#= End test: Create a resource meta attribute - OK (0) =#=#=#= * Passed: crm_resource - Create a resource meta attribute =#=#=#= Begin test: Create a resource meta attribute in the primitive =#=#=#= Set 'test-primitive' option: id=test-primitive-meta_attributes-is-managed set=test-primitive-meta_attributes name=is-managed value=false =#=#=#= Current cib after: Create a resource meta attribute in the primitive =#=#=#= =#=#=#= End test: Create a resource meta attribute in the 
primitive - OK (0) =#=#=#= * Passed: crm_resource - Create a resource meta attribute in the primitive =#=#=#= Begin test: Update resource meta attribute with duplicates =#=#=#= Multiple attributes match name=is-managed Value: false (id=test-primitive-meta_attributes-is-managed) Value: false (id=test-clone-meta_attributes-is-managed) A value for 'is-managed' already exists in child 'test-primitive', performing update on that instead of 'test-clone' Set 'test-primitive' option: id=test-primitive-meta_attributes-is-managed name=is-managed value=true =#=#=#= Current cib after: Update resource meta attribute with duplicates =#=#=#= =#=#=#= End test: Update resource meta attribute with duplicates - OK (0) =#=#=#= * Passed: crm_resource - Update resource meta attribute with duplicates =#=#=#= Begin test: Update resource meta attribute with duplicates (force clone) =#=#=#= Set 'test-clone' option: id=test-clone-meta_attributes-is-managed name=is-managed value=true =#=#=#= Current cib after: Update resource meta attribute with duplicates (force clone) =#=#=#= =#=#=#= End test: Update resource meta attribute with duplicates (force clone) - OK (0) =#=#=#= * Passed: crm_resource - Update resource meta attribute with duplicates (force clone) =#=#=#= Begin test: Update child resource meta attribute with duplicates =#=#=#= Multiple attributes match name=is-managed Value: true (id=test-primitive-meta_attributes-is-managed) Value: true (id=test-clone-meta_attributes-is-managed) Set 'test-primitive' option: id=test-primitive-meta_attributes-is-managed name=is-managed value=false =#=#=#= Current cib after: Update child resource meta attribute with duplicates =#=#=#= =#=#=#= End test: Update child resource meta attribute with duplicates - OK (0) =#=#=#= * Passed: crm_resource - Update child resource meta attribute with duplicates =#=#=#= Begin test: Delete resource meta attribute with duplicates =#=#=#= Multiple attributes match name=is-managed Value: false (id=test-primitive-meta_attributes-is-managed) Value: true (id=test-clone-meta_attributes-is-managed) A value for 'is-managed' already exists in child 'test-primitive', performing delete on that instead of 'test-clone' Deleted 'test-primitive' option: id=test-primitive-meta_attributes-is-managed name=is-managed =#=#=#= Current cib after: Delete resource meta attribute with duplicates =#=#=#= =#=#=#= End test: Delete resource meta attribute with duplicates - OK (0) =#=#=#= * Passed: crm_resource - Delete resource meta attribute with duplicates =#=#=#= Begin test: Delete resource meta attribute in parent =#=#=#= Performing delete of 'is-managed' on 'test-clone', the parent of 'test-primitive' Deleted 'test-clone' option: id=test-clone-meta_attributes-is-managed name=is-managed =#=#=#= Current cib after: Delete resource meta attribute in parent =#=#=#= =#=#=#= End test: Delete resource meta attribute in parent - OK (0) =#=#=#= * Passed: crm_resource - Delete resource meta attribute in parent =#=#=#= Begin test: Create a resource meta attribute in the primitive =#=#=#= Set 'test-primitive' option: id=test-primitive-meta_attributes-is-managed set=test-primitive-meta_attributes name=is-managed value=false =#=#=#= Current cib after: Create a resource meta attribute in the primitive =#=#=#= =#=#=#= End test: Create a resource meta attribute in the primitive - OK (0) =#=#=#= * Passed: crm_resource - Create a resource meta attribute in the primitive =#=#=#= Begin test: Update existing resource meta attribute =#=#=#= A value for 'is-managed' already exists in child 
'test-primitive', performing update on that instead of 'test-clone' Set 'test-primitive' option: id=test-primitive-meta_attributes-is-managed name=is-managed value=true =#=#=#= Current cib after: Update existing resource meta attribute =#=#=#= =#=#=#= End test: Update existing resource meta attribute - OK (0) =#=#=#= * Passed: crm_resource - Update existing resource meta attribute =#=#=#= Begin test: Create a resource meta attribute in the parent =#=#=#= Set 'test-clone' option: id=test-clone-meta_attributes-is-managed set=test-clone-meta_attributes name=is-managed value=true =#=#=#= Current cib after: Create a resource meta attribute in the parent =#=#=#= =#=#=#= End test: Create a resource meta attribute in the parent - OK (0) =#=#=#= * Passed: crm_resource - Create a resource meta attribute in the parent =#=#=#= Begin test: Copy resources =#=#=#= =#=#=#= End test: Copy resources - OK (0) =#=#=#= * Passed: cibadmin - Copy resources =#=#=#= Begin test: Delete resource parent meta attribute (force) =#=#=#= Deleted 'test-clone' option: id=test-clone-meta_attributes-is-managed name=is-managed =#=#=#= Current cib after: Delete resource parent meta attribute (force) =#=#=#= =#=#=#= End test: Delete resource parent meta attribute (force) - OK (0) =#=#=#= * Passed: crm_resource - Delete resource parent meta attribute (force) =#=#=#= Begin test: Restore duplicates =#=#=#= =#=#=#= Current cib after: Restore duplicates =#=#=#= =#=#=#= End test: Restore duplicates - OK (0) =#=#=#= * Passed: cibadmin - Restore duplicates =#=#=#= Begin test: Delete resource child meta attribute =#=#=#= Multiple attributes match name=is-managed Value: true (id=test-primitive-meta_attributes-is-managed) Value: true (id=test-clone-meta_attributes-is-managed) Deleted 'test-primitive' option: id=test-primitive-meta_attributes-is-managed name=is-managed =#=#=#= Current cib after: Delete resource child meta attribute =#=#=#= =#=#=#= End test: Delete resource child meta attribute - OK (0) =#=#=#= * Passed: crm_resource - Delete resource child meta attribute =#=#=#= Begin test: Create the dummy-group resource group =#=#=#= =#=#=#= Current cib after: Create the dummy-group resource group =#=#=#= =#=#=#= End test: Create the dummy-group resource group - OK (0) =#=#=#= * Passed: cibadmin - Create the dummy-group resource group =#=#=#= Begin test: Create a resource meta attribute in dummy1 =#=#=#= Set 'dummy1' option: id=dummy1-meta_attributes-is-managed set=dummy1-meta_attributes name=is-managed value=true =#=#=#= Current cib after: Create a resource meta attribute in dummy1 =#=#=#= =#=#=#= End test: Create a resource meta attribute in dummy1 - OK (0) =#=#=#= * Passed: crm_resource - Create a resource meta attribute in dummy1 =#=#=#= Begin test: Create a resource meta attribute in dummy-group =#=#=#= Set 'dummy1' option: id=dummy1-meta_attributes-is-managed name=is-managed value=false Set 'dummy-group' option: id=dummy-group-meta_attributes-is-managed set=dummy-group-meta_attributes name=is-managed value=false =#=#=#= Current cib after: Create a resource meta attribute in dummy-group =#=#=#= =#=#=#= End test: Create a resource meta attribute in dummy-group - OK (0) =#=#=#= * Passed: crm_resource - Create a resource meta attribute in dummy-group =#=#=#= Begin test: Delete the dummy-group resource group =#=#=#= =#=#=#= Current cib after: Delete the dummy-group resource group =#=#=#= =#=#=#= End test: Delete the dummy-group resource group - OK (0) =#=#=#= * Passed: cibadmin - Delete the dummy-group resource group =#=#=#= Begin test: 
Specify a lifetime when moving a resource =#=#=#= Migration will take effect until: =#=#=#= Current cib after: Specify a lifetime when moving a resource =#=#=#= =#=#=#= End test: Specify a lifetime when moving a resource - OK (0) =#=#=#= * Passed: crm_resource - Specify a lifetime when moving a resource =#=#=#= Begin test: Try to move a resource previously moved with a lifetime =#=#=#= =#=#=#= Current cib after: Try to move a resource previously moved with a lifetime =#=#=#= =#=#=#= End test: Try to move a resource previously moved with a lifetime - OK (0) =#=#=#= * Passed: crm_resource - Try to move a resource previously moved with a lifetime =#=#=#= Begin test: Ban dummy from node1 for a short time =#=#=#= Migration will take effect until: WARNING: Creating rsc_location constraint 'cli-ban-dummy-on-node1' with a score of -INFINITY for resource dummy on node1. This will prevent dummy from running on node1 until the constraint is removed using the clear option or by editing the CIB with an appropriate tool. This will be the case even if node1 is the last node in the cluster =#=#=#= Current cib after: Ban dummy from node1 for a short time =#=#=#= =#=#=#= End test: Ban dummy from node1 for a short time - OK (0) =#=#=#= * Passed: crm_resource - Ban dummy from node1 for a short time =#=#=#= Begin test: Remove expired constraints =#=#=#= Removing constraint: cli-ban-dummy-on-node1 =#=#=#= Current cib after: Remove expired constraints =#=#=#= =#=#=#= End test: Remove expired constraints - OK (0) =#=#=#= * Passed: crm_resource - Remove expired constraints =#=#=#= Begin test: Clear all implicit constraints for dummy =#=#=#= Removing constraint: cli-prefer-dummy =#=#=#= Current cib after: Clear all implicit constraints for dummy =#=#=#= =#=#=#= End test: Clear all implicit constraints for dummy - OK (0) =#=#=#= * Passed: crm_resource - Clear all implicit constraints for dummy =#=#=#= Begin test: Set a node health strategy =#=#=#= =#=#=#= Current cib after: Set a node health strategy =#=#=#= =#=#=#= End test: Set a node health strategy - OK (0) =#=#=#= * Passed: crm_attribute - Set a node health strategy =#=#=#= Begin test: Set a node health attribute =#=#=#= =#=#=#= Current cib after: Set a node health attribute =#=#=#= =#=#=#= End test: Set a node health attribute - OK (0) =#=#=#= * Passed: crm_attribute - Set a node health attribute =#=#=#= Begin test: Show why a resource is not running on an unhealthy node =#=#=#= =#=#=#= End test: Show why a resource is not running on an unhealthy node - OK (0) =#=#=#= * Passed: crm_resource - Show why a resource is not running on an unhealthy node =#=#=#= Begin test: Delete a resource =#=#=#= =#=#=#= Current cib after: Delete a resource =#=#=#= =#=#=#= End test: Delete a resource - OK (0) =#=#=#= * Passed: crm_resource - Delete a resource =#=#=#= Begin test: Create an XML patchset =#=#=#= =#=#=#= End test: Create an XML patchset - Error occurred (1) =#=#=#= * Passed: crm_diff - Create an XML patchset =#=#=#= Begin test: Check locations and constraints for prim1 =#=#=#= =#=#=#= End test: Check locations and constraints for prim1 - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for prim1 =#=#=#= Begin test: Recursively check locations and constraints for prim1 =#=#=#= =#=#=#= End test: Recursively check locations and constraints for prim1 - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints for prim1 =#=#=#= Begin test: Check locations and constraints for prim1 in XML =#=#=#= =#=#=#= End test: Check 
locations and constraints for prim1 in XML - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for prim1 in XML =#=#=#= Begin test: Recursively check locations and constraints for prim1 in XML =#=#=#= =#=#=#= End test: Recursively check locations and constraints for prim1 in XML - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints for prim1 in XML =#=#=#= Begin test: Check locations and constraints for prim2 =#=#=#= Locations: * Node cluster01 (score=INFINITY, id=prim2-on-cluster1, rsc=prim2) Resources prim2 is colocated with: * prim3 (score=INFINITY, id=colocation-prim2-prim3-INFINITY) =#=#=#= End test: Check locations and constraints for prim2 - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for prim2 =#=#=#= Begin test: Recursively check locations and constraints for prim2 =#=#=#= Locations: * Node cluster01 (score=INFINITY, id=prim2-on-cluster1, rsc=prim2) Resources prim2 is colocated with: * prim3 (score=INFINITY, id=colocation-prim2-prim3-INFINITY) * Resources prim3 is colocated with: * prim4 (score=INFINITY, id=colocation-prim3-prim4-INFINITY) * Locations: * Node cluster02 (score=INFINITY, id=prim4-on-cluster2, rsc=prim4) * Resources prim4 is colocated with: * prim5 (score=INFINITY, id=colocation-prim4-prim5-INFINITY) =#=#=#= End test: Recursively check locations and constraints for prim2 - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints for prim2 =#=#=#= Begin test: Check locations and constraints for prim2 in XML =#=#=#= =#=#=#= End test: Check locations and constraints for prim2 in XML - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for prim2 in XML =#=#=#= Begin test: Recursively check locations and constraints for prim2 in XML =#=#=#= =#=#=#= End test: Recursively check locations and constraints for prim2 in XML - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints for prim2 in XML =#=#=#= Begin test: Check locations and constraints for prim3 =#=#=#= Resources colocated with prim3: * prim2 (score=INFINITY, id=colocation-prim2-prim3-INFINITY) * Locations: * Node cluster01 (score=INFINITY, id=prim2-on-cluster1, rsc=prim2) Resources prim3 is colocated with: * prim4 (score=INFINITY, id=colocation-prim3-prim4-INFINITY) * Locations: * Node cluster02 (score=INFINITY, id=prim4-on-cluster2, rsc=prim4) =#=#=#= End test: Check locations and constraints for prim3 - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for prim3 =#=#=#= Begin test: Recursively check locations and constraints for prim3 =#=#=#= Resources colocated with prim3: * prim2 (score=INFINITY, id=colocation-prim2-prim3-INFINITY) * Locations: * Node cluster01 (score=INFINITY, id=prim2-on-cluster1, rsc=prim2) Resources prim3 is colocated with: * prim4 (score=INFINITY, id=colocation-prim3-prim4-INFINITY) * Locations: * Node cluster02 (score=INFINITY, id=prim4-on-cluster2, rsc=prim4) * Resources prim4 is colocated with: * prim5 (score=INFINITY, id=colocation-prim4-prim5-INFINITY) =#=#=#= End test: Recursively check locations and constraints for prim3 - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints for prim3 =#=#=#= Begin test: Check locations and constraints for prim3 in XML =#=#=#= =#=#=#= End test: Check locations and constraints for prim3 in XML - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for prim3 in XML =#=#=#= Begin test: Recursively check locations and 
constraints for prim3 in XML =#=#=#= =#=#=#= End test: Recursively check locations and constraints for prim3 in XML - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints for prim3 in XML =#=#=#= Begin test: Check locations and constraints for prim4 =#=#=#= Locations: * Node cluster02 (score=INFINITY, id=prim4-on-cluster2, rsc=prim4) Resources colocated with prim4: * prim10 (score=INFINITY, id=colocation-prim10-prim4-INFINITY) * prim3 (score=INFINITY, id=colocation-prim3-prim4-INFINITY) Resources prim4 is colocated with: * prim5 (score=INFINITY, id=colocation-prim4-prim5-INFINITY) =#=#=#= End test: Check locations and constraints for prim4 - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for prim4 =#=#=#= Begin test: Recursively check locations and constraints for prim4 =#=#=#= Locations: * Node cluster02 (score=INFINITY, id=prim4-on-cluster2, rsc=prim4) Resources colocated with prim4: * prim10 (score=INFINITY, id=colocation-prim10-prim4-INFINITY) * prim3 (score=INFINITY, id=colocation-prim3-prim4-INFINITY) * Resources colocated with prim3: * prim2 (score=INFINITY, id=colocation-prim2-prim3-INFINITY) * Locations: * Node cluster01 (score=INFINITY, id=prim2-on-cluster1, rsc=prim2) Resources prim4 is colocated with: * prim5 (score=INFINITY, id=colocation-prim4-prim5-INFINITY) =#=#=#= End test: Recursively check locations and constraints for prim4 - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints for prim4 =#=#=#= Begin test: Check locations and constraints for prim4 in XML =#=#=#= =#=#=#= End test: Check locations and constraints for prim4 in XML - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for prim4 in XML =#=#=#= Begin test: Recursively check locations and constraints for prim4 in XML =#=#=#= =#=#=#= End test: Recursively check locations and constraints for prim4 in XML - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints for prim4 in XML =#=#=#= Begin test: Check locations and constraints for prim5 =#=#=#= Resources colocated with prim5: * prim4 (score=INFINITY, id=colocation-prim4-prim5-INFINITY) * Locations: * Node cluster02 (score=INFINITY, id=prim4-on-cluster2, rsc=prim4) =#=#=#= End test: Check locations and constraints for prim5 - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for prim5 =#=#=#= Begin test: Recursively check locations and constraints for prim5 =#=#=#= Resources colocated with prim5: * prim4 (score=INFINITY, id=colocation-prim4-prim5-INFINITY) * Locations: * Node cluster02 (score=INFINITY, id=prim4-on-cluster2, rsc=prim4) * Resources colocated with prim4: * prim10 (score=INFINITY, id=colocation-prim10-prim4-INFINITY) * prim3 (score=INFINITY, id=colocation-prim3-prim4-INFINITY) * Resources colocated with prim3: * prim2 (score=INFINITY, id=colocation-prim2-prim3-INFINITY) * Locations: * Node cluster01 (score=INFINITY, id=prim2-on-cluster1, rsc=prim2) =#=#=#= End test: Recursively check locations and constraints for prim5 - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints for prim5 =#=#=#= Begin test: Check locations and constraints for prim5 in XML =#=#=#= =#=#=#= End test: Check locations and constraints for prim5 in XML - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for prim5 in XML =#=#=#= Begin test: Recursively check locations and constraints for prim5 in XML =#=#=#= =#=#=#= End test: Recursively check locations and constraints 
for prim5 in XML - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints for prim5 in XML =#=#=#= Begin test: Check locations and constraints for prim6 =#=#=#= Locations: * Node cluster02 (score=-INFINITY, id=prim6-not-on-cluster2, rsc=prim6) =#=#=#= End test: Check locations and constraints for prim6 - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for prim6 =#=#=#= Begin test: Recursively check locations and constraints for prim6 =#=#=#= Locations: * Node cluster02 (score=-INFINITY, id=prim6-not-on-cluster2, rsc=prim6) =#=#=#= End test: Recursively check locations and constraints for prim6 - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints for prim6 =#=#=#= Begin test: Check locations and constraints for prim6 in XML =#=#=#= =#=#=#= End test: Check locations and constraints for prim6 in XML - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for prim6 in XML =#=#=#= Begin test: Recursively check locations and constraints for prim6 in XML =#=#=#= =#=#=#= End test: Recursively check locations and constraints for prim6 in XML - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints for prim6 in XML =#=#=#= Begin test: Check locations and constraints for prim7 =#=#=#= Resources prim7 is colocated with: * group (score=INFINITY, id=colocation-prim7-group-INFINITY) =#=#=#= End test: Check locations and constraints for prim7 - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for prim7 =#=#=#= Begin test: Recursively check locations and constraints for prim7 =#=#=#= Resources prim7 is colocated with: * group (score=INFINITY, id=colocation-prim7-group-INFINITY) =#=#=#= End test: Recursively check locations and constraints for prim7 - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints for prim7 =#=#=#= Begin test: Check locations and constraints for prim7 in XML =#=#=#= =#=#=#= End test: Check locations and constraints for prim7 in XML - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for prim7 in XML =#=#=#= Begin test: Recursively check locations and constraints for prim7 in XML =#=#=#= =#=#=#= End test: Recursively check locations and constraints for prim7 in XML - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints for prim7 in XML =#=#=#= Begin test: Check locations and constraints for prim8 =#=#=#= Resources prim8 is colocated with: * gr2 (score=INFINITY, id=colocation-prim8-gr2-INFINITY) =#=#=#= End test: Check locations and constraints for prim8 - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for prim8 =#=#=#= Begin test: Recursively check locations and constraints for prim8 =#=#=#= Resources prim8 is colocated with: * gr2 (score=INFINITY, id=colocation-prim8-gr2-INFINITY) =#=#=#= End test: Recursively check locations and constraints for prim8 - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints for prim8 =#=#=#= Begin test: Check locations and constraints for prim8 in XML =#=#=#= =#=#=#= End test: Check locations and constraints for prim8 in XML - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for prim8 in XML =#=#=#= Begin test: Recursively check locations and constraints for prim8 in XML =#=#=#= =#=#=#= End test: Recursively check locations and constraints for prim8 in XML - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints 
for prim8 in XML =#=#=#= Begin test: Check locations and constraints for prim9 =#=#=#= Resources prim9 is colocated with: * clone (score=INFINITY, id=colocation-prim9-clone-INFINITY) =#=#=#= End test: Check locations and constraints for prim9 - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for prim9 =#=#=#= Begin test: Recursively check locations and constraints for prim9 =#=#=#= Resources prim9 is colocated with: * clone (score=INFINITY, id=colocation-prim9-clone-INFINITY) =#=#=#= End test: Recursively check locations and constraints for prim9 - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints for prim9 =#=#=#= Begin test: Check locations and constraints for prim9 in XML =#=#=#= =#=#=#= End test: Check locations and constraints for prim9 in XML - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for prim9 in XML =#=#=#= Begin test: Recursively check locations and constraints for prim9 in XML =#=#=#= =#=#=#= End test: Recursively check locations and constraints for prim9 in XML - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints for prim9 in XML =#=#=#= Begin test: Check locations and constraints for prim10 =#=#=#= Resources prim10 is colocated with: * prim4 (score=INFINITY, id=colocation-prim10-prim4-INFINITY) * Locations: * Node cluster02 (score=INFINITY, id=prim4-on-cluster2, rsc=prim4) =#=#=#= End test: Check locations and constraints for prim10 - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for prim10 =#=#=#= Begin test: Recursively check locations and constraints for prim10 =#=#=#= Resources prim10 is colocated with: * prim4 (score=INFINITY, id=colocation-prim10-prim4-INFINITY) * Locations: * Node cluster02 (score=INFINITY, id=prim4-on-cluster2, rsc=prim4) * Resources prim4 is colocated with: * prim5 (score=INFINITY, id=colocation-prim4-prim5-INFINITY) =#=#=#= End test: Recursively check locations and constraints for prim10 - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints for prim10 =#=#=#= Begin test: Check locations and constraints for prim10 in XML =#=#=#= =#=#=#= End test: Check locations and constraints for prim10 in XML - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for prim10 in XML =#=#=#= Begin test: Recursively check locations and constraints for prim10 in XML =#=#=#= =#=#=#= End test: Recursively check locations and constraints for prim10 in XML - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints for prim10 in XML =#=#=#= Begin test: Check locations and constraints for prim11 =#=#=#= Resources colocated with prim11: * prim13 (score=INFINITY, id=colocation-prim13-prim11-INFINITY) Resources prim11 is colocated with: * prim12 (score=INFINITY, id=colocation-prim11-prim12-INFINITY) =#=#=#= End test: Check locations and constraints for prim11 - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for prim11 =#=#=#= Begin test: Recursively check locations and constraints for prim11 =#=#=#= Resources colocated with prim11: * prim13 (score=INFINITY, id=colocation-prim13-prim11-INFINITY) * Resources colocated with prim13: * prim12 (score=INFINITY, id=colocation-prim12-prim13-INFINITY) * Resources colocated with prim12: * prim11 (id=colocation-prim11-prim12-INFINITY - loop) Resources prim11 is colocated with: * prim12 (score=INFINITY, id=colocation-prim11-prim12-INFINITY) * Resources prim12 is colocated with: * prim13 
(score=INFINITY, id=colocation-prim12-prim13-INFINITY) * Resources prim13 is colocated with: * prim11 (id=colocation-prim13-prim11-INFINITY - loop) =#=#=#= End test: Recursively check locations and constraints for prim11 - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints for prim11 =#=#=#= Begin test: Check locations and constraints for prim11 in XML =#=#=#= =#=#=#= End test: Check locations and constraints for prim11 in XML - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for prim11 in XML =#=#=#= Begin test: Recursively check locations and constraints for prim11 in XML =#=#=#= =#=#=#= End test: Recursively check locations and constraints for prim11 in XML - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints for prim11 in XML =#=#=#= Begin test: Check locations and constraints for prim12 =#=#=#= Resources colocated with prim12: * prim11 (score=INFINITY, id=colocation-prim11-prim12-INFINITY) Resources prim12 is colocated with: * prim13 (score=INFINITY, id=colocation-prim12-prim13-INFINITY) =#=#=#= End test: Check locations and constraints for prim12 - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for prim12 =#=#=#= Begin test: Recursively check locations and constraints for prim12 =#=#=#= Resources colocated with prim12: * prim11 (score=INFINITY, id=colocation-prim11-prim12-INFINITY) * Resources colocated with prim11: * prim13 (score=INFINITY, id=colocation-prim13-prim11-INFINITY) * Resources colocated with prim13: * prim12 (id=colocation-prim12-prim13-INFINITY - loop) Resources prim12 is colocated with: * prim13 (score=INFINITY, id=colocation-prim12-prim13-INFINITY) * Resources prim13 is colocated with: * prim11 (score=INFINITY, id=colocation-prim13-prim11-INFINITY) * Resources prim11 is colocated with: * prim12 (id=colocation-prim11-prim12-INFINITY - loop) =#=#=#= End test: Recursively check locations and constraints for prim12 - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints for prim12 =#=#=#= Begin test: Check locations and constraints for prim12 in XML =#=#=#= =#=#=#= End test: Check locations and constraints for prim12 in XML - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for prim12 in XML =#=#=#= Begin test: Recursively check locations and constraints for prim12 in XML =#=#=#= =#=#=#= End test: Recursively check locations and constraints for prim12 in XML - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints for prim12 in XML =#=#=#= Begin test: Check locations and constraints for prim13 =#=#=#= Resources colocated with prim13: * prim12 (score=INFINITY, id=colocation-prim12-prim13-INFINITY) Resources prim13 is colocated with: * prim11 (score=INFINITY, id=colocation-prim13-prim11-INFINITY) =#=#=#= End test: Check locations and constraints for prim13 - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for prim13 =#=#=#= Begin test: Recursively check locations and constraints for prim13 =#=#=#= Resources colocated with prim13: * prim12 (score=INFINITY, id=colocation-prim12-prim13-INFINITY) * Resources colocated with prim12: * prim11 (score=INFINITY, id=colocation-prim11-prim12-INFINITY) * Resources colocated with prim11: * prim13 (id=colocation-prim13-prim11-INFINITY - loop) Resources prim13 is colocated with: * prim11 (score=INFINITY, id=colocation-prim13-prim11-INFINITY) * Resources prim11 is colocated with: * prim12 (score=INFINITY, 
id=colocation-prim11-prim12-INFINITY) * Resources prim12 is colocated with: * prim13 (id=colocation-prim12-prim13-INFINITY - loop) =#=#=#= End test: Recursively check locations and constraints for prim13 - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints for prim13 =#=#=#= Begin test: Check locations and constraints for prim13 in XML =#=#=#= =#=#=#= End test: Check locations and constraints for prim13 in XML - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for prim13 in XML =#=#=#= Begin test: Recursively check locations and constraints for prim13 in XML =#=#=#= =#=#=#= End test: Recursively check locations and constraints for prim13 in XML - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints for prim13 in XML =#=#=#= Begin test: Check locations and constraints for group =#=#=#= Resources colocated with group: * prim7 (score=INFINITY, id=colocation-prim7-group-INFINITY) =#=#=#= End test: Check locations and constraints for group - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for group =#=#=#= Begin test: Recursively check locations and constraints for group =#=#=#= Resources colocated with group: * prim7 (score=INFINITY, id=colocation-prim7-group-INFINITY) =#=#=#= End test: Recursively check locations and constraints for group - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints for group =#=#=#= Begin test: Check locations and constraints for group in XML =#=#=#= =#=#=#= End test: Check locations and constraints for group in XML - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for group in XML =#=#=#= Begin test: Recursively check locations and constraints for group in XML =#=#=#= =#=#=#= End test: Recursively check locations and constraints for group in XML - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints for group in XML =#=#=#= Begin test: Check locations and constraints for clone =#=#=#= Resources colocated with clone: * prim9 (score=INFINITY, id=colocation-prim9-clone-INFINITY) =#=#=#= End test: Check locations and constraints for clone - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for clone =#=#=#= Begin test: Recursively check locations and constraints for clone =#=#=#= Resources colocated with clone: * prim9 (score=INFINITY, id=colocation-prim9-clone-INFINITY) =#=#=#= End test: Recursively check locations and constraints for clone - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints for clone =#=#=#= Begin test: Check locations and constraints for clone in XML =#=#=#= =#=#=#= End test: Check locations and constraints for clone in XML - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for clone in XML =#=#=#= Begin test: Recursively check locations and constraints for clone in XML =#=#=#= =#=#=#= End test: Recursively check locations and constraints for clone in XML - OK (0) =#=#=#= * Passed: crm_resource - Recursively check locations and constraints for clone in XML =#=#=#= Begin test: Check locations and constraints for group member (referring to group) =#=#=#= Resources colocated with group: * prim7 (score=INFINITY, id=colocation-prim7-group-INFINITY) =#=#=#= End test: Check locations and constraints for group member (referring to group) - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for group member (referring to group) =#=#=#= Begin test: Check locations 
and constraints for group member (without referring to group) =#=#=#= Resources colocated with gr2: * prim8 (score=INFINITY, id=colocation-prim8-gr2-INFINITY) =#=#=#= End test: Check locations and constraints for group member (without referring to group) - OK (0) =#=#=#= * Passed: crm_resource - Check locations and constraints for group member (without referring to group) A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= Begin test: Set a meta-attribute for primitive and resources colocated with it =#=#=#= =#=#=#= End test: Set a meta-attribute for primitive and resources colocated with it - OK (0) =#=#=#= * Passed: crm_resource - Set a meta-attribute for primitive and resources colocated with it =#=#=#= Begin test: Set a meta-attribute for group and resource colocated with it =#=#=#= Set 'group' option: id=group-meta_attributes-target-role set=group-meta_attributes name=target-role value=Stopped Set 'prim7' option: id=prim7-meta_attributes-target-role set=prim7-meta_attributes name=target-role value=Stopped =#=#=#= End test: Set a meta-attribute for group and resource colocated with it - OK (0) =#=#=#= * Passed: crm_resource - Set a meta-attribute for group and resource colocated with it =#=#=#= Begin test: Set a meta-attribute for clone and resource colocated with it =#=#=#= =#=#=#= End test: Set a meta-attribute for clone and resource colocated with it - OK (0) =#=#=#= * Passed: crm_resource - Set a meta-attribute for clone and resource colocated with it =#=#=#= Begin test: Show resource digests =#=#=#= =#=#=#= End test: Show resource digests - OK (0) =#=#=#= * Passed: crm_resource - Show resource digests =#=#=#= Begin test: Show resource digests with overrides =#=#=#= =#=#=#= End test: Show resource digests with overrides - OK (0) =#=#=#= * Passed: crm_resource - Show resource digests with overrides =#=#=#= Begin test: Show resource operations =#=#=#= rsc1 (ocf:pacemaker:Dummy): Started: rsc1_monitor_0 (node=node4, call=136, rc=7, exec=28ms): complete Fencing (stonith:fence_xvm): Started: Fencing_monitor_0 (node=node4, call=5, rc=7, exec=2ms): complete rsc1 (ocf:pacemaker:Dummy): Started: rsc1_monitor_0 (node=node2, call=101, rc=7, exec=45ms): complete Fencing (stonith:fence_xvm): Started: Fencing_monitor_0 (node=node2, call=5, rc=7, exec=4ms): complete Fencing (stonith:fence_xvm): Started: Fencing_monitor_0 (node=node3, call=5, rc=7, exec=24ms): complete rsc1 (ocf:pacemaker:Dummy): Started: rsc1_monitor_0 (node=node5, call=99, rc=193, exec=27ms): pending Fencing (stonith:fence_xvm): Started: Fencing_monitor_0 (node=node5, call=5, rc=7, exec=14ms): complete rsc1 (ocf:pacemaker:Dummy): Started: rsc1_start_0 (node=node1, call=104, rc=0, exec=22ms): complete rsc1 (ocf:pacemaker:Dummy): Started: rsc1_monitor_10000 (node=node1, call=106, rc=0, exec=20ms): complete Fencing (stonith:fence_xvm): Started: Fencing_start_0 (node=node1, call=10, rc=0, exec=59ms): complete Fencing (stonith:fence_xvm): Started: Fencing_monitor_120000 (node=node1, call=12, rc=0, exec=70ms): complete =#=#=#= End test: Show resource operations - OK (0) =#=#=#= * Passed: crm_resource - Show resource operations =#=#=#= Begin test: Show resource operations (XML) =#=#=#= =#=#=#= End test: Show resource operations (XML) - OK (0) =#=#=#= * Passed: crm_resource - Show resource operations (XML) =#=#=#= Begin test: List all nodes =#=#=#= cluster node: overcloud-controller-0 (1) cluster node: overcloud-controller-1 (2) cluster node: overcloud-controller-2 
(3) cluster node: overcloud-galera-0 (4) cluster node: overcloud-galera-1 (5) cluster node: overcloud-galera-2 (6) guest node: lxc1 (lxc1) guest node: lxc2 (lxc2) remote node: overcloud-rabbit-0 (overcloud-rabbit-0) remote node: overcloud-rabbit-1 (overcloud-rabbit-1) remote node: overcloud-rabbit-2 (overcloud-rabbit-2) =#=#=#= End test: List all nodes - OK (0) =#=#=#= * Passed: crmadmin - List all nodes =#=#=#= Begin test: Minimally list all nodes =#=#=#= overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 overcloud-galera-0 overcloud-galera-1 overcloud-galera-2 lxc1 lxc2 overcloud-rabbit-0 overcloud-rabbit-1 overcloud-rabbit-2 =#=#=#= End test: Minimally list all nodes - OK (0) =#=#=#= * Passed: crmadmin - Minimally list all nodes =#=#=#= Begin test: List all nodes as bash exports =#=#=#= export overcloud-controller-0=1 export overcloud-controller-1=2 export overcloud-controller-2=3 export overcloud-galera-0=4 export overcloud-galera-1=5 export overcloud-galera-2=6 export lxc1=lxc1 export lxc2=lxc2 export overcloud-rabbit-0=overcloud-rabbit-0 export overcloud-rabbit-1=overcloud-rabbit-1 export overcloud-rabbit-2=overcloud-rabbit-2 =#=#=#= End test: List all nodes as bash exports - OK (0) =#=#=#= * Passed: crmadmin - List all nodes as bash exports =#=#=#= Begin test: List cluster nodes =#=#=#= 6 =#=#=#= End test: List cluster nodes - OK (0) =#=#=#= * Passed: crmadmin - List cluster nodes =#=#=#= Begin test: List guest nodes =#=#=#= 2 =#=#=#= End test: List guest nodes - OK (0) =#=#=#= * Passed: crmadmin - List guest nodes =#=#=#= Begin test: List remote nodes =#=#=#= 3 =#=#=#= End test: List remote nodes - OK (0) =#=#=#= * Passed: crmadmin - List remote nodes =#=#=#= Begin test: List cluster,remote nodes =#=#=#= 9 =#=#=#= End test: List cluster,remote nodes - OK (0) =#=#=#= * Passed: crmadmin - List cluster,remote nodes =#=#=#= Begin test: List guest,remote nodes =#=#=#= 5 =#=#=#= End test: List guest,remote nodes - OK (0) =#=#=#= * Passed: crmadmin - List guest,remote nodes =#=#=#= Begin test: Show allocation scores with crm_simulate =#=#=#= =#=#=#= End test: Show allocation scores with crm_simulate - OK (0) =#=#=#= * Passed: crm_simulate - Show allocation scores with crm_simulate =#=#=#= Begin test: Show utilization with crm_simulate =#=#=#= 4 of 32 resource instances DISABLED and 0 BLOCKED from further action due to failure [ cluster01 cluster02 ] [ httpd-bundle-0 httpd-bundle-1 ] Started: [ cluster01 cluster02 ] Fencing (stonith:fence_xvm): Started cluster01 dummy (ocf:pacemaker:Dummy): Started cluster02 Stopped (disabled): [ cluster01 cluster02 ] inactive-dummy-1 (ocf:pacemaker:Dummy): Stopped (disabled) inactive-dummy-2 (ocf:pacemaker:Dummy): Stopped (disabled) httpd-bundle-0 (192.168.122.131) (ocf:heartbeat:apache): Started cluster01 httpd-bundle-1 (192.168.122.132) (ocf:heartbeat:apache): Started cluster02 httpd-bundle-2 (192.168.122.133) (ocf:heartbeat:apache): Stopped Public-IP (ocf:heartbeat:IPaddr): Started cluster02 Email (lsb:exim): Started cluster02 Started: [ cluster01 cluster02 ] Promoted: [ cluster02 ] Unpromoted: [ cluster01 ] -Only 'private' parameters to 60s-interval monitor for dummy on cluster02 changed: 0:0;16:2:0:4a9e64d6-e1dd-4395-917c-1596312eafe4 +Only 'private' parameters to 1m-interval monitor for dummy on cluster02 changed: 0:0;16:2:0:4a9e64d6-e1dd-4395-917c-1596312eafe4 Original: cluster01 capacity: Original: cluster02 capacity: Original: httpd-bundle-0 capacity: Original: httpd-bundle-1 capacity: Original: httpd-bundle-2 capacity: 
pcmk__assign_resource: ping:0 utilization on cluster02: pcmk__assign_resource: ping:1 utilization on cluster01: pcmk__assign_resource: Fencing utilization on cluster01: pcmk__assign_resource: dummy utilization on cluster02: pcmk__assign_resource: httpd-bundle-docker-0 utilization on cluster01: pcmk__assign_resource: httpd-bundle-docker-1 utilization on cluster02: pcmk__assign_resource: httpd-bundle-ip-192.168.122.131 utilization on cluster01: pcmk__assign_resource: httpd-bundle-0 utilization on cluster01: pcmk__assign_resource: httpd:0 utilization on httpd-bundle-0: pcmk__assign_resource: httpd-bundle-ip-192.168.122.132 utilization on cluster02: pcmk__assign_resource: httpd-bundle-1 utilization on cluster02: pcmk__assign_resource: httpd:1 utilization on httpd-bundle-1: pcmk__assign_resource: httpd-bundle-2 utilization on cluster01: pcmk__assign_resource: httpd:2 utilization on httpd-bundle-2: pcmk__assign_resource: Public-IP utilization on cluster02: pcmk__assign_resource: Email utilization on cluster02: pcmk__assign_resource: mysql-proxy:0 utilization on cluster02: pcmk__assign_resource: mysql-proxy:1 utilization on cluster01: pcmk__assign_resource: promotable-rsc:0 utilization on cluster02: pcmk__assign_resource: promotable-rsc:1 utilization on cluster01: Remaining: cluster01 capacity: Remaining: cluster02 capacity: Remaining: httpd-bundle-0 capacity: Remaining: httpd-bundle-1 capacity: Remaining: httpd-bundle-2 capacity: Start httpd-bundle-2 ( cluster01 ) due to unrunnable httpd-bundle-docker-2 start (blocked) Start httpd:2 ( httpd-bundle-2 ) due to unrunnable httpd-bundle-docker-2 start (blocked) =#=#=#= End test: Show utilization with crm_simulate - OK (0) =#=#=#= * Passed: crm_simulate - Show utilization with crm_simulate =#=#=#= Begin test: Simulate injecting a failure =#=#=#= 4 of 32 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: * Node List: * Online: [ cluster01 cluster02 ] * GuestOnline: [ httpd-bundle-0 httpd-bundle-1 ] * Full List of Resources: * Clone Set: ping-clone [ping]: * Started: [ cluster01 cluster02 ] * Fencing (stonith:fence_xvm): Started cluster01 * dummy (ocf:pacemaker:Dummy): Started cluster02 * Clone Set: inactive-clone [inactive-dhcpd] (disabled): * Stopped (disabled): [ cluster01 cluster02 ] * Resource Group: inactive-group (disabled): * inactive-dummy-1 (ocf:pacemaker:Dummy): Stopped (disabled) * inactive-dummy-2 (ocf:pacemaker:Dummy): Stopped (disabled) * Container bundle set: httpd-bundle [pcmk:http]: * httpd-bundle-0 (192.168.122.131) (ocf:heartbeat:apache): Started cluster01 * httpd-bundle-1 (192.168.122.132) (ocf:heartbeat:apache): Started cluster02 * httpd-bundle-2 (192.168.122.133) (ocf:heartbeat:apache): Stopped * Resource Group: exim-group: * Public-IP (ocf:heartbeat:IPaddr): Started cluster02 * Email (lsb:exim): Started cluster02 * Clone Set: mysql-clone-group [mysql-group]: * Started: [ cluster01 cluster02 ] * Clone Set: promotable-clone [promotable-rsc] (promotable): * Promoted: [ cluster02 ] * Unpromoted: [ cluster01 ] Performing Requested Modifications: * Injecting ping_monitor_10000@cluster02=1 into the configuration * Injecting attribute fail-count-ping#monitor_10000=1 into /node_state '2' * Injecting attribute last-failure-ping#monitor_10000= into /node_state '2' Transition Summary: * Recover ping:0 ( cluster02 ) * Start httpd-bundle-2 ( cluster01 ) due to unrunnable httpd-bundle-docker-2 start (blocked) * Start httpd:2 ( httpd-bundle-2 ) due to unrunnable httpd-bundle-docker-2 start 
(blocked) Executing Cluster Transition: * Cluster action: clear_failcount for ping on cluster02 * Pseudo action: ping-clone_stop_0 * Pseudo action: httpd-bundle_start_0 * Resource action: ping stop on cluster02 * Pseudo action: ping-clone_stopped_0 * Pseudo action: ping-clone_start_0 * Pseudo action: httpd-bundle-clone_start_0 * Resource action: ping start on cluster02 * Resource action: ping monitor=10000 on cluster02 * Pseudo action: ping-clone_running_0 * Pseudo action: httpd-bundle-clone_running_0 * Pseudo action: httpd-bundle_running_0 Revised Cluster Status: * Node List: * Online: [ cluster01 cluster02 ] * GuestOnline: [ httpd-bundle-0 httpd-bundle-1 ] * Full List of Resources: * Clone Set: ping-clone [ping]: * Started: [ cluster01 cluster02 ] * Fencing (stonith:fence_xvm): Started cluster01 * dummy (ocf:pacemaker:Dummy): Started cluster02 * Clone Set: inactive-clone [inactive-dhcpd] (disabled): * Stopped (disabled): [ cluster01 cluster02 ] * Resource Group: inactive-group (disabled): * inactive-dummy-1 (ocf:pacemaker:Dummy): Stopped (disabled) * inactive-dummy-2 (ocf:pacemaker:Dummy): Stopped (disabled) * Container bundle set: httpd-bundle [pcmk:http]: * httpd-bundle-0 (192.168.122.131) (ocf:heartbeat:apache): Started cluster01 * httpd-bundle-1 (192.168.122.132) (ocf:heartbeat:apache): Started cluster02 * httpd-bundle-2 (192.168.122.133) (ocf:heartbeat:apache): Stopped * Resource Group: exim-group: * Public-IP (ocf:heartbeat:IPaddr): Started cluster02 * Email (lsb:exim): Started cluster02 * Clone Set: mysql-clone-group [mysql-group]: * Started: [ cluster01 cluster02 ] * Clone Set: promotable-clone [promotable-rsc] (promotable): * Promoted: [ cluster02 ] * Unpromoted: [ cluster01 ] =#=#=#= End test: Simulate injecting a failure - OK (0) =#=#=#= * Passed: crm_simulate - Simulate injecting a failure =#=#=#= Begin test: Simulate bringing a node down =#=#=#= 4 of 32 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: * Node List: * Online: [ cluster01 cluster02 ] * GuestOnline: [ httpd-bundle-0 httpd-bundle-1 ] * Full List of Resources: * Clone Set: ping-clone [ping]: * Started: [ cluster01 cluster02 ] * Fencing (stonith:fence_xvm): Started cluster01 * dummy (ocf:pacemaker:Dummy): Started cluster02 * Clone Set: inactive-clone [inactive-dhcpd] (disabled): * Stopped (disabled): [ cluster01 cluster02 ] * Resource Group: inactive-group (disabled): * inactive-dummy-1 (ocf:pacemaker:Dummy): Stopped (disabled) * inactive-dummy-2 (ocf:pacemaker:Dummy): Stopped (disabled) * Container bundle set: httpd-bundle [pcmk:http]: * httpd-bundle-0 (192.168.122.131) (ocf:heartbeat:apache): Started cluster01 * httpd-bundle-1 (192.168.122.132) (ocf:heartbeat:apache): Started cluster02 * httpd-bundle-2 (192.168.122.133) (ocf:heartbeat:apache): Stopped * Resource Group: exim-group: * Public-IP (ocf:heartbeat:IPaddr): Started cluster02 * Email (lsb:exim): Started cluster02 * Clone Set: mysql-clone-group [mysql-group]: * Started: [ cluster01 cluster02 ] * Clone Set: promotable-clone [promotable-rsc] (promotable): * Promoted: [ cluster02 ] * Unpromoted: [ cluster01 ] Performing Requested Modifications: * Taking node cluster01 offline Transition Summary: * Fence (off) httpd-bundle-0 (resource: httpd-bundle-docker-0) 'guest is unclean' * Start Fencing ( cluster02 ) * Start httpd-bundle-0 ( cluster02 ) due to unrunnable httpd-bundle-docker-0 start (blocked) * Stop httpd:0 ( httpd-bundle-0 ) due to unrunnable httpd-bundle-docker-0 start * Start httpd-bundle-2 ( 
cluster02 ) due to unrunnable httpd-bundle-docker-2 start (blocked) * Start httpd:2 ( httpd-bundle-2 ) due to unrunnable httpd-bundle-docker-2 start (blocked) Executing Cluster Transition: * Resource action: Fencing start on cluster02 * Pseudo action: stonith-httpd-bundle-0-off on httpd-bundle-0 * Pseudo action: httpd-bundle_stop_0 * Pseudo action: httpd-bundle_start_0 * Resource action: Fencing monitor=60000 on cluster02 * Pseudo action: httpd-bundle-clone_stop_0 * Pseudo action: httpd_stop_0 * Pseudo action: httpd-bundle-clone_stopped_0 * Pseudo action: httpd-bundle-clone_start_0 * Pseudo action: httpd-bundle_stopped_0 * Pseudo action: httpd-bundle-clone_running_0 * Pseudo action: httpd-bundle_running_0 Revised Cluster Status: * Node List: * Online: [ cluster02 ] * OFFLINE: [ cluster01 ] * GuestOnline: [ httpd-bundle-1 ] * Full List of Resources: * Clone Set: ping-clone [ping]: * Started: [ cluster02 ] * Stopped: [ cluster01 ] * Fencing (stonith:fence_xvm): Started cluster02 * dummy (ocf:pacemaker:Dummy): Started cluster02 * Clone Set: inactive-clone [inactive-dhcpd] (disabled): * Stopped (disabled): [ cluster01 cluster02 ] * Resource Group: inactive-group (disabled): * inactive-dummy-1 (ocf:pacemaker:Dummy): Stopped (disabled) * inactive-dummy-2 (ocf:pacemaker:Dummy): Stopped (disabled) * Container bundle set: httpd-bundle [pcmk:http]: * httpd-bundle-0 (192.168.122.131) (ocf:heartbeat:apache): FAILED * httpd-bundle-1 (192.168.122.132) (ocf:heartbeat:apache): Started cluster02 * httpd-bundle-2 (192.168.122.133) (ocf:heartbeat:apache): Stopped * Resource Group: exim-group: * Public-IP (ocf:heartbeat:IPaddr): Started cluster02 * Email (lsb:exim): Started cluster02 * Clone Set: mysql-clone-group [mysql-group]: * Started: [ cluster02 ] * Stopped: [ cluster01 ] * Clone Set: promotable-clone [promotable-rsc] (promotable): * Promoted: [ cluster02 ] * Stopped: [ cluster01 ] =#=#=#= End test: Simulate bringing a node down - OK (0) =#=#=#= * Passed: crm_simulate - Simulate bringing a node down =#=#=#= Begin test: Simulate a node failing =#=#=#= 4 of 32 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: * Node List: * Online: [ cluster01 cluster02 ] * GuestOnline: [ httpd-bundle-0 httpd-bundle-1 ] * Full List of Resources: * Clone Set: ping-clone [ping]: * Started: [ cluster01 cluster02 ] * Fencing (stonith:fence_xvm): Started cluster01 * dummy (ocf:pacemaker:Dummy): Started cluster02 * Clone Set: inactive-clone [inactive-dhcpd] (disabled): * Stopped (disabled): [ cluster01 cluster02 ] * Resource Group: inactive-group (disabled): * inactive-dummy-1 (ocf:pacemaker:Dummy): Stopped (disabled) * inactive-dummy-2 (ocf:pacemaker:Dummy): Stopped (disabled) * Container bundle set: httpd-bundle [pcmk:http]: * httpd-bundle-0 (192.168.122.131) (ocf:heartbeat:apache): Started cluster01 * httpd-bundle-1 (192.168.122.132) (ocf:heartbeat:apache): Started cluster02 * httpd-bundle-2 (192.168.122.133) (ocf:heartbeat:apache): Stopped * Resource Group: exim-group: * Public-IP (ocf:heartbeat:IPaddr): Started cluster02 * Email (lsb:exim): Started cluster02 * Clone Set: mysql-clone-group [mysql-group]: * Started: [ cluster01 cluster02 ] * Clone Set: promotable-clone [promotable-rsc] (promotable): * Promoted: [ cluster02 ] * Unpromoted: [ cluster01 ] Performing Requested Modifications: * Failing node cluster02 Transition Summary: * Fence (off) httpd-bundle-1 (resource: httpd-bundle-docker-1) 'guest is unclean' * Fence (reboot) cluster02 'peer is no longer part of the 
cluster' * Stop ping:0 ( cluster02 ) due to node availability * Stop dummy ( cluster02 ) due to node availability * Stop httpd-bundle-ip-192.168.122.132 ( cluster02 ) due to node availability * Stop httpd-bundle-docker-1 ( cluster02 ) due to node availability * Stop httpd-bundle-1 ( cluster02 ) due to unrunnable httpd-bundle-docker-1 start * Stop httpd:1 ( httpd-bundle-1 ) due to unrunnable httpd-bundle-docker-1 start * Start httpd-bundle-2 ( cluster01 ) due to unrunnable httpd-bundle-docker-2 start (blocked) * Start httpd:2 ( httpd-bundle-2 ) due to unrunnable httpd-bundle-docker-2 start (blocked) * Move Public-IP ( cluster02 -> cluster01 ) * Move Email ( cluster02 -> cluster01 ) * Stop mysql-proxy:0 ( cluster02 ) due to node availability * Stop promotable-rsc:0 ( Promoted cluster02 ) due to node availability Executing Cluster Transition: * Pseudo action: httpd-bundle-1_stop_0 * Pseudo action: promotable-clone_demote_0 * Pseudo action: httpd-bundle_stop_0 * Pseudo action: httpd-bundle_start_0 * Fencing cluster02 (reboot) * Pseudo action: ping-clone_stop_0 * Pseudo action: dummy_stop_0 * Pseudo action: httpd-bundle-docker-1_stop_0 * Pseudo action: exim-group_stop_0 * Pseudo action: Email_stop_0 * Pseudo action: mysql-clone-group_stop_0 * Pseudo action: promotable-rsc_demote_0 * Pseudo action: promotable-clone_demoted_0 * Pseudo action: promotable-clone_stop_0 * Pseudo action: stonith-httpd-bundle-1-off on httpd-bundle-1 * Pseudo action: ping_stop_0 * Pseudo action: ping-clone_stopped_0 * Pseudo action: httpd-bundle-clone_stop_0 * Pseudo action: httpd-bundle-ip-192.168.122.132_stop_0 * Pseudo action: Public-IP_stop_0 * Pseudo action: mysql-group:0_stop_0 * Pseudo action: mysql-proxy_stop_0 * Pseudo action: promotable-rsc_stop_0 * Pseudo action: promotable-clone_stopped_0 * Pseudo action: httpd_stop_0 * Pseudo action: httpd-bundle-clone_stopped_0 * Pseudo action: httpd-bundle-clone_start_0 * Pseudo action: exim-group_stopped_0 * Pseudo action: exim-group_start_0 * Resource action: Public-IP start on cluster01 * Resource action: Email start on cluster01 * Pseudo action: mysql-group:0_stopped_0 * Pseudo action: mysql-clone-group_stopped_0 * Pseudo action: httpd-bundle_stopped_0 * Pseudo action: httpd-bundle-clone_running_0 * Pseudo action: exim-group_running_0 * Pseudo action: httpd-bundle_running_0 Revised Cluster Status: * Node List: * Online: [ cluster01 ] * OFFLINE: [ cluster02 ] * GuestOnline: [ httpd-bundle-0 ] * Full List of Resources: * Clone Set: ping-clone [ping]: * Started: [ cluster01 ] * Stopped: [ cluster02 ] * Fencing (stonith:fence_xvm): Started cluster01 * dummy (ocf:pacemaker:Dummy): Stopped * Clone Set: inactive-clone [inactive-dhcpd] (disabled): * Stopped (disabled): [ cluster01 cluster02 ] * Resource Group: inactive-group (disabled): * inactive-dummy-1 (ocf:pacemaker:Dummy): Stopped (disabled) * inactive-dummy-2 (ocf:pacemaker:Dummy): Stopped (disabled) * Container bundle set: httpd-bundle [pcmk:http]: * httpd-bundle-0 (192.168.122.131) (ocf:heartbeat:apache): Started cluster01 * httpd-bundle-1 (192.168.122.132) (ocf:heartbeat:apache): FAILED * httpd-bundle-2 (192.168.122.133) (ocf:heartbeat:apache): Stopped * Resource Group: exim-group: * Public-IP (ocf:heartbeat:IPaddr): Started cluster01 * Email (lsb:exim): Started cluster01 * Clone Set: mysql-clone-group [mysql-group]: * Started: [ cluster01 ] * Stopped: [ cluster02 ] * Clone Set: promotable-clone [promotable-rsc] (promotable): * Unpromoted: [ cluster01 ] * Stopped: [ cluster02 ] =#=#=#= End test: Simulate a node 
failing - OK (0) =#=#=#= * Passed: crm_simulate - Simulate a node failing =#=#=#= Begin test: List a promotable clone resource =#=#=#= resource promotable-clone is running on: cluster01 resource promotable-clone is running on: cluster02 Promoted =#=#=#= End test: List a promotable clone resource - OK (0) =#=#=#= * Passed: crm_resource - List a promotable clone resource =#=#=#= Begin test: List the primitive of a promotable clone resource =#=#=#= resource promotable-rsc is running on: cluster01 resource promotable-rsc is running on: cluster02 Promoted =#=#=#= End test: List the primitive of a promotable clone resource - OK (0) =#=#=#= * Passed: crm_resource - List the primitive of a promotable clone resource =#=#=#= Begin test: List a single instance of a promotable clone resource =#=#=#= resource promotable-rsc:0 is running on: cluster02 Promoted =#=#=#= End test: List a single instance of a promotable clone resource - OK (0) =#=#=#= * Passed: crm_resource - List a single instance of a promotable clone resource =#=#=#= Begin test: List another instance of a promotable clone resource =#=#=#= resource promotable-rsc:1 is running on: cluster01 =#=#=#= End test: List another instance of a promotable clone resource - OK (0) =#=#=#= * Passed: crm_resource - List another instance of a promotable clone resource =#=#=#= Begin test: List a promotable clone resource in XML =#=#=#= cluster01 cluster02 =#=#=#= End test: List a promotable clone resource in XML - OK (0) =#=#=#= * Passed: crm_resource - List a promotable clone resource in XML =#=#=#= Begin test: List the primitive of a promotable clone resource in XML =#=#=#= cluster01 cluster02 =#=#=#= End test: List the primitive of a promotable clone resource in XML - OK (0) =#=#=#= * Passed: crm_resource - List the primitive of a promotable clone resource in XML =#=#=#= Begin test: List a single instance of a promotable clone resource in XML =#=#=#= cluster02 =#=#=#= End test: List a single instance of a promotable clone resource in XML - OK (0) =#=#=#= * Passed: crm_resource - List a single instance of a promotable clone resource in XML =#=#=#= Begin test: List another instance of a promotable clone resource in XML =#=#=#= cluster01 =#=#=#= End test: List another instance of a promotable clone resource in XML - OK (0) =#=#=#= * Passed: crm_resource - List another instance of a promotable clone resource in XML =#=#=#= Begin test: Try to move an instance of a cloned resource =#=#=#= crm_resource: Cannot operate on clone resource instance 'promotable-rsc:0' Error performing operation: Invalid parameter =#=#=#= End test: Try to move an instance of a cloned resource - Invalid parameter (2) =#=#=#= * Passed: crm_resource - Try to move an instance of a cloned resource =#=#=#= Begin test: Query a nonexistent promotable score attribute =#=#=#= crm_attribute: Error performing operation: No such device or address =#=#=#= End test: Query a nonexistent promotable score attribute - No such object (105) =#=#=#= * Passed: crm_attribute - Query a nonexistent promotable score attribute =#=#=#= Begin test: Query a nonexistent promotable score attribute (XML) =#=#=#= crm_attribute: Error performing operation: No such device or address =#=#=#= End test: Query a nonexistent promotable score attribute (XML) - No such object (105) =#=#=#= * Passed: crm_attribute - Query a nonexistent promotable score attribute (XML) =#=#=#= Begin test: Delete a nonexistent promotable score attribute =#=#=#= =#=#=#= End test: Delete a nonexistent promotable score attribute - OK (0) =#=#=#= * 
Passed: crm_attribute - Delete a nonexistent promotable score attribute =#=#=#= Begin test: Delete a nonexistent promotable score attribute (XML) =#=#=#= =#=#=#= End test: Delete a nonexistent promotable score attribute (XML) - OK (0) =#=#=#= * Passed: crm_attribute - Delete a nonexistent promotable score attribute (XML) =#=#=#= Begin test: Query after deleting a nonexistent promotable score attribute =#=#=#= crm_attribute: Error performing operation: No such device or address =#=#=#= End test: Query after deleting a nonexistent promotable score attribute - No such object (105) =#=#=#= * Passed: crm_attribute - Query after deleting a nonexistent promotable score attribute =#=#=#= Begin test: Query after deleting a nonexistent promotable score attribute (XML) =#=#=#= crm_attribute: Error performing operation: No such device or address =#=#=#= End test: Query after deleting a nonexistent promotable score attribute (XML) - No such object (105) =#=#=#= * Passed: crm_attribute - Query after deleting a nonexistent promotable score attribute (XML) =#=#=#= Begin test: Update a nonexistent promotable score attribute =#=#=#= =#=#=#= End test: Update a nonexistent promotable score attribute - OK (0) =#=#=#= * Passed: crm_attribute - Update a nonexistent promotable score attribute =#=#=#= Begin test: Update a nonexistent promotable score attribute (XML) =#=#=#= =#=#=#= End test: Update a nonexistent promotable score attribute (XML) - OK (0) =#=#=#= * Passed: crm_attribute - Update a nonexistent promotable score attribute (XML) =#=#=#= Begin test: Query after updating a nonexistent promotable score attribute =#=#=#= scope=status name=master-promotable-rsc value=1 =#=#=#= End test: Query after updating a nonexistent promotable score attribute - OK (0) =#=#=#= * Passed: crm_attribute - Query after updating a nonexistent promotable score attribute =#=#=#= Begin test: Query after updating a nonexistent promotable score attribute (XML) =#=#=#= =#=#=#= End test: Query after updating a nonexistent promotable score attribute (XML) - OK (0) =#=#=#= * Passed: crm_attribute - Query after updating a nonexistent promotable score attribute (XML) =#=#=#= Begin test: Update an existing promotable score attribute =#=#=#= =#=#=#= End test: Update an existing promotable score attribute - OK (0) =#=#=#= * Passed: crm_attribute - Update an existing promotable score attribute =#=#=#= Begin test: Update an existing promotable score attribute (XML) =#=#=#= =#=#=#= End test: Update an existing promotable score attribute (XML) - OK (0) =#=#=#= * Passed: crm_attribute - Update an existing promotable score attribute (XML) =#=#=#= Begin test: Query after updating an existing promotable score attribute =#=#=#= scope=status name=master-promotable-rsc value=5 =#=#=#= End test: Query after updating an existing promotable score attribute - OK (0) =#=#=#= * Passed: crm_attribute - Query after updating an existing promotable score attribute =#=#=#= Begin test: Query after updating an existing promotable score attribute (XML) =#=#=#= =#=#=#= End test: Query after updating an existing promotable score attribute (XML) - OK (0) =#=#=#= * Passed: crm_attribute - Query after updating an existing promotable score attribute (XML) =#=#=#= Begin test: Delete an existing promotable score attribute =#=#=#= Deleted status attribute: id=status-1-master-promotable-rsc name=master-promotable-rsc =#=#=#= End test: Delete an existing promotable score attribute - OK (0) =#=#=#= * Passed: crm_attribute - Delete an existing promotable score attribute =#=#=#= 
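The promotion-score tests in this stretch exercise crm_attribute's -p/--promotion mode, which operates on the master-<resource> node attribute in the status section (the fixture shows name=master-promotable-rsc, scope=status). A rough sketch of the corresponding invocations, assuming the fixture's CIB and node context are already in place (the driver script is not shown here, so exact flags may differ):

    $ crm_attribute --promotion=promotable-rsc -G                  # query; "No such device or address" if unset
    $ crm_attribute --promotion=promotable-rsc -v 5                # create or update the score
    $ crm_attribute --promotion=promotable-rsc --update=-INFINITY  # scores accept -INFINITY
    $ crm_attribute --promotion=promotable-rsc -D                  # delete; deleting a nonexistent score still exits 0

When --promotion is given an empty value, the resource ID is taken from OCF_RESOURCE_INSTANCE instead, as the "Try OCF_RESOURCE_INSTANCE" test further below demonstrates.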
Begin test: Delete an existing promotable score attribute (XML) =#=#=#= =#=#=#= End test: Delete an existing promotable score attribute (XML) - OK (0) =#=#=#= * Passed: crm_attribute - Delete an existing promotable score attribute (XML) =#=#=#= Begin test: Query after deleting an existing promotable score attribute =#=#=#= crm_attribute: Error performing operation: No such device or address =#=#=#= End test: Query after deleting an existing promotable score attribute - No such object (105) =#=#=#= * Passed: crm_attribute - Query after deleting an existing promotable score attribute =#=#=#= Begin test: Query after deleting an existing promotable score attribute (XML) =#=#=#= crm_attribute: Error performing operation: No such device or address =#=#=#= End test: Query after deleting an existing promotable score attribute (XML) - No such object (105) =#=#=#= * Passed: crm_attribute - Query after deleting an existing promotable score attribute (XML) =#=#=#= Begin test: Update a promotable score attribute to -INFINITY =#=#=#= =#=#=#= End test: Update a promotable score attribute to -INFINITY - OK (0) =#=#=#= * Passed: crm_attribute - Update a promotable score attribute to -INFINITY =#=#=#= Begin test: Update a promotable score attribute to -INFINITY (XML) =#=#=#= =#=#=#= End test: Update a promotable score attribute to -INFINITY (XML) - OK (0) =#=#=#= * Passed: crm_attribute - Update a promotable score attribute to -INFINITY (XML) =#=#=#= Begin test: Query after updating a promotable score attribute to -INFINITY =#=#=#= scope=status name=master-promotable-rsc value=-INFINITY =#=#=#= End test: Query after updating a promotable score attribute to -INFINITY - OK (0) =#=#=#= * Passed: crm_attribute - Query after updating a promotable score attribute to -INFINITY =#=#=#= Begin test: Query after updating a promotable score attribute to -INFINITY (XML) =#=#=#= =#=#=#= End test: Query after updating a promotable score attribute to -INFINITY (XML) - OK (0) =#=#=#= * Passed: crm_attribute - Query after updating a promotable score attribute to -INFINITY (XML) =#=#=#= Begin test: Try OCF_RESOURCE_INSTANCE if -p is specified with an empty string =#=#=#= scope=status name=master-promotable-rsc value=-INFINITY =#=#=#= End test: Try OCF_RESOURCE_INSTANCE if -p is specified with an empty string - OK (0) =#=#=#= * Passed: crm_attribute - Try OCF_RESOURCE_INSTANCE if -p is specified with an empty string =#=#=#= Begin test: Return usage error if both -p and OCF_RESOURCE_INSTANCE are empty strings =#=#=#= crm_attribute: -p/--promotion must be called from an OCF resource agent or with a resource ID specified =#=#=#= End test: Return usage error if both -p and OCF_RESOURCE_INSTANCE are empty strings - Incorrect usage (64) =#=#=#= * Passed: crm_attribute - Return usage error if both -p and OCF_RESOURCE_INSTANCE are empty strings =#=#=#= Begin test: Check that CIB_file="-" works - crm_mon =#=#=#= Cluster Summary: * Stack: corosync * Current DC: cluster02 (version) - partition with quorum * Last updated: * Last change: * 5 nodes configured * 32 resource instances configured (4 DISABLED) Node List: * Online: [ cluster01 cluster02 ] * GuestOnline: [ httpd-bundle-0 httpd-bundle-1 ] Active Resources: * Clone Set: ping-clone [ping]: * Started: [ cluster01 cluster02 ] * Fencing (stonith:fence_xvm): Started cluster01 * dummy (ocf:pacemaker:Dummy): Started cluster02 * Container bundle set: httpd-bundle [pcmk:http]: * httpd-bundle-0 (192.168.122.131) (ocf:heartbeat:apache): Started cluster01 * httpd-bundle-1 (192.168.122.132) 
(ocf:heartbeat:apache): Started cluster02 * httpd-bundle-2 (192.168.122.133) (ocf:heartbeat:apache): Stopped * Resource Group: exim-group: * Public-IP (ocf:heartbeat:IPaddr): Started cluster02 * Email (lsb:exim): Started cluster02 * Clone Set: mysql-clone-group [mysql-group]: * Started: [ cluster01 cluster02 ] * Clone Set: promotable-clone [promotable-rsc] (promotable): * Promoted: [ cluster02 ] * Unpromoted: [ cluster01 ] =#=#=#= End test: Check that CIB_file="-" works - crm_mon - OK (0) =#=#=#= * Passed: cat - Check that CIB_file="-" works - crm_mon =#=#=#= Begin test: Check that CIB_file="-" works - crm_resource =#=#=#= =#=#=#= End test: Check that CIB_file="-" works - crm_resource - OK (0) =#=#=#= * Passed: cat - Check that CIB_file="-" works - crm_resource =#=#=#= Begin test: Check that CIB_file="-" works - crmadmin =#=#=#= 11 =#=#=#= End test: Check that CIB_file="-" works - crmadmin - OK (0) =#=#=#= * Passed: cat - Check that CIB_file="-" works - crmadmin =#=#=#= Begin test: Get active shadow instance (no active instance) =#=#=#= crm_shadow: No active shadow configuration defined =#=#=#= End test: Get active shadow instance (no active instance) - No such object (105) =#=#=#= * Passed: crm_shadow - Get active shadow instance (no active instance) =#=#=#= Begin test: Get active shadow instance (no active instance) (XML) =#=#=#= crm_shadow: No active shadow configuration defined =#=#=#= End test: Get active shadow instance (no active instance) (XML) - No such object (105) =#=#=#= * Passed: crm_shadow - Get active shadow instance (no active instance) (XML) =#=#=#= Begin test: Get active shadow instance's file name (no active instance) =#=#=#= crm_shadow: No active shadow configuration defined =#=#=#= End test: Get active shadow instance's file name (no active instance) - No such object (105) =#=#=#= * Passed: crm_shadow - Get active shadow instance's file name (no active instance) =#=#=#= Begin test: Get active shadow instance's file name (no active instance) (XML) =#=#=#= crm_shadow: No active shadow configuration defined =#=#=#= End test: Get active shadow instance's file name (no active instance) (XML) - No such object (105) =#=#=#= * Passed: crm_shadow - Get active shadow instance's file name (no active instance) (XML) =#=#=#= Begin test: Get active shadow instance's contents (no active instance) =#=#=#= crm_shadow: No active shadow configuration defined =#=#=#= End test: Get active shadow instance's contents (no active instance) - No such object (105) =#=#=#= * Passed: crm_shadow - Get active shadow instance's contents (no active instance) =#=#=#= Begin test: Get active shadow instance's contents (no active instance) (XML) =#=#=#= crm_shadow: No active shadow configuration defined =#=#=#= End test: Get active shadow instance's contents (no active instance) (XML) - No such object (105) =#=#=#= * Passed: crm_shadow - Get active shadow instance's contents (no active instance) (XML) =#=#=#= Begin test: Get active shadow instance's diff (no active instance) =#=#=#= crm_shadow: No active shadow configuration defined =#=#=#= End test: Get active shadow instance's diff (no active instance) - No such object (105) =#=#=#= * Passed: crm_shadow - Get active shadow instance's diff (no active instance) =#=#=#= Begin test: Get active shadow instance's diff (no active instance) (XML) =#=#=#= crm_shadow: No active shadow configuration defined =#=#=#= End test: Get active shadow instance's diff (no active instance) (XML) - No such object (105) =#=#=#= * Passed: crm_shadow - Get active shadow 
instance's diff (no active instance) (XML) =#=#=#= Begin test: Create copied shadow instance =#=#=#= A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= End test: Create copied shadow instance - OK (0) =#=#=#= * Passed: crm_shadow - Create copied shadow instance =#=#=#= Begin test: Create copied shadow instance (XML) =#=#=#= A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= End test: Create copied shadow instance (XML) - OK (0) =#=#=#= * Passed: crm_shadow - Create copied shadow instance (XML) =#=#=#= Begin test: Get active shadow instance (copied) =#=#=#= cts-cli =#=#=#= End test: Get active shadow instance (copied) - OK (0) =#=#=#= * Passed: crm_shadow - Get active shadow instance (copied) =#=#=#= Begin test: Get active shadow instance (copied) (XML) =#=#=#= =#=#=#= End test: Get active shadow instance (copied) (XML) - OK (0) =#=#=#= * Passed: crm_shadow - Get active shadow instance (copied) (XML) =#=#=#= Begin test: Get active shadow instance's file name (copied) =#=#=#= /tmp/cts-cli.shadow/shadow.cts-cli =#=#=#= End test: Get active shadow instance's file name (copied) - OK (0) =#=#=#= * Passed: crm_shadow - Get active shadow instance's file name (copied) =#=#=#= Begin test: Get active shadow instance's file name (copied) (XML) =#=#=#= =#=#=#= End test: Get active shadow instance's file name (copied) (XML) - OK (0) =#=#=#= * Passed: crm_shadow - Get active shadow instance's file name (copied) (XML) =#=#=#= Begin test: Get active shadow instance's contents (copied) =#=#=#= =#=#=#= End test: Get active shadow instance's contents (copied) - OK (0) =#=#=#= * Passed: crm_shadow - Get active shadow instance's contents (copied) =#=#=#= Begin test: Get active shadow instance's contents (copied) (XML) =#=#=#= ]]> =#=#=#= End test: Get active shadow instance's contents (copied) (XML) - OK (0) =#=#=#= * Passed: crm_shadow - Get active shadow instance's contents (copied) (XML) =#=#=#= Begin test: Get active shadow instance's diff (copied) =#=#=#= =#=#=#= End test: Get active shadow instance's diff (copied) - OK (0) =#=#=#= * Passed: crm_shadow - Get active shadow instance's diff (copied) =#=#=#= Begin test: Get active shadow instance's diff (copied) (XML) =#=#=#= =#=#=#= End test: Get active shadow instance's diff (copied) (XML) - OK (0) =#=#=#= * Passed: crm_shadow - Get active shadow instance's diff (copied) (XML) =#=#=#= Begin test: Get active shadow instance's diff (after changes) =#=#=#= Diff: --- 1.1.173 2 Diff: +++ 1.4.1 (null) -- /cib/configuration/op_defaults + /cib: @epoch=4, @num_updates=1 + /cib/configuration/resources/primitive[@id='dummy']: @description=desc ++ /cib/configuration/resources: ++ /cib/status: =#=#=#= End test: Get active shadow instance's diff (after changes) - Error occurred (1) =#=#=#= * Passed: crm_shadow - Get active shadow instance's diff (after changes) =#=#=#= Begin test: Get active shadow instance's diff (after changes) (XML) =#=#=#= ]]> =#=#=#= End test: Get active shadow instance's diff (after changes) (XML) - Error occurred (1) =#=#=#= * Passed: crm_shadow - Get active shadow instance's diff (after changes) (XML) =#=#=#= Begin test: Commit shadow instance =#=#=#= crm_shadow: The commit command overwrites the active cluster configuration. To prevent accidental destruction of the cluster, the --force flag is required in order to proceed. 
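The shadow tests above follow crm_shadow's intended workflow. A minimal sketch, assuming a shell where CIB_file (or a live cluster) provides the base configuration; the backing-file path shown in the fixture reflects its private shadow directory and will differ on a normal host:

    $ crm_shadow --create cts-cli           # copy the active CIB into a new shadow instance
    $ export CIB_shadow=cts-cli             # point Pacemaker tools at the shadow copy
    $ crm_shadow --which                    # prints: cts-cli
    $ crm_shadow --file                     # prints the backing file, e.g. .../shadow.cts-cli
    $ crm_shadow --display                  # dump the shadow CIB
    $ crm_shadow --diff                     # nonzero exit when shadow and active CIB differ, hence "Error occurred (1)" above
    $ crm_shadow --commit cts-cli --force   # write the shadow back; refused without --force, as the message above shows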
=#=#=#= End test: Commit shadow instance - Incorrect usage (64) =#=#=#= * Passed: crm_shadow - Commit shadow instance =#=#=#= Begin test: Commit shadow instance (force) =#=#=#= =#=#=#= End test: Commit shadow instance (force) - OK (0) =#=#=#= * Passed: crm_shadow - Commit shadow instance (force) =#=#=#= Begin test: Get active shadow instance's diff (after commit) =#=#=#= Diff: --- 1.2.0 2 Diff: +++ 1.4.1 (null) + /cib: @epoch=4, @num_updates=1 ++ /cib/status: =#=#=#= End test: Get active shadow instance's diff (after commit) - Error occurred (1) =#=#=#= * Passed: crm_shadow - Get active shadow instance's diff (after commit) =#=#=#= Begin test: Commit shadow instance (force) (all) =#=#=#= =#=#=#= End test: Commit shadow instance (force) (all) - OK (0) =#=#=#= * Passed: crm_shadow - Commit shadow instance (force) (all) =#=#=#= Begin test: Get active shadow instance's diff (after commit all) =#=#=#= Diff: --- 1.4.2 2 Diff: +++ 1.4.1 (null) + /cib: @num_updates=1 =#=#=#= End test: Get active shadow instance's diff (after commit all) - Error occurred (1) =#=#=#= * Passed: crm_shadow - Get active shadow instance's diff (after commit all) =#=#=#= Begin test: Commit shadow instance (XML) =#=#=#= crm_shadow: The commit command overwrites the active cluster configuration. To prevent accidental destruction of the cluster, the --force flag is required in order to proceed. =#=#=#= End test: Commit shadow instance (XML) - Incorrect usage (64) =#=#=#= * Passed: crm_shadow - Commit shadow instance (XML) =#=#=#= Begin test: Commit shadow instance (force) (XML) =#=#=#= =#=#=#= End test: Commit shadow instance (force) (XML) - OK (0) =#=#=#= * Passed: crm_shadow - Commit shadow instance (force) (XML) =#=#=#= Begin test: Get active shadow instance's diff (after commit) (XML) =#=#=#= ]]> =#=#=#= End test: Get active shadow instance's diff (after commit) (XML) - Error occurred (1) =#=#=#= * Passed: crm_shadow - Get active shadow instance's diff (after commit) (XML) =#=#=#= Begin test: Commit shadow instance (force) (all) (XML) =#=#=#= =#=#=#= End test: Commit shadow instance (force) (all) (XML) - OK (0) =#=#=#= * Passed: crm_shadow - Commit shadow instance (force) (all) (XML) =#=#=#= Begin test: Get active shadow instance's diff (after commit all) (XML) =#=#=#= ]]> =#=#=#= End test: Get active shadow instance's diff (after commit all) (XML) - Error occurred (1) =#=#=#= * Passed: crm_shadow - Get active shadow instance's diff (after commit all) (XML) =#=#=#= Begin test: Commit shadow instance (no active instance) =#=#=#= crm_shadow: The commit command overwrites the active cluster configuration. To prevent accidental destruction of the cluster, the --force flag is required in order to proceed. =#=#=#= End test: Commit shadow instance (no active instance) - Incorrect usage (64) =#=#=#= * Passed: crm_shadow - Commit shadow instance (no active instance) =#=#=#= Begin test: Commit shadow instance (no active instance) (force) =#=#=#= =#=#=#= End test: Commit shadow instance (no active instance) (force) - OK (0) =#=#=#= * Passed: crm_shadow - Commit shadow instance (no active instance) (force) =#=#=#= Begin test: Commit shadow instance (no active instance) (XML) =#=#=#= crm_shadow: The commit command overwrites the active cluster configuration. To prevent accidental destruction of the cluster, the --force flag is required in order to proceed. 
=#=#=#= End test: Commit shadow instance (no active instance) (XML) - Incorrect usage (64) =#=#=#= * Passed: crm_shadow - Commit shadow instance (no active instance) (XML) =#=#=#= Begin test: Commit shadow instance (no active instance) (force) (XML) =#=#=#= =#=#=#= End test: Commit shadow instance (no active instance) (force) (XML) - OK (0) =#=#=#= * Passed: crm_shadow - Commit shadow instance (no active instance) (force) (XML) =#=#=#= Begin test: Commit shadow instance (mismatch) =#=#=#= crm_shadow: The commit command overwrites the active cluster configuration. Additionally, the supplied shadow instance (cts-cli) is not the same as the active one (nonexistent_shadow). To prevent accidental destruction of the cluster, the --force flag is required in order to proceed. =#=#=#= End test: Commit shadow instance (mismatch) - Incorrect usage (64) =#=#=#= * Passed: crm_shadow - Commit shadow instance (mismatch) =#=#=#= Begin test: Commit shadow instance (mismatch) (force) =#=#=#= =#=#=#= End test: Commit shadow instance (mismatch) (force) - OK (0) =#=#=#= * Passed: crm_shadow - Commit shadow instance (mismatch) (force) =#=#=#= Begin test: Commit shadow instance (mismatch) (XML) =#=#=#= crm_shadow: The commit command overwrites the active cluster configuration. Additionally, the supplied shadow instance (cts-cli) is not the same as the active one (nonexistent_shadow). To prevent accidental destruction of the cluster, the --force flag is required in order to proceed. =#=#=#= End test: Commit shadow instance (mismatch) (XML) - Incorrect usage (64) =#=#=#= * Passed: crm_shadow - Commit shadow instance (mismatch) (XML) =#=#=#= Begin test: Commit shadow instance (mismatch) (force) (XML) =#=#=#= =#=#=#= End test: Commit shadow instance (mismatch) (force) (XML) - OK (0) =#=#=#= * Passed: crm_shadow - Commit shadow instance (mismatch) (force) (XML) =#=#=#= Begin test: Commit shadow instance (nonexistent shadow file) =#=#=#= crm_shadow: The commit command overwrites the active cluster configuration. To prevent accidental destruction of the cluster, the --force flag is required in order to proceed. =#=#=#= End test: Commit shadow instance (nonexistent shadow file) - Incorrect usage (64) =#=#=#= * Passed: crm_shadow - Commit shadow instance (nonexistent shadow file) =#=#=#= Begin test: Commit shadow instance (nonexistent shadow file) (force) =#=#=#= crm_shadow: Could not access shadow instance 'nonexistent_shadow': No such file or directory =#=#=#= End test: Commit shadow instance (nonexistent shadow file) (force) - No such object (105) =#=#=#= * Passed: crm_shadow - Commit shadow instance (nonexistent shadow file) (force) =#=#=#= Begin test: Get active shadow instance's diff (nonexistent shadow file) =#=#=#= crm_shadow: Could not access shadow instance 'nonexistent_shadow': No such file or directory =#=#=#= End test: Get active shadow instance's diff (nonexistent shadow file) - No such object (105) =#=#=#= * Passed: crm_shadow - Get active shadow instance's diff (nonexistent shadow file) =#=#=#= Begin test: Commit shadow instance (nonexistent shadow file) (XML) =#=#=#= crm_shadow: The commit command overwrites the active cluster configuration. To prevent accidental destruction of the cluster, the --force flag is required in order to proceed. 
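The two failure modes tested in this stretch can be reproduced roughly as follows; the error messages are the ones recorded in the fixture, while the environment settings and file path are illustrative assumptions about how the driver provokes them:

    $ CIB_shadow=nonexistent_shadow crm_shadow --commit nonexistent_shadow --force
    crm_shadow: Could not access shadow instance 'nonexistent_shadow': No such file or directory
    $ CIB_file=/no/such/cib.xml crm_shadow --commit cts-cli --force
    crm_shadow: Could not connect to CIB: No such device or address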
=#=#=#= End test: Commit shadow instance (nonexistent shadow file) (XML) - Incorrect usage (64) =#=#=#= * Passed: crm_shadow - Commit shadow instance (nonexistent shadow file) (XML) =#=#=#= Begin test: Commit shadow instance (nonexistent shadow file) (force) (XML) =#=#=#= crm_shadow: Could not access shadow instance 'nonexistent_shadow': No such file or directory =#=#=#= End test: Commit shadow instance (nonexistent shadow file) (force) (XML) - No such object (105) =#=#=#= * Passed: crm_shadow - Commit shadow instance (nonexistent shadow file) (force) (XML) =#=#=#= Begin test: Get active shadow instance's diff (nonexistent shadow file) (XML) =#=#=#= crm_shadow: Could not access shadow instance 'nonexistent_shadow': No such file or directory =#=#=#= End test: Get active shadow instance's diff (nonexistent shadow file) (XML) - No such object (105) =#=#=#= * Passed: crm_shadow - Get active shadow instance's diff (nonexistent shadow file) (XML) =#=#=#= Begin test: Commit shadow instance (nonexistent CIB file) =#=#=#= crm_shadow: The commit command overwrites the active cluster configuration. To prevent accidental destruction of the cluster, the --force flag is required in order to proceed. =#=#=#= End test: Commit shadow instance (nonexistent CIB file) - Incorrect usage (64) =#=#=#= * Passed: crm_shadow - Commit shadow instance (nonexistent CIB file) =#=#=#= Begin test: Commit shadow instance (nonexistent CIB file) (force) =#=#=#= crm_shadow: Could not connect to CIB: No such device or address =#=#=#= End test: Commit shadow instance (nonexistent CIB file) (force) - No such object (105) =#=#=#= * Passed: crm_shadow - Commit shadow instance (nonexistent CIB file) (force) =#=#=#= Begin test: Get active shadow instance's diff (nonexistent CIB file) =#=#=#= crm_shadow: Could not connect to CIB: No such device or address =#=#=#= End test: Get active shadow instance's diff (nonexistent CIB file) - No such object (105) =#=#=#= * Passed: crm_shadow - Get active shadow instance's diff (nonexistent CIB file) =#=#=#= Begin test: Commit shadow instance (nonexistent CIB file) (XML) =#=#=#= crm_shadow: The commit command overwrites the active cluster configuration. To prevent accidental destruction of the cluster, the --force flag is required in order to proceed. =#=#=#= End test: Commit shadow instance (nonexistent CIB file) (XML) - Incorrect usage (64) =#=#=#= * Passed: crm_shadow - Commit shadow instance (nonexistent CIB file) (XML) =#=#=#= Begin test: Commit shadow instance (nonexistent CIB file) (force) (XML) =#=#=#= crm_shadow: Could not connect to CIB: No such device or address =#=#=#= End test: Commit shadow instance (nonexistent CIB file) (force) (XML) - No such object (105) =#=#=#= * Passed: crm_shadow - Commit shadow instance (nonexistent CIB file) (force) (XML) =#=#=#= Begin test: Get active shadow instance's diff (nonexistent CIB file) (XML) =#=#=#= crm_shadow: Could not connect to CIB: No such device or address =#=#=#= End test: Get active shadow instance's diff (nonexistent CIB file) (XML) - No such object (105) =#=#=#= * Passed: crm_shadow - Get active shadow instance's diff (nonexistent CIB file) (XML) =#=#=#= Begin test: Delete shadow instance =#=#=#= crm_shadow: The delete command removes the specified shadow file. To prevent accidental destruction of the shadow file, the --force flag is required in order to proceed. 
=#=#=#= End test: Delete shadow instance - Incorrect usage (64) =#=#=#= * Passed: crm_shadow - Delete shadow instance =#=#=#= Begin test: Delete shadow instance (force) =#=#=#= Remember to unset the CIB_shadow variable by entering the following into your shell: unset CIB_shadow =#=#=#= End test: Delete shadow instance (force) - OK (0) =#=#=#= * Passed: crm_shadow - Delete shadow instance (force) A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= Begin test: Delete shadow instance (XML) =#=#=#= crm_shadow: The delete command removes the specified shadow file. To prevent accidental destruction of the shadow file, the --force flag is required in order to proceed. =#=#=#= End test: Delete shadow instance (XML) - Incorrect usage (64) =#=#=#= * Passed: crm_shadow - Delete shadow instance (XML) =#=#=#= Begin test: Delete shadow instance (force) (XML) =#=#=#= Remember to unset the CIB_shadow variable by entering the following into your shell: unset CIB_shadow =#=#=#= End test: Delete shadow instance (force) (XML) - OK (0) =#=#=#= * Passed: crm_shadow - Delete shadow instance (force) (XML) A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= Begin test: Delete shadow instance (no active instance) =#=#=#= crm_shadow: The delete command removes the specified shadow file. To prevent accidental destruction of the shadow file, the --force flag is required in order to proceed. =#=#=#= End test: Delete shadow instance (no active instance) - Incorrect usage (64) =#=#=#= * Passed: crm_shadow - Delete shadow instance (no active instance) =#=#=#= Begin test: Delete shadow instance (no active instance) (force) =#=#=#= =#=#=#= End test: Delete shadow instance (no active instance) (force) - OK (0) =#=#=#= * Passed: crm_shadow - Delete shadow instance (no active instance) (force) A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= Begin test: Delete shadow instance (no active instance) (XML) =#=#=#= crm_shadow: The delete command removes the specified shadow file. To prevent accidental destruction of the shadow file, the --force flag is required in order to proceed. =#=#=#= End test: Delete shadow instance (no active instance) (XML) - Incorrect usage (64) =#=#=#= * Passed: crm_shadow - Delete shadow instance (no active instance) (XML) =#=#=#= Begin test: Delete shadow instance (no active instance) (force) (XML) =#=#=#= =#=#=#= End test: Delete shadow instance (no active instance) (force) (XML) - OK (0) =#=#=#= * Passed: crm_shadow - Delete shadow instance (no active instance) (force) (XML) A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= Begin test: Delete shadow instance (mismatch) =#=#=#= crm_shadow: The delete command removes the specified shadow file. Additionally, the supplied shadow instance (cts-cli) is not the same as the active one (nonexistent_shadow). To prevent accidental destruction of the shadow file, the --force flag is required in order to proceed. 
=#=#=#= End test: Delete shadow instance (mismatch) - Incorrect usage (64) =#=#=#= * Passed: crm_shadow - Delete shadow instance (mismatch) =#=#=#= Begin test: Delete shadow instance (mismatch) (force) =#=#=#= =#=#=#= End test: Delete shadow instance (mismatch) (force) - OK (0) =#=#=#= * Passed: crm_shadow - Delete shadow instance (mismatch) (force) A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= Begin test: Delete shadow instance (mismatch) (XML) =#=#=#= crm_shadow: The delete command removes the specified shadow file. Additionally, the supplied shadow instance (cts-cli) is not the same as the active one (nonexistent_shadow). To prevent accidental destruction of the shadow file, the --force flag is required in order to proceed. =#=#=#= End test: Delete shadow instance (mismatch) (XML) - Incorrect usage (64) =#=#=#= * Passed: crm_shadow - Delete shadow instance (mismatch) (XML) =#=#=#= Begin test: Delete shadow instance (mismatch) (force) (XML) =#=#=#= =#=#=#= End test: Delete shadow instance (mismatch) (force) (XML) - OK (0) =#=#=#= * Passed: crm_shadow - Delete shadow instance (mismatch) (force) (XML) =#=#=#= Begin test: Delete shadow instance (nonexistent shadow file) =#=#=#= crm_shadow: The delete command removes the specified shadow file. To prevent accidental destruction of the shadow file, the --force flag is required in order to proceed. =#=#=#= End test: Delete shadow instance (nonexistent shadow file) - Incorrect usage (64) =#=#=#= * Passed: crm_shadow - Delete shadow instance (nonexistent shadow file) =#=#=#= Begin test: Delete shadow instance (nonexistent shadow file) (force) =#=#=#= Remember to unset the CIB_shadow variable by entering the following into your shell: unset CIB_shadow =#=#=#= End test: Delete shadow instance (nonexistent shadow file) (force) - OK (0) =#=#=#= * Passed: crm_shadow - Delete shadow instance (nonexistent shadow file) (force) =#=#=#= Begin test: Delete shadow instance (nonexistent shadow file) (XML) =#=#=#= crm_shadow: The delete command removes the specified shadow file. To prevent accidental destruction of the shadow file, the --force flag is required in order to proceed. =#=#=#= End test: Delete shadow instance (nonexistent shadow file) (XML) - Incorrect usage (64) =#=#=#= * Passed: crm_shadow - Delete shadow instance (nonexistent shadow file) (XML) =#=#=#= Begin test: Delete shadow instance (nonexistent shadow file) (force) (XML) =#=#=#= Remember to unset the CIB_shadow variable by entering the following into your shell: unset CIB_shadow =#=#=#= End test: Delete shadow instance (nonexistent shadow file) (force) (XML) - OK (0) =#=#=#= * Passed: crm_shadow - Delete shadow instance (nonexistent shadow file) (force) (XML) A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= Begin test: Delete shadow instance (nonexistent CIB file) =#=#=#= crm_shadow: The delete command removes the specified shadow file. To prevent accidental destruction of the shadow file, the --force flag is required in order to proceed. 
=#=#=#= End test: Delete shadow instance (nonexistent CIB file) - Incorrect usage (64) =#=#=#= * Passed: crm_shadow - Delete shadow instance (nonexistent CIB file) =#=#=#= Begin test: Delete shadow instance (nonexistent CIB file) (force) =#=#=#= Remember to unset the CIB_shadow variable by entering the following into your shell: unset CIB_shadow =#=#=#= End test: Delete shadow instance (nonexistent CIB file) (force) - OK (0) =#=#=#= * Passed: crm_shadow - Delete shadow instance (nonexistent CIB file) (force) A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= Begin test: Delete shadow instance (nonexistent CIB file) (XML) =#=#=#= crm_shadow: The delete command removes the specified shadow file. To prevent accidental destruction of the shadow file, the --force flag is required in order to proceed. =#=#=#= End test: Delete shadow instance (nonexistent CIB file) (XML) - Incorrect usage (64) =#=#=#= * Passed: crm_shadow - Delete shadow instance (nonexistent CIB file) (XML) =#=#=#= Begin test: Delete shadow instance (nonexistent CIB file) (force) (XML) =#=#=#= Remember to unset the CIB_shadow variable by entering the following into your shell: unset CIB_shadow =#=#=#= End test: Delete shadow instance (nonexistent CIB file) (force) (XML) - OK (0) =#=#=#= * Passed: crm_shadow - Delete shadow instance (nonexistent CIB file) (force) (XML) =#=#=#= Begin test: Create copied shadow instance (no active instance) =#=#=#= A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= End test: Create copied shadow instance (no active instance) - OK (0) =#=#=#= * Passed: crm_shadow - Create copied shadow instance (no active instance) =#=#=#= Begin test: Create copied shadow instance (no active instance) (XML) =#=#=#= A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= End test: Create copied shadow instance (no active instance) (XML) - OK (0) =#=#=#= * Passed: crm_shadow - Create copied shadow instance (no active instance) (XML) =#=#=#= Begin test: Create copied shadow instance (mismatch) =#=#=#= A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= End test: Create copied shadow instance (mismatch) - OK (0) =#=#=#= * Passed: crm_shadow - Create copied shadow instance (mismatch) =#=#=#= Begin test: Create copied shadow instance (mismatch) (XML) =#=#=#= A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= End test: Create copied shadow instance (mismatch) (XML) - OK (0) =#=#=#= * Passed: crm_shadow - Create copied shadow instance (mismatch) (XML) =#=#=#= Begin test: Create copied shadow instance (file already exists) =#=#=#= crm_shadow: A shadow instance 'cts-cli' already exists. To prevent accidental destruction of the shadow file, the --force flag is required in order to proceed. =#=#=#= End test: Create copied shadow instance (file already exists) - Cannot create output file (73) =#=#=#= * Passed: crm_shadow - Create copied shadow instance (file already exists) =#=#=#= Begin test: Create copied shadow instance (file already exists) (force) =#=#=#= A new shadow instance was created. 
To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= End test: Create copied shadow instance (file already exists) (force) - OK (0) =#=#=#= * Passed: crm_shadow - Create copied shadow instance (file already exists) (force) =#=#=#= Begin test: Create copied shadow instance (file already exists) (XML) =#=#=#= crm_shadow: A shadow instance 'cts-cli' already exists. To prevent accidental destruction of the shadow file, the --force flag is required in order to proceed. =#=#=#= End test: Create copied shadow instance (file already exists) (XML) - Cannot create output file (73) =#=#=#= * Passed: crm_shadow - Create copied shadow instance (file already exists) (XML) =#=#=#= Begin test: Create copied shadow instance (file already exists) (force) (XML) =#=#=#= A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= End test: Create copied shadow instance (file already exists) (force) (XML) - OK (0) =#=#=#= * Passed: crm_shadow - Create copied shadow instance (file already exists) (force) (XML) =#=#=#= Begin test: Create copied shadow instance (nonexistent CIB file) (force) =#=#=#= crm_shadow: Could not connect to CIB: No such device or address =#=#=#= End test: Create copied shadow instance (nonexistent CIB file) (force) - No such object (105) =#=#=#= * Passed: crm_shadow - Create copied shadow instance (nonexistent CIB file) (force) =#=#=#= Begin test: Create copied shadow instance (nonexistent CIB file) (force) (XML) =#=#=#= crm_shadow: Could not connect to CIB: No such device or address =#=#=#= End test: Create copied shadow instance (nonexistent CIB file) (force) (XML) - No such object (105) =#=#=#= * Passed: crm_shadow - Create copied shadow instance (nonexistent CIB file) (force) (XML) =#=#=#= Begin test: Create empty shadow instance =#=#=#= Created new pacemaker configuration A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= End test: Create empty shadow instance - OK (0) =#=#=#= * Passed: crm_shadow - Create empty shadow instance =#=#=#= Begin test: Create empty shadow instance (XML) =#=#=#= A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= End test: Create empty shadow instance (XML) - OK (0) =#=#=#= * Passed: crm_shadow - Create empty shadow instance (XML) =#=#=#= Begin test: Create empty shadow instance (no active instance) =#=#=#= Created new pacemaker configuration A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= End test: Create empty shadow instance (no active instance) - OK (0) =#=#=#= * Passed: crm_shadow - Create empty shadow instance (no active instance) =#=#=#= Begin test: Create empty shadow instance (no active instance) (XML) =#=#=#= A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= End test: Create empty shadow instance (no active instance) (XML) - OK (0) =#=#=#= * Passed: crm_shadow - Create empty shadow instance (no active instance) (XML) =#=#=#= Begin test: Create empty shadow instance (mismatch) =#=#=#= Created new pacemaker configuration A new shadow instance was created. 
To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= End test: Create empty shadow instance (mismatch) - OK (0) =#=#=#= * Passed: crm_shadow - Create empty shadow instance (mismatch) =#=#=#= Begin test: Create empty shadow instance (mismatch) (XML) =#=#=#= A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= End test: Create empty shadow instance (mismatch) (XML) - OK (0) =#=#=#= * Passed: crm_shadow - Create empty shadow instance (mismatch) (XML) =#=#=#= Begin test: Create empty shadow instance (nonexistent CIB file) =#=#=#= Created new pacemaker configuration A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= End test: Create empty shadow instance (nonexistent CIB file) - OK (0) =#=#=#= * Passed: crm_shadow - Create empty shadow instance (nonexistent CIB file) =#=#=#= Begin test: Create empty shadow instance (nonexistent CIB file) (XML) =#=#=#= A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= End test: Create empty shadow instance (nonexistent CIB file) (XML) - OK (0) =#=#=#= * Passed: crm_shadow - Create empty shadow instance (nonexistent CIB file) (XML) =#=#=#= Begin test: Create empty shadow instance (file already exists) =#=#=#= crm_shadow: A shadow instance 'cts-cli' already exists. To prevent accidental destruction of the shadow file, the --force flag is required in order to proceed. =#=#=#= End test: Create empty shadow instance (file already exists) - Cannot create output file (73) =#=#=#= * Passed: crm_shadow - Create empty shadow instance (file already exists) =#=#=#= Begin test: Create empty shadow instance (file already exists) (force) =#=#=#= Created new pacemaker configuration A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= End test: Create empty shadow instance (file already exists) (force) - OK (0) =#=#=#= * Passed: crm_shadow - Create empty shadow instance (file already exists) (force) =#=#=#= Begin test: Create empty shadow instance (file already exists) (XML) =#=#=#= crm_shadow: A shadow instance 'cts-cli' already exists. To prevent accidental destruction of the shadow file, the --force flag is required in order to proceed. =#=#=#= End test: Create empty shadow instance (file already exists) (XML) - Cannot create output file (73) =#=#=#= * Passed: crm_shadow - Create empty shadow instance (file already exists) (XML) =#=#=#= Begin test: Create empty shadow instance (file already exists) (force) (XML) =#=#=#= A new shadow instance was created. 
To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= End test: Create empty shadow instance (file already exists) (force) (XML) - OK (0) =#=#=#= * Passed: crm_shadow - Create empty shadow instance (file already exists) (force) (XML) =#=#=#= Begin test: Get active shadow instance's contents (empty CIB) =#=#=#= =#=#=#= End test: Get active shadow instance's contents (empty CIB) - OK (0) =#=#=#= * Passed: crm_shadow - Get active shadow instance's contents (empty CIB) =#=#=#= Begin test: Get active shadow instance's contents (empty CIB) (XML) =#=#=#= ]]> =#=#=#= End test: Get active shadow instance's contents (empty CIB) (XML) - OK (0) =#=#=#= * Passed: crm_shadow - Get active shadow instance's contents (empty CIB) (XML) =#=#=#= Begin test: Get active shadow instance's diff (empty CIB) =#=#=#= Diff: --- 1.1.173 2 Diff: +++ 0.1.0 (null) -- /cib/configuration/crm_config/cluster_property_set[@id='cib-bootstrap-options'] -- /cib/configuration/nodes/node[@id='1'] -- /cib/configuration/nodes/node[@id='2'] -- /cib/configuration/resources/clone[@id='ping-clone'] -- /cib/configuration/resources/primitive[@id='Fencing'] -- /cib/configuration/resources/primitive[@id='dummy'] -- /cib/configuration/resources/clone[@id='inactive-clone'] -- /cib/configuration/resources/group[@id='inactive-group'] -- /cib/configuration/resources/bundle[@id='httpd-bundle'] -- /cib/configuration/resources/group[@id='exim-group'] -- /cib/configuration/resources/clone[@id='mysql-clone-group'] -- /cib/configuration/resources/clone[@id='promotable-clone'] -- /cib/configuration/constraints/rsc_location[@id='not-on-cluster1'] -- /cib/configuration/constraints/rsc_location[@id='loc-promotable-clone'] -- /cib/configuration/tags -- /cib/configuration/op_defaults -- /cib/status/node_state[@id='2'] -- /cib/status/node_state[@id='1'] -- /cib/status/node_state[@id='httpd-bundle-0'] -- /cib/status/node_state[@id='httpd-bundle-1'] + /cib: @validate-with=pacemaker-X, @num_updates=0, @admin_epoch=0 -- /cib: @cib-last-written, @update-origin, @update-client, @update-user, @have-quorum, @dc-uuid =#=#=#= End test: Get active shadow instance's diff (empty CIB) - Error occurred (1) =#=#=#= * Passed: crm_shadow - Get active shadow instance's diff (empty CIB) =#=#=#= Begin test: Get active shadow instance's diff (empty CIB) (XML) =#=#=#= ]]> =#=#=#= End test: Get active shadow instance's diff (empty CIB) (XML) - Error occurred (1) =#=#=#= * Passed: crm_shadow - Get active shadow instance's diff (empty CIB) (XML) =#=#=#= Begin test: Reset shadow instance =#=#=#= A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= End test: Reset shadow instance - OK (0) =#=#=#= * Passed: crm_shadow - Reset shadow instance =#=#=#= Begin test: Get active shadow instance's diff (after reset) =#=#=#= =#=#=#= End test: Get active shadow instance's diff (after reset) - OK (0) =#=#=#= * Passed: crm_shadow - Get active shadow instance's diff (after reset) Created new pacemaker configuration A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= Begin test: Reset shadow instance (XML) =#=#=#= A new shadow instance was created. 
To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= End test: Reset shadow instance (XML) - OK (0) =#=#=#= * Passed: crm_shadow - Reset shadow instance (XML) =#=#=#= Begin test: Get active shadow instance's diff (after reset) (XML) =#=#=#= =#=#=#= End test: Get active shadow instance's diff (after reset) (XML) - OK (0) =#=#=#= * Passed: crm_shadow - Get active shadow instance's diff (after reset) (XML) =#=#=#= Begin test: Reset shadow instance (no active instance) =#=#=#= A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= End test: Reset shadow instance (no active instance) - OK (0) =#=#=#= * Passed: crm_shadow - Reset shadow instance (no active instance) Created new pacemaker configuration A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= Begin test: Reset shadow instance (no active instance) (XML) =#=#=#= A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= End test: Reset shadow instance (no active instance) (XML) - OK (0) =#=#=#= * Passed: crm_shadow - Reset shadow instance (no active instance) (XML) =#=#=#= Begin test: Reset shadow instance (mismatch) =#=#=#= crm_shadow: The supplied shadow instance (cts-cli) is not the same as the active one (nonexistent_shadow). To prevent accidental destruction of the shadow file, the --force flag is required in order to proceed. =#=#=#= End test: Reset shadow instance (mismatch) - Incorrect usage (64) =#=#=#= * Passed: crm_shadow - Reset shadow instance (mismatch) =#=#=#= Begin test: Reset shadow instance (mismatch) (force) =#=#=#= A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= End test: Reset shadow instance (mismatch) (force) - OK (0) =#=#=#= * Passed: crm_shadow - Reset shadow instance (mismatch) (force) Created new pacemaker configuration A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= Begin test: Reset shadow instance (mismatch) (XML) =#=#=#= crm_shadow: The supplied shadow instance (cts-cli) is not the same as the active one (nonexistent_shadow). To prevent accidental destruction of the shadow file, the --force flag is required in order to proceed. =#=#=#= End test: Reset shadow instance (mismatch) (XML) - Incorrect usage (64) =#=#=#= * Passed: crm_shadow - Reset shadow instance (mismatch) (XML) =#=#=#= Begin test: Reset shadow instance (mismatch) (force) (XML) =#=#=#= A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= End test: Reset shadow instance (mismatch) (force) (XML) - OK (0) =#=#=#= * Passed: crm_shadow - Reset shadow instance (mismatch) (force) (XML) Created new pacemaker configuration A new shadow instance was created. 
To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= Begin test: Reset shadow instance (nonexistent CIB file) =#=#=#= crm_shadow: Could not connect to CIB: No such device or address =#=#=#= End test: Reset shadow instance (nonexistent CIB file) - No such object (105) =#=#=#= * Passed: crm_shadow - Reset shadow instance (nonexistent CIB file) =#=#=#= Begin test: Reset shadow instance (nonexistent CIB file) (XML) =#=#=#= crm_shadow: Could not connect to CIB: No such device or address =#=#=#= End test: Reset shadow instance (nonexistent CIB file) (XML) - No such object (105) =#=#=#= * Passed: crm_shadow - Reset shadow instance (nonexistent CIB file) (XML) =#=#=#= Begin test: Reset shadow instance (nonexistent CIB file) (force) =#=#=#= crm_shadow: Could not connect to CIB: No such device or address =#=#=#= End test: Reset shadow instance (nonexistent CIB file) (force) - No such object (105) =#=#=#= * Passed: crm_shadow - Reset shadow instance (nonexistent CIB file) (force) =#=#=#= Begin test: Reset shadow instance (nonexistent CIB file) (force) (XML) =#=#=#= crm_shadow: Could not connect to CIB: No such device or address =#=#=#= End test: Reset shadow instance (nonexistent CIB file) (force) (XML) - No such object (105) =#=#=#= * Passed: crm_shadow - Reset shadow instance (nonexistent CIB file) (force) (XML) =#=#=#= Begin test: Reset shadow instance (nonexistent shadow file) =#=#=#= crm_shadow: Could not access shadow instance 'cts-cli': No such file or directory =#=#=#= End test: Reset shadow instance (nonexistent shadow file) - No such object (105) =#=#=#= * Passed: crm_shadow - Reset shadow instance (nonexistent shadow file) =#=#=#= Begin test: Reset shadow instance (nonexistent shadow file) (force) =#=#=#= A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= End test: Reset shadow instance (nonexistent shadow file) (force) - OK (0) =#=#=#= * Passed: crm_shadow - Reset shadow instance (nonexistent shadow file) (force) =#=#=#= Begin test: Reset shadow instance (nonexistent shadow file) (XML) =#=#=#= crm_shadow: Could not access shadow instance 'cts-cli': No such file or directory =#=#=#= End test: Reset shadow instance (nonexistent shadow file) (XML) - No such object (105) =#=#=#= * Passed: crm_shadow - Reset shadow instance (nonexistent shadow file) (XML) =#=#=#= Begin test: Reset shadow instance (nonexistent shadow file) (force) (XML) =#=#=#= A new shadow instance was created. To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= End test: Reset shadow instance (nonexistent shadow file) (force) (XML) - OK (0) =#=#=#= * Passed: crm_shadow - Reset shadow instance (nonexistent shadow file) (force) (XML) Created new pacemaker configuration A new shadow instance was created. 
To begin using it, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= Begin test: Switch to new shadow instance =#=#=#= To switch to the named shadow instance, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= End test: Switch to new shadow instance - OK (0) =#=#=#= * Passed: crm_shadow - Switch to new shadow instance =#=#=#= Begin test: Switch to new shadow instance (XML) =#=#=#= To switch to the named shadow instance, enter the following into your shell: export CIB_shadow=cts-cli =#=#=#= End test: Switch to new shadow instance (XML) - OK (0) =#=#=#= * Passed: crm_shadow - Switch to new shadow instance (XML) =#=#=#= Begin test: Switch to nonexistent shadow instance =#=#=#= crm_shadow: Could not access shadow instance 'cts-cli': No such file or directory =#=#=#= End test: Switch to nonexistent shadow instance - No such object (105) =#=#=#= * Passed: crm_shadow - Switch to nonexistent shadow instance =#=#=#= Begin test: Switch to nonexistent shadow instance (force) =#=#=#= crm_shadow: Could not access shadow instance 'cts-cli': No such file or directory =#=#=#= End test: Switch to nonexistent shadow instance (force) - No such object (105) =#=#=#= * Passed: crm_shadow - Switch to nonexistent shadow instance (force) =#=#=#= Begin test: Switch to nonexistent shadow instance (XML) =#=#=#= crm_shadow: Could not access shadow instance 'cts-cli': No such file or directory =#=#=#= End test: Switch to nonexistent shadow instance (XML) - No such object (105) =#=#=#= * Passed: crm_shadow - Switch to nonexistent shadow instance (XML) =#=#=#= Begin test: Switch to nonexistent shadow instance (force) (XML) =#=#=#= crm_shadow: Could not access shadow instance 'cts-cli': No such file or directory =#=#=#= End test: Switch to nonexistent shadow instance (force) (XML) - No such object (105) =#=#=#= * Passed: crm_shadow - Switch to nonexistent shadow instance (force) (XML) =#=#=#= Begin test: Verify a file-specified invalid configuration (text output) =#=#=#= Errors found during check: config not valid -V may provide more details =#=#=#= End test: Verify a file-specified invalid configuration (text output) - Invalid configuration (78) =#=#=#= * Passed: crm_verify - Verify a file-specified invalid configuration (text output) =#=#=#= Begin test: Verify a file-specified invalid configuration (verbose text output) =#=#=#= unpack_config warning: Blind faith: not fencing unseen nodes Resource test2:0 is of type systemd and therefore cannot be used as a promotable clone resource Ignoring resource 'test2-clone' because configuration is invalid CIB did not pass schema validation Errors found during check: config not valid =#=#=#= End test: Verify a file-specified invalid configuration (verbose text output) - Invalid configuration (78) =#=#=#= * Passed: crm_verify - Verify a file-specified invalid configuration (verbose text output) =#=#=#= Begin test: Verify a file-specified invalid configuration (quiet text output) =#=#=#= =#=#=#= End test: Verify a file-specified invalid configuration (quiet text output) - Invalid configuration (78) =#=#=#= * Passed: crm_verify - Verify a file-specified invalid configuration (quiet text output) =#=#=#= Begin test: Verify a file-specified invalid configuration (XML output) =#=#=#= Resource test2:0 is of type systemd and therefore cannot be used as a promotable clone resource Ignoring <clone> resource 'test2-clone' because configuration is invalid CIB did not pass schema validation Errors found during check: config not valid =#=#=#= End test: 
Verify a file-specified invalid configuration (XML output) - Invalid configuration (78) =#=#=#= * Passed: crm_verify - Verify a file-specified invalid configuration (XML output) =#=#=#= Begin test: Verify a file-specified invalid configuration (verbose XML output) =#=#=#= unpack_config warning: Blind faith: not fencing unseen nodes Resource test2:0 is of type systemd and therefore cannot be used as a promotable clone resource Ignoring <clone> resource 'test2-clone' because configuration is invalid CIB did not pass schema validation Errors found during check: config not valid =#=#=#= End test: Verify a file-specified invalid configuration (verbose XML output) - Invalid configuration (78) =#=#=#= * Passed: crm_verify - Verify a file-specified invalid configuration (verbose XML output) =#=#=#= Begin test: Verify a file-specified invalid configuration (quiet XML output) =#=#=#= Resource test2:0 is of type systemd and therefore cannot be used as a promotable clone resource Ignoring <clone> resource 'test2-clone' because configuration is invalid CIB did not pass schema validation =#=#=#= End test: Verify a file-specified invalid configuration (quiet XML output) - Invalid configuration (78) =#=#=#= * Passed: crm_verify - Verify a file-specified invalid configuration (quiet XML output) =#=#=#= Begin test: Verify another file-specified invalid configuration (XML output) =#=#=#= Resource start-up disabled since no STONITH resources have been defined Either configure some or disable STONITH with the stonith-enabled option NOTE: Clusters with shared data need STONITH to ensure data integrity Node pcmk-1 is unclean but cannot be fenced Node pcmk-2 is unclean but cannot be fenced CIB did not pass schema validation Errors found during check: config not valid =#=#=#= End test: Verify another file-specified invalid configuration (XML output) - Invalid configuration (78) =#=#=#= * Passed: crm_verify - Verify another file-specified invalid configuration (XML output) =#=#=#= Begin test: Verify a file-specified valid configuration, outputting as xml =#=#=#= =#=#=#= End test: Verify a file-specified valid configuration, outputting as xml - OK (0) =#=#=#= * Passed: crm_verify - Verify a file-specified valid configuration, outputting as xml =#=#=#= Begin test: Verify a piped-in valid configuration, outputting as xml =#=#=#= =#=#=#= End test: Verify a piped-in valid configuration, outputting as xml - OK (0) =#=#=#= * Passed: cat - Verify a piped-in valid configuration, outputting as xml =#=#=#= Begin test: Verbosely verify a file-specified valid configuration, outputting as xml =#=#=#= =#=#=#= End test: Verbosely verify a file-specified valid configuration, outputting as xml - OK (0) =#=#=#= * Passed: crm_verify - Verbosely verify a file-specified valid configuration, outputting as xml =#=#=#= Begin test: Verbosely verify a piped-in valid configuration, outputting as xml =#=#=#= =#=#=#= End test: Verbosely verify a piped-in valid configuration, outputting as xml - OK (0) =#=#=#= * Passed: cat - Verbosely verify a piped-in valid configuration, outputting as xml =#=#=#= Begin test: Verify a string-supplied valid configuration, outputting as xml =#=#=#= =#=#=#= End test: Verify a string-supplied valid configuration, outputting as xml - OK (0) =#=#=#= * Passed: crm_verify - Verify a string-supplied valid configuration, outputting as xml =#=#=#= Begin test: Verbosely verify a string-supplied valid configuration, outputting as xml =#=#=#= =#=#=#= End test: Verbosely verify a string-supplied valid configuration, 
outputting as xml - OK (0) =#=#=#= * Passed: crm_verify - Verbosely verify a string-supplied valid configuration, outputting as xml diff --git a/cts/valgrind-pcmk.suppressions b/cts/valgrind-pcmk.suppressions index a05b9dbbf8..461edc250b 100644 --- a/cts/valgrind-pcmk.suppressions +++ b/cts/valgrind-pcmk.suppressions @@ -1,303 +1,295 @@ # Valgrind suppressions for Pacemaker testing { Valgrind bug Memcheck:Addr8 fun:__strspn_sse42 fun:crm_get_msec } -{ - Ignore option parsing - Memcheck:Leak - fun:realloc - fun:crm_get_option_long - fun:main -} - { dlopen internals Memcheck:Leak fun:calloc fun:_dlerror_run fun:dlopen* fun:_log_so_walk_callback fun:dl_iterate_phdr fun:qb_log_init } # Numerous leaks in bash { Bash reader_loop leaks Memcheck:Leak fun:malloc fun:xmalloc ... fun:reader_loop fun:main } { Bash set_default_locale leaks Memcheck:Leak fun:malloc fun:xmalloc fun:set_default_locale fun:main } { Bash execute_command leaks Memcheck:Leak fun:malloc fun:xmalloc obj:*/bash ... fun:execute_command_internal fun:execute_command ... } # Numerous leaks in glib { quarks - hashtable Memcheck:Leak fun:calloc fun:g_malloc0 obj:*/libglib-* fun:g_slice_alloc fun:g_hash_table_new_full fun:g_quark_from_static_string } { quarks - hashtable 2 Memcheck:Leak fun:malloc fun:g_malloc fun:g_slice_alloc fun:g_hash_table_new_full fun:g_quark_from_static_string } { quarks - hashtable 3 Memcheck:Leak fun:calloc fun:g_malloc0 fun:g_hash_table_new_full fun:g_quark_from_static_string } { quarks - hashtable 4 Memcheck:Leak fun:malloc fun:realloc fun:g_realloc fun:g_quark_from_static_string } { glib mainloop internals - default Memcheck:Leak fun:calloc fun:g_malloc0 fun:g_main_context_new fun:g_main_context_default fun:g_main_loop_new fun:main } { glib mainloop internals - default Memcheck:Leak fun:malloc fun:g_malloc fun:g_slice_alloc fun:* fun:g_main_context_new fun:g_main_context_default fun:g_main_loop_new fun:main } { glib mainloop internals - default Memcheck:Leak fun:calloc fun:g_malloc0 obj:*/libglib-2.* fun:g_slice_alloc fun:g_ptr_array_sized_new fun:g_main_context_new fun:g_main_context_default fun:g_main_loop_new } { glib mainloop internals - run Memcheck:Leak fun:calloc fun:g_malloc0 fun:g_thread_self fun:g_main_loop_run } { glib mainloop internals - run Memcheck:Leak fun:malloc fun:g_malloc obj:*/libglib-2.* fun:g_main_loop_run } { glib mainloop internals - run Memcheck:Leak fun:malloc fun:realloc fun:g_realloc obj:*/libglib-2.* fun:g_ptr_array_add fun:g_main_context_check obj:*/libglib-2.* fun:g_main_loop_run } { glib mainloop internals - run Memcheck:Leak fun:malloc fun:g_malloc fun:g_slice_alloc fun:g_slice_alloc0 obj:*/libglib-2.* fun:g_main_context_dispatch obj:*/libglib-2.* fun:g_main_loop_run } { glib mainloop internals - run Memcheck:Leak fun:malloc fun:realloc fun:g_realloc obj:*/libglib-2.* fun:g_array_set_size fun:g_static_private_set obj:*/libglib-2.* fun:g_main_context_dispatch obj:*/libglib-2.* fun:g_main_loop_run } { glib mainloop internals - run Memcheck:Leak fun:malloc fun:g_malloc fun:g_slice_alloc fun:g_array_sized_new fun:g_static_private_set obj:*/libglib-2.* fun:g_main_context_dispatch obj:*/libglib-2.* fun:g_main_loop_run } { glib types Memcheck:Leak fun:malloc fun:realloc fun:g_realloc obj:*/libgobject-* fun:g_type_register_static } { glib types 2 Memcheck:Leak fun:realloc fun:g_realloc obj:*/libgobject-* fun:g_type_register_static fun:g_param_type_register_static } { glib types 3 Memcheck:Leak fun:calloc fun:g_malloc0 obj:*/libgobject-* fun:g_type_register_fundamental } { glib 
types 4 Memcheck:Leak fun:calloc fun:g_malloc0 obj:*/libgobject-* obj:*/libgobject-* fun:g_type_register_fundamental } { glib - the return Memcheck:Leak fun:calloc fun:g_malloc0 obj:*/libgobject-* obj:*/libgobject-* fun:_dl_init } { glib - seriously? Memcheck:Leak fun:calloc fun:g_malloc0 obj:*/libgobject-* obj:*/libgobject-* obj:*/libgobject-* fun:_dl_init } { glib - this is not funny anymore Memcheck:Leak fun:malloc fun:realloc fun:g_realloc obj:*/libgobject-* fun:g_type_register_fundamental } { glib - why do you hate me? Memcheck:Leak fun:calloc fun:g_malloc0 obj:*/libgobject-* obj:*/libgobject-* fun:call_init.part.0 fun:_dl_init } { dear glib - you suck at memory management Memcheck:Leak fun:calloc fun:g_malloc0 obj:*/libgobject-* obj:*/libgobject-* obj:*/libgobject-* fun:call_init.part.0 fun:_dl_init } diff --git a/doc/sphinx/Pacemaker_Explained/alerts.rst b/doc/sphinx/Pacemaker_Explained/alerts.rst index cba271c8f7..f4cad72cb7 100644 --- a/doc/sphinx/Pacemaker_Explained/alerts.rst +++ b/doc/sphinx/Pacemaker_Explained/alerts.rst @@ -1,259 +1,277 @@
.. _alerts:

.. index::
   single: alert
   single: resource; alert
   single: node; alert
   single: fencing; alert
   pair: XML element; alert
   pair: XML element; alerts

Alerts
------

*Alerts* may be configured to take some external action when a cluster event occurs (node failure, resource starting or stopping, etc.).

.. index::
   pair: alert; agent

Alert Agents
############

As with resource agents, the cluster calls an external program (an *alert agent*) to handle alerts. The cluster passes information about the event to the agent via environment variables. Agents can do anything desired with this information (send an e-mail, log to a file, update a monitoring system, etc.).

.. topic:: Simple alert configuration

   .. code-block:: xml

      <configuration>
         <alerts>
            <alert id="my-alert" path="/path/to/my-script.sh" />
         </alerts>
      </configuration>

In the example above, the cluster will call ``my-script.sh`` for each event. Multiple alert agents may be configured; the cluster will call all of them for each event. Alert agents will be called only on cluster nodes. They will be called for events involving Pacemaker Remote nodes, but they will never be called *on* those nodes. For more information about sample alert agents provided by Pacemaker and about developing custom alert agents, see the *Pacemaker Administration* document.

.. index::
   single: alert; recipient
   pair: XML element; recipient

Alert Recipients
################

Usually, alerts are directed towards a recipient. Thus, each alert may be additionally configured with one or more recipients. The cluster will call the agent separately for each recipient.

.. topic:: Alert configuration with recipient

   .. code-block:: xml

      <configuration>
         <alerts>
            <alert id="my-alert" path="/path/to/my-script.sh">
               <recipient id="my-alert-recipient" value="some-address"/>
            </alert>
         </alerts>
      </configuration>

In the above example, the cluster will call ``my-script.sh`` for each event, passing the recipient ``some-address`` as an environment variable. The recipient may be anything the alert agent can recognize -- an IP address, an e-mail address, a file name, whatever the particular agent supports.

.. index::
   single: alert; meta-attributes
   single: meta-attribute; alert meta-attributes

Alert Meta-Attributes
#####################

As with resources, meta-attributes can be configured for alerts to change whether and how Pacemaker calls them.

.. table:: **Meta-Attributes of an Alert**
   :class: longtable
   :widths: 1 1 3

   +------------------+---------------+-----------------------------------------------------+
   | Meta-Attribute   | Default       | Description                                         |
   +==================+===============+=====================================================+
   | enabled          | true          | .. index::                                          |
   |                  |               |    single: alert; meta-attribute, enabled           |
   |                  |               |    single: meta-attribute; enabled (alert)          |
   |                  |               |    single: enabled; alert meta-attribute            |
   |                  |               |                                                     |
   |                  |               | If false for an alert, the alert will not be used.  |
   |                  |               | If true for an alert and false for a particular     |
   |                  |               | recipient of that alert, that recipient will not be |
   |                  |               | used. *(since 2.1.6)*                               |
   +------------------+---------------+-----------------------------------------------------+
   | timestamp-format | %H:%M:%S.%06N | .. index::                                          |
   |                  |               |    single: alert; meta-attribute, timestamp-format  |
   |                  |               |    single: meta-attribute; timestamp-format (alert) |
   |                  |               |    single: timestamp-format; alert meta-attribute   |
   |                  |               |                                                     |
   |                  |               | Format the cluster will use when sending the        |
   |                  |               | event's timestamp to the agent. This is a string as |
   |                  |               | used with the ``date(1)`` command.                  |
   +------------------+---------------+-----------------------------------------------------+
   | timeout          | 30s           | .. index::                                          |
   |                  |               |    single: alert; meta-attribute, timeout           |
   |                  |               |    single: meta-attribute; timeout (alert)          |
   |                  |               |    single: timeout; alert meta-attribute            |
   |                  |               |                                                     |
   |                  |               | If the alert agent does not complete within this    |
   |                  |               | amount of time, it will be terminated.              |
   +------------------+---------------+-----------------------------------------------------+

Meta-attributes can be configured per alert and/or per recipient.

.. topic:: Alert configuration with meta-attributes

   .. code-block:: xml

      <configuration>
         <alerts>
            <alert id="my-alert" path="/path/to/my-script.sh">
               <meta_attributes id="my-alert-attributes">
                  <nvpair id="my-alert-attributes-timeout" name="timeout" value="15s"/>
               </meta_attributes>
               <recipient id="my-alert-recipient1" value="someuser@example.com">
                  <meta_attributes id="my-alert-recipient1-attributes">
                     <nvpair id="my-alert-recipient1-timestamp-format" name="timestamp-format" value="%D %H:%M"/>
                  </meta_attributes>
               </recipient>
               <recipient id="my-alert-recipient2" value="otheruser@example.com">
                  <meta_attributes id="my-alert-recipient2-attributes">
                     <nvpair id="my-alert-recipient2-timestamp-format" name="timestamp-format" value="%c"/>
                  </meta_attributes>
               </recipient>
            </alert>
         </alerts>
      </configuration>

In the above example, ``my-script.sh`` will be called twice for each event, with each call using a 15-second timeout. One call will be passed the recipient ``someuser@example.com`` and a timestamp in the format ``%D %H:%M``, while the other call will be passed the recipient ``otheruser@example.com`` and a timestamp in the format ``%c``.

.. index::
   single: alert; instance attributes
   single: instance attribute; alert instance attributes

Alert Instance Attributes
#########################

As with resource agents, agent-specific configuration values may be configured as instance attributes. These will be passed to the agent as additional environment variables. The number, names, and allowed values of these instance attributes are completely up to the particular agent.

.. topic:: Alert configuration with instance attributes

   .. code-block:: xml

      <configuration>
         <alerts>
            <alert id="my-alert" path="/path/to/my-script.sh">
               <meta_attributes id="my-alert-attributes">
                  <nvpair id="my-alert-attributes-timeout" name="timeout" value="15s"/>
               </meta_attributes>
               <instance_attributes id="my-alert-options">
                  <nvpair id="my-alert-options-debug" name="debug" value="false"/>
               </instance_attributes>
               <recipient id="my-alert-recipient1" value="someuser@example.com"/>
            </alert>
         </alerts>
      </configuration>

.. index::
   single: alert; filters
   pair: XML element; select
   pair: XML element; select_nodes
   pair: XML element; select_fencing
   pair: XML element; select_resources
   pair: XML element; select_attributes
   pair: XML element; attribute

Alert Filters
#############

By default, an alert agent will be called for node events, fencing events, and resource events. An agent may choose to ignore certain types of events, but there is still the overhead of calling it for those events. To eliminate that overhead, you may select which types of events the agent should receive. - + +Alert filters are configured within a ``select`` element inside an ``alert`` +element. + +.. list-table:: **Possible alert filters** + :class: longtable + :widths: 1 3 + :header-rows: 1 + + * - Name + - Events alerted + * - select_nodes + - A node joins or leaves the cluster (whether at the cluster layer for + cluster nodes, or via a remote connection for Pacemaker Remote nodes). + * - select_fencing + - Fencing or unfencing of a node completes (whether successfully or not). + * - select_resources + - A resource action other than meta-data completes (whether successfully + or not). + * - select_attributes + - A transient attribute value update is sent to the CIB. +
.. topic:: Alert configuration to receive only node events and fencing events

   .. code-block:: xml

      <configuration>
         <alerts>
            <alert id="my-alert" path="/path/to/my-script.sh">
               <select>
                  <select_nodes />
                  <select_fencing />
               </select>
               <recipient id="my-alert-recipient" value="some-address"/>
            </alert>
         </alerts>
      </configuration>

-The possible options within ``select`` are:

.. note::

   Node attribute alerts are currently considered experimental. Alerts may be limited to attributes set via ``attrd_updater``, and agents may be called multiple times with the same attribute value.

diff --git a/include/crm/Makefile.am b/include/crm/Makefile.am index 32719bd1bc..1821579369 100644 --- a/include/crm/Makefile.am +++ b/include/crm/Makefile.am @@ -1,36 +1,35 @@ # # Copyright 2004-2024 the Pacemaker project contributors # # The version control history for this file may have further details. # # This source code is licensed under the GNU General Public License version 2 # or later (GPLv2+) WITHOUT ANY WARRANTY. # MAINTAINERCLEANFILES = Makefile.in headerdir=$(pkgincludedir)/crm header_HEADERS = cib.h \ cib_compat.h \ cluster.h \ compatibility.h \ crm.h \ crm_compat.h \ lrmd.h \ lrmd_compat.h \ lrmd_events.h \ msg_xml.h \ msg_xml_compat.h \ services.h \ services_compat.h \ - stonith-ng.h \ - stonith-ng_compat.h + stonith-ng.h noinst_HEADERS = $(wildcard *_internal.h) SUBDIRS = common \ pengine \ cib \ fencing \ cluster diff --git a/include/crm/stonith-ng.h b/include/crm/stonith-ng.h index 9bff891381..4774d9a0f5 100644 --- a/include/crm/stonith-ng.h +++ b/include/crm/stonith-ng.h @@ -1,704 +1,716 @@ /* * Copyright 2004-2024 the Pacemaker project contributors * * The version control history for this file may have further details. * * This source code is licensed under the GNU Lesser General Public License * version 2.1 or later (LGPLv2.1+) WITHOUT ANY WARRANTY. */ #ifndef PCMK__CRM_STONITH_NG__H # define PCMK__CRM_STONITH_NG__H #ifdef __cplusplus extern "C" { #endif /** * \file * \brief Fencing aka. STONITH * \ingroup fencing */ /* IMPORTANT: DLM source code includes this file directly, without having access * to other Pacemaker headers on its include path, so this file should *not* * include any other Pacemaker headers. (DLM might be updated to avoid the * issue, but we should still follow this guideline for a long time after.) */ # include <dlfcn.h> # include <errno.h> # include <stdbool.h> // bool # include <stdint.h> // uint32_t # include <time.h> // time_t /* *INDENT-OFF* */ enum stonith_state { stonith_connected_command, stonith_connected_query, stonith_disconnected, }; enum stonith_call_options { st_opt_none = 0x00000000, st_opt_verbose = 0x00000001, st_opt_allow_suicide = 0x00000002, st_opt_manual_ack = 0x00000008, st_opt_discard_reply = 0x00000010, /* st_opt_all_replies = 0x00000020, */ st_opt_topology = 0x00000040, st_opt_scope_local = 0x00000100, st_opt_cs_nodeid = 0x00000200, st_opt_sync_call = 0x00001000, /*! Allow the timeout period for a callback to be adjusted * based on the time the server reports the operation will take. */ st_opt_timeout_updates = 0x00002000, /*! Only report back if operation is a success in callback */ st_opt_report_only_success = 0x00004000, /* used wherever appropriate - e.g. cleanup of history */ st_opt_cleanup = 0x000080000, /* used wherever appropriate - e.g. send out a history query to all nodes */ st_opt_broadcast = 0x000100000, }; /*! Order matters here, do not change values */ enum op_state { st_query, st_exec, st_done, st_duplicate, st_failed, }; // Supported fence agent interface standards enum stonith_namespace { st_namespace_invalid, st_namespace_any, st_namespace_internal, // Implemented internally by Pacemaker /* Neither of these projects are active any longer, but the fence agent * interfaces they created are still in use and supported by Pacemaker.
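 *
 * (Illustrative note, not from the original header: agents from the
 * fence-agents project, such as fence_ipmilan, implement the RHCS-style
 * interface and are found under st_namespace_rhcs, while legacy Linux-HA
 * "stonith" plugins fall under st_namespace_lha.)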
*/ st_namespace_rhcs, // Red Hat Cluster Suite compatible st_namespace_lha, // Linux-HA compatible }; enum stonith_namespace stonith_text2namespace(const char *namespace_s); const char *stonith_namespace2text(enum stonith_namespace st_namespace); enum stonith_namespace stonith_get_namespace(const char *agent, const char *namespace_s); typedef struct stonith_key_value_s { char *key; char *value; struct stonith_key_value_s *next; } stonith_key_value_t; typedef struct stonith_history_s { char *target; char *action; char *origin; char *delegate; char *client; int state; time_t completed; struct stonith_history_s *next; long completed_nsec; char *exit_reason; } stonith_history_t; typedef struct stonith_s stonith_t; typedef struct stonith_event_s { char *id; char *type; //!< \deprecated Will be removed in future release char *message; //!< \deprecated Will be removed in future release char *operation; int result; char *origin; char *target; char *action; char *executioner; char *device; /*! The name of the client that initiated the action. */ char *client_origin; //! \internal This field should be treated as internal to Pacemaker void *opaque; } stonith_event_t; typedef struct stonith_callback_data_s { int rc; int call_id; void *userdata; //! \internal This field should be treated as internal to Pacemaker void *opaque; } stonith_callback_data_t; typedef struct stonith_api_operations_s { /*! * \brief Destroy a fencer connection * * \param[in,out] st Fencer connection to destroy */ int (*free) (stonith_t *st); /*! * \brief Connect to the local fencer * * \param[in,out] st Fencer connection to connect * \param[in] name Client name to use * \param[out] stonith_fd If NULL, use a main loop, otherwise * store IPC file descriptor here * * \return Legacy Pacemaker return code */ int (*connect) (stonith_t *st, const char *name, int *stonith_fd); /*! * \brief Disconnect from the local stonith daemon. * * \param[in,out] st Fencer connection to disconnect * * \return Legacy Pacemaker return code */ int (*disconnect)(stonith_t *st); /*! * \brief Unregister a fence device with the local fencer * * \param[in,out] st Fencer connection to disconnect * \param[in] options Group of enum stonith_call_options * \param[in] name ID of fence device to unregister * * \return pcmk_ok (if synchronous) or positive call ID (if asynchronous) * on success, otherwise a negative legacy Pacemaker return code */ int (*remove_device)(stonith_t *st, int options, const char *name); /*! * \brief Register a fence device with the local fencer * * \param[in,out] st Fencer connection to use * \param[in] options Group of enum stonith_call_options * \param[in] id ID of fence device to register * \param[in] namespace_s Type of fence agent to search for ("redhat" * or "stonith-ng" for RHCS-style, "internal" * for Pacemaker-internal devices, "heartbeat" * for LHA-style, or "any" or NULL for any) * \param[in] agent Name of fence agent for device * \param[in] params Fence agent parameters for device * * \return pcmk_ok (if synchronous) or positive call ID (if asynchronous) * on success, otherwise a negative legacy Pacemaker return code */ int (*register_device)(stonith_t *st, int options, const char *id, const char *namespace_s, const char *agent, const stonith_key_value_t *params); /*! 
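 * An illustrative sketch (not part of the original header) of registering a
 * fence device via the register_device() method above; the device name,
 * agent, and parameter values here are assumptions:
 *
 * \code
 * stonith_key_value_t *params = NULL;
 *
 * params = stonith_key_value_add(params, "pcmk_host_list", "node1 node2");
 * st->cmds->register_device(st, st_opt_sync_call, "my-fence-device", "any",
 *                           "fence_dummy", params);
 * stonith_key_value_freeall(params, 1, 1);
 * \endcode
 *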
* \brief Unregister a fencing level for specified node with local fencer * * \param[in,out] st Fencer connection to use * \param[in] options Group of enum stonith_call_options * \param[in] node Target node to unregister level for * \param[in] level Topology level number to unregister * * \return pcmk_ok (if synchronous) or positive call ID (if asynchronous) * on success, otherwise a negative legacy Pacemaker return code */ int (*remove_level)(stonith_t *st, int options, const char *node, int level); /*! * \brief Register a fencing level for specified node with local fencer * * \param[in,out] st Fencer connection to use * \param[in] options Group of enum stonith_call_options * \param[in] node Target node to register level for * \param[in] level Topology level number to register * \param[in] device_list Devices to register in level * * \return pcmk_ok (if synchronous) or positive call ID (if asynchronous) * on success, otherwise a negative legacy Pacemaker return code */ int (*register_level)(stonith_t *st, int options, const char *node, int level, const stonith_key_value_t *device_list); /*! * \brief Retrieve a fence agent's metadata * * \param[in,out] stonith Fencer connection * \param[in] call_options Group of enum stonith_call_options * (currently ignored) * \param[in] agent Fence agent to query * \param[in] namespace_s Type of fence agent to search for ("redhat" * or "stonith-ng" for RHCS-style, "internal" * for Pacemaker-internal devices, "heartbeat" * for LHA-style, or "any" or NULL for any) * \param[out] output Where to store metadata * \param[in] timeout_sec Error if not complete within this time * * \return Legacy Pacemaker return code * \note The caller is responsible for freeing *output using free(). */ int (*metadata)(stonith_t *stonith, int call_options, const char *agent, const char *namespace_s, char **output, int timeout_sec); /*! * \brief Retrieve a list of installed fence agents * * \param[in,out] stonith Fencer connection to use * \param[in] call_options Group of enum stonith_call_options * (currently ignored) * \param[in] namespace_s Type of fence agents to list ("redhat" * or "stonith-ng" for RHCS-style, "internal" for * Pacemaker-internal devices, "heartbeat" for * LHA-style, or "any" or NULL for all) * \param[out] devices Where to store agent list * \param[in] timeout Error if unable to complete within this * (currently ignored) * * \return Number of items in list on success, or negative errno otherwise * \note The caller is responsible for freeing the returned list with * stonith_key_value_freeall(). */ int (*list_agents)(stonith_t *stonith, int call_options, const char *namespace_s, stonith_key_value_t **devices, int timeout); /*! * \brief Get the output of a fence device's list action * * \param[in,out] stonith Fencer connection to use * \param[in] call_options Group of enum stonith_call_options * \param[in] id Fence device ID to run list for * \param[out] list_info Where to store list output * \param[in] timeout Error if unable to complete within this * * \return pcmk_ok (if synchronous) or positive call ID (if asynchronous) * on success, otherwise a negative legacy Pacemaker return code */ int (*list)(stonith_t *stonith, int call_options, const char *id, char **list_info, int timeout); /*! 
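 * An illustrative sketch (not part of the original header) of fetching agent
 * metadata via the metadata() method documented above; the agent name is an
 * assumption:
 *
 * \code
 * char *xml = NULL;
 *
 * if (st->cmds->metadata(st, st_opt_sync_call, "fence_dummy", NULL,
 *                        &xml, 30) == pcmk_ok) {
 *     // xml now holds the agent metadata; the caller must free() it
 *     free(xml);
 * }
 * \endcode
 *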
* \brief Check whether a fence device is reachable by monitor action * * \param[in,out] stonith Fencer connection to use * \param[in] call_options Group of enum stonith_call_options * \param[in] id Fence device ID to run monitor for * \param[in] timeout Error if unable to complete within this * * \return pcmk_ok (if synchronous) or positive call ID (if asynchronous) * on success, otherwise a negative legacy Pacemaker return code */ int (*monitor)(stonith_t *stonith, int call_options, const char *id, int timeout); /*! * \brief Check whether a fence device target is reachable by status action * * \param[in,out] stonith Fencer connection to use * \param[in] call_options Group of enum stonith_call_options * \param[in] id Fence device ID to run status for * \param[in] port Fence target to run status for * \param[in] timeout Error if unable to complete within this * * \return pcmk_ok (if synchronous) or positive call ID (if asynchronous) * on success, otherwise a negative legacy Pacemaker return code */ int (*status)(stonith_t *stonith, int call_options, const char *id, const char *port, int timeout); /*! * \brief List registered fence devices * * \param[in,out] stonith Fencer connection to use * \param[in] call_options Group of enum stonith_call_options * \param[in] target Fence target to run status for * \param[out] devices Where to store list of fence devices * \param[in] timeout Error if unable to complete within this * * \note If node is provided, only devices that can fence the node id * will be returned. * * \return Number of items in list on success, or negative errno otherwise */ int (*query)(stonith_t *stonith, int call_options, const char *target, stonith_key_value_t **devices, int timeout); /*! * \brief Request that a target get fenced * * \param[in,out] stonith Fencer connection to use * \param[in] call_options Group of enum stonith_call_options * \param[in] node Fence target * \param[in] action "on", "off", or "reboot" * \param[in] timeout Default per-device timeout to use with * each executed device * \param[in] tolerance Accept result of identical fence action * completed within this time * * \return pcmk_ok (if synchronous) or positive call ID (if asynchronous) * on success, otherwise a negative legacy Pacemaker return code */ int (*fence)(stonith_t *stonith, int call_options, const char *node, const char *action, int timeout, int tolerance); /*! * \brief Manually confirm that a node has been fenced * * \param[in,out] stonith Fencer connection to use * \param[in] call_options Group of enum stonith_call_options * \param[in] target Fence target * * \return pcmk_ok (if synchronous) or positive call ID (if asynchronous) * on success, otherwise a negative legacy Pacemaker return code */ int (*confirm)(stonith_t *stonith, int call_options, const char *target); /*! * \brief List fencing actions that have occurred for a target * * \param[in,out] stonith Fencer connection to use * \param[in] call_options Group of enum stonith_call_options * \param[in] node Fence target * \param[out] history Where to store list of fencing actions * \param[in] timeout Error if unable to complete within this * * \return Legacy Pacemaker return code */ int (*history)(stonith_t *stonith, int call_options, const char *node, stonith_history_t **history, int timeout); /*! 
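 * An illustrative sketch (not part of the original header) of requesting that
 * a node be rebooted via the fence() method above; the node name and timeout
 * values are assumptions:
 *
 * \code
 * // With st_opt_sync_call, pcmk_ok indicates success and a negative value
 * // is a legacy Pacemaker return code describing the failure
 * int rc = st->cmds->fence(st, st_opt_sync_call, "node1", "reboot", 120, 0);
 * \endcode
 *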
* \brief Register a callback for fence notifications * * \param[in,out] stonith Fencer connection to use * \param[in] event Event to register for * \param[in] callback Callback to register * * \return Legacy Pacemaker return code */ int (*register_notification)(stonith_t *stonith, const char *event, void (*callback)(stonith_t *st, stonith_event_t *e)); /*! * \brief Unregister callbacks for fence notifications * * \param[in,out] stonith Fencer connection to use * \param[in] event Event to unregister callbacks for (NULL for all) * * \return Legacy Pacemaker return code */ int (*remove_notification)(stonith_t *stonith, const char *event); /*! * \brief Register a callback for an asynchronous fencing result * * \param[in,out] stonith Fencer connection to use * \param[in] call_id Call ID to register callback for * \param[in] timeout Error if result not received in this time * \param[in] options Group of enum stonith_call_options * (respects \c st_opt_timeout_updates and * \c st_opt_report_only_success) * \param[in,out] user_data Pointer to pass to callback * \param[in] callback_name Unique identifier for callback * \param[in] callback Callback to register (may be called * immediately if \p call_id indicates error) * * \return \c TRUE on success, \c FALSE if call_id indicates error, * or -EINVAL if \p stonith is not valid */ int (*register_callback)(stonith_t *stonith, int call_id, int timeout, int options, void *user_data, const char *callback_name, void (*callback)(stonith_t *st, stonith_callback_data_t *data)); /*! * \brief Unregister callbacks for asynchronous fencing results * * \param[in,out] stonith Fencer connection to use * \param[in] call_id If \p all_callbacks is false, call ID * to unregister callback for * \param[in] all_callbacks If true, unregister all callbacks * * \return pcmk_ok */ int (*remove_callback)(stonith_t *stonith, int call_id, bool all_callbacks); /*! * \brief Unregister fencing level for specified node, pattern or attribute * * \param[in,out] st Fencer connection to use * \param[in] options Group of enum stonith_call_options * \param[in] node If not NULL, unregister level targeting this node * \param[in] pattern If not NULL, unregister level targeting nodes * whose names match this regular expression * \param[in] attr If this and \p value are not NULL, unregister * level targeting nodes with this node attribute * set to \p value * \param[in] value If this and \p attr are not NULL, unregister * level targeting nodes with node attribute \p attr * set to this * \param[in] level Topology level number to remove * * \return pcmk_ok (if synchronous) or positive call ID (if asynchronous) * on success, otherwise a negative legacy Pacemaker return code * \note The caller should set only one of \p node, \p pattern, or \p attr * and \p value. */ int (*remove_level_full)(stonith_t *st, int options, const char *node, const char *pattern, const char *attr, const char *value, int level); /*! 
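 * \par Example
 * A sketch of an asynchronous fencing request paired with the
 * register_callback() method above (\c st is assumed to be an established
 * fencer connection, and \c my_result_cb a function with the callback
 * signature shown above):
 * \code
 * int call_id = st->cmds->fence(st, st_opt_none, "node1", "reboot", 120, 0);
 *
 * if (call_id > 0) {
 *     st->cmds->register_callback(st, call_id, 120, st_opt_timeout_updates,
 *                                 NULL, "my_result_cb", my_result_cb);
 * }
 * \endcode
 */

/*!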
* \brief Register fencing level for specified node, pattern or attribute * * \param[in,out] st Fencer connection to use * \param[in] options Group of enum stonith_call_options * \param[in] node If not NULL, register level targeting this * node by name * \param[in] pattern If not NULL, register level targeting nodes * whose names match this regular expression * \param[in] attr If this and \p value are not NULL, register * level targeting nodes with this node * attribute set to \p value * \param[in] value If this and \p attr are not NULL, register * level targeting nodes with node attribute * \p attr set to this * \param[in] level Topology level number to register * \param[in] device_list Devices to use in level * * \return pcmk_ok (if synchronous) or positive call ID (if asynchronous) * on success, otherwise a negative legacy Pacemaker return code * * \note The caller should set only one of \p node, \p pattern, or \p attr * and \p value. */ int (*register_level_full)(stonith_t *st, int options, const char *node, const char *pattern, const char *attr, const char *value, int level, const stonith_key_value_t *device_list); /*! * \brief Validate an arbitrary stonith device configuration * * \param[in,out] st Fencer connection to use * \param[in] call_options Group of enum stonith_call_options * \param[in] rsc_id ID used to replace CIB secrets in \p params * \param[in] namespace_s Type of fence agent to validate ("redhat" * or "stonith-ng" for RHCS-style, "internal" * for Pacemaker-internal devices, "heartbeat" * for LHA-style, or "any" or NULL for any) * \param[in] agent Fence agent to validate * \param[in] params Configuration parameters to pass to agent * \param[in] timeout Fail if no response within this many seconds * \param[out] output If non-NULL, where to store any agent output * \param[out] error_output If non-NULL, where to store agent error output * * \return pcmk_ok if validation succeeds, -errno otherwise * \note If pcmk_ok is returned, the caller is responsible for freeing * the output (if requested) with free(). */ int (*validate)(stonith_t *st, int call_options, const char *rsc_id, const char *namespace_s, const char *agent, const stonith_key_value_t *params, int timeout, char **output, char **error_output); /*!
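 * \par Example
 * A sketch registering topology level 1 for one node with two devices via
 * the register_level_full() method above (node and device IDs are
 * illustrative):
 * \code
 * stonith_key_value_t *devices = NULL;
 *
 * devices = stonith_key_value_add(devices, NULL, "fence_ipmi_node1");
 * devices = stonith_key_value_add(devices, NULL, "fence_psu_node1");
 * st->cmds->register_level_full(st, st_opt_sync_call, "node1", NULL, NULL,
 *                               NULL, 1, devices);
 * stonith_key_value_freeall(devices, 1, 1);
 * \endcode
 */

/*!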
* \brief Request delayed fencing of a target * * \param[in,out] stonith Fencer connection to use * \param[in] call_options Group of enum stonith_call_options * \param[in] node Fence target * \param[in] action "on", "off", or "reboot" * \param[in] timeout Default per-device timeout to use with * each executed device * \param[in] tolerance Accept result of identical fence action * completed within this time * \param[in] delay Execute fencing after this delay (-1 * disables any delay from pcmk_delay_base * and pcmk_delay_max) * * \return pcmk_ok (if synchronous) or positive call ID (if asynchronous) * on success, otherwise a negative legacy Pacemaker return code */ int (*fence_with_delay)(stonith_t *stonith, int call_options, const char *node, const char *action, int timeout, int tolerance, int delay); } stonith_api_operations_t; struct stonith_s { enum stonith_state state; int call_id; int call_timeout; //!< \deprecated Unused void *st_private; stonith_api_operations_t *cmds; }; /* *INDENT-ON* */ /* Core functions */ stonith_t *stonith_api_new(void); void stonith_api_delete(stonith_t * st); void stonith_dump_pending_callbacks(stonith_t * st); bool stonith_dispatch(stonith_t * st); stonith_key_value_t *stonith_key_value_add(stonith_key_value_t * kvp, const char *key, const char *value); void stonith_key_value_freeall(stonith_key_value_t * kvp, int keys, int values); void stonith_history_free(stonith_history_t *history); // Convenience functions int stonith_api_connect_retry(stonith_t *st, const char *name, int max_attempts); const char *stonith_op_state_str(enum op_state state); /* Basic helpers that allow nodes to be fenced and the history to be * queried without a mainloop or the caller understanding the full API * * At least one of nodeid and uname is required * * NOTE: DLM uses both of these */ int stonith_api_kick(uint32_t nodeid, const char *uname, int timeout, bool off); time_t stonith_api_time(uint32_t nodeid, const char *uname, bool in_progress); /* * Helpers for using the above functions without install-time dependencies * * Usage: * #include <crm/stonith-ng.h> * * To turn a node off by corosync nodeid: * stonith_api_kick_helper(nodeid, 120, 1); * * To check the last fence date/time (also by nodeid): * last = stonith_api_time_helper(nodeid, 0); * * To check if fencing is in progress: * if(stonith_api_time_helper(nodeid, 1) > 0) { ... } * * e.g.
#include <stdio.h> #include <time.h> #include <crm/stonith-ng.h> int main(int argc, char ** argv) { int rc = 0; int nodeid = 102; time_t when = 0; when = stonith_api_time_helper(nodeid, 0); printf("%d last fenced at %s\n", nodeid, ctime(&when)); rc = stonith_api_kick_helper(nodeid, 120, 1); printf("%d fence result: %d\n", nodeid, rc); when = stonith_api_time_helper(nodeid, 0); printf("%d last fenced at %s\n", nodeid, ctime(&when)); return 0; } */ # define STONITH_LIBRARY "libstonithd.so.26" typedef int (*st_api_kick_fn) (int nodeid, const char *uname, int timeout, bool off); typedef time_t (*st_api_time_fn) (int nodeid, const char *uname, bool in_progress); static inline int stonith_api_kick_helper(uint32_t nodeid, int timeout, bool off) { static void *st_library = NULL; static st_api_kick_fn st_kick_fn; if (st_library == NULL) { st_library = dlopen(STONITH_LIBRARY, RTLD_LAZY); } if (st_library && st_kick_fn == NULL) { st_kick_fn = (st_api_kick_fn) dlsym(st_library, "stonith_api_kick"); } if (st_kick_fn == NULL) { #ifdef ELIBACC return -ELIBACC; #else return -ENOSYS; #endif } return (*st_kick_fn) (nodeid, NULL, timeout, off); } static inline time_t stonith_api_time_helper(uint32_t nodeid, bool in_progress) { static void *st_library = NULL; static st_api_time_fn st_time_fn; if (st_library == NULL) { st_library = dlopen(STONITH_LIBRARY, RTLD_LAZY); } if (st_library && st_time_fn == NULL) { st_time_fn = (st_api_time_fn) dlsym(st_library, "stonith_api_time"); } if (st_time_fn == NULL) { return 0; } return (*st_time_fn) (nodeid, NULL, in_progress); } /** * Does the given agent describe a stonith resource that can exist? * * \param[in] agent Name of the agent to check for * \param[in] timeout Timeout to use when querying. If 0 is given, * use a default of 120. * * \return true if an agent named \p agent exists, false otherwise */ bool stonith_agent_exists(const char *agent, int timeout); /*! * \brief Turn fence action into a more readable string * * \param[in] action Fence action */ const char *stonith_action_str(const char *action); #if !defined(PCMK_ALLOW_DEPRECATED) || (PCMK_ALLOW_DEPRECATED == 1) /* Normally we'd put this section in a separate file (crm/fencing/compat.h), but * we can't do that for the reason noted at the top of this file. That does mean * we have to duplicate these declarations where they're implemented. */ +//! \deprecated Do not use +#define T_STONITH_NOTIFY_DISCONNECT "st_notify_disconnect" + +//! \deprecated Do not use +#define T_STONITH_NOTIFY_FENCE "st_notify_fence" + +//! \deprecated Do not use +#define T_STONITH_NOTIFY_HISTORY "st_notify_history" + +//! \deprecated Do not use +#define T_STONITH_NOTIFY_HISTORY_SYNCED "st_notify_history_synced" + //! \deprecated Use stonith_get_namespace() instead const char *get_stonith_provider(const char *agent, const char *provider); #endif #ifdef __cplusplus } #endif #endif diff --git a/include/crm/stonith-ng_compat.h b/include/crm/stonith-ng_compat.h deleted file mode 100644 index 1ff18640a4..0000000000 --- a/include/crm/stonith-ng_compat.h +++ /dev/null @@ -1,42 +0,0 @@ -/* - * Copyright 2004-2024 the Pacemaker project contributors - * - * The version control history for this file may have further details. - * - * This source code is licensed under the GNU Lesser General Public License - * version 2.1 or later (LGPLv2.1+) WITHOUT ANY WARRANTY. - */ - -#ifndef PCMK__CRM_STONITH_NG_COMPAT__H -#define PCMK__CRM_STONITH_NG_COMPAT__H - -#ifdef __cplusplus -extern "C" { -#endif - -/** - * \file - * \brief Deprecated fencer utilities - * \ingroup core - * \deprecated Do not include this header directly.
The utilities in this - * header, and the header itself, will be removed in a future - * release. - */ - -//! \deprecated Do not use -#define T_STONITH_NOTIFY_DISCONNECT "st_notify_disconnect" - -//! \deprecated Do not use -#define T_STONITH_NOTIFY_FENCE "st_notify_fence" - -//! \deprecated Do not use -#define T_STONITH_NOTIFY_HISTORY "st_notify_history" - -//! \deprecated Do not use -#define T_STONITH_NOTIFY_HISTORY_SYNCED "st_notify_history_synced" - -#ifdef __cplusplus -} -#endif - -#endif // PCMK__CRM_STONITH_NG_COMPAT__H diff --git a/lib/cib/Makefile.am b/lib/cib/Makefile.am index a74c4b181d..b23c3329de 100644 --- a/lib/cib/Makefile.am +++ b/lib/cib/Makefile.am @@ -1,30 +1,30 @@ # # Copyright 2004-2023 the Pacemaker project contributors # # The version control history for this file may have further details. # # This source code is licensed under the GNU General Public License version 2 # or later (GPLv2+) WITHOUT ANY WARRANTY. # include $(top_srcdir)/mk/common.mk ## libraries lib_LTLIBRARIES = libcib.la ## Library sources (*must* use += format for bumplibs) libcib_la_SOURCES = cib_attrs.c libcib_la_SOURCES += cib_client.c libcib_la_SOURCES += cib_file.c libcib_la_SOURCES += cib_native.c libcib_la_SOURCES += cib_ops.c libcib_la_SOURCES += cib_remote.c libcib_la_SOURCES += cib_utils.c -libcib_la_LDFLAGS = -version-info 32:0:5 +libcib_la_LDFLAGS = -version-info 33:0:6 libcib_la_CPPFLAGS = -I$(top_srcdir) $(AM_CPPFLAGS) libcib_la_CFLAGS = $(CFLAGS_HARDENED_LIB) libcib_la_LDFLAGS += $(LDFLAGS_HARDENED_LIB) libcib_la_LIBADD = $(top_builddir)/lib/pengine/libpe_rules.la \ $(top_builddir)/lib/common/libcrmcommon.la diff --git a/lib/cluster/Makefile.am b/lib/cluster/Makefile.am index 22d8028282..85ba22d48b 100644 --- a/lib/cluster/Makefile.am +++ b/lib/cluster/Makefile.am @@ -1,34 +1,34 @@ # # Copyright 2004-2024 the Pacemaker project contributors # # The version control history for this file may have further details. # # This source code is licensed under the GNU General Public License version 2 # or later (GPLv2+) WITHOUT ANY WARRANTY. # include $(top_srcdir)/mk/common.mk SUBDIRS = tests noinst_HEADERS = crmcluster_private.h ## libraries lib_LTLIBRARIES = libcrmcluster.la -libcrmcluster_la_LDFLAGS = -version-info 31:0:2 +libcrmcluster_la_LDFLAGS = -version-info 32:0:3 libcrmcluster_la_CFLAGS = $(CFLAGS_HARDENED_LIB) libcrmcluster_la_LDFLAGS += $(LDFLAGS_HARDENED_LIB) libcrmcluster_la_LIBADD = $(top_builddir)/lib/fencing/libstonithd.la libcrmcluster_la_LIBADD += $(top_builddir)/lib/common/libcrmcommon.la libcrmcluster_la_LIBADD += $(CLUSTERLIBS) ## Library sources (*must* use += format for bumplibs) libcrmcluster_la_SOURCES = cluster.c libcrmcluster_la_SOURCES += election.c libcrmcluster_la_SOURCES += membership.c if BUILD_CS_SUPPORT libcrmcluster_la_SOURCES += corosync.c libcrmcluster_la_SOURCES += cpg.c endif diff --git a/lib/common/Makefile.am b/lib/common/Makefile.am index 317801b9f5..bfa5c1dd7b 100644 --- a/lib/common/Makefile.am +++ b/lib/common/Makefile.am @@ -1,143 +1,143 @@ # # Copyright 2004-2024 the Pacemaker project contributors # # The version control history for this file may have further details. # # This source code is licensed under the GNU General Public License version 2 # or later (GPLv2+) WITHOUT ANY WARRANTY. 
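# Note on the -version-info bumps in this patch, per standard libtool
# semantics: the fields are current:revision:age, and the shared object's
# major (soname) version is current minus age. Bumping current and age
# together (below, 46:0:12 -> 47:0:13) keeps libcrmcommon.so.34, meaning
# interfaces were added without breaking existing ABI consumers.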
# include $(top_srcdir)/mk/common.mk AM_CPPFLAGS += -I$(top_builddir)/lib/gnu \ -I$(top_srcdir)/lib/gnu ## libraries lib_LTLIBRARIES = libcrmcommon.la check_LTLIBRARIES = libcrmcommon_test.la # Disable -Wcast-qual if used, because we do some hacky casting, # and because libxml2 has some signatures that should be const but aren't # for backward compatibility reasons. # s390 needs -fPIC # s390-suse-linux/bin/ld: .libs/ipc.o: relocation R_390_PC32DBL against `__stack_chk_fail@@GLIBC_2.4' can not be used when making a shared object; recompile with -fPIC CFLAGS = $(CFLAGS_COPY:-Wcast-qual=) -fPIC # Without "." here, check-recursive will run through the subdirectories first # and then run "make check" here. This will fail, because there are things in # the subdirectories that need check_LTLIBRARIES built first. Adding "." here # changes the order so the subdirectories are processed afterwards. SUBDIRS = . tests noinst_HEADERS = crmcommon_private.h \ mock_private.h -libcrmcommon_la_LDFLAGS = -version-info 46:0:12 +libcrmcommon_la_LDFLAGS = -version-info 47:0:13 libcrmcommon_la_CFLAGS = $(CFLAGS_HARDENED_LIB) libcrmcommon_la_LDFLAGS += $(LDFLAGS_HARDENED_LIB) libcrmcommon_la_LIBADD = @LIBADD_DL@ \ $(top_builddir)/lib/gnu/libgnu.la # If configured with --with-profiling or --with-coverage, BUILD_PROFILING will # be set and -fno-builtin will be added to the CFLAGS. However, libcrmcommon # uses the fabs() function which is normally supplied by gcc as one of its # builtins. Therefore we need to explicitly link against libm here or the # tests won't link. if BUILD_PROFILING libcrmcommon_la_LIBADD += -lm endif ## Library sources (*must* use += format for bumplibs) libcrmcommon_la_SOURCES = libcrmcommon_la_SOURCES += acl.c libcrmcommon_la_SOURCES += actions.c libcrmcommon_la_SOURCES += agents.c libcrmcommon_la_SOURCES += alerts.c libcrmcommon_la_SOURCES += attrs.c libcrmcommon_la_SOURCES += cib.c if BUILD_CIBSECRETS libcrmcommon_la_SOURCES += cib_secrets.c endif libcrmcommon_la_SOURCES += cmdline.c libcrmcommon_la_SOURCES += digest.c libcrmcommon_la_SOURCES += health.c libcrmcommon_la_SOURCES += io.c libcrmcommon_la_SOURCES += ipc_attrd.c libcrmcommon_la_SOURCES += ipc_client.c libcrmcommon_la_SOURCES += ipc_common.c libcrmcommon_la_SOURCES += ipc_controld.c libcrmcommon_la_SOURCES += ipc_pacemakerd.c libcrmcommon_la_SOURCES += ipc_schedulerd.c libcrmcommon_la_SOURCES += ipc_server.c libcrmcommon_la_SOURCES += iso8601.c libcrmcommon_la_SOURCES += lists.c libcrmcommon_la_SOURCES += logging.c libcrmcommon_la_SOURCES += mainloop.c libcrmcommon_la_SOURCES += messages.c libcrmcommon_la_SOURCES += nodes.c libcrmcommon_la_SOURCES += nvpair.c libcrmcommon_la_SOURCES += options.c libcrmcommon_la_SOURCES += options_display.c libcrmcommon_la_SOURCES += output.c libcrmcommon_la_SOURCES += output_html.c libcrmcommon_la_SOURCES += output_log.c libcrmcommon_la_SOURCES += output_none.c libcrmcommon_la_SOURCES += output_text.c libcrmcommon_la_SOURCES += output_xml.c libcrmcommon_la_SOURCES += patchset.c libcrmcommon_la_SOURCES += patchset_display.c libcrmcommon_la_SOURCES += pid.c libcrmcommon_la_SOURCES += probes.c libcrmcommon_la_SOURCES += procfs.c libcrmcommon_la_SOURCES += remote.c libcrmcommon_la_SOURCES += resources.c libcrmcommon_la_SOURCES += results.c libcrmcommon_la_SOURCES += roles.c libcrmcommon_la_SOURCES += rules.c libcrmcommon_la_SOURCES += scheduler.c libcrmcommon_la_SOURCES += schemas.c libcrmcommon_la_SOURCES += scores.c libcrmcommon_la_SOURCES += strings.c libcrmcommon_la_SOURCES += utils.c
libcrmcommon_la_SOURCES += watchdog.c libcrmcommon_la_SOURCES += xml.c libcrmcommon_la_SOURCES += xml_attr.c libcrmcommon_la_SOURCES += xml_display.c libcrmcommon_la_SOURCES += xml_io.c libcrmcommon_la_SOURCES += xpath.c # # libcrmcommon_test is used only with unit tests, so we can mock system calls. # See mock.c for details. # include $(top_srcdir)/mk/tap.mk libcrmcommon_test_la_SOURCES = $(libcrmcommon_la_SOURCES) libcrmcommon_test_la_SOURCES += mock.c libcrmcommon_test_la_SOURCES += unittest.c libcrmcommon_test_la_LDFLAGS = $(libcrmcommon_la_LDFLAGS) \ -rpath $(libdir) \ $(LDFLAGS_WRAP) # If GCC emits a builtin function in place of something we've mocked up, that will # get used instead of the mocked version which leads to unexpected test results. So # disable all builtins. Older versions of GCC (at least, on RHEL7) will still emit # replacement code for strdup (and possibly other functions) unless -fno-inline is # also added. libcrmcommon_test_la_CFLAGS = $(libcrmcommon_la_CFLAGS) \ -DPCMK__UNIT_TESTING \ -fno-builtin \ -fno-inline # If -fno-builtin is used, -lm also needs to be added. See the comment at # BUILD_PROFILING above. libcrmcommon_test_la_LIBADD = $(libcrmcommon_la_LIBADD) if BUILD_COVERAGE libcrmcommon_test_la_LIBADD += -lgcov endif libcrmcommon_test_la_LIBADD += -lcmocka libcrmcommon_test_la_LIBADD += -lm nodist_libcrmcommon_test_la_SOURCES = $(nodist_libcrmcommon_la_SOURCES) diff --git a/lib/common/iso8601.c b/lib/common/iso8601.c index 29ed35228e..d24f2688e1 100644 --- a/lib/common/iso8601.c +++ b/lib/common/iso8601.c @@ -1,2118 +1,2118 @@ /* * Copyright 2005-2024 the Pacemaker project contributors * * The version control history for this file may have further details. * * This source code is licensed under the GNU Lesser General Public License * version 2.1 or later (LGPLv2.1+) WITHOUT ANY WARRANTY. */ /* * References: * https://en.wikipedia.org/wiki/ISO_8601 * http://www.staff.science.uu.nl/~gent0113/calendar/isocalendar.htm */ #include <crm_internal.h> #include <ctype.h> #include <inttypes.h> #include <stdio.h> #include <stdlib.h> #include <limits.h> // INT_MIN, INT_MAX #include <string.h> #include <time.h> #include <crm/common/iso8601.h> #include <crm/common/iso8601_internal.h> #include "crmcommon_private.h" /* * Andrew's code was originally written for OSes whose "struct tm" contains: * long tm_gmtoff; :: Seconds east of UTC * const char *tm_zone; :: Timezone abbreviation * Some OSes lack these, instead having: * time_t (or long) timezone; :: "difference between UTC and local standard time" * char *tzname[2] = { "...", "..." }; * I (David Lee) confess to not understanding the details. So my attempted * generalisations for where their use is necessary may be flawed. * * 1. Does "difference between ..." subtract the same or opposite way? * 2. Should it use "altzone" instead of "timezone"? * 3. Should it use tzname[0] or tzname[1]? Interaction with timezone/altzone? */ #if defined(HAVE_STRUCT_TM_TM_GMTOFF) # define GMTOFF(tm) ((tm)->tm_gmtoff) #else /* Note: extern variable; macro argument not actually used. */ # define GMTOFF(tm) (-timezone+daylight) #endif #define HOUR_SECONDS (60 * 60) #define DAY_SECONDS (HOUR_SECONDS * 24) /*! * \internal * \brief Validate a seconds/microseconds tuple * * The microseconds value must be in the correct range, and if both are nonzero * they must have the same sign.
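 * For example, (1, 500000) and (-1, -500000) are valid tuples, while
 * (1, -500000) and (0, 1000000) are not.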
* * \param[in] sec Seconds * \param[in] usec Microseconds * * \return true if the seconds/microseconds tuple is valid, or false otherwise */ #define valid_sec_usec(sec, usec) \ ((QB_ABS(usec) < QB_TIME_US_IN_SEC) \ && (((sec) == 0) || ((usec) == 0) || (((sec) < 0) == ((usec) < 0)))) // A date/time or duration struct crm_time_s { int years; // Calendar year (date/time) or number of years (duration) int months; // Number of months (duration only) int days; // Ordinal day of year (date/time) or number of days (duration) int seconds; // Seconds of day (date/time) or number of seconds (duration) int offset; // Seconds offset from UTC (date/time only) bool duration; // True if duration }; static crm_time_t *parse_date(const char *date_str); static crm_time_t * crm_get_utc_time(const crm_time_t *dt) { crm_time_t *utc = NULL; if (dt == NULL) { errno = EINVAL; return NULL; } utc = crm_time_new_undefined(); utc->years = dt->years; utc->days = dt->days; utc->seconds = dt->seconds; utc->offset = 0; if (dt->offset) { crm_time_add_seconds(utc, -dt->offset); } else { /* Durations (which are the only things that can include months) never have a timezone */ utc->months = dt->months; } crm_time_log(LOG_TRACE, "utc-source", dt, crm_time_log_date | crm_time_log_timeofday | crm_time_log_with_timezone); crm_time_log(LOG_TRACE, "utc-target", utc, crm_time_log_date | crm_time_log_timeofday | crm_time_log_with_timezone); return utc; } crm_time_t * crm_time_new(const char *date_time) { tzset(); if (date_time == NULL) { return pcmk__copy_timet(time(NULL)); } return parse_date(date_time); } /*! * \brief Allocate memory for an uninitialized time object * * \return Newly allocated time object * \note The caller is responsible for freeing the return value using * crm_time_free(). */ crm_time_t * crm_time_new_undefined(void) { return (crm_time_t *) pcmk__assert_alloc(1, sizeof(crm_time_t)); } /*! * \brief Check whether a time object has been initialized yet * * \param[in] t Time object to check * * \return TRUE if time object has been initialized, FALSE otherwise */ bool crm_time_is_defined(const crm_time_t *t) { // Any nonzero member indicates something has been done to t return (t != NULL) && (t->years || t->months || t->days || t->seconds || t->offset || t->duration); } void crm_time_free(crm_time_t * dt) { if (dt == NULL) { return; } free(dt); } static int year_days(int year) { int d = 365; if (crm_time_leapyear(year)) { d++; } return d; } /* From http://myweb.ecu.edu/mccartyr/ISOwdALG.txt : * * 5. Find the Jan1Weekday for Y (Monday=1, Sunday=7) * YY = (Y-1) % 100 * C = (Y-1) - YY * G = YY + YY/4 * Jan1Weekday = 1 + (((((C / 100) % 4) x 5) + G) % 7) */ int crm_time_january1_weekday(int year) { int YY = (year - 1) % 100; int C = (year - 1) - YY; int G = YY + YY / 4; int jan1 = 1 + (((((C / 100) % 4) * 5) + G) % 7); crm_trace("YY=%d, C=%d, G=%d", YY, C, G); crm_trace("January 1 %.4d: %d", year, jan1); return jan1; } int crm_time_weeks_in_year(int year) { int weeks = 52; int jan1 = crm_time_january1_weekday(year); /* if jan1 == thursday */ if (jan1 == 4) { weeks++; } else { jan1 = crm_time_january1_weekday(year + 1); /* if dec31 == thursday aka. jan1 of next year is a friday */ if (jan1 == 5) { weeks++; } } return weeks; } // Jan-Dec plus Feb of leap years static int month_days[13] = { 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31, 29 }; /*!
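 * \par Example
 * Under the Gregorian rules implemented by crm_time_leapyear() below, 2000
 * and 2024 are leap years while 1900 is not, so crm_time_days_in_month(2, 2000)
 * returns 29 and crm_time_days_in_month(2, 1900) returns 28.
 */

/*!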
* \brief Return number of days in given month of given year * * \param[in] month Ordinal month (1-12) * \param[in] year Gregorian year * * \return Number of days in given month (0 if given month is invalid) */ int crm_time_days_in_month(int month, int year) { if ((month < 1) || (month > 12)) { return 0; } if ((month == 2) && crm_time_leapyear(year)) { month = 13; } return month_days[month - 1]; } bool crm_time_leapyear(int year) { gboolean is_leap = FALSE; if (year % 4 == 0) { is_leap = TRUE; } if (year % 100 == 0 && year % 400 != 0) { is_leap = FALSE; } return is_leap; } static uint32_t get_ordinal_days(uint32_t y, uint32_t m, uint32_t d) { int lpc; for (lpc = 1; lpc < m; lpc++) { d += crm_time_days_in_month(lpc, y); } return d; } void crm_time_log_alias(int log_level, const char *file, const char *function, int line, const char *prefix, const crm_time_t *date_time, int flags) { char *date_s = crm_time_as_string(date_time, flags); if (log_level == LOG_STDOUT) { printf("%s%s%s\n", (prefix? prefix : ""), (prefix? ": " : ""), date_s); } else { do_crm_log_alias(log_level, file, function, line, "%s%s%s", (prefix? prefix : ""), (prefix? ": " : ""), date_s); } free(date_s); } static void crm_time_get_sec(int sec, uint32_t *h, uint32_t *m, uint32_t *s) { uint32_t hours, minutes, seconds; seconds = QB_ABS(sec); hours = seconds / HOUR_SECONDS; seconds -= HOUR_SECONDS * hours; minutes = seconds / 60; seconds -= 60 * minutes; crm_trace("%d == %.2" PRIu32 ":%.2" PRIu32 ":%.2" PRIu32, sec, hours, minutes, seconds); *h = hours; *m = minutes; *s = seconds; } int crm_time_get_timeofday(const crm_time_t *dt, uint32_t *h, uint32_t *m, uint32_t *s) { crm_time_get_sec(dt->seconds, h, m, s); return TRUE; } int crm_time_get_timezone(const crm_time_t *dt, uint32_t *h, uint32_t *m) { uint32_t s; crm_time_get_sec(dt->seconds, h, m, &s); return TRUE; } long long crm_time_get_seconds(const crm_time_t *dt) { int lpc; crm_time_t *utc = NULL; long long in_seconds = 0; if (dt == NULL) { return 0; } utc = crm_get_utc_time(dt); if (utc == NULL) { return 0; } for (lpc = 1; lpc < utc->years; lpc++) { long long dmax = year_days(lpc); in_seconds += DAY_SECONDS * dmax; } /* utc->months is an offset that can only be set for a duration. * By definition, the value is variable depending on the date to * which it is applied. * * Force 30-day months so that something vaguely sane happens * for anyone that tries to use a month in this way. */ if (utc->months > 0) { in_seconds += DAY_SECONDS * 30 * (long long) (utc->months); } if (utc->days > 0) { in_seconds += DAY_SECONDS * (long long) (utc->days - 1); } in_seconds += utc->seconds; crm_time_free(utc); return in_seconds; } #define EPOCH_SECONDS 62135596800ULL /* Calculated using crm_time_get_seconds() */ long long crm_time_get_seconds_since_epoch(const crm_time_t *dt) { return (dt == NULL)?
0 : (crm_time_get_seconds(dt) - EPOCH_SECONDS); } int crm_time_get_gregorian(const crm_time_t *dt, uint32_t *y, uint32_t *m, uint32_t *d) { int months = 0; int days = dt->days; if(dt->years != 0) { for (months = 1; months <= 12 && days > 0; months++) { int mdays = crm_time_days_in_month(months, dt->years); if (mdays >= days) { break; } else { days -= mdays; } } } else if (dt->months) { /* This is a duration including months, don't convert the days field */ months = dt->months; } else { /* This is a duration not including months, still don't convert the days field */ } *y = dt->years; *m = months; *d = days; crm_trace("%.4d-%.3d -> %.4d-%.2d-%.2d", dt->years, dt->days, dt->years, months, days); return TRUE; } int crm_time_get_ordinal(const crm_time_t *dt, uint32_t *y, uint32_t *d) { *y = dt->years; *d = dt->days; return TRUE; } int crm_time_get_isoweek(const crm_time_t *dt, uint32_t *y, uint32_t *w, uint32_t *d) { /* * Monday 29 December 2008 is written "2009-W01-1" * Sunday 3 January 2010 is written "2009-W53-7" */ int year_num = 0; int jan1 = crm_time_january1_weekday(dt->years); int h = -1; CRM_CHECK(dt->days > 0, return FALSE); /* 6. Find the Weekday for Y M D */ h = dt->days + jan1 - 1; *d = 1 + ((h - 1) % 7); /* 7. Find if Y M D falls in YearNumber Y-1, WeekNumber 52 or 53 */ if (dt->days <= (8 - jan1) && jan1 > 4) { crm_trace("year--, jan1=%d", jan1); year_num = dt->years - 1; *w = crm_time_weeks_in_year(year_num); } else { year_num = dt->years; } /* 8. Find if Y M D falls in YearNumber Y+1, WeekNumber 1 */ if (year_num == dt->years) { int dmax = year_days(year_num); int correction = 4 - *d; if ((dmax - dt->days) < correction) { crm_trace("year++, jan1=%d, i=%d vs. %d", jan1, dmax - dt->days, correction); year_num = dt->years + 1; *w = 1; } } /* 9. Find if Y M D falls in YearNumber Y, WeekNumber 1 through 53 */ if (year_num == dt->years) { int j = dt->days + (7 - *d) + (jan1 - 1); *w = j / 7; if (jan1 > 4) { *w -= 1; } } *y = year_num; crm_trace("Converted %.4d-%.3d to %.4" PRIu32 "-W%.2" PRIu32 "-%" PRIu32, dt->years, dt->days, *y, *w, *d); return TRUE; } #define DATE_MAX 128 /*! * \internal * \brief Print "<seconds>.<microseconds>" to a buffer * * \param[in] sec Seconds * \param[in] usec Microseconds (must be of same sign as \p sec and of * absolute value less than \p QB_TIME_US_IN_SEC) * \param[in,out] buf Result buffer * \param[in,out] offset Current offset within \p buf */ static inline void sec_usec_as_string(long long sec, int usec, char *buf, size_t *offset) { *offset += snprintf(buf + *offset, DATE_MAX - *offset, "%s%lld.%06d", ((sec == 0) && (usec < 0))? "-" : "", sec, QB_ABS(usec)); } /*!
* \internal * \brief Get a string representation of a duration * * \param[in] dt Time object to interpret as a duration * \param[in] usec Microseconds to add to \p dt * \param[in] show_usec Whether to include microseconds in \p result * \param[out] result Where to store the result string */ static void crm_duration_as_string(const crm_time_t *dt, int usec, bool show_usec, char *result) { size_t offset = 0; CRM_ASSERT(valid_sec_usec(dt->seconds, usec)); if (dt->years) { offset += snprintf(result + offset, DATE_MAX - offset, "%4d year%s ", dt->years, pcmk__plural_s(dt->years)); } if (dt->months) { offset += snprintf(result + offset, DATE_MAX - offset, "%2d month%s ", dt->months, pcmk__plural_s(dt->months)); } if (dt->days) { offset += snprintf(result + offset, DATE_MAX - offset, "%2d day%s ", dt->days, pcmk__plural_s(dt->days)); } // At least print seconds (and optionally usecs) if ((offset == 0) || (dt->seconds != 0) || (show_usec && (usec != 0))) { if (show_usec) { sec_usec_as_string(dt->seconds, usec, result, &offset); } else { offset += snprintf(result + offset, DATE_MAX - offset, "%d", dt->seconds); } offset += snprintf(result + offset, DATE_MAX - offset, " second%s", pcmk__plural_s(dt->seconds)); } // More than one minute, so provide a more readable breakdown into units if (QB_ABS(dt->seconds) >= 60) { uint32_t h = 0; uint32_t m = 0; uint32_t s = 0; uint32_t u = QB_ABS(usec); bool print_sec_component = false; crm_time_get_sec(dt->seconds, &h, &m, &s); print_sec_component = ((s != 0) || (show_usec && (u != 0))); offset += snprintf(result + offset, DATE_MAX - offset, " ("); if (h) { offset += snprintf(result + offset, DATE_MAX - offset, "%" PRIu32 " hour%s%s", h, pcmk__plural_s(h), ((m != 0) || print_sec_component)? " " : ""); } if (m) { offset += snprintf(result + offset, DATE_MAX - offset, "%" PRIu32 " minute%s%s", m, pcmk__plural_s(m), print_sec_component? " " : ""); } if (print_sec_component) { if (show_usec) { sec_usec_as_string(s, u, result, &offset); } else { offset += snprintf(result + offset, DATE_MAX - offset, "%" PRIu32, s); } offset += snprintf(result + offset, DATE_MAX - offset, " second%s", pcmk__plural_s(dt->seconds)); } offset += snprintf(result + offset, DATE_MAX - offset, ")"); } } /*! * \internal * \brief Get a string representation of a time object * * \param[in] dt Time to convert to string * \param[in] usec Microseconds to add to \p dt * \param[in] flags Group of \p crm_time_* string format options * \param[out] result Where to store the result string * * \note \p result must be of size \p DATE_MAX or larger. */ static void time_as_string_common(const crm_time_t *dt, int usec, uint32_t flags, char *result) { crm_time_t *utc = NULL; size_t offset = 0; if (!crm_time_is_defined(dt)) { strcpy(result, "<undefined time>"); return; } CRM_ASSERT(valid_sec_usec(dt->seconds, usec)); /* Simple cases: as duration, seconds, or seconds since epoch. * These never depend on time zone.
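 * For example, with crm_time_log_duration set, a 125-second duration is
 * rendered as "125 seconds (2 minutes 5 seconds)".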
*/ if (pcmk_is_set(flags, crm_time_log_duration)) { crm_duration_as_string(dt, usec, pcmk_is_set(flags, crm_time_usecs), result); return; } if (pcmk_any_flags_set(flags, crm_time_seconds|crm_time_epoch)) { long long seconds = 0; if (pcmk_is_set(flags, crm_time_seconds)) { seconds = crm_time_get_seconds(dt); } else { seconds = crm_time_get_seconds_since_epoch(dt); } if (pcmk_is_set(flags, crm_time_usecs)) { sec_usec_as_string(seconds, usec, result, &offset); } else { snprintf(result, DATE_MAX, "%lld", seconds); } return; } // Convert to UTC if local timezone was not requested if ((dt->offset != 0) && !pcmk_is_set(flags, crm_time_log_with_timezone)) { crm_trace("UTC conversion"); utc = crm_get_utc_time(dt); dt = utc; } // As readable string if (pcmk_is_set(flags, crm_time_log_date)) { if (pcmk_is_set(flags, crm_time_weeks)) { // YYYY-WW-D uint32_t y = 0; uint32_t w = 0; uint32_t d = 0; if (crm_time_get_isoweek(dt, &y, &w, &d)) { offset += snprintf(result + offset, DATE_MAX - offset, "%" PRIu32 "-W%.2" PRIu32 "-%" PRIu32, y, w, d); } } else if (pcmk_is_set(flags, crm_time_ordinal)) { // YYYY-DDD uint32_t y = 0; uint32_t d = 0; if (crm_time_get_ordinal(dt, &y, &d)) { offset += snprintf(result + offset, DATE_MAX - offset, "%" PRIu32 "-%.3" PRIu32, y, d); } } else { // YYYY-MM-DD uint32_t y = 0; uint32_t m = 0; uint32_t d = 0; if (crm_time_get_gregorian(dt, &y, &m, &d)) { offset += snprintf(result + offset, DATE_MAX - offset, "%.4" PRIu32 "-%.2" PRIu32 "-%.2" PRIu32, y, m, d); } } } if (pcmk_is_set(flags, crm_time_log_timeofday)) { uint32_t h = 0, m = 0, s = 0; if (offset > 0) { offset += snprintf(result + offset, DATE_MAX - offset, " "); } if (crm_time_get_timeofday(dt, &h, &m, &s)) { offset += snprintf(result + offset, DATE_MAX - offset, "%.2" PRIu32 ":%.2" PRIu32 ":%.2" PRIu32, h, m, s); if (pcmk_is_set(flags, crm_time_usecs)) { offset += snprintf(result + offset, DATE_MAX - offset, ".%06" PRIu32, QB_ABS(usec)); } } if (pcmk_is_set(flags, crm_time_log_with_timezone) && (dt->offset != 0)) { crm_time_get_sec(dt->offset, &h, &m, &s); offset += snprintf(result + offset, DATE_MAX - offset, " %c%.2" PRIu32 ":%.2" PRIu32, ((dt->offset < 0)? '-' : '+'), h, m); } else { offset += snprintf(result + offset, DATE_MAX - offset, "Z"); } } crm_time_free(utc); } /*! * \brief Get a string representation of a \p crm_time_t object * * \param[in] dt Time to convert to string * \param[in] flags Group of \p crm_time_* string format options * * \note The caller is responsible for freeing the return value using \p free(). */ char * crm_time_as_string(const crm_time_t *dt, int flags) { char result[DATE_MAX] = { '\0', }; time_as_string_common(dt, 0, flags, result); return pcmk__str_copy(result); } /*! * \internal * \brief Determine number of seconds from an hour:minute:second string * * \param[in] time_str Time specification string * \param[out] result Number of seconds equivalent to time_str * * \return TRUE if specification was valid, FALSE (and set errno) otherwise * \note This may return the number of seconds in a day (which is out of bounds * for a time object) if given 24:00:00. 
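 * For example, "13:05:30" and "130530" both yield 47130 seconds, and
 * "24:00:00" yields 86400 (a full day, which the caller must handle).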
*/ static bool crm_time_parse_sec(const char *time_str, int *result) { int rc; uint32_t hour = 0; uint32_t minute = 0; uint32_t second = 0; *result = 0; // Must have at least hour, but minutes and seconds are optional rc = sscanf(time_str, "%" SCNu32 ":%" SCNu32 ":%" SCNu32, &hour, &minute, &second); if (rc == 1) { rc = sscanf(time_str, "%2" SCNu32 "%2" SCNu32 "%2" SCNu32, &hour, &minute, &second); } if (rc == 0) { crm_err("%s is not a valid ISO 8601 time specification", time_str); errno = EINVAL; return FALSE; } crm_trace("Got valid time: %.2" PRIu32 ":%.2" PRIu32 ":%.2" PRIu32, hour, minute, second); if ((hour == 24) && (minute == 0) && (second == 0)) { // Equivalent to 00:00:00 of next day, return number of seconds in day } else if (hour >= 24) { crm_err("%s is not a valid ISO 8601 time specification " "because %" PRIu32 " is not a valid hour", time_str, hour); errno = EINVAL; return FALSE; } if (minute >= 60) { crm_err("%s is not a valid ISO 8601 time specification " "because %" PRIu32 " is not a valid minute", time_str, minute); errno = EINVAL; return FALSE; } if (second >= 60) { crm_err("%s is not a valid ISO 8601 time specification " "because %" PRIu32 " is not a valid second", time_str, second); errno = EINVAL; return FALSE; } *result = (hour * HOUR_SECONDS) + (minute * 60) + second; return TRUE; } static bool crm_time_parse_offset(const char *offset_str, int *offset) { tzset(); if (offset_str == NULL) { // Use local offset #if defined(HAVE_STRUCT_TM_TM_GMTOFF) time_t now = time(NULL); struct tm *now_tm = localtime(&now); #endif int h_offset = GMTOFF(now_tm) / HOUR_SECONDS; int m_offset = (GMTOFF(now_tm) - (HOUR_SECONDS * h_offset)) / 60; if (h_offset < 0 && m_offset < 0) { m_offset = 0 - m_offset; } *offset = (HOUR_SECONDS * h_offset) + (60 * m_offset); return TRUE; } if (offset_str[0] == 'Z') { // @TODO invalid if anything after? *offset = 0; return TRUE; } *offset = 0; if ((offset_str[0] == '+') || (offset_str[0] == '-') || isdigit((int)offset_str[0])) { gboolean negate = FALSE; if (offset_str[0] == '+') { offset_str++; } else if (offset_str[0] == '-') { negate = TRUE; offset_str++; } if (crm_time_parse_sec(offset_str, offset) == FALSE) { return FALSE; } if (negate) { *offset = 0 - *offset; } } // @TODO else invalid? return TRUE; } /*! * \internal * \brief Parse the time portion of an ISO 8601 date/time string * * \param[in] time_str Time portion of specification (after any 'T') * \param[in,out] a_time Time object to parse into * * \return TRUE if valid time was parsed, FALSE (and set errno) otherwise * \note This may add a day to a_time (if the time is 24:00:00). */ static bool crm_time_parse(const char *time_str, crm_time_t *a_time) { uint32_t h, m, s; char *offset_s = NULL; tzset(); if (time_str) { if (crm_time_parse_sec(time_str, &(a_time->seconds)) == FALSE) { return FALSE; } offset_s = strstr(time_str, "Z"); if (offset_s == NULL) { offset_s = strstr(time_str, " "); if (offset_s) { while (isspace(offset_s[0])) { offset_s++; } } } } if (crm_time_parse_offset(offset_s, &(a_time->offset)) == FALSE) { return FALSE; } crm_time_get_sec(a_time->offset, &h, &m, &s); crm_trace("Got tz: %c%2." PRIu32 ":%.2" PRIu32, (a_time->offset < 0)? 
'-' : '+', h, m); if (a_time->seconds == DAY_SECONDS) { // 24:00:00 == 00:00:00 of next day a_time->seconds = 0; crm_time_add_days(a_time, 1); } return TRUE; } /* * \internal * \brief Parse a time object from an ISO 8601 date/time specification * * \param[in] date_str ISO 8601 date/time specification (or * \c PCMK__VALUE_EPOCH) * * \return New time object on success, NULL (and set errno) otherwise */ static crm_time_t * parse_date(const char *date_str) { const char *time_s = NULL; crm_time_t *dt = NULL; int year = 0; int month = 0; int week = 0; int day = 0; int rc = 0; if (pcmk__str_empty(date_str)) { crm_err("No ISO 8601 date/time specification given"); goto invalid; } if ((date_str[0] == 'T') || (date_str[2] == ':')) { /* Just a time supplied - Infer current date */ dt = crm_time_new(NULL); if (date_str[0] == 'T') { time_s = date_str + 1; } else { time_s = date_str; } goto parse_time; } dt = crm_time_new_undefined(); if ((strncasecmp(PCMK__VALUE_EPOCH, date_str, 5) == 0) && ((date_str[5] == '\0') || (date_str[5] == '/') || isspace(date_str[5]))) { dt->days = 1; dt->years = 1970; crm_time_log(LOG_TRACE, "Unpacked", dt, crm_time_log_date | crm_time_log_timeofday); return dt; } /* YYYY-MM-DD */ rc = sscanf(date_str, "%d-%d-%d", &year, &month, &day); if (rc == 1) { /* YYYYMMDD */ rc = sscanf(date_str, "%4d%2d%2d", &year, &month, &day); } if (rc == 3) { if (month > 12) { crm_err("'%s' is not a valid ISO 8601 date/time specification " "because '%d' is not a valid month", date_str, month); goto invalid; } else if (day > crm_time_days_in_month(month, year)) { crm_err("'%s' is not a valid ISO 8601 date/time specification " "because '%d' is not a valid day of the month", date_str, day); goto invalid; } else { dt->years = year; dt->days = get_ordinal_days(year, month, day); crm_trace("Parsed Gregorian date '%.4d-%.3d' from date string '%s'", year, dt->days, date_str); } goto parse_time; } /* YYYY-DDD */ rc = sscanf(date_str, "%d-%d", &year, &day); if (rc == 2) { if (day > year_days(year)) { crm_err("'%s' is not a valid ISO 8601 date/time specification " "because '%d' is not a valid day of the year (max %d)", date_str, day, year_days(year)); goto invalid; } crm_trace("Parsed ordinal year %d and days %d from date string '%s'", year, day, date_str); dt->days = day; dt->years = year; goto parse_time; } /* YYYY-Www-D */ rc = sscanf(date_str, "%d-W%d-%d", &year, &week, &day); if (rc == 3) { if (week > crm_time_weeks_in_year(year)) { crm_err("'%s' is not a valid ISO 8601 date/time specification " "because '%d' is not a valid week of the year (max %d)", date_str, week, crm_time_weeks_in_year(year)); goto invalid; } else if (day < 1 || day > 7) { crm_err("'%s' is not a valid ISO 8601 date/time specification " "because '%d' is not a valid day of the week", date_str, day); goto invalid; } else { /* * See https://en.wikipedia.org/wiki/ISO_week_date * * Monday 29 December 2008 is written "2009-W01-1" * Sunday 3 January 2010 is written "2009-W53-7" * Saturday 27 September 2008 is written "2008-W37-6" * * If 1 January is on a Monday, Tuesday, Wednesday or Thursday, it is in week 01. * If 1 January is on a Friday, Saturday or Sunday, it is in week 52 or 53 of the previous year. 
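 * Worked example: for "2009-W01-1", 1 January 2009 is a Thursday (jan1 = 4),
 * so the code below steps back three days into 2008 and lands on ordinal
 * day 364 of 2008, i.e. Monday 29 December 2008.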
*/ int jan1 = crm_time_january1_weekday(year); crm_trace("Got year %d (Jan 1 = %d), week %d, and day %d from date string '%s'", year, jan1, week, day, date_str); dt->years = year; crm_time_add_days(dt, (week - 1) * 7); if (jan1 <= 4) { crm_time_add_days(dt, 1 - jan1); } else { crm_time_add_days(dt, 8 - jan1); } crm_time_add_days(dt, day); } goto parse_time; } crm_err("'%s' is not a valid ISO 8601 date/time specification", date_str); goto invalid; parse_time: if (time_s == NULL) { time_s = date_str + strspn(date_str, "0123456789-W"); if ((time_s[0] == ' ') || (time_s[0] == 'T')) { ++time_s; } else { time_s = NULL; } } if ((time_s != NULL) && (crm_time_parse(time_s, dt) == FALSE)) { goto invalid; } crm_time_log(LOG_TRACE, "Unpacked", dt, crm_time_log_date | crm_time_log_timeofday); if (crm_time_check(dt) == FALSE) { crm_err("'%s' is not a valid ISO 8601 date/time specification", date_str); goto invalid; } return dt; invalid: crm_time_free(dt); errno = EINVAL; return NULL; } // Parse an ISO 8601 numeric value and return number of characters consumed // @TODO This cannot handle >INT_MAX int values // @TODO Fractions appear to be not working // @TODO Error out on invalid specifications static int parse_int(const char *str, int field_width, int upper_bound, int *result) { int lpc = 0; int offset = 0; int intermediate = 0; gboolean fraction = FALSE; gboolean negate = FALSE; *result = 0; if (*str == '\0') { return 0; } if (str[offset] == 'T') { offset++; } if (str[offset] == '.' || str[offset] == ',') { fraction = TRUE; field_width = -1; offset++; } else if (str[offset] == '-') { negate = TRUE; offset++; } else if (str[offset] == '+' || str[offset] == ':') { offset++; } for (; (fraction || lpc < field_width) && isdigit((int)str[offset]); lpc++) { if (fraction) { intermediate = (str[offset] - '0') / (10 ^ lpc); } else { *result *= 10; intermediate = str[offset] - '0'; } *result += intermediate; offset++; } if (fraction) { *result = (int)(*result * upper_bound); } else if (upper_bound > 0 && *result > upper_bound) { *result = upper_bound; } if (negate) { *result = 0 - *result; } if (lpc > 0) { crm_trace("Found int: %d. Stopped at str[%d]='%c'", *result, lpc, str[lpc]); return offset; } return 0; } /*! * \brief Parse a time duration from an ISO 8601 duration specification * * \param[in] period_s ISO 8601 duration specification (optionally followed by * whitespace, after which the rest of the string will be * ignored) * * \return New time object on success, NULL (and set errno) otherwise * \note It is the caller's responsibility to return the result using * crm_time_free(). */ crm_time_t * crm_time_parse_duration(const char *period_s) { gboolean is_time = FALSE; crm_time_t *diff = NULL; if (pcmk__str_empty(period_s)) { crm_err("No ISO 8601 time duration given"); goto invalid; } if (period_s[0] != 'P') { crm_err("'%s' is not a valid ISO 8601 time duration " "because it does not start with a 'P'", period_s); goto invalid; } if ((period_s[1] == '\0') || isspace(period_s[1])) { crm_err("'%s' is not a valid ISO 8601 time duration " "because nothing follows 'P'", period_s); goto invalid; } diff = crm_time_new_undefined(); diff->duration = TRUE; for (const char *current = period_s + 1; current[0] && (current[0] != '/') && !isspace(current[0]); ++current) { int an_int = 0, rc; if (current[0] == 'T') { /* A 'T' separates year/month/day from hour/minute/seconds. We don't * require it strictly, but just use it to differentiate month from * minutes. 
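 * For example, "P1M" is one month while "PT1M" is one minute, and
 * "P1Y2DT3H" is one year, two days, and three hours.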
*/ is_time = TRUE; continue; } // An integer must be next rc = parse_int(current, 10, 0, &an_int); if (rc == 0) { crm_err("'%s' is not a valid ISO 8601 time duration " "because no integer at '%s'", period_s, current); goto invalid; } current += rc; // A time unit must be next (we're not strict about the order) switch (current[0]) { case 'Y': diff->years = an_int; break; case 'M': if (is_time) { /* Minutes */ diff->seconds += an_int * 60; } else { diff->months = an_int; } break; case 'W': diff->days += an_int * 7; break; case 'D': diff->days += an_int; break; case 'H': diff->seconds += an_int * HOUR_SECONDS; break; case 'S': diff->seconds += an_int; break; case '\0': crm_err("'%s' is not a valid ISO 8601 time duration " "because no units after %d", period_s, an_int); goto invalid; default: crm_err("'%s' is not a valid ISO 8601 time duration " "because '%c' is not a valid time unit", period_s, current[0]); goto invalid; } } if (!crm_time_is_defined(diff)) { crm_err("'%s' is not a valid ISO 8601 time duration " "because no amounts and units given", period_s); goto invalid; } return diff; invalid: crm_time_free(diff); errno = EINVAL; return NULL; } /*! * \brief Parse a time period from an ISO 8601 interval specification * * \param[in] period_str ISO 8601 interval specification (start/end, * start/duration, or duration/end) * * \return New time period object on success, NULL (and set errno) otherwise * \note The caller is responsible for freeing the result using * crm_time_free_period(). */ crm_time_period_t * crm_time_parse_period(const char *period_str) { const char *original = period_str; crm_time_period_t *period = NULL; if (pcmk__str_empty(period_str)) { crm_err("No ISO 8601 time period given"); goto invalid; } tzset(); period = pcmk__assert_alloc(1, sizeof(crm_time_period_t)); if (period_str[0] == 'P') { period->diff = crm_time_parse_duration(period_str); if (period->diff == NULL) { goto error; } } else { period->start = parse_date(period_str); if (period->start == NULL) { goto error; } } period_str = strstr(original, "/"); if (period_str) { ++period_str; if (period_str[0] == 'P') { if (period->diff != NULL) { crm_err("'%s' is not a valid ISO 8601 time period " "because it has two durations", original); goto invalid; } period->diff = crm_time_parse_duration(period_str); if (period->diff == NULL) { goto error; } } else { period->end = parse_date(period_str); if (period->end == NULL) { goto error; } } } else if (period->diff != NULL) { // Only duration given, assume start is now period->start = crm_time_new(NULL); } else { // Only start given crm_err("'%s' is not a valid ISO 8601 time period " "because it has no duration or ending time", original); goto invalid; } if (period->start == NULL) { period->start = crm_time_subtract(period->end, period->diff); } else if (period->end == NULL) { period->end = crm_time_add(period->start, period->diff); } if (crm_time_check(period->start) == FALSE) { crm_err("'%s' is not a valid ISO 8601 time period " "because the start is invalid", period_str); goto invalid; } if (crm_time_check(period->end) == FALSE) { crm_err("'%s' is not a valid ISO 8601 time period " "because the end is invalid", period_str); goto invalid; } return period; invalid: errno = EINVAL; error: crm_time_free_period(period); return NULL; } /*! 
* \brief Free a dynamically allocated time period object * * \param[in,out] period Time period to free */ void crm_time_free_period(crm_time_period_t *period) { if (period) { crm_time_free(period->start); crm_time_free(period->end); crm_time_free(period->diff); free(period); } } void crm_time_set(crm_time_t *target, const crm_time_t *source) { crm_trace("target=%p, source=%p", target, source); CRM_CHECK(target != NULL && source != NULL, return); target->years = source->years; target->days = source->days; target->months = source->months; /* Only for durations */ target->seconds = source->seconds; target->offset = source->offset; crm_time_log(LOG_TRACE, "source", source, crm_time_log_date | crm_time_log_timeofday | crm_time_log_with_timezone); crm_time_log(LOG_TRACE, "target", target, crm_time_log_date | crm_time_log_timeofday | crm_time_log_with_timezone); } static void ha_set_tm_time(crm_time_t *target, const struct tm *source) { int h_offset = 0; int m_offset = 0; /* Ensure target is fully initialized */ target->years = 0; target->months = 0; target->days = 0; target->seconds = 0; target->offset = 0; target->duration = FALSE; if (source->tm_year > 0) { /* years since 1900 */ target->years = 1900 + source->tm_year; } if (source->tm_yday >= 0) { /* days since January 1 [0-365] */ target->days = 1 + source->tm_yday; } if (source->tm_hour >= 0) { target->seconds += HOUR_SECONDS * source->tm_hour; } if (source->tm_min >= 0) { target->seconds += 60 * source->tm_min; } if (source->tm_sec >= 0) { target->seconds += source->tm_sec; } /* tm_gmtoff == offset from UTC in seconds */ h_offset = GMTOFF(source) / HOUR_SECONDS; m_offset = (GMTOFF(source) - (HOUR_SECONDS * h_offset)) / 60; crm_trace("Time offset is %lds (%.2d:%.2d)", GMTOFF(source), h_offset, m_offset); target->offset += HOUR_SECONDS * h_offset; target->offset += 60 * m_offset; } void crm_time_set_timet(crm_time_t *target, const time_t *source) { ha_set_tm_time(target, localtime(source)); } /*! * \internal * \brief Set one time object to another if the other is earlier * * \param[in,out] target Time object to set * \param[in] source Time object to use if earlier */ void pcmk__set_time_if_earlier(crm_time_t *target, const crm_time_t *source) { if ((target != NULL) && (source != NULL) && (!crm_time_is_defined(target) || (crm_time_compare(source, target) < 0))) { crm_time_set(target, source); } } crm_time_t * pcmk_copy_time(const crm_time_t *source) { crm_time_t *target = crm_time_new_undefined(); crm_time_set(target, source); return target; } /*! * \internal * \brief Convert a \p time_t time to a \p crm_time_t time * * \param[in] source Time to convert * * \return A \p crm_time_t object representing \p source */ crm_time_t * pcmk__copy_timet(time_t source) { crm_time_t *target = crm_time_new_undefined(); crm_time_set_timet(target, &source); return target; } crm_time_t * crm_time_add(const crm_time_t *dt, const crm_time_t *value) { crm_time_t *utc = NULL; crm_time_t *answer = NULL; if ((dt == NULL) || (value == NULL)) { errno = EINVAL; return NULL; } answer = pcmk_copy_time(dt); utc = crm_get_utc_time(value); if (utc == NULL) { crm_time_free(answer); return NULL; } answer->years += utc->years; crm_time_add_months(answer, utc->months); crm_time_add_days(answer, utc->days); crm_time_add_seconds(answer, utc->seconds); crm_time_free(utc); return answer; } /*! 
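 * \par Example
 * Combining the parsing and arithmetic helpers above (a sketch; error
 * checking omitted):
 * \code
 * crm_time_t *dt = crm_time_new("2024-02-28 12:00:00");
 * crm_time_t *dur = crm_time_parse_duration("P1D");
 * crm_time_t *tomorrow = crm_time_add(dt, dur); // 2024-02-29 12:00:00
 *
 * crm_time_free(dt);
 * crm_time_free(dur);
 * crm_time_free(tomorrow);
 * \endcode
 */

/*!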
* \internal * \brief Return the XML attribute name corresponding to a time component * * \param[in] component Component to check * * \return XML attribute name corresponding to \p component, or NULL if * \p component is invalid */ const char * pcmk__time_component_attr(enum pcmk__time_component component) { switch (component) { case pcmk__time_years: return PCMK_XA_YEARS; case pcmk__time_months: return PCMK_XA_MONTHS; case pcmk__time_weeks: return PCMK_XA_WEEKS; case pcmk__time_days: return PCMK_XA_DAYS; case pcmk__time_hours: return PCMK_XA_HOURS; case pcmk__time_minutes: return PCMK_XA_MINUTES; case pcmk__time_seconds: return PCMK_XA_SECONDS; default: return NULL; } } typedef void (*component_fn_t)(crm_time_t *, int); /*! * \internal * \brief Get the addition function corresponding to a time component * \param[in] component Component to check * * \return Addition function corresponding to \p component, or NULL if * \p component is invalid */ static component_fn_t component_fn(enum pcmk__time_component component) { switch (component) { case pcmk__time_years: return crm_time_add_years; case pcmk__time_months: return crm_time_add_months; case pcmk__time_weeks: return crm_time_add_weeks; case pcmk__time_days: return crm_time_add_days; case pcmk__time_hours: return crm_time_add_hours; case pcmk__time_minutes: return crm_time_add_minutes; case pcmk__time_seconds: return crm_time_add_seconds; default: return NULL; } } /*! * \internal * \brief Add the value of an XML attribute to a time object * * \param[in,out] t Time object to add to * \param[in] component Component of \p t to add to * \param[in] xml XML with value to add * * \return Standard Pacemaker return code */ int pcmk__add_time_from_xml(crm_time_t *t, enum pcmk__time_component component, const xmlNode *xml) { long long value; const char *attr = pcmk__time_component_attr(component); component_fn_t add = component_fn(component); if ((t == NULL) || (attr == NULL) || (add == NULL)) { return EINVAL; } if (xml == NULL) { return pcmk_rc_ok; } if (pcmk__scan_ll(crm_element_value(xml, attr), &value, 0LL) != pcmk_rc_ok) { return pcmk_rc_unpack_error; } if ((value < INT_MIN) || (value > INT_MAX)) { return ERANGE; } if (value != 0LL) { add(t, (int) value); } return pcmk_rc_ok; } crm_time_t * crm_time_calculate_duration(const crm_time_t *dt, const crm_time_t *value) { crm_time_t *utc = NULL; crm_time_t *answer = NULL; if ((dt == NULL) || (value == NULL)) { errno = EINVAL; return NULL; } utc = crm_get_utc_time(value); if (utc == NULL) { return NULL; } answer = crm_get_utc_time(dt); if (answer == NULL) { crm_time_free(utc); return NULL; } answer->duration = TRUE; answer->years -= utc->years; if(utc->months != 0) { crm_time_add_months(answer, -utc->months); } crm_time_add_days(answer, -utc->days); crm_time_add_seconds(answer, -utc->seconds); crm_time_free(utc); return answer; } crm_time_t * crm_time_subtract(const crm_time_t *dt, const crm_time_t *value) { crm_time_t *utc = NULL; crm_time_t *answer = NULL; if ((dt == NULL) || (value == NULL)) { errno = EINVAL; return NULL; } utc = crm_get_utc_time(value); if (utc == NULL) { return NULL; } answer = pcmk_copy_time(dt); answer->years -= utc->years; if(utc->months != 0) { crm_time_add_months(answer, -utc->months); } crm_time_add_days(answer, -utc->days); crm_time_add_seconds(answer, -utc->seconds); crm_time_free(utc); return answer; } /*! 
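 * \par Example
 * Given rule XML such as <duration id="d1" hours="2" minutes="30"/>, a
 * caller can extend a time object one component at a time (a sketch):
 * \code
 * pcmk__add_time_from_xml(t, pcmk__time_hours, xml);   // t += 2 hours
 * pcmk__add_time_from_xml(t, pcmk__time_minutes, xml); // t += 30 minutes
 * \endcode
 */

/*!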
* \brief Check whether a time object represents a sensible date/time * * \param[in] dt Date/time object to check * * \return \c true if years, days, and seconds are sensible, \c false otherwise */ bool crm_time_check(const crm_time_t *dt) { return (dt != NULL) && (dt->days > 0) && (dt->days <= year_days(dt->years)) && (dt->seconds >= 0) && (dt->seconds < DAY_SECONDS); } #define do_cmp_field(l, r, field) \ if(rc == 0) { \ if(l->field > r->field) { \ crm_trace("%s: %d > %d", \ #field, l->field, r->field); \ rc = 1; \ } else if(l->field < r->field) { \ crm_trace("%s: %d < %d", \ #field, l->field, r->field); \ rc = -1; \ } \ } int crm_time_compare(const crm_time_t *a, const crm_time_t *b) { int rc = 0; crm_time_t *t1 = crm_get_utc_time(a); crm_time_t *t2 = crm_get_utc_time(b); if ((t1 == NULL) && (t2 == NULL)) { rc = 0; } else if (t1 == NULL) { rc = -1; } else if (t2 == NULL) { rc = 1; } else { do_cmp_field(t1, t2, years); do_cmp_field(t1, t2, days); do_cmp_field(t1, t2, seconds); } crm_time_free(t1); crm_time_free(t2); return rc; } /*! * \brief Add a given number of seconds to a date/time or duration * * \param[in,out] a_time Date/time or duration to add seconds to * \param[in] extra Number of seconds to add */ void crm_time_add_seconds(crm_time_t *a_time, int extra) { int days = 0; crm_trace("Adding %d seconds to %d (max=%d)", extra, a_time->seconds, DAY_SECONDS); a_time->seconds += extra; days = a_time->seconds / DAY_SECONDS; a_time->seconds %= DAY_SECONDS; // Don't have negative seconds if (a_time->seconds < 0) { a_time->seconds += DAY_SECONDS; --days; } crm_time_add_days(a_time, days); } void crm_time_add_days(crm_time_t * a_time, int extra) { int lower_bound = 1; int ydays = crm_time_leapyear(a_time->years) ? 366 : 365; crm_trace("Adding %d days to %.4d-%.3d", extra, a_time->years, a_time->days); a_time->days += extra; while (a_time->days > ydays) { a_time->years++; a_time->days -= ydays; ydays = crm_time_leapyear(a_time->years) ? 366 : 365; } if(a_time->duration) { lower_bound = 0; } while (a_time->days < lower_bound) { a_time->years--; a_time->days += crm_time_leapyear(a_time->years) ? 
366 : 365; } } void crm_time_add_months(crm_time_t * a_time, int extra) { int lpc; uint32_t y, m, d, dmax; crm_time_get_gregorian(a_time, &y, &m, &d); crm_trace("Adding %d months to %.4" PRIu32 "-%.2" PRIu32 "-%.2" PRIu32, extra, y, m, d); if (extra > 0) { for (lpc = extra; lpc > 0; lpc--) { m++; if (m == 13) { m = 1; y++; } } } else { for (lpc = -extra; lpc > 0; lpc--) { m--; if (m == 0) { m = 12; y--; } } } dmax = crm_time_days_in_month(m, y); if (dmax < d) { /* Preserve day-of-month unless the month doesn't have enough days */ d = dmax; } crm_trace("Calculated %.4" PRIu32 "-%.2" PRIu32 "-%.2" PRIu32, y, m, d); a_time->years = y; a_time->days = get_ordinal_days(y, m, d); crm_time_get_gregorian(a_time, &y, &m, &d); crm_trace("Got %.4" PRIu32 "-%.2" PRIu32 "-%.2" PRIu32, y, m, d); } void crm_time_add_minutes(crm_time_t * a_time, int extra) { crm_time_add_seconds(a_time, extra * 60); } void crm_time_add_hours(crm_time_t * a_time, int extra) { crm_time_add_seconds(a_time, extra * HOUR_SECONDS); } void crm_time_add_weeks(crm_time_t * a_time, int extra) { crm_time_add_days(a_time, extra * 7); } void crm_time_add_years(crm_time_t * a_time, int extra) { a_time->years += extra; } static void ha_get_tm_time(struct tm *target, const crm_time_t *source) { *target = (struct tm) { .tm_year = source->years - 1900, .tm_mday = source->days, .tm_sec = source->seconds % 60, .tm_min = ( source->seconds / 60 ) % 60, .tm_hour = source->seconds / HOUR_SECONDS, .tm_isdst = -1, /* don't adjust */ #if defined(HAVE_STRUCT_TM_TM_GMTOFF) .tm_gmtoff = source->offset #endif }; mktime(target); } /* The high-resolution variant of time object was added to meet an immediate * need, and is kept internal API. * * @TODO The long-term goal is to come up with a clean, unified design for a * time type (or types) that meets all the various needs, to replace * crm_time_t, pcmk__time_hr_t, and struct timespec (in lrmd_cmd_t). * Using glib's GDateTime is a possibility (if we are willing to require * glib >= 2.26). */ pcmk__time_hr_t * pcmk__time_hr_convert(pcmk__time_hr_t *target, const crm_time_t *dt) { pcmk__time_hr_t *hr_dt = NULL; if (dt) { hr_dt = target; if (hr_dt == NULL) { hr_dt = pcmk__assert_alloc(1, sizeof(pcmk__time_hr_t)); } *hr_dt = (pcmk__time_hr_t) { .years = dt->years, .months = dt->months, .days = dt->days, .seconds = dt->seconds, .offset = dt->offset, .duration = dt->duration }; } return hr_dt; } void pcmk__time_set_hr_dt(crm_time_t *target, const pcmk__time_hr_t *hr_dt) { CRM_ASSERT((hr_dt) && (target)); *target = (crm_time_t) { .years = hr_dt->years, .months = hr_dt->months, .days = hr_dt->days, .seconds = hr_dt->seconds, .offset = hr_dt->offset, .duration = hr_dt->duration }; } /*! 
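 * \par Illustrative usage (editor's sketch; not part of the original patch)
 * \code
 * time_t epoch = 0;
 * pcmk__time_hr_t *now = pcmk__time_hr_now(&epoch);
 *
 * // "%3N" is the subsecond-digits extension handled by pcmk__time_format_hr()
 * char *s = pcmk__time_format_hr("%H:%M:%S.%3N", now);
 *
 * free(s);
 * pcmk__time_hr_free(now);
 * \endcode
 *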
* \internal * \brief Return the current time as a high-resolution time * * \param[out] epoch If not NULL, this will be set to seconds since epoch * * \return Newly allocated high-resolution time set to the current time */ pcmk__time_hr_t * pcmk__time_hr_now(time_t *epoch) { struct timespec tv; crm_time_t dt; pcmk__time_hr_t *hr; qb_util_timespec_from_epoch_get(&tv); if (epoch != NULL) { *epoch = tv.tv_sec; } crm_time_set_timet(&dt, &(tv.tv_sec)); hr = pcmk__time_hr_convert(NULL, &dt); if (hr != NULL) { hr->useconds = tv.tv_nsec / QB_TIME_NS_IN_USEC; } return hr; } pcmk__time_hr_t * pcmk__time_hr_new(const char *date_time) { pcmk__time_hr_t *hr_dt = NULL; if (date_time == NULL) { hr_dt = pcmk__time_hr_now(NULL); } else { crm_time_t *dt; dt = parse_date(date_time); hr_dt = pcmk__time_hr_convert(NULL, dt); crm_time_free(dt); } return hr_dt; } void pcmk__time_hr_free(pcmk__time_hr_t * hr_dt) { free(hr_dt); } char * pcmk__time_format_hr(const char *format, const pcmk__time_hr_t *hr_dt) { #define DATE_LEN_MAX 128 const char *mark_s = NULL; int scanned_pos = 0; int printed_pos = 0; int fmt_pos = 0; size_t date_len = 0; int nano_digits = 0; char nano_s[10] = { '\0', }; char date_s[DATE_LEN_MAX] = { '\0', }; char nanofmt_s[5] = "%"; char *tmp_fmt_s = NULL; struct tm tm = { 0, }; crm_time_t dt = { 0, }; if (!format) { return NULL; } pcmk__time_set_hr_dt(&dt, hr_dt); ha_get_tm_time(&tm, &dt); sprintf(nano_s, "%06d000", hr_dt->useconds); while ((format[scanned_pos]) != '\0') { mark_s = strchr(&format[scanned_pos], '%'); if (mark_s) { int fmt_len = 1; fmt_pos = mark_s - format; while ((format[fmt_pos+fmt_len] != '\0') && (format[fmt_pos+fmt_len] >= '0') && (format[fmt_pos+fmt_len] <= '9')) { fmt_len++; } scanned_pos = fmt_pos + fmt_len + 1; if (format[fmt_pos+fmt_len] == 'N') { nano_digits = atoi(&format[fmt_pos+1]); nano_digits = (nano_digits > 6)?6:nano_digits; nano_digits = (nano_digits < 0)?0:nano_digits; sprintf(&nanofmt_s[1], ".%ds", nano_digits); } else { if (format[scanned_pos] != '\0') { continue; } fmt_pos = scanned_pos; /* print till end */ } } else { scanned_pos = strlen(format); fmt_pos = scanned_pos; /* print till end */ } tmp_fmt_s = strndup(&format[printed_pos], fmt_pos - printed_pos); #ifdef HAVE_FORMAT_NONLITERAL #pragma GCC diagnostic push #pragma GCC diagnostic ignored "-Wformat-nonliteral" #endif date_len += strftime(&date_s[date_len], DATE_LEN_MAX - date_len, tmp_fmt_s, &tm); #ifdef HAVE_FORMAT_NONLITERAL #pragma GCC diagnostic pop #endif printed_pos = scanned_pos; free(tmp_fmt_s); if (nano_digits) { #ifdef HAVE_FORMAT_NONLITERAL #pragma GCC diagnostic push #pragma GCC diagnostic ignored "-Wformat-nonliteral" #endif date_len += snprintf(&date_s[date_len], DATE_LEN_MAX - date_len, nanofmt_s, nano_s); #ifdef HAVE_FORMAT_NONLITERAL #pragma GCC diagnostic pop #endif nano_digits = 0; } } return (date_len == 0)?NULL:strdup(date_s); #undef DATE_LEN_MAX } /*! * \internal * \brief Return a human-friendly string corresponding to an epoch time value * * \param[in] source Pointer to epoch time value (or \p NULL for current time) * \param[in] flags Group of \p crm_time_* flags controlling display format * (0 to use \p ctime() with newline removed) * * \return String representation of \p source on success (may be empty depending * on \p flags; guaranteed not to be \p NULL) * * \note The caller is responsible for freeing the return value using \p free(). */ char * pcmk__epoch2str(const time_t *source, uint32_t flags) { time_t epoch_time = (source == NULL)? 
time(NULL) : *source; if (flags == 0) { return pcmk__str_copy(pcmk__trim(ctime(&epoch_time))); } else { crm_time_t dt; crm_time_set_timet(&dt, &epoch_time); return crm_time_as_string(&dt, flags); } } /*! * \internal * \brief Return a human-friendly string corresponding to seconds-and- * nanoseconds value * * Time is shown with microsecond resolution if \p crm_time_usecs is in \p * flags. * * \param[in] ts Time in seconds and nanoseconds (or \p NULL for current * time) * \param[in] flags Group of \p crm_time_* flags controlling display format * * \return String representation of \p ts on success (may be empty depending on * \p flags; guaranteed not to be \p NULL) * * \note The caller is responsible for freeing the return value using \p free(). */ char * pcmk__timespec2str(const struct timespec *ts, uint32_t flags) { struct timespec tmp_ts; crm_time_t dt; char result[DATE_MAX] = { 0 }; if (ts == NULL) { qb_util_timespec_from_epoch_get(&tmp_ts); ts = &tmp_ts; } crm_time_set_timet(&dt, &ts->tv_sec); time_as_string_common(&dt, ts->tv_nsec / QB_TIME_NS_IN_USEC, flags, result); return pcmk__str_copy(result); } /*! * \internal * \brief Given a millisecond interval, return a log-friendly string * * \param[in] interval_ms Interval in milliseconds * * \return Readable version of \p interval_ms * * \note The return value is a pointer to static memory that will be * overwritten by later calls to this function. */ const char * pcmk__readable_interval(guint interval_ms) { #define MS_IN_S (1000) #define MS_IN_M (MS_IN_S * 60) #define MS_IN_H (MS_IN_M * 60) #define MS_IN_D (MS_IN_H * 24) #define MAXSTR sizeof("..d..h..m..s...ms") static char str[MAXSTR]; int offset = 0; str[0] = '\0'; - if (interval_ms > MS_IN_D) { + if (interval_ms >= MS_IN_D) { offset += snprintf(str + offset, MAXSTR - offset, "%ud", interval_ms / MS_IN_D); interval_ms -= (interval_ms / MS_IN_D) * MS_IN_D; } - if (interval_ms > MS_IN_H) { + if (interval_ms >= MS_IN_H) { offset += snprintf(str + offset, MAXSTR - offset, "%uh", interval_ms / MS_IN_H); interval_ms -= (interval_ms / MS_IN_H) * MS_IN_H; } - if (interval_ms > MS_IN_M) { + if (interval_ms >= MS_IN_M) { offset += snprintf(str + offset, MAXSTR - offset, "%um", interval_ms / MS_IN_M); interval_ms -= (interval_ms / MS_IN_M) * MS_IN_M; } // Ns, N.NNNs, or NNNms - if (interval_ms > MS_IN_S) { + if (interval_ms >= MS_IN_S) { offset += snprintf(str + offset, MAXSTR - offset, "%u", interval_ms / MS_IN_S); interval_ms -= (interval_ms / MS_IN_S) * MS_IN_S; if (interval_ms > 0) { offset += snprintf(str + offset, MAXSTR - offset, ".%03u", interval_ms); } (void) snprintf(str + offset, MAXSTR - offset, "s"); } else if (interval_ms > 0) { (void) snprintf(str + offset, MAXSTR - offset, "%ums", interval_ms); } else if (str[0] == '\0') { strcpy(str, "0s"); } return str; } diff --git a/lib/common/tests/iso8601/pcmk__readable_interval_test.c b/lib/common/tests/iso8601/pcmk__readable_interval_test.c index 43b55410b0..d354975ce2 100644 --- a/lib/common/tests/iso8601/pcmk__readable_interval_test.c +++ b/lib/common/tests/iso8601/pcmk__readable_interval_test.c @@ -1,27 +1,29 @@ /* * Copyright 2021 the Pacemaker project contributors * * The version control history for this file may have further details. * * This source code is licensed under the GNU General Public License version 2 * or later (GPLv2+) WITHOUT ANY WARRANTY. 
*/ #include <crm_internal.h> #include <crm/common/unittest_internal.h> #include <limits.h> static void readable_interval(void **state) { assert_string_equal(pcmk__readable_interval(0), "0s"); + assert_string_equal(pcmk__readable_interval(503), "503ms"); + assert_string_equal(pcmk__readable_interval(3333), "3.333s"); assert_string_equal(pcmk__readable_interval(30000), "30s"); + assert_string_equal(pcmk__readable_interval(61000), "1m1s"); assert_string_equal(pcmk__readable_interval(150000), "2m30s"); - assert_string_equal(pcmk__readable_interval(3333), "3.333s"); assert_string_equal(pcmk__readable_interval(UINT_MAX), "49d17h2m47.295s"); } PCMK__UNIT_TEST(NULL, NULL, cmocka_unit_test(readable_interval)) diff --git a/lib/fencing/Makefile.am b/lib/fencing/Makefile.am index 53020355b2..116b2f4a40 100644 --- a/lib/fencing/Makefile.am +++ b/lib/fencing/Makefile.am @@ -1,32 +1,32 @@ # # Original Author: Sun Jiang Dong # Copyright 2004 International Business Machines # # with later changes copyright 2004-2022 the Pacemaker project contributors. # The version control history for this file may have further details. # # This source code is licensed under the GNU General Public License version 2 # or later (GPLv2+) WITHOUT ANY WARRANTY. # include $(top_srcdir)/mk/common.mk noinst_HEADERS = fencing_private.h lib_LTLIBRARIES = libstonithd.la -libstonithd_la_LDFLAGS = -version-info 34:4:8 +libstonithd_la_LDFLAGS = -version-info 34:5:8 libstonithd_la_CFLAGS = $(CFLAGS_HARDENED_LIB) libstonithd_la_LDFLAGS += $(LDFLAGS_HARDENED_LIB) libstonithd_la_LIBADD = $(top_builddir)/lib/services/libcrmservice.la libstonithd_la_LIBADD += $(top_builddir)/lib/common/libcrmcommon.la ## Library sources (*must* use += format for bumplibs) libstonithd_la_SOURCES = st_actions.c libstonithd_la_SOURCES += st_client.c if BUILD_LHA_SUPPORT libstonithd_la_SOURCES += st_lha.c endif libstonithd_la_SOURCES += st_output.c libstonithd_la_SOURCES += st_rhcs.c diff --git a/lib/lrmd/Makefile.am b/lib/lrmd/Makefile.am index a9b9c6772a..f0bc784312 100644 --- a/lib/lrmd/Makefile.am +++ b/lib/lrmd/Makefile.am @@ -1,26 +1,26 @@ # # Copyright 2012-2023 the Pacemaker project contributors # # The version control history for this file may have further details. # # This source code is licensed under the GNU Lesser General Public License # version 2.1 or later (LGPLv2.1+) WITHOUT ANY WARRANTY. # include $(top_srcdir)/mk/common.mk lib_LTLIBRARIES = liblrmd.la -liblrmd_la_LDFLAGS = -version-info 30:0:2 +liblrmd_la_LDFLAGS = -version-info 31:0:3 liblrmd_la_CFLAGS = $(CFLAGS_HARDENED_LIB) liblrmd_la_LDFLAGS += $(LDFLAGS_HARDENED_LIB) liblrmd_la_LIBADD = $(top_builddir)/lib/fencing/libstonithd.la liblrmd_la_LIBADD += $(top_builddir)/lib/services/libcrmservice.la liblrmd_la_LIBADD += $(top_builddir)/lib/common/libcrmcommon.la ## Library sources (*must* use += format for bumplibs) liblrmd_la_SOURCES = lrmd_alerts.c liblrmd_la_SOURCES += lrmd_client.c liblrmd_la_SOURCES += lrmd_output.c liblrmd_la_SOURCES += proxy_common.c diff --git a/lib/pacemaker/Makefile.am b/lib/pacemaker/Makefile.am index 656e77a5ee..8c1671dd2b 100644 --- a/lib/pacemaker/Makefile.am +++ b/lib/pacemaker/Makefile.am @@ -1,75 +1,75 @@ # # Copyright 2004-2024 the Pacemaker project contributors # # The version control history for this file may have further details. # # This source code is licensed under the GNU General Public License version 2 # or later (GPLv2+) WITHOUT ANY WARRANTY.
# include $(top_srcdir)/mk/common.mk AM_CPPFLAGS += -I$(top_builddir) -I$(top_srcdir) SUBDIRS = tests noinst_HEADERS = libpacemaker_private.h ## libraries lib_LTLIBRARIES = libpacemaker.la -libpacemaker_la_LDFLAGS = -version-info 8:0:7 +libpacemaker_la_LDFLAGS = -version-info 9:0:8 libpacemaker_la_CFLAGS = $(CFLAGS_HARDENED_LIB) libpacemaker_la_LDFLAGS += $(LDFLAGS_HARDENED_LIB) libpacemaker_la_LIBADD = $(top_builddir)/lib/pengine/libpe_status.la libpacemaker_la_LIBADD += $(top_builddir)/lib/cib/libcib.la libpacemaker_la_LIBADD += $(top_builddir)/lib/lrmd/liblrmd.la libpacemaker_la_LIBADD += $(top_builddir)/lib/fencing/libstonithd.la libpacemaker_la_LIBADD += $(top_builddir)/lib/services/libcrmservice.la libpacemaker_la_LIBADD += $(top_builddir)/lib/common/libcrmcommon.la # -L$(top_builddir)/lib/pils -lpils -export-dynamic -module -avoid-version ## Library sources (*must* use += format for bumplibs) libpacemaker_la_SOURCES = libpacemaker_la_SOURCES += pcmk_acl.c libpacemaker_la_SOURCES += pcmk_agents.c libpacemaker_la_SOURCES += pcmk_cluster_queries.c libpacemaker_la_SOURCES += pcmk_fence.c libpacemaker_la_SOURCES += pcmk_graph_consumer.c libpacemaker_la_SOURCES += pcmk_graph_logging.c libpacemaker_la_SOURCES += pcmk_graph_producer.c libpacemaker_la_SOURCES += pcmk_injections.c libpacemaker_la_SOURCES += pcmk_options.c libpacemaker_la_SOURCES += pcmk_output.c libpacemaker_la_SOURCES += pcmk_resource.c libpacemaker_la_SOURCES += pcmk_result_code.c libpacemaker_la_SOURCES += pcmk_rule.c libpacemaker_la_SOURCES += pcmk_sched_actions.c libpacemaker_la_SOURCES += pcmk_sched_bundle.c libpacemaker_la_SOURCES += pcmk_sched_clone.c libpacemaker_la_SOURCES += pcmk_sched_colocation.c libpacemaker_la_SOURCES += pcmk_sched_constraints.c libpacemaker_la_SOURCES += pcmk_sched_fencing.c libpacemaker_la_SOURCES += pcmk_sched_group.c libpacemaker_la_SOURCES += pcmk_sched_instances.c libpacemaker_la_SOURCES += pcmk_sched_location.c libpacemaker_la_SOURCES += pcmk_sched_migration.c libpacemaker_la_SOURCES += pcmk_sched_nodes.c libpacemaker_la_SOURCES += pcmk_sched_ordering.c libpacemaker_la_SOURCES += pcmk_sched_primitive.c libpacemaker_la_SOURCES += pcmk_sched_probes.c libpacemaker_la_SOURCES += pcmk_sched_promotable.c libpacemaker_la_SOURCES += pcmk_sched_recurring.c libpacemaker_la_SOURCES += pcmk_sched_remote.c libpacemaker_la_SOURCES += pcmk_sched_resource.c libpacemaker_la_SOURCES += pcmk_sched_tickets.c libpacemaker_la_SOURCES += pcmk_sched_utilization.c libpacemaker_la_SOURCES += pcmk_scheduler.c libpacemaker_la_SOURCES += pcmk_setup.c libpacemaker_la_SOURCES += pcmk_simulate.c libpacemaker_la_SOURCES += pcmk_status.c libpacemaker_la_SOURCES += pcmk_ticket.c libpacemaker_la_SOURCES += pcmk_verify.c diff --git a/lib/pacemaker/pcmk_sched_recurring.c b/lib/pacemaker/pcmk_sched_recurring.c index 417b879496..6a861b7958 100644 --- a/lib/pacemaker/pcmk_sched_recurring.c +++ b/lib/pacemaker/pcmk_sched_recurring.c @@ -1,745 +1,747 @@ /* * Copyright 2004-2024 the Pacemaker project contributors * * The version control history for this file may have further details. * * This source code is licensed under the GNU General Public License version 2 * or later (GPLv2+) WITHOUT ANY WARRANTY. 
*/ #include #include #include #include #include #include "libpacemaker_private.h" // Information parsed from an operation history entry in the CIB struct op_history { // XML attributes const char *id; // ID of history entry const char *name; // Action name // Parsed information char *key; // Operation key for action enum rsc_role_e role; // Action role (or pcmk_role_unknown for default) guint interval_ms; // Action interval }; /*! * \internal * \brief Parse an interval from XML * * \param[in] xml XML containing an interval attribute * * \return Interval parsed from XML (or 0 as default) */ static guint xe_interval(const xmlNode *xml) { guint interval_ms = 0U; pcmk_parse_interval_spec(crm_element_value(xml, PCMK_META_INTERVAL), &interval_ms); return interval_ms; } /*! * \internal * \brief Check whether an operation exists multiple times in resource history * * \param[in] rsc Resource with history to search * \param[in] name Name of action to search for * \param[in] interval_ms Interval (in milliseconds) of action to search for * * \return true if an operation with \p name and \p interval_ms exists more than * once in the operation history of \p rsc, otherwise false */ static bool is_op_dup(const pcmk_resource_t *rsc, const char *name, guint interval_ms) { const char *id = NULL; for (xmlNode *op = pcmk__xe_first_child(rsc->ops_xml, PCMK_XE_OP, NULL, NULL); op != NULL; op = pcmk__xe_next_same(op)) { // Check whether action name and interval match if (!pcmk__str_eq(crm_element_value(op, PCMK_XA_NAME), name, pcmk__str_none) || (xe_interval(op) != interval_ms)) { continue; } if (pcmk__xe_id(op) == NULL) { continue; // Shouldn't be possible } if (id == NULL) { id = pcmk__xe_id(op); // First matching op } else { pcmk__config_err("Operation %s is duplicate of %s (do not use " "same name and interval combination more " "than once per resource)", pcmk__xe_id(op), id); return true; } } return false; } /*! * \internal * \brief Check whether an action name is one that can be recurring * * \param[in] name Action name to check * * \return true if \p name is an action known to be unsuitable as a recurring * operation, otherwise false * * \note Pacemaker's current philosophy is to allow users to configure recurring * operations except for a short list of actions known not to be suitable * for that (as opposed to allowing only actions known to be suitable, * which includes only monitor). Among other things, this approach allows * users to define their own custom operations and make them recurring, * though that use case is not well tested. */ static bool op_cannot_recur(const char *name) { return pcmk__str_any_of(name, PCMK_ACTION_STOP, PCMK_ACTION_START, PCMK_ACTION_DEMOTE, PCMK_ACTION_PROMOTE, PCMK_ACTION_RELOAD_AGENT, PCMK_ACTION_MIGRATE_TO, PCMK_ACTION_MIGRATE_FROM, NULL); } /*! 
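 * \par Example (editor's illustration; not part of the original patch)
 * An operation entry such as
 * \code
 * <op id="myrsc-monitor-10s" name="monitor" interval="10s"/>
 * \endcode
 * parses as recurring here, while an entry with interval="0" (a probe or
 * one-shot action) is rejected by the interval check.
 *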
* \internal * \brief Check whether a resource history entry is for a recurring action * * \param[in] rsc Resource that history entry is for * \param[in] xml XML of resource history entry to check * \param[out] op Where to store parsed info if recurring * * \return true if \p xml is for a recurring action, otherwise false */ static bool is_recurring_history(const pcmk_resource_t *rsc, const xmlNode *xml, struct op_history *op) { const char *role = NULL; op->interval_ms = xe_interval(xml); if (op->interval_ms == 0) { return false; // Not recurring } op->id = pcmk__xe_id(xml); if (pcmk__str_empty(op->id)) { pcmk__config_err("Ignoring resource history entry without ID"); return false; // Shouldn't be possible (unless CIB was manually edited) } op->name = crm_element_value(xml, PCMK_XA_NAME); if (op_cannot_recur(op->name)) { pcmk__config_err("Ignoring %s because %s action cannot be recurring", op->id, pcmk__s(op->name, "unnamed")); return false; } // There should only be one recurring operation per action/interval if (is_op_dup(rsc, op->name, op->interval_ms)) { return false; } // Ensure role is valid if specified role = crm_element_value(xml, PCMK_XA_ROLE); if (role == NULL) { op->role = pcmk_role_unknown; } else { op->role = pcmk_parse_role(role); if (op->role == pcmk_role_unknown) { pcmk__config_err("Ignoring %s role because %s is not a valid role", op->id, role); return false; } } // Only actions that are still configured and enabled matter if (pcmk__find_action_config(rsc, op->name, op->interval_ms, false) == NULL) { pcmk__rsc_trace(rsc, "Ignoring %s (%s-interval %s for %s) because it is " "disabled or no longer in configuration", op->id, pcmk__readable_interval(op->interval_ms), op->name, rsc->id); return false; } op->key = pcmk__op_key(rsc->id, op->name, op->interval_ms); return true; } /*! * \internal * \brief Check whether a recurring action for an active role should be optional * * \param[in] rsc Resource that recurring action is for * \param[in] node Node that \p rsc will be active on (if any) * \param[in] key Operation key for recurring action to check * \param[in,out] start Start action for \p rsc * * \return true if recurring action should be optional, otherwise false */ static bool active_recurring_should_be_optional(const pcmk_resource_t *rsc, const pcmk_node_t *node, const char *key, pcmk_action_t *start) { GList *possible_matches = NULL; if (node == NULL) { // Should only be possible if unmanaged and stopped pcmk__rsc_trace(rsc, "%s will be mandatory because resource is unmanaged", key); return false; } if (!pcmk_is_set(rsc->cmds->action_flags(start, NULL), pcmk_action_optional)) { pcmk__rsc_trace(rsc, "%s will be mandatory because %s is", key, start->uuid); return false; } possible_matches = find_actions_exact(rsc->actions, key, node); if (possible_matches == NULL) { pcmk__rsc_trace(rsc, "%s will be mandatory because it is not active on %s", key, pcmk__node_name(node)); return false; } for (const GList *iter = possible_matches; iter != NULL; iter = iter->next) { const pcmk_action_t *op = (const pcmk_action_t *) iter->data; if (pcmk_is_set(op->flags, pcmk_action_reschedule)) { pcmk__rsc_trace(rsc, "%s will be mandatory because " "it needs to be rescheduled", key); g_list_free(possible_matches); return false; } } g_list_free(possible_matches); return true; } /*! 
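 * \par Example (editor's illustration; not part of the original patch)
 * \code
 * <op id="db-monitor-11s" name="monitor" interval="11s" role="Promoted"/>
 * \endcode
 * yields a recurring monitor only where the resource will be promoted; if
 * the configured role does not match the resource's next role, any already
 * active instance of the monitor is cancelled instead.
 *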
* \internal * \brief Create recurring action from resource history entry for an active role * * \param[in,out] rsc Resource that resource history is for * \param[in,out] start Start action for \p rsc on \p node * \param[in] node Node that resource will be active on (if any) * \param[in] op Resource history entry */ static void recurring_op_for_active(pcmk_resource_t *rsc, pcmk_action_t *start, const pcmk_node_t *node, const struct op_history *op) { pcmk_action_t *mon = NULL; bool is_optional = true; bool role_match = false; enum rsc_role_e monitor_role = op->role; // We're only interested in recurring actions for active roles if (monitor_role == pcmk_role_stopped) { return; } is_optional = active_recurring_should_be_optional(rsc, node, op->key, start); // Check whether monitor's role matches role resource will have if (monitor_role == pcmk_role_unknown) { monitor_role = pcmk_role_unpromoted; role_match = (rsc->next_role != pcmk_role_promoted); } else { role_match = (rsc->next_role == monitor_role); } if (!role_match) { if (is_optional) { // It's running, so cancel it char *after_key = NULL; pcmk_action_t *cancel_op = pcmk__new_cancel_action(rsc, op->name, op->interval_ms, node); switch (rsc->role) { case pcmk_role_unpromoted: case pcmk_role_started: if (rsc->next_role == pcmk_role_promoted) { after_key = promote_key(rsc); } else if (rsc->next_role == pcmk_role_stopped) { after_key = stop_key(rsc); } break; case pcmk_role_promoted: after_key = demote_key(rsc); break; default: break; } if (after_key) { pcmk__new_ordering(rsc, NULL, cancel_op, rsc, after_key, NULL, pcmk__ar_unrunnable_first_blocks, rsc->cluster); } } do_crm_log((is_optional? LOG_INFO : LOG_TRACE), "%s recurring action %s because %s configured for %s role " "(not %s)", (is_optional? "Cancelling" : "Ignoring"), op->key, op->id, pcmk_role_text(monitor_role), pcmk_role_text(rsc->next_role)); return; } pcmk__rsc_trace(rsc, "Creating %s recurring action %s for %s (%s %s on %s)", (is_optional? 
"optional" : "mandatory"), op->key, op->id, rsc->id, pcmk_role_text(rsc->next_role), pcmk__node_name(node)); mon = custom_action(rsc, strdup(op->key), op->name, node, is_optional, rsc->cluster); if (!pcmk_is_set(start->flags, pcmk_action_runnable)) { pcmk__rsc_trace(rsc, "%s is unrunnable because start is", mon->uuid); pcmk__clear_action_flags(mon, pcmk_action_runnable); } else if ((node == NULL) || !node->details->online || node->details->unclean) { pcmk__rsc_trace(rsc, "%s is unrunnable because no node is available", mon->uuid); pcmk__clear_action_flags(mon, pcmk_action_runnable); } else if (!pcmk_is_set(mon->flags, pcmk_action_optional)) { pcmk__rsc_info(rsc, "Start %s-interval %s for %s on %s", pcmk__readable_interval(op->interval_ms), mon->task, rsc->id, pcmk__node_name(node)); } if (rsc->next_role == pcmk_role_promoted) { pe__add_action_expected_result(mon, CRM_EX_PROMOTED); } // Order monitor relative to other actions if ((node == NULL) || pcmk_is_set(rsc->flags, pcmk_rsc_managed)) { pcmk__new_ordering(rsc, start_key(rsc), NULL, NULL, strdup(mon->uuid), mon, pcmk__ar_first_implies_then |pcmk__ar_unrunnable_first_blocks, rsc->cluster); pcmk__new_ordering(rsc, reload_key(rsc), NULL, NULL, strdup(mon->uuid), mon, pcmk__ar_first_implies_then |pcmk__ar_unrunnable_first_blocks, rsc->cluster); if (rsc->next_role == pcmk_role_promoted) { pcmk__new_ordering(rsc, promote_key(rsc), NULL, rsc, NULL, mon, pcmk__ar_ordered |pcmk__ar_unrunnable_first_blocks, rsc->cluster); } else if (rsc->role == pcmk_role_promoted) { pcmk__new_ordering(rsc, demote_key(rsc), NULL, rsc, NULL, mon, pcmk__ar_ordered |pcmk__ar_unrunnable_first_blocks, rsc->cluster); } } } /*! * \internal * \brief Cancel a recurring action if running on a node * * \param[in,out] rsc Resource that action is for * \param[in] node Node to cancel action on * \param[in] key Operation key for action * \param[in] name Action name * \param[in] interval_ms Action interval (in milliseconds) */ static void cancel_if_running(pcmk_resource_t *rsc, const pcmk_node_t *node, const char *key, const char *name, guint interval_ms) { GList *possible_matches = find_actions_exact(rsc->actions, key, node); pcmk_action_t *cancel_op = NULL; if (possible_matches == NULL) { return; // Recurring action isn't running on this node } g_list_free(possible_matches); cancel_op = pcmk__new_cancel_action(rsc, name, interval_ms, node); switch (rsc->next_role) { case pcmk_role_started: case pcmk_role_unpromoted: /* Order starts after cancel. If the current role is * stopped, this cancels the monitor before the resource * starts; if the current role is started, then this cancels * the monitor on a migration target before starting there. */ pcmk__new_ordering(rsc, NULL, cancel_op, rsc, start_key(rsc), NULL, pcmk__ar_unrunnable_first_blocks, rsc->cluster); break; default: break; } pcmk__rsc_info(rsc, "Cancelling %s-interval %s action for %s on %s because " "configured for " PCMK_ROLE_STOPPED " role (not %s)", pcmk__readable_interval(interval_ms), name, rsc->id, pcmk__node_name(node), pcmk_role_text(rsc->next_role)); } /*! 
* \internal * \brief Order an action after all probes of a resource on a node * * \param[in,out] rsc Resource to check for probes * \param[in] node Node to check for probes of \p rsc * \param[in,out] action Action to order after probes of \p rsc on \p node */ static void order_after_probes(pcmk_resource_t *rsc, const pcmk_node_t *node, pcmk_action_t *action) { GList *probes = pe__resource_actions(rsc, node, PCMK_ACTION_MONITOR, FALSE); for (GList *iter = probes; iter != NULL; iter = iter->next) { order_actions((pcmk_action_t *) iter->data, action, pcmk__ar_unrunnable_first_blocks); } g_list_free(probes); } /*! * \internal * \brief Order an action after all stops of a resource on a node * * \param[in,out] rsc Resource to check for stops * \param[in] node Node to check for stops of \p rsc * \param[in,out] action Action to order after stops of \p rsc on \p node */ static void order_after_stops(pcmk_resource_t *rsc, const pcmk_node_t *node, pcmk_action_t *action) { GList *stop_ops = pe__resource_actions(rsc, node, PCMK_ACTION_STOP, TRUE); for (GList *iter = stop_ops; iter != NULL; iter = iter->next) { pcmk_action_t *stop = (pcmk_action_t *) iter->data; if (!pcmk_is_set(stop->flags, pcmk_action_optional) && !pcmk_is_set(action->flags, pcmk_action_optional) && !pcmk_is_set(rsc->flags, pcmk_rsc_managed)) { pcmk__rsc_trace(rsc, "%s optional on %s: unmanaged", action->uuid, pcmk__node_name(node)); pcmk__set_action_flags(action, pcmk_action_optional); } if (!pcmk_is_set(stop->flags, pcmk_action_runnable)) { crm_debug("%s unrunnable on %s: stop is unrunnable", action->uuid, pcmk__node_name(node)); pcmk__clear_action_flags(action, pcmk_action_runnable); } if (pcmk_is_set(rsc->flags, pcmk_rsc_managed)) { pcmk__new_ordering(rsc, stop_key(rsc), stop, NULL, NULL, action, pcmk__ar_first_implies_then |pcmk__ar_unrunnable_first_blocks, rsc->cluster); } } g_list_free(stop_ops); } /*! * \internal * \brief Create recurring action from resource history entry for inactive role * * \param[in,out] rsc Resource that resource history is for * \param[in] node Node that resource will be active on (if any) * \param[in] op Resource history entry */ static void recurring_op_for_inactive(pcmk_resource_t *rsc, const pcmk_node_t *node, const struct op_history *op) { GList *possible_matches = NULL; // We're only interested in recurring actions for the inactive role if (op->role != pcmk_role_stopped) { return; } if (!pcmk_is_set(rsc->flags, pcmk_rsc_unique)) { crm_notice("Ignoring %s (recurring monitors for " PCMK_ROLE_STOPPED " role are not supported for anonymous clones)", op->id); return; // @TODO add support } pcmk__rsc_trace(rsc, "Creating recurring action %s for %s on nodes " "where it should not be running", op->id, rsc->id); for (GList *iter = rsc->cluster->nodes; iter != NULL; iter = iter->next) { pcmk_node_t *stop_node = (pcmk_node_t *) iter->data; bool is_optional = true; pcmk_action_t *stopped_mon = NULL; // Cancel action on node where resource will be active if ((node != NULL) && pcmk__str_eq(stop_node->details->uname, node->details->uname, pcmk__str_casei)) { cancel_if_running(rsc, node, op->key, op->name, op->interval_ms); continue; } // Recurring action on this node is optional if it's already active here possible_matches = find_actions_exact(rsc->actions, op->key, stop_node); is_optional = (possible_matches != NULL); g_list_free(possible_matches); pcmk__rsc_trace(rsc, "Creating %s recurring action %s for %s (%s " PCMK_ROLE_STOPPED " on %s)", (is_optional? 
"optional" : "mandatory"), op->key, op->id, rsc->id, pcmk__node_name(stop_node)); stopped_mon = custom_action(rsc, strdup(op->key), op->name, stop_node, is_optional, rsc->cluster); pe__add_action_expected_result(stopped_mon, CRM_EX_NOT_RUNNING); if (pcmk_is_set(rsc->flags, pcmk_rsc_managed)) { order_after_probes(rsc, stop_node, stopped_mon); } /* The recurring action is for the inactive role, so it shouldn't be * performed until the resource is inactive. */ order_after_stops(rsc, stop_node, stopped_mon); if (!stop_node->details->online || stop_node->details->unclean) { pcmk__rsc_debug(rsc, "%s unrunnable on %s: node unavailable)", stopped_mon->uuid, pcmk__node_name(stop_node)); pcmk__clear_action_flags(stopped_mon, pcmk_action_runnable); } if (pcmk_is_set(stopped_mon->flags, pcmk_action_runnable) && !pcmk_is_set(stopped_mon->flags, pcmk_action_optional)) { crm_notice("Start recurring %s-interval %s for " PCMK_ROLE_STOPPED " %s on %s", pcmk__readable_interval(op->interval_ms), stopped_mon->task, rsc->id, pcmk__node_name(stop_node)); } } } /*! * \internal * \brief Create recurring actions for a resource * * \param[in,out] rsc Resource to create recurring actions for */ void pcmk__create_recurring_actions(pcmk_resource_t *rsc) { pcmk_action_t *start = NULL; if (pcmk_is_set(rsc->flags, pcmk_rsc_blocked)) { pcmk__rsc_trace(rsc, "Skipping recurring actions for blocked resource %s", rsc->id); return; } if (pcmk_is_set(rsc->flags, pcmk_rsc_maintenance)) { pcmk__rsc_trace(rsc, "Skipping recurring actions for %s " "in maintenance mode", rsc->id); return; } if (rsc->allocated_to == NULL) { // Recurring actions for active roles not needed } else if (rsc->allocated_to->details->maintenance) { pcmk__rsc_trace(rsc, "Skipping recurring actions for %s on %s " "in maintenance mode", rsc->id, pcmk__node_name(rsc->allocated_to)); } else if ((rsc->next_role != pcmk_role_stopped) || !pcmk_is_set(rsc->flags, pcmk_rsc_managed)) { // Recurring actions for active roles needed start = start_action(rsc, rsc->allocated_to, TRUE); } pcmk__rsc_trace(rsc, "Creating any recurring actions needed for %s", rsc->id); for (xmlNode *op = pcmk__xe_first_child(rsc->ops_xml, PCMK_XE_OP, NULL, NULL); op != NULL; op = pcmk__xe_next_same(op)) { struct op_history op_history = { NULL, }; if (!is_recurring_history(rsc, op, &op_history)) { continue; } if (start != NULL) { recurring_op_for_active(rsc, start, rsc->allocated_to, &op_history); } recurring_op_for_inactive(rsc, rsc->allocated_to, &op_history); free(op_history.key); } } /*! * \internal * \brief Create an executor cancel action * * \param[in,out] rsc Resource of action to cancel * \param[in] task Name of action to cancel * \param[in] interval_ms Interval of action to cancel * \param[in] node Node of action to cancel * * \return Created op */ pcmk_action_t * pcmk__new_cancel_action(pcmk_resource_t *rsc, const char *task, guint interval_ms, const pcmk_node_t *node) { pcmk_action_t *cancel_op = NULL; char *key = NULL; char *interval_ms_s = NULL; CRM_ASSERT((rsc != NULL) && (task != NULL) && (node != NULL)); - // @TODO dangerous if possible to schedule another action with this key key = pcmk__op_key(rsc->id, task, interval_ms); + /* This finds an existing action by key, so custom_action() does not change + * cancel_op->task. 
+ */ cancel_op = custom_action(rsc, key, PCMK_ACTION_CANCEL, node, FALSE, rsc->cluster); - cancel_op->task = pcmk__str_copy(PCMK_ACTION_CANCEL); - cancel_op->cancel_task = pcmk__str_copy(task); + pcmk__str_update(&(cancel_op->task), PCMK_ACTION_CANCEL); + pcmk__str_update(&(cancel_op->cancel_task), task); interval_ms_s = crm_strdup_printf("%u", interval_ms); pcmk__insert_meta(cancel_op, PCMK_XA_OPERATION, task); pcmk__insert_meta(cancel_op, PCMK_META_INTERVAL, interval_ms_s); free(interval_ms_s); return cancel_op; } /*! * \internal * \brief Schedule cancellation of a recurring action * * \param[in,out] rsc Resource that action is for * \param[in] call_id Action's call ID from history * \param[in] task Action name * \param[in] interval_ms Action interval * \param[in] node Node that history entry is for * \param[in] reason Short description of why action is cancelled */ void pcmk__schedule_cancel(pcmk_resource_t *rsc, const char *call_id, const char *task, guint interval_ms, const pcmk_node_t *node, const char *reason) { pcmk_action_t *cancel = NULL; CRM_CHECK((rsc != NULL) && (task != NULL) && (node != NULL) && (reason != NULL), return); crm_info("Recurring %s-interval %s for %s will be stopped on %s: %s", pcmk__readable_interval(interval_ms), task, rsc->id, pcmk__node_name(node), reason); cancel = pcmk__new_cancel_action(rsc, task, interval_ms, node); pcmk__insert_meta(cancel, PCMK__XA_CALL_ID, call_id); // Cancellations happen after stops pcmk__new_ordering(rsc, stop_key(rsc), NULL, rsc, NULL, cancel, pcmk__ar_ordered, rsc->cluster); } /*! * \internal * \brief Create a recurring action marked as needing rescheduling if active * * \param[in,out] rsc Resource that action is for * \param[in] task Name of action being rescheduled * \param[in] interval_ms Action interval (in milliseconds) * \param[in,out] node Node where action should be rescheduled */ void pcmk__reschedule_recurring(pcmk_resource_t *rsc, const char *task, guint interval_ms, pcmk_node_t *node) { pcmk_action_t *op = NULL; trigger_unfencing(rsc, node, "Device parameters changed (reschedule)", NULL, rsc->cluster); op = custom_action(rsc, pcmk__op_key(rsc->id, task, interval_ms), task, node, TRUE, rsc->cluster); pcmk__set_action_flags(op, pcmk_action_reschedule); } /*! * \internal * \brief Check whether an action is recurring * * \param[in] action Action to check * * \return true if \p action has a nonzero interval, otherwise false */ bool pcmk__action_is_recurring(const pcmk_action_t *action) { guint interval_ms = 0; if (pcmk__guint_from_hash(action->meta, PCMK_META_INTERVAL, 0, &interval_ms) != pcmk_rc_ok) { return false; } return (interval_ms > 0); } diff --git a/lib/pengine/Makefile.am b/lib/pengine/Makefile.am index cefdf1106a..2bb50da581 100644 --- a/lib/pengine/Makefile.am +++ b/lib/pengine/Makefile.am @@ -1,82 +1,82 @@ # # Copyright 2004-2024 the Pacemaker project contributors # # The version control history for this file may have further details. # # This source code is licensed under the GNU General Public License version 2 # or later (GPLv2+) WITHOUT ANY WARRANTY. # include $(top_srcdir)/mk/common.mk # Without "." here, check-recursive will run through the subdirectories first # and then run "make check" here. This will fail, because there's things in # the subdirectories that need check_LTLIBRARIES built first. Adding "." here # changes the order so the subdirectories are processed afterwards. SUBDIRS = . 
tests ## libraries lib_LTLIBRARIES = libpe_rules.la \ libpe_status.la check_LTLIBRARIES = libpe_status_test.la noinst_HEADERS = pe_status_private.h -libpe_rules_la_LDFLAGS = -version-info 30:1:4 +libpe_rules_la_LDFLAGS = -version-info 30:2:4 libpe_rules_la_CFLAGS = $(CFLAGS_HARDENED_LIB) libpe_rules_la_LDFLAGS += $(LDFLAGS_HARDENED_LIB) libpe_rules_la_LIBADD = $(top_builddir)/lib/common/libcrmcommon.la ## Library sources (*must* use += format for bumplibs) libpe_rules_la_SOURCES = common.c libpe_rules_la_SOURCES += rules.c libpe_rules_la_SOURCES += rules_alerts.c -libpe_status_la_LDFLAGS = -version-info 35:0:7 +libpe_status_la_LDFLAGS = -version-info 35:1:7 libpe_status_la_CFLAGS = $(CFLAGS_HARDENED_LIB) libpe_status_la_LDFLAGS += $(LDFLAGS_HARDENED_LIB) libpe_status_la_LIBADD = $(top_builddir)/lib/common/libcrmcommon.la ## Library sources (*must* use += format for bumplibs) libpe_status_la_SOURCES = libpe_status_la_SOURCES += bundle.c libpe_status_la_SOURCES += clone.c libpe_status_la_SOURCES += common.c libpe_status_la_SOURCES += complex.c libpe_status_la_SOURCES += failcounts.c libpe_status_la_SOURCES += group.c libpe_status_la_SOURCES += native.c libpe_status_la_SOURCES += pe_actions.c libpe_status_la_SOURCES += pe_health.c libpe_status_la_SOURCES += pe_digest.c libpe_status_la_SOURCES += pe_notif.c libpe_status_la_SOURCES += pe_output.c libpe_status_la_SOURCES += remote.c libpe_status_la_SOURCES += rules.c libpe_status_la_SOURCES += status.c libpe_status_la_SOURCES += tags.c libpe_status_la_SOURCES += unpack.c libpe_status_la_SOURCES += utils.c # # libpe_status_test is only used with unit tests, so we can # mock system calls. See lib/common/mock.c for details. # include $(top_srcdir)/mk/tap.mk libpe_status_test_la_SOURCES = $(libpe_status_la_SOURCES) libpe_status_test_la_LDFLAGS = $(libpe_status_la_LDFLAGS) \ -rpath $(libdir) \ $(LDFLAGS_WRAP) # See comments on libcrmcommon_test_la in lib/common/Makefile.am regarding these flags. libpe_status_test_la_CFLAGS = $(libpe_status_la_CFLAGS) \ -DPCMK__UNIT_TESTING \ -fno-builtin \ -fno-inline libpe_status_test_la_LIBADD = $(top_builddir)/lib/common/libcrmcommon_test.la \ -lcmocka \ -lm diff --git a/lib/services/Makefile.am b/lib/services/Makefile.am index 5a1900331f..69c8a2cb73 100644 --- a/lib/services/Makefile.am +++ b/lib/services/Makefile.am @@ -1,42 +1,42 @@ # # Copyright 2012-2023 the Pacemaker project contributors # # The version control history for this file may have further details. # # This source code is licensed under the GNU Lesser General Public License # version 2.1 or later (LGPLv2.1+) WITHOUT ANY WARRANTY. 
# MAINTAINERCLEANFILES = Makefile.in AM_CPPFLAGS = -I$(top_srcdir)/include lib_LTLIBRARIES = libcrmservice.la noinst_HEADERS = $(wildcard *.h) -libcrmservice_la_LDFLAGS = -version-info 32:0:4 +libcrmservice_la_LDFLAGS = -version-info 32:1:4 libcrmservice_la_CFLAGS = libcrmservice_la_CFLAGS += $(CFLAGS_HARDENED_LIB) libcrmservice_la_LDFLAGS += $(LDFLAGS_HARDENED_LIB) libcrmservice_la_LIBADD = $(top_builddir)/lib/common/libcrmcommon.la \ $(DBUS_LIBS) ## Library sources (*must* use += format for bumplibs) libcrmservice_la_SOURCES = services.c libcrmservice_la_SOURCES += services_linux.c libcrmservice_la_SOURCES += services_lsb.c libcrmservice_la_SOURCES += services_ocf.c if BUILD_DBUS libcrmservice_la_SOURCES += dbus.c endif if BUILD_UPSTART libcrmservice_la_SOURCES += upstart.c endif if BUILD_SYSTEMD libcrmservice_la_SOURCES += systemd.c endif if BUILD_NAGIOS libcrmservice_la_SOURCES += services_nagios.c endif diff --git a/m4/version.m4 b/m4/version.m4 index f469bba1d3..0e9f1e4820 100644 --- a/m4/version.m4 +++ b/m4/version.m4 @@ -1,2 +1,2 @@ -m4_define([VERSION_NUMBER], [2.1.7]) +m4_define([VERSION_NUMBER], [2.1.8]) m4_define([PCMK_URL], [https://ClusterLabs.org/pacemaker/]) diff --git a/po/zh_CN.po b/po/zh_CN.po index 6a291b2c02..d79554c3d5 100644 --- a/po/zh_CN.po +++ b/po/zh_CN.po @@ -1,1221 +1,1452 @@ # # Copyright 2003-2024 the Pacemaker project contributors # # The version control history for this file may have further details. # # This source code is licensed under the GNU Lesser General Public License # version 2.1 or later (LGPLv2.1+) WITHOUT ANY WARRANTY. # #, fuzzy msgid "" msgstr "" "Project-Id-Version: Pacemaker 2\n" "Report-Msgid-Bugs-To: developers@clusterlabs.org\n" -"POT-Creation-Date: 2024-03-27 16:47+0800\n" +"POT-Creation-Date: 2024-05-13 16:10-0500\n" "PO-Revision-Date: 2021-11-08 11:04+0800\n" "Last-Translator: Vivi \n" "Language-Team: CHINESE \n" "Language: zh_CN\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" -#: daemons/fenced/pacemaker-fenced.c:500 -#, fuzzy -msgid "An alternate parameter to supply instead of 'port'" -msgstr "用于替代 'port' 的其它参数" +#: daemons/fenced/pacemaker-fenced.c:498 +msgid "Instance attributes available for all \"stonith\"-class resources" +msgstr " 可用于所有stonith类资源的实例属性" -#: daemons/fenced/pacemaker-fenced.c:501 -#, fuzzy +#: daemons/fenced/pacemaker-fenced.c:500 msgid "" -"Some devices do not support the standard 'port' parameter or may provide " -"additional ones. Use this to specify an alternate, device-specific, " -"parameter that should indicate the machine to be fenced. A value of \"none\" " -"can be used to tell the cluster not to supply any additional parameters." +"Instance attributes available for all \"stonith\"-class resources and used " +"by Pacemaker's fence daemon, formerly known as stonithd" msgstr "" -"一些设备不支持标准的 'port' 参数, 或者可能会提供其它的参数. 使用此选项可指定" -"一个替代的, 该设备专用的参数, 该参数应该指出需要 fence 的机器. 可以使用 " -"\"none\" 值用于告诉集群不要提供任何其它的参数. 
" +" 可用于所有stonith类资源的实例属性,并由Pacemaker的fence守护程序使用(以前称" +"为stonithd)" #: daemons/fenced/pacemaker-fenced.c:511 +msgid "Deprecated (will be removed in a future release)" +msgstr "已弃用(将在未来版本中删除)" + +#: daemons/fenced/pacemaker-fenced.c:514 +msgid "Intended for use in regression testing only" +msgstr "仅适用于回归测试" + +#: daemons/fenced/pacemaker-fenced.c:517 +msgid "Send logs to the additional named logfile" +msgstr "将日志发送到其他命名日志文件" + +#: lib/common/options.c:57 +msgid "Pacemaker version on cluster node elected Designated Controller (DC)" +msgstr "集群选定的控制器节点(DC)的 Pacemaker 版本" + +#: lib/common/options.c:59 #, fuzzy msgid "" -"A mapping of node names to port numbers for devices that do not support node " -"names." -msgstr "为不支持主机名的设备提供主机名到端口号的映射. " +"Includes a hash which identifies the exact revision the code was built from. " +"Used for diagnostic purposes." +msgstr "它包含一个标识所构建代码修订版本的哈希值. 其可用于诊断." -#: daemons/fenced/pacemaker-fenced.c:513 +#: lib/common/options.c:66 #, fuzzy +msgid "The messaging layer on which Pacemaker is currently running" +msgstr "Pacemaker 当前运行的消息传递层" + +#: lib/common/options.c:67 +msgid "Used for informational and diagnostic purposes." +msgstr "用于提供信息和诊断." + +#: lib/common/options.c:73 +msgid "An arbitrary name for the cluster" +msgstr "任意的集群名称" + +#: lib/common/options.c:74 msgid "" -"For example, \"node1:1;node2:2,3\" would tell the cluster to use port 1 for " -"node1 and ports 2 and 3 for node2." +"This optional value is mostly for users' convenience as desired in " +"administration, but may also be used in Pacemaker configuration rules via " +"the #cluster-name node attribute, and by higher-level tools and resource " +"agents." msgstr "" -"例如, \"node1:1;node2:2,3\" 将会告诉集群对node1使用端口1, 对node2使用端口2和" -"3." +"该可选值主要是为了方便用户根据管理的需要使用, 可以通过 #cluster-name 节点属性" +"在 Pacemaker 配置规则中使用, 以及被更高级的工具和资源代理使用." -#: daemons/fenced/pacemaker-fenced.c:520 -msgid "Nodes targeted by this device" -msgstr "此设备针对的节点" +#: lib/common/options.c:83 +msgid "How long to wait for a response from other nodes during start-up" +msgstr "启动过程中等待其他节点响应的时间" -#: daemons/fenced/pacemaker-fenced.c:521 -#, fuzzy +#: lib/common/options.c:84 msgid "" -"Comma-separated list of nodes that can be targeted by this device (for " -"example, \"node1,node2,node3\"). If pcmk_host_check is \"static-list\", " -"either this or pcmk_host_map must be set." -msgstr "此设备可以针对的节点列表,节点之间用逗号分隔(例如,node1,node2, node3).如果pcmk_host_list=\"static-list\")" +"The optimal value will depend on the speed and load of your network and the " +"type of switches used." +msgstr "其最佳值将取决于您的网络速度和负载以及使用的交换机类型." + +#: lib/common/options.c:91 +msgid "" +"Polling interval to recheck cluster state and evaluate rules with date " +"specifications" +msgstr "重新检查集群状态及评估日期规范规则的轮询间隔" -#: daemons/fenced/pacemaker-fenced.c:530 +#: lib/common/options.c:93 #, fuzzy -msgid "How to determine which nodes can be targeted by the device" -msgstr "如何确定设备可以针对哪些节点" +msgid "" +"Pacemaker is primarily event-driven, and looks ahead to know when to recheck " +"cluster state for failure-timeout settings and most time-based rules. " +"However, it will also recheck the cluster after this amount of inactivity, " +"to evaluate rules with date specifications and serve as a fail-safe for " +"certain types of scheduler bugs. A value of 0 disables polling. A positive " +"value sets an interval in seconds, unless other units are specified (for " +"example, \"5min\")." 
+msgstr "" +"Pacemaker 主要是通过事件驱动的, 并会提前预测何时重新检查集群状态以评估大多数" +"基于时间的规则以及 failure-timeout 配置, 然而无论如何, 经过指定的时间后如果没" +"有活动, 它将重新检查集群, 以评估具有日期规范的规则, 并为某些类型的调度程序缺" +"陷提供故障保护. 如果值为0, 将禁用轮询. 如果值为正数, 则设置以秒为单位的时间间" +"隔, 除非指定了其它单位 (例如, \"5min\")." + +#: lib/common/options.c:107 +msgid "How a cluster node should react if notified of its own fencing" +msgstr "集群节点在收到针对自己的 fence 操作结果通知时应如何反应" -#: daemons/fenced/pacemaker-fenced.c:531 +#: lib/common/options.c:108 #, fuzzy msgid "" -"Use \"dynamic-list\" to query the device via the 'list' command; \"static-" -"list\" to check the pcmk_host_list attribute; \"status\" to query the device " -"via the 'status' command; or \"none\" to assume every device can fence every " -"node. The default value is \"static-list\" if pcmk_host_map or " -"pcmk_host_list is set; otherwise \"dynamic-list\" if the device supports the " -"list operation; otherwise \"status\" if the device supports the status " -"operation; otherwise \"none\"" +"A cluster node may receive notification of a \"succeeded\" fencing that " +"targeted it if fencing is misconfigured, or if fabric fencing is in use that " +"doesn't cut cluster communication. Use \"stop\" to attempt to immediately " +"stop Pacemaker and stay stopped, or \"panic\" to attempt to immediately " +"reboot the local node, falling back to stop on failure." msgstr "" -"选项值 \"dynamic-list\" 表示通过 'list' 命令查询设备; 选项值 \"static-list" -"\"表示检查 pcmk_host_list 属性; 选项值 \"status\" 表示通过 'status' 命令查询" -"设备; 或使用选项值 \"none\" 假设每个设备都可以 fence 所有节点. 如果\"pcmk_host_map\"或\"pcmk_host_list\"被设置,默认值为\"static-list\";否则,如果设备支持列表操作,则为\"dynamic-list\";如果设备支持状态操作,则为\"status\";否则为\"none\"" +"如果有错误的 fence 配置, 或者在使用 fabric fence 机制 (并不会切断集群通信), " +"则集群节点可能会收到针对自己的 \"succeeded\" fence 结果通知. 使用 \"stop\" 尝" +"试立即停止 pacemaker 并保持停止状态,或者使用 \"panic\" 尝试立即重新启动本地节" +"点,如果失败则返回执行 stop." -#: daemons/fenced/pacemaker-fenced.c:544 +#: lib/common/options.c:119 msgid "" -"Enable a delay of no more than the time specified before executing fencing " -"actions." -msgstr "在执行 fence 操作前启用不超过指定时间的延迟" +"Declare an election failed if it is not decided within this much time. If " +"you need to adjust this value, it probably indicates the presence of a bug." +msgstr "" +"如果集群在本项设置时间内没有作出决定则宣布选举失败. 这可能表明当前存在错误, " +"您需要调整该值." -#: daemons/fenced/pacemaker-fenced.c:546 +#: lib/common/options.c:128 msgid "" -"Enable a delay of no more than the time specified before executing fencing " -"actions. Pacemaker derives the overall delay by taking the value of " -"pcmk_delay_base and adding a random delay value such that the sum is kept " -"below this maximum." +"Exit immediately if shutdown does not complete within this much time. If you " +"need to adjust this value, it probably indicates the presence of a bug." msgstr "" -"在执行 fence 操作前启用不超过指定时间的延迟. Pacemaker通过获取 " -"pcmk_delay_base 的值并添加随机延迟值来得出总延迟, 并且确保总和不超过此最大值." +"如果在这段时间内关机仍未完成, pacemaker 将立即退出. 这可能表明当前存在错误, " +"您需要调整该值." -#: daemons/fenced/pacemaker-fenced.c:555 -msgid "Enable a base delay for fencing actions and specify base delay value." -msgstr "为 fence 操作启用一个指定的基础延迟. " +#: lib/common/options.c:138 lib/common/options.c:147 +msgid "" +"If you need to adjust this value, it probably indicates the presence of a " +"bug." +msgstr "这可能表明当前存在错误, 您需要调整该值." -#: daemons/fenced/pacemaker-fenced.c:557 +#: lib/common/options.c:156 #, fuzzy msgid "" -"This enables a static delay for fencing actions, which can help avoid " -"\"death matches\" where two nodes try to fence each other at the same time. 
" -"If pcmk_delay_max is also used, a random delay will be added such that the " -"total delay is kept below that value. This can be set to a single time value " -"to apply to any node targeted by this device (useful if a separate device is " -"configured for each target), or to a node map (for example, \"node1:1s;" -"node2:5\") to set a different value for each target." +"Enabling this option will slow down cluster recovery under all conditions" +msgstr "启用此选项将在所有情况下减慢集群恢复的速度" + +#: lib/common/options.c:158 +msgid "" +"Delay cluster recovery for this much time to allow for additional events to " +"occur. Useful if your configuration is sensitive to the order in which ping " +"updates arrive." msgstr "" -"这为 fence 操作启用一个静态延迟, 这有助于避免 \"death matches\" 即两个节点同" -"时尝试互相 fence. 如果还同时使用了pcmk_delay_max, 则会添加一个随机延迟, 并确" -"保总延迟保持在该值以下. 可以将其设置为单个时间值, 以应用于该设备的所有目标节" -"点 (如果为每个目标节点都配置了单独的设备的情况下, 这很有用) 或设置成一个节点" -"映射形式 (例如,\"node1:1s;node2:5\") 从而为每个目标节点设置不同值. " +"集群恢复将被推迟指定的时间间隔, 以等待更多事件发生. 如果您的配置对 ping 更新" +"到达的顺序很敏感, 则可以使用此选项." + +#: lib/common/options.c:168 +msgid "What to do when the cluster does not have quorum" +msgstr "当集群没有达到必需票数时该如何做" -#: daemons/fenced/pacemaker-fenced.c:570 +#: lib/common/options.c:175 +msgid "Whether to lock resources to a cleanly shut down node" +msgstr "是否锁定资源到完全关闭的节点" + +#: lib/common/options.c:176 msgid "" -"The maximum number of actions can be performed in parallel on this device" -msgstr "可以在该设备上并发执行的最多操作数量" +"When true, resources active on a node when it is cleanly shut down are kept " +"\"locked\" to that node (not allowed to run elsewhere) until they start " +"again on that node after it rejoins (or for at most shutdown-lock-limit, if " +"set). Stonith resources and Pacemaker Remote connections are never locked. " +"Clone and bundle instances and the promoted role of promotable clones are " +"currently never locked, though support could be added in a future release." +msgstr "" +"设置为 true 时, 在完全关闭的节点上活动的资源将被 \"locked\" 到该节点 (不允许" +"在其它方运行), 直到该节点重新加入后它们再次在该节点上启动 (最长为 shutdown-" +"lock-limit,如果已设置). Stonith 资源和 Pacemaker Remote 连接永远不会被锁定. " +"克隆和捆绑实例以及可提升克隆的提升角色目前不会被锁定, 尽管可能在未来的发行版" +"中添加支持. " -#: daemons/fenced/pacemaker-fenced.c:572 -#, fuzzy +#: lib/common/options.c:189 +msgid "Do not lock resources to a cleanly shut down node longer than this" +msgstr "资源会被锁定到完全关闭的节点的最长时间" + +#: lib/common/options.c:191 msgid "" -"Cluster property concurrent-fencing=\"true\" needs to be configured first. " -"Then use this to specify the maximum number of actions can be performed in " -"parallel on this device. A value of -1 means an unlimited number of actions " -"can be performed in parallel." +"If shutdown-lock is true and this is set to a nonzero time duration, " +"shutdown locks will expire after this much time has passed since the " +"shutdown was initiated, even if the node has not rejoined." msgstr "" -"需要先配置集群属性 concurrent-fencing=\"true\". 然后使用此参数指定可以在该设" -"备上并发执行的最多操作数量. -1 表示可以并行执行无限数量的操作. " +"如果 shutdown-lock 为 true, 并且将此选项设置为非零时间间隔, 则自关闭操作执行" +"经过此时间后,shutdown lock 将过期, 即使该节点尚未重新加入也是如此. 
" -#: daemons/fenced/pacemaker-fenced.c:582 -#, fuzzy -msgid "An alternate command to run instead of 'reboot'" -msgstr "运行替代命令,而不是'reboot'" +#: lib/common/options.c:200 +msgid "Enable Access Control Lists (ACLs) for the CIB" +msgstr "为 CIB 启用访问控制列表 (ACL) " -#: daemons/fenced/pacemaker-fenced.c:583 -#, fuzzy +#: lib/common/options.c:207 +msgid "Whether resources can run on any node by default" +msgstr "默认情况下资源是否可以在任何节点上运行" + +#: lib/common/options.c:214 msgid "" -"Some devices do not support the standard commands or may provide additional " -"ones. Use this to specify an alternate, device-specific, command that " -"implements the 'reboot' action." +"Whether the cluster should refrain from monitoring, starting, and stopping " +"resources" +msgstr "集群是否应避免监视, 启动和停止资源" + +#: lib/common/options.c:222 +msgid "" +"Whether a start failure should prevent a resource from being recovered on " +"the same node" +msgstr "资源启动失败是否应阻止在同一节点上恢复该资源" + +#: lib/common/options.c:224 +msgid "" +"When true, the cluster will immediately ban a resource from a node if it " +"fails to start there. When false, the cluster will instead check the " +"resource's fail count against its migration-threshold." msgstr "" -"一些设备不支持标准命令或可能提供其他命令,使用此选项可以指定一个该设备特定的" -"替代命令,用来实现'reboot'操作。" +"当为true, 如果资源启动失败, 集群将立即禁止节点启动该资源, 当为false, 集群将检" +"查资源的失败次数是否超过了其 migration-threshold. " + +#: lib/common/options.c:232 +msgid "Whether the cluster should check for active resources during start-up" +msgstr "集群是否在启动期间检查活动的资源" -#: daemons/fenced/pacemaker-fenced.c:591 +#: lib/common/options.c:242 #, fuzzy +msgid "Whether nodes may be fenced as part of recovery" +msgstr "节点是否可以被 fence 作为集群恢复的一部分" + +#: lib/common/options.c:243 msgid "" -"Specify an alternate timeout to use for 'reboot' actions instead of stonith-" -"timeout" -msgstr "指定用于'reboot' 操作的替代超时,而不是stonith-timeout" +"If false, unresponsive nodes are immediately assumed to be harmless, and " +"resources that were active on them may be recovered elsewhere. This can " +"result in a \"split-brain\" situation, potentially leading to data loss and/" +"or service unavailability." +msgstr "" +"如果为 false, 则立即假定无响应的节点是无害的, 并且可以在其它位置恢复在其上活" +"动的资源. 这可能会导致 \"split-brain\" 情况, 从而可能导致数据丢失和(或)服务不" +"可用. " -#: daemons/fenced/pacemaker-fenced.c:593 -#, fuzzy +#: lib/common/options.c:253 msgid "" -"Some devices need much more/less time to complete than normal. Use this to " -"specify an alternate, device-specific, timeout for 'reboot' actions." +"Action to send to fence device when a node needs to be fenced (\"poweroff\" " +"is a deprecated alias for \"off\")" msgstr "" -"一些设备需要比正常情况下更多或更少的时间来完成操作,使用此选项指定一个用" -"于'reboot'操作的该设备特定的替代超时。" +"当节点需要被 fence 时, 向 fence 设备发送的操作 (\"poweroff\" 作为 \"off\" 的" +"别名已被弃用)" + +#: lib/common/options.c:261 +msgid "" +"How long to wait for on, off, and reboot fence actions to complete by default" +msgstr "默认情况下, 等待 on, off, 和 reboot fence 操作完成的时间" + +#: lib/common/options.c:269 +msgid "Whether watchdog integration is enabled" +msgstr "是否启用 watchdog 集成设置" -#: daemons/fenced/pacemaker-fenced.c:601 +#: lib/common/options.c:270 +msgid "" +"This is set automatically by the cluster according to whether SBD is " +"detected to be in use. User-configured values are ignored. The value `true` " +"is meaningful if diskless SBD is used and `stonith-watchdog-timeout` is " +"nonzero. In that case, if fencing is required, watchdog-based self-fencing " +"will be performed via SBD without requiring a fencing resource explicitly " +"configured." +msgstr "" +"集群会根据是否检测到 SBD 正在使用来自动设置此值. 
用户配置的值将被忽略. 如果使" +"用了无盘 SBD 并且 `stonith-watchdog-timeout` 不为零, 则值 `true` 才有实际意" +"义. 在这种情况下, 如果需要fence, 将通过 SBD 执行基于 watchdog 的自我 fence, " +"而不需要明确配置 fence 资源." + +#: lib/common/options.c:291 #, fuzzy msgid "" -"The maximum number of times to try the 'reboot' command within the timeout " -"period" -msgstr "在超时前重试'reboot'命令的最大次数" +"How long before nodes can be assumed to be safely down when watchdog-based " +"self-fencing via SBD is in use" +msgstr "" +"当基于 watchdog 的自我 fence 机制通过SBD 被执行时, 节点被认为安全下线的等待时" +"间有多长" -#: daemons/fenced/pacemaker-fenced.c:603 +#: lib/common/options.c:293 #, fuzzy msgid "" -"Some devices do not support multiple connections. Operations may \"fail\" if " -"the device is busy with another task. In that case, Pacemaker will " -"automatically retry the operation if there is time remaining. Use this " -"option to alter the number of times Pacemaker tries a 'reboot' action before " -"giving up." +"If this is set to a positive value, lost nodes are assumed to achieve self-" +"fencing using watchdog-based SBD within this much time. This does not " +"require a fencing resource to be explicitly configured, though a " +"fence_watchdog resource can be configured, to limit use to specific nodes. " +"If this is set to 0 (the default), the cluster will never assume watchdog-" +"based self-fencing. If this is set to a negative value, the cluster will use " +"twice the local value of the `SBD_WATCHDOG_TIMEOUT` environment variable if " +"that is positive, or otherwise treat this as 0. WARNING: When used, this " +"timeout must be larger than `SBD_WATCHDOG_TIMEOUT` on all nodes that use " +"watchdog-based SBD, and Pacemaker will refuse to start on any of those nodes " +"where this is not true for the local value or SBD is not active. When this " +"is set to a negative value, `SBD_WATCHDOG_TIMEOUT` must be set to the same " +"value on all nodes that use SBD, otherwise data corruption or loss could " +"occur." msgstr "" -"一些设备不支持多个连接。 如果设备忙于另一个任务,则操作可能会'失败' ,因此" -"Pacemaker将自动重试(如果时间允许)。 使用此选项更改Pacemaker在放弃之前重" -"试'reboot' 操作的次数." +"如果设为正值, 丢失的节点将在设定的时间内被认定使用基于 watchdog 的 SBD 完成自" +"我 fence. 这不需要明确配置一个 fence 资源, 但可以配置一个 fence_watchdog 资源" +"来限制对特定节点使用. 如果设为0 (默认值), 集群将永远不会认定节点使用基于 " +"watchdog 的自我 fence. 如果设为负值, 集群将使用本地 `SBD_WATCHDOG_TIMEOUT` 环" +"境变量的两倍值(如果该值为正), 否则会将该值视为0. 警告: 在所有使用基于 " +"watchdog 的 SBD 的节点上, 此超时值需大于 `SBD_WATCHDOG_TIMEOUT` 的值, 否则 " +"Pacemaker 不会在任何不符合此条件的节点上启动, 也不会在任何未启用 SBD 的节点上" +"启动. 当设为负值时所有使用 SBD 的节点上 `SBD_WATCHDOG_TIMEOUT` 的值必须设置为" +"相同的值, 否则可能导致数据损坏或丢失." + +#: lib/common/options.c:313 +msgid "" +"How many times fencing can fail before it will no longer be immediately re-" +"attempted on a target" +msgstr "fence 操作失败多少次会停止立即尝试" -#: daemons/fenced/pacemaker-fenced.c:613 +#: lib/common/options.c:321 +msgid "Allow performing fencing operations in parallel" +msgstr "允许并行执行 fencing 操作" + +#: lib/common/options.c:328 #, fuzzy -msgid "An alternate command to run instead of 'off'" -msgstr "运行替代命令,而不是'off'" +msgid "Whether to fence unseen nodes at start-up" +msgstr "*** 仅高级使用 *** 是否在启动时fence不可见节点" -#: daemons/fenced/pacemaker-fenced.c:614 +#: lib/common/options.c:329 #, fuzzy msgid "" -"Some devices do not support the standard commands or may provide additional " -"ones. Use this to specify an alternate, device-specific, command that " -"implements the 'off' action." +"Setting this to false may lead to a \"split-brain\" situation, potentially " +"leading to data loss and/or service unavailability." 
msgstr "" -"一些设备不支持标准命令或可能提供其他命令,使用此选项可指定一个该设备专用的替代" -"命令,用来实现'off'操作。" +"将此设置为 false 可能会导致 \"split-brain\" 的情况,可能导致数据丢失和(或)服" +"务不可用。" -#: daemons/fenced/pacemaker-fenced.c:622 -#, fuzzy +#: lib/common/options.c:336 msgid "" -"Specify an alternate timeout to use for 'off' actions instead of stonith-" -"timeout" -msgstr "指定用于off 操作的替代超时,而不是stonith-timeout" +"Apply fencing delay targeting the lost nodes with the highest total resource " +"priority" +msgstr "针对具有最高总资源优先级的丢失节点应用fencing延迟" -#: daemons/fenced/pacemaker-fenced.c:624 -#, fuzzy +#: lib/common/options.c:338 msgid "" -"Some devices need much more/less time to complete than normal. Use this to " -"specify an alternate, device-specific, timeout for 'off' actions." +"Apply specified delay for the fencings that are targeting the lost nodes " +"with the highest total resource priority in case we don't have the majority " +"of the nodes in our cluster partition, so that the more significant nodes " +"potentially win any fencing match, which is especially meaningful under " +"split-brain of 2-node cluster. A promoted resource instance takes the base " +"priority + 1 on calculation if the base priority is not 0. Any static/random " +"delays that are introduced by `pcmk_delay_base/max` configured for the " +"corresponding fencing resources will be added to this delay. This delay " +"should be significantly greater than, safely twice, the maximum " +"`pcmk_delay_base/max`. By default, priority fencing delay is disabled." msgstr "" -"一些设备需要比正常情况下更多或更少的时间来完成操作,使用此选项指定一个用" -"于'off'操作的该设备特定的替代超时。" +"如果我们所在的集群分区并不拥有大多数集群节点,则针对丢失节点的fence操作应用指" +"定的延迟,这样更重要的节点就能够赢得fence竞赛。这对于双节点集群在split-brain" +"状况下尤其有意义。如果基本优先级不为0,在计算时主资源实例获得基本优先级+1。任" +"何对于相应的 fence 资源由 pcmk_delay_base/max 配置所引入的静态/随机延迟会被添" +"加到此延迟。为了安全, 这个延迟应该明显大于 pcmk_delay_base/max 的最大设置值," +"例如两倍。默认情况下,优先级fencing延迟已禁用。" -#: daemons/fenced/pacemaker-fenced.c:632 -#, fuzzy +#: lib/common/options.c:355 msgid "" -"The maximum number of times to try the 'off' command within the timeout " -"period" -msgstr "在超时前重试'off'命令的最大次数" +"How long to wait for a node that has joined the cluster to join the " +"controller process group" +msgstr "等待已加入集群的节点加入控制器进程组的时间" -#: daemons/fenced/pacemaker-fenced.c:634 -#, fuzzy +#: lib/common/options.c:357 msgid "" -"Some devices do not support multiple connections. Operations may \"fail\" if " -"the device is busy with another task. In that case, Pacemaker will " -"automatically retry the operation if there is time remaining. Use this " -"option to alter the number of times Pacemaker tries a 'off' action before " -"giving up." +"Fence nodes that do not join the controller process group within this much " +"time after joining the cluster, to allow the cluster to continue managing " +"resources. A value of 0 means never fence pending nodes. Setting the value " +"to 2h means fence nodes after 2 hours." msgstr "" -" 一些设备不支持多个连接。 如果设备忙于另一个任务,则操作可能会'失败' , 因此" -"Pacemaker将自动重试(如果时间允许)。 使用此选项更改Pacemaker在放弃之前重" -"试'off' 操作的次数." +"如果节点加入集群后在此时间内不加入控制器进程组,Fence该节点,以便群集继续管理" +"资源。值为0表示永远不 fence 待定节点。将值设置为2h表示2小时后 fence 待定节" +"点。" -#: daemons/fenced/pacemaker-fenced.c:644 -#, fuzzy -msgid "An alternate command to run instead of 'on'" -msgstr "仅高级使用:运行替代命令,而不是'on'" +#: lib/common/options.c:367 +msgid "Maximum time for node-to-node communication" +msgstr "最大节点间通信时间" -#: daemons/fenced/pacemaker-fenced.c:645 -#, fuzzy +#: lib/common/options.c:368 msgid "" -"Some devices do not support the standard commands or may provide additional " -"ones. 
Use this to specify an alternate, device-specific, command that " -"implements the 'on' action." +"The node elected Designated Controller (DC) will consider an action failed " +"if it does not get a response from the node executing the action within this " +"time (after considering the action's own timeout). The \"correct\" value " +"will depend on the speed and load of your network and cluster nodes." msgstr "" -"一些设备不支持标准命令或可能提供其他命令,使用此选项可指定一个该设备特定的替" -"代命令,用来实现'on'操作。" +"如果一个操作未在该时间内(并且考虑操作本身的超时时长)从执行该操作的节点获得" +"响应,则会被选为指定控制器(DC)的节点认定为失败。\"正确\" 值将取决于速度和您" +"的网络和集群节点的负载。" -#: daemons/fenced/pacemaker-fenced.c:653 -#, fuzzy +#: lib/common/options.c:380 +msgid "Maximum amount of system load that should be used by cluster nodes" +msgstr "集群节点应该使用的最大系统负载量" + +#: lib/common/options.c:382 msgid "" -"Specify an alternate timeout to use for 'on' actions instead of stonith-" -"timeout" -msgstr "指定用于on 操作的替代超时,而不是stonith-timeout" +"The cluster will slow down its recovery process when the amount of system " +"resources used (currently CPU) approaches this limit" +msgstr "当使用的系统资源量(当前指 CPU)接近此限制时, 集群将减慢其恢复过程" -#: daemons/fenced/pacemaker-fenced.c:655 +#: lib/common/options.c:389 +msgid "" +"Maximum number of jobs that can be scheduled per node (defaults to 2x cores)" +msgstr "每个节点可以调度的最大作业数(默认为2x内核数)" + +#: lib/common/options.c:397 #, fuzzy msgid "" -"Some devices need much more/less time to complete than normal. Use this to " -"specify an alternate, device-specific, timeout for 'on' actions." +"Maximum number of jobs that the cluster may execute in parallel across all " +"nodes" +msgstr "集群可以在所有节点上并发执行的最大作业数" + +#: lib/common/options.c:399 +msgid "" +"The \"correct\" value will depend on the speed and load of your network and " +"cluster nodes. If set to 0, the cluster will impose a dynamically calculated " +"limit when any node has a high load." msgstr "" -"一些设备需要比正常情况下更多或更少的时间来完成操作,使用此选项指定一个用" -"于'on'操作的该设备特定的替代超时。" +"\"正确\" 值将取决于速度和您的网络与集群节点的负载。如果设置为0,当任何节点具" +"有高负载时,集群将施加一个动态计算的限制。" -#: daemons/fenced/pacemaker-fenced.c:663 -#, fuzzy +#: lib/common/options.c:408 msgid "" -"The maximum number of times to try the 'on' command within the timeout period" -msgstr "在超时前重试'on'命令的最大次数" +"The number of live migration actions that the cluster is allowed to execute " +"in parallel on a node (-1 means no limit)" +msgstr "允许集群在一个节点上并行执行的实时迁移操作的数量(-1表示没有限制)" -#: daemons/fenced/pacemaker-fenced.c:665 -#, fuzzy +#: lib/common/options.c:427 +msgid "Maximum IPC message backlog before disconnecting a cluster daemon" +msgstr "断开集群守护程序之前的最大IPC消息积压" + +#: lib/common/options.c:428 msgid "" -"Some devices do not support multiple connections. Operations may \"fail\" if " -"the device is busy with another task. In that case, Pacemaker will " -"automatically retry the operation if there is time remaining. Use this " -"option to alter the number of times Pacemaker tries a 'on' action before " -"giving up." +"Raise this if log has \"Evicting client\" messages for cluster daemon PIDs " +"(a good value is the number of resources in the cluster multiplied by the " +"number of nodes)." msgstr "" -" 一些设备不支持多个连接。 如果设备忙于另一个任务,则操作可能会'失败' , 因此" -"Pacemaker将自动重试(如果时间允许)。 使用此选项更改Pacemaker在放弃之前重" -"试'on' 操作的次数." 
+"如果日志中有针对集群守护程序PID的消息“Evicting client”,(则建议将值设为集群" +"中的资源数量乘以节点数量)" -#: daemons/fenced/pacemaker-fenced.c:675 +#: lib/common/options.c:438 #, fuzzy -msgid "An alternate command to run instead of 'list'" -msgstr "运行替代命令,而不是'list'" +msgid "Whether the cluster should stop all active resources" +msgstr "集群是否在启动期间检查运行资源" + +#: lib/common/options.c:445 +msgid "Whether to stop resources that were removed from the configuration" +msgstr "是否停止配置已被删除的资源" + +#: lib/common/options.c:453 +msgid "Whether to cancel recurring actions removed from the configuration" +msgstr "是否取消配置已被删除的的重复操作" -#: daemons/fenced/pacemaker-fenced.c:676 +#: lib/common/options.c:461 #, fuzzy -msgid "" -"Some devices do not support the standard commands or may provide additional " -"ones. Use this to specify an alternate, device-specific, command that " -"implements the 'list' action." -msgstr "" -"一些设备不支持标准命令或可能提供其他命令,使用此选项可指定一个该设备特定的替" -"代命令,用来实现'list'操作。" +msgid "Whether to remove stopped resources from the executor" +msgstr "是否从pacemaker-execd 守护进程中清除已停止的资源" -#: daemons/fenced/pacemaker-fenced.c:684 +#: lib/common/options.c:462 #, fuzzy -msgid "" -"Specify an alternate timeout to use for 'list' actions instead of stonith-" -"timeout" -msgstr "指定用于list 操作的替代超时,而不是stonith-timeout" +msgid "Values other than default are poorly tested and potentially dangerous." +msgstr "非默认值未经过充分的测试,有潜在的风险。该选项将在未来的版本中删除。" + +#: lib/common/options.c:471 +msgid "The number of scheduler inputs resulting in errors to save" +msgstr "保存导致错误的调度程序输入的数量" + +#: lib/common/options.c:472 lib/common/options.c:479 lib/common/options.c:486 +msgid "Zero to disable, -1 to store unlimited." +msgstr "零表示禁用,-1表示存储不受限制。" + +#: lib/common/options.c:478 +msgid "The number of scheduler inputs resulting in warnings to save" +msgstr "保存导致警告的调度程序输入的数量" + +#: lib/common/options.c:485 +msgid "The number of scheduler inputs without errors or warnings to save" +msgstr "保存没有错误或警告的调度程序输入的数量" -#: daemons/fenced/pacemaker-fenced.c:686 +#: lib/common/options.c:497 #, fuzzy +msgid "How cluster should react to node health attributes" +msgstr "集群节点对节点健康属性如何反应" + +#: lib/common/options.c:498 msgid "" -"Some devices need much more/less time to complete than normal. Use this to " -"specify an alternate, device-specific, timeout for 'list' actions." +"Requires external entities to create node attributes (named with the prefix " +"\"#health\") with values \"red\", \"yellow\", or \"green\"." msgstr "" -"一些设备需要比正常情况下更多或更少的时间来完成操作,使用此选项指定一个用" -"于'list'操作的该设备特定的替代超时。" +"需要外部实体创建具有“red”,“yellow”或“green”值的节点属性(前缀为“#health”)" -#: daemons/fenced/pacemaker-fenced.c:694 -#, fuzzy -msgid "" -"The maximum number of times to try the 'list' command within the timeout " -"period" -msgstr "在超时前重试'list'命令的最大次数" +#: lib/common/options.c:506 +msgid "Base health score assigned to a node" +msgstr "分配给节点的基本健康分数" -#: daemons/fenced/pacemaker-fenced.c:696 -#, fuzzy +#: lib/common/options.c:507 +msgid "Only used when \"node-health-strategy\" is set to \"progressive\"." +msgstr "仅在“node-health-strategy”设置为“progressive”时使用。" + +#: lib/common/options.c:514 +msgid "The score to use for a node health attribute whose value is \"green\"" +msgstr "为节点健康属性值为“green”所使用的分数" + +#: lib/common/options.c:516 lib/common/options.c:525 lib/common/options.c:534 msgid "" -"Some devices do not support multiple connections. Operations may \"fail\" if " -"the device is busy with another task. In that case, Pacemaker will " -"automatically retry the operation if there is time remaining. 
Use this " -"option to alter the number of times Pacemaker tries a 'list' action before " -"giving up." -msgstr "" -" 一些设备不支持多个连接。 如果设备忙于另一个任务,则操作可能会'失败' , 因此" -"Pacemaker将自动重试(如果时间允许)。 使用此选项更改Pacemaker在放弃之前重" -"试'list' 操作的次数." +"Only used when \"node-health-strategy\" is set to \"custom\" or \"progressive" +"\"." +msgstr "仅在“node-health-strategy”设置为“custom”或“progressive”时使用。" -#: daemons/fenced/pacemaker-fenced.c:706 -#, fuzzy -msgid "An alternate command to run instead of 'monitor'" -msgstr "运行替代命令,而不是'monitor'" +#: lib/common/options.c:523 +msgid "The score to use for a node health attribute whose value is \"yellow\"" +msgstr "为节点健康属性值为“yellow”所使用的分数" + +#: lib/common/options.c:532 +msgid "The score to use for a node health attribute whose value is \"red\"" +msgstr "为节点健康属性值为“red”所使用的分数" -#: daemons/fenced/pacemaker-fenced.c:707 +#: lib/common/options.c:545 #, fuzzy -msgid "" -"Some devices do not support the standard commands or may provide additional " -"ones. Use this to specify an alternate, device-specific, command that " -"implements the 'monitor' action." -msgstr "" -"一些设备不支持标准命令或可能提供其他命令,使用此选项可指定一个该设备特定的替" -"代命令,用来实现'monitor'操作。" +msgid "How the cluster should allocate resources to nodes" +msgstr "集群应该如何分配资源到节点" -#: daemons/fenced/pacemaker-fenced.c:715 +#: lib/common/options.c:563 #, fuzzy -msgid "" -"Specify an alternate timeout to use for 'monitor' actions instead of stonith-" -"timeout" -msgstr "指定用于monitor 操作的替代超时,而不是stonith-timeout" +msgid "An alternate parameter to supply instead of 'port'" +msgstr "用于替代 'port' 的其它参数" -#: daemons/fenced/pacemaker-fenced.c:717 +#: lib/common/options.c:564 #, fuzzy msgid "" -"Some devices need much more/less time to complete than normal. Use this to " -"specify an alternate, device-specific, timeout for 'monitor' actions." +"Some devices do not support the standard 'port' parameter or may provide " +"additional ones. Use this to specify an alternate, device-specific, " +"parameter that should indicate the machine to be fenced. A value of \"none\" " +"can be used to tell the cluster not to supply any additional parameters." msgstr "" -"一些设备需要比正常情况下更多或更少的时间来完成操作,使用此选项指定一个用" -"于'monitor'操作的该设备特定的替代超时。" +"一些设备不支持标准的 'port' 参数, 或者可能会提供其它的参数. 使用此选项可指定" +"一个替代的, 该设备专用的参数, 该参数应该指出需要 fence 的机器. 可以使用 " +"\"none\" 值用于告诉集群不要提供任何其它的参数. " -#: daemons/fenced/pacemaker-fenced.c:725 +#: lib/common/options.c:574 #, fuzzy msgid "" -"The maximum number of times to try the 'monitor' command within the timeout " -"period" -msgstr "在超时前重试'monitor'命令的最大次数" +"A mapping of node names to port numbers for devices that do not support node " +"names." +msgstr "为不支持主机名的设备提供主机名到端口号的映射. " -#: daemons/fenced/pacemaker-fenced.c:727 +#: lib/common/options.c:576 #, fuzzy msgid "" -"Some devices do not support multiple connections. Operations may \"fail\" if " -"the device is busy with another task. In that case, Pacemaker will " -"automatically retry the operation if there is time remaining. Use this " -"option to alter the number of times Pacemaker tries a 'monitor' action " -"before giving up." +"For example, \"node1:1;node2:2,3\" would tell the cluster to use port 1 for " +"node1 and ports 2 and 3 for node2." msgstr "" -" 一些设备不支持多个连接。 如果设备忙于另一个任务,则操作可能会'失败' , 因此" -"Pacemaker将自动重试(如果时间允许)。 使用此选项更改Pacemaker在放弃之前重" -"试'monitor' 操作的次数." +"例如, \"node1:1;node2:2,3\" 将会告诉集群对node1使用端口1, 对node2使用端口2和" +"3." 
-#: daemons/fenced/pacemaker-fenced.c:737 -#, fuzzy -msgid "An alternate command to run instead of 'status'" -msgstr "运行替代命令,而不是'status'" +#: lib/common/options.c:583 +msgid "Nodes targeted by this device" +msgstr "此设备针对的节点" -#: daemons/fenced/pacemaker-fenced.c:738 +#: lib/common/options.c:584 #, fuzzy msgid "" -"Some devices do not support the standard commands or may provide additional " -"ones. Use this to specify an alternate, device-specific, command that " -"implements the 'status' action." +"Comma-separated list of nodes that can be targeted by this device (for " +"example, \"node1,node2,node3\"). If pcmk_host_check is \"static-list\", " +"either this or pcmk_host_map must be set." msgstr "" -"一些设备不支持标准命令或可能提供其他命令,使用此选项可指定一个该设备特定的替" -"代命令,用来实现'status'操作。" +"此设备可以针对的节点列表,节点之间用逗号分隔(例如,node1,node2, node3).如果" +"pcmk_host_list=\"static-list\")" -#: daemons/fenced/pacemaker-fenced.c:746 +#: lib/common/options.c:594 #, fuzzy -msgid "" -"Specify an alternate timeout to use for 'status' actions instead of stonith-" -"timeout" -msgstr "指定用于status 操作的替代超时,而不是stonith-timeout" +msgid "How to determine which nodes can be targeted by the device" +msgstr "如何确定设备可以针对哪些节点" -#: daemons/fenced/pacemaker-fenced.c:748 +#: lib/common/options.c:595 #, fuzzy msgid "" -"Some devices need much more/less time to complete than normal. Use this to " -"specify an alternate, device-specific, timeout for 'status' actions." +"Use \"dynamic-list\" to query the device via the 'list' command; \"static-" +"list\" to check the pcmk_host_list attribute; \"status\" to query the device " +"via the 'status' command; or \"none\" to assume every device can fence every " +"node. The default value is \"static-list\" if pcmk_host_map or " +"pcmk_host_list is set; otherwise \"dynamic-list\" if the device supports the " +"list operation; otherwise \"status\" if the device supports the status " +"operation; otherwise \"none\"" msgstr "" -"一些设备需要比正常情况下更多或更少的时间来完成操作,使用此选项指定一个用" -"于'status'操作的该设备特定的替代超时" +"选项值 \"dynamic-list\" 表示通过 'list' 命令查询设备; 选项值 \"static-list" +"\"表示检查 pcmk_host_list 属性; 选项值 \"status\" 表示通过 'status' 命令查询" +"设备; 或使用选项值 \"none\" 假设每个设备都可以 fence 所有节点. 如果" +"\"pcmk_host_map\"或\"pcmk_host_list\"被设置,默认值为\"static-list\";否则," +"如果设备支持列表操作,则为\"dynamic-list\";如果设备支持状态操作,则为" +"\"status\";否则为\"none\"" -#: daemons/fenced/pacemaker-fenced.c:756 -#, fuzzy +#: lib/common/options.c:608 msgid "" -"The maximum number of times to try the 'status' command within the timeout " -"period" -msgstr "仅高级使用:在超时前重试'status'命令的最大次数" +"Enable a delay of no more than the time specified before executing fencing " +"actions." +msgstr "在执行 fence 操作前启用不超过指定时间的延迟" -#: daemons/fenced/pacemaker-fenced.c:758 -#, fuzzy +#: lib/common/options.c:610 msgid "" -"Some devices do not support multiple connections. Operations may \"fail\" if " -"the device is busy with another task. In that case, Pacemaker will " -"automatically retry the operation if there is time remaining. Use this " -"option to alter the number of times Pacemaker tries a 'status' action before " -"giving up." +"Enable a delay of no more than the time specified before executing fencing " +"actions. Pacemaker derives the overall delay by taking the value of " +"pcmk_delay_base and adding a random delay value such that the sum is kept " +"below this maximum." msgstr "" -" 一些设备不支持多个连接。 如果设备忙于另一个任务,则操作可能会'失败' , 因此" -"Pacemaker将自动重试(如果时间允许)。 使用此选项更改Pacemaker在放弃之前重" -"试'status' 操作的次数." +"在执行 fence 操作前启用不超过指定时间的延迟. Pacemaker通过获取 " +"pcmk_delay_base 的值并添加随机延迟值来得出总延迟, 并且确保总和不超过此最大值." 
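+# A minimal, hypothetical illustration of the delay derivation described
+# above: with the settings below, each fencing action waits the 2s base from
+# pcmk_delay_base plus a random component, with the total kept below the 10s
+# pcmk_delay_max. The device name is a placeholder assumption:
+#
+#   pcs stonith update fence-ipmi pcmk_delay_base=2s pcmk_delay_max=10s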
-#: daemons/fenced/pacemaker-fenced.c:773 -msgid "Instance attributes available for all \"stonith\"-class resources" -msgstr " 可用于所有stonith类资源的实例属性" +#: lib/common/options.c:619 +msgid "Enable a base delay for fencing actions and specify base delay value." +msgstr "为 fence 操作启用一个指定的基础延迟. " -#: daemons/fenced/pacemaker-fenced.c:775 +#: lib/common/options.c:621 +#, fuzzy msgid "" -"Instance attributes available for all \"stonith\"-class resources and used " -"by Pacemaker's fence daemon, formerly known as stonithd" +"This enables a static delay for fencing actions, which can help avoid " +"\"death matches\" where two nodes try to fence each other at the same time. " +"If pcmk_delay_max is also used, a random delay will be added such that the " +"total delay is kept below that value. This can be set to a single time value " +"to apply to any node targeted by this device (useful if a separate device is " +"configured for each target), or to a node map (for example, \"node1:1s;" +"node2:5\") to set a different value for each target." msgstr "" -" 可用于所有stonith类资源的实例属性,并由Pacemaker的fence守护程序使用(以前称" -"为stonithd)" - -#: daemons/fenced/pacemaker-fenced.c:811 -msgid "Deprecated (will be removed in a future release)" -msgstr "已弃用(将在未来版本中删除)" - -#: daemons/fenced/pacemaker-fenced.c:814 -msgid "Intended for use in regression testing only" -msgstr "仅适用于回归测试" - -#: daemons/fenced/pacemaker-fenced.c:817 -msgid "Send logs to the additional named logfile" -msgstr "将日志发送到其他命名日志文件" +"这为 fence 操作启用一个静态延迟, 这有助于避免 \"death matches\" 即两个节点同" +"时尝试互相 fence. 如果还同时使用了pcmk_delay_max, 则会添加一个随机延迟, 并确" +"保总延迟保持在该值以下. 可以将其设置为单个时间值, 以应用于该设备的所有目标节" +"点 (如果为每个目标节点都配置了单独的设备的情况下, 这很有用) 或设置成一个节点" +"映射形式 (例如,\"node1:1s;node2:5\") 从而为每个目标节点设置不同值. " -#: lib/common/options.c:57 -msgid "Pacemaker version on cluster node elected Designated Controller (DC)" -msgstr "集群选定的控制器节点(DC)的 Pacemaker 版本" +#: lib/common/options.c:634 +msgid "" +"The maximum number of actions can be performed in parallel on this device" +msgstr "可以在该设备上并发执行的最多操作数量" -#: lib/common/options.c:59 +#: lib/common/options.c:636 #, fuzzy msgid "" -"Includes a hash which identifies the exact revision the code was built from. " -"Used for diagnostic purposes." -msgstr "它包含一个标识所构建代码修订版本的哈希值. 其可用于诊断." +"Cluster property concurrent-fencing=\"true\" needs to be configured first. " +"Then use this to specify the maximum number of actions can be performed in " +"parallel on this device. A value of -1 means an unlimited number of actions " +"can be performed in parallel." +msgstr "" +"需要先配置集群属性 concurrent-fencing=\"true\". 然后使用此参数指定可以在该设" +"备上并发执行的最多操作数量. -1 表示可以并行执行无限数量的操作. " -#: lib/common/options.c:66 +#: lib/common/options.c:646 #, fuzzy -msgid "The messaging layer on which Pacemaker is currently running" -msgstr "Pacemaker 当前运行的消息传递层" - -#: lib/common/options.c:67 -msgid "Used for informational and diagnostic purposes." -msgstr "用于提供信息和诊断." - -#: lib/common/options.c:73 -msgid "An arbitrary name for the cluster" -msgstr "任意的集群名称" +msgid "An alternate command to run instead of 'reboot'" +msgstr "运行替代命令,而不是'reboot'" -#: lib/common/options.c:74 +#: lib/common/options.c:647 +#, fuzzy msgid "" -"This optional value is mostly for users' convenience as desired in " -"administration, but may also be used in Pacemaker configuration rules via " -"the #cluster-name node attribute, and by higher-level tools and resource " -"agents." +"Some devices do not support the standard commands or may provide additional " +"ones. 
Use this to specify an alternate, device-specific, command that " +"implements the 'reboot' action." msgstr "" -"该可选值主要是为了方便用户根据管理的需要使用, 可以通过 #cluster-name 节点属性" -"在 Pacemaker 配置规则中使用, 以及被更高级的工具和资源代理使用." +"一些设备不支持标准命令或可能提供其他命令,使用此选项可以指定一个该设备特定的" +"替代命令,用来实现'reboot'操作。" -#: lib/common/options.c:83 -msgid "How long to wait for a response from other nodes during start-up" -msgstr "启动过程中等待其他节点响应的时间" +#: lib/common/options.c:655 +#, fuzzy +msgid "" +"Specify an alternate timeout to use for 'reboot' actions instead of stonith-" +"timeout" +msgstr "指定用于'reboot' 操作的替代超时,而不是stonith-timeout" -#: lib/common/options.c:84 +#: lib/common/options.c:657 +#, fuzzy msgid "" -"The optimal value will depend on the speed and load of your network and the " -"type of switches used." -msgstr "其最佳值将取决于您的网络速度和负载以及使用的交换机类型." +"Some devices need much more/less time to complete than normal. Use this to " +"specify an alternate, device-specific, timeout for 'reboot' actions." +msgstr "" +"一些设备需要比正常情况下更多或更少的时间来完成操作,使用此选项指定一个用" +"于'reboot'操作的该设备特定的替代超时。" -#: lib/common/options.c:91 +#: lib/common/options.c:665 +#, fuzzy msgid "" -"Polling interval to recheck cluster state and evaluate rules with date " -"specifications" -msgstr "重新检查集群状态及评估日期规范规则的轮询间隔" +"The maximum number of times to try the 'reboot' command within the timeout " +"period" +msgstr "在超时前重试'reboot'命令的最大次数" -#: lib/common/options.c:93 +#: lib/common/options.c:667 #, fuzzy msgid "" -"Pacemaker is primarily event-driven, and looks ahead to know when to recheck " -"cluster state for failure-timeout settings and most time-based rules. " -"However, it will also recheck the cluster after this amount of inactivity, " -"to evaluate rules with date specifications and serve as a fail-safe for " -"certain types of scheduler bugs. A value of 0 disables polling. A positive " -"value sets an interval in seconds, unless other units are specified (for " -"example, \"5min\")." +"Some devices do not support multiple connections. Operations may \"fail\" if " +"the device is busy with another task. In that case, Pacemaker will " +"automatically retry the operation if there is time remaining. Use this " +"option to alter the number of times Pacemaker tries a 'reboot' action before " +"giving up." msgstr "" -"Pacemaker 主要是通过事件驱动的, 并会提前预测何时重新检查集群状态以评估大多数" -"基于时间的规则以及 failure-timeout 配置, 然而无论如何, 经过指定的时间后如果没" -"有活动, 它将重新检查集群, 以评估具有日期规范的规则, 并为某些类型的调度程序缺" -"陷提供故障保护. 如果值为0, 将禁用轮询. 如果值为正数, 则设置以秒为单位的时间间" -"隔, 除非指定了其它单位 (例如, \"5min\")." +"一些设备不支持多个连接。 如果设备忙于另一个任务,则操作可能会'失败' ,因此" +"Pacemaker将自动重试(如果时间允许)。 使用此选项更改Pacemaker在放弃之前重" +"试'reboot' 操作的次数." -#: lib/common/options.c:107 -msgid "How a cluster node should react if notified of its own fencing" -msgstr "集群节点在收到针对自己的 fence 操作结果通知时应如何反应" +#: lib/common/options.c:677 +#, fuzzy +msgid "An alternate command to run instead of 'off'" +msgstr "运行替代命令,而不是'off'" -#: lib/common/options.c:108 +#: lib/common/options.c:678 #, fuzzy msgid "" -"A cluster node may receive notification of a \"succeeded\" fencing that " -"targeted it if fencing is misconfigured, or if fabric fencing is in use that " -"doesn't cut cluster communication. Use \"stop\" to attempt to immediately " -"stop Pacemaker and stay stopped, or \"panic\" to attempt to immediately " -"reboot the local node, falling back to stop on failure." +"Some devices do not support the standard commands or may provide additional " +"ones. Use this to specify an alternate, device-specific, command that " +"implements the 'off' action." 
msgstr "" -"如果有错误的 fence 配置, 或者在使用 fabric fence 机制 (并不会切断集群通信), " -"则集群节点可能会收到针对自己的 \"succeeded\" fence 结果通知. 使用 \"stop\" 尝" -"试立即停止 pacemaker 并保持停止状态,或者使用 \"panic\" 尝试立即重新启动本地节" -"点,如果失败则返回执行 stop." +"一些设备不支持标准命令或可能提供其他命令,使用此选项可指定一个该设备专用的替代" +"命令,用来实现'off'操作。" -#: lib/common/options.c:119 +#: lib/common/options.c:686 +#, fuzzy msgid "" -"Declare an election failed if it is not decided within this much time. If " -"you need to adjust this value, it probably indicates the presence of a bug." -msgstr "" -"如果集群在本项设置时间内没有作出决定则宣布选举失败. 这可能表明当前存在错误, " -"您需要调整该值." +"Specify an alternate timeout to use for 'off' actions instead of stonith-" +"timeout" +msgstr "指定用于off 操作的替代超时,而不是stonith-timeout" -#: lib/common/options.c:128 +#: lib/common/options.c:688 +#, fuzzy msgid "" -"Exit immediately if shutdown does not complete within this much time. If you " -"need to adjust this value, it probably indicates the presence of a bug." +"Some devices need much more/less time to complete than normal. Use this to " +"specify an alternate, device-specific, timeout for 'off' actions." msgstr "" -"如果在这段时间内关机仍未完成, pacemaker 将立即退出. 这可能表明当前存在错误, " -"您需要调整该值." +"一些设备需要比正常情况下更多或更少的时间来完成操作,使用此选项指定一个用" +"于'off'操作的该设备特定的替代超时。" -#: lib/common/options.c:138 lib/common/options.c:147 +#: lib/common/options.c:696 +#, fuzzy msgid "" -"If you need to adjust this value, it probably indicates the presence of a " -"bug." -msgstr "这可能表明当前存在错误, 您需要调整该值." +"The maximum number of times to try the 'off' command within the timeout " +"period" +msgstr "在超时前重试'off'命令的最大次数" -#: lib/common/options.c:155 +#: lib/common/options.c:698 #, fuzzy msgid "" -"Enabling this option will slow down cluster recovery under all conditions" +"Some devices do not support multiple connections. Operations may \"fail\" if " +"the device is busy with another task. In that case, Pacemaker will " +"automatically retry the operation if there is time remaining. Use this " +"option to alter the number of times Pacemaker tries a 'off' action before " +"giving up." msgstr "" -"启用此选项将在所有情况下减慢集群" -"恢复的速度" +" 一些设备不支持多个连接。 如果设备忙于另一个任务,则操作可能会'失败' , 因此" +"Pacemaker将自动重试(如果时间允许)。 使用此选项更改Pacemaker在放弃之前重" +"试'off' 操作的次数." + +#: lib/common/options.c:708 +#, fuzzy +msgid "An alternate command to run instead of 'on'" +msgstr "仅高级使用:运行替代命令,而不是'on'" -#: lib/common/options.c:157 +#: lib/common/options.c:709 +#, fuzzy msgid "" -"Delay cluster recovery for this much time to allow for additional events to " -"occur. Useful if your configuration is sensitive to the order in which ping " -"updates arrive." +"Some devices do not support the standard commands or may provide additional " +"ones. Use this to specify an alternate, device-specific, command that " +"implements the 'on' action." msgstr "" -"集群恢复将被推迟指定的时间间隔, 以等待更多事件发生. 如果您的配置对 ping 更新" -"到达的顺序很敏感, 则可以使用此选项." 
- -#: lib/common/options.c:167 -msgid "What to do when the cluster does not have quorum" -msgstr "当集群没有达到必需票数时该如何做" +"一些设备不支持标准命令或可能提供其他命令,使用此选项可指定一个该设备特定的替" +"代命令,用来实现'on'操作。" -#: lib/common/options.c:174 -msgid "Whether to lock resources to a cleanly shut down node" -msgstr "是否锁定资源到完全关闭的节点" +#: lib/common/options.c:717 +#, fuzzy +msgid "" +"Specify an alternate timeout to use for 'on' actions instead of stonith-" +"timeout" +msgstr "指定用于on 操作的替代超时,而不是stonith-timeout" -#: lib/common/options.c:175 +#: lib/common/options.c:719 +#, fuzzy msgid "" -"When true, resources active on a node when it is cleanly shut down are kept " -"\"locked\" to that node (not allowed to run elsewhere) until they start " -"again on that node after it rejoins (or for at most shutdown-lock-limit, if " -"set). Stonith resources and Pacemaker Remote connections are never locked. " -"Clone and bundle instances and the promoted role of promotable clones are " -"currently never locked, though support could be added in a future release." +"Some devices need much more/less time to complete than normal. Use this to " +"specify an alternate, device-specific, timeout for 'on' actions." msgstr "" -"设置为 true 时, 在完全关闭的节点上活动的资源将被 \"locked\" 到该节点 (不允许" -"在其它方运行), 直到该节点重新加入后它们再次在该节点上启动 (最长为 shutdown-" -"lock-limit,如果已设置). Stonith 资源和 Pacemaker Remote 连接永远不会被锁定. " -"克隆和捆绑实例以及可提升克隆的提升角色目前不会被锁定, 尽管可能在未来的发行版" -"中添加支持. " +"一些设备需要比正常情况下更多或更少的时间来完成操作,使用此选项指定一个用" +"于'on'操作的该设备特定的替代超时。" -#: lib/common/options.c:188 -msgid "Do not lock resources to a cleanly shut down node longer than this" -msgstr "资源会被锁定到完全关闭的节点的最长时间" +#: lib/common/options.c:727 +#, fuzzy +msgid "" +"The maximum number of times to try the 'on' command within the timeout period" +msgstr "在超时前重试'on'命令的最大次数" -#: lib/common/options.c:190 +#: lib/common/options.c:729 +#, fuzzy msgid "" -"If shutdown-lock is true and this is set to a nonzero time duration, " -"shutdown locks will expire after this much time has passed since the " -"shutdown was initiated, even if the node has not rejoined." +"Some devices do not support multiple connections. Operations may \"fail\" if " +"the device is busy with another task. In that case, Pacemaker will " +"automatically retry the operation if there is time remaining. Use this " +"option to alter the number of times Pacemaker tries a 'on' action before " +"giving up." msgstr "" -"如果 shutdown-lock 为 true, 并且将此选项设置为非零时间间隔, 则自关闭操作执行" -"经过此时间后,shutdown lock 将过期, 即使该节点尚未重新加入也是如此. " - -#: lib/common/options.c:199 -msgid "Enable Access Control Lists (ACLs) for the CIB" -msgstr "为 CIB 启用访问控制列表 (ACL) " +" 一些设备不支持多个连接。 如果设备忙于另一个任务,则操作可能会'失败' , 因此" +"Pacemaker将自动重试(如果时间允许)。 使用此选项更改Pacemaker在放弃之前重" +"试'on' 操作的次数." -#: lib/common/options.c:206 -msgid "Whether resources can run on any node by default" -msgstr "默认情况下资源是否可以在任何节点上运行" +#: lib/common/options.c:739 +#, fuzzy +msgid "An alternate command to run instead of 'list'" +msgstr "运行替代命令,而不是'list'" -#: lib/common/options.c:213 +#: lib/common/options.c:740 +#, fuzzy msgid "" -"Whether the cluster should refrain from monitoring, starting, and stopping " -"resources" -msgstr "集群是否应避免监视, 启动和停止资源" +"Some devices do not support the standard commands or may provide additional " +"ones. Use this to specify an alternate, device-specific, command that " +"implements the 'list' action." 
+msgstr "" +"一些设备不支持标准命令或可能提供其他命令,使用此选项可指定一个该设备特定的替" +"代命令,用来实现'list'操作。" -#: lib/common/options.c:221 +#: lib/common/options.c:748 +#, fuzzy msgid "" -"Whether a start failure should prevent a resource from being recovered on " -"the same node" -msgstr "资源启动失败是否应阻止在同一节点上恢复该资源" +"Specify an alternate timeout to use for 'list' actions instead of stonith-" +"timeout" +msgstr "指定用于list 操作的替代超时,而不是stonith-timeout" -#: lib/common/options.c:223 +#: lib/common/options.c:750 +#, fuzzy msgid "" -"When true, the cluster will immediately ban a resource from a node if it " -"fails to start there. When false, the cluster will instead check the " -"resource's fail count against its migration-threshold." +"Some devices need much more/less time to complete than normal. Use this to " +"specify an alternate, device-specific, timeout for 'list' actions." msgstr "" -"当为true, 如果资源启动失败, 集群将立即禁止节点启动该资源, 当为false, 集群将检" -"查资源的失败次数是否超过了其 migration-threshold. " - -#: lib/common/options.c:231 -msgid "Whether the cluster should check for active resources during start-up" -msgstr "集群是否在启动期间检查活动的资源" +"一些设备需要比正常情况下更多或更少的时间来完成操作,使用此选项指定一个用" +"于'list'操作的该设备特定的替代超时。" -#: lib/common/options.c:241 +#: lib/common/options.c:758 #, fuzzy -msgid "Whether nodes may be fenced as part of recovery" -msgstr "" -"节点是否可以被 fence 作为集群恢复" -"的一部分" - -#: lib/common/options.c:242 msgid "" -"If false, unresponsive nodes are immediately assumed to be harmless, and " -"resources that were active on them may be recovered elsewhere. This can " -"result in a \"split-brain\" situation, potentially leading to data loss and/" -"or service unavailability." -msgstr "" -"如果为 false, 则立即假定无响应的节点是无害的, 并且可以在其它位置恢复在其上活" -"动的资源. 这可能会导致 \"split-brain\" 情况, 从而可能导致数据丢失和(或)服务不" -"可用. " +"The maximum number of times to try the 'list' command within the timeout " +"period" +msgstr "在超时前重试'list'命令的最大次数" -#: lib/common/options.c:251 +#: lib/common/options.c:760 +#, fuzzy msgid "" -"Action to send to fence device when a node needs to be fenced (\"poweroff\" " -"is a deprecated alias for \"off\")" +"Some devices do not support multiple connections. Operations may \"fail\" if " +"the device is busy with another task. In that case, Pacemaker will " +"automatically retry the operation if there is time remaining. Use this " +"option to alter the number of times Pacemaker tries a 'list' action before " +"giving up." msgstr "" -"当节点需要被 fence 时, 向 fence 设备发送的操作 (\"poweroff\" 作为 \"off\" 的" -"别名已被弃用)" - -#: lib/common/options.c:259 -msgid "" -"How long to wait for on, off, and reboot fence actions to complete by default" -msgstr "默认情况下, 等待 on, off, 和 reboot fence 操作完成的时间" +" 一些设备不支持多个连接。 如果设备忙于另一个任务,则操作可能会'失败' , 因此" +"Pacemaker将自动重试(如果时间允许)。 使用此选项更改Pacemaker在放弃之前重" +"试'list' 操作的次数." -#: lib/common/options.c:267 -msgid "Whether watchdog integration is enabled" -msgstr "是否启用 watchdog 集成设置" +#: lib/common/options.c:770 +#, fuzzy +msgid "An alternate command to run instead of 'monitor'" +msgstr "运行替代命令,而不是'monitor'" -#: lib/common/options.c:268 +#: lib/common/options.c:771 +#, fuzzy msgid "" -"This is set automatically by the cluster according to whether SBD is " -"detected to be in use. User-configured values are ignored. The value `true` " -"is meaningful if diskless SBD is used and `stonith-watchdog-timeout` is " -"nonzero. In that case, if fencing is required, watchdog-based self-fencing " -"will be performed via SBD without requiring a fencing resource explicitly " -"configured." +"Some devices do not support the standard commands or may provide additional " +"ones. 
Use this to specify an alternate, device-specific, command that " +"implements the 'monitor' action." msgstr "" -"集群会根据是否检测到 SBD 正在使用来自动设置此值. 用户配置的值将被忽略. 如果使" -"用了无盘 SBD 并且 `stonith-watchdog-timeout` 不为零, 则值 `true` 才有实际意" -"义. 在这种情况下, 如果需要fence, 将通过 SBD 执行基于 watchdog 的自我 fence, " -"而不需要明确配置 fence 资源." +"一些设备不支持标准命令或可能提供其他命令,使用此选项可指定一个该设备特定的替" +"代命令,用来实现'monitor'操作。" -#: lib/common/options.c:289 +#: lib/common/options.c:779 #, fuzzy msgid "" -"How long before nodes can be assumed to be safely down when watchdog-based " -"self-fencing via SBD is in use" -msgstr "" -"当基于 watchdog 的自我 fence 机制通过SBD 被执行时, 节点被认为安全下线的等待时" -"间有多长" +"Specify an alternate timeout to use for 'monitor' actions instead of stonith-" +"timeout" +msgstr "指定用于monitor 操作的替代超时,而不是stonith-timeout" -#: lib/common/options.c:291 +#: lib/common/options.c:781 #, fuzzy msgid "" -"If this is set to a positive value, lost nodes are assumed to achieve self-" -"fencing using watchdog-based SBD within this much time. This does not " -"require a fencing resource to be explicitly configured, though a " -"fence_watchdog resource can be configured, to limit use to specific nodes. " -"If this is set to 0 (the default), the cluster will never assume watchdog-" -"based self-fencing. If this is set to a negative value, the cluster will use " -"twice the local value of the `SBD_WATCHDOG_TIMEOUT` environment variable if " -"that is positive, or otherwise treat this as 0. WARNING: When used, this " -"timeout must be larger than `SBD_WATCHDOG_TIMEOUT` on all nodes that use " -"watchdog-based SBD, and Pacemaker will refuse to start on any of those nodes " -"where this is not true for the local value or SBD is not active. When this " -"is set to a negative value, `SBD_WATCHDOG_TIMEOUT` must be set to the same " -"value on all nodes that use SBD, otherwise data corruption or loss could " -"occur." +"Some devices need much more/less time to complete than normal. Use this to " +"specify an alternate, device-specific, timeout for 'monitor' actions." msgstr "" -"如果设为正值, 丢失的节点将在设定的时间内被认定使用基于 watchdog 的 SBD 完成自" -"我 fence. 这不需要明确配置一个 fence 资源, 但可以配置一个 fence_watchdog 资源" -"来限制对特定节点使用. 如果设为0 (默认值), 集群将永远不会认定节点使用基于 " -"watchdog 的自我 fence. 如果设为负值, 集群将使用本地 `SBD_WATCHDOG_TIMEOUT` 环" -"境变量的两倍值(如果该值为正), 否则会将该值视为0. 警告: 在所有使用基于 " -"watchdog 的 SBD 的节点上, 此超时值需大于 `SBD_WATCHDOG_TIMEOUT` 的值, 否则 " -"Pacemaker 不会在任何不符合此条件的节点上启动, 也不会在任何未启用 SBD 的节点上" -"启动. 当设为负值时所有使用 SBD 的节点上 `SBD_WATCHDOG_TIMEOUT` 的值必须设置为" -"相同的值, 否则可能导致数据损坏或丢失." +"一些设备需要比正常情况下更多或更少的时间来完成操作,使用此选项指定一个用" +"于'monitor'操作的该设备特定的替代超时。" -#: lib/common/options.c:311 +#: lib/common/options.c:789 +#, fuzzy msgid "" -"How many times fencing can fail before it will no longer be immediately re-" -"attempted on a target" -msgstr "fence 操作失败多少次会停止立即尝试" +"The maximum number of times to try the 'monitor' command within the timeout " +"period" +msgstr "在超时前重试'monitor'命令的最大次数" -#: lib/common/options.c:319 -msgid "Allow performing fencing operations in parallel" -msgstr "允许并行执行 fencing 操作" +#: lib/common/options.c:791 +#, fuzzy +msgid "" +"Some devices do not support multiple connections. Operations may \"fail\" if " +"the device is busy with another task. In that case, Pacemaker will " +"automatically retry the operation if there is time remaining. Use this " +"option to alter the number of times Pacemaker tries a 'monitor' action " +"before giving up." +msgstr "" +" 一些设备不支持多个连接。 如果设备忙于另一个任务,则操作可能会'失败' , 因此" +"Pacemaker将自动重试(如果时间允许)。 使用此选项更改Pacemaker在放弃之前重" +"试'monitor' 操作的次数." 
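+# A minimal, hypothetical illustration of the per-action 'monitor' overrides
+# described above, for a device that answers monitor requests slowly and
+# unreliably; the device name and values are placeholder assumptions:
+#
+#   pcs stonith update fence-ipmi pcmk_monitor_timeout=60s pcmk_monitor_retries=5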
-#: lib/common/options.c:326 +#: lib/common/options.c:801 #, fuzzy -msgid "Whether to fence unseen nodes at start-up" -msgstr "*** 仅高级使用 *** 是否在启动时fence不可见节点" +msgid "An alternate command to run instead of 'status'" +msgstr "运行替代命令,而不是'status'" -#: lib/common/options.c:327 +#: lib/common/options.c:802 #, fuzzy msgid "" -"Setting this to false may lead to a \"split-brain\" situation, potentially " -"leading to data loss and/or service unavailability." +"Some devices do not support the standard commands or may provide additional " +"ones. Use this to specify an alternate, device-specific, command that " +"implements the 'status' action." msgstr "" -"将此设置为 false 可能会导致 \"split-brain\" 的情况,可能导致数据丢失和(或)服" -"务不可用。" +"一些设备不支持标准命令或可能提供其他命令,使用此选项可指定一个该设备特定的替" +"代命令,用来实现'status'操作。" -#: lib/common/options.c:334 +#: lib/common/options.c:810 +#, fuzzy msgid "" -"Apply fencing delay targeting the lost nodes with the highest total resource " -"priority" -msgstr "针对具有最高总资源优先级的丢失节点应用fencing延迟" +"Specify an alternate timeout to use for 'status' actions instead of stonith-" +"timeout" +msgstr "指定用于status 操作的替代超时,而不是stonith-timeout" -#: lib/common/options.c:336 +#: lib/common/options.c:812 +#, fuzzy msgid "" -"Apply specified delay for the fencings that are targeting the lost nodes " -"with the highest total resource priority in case we don't have the majority " -"of the nodes in our cluster partition, so that the more significant nodes " -"potentially win any fencing match, which is especially meaningful under " -"split-brain of 2-node cluster. A promoted resource instance takes the base " -"priority + 1 on calculation if the base priority is not 0. Any static/random " -"delays that are introduced by `pcmk_delay_base/max` configured for the " -"corresponding fencing resources will be added to this delay. This delay " -"should be significantly greater than, safely twice, the maximum " -"`pcmk_delay_base/max`. By default, priority fencing delay is disabled." +"Some devices need much more/less time to complete than normal. Use this to " +"specify an alternate, device-specific, timeout for 'status' actions." msgstr "" -"如果我们所在的集群分区并不拥有大多数集群节点,则针对丢失节点的fence操作应用指" -"定的延迟,这样更重要的节点就能够赢得fence竞赛。这对于双节点集群在split-brain" -"状况下尤其有意义。如果基本优先级不为0,在计算时主资源实例获得基本优先级+1。任" -"何对于相应的 fence 资源由 pcmk_delay_base/max 配置所引入的静态/随机延迟会被添" -"加到此延迟。为了安全, 这个延迟应该明显大于 pcmk_delay_base/max 的最大设置值," -"例如两倍。默认情况下,优先级fencing延迟已禁用。" +"一些设备需要比正常情况下更多或更少的时间来完成操作,使用此选项指定一个用" +"于'status'操作的该设备特定的替代超时" -#: lib/common/options.c:353 +#: lib/common/options.c:820 +#, fuzzy msgid "" -"How long to wait for a node that has joined the cluster to join the " -"controller process group" -msgstr "等待已加入集群的节点加入控制器进程组的时间" +"The maximum number of times to try the 'status' command within the timeout " +"period" +msgstr "仅高级使用:在超时前重试'status'命令的最大次数" -#: lib/common/options.c:355 +#: lib/common/options.c:822 +#, fuzzy msgid "" -"Fence nodes that do not join the controller process group within this much " -"time after joining the cluster, to allow the cluster to continue managing " -"resources. A value of 0 means never fence pending nodes. Setting the value " -"to 2h means fence nodes after 2 hours." +"Some devices do not support multiple connections. Operations may \"fail\" if " +"the device is busy with another task. In that case, Pacemaker will " +"automatically retry the operation if there is time remaining. Use this " +"option to alter the number of times Pacemaker tries a 'status' action before " +"giving up." 
msgstr "" -"如果节点加入集群后在此时间内不加入控制器进程组,Fence该节点,以便群集继续管理" -"资源。值为0表示永远不 fence 待定节点。将值设置为2h表示2小时后 fence 待定节" -"点。" +" 一些设备不支持多个连接。 如果设备忙于另一个任务,则操作可能会'失败' , 因此" +"Pacemaker将自动重试(如果时间允许)。 使用此选项更改Pacemaker在放弃之前重" +"试'status' 操作的次数." -#: lib/common/options.c:365 -msgid "Maximum time for node-to-node communication" -msgstr "最大节点间通信时间" +#: lib/common/options.c:843 +msgid "Resource assignment priority" +msgstr "" -#: lib/common/options.c:366 +#: lib/common/options.c:844 msgid "" -"The node elected Designated Controller (DC) will consider an action failed " -"if it does not get a response from the node executing the action within this " -"time (after considering the action's own timeout). The \"correct\" value " -"will depend on the speed and load of your network and cluster nodes." +"If not all resources can be active, the cluster will stop lower-priority " +"resources in order to keep higher-priority ones active." msgstr "" -"如果一个操作未在该时间内(并且考虑操作本身的超时时长)从执行该操作的节点获得" -"响应,则会被选为指定控制器(DC)的节点认定为失败。\"正确\" 值将取决于速度和您" -"的网络和集群节点的负载。" -#: lib/common/options.c:378 -msgid "Maximum amount of system load that should be used by cluster nodes" -msgstr "集群节点应该使用的最大系统负载量" +#: lib/common/options.c:852 +msgid "Default value for influence in colocation constraints" +msgstr "" -#: lib/common/options.c:380 +#: lib/common/options.c:853 msgid "" -"The cluster will slow down its recovery process when the amount of system " -"resources used (currently CPU) approaches this limit" -msgstr "当使用的系统资源量(当前指 CPU)接近此限制时, 集群将减慢其恢复过程" +"Use this value as the default for influence in all colocation constraints " +"involving this resource, as well as in the implicit colocation constraints " +"created if this resource is in a group." +msgstr "" + +#: lib/common/options.c:863 +#, fuzzy +msgid "State the cluster should attempt to keep this resource in" +msgstr "集群是否在启动期间检查运行资源" -#: lib/common/options.c:387 +#: lib/common/options.c:864 msgid "" -"Maximum number of jobs that can be scheduled per node (defaults to 2x cores)" -msgstr "每个节点可以调度的最大作业数(默认为2x内核数)" +"\"Stopped\" forces the resource to be stopped. \"Started\" allows the " +"resource to be started (and in the case of promotable clone resources, " +"promoted if appropriate). \"Unpromoted\" allows the resource to be started, " +"but only in the unpromoted role if the resource is promotable. \"Promoted\" " +"is equivalent to \"Started\"." +msgstr "" -#: lib/common/options.c:395 +#: lib/common/options.c:875 #, fuzzy +msgid "Whether the cluster is allowed to actively change the resource's state" +msgstr "集群是否在启动期间检查运行资源" + +#: lib/common/options.c:877 msgid "" -"Maximum number of jobs that the cluster may execute in parallel across all " -"nodes" -msgstr "集群可以在所有节点上并发执行的最大作业数" +"If false, the cluster will not start, stop, promote, or demote the resource " +"on any node. Recurring actions for the resource are unaffected. If true, a " +"true value for the maintenance-mode cluster option, the maintenance node " +"attribute, or the maintenance resource meta-attribute overrides this." +msgstr "" -#: lib/common/options.c:397 +#: lib/common/options.c:887 msgid "" -"The \"correct\" value will depend on the speed and load of your network and " -"cluster nodes. If set to 0, the cluster will impose a dynamically calculated " -"limit when any node has a high load." 
+"If true, the cluster will not schedule any actions involving the resource" msgstr "" -"\"正确\" 值将取决于速度和您的网络与集群节点的负载。如果设置为0,当任何节点具" -"有高负载时,集群将施加一个动态计算的限制。" -#: lib/common/options.c:406 +#: lib/common/options.c:889 msgid "" -"The number of live migration actions that the cluster is allowed to execute " -"in parallel on a node (-1 means no limit)" -msgstr "允许集群在一个节点上并行执行的实时迁移操作的数量(-1表示没有限制)" +"If true, the cluster will not start, stop, promote, or demote the resource " +"on any node, and will pause any recurring monitors (except those specifying " +"role as \"Stopped\"). If false, a true value for the maintenance-mode " +"cluster option or maintenance node attribute overrides this." +msgstr "" -#: lib/common/options.c:414 -msgid "Maximum IPC message backlog before disconnecting a cluster daemon" -msgstr "断开集群守护程序之前的最大IPC消息积压" +#: lib/common/options.c:899 +msgid "Score to add to the current node when a resource is already active" +msgstr "" -#: lib/common/options.c:415 +#: lib/common/options.c:901 msgid "" -"Raise this if log has \"Evicting client\" messages for cluster daemon PIDs " -"(a good value is the number of resources in the cluster multiplied by the " -"number of nodes)." +"Score to add to the current node when a resource is already active. This " +"allows running resources to stay where they are, even if they would be " +"placed elsewhere if they were being started from a stopped state. The " +"default is 1 for individual clone instances, and 0 for all other resources." msgstr "" -"如果日志中有针对集群守护程序PID的消息“Evicting client”,(则建议将值设为集群" -"中的资源数量乘以节点数量)" -#: lib/common/options.c:425 -#, fuzzy -msgid "Whether the cluster should stop all active resources" -msgstr "集群是否在启动期间检查运行资源" +#: lib/common/options.c:914 +msgid "Conditions under which the resource can be started" +msgstr "" -#: lib/common/options.c:432 -msgid "Whether to stop resources that were removed from the configuration" -msgstr "是否停止配置已被删除的资源" +#: lib/common/options.c:915 +msgid "" +"Conditions under which the resource can be started. \"nothing\" means the " +"cluster can always start this resource. \"quorum\" means the cluster can " +"start this resource only if a majority of the configured nodes are active. " +"\"fencing\" means the cluster can start this resource only if a majority of " +"the configured nodes are active and any failed or unknown nodes have been " +"fenced. \"unfencing\" means the cluster can start this resource only if a " +"majority of the configured nodes are active and any failed or unknown nodes " +"have been fenced, and only on nodes that have been unfenced. The default is " +"\"quorum\" for resources with a class of stonith; otherwise, \"unfencing\" " +"if unfencing is active in the cluster; otherwise, \"fencing\" if the stonith-" +"enabled cluster option is true; otherwise, \"quorum\"." +msgstr "" -#: lib/common/options.c:440 -msgid "Whether to cancel recurring actions removed from the configuration" -msgstr "是否取消配置已被删除的的重复操作" +#: lib/common/options.c:936 +msgid "" +"Number of failures on a node before the resource becomes ineligible to run " +"there." +msgstr "" -#: lib/common/options.c:448 -#, fuzzy -msgid "Whether to remove stopped resources from the executor" -msgstr "是否从pacemaker-execd 守护进程中清除已停止的资源" +#: lib/common/options.c:938 +msgid "" +"Number of failures that may occur for this resource on a node, before that " +"node is marked ineligible to host this resource. A value of 0 indicates that " +"this feature is disabled (the node will never be marked ineligible). 
By " +"contrast, the cluster treats \"INFINITY\" (the default) as a very large but " +"finite number. This option has an effect only if the failed operation " +"specifies its on-fail attribute as \"restart\" (the default), and " +"additionally for failed start operations, if the start-failure-is-fatal " +"cluster property is set to false." +msgstr "" -#: lib/common/options.c:449 -#, fuzzy -msgid "Values other than default are poorly tested and potentially dangerous." -msgstr "非默认值未经过充分的测试,有潜在的风险。该选项将在未来的版本中删除。" +#: lib/common/options.c:952 +msgid "Number of seconds before acting as if a failure had not occurred" +msgstr "" -#: lib/common/options.c:458 -msgid "The number of scheduler inputs resulting in errors to save" -msgstr "保存导致错误的调度程序输入的数量" +#: lib/common/options.c:953 +msgid "" +"Number of seconds after a failed action for this resource before acting as " +"if the failure had not occurred, and potentially allowing the resource back " +"to the node on which it failed. A value of 0 indicates that this feature is " +"disabled." +msgstr "" -#: lib/common/options.c:459 lib/common/options.c:466 lib/common/options.c:473 -msgid "Zero to disable, -1 to store unlimited." -msgstr "零表示禁用,-1表示存储不受限制。" +#: lib/common/options.c:964 +msgid "" +"What to do if the cluster finds the resource active on more than one node" +msgstr "" -#: lib/common/options.c:465 -msgid "The number of scheduler inputs resulting in warnings to save" -msgstr "保存导致警告的调度程序输入的数量" +#: lib/common/options.c:966 +msgid "" +"What to do if the cluster finds the resource active on more than one node. " +"\"block\" means to mark the resource as unmanaged. \"stop_only\" means to " +"stop all active instances of this resource and leave them stopped. " +"\"stop_start\" means to stop all active instances of this resource and start " +"the resource in one location only. \"stop_unexpected\" means to stop all " +"active instances of this resource except where the resource should be " +"active. (This should be used only when extra instances are not expected to " +"disrupt existing instances, and the resource agent's monitor of an existing " +"instance is capable of detecting any problems that could be caused. Note " +"that any resources ordered after this one will still need to be restarted.)" +msgstr "" -#: lib/common/options.c:472 -msgid "The number of scheduler inputs without errors or warnings to save" -msgstr "保存没有错误或警告的调度程序输入的数量" +#: lib/common/options.c:985 +#, fuzzy +msgid "" +"Whether the cluster should try to \"live migrate\" this resource when it " +"needs to be moved" +msgstr "集群是否在启动期间检查运行资源" -#: lib/common/options.c:484 +#: lib/common/options.c:987 +msgid "" +"Whether the cluster should try to \"live migrate\" this resource when it " +"needs to be moved. The default is true for ocf:pacemaker:remote resources, " +"and false otherwise." +msgstr "" + +#: lib/common/options.c:996 +msgid "" +"Whether the resource should be allowed to run on a node even if the node's " +"health score would otherwise prevent it" +msgstr "" + +#: lib/common/options.c:1004 #, fuzzy -msgid "How cluster should react to node health attributes" -msgstr "集群节点对节点健康属性如何反应" +msgid "Where to check user-defined node attributes" +msgstr "*** 仅高级使用 *** 是否在启动时fence不可见节点" -#: lib/common/options.c:485 +#: lib/common/options.c:1005 msgid "" -"Requires external entities to create node attributes (named with the prefix " -"\"#health\") with values \"red\", \"yellow\", or \"green\"." 
+"Whether to check user-defined node attributes on the physical host where a " +"container is running or on the local node. This is usually set for a bundle " +"resource and inherited by the bundle's primitive resource. A value of \"host" +"\" means to check user-defined node attributes on the underlying physical " +"host. Any other value means to check user-defined node attributes on the " +"local node (for a bundled primitive resource, this is the bundle node)." msgstr "" -"需要外部实体创建具有“red”,“yellow”或“green”值的节点属性(前缀为“#health”)" -#: lib/common/options.c:493 -msgid "Base health score assigned to a node" -msgstr "分配给节点的基本健康分数" +#: lib/common/options.c:1018 +msgid "" +"Name of the Pacemaker Remote guest node this resource is associated with, if " +"any" +msgstr "" -#: lib/common/options.c:494 -msgid "Only used when \"node-health-strategy\" is set to \"progressive\"." -msgstr "仅在“node-health-strategy”设置为“progressive”时使用。" +#: lib/common/options.c:1020 +msgid "" +"Name of the Pacemaker Remote guest node this resource is associated with, if " +"any. If specified, this both enables the resource as a guest node and " +"defines the unique name used to identify the guest node. The guest must be " +"configured to run the Pacemaker Remote daemon when it is started. WARNING: " +"This value cannot overlap with any resource or node IDs." +msgstr "" -#: lib/common/options.c:501 -msgid "The score to use for a node health attribute whose value is \"green\"" -msgstr "为节点健康属性值为“green”所使用的分数" +#: lib/common/options.c:1032 +msgid "" +"If remote-node is specified, the IP address or hostname used to connect to " +"the guest via Pacemaker Remote" +msgstr "" -#: lib/common/options.c:503 lib/common/options.c:512 lib/common/options.c:521 +#: lib/common/options.c:1034 msgid "" -"Only used when \"node-health-strategy\" is set to \"custom\" or \"progressive" -"\"." -msgstr "仅在“node-health-strategy”设置为“custom”或“progressive”时使用。" +"If remote-node is specified, the IP address or hostname used to connect to " +"the guest via Pacemaker Remote. The Pacemaker Remote daemon on the guest " +"must be configured to accept connections on this address. The default is the " +"value of the remote-node meta-attribute." +msgstr "" -#: lib/common/options.c:510 -msgid "The score to use for a node health attribute whose value is \"yellow\"" -msgstr "为节点健康属性值为“yellow”所使用的分数" +#: lib/common/options.c:1044 +msgid "" +"If remote-node is specified, port on the guest used for its Pacemaker Remote " +"connection" +msgstr "" -#: lib/common/options.c:519 -msgid "The score to use for a node health attribute whose value is \"red\"" -msgstr "为节点健康属性值为“red”所使用的分数" +#: lib/common/options.c:1046 +msgid "" +"If remote-node is specified, the port on the guest used for its Pacemaker " +"Remote connection. The Pacemaker Remote daemon on the guest must be " +"configured to listen on this port." +msgstr "" -#: lib/common/options.c:532 -#, fuzzy -msgid "How the cluster should allocate resources to nodes" -msgstr "集群应该如何分配资源到节点" +#: lib/common/options.c:1054 +msgid "" +"If remote-node is specified, how long before a pending Pacemaker Remote " +"guest connection times out." +msgstr "" + +#: lib/common/options.c:1062 +msgid "" +"If remote-node is specified, this acts as the allow-migrate meta-attribute " +"for the implicit remote connection resource (ocf:pacemaker:remote)." 
+msgstr "" #: lib/common/cmdline.c:70 msgid "Display software version and exit" msgstr "显示软件版本信息" #: lib/common/cmdline.c:73 msgid "Increase debug output (may be specified multiple times)" msgstr "显示更多调试信息(可多次指定)" #: lib/common/cmdline.c:92 msgid "FORMAT" msgstr "格式" #: lib/common/cmdline.c:94 msgid "Specify file name for output (or \"-\" for stdout)" msgstr "指定输出的文件名 或指定'-' 表示标准输出" #: lib/common/cmdline.c:94 msgid "DEST" msgstr "目标" #: lib/common/cmdline.c:100 msgid "Output Options:" msgstr "输出选项" #: lib/common/cmdline.c:100 msgid "Show output help" msgstr "显示输出帮助" -#: tools/crm_resource.c:201 +#: tools/crm_resource.c:204 #, c-format msgid "Aborting because no messages received in %d seconds" msgstr "中止,因为在%d秒内没有接收到消息" -#: tools/crm_resource.c:355 +#: tools/crm_resource.c:374 #, c-format msgid "Invalid check level setting: %s" msgstr "无效的检查级别设置:%s" -#: tools/crm_resource.c:857 +#: tools/crm_resource.c:891 #, c-format msgid "" "Resource '%s' not moved: active in %d locations (promoted in %d).\n" "To prevent '%s' from running on a specific location, specify a node.To " "prevent '%s' from being promoted at a specific location, specify a node and " "the --promoted option." msgstr "" "资源'%s'未移动:在%d个位置运行(其中在%d个位置为主实例)\n" "若要阻止'%s'在特定位置运行,请指定一个节点。若要防止'%s'在指定位置升级,指定" "一个节点并使用--promoted选项" -#: tools/crm_resource.c:868 +#: tools/crm_resource.c:902 #, c-format msgid "" "Resource '%s' not moved: active in %d locations.\n" "To prevent '%s' from running on a specific location, specify a node." msgstr "" "资源%s未移动:在%d个位置运行\n" "若要防止'%s'运行在特定位置,指定一个节点" -#: tools/crm_resource.c:945 +#: tools/crm_resource.c:979 #, c-format msgid "Could not get modified CIB: %s\n" msgstr "无法获得修改的CIB:%s\n" -#: tools/crm_resource.c:1023 +#: tools/crm_resource.c:1077 #, c-format msgid "No cluster connection to Pacemaker Remote node %s detected" msgstr "未检测到至pacemaker远程节点%s的集群连接" -#: tools/crm_resource.c:1084 +#: tools/crm_resource.c:1138 msgid "Must specify -t with resource type" msgstr "需要使用-t指定资源类型" -#: tools/crm_resource.c:1090 +#: tools/crm_resource.c:1144 msgid "Must supply -v with new value" msgstr "必须使用-v指定新值" -#: tools/crm_resource.c:1122 +#: tools/crm_resource.c:1176 msgid "Could not create executor connection" msgstr "无法创建到pacemaker-execd守护进程的连接" -#: tools/crm_resource.c:1147 +#: tools/crm_resource.c:1201 #, fuzzy, c-format msgid "Metadata query for %s failed: %s" msgstr ",查询%s的元数据失败: %s\n" -#: tools/crm_resource.c:1153 +#: tools/crm_resource.c:1207 #, c-format msgid "'%s' is not a valid agent specification" msgstr "'%s' 是一个无效的代理" -#: tools/crm_resource.c:1166 +#: tools/crm_resource.c:1220 msgid "--resource cannot be used with --class, --agent, and --provider" msgstr "--resource 不能与 --class, --agent, --provider一起使用" -#: tools/crm_resource.c:1171 +#: tools/crm_resource.c:1225 msgid "" "--class, --agent, and --provider can only be used with --validate and --" "force-*" msgstr "--class, --agent和--provider只能被用于--validate和--force-*" -#: tools/crm_resource.c:1180 +#: tools/crm_resource.c:1234 msgid "stonith does not support providers" msgstr "stonith 不支持提供者" -#: tools/crm_resource.c:1184 +#: tools/crm_resource.c:1238 #, c-format msgid "%s is not a known stonith agent" msgstr "%s 不是一个已知stonith代理" -#: tools/crm_resource.c:1189 +#: tools/crm_resource.c:1243 #, c-format msgid "%s:%s:%s is not a known resource" msgstr "%s:%s:%s 不是一个已知资源" -#: tools/crm_resource.c:1494 +#: tools/crm_resource.c:1551 #, c-format msgid "Error creating output format %s: %s" msgstr "创建输出格式错误 %s:%s" -#: tools/crm_resource.c:1515 +#: 
tools/crm_resource.c:1572 msgid "--expired requires --clear or -U" msgstr "--expired需要和--clear或-U一起使用" -#: tools/crm_resource.c:1532 +#: tools/crm_resource.c:1589 #, c-format msgid "Error parsing '%s' as a name=value pair" msgstr "'%s'解析错误,格式为name=value" -#: tools/crm_resource.c:1631 +#: tools/crm_resource.c:1688 msgid "Must supply a resource id with -r" msgstr "必须使用-r指定资源id" -#: tools/crm_resource.c:1637 +#: tools/crm_resource.c:1694 msgid "Must supply a node name with -N" msgstr "必须使用-N指定节点名称" -#: tools/crm_resource.c:1651 +#: tools/crm_resource.c:1708 msgid "Could not create CIB connection" msgstr "无法创建到CIB的连接" -#: tools/crm_resource.c:1659 +#: tools/crm_resource.c:1716 #, c-format msgid "Could not connect to the CIB: %s" msgstr "不能连接到CIB:%s" -#: tools/crm_resource.c:1682 +#: tools/crm_resource.c:1739 #, c-format msgid "Resource '%s' not found" msgstr "没有发现'%s'资源" -#: tools/crm_resource.c:1694 +#: tools/crm_resource.c:1751 #, c-format msgid "Cannot operate on clone resource instance '%s'" msgstr "不能操作克隆资源实例'%s'" -#: tools/crm_resource.c:1706 +#: tools/crm_resource.c:1763 #, c-format msgid "Node '%s' not found" msgstr "没有发现%s节点" -#: tools/crm_resource.c:1717 +#: tools/crm_resource.c:1774 #, c-format msgid "Error connecting to the controller: %s" msgstr "连接到控制器错误:%s" -#: tools/crm_resource.c:1726 +#: tools/crm_resource.c:1783 #, fuzzy, c-format msgid "Error connecting to %s: %s" msgstr "连接到控制器错误:%s" -#: tools/crm_resource.c:1986 +#: tools/crm_resource.c:2052 msgid "You need to supply a value with the -v option" msgstr "需要使用-v选项提供一个值" -#: tools/crm_resource.c:2040 +#: tools/crm_resource.c:2106 msgid "You need to specify a resource type with -t" msgstr "需要使用-t指定资源类型" -#: tools/crm_resource.c:2047 +#: tools/crm_resource.c:2113 #, fuzzy, c-format msgid "Could not delete resource %s: %s" msgstr "无法删除资源:%s:%s" -#: tools/crm_resource.c:2057 +#: tools/crm_resource.c:2123 #, c-format msgid "Unimplemented command: %d" msgstr "未实现的命令:%d" -#: tools/crm_resource.c:2087 +#: tools/crm_resource.c:2153 #, c-format msgid "Error performing operation: %s" msgstr "执行操作错误:%s" #, fuzzy #~ msgid "For example, \"node1,node2,node3\"." #~ msgstr "例如, \"node1,node2,node3\"." #, fuzzy #~ msgid "*** Advanced Use Only ***" #~ msgstr "*** Advanced Use Only(仅限高级用户使用) ***" #, fuzzy #~ msgid "" #~ "Zero disables polling, while positive values are an interval in seconds " #~ "(unless other units are specified, for example \"5min\")" #~ msgstr "" #~ "0 表示禁用轮询,而正值表示以秒为单位的时间间隔(除非指定了其他单位, 例如 " #~ "\"5min\" 表示5分钟)" #~ msgid " Allowed values: " #~ msgstr " 允许的值: " #~ msgid "" #~ "This value is not used by Pacemaker, but is kept for backward " #~ "compatibility, and certain legacy fence agents might use it." #~ msgstr "" #~ "Pacemaker不使用此值,但保留此值是为了向后兼容,某些传统的fence 代理可能会" #~ "使用它。" #~ msgid "No agents found for standard '%s'" #~ msgstr "没有发现指定的'%s'标准代理" #, fuzzy #~ msgid "No agents found for standard '%s' and provider '%s'" #~ msgstr "没有发现指定的标准%s和提供者%s的资源代理" #~ msgid "No %s found for %s" #~ msgstr "没有发现%s符合%s" #~ msgid "No %s found" #~ msgstr "没有发现%s" #~ msgid "" #~ "If nonzero, along with `have-watchdog=true` automatically set by the " #~ "cluster, when fencing is required, watchdog-based self-fencing will be " #~ "performed via SBD without requiring a fencing resource explicitly " #~ "configured. If `stonith-watchdog-timeout` is set to a positive value, " #~ "unseen nodes are assumed to self-fence within this much time.
+WARNING:+ " #~ "It must be ensured that this value is larger than the " #~ "`SBD_WATCHDOG_TIMEOUT` environment variable on all nodes. Pacemaker " #~ "verifies the settings individually on all nodes and prevents startup or " #~ "shuts down if configured wrongly on the fly. It's strongly recommended " #~ "that `SBD_WATCHDOG_TIMEOUT` is set to the same value on all nodes. If " #~ "`stonith-watchdog-timeout` is set to a negative value, and " #~ "`SBD_WATCHDOG_TIMEOUT` is set, twice that value will be used. +WARNING:+ " #~ "In this case, it's essential (currently not verified by Pacemaker) that " #~ "`SBD_WATCHDOG_TIMEOUT` is set to the same value on all nodes." #~ msgstr "" #~ "如果值非零,且集群设置了 `have-watchdog=true` ,当需要 fence 操作时,基于 " #~ "watchdog 的自我 fence 机制将通过SBD执行,而不需要显式配置 fence 资源。如" #~ "果 `stonith-watchdog-timeout` 被设为正值,则假定不可见的节点在这段时间内自" #~ "我fence。 +WARNING:+ 必须确保该值大于所有节点上的`SBD_WATCHDOG_TIMEOUT` 环" #~ "境变量。Pacemaker将在所有节点上单独验证设置,如发现有错误的动态配置,将防" #~ "止节点启动或关闭。强烈建议在所有节点上将 `SBD_WATCHDOG_TIMEOUT` 设置为相同" #~ "的值。如果 `stonith-watchdog-timeout` 设置为负值。并且设置了 " #~ "`SBD_WATCHDOG_TIMEOUT` ,则将使用该值的两倍, +WARNING:+ 在这种情况下,必" #~ "须将所有节点上 `SBD_WATCHDOG_TIMEOUT` 设置为相同的值(目前没有通过pacemaker" #~ "验证)。" diff --git a/tools/crm_simulate.c b/tools/crm_simulate.c index fe41bf0944..81ff8b3e8d 100644 --- a/tools/crm_simulate.c +++ b/tools/crm_simulate.c @@ -1,587 +1,594 @@ /* * Copyright 2009-2024 the Pacemaker project contributors * * The version control history for this file may have further details. * * This source code is licensed under the GNU General Public License version 2 * or later (GPLv2+) WITHOUT ANY WARRANTY. */ #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #define SUMMARY "crm_simulate - simulate a Pacemaker cluster's response to events" struct { char *dot_file; char *graph_file; gchar *input_file; pcmk_injections_t *injections; unsigned int flags; gchar *output_file; long long repeat; gboolean store; gchar *test_dir; char *use_date; char *xml_file; } options = { .flags = pcmk_sim_show_pending | pcmk_sim_sanitized, .repeat = 1 }; uint32_t section_opts = 0; char *temp_shadow = NULL; crm_exit_t exit_code = CRM_EX_OK; #define INDENT " " static pcmk__supported_format_t formats[] = { PCMK__SUPPORTED_FORMAT_NONE, PCMK__SUPPORTED_FORMAT_TEXT, PCMK__SUPPORTED_FORMAT_XML, { NULL, NULL, NULL } }; static gboolean all_actions_cb(const gchar *option_name, const gchar *optarg, gpointer data, GError **error) { options.flags |= pcmk_sim_all_actions; return TRUE; } static gboolean attrs_cb(const gchar *option_name, const gchar *optarg, gpointer data, GError **error) { section_opts |= pcmk_section_attributes; return TRUE; } static gboolean failcounts_cb(const gchar *option_name, const gchar *optarg, gpointer data, GError **error) { section_opts |= pcmk_section_failcounts | pcmk_section_failures; return TRUE; } static gboolean in_place_cb(const gchar *option_name, const gchar *optarg, gpointer data, GError **error) { options.store = TRUE; options.flags |= pcmk_sim_process | pcmk_sim_simulate; return TRUE; } static gboolean live_check_cb(const gchar *option_name, const gchar *optarg, gpointer data, GError **error) { if (options.xml_file) { free(options.xml_file); } options.xml_file = NULL; options.flags &= ~pcmk_sim_sanitized; return TRUE; } static gboolean node_down_cb(const gchar *option_name, const gchar *optarg, gpointer data, GError **error) { options.injections->node_down 
= g_list_append(options.injections->node_down, g_strdup(optarg)); return TRUE; } static gboolean node_fail_cb(const gchar *option_name, const gchar *optarg, gpointer data, GError **error) { options.injections->node_fail = g_list_append(options.injections->node_fail, g_strdup(optarg)); return TRUE; } static gboolean node_up_cb(const gchar *option_name, const gchar *optarg, gpointer data, GError **error) { pcmk__simulate_node_config = true; options.injections->node_up = g_list_append(options.injections->node_up, g_strdup(optarg)); return TRUE; } static gboolean op_fail_cb(const gchar *option_name, const gchar *optarg, gpointer data, GError **error) { options.flags |= pcmk_sim_process | pcmk_sim_simulate; options.injections->op_fail = g_list_append(options.injections->op_fail, g_strdup(optarg)); return TRUE; } static gboolean op_inject_cb(const gchar *option_name, const gchar *optarg, gpointer data, GError **error) { options.injections->op_inject = g_list_append(options.injections->op_inject, g_strdup(optarg)); return TRUE; } static gboolean pending_cb(const gchar *option_name, const gchar *optarg, gpointer data, GError **error) { options.flags |= pcmk_sim_show_pending; return TRUE; } static gboolean process_cb(const gchar *option_name, const gchar *optarg, gpointer data, GError **error) { options.flags |= pcmk_sim_process; return TRUE; } static gboolean quorum_cb(const gchar *option_name, const gchar *optarg, gpointer data, GError **error) { pcmk__str_update(&options.injections->quorum, optarg); return TRUE; } static gboolean save_dotfile_cb(const gchar *option_name, const gchar *optarg, gpointer data, GError **error) { options.flags |= pcmk_sim_process; pcmk__str_update(&options.dot_file, optarg); return TRUE; } static gboolean save_graph_cb(const gchar *option_name, const gchar *optarg, gpointer data, GError **error) { options.flags |= pcmk_sim_process; pcmk__str_update(&options.graph_file, optarg); return TRUE; } static gboolean show_scores_cb(const gchar *option_name, const gchar *optarg, gpointer data, GError **error) { options.flags |= pcmk_sim_process | pcmk_sim_show_scores; return TRUE; } static gboolean simulate_cb(const gchar *option_name, const gchar *optarg, gpointer data, GError **error) { options.flags |= pcmk_sim_process | pcmk_sim_simulate; return TRUE; } static gboolean ticket_activate_cb(const gchar *option_name, const gchar *optarg, gpointer data, GError **error) { options.injections->ticket_activate = g_list_append(options.injections->ticket_activate, g_strdup(optarg)); return TRUE; } static gboolean ticket_grant_cb(const gchar *option_name, const gchar *optarg, gpointer data, GError **error) { options.injections->ticket_grant = g_list_append(options.injections->ticket_grant, g_strdup(optarg)); return TRUE; } static gboolean ticket_revoke_cb(const gchar *option_name, const gchar *optarg, gpointer data, GError **error) { options.injections->ticket_revoke = g_list_append(options.injections->ticket_revoke, g_strdup(optarg)); return TRUE; } static gboolean ticket_standby_cb(const gchar *option_name, const gchar *optarg, gpointer data, GError **error) { options.injections->ticket_standby = g_list_append(options.injections->ticket_standby, g_strdup(optarg)); return TRUE; } static gboolean utilization_cb(const gchar *option_name, const gchar *optarg, gpointer data, GError **error) { options.flags |= pcmk_sim_process | pcmk_sim_show_utilization; return TRUE; } static gboolean watchdog_cb(const gchar *option_name, const gchar *optarg, gpointer data, GError **error) { 
pcmk__str_update(&options.injections->watchdog, optarg); return TRUE; } static gboolean xml_file_cb(const gchar *option_name, const gchar *optarg, gpointer data, GError **error) { pcmk__str_update(&options.xml_file, optarg); options.flags |= pcmk_sim_sanitized; return TRUE; } static gboolean xml_pipe_cb(const gchar *option_name, const gchar *optarg, gpointer data, GError **error) { pcmk__str_update(&options.xml_file, "-"); options.flags |= pcmk_sim_sanitized; return TRUE; } static GOptionEntry operation_entries[] = { { "run", 'R', G_OPTION_FLAG_NO_ARG, G_OPTION_ARG_CALLBACK, process_cb, "Process the supplied input and show what actions the cluster will take in response", NULL }, { "simulate", 'S', G_OPTION_FLAG_NO_ARG, G_OPTION_ARG_CALLBACK, simulate_cb, "Like --run, but also simulate taking those actions and show the resulting new status", NULL }, { "in-place", 'X', G_OPTION_FLAG_NO_ARG, G_OPTION_ARG_CALLBACK, in_place_cb, "Like --simulate, but also store the results back to the input file", NULL }, { "show-attrs", 'A', G_OPTION_FLAG_NO_ARG, G_OPTION_ARG_CALLBACK, attrs_cb, "Show node attributes", NULL }, { "show-failcounts", 'c', G_OPTION_FLAG_NO_ARG, G_OPTION_ARG_CALLBACK, failcounts_cb, "Show resource fail counts", NULL }, { "show-scores", 's', G_OPTION_FLAG_NO_ARG, G_OPTION_ARG_CALLBACK, show_scores_cb, "Show allocation scores", NULL }, { "show-utilization", 'U', G_OPTION_FLAG_NO_ARG, G_OPTION_ARG_CALLBACK, utilization_cb, "Show utilization information", NULL }, { "profile", 'P', 0, G_OPTION_ARG_FILENAME, &options.test_dir, "Process all the XML files in the named directory to create profiling data", "DIR" }, { "repeat", 'N', 0, G_OPTION_ARG_INT, &options.repeat, "With --profile, repeat each test N times and print timings", "N" }, /* Deprecated */ { "pending", 'j', G_OPTION_FLAG_NO_ARG|G_OPTION_FLAG_HIDDEN, G_OPTION_ARG_CALLBACK, pending_cb, "Display pending state if '" PCMK_META_RECORD_PENDING "' is enabled", NULL }, { NULL } }; static GOptionEntry synthetic_entries[] = { { "node-up", 'u', 0, G_OPTION_ARG_CALLBACK, node_up_cb, "Simulate bringing a node online", "NODE" }, { "node-down", 'd', 0, G_OPTION_ARG_CALLBACK, node_down_cb, "Simulate taking a node offline", "NODE" }, { "node-fail", 'f', 0, G_OPTION_ARG_CALLBACK, node_fail_cb, "Simulate a node failing", "NODE" }, { "op-inject", 'i', 0, G_OPTION_ARG_CALLBACK, op_inject_cb, "Generate a failure for the cluster to react to in the simulation.\n" INDENT "See `Operation Specification` help for more information.", "OPSPEC" }, { "op-fail", 'F', 0, G_OPTION_ARG_CALLBACK, op_fail_cb, "If the specified task occurs during the simulation, have it fail with return code ${rc}.\n" INDENT "The transition will normally stop at the failed action.\n" INDENT "Save the result with --save-output and re-run with --xml-file.\n" INDENT "See `Operation Specification` help for more information.", "OPSPEC" }, { "set-datetime", 't', 0, G_OPTION_ARG_STRING, &options.use_date, "Set date/time (ISO 8601 format, see https://en.wikipedia.org/wiki/ISO_8601)", "DATETIME" }, { "quorum", 'q', 0, G_OPTION_ARG_CALLBACK, quorum_cb, "Set to '1' (or 'true') to indicate cluster has quorum", "QUORUM" }, { "watchdog", 'w', 0, G_OPTION_ARG_CALLBACK, watchdog_cb, "Set to '1' (or 'true') to indicate cluster has an active watchdog device", "DEVICE" }, { "ticket-grant", 'g', 0, G_OPTION_ARG_CALLBACK, ticket_grant_cb, "Simulate granting a ticket", "TICKET" }, { "ticket-revoke", 'r', 0, G_OPTION_ARG_CALLBACK, ticket_revoke_cb, "Simulate revoking a ticket", "TICKET" }, { 
"ticket-standby", 'b', 0, G_OPTION_ARG_CALLBACK, ticket_standby_cb, "Simulate making a ticket standby", "TICKET" }, { "ticket-activate", 'e', 0, G_OPTION_ARG_CALLBACK, ticket_activate_cb, "Simulate activating a ticket", "TICKET" }, { NULL } }; static GOptionEntry artifact_entries[] = { { "save-input", 'I', 0, G_OPTION_ARG_FILENAME, &options.input_file, "Save the input configuration to the named file", "FILE" }, { "save-output", 'O', 0, G_OPTION_ARG_FILENAME, &options.output_file, "Save the output configuration to the named file", "FILE" }, { "save-graph", 'G', 0, G_OPTION_ARG_CALLBACK, save_graph_cb, "Save the transition graph (XML format) to the named file", "FILE" }, { "save-dotfile", 'D', 0, G_OPTION_ARG_CALLBACK, save_dotfile_cb, "Save the transition graph (DOT format) to the named file", "FILE" }, { "all-actions", 'a', G_OPTION_FLAG_NO_ARG, G_OPTION_ARG_CALLBACK, all_actions_cb, "Display all possible actions in DOT graph (even if not part of transition)", NULL }, { NULL } }; static GOptionEntry source_entries[] = { { "live-check", 'L', G_OPTION_FLAG_NO_ARG, G_OPTION_ARG_CALLBACK, live_check_cb, "Connect to CIB manager and use the current CIB contents as input", NULL }, { "xml-file", 'x', 0, G_OPTION_ARG_CALLBACK, xml_file_cb, "Retrieve XML from the named file", "FILE" }, { "xml-pipe", 'p', G_OPTION_FLAG_NO_ARG, G_OPTION_ARG_CALLBACK, xml_pipe_cb, "Retrieve XML from stdin", NULL }, { NULL } }; static int setup_input(pcmk__output_t *out, const char *input, const char *output, GError **error) { int rc = pcmk_rc_ok; xmlNode *cib_object = NULL; char *local_output = NULL; if (input == NULL) { /* Use live CIB */ rc = cib__signon_query(out, NULL, &cib_object); if (rc != pcmk_rc_ok) { // cib__signon_query() outputs any relevant error return rc; } } else if (pcmk__str_eq(input, "-", pcmk__str_casei)) { cib_object = pcmk__xml_read(NULL); } else { cib_object = pcmk__xml_read(input); } + if (cib_object == NULL) { + rc = pcmk_rc_bad_input; + g_set_error(error, PCMK__EXITC_ERROR, pcmk_rc2exitc(rc), + "Could not read input XML: %s", pcmk_rc_str(rc)); + return rc; + } + if (pcmk_find_cib_element(cib_object, PCMK_XE_STATUS) == NULL) { pcmk__xe_create(cib_object, PCMK_XE_STATUS); } rc = pcmk_update_configured_schema(&cib_object, false); if (rc != pcmk_rc_ok) { free_xml(cib_object); return rc; } if (!pcmk__validate_xml(cib_object, NULL, NULL, NULL)) { free_xml(cib_object); return pcmk_rc_schema_validation; } if (output == NULL) { char *pid = pcmk__getpid_s(); local_output = get_shadow_file(pid); temp_shadow = strdup(local_output); output = local_output; free(pid); } rc = pcmk__xml_write_file(cib_object, output, false, NULL); if (rc != pcmk_rc_ok) { g_set_error(error, PCMK__EXITC_ERROR, CRM_EX_CANTCREAT, "Could not create '%s': %s", output, pcmk_rc_str(rc)); } else { setenv("CIB_file", output, 1); } free_xml(cib_object); free(local_output); return rc; } static GOptionContext * build_arg_context(pcmk__common_args_t *args, GOptionGroup **group) { GOptionContext *context = NULL; GOptionEntry extra_prog_entries[] = { { "quiet", 'Q', 0, G_OPTION_ARG_NONE, &(args->quiet), "Display only essential output", NULL }, { NULL } }; const char *description = "Operation Specification:\n\n" "The OPSPEC in any command line option is of the form\n" "${resource}_${task}_${interval_in_ms}@${node}=${rc}\n" "(memcached_monitor_20000@bart.example.com=7, for example).\n" "${rc} is an OCF return code. 
For more information on these\n" "return codes, refer to https://clusterlabs.org/pacemaker/doc/2.1/Pacemaker_Administration/html/agents.html#ocf-return-codes\n\n" "Examples:\n\n" "Pretend a recurring monitor action found memcached stopped on node\n" "fred.example.com and, during recovery, that the memcached stop\n" "action failed:\n\n" "\tcrm_simulate -LS --op-inject memcached:0_monitor_20000@fred.example.com=7 " "--op-fail memcached:0_stop_0@fred.example.com=1 --save-output /tmp/memcached-test.xml\n\n" "Now see what the reaction to the stop failure would be:\n\n" "\tcrm_simulate -S --xml-file /tmp/memcached-test.xml\n\n"; context = pcmk__build_arg_context(args, "text (default), xml", group, NULL); pcmk__add_main_args(context, extra_prog_entries); g_option_context_set_description(context, description); pcmk__add_arg_group(context, "operations", "Operations:", "Show operations options", operation_entries); pcmk__add_arg_group(context, "synthetic", "Synthetic Cluster Events:", "Show synthetic cluster event options", synthetic_entries); pcmk__add_arg_group(context, "artifact", "Artifact Options:", "Show artifact options", artifact_entries); pcmk__add_arg_group(context, "source", "Data Source:", "Show data source options", source_entries); return context; } int main(int argc, char **argv) { int rc = pcmk_rc_ok; pcmk_scheduler_t *scheduler = NULL; pcmk__output_t *out = NULL; GError *error = NULL; GOptionGroup *output_group = NULL; pcmk__common_args_t *args = pcmk__new_common_args(SUMMARY); gchar **processed_args = pcmk__cmdline_preproc(argv, "bdefgiqrtuwxDFGINOP"); GOptionContext *context = build_arg_context(args, &output_group); options.injections = calloc(1, sizeof(pcmk_injections_t)); if (options.injections == NULL) { rc = ENOMEM; goto done; } /* This must come before g_option_context_parse_strv.
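* Reading the CIB from stdin is the default ("-" is mapped to stdin in setup_input()); live_check_cb() frees this default and resets options.xml_file to NULL to select the live CIB instead, which only works if the default is already installed when the command line is parsed.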
*/ options.xml_file = strdup("-"); pcmk__register_formats(output_group, formats); if (!g_option_context_parse_strv(context, &processed_args, &error)) { exit_code = CRM_EX_USAGE; goto done; } pcmk__cli_init_logging("crm_simulate", args->verbosity); rc = pcmk__output_new(&out, args->output_ty, args->output_dest, argv); if (rc != pcmk_rc_ok) { fprintf(stderr, "Error creating output format %s: %s\n", args->output_ty, pcmk_rc_str(rc)); exit_code = CRM_EX_ERROR; goto done; } if (pcmk__str_eq(args->output_ty, "text", pcmk__str_null_matches) && !pcmk_is_set(options.flags, pcmk_sim_show_scores) && !pcmk_is_set(options.flags, pcmk_sim_show_utilization)) { pcmk__output_text_set_fancy(out, true); } pe__register_messages(out); pcmk__register_lib_messages(out); out->quiet = args->quiet; if (args->version) { out->version(out, false); goto done; } if (args->verbosity > 0) { options.flags |= pcmk_sim_verbose; #ifdef PCMK__COMPAT_2_0 /* Redirect stderr to stdout so we can grep the output */ close(STDERR_FILENO); dup2(STDOUT_FILENO, STDERR_FILENO); #endif } scheduler = pe_new_working_set(); if (scheduler == NULL) { rc = ENOMEM; g_set_error(&error, PCMK__RC_ERROR, rc, "Could not allocate scheduler data"); goto done; } if (pcmk_is_set(options.flags, pcmk_sim_show_scores)) { pcmk__set_scheduler_flags(scheduler, pcmk_sched_output_scores); } if (pcmk_is_set(options.flags, pcmk_sim_show_utilization)) { pcmk__set_scheduler_flags(scheduler, pcmk_sched_show_utilization); } pcmk__set_scheduler_flags(scheduler, pcmk_sched_no_compat); if (options.test_dir != NULL) { scheduler->priv = out; pcmk__profile_dir(options.test_dir, options.repeat, scheduler, options.use_date); rc = pcmk_rc_ok; goto done; } rc = setup_input(out, options.xml_file, options.store? options.xml_file : options.output_file, &error); if (rc != pcmk_rc_ok) { goto done; } rc = pcmk__simulate(scheduler, out, options.injections, options.flags, section_opts, options.use_date, options.input_file, options.graph_file, options.dot_file); done: pcmk__output_and_clear_error(&error, NULL); /* There sure is a lot to free in options. */ free(options.dot_file); free(options.graph_file); g_free(options.input_file); g_free(options.output_file); g_free(options.test_dir); free(options.use_date); free(options.xml_file); pcmk_free_injections(options.injections); pcmk__free_arg_context(context); g_strfreev(processed_args); if (scheduler != NULL) { pe_free_working_set(scheduler); } fflush(stderr); if (temp_shadow) { unlink(temp_shadow); free(temp_shadow); } if (rc != pcmk_rc_ok) { exit_code = pcmk_rc2exitc(rc); } if (out != NULL) { out->finish(out, exit_code, true, NULL); pcmk__output_free(out); } pcmk__unregister_formats(); crm_exit(exit_code); } diff --git a/xml/README.md b/xml/README.md index a3a1973dfa..1fbd42c4e5 100644 --- a/xml/README.md +++ b/xml/README.md @@ -1,135 +1,137 @@ # Schema Reference Pacemaker's XML schema has a version of its own, independent of the version of Pacemaker itself. ## Versioned Schema Evolution A versioned schema offers transparent backward and forward compatibility. - It reflects the timeline of schema-backed features (introduction, changes to the syntax, possibly deprecation) through the versioned stable schema increments, while keeping schema versions used by default by older Pacemaker versions untouched. - Pacemaker internally uses the latest stable schema version, and relies on supplemental transformations to promote cluster configurations based on older, incompatible schema versions into the desired form. 
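As a concrete illustration (a minimal sketch, not part of this patch): the schema a given configuration is validated against is recorded in the `validate-with` attribute of the CIB's root element, and the transformations mentioned above are what an administrator invokes via `cibadmin --upgrade` (see "Using a New Schema" below):

    # Show the schema the live CIB currently validates against; the first
    # line of output is the root element, e.g.:
    #   <cib validate-with="pacemaker-3.9" ...>
    cibadmin --query | head -n 1

    # Rewrite the configuration to the latest schema version supported
    # by the installed Pacemaker
    cibadmin --upgrade --force
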
## Mapping Pacemaker Versions to Schema Versions

| Pacemaker | Latest Schema | Changed
| --------- | ------------- | ----------------------------------------------
+| `2.1.8` | `3.10` | `alerts`, `constraints`, `nodes`, `nvset`,
+| | | `options`, `resources`, `rule`
| `2.1.5` | `3.9` | `alerts`, `constraints`, `nodes`, `nvset`,
| | | `options`, `resources`, `rule`
| `2.1.3` | `3.8` | `acls`
| `2.1.0` | `3.7` | `constraints`, `resources`
| `2.0.5` | `3.5` | `api`, `resources`, `rule`
| `2.0.4` | `3.3` | `tags`
| `2.0.1` | `3.2` | `resources`
| `2.0.0` | `3.1` | `constraints`, `resources`
| `1.1.18` | `2.10` | `resources`, `alerts`
| `1.1.17` | `2.9` | `resources`, `rule`
| `1.1.16` | `2.6` | `constraints`
| `1.1.15` | `2.5` | `alerts`
| `1.1.14` | `2.4` | `fencing`
| `1.1.13` | `2.3` | `constraints`
| `1.1.12` | `2.0` | `nodes`, `nvset`, `resources`, `tags`, `acls`
| `1.1.8` | `1.2` |

## Schema generation

Each logical portion of the schema goes into its own RNG file, named like
`${base}-${X}.${Y}.rng`. `${base}` identifies the portion of the schema
(e.g. constraints, resources); `${X}.${Y}` is the latest schema version that
contained changes in that portion of the schema.

The complete, overall schema, `pacemaker-${X}.${Y}.rng`, is automatically
generated from the other files via the Makefile.

# Updating schema files #

## New features ##

The current schema version is determined at runtime when crm\_schema\_init()
scans the CRM\_SCHEMA\_DIRECTORY. It will have the form `pacemaker-${X}.${Y}`,
and the highest `${X}.${Y}` wins.

### Simple Additions

When the new syntax is a simple addition to the previous one, create a new
entry, incrementing `${Y}`.

### Feature Removal or otherwise Incompatible Changes

When the new syntax is not a simple addition to the previous one, create a new
entry, incrementing `${X}` and setting `${Y} = 0`. An XSLT file that converts
the old syntax to the new one is also required; it must be named
`upgrade-${Xold}.${Yold}.xsl`. See `xml/upgrade-1.3.xsl` for an example.

Since `xml/upgrade-2.10.xsl`, a rather self-descriptive approach has been
taken: the metadata describing the replacements and other modifications to
perform is kept separate from the actual executive parts, which enables,
e.g., the on-the-fly overview obtained with `./regression.sh -X test2to3`.
This was also the first time that particular key names of `nvpair`s (i.e.
below the granularity of the schemas so far) received attention; as a
consequence, names that are no longer expected are now systematically banned
in the post-upgrade schemas, using an `<except>` construct in the data type
specification pertaining to the affected XML path.

The implied complexity also resulted in establishing a new compound, stepwise
transformation, alleviating the procedural burden from the core upgrade
recipe. In particular, the `id-ref` based syntactic simplification granted in
the CIB format introduces non-negligible internal "noise", because of the
extra indirection encumbered with the generally non-bijective character of
such a scheme (context-dependent interpretation). To reduce this strain, a
symmetric arrangement is introduced as a pair of _enter_/_leave_
(pre-upgrade/post-upgrade) transformations, where the latter eventually and
reversibly restores what the former intentionally simplified (normalized) for
the upgrade transformation's use.
This symmetric arrangement is optional (and the post-upgrade counterpart may
even be used on its own); whether it applies depends on whether suitably named
files are found alongside the upgrade transformation itself: e.g., for
`upgrade-2.10.xsl`, such files are `upgrade-2.10-enter.xsl` and
`upgrade-2.10-leave.xsl`. Note that unfolding and refolding `id-ref` shortcuts
is just one practically motivated case of reversibly making the configuration
space tractable for the upgrade itself; the mechanism allows for more
sophistication down the road.

### General Procedure

1. Copy the most recent version of `${base}-*.rng` to `${base}-${X}.${Y}.rng`,
   such that the new file name increments the highest number of any schema
   file, not just the file being edited.
2. Commit the copy, e.g. `"Low: xml: clone ${base} schema in preparation for
   changes"`. This way, the actual change will be obvious in the commit
   history.
3. Modify `${base}-${X}.${Y}.rng` as required.
4. If required, add an XSLT file, and update `xslt_SCRIPTS` in
   `xml/Makefile.am`.
5. Commit.
6. Run `make -C xml clean; make -C xml` to rebuild the schemas in the local
   source directory.
7. The CIB validity and upgrade regression tests will break after the schema
   is updated. Run `cts/cts-cli -s` to make the expected outputs reflect the
   changes made so far, and run `git diff` to ensure that these changes look
   sane. Finally, commit the changes.
8. Similarly, with a new major version `${X}`, it's advisable to refresh the
   scheduler tests at some point. See the instructions in `cts/README.md`.

A shell sketch of steps 1-7 appears at the end of this document.

## Using a New Schema

New features will not be available until the cluster administrator:

1. Updates all the nodes
2. Runs the equivalent of `cibadmin --upgrade --force`

## Random Notes

From the source directory, run `make -C xml diff` to see the changes in the
current schema (compared to the previous one). Alternatively, if the intention
is to grok the overall historical schema evolution, use `make -C xml fulldiff`.
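To make the General Procedure above concrete, here is a minimal shell sketch
of steps 1-7 for a hypothetical change to the `resources` portion of the
schema; the version numbers (taking `3.10` as the current highest) and the
commit messages are illustrative assumptions, not a real release plan:

    # Step 1: clone the most recent resources schema under the next version
    cp xml/resources-3.10.rng xml/resources-3.11.rng

    # Step 2: commit the unmodified copy so the later change stands out
    git add xml/resources-3.11.rng
    git commit -m "Low: xml: clone resources schema in preparation for changes"

    # Steps 3-5: edit the new file (plus any upgrade-*.xsl and xslt_SCRIPTS
    # in xml/Makefile.am), then commit the real change (message placeholder)
    $EDITOR xml/resources-3.11.rng
    git commit -am "Feature: xml: ..."

    # Step 6: rebuild the generated schemas in the local source tree
    make -C xml clean; make -C xml

    # Step 7: refresh the expected regression outputs and review them
    cts/cts-cli -s
    git diff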