diff --git a/cts/cli/regression.daemons.exp b/cts/cli/regression.daemons.exp index b088e3cb4d..51a0edc88d 100644 --- a/cts/cli/regression.daemons.exp +++ b/cts/cli/regression.daemons.exp @@ -1,441 +1,441 @@ =#=#=#= Begin test: Get CIB manager metadata =#=#=#= 1.1 Cluster options used by Pacemaker's Cluster Information Base manager Cluster Information Base manager options Enable Access Control Lists (ACLs) for the CIB Enable Access Control Lists (ACLs) for the CIB Raise this if log has "Evicting client" messages for cluster daemon PIDs (a good value is the number of resources in the cluster multiplied by the number of nodes). Maximum IPC message backlog before disconnecting a cluster daemon =#=#=#= End test: Get CIB manager metadata - OK (0) =#=#=#= * Passed: pacemaker-based - Get CIB manager metadata =#=#=#= Begin test: Get controller metadata =#=#=#= 1.1 Cluster options used by Pacemaker's controller Pacemaker controller options Includes a hash which identifies the exact changeset the code was built from. Used for diagnostic purposes. Pacemaker version on cluster node elected Designated Controller (DC) Used for informational and diagnostic purposes. The messaging stack on which Pacemaker is currently running This optional value is mostly for users' convenience as desired in administration, but may also be used in Pacemaker configuration rules via the #cluster-name node attribute, and by higher-level tools and resource agents. An arbitrary name for the cluster The optimal value will depend on the speed and load of your network and the type of switches used. How long to wait for a response from other nodes during start-up Pacemaker is primarily event-driven, and looks ahead to know when to recheck cluster state for failure timeouts and most time-based rules. However, it will also recheck the cluster after this amount of inactivity, to evaluate rules with date specifications and serve as a fail-safe for certain types of scheduler bugs. 
Allowed values: Zero disables polling, while positive values are an interval in seconds (unless other units are specified, for example "5min") Polling interval to recheck cluster state and evaluate rules with date specifications The cluster will slow down its recovery process when the amount of system resources used (currently CPU) approaches this limit Maximum amount of system load that should be used by cluster nodes Maximum number of jobs that can be scheduled per node (defaults to 2x cores) Maximum number of jobs that can be scheduled per node (defaults to 2x cores) A cluster node may receive notification of its own fencing if fencing is misconfigured, or if fabric fencing is in use that doesn't cut cluster communication. Allowed values are "stop" to attempt to immediately stop Pacemaker and stay stopped, or "panic" to attempt to immediately reboot the local node, falling back to stop on failure. How a cluster node should react if notified of its own fencing Declare an election failed if it is not decided within this much time. If you need to adjust this value, it probably indicates the presence of a bug. *** Advanced Use Only *** Exit immediately if shutdown does not complete within this much time. If you need to adjust this value, it probably indicates the presence of a bug. *** Advanced Use Only *** If you need to adjust this value, it probably indicates the presence of a bug. *** Advanced Use Only *** If you need to adjust this value, it probably indicates the presence of a bug. *** Advanced Use Only *** Delay cluster recovery for this much time to allow for additional events to occur. Useful if your configuration is sensitive to the order in which ping updates arrive. *** Advanced Use Only *** Enabling this option will slow down cluster recovery under all conditions If this is set to a positive value, lost nodes are assumed to self-fence using watchdog-based SBD within this much time.
This does not require a fencing resource to be explicitly configured, though a fence_watchdog resource can be configured, to limit use to specific nodes. If this is set to 0 (the default), the cluster will never assume watchdog-based self-fencing. If this is set to a negative value, the cluster will use twice the local value of the `SBD_WATCHDOG_TIMEOUT` environment variable if that is positive, or otherwise treat this as 0. WARNING: When used, this timeout must be larger than `SBD_WATCHDOG_TIMEOUT` on all nodes that use watchdog-based SBD, and Pacemaker will refuse to start on any of those nodes where this is not true for the local value or SBD is not active. When this is set to a negative value, `SBD_WATCHDOG_TIMEOUT` must be set to the same value on all nodes that use SBD, otherwise data corruption or loss could occur. How long before nodes can be assumed to be safely down when watchdog-based self-fencing via SBD is in use How many times fencing can fail before it will no longer be immediately re-attempted on a target How many times fencing can fail before it will no longer be immediately re-attempted on a target What to do when the cluster does not have quorum Allowed values: stop, freeze, ignore, demote, suicide What to do when the cluster does not have quorum When true, resources active on a node when it is cleanly shut down are kept "locked" to that node (not allowed to run elsewhere) until they start again on that node after it rejoins (or for at most shutdown-lock-limit, if set). Stonith resources and Pacemaker Remote connections are never locked. Clone and bundle instances and the promoted role of promotable clones are currently never locked, though support could be added in a future release. 
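The sign conventions for `stonith-watchdog-timeout` described above (positive: use as-is; zero: never assume self-fencing; negative: twice `SBD_WATCHDOG_TIMEOUT` when that is positive) can be summarized in a short sketch. This is a hypothetical helper for illustration only, not Pacemaker's actual implementation:

```c
#include <assert.h>

/* Hypothetical helper (not Pacemaker's code) summarizing the
 * stonith-watchdog-timeout rules described above:
 *  - positive: use the configured value as-is
 *  - zero:     watchdog-based self-fencing is never assumed
 *  - negative: twice SBD_WATCHDOG_TIMEOUT if that is positive, else 0
 */
long
effective_watchdog_timeout(long configured, long sbd_watchdog_timeout)
{
    if (configured > 0) {
        return configured;
    }
    if ((configured < 0) && (sbd_watchdog_timeout > 0)) {
        return 2 * sbd_watchdog_timeout;
    }
    return 0;
}
```

Under these rules, the warning above follows directly: the effective value must exceed `SBD_WATCHDOG_TIMEOUT` on every node using watchdog-based SBD, which the negative (auto-doubling) setting guarantees only if all such nodes agree on `SBD_WATCHDOG_TIMEOUT`.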
Whether to lock resources to a cleanly shut down node =#=#=#= End test: Get controller metadata - OK (0) =#=#=#= * Passed: pacemaker-controld - Get controller metadata =#=#=#= Begin test: Get fencer metadata =#=#=#= 1.1 Instance attributes available for all "stonith"-class resources and used by Pacemaker's fence daemon, formerly known as stonithd Instance attributes available for all "stonith"-class resources Some devices do not support the standard 'port' parameter or may provide additional ones. Use this to specify an alternate, device-specific, parameter that should indicate the machine to be fenced. A value of none can be used to tell the cluster not to supply any additional parameters. Advanced use only: An alternate parameter to supply instead of 'port' E.g. node1:1;node2:2,3 would tell the cluster to use port 1 for node1 and ports 2 and 3 for node2 A mapping of host names to port numbers for devices that do not support host names. A list of machines controlled by this device (Optional unless pcmk_host_list=static-list) E.g. node1,node2,node3 Allowed values: dynamic-list (query the device via the 'list' command), static-list (check the pcmk_host_list attribute), status (query the device via the 'status' command), none (assume every device can fence every machine) How to determine which machines are controlled by the device. Enable a delay of no more than the time specified before executing fencing actions. Pacemaker derives the overall delay by taking the value of pcmk_delay_base and adding a random delay value such that the sum is kept below this maximum. Enable a base delay for fencing actions and specify base delay value. This enables a static delay for fencing actions, which can help avoid "death matches" where two nodes try to fence each other at the same time.
If pcmk_delay_max is also used, a random delay will be added such that the total delay is kept below that value. This can be set to a single time value to apply to any node targeted by this device (useful if a separate device is configured for each target), or to a node map (for example, "node1:1s;node2:5") to set a different value per target. Enable a base delay for fencing actions and specify base delay value. Cluster property concurrent-fencing=true needs to be configured first. Then use this to specify the maximum number of actions that can be performed in parallel on this device. -1 is unlimited. The maximum number of actions that can be performed in parallel on this device Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'reboot' action. Advanced use only: An alternate command to run instead of 'reboot' Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'reboot' actions. Advanced use only: Specify an alternate timeout to use for reboot actions instead of stonith-timeout Some devices do not support multiple connections. Operations may 'fail' if the device is busy with another task so Pacemaker will automatically retry the operation, if there is time remaining. Use this option to alter the number of times Pacemaker retries 'reboot' actions before giving up. Advanced use only: The maximum number of times to retry the 'reboot' command within the timeout period Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'off' action. Advanced use only: An alternate command to run instead of 'off' Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'off' actions.
Advanced use only: Specify an alternate timeout to use for off actions instead of stonith-timeout Some devices do not support multiple connections. Operations may 'fail' if the device is busy with another task so Pacemaker will automatically retry the operation, if there is time remaining. Use this option to alter the number of times Pacemaker retries 'off' actions before giving up. Advanced use only: The maximum number of times to retry the 'off' command within the timeout period Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'on' action. Advanced use only: An alternate command to run instead of 'on' Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'on' actions. Advanced use only: Specify an alternate timeout to use for on actions instead of stonith-timeout Some devices do not support multiple connections. Operations may 'fail' if the device is busy with another task so Pacemaker will automatically retry the operation, if there is time remaining. Use this option to alter the number of times Pacemaker retries 'on' actions before giving up. Advanced use only: The maximum number of times to retry the 'on' command within the timeout period Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'list' action. Advanced use only: An alternate command to run instead of 'list' Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'list' actions. Advanced use only: Specify an alternate timeout to use for list actions instead of stonith-timeout Some devices do not support multiple connections.
Operations may 'fail' if the device is busy with another task so Pacemaker will automatically retry the operation, if there is time remaining. Use this option to alter the number of times Pacemaker retries 'list' actions before giving up. Advanced use only: The maximum number of times to retry the 'list' command within the timeout period Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'monitor' action. Advanced use only: An alternate command to run instead of 'monitor' Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'monitor' actions. Advanced use only: Specify an alternate timeout to use for monitor actions instead of stonith-timeout Some devices do not support multiple connections. Operations may 'fail' if the device is busy with another task so Pacemaker will automatically retry the operation, if there is time remaining. Use this option to alter the number of times Pacemaker retries 'monitor' actions before giving up. Advanced use only: The maximum number of times to retry the 'monitor' command within the timeout period Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the 'status' action. Advanced use only: An alternate command to run instead of 'status' Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for 'status' actions. Advanced use only: Specify an alternate timeout to use for status actions instead of stonith-timeout Some devices do not support multiple connections. Operations may 'fail' if the device is busy with another task so Pacemaker will automatically retry the operation, if there is time remaining.
Use this option to alter the number of times Pacemaker retries 'status' actions before giving up. Advanced use only: The maximum number of times to retry the 'status' command within the timeout period =#=#=#= End test: Get fencer metadata - OK (0) =#=#=#= * Passed: pacemaker-fenced - Get fencer metadata =#=#=#= Begin test: Get scheduler metadata =#=#=#= 1.1 Cluster options used by Pacemaker's scheduler Pacemaker scheduler options What to do when the cluster does not have quorum Allowed values: stop, freeze, ignore, demote, suicide What to do when the cluster does not have quorum Whether resources can run on any node by default Whether resources can run on any node by default Whether the cluster should refrain from monitoring, starting, and stopping resources Whether the cluster should refrain from monitoring, starting, and stopping resources When true, the cluster will immediately ban a resource from a node if it fails to start there. When false, the cluster will instead check the resource's fail count against its migration-threshold. Whether a start failure should prevent a resource from being recovered on the same node Whether the cluster should check for active resources during start-up Whether the cluster should check for active resources during start-up When true, resources active on a node when it is cleanly shut down are kept "locked" to that node (not allowed to run elsewhere) until they start again on that node after it rejoins (or for at most shutdown-lock-limit, if set). Stonith resources and Pacemaker Remote connections are never locked. Clone and bundle instances and the promoted role of promotable clones are currently never locked, though support could be added in a future release. Whether to lock resources to a cleanly shut down node If shutdown-lock is true and this is set to a nonzero time duration, shutdown locks will expire after this much time has passed since the shutdown was initiated, even if the node has not rejoined. 
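The shutdown-lock-limit behavior described above reduces to a simple comparison: a lock expires once the limit has elapsed since shutdown was initiated, and a limit of 0 means it never expires on its own. The helper below is a sketch of that stated rule, not Pacemaker's implementation:

```c
#include <assert.h>
#include <time.h>

/* Sketch of the shutdown-lock-limit expiry rule described above
 * (hypothetical helper, not Pacemaker's code).  A limit of 0 means
 * shutdown locks never expire on their own. */
int
shutdown_lock_expired(time_t shutdown_time, time_t now, long limit)
{
    return (limit > 0) && ((now - shutdown_time) >= limit);
}
```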
Do not lock resources to a cleanly shut down node longer than this If false, unresponsive nodes are immediately assumed to be harmless, and resources that were active on them may be recovered elsewhere. This can result in a "split-brain" situation, potentially leading to data loss and/or service unavailability. *** Advanced Use Only *** Whether nodes may be fenced as part of recovery Action to send to fence device when a node needs to be fenced ("poweroff" is a deprecated alias for "off") Allowed values: reboot, off, poweroff Action to send to fence device when a node needs to be fenced ("poweroff" is a deprecated alias for "off") This value is not used by Pacemaker, but is kept for backward compatibility, and certain legacy fence agents might use it. *** Advanced Use Only *** Unused by Pacemaker This is set automatically by the cluster according to whether SBD is detected to be in use. User-configured values are ignored. The value `true` is meaningful if diskless SBD is used and `stonith-watchdog-timeout` is nonzero. In that case, if fencing is required, watchdog-based self-fencing will be performed via SBD without requiring a fencing resource explicitly configured. Whether watchdog integration is enabled Allow performing fencing operations in parallel Allow performing fencing operations in parallel Setting this to false may lead to a "split-brain" situation, potentially leading to data loss and/or service unavailability. *** Advanced Use Only *** Whether to fence unseen nodes at start-up Apply specified delay for the fencings that are targeting the lost nodes with the highest total resource priority in case we don't have the majority of the nodes in our cluster partition, so that the more significant nodes potentially win any fencing match, which is especially meaningful under split-brain of 2-node cluster. A promoted resource instance takes the base priority + 1 on calculation if the base priority is not 0.
Any static/random delays that are introduced by `pcmk_delay_base/max` configured for the corresponding fencing resources will be added to this delay. This delay should be significantly greater than, safely twice, the maximum `pcmk_delay_base/max`. By default, priority fencing delay is disabled. Apply fencing delay targeting the lost nodes with the highest total resource priority The node elected Designated Controller (DC) will consider an action failed if it does not get a response from the node executing the action within this time (after considering the action's own timeout). The "correct" value will depend on the speed and load of your network and cluster nodes. Maximum time for node-to-node communication The "correct" value will depend on the speed and load of your network and cluster nodes. If set to 0, the cluster will impose a dynamically calculated limit when any node has a high load. Maximum number of jobs that the cluster may execute in parallel across all nodes The number of live migration actions that the cluster is allowed to execute in parallel on a node (-1 means no limit) The number of live migration actions that the cluster is allowed to execute in parallel on a node (-1 means no limit) Whether the cluster should stop all active resources Whether the cluster should stop all active resources Whether to stop resources that were removed from the configuration Whether to stop resources that were removed from the configuration Whether to cancel recurring actions removed from the configuration Whether to cancel recurring actions removed from the configuration Values other than default are poorly tested and potentially dangerous. This option will be removed in a future release. *** Deprecated *** Whether to remove stopped resources from the executor Zero to disable, -1 to store unlimited. The number of scheduler inputs resulting in errors to save Zero to disable, -1 to store unlimited. 
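The promoted-instance adjustment in the priority-fencing-delay description above ("base priority + 1 on calculation if the base priority is not 0") can be sketched as follows. This is an illustration of the stated rule, not the scheduler's actual code:

```c
#include <assert.h>

/* Sketch of the per-instance priority accounting described for
 * priority-fencing-delay: a promoted instance counts as base
 * priority + 1, but only when the base priority is nonzero
 * (hypothetical helper, not Pacemaker's code). */
int
instance_fencing_priority(int base_priority, int is_promoted)
{
    if (is_promoted && (base_priority != 0)) {
        return base_priority + 1;
    }
    return base_priority;
}
```

Summing this quantity over a partition's resources gives the total used to decide which side of a split waits out the delay.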
The number of scheduler inputs resulting in warnings to save Zero to disable, -1 to store unlimited. The number of scheduler inputs without errors or warnings to save Requires external entities to create node attributes (named with the prefix "#health") with values "red", "yellow", or "green". Allowed values: none, migrate-on-red, only-green, progressive, custom How cluster should react to node health attributes - Only used when node-health-strategy is set to progressive. + Only used when "node-health-strategy" is set to "progressive". Base health score assigned to a node - Only used when node-health-strategy is set to custom or progressive. + Only used when "node-health-strategy" is set to "custom" or "progressive". The score to use for a node health attribute whose value is "green" - Only used when node-health-strategy is set to custom or progressive. + Only used when "node-health-strategy" is set to "custom" or "progressive". The score to use for a node health attribute whose value is "yellow" - Only used when node-health-strategy is set to custom or progressive. + Only used when "node-health-strategy" is set to "custom" or "progressive". The score to use for a node health attribute whose value is "red" How the cluster should allocate resources to nodes Allowed values: default, utilization, minimal, balanced How the cluster should allocate resources to nodes =#=#=#= End test: Get scheduler metadata - OK (0) =#=#=#= * Passed: pacemaker-schedulerd - Get scheduler metadata diff --git a/lib/pengine/common.c b/lib/pengine/common.c index a8c9fefc5c..ec9b843950 100644 --- a/lib/pengine/common.c +++ b/lib/pengine/common.c @@ -1,568 +1,561 @@ /* * Copyright 2004-2022 the Pacemaker project contributors * * The version control history for this file may have further details. * * This source code is licensed under the GNU Lesser General Public License * version 2.1 or later (LGPLv2.1+) WITHOUT ANY WARRANTY. 
*/ #include #include #include #include #include #include #include gboolean was_processing_error = FALSE; gboolean was_processing_warning = FALSE; static bool check_placement_strategy(const char *value) { return pcmk__strcase_any_of(value, "default", "utilization", "minimal", "balanced", NULL); } static pcmk__cluster_option_t pe_opts[] = { /* name, old name, type, allowed values, * default value, validator, * short description, * long description */ { "no-quorum-policy", NULL, "select", "stop, freeze, ignore, demote, suicide", "stop", pcmk__valid_quorum, N_("What to do when the cluster does not have quorum"), NULL }, { "symmetric-cluster", NULL, "boolean", NULL, "true", pcmk__valid_boolean, N_("Whether resources can run on any node by default"), NULL }, { "maintenance-mode", NULL, "boolean", NULL, "false", pcmk__valid_boolean, N_("Whether the cluster should refrain from monitoring, starting, " "and stopping resources"), NULL }, { "start-failure-is-fatal", NULL, "boolean", NULL, "true", pcmk__valid_boolean, N_("Whether a start failure should prevent a resource from being " "recovered on the same node"), N_("When true, the cluster will immediately ban a resource from a node " "if it fails to start there. When false, the cluster will instead " "check the resource's fail count against its migration-threshold.") }, { "enable-startup-probes", NULL, "boolean", NULL, "true", pcmk__valid_boolean, N_("Whether the cluster should check for active resources during start-up"), NULL }, { XML_CONFIG_ATTR_SHUTDOWN_LOCK, NULL, "boolean", NULL, "false", pcmk__valid_boolean, N_("Whether to lock resources to a cleanly shut down node"), N_("When true, resources active on a node when it is cleanly shut down " "are kept \"locked\" to that node (not allowed to run elsewhere) " "until they start again on that node after it rejoins (or for at " "most shutdown-lock-limit, if set). Stonith resources and " "Pacemaker Remote connections are never locked. 
Clone and bundle " "instances and the promoted role of promotable clones are currently" " never locked, though support could be added in a future release.") }, { XML_CONFIG_ATTR_SHUTDOWN_LOCK_LIMIT, NULL, "time", NULL, "0", pcmk__valid_interval_spec, N_("Do not lock resources to a cleanly shut down node longer than this"), N_("If shutdown-lock is true and this is set to a nonzero time duration, " "shutdown locks will expire after this much time has passed since " "the shutdown was initiated, even if the node has not rejoined.") }, // Fencing-related options { "stonith-enabled", NULL, "boolean", NULL, "true", pcmk__valid_boolean, N_("*** Advanced Use Only *** " "Whether nodes may be fenced as part of recovery"), N_("If false, unresponsive nodes are immediately assumed to be harmless, " "and resources that were active on them may be recovered " "elsewhere. This can result in a \"split-brain\" situation, " "potentially leading to data loss and/or service unavailability.") }, { "stonith-action", NULL, "select", "reboot, off, poweroff", "reboot", pcmk__is_fencing_action, N_("Action to send to fence device when a node needs to be fenced " "(\"poweroff\" is a deprecated alias for \"off\")"), NULL }, { "stonith-timeout", NULL, "time", NULL, "60s", pcmk__valid_interval_spec, N_("*** Advanced Use Only *** Unused by Pacemaker"), N_("This value is not used by Pacemaker, but is kept for backward " "compatibility, and certain legacy fence agents might use it.") }, { XML_ATTR_HAVE_WATCHDOG, NULL, "boolean", NULL, "false", pcmk__valid_boolean, N_("Whether watchdog integration is enabled"), N_("This is set automatically by the cluster according to whether SBD " "is detected to be in use. User-configured values are ignored. " "The value `true` is meaningful if diskless SBD is used and " "`stonith-watchdog-timeout` is nonzero. 
In that case, if fencing " "is required, watchdog-based self-fencing will be performed via " "SBD without requiring a fencing resource explicitly configured.") }, { "concurrent-fencing", NULL, "boolean", NULL, PCMK__CONCURRENT_FENCING_DEFAULT, pcmk__valid_boolean, N_("Allow performing fencing operations in parallel"), NULL }, { "startup-fencing", NULL, "boolean", NULL, "true", pcmk__valid_boolean, N_("*** Advanced Use Only *** Whether to fence unseen nodes at start-up"), N_("Setting this to false may lead to a \"split-brain\" situation," "potentially leading to data loss and/or service unavailability.") }, { XML_CONFIG_ATTR_PRIORITY_FENCING_DELAY, NULL, "time", NULL, "0", pcmk__valid_interval_spec, N_("Apply fencing delay targeting the lost nodes with the highest total resource priority"), N_("Apply specified delay for the fencings that are targeting the lost " "nodes with the highest total resource priority in case we don't " "have the majority of the nodes in our cluster partition, so that " "the more significant nodes potentially win any fencing match, " "which is especially meaningful under split-brain of 2-node " "cluster. A promoted resource instance takes the base priority + 1 " "on calculation if the base priority is not 0. Any static/random " "delays that are introduced by `pcmk_delay_base/max` configured " "for the corresponding fencing resources will be added to this " "delay. This delay should be significantly greater than, safely " "twice, the maximum `pcmk_delay_base/max`. By default, priority " "fencing delay is disabled.") }, { "cluster-delay", NULL, "time", NULL, "60s", pcmk__valid_interval_spec, N_("Maximum time for node-to-node communication"), N_("The node elected Designated Controller (DC) will consider an action " "failed if it does not get a response from the node executing the " "action within this time (after considering the action's own " "timeout). 
The \"correct\" value will depend on the speed and " "load of your network and cluster nodes.") }, { "batch-limit", NULL, "integer", NULL, "0", pcmk__valid_number, "Maximum number of jobs that the cluster may execute in parallel " "across all nodes", "The \"correct\" value will depend on the speed and load of your " "network and cluster nodes. If set to 0, the cluster will " "impose a dynamically calculated limit when any node has a " "high load." }, { "migration-limit", NULL, "integer", NULL, "-1", pcmk__valid_number, "The number of live migration actions that the cluster is allowed " "to execute in parallel on a node (-1 means no limit)" }, /* Orphans and stopping */ { "stop-all-resources", NULL, "boolean", NULL, "false", pcmk__valid_boolean, N_("Whether the cluster should stop all active resources"), NULL }, { "stop-orphan-resources", NULL, "boolean", NULL, "true", pcmk__valid_boolean, N_("Whether to stop resources that were removed from the configuration"), NULL }, { "stop-orphan-actions", NULL, "boolean", NULL, "true", pcmk__valid_boolean, N_("Whether to cancel recurring actions removed from the configuration"), NULL }, { "remove-after-stop", NULL, "boolean", NULL, "false", pcmk__valid_boolean, N_("*** Deprecated *** Whether to remove stopped resources from " "the executor"), "Values other than default are poorly tested and potentially dangerous." " This option will be removed in a future release." }, /* Storing inputs */ { "pe-error-series-max", NULL, "integer", NULL, "-1", pcmk__valid_number, "The number of scheduler inputs resulting in errors to save", "Zero to disable, -1 to store unlimited." }, { "pe-warn-series-max", NULL, "integer", NULL, "5000", pcmk__valid_number, "The number of scheduler inputs resulting in warnings to save", "Zero to disable, -1 to store unlimited." 
}, { "pe-input-series-max", NULL, "integer", NULL, "4000", pcmk__valid_number, "The number of scheduler inputs without errors or warnings to save", "Zero to disable, -1 to store unlimited." }, /* Node health */ { PCMK__OPT_NODE_HEALTH_STRATEGY, NULL, "select", PCMK__VALUE_NONE ", " PCMK__VALUE_MIGRATE_ON_RED ", " PCMK__VALUE_ONLY_GREEN ", " PCMK__VALUE_PROGRESSIVE ", " PCMK__VALUE_CUSTOM, PCMK__VALUE_NONE, pcmk__validate_health_strategy, - "How cluster should react to node health attributes", - "Requires external entities to create node attributes (named with " - "the prefix \"#health\") with values \"" PCMK__VALUE_RED "\", " - "\"" PCMK__VALUE_YELLOW "\", or \"" PCMK__VALUE_GREEN "\"." + N_("How cluster should react to node health attributes"), + N_("Requires external entities to create node attributes (named with " + "the prefix \"#health\") with values \"red\", " + "\"yellow\", or \"green\".") }, { PCMK__OPT_NODE_HEALTH_BASE, NULL, "integer", NULL, "0", pcmk__valid_number, - "Base health score assigned to a node", - "Only used when " PCMK__OPT_NODE_HEALTH_STRATEGY " is set to " - PCMK__VALUE_PROGRESSIVE "." + N_("Base health score assigned to a node"), + N_("Only used when \"node-health-strategy\" is set to \"progressive\".") }, { PCMK__OPT_NODE_HEALTH_GREEN, NULL, "integer", NULL, "0", pcmk__valid_number, - "The score to use for a node health attribute whose value is \"" - PCMK__VALUE_GREEN "\"", - "Only used when " PCMK__OPT_NODE_HEALTH_STRATEGY " is set to " - PCMK__VALUE_CUSTOM " or " PCMK__VALUE_PROGRESSIVE "." 
+ N_("The score to use for a node health attribute whose value is \"green\""), + N_("Only used when \"node-health-strategy\" is set to \"custom\" or \"progressive\".") }, { PCMK__OPT_NODE_HEALTH_YELLOW, NULL, "integer", NULL, "0", pcmk__valid_number, - "The score to use for a node health attribute whose value is \"" - PCMK__VALUE_YELLOW "\"", - "Only used when " PCMK__OPT_NODE_HEALTH_STRATEGY " is set to " - PCMK__VALUE_CUSTOM " or " PCMK__VALUE_PROGRESSIVE "." + N_("The score to use for a node health attribute whose value is \"yellow\""), + N_("Only used when \"node-health-strategy\" is set to \"custom\" or \"progressive\".") }, { PCMK__OPT_NODE_HEALTH_RED, NULL, "integer", NULL, "-INFINITY", pcmk__valid_number, - "The score to use for a node health attribute whose value is \"" - PCMK__VALUE_RED "\"", - "Only used when " PCMK__OPT_NODE_HEALTH_STRATEGY " is set to " - PCMK__VALUE_CUSTOM " or " PCMK__VALUE_PROGRESSIVE "." + N_("The score to use for a node health attribute whose value is \"red\""), + N_("Only used when \"node-health-strategy\" is set to \"custom\" or \"progressive\".") }, /*Placement Strategy*/ { "placement-strategy", NULL, "select", "default, utilization, minimal, balanced", "default", check_placement_strategy, - "How the cluster should allocate resources to nodes", + N_("How the cluster should allocate resources to nodes"), NULL }, }; void pe_metadata(pcmk__output_t *out) { const char *desc_short = "Pacemaker scheduler options"; const char *desc_long = "Cluster options used by Pacemaker's scheduler"; gchar *s = pcmk__format_option_metadata("pacemaker-schedulerd", desc_short, desc_long, pe_opts, PCMK__NELEM(pe_opts)); out->output_xml(out, "metadata", s); g_free(s); } void verify_pe_options(GHashTable * options) { pcmk__validate_cluster_options(options, pe_opts, PCMK__NELEM(pe_opts)); } const char * pe_pref(GHashTable * options, const char *name) { return pcmk__cluster_option(options, pe_opts, PCMK__NELEM(pe_opts), name); } const char * 
fail2text(enum action_fail_response fail) { const char *result = ""; switch (fail) { case action_fail_ignore: result = "ignore"; break; case action_fail_demote: result = "demote"; break; case action_fail_block: result = "block"; break; case action_fail_recover: result = "recover"; break; case action_fail_migrate: result = "migrate"; break; case action_fail_stop: result = "stop"; break; case action_fail_fence: result = "fence"; break; case action_fail_standby: result = "standby"; break; case action_fail_restart_container: result = "restart-container"; break; case action_fail_reset_remote: result = "reset-remote"; break; } return result; } enum action_tasks text2task(const char *task) { if (pcmk__str_eq(task, CRMD_ACTION_STOP, pcmk__str_casei)) { return stop_rsc; } else if (pcmk__str_eq(task, CRMD_ACTION_STOPPED, pcmk__str_casei)) { return stopped_rsc; } else if (pcmk__str_eq(task, CRMD_ACTION_START, pcmk__str_casei)) { return start_rsc; } else if (pcmk__str_eq(task, CRMD_ACTION_STARTED, pcmk__str_casei)) { return started_rsc; } else if (pcmk__str_eq(task, CRM_OP_SHUTDOWN, pcmk__str_casei)) { return shutdown_crm; } else if (pcmk__str_eq(task, CRM_OP_FENCE, pcmk__str_casei)) { return stonith_node; } else if (pcmk__str_eq(task, CRMD_ACTION_STATUS, pcmk__str_casei)) { return monitor_rsc; } else if (pcmk__str_eq(task, CRMD_ACTION_NOTIFY, pcmk__str_casei)) { return action_notify; } else if (pcmk__str_eq(task, CRMD_ACTION_NOTIFIED, pcmk__str_casei)) { return action_notified; } else if (pcmk__str_eq(task, CRMD_ACTION_PROMOTE, pcmk__str_casei)) { return action_promote; } else if (pcmk__str_eq(task, CRMD_ACTION_DEMOTE, pcmk__str_casei)) { return action_demote; } else if (pcmk__str_eq(task, CRMD_ACTION_PROMOTED, pcmk__str_casei)) { return action_promoted; } else if (pcmk__str_eq(task, CRMD_ACTION_DEMOTED, pcmk__str_casei)) { return action_demoted; } #if SUPPORT_TRACING if (pcmk__str_eq(task, CRMD_ACTION_CANCEL, pcmk__str_casei)) { return no_action; } else if 
(pcmk__str_eq(task, CRMD_ACTION_DELETE, pcmk__str_casei)) { return no_action; } else if (pcmk__str_eq(task, CRMD_ACTION_STATUS, pcmk__str_casei)) { return no_action; } else if (pcmk__str_eq(task, CRMD_ACTION_MIGRATE, pcmk__str_casei)) { return no_action; } else if (pcmk__str_eq(task, CRMD_ACTION_MIGRATED, pcmk__str_casei)) { return no_action; } crm_trace("Unsupported action: %s", task); #endif return no_action; } const char * task2text(enum action_tasks task) { const char *result = ""; switch (task) { case no_action: result = "no_action"; break; case stop_rsc: result = CRMD_ACTION_STOP; break; case stopped_rsc: result = CRMD_ACTION_STOPPED; break; case start_rsc: result = CRMD_ACTION_START; break; case started_rsc: result = CRMD_ACTION_STARTED; break; case shutdown_crm: result = CRM_OP_SHUTDOWN; break; case stonith_node: result = CRM_OP_FENCE; break; case monitor_rsc: result = CRMD_ACTION_STATUS; break; case action_notify: result = CRMD_ACTION_NOTIFY; break; case action_notified: result = CRMD_ACTION_NOTIFIED; break; case action_promote: result = CRMD_ACTION_PROMOTE; break; case action_promoted: result = CRMD_ACTION_PROMOTED; break; case action_demote: result = CRMD_ACTION_DEMOTE; break; case action_demoted: result = CRMD_ACTION_DEMOTED; break; } return result; } const char * role2text(enum rsc_role_e role) { switch (role) { case RSC_ROLE_UNKNOWN: return RSC_ROLE_UNKNOWN_S; case RSC_ROLE_STOPPED: return RSC_ROLE_STOPPED_S; case RSC_ROLE_STARTED: return RSC_ROLE_STARTED_S; case RSC_ROLE_UNPROMOTED: #ifdef PCMK__COMPAT_2_0 return RSC_ROLE_UNPROMOTED_LEGACY_S; #else return RSC_ROLE_UNPROMOTED_S; #endif case RSC_ROLE_PROMOTED: #ifdef PCMK__COMPAT_2_0 return RSC_ROLE_PROMOTED_LEGACY_S; #else return RSC_ROLE_PROMOTED_S; #endif } CRM_CHECK(role >= RSC_ROLE_UNKNOWN, return RSC_ROLE_UNKNOWN_S); CRM_CHECK(role < RSC_ROLE_MAX, return RSC_ROLE_UNKNOWN_S); // coverity[dead_error_line] return RSC_ROLE_UNKNOWN_S; } enum rsc_role_e text2role(const char *role) { CRM_ASSERT(role != 
NULL); if (pcmk__str_eq(role, RSC_ROLE_STOPPED_S, pcmk__str_casei)) { return RSC_ROLE_STOPPED; } else if (pcmk__str_eq(role, RSC_ROLE_STARTED_S, pcmk__str_casei)) { return RSC_ROLE_STARTED; } else if (pcmk__strcase_any_of(role, RSC_ROLE_UNPROMOTED_S, RSC_ROLE_UNPROMOTED_LEGACY_S, NULL)) { return RSC_ROLE_UNPROMOTED; } else if (pcmk__strcase_any_of(role, RSC_ROLE_PROMOTED_S, RSC_ROLE_PROMOTED_LEGACY_S, NULL)) { return RSC_ROLE_PROMOTED; } else if (pcmk__str_eq(role, RSC_ROLE_UNKNOWN_S, pcmk__str_casei)) { return RSC_ROLE_UNKNOWN; } crm_err("Unknown role: %s", role); return RSC_ROLE_UNKNOWN; } void add_hash_param(GHashTable * hash, const char *name, const char *value) { CRM_CHECK(hash != NULL, return); crm_trace("Adding name='%s' value='%s' to hash table", pcmk__s(name, ""), pcmk__s(value, "")); if (name == NULL || value == NULL) { return; } else if (pcmk__str_eq(value, "#default", pcmk__str_casei)) { return; } else if (g_hash_table_lookup(hash, name) == NULL) { g_hash_table_insert(hash, strdup(name), strdup(value)); } } const char * pe_node_attribute_calculated(const pe_node_t *node, const char *name, const pe_resource_t *rsc) { const char *source; if(node == NULL) { return NULL; } else if(rsc == NULL) { return g_hash_table_lookup(node->details->attrs, name); } source = g_hash_table_lookup(rsc->meta, XML_RSC_ATTR_TARGET); if(source == NULL || !pcmk__str_eq("host", source, pcmk__str_casei)) { return g_hash_table_lookup(node->details->attrs, name); } /* Use attributes set for the containers location * instead of for the container itself * * Useful when the container is using the host's local * storage */ CRM_ASSERT(node->details->remote_rsc); CRM_ASSERT(node->details->remote_rsc->container); if(node->details->remote_rsc->container->running_on) { pe_node_t *host = node->details->remote_rsc->container->running_on->data; pe_rsc_trace(rsc, "%s: Looking for %s on the container host %s", rsc->id, name, pe__node_name(host)); return g_hash_table_lookup(host->details->attrs, 
name); } pe_rsc_trace(rsc, "%s: Not looking for %s on the container host: %s is inactive", rsc->id, name, node->details->remote_rsc->container->id); return NULL; } const char * pe_node_attribute_raw(const pe_node_t *node, const char *name) { if(node == NULL) { return NULL; } return g_hash_table_lookup(node->details->attrs, name); } diff --git a/po/zh_CN.po b/po/zh_CN.po index 82b2ced597..20bd50427b 100644 --- a/po/zh_CN.po +++ b/po/zh_CN.po @@ -1,1018 +1,1061 @@ # # Copyright 2003-2022 the Pacemaker project contributors # # The version control history for this file may have further details. # # This source code is licensed under the GNU Lesser General Public License # version 2.1 or later (LGPLv2.1+) WITHOUT ANY WARRANTY. # #, fuzzy msgid "" msgstr "" "Project-Id-Version: Pacemaker 2\n" "Report-Msgid-Bugs-To: developers@clusterlabs.org\n" -"POT-Creation-Date: 2022-11-24 17:50+0800\n" +"POT-Creation-Date: 2022-12-02 10:26+0800\n" "PO-Revision-Date: 2021-11-08 11:04+0800\n" "Last-Translator: Vivi \n" "Language-Team: CHINESE \n" "Language: zh_CN\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" -#: daemons/controld/controld_control.c:530 +#: daemons/controld/controld_control.c:531 msgid "Pacemaker version on cluster node elected Designated Controller (DC)" msgstr "集群选定的控制器节点(DC)的 Pacemaker 版本" -#: daemons/controld/controld_control.c:531 +#: daemons/controld/controld_control.c:532 msgid "" "Includes a hash which identifies the exact changeset the code was built " "from. Used for diagnostic purposes." msgstr "它包含一个标识所构建代码变更版本的哈希值,其可用于诊断。" -#: daemons/controld/controld_control.c:536 +#: daemons/controld/controld_control.c:537 msgid "The messaging stack on which Pacemaker is currently running" msgstr "Pacemaker 正在使用的消息传输引擎" -#: daemons/controld/controld_control.c:537 +#: daemons/controld/controld_control.c:538 msgid "Used for informational and diagnostic purposes." 
msgstr "用于提供信息和诊断。" -#: daemons/controld/controld_control.c:541 +#: daemons/controld/controld_control.c:542 msgid "An arbitrary name for the cluster" msgstr "任意的集群名称" -#: daemons/controld/controld_control.c:542 +#: daemons/controld/controld_control.c:543 msgid "" "This optional value is mostly for users' convenience as desired in " "administration, but may also be used in Pacemaker configuration rules via " "the #cluster-name node attribute, and by higher-level tools and resource " "agents." msgstr "" "该可选值主要是为了方便用户管理使用,也可以在pacemaker 配置规则中通过 " "#cluster-name 节点属性配置使用,也可以通过高级工具和资源代理使用。" -#: daemons/controld/controld_control.c:550 +#: daemons/controld/controld_control.c:551 msgid "How long to wait for a response from other nodes during start-up" msgstr "启动过程中等待其他节点响应的时间" -#: daemons/controld/controld_control.c:551 +#: daemons/controld/controld_control.c:552 msgid "" "The optimal value will depend on the speed and load of your network and the " "type of switches used." msgstr "其最佳值将取决于你的网络速度和负载以及所用交换机的类型。" -#: daemons/controld/controld_control.c:556 +#: daemons/controld/controld_control.c:557 msgid "" "Zero disables polling, while positive values are an interval in " "seconds(unless other units are specified, for example \"5min\")" msgstr "" "设置为0将禁用轮询,设置为正数将是以秒为单位的时间间隔(除非使用了其他单位,比" "如\"5min\"表示5分钟)" -#: daemons/controld/controld_control.c:559 +#: daemons/controld/controld_control.c:560 msgid "" "Polling interval to recheck cluster state and evaluate rules with date " "specifications" msgstr "重新检查集群状态并且评估具有日期规格的配置规则的轮询间隔" -#: daemons/controld/controld_control.c:561 +#: daemons/controld/controld_control.c:562 msgid "" "Pacemaker is primarily event-driven, and looks ahead to know when to recheck " "cluster state for failure timeouts and most time-based rules. However, it " "will also recheck the cluster after this amount of inactivity, to evaluate " "rules with date specifications and serve as a fail-safe for certain types of " "scheduler bugs." 
msgstr "" "Pacemaker 主要是通过事件驱动的,并能预期重新检查集群状态以评估大多数基于时间" "的规则以及过期的错误。然而无论如何,在集群经过该时间间隔的不活动状态后,它还" "将重新检查集群,以评估具有日期规格的规则,并为某些类型的调度程序缺陷提供故障" "保护。" -#: daemons/controld/controld_control.c:570 +#: daemons/controld/controld_control.c:571 msgid "Maximum amount of system load that should be used by cluster nodes" msgstr "集群节点应该使用的最大系统负载量" -#: daemons/controld/controld_control.c:571 +#: daemons/controld/controld_control.c:572 msgid "" "The cluster will slow down its recovery process when the amount of system " "resources used (currently CPU) approaches this limit" msgstr "当使用的系统资源量(当前为CPU)接近此限制时,集群将减慢其恢复过程" -#: daemons/controld/controld_control.c:577 +#: daemons/controld/controld_control.c:578 msgid "" "Maximum number of jobs that can be scheduled per node (defaults to 2x cores)" msgstr "每个节点可以调度的最大作业数(默认为2x内核数)" -#: daemons/controld/controld_control.c:581 +#: daemons/controld/controld_control.c:582 msgid "How a cluster node should react if notified of its own fencing" msgstr "集群节点在收到针对自己的 fence 操作结果通知时应如何反应" -#: daemons/controld/controld_control.c:582 +#: daemons/controld/controld_control.c:583 msgid "" "A cluster node may receive notification of its own fencing if fencing is " "misconfigured, or if fabric fencing is in use that doesn't cut cluster " "communication. Allowed values are \"stop\" to attempt to immediately stop " "Pacemaker and stay stopped, or \"panic\" to attempt to immediately reboot " "the local node, falling back to stop on failure." msgstr "" "如果有错误的 fence 配置,或者在使用 fabric fence 机制 (并不会切断集群通信)," "则集群节点可能会收到针对自己的 fence 结果通知。允许的值为 \"stop\" 尝试立即停" "止 pacemaker 并保持停用状态,或者 \"panic\" 尝试立即重新启动本地节点,并在失败" "时返回执行stop。" -#: daemons/controld/controld_control.c:592 +#: daemons/controld/controld_control.c:593 msgid "" "Declare an election failed if it is not decided within this much time. If " "you need to adjust this value, it probably indicates the presence of a bug." 
msgstr "" "如果集群在本项设置时间内没有作出决定则宣布选举失败。如果您需要调整该值,这可" "能代表存在某些缺陷。" -#: daemons/controld/controld_control.c:600 +#: daemons/controld/controld_control.c:601 msgid "" "Exit immediately if shutdown does not complete within this much time. If you " "need to adjust this value, it probably indicates the presence of a bug." msgstr "" "如果在这段时间内关机仍未完成,则立即退出。如果您需要调整该值,这可能代表存在" "某些缺陷。" -#: daemons/controld/controld_control.c:608 -#: daemons/controld/controld_control.c:615 +#: daemons/controld/controld_control.c:609 +#: daemons/controld/controld_control.c:616 msgid "" "If you need to adjust this value, it probably indicates the presence of a " "bug." msgstr "如果您需要调整该值,这可能代表存在某些缺陷。" -#: daemons/controld/controld_control.c:621 +#: daemons/controld/controld_control.c:622 msgid "" "*** Advanced Use Only *** Enabling this option will slow down cluster " "recovery under all conditions" msgstr "*** Advanced Use Only *** 启用此选项将在所有情况下减慢集群恢复的速度" -#: daemons/controld/controld_control.c:623 +#: daemons/controld/controld_control.c:624 msgid "" "Delay cluster recovery for this much time to allow for additional events to " "occur. Useful if your configuration is sensitive to the order in which ping " "updates arrive." msgstr "" "集群恢复将被推迟指定的时间间隔,以等待更多事件发生。如果您的配置对 ping 更新" "到达的顺序很敏感,这就很有用" -#: daemons/controld/controld_control.c:630 +#: daemons/controld/controld_control.c:631 #, fuzzy msgid "" "How long before nodes can be assumed to be safely down when watchdog-based " "self-fencing via SBD is in use" msgstr "" "当基于 watchdog 的自我 fence 机制通过SBD 被执行时,我们可以假设节点安全关闭之" "前需要等待多长时间" -#: daemons/controld/controld_control.c:632 +#: daemons/controld/controld_control.c:633 msgid "" "If this is set to a positive value, lost nodes are assumed to self-fence " "using watchdog-based SBD within this much time. This does not require a " "fencing resource to be explicitly configured, though a fence_watchdog " "resource can be configured, to limit use to specific nodes. 
If this is set " "to 0 (the default), the cluster will never assume watchdog-based self-" "fencing. If this is set to a negative value, the cluster will use twice the " "local value of the `SBD_WATCHDOG_TIMEOUT` environment variable if that is " "positive, or otherwise treat this as 0. WARNING: When used, this timeout " "must be larger than `SBD_WATCHDOG_TIMEOUT` on all nodes that use watchdog-" "based SBD, and Pacemaker will refuse to start on any of those nodes where " "this is not true for the local value or SBD is not active. When this is set " "to a negative value, `SBD_WATCHDOG_TIMEOUT` must be set to the same value on " "all nodes that use SBD, otherwise data corruption or loss could occur." msgstr "" "如果设置为正值,则假定丢失的节点在这段时间内使用基于watchdog的SBD进行自我防" "护。这不需要明确配置fence资源,但可以配置一个fence_watchdog资源,以限制特定节" "点的使用。如果设置为0(默认值),集群将永远不会假定基于watchdog的自我防护。如" "果设置为负值,且如果`SBD_WATCHDOG_TIMEOUT`环境变量的本地值为正值,则集群将使" "用该值的两倍,否则将其视为0。警告:在使用基于watchdog的SBD的所有节点上,此超" "时必须大于`SBD_WATCHDOG_TIMEOUT`,如果本地值不是这样,或者SBD未运行,则" "Pacemaker将拒绝在这些节点上启动。如果设置为负值,则在使用SBD的所有节点上," "`SBD_WATCHDOG_TIMEOUT`必须设置为相同的值,否则可能会发生数据损坏或丢失。" -#: daemons/controld/controld_control.c:651 +#: daemons/controld/controld_control.c:652 msgid "" "How many times fencing can fail before it will no longer be immediately re-" "attempted on a target" msgstr "fence操作失败多少次会停止立即尝试" -#: daemons/fenced/pacemaker-fenced.c:1389 +#: daemons/fenced/pacemaker-fenced.c:1378 msgid "Advanced use only: An alternate parameter to supply instead of 'port'" msgstr "仅高级使用:使用替代的参数名,而不是'port'" -#: daemons/fenced/pacemaker-fenced.c:1390 +#: daemons/fenced/pacemaker-fenced.c:1379 msgid "" "some devices do not support the standard 'port' parameter or may provide " "additional ones. Use this to specify an alternate, device-specific, " "parameter that should indicate the machine to be fenced. A value of none can " "be used to tell the cluster not to supply any additional parameters."
msgstr "" "一些设备不支持标准的'port'参数,或者可能提供其他参数。使用此选项可指定一个该" "设备专用的参数名,该参数用于标识需要fence的机器。值none可以用于告诉集群不要提" "供任何其他的参数。" -#: daemons/fenced/pacemaker-fenced.c:1399 +#: daemons/fenced/pacemaker-fenced.c:1388 msgid "" "A mapping of host names to ports numbers for devices that do not support " "host names." msgstr "为不支持主机名的设备提供主机名到端口号的映射。" -#: daemons/fenced/pacemaker-fenced.c:1400 +#: daemons/fenced/pacemaker-fenced.c:1389 msgid "" "Eg. node1:1;node2:2,3 would tell the cluster to use port 1 for node1 and " "ports 2 and 3 for node2" msgstr "" "例如 node1:1;node2:2,3,将会告诉集群对node1使用端口1,对node2使用端口2和3 " -#: daemons/fenced/pacemaker-fenced.c:1404 +#: daemons/fenced/pacemaker-fenced.c:1393 msgid "Eg. node1,node2,node3" msgstr "例如 node1,node2,node3" -#: daemons/fenced/pacemaker-fenced.c:1405 +#: daemons/fenced/pacemaker-fenced.c:1394 msgid "" "A list of machines controlled by this device (Optional unless " "pcmk_host_list=static-list)" msgstr "该设备控制的机器列表(可选参数,除非 pcmk_host_list 设置为 static-list)" -#: daemons/fenced/pacemaker-fenced.c:1410 +#: daemons/fenced/pacemaker-fenced.c:1399 msgid "How to determine which machines are controlled by the device." msgstr "如何确定设备控制哪些机器。" -#: daemons/fenced/pacemaker-fenced.c:1411 +#: daemons/fenced/pacemaker-fenced.c:1400 msgid "" "Allowed values: dynamic-list (query the device via the 'list' command), " "static-list (check the pcmk_host_list attribute), status (query the device " "via the 'status' command), none (assume every device can fence every machine)" msgstr "" "允许的值:dynamic-list(通过'list'命令查询设备),static-list(检查" "pcmk_host_list属性),status(通过'status'命令查询设备),none(假设每个设备" "都可fence 每台机器 )" -#: daemons/fenced/pacemaker-fenced.c:1420 -#: daemons/fenced/pacemaker-fenced.c:1429 +#: daemons/fenced/pacemaker-fenced.c:1409 +#: daemons/fenced/pacemaker-fenced.c:1418 msgid "Enable a base delay for fencing actions and specify base delay value." 
msgstr "在执行 fencing 操作前启用不超过指定时间的延迟。" -#: daemons/fenced/pacemaker-fenced.c:1421 +#: daemons/fenced/pacemaker-fenced.c:1410 msgid "" "Enable a delay of no more than the time specified before executing fencing " "actions. Pacemaker derives the overall delay by taking the value of " "pcmk_delay_base and adding a random delay value such that the sum is kept " "below this maximum." msgstr "" "在执行 fencing 操作前启用不超过指定时间的延迟。 Pacemaker通过获取" "pcmk_delay_base的值并添加随机延迟值来得出总体延迟,从而使总和保持在此最大值以" "下。" -#: daemons/fenced/pacemaker-fenced.c:1431 +#: daemons/fenced/pacemaker-fenced.c:1420 msgid "" "This enables a static delay for fencing actions, which can help avoid " "\"death matches\" where two nodes try to fence each other at the same time. " "If pcmk_delay_max is also used, a random delay will be added such that the " "total delay is kept below that value.This can be set to a single time value " "to apply to any node targeted by this device (useful if a separate device is " "configured for each target), or to a node map (for example, \"node1:1s;" "node2:5\") to set a different value per target." msgstr "" "这将为fencing 操作启用静态延迟,这可以帮助避免\"death matches\"即两个节点试图同" "时互相fence。如果还使用了pcmk_delay_max,则将添加随机延迟,以使总延迟保持在该" "值以下。可以将其设置为单个时间值,以应用于该设备针对的任何节点(适用于为每个" "目标分别配置了各自的设备的情况), 或者设置为一个节点映射 (例如,\"node1:1s;" "node2:5\")从而为每个目标设置不同值。"
msgstr "" "需要首先配置集群属性 concurrent-fencing=true 。然后使用此参数指定可以在该设备" "上并发执行的最多操作数量。 -1 代表没有限制" -#: daemons/fenced/pacemaker-fenced.c:1449 +#: daemons/fenced/pacemaker-fenced.c:1438 msgid "Advanced use only: An alternate command to run instead of 'reboot'" msgstr "仅高级使用:运行替代命令,而不是'reboot'" -#: daemons/fenced/pacemaker-fenced.c:1450 +#: daemons/fenced/pacemaker-fenced.c:1439 msgid "" "Some devices do not support the standard commands or may provide additional " "ones.\n" "Use this to specify an alternate, device-specific, command that implements " "the 'reboot' action." msgstr "" "一些设备不支持标准命令或可能提供其他命令,使用此选项可以指定一个该设备特定的" "替代命令,用来实现'reboot'操作。" -#: daemons/fenced/pacemaker-fenced.c:1455 +#: daemons/fenced/pacemaker-fenced.c:1444 msgid "" "Advanced use only: Specify an alternate timeout to use for reboot actions " "instead of stonith-timeout" msgstr "仅高级使用:指定用于'reboot' 操作的替代超时,而不是stonith-timeout" -#: daemons/fenced/pacemaker-fenced.c:1456 +#: daemons/fenced/pacemaker-fenced.c:1445 msgid "" "Some devices need much more/less time to complete than normal.Use this to " "specify an alternate, device-specific, timeout for 'reboot' actions." msgstr "" "一些设备需要比正常情况下更多或更少的时间来完成操作,使用此选项指定一个用" "于'reboot'操作的该设备特定的替代超时。" -#: daemons/fenced/pacemaker-fenced.c:1461 +#: daemons/fenced/pacemaker-fenced.c:1450 msgid "" "Advanced use only: The maximum number of times to retry the 'reboot' command " "within the timeout period" msgstr "仅高级使用:在超时前重试'reboot'命令的最大次数" -#: daemons/fenced/pacemaker-fenced.c:1462 +#: daemons/fenced/pacemaker-fenced.c:1451 msgid "" "Some devices do not support multiple connections. Operations may 'fail' if " "the device is busy with another task so Pacemaker will automatically retry " "the operation, if there is time remaining. Use this option to alter the " "number of times Pacemaker retries 'reboot' actions before giving up." msgstr "" "一些设备不支持多个连接。 如果设备忙于另一个任务,则操作可能会'失败' ,因此" "Pacemaker将自动重试(如果时间允许)。 使用此选项更改Pacemaker在放弃之前重" "试'reboot' 操作的次数." 
-#: daemons/fenced/pacemaker-fenced.c:1468 +#: daemons/fenced/pacemaker-fenced.c:1457 msgid "Advanced use only: An alternate command to run instead of 'off'" msgstr "仅高级使用:运行替代命令,而不是'off'" -#: daemons/fenced/pacemaker-fenced.c:1469 +#: daemons/fenced/pacemaker-fenced.c:1458 msgid "" "Some devices do not support the standard commands or may provide additional " "ones.Use this to specify an alternate, device-specific, command that " "implements the 'off' action." msgstr "" "一些设备不支持标准命令或可能提供其他命令,使用此选项可指定一个该设备专用的替代" "命令,用来实现'off'操作。" -#: daemons/fenced/pacemaker-fenced.c:1474 +#: daemons/fenced/pacemaker-fenced.c:1463 msgid "" "Advanced use only: Specify an alternate timeout to use for off actions " "instead of stonith-timeout" msgstr "仅高级使用:指定用于off 操作的替代超时,而不是stonith-timeout" -#: daemons/fenced/pacemaker-fenced.c:1475 +#: daemons/fenced/pacemaker-fenced.c:1464 msgid "" "Some devices need much more/less time to complete than normal.Use this to " "specify an alternate, device-specific, timeout for 'off' actions." msgstr "" "一些设备需要比正常情况下更多或更少的时间来完成操作,使用此选项指定一个用" "于'off'操作的该设备特定的替代超时。" -#: daemons/fenced/pacemaker-fenced.c:1480 +#: daemons/fenced/pacemaker-fenced.c:1469 msgid "" "Advanced use only: The maximum number of times to retry the 'off' command " "within the timeout period" msgstr "仅高级使用:在超时前重试'off'命令的最大次数" -#: daemons/fenced/pacemaker-fenced.c:1481 +#: daemons/fenced/pacemaker-fenced.c:1470 msgid "" "Some devices do not support multiple connections. Operations may 'fail' if " "the device is busy with another task so Pacemaker will automatically retry " "the operation, if there is time remaining. Use this option to alter the " "number of times Pacemaker retries 'off' actions before giving up." msgstr "" " 一些设备不支持多个连接。 如果设备忙于另一个任务,则操作可能会'失败' , 因此" "Pacemaker将自动重试(如果时间允许)。 使用此选项更改Pacemaker在放弃之前重" "试'off' 操作的次数." 
-#: daemons/fenced/pacemaker-fenced.c:1487 +#: daemons/fenced/pacemaker-fenced.c:1476 msgid "Advanced use only: An alternate command to run instead of 'on'" msgstr "仅高级使用:运行替代命令,而不是'on'" -#: daemons/fenced/pacemaker-fenced.c:1488 +#: daemons/fenced/pacemaker-fenced.c:1477 msgid "" "Some devices do not support the standard commands or may provide additional " "ones.Use this to specify an alternate, device-specific, command that " "implements the 'on' action." msgstr "" "一些设备不支持标准命令或可能提供其他命令,使用此选项可指定一个该设备特定的替" "代命令,用来实现'on'操作。" -#: daemons/fenced/pacemaker-fenced.c:1493 +#: daemons/fenced/pacemaker-fenced.c:1482 msgid "" "Advanced use only: Specify an alternate timeout to use for on actions " "instead of stonith-timeout" msgstr "仅高级使用:指定用于on 操作的替代超时,而不是stonith-timeout" -#: daemons/fenced/pacemaker-fenced.c:1494 +#: daemons/fenced/pacemaker-fenced.c:1483 msgid "" "Some devices need much more/less time to complete than normal.Use this to " "specify an alternate, device-specific, timeout for 'on' actions." msgstr "" "一些设备需要比正常情况下更多或更少的时间来完成操作,使用此选项指定一个用" "于'on'操作的该设备特定的替代超时。" -#: daemons/fenced/pacemaker-fenced.c:1499 +#: daemons/fenced/pacemaker-fenced.c:1488 msgid "" "Advanced use only: The maximum number of times to retry the 'on' command " "within the timeout period" msgstr "仅高级使用:在超时前重试'on'命令的最大次数" -#: daemons/fenced/pacemaker-fenced.c:1500 +#: daemons/fenced/pacemaker-fenced.c:1489 msgid "" "Some devices do not support multiple connections. Operations may 'fail' if " "the device is busy with another task so Pacemaker will automatically retry " "the operation, if there is time remaining. Use this option to alter the " "number of times Pacemaker retries 'on' actions before giving up." msgstr "" " 一些设备不支持多个连接。 如果设备忙于另一个任务,则操作可能会'失败' , 因此" "Pacemaker将自动重试(如果时间允许)。 使用此选项更改Pacemaker在放弃之前重" "试'on' 操作的次数." 
-#: daemons/fenced/pacemaker-fenced.c:1506 +#: daemons/fenced/pacemaker-fenced.c:1495 msgid "Advanced use only: An alternate command to run instead of 'list'" msgstr "仅高级使用:运行替代命令,而不是'list'" -#: daemons/fenced/pacemaker-fenced.c:1507 +#: daemons/fenced/pacemaker-fenced.c:1496 msgid "" "Some devices do not support the standard commands or may provide additional " "ones.Use this to specify an alternate, device-specific, command that " "implements the 'list' action." msgstr "" "一些设备不支持标准命令或可能提供其他命令,使用此选项可指定一个该设备特定的替" "代命令,用来实现'list'操作。" -#: daemons/fenced/pacemaker-fenced.c:1512 +#: daemons/fenced/pacemaker-fenced.c:1501 msgid "" "Advanced use only: Specify an alternate timeout to use for list actions " "instead of stonith-timeout" msgstr "仅高级使用:指定用于list 操作的替代超时,而不是stonith-timeout" -#: daemons/fenced/pacemaker-fenced.c:1513 +#: daemons/fenced/pacemaker-fenced.c:1502 msgid "" "Some devices need much more/less time to complete than normal.Use this to " "specify an alternate, device-specific, timeout for 'list' actions." msgstr "" "一些设备需要比正常情况下更多或更少的时间来完成操作,使用此选项指定一个用" "于'list'操作的该设备特定的替代超时。" -#: daemons/fenced/pacemaker-fenced.c:1518 +#: daemons/fenced/pacemaker-fenced.c:1507 msgid "" "Advanced use only: The maximum number of times to retry the 'list' command " "within the timeout period" msgstr "仅高级使用:在超时前重试'list'命令的最大次数" -#: daemons/fenced/pacemaker-fenced.c:1519 +#: daemons/fenced/pacemaker-fenced.c:1508 msgid "" "Some devices do not support multiple connections. Operations may 'fail' if " "the device is busy with another task so Pacemaker will automatically retry " "the operation, if there is time remaining. Use this option to alter the " "number of times Pacemaker retries 'list' actions before giving up." msgstr "" " 一些设备不支持多个连接。 如果设备忙于另一个任务,则操作可能会'失败' , 因此" "Pacemaker将自动重试(如果时间允许)。 使用此选项更改Pacemaker在放弃之前重" "试'list' 操作的次数." 
-#: daemons/fenced/pacemaker-fenced.c:1525 +#: daemons/fenced/pacemaker-fenced.c:1514 msgid "Advanced use only: An alternate command to run instead of 'monitor'" msgstr "仅高级使用:运行替代命令,而不是'monitor'" -#: daemons/fenced/pacemaker-fenced.c:1526 +#: daemons/fenced/pacemaker-fenced.c:1515 msgid "" "Some devices do not support the standard commands or may provide additional " "ones.Use this to specify an alternate, device-specific, command that " "implements the 'monitor' action." msgstr "" "一些设备不支持标准命令或可能提供其他命令,使用此选项可指定一个该设备特定的替" "代命令,用来实现'monitor'操作。" -#: daemons/fenced/pacemaker-fenced.c:1531 +#: daemons/fenced/pacemaker-fenced.c:1520 msgid "" "Advanced use only: Specify an alternate timeout to use for monitor actions " "instead of stonith-timeout" msgstr "仅高级使用:指定用于monitor 操作的替代超时,而不是stonith-timeout" -#: daemons/fenced/pacemaker-fenced.c:1532 +#: daemons/fenced/pacemaker-fenced.c:1521 msgid "" "Some devices need much more/less time to complete than normal.\n" "Use this to specify an alternate, device-specific, timeout for 'monitor' " "actions." msgstr "" "一些设备需要比正常情况下更多或更少的时间来完成操作,使用此选项指定一个用" "于'monitor'操作的该设备特定的替代超时。" -#: daemons/fenced/pacemaker-fenced.c:1537 +#: daemons/fenced/pacemaker-fenced.c:1526 msgid "" "Advanced use only: The maximum number of times to retry the 'monitor' " "command within the timeout period" msgstr "仅高级使用:在超时前重试'monitor'命令的最大次数" -#: daemons/fenced/pacemaker-fenced.c:1538 +#: daemons/fenced/pacemaker-fenced.c:1527 msgid "" "Some devices do not support multiple connections. Operations may 'fail' if " "the device is busy with another task so Pacemaker will automatically retry " "the operation, if there is time remaining. Use this option to alter the " "number of times Pacemaker retries 'monitor' actions before giving up." msgstr "" " 一些设备不支持多个连接。 如果设备忙于另一个任务,则操作可能会'失败' , 因此" "Pacemaker将自动重试(如果时间允许)。 使用此选项更改Pacemaker在放弃之前重" "试'monitor' 操作的次数." 
-#: daemons/fenced/pacemaker-fenced.c:1544 +#: daemons/fenced/pacemaker-fenced.c:1533 msgid "Advanced use only: An alternate command to run instead of 'status'" msgstr "仅高级使用:运行替代命令,而不是'status'" -#: daemons/fenced/pacemaker-fenced.c:1545 +#: daemons/fenced/pacemaker-fenced.c:1534 msgid "" "Some devices do not support the standard commands or may provide additional " "ones.Use this to specify an alternate, device-specific, command that " "implements the 'status' action." msgstr "" "一些设备不支持标准命令或可能提供其他命令,使用此选项可指定一个该设备特定的替" "代命令,用来实现'status'操作。" -#: daemons/fenced/pacemaker-fenced.c:1550 +#: daemons/fenced/pacemaker-fenced.c:1539 msgid "" "Advanced use only: Specify an alternate timeout to use for status actions " "instead of stonith-timeout" msgstr "仅高级使用:指定用于status 操作的替代超时,而不是stonith-timeout" -#: daemons/fenced/pacemaker-fenced.c:1551 +#: daemons/fenced/pacemaker-fenced.c:1540 msgid "" "Some devices need much more/less time to complete than normal.Use this to " "specify an alternate, device-specific, timeout for 'status' actions." msgstr "" "一些设备需要比正常情况下更多或更少的时间来完成操作,使用此选项指定一个用" "于'status'操作的该设备特定的替代超时" -#: daemons/fenced/pacemaker-fenced.c:1556 +#: daemons/fenced/pacemaker-fenced.c:1545 msgid "" "Advanced use only: The maximum number of times to retry the 'status' command " "within the timeout period" msgstr "仅高级使用:在超时前重试'status'命令的最大次数" -#: daemons/fenced/pacemaker-fenced.c:1557 +#: daemons/fenced/pacemaker-fenced.c:1546 msgid "" "Some devices do not support multiple connections. Operations may 'fail' if " "the device is busy with another task so Pacemaker will automatically retry " "the operation, if there is time remaining. Use this option to alter the " "number of times Pacemaker retries 'status' actions before giving up." msgstr "" " 一些设备不支持多个连接。 如果设备忙于另一个任务,则操作可能会'失败' , 因此" "Pacemaker将自动重试(如果时间允许)。 使用此选项更改Pacemaker在放弃之前重" "试'status' 操作的次数." 
-#: daemons/fenced/pacemaker-fenced.c:1566 +#: daemons/fenced/pacemaker-fenced.c:1555 msgid "Instance attributes available for all \"stonith\"-class resources" msgstr " 可用于所有stonith类资源的实例属性" -#: daemons/fenced/pacemaker-fenced.c:1568 +#: daemons/fenced/pacemaker-fenced.c:1557 msgid "" "Instance attributes available for all \"stonith\"-class resources and used " "by Pacemaker's fence daemon, formerly known as stonithd" msgstr "" " 可用于所有stonith类资源的实例属性,并由Pacemaker的fence守护程序使用(以前称" "为stonithd)" #: lib/cib/cib_utils.c:559 msgid "Enable Access Control Lists (ACLs) for the CIB" msgstr "为CIB启用访问控制列表(ACL)" #: lib/cib/cib_utils.c:565 msgid "Maximum IPC message backlog before disconnecting a cluster daemon" msgstr "断开集群守护程序之前的最大IPC消息积压" #: lib/cib/cib_utils.c:566 msgid "" "Raise this if log has \"Evicting client\" messages for cluster daemon PIDs " "(a good value is the number of resources in the cluster multiplied by the " "number of nodes)." msgstr "" "如果日志中有针对集群守护程序PID的“Evicting client”消息,则应提高该值(建议将" "该值设为集群中的资源数量乘以节点数量)。" #: lib/common/options.c:633 msgid " Allowed values: " msgstr " 允许的值: " -#: lib/common/cmdline.c:71 +#: lib/common/cmdline.c:70 msgid "Display software version and exit" msgstr "显示软件版本并退出" -#: lib/common/cmdline.c:74 +#: lib/common/cmdline.c:73 msgid "Increase debug output (may be specified multiple times)" msgstr "显示更多调试信息(可多次指定)" #: lib/common/cmdline.c:92 msgid "FORMAT" msgstr "格式" #: lib/common/cmdline.c:94 msgid "Specify file name for output (or \"-\" for stdout)" msgstr "指定输出的文件名 或指定'-' 表示标准输出" #: lib/common/cmdline.c:94 msgid "DEST" msgstr "目标" #: lib/common/cmdline.c:100 msgid "Output Options:" msgstr "输出选项" #: lib/common/cmdline.c:100 msgid "Show output help" msgstr "显示输出帮助" #: lib/pengine/common.c:39 msgid "What to do when the cluster does not have quorum" msgstr "当集群没有必需票数时该如何做" #: lib/pengine/common.c:45 msgid "Whether resources can run on any node by default" msgstr "资源是否默认可以在任何节点上运行" #: lib/pengine/common.c:51 msgid "" "Whether the cluster should
refrain from monitoring, starting, and stopping " "resources" msgstr "集群是否应避免监视,启动和停止资源" #: lib/pengine/common.c:58 msgid "" "Whether a start failure should prevent a resource from being recovered on " "the same node" msgstr "是否避免在同一节点上重启启动失败的资源" #: lib/pengine/common.c:60 msgid "" "When true, the cluster will immediately ban a resource from a node if it " "fails to start there. When false, the cluster will instead check the " "resource's fail count against its migration-threshold." msgstr "" "当为true,如果资源启动失败,集群将立即禁止节点启动该资源,当为false,群集将根" "据其迁移阈值来检查资源的失败计数。" #: lib/pengine/common.c:67 msgid "Whether the cluster should check for active resources during start-up" msgstr "群集是否在启动期间检查运行资源" #: lib/pengine/common.c:73 msgid "Whether to lock resources to a cleanly shut down node" msgstr "是否锁定资源到完全关闭的节点" #: lib/pengine/common.c:74 msgid "" "When true, resources active on a node when it is cleanly shut down are kept " "\"locked\" to that node (not allowed to run elsewhere) until they start " "again on that node after it rejoins (or for at most shutdown-lock-limit, if " "set). Stonith resources and Pacemaker Remote connections are never locked. " "Clone and bundle instances and the promoted role of promotable clones are " "currently never locked, though support could be added in a future release." msgstr "" "设置为true时,在完全关闭的节点上活动的资源将被“锁定”到该节点(不允许在其他地" "方运行),直到该节点重新加入后资源重新启动(或最长shutdown-lock-limit,如果已" "设置)。 Stonith资源和Pacemaker Remote连接永远不会被锁定。 克隆和捆绑实例以及" "可升级克隆的主角色目前从未锁定,尽管可以在将来的发行版中添加支持。" #: lib/pengine/common.c:85 msgid "Do not lock resources to a cleanly shut down node longer than this" msgstr "资源会被锁定到完全关闭的节点的最长时间" #: lib/pengine/common.c:86 msgid "" "If shutdown-lock is true and this is set to a nonzero time duration, " "shutdown locks will expire after this much time has passed since the " "shutdown was initiated, even if the node has not rejoined." 
msgstr "" "如果shutdown-lock为true,并且将此选项设置为非零持续时间,则自从开始shutdown以" "来经过了这么长的时间后,shutdown锁将过期,即使该节点尚未重新加入。" #: lib/pengine/common.c:95 msgid "" "*** Advanced Use Only *** Whether nodes may be fenced as part of recovery" msgstr "*** Advanced Use Only *** 节点是否可以被 fence 以作为集群恢复的一部分" #: lib/pengine/common.c:97 msgid "" "If false, unresponsive nodes are immediately assumed to be harmless, and " "resources that were active on them may be recovered elsewhere. This can " "result in a \"split-brain\" situation, potentially leading to data loss and/" "or service unavailability." msgstr "" "如果为false,则立即假定无响应的节点是无害的,并且可以在其他位置恢复在其上活动" "的资源。 这可能会导致 \"split-brain\" 情况,可能导致数据丢失和/或服务不可用。" #: lib/pengine/common.c:105 msgid "" "Action to send to fence device when a node needs to be fenced (\"poweroff\" " "is a deprecated alias for \"off\")" msgstr "发送到 fence 设备的操作( \"poweroff\" 是 \"off \"的别名,不建议使用)" #: lib/pengine/common.c:112 msgid "*** Advanced Use Only *** Unused by Pacemaker" msgstr "*** Advanced Use Only *** pacemaker未使用" #: lib/pengine/common.c:113 msgid "" "This value is not used by Pacemaker, but is kept for backward compatibility, " "and certain legacy fence agents might use it." msgstr "" "Pacemaker不使用此值,但保留此值是为了向后兼容,某些传统的fence 代理可能会使用" "它。" #: lib/pengine/common.c:119 msgid "Whether watchdog integration is enabled" msgstr "是否启用watchdog集成设置" #: lib/pengine/common.c:120 msgid "" "This is set automatically by the cluster according to whether SBD is " "detected to be in use. User-configured values are ignored. The value `true` " "is meaningful if diskless SBD is used and `stonith-watchdog-timeout` is " "nonzero. In that case, if fencing is required, watchdog-based self-fencing " "will be performed via SBD without requiring a fencing resource explicitly " "configured." 
msgstr "" "这是由集群检测是否正在使用 SBD 并自动设置。用户配置的值将被忽略。如果使用无" "盘 SBD 并且 stonith-watchdog-timeout 不为零时,此选项为 true 才有实际意义。在" "这种情况下,无需明确配置fence资源,如果需要fence时,基于watchdog的自我fence会" "通过SBD执行。" #: lib/pengine/common.c:130 msgid "Allow performing fencing operations in parallel" msgstr "允许并行执行 fencing 操作" #: lib/pengine/common.c:136 msgid "*** Advanced Use Only *** Whether to fence unseen nodes at start-up" msgstr "*** 仅高级使用 *** 是否在启动时fence不可见节点" #: lib/pengine/common.c:137 msgid "" "Setting this to false may lead to a \"split-brain\" situation,potentially " "leading to data loss and/or service unavailability." msgstr "" "将此设置为 false 可能会导致 \"split-brain\" 的情况,可能导致数据丢失和/或服务" "不可用。" #: lib/pengine/common.c:143 msgid "" "Apply fencing delay targeting the lost nodes with the highest total resource " "priority" msgstr "针对具有最高总资源优先级的丢失节点应用fencing延迟" #: lib/pengine/common.c:144 msgid "" "Apply specified delay for the fencings that are targeting the lost nodes " "with the highest total resource priority in case we don't have the majority " "of the nodes in our cluster partition, so that the more significant nodes " "potentially win any fencing match, which is especially meaningful under " "split-brain of 2-node cluster. A promoted resource instance takes the base " "priority + 1 on calculation if the base priority is not 0. Any static/random " "delays that are introduced by `pcmk_delay_base/max` configured for the " "corresponding fencing resources will be added to this delay. This delay " "should be significantly greater than, safely twice, the maximum " "`pcmk_delay_base/max`. By default, priority fencing delay is disabled." 
msgstr "" "如果我们所在的集群分区并不拥有大多数集群节点,则针对丢失节点的fence操作应用指" "定的延迟,这样更重要的节点就能够赢得fence竞赛。这对于双节点集群在split-brain" "状况下尤其有意义。如果基本优先级不为0,在计算时主资源实例获得基本优先级+1。任" "何对于相应的 fence 资源由 pcmk_delay_base/max 配置所引入的静态/随机延迟会被添" "加到此延迟。为了安全, 这个延迟应该明显大于 pcmk_delay_base/max 的最大设置值," "例如两倍。默认情况下,优先级fencing延迟已禁用。" #: lib/pengine/common.c:161 msgid "Maximum time for node-to-node communication" msgstr "最大节点间通信时间" #: lib/pengine/common.c:162 msgid "" "The node elected Designated Controller (DC) will consider an action failed " "if it does not get a response from the node executing the action within this " "time (after considering the action's own timeout). The \"correct\" value " "will depend on the speed and load of your network and cluster nodes." msgstr "" "如果一个操作未在该时间内(并且考虑操作本身的超时时长)从执行该操作的节点获得" "响应,则会被选为指定控制器(DC)的节点认定为失败。\"正确\" 值将取决于速度和您" "的网络和集群节点的负载。" #: lib/pengine/common.c:189 #, fuzzy msgid "Whether the cluster should stop all active resources" msgstr "群集是否在启动期间检查运行资源" #: lib/pengine/common.c:195 msgid "Whether to stop resources that were removed from the configuration" msgstr "是否停止配置已被删除的资源" #: lib/pengine/common.c:201 msgid "Whether to cancel recurring actions removed from the configuration" msgstr "是否取消配置已被删除的的重复操作" #: lib/pengine/common.c:207 msgid "" "*** Deprecated *** Whether to remove stopped resources from the executor" msgstr "***不推荐***是否从pacemaker-execd 守护进程中清除已停止的资源" -#: tools/crm_resource.c:257 +#: lib/pengine/common.c:240 +#, fuzzy +msgid "How cluster should react to node health attributes" +msgstr "集群节点对节点健康属性如何反应" + +#: lib/pengine/common.c:241 +msgid "" +"Requires external entities to create node attributes (named with the prefix " +"\"#health\") with values \"red\", \"yellow\", or \"green\"." +msgstr "" +"需要外部实体创建具有“red”,“yellow”或“green”值的节点属性(前缀为“#health”)" + +#: lib/pengine/common.c:248 +msgid "Base health score assigned to a node" +msgstr "分配给节点的基本健康分数" + +#: lib/pengine/common.c:249 +msgid "Only used when \"node-health-strategy\" is set to \"progressive\"." 
+msgstr "仅在“node-health-strategy”设置为“progressive”时使用。" + +#: lib/pengine/common.c:254 +msgid "The score to use for a node health attribute whose value is \"green\"" +msgstr "为节点健康属性值为“green”所使用的分数" + +#: lib/pengine/common.c:255 lib/pengine/common.c:261 lib/pengine/common.c:267 +msgid "" +"Only used when \"node-health-strategy\" is set to \"custom\" or \"progressive" +"\"." +msgstr "仅在“node-health-strategy”设置为“custom”或“progressive”时使用。" + +#: lib/pengine/common.c:260 +msgid "The score to use for a node health attribute whose value is \"yellow\"" +msgstr "为节点健康属性值为“yellow”所使用的分数" + +#: lib/pengine/common.c:266 +msgid "The score to use for a node health attribute whose value is \"red\"" +msgstr "为节点健康属性值为“red”所使用的分数" + +#: lib/pengine/common.c:275 +#, fuzzy +msgid "How the cluster should allocate resources to nodes" +msgstr "群集应该如何分配资源到节点" + +#: tools/crm_resource.c:258 #, c-format msgid "Aborting because no messages received in %d seconds" msgstr "中止,因为在%d秒内没有接收到消息" -#: tools/crm_resource.c:908 +#: tools/crm_resource.c:909 #, c-format msgid "Invalid check level setting: %s" msgstr "无效的检查级别设置:%s" -#: tools/crm_resource.c:992 +#: tools/crm_resource.c:993 #, c-format msgid "" "Resource '%s' not moved: active in %d locations (promoted in %d).\n" "To prevent '%s' from running on a specific location, specify a node.To " "prevent '%s' from being promoted at a specific location, specify a node and " "the --promoted option." msgstr "" "资源'%s'未移动:在%d个位置运行(其中在%d个位置为主实例)\n" "若要阻止'%s'在特定位置运行,请指定一个节点。若要防止'%s'在指定位置升级,指定" "一个节点并使用--promoted选项" -#: tools/crm_resource.c:1003 +#: tools/crm_resource.c:1004 #, c-format msgid "" "Resource '%s' not moved: active in %d locations.\n" "To prevent '%s' from running on a specific location, specify a node." 
msgstr "" "资源%s未移动:在%d个位置运行\n" "若要防止'%s'运行在特定位置,指定一个节点" -#: tools/crm_resource.c:1078 +#: tools/crm_resource.c:1079 #, c-format msgid "Could not get modified CIB: %s\n" msgstr "无法获得修改的CIB:%s\n" -#: tools/crm_resource.c:1112 +#: tools/crm_resource.c:1113 msgid "You need to specify a resource type with -t" msgstr "需要使用-t指定资源类型" -#: tools/crm_resource.c:1155 +#: tools/crm_resource.c:1156 #, c-format msgid "No agents found for standard '%s'" msgstr "没有发现指定的'%s'标准代理" -#: tools/crm_resource.c:1158 +#: tools/crm_resource.c:1159 #, fuzzy, c-format msgid "No agents found for standard '%s' and provider '%s'" msgstr "没有发现指定的标准%s和提供者%S的资源代理" -#: tools/crm_resource.c:1225 +#: tools/crm_resource.c:1226 #, c-format msgid "No %s found for %s" msgstr "没有发现%s符合%s" -#: tools/crm_resource.c:1230 +#: tools/crm_resource.c:1231 #, c-format msgid "No %s found" msgstr "没有发现%s" -#: tools/crm_resource.c:1290 +#: tools/crm_resource.c:1291 #, c-format msgid "No cluster connection to Pacemaker Remote node %s detected" msgstr "未检测到至pacemaker远程节点%s的集群连接" -#: tools/crm_resource.c:1351 +#: tools/crm_resource.c:1352 msgid "Must specify -t with resource type" msgstr "需要使用-t指定资源类型" -#: tools/crm_resource.c:1357 +#: tools/crm_resource.c:1358 msgid "Must supply -v with new value" msgstr "必须使用-v指定新值" -#: tools/crm_resource.c:1389 +#: tools/crm_resource.c:1390 msgid "Could not create executor connection" msgstr "无法创建到pacemaker-execd守护进程的连接" -#: tools/crm_resource.c:1414 +#: tools/crm_resource.c:1415 #, fuzzy, c-format msgid "Metadata query for %s failed: %s" msgstr ",查询%s的元数据失败: %s\n" -#: tools/crm_resource.c:1420 +#: tools/crm_resource.c:1421 #, c-format msgid "'%s' is not a valid agent specification" msgstr "'%s' 是一个无效的代理" -#: tools/crm_resource.c:1433 +#: tools/crm_resource.c:1434 msgid "--resource cannot be used with --class, --agent, and --provider" msgstr "--resource 不能与 --class, --agent, --provider一起使用" -#: tools/crm_resource.c:1438 +#: tools/crm_resource.c:1439 msgid "" "--class, --agent, and 
--provider can only be used with --validate and --" "force-*" msgstr "--class, --agent和--provider只能被用于--validate和--force-*" -#: tools/crm_resource.c:1447 +#: tools/crm_resource.c:1448 msgid "stonith does not support providers" msgstr "stonith 不支持提供者" -#: tools/crm_resource.c:1451 +#: tools/crm_resource.c:1452 #, c-format msgid "%s is not a known stonith agent" msgstr "%s 不是一个已知stonith代理" -#: tools/crm_resource.c:1456 +#: tools/crm_resource.c:1457 #, c-format msgid "%s:%s:%s is not a known resource" msgstr "%s:%s:%s 不是一个已知资源" -#: tools/crm_resource.c:1570 +#: tools/crm_resource.c:1571 #, c-format msgid "Error creating output format %s: %s" msgstr "创建输出格式错误 %s:%s" -#: tools/crm_resource.c:1597 +#: tools/crm_resource.c:1598 msgid "--expired requires --clear or -U" msgstr "--expired需要和--clear或-U一起使用" -#: tools/crm_resource.c:1614 +#: tools/crm_resource.c:1615 #, c-format msgid "Error parsing '%s' as a name=value pair" msgstr "'%s'解析错误,格式为name=value" -#: tools/crm_resource.c:1711 +#: tools/crm_resource.c:1712 msgid "Must supply a resource id with -r" msgstr "必须使用-r指定资源id" -#: tools/crm_resource.c:1717 +#: tools/crm_resource.c:1718 msgid "Must supply a node name with -N" msgstr "必须使用-N指定节点名称" -#: tools/crm_resource.c:1741 +#: tools/crm_resource.c:1742 msgid "Could not create CIB connection" msgstr "无法创建到CIB的连接" -#: tools/crm_resource.c:1749 +#: tools/crm_resource.c:1750 #, c-format msgid "Could not connect to the CIB: %s" msgstr "不能连接到CIB:%s" -#: tools/crm_resource.c:1770 +#: tools/crm_resource.c:1771 #, c-format msgid "Resource '%s' not found" msgstr "没有发现'%s'资源" -#: tools/crm_resource.c:1782 +#: tools/crm_resource.c:1783 #, c-format msgid "Cannot operate on clone resource instance '%s'" msgstr "不能操作克隆资源实例'%s'" -#: tools/crm_resource.c:1794 +#: tools/crm_resource.c:1795 #, c-format msgid "Node '%s' not found" msgstr "没有发现%s节点" -#: tools/crm_resource.c:1805 tools/crm_resource.c:1814 +#: tools/crm_resource.c:1806 tools/crm_resource.c:1815 #, c-format msgid "Error 
connecting to the controller: %s" msgstr "连接到控制器错误:%s" -#: tools/crm_resource.c:2050 +#: tools/crm_resource.c:2051 msgid "You need to supply a value with the -v option" msgstr "需要使用-v选项提供一个值" -#: tools/crm_resource.c:2105 +#: tools/crm_resource.c:2106 #, c-format msgid "Unimplemented command: %d" msgstr "无效的命令:%d" -#: tools/crm_resource.c:2139 +#: tools/crm_resource.c:2140 #, c-format msgid "Error performing operation: %s" msgstr "执行操作错误:%s" #~ msgid "" #~ "If nonzero, along with `have-watchdog=true` automatically set by the " #~ "cluster, when fencing is required, watchdog-based self-fencing will be " #~ "performed via SBD without requiring a fencing resource explicitly " #~ "configured. If `stonith-watchdog-timeout` is set to a positive value, " #~ "unseen nodes are assumed to self-fence within this much time. +WARNING:+ " #~ "It must be ensured that this value is larger than the " #~ "`SBD_WATCHDOG_TIMEOUT` environment variable on all nodes. Pacemaker " #~ "verifies the settings individually on all nodes and prevents startup or " #~ "shuts down if configured wrongly on the fly. It's strongly recommended " #~ "that `SBD_WATCHDOG_TIMEOUT` is set to the same value on all nodes. If " #~ "`stonith-watchdog-timeout` is set to a negative value, and " #~ "`SBD_WATCHDOG_TIMEOUT` is set, twice that value will be used. +WARNING:+ " #~ "In this case, it's essential (currently not verified by Pacemaker) that " #~ "`SBD_WATCHDOG_TIMEOUT` is set to the same value on all nodes." 
#~ msgstr "" #~ "如果值非零,且集群设置了 `have-watchdog=true` ,当需要 fence 操作时,基于 " #~ "watchdog 的自我 fence 机制将通过SBD执行,而不需要显式配置 fence 资源。如" #~ "果 `stonith-watchdog-timeout` 被设为正值,则假定不可见的节点在这段时间内自" #~ "我fence。 +WARNING:+ 必须确保该值大于所有节点上的`SBD_WATCHDOG_TIMEOUT` 环" #~ "境变量。Pacemaker将在所有节点上单独验证设置,如发现有错误的动态配置,将防" #~ "止节点启动或关闭。强烈建议在所有节点上将 `SBD_WATCHDOG_TIMEOUT` 设置为相同" #~ "的值。如果 `stonith-watchdog-timeout` 设置为负值。并且设置了 " #~ "`SBD_WATCHDOG_TIMEOUT` ,则将使用该值的两倍, +WARNING:+ 在这种情况下,必" #~ "须将所有节点上 `SBD_WATCHDOG_TIMEOUT` 设置为相同的值(目前没有通过pacemaker" #~ "验证)。"