diff --git a/doc/Pacemaker_Explained/en-US/Ap-OCF.txt b/doc/Pacemaker_Explained/en-US/Ap-OCF.txt
index 2e36516cce..25a9b721f1 100644
--- a/doc/Pacemaker_Explained/en-US/Ap-OCF.txt
+++ b/doc/Pacemaker_Explained/en-US/Ap-OCF.txt
@@ -1,258 +1,261 @@
[appendix]
[[ap-ocf]]
== More About OCF Resource Agents ==
=== Location of Custom Scripts ===
indexterm:[OCF Resource Agents]
OCF Resource Agents are found in +/usr/lib/ocf/resource.d/pass:[provider]+.
When creating your own agents, you are encouraged to create a new
directory under +/usr/lib/ocf/resource.d/+ so that they are not
confused with (or overwritten by) the agents shipped by existing providers.
So, for example, if you choose the provider name of bigCorp and want
a new resource named bigApp, you would create a resource agent called
+/usr/lib/ocf/resource.d/bigCorp/bigApp+ and define a resource:
[source,XML]
----
<!-- The id is your choice -->
<primitive id="custom-app" class="ocf" provider="bigCorp" type="bigApp"/>
----
=== Actions ===
All OCF resource agents are required to implement the following actions.
.Required Actions for OCF Agents
[width="95%",cols="3m,3,7",options="header",align="center"]
|=========================================================
|Action
|Description
|Instructions
|start
|Start the resource
|Return 0 on success and an appropriate error code otherwise. Must not
report success until the resource is fully active.
indexterm:[start,OCF Action]
indexterm:[OCF,Action,start]
|stop
|Stop the resource
|Return 0 on success and an appropriate error code otherwise. Must not
report success until the resource is fully stopped.
indexterm:[stop,OCF Action]
indexterm:[OCF,Action,stop]
|monitor
|Check the resource's state
|Exit 0 if the resource is running, 7 if it is stopped, and anything
else if it is failed.
indexterm:[monitor,OCF Action]
indexterm:[OCF,Action,monitor]
NOTE: The monitor script should test the state of the resource on the local machine only.
|meta-data
|Describe the resource
|Provide information about this resource as an XML snippet. Exit with 0.
indexterm:[meta-data,OCF Action]
indexterm:[OCF,Action,meta-data]
NOTE: This is _not_ performed as root.
|validate-all
|Verify the supplied parameters
-|Exit with 0 if parameters are valid, 2 if not valid, 6 if resource is not configured.
+|Return 0 if parameters are valid, 2 if not valid, and 6 if resource is not configured.
indexterm:[validate-all,OCF Action]
indexterm:[OCF,Action,validate-all]
|=========================================================
-Additional requirements (not part of the OCF specs) are placed on
-agents that will be used for advanced concepts like
+Additional requirements (not part of the OCF specification) are placed on
+agents that will be used for advanced concepts such as
<<s-resource-clone,clones>> and <<s-resource-multistate,multi-state>> resources.
-.Optional Actions for OCF Agents
+.Optional Actions for OCF Resource Agents
[width="95%",cols="2m,6,3",options="header",align="center"]
|=========================================================
|Action
|Description
|Instructions
|promote
-|Promote the local instance of a multi-state resource to the master/primary state.
+|Promote the local instance of a multi-state resource to the master (primary) state.
|Return 0 on success
indexterm:[promote,OCF Action]
indexterm:[OCF,Action,promote]
|demote
-|Demote the local instance of a multi-state resource to the slave/secondary state.
+|Demote the local instance of a multi-state resource to the slave (secondary) state.
|Return 0 on success
indexterm:[demote,OCF Action]
indexterm:[OCF,Action,demote]
|notify
-|Used by the cluster to send the agent pre and post notification
+|Used by the cluster to send the agent pre- and post-notification
events telling the resource what has happened and what will happen.
|Must not fail. Must exit with 0.
indexterm:[notify,OCF Action]
indexterm:[OCF,Action,notify]
|=========================================================
-One action specified in the OCF specs is not currently used by the cluster:
+One action specified in the OCF specs, +recover+, is not currently used by the
+cluster. It is intended to be a variant of the +start+ action that tries to
+recover a resource locally.
-* +recover+ - a variant of the +start+ action, this should try to
- recover a resource locally.
-
-Remember to use indexterm:[ocf-tester]`ocf-tester` to verify that your
-new agent complies with the OCF standard properly.
+[IMPORTANT]
+====
+If you create a new OCF resource agent, use indexterm:[ocf-tester]`ocf-tester`
+to verify that the agent complies with the OCF standard properly.
+====
=== How are OCF Return Codes Interpreted? ===
The first thing the cluster does is to check the return code against
the expected result. If the result does not match the expected value,
then the operation is considered to have failed, and recovery action is
initiated.
There are three types of failure recovery:
.Types of recovery performed by the cluster
[width="95%",cols="1m,4,4",options="header",align="center"]
|=========================================================
|Type
|Description
|Action Taken by the Cluster
|soft
|A transient error occurred
|Restart the resource or move it to a new location
indexterm:[soft,OCF error]
indexterm:[OCF,error,soft]
|hard
|A non-transient error that may be specific to the current node occurred
|Move the resource elsewhere and prevent it from being retried on the current node
indexterm:[hard,OCF error]
indexterm:[OCF,error,hard]
|fatal
|A non-transient error that will be common to all cluster nodes (e.g. a bad configuration was specified)
|Stop the resource and prevent it from being started on any cluster node
indexterm:[fatal,OCF error]
indexterm:[OCF,error,fatal]
|=========================================================
-Assuming an action is considered to have failed, the following table
-outlines the different OCF return codes and the type of recovery the
-cluster will initiate when it is received.
-
[[s-ocf-return-codes]]
=== OCF Return Codes ===
+The following table outlines the different OCF return codes and the type of
+recovery the cluster will initiate when a failure code is received.
+Although counterintuitive, even actions that return 0
+(a.k.a. +OCF_SUCCESS+) can be considered to have failed if 0 was not
+the expected return value.
+
.OCF Return Codes and their Recovery Types
[width="95%",cols="1m,4>).
* Recurring actions that return +OCF_ERR_UNIMPLEMENTED+
- do not cause any type of recovery
+ do not cause any type of recovery.
diff --git a/doc/Pacemaker_Explained/en-US/Ch-Advanced-Options.txt b/doc/Pacemaker_Explained/en-US/Ch-Advanced-Options.txt
index 25c207349c..a6ffd42970 100644
--- a/doc/Pacemaker_Explained/en-US/Ch-Advanced-Options.txt
+++ b/doc/Pacemaker_Explained/en-US/Ch-Advanced-Options.txt
@@ -1,692 +1,696 @@
= Advanced Configuration =
[[s-remote-connection]]
== Connecting from a Remote Machine ==
indexterm:[Cluster,Remote connection]
indexterm:[Cluster,Remote administration]
Provided Pacemaker is installed on a machine, it is possible to
connect to the cluster even if the machine itself is not part of that
cluster. To do this, one simply sets up a number of environment
variables and runs the same commands as when working on a cluster
node.
.Environment Variables Used to Connect to Remote Instances of the CIB
[width="95%",cols="1m,1,3<",options="header",align="center"]
|=========================================================
|Environment Variable
|Default
|Description
|CIB_user
|$USER
|The user to connect as. Needs to be part of the +hacluster+ group on
the target host.
indexterm:[Environment Variable,CIB_user]
|CIB_passwd
|
|The user's password. Read from the command line if unset.
indexterm:[Environment Variable,CIB_passwd]
|CIB_server
|localhost
|The host to contact
indexterm:[Environment Variable,CIB_server]
|CIB_port
|
|The port on which to contact the server; required.
indexterm:[Environment Variable,CIB_port]
|CIB_encrypted
|TRUE
|Whether to encrypt network traffic
indexterm:[Environment Variable,CIB_encrypted]
|=========================================================
So, if *c001n01* is an active cluster node and is listening on port 1234
for connections, and *someuser* is a member of the *hacluster* group,
then the following would prompt for *someuser*'s password and return
the cluster's current configuration:
----
# export CIB_port=1234; export CIB_server=c001n01; export CIB_user=someuser;
# cibadmin -Q
----
For security reasons, the cluster does not listen for remote
connections by default. If you wish to allow remote access, you need
to set the +remote-tls-port+ (encrypted) or +remote-clear-port+
(unencrypted) CIB properties (i.e., those kept in the +cib+ tag, like
+num_updates+ and +epoch+).
.Extra top-level CIB properties for remote access
[width="95%",cols="1m,1,3<",options="header",align="center"]
|=========================================================
|Field
|Default
|Description
|remote-tls-port
|_none_
|Listen for encrypted remote connections on this port.
indexterm:[remote-tls-port,Remote Connection Option]
indexterm:[Remote Connection,Option,remote-tls-port]
|remote-clear-port
|_none_
|Listen for plaintext remote connections on this port.
indexterm:[remote-clear-port,Remote Connection Option]
indexterm:[Remote Connection,Option,remote-clear-port]
|=========================================================
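
As a sketch only (the port numbers and other attribute values shown here are
illustrative), the top-level +cib+ element with remote access enabled might
look like this:

[source,XML]
-------
<cib admin_epoch="0" epoch="42" num_updates="1"
     remote-tls-port="9898" remote-clear-port="9999">
  <configuration>
    <!-- cluster options, nodes, resources, constraints, ... -->
  </configuration>
  <status/>
</cib>
-------
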
[[s-recurring-start]]
== Specifying When Recurring Actions are Performed ==
By default, recurring actions are scheduled relative to when the
resource started. So if your resource was last started at 14:32 and
you have a backup set to be performed every 24 hours, then the backup
will always run in the middle of the business day -- hardly
desirable.
To specify a date and time that the operation should be relative to, set
the operation's +interval-origin+. The cluster uses this point to
calculate the correct +start-delay+ such that the operation will occur
at _origin + (interval * N)_.
So, if the operation's interval is 24h, its interval-origin is set to
02:00 and it is currently 14:32, then the cluster would initiate
the operation with a start delay of 11 hours and 28 minutes. If the
resource is moved to another node before 2am, then the operation is
cancelled.
The value specified for +interval+ and +interval-origin+ can be any
date/time conforming to the
http://en.wikipedia.org/wiki/ISO_8601[ISO8601 standard]. By way of
example, to specify an operation that would run on the first Monday of
2009 and every Monday after that, you would add:
.Specifying a Base for Recurring Action Intervals
=====
[source,XML]
-------
<!-- The id and name are illustrative; 2009-01-05 was the first Monday of 2009 -->
<op id="my-weekly-action" name="custom-action" interval="P7D" interval-origin="2009-01-05"/>
-------
=====
== Moving Resources ==
indexterm:[Moving,Resources]
indexterm:[Resource,Moving]
-=== Manual Intervention ===
+=== Moving Resources Manually ===
There are primarily two occasions when you would want to move a
resource from its current location: when the whole node is under
maintenance, and when a single resource needs to be moved.
Since everything eventually comes down to a score, you could create
constraints for every resource to prevent them from running on one
-node. While the configuration can seem convoluted at times, not even
+node. While Pacemaker configuration can seem convoluted at times, not even
we would require this of administrators.
Instead, one can set a special node attribute which tells the cluster
"don't let anything run here". There is even a helpful tool to help
query and set it, called `crm_standby`. To check the standby status
-of the current machine, simply run:
+of the current machine, run:
----
# crm_standby -G
----
A value of +on+ indicates that the node is _not_ able to host any
resources, while a value of +off+ says that it _can_.
You can also check the status of other nodes in the cluster by
specifying the `--node` option:
----
# crm_standby -G --node sles-2
----
To change the current node's standby status, use `-v` instead of `-G`:
----
# crm_standby -v on
----
Again, you can change another host's value by supplying a hostname with `--node`.
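
Under the hood, `crm_standby` simply sets a node attribute named +standby+.
A sketch of what the resulting CIB entry might look like (the node and
attribute-set ids are illustrative):

[source,XML]
-------
<node id="2" uname="sles-2">
  <instance_attributes id="nodes-sles-2">
    <nvpair id="nodes-sles-2-standby" name="standby" value="on"/>
  </instance_attributes>
</node>
-------
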
==== Moving One Resource ====
-When only one resource is required to move, we do this by creating
-location constraints. However, once again we provide a user friendly
+When only one resource is required to move, we could do this by creating
+location constraints. However, once again we provide a user-friendly
shortcut as part of the `crm_resource` command, which creates and
modifies the extra constraints for you. If +Email+ were running on
+sles-1+ and you wanted it moved to a specific location, the command
would look something like:
----
# crm_resource -M -r Email -H sles-2
----
Behind the scenes, the tool will create the following location constraint:
[source,XML]
-------
<rsc_location id="cli-prefer-Email" rsc="Email" node="sles-2" score="INFINITY"/>
-------
It is important to note that subsequent invocations of `crm_resource
-M` are not cumulative. So, if you ran these commands
----
# crm_resource -M -r Email -H sles-2
# crm_resource -M -r Email -H sles-3
----
then it is as if you had never performed the first command.
To allow the resource to move back again, use:
----
# crm_resource -U -r Email
----
Note the use of the word _allow_. The resource can move back to its
original location but, depending on +resource-stickiness+, it might
stay where it is. To be absolutely certain that it moves back to
+sles-1+, move it there before issuing the call to `crm_resource -U`:
----
# crm_resource -M -r Email -H sles-1
# crm_resource -U -r Email
----
Alternatively, if you only care that the resource should be moved from
its current location, try:
----
# crm_resource -B -r Email
----
This will instead create a negative constraint, like:
[source,XML]
-------
<rsc_location id="cli-ban-Email-on-sles-1" rsc="Email" node="sles-1" score="-INFINITY"/>
-------
This will achieve the desired effect, but will also have long-term
consequences. As the tool will warn you, the creation of a
+-INFINITY+ constraint will prevent the resource from running on that
node until `crm_resource -U` is used. This includes the situation
where every other cluster node is no longer available!
In some cases, such as when +resource-stickiness+ is set to
+INFINITY+, it is possible that you will end up with the problem
described in <>. The tool can detect
some of these cases and deals with them by creating both
positive and negative constraints. E.g.

* +Email+ prefers +sles-1+ with a score of +-INFINITY+
* +Email+ prefers +sles-2+ with a score of +INFINITY+
which has the same long-term consequences as discussed earlier.
[[s-failure-migration]]
-=== Moving Resources Due to Failure ===
+=== Moving Resources Due to Repeated Failure ===
New in 1.0 is the concept of a migration threshold.
footnote:[
The naming of this option was perhaps unfortunate as it is easily
-confused with true migration, the process of moving a resource from
+confused with live migration, the process of moving a resource from
one node to another without stopping it. Xen virtual guests are the
most common example of resources that can be migrated in this manner.
]
Simply define +migration-threshold=pass:[N]+ for a resource and it will
migrate to a new node after 'N' failures. There is no threshold defined
by default. To determine the resource's current failure status and
limits, run `crm_mon --failcounts`.
-By default, once the threshold has been reached, this node will no
+By default, once the threshold has been reached, the troublesome node will no
longer be allowed to run the failed resource until the administrator
manually resets the resource's failcount using `crm_failcount` (after
-hopefully first fixing the failure's cause). However it is possible
-to expire them by setting the resource's +failure-timeout+ option.
+hopefully first fixing the failure's cause). Alternatively, it is possible
+to expire them by setting the +failure-timeout+ option for the resource.
-So a setting of +migration-threshold=2+ and +failure-timeout=60s+
+For example, a setting of +migration-threshold=2+ and +failure-timeout=60s+
would cause the resource to move to a new node after 2 failures, and
-allow it to move back (depending on the stickiness and constraint
-scores) after one minute.
+allow it to move back (depending on stickiness and constraint scores) after one
+minute.
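
As a sketch (the resource shown, *ocf:pacemaker:Dummy*, and all ids are purely
illustrative), those options are set as resource meta-attributes:

[source,XML]
-------
<primitive id="my-rsc" class="ocf" provider="pacemaker" type="Dummy">
  <meta_attributes id="my-rsc-meta_attributes">
    <nvpair id="my-rsc-migration-threshold" name="migration-threshold" value="2"/>
    <nvpair id="my-rsc-failure-timeout" name="failure-timeout" value="60s"/>
  </meta_attributes>
</primitive>
-------
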
-There are two exceptions to the migration threshold concept; they
-occur when a resource either fails to start or fails to stop. Start
-failures cause the failcount to be set to +INFINITY+ and thus always
+There are two exceptions to the migration threshold concept:
+when a resource either fails to start or fails to stop.
+
+Start failures cause the failcount to be set to +INFINITY+ and thus always
cause the resource to move immediately.
Stop failures are slightly different and crucial. If a resource fails
to stop and STONITH is enabled, then the cluster will fence the node
in order to be able to start the resource elsewhere. If STONITH is
not enabled, then the cluster has no way to continue and will not try
to start the resource elsewhere, but will try to stop it again after
the failure timeout.
[IMPORTANT]
-Please read <> before enabling this option.
+Please read <> to understand how timeouts work
+before configuring a +failure-timeout+.
=== Moving Resources Due to Connectivity Changes ===
-Setting up the cluster to move resources when external connectivity is
-lost is a two-step process.
-
-==== Tell Pacemaker to monitor connectivity ====
+You can configure the cluster to move resources when external connectivity is
+lost in two steps.
+==== Tell Pacemaker to Monitor Connectivity ====
-To do this, you need to add a +ping+ resource to the cluster. The
-+ping+ resource uses the system utility of the same name to a test if
+First, add an *ocf:pacemaker:ping* resource to the cluster. The
+*ping* resource uses the system utility of the same name to test whether a
list of machines (specified by DNS hostname or IPv4/IPv6 address) are
-reachable and uses the results to maintain a node attribute normally
-called +pingd+.
+reachable and uses the results to maintain a node attribute called +pingd+
+by default.
footnote:[
-The attribute name is customizable; that allows multiple ping groups to be defined.
+The attribute name is customizable, in order to allow multiple ping groups to be defined.
]
[NOTE]
Older versions of Heartbeat required users to add ping nodes to _ha.cf_ - this is no longer required.
[IMPORTANT]
===========
Older versions of Pacemaker used a custom binary called 'pingd' for
this functionality; this is now deprecated in favor of 'ping'.
If your version of Pacemaker does not contain the ping agent, you can
download the latest version from
https://github.com/ClusterLabs/pacemaker/tree/master/extra/resources/ping
===========
-Normally the resource will run on all cluster nodes, which means that
+Normally, the ping resource should run on all cluster nodes, which means that
you'll need to create a clone. A template for this can be found below
along with a description of the most interesting parameters.
.Common Options for a 'ping' Resource
[width="95%",cols="1m,4<",options="header",align="center"]
|=========================================================
|Field
|Description
|dampen
|The time to wait (dampening) for further changes to occur. Use this
to prevent a resource from bouncing around the cluster when cluster
nodes notice the loss of connectivity at slightly different times.
indexterm:[dampen,Ping Resource Option]
indexterm:[Ping Resource,Option,dampen]
|multiplier
|The number of connected ping nodes gets multiplied by this value to
get a score. Useful when there are multiple ping nodes configured.
indexterm:[multiplier,Ping Resource Option]
indexterm:[Ping Resource,Option,multiplier]
|host_list
|The machines to contact in order to determine the current
connectivity status. Allowed values include resolvable DNS host
names, IPv4 and IPv6 addresses.
indexterm:[host_list,Ping Resource Option]
indexterm:[Ping Resource,Option,host_list]
|=========================================================
.An example ping cluster resource that checks node connectivity once every minute
=====
[source,XML]
------------
<clone id="Connected">
  <primitive id="ping" class="ocf" provider="pacemaker" type="ping">
    <instance_attributes id="ping-attrs">
      <!-- The host names here are placeholders for your own ping targets -->
      <nvpair id="ping-dampen" name="dampen" value="5s"/>
      <nvpair id="ping-multiplier" name="multiplier" value="1000"/>
      <nvpair id="ping-hosts" name="host_list" value="my.gateway.com www.bigcorp.com"/>
    </instance_attributes>
    <operations>
      <op id="ping-monitor-60s" name="monitor" interval="60s"/>
    </operations>
  </primitive>
</clone>
------------
=====
[IMPORTANT]
===========
You're only half done. The next section deals with telling Pacemaker
how to deal with the connectivity status that +ocf:pacemaker:ping+ is
recording.
===========
-==== Tell Pacemaker how to interpret the connectivity data ====
+==== Tell Pacemaker How to Interpret the Connectivity Data ====
-[NOTE]
+[IMPORTANT]
======
-Before reading the following, please make sure you have read and
-understood <> above.
+Before attempting the following, make sure you understand
+<>.
======
-There are a number of ways to use the connectivity data provided by
-Heartbeat. The most common setup is for people to have a single ping
-node, to prevent the cluster from running a resource on any
-unconnected node.
+There are a number of ways to use the connectivity data.
-////
-TODO: is the idea that only nodes that can reach eg. the router should have active resources?
-////
+The most common setup is for people to have a single ping
+target (e.g. the service network's default gateway), to prevent the cluster
+from running a resource on any unconnected node.
-.Don't run on unconnected nodes
+.Don't run a resource on unconnected nodes
=====
[source,XML]
-------
<!-- "Webserver" stands in for whichever resource should avoid unconnected nodes -->
<rsc_location id="WebServer-no-connectivity" rsc="Webserver">
  <rule id="ping-exclude-rule" score="-INFINITY">
    <expression id="ping-exclude" attribute="pingd" operation="not_defined"/>
  </rule>
</rsc_location>
-------
=====
-A more complex setup is to have a number of ping nodes configured.
+A more complex setup is to have a number of ping targets configured.
You can require the cluster to only run resources on nodes that can
connect to all (or a minimum subset) of them.
-.Run only on nodes connected to three or more ping nodes; this assumes +multiplier+ is set to 1000:
+.Run only on nodes connected to three or more ping targets.
=====
[source,XML]
-------
+<!-- This assumes the ping resource's multiplier is set to 1000 -->
+<rsc_location id="WebServer-connectivity" rsc="Webserver">
+  <rule id="ping-minimum-rule" score="-INFINITY">
+    <expression id="ping-minimum" attribute="pingd" operation="lt" value="3000"/>
+  </rule>
+</rsc_location>
-------
=====
-Instead you can tell the cluster only to _prefer_ nodes with the best
+Alternatively, you can tell the cluster only to _prefer_ nodes with the best
connectivity. Just be sure to set +multiplier+ to a value higher than
that of +resource-stickiness+ (and don't set either of them to
+INFINITY+).
.Prefer the node with the most connected ping nodes
=====
[source,XML]
-------
<rsc_location id="WebServer-prefer-connectivity" rsc="Webserver">
  <rule id="ping-prefer-rule" score-attribute="pingd">
    <expression id="ping-prefer" attribute="pingd" operation="defined"/>
  </rule>
</rsc_location>
-------
=====
It is perhaps easier to think of this in terms of the simple
constraints that the cluster translates it into. For example, if
*sles-1* is connected to all five ping nodes but *sles-2* is only
connected to two, then it would be as if you instead had the following
constraints in your configuration:
.How the cluster translates the above location constraint
=====
[source,XML]
-------
<!-- Assuming multiplier is set to 1000 -->
<rsc_location id="ping-1" rsc="Webserver" node="sles-1" score="5000"/>
<rsc_location id="ping-2" rsc="Webserver" node="sles-2" score="2000"/>
-------
=====
The advantage is that you don't have to manually update any
constraints whenever your network connectivity changes.
You can also combine the concepts above into something even more
complex. The example below shows how you can prefer the node with the
most connected ping nodes provided they have connectivity to at least
three (again assuming that +multiplier+ is set to 1000).
.A more complex example of choosing a location based on connectivity
=====
[source,XML]
-------
<rsc_location id="WebServer-connectivity" rsc="Webserver">
  <rule id="ping-exclude-rule" score="-INFINITY">
    <expression id="ping-exclude" attribute="pingd" operation="lt" value="3000"/>
  </rule>
  <rule id="ping-prefer-rule" score-attribute="pingd">
    <expression id="ping-prefer" attribute="pingd" operation="defined"/>
  </rule>
</rsc_location>
-------
=====
=== Resource Migration ===
Some resources, such as Xen virtual guests, are able to move to
another location without loss of state. We call this resource
migration; this is different from the normal practice of stopping the
resource on the first machine and starting it elsewhere.
-Not all resources are able to migrate, see the Migration Checklist
+Not all resources are able to migrate (see the Migration Checklist
below), and even those that can won't do so in all situations.
-Conceptually there are two requirements from which the other
+Conceptually, there are two requirements from which the other
prerequisites follow:
-* the resource must be active and healthy at the old location
+* The resource must be active and healthy at the old location; and
* Everything required for the resource to run must be available on
- both the old and new locations
+ both the old and new locations.
-The cluster is able to accommodate both push and pull migration models
-by requiring the resource agent to support two new actions:
+The cluster is able to accommodate both 'push' and 'pull' migration models
+by requiring the resource agent to support two special actions:
+migrate_to+ (performed on the current location) and +migrate_from+
(performed on the destination).
In push migration, the process on the current location transfers the
resource to the new location where it is later activated. In this
scenario, most of the work would be done in the +migrate_to+ action
and, if anything, the activation would occur during +migrate_from+.
Conversely for pull, the +migrate_to+ action is practically empty and
+migrate_from+ does most of the work, extracting the relevant resource
state from the old location and activating it.
There is no wrong or right way to implement migration for your
service, as long as it works.
==== Migration Checklist ====
* The resource may not be a clone.
* The resource must use an OCF style agent.
* The resource must not be in a failed or degraded state.
* The resource must not, directly or indirectly, depend on any
primitive or group resources.
* The resource must support two special actions: +migrate_to+ and
+migrate_from+, and advertise them in its metadata.
* The resource must have the +allow-migrate+ meta-attribute set to
+true+ (which is not the default).
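
For example (a sketch; the Xen guest, its +xmfile+ parameter value, and all ids
are illustrative), enabling migration for a Xen guest might look like:

[source,XML]
-------
<primitive id="vm-guest1" class="ocf" provider="heartbeat" type="Xen">
  <instance_attributes id="vm-guest1-params">
    <nvpair id="vm-guest1-xmfile" name="xmfile" value="/etc/xen/guest1.cfg"/>
  </instance_attributes>
  <meta_attributes id="vm-guest1-meta">
    <nvpair id="vm-guest1-allow-migrate" name="allow-migrate" value="true"/>
  </meta_attributes>
</primitive>
-------
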
////
TODO: how can a KVM with DRBD migrate?
////
If the resource depends on a clone, and at the time the resource needs
to be moved, the clone has instances that are stopping and instances
that are starting, then the resource will be moved in the traditional
manner. The Policy Engine is not yet able to model this situation
correctly and so takes the safe (yet less optimal) path.
[[s-reusing-config-elements]]
== Reusing Rules, Options and Sets of Operations ==
Sometimes a number of constraints need to use the same set of rules,
and resources need to set the same options and parameters. To
simplify this situation, you can refer to an existing object using an
+id-ref+ instead of an id.
So if for one resource you have
[source,XML]
------
<rsc_location id="WebServer-connectivity" rsc="Webserver">
  <rule id="ping-prefer-rule" score-attribute="pingd">
    <expression id="ping-prefer" attribute="pingd" operation="defined"/>
  </rule>
</rsc_location>
------
Then, rather than duplicating the rule for all your other resources, you can instead specify:
.Referencing rules from other constraints
=====
[source,XML]
-------
<rsc_location id="WebDB-connectivity" rsc="WebDB">
  <!-- Reuse the rule defined for WebServer above -->
  <rule id-ref="ping-prefer-rule"/>
</rsc_location>
-------
=====
[IMPORTANT]
===========
The cluster will insist that the +rule+ exists somewhere. Attempting
to add a reference to a non-existing rule will cause a validation
failure, as will attempting to remove a +rule+ that is referenced
elsewhere.
===========
The same principle applies for +meta_attributes+ and
+instance_attributes+ as illustrated in the example below:
.Referencing attributes, options, and operations from other resources
=====
[source,XML]
-------
<primitive id="mySpecialRsc" class="ocf" provider="me" type="Special">
  <instance_attributes id="mySpecialRsc-attrs">
    <nvpair id="default-interface" name="interface" value="eth0"/>
  </instance_attributes>
  <meta_attributes id="mySpecialRsc-options">
    <nvpair id="mySpecialRsc-failure-timeout" name="failure-timeout" value="5m"/>
  </meta_attributes>
</primitive>
<primitive id="myOtherRsc" class="ocf" provider="me" type="Other">
  <!-- Reuse the attribute sets defined above -->
  <instance_attributes id-ref="mySpecialRsc-attrs"/>
  <meta_attributes id-ref="mySpecialRsc-options"/>
</primitive>
-------
=====
== Reloading Services After a Definition Change ==
The cluster automatically detects changes to the definition of
-services it manages. However, the normal response is to stop the
+services it manages. The normal response is to stop the
service (using the old definition) and start it again (with the new
definition). This works well, but some services are smarter and can
be told to use a new set of options without restarting.
-To take advantage of this capability, your resource agent must:
+To take advantage of this capability, the resource agent must:
. Accept the +reload+ operation and perform any required actions.
- _The steps required here depend completely on your application!_
+ _The actions here depend completely on your application!_
+
-.The DRBD Agent's Control logic for Supporting the +reload+ Operation
+.The DRBD agent's logic for supporting +reload+
=====
[source,Bash]
-------
case $1 in
start)
drbd_start
;;
stop)
drbd_stop
;;
reload)
drbd_reload
;;
monitor)
drbd_monitor
;;
*)
drbd_usage
exit $OCF_ERR_UNIMPLEMENTED
;;
esac
exit $?
-------
=====
. Advertise the +reload+ operation in the +actions+ section of its metadata
+
.The DRBD Agent Advertising Support for the +reload+ Operation
=====
[source,XML]
-------
<?xml version="1.0"?>
<resource-agent name="drbd">
  <version>1.1</version>
  <longdesc lang="en">
    Master/Slave OCF Resource Agent for DRBD
  </longdesc>
  ...
  <actions>
    <action name="reload" timeout="240"/>
    ...
  </actions>
</resource-agent>
-------
=====
. Advertise one or more parameters that can take effect using +reload+.
+
Any parameter with +unique+ set to 0 is eligible to be used in this way.
+
.Parameter that can be changed using reload
=====
[source,XML]
-------
<parameter name="drbdconf" unique="0">
  <longdesc lang="en">Full path to the drbd.conf file.</longdesc>
  <shortdesc lang="en">Path to drbd.conf</shortdesc>
  <content type="string"/>
</parameter>
-------
=====
Once these requirements are satisfied, the cluster will automatically
know to reload the resource (instead of restarting) when a non-unique
field changes.
[NOTE]
======
-The metadata is re-read when the resource is started. This may mean
-that the resource will be restarted the first time, even though you
-changed a parameter with +unique=0+
+Metadata will not be re-read unless the resource needs to be started. This may
+mean that the resource will be restarted the first time, even though you
+changed a parameter with +unique=0+.
======
[NOTE]
======
If both a unique and non-unique field are changed simultaneously, the
resource will still be restarted.
======
diff --git a/doc/Pacemaker_Explained/en-US/Ch-Advanced-Resources.txt b/doc/Pacemaker_Explained/en-US/Ch-Advanced-Resources.txt
index 4b34ef5ad1..670eee324e 100644
--- a/doc/Pacemaker_Explained/en-US/Ch-Advanced-Resources.txt
+++ b/doc/Pacemaker_Explained/en-US/Ch-Advanced-Resources.txt
@@ -1,1030 +1,1034 @@
= Advanced Resource Types =
[[group-resources]]
== Groups - A Syntactic Shortcut ==
indexterm:[Group Resources]
indexterm:[Resources,Groups]
One of the most common elements of a cluster is a set of resources
that need to be located together, start sequentially, and stop in the
-reverse order. To simplify this configuration we support the concept
+reverse order. To simplify this configuration, we support the concept
of groups.
-.An example group
+.A group of two primitive resources
======
[source,XML]
-------
<group id="shortcut">
  <primitive id="Public-IP" class="ocf" provider="heartbeat" type="IPaddr">
    <instance_attributes id="Public-IP-params">
      <nvpair id="Public-IP-ip" name="ip" value="192.0.2.2"/>
    </instance_attributes>
  </primitive>
  <primitive id="Email" class="lsb" type="exim"/>
</group>
-------
======
Although the example above contains only two resources, there is no
limit to the number of resources a group can contain. The example is
also sufficient to explain the fundamental properties of a group:
* Resources are started in the order they appear in (+Public-IP+
first, then +Email+)
* Resources are stopped in the reverse order to which they appear in
(+Email+ first, then +Public-IP+)
If a resource in the group can't run anywhere, then nothing after that
is allowed to run, either:
* If +Public-IP+ can't run anywhere, neither can +Email+;
* but if +Email+ can't run anywhere, this does not affect +Public-IP+
in any way
The group above is logically equivalent to writing:
.How the cluster sees a group resource
======
[source,XML]
-------
<configuration>
  <resources>
    <primitive id="Public-IP" class="ocf" provider="heartbeat" type="IPaddr">
      <instance_attributes id="Public-IP-params">
        <nvpair id="Public-IP-ip" name="ip" value="192.0.2.2"/>
      </instance_attributes>
    </primitive>
    <primitive id="Email" class="lsb" type="exim"/>
  </resources>
  <constraints>
    <rsc_colocation id="Email-with-Public-IP" rsc="Email" with-rsc="Public-IP" score="INFINITY"/>
    <rsc_order id="Public-IP-before-Email" first="Public-IP" then="Email"/>
  </constraints>
</configuration>
-------
======
Obviously as the group grows bigger, the reduced configuration effort
can become significant.
Another (typical) example of a group is a DRBD volume, the filesystem
mount, an IP address, and an application that uses them.
=== Group Properties ===
.Properties of a Group Resource
[width="95%",cols="3m,5<",options="header",align="center"]
|=========================================================
|Field
|Description
|id
-|Your name for the group
+|A unique name for the group
indexterm:[id,Group Resource Property]
indexterm:[Resource,Group Property,id]
|=========================================================
=== Group Options ===
-Options inherited from <> resources:
-+priority, target-role, is-managed+
+Groups inherit the +priority+, +target-role+, and +is-managed+ properties
+from primitive resources. See <> for information about
+those properties.
=== Group Instance Attributes ===
-Groups have no instance attributes, however any that are set here will
-be inherited by the group's children.
+Groups have no instance attributes. However, any that are set for the group
+object will be inherited by the group's children.
=== Group Contents ===
-Groups may only contain a collection of
-<> cluster resources. To refer to
-the child of a group resource, just use the child's id instead of the
-group's.
+Groups may only contain a collection of cluster resources (see
+<>). To refer to a child of a group resource, just use
+the child's +id+ instead of the group's.
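
For instance (a sketch reusing the group example above; the constraint id and
score are illustrative), a location constraint can target the +Email+ child
directly:

[source,XML]
-------
<rsc_location id="Email-prefers-node1" rsc="Email" node="node1" score="200"/>
-------
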
=== Group Constraints ===
-Although it is possible to reference the group's children in
-constraints, it is usually preferable to use the group's name instead.
+Although it is possible to reference a group's children in
+constraints, it is usually preferable to reference the group itself.
-.Example constraints involving groups
+.Some constraints involving groups
======
[source,XML]
-------
<constraints>
  <!-- "Webserver" is a stand-in for some other resource -->
  <rsc_location id="group-prefers-node1" rsc="shortcut" node="node1" score="500"/>
  <rsc_colocation id="webserver-with-group" rsc="Webserver" with-rsc="shortcut" score="INFINITY"/>
  <rsc_order id="start-group-then-webserver" first="shortcut" then="Webserver"/>
</constraints>
-------
======
=== Group Stickiness ===
indexterm:[resource-stickiness,Groups]
Stickiness, the measure of how much a resource wants to stay where it
is, is additive in groups. Every active resource of the group will
contribute its stickiness value to the group's total. So if the
default +resource-stickiness+ is 100, and a group has seven members,
five of which are active, then the group as a whole will prefer its
current location with a score of 500.
[[s-resource-clone]]
== Clones - Resources That Get Active on Multiple Hosts ==
indexterm:[Clone Resources]
indexterm:[Resources,Clones]
-Clones were initially conceived as a convenient way to start N
-instances of an IP resource and have them distributed throughout the
+Clones were initially conceived as a convenient way to start multiple
+instances of an IP address resource and have them distributed throughout the
cluster for load balancing. They have turned out to be quite useful for
-a number of purposes including integrating with Red Hat's DLM, the
-fencing subsystem, and OCFS2.
+a number of purposes including integrating with the Distributed Lock Manager
+(used by many cluster filesystems), the fencing subsystem, and OCFS2.
You can clone any resource, provided the resource agent supports it.
Three types of cloned resources exist:
* Anonymous
-* Globally Unique
+* Globally unique
* Stateful
-Anonymous clones are the simplest type. These resources behave
+'Anonymous' clones are the simplest. These behave
completely identically everywhere they are running. Because of this,
-there can only be one copy of an anonymous clone active per machine.
+there can be only one copy of an anonymous clone active per machine.
-Globally unique clones are distinct entities. A copy of the clone
+'Globally unique' clones are distinct entities. A copy of the clone
running on one machine is not equivalent to another instance on
-another node. Nor would any two copies on the same node be
+another node, nor would any two copies on the same node be
equivalent.
-Stateful clones are covered later in <>.
+'Stateful' clones are covered later in <>.
-.An example clone
+.A clone of an LSB resource
======
[source,XML]
-------
<clone id="apache-clone">
  <meta_attributes id="apache-clone-meta">
    <nvpair id="apache-unique" name="globally-unique" value="false"/>
  </meta_attributes>
  <primitive id="apache" class="lsb" type="apache"/>
</clone>
-------
======
=== Clone Properties ===
.Properties of a Clone Resource
[width="95%",cols="3m,5<",options="header",align="center"]
|=========================================================
|Field
|Description
|id
-|Your name for the clone
+|A unique name for the clone
indexterm:[id,Clone Property]
indexterm:[Clone,Property,id]
|=========================================================
=== Clone Options ===
Options inherited from <> resources:
+priority, target-role, is-managed+
.Clone-specific configuration options
[width="95%",cols="1m,1,3<",options="header",align="center"]
|=========================================================
|Field
|Default
|Description
|clone-max
|number of nodes in cluster
|How many copies of the resource to start
indexterm:[clone-max,Clone Option]
indexterm:[Clone,Option,clone-max]
|clone-node-max
|1
|How many copies of the resource can be started on a single node
indexterm:[clone-node-max,Clone Option]
indexterm:[Clone,Option,clone-node-max]
|notify
|true
|When stopping or starting a copy of the clone, tell all the other
copies beforehand and again when the action was successful. Allowed values:
+false+, +true+
indexterm:[notify,Clone Option]
indexterm:[Clone,Option,notify]
|globally-unique
|false
|Does each copy of the clone perform a different function? Allowed
values: +false+, +true+
indexterm:[globally-unique,Clone Option]
indexterm:[Clone,Option,globally-unique]
|ordered
|false
|Should the copies be started in series (instead of in
parallel)? Allowed values: +false+, +true+
indexterm:[ordered,Clone Option]
indexterm:[Clone,Option,ordered]
|interleave
|false
|If this clone depends on another clone via an ordering constraint,
is it allowed to start after the local instance of the other clone
starts, rather than wait for all instances of the other clone to start?
Allowed values: +false+, +true+
indexterm:[interleave,Clone Option]
indexterm:[Clone,Option,interleave]
|=========================================================
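
As a sketch (ids and values are illustrative), limiting a clone to two
instances cluster-wide, with at most one per node, might look like:

[source,XML]
-------
<clone id="apache-clone">
  <meta_attributes id="apache-clone-options">
    <nvpair id="apache-clone-max" name="clone-max" value="2"/>
    <nvpair id="apache-clone-node-max" name="clone-node-max" value="1"/>
  </meta_attributes>
  <primitive id="apache" class="lsb" type="apache"/>
</clone>
-------
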
=== Clone Instance Attributes ===
Clones have no instance attributes; however, any that are set here
will be inherited by the clone's children.
=== Clone Contents ===
-Clones must contain exactly one group or one regular resource.
+Clones must contain exactly one primitive or group resource.
[WARNING]
You should never reference the name of a clone's child.
If you think you need to do this, you probably need to re-evaluate your design.
=== Clone Constraints ===
In most cases, a clone will have a single copy on each active cluster
node. If this is not the case, you can indicate which nodes the
cluster should preferentially assign copies to with resource location
constraints. These constraints are written no differently from those
for regular resources, except that the clone's id is used.
Ordering constraints behave slightly differently for clones. In the
example below, +apache-stats+ will wait until all copies of the clone
that need to be started have done so before being started itself.
Only if _no_ copies can be started will +apache-stats+ be prevented
from being active. Additionally, the clone will wait for
+apache-stats+ to be stopped before stopping itself.
Colocation of a regular (or group) resource with a clone means that
the resource can run on any machine with an active copy of the clone.
The cluster will choose a copy based on where the clone is running and
the resource's own location preferences.
Colocation between clones is also possible. In such cases, the set of
allowed locations for the clone is limited to nodes on which the clone
is (or will be) active. Allocation is then performed as normally.
-.Example constraints involving clones
+.Some constraints involving clones
======
[source,XML]
-------
<constraints>
  <rsc_location id="clone-prefers-node1" rsc="apache-clone" node="node1" score="500"/>
  <rsc_colocation id="stats-with-clone" rsc="apache-stats" with-rsc="apache-clone" score="INFINITY"/>
  <rsc_order id="start-clone-then-stats" first="apache-clone" then="apache-stats"/>
</constraints>
-------
======
=== Clone Stickiness ===
indexterm:[resource-stickiness,Clones]
To achieve a stable allocation pattern, clones are slightly sticky by
default. If no value for +resource-stickiness+ is provided, the clone
will use a value of 1. Being a small value, it causes minimal
disturbance to the score calculations of other resources but is enough
to prevent Pacemaker from needlessly moving copies around the cluster.
=== Clone Resource Agent Requirements ===
Any resource can be used as an anonymous clone, as it requires no
additional support from the resource agent. Whether it makes sense to
do so depends on your resource and its resource agent.
Globally unique clones do require some additional support in the
resource agent. In particular, it must only respond with
+$\{OCF_SUCCESS}+ if the node has that exact instance active. All
other probes for instances of the clone should result in
+$\{OCF_NOT_RUNNING}+ (or one of the other OCF error codes if
they are failed).
-Copies of a clone are identified by appending a colon and a numerical
-offset, eg. +apache:2+.
+Individual instances of a clone are identified by appending a colon and a
+numerical offset, e.g. +apache:2+.
Resource agents can find out how many copies there are by examining
the +OCF_RESKEY_CRM_meta_clone_max+ environment variable and which
copy it is by examining +OCF_RESKEY_CRM_meta_clone+.
-You should not make any assumptions (based on
-+OCF_RESKEY_CRM_meta_clone+) about which copies are active. In
+The resource agent must not make any assumptions (based on
++OCF_RESKEY_CRM_meta_clone+) about which numerical instances are active. In
particular, the list of active copies will not always be an unbroken
sequence, nor always start at 0.
==== Clone Notifications ====
Supporting notifications requires the +notify+ action to be
-implemented. Once supported, the notify action will be passed a
+implemented. If supported, the notify action will be passed a
number of extra variables which, when combined with additional
context, can be used to calculate the current state of the cluster and
what is about to happen to it.
.Environment variables supplied with Clone notify actions
[width="95%",cols="5,3<",options="header",align="center"]
|=========================================================
|Variable
|Description
|OCF_RESKEY_CRM_meta_notify_type
|Allowed values: +pre+, +post+
indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,type]
indexterm:[type,Notification Environment Variable]
|OCF_RESKEY_CRM_meta_notify_operation
|Allowed values: +start+, +stop+
indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,operation]
indexterm:[operation,Notification Environment Variable]
|OCF_RESKEY_CRM_meta_notify_start_resource
|Resources to be started
indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,start_resource]
indexterm:[start_resource,Notification Environment Variable]
|OCF_RESKEY_CRM_meta_notify_stop_resource
|Resources to be stopped
indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,stop_resource]
indexterm:[stop_resource,Notification Environment Variable]
|OCF_RESKEY_CRM_meta_notify_active_resource
|Resources that are running
indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,active_resource]
indexterm:[active_resource,Notification Environment Variable]
|OCF_RESKEY_CRM_meta_notify_inactive_resource
|Resources that are not running
indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,inactive_resource]
indexterm:[inactive_resource,Notification Environment Variable]
|OCF_RESKEY_CRM_meta_notify_start_uname
|Nodes on which resources will be started
indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,start_uname]
indexterm:[start_uname,Notification Environment Variable]
|OCF_RESKEY_CRM_meta_notify_stop_uname
|Nodes on which resources will be stopped
indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,stop_uname]
indexterm:[stop_uname,Notification Environment Variable]
|OCF_RESKEY_CRM_meta_notify_active_uname
|Nodes on which resources are running
indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,active_uname]
indexterm:[active_uname,Notification Environment Variable]
|OCF_RESKEY_CRM_meta_notify_inactive_uname
|Nodes on which resources are not running
indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,inactive_uname]
indexterm:[inactive_uname,Notification Environment Variable]
|=========================================================
The variables come in pairs, such as
+OCF_RESKEY_CRM_meta_notify_start_resource+ and
+OCF_RESKEY_CRM_meta_notify_start_uname+ and should be treated as an
array of whitespace-separated elements.
Thus in order to indicate that +clone:0+ will be started on +sles-1+,
+clone:2+ will be started on +sles-3+, and +clone:3+ will be started
on +sles-2+, the cluster would set
.Notification variables
======
[source,Bash]
-------
OCF_RESKEY_CRM_meta_notify_start_resource="clone:0 clone:2 clone:3"
OCF_RESKEY_CRM_meta_notify_start_uname="sles-1 sles-3 sles-2"
-------
======
==== Proper Interpretation of Notification Environment Variables ====
.Pre-notification (stop):
* Active resources: +$OCF_RESKEY_CRM_meta_notify_active_resource+
* Inactive resources: +$OCF_RESKEY_CRM_meta_notify_inactive_resource+
* Resources to be started: +$OCF_RESKEY_CRM_meta_notify_start_resource+
* Resources to be stopped: +$OCF_RESKEY_CRM_meta_notify_stop_resource+
.Post-notification (stop) / Pre-notification (start):
* Active resources
** +$OCF_RESKEY_CRM_meta_notify_active_resource+
** minus +$OCF_RESKEY_CRM_meta_notify_stop_resource+
* Inactive resources
** +$OCF_RESKEY_CRM_meta_notify_inactive_resource+
** plus +$OCF_RESKEY_CRM_meta_notify_stop_resource+
* Resources that were started: +$OCF_RESKEY_CRM_meta_notify_start_resource+
* Resources that were stopped: +$OCF_RESKEY_CRM_meta_notify_stop_resource+
.Post-notification (start):
* Active resources:
** +$OCF_RESKEY_CRM_meta_notify_active_resource+
** minus +$OCF_RESKEY_CRM_meta_notify_stop_resource+
** plus +$OCF_RESKEY_CRM_meta_notify_start_resource+
* Inactive resources:
** +$OCF_RESKEY_CRM_meta_notify_inactive_resource+
** plus +$OCF_RESKEY_CRM_meta_notify_stop_resource+
** minus +$OCF_RESKEY_CRM_meta_notify_start_resource+
* Resources that were started: +$OCF_RESKEY_CRM_meta_notify_start_resource+
* Resources that were stopped: +$OCF_RESKEY_CRM_meta_notify_stop_resource+
[[s-resource-multistate]]
== Multi-state - Resources That Have Multiple Modes ==
indexterm:[Multi-state Resources]
indexterm:[Resources,Multi-state]
-Multi-state resources are a specialization of Clone resources; please
-ensure you understand the section on clones before continuing! They
-allow the instances to be in one of two operating modes; these are
-called +Master+ and +Slave+, but can mean whatever you wish them to
-mean. The only limitation is that when an instance is started, it
-must come up in the +Slave+ state.
+Multi-state resources are a specialization of clone resources; please
+ensure you understand <> before continuing!
+
+Multi-state resources allow the instances to be in one of two operating modes
+(called 'roles'). The roles are called 'master' and 'slave', but can mean
+whatever you wish them to mean. The only limitation is that when an instance is
+started, it must come up in the slave role.
=== Multi-state Properties ===
.Properties of a Multi-State Resource
[width="95%",cols="3m,5<",options="header",align="center"]
|=========================================================
|Field
|Description
|id
|Your name for the multi-state resource
indexterm:[id,Multi-State Property]
indexterm:[Multi-State,Property,id]
|=========================================================
=== Multi-state Options ===
Options inherited from <> resources:
+priority+, +target-role+, +is-managed+
Options inherited from <> resources:
+clone-max+, +clone-node-max+, +notify+, +globally-unique+, +ordered+,
+interleave+
.Multi-state-specific resource configuration options
[width="95%",cols="1m,1,3<",options="header",align="center"]
|=========================================================
|Field
|Default
|Description
|master-max
|1
|How many copies of the resource can be promoted to the +master+ role
indexterm:[master-max,Multi-State Option]
indexterm:[Multi-State,Option,master-max]
|master-node-max
|1
|How many copies of the resource can be promoted to the +master+ role on
a single node
indexterm:[master-node-max,Multi-State Option]
indexterm:[Multi-State,Option,master-node-max]
|=========================================================
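
As a sketch (the resource, its provider, and all ids are illustrative), a
multi-state resource is defined by wrapping a primitive in a +master+ element:

[source,XML]
-------
<master id="database-ms">
  <meta_attributes id="database-ms-options">
    <nvpair id="database-ms-master-max" name="master-max" value="1"/>
    <nvpair id="database-ms-clone-max" name="clone-max" value="2"/>
  </meta_attributes>
  <primitive id="database" class="ocf" provider="myCorp" type="myDB"/>
</master>
-------
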
=== Multi-state Instance Attributes ===
Multi-state resources have no instance attributes; however, any that
-are set here will be inherited by master's children.
+are set here will be inherited by a master's children.
=== Multi-state Contents ===
-Masters must contain exactly one group or one regular resource.
+Masters must contain exactly one primitive or group resource.
[WARNING]
You should never reference the name of a master's child.
If you think you need to do this, you probably need to re-evaluate your design.
=== Monitoring Multi-State Resources ===
The normal type of monitor actions are not sufficient to monitor a
multi-state resource in the +Master+ state. To detect failures of the
+Master+ instance, you need to define an additional monitor action
with +role="Master"+.
[IMPORTANT]
===========
It is crucial that _every_ monitor operation has a different interval!
This is because Pacemaker currently differentiates between operations
only by resource and interval; so if, e.g., a master/slave resource has
the same monitor interval for both roles, Pacemaker would ignore the
role when checking the status - which would cause unexpected return
codes, and therefore unnecessary complications.
===========
.Monitoring both states of a multi-state resource
======
[source,XML]
-------
<master id="myMasterRsc">
  <primitive id="myRsc" class="ocf" provider="myCorp" type="myApp">
    <operations>
      <!-- Note the two different intervals -->
      <op id="myRsc-monitor-slave" name="monitor" interval="60"/>
      <op id="myRsc-monitor-master" name="monitor" interval="61" role="Master"/>
    </operations>
  </primitive>
</master>
-------
======
=== Multi-state Constraints ===
In most cases, multi-state resources will have a single copy on each
active cluster node. If this is not the case, you can indicate which
nodes the cluster should preferentially assign copies to with resource
location constraints. These constraints are written no differently from
those for primitive resources except that the master's +id+ is used.
When considering multi-state resources in constraints, for most
purposes it is sufficient to treat them as clones. The exception is
when the +rsc-role+ and/or +with-rsc-role+ fields (for colocation
constraints) and +first-action+ and/or +then-action+ fields (for
ordering constraints) are used.
.Additional constraint options relevant to multi-state resources
[width="95%",cols="1m,1,3<",options="header",align="center"]
|=========================================================
|Field
|Default
|Description
|rsc-role
|started
|An additional attribute of colocation constraints that specifies the
role that +rsc+ must be in. Allowed values: +started+, +master+,
+slave+.
indexterm:[rsc-role,Ordering Constraints]
indexterm:[Constraints,Ordering,rsc-role]
|with-rsc-role
|started
|An additional attribute of colocation constraints that specifies the
role that +with-rsc+ must be in. Allowed values: +started+,
+master+, +slave+.
indexterm:[with-rsc-role,Ordering Constraints]
indexterm:[Constraints,Ordering,with-rsc-role]
|first-action
|start
|An additional attribute of ordering constraints that specifies the
action that the +first+ resource must complete before executing the
specified action for the +then+ resource. Allowed values: +start+,
+stop+, +promote+, +demote+.
indexterm:[first-action,Ordering Constraints]
indexterm:[Constraints,Ordering,first-action]
|then-action
|value of +first-action+
|An additional attribute of ordering constraints that specifies the
action that the +then+ resource can only execute after the
+first-action+ on the +first+ resource has completed. Allowed
values: +start+, +stop+, +promote+, +demote+.
indexterm:[then-action,Ordering Constraints]
indexterm:[Constraints,Ordering,then-action]
|=========================================================
In the example below, +myApp+ will wait until one of the database
copies has been started and promoted to master before being started
itself. Only if no copies can be promoted will +myApp+ be
prevented from being active. Additionally, the database will wait for
+myApp+ to be stopped before it is demoted.
.Example constraints involving multi-state resources
======
[source,XML]
-------
<!-- ids are illustrative -->
<constraints>
  <rsc_location id="db-prefers-node1" rsc="database" node="node1" score="500"/>
  <rsc_colocation id="myApp-with-db-master" rsc="myApp" with-rsc="database"
    with-rsc-role="Master" score="INFINITY"/>
  <rsc_order id="promote-db-then-start-myApp" first="database" first-action="promote"
    then="myApp" then-action="start"/>
</constraints>
-------
======
Colocation of a regular (or group) resource with a multi-state
resource means that it can run on any machine with an active copy of
the multi-state resource that has the specified role (+master+ or
+slave+). In the example above, the cluster will choose a location based on
where database is running as a +master+, and if there are multiple
+master+ instances it will also factor in +myApp+'s own location
preferences when deciding which location to choose.
Colocation with regular clones and other multi-state resources is also
possible. In such cases, the set of allowed locations for the +rsc+
clone is (after role filtering) limited to nodes on which the
+with-rsc+ multi-state resource is (or will be) in the specified role.
-Allocation is then performed as-per-normal.
+Placement is then performed as normal.
-==== Using Multi-state Resources in Colocation/Ordering Sets ====
+==== Using Multi-state Resources in Colocation Sets ====
.Additional colocation set options relevant to multi-state resources
[width="95%",cols="1m,1,6<",options="header",align="center"]
|=========================================================
|Field
|Default
|Description
|role
|started
|The role that 'all members' of the set must be in. Allowed values: +started+, +master+,
+slave+.
indexterm:[role,Ordering Constraints]
indexterm:[Constraints,Ordering,role]
|=========================================================
In the following example +B+'s master must be located on the same node as +A+'s master.
-Additionally resources +C+ and +D+ must be located on the same node as +B+'s master.
+Additionally resources +C+ and +D+ must be located on the same node as +A+'s
+and +B+'s masters.
.Colocate C and D with A's and B's master instances
======
[source,XML]
-------
<!-- ids are illustrative -->
<constraints>
  <rsc_colocation id="coloc-1" score="INFINITY">
    <resource_set id="coloc-1-set-1" sequential="true" role="Master">
      <resource_ref id="A"/>
      <resource_ref id="B"/>
    </resource_set>
    <resource_set id="coloc-1-set-2" sequential="true">
      <resource_ref id="C"/>
      <resource_ref id="D"/>
    </resource_set>
  </rsc_colocation>
</constraints>
-------
======
.Additional ordered set options relevant to multi-state resources
[width="95%",cols="1m,1,3<",options="header",align="center"]
|=========================================================
|Field
|Default
|Description
|action
|value of +first-action+
|An additional attribute of ordering constraint sets that specifies the
action that applies to 'all members' of the set. Allowed
values: +start+, +stop+, +promote+, +demote+.
indexterm:[action,Ordering Constraints]
indexterm:[Constraints,Ordering,action]
|=========================================================
-In the following example +B+ cannot be promoted until +A+'s has been promoted.
-Additionally resources +C+ and +D+ must wait until +A+ and +B+ have been promoted before they can start.
.Start C and D after first promoting A and B
======
[source,XML]
-------
<!-- ids are illustrative -->
<constraints>
  <rsc_order id="order-1" score="INFINITY">
    <resource_set id="order-1-set-1" sequential="true" action="promote">
      <resource_ref id="A"/>
      <resource_ref id="B"/>
    </resource_set>
    <resource_set id="order-1-set-2" sequential="true" action="start">
      <resource_ref id="C"/>
      <resource_ref id="D"/>
    </resource_set>
  </rsc_order>
</constraints>
-------
======
+In the above example, +B+ cannot be promoted to a master role until +A+ has
+been promoted. Additionally, resources +C+ and +D+ must wait until +A+ and +B+
+have been promoted before they can start.
+
=== Multi-state Stickiness ===
indexterm:[resource-stickiness,Multi-State]
-To achieve a stable allocation pattern, multi-state resources are
-slightly sticky by default. If no value for +resource-stickiness+ is
-provided, the multi-state resource will use a value of 1. Being a
-small value, it causes minimal disturbance to the score calculations
-of other resources but is enough to prevent Pacemaker from needlessly
-moving copies around the cluster.
+As with regular clones, multi-state resources are
+slightly sticky by default. See <> for details.
=== Which Resource Instance is Promoted ===
-During the start operation, most Resource Agent scripts should call
+During the start operation, most resource agents should call
the `crm_master` utility. This tool automatically detects both the
resource and host and should be used to set a preference for being
promoted. Based on this, +master-max+, and +master-node-max+, the
instance(s) with the highest preference will be promoted.
-The other alternative is to create a location constraint that
+An alternative is to create a location constraint that
indicates which nodes are most preferred as masters.
-.Manually specifying which node should be promoted
+.Explicitly preferring node1 to be promoted to master
======
[source,XML]
-------
<rsc_location id="master-location" rsc="myMasterRsc">
  <rule id="master-rule" score="100" role="Master">
    <expression id="master-exp" attribute="#uname" operation="eq" value="node1"/>
  </rule>
</rsc_location>
-------
======
-=== Multi-state Resource Agent Requirements ===
+=== Requirements for Multi-state Resource Agents ===
Since multi-state resources are an extension of cloned resources, all
-the requirements of Clones are also requirements of multi-state
-resources. Additionally, multi-state resources require two extra
-actions: +demote+ and +promote+; these actions are responsible for
+the requirements for resource agents that support clones are also requirements
+for resource agents that support multi-state resources.
+
+Additionally, multi-state resources require two extra
+actions, +demote+ and +promote+, which are responsible for
changing the state of the resource. Like +start+ and +stop+, they
should return +$\{OCF_SUCCESS}+ if they completed successfully or a
relevant error code if they did not.
The states can mean whatever you wish, but when the resource is
started, it must come up in the mode called +slave+. From there the
cluster will decide which instances to promote to +master+.
In addition to the clone requirements for monitor actions, agents must
also _accurately_ report which state they are in. The cluster relies
on the agent to report its status (including role) accurately and does
not indicate to the agent what role it currently believes it to be in.
.Role implications of OCF return codes
[width="95%",cols="1,1<",options="header",align="center"]
|=========================================================
|Monitor Return Code
|Description
|OCF_NOT_RUNNING
|Stopped
indexterm:[Return Code,OCF_NOT_RUNNING]
|OCF_SUCCESS
|Running (Slave)
indexterm:[Return Code,OCF_SUCCESS]
|OCF_RUNNING_MASTER
|Running (Master)
indexterm:[Return Code,OCF_RUNNING_MASTER]
|OCF_FAILED_MASTER
|Failed (Master)
indexterm:[Return Code,OCF_FAILED_MASTER]
|Other
|Failed (Slave)
|=========================================================
=== Multi-state Notifications ===
Like clones, supporting notifications requires the +notify+ action to
-be implemented. Once supported the notify action will be passed a
+be implemented. If supported, the notify action will be passed a
number of extra variables which, when combined with additional
context, can be used to calculate the current state of the cluster and
what is about to happen to it.
-.Environment variables supplied with Master notify actions footnote:[Emphasized variables are specific to +Master+ resources and all behave in the same manner as described for Clone resources.]
+.Environment variables supplied with multi-state notify actions footnote:[Emphasized variables are specific to +Master+ resources, and all behave in the same manner as described for Clone resources.]
[width="95%",cols="5,3<",options="header",align="center"]
|=========================================================
|Variable
|Description
|OCF_RESKEY_CRM_meta_notify_type
|Allowed values: +pre+, +post+
indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,type]
indexterm:[type,Notification Environment Variable]
|OCF_RESKEY_CRM_meta_notify_operation
|Allowed values: +start+, +stop+
indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,operation]
indexterm:[operation,Notification Environment Variable]
|OCF_RESKEY_CRM_meta_notify_active_resource
|Resources that are running
indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,active_resource]
indexterm:[active_resource,Notification Environment Variable]
|OCF_RESKEY_CRM_meta_notify_inactive_resource
|Resources that are not running
indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,inactive_resource]
indexterm:[inactive_resource,Notification Environment Variable]
|_OCF_RESKEY_CRM_meta_notify_master_resource_
|Resources that are running in +Master+ mode
indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,master_resource]
indexterm:[master_resource,Notification Environment Variable]
|_OCF_RESKEY_CRM_meta_notify_slave_resource_
|Resources that are running in +Slave+ mode
indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,slave_resource]
indexterm:[slave_resource,Notification Environment Variable]
|OCF_RESKEY_CRM_meta_notify_start_resource
|Resources to be started
indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,start_resource]
indexterm:[start_resource,Notification Environment Variable]
|OCF_RESKEY_CRM_meta_notify_stop_resource
|Resources to be stopped
indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,stop_resource]
indexterm:[stop_resource,Notification Environment Variable]
|_OCF_RESKEY_CRM_meta_notify_promote_resource_
|Resources to be promoted
indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,promote_resource]
indexterm:[promote_resource,Notification Environment Variable]
|_OCF_RESKEY_CRM_meta_notify_demote_resource_
|Resources to be demoted
indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,demote_resource]
indexterm:[demote_resource,Notification Environment Variable]
|OCF_RESKEY_CRM_meta_notify_start_uname
|Nodes on which resources will be started
indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,start_uname]
indexterm:[start_uname,Notification Environment Variable]
|OCF_RESKEY_CRM_meta_notify_stop_uname
|Nodes on which resources will be stopped
indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,stop_uname]
indexterm:[stop_uname,Notification Environment Variable]
|_OCF_RESKEY_CRM_meta_notify_promote_uname_
|Nodes on which resources will be promoted
indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,promote_uname]
indexterm:[promote_uname,Notification Environment Variable]
|_OCF_RESKEY_CRM_meta_notify_demote_uname_
|Nodes on which resources will be demoted
indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,demote_uname]
indexterm:[demote_uname,Notification Environment Variable]
|OCF_RESKEY_CRM_meta_notify_active_uname
|Nodes on which resources are running
indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,active_uname]
indexterm:[active_uname,Notification Environment Variable]
|OCF_RESKEY_CRM_meta_notify_inactive_uname
|Nodes on which resources are not running
indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,inactive_uname]
indexterm:[inactive_uname,Notification Environment Variable]
|_OCF_RESKEY_CRM_meta_notify_master_uname_
|Nodes on which resources are running in +Master+ mode
indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,master_uname]
indexterm:[master_uname,Notification Environment Variable]
|_OCF_RESKEY_CRM_meta_notify_slave_uname_
|Nodes on which resources are running in +Slave+ mode
indexterm:[Environment Variable,OCF_RESKEY_CRM_meta_notify_,slave_uname]
indexterm:[slave_uname,Notification Environment Variable]
|=========================================================
=== Multi-state - Proper Interpretation of Notification Environment Variables ===
.Pre-notification (demote):
* +Active+ resources: +$OCF_RESKEY_CRM_meta_notify_active_resource+
* +Master+ resources: +$OCF_RESKEY_CRM_meta_notify_master_resource+
* +Slave+ resources: +$OCF_RESKEY_CRM_meta_notify_slave_resource+
* Inactive resources: +$OCF_RESKEY_CRM_meta_notify_inactive_resource+
* Resources to be started: +$OCF_RESKEY_CRM_meta_notify_start_resource+
* Resources to be promoted: +$OCF_RESKEY_CRM_meta_notify_promote_resource+
* Resources to be demoted: +$OCF_RESKEY_CRM_meta_notify_demote_resource+
* Resources to be stopped: +$OCF_RESKEY_CRM_meta_notify_stop_resource+
.Post-notification (demote) / Pre-notification (stop):
* +Active+ resources: +$OCF_RESKEY_CRM_meta_notify_active_resource+
* +Master+ resources:
** +$OCF_RESKEY_CRM_meta_notify_master_resource+
** minus +$OCF_RESKEY_CRM_meta_notify_demote_resource+
* +Slave+ resources: +$OCF_RESKEY_CRM_meta_notify_slave_resource+
* Inactive resources: +$OCF_RESKEY_CRM_meta_notify_inactive_resource+
* Resources to be started: +$OCF_RESKEY_CRM_meta_notify_start_resource+
* Resources to be promoted: +$OCF_RESKEY_CRM_meta_notify_promote_resource+
* Resources to be demoted: +$OCF_RESKEY_CRM_meta_notify_demote_resource+
* Resources to be stopped: +$OCF_RESKEY_CRM_meta_notify_stop_resource+
* Resources that were demoted: +$OCF_RESKEY_CRM_meta_notify_demote_resource+
.Post-notification (stop) / Pre-notification (start):
* +Active+ resources:
** +$OCF_RESKEY_CRM_meta_notify_active_resource+
** minus +$OCF_RESKEY_CRM_meta_notify_stop_resource+
* +Master+ resources:
** +$OCF_RESKEY_CRM_meta_notify_master_resource+
** minus +$OCF_RESKEY_CRM_meta_notify_demote_resource+
* +Slave+ resources:
** +$OCF_RESKEY_CRM_meta_notify_slave_resource+
** minus +$OCF_RESKEY_CRM_meta_notify_stop_resource+
* Inactive resources:
** +$OCF_RESKEY_CRM_meta_notify_inactive_resource+
** plus +$OCF_RESKEY_CRM_meta_notify_stop_resource+
* Resources to be started: +$OCF_RESKEY_CRM_meta_notify_start_resource+
* Resources to be promoted: +$OCF_RESKEY_CRM_meta_notify_promote_resource+
* Resources to be demoted: +$OCF_RESKEY_CRM_meta_notify_demote_resource+
* Resources to be stopped: +$OCF_RESKEY_CRM_meta_notify_stop_resource+
* Resources that were demoted: +$OCF_RESKEY_CRM_meta_notify_demote_resource+
* Resources that were stopped: +$OCF_RESKEY_CRM_meta_notify_stop_resource+
.Post-notification (start) / Pre-notification (promote):
* +Active+ resources:
** +$OCF_RESKEY_CRM_meta_notify_active_resource+
** minus +$OCF_RESKEY_CRM_meta_notify_stop_resource+
** plus +$OCF_RESKEY_CRM_meta_notify_start_resource+
* +Master+ resources:
** +$OCF_RESKEY_CRM_meta_notify_master_resource+
** minus +$OCF_RESKEY_CRM_meta_notify_demote_resource+
* +Slave+ resources:
** +$OCF_RESKEY_CRM_meta_notify_slave_resource+
** minus +$OCF_RESKEY_CRM_meta_notify_stop_resource+
** plus +$OCF_RESKEY_CRM_meta_notify_start_resource+
* Inactive resources:
** +$OCF_RESKEY_CRM_meta_notify_inactive_resource+
** plus +$OCF_RESKEY_CRM_meta_notify_stop_resource+
** minus +$OCF_RESKEY_CRM_meta_notify_start_resource+
* Resources to be started: +$OCF_RESKEY_CRM_meta_notify_start_resource+
* Resources to be promoted: +$OCF_RESKEY_CRM_meta_notify_promote_resource+
* Resources to be demoted: +$OCF_RESKEY_CRM_meta_notify_demote_resource+
* Resources to be stopped: +$OCF_RESKEY_CRM_meta_notify_stop_resource+
* Resources that were started: +$OCF_RESKEY_CRM_meta_notify_start_resource+
* Resources that were demoted: +$OCF_RESKEY_CRM_meta_notify_demote_resource+
* Resources that were stopped: +$OCF_RESKEY_CRM_meta_notify_stop_resource+
.Post-notification (promote):
* +Active+ resources:
** +$OCF_RESKEY_CRM_meta_notify_active_resource+
** minus +$OCF_RESKEY_CRM_meta_notify_stop_resource+
** plus +$OCF_RESKEY_CRM_meta_notify_start_resource+
* +Master+ resources:
** +$OCF_RESKEY_CRM_meta_notify_master_resource+
** minus +$OCF_RESKEY_CRM_meta_notify_demote_resource+
** plus +$OCF_RESKEY_CRM_meta_notify_promote_resource+
* +Slave+ resources:
** +$OCF_RESKEY_CRM_meta_notify_slave_resource+
** minus +$OCF_RESKEY_CRM_meta_notify_stop_resource+
** plus +$OCF_RESKEY_CRM_meta_notify_start_resource+
** minus +$OCF_RESKEY_CRM_meta_notify_promote_resource+
* Inactive resources:
** +$OCF_RESKEY_CRM_meta_notify_inactive_resource+
** plus +$OCF_RESKEY_CRM_meta_notify_stop_resource+
** minus +$OCF_RESKEY_CRM_meta_notify_start_resource+
* Resources to be started: +$OCF_RESKEY_CRM_meta_notify_start_resource+
* Resources to be promoted: +$OCF_RESKEY_CRM_meta_notify_promote_resource+
* Resources to be demoted: +$OCF_RESKEY_CRM_meta_notify_demote_resource+
* Resources to be stopped: +$OCF_RESKEY_CRM_meta_notify_stop_resource+
* Resources that were started: +$OCF_RESKEY_CRM_meta_notify_start_resource+
* Resources that were promoted: +$OCF_RESKEY_CRM_meta_notify_promote_resource+
* Resources that were demoted: +$OCF_RESKEY_CRM_meta_notify_demote_resource+
* Resources that were stopped: +$OCF_RESKEY_CRM_meta_notify_stop_resource+
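As a sketch of how an agent might apply these rules, the fragment below runs
during a pre-notification for +stop+ and computes the +Slave+ instances that
will still be running once the pending stops complete (the slave list minus
the stop list). The +ocf_log+ helper is assumed to be available from the OCF
shell function library.
----
notify() {
    local type="${OCF_RESKEY_CRM_meta_notify_type}"
    local op="${OCF_RESKEY_CRM_meta_notify_operation}"
    if [ "$type" = "pre" ] && [ "$op" = "stop" ]; then
        local remaining=""
        for rsc in ${OCF_RESKEY_CRM_meta_notify_slave_resource}; do
            case " ${OCF_RESKEY_CRM_meta_notify_stop_resource} " in
                *" $rsc "*) ;;                      # about to be stopped
                *) remaining="$remaining $rsc" ;;   # will keep running
            esac
        done
        ocf_log info "Slaves remaining after stop:$remaining"
    fi
    return $OCF_SUCCESS
}
----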
diff --git a/doc/Pacemaker_Explained/en-US/Ch-Basics.txt b/doc/Pacemaker_Explained/en-US/Ch-Basics.txt
index a876ce4d37..27b3175a93 100644
--- a/doc/Pacemaker_Explained/en-US/Ch-Basics.txt
+++ b/doc/Pacemaker_Explained/en-US/Ch-Basics.txt
@@ -1,379 +1,377 @@
= Configuration Basics =
== Configuration Layout ==
The cluster is written using XML notation and divided into two main
sections: configuration and status.
The status section contains the history of each resource on each node
and based on this data, the cluster can construct the complete current
state of the cluster. The authoritative source for the status section
is the local resource manager (lrmd) process on each cluster node and
the cluster will occasionally repopulate the entire section. For this
reason it is never written to disk and administrators are advised
against modifying it in any way.
The configuration section contains the more traditional information
like cluster options, lists of resources and indications of where they
should be placed. The configuration section is the primary focus of
this document.
The configuration section itself is divided into four parts:
* Configuration options (called +crm_config+)
* Nodes
* Resources
* Resource relationships (called +constraints+)
.An empty configuration
======
[source,XML]
-------
-------
======
== The Current State of the Cluster ==
Before one starts to configure a cluster, it is worth explaining how
to view the finished product. For this purpose we have created the
`crm_mon` utility, which will display the
current state of an active cluster. It can show the cluster status by
node or by resource and can be used in either single-shot or
dynamically-updating mode. There are also modes for displaying a list
of the operations performed (grouped by node and resource) as well as
information about failures.
Using this tool, you can examine the state of the cluster for
irregularities and see how it responds when you cause or simulate
failures.
Details on all the available options can be obtained using the
`crm_mon --help` command.
.Sample output from crm_mon
======
-------
============
Last updated: Fri Nov 23 15:26:13 2007
Current DC: sles-3 (2298606a-6a8c-499a-9d25-76242f7006ec)
3 Nodes configured.
5 Resources configured.
============
Node: sles-1 (1186dc9a-324d-425a-966e-d757e693dc86): online
192.168.100.181 (heartbeat::ocf:IPaddr): Started sles-1
192.168.100.182 (heartbeat:IPaddr): Started sles-1
192.168.100.183 (heartbeat::ocf:IPaddr): Started sles-1
rsc_sles-1 (heartbeat::ocf:IPaddr): Started sles-1
child_DoFencing:2 (stonith:external/vmware): Started sles-1
Node: sles-2 (02fb99a8-e30e-482f-b3ad-0fb3ce27d088): standby
Node: sles-3 (2298606a-6a8c-499a-9d25-76242f7006ec): online
rsc_sles-2 (heartbeat::ocf:IPaddr): Started sles-3
rsc_sles-3 (heartbeat::ocf:IPaddr): Started sles-3
child_DoFencing:0 (stonith:external/vmware): Started sles-3
-------
======
.Sample output from crm_mon -n
======
-------
============
Last updated: Fri Nov 23 15:26:13 2007
Current DC: sles-3 (2298606a-6a8c-499a-9d25-76242f7006ec)
3 Nodes configured.
5 Resources configured.
============
Node: sles-1 (1186dc9a-324d-425a-966e-d757e693dc86): online
Node: sles-2 (02fb99a8-e30e-482f-b3ad-0fb3ce27d088): standby
Node: sles-3 (2298606a-6a8c-499a-9d25-76242f7006ec): online
Resource Group: group-1
192.168.100.181 (heartbeat::ocf:IPaddr): Started sles-1
192.168.100.182 (heartbeat:IPaddr): Started sles-1
192.168.100.183 (heartbeat::ocf:IPaddr): Started sles-1
rsc_sles-1 (heartbeat::ocf:IPaddr): Started sles-1
rsc_sles-2 (heartbeat::ocf:IPaddr): Started sles-3
rsc_sles-3 (heartbeat::ocf:IPaddr): Started sles-3
Clone Set: DoFencing
child_DoFencing:0 (stonith:external/vmware): Started sles-3
child_DoFencing:1 (stonith:external/vmware): Stopped
child_DoFencing:2 (stonith:external/vmware): Started sles-1
-------
======
The DC (Designated Controller) node is where all the decisions are
made, and if the current DC fails a new one is elected from the
remaining cluster nodes. The choice of DC is of no significance to an
administrator beyond the fact that its logs will generally be more
interesting.
== How Should the Configuration be Updated? ==
There are three basic rules for updating the cluster configuration:
* Rule 1 - Never edit the +cib.xml+ file manually. Ever. I'm not making this up.
* Rule 2 - Read Rule 1 again.
* Rule 3 - The cluster will notice if you ignored rules 1 & 2 and refuse to use the configuration.
Now that it is clear how NOT to update the configuration, we can begin
to explain how you should.
The most powerful tool for modifying the configuration is the
-+cibadmin+ command which talks to a running cluster. With +cibadmin+,
-the user can query, add, remove, update or replace any part of the
-configuration; all changes take effect immediately, so there is no
-need to perform a reload-like operation.
-
++cibadmin+ command. With +cibadmin+, you can query, add, remove, update
+or replace any part of the configuration. All changes take effect immediately,
+so there is no need to perform a reload-like operation.
The simplest way of using `cibadmin` is to use it to save the current
configuration to a temporary file, edit that file with your favorite
text or XML editor, and then upload the revised configuration.
.Safely using an editor to modify the cluster configuration
======
--------
# cibadmin --query > tmp.xml
# vi tmp.xml
# cibadmin --replace --xml-file tmp.xml
--------
======
Some of the better XML editors can make use of a Relax NG schema to
help make sure any changes you make are valid. The schema describing
the configuration can be found in +pacemaker.rng+, which may be
deployed in a location such as +/usr/share/pacemaker+ or
+/usr/lib/heartbeat+ depending on your operating system and how you
installed the software.
-If you only wanted to modify the resources section, you could instead
-do
+If you want to modify just one section of the configuration, you can
+query and replace just that section to avoid modifying any others.
-.Safely using an editor to modify a subsection of the cluster configuration
+.Safely using an editor to modify only the resources section
======
--------
# cibadmin --query --obj_type resources > tmp.xml
# vi tmp.xml
# cibadmin --replace --obj_type resources --xml-file tmp.xml
--------
======
-to avoid modifying any other part of the configuration.
== Quickly Deleting Part of the Configuration ==
-Identify the object you wish to delete. Eg. run
+Identify the object you wish to delete by XML tag and id. For example,
+you might search the CIB for all STONITH-related configuration:
-.Searching for STONITH related configuration items
+.Searching for STONITH-related configuration items
======
---------
+----
# cibadmin -Q | grep stonith
---------
-[source,XML]
---------
---------
+----
======
-Next identify the resource's tag name and id (in this case we'll
-choose +primitive+ and +child_DoFencing+). Then simply execute:
+If you wanted to delete the +primitive+ tag with id +child_DoFencing+,
+you would run:
----
# cibadmin --delete --crm_xml ''
----
== Updating the Configuration Without Using XML ==
-Some common tasks can also be performed with one of the higher level
-tools that avoid the need to read or edit XML.
+Most tasks can be performed with one of the other command-line
+tools provided with pacemaker, avoiding the need to read or edit XML.
To enable STONITH for example, one could run:
----
# crm_attribute --name stonith-enabled --update 1
----
Or, to check whether *somenode* is allowed to run resources, there is:
----
# crm_standby --get-value --node somenode
----
Or, to find the current location of *my-test-rsc*, one can use:
----
# crm_resource --locate --resource my-test-rsc
----
Examples of using these tools for specific cases will be given throughout this
document where appropriate.
[NOTE]
====
Old versions of pacemaker (1.0.3 and earlier) had different
command-line tool syntax. If you are using an older version,
check your installed manual pages for the proper syntax to use.
====
[[s-config-sandboxes]]
== Making Configuration Changes in a Sandbox ==
Often it is desirable to preview the effects of a series of changes
before updating the configuration atomically. For this purpose we
have created `crm_shadow`, which creates a
"shadow" copy of the configuration and arranges for all the command
line tools to use it.
-To begin, simply invoke `crm_shadow` and give
-it the name of a configuration to create footnote:[Shadow copies are
-identified with a name, making it possible to have more than one.] ;
-be sure to follow the simple on-screen instructions.
+To begin, simply invoke `crm_shadow --create` with
+the name of a configuration to create footnote:[Shadow copies are
+identified with a name, making it possible to have more than one.],
+and follow the simple on-screen instructions.
-WARNING: Read the above carefully, failure to do so could result in you
-destroying the cluster's active configuration!
+[WARNING]
+====
+Read this section and the on-screen instructions carefully; failure to do so could
+result in destroying the cluster's active configuration!
+====
.Creating and displaying the active sandbox
======
----
- # crm_shadow --create test
- Setting up shadow instance
- Type Ctrl-D to exit the crm_shadow shell
- shadow[test]:
- shadow[test] # crm_shadow --which
- test
+# crm_shadow --create test
+Setting up shadow instance
+Type Ctrl-D to exit the crm_shadow shell
+shadow[test]:
+shadow[test] # crm_shadow --which
+test
----
======
From this point on, all cluster commands will automatically use the
shadow copy instead of talking to the cluster's active configuration.
-Once you have finished experimenting, you can either commit the
-changes, or discard them as shown below. Again, be sure to follow the
-on-screen instructions carefully.
+Once you have finished experimenting, you can either make the
+changes active via the `--commit` option, or discard them using the `--delete`
+option. Again, be sure to follow the on-screen instructions carefully!
For a full list of `crm_shadow` options and
commands, invoke it with the `--help` option.
-.Using a sandbox to make multiple changes atomically
+.Using a sandbox to make multiple changes atomically, discard them and verify the real configuration is untouched
======
----
shadow[test] # crm_failcount -G -r rsc_c001n01
name=fail-count-rsc_c001n01 value=0
shadow[test] # crm_standby -v on -N c001n02
shadow[test] # crm_standby -G -N c001n02
name=c001n02 scope=nodes value=on
shadow[test] # cibadmin --erase --force
shadow[test] # cibadmin --query
shadow[test] # crm_shadow --delete test --force
Now type Ctrl-D to exit the crm_shadow shell
shadow[test] # exit
# crm_shadow --which
No active shadow configuration defined
# cibadmin -Q
----
======
-Making changes in a sandbox and verifying the real configuration is untouched
[[s-config-testing-changes]]
== Testing Your Configuration Changes ==
We saw previously how to make a series of changes to a "shadow" copy
of the configuration. Before loading the changes back into the
-cluster (eg. `crm_shadow --commit mytest --force`), it is often
-advisable to simulate the effect of the changes with +crm_simulate+,
-eg.
+cluster (e.g. `crm_shadow --commit mytest --force`), it is often
+advisable to simulate the effect of the changes with +crm_simulate+.
+For example:
----
# crm_simulate --live-check -VVVVV --save-graph tmp.graph --save-dotfile tmp.dot
----
-
-The tool uses the same library as the live cluster to show what it
-would have done given the supplied input. It's output, in addition to
+This tool uses the same library as the live cluster to show what it
+would have done given the supplied input. Its output, in addition to
a significant amount of logging, is stored in two files +tmp.graph+
-and +tmp.dot+, both are representations of the same thing -- the
+and +tmp.dot+. Both files are representations of the same thing: the
cluster's response to your changes.
The graph file stores the complete transition, containing a list
of all the actions, their parameters and their prerequisites.
Because the transition graph is not terribly easy to read, the tool
also generates a Graphviz dot-file representing the same information.
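If you have the Graphviz tools installed, the dot-file can be rendered into
an image for easier viewing, for example:
----
# dot -Tpng tmp.dot -o tmp.png
----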
== Interpreting the Graphviz output ==
* Arrows indicate ordering dependencies
- * Dashed-arrows indicate dependencies that are not present in the transition graph
+ * Dashed arrows indicate dependencies that are not present in the transition graph
* Actions with a dashed border of any color do not form part of the transition graph
* Actions with a green border form part of the transition graph
* Actions with a red border are ones the cluster would like to execute but cannot run
* Actions with a blue border are ones the cluster does not feel need to be executed
* Actions with orange text are pseudo/pretend actions that the cluster uses to simplify the graph
* Actions with black text are sent to the LRM
* Resource actions have text of the form pass:[rsc]_pass:[action]_pass:[interval] pass:[node]
* Any action depending on an action with a red border will not be able to execute.
* Loops are _really_ bad. Please report them to the development team.
=== Small Cluster Transition ===
image::images/Policy-Engine-small.png["An example transition graph as represented by Graphviz",width="16cm",height="6cm",align="center"]
In the above example, it appears that a new node, *pcmk-2*, has come
online and that the cluster is checking to make sure *rsc1*, *rsc2*
and *rsc3* are not already running there (indicated by the
*rscN_monitor_0* entries). Once it did that, and assuming the resources
were not active there, it would have liked to stop *rsc1* and *rsc2*
on *pcmk-1* and move them to *pcmk-2*. However, there appears to be
some problem and the cluster cannot or is not permitted to perform the
stop actions, which implies it also cannot perform the start actions.
For some reason the cluster does not want to start *rsc3* anywhere.
For information on the options supported by `crm_simulate`, use
the `--help` option.
=== Complex Cluster Transition ===
image::images/Policy-Engine-big.png["Another, slightly more complex, transition graph that you're not expected to be able to read",width="16cm",height="20cm",align="center"]
== Do I Need to Update the Configuration on All Cluster Nodes? ==
No. Any changes are immediately synchronized to the other active
members of the cluster.
To reduce bandwidth, the cluster only broadcasts the incremental
updates that result from your changes and uses MD5 checksums to ensure
that each copy is completely consistent.
diff --git a/doc/Pacemaker_Explained/en-US/Ch-Constraints.txt b/doc/Pacemaker_Explained/en-US/Ch-Constraints.txt
index 837613df7a..2682befd4d 100644
--- a/doc/Pacemaker_Explained/en-US/Ch-Constraints.txt
+++ b/doc/Pacemaker_Explained/en-US/Ch-Constraints.txt
@@ -1,662 +1,663 @@
= Resource Constraints =
indexterm:[Resource,Constraints]
== Scores ==
Scores of all kinds are integral to how the cluster works.
Practically everything from moving a resource to deciding which
resource to stop in a degraded cluster is achieved by manipulating
scores in some way.
Scores are calculated on a per-resource basis, and any node with a
negative score for a resource can't run that resource. After
calculating the scores for a resource, the cluster then chooses the
node with the highest one.
=== Infinity Math ===
-+INFINITY+ is currently defined as 1,000,000 and addition/subtraction
-with it follows these three basic rules:
+Pacemaker implements +INFINITY+ internally as a score of 1,000,000.
+Addition/subtraction with it follows these three basic rules:
* Any value + +INFINITY+ = +INFINITY+
* Any value - +INFINITY+ = +-INFINITY+
* +INFINITY+ - +INFINITY+ = +-INFINITY+
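For example, if a resource has a location preference of 200 on a node but
also a +-INFINITY+ colocation score there, the combined score is
200 - +INFINITY+ = +-INFINITY+, so the resource can never run on that node.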
== Deciding Which Nodes a Resource Can Run On ==
indexterm:[Location Constraints]
indexterm:[Resource,Constraints,Location]
There are two alternative strategies for specifying which nodes a
resources can run on. One way is to say that by default they can run
anywhere and then create location constraints for nodes that are not
-allowed. The other option is to have nodes "opt-in"... to start with
+allowed. The other option is to have nodes "opt-in" -- to start with
nothing able to run anywhere and selectively enable allowed nodes.
=== Location Properties ===
.Properties for Simple Location Constraints
[width="95%",cols="2m,1,5
-------
======
=== Symmetrical "Opt-Out" Clusters ===
indexterm:[Symmetrical Opt-Out Clusters]
indexterm:[Cluster Type,Symmetrical Opt-Out]
To create an opt-out cluster, start by allowing resources to run
anywhere by default:
----
# crm_attribute --name symmetric-cluster --update true
----
Then start disabling nodes. The following fragment is the equivalent
of the above opt-in configuration.
.Opt-out location constraints for two resources
======
[source,XML]
-------
-------
======
Whether you should choose opt-in or opt-out depends both on your
personal preference and the make-up of your cluster. If most of your
resources can run on most of the nodes, then an opt-out arrangement is
likely to result in a simpler configuration. On the other hand, if
most resources can only run on a small subset of nodes, an opt-in
configuration might be simpler.
[[node-score-equal]]
=== What if Two Nodes Have the Same Score ===
If two nodes have the same score, then the cluster will choose one.
This choice may seem random and may not be what was intended; however,
the cluster was not given enough information to know any better.
-.Example of two resources that prefer two nodes equally
+.Constraints where a resource prefers two nodes equally
======
[source,XML]
-------
-------
======
In the example above, assuming no other constraints and an inactive
cluster, +Webserver+ would probably be placed on +sles-1+ and +Database+ on
+sles-2+. It would likely have placed +Webserver+ based on the node's
uname and +Database+ based on the desire to spread the resource load
evenly across the cluster. However, other factors can also be involved
in more complex configurations.
[[s-resource-ordering]]
-== Specifying in which Order Resources Should Start/Stop ==
+== Specifying the Order in which Resources Should Start/Stop ==
indexterm:[Resource,Constraints,Ordering]
indexterm:[Resource,Start Order]
indexterm:[Ordering Constraints]
The way to specify the order in which resources should start is by
creating +rsc_order+ constraints.
=== Ordering Properties ===
.Properties of an Ordering Constraint
[width="95%",cols="1m,1,4
-------
======
Because the above example lets +symmetrical+ default to TRUE,
+Webserver+ must be stopped before +Database+ can be stopped,
and +Webserver+ should be stopped before +IP+
if they both need to be stopped.
[[s-resource-colocation]]
== Placing Resources Relative to other Resources ==
indexterm:[Resource,Constraints,Colocation]
indexterm:[Resource,Location Relative to other Resources]
When the location of one resource depends on the location of another
one, we call this colocation.
There is an important side-effect of creating a colocation constraint
between two resources: it affects the order in which resources are
assigned to a node. If you think about it, it's somewhat obvious.
You can't place A relative to B unless you know where B is.
footnote:[
While the human brain is sophisticated enough to read the constraint
in any order and choose the correct one depending on the situation,
the cluster is not quite so smart. Yet.
]
So when you are creating colocation constraints, it is important to
consider whether you should colocate A with B, or B with A.
Another thing to keep in mind is that, assuming A is colocated with
B, the cluster will take into account A's preferences when
deciding which node to choose for B.
For a detailed look at exactly how this occurs, see
http://clusterlabs.org/doc/Colocation_Explained.pdf[Colocation Explained].
=== Colocation Properties ===
.Properties of a Colocation Constraint
[width="95%",cols="2m,5<",options="header",align="center"]
|=========================================================
|Field
|Description
|id
|A unique name for the constraint.
indexterm:[id,Colocation Constraints]
indexterm:[Constraints,Colocation,id]
|rsc
|The name of a resource that should be located relative to +with-rsc+.
indexterm:[rsc,Colocation Constraints]
indexterm:[Constraints,Colocation,rsc]
|with-rsc
|The name of the resource used as the colocation target. The cluster will
decide where to put this resource first and then decide where to put +rsc+.
indexterm:[with-rsc,Colocation Constraints]
indexterm:[Constraints,Colocation,with-rsc]
|score
|Positive values indicate the resources should run on the same
node. Negative values indicate the resources should run on
different nodes. Values of \+/- +INFINITY+ change "should" to "must".
indexterm:[score,Colocation Constraints]
indexterm:[Constraints,Colocation,score]
|=========================================================
=== Mandatory Placement ===
-Mandatory placement occurs any time the constraint's score is
+Mandatory placement occurs when the constraint's score is
++INFINITY+ or +-INFINITY+. In such cases, if the constraint can't be
satisfied, then the +rsc+ resource is not permitted to run. For
+score=INFINITY+, this includes cases where the +with-rsc+ resource is
not active.
If you need +resource1+ to always run on the same machine as
+resource2+, you would add the following constraint:
-.An example colocation constraint
+.Mandatory colocation constraint for two resources
+====
[source,XML]
Remember, because +INFINITY+ was used, if +resource2+ can't run on any
of the cluster nodes (for whatever reason) then +resource1+ will not
be allowed to run.
Alternatively, you may want the opposite -- that +resource1+ cannot
run on the same machine as +resource2+. In this case, use
+score="-INFINITY"+:
.An example anti-colocation constraint
[source,XML]
Again, by specifying +-INFINITY+, the constraint is binding. So if the
only place left to run is where +resource2+ already is, then
+resource1+ may not run anywhere.
=== Advisory Placement ===
If mandatory placement is about "must" and "must not", then advisory
placement is the "I'd prefer if" alternative. For constraints with
scores greater than +-INFINITY+ and less than +INFINITY+, the cluster
-will try and accommodate your wishes but may ignore them if the
+will try to accommodate your wishes but may ignore them if the
alternative is to stop some of the cluster resources.
-
-Like in life, where if enough people prefer something it effectively
+As in life, where if enough people prefer something it effectively
becomes mandatory, advisory colocation constraints can combine with
other elements of the configuration to behave as if they were
mandatory.
-.An example advisory-only colocation constraint
+.Advisory colocation constraint for two resources
+====
[source,XML]
[[s-resource-sets-ordering]]
== Ordering Sets of Resources ==
A common situation is for an administrator to create a chain of
ordered resources, such as:
.A chain of ordered resources
======
[source,XML]
-------
-------
======
.Visual representation of the four resources' start order for the above constraints
image::images/resource-set.png["Ordered set",width="16cm",height="2.5cm",align="center"]
=== Ordered Set ===
To simplify this situation, there is an alternate format for ordering
constraints:
.A chain of ordered resources expressed as a set
======
[source,XML]
-------
-------
======
[WARNING]
=========
Always pay attention to how your tools expose this functionality.
In some tools +create set A B+ is *NOT* equivalent to +create A then B+.
=========
While the set-based format is not less verbose, it is significantly
easier to get right and maintain. It can also be expanded to allow
ordered sets of (un)ordered resources. In the example below, +rscA+
and +rscB+ can both start in parallel, as can +rscC+ and +rscD+,
however +rscC+ and +rscD+ can only start once _both_ +rscA+ _and_
+rscB+ are active.
.Ordered sets of unordered resources
======
[source,XML]
-------
-------
======
.Visual representation of the start order for two ordered sets of unordered resources
image::images/two-sets.png["Two ordered sets",width="13cm",height="7.5cm",align="center"]
Of course either set -- or both sets -- of resources can also be
internally ordered (by setting +sequential="true"+) and there is no
limit to the number of sets that can be specified.
.Advanced use of set ordering - Three ordered sets, two of which are internally unordered
======
[source,XML]
-------
-------
======
.Visual representation of the start order for the three sets defined above
image::images/three-sets.png["Three ordered sets",width="16cm",height="7.5cm",align="center"]
=== Resource Set OR Logic ===
The unordered set logic discussed so far has all been "AND" logic.
To illustrate this, take the three-set figure in the previous section.
Those sets can be expressed as +(A and B) then \(C) then (D) then (E and F)+.
Say, for example, we want to change the first set, +(A and B)+, to use "OR" logic
so the sets look like this: +(A or B) then \(C) then (D) then (E and F)+.
This functionality can be achieved through the use of the +require-all+
-option. By default this option is 'require-all=true' which is why the
-"AND" logic is used by default. Changing +require-all=false+ means only one
+option. This option defaults to TRUE which is why the
+"AND" logic is used by default. Setting +require-all=false+ means only one
resource in the set needs to be started before continuing on to the next set.
Note that the +require-all=false+ option only makes sense in conjunction
with unordered sets (+sequential=false+). Think of it like this: +sequential=false+
modifies the set to be an unordered set that uses "AND" logic by default, and adding
+require-all=false+ flips the unordered set's "AND" logic to "OR" logic.
.Resource Set "OR" logic: Three ordered sets, where the first set is internally unordered with "OR" logic
======
[source,XML]
-------
-------
======
[[s-resource-sets-colocation]]
== Colocating Sets of Resources ==
Another common situation is for an administrator to create a set of
colocated resources.
One way to do this would be to define a resource group (see
<>), but that cannot always accurately express the desired
state.
Another way would be to define each relationship as an individual constraint,
but that causes a constraint explosion as the number of resources and
combinations grow. An example of this approach:
.Chain of colocated resources
======
[source,XML]
-------
-------
======
To make things easier, we allow an alternate form of colocation
constraints using +resource_set+. As with the chained version, a
resource that can't be active prevents any resource that must be
colocated with it from being active. For example, if +C+ is not
able to run, then both +B+ and by inference +A+ must also remain
stopped. Here is an example +resource_set+:
.Equivalent colocation chain expressed using +resource_set+
======
[source,XML]
-------
-------
======
[WARNING]
=========
Always pay attention to how your tools expose this functionality.
In some tools +create set A B+ is 'not' equivalent to +create A with B+.
=========
.A group resource with the equivalent colocation rules
[source,XML]
-------
-------
This notation can also be used in this context to tell the cluster
that a set of resources must all be located with a common peer, but
have no dependencies on each other. In this scenario, unlike the
previous, +B+ 'would' be allowed to remain active even if +A+ or +C+ (or
both) were inactive.
.Using colocation sets to specify a common peer
======
[source,XML]
-------
-------
======
Of course there is no limit to the number and size of the sets used.
The only thing that matters is that in order for any member of set N
to be active, all the members of set N+1 must also be active (and
naturally on the same node); and if a set has +sequential="true"+,
then in order for member M to be active, member M+1 must also be
active. You can even specify the role in which the members of a set
must be, using the set's +role+ attribute.
.A colocation chain where the members of the middle set have no inter-dependencies and the last has master status.
======
[source,XML]
-------
-------
======
.Visual representation of a colocation chain where the members of the middle set have no inter-dependencies
image::images/three-sets-complex.png["Colocation chain",width="16cm",height="9cm",align="center"]
diff --git a/doc/Pacemaker_Explained/en-US/Ch-Multi-site-Clusters.txt b/doc/Pacemaker_Explained/en-US/Ch-Multi-site-Clusters.txt
index 5794422685..bc81d96813 100644
--- a/doc/Pacemaker_Explained/en-US/Ch-Multi-site-Clusters.txt
+++ b/doc/Pacemaker_Explained/en-US/Ch-Multi-site-Clusters.txt
@@ -1,350 +1,376 @@
= Multi-Site Clusters and Tickets =
[[Multisite]]
== Abstract ==
Apart from local clusters, Pacemaker also supports multi-site clusters.
-That means you can have multiple, geographically dispersed sites with a
-local cluster each. Failover between these clusters can be coordinated
-by a higher level entity, the so-called `CTR (Cluster Ticket Registry)`.
-
+That means you can have multiple, geographically dispersed sites, each with a
+local cluster. Failover between these clusters can be coordinated
+manually by the administrator, or automatically by a higher-level entity called
+a 'Cluster Ticket Registry (CTR)'.
== Challenges for Multi-Site Clusters ==
Typically, multi-site environments are too far apart to support
-synchronous communication between the sites and synchronous data
-replication. That leads to the following challenges:
+synchronous communication and data replication between the sites.
+That leads to significant challenges:
-- How to make sure that a cluster site is up and running?
+- How do we make sure that a cluster site is up and running?
-- How to make sure that resources are only started once?
+- How do we make sure that resources are only started once?
-- How to make sure that quorum can be reached between the different
-sites and a split brain scenario can be avoided?
+- How do we make sure that quorum can be reached between the different
+sites and a split-brain scenario avoided?
-- How to manage failover between the sites?
+- How do we manage failover between sites?
-- How to deal with high latency in case of resources that need to be
+- How do we deal with high latency in case of resources that need to be
stopped?
In the following sections, learn how to meet these challenges.
-
== Conceptual Overview ==
Multi-site clusters can be considered as “overlay” clusters where
each cluster site corresponds to a cluster node in a traditional cluster.
The overlay cluster can be managed by a `CTR (Cluster Ticket Registry)`
mechanism. It guarantees that the cluster resources will be highly
available across different cluster sites. This is achieved by using
-so-called `tickets` that are treated as failover domain between cluster
+'tickets' that are treated as failover domains between cluster
sites, in case a site should be down.
-The following list explains the individual components and mechanisms
+The following sections explain the individual components and mechanisms
that were introduced for multi-site clusters in more detail.
=== Components and Concepts ===
==== Ticket ====
"Tickets" are, essentially, cluster-wide attributes. A ticket grants the
right to run certain resources on a specific cluster site. Resources can
-be bound to a certain ticket by `rsc_ticket` dependencies. Only if the
-ticket is available at a site, the respective resources are started.
+be bound to a certain ticket by +rsc_ticket+ constraints. Only if the
+ticket is available at a site can the respective resources be started there.
Vice versa, if the ticket is revoked, the resources depending on that
ticket need to be stopped.
The ticket thus is similar to a 'site quorum'; i.e., the permission to
manage/own resources associated with that site.
(One can also think of the current `have-quorum` flag as a special, cluster-wide
ticket that is granted in case of node majority.)
-These tickets can be granted/revoked either manually by administrators
-(which could be the default for the classic enterprise clusters), or via
-an automated `CTR` mechanism described further below.
+Tickets can be granted and revoked either manually by administrators
+(which could be the default for classic enterprise clusters), or via
+the automated CTR mechanism described below.
A ticket can only be owned by one site at a time. Initially, none
of the sites has a ticket. Each ticket must be granted once by the cluster
administrator.
The presence or absence of tickets for a site is stored in the CIB as a
cluster status. With regards to a certain ticket, there are only two states
for a site: +true+ (the site has the ticket) or +false+ (the site does
not have the ticket). The absence of a certain ticket (during the initial
state of the multi-site cluster) is the same as the value +false+.
==== Dead Man Dependency ====
A site can only activate resources safely if it can be sure that the
other site has deactivated them. However after a ticket is revoked, it can
take a long time until all resources depending on that ticket are stopped
"cleanly", especially in case of cascaded resources. To cut that process
short, the concept of a 'Dead Man Dependency' was introduced.
- If the ticket is revoked from a site, the nodes that are hosting
dependent resources are fenced. This considerably speeds up the recovery
process of the cluster and makes sure that resources can be migrated more
quickly.
This can be configured by specifying a +loss-policy="fence"+ in
+rsc_ticket+ constraints.
==== CTR (Cluster Ticket Registry) ====
This is for those scenarios where ticket management is supposed to
be automatic (instead of the administrator revoking the ticket somewhere,
waiting for everything to stop, and then granting it on the desired site).
A `CTR` is a network daemon that handles granting,
revoking, and timing out "tickets".
-A ticket would only be granted to a site once they can be sure that it
-has been relinquished by the previous owner, which would need to be
-implemented via a timer in most scenarios. If a site loses connection
-to its peers, its tickets time out and recovery occurs. After the
-connection timeout plus the recovery timeout has passed, the other sites
-are allowed to re-acquire the ticket and start the resources again.
+Participating clusters run the CTR daemons, which connect to each other, exchange
+information about their connectivity, and vote on which site gets which
+tickets.
+
+A ticket is granted to a site only once the CTR is sure that the ticket
+has been relinquished by the previous owner, implemented via a timer in most
+scenarios. If a site loses connection to its peers, its tickets time out and
+recovery occurs. After the connection timeout plus the recovery timeout has
+passed, the other sites are allowed to re-acquire the ticket and start the
+resources again.
This can also be thought of as a "quorum server", except that it manages
not a single quorum ticket, but several.
==== Configuration Replication ====
As usual, the CIB is synchronized within each cluster, but it is not synchronized
across cluster sites of a multi-site cluster. You have to configure the resources
that will be highly available across the multi-site cluster for every site
accordingly.
+[[s-ticket-constraints]]
== Configuring Ticket Dependencies ==
The `rsc_ticket` constraint lets you specify the resources depending on a certain
ticket. Together with the constraint, you can set a `loss-policy` that defines
what should happen to the respective resources if the ticket is revoked.
The attribute `loss-policy` can have the following values:
* +fence:+ Fence the nodes that are running the relevant resources.
* +stop:+ Stop the relevant resources.
* +freeze:+ Do nothing to the relevant resources.
* +demote:+ Demote relevant resources that are running in master mode to slave mode.
-An example to configure a `rsc_ticket` constraint:
-
+.Constraint that fences node if +ticketA+ is revoked
+====
[source,XML]
-------
-------
+====
The example above creates a constraint with the ID +rsc1-req-ticketA+. It
defines that the resource +rsc1+ depends on +ticketA+ and that the node running
the resource should be fenced if +ticketA+ is revoked.
If resource +rsc1+ were a multi-state resource (i.e. it could run in master or
slave mode), you might want only its master mode to depend on +ticketA+.
With the following configuration, +rsc1+ will be
demoted to slave mode if +ticketA+ is revoked:
[source,XML]
-------
-------
You can create more `rsc_ticket` constraints to let multiple resources
depend on the same ticket.
`rsc_ticket` also supports resource sets. So one can easily list all the
resources in one `rsc_ticket` constraint. For example:
[source,XML]
-------
-------
In the example, there are two resource sets, so that the resources with
different `roles` can be listed in a single `rsc_ticket` constraint. There is
no dependency between the two resource sets, nor among the resources within a
set: each of the resources simply depends on +ticketA+.
Referencing resource templates in +rsc_ticket+ constraints, and even
referencing them within resource sets, is also supported.
If you want other resources to depend on further tickets, create as many
constraints as necessary with +rsc_ticket+.
== Managing Multi-Site Clusters ==
=== Granting and Revoking Tickets Manually ===
You can grant tickets to sites or revoke them from sites manually.
-Though if you want to re-distribute a ticket, you should wait for
-the dependent resources to cleanly stop at the previous site before you
-grant the ticket to another desired site.
+If you want to re-distribute a ticket, you should wait for
+the dependent resources to stop cleanly at the previous site before you
+grant the ticket to the new site.
Use the `crm_ticket` command line tool to grant and revoke tickets.
To grant a ticket to this site:
-------
# crm_ticket --ticket ticketA --grant
-------
To revoke a ticket from this site:
-------
# crm_ticket --ticket ticketA --revoke
-------
[IMPORTANT]
====
-If you are managing tickets manually. Use the `crm_ticket` command with
-great care as they cannot help verify if the same ticket is already
+If you are managing tickets manually, use the `crm_ticket` command with
+great care, because it cannot check whether the same ticket is already
granted elsewhere.
-
====
=== Granting and Revoking Tickets via a Cluster Ticket Registry ===
-==== Booth ====
-Booth is an implementation of `Cluster Ticket Registry` or so-called
-`Cluster Ticket Manager`.
+We will use https://github.com/ClusterLabs/booth[Booth] here as an example of
+software that can be used with pacemaker as a Cluster Ticket Registry. Booth
+implements the
+http://en.wikipedia.org/wiki/Paxos_%28computer_science%29['Paxos'] lease
+algorithm to guarantee the distributed consensus among different
+cluster sites, and manages the ticket distribution (and thus the failover
+process between sites).
+
+Each of the participating clusters and 'arbitrators' runs the Booth daemon
+`boothd`.
+
+An 'arbitrator' is the multi-site equivalent of a quorum-only node in a local
+cluster. If you have a setup with an even number of sites,
+you need an additional instance to reach consensus about decisions such
+as failover of resources across sites. In this case, add one or more
+arbitrators running at additional sites. Arbitrators are single machines
+that run a booth instance in a special mode. An arbitrator is especially
+important for a two-site scenario, otherwise there is no way for one site
+to distinguish between a network failure between it and the other site, and
+a failure of the other site.
+
+The most common multi-site scenario is probably a multi-site cluster with two
+sites and a single arbitrator on a third site. However, technically, there are
+no limitations with regards to the number of sites and the number of
+arbitrators involved.
+
+Nodes belonging to the same cluster site should be synchronized via NTP. However,
+time synchronization is not required between the individual cluster sites.
-Booth is the instance managing the ticket distribution and thus,
-the failover process between the sites of a multi-site cluster. Each of
-the participating clusters and arbitrators runs a service, the boothd.
-It connects to the booth daemons running at the other sites and
+`Boothd` at each site connects to its peers running at the other sites and
exchanges connectivity details. Once a ticket is granted to a site, the
booth mechanism will manage the ticket automatically: If the site which
holds the ticket is out of service, the booth daemons will vote which
of the other sites will get the ticket. To protect against brief
connection failures, sites that lose the vote (either explicitly or
implicitly by being disconnected from the voting body) need to
relinquish the ticket after a time-out. This ensures that a
ticket will only be re-distributed after it has been relinquished by the
previous site. The resources that depend on that ticket will fail over
to the new site holding the ticket. The nodes that have run the
resources before will be treated according to the `loss-policy` you set
within the `rsc_ticket` constraint.
Before booth can manage a certain ticket within the multi-site cluster,
you initially need to grant it to a site manually via the `booth client` command.
After that, the booth mechanism will take over and manage the ticket
automatically.
[IMPORTANT]
====
The `booth client` command line tool can be used to grant, list, or
revoke tickets. The `booth client` commands work on any machine where
the booth daemon is running.
If you are managing tickets via `Booth`, only use `booth client` for manual
intervention instead of `crm_ticket`. That can make sure the same ticket
will only be owned by one cluster site at a time.
====
===== Requirements =====
- All clusters that will be part of the multi-site cluster must be based on Pacemaker.
- Booth must be installed on all cluster nodes and on all arbitrators that will
be part of the multi-site cluster.
=== General Management of Tickets ===
Display the information of tickets:
-------
# crm_ticket --info
-------
Or you can monitor them with:
-------
# crm_mon --tickets
-------
Display the +rsc_ticket+ constraints that apply to a ticket:
-------
# crm_ticket --ticket ticketA --constraints
-------
When you want to do maintenance or manual switch-over of a ticket,
revoking the ticket would trigger the loss policies. If
+loss-policy="fence"+, the dependent resources could not be gracefully
stopped/demoted, and other unrelated resources could even be affected.
The proper way is to make the ticket 'standby' first with:
-------
# crm_ticket --ticket ticketA --standby
-------
Then the dependent resources will be stopped or demoted gracefully without
triggering the loss policies.
If you have finished the maintenance and want to activate the ticket again,
you can run:
-------
# crm_ticket --ticket ticketA --activate
-------
== For more information ==
* http://doc.opensuse.org/products/draft/SLE-HA/SLE-ha-guide_sd_draft/cha.ha.geo.html[SUSE's Multi-site Clusters guide]
* https://github.com/ClusterLabs/booth[Booth]
diff --git a/doc/Pacemaker_Explained/en-US/Ch-Nodes.txt b/doc/Pacemaker_Explained/en-US/Ch-Nodes.txt
index 24cf1a24ff..856a50271f 100644
--- a/doc/Pacemaker_Explained/en-US/Ch-Nodes.txt
+++ b/doc/Pacemaker_Explained/en-US/Ch-Nodes.txt
@@ -1,224 +1,225 @@
= Cluster Nodes =
== Defining a Cluster Node ==
Each node in the cluster will have an entry in the nodes section
containing its UUID, uname, and type.
.Example Heartbeat cluster node entry
======
[source,XML]
======
.Example Corosync cluster node entry
======
[source,XML]
======
In normal circumstances, the admin should let the cluster populate
this information automatically from the communications and membership
data. However for Heartbeat, one can use the `crm_uuid` tool
to read an existing UUID or define a value before the cluster starts.
[[s-node-name]]
== Where Pacemaker Gets the Node Name ==
Traditionally, Pacemaker required nodes to be referred to by the value
returned by `uname -n`. This can be problematic for services that
require the `uname -n` to be a specific value (e.g. for a licence
file).
-Since version 2.0.0 of Pacemaker, this requirement has been relaxed
-for clusters using Corosync 2.0 or later. The name Pacemaker uses is:
+This requirement has been relaxed for clusters using Corosync 2.0 or later.
+The name Pacemaker uses is:
. The value stored in +corosync.conf+ under *ring0_addr* in the *nodelist*, if it does not contain an IP address; otherwise
. The value stored in +corosync.conf+ under *name* in the *nodelist*; otherwise
. The value of `uname -n`
Pacemaker provides the `crm_node -n` command which displays the name
used by a running cluster.
If a Corosync *nodelist* is used, `crm_node --name-for-id` pass:[number] is also
available to display the name used by the node with the corosync
*nodeid* of pass:[number], for example: `crm_node --name-for-id 2`.
[[s-node-attributes]]
== Describing a Cluster Node ==
indexterm:[Node,attribute]
Beyond the basic definition of a node the administrator can also
describe the node's attributes, such as how much RAM, disk, what OS or
kernel version it has, perhaps even its physical location. This
information can then be used by the cluster when deciding where to
place resources. For more information on the use of node attributes,
see <>.
Node attributes can be specified ahead of time or populated later,
when the cluster is running, using `crm_attribute`.
Below is what the node's definition would look like if the admin ran the command:
-.The result of using crm_attribute to specify which kernel pcmk-1 is running
+.Result of using crm_attribute to specify which kernel pcmk-1 is running
======
-------
# crm_attribute --type nodes --node pcmk-1 --name kernel --update $(uname -r)
-------
[source,XML]
-------
-------
======
-A simpler way to determine the current value of an attribute is to use `crm_attribute` command again:
+Rather than having to read the XML, a simpler way to determine the current
+value of an attribute is to use `crm_attribute` again:
----
# crm_attribute --type nodes --node pcmk-1 --name kernel --query
scope=nodes name=kernel value=3.10.0-123.13.2.el7.x86_64
----
By specifying `--type nodes`, the admin tells the cluster that this
attribute is persistent. There are also transient attributes, kept in the
status section, which are "forgotten" whenever the node
rejoins the cluster. The cluster uses this area to store a record of
how many times a resource has failed on that node, but administrators
can also read and write to this section by specifying `--type status`.
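For example, the following reads and writes a hypothetical transient
attribute on *pcmk-1* (attribute name and output are illustrative only):
----
# crm_attribute --type status --node pcmk-1 --name my-transient-attr --update 1
# crm_attribute --type status --node pcmk-1 --name my-transient-attr --query
scope=status name=my-transient-attr value=1
----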
== Corosync ==
=== Adding a New Corosync Node ===
indexterm:[Corosync,Add Cluster Node]
indexterm:[Add Cluster Node,Corosync]
Adding a new node is as simple as installing Corosync and Pacemaker,
and copying '/etc/corosync/corosync.conf' and '/etc/corosync/authkey' (if
it exists) from an existing node. You may need to modify the
+mcastaddr+ option to match the new node's IP address.
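For example, assuming an existing cluster node named *pcmk-1* (a sketch; adjust
the hostname to your environment):
----
# scp pcmk-1:/etc/corosync/corosync.conf /etc/corosync/
# scp pcmk-1:/etc/corosync/authkey /etc/corosync/
----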
If a log message containing "Invalid digest" appears from Corosync,
the keys are not consistent between the machines.
=== Removing a Corosync Node ===
indexterm:[Corosync,Remove Cluster Node]
indexterm:[Remove Cluster Node,Corosync]
Because the messaging and membership layers are the authoritative
source for cluster nodes, deleting them from the CIB is not a reliable
solution. First, one must arrange for Corosync to forget about the
node (_pcmk-1_ in the example below).
On the host to be removed:
. Stop the cluster: `/etc/init.d/corosync stop`
Next, from one of the remaining active cluster nodes:
. Tell Pacemaker to forget about the removed host:
+
----
# crm_node -R pcmk-1
----
+
This includes deleting the node from the CIB.
[NOTE]
======
-This proceedure only works for versions after 1.1.8
+This procedure only works for pacemaker 1.1.8 and later.
======
=== Replacing a Corosync Node ===
indexterm:[Corosync,Replace Cluster Node]
indexterm:[Replace Cluster Node,Corosync]
The five-step guide to replacing an existing cluster node:
. Make sure the old node is completely stopped
. Give the new machine the same hostname and IP address as the old one
. Install the cluster software :-)
. Copy '/etc/corosync/corosync.conf' and '/etc/corosync/authkey' (if it exists) to the new node
. Start the new cluster node
If a log message containing "Invalid digest" appears from Corosync,
the keys are not consistent between the machines.
== CMAN ==
=== Adding a New CMAN Node ===
indexterm:[CMAN,Add Cluster Node]
indexterm:[Add Cluster Node,CMAN]
=== Removing a CMAN Node ===
indexterm:[CMAN,Remove Cluster Node]
indexterm:[Remove Cluster Node,CMAN]
== Heartbeat ==
=== Adding a New Heartbeat Node ===
indexterm:[Heartbeat,Add Cluster Node]
indexterm:[Add Cluster Node,Heartbeat]
Provided you specified +autojoin any+ in 'ha.cf', adding a new node is
as simple as installing heartbeat and copying 'ha.cf' and 'authkeys'
from an existing node.
If you don't want to use +autojoin+, then after setting up 'ha.cf' and
'authkeys', you must use `hb_addnode` before starting the new node.
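For example, to register a hypothetical new node named *pcmk-3* from an
existing cluster node:
----
# hb_addnode pcmk-3
----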
=== Removing a Heartbeat Node ===
indexterm:[Heartbeat,Remove Cluster Node]
indexterm:[Remove Cluster Node,Heartbeat]
Because the messaging and membership layers are the authoritative
source for cluster nodes, deleting them from the CIB is not a reliable
solution.
First one must arrange for Heartbeat to forget about the node (pcmk-1
in the example below).
On the host to be removed:
. Stop the cluster: `/etc/init.d/heartbeat stop`
Next, from one of the remaining active cluster nodes:
. Tell Heartbeat the node should be removed
----
# hb_delnode pcmk-1
----
. Tell Pacemaker to forget about the removed host:
----
# crm_node -R pcmk-1
----
[NOTE]
======
This procedure only works for pacemaker 1.1.8 and later.
======
=== Replacing a Heartbeat Node ===
indexterm:[Heartbeat,Replace Cluster Node]
indexterm:[Replace Cluster Node,Heartbeat]
The seven-step guide to replacing an existing cluster node:
. Make sure the old node is completely stopped
. Give the new machine the same hostname as the old one
. Go to an active cluster node and look up the UUID for the old node in '/var/lib/heartbeat/hostcache'
. Install the cluster software
. Copy 'ha.cf' and 'authkeys' to the new node
. On the new node, populate its UUID using `crm_uuid -w` and the UUID from step 3
. Start the new cluster node
diff --git a/doc/Pacemaker_Explained/en-US/Ch-Notifications.txt b/doc/Pacemaker_Explained/en-US/Ch-Notifications.txt
index 7705c12c11..134ab0c7b9 100644
--- a/doc/Pacemaker_Explained/en-US/Ch-Notifications.txt
+++ b/doc/Pacemaker_Explained/en-US/Ch-Notifications.txt
@@ -1,138 +1,144 @@
= Receiving Notification for Cluster Events =
////
We prefer [[ch-notifications]], but older versions of asciidoc don't deal well
with that construct for chapter headings
////
anchor:ch-notifications[Chapter 7, Receiving Notification for Cluster Events]
indexterm:[Resource,Notification]
A Pacemaker cluster is an event-driven system. In this context, an 'event'
might be a resource failure or a configuration change, among others.
The *ocf:pacemaker:ClusterMon* resource can monitor the cluster status and
trigger alerts on each cluster event. This resource runs `crm_mon` in the
background at regular (configurable) intervals and uses `crm_mon` capabilities
to trigger emails (SMTP), SNMP traps or external programs (via the
+extra_options+ parameter).
[NOTE]
=====
Depending on your system settings and compilation settings, SNMP or email
alerts might be unavailable. Check the output of `crm_mon --help` to see whether these
options are available to you. In any case, executing an external agent will
always be available, and you can use this agent to send emails, SNMP traps,
or perform whatever action you develop.
=====
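For example, a quick (illustrative) way to check whether SMTP and SNMP support
were compiled in:
----
# crm_mon --help 2>&1 | grep -i -e mail -e snmp
----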
[[s-notification-snmp]]
== Configuring SNMP Notifications ==
indexterm:[Resource,Notification,SNMP]
-Requires an IP to send SNMP traps to, and a SNMP community.
-Pacemaker MIB is found in _/usr/share/snmp/mibs/PCMK-MIB.txt_
+Requires an IP to send SNMP traps to, and an SNMP community string.
+The Pacemaker MIB is provided with the source, and is typically
+installed in +/usr/share/snmp/mibs/PCMK-MIB.txt+.
+
+This example uses +snmphost.example.com+ as the SNMP IP and
++public+ as the community string:
.Configuring ClusterMon to send SNMP traps
=====
[source,XML]
-----
-----
=====
[[s-notification-email]]
== Configuring Email Notifications ==
indexterm:[Resource,Notification,SMTP,Email]
-Requires a user to send mail alerts to. "Mail-From", SMTP relay and Subject prefix can also be configured.
+Requires the recipient e-mail address. You can also optionally configure
+the sender e-mail address, the hostname of the SMTP relay, and a prefix string
+for the subject line.
.Configuring ClusterMon to send email alerts
=====
[source,XML]
-----
-----
=====
[[s-notification-external]]
== Configuring Notifications via External-Agent ==
Requires a program (external-agent) to run when resource operations take
place, and an external-recipient (IP address, email address, URI). When
triggered, the external-agent is fed with dynamically filled environment
variables describing precisely the cluster event that occurred. By making
smart use of these variables in your external-agent code, you can trigger
any action.
.Configuring ClusterMon to execute an external-agent
=====
[source,XML]
-----
-----
=====
.Environment Variables Passed to the External Agent
[width="95%",cols="1m,2<",options="header",align="center"]
|=========================================================
|Environment Variable
|Description
|CRM_notify_recipient
| The static external-recipient from the resource definition.
indexterm:[Environment Variable,CRM_notify_recipient]
|CRM_notify_node
| The node on which the status change happened.
indexterm:[Environment Variable,CRM_notify_node]
|CRM_notify_rsc
| The name of the resource whose status changed.
indexterm:[Environment Variable,CRM_notify_rsc]
|CRM_notify_task
| The operation that caused the status change.
indexterm:[Environment Variable,CRM_notify_task]
|CRM_notify_desc
| The textual output relevant to the error code of the operation (if any) that caused the status change.
indexterm:[Environment Variable,CRM_notify_desc]
|CRM_notify_rc
| The return code of the operation.
indexterm:[Environment Variable,CRM_notify_rc]
|CRM_notify_target_rc
| The expected return code of the operation.
indexterm:[Environment Variable,CRM_notify_target_rc]
|CRM_notify_status
| The numerical representation of the status of the operation.
indexterm:[Environment Variable,CRM_notify_status]
|=========================================================
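As a minimal sketch of an external agent (the log file path is arbitrary and
illustrative), the following script simply appends one line per cluster event:
----
#!/bin/sh
# Minimal example external agent: log each cluster event.
# All CRM_notify_* variables are populated by ClusterMon/crm_mon.
LOGFILE=/var/log/cluster-events.log   # hypothetical path
echo "$(date) ${CRM_notify_node} ${CRM_notify_rsc} ${CRM_notify_task}: \
rc=${CRM_notify_rc} expected=${CRM_notify_target_rc} ${CRM_notify_desc}" >> "$LOGFILE"
----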
diff --git a/doc/Pacemaker_Explained/en-US/Ch-Options.txt b/doc/Pacemaker_Explained/en-US/Ch-Options.txt
index cd603aff6d..fa34d62be5 100644
--- a/doc/Pacemaker_Explained/en-US/Ch-Options.txt
+++ b/doc/Pacemaker_Explained/en-US/Ch-Options.txt
@@ -1,398 +1,398 @@
-= Cluster Options =
+= Cluster-Wide Configuration =
== CIB Properties ==
Certain settings are defined by CIB properties (that is, attributes of the
+cib+ tag) rather than with the rest of the cluster configuration in the
+configuration+ section.
The reason is simply a matter of parsing. These options are used by the
configuration database which is, by design, mostly ignorant of the content it
holds. So the decision was made to place them in an easy-to-find location.
.CIB Properties
[width="95%",cols="2m,5<",options="header",align="center"]
|=========================================================
|Field |Description
| admin_epoch |
indexterm:[Configuration Version,Cluster]
indexterm:[Cluster,Option,Configuration Version]
indexterm:[admin_epoch,Cluster Option]
indexterm:[Cluster,Option,admin_epoch]
When a node joins the cluster, the cluster performs a check to see
which node has the best configuration. It asks the node with the highest
(+admin_epoch+, +epoch+, +num_updates+) tuple to replace the configuration on
all the nodes -- which makes setting them, and setting them correctly, very
important. +admin_epoch+ is never modified by the cluster; you can use this
to make the configurations on any inactive nodes obsolete. _Never set this
value to zero_. In such cases, the cluster cannot tell the difference between
your configuration and the "empty" one used when nothing is found on disk.
| epoch |
indexterm:[epoch,Cluster Option]
indexterm:[Cluster,Option,epoch]
The cluster increments this every time the configuration is updated (usually by
the administrator).
| num_updates |
indexterm:[num_updates,Cluster Option]
indexterm:[Cluster,Option,num_updates]
The cluster increments this every time the configuration or status is updated
(usually by the cluster) and resets it to 0 when epoch changes.
| validate-with |
indexterm:[validate-with,Cluster Option]
indexterm:[Cluster,Option,validate-with]
Determines the type of XML validation that will be done on the configuration.
If set to +none+, the cluster will not verify that updates conform to the
DTD (nor reject ones that don't). This option can be useful when
operating a mixed-version cluster during an upgrade.
|cib-last-written |
indexterm:[cib-last-written,Cluster Property]
indexterm:[Cluster,Property,cib-last-written]
Indicates when the configuration was last written to disk. Maintained by the
cluster; for informational purposes only.
|have-quorum |
indexterm:[have-quorum,Cluster Property]
indexterm:[Cluster,Property,have-quorum]
Indicates if the cluster has quorum. If false, this may mean that the
cluster cannot start resources or fence other nodes (see
+no-quorum-policy+ below). Maintained by the cluster.
|dc-uuid |
indexterm:[dc-uuid,Cluster Property]
indexterm:[Cluster,Property,dc-uuid]
Indicates which cluster node is the current leader. Used by the
cluster when placing resources and determining the order of some
events. Maintained by the cluster.
|=========================================================
=== Working with CIB Properties ===
Although these fields can be written to by the user, in
most cases the cluster will overwrite any values specified by the
-admin with the "correct" ones. To change the +admin_epoch+, for
-example, one would use:
+user with the "correct" ones.
+To change the ones that can be specified by the user,
+for example +admin_epoch+, one should use:
----
# cibadmin --modify --crm_xml ''
----
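For instance, to bump +admin_epoch+ to an arbitrary value (a sketch; the value
42 is purely illustrative):
----
# cibadmin --modify --crm_xml '<cib admin_epoch="42"/>'
----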
A complete set of CIB properties will look something like this:
.Attributes set for a cib object
======
[source,XML]
-------
-------
======
== Cluster Options ==
Cluster options, as you might expect, control how the cluster behaves
when confronted with certain situations.
They are grouped into sets within the +crm_config+ section, and, in advanced
configurations, there may be more than one set. (This will be described later
in the section on <> where we will show how to have the cluster use
different sets of options during working hours than during weekends.) For now,
we will describe the simple case where each option is present at most once.
You can obtain an up-to-date list of cluster options, including
their default values, by running the `man pengine` and `man crmd` commands.
.Cluster Options
[width="95%",cols="5m,2,11>).
| enable-startup-probes | TRUE |
indexterm:[enable-startup-probes,Cluster Option]
indexterm:[Cluster,Option,enable-startup-probes]
Should the cluster check for active resources during startup?
| maintenance-mode | FALSE |
indexterm:[maintenance-mode,Cluster Option]
indexterm:[Cluster,Option,maintenance-mode]
Should the cluster refrain from monitoring, starting and stopping resources?
| stonith-enabled | TRUE |
indexterm:[stonith-enabled,Cluster Option]
indexterm:[Cluster,Option,stonith-enabled]
Should failed nodes and nodes with resources that can't be stopped be
shot? If you value your data, set up a STONITH device and enable this.
If true, or unset, the cluster will refuse to start resources unless
one or more STONITH resources have been configured.
| stonith-action | reboot |
indexterm:[stonith-action,Cluster Option]
indexterm:[Cluster,Option,stonith-action]
Action to send to STONITH device. Allowed values are +reboot+ and +off+.
The value +poweroff+ is also allowed, but is only used for
legacy devices.
| stonith-timeout | 60s |
indexterm:[stonith-timeout,Cluster Option]
indexterm:[Cluster,Option,stonith-timeout]
How long to wait for STONITH actions to complete
| cluster-delay | 60s |
indexterm:[cluster-delay,Cluster Option]
indexterm:[Cluster,Option,cluster-delay]
Estimated maximum round-trip delay over the network (excluding action
execution). If the TE requires an action to be executed on another node,
it will consider the action failed if it does not get a response
from the other node in this time (after considering the action's
own timeout). The "correct" value will depend on the speed and load of your
network and cluster nodes.
| dc-deadtime | 20s |
indexterm:[dc-deadtime,Cluster Option]
indexterm:[Cluster,Option,dc-deadtime]
How long to wait for a response from other nodes during startup.
The "correct" value will depend on the speed/load of your network and the type of switches used.
| cluster-recheck-interval | 15min |
indexterm:[cluster-recheck-interval,Cluster Option]
indexterm:[Cluster,Option,cluster-recheck-interval]
Polling interval for time-based changes to options, resource parameters and constraints.
The Cluster is primarily event-driven, but your configuration can have
elements that take effect based on the time of day. To ensure these changes
take effect, we can optionally poll the cluster's status for changes. A value
of 0 disables polling. Positive values are an interval (in seconds unless other
SI units are specified, e.g. 5min).
| pe-error-series-max | -1 |
indexterm:[pe-error-series-max,Cluster Option]
indexterm:[Cluster,Option,pe-error-series-max]
The number of PE inputs resulting in ERRORs to save. Used when reporting problems.
A value of -1 means unlimited (report all).
| pe-warn-series-max | -1 |
indexterm:[pe-warn-series-max,Cluster Option]
indexterm:[Cluster,Option,pe-warn-series-max]
The number of PE inputs resulting in WARNINGs to save. Used when reporting problems.
A value of -1 means unlimited (report all).
| pe-input-series-max | -1 |
indexterm:[pe-input-series-max,Cluster Option]
indexterm:[Cluster,Option,pe-input-series-max]
The number of "normal" PE inputs to save. Used when reporting problems.
A value of -1 means unlimited (report all).
| remove-after-stop | FALSE |
indexterm:[remove-after-stop,Cluster Option]
indexterm:[Cluster,Option,remove-after-stop]
_Advanced Use Only:_ Should the cluster remove resources from the LRM after
they are stopped? Values other than the default are, at best, poorly tested and
potentially dangerous.
| startup-fencing | TRUE |
indexterm:[startup-fencing,Cluster Option]
indexterm:[Cluster,Option,startup-fencing]
_Advanced Use Only:_ Should the cluster shoot unseen nodes?
Not using the default is very unsafe!
| election-timeout | 2min |
indexterm:[election-timeout,Cluster Option]
indexterm:[Cluster,Option,election-timeout]
_Advanced Use Only:_ If you need to adjust this value, it probably indicates
the presence of a bug.
| shutdown-escalation | 20min |
indexterm:[shutdown-escalation,Cluster Option]
indexterm:[Cluster,Option,shutdown-escalation]
_Advanced Use Only:_ If you need to adjust this value, it probably indicates
the presence of a bug.
| crmd-integration-timeout | 3min |
indexterm:[crmd-integration-timeout,Cluster Option]
indexterm:[Cluster,Option,crmd-integration-timeout]
_Advanced Use Only:_ If you need to adjust this value, it probably indicates
the presence of a bug.
| crmd-finalization-timeout | 30min |
indexterm:[crmd-finalization-timeout,Cluster Option]
indexterm:[Cluster,Option,crmd-finalization-timeout]
_Advanced Use Only:_ If you need to adjust this value, it probably indicates
the presence of a bug.
| crmd-transition-delay | 0s |
indexterm:[crmd-transition-delay,Cluster Option]
indexterm:[Cluster,Option,crmd-transition-delay]
_Advanced Use Only:_ Delay cluster recovery for the configured interval to
allow for additional/related events to occur. Useful if your configuration is
sensitive to the order in which ping updates arrive.
Enabling this option will slow down cluster recovery under
all conditions.
|default-resource-stickiness | 0 |
indexterm:[default-resource-stickiness,Cluster Option]
indexterm:[Cluster,Option,default-resource-stickiness]
_Deprecated:_ See <> instead
| is-managed-default | TRUE |
indexterm:[is-managed-default,Cluster Option]
indexterm:[Cluster,Option,is-managed-default]
_Deprecated:_ See <> instead
| default-action-timeout | 20s |
indexterm:[default-action-timeout,Cluster Option]
indexterm:[Cluster,Option,default-action-timeout]
_Deprecated:_ See <> instead
|=========================================================
== Querying and Setting Cluster Options ==
indexterm:[Querying,Cluster Option]
indexterm:[Setting,Cluster Option]
indexterm:[Cluster,Querying Options]
indexterm:[Cluster,Setting Options]
-Cluster options can be queried and modified using the
-`crm_attribute` tool. To get the current
-value of +cluster-delay+, simply use:
+Cluster options can be queried and modified using the `crm_attribute` tool. To
+get the current value of +cluster-delay+, you can run:
----
# crm_attribute --query --name cluster-delay
----
which is more simply written as
----
# crm_attribute -G -n cluster-delay
----
If a value is found, you'll see a result like this:
----
# crm_attribute -G -n cluster-delay
scope=crm_config name=cluster-delay value=60s
----
-However, if no value is found, the tool will display an error:
+If no value is found, the tool will display an error:
----
# crm_attribute -G -n clusta-deway
scope=crm_config name=clusta-deway value=(null)
Error performing operation: No such device or address
----
-To use a different value, eg. +30+, simply run:
+To use a different value (for example, 30 seconds), simply run:
----
# crm_attribute --name cluster-delay --update 30s
----
-To go back to the cluster's default value you can delete the value, for example with this command:
+To go back to the cluster's default value, you can delete the value, for example:
----
# crm_attribute --name cluster-delay --delete
Deleted crm_config option: id=cib-bootstrap-options-cluster-delay name=cluster-delay
----
== When Options are Listed More Than Once ==
If you ever see something like the following, it means that the option you're modifying is present more than once.
.Deleting an option that is listed twice
=======
------
# crm_attribute --name batch-limit --delete
Multiple attributes match name=batch-limit in crm_config:
Value: 50 (set=cib-bootstrap-options, id=cib-bootstrap-options-batch-limit)
Value: 100 (set=custom, id=custom-batch-limit)
Please choose from one of the matches above and supply the 'id' with --id
------
=======
In such cases, follow the on-screen instructions to perform the
requested action. To determine which value is currently being used by
the cluster, refer to <>.
diff --git a/doc/Pacemaker_Explained/en-US/Ch-Resource-Templates.txt b/doc/Pacemaker_Explained/en-US/Ch-Resource-Templates.txt
index fb82f9d269..8c880ac1b9 100644
--- a/doc/Pacemaker_Explained/en-US/Ch-Resource-Templates.txt
+++ b/doc/Pacemaker_Explained/en-US/Ch-Resource-Templates.txt
@@ -1,217 +1,230 @@
= Resource Templates =
== Abstract ==
If you want to create lots of resources with similar configurations, defining a
resource template simplifies the task. Once defined, it can be referenced in
primitives or in certain types of constraints.
== Configuring Resources with Templates ==
The primitives referencing the template will inherit all meta-attributes,
instance attributes, utilization attributes and operations defined
in the template. You can also define specific attributes and operations for any
of the primitives. If any of these are defined in both the template and the
primitive, the values defined in the primitive will take precedence over the
ones defined in the template.
Hence, resource templates help to reduce the amount of configuration work.
If any changes are needed, they can be done to the template definition and
will take effect globally in all resource definitions referencing that
template.
-Resource templates have a similar syntax like primitives. For example:
+Resource templates have a syntax similar to that of primitives.
+.Resource template for a migratable Xen virtual machine
+====
[source,XML]
----
----
+====
-Once you defined the new resource template, you can use it in primitives:
+Once you define a resource template, you can use it in primitives by specifying the
++template+ property.
+.Xen primitive resource using a resource template
+====
[source,XML]
----
----
+====
-The new primitive `vm1` is going to inherit everything from the `vm-template`. For
-example, the equivalent of the above two would be:
+In the example above, the new primitive +vm1+ will inherit everything from +vm-template+. For
+example, the equivalent of the above two examples would be:
+.Equivalent Xen primitive resource not using a resource template
+====
[source,XML]
----
----
+====
If you want to overwrite some attributes or operations, add them to the
particular primitive's definition.
-For instance, the following new primitive `vm2` has special
-attribute values. Its `monitor` operation has a longer `timeout` and `interval`, and
-the primitive has an additional `stop` operation.
-
+.Xen resource overriding template values
+====
[source,XML]
----
----
+====
-The following command shows the resulting definition of a resource:
+In the example above, the new primitive +vm2+ has special
+attribute values. Its +monitor+ operation has a longer +timeout+ and +interval+, and
+the primitive has an additional +stop+ operation.
+
+To see the resulting definition of a resource, run:
----
# crm_resource --query-xml --resource vm2
----
-The following command shows its raw definition in cib:
+To see the raw definition of a resource in the CIB, run:
----
# crm_resource --query-xml-raw --resource vm2
----
== Referencing Templates in Constraints ==
A resource template can be referenced in the following types of constraints:
-- `order` constraints
-- `colocation` constraints,
-- `rsc_ticket` constraints (for multi-site clusters).
+- +order+ constraints (see <>)
+- +colocation+ constraints (see <>)
+- +rsc_ticket+ constraints (for multi-site clusters as described in <>)
Resource templates referenced in constraints stand for all primitives that are
derived from that template. This means the constraint applies to all primitive
resources referencing the resource template. Referencing resource templates in
constraints is an alternative to resource sets and can simplify the cluster
configuration considerably.
-For example:
+For example, given the example templates earlier in this chapter:
[source,XML]
-is the equivalent of the following constraint configuration:
+would colocate all VMs with +base-rsc+ and is the equivalent of the following constraint configuration:
[source,XML]
----
----
[NOTE]
======
In a colocation constraint, only one template may be referenced from either
-`rsc` or `with-rsc`, and the other reference must be a regular resource.
+`rsc` or `with-rsc`; the other reference must be a regular resource.
======
Resource templates can also be referenced in resource sets.
For example:
[source,XML]
----
----
is the equivalent of the following constraint configuration:
[source,XML]
----
----
If the resources referencing the template can run in parallel:
[source,XML]
----
----
is the equivalent of the following constraint configuration:
[source,XML]
----
----
diff --git a/doc/Pacemaker_Explained/en-US/Ch-Resources.txt b/doc/Pacemaker_Explained/en-US/Ch-Resources.txt
index 3ea32e9548..e2c2a51d92 100644
--- a/doc/Pacemaker_Explained/en-US/Ch-Resources.txt
+++ b/doc/Pacemaker_Explained/en-US/Ch-Resources.txt
@@ -1,741 +1,744 @@
= Cluster Resources =
-== What is a Cluster Resource ==
+== What is a Cluster Resource? ==
indexterm:[Resource]
-The role of a resource agent is to abstract the service it provides
-and present a consistent view to the cluster, which allows the cluster
-to be agnostic about the resources it manages.
+A resource is a service made highly available by a cluster.
+The simplest type of resource, a 'primitive' resource, is described
+in this chapter. More complex forms, such as groups and clones,
+are described in later chapters.
+Every primitive resource has a 'resource agent'. A resource agent is an
+external program that abstracts the service it provides and presents a
+consistent view to the cluster.
+
+This allows the cluster to be agnostic about the resources it manages.
The cluster doesn't need to understand how the resource works because
it relies on the resource agent to do the right thing when given a
-+start+, +stop+ or +monitor+ command.
-
-For this reason it is crucial that resource agents are well tested.
+`start`, `stop` or `monitor` command. For this reason, it is crucial that
+resource agents are well-tested.
-Typically resource agents come in the form of shell scripts, however
+Typically, resource agents come in the form of shell scripts. However,
they can be written using any technology (such as C, Python or Perl)
that the author is comfortable with.
[[s-resource-supported]]
-== Supported Resource Classes ==
+== Resource Classes ==
indexterm:[Resource,class]
-There are six classes of agents supported by Pacemaker:
+Pacemaker supports six classes of agents:
* OCF
-* LSB
-* Upstart
-* Systemd
-* Fencing
* Service
-* Nagios
+** LSB
+** Upstart
+** Systemd
+* Fencing
+* Nagios Plugins
=== Open Cluster Framework ===
indexterm:[Resource,OCF]
indexterm:[OCF,Resources]
indexterm:[Open Cluster Framework,Resources]
The OCF standard
-footnote:[
-http://www.opencf.org/cgi-bin/viewcvs.cgi/specs/ra/resource-agent-api.txt?rev=HEAD - at least as it relates to resource agents.
-] footnote:[
-The Pacemaker implementation has been somewhat extended from the OCF
-Specs, but none of those changes are incompatible with the original
-OCF specification.
-]
+footnote:[See
+http://www.opencf.org/cgi-bin/viewcvs.cgi/specs/ra/resource-agent-api.txt?rev=HEAD
+ -- at least as it relates to resource agents. The Pacemaker implementation has
+been somewhat extended from the OCF specs, but none of those changes are
+incompatible with the original OCF specification.]
is basically an extension of the Linux Standard Base conventions for
init scripts to:
* support parameters,
-* make them self describing and
-* extensible
+* make them self-describing, and
+* make them extensible
OCF specs have strict definitions of the exit codes that actions must return.
footnote:[
-Included with the cluster is the ocf-tester script, which can be
-useful in this regard.
+The resource-agents source code includes the `ocf-tester` script, which
+can be useful in this regard.
]
The cluster follows these specifications exactly, and giving the wrong
exit code will cause the cluster to behave in ways you will likely
find puzzling and annoying. In particular, the cluster needs to
distinguish a completely stopped resource from one which is in some
erroneous and indeterminate state.
-Parameters are passed to the script as environment variables, with the
+Parameters are passed to the resource agent as environment variables, with the
special prefix +OCF_RESKEY_+. So, a parameter which the user thinks
-of as ip it will be passed to the script as +OCF_RESKEY_ip+. The
-number and purpose of the parameters is completely arbitrary, however
-your script should advertise any that it supports using the
-+meta-data+ command.
+of as +ip+ will be passed to the resource agent as +OCF_RESKEY_ip+. The
+number and purpose of the parameters is left to the resource agent; however,
+the resource agent should use the `meta-data` command to advertise any that it
+supports.
-
-The OCF class is the most preferred one as it is an industry standard,
+The OCF class is the most preferred as it is an industry standard,
highly flexible (allowing parameters to be passed to agents in a
non-positional manner) and self-describing.
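For example, you can exercise an agent by hand, passing parameters as
environment variables (a sketch using the Dummy agent's +state+ parameter and
the standard OCF root; paths may differ on your system):
----
# export OCF_ROOT=/usr/lib/ocf
# OCF_RESKEY_state=/tmp/Dummy.state $OCF_ROOT/resource.d/pacemaker/Dummy monitor
----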
For more information, see the
http://www.linux-ha.org/wiki/OCF_Resource_Agents[reference] and
<>.
=== Linux Standard Base ===
indexterm:[Resource,LSB]
indexterm:[LSB,Resources]
indexterm:[Linux Standard Base,Resources]
LSB resource agents are those found in +/etc/init.d+.
Generally, they are provided by the OS distribution and, in order to be used
with the cluster, they must conform to the LSB Spec.
footnote:[
See
http://refspecs.linux-foundation.org/LSB_3.0.0/LSB-Core-generic/LSB-Core-generic/iniscrptact.html
-for the LSB Spec (as it relates to init scripts).
+for the LSB Spec as it relates to init scripts.
]
Many distributions claim LSB compliance but ship with broken init
-scripts. For details on how to check if your init script is
-LSB-compatible, see <>. The most common problems are:
+scripts. For details on how to check whether your init script is
+LSB-compatible, see <>. Common problematic violations of
+the LSB standard include:
* Not implementing the status operation at all
* Not observing the correct exit status codes for start/stop/status actions
* Starting a started resource returns an error (this violates the LSB spec)
* Stopping a stopped resource returns an error (this violates the LSB spec)
=== Systemd ===
indexterm:[Resource,Systemd]
indexterm:[Systemd,Resources]
Some newer distributions have replaced the old
-http://en.wikipedia.org/wiki/Init#SysV-style[SYS-V] style of
-initialization daemons (and scripts) with an alternative called
+http://en.wikipedia.org/wiki/Init#SysV-style["SysV"] style of
+initialization daemons and scripts with an alternative called
http://www.freedesktop.org/wiki/Software/systemd[Systemd].
Pacemaker is able to manage these services _if they are present_.
-Instead of +init scripts+, systemd has +unit files+. Generally the
-services (or unit files) are provided by the OS/distribution but there
-are some instructions for converting from init scripts at:
-http://0pointer.de/blog/projects/systemd-for-admins-3.html
+Instead of init scripts, systemd has 'unit files'. Generally, the
+services (unit files) are provided by the OS distribution, but there
+are online guides for converting from init scripts.
+footnote:[For example,
+http://0pointer.de/blog/projects/systemd-for-admins-3.html]
[NOTE]
======
Remember to make sure the computer is +not+ configured to start any
services at boot time that should be controlled by the cluster.
======
=== Upstart ===
indexterm:[Resource,Upstart]
indexterm:[Upstart,Resources]
Some newer distributions have replaced the old
-http://en.wikipedia.org/wiki/Init#SysV-style[SYS-V] style of
+http://en.wikipedia.org/wiki/Init#SysV-style["SysV"] style of
initialization daemons (and scripts) with an alternative called
http://upstart.ubuntu.com/[Upstart].
Pacemaker is able to manage these services _if they are present_.
-Instead of +init scripts+, upstart has +jobs+. Generally the
-services (or jobs) are provided by the OS/distribution.
+Instead of init scripts, upstart has 'jobs'. Generally, the
+services (jobs) are provided by the OS distribution.
[NOTE]
======
Remember to make sure the computer is +not+ configured to start any
services at boot time that should be controlled by the cluster.
======
=== System Services ===
indexterm:[Resource,System Services]
indexterm:[System Service,Resources]
-Since there are now many "common" types of system services (+systemd+,
-+upstart+, and +lsb+), Pacemaker supports a special alias which
+Since there are various types of system services (+systemd+,
++upstart+, and +lsb+), Pacemaker supports a special +service+ alias which
intelligently figures out which one applies to a given cluster node.
This is particularly useful when the cluster contains a mix of
+systemd+, +upstart+, and +lsb+.
In order, Pacemaker will try to find the named service as:
-. an LSB (SYS-V) init script
+. an LSB init script
. a Systemd unit file
. an Upstart job
=== STONITH ===
indexterm:[Resource,STONITH]
indexterm:[STONITH,Resources]
-There is also an additional class, STONITH, which is used exclusively
-for fencing related resources. This is discussed later in
-<>.
+The STONITH class is used exclusively for fencing-related resources. This is
+discussed later in <>.
=== Nagios Plugins ===
indexterm:[Resource,Nagios Plugins]
indexterm:[Nagios Plugins,Resources]
Nagios plugins allow us to monitor services on remote hosts. See
http://nagiosplugins.org[Nagios Plugins] for more information.
Pacemaker is able to do remote monitoring with the plugins _if they are
present_.
A common use case is to configure them as resources belonging to a resource
container (usually a virtual machine), so that the container will be restarted
if any of them fails. Another use is to configure them as ordinary
resources to be used for monitoring hosts or services via the network.
-The supported parameters are same as the long options of a nagios plugin.
+The supported parameters are the same as the long options of the plugin.
[[primitive-resource]]
== Resource Properties ==
-These values tell the cluster which script to use for the resource,
-where to find that script and what standards it conforms to.
+These values tell the cluster which resource agent to use for the resource,
+where to find that resource agent and what standards it conforms to.
.Properties of a Primitive Resource
[width="95%",cols="1m,6<",options="header",align="center"]
|=========================================================
|Field
|Description
|id
|Your name for the resource
indexterm:[id,Resource]
indexterm:[Resource,Property,id]
|class
|The standard the resource agent conforms to. Allowed values:
+lsb+, +nagios+, +ocf+, +service+, +stonith+, +systemd+, +upstart+
indexterm:[class,Resource]
indexterm:[Resource,Property,class]
|type
|The name of the Resource Agent you wish to use. E.g. +IPaddr+ or +Filesystem+
indexterm:[type,Resource]
indexterm:[Resource,Property,type]
|provider
|The OCF spec allows multiple vendors to supply the same
resource agent. To use the OCF resource agents supplied by
the Heartbeat project, you would specify +heartbeat+ here.
indexterm:[provider,Resource]
indexterm:[Resource,Property,provider]
|=========================================================
-Resource definitions can be queried with the `crm_resource` tool. For example
+The XML definition of a resource can be queried with the `crm_resource` tool.
+For example:
----
# crm_resource --resource Email --query-xml
----
might produce:
-.An example system resource
+.A system resource definition
=====
[source,XML]
=====
[NOTE]
=====
-One of the main drawbacks to system services (such as LSB, Systemd and
+One of the main drawbacks to system services (LSB, systemd or
Upstart) resources is that they do not allow any parameters!
=====
////
See https://tools.ietf.org/html/rfc5737 for choice of example IP address
////
.An OCF resource definition
=====
[source,XML]
-------
-------
=====
[[s-resource-options]]
== Resource Options ==
Resources have two types of options: 'meta-attributes' and 'instance attributes'.
Meta-attributes apply to any type of resource, while instance attributes
are specific to each resource agent.
=== Resource Meta-Attributes ===
Meta-attributes are used by the cluster to decide how a resource should
behave and can be easily set using the `--meta` option of the
`crm_resource` command.
.Meta-attributes of a Primitive Resource
[width="95%",cols="2m,2,5> resources, they will not promoted to
master)
* +master:+ Allow the resource to be started and, if appropriate, promoted
indexterm:[target-role,Resource Option]
indexterm:[Resource,Option,target-role]
|is-managed
|TRUE
|Is the cluster allowed to start and stop the resource? Allowed
values: +true+, +false+
indexterm:[is-managed,Resource Option]
indexterm:[Resource,Option,is-managed]
|resource-stickiness
|value of +resource-stickiness+ in the +rsc_defaults+ section
|How much does the resource prefer to stay where it is?
indexterm:[resource-stickiness,Resource Option]
indexterm:[Resource,Option,resource-stickiness]
|requires
|fencing (unless +stonith-enabled+ is +false+ or +class+ is
+stonith+, in which case it defaults to quorum)
|Conditions under which the resource can be started ('Since 1.1.8')
Allowed values:
* +nothing:+ can always be started
* +quorum:+ The cluster can only start this resource if a majority of
the configured nodes are active
* +fencing:+ The cluster can only start this resource if a majority
of the configured nodes are active _and_ any failed or unknown nodes
have been powered off
* +unfencing:+ The cluster can only start this resource if a majority
of the configured nodes are active _and_ any failed or unknown nodes
have been powered off _and_ only on nodes that have been 'unfenced'
indexterm:[requires,Resource Option]
indexterm:[Resource,Option,requires]
|migration-threshold
|INFINITY
|How many failures may occur for this resource on a node, before this
node is marked ineligible to host this resource. A value of INFINITY
indicates that this feature is disabled.
indexterm:[migration-threshold,Resource Option]
indexterm:[Resource,Option,migration-threshold]
|failure-timeout
|0
|How many seconds to wait before acting as if the failure had not
occurred, and potentially allowing the resource back to the node on
which it failed. A value of 0 indicates that this feature is disabled.
indexterm:[failure-timeout,Resource Option]
indexterm:[Resource,Option,failure-timeout]
|multiple-active
|stop_start
|What should the cluster do if it ever finds the resource active on
more than one node? Allowed values:
* +block:+ mark the resource as unmanaged
* +stop_only:+ stop all active instances and leave them that way
* +stop_start:+ stop all active instances and start the resource in
one location only
indexterm:[multiple-active,Resource Option]
indexterm:[Resource,Option,multiple-active]
|remote-node
|
|The name of the remote-node this resource defines. This both enables the
resource as a remote-node and defines the unique name used to identify the
remote-node. If no other parameters are set, this value will also be assumed as
the hostname to connect to at the port specified by +remote-port+. +WARNING:+
This value cannot overlap with any resource or node IDs. If not specified,
this feature is disabled.
|remote-port
|3121
|Port to use for the guest connection to pacemaker_remote
|remote-addr
|value of +remote-node+
|The IP address or hostname to connect to if remote-node's name is not the
hostname of the guest.
|+remote-connect-timeout+
|60s
|How long before a pending guest connection will time out.
|=========================================================
[NOTE]
====
Support for remote nodes was added in pacemaker 1.1.10. If you are using an
earlier version, options related to remote nodes will not be available.
====
As an example of setting resource options, if you performed the following
commands on an LSB Email resource:
-------
# crm_resource --meta --resource Email --set-parameter priority --parameter-value 100
# crm_resource -m -r Email -p multiple-active -v block
-------
the resulting resource definition might be:
.An LSB resource with cluster options
=====
[source,XML]
-------
-------
=====
[[s-resource-defaults]]
=== Setting Global Defaults for Resource Meta-Attributes ===
-To set a default value for a resource option, simply add it to the
-+rsc_defaults+ section with `crm_attribute`. Thus,
+To set a default value for a resource option, add it to the
++rsc_defaults+ section with `crm_attribute`. For example,
----
# crm_attribute --type rsc_defaults --name is-managed --update false
----
would prevent the cluster from starting or stopping any of the
resources in the configuration (unless of course the individual
-resources were specifically enabled and had +is-managed+ set to
+resources were specifically enabled by having their +is-managed+ set to
+true+).
== Instance Attributes ==
-The scripts of some resource classes (LSB not being one of them) can
-be given parameters which determine how they behave and which instance
+The resource agents of some resource classes (lsb, systemd and upstart 'not' among them)
+can be given parameters which determine how they behave and which instance
of a service they control.
If your resource agent supports parameters, you can add them with the
-`crm_resource` command. For instance
+`crm_resource` command. For example,
----
# crm_resource --resource Public-IP --set-parameter ip --parameter-value 192.0.2.2
----
would create an entry in the resource like this:
.An example OCF resource with instance attributes
=====
[source,XML]
-------
-------
=====
For an OCF resource, the result would be an environment variable
called +OCF_RESKEY_ip+ with a value of +192.0.2.2+.
-The list of instance attributes supported by an OCF script can be
-found by calling the resource script with the `meta-data` command.
+The list of instance attributes supported by an OCF resource agent can be
+found by calling the resource agent with the `meta-data` command.
The output contains an XML description of all the supported
attributes, their purpose and default values.
.Displaying the metadata for the Dummy resource agent template
=====
--------
+----
# export OCF_ROOT=/usr/lib/ocf
# $OCF_ROOT/resource.d/pacemaker/Dummy meta-data
--------
+----
[source,XML]
-------
1.0
This is a Dummy Resource Agent. It does absolutely nothing except
keep track of whether its running or not.
Its purpose in life is for testing and to serve as a template for RA writers.
NB: Please pay attention to the timeouts specified in the actions
section below. They should be meaningful for the kind of resource
the agent manages. They should be the minimum advised timeouts,
but they shouldn't/cannot cover _all_ possible resource
instances. So, try to be neither overly generous nor too stingy,
but moderate. The minimum timeouts should never be below 10 seconds.
Example stateless resource agent
Location to store the resource state in.
State file
Fake attribute that can be changed to cause a reload
Fake attribute that can be changed to cause a reload
Number of seconds to sleep during operations. This can be used to test how
the cluster reacts to operation timeouts.
Operation sleep duration in seconds.
-------
=====
== Resource Operations ==
indexterm:[Resource,Action]
=== Monitoring Resources for Failure ===
By default, the cluster will not ensure your resources are still
healthy. To instruct the cluster to do this, you need to add a
+monitor+ operation to the resource's definition.
.An OCF resource with a recurring health check
=====
[source,XML]
-------
-------
=====
.Properties of an Operation
[width="95%",cols="2m,3,6
-------
=====
=== Multiple Monitor Operations ===
Provided no two operations (for a single resource) have the same name
and interval, you can have as many monitor operations as you like. In
this way, you can do a superficial health check every minute and
progressively more intense ones at higher intervals.
To tell the resource agent what kind of check to perform, you need to
provide each monitor with a different value for a common parameter.
The OCF standard creates a special parameter called +OCF_CHECK_LEVEL+
for this purpose and dictates that it is "made available to the
resource agent without the normal +OCF_RESKEY+ prefix".
Whatever name you choose, you can specify it by adding an
+instance_attributes+ block to the +op+ tag. It is up to each
resource agent to look for the parameter and decide how to use it.
.An OCF resource with two recurring health checks, performing different levels of checks specified via +OCF_CHECK_LEVEL+.
=====
[source,XML]
-------
-------
=====
=== Disabling a Monitor Operation ===
The easiest way to stop a recurring monitor is to just delete it.
However, there can be times when you only want to disable it
temporarily. In such cases, simply add +enabled="false"+ to the
operation's definition.
.Example of an OCF resource with a disabled health check
=====
[source,XML]
-------
-------
=====
This can be achieved from the command line by executing:
----
# cibadmin --modify --xml-text ''
----
Once you've done whatever you needed to do, you can then re-enable it with
----
# cibadmin --modify --xml-text ''
----
diff --git a/doc/Pacemaker_Explained/en-US/Ch-Stonith.txt b/doc/Pacemaker_Explained/en-US/Ch-Stonith.txt
index 2d05ff89ce..5facda1678 100644
--- a/doc/Pacemaker_Explained/en-US/Ch-Stonith.txt
+++ b/doc/Pacemaker_Explained/en-US/Ch-Stonith.txt
@@ -1,825 +1,824 @@
-= Configure STONITH =
+= STONITH =
////
We prefer [[ch-stonith]], but older versions of asciidoc don't deal well
with that construct for chapter headings
////
anchor:ch-stonith[Chapter 13, STONITH]
indexterm:[STONITH, Configuration]
-== What Is STONITH ==
+== What Is STONITH? ==
-STONITH is an acronym for Shoot-The-Other-Node-In-The-Head and it
-protects your data from being corrupted by rogue nodes or concurrent
+STONITH (an acronym for "Shoot The Other Node In The Head"), also called
+'fencing', protects your data from being corrupted by rogue nodes or concurrent
access.
Just because a node is unresponsive doesn't mean it has stopped
accessing your data. The only way to be 100% sure that your data is
safe is to use STONITH, so we can be certain that the node is truly
offline before allowing the data to be accessed from another node.
STONITH also has a role to play in the event that a clustered service
cannot be stopped. In this case, the cluster uses STONITH to force the
whole node offline, thereby making it safe to start the service
elsewhere.
== What STONITH Device Should You Use? ==
It is crucial that the STONITH device allows the cluster to
differentiate between a node failure and a network failure.
The biggest mistake people make in choosing a STONITH device is to
use a remote power switch (such as many on-board IPMI controllers) that
shares power with the node it controls. In such cases, the cluster
cannot be sure if the node is really offline, or active and suffering
from a network fault.
Likewise, any device that relies on the machine being active (such as
SSH-based "devices" used during testing) is inappropriate.
-== Differences of STONITH Resources ==
+== Special Treatment of STONITH Resources ==
-Stonith resources are somewhat special in Pacemaker.
+STONITH resources are somewhat special in Pacemaker.
In previous versions, only "running" resources could be used by
Pacemaker for fencing. This requirement has been relaxed to allow
other parts of the cluster (such as resources like DRBD) to reliably
initiate fencing. footnote:[Fencing a node while Pacemaker was moving
stonith resources around would otherwise fail]
Now all nodes have access to their definitions and instantiate them
on-the-fly when needed; however, preference is given to 'verified'
instances, which are the ones the cluster has explicitly started.
In the case of a cluster split, the partition with a verified instance
-will have a slight advantage as stonith-ng in the other partition will
-have to hear from all its current peers before choosing a node to
+will have a slight advantage, because the STONITH daemon in the other partition
+will have to hear from all its current peers before choosing a node to
perform the fencing.
[NOTE]
===========
To disable a fencing device/resource, 'target-role' can be set as you would for a normal resource.
===========
[NOTE]
===========
To prevent a specific node from using a fencing device, location constraints will work as expected.
===========
[IMPORTANT]
===========
Currently there is a limitation that fencing resources may only have
one set of meta-attributes and one set of instance attributes. This
can be revisited if it becomes a significant limitation for people.
===========
.Properties of Fencing Resources
[width="95%",cols="5m,2,3,10
----
====
-from which we would create a STONITH resource fragment that might look
+Based on that, we would create a STONITH resource fragment that might look
like this:
-.Sample STONITH Resource
+.An IPMI-based STONITH Resource
====
[source,XML]
----
----
====
-And finally, since we disabled it earlier, we need to re-enable STONITH.
-
+Finally, we need to enable STONITH:
----
# crm_attribute -t crm_config -n stonith-enabled -v true
----
-== Advanced Fencing Configurations ==
+== Advanced STONITH Configurations ==
Some people consider that having one fencing device is a single point
of failure footnote:[Not true, since a node or resource must fail
before fencing even has a chance to]; others prefer removing the node
from the storage and network instead of turning it off.
Whatever the reason, Pacemaker supports fencing nodes with multiple
devices through a feature called 'fencing topologies'.
Simply create the individual devices as you normally would, then
define one or more +fencing-level+ entries in the +fencing-topology+ section of
the configuration.
-* Each level is attempted in +ascending index+ order
-* If a device fails, +processing terminates+ for the current level.
- No further devices in that level are exercised and the next level is attempted instead.
-* If the operation succeeds for all the listed devices in a level, the level is deemed to have passed
-* The operation is finished +when a level has passed+ (success), or all levels have been attempted (failed)
-* If the operation failed, the next step is determined by the Policy Engine and/or crmd.
+* Each fencing level is attempted in order of ascending +index+.
+* If a device fails, processing terminates for the current level.
+ No further devices in that level are exercised, and the next level is attempted instead.
+* If the operation succeeds for all the listed devices in a level, the level is deemed to have passed.
+* The operation is finished when a level has passed (success), or all levels have been attempted (failed).
+* If the operation failed, the next step is determined by the Policy Engine and/or `crmd`.
Some possible uses of topologies include:
* Try poison-pill and fall back to power
* Try disk and network, and fall back to power if either fails
* Initiate a kdump and then poweroff the node
.Properties of Fencing Levels
[width="95%",cols="1m,6<",options="header",align="center"]
|=========================================================
|Field
|Description
|id
-|Your name for the level
+|A unique name for the level
indexterm:[id,fencing-level]
indexterm:[Fencing,fencing-level,id]
|target
|The node to which this level applies
indexterm:[target,fencing-level]
indexterm:[Fencing,fencing-level,target]
|index
|The order in which to attempt the levels.
- Levels are attempted in +ascending index+ order +until one succeeds+.
+ Levels are attempted in ascending order 'until one succeeds'.
indexterm:[index,fencing-level]
indexterm:[Fencing,fencing-level,index]
|devices
|A comma-separated list of devices that must all be tried for this level
indexterm:[devices,fencing-level]
indexterm:[Fencing,fencing-level,devices]
|=========================================================
=== Example use of Fencing Topologies ===
[source,XML]
----
...
...
----
-=== Example use of advanced Fencing Topologies: dual layer and dual devices ===
+=== Example Dual-Layer, Dual-Device Fencing Topologies ===
The following example illustrates an advanced use of +fencing-topology+ in a cluster with the following properties:
* 3 nodes (2 active prod-mysql nodes, 1 prod_mysql-rep in standby for quorum purposes)
* the active nodes have an IPMI-controlled power board reached at 192.0.2.1 and 192.0.2.2
* the active nodes also have two independent PSUs (Power Supply Units)
connected to two independent PDUs (Power Distribution Units) reached at
198.51.100.1 (port 10 and port 11) and 203.0.113.1 (port 10 and port 11)
* the first fencing method uses the `fence_ipmi` agent
* the second fencing method uses the `fence_apc_snmp` agent targeting 2 fencing devices (one per PSU, either port 10 or 11)
* fencing is only implemented for the active nodes and has location constraints
* fencing topology is set to try IPMI fencing first then default to a "sure-kill" dual PDU fencing
-In a normal failure scenario, STONITH will first select +fence_ipmi+ to try and kill the faulty node.
+In a normal failure scenario, STONITH will first select +fence_ipmi+ to try to kill the faulty node.
Using a fencing topology, if that first method fails, STONITH will then move on to selecting +fence_apc_snmp+ twice:
* once for the first PDU
* again for the second PDU
The fence action is considered successful only if both PDUs report the required status. If any of them fails, STONITH loops back to the first fencing method, +fence_ipmi+, and so on until the node is fenced or the fencing action is cancelled.
.First fencing method: single IPMI device
Each cluster node has its own dedicated IPMI channel that can be called for fencing using the following primitives:
[source,XML]
----
----
.Second fencing method: dual PDU devices
-Each cluster node also has two distinct power channels controlled by two distinct PDUs. That means a total of 4 fencing devices configured as follows:
+Each cluster node also has two distinct power channels controlled by two
+distinct PDUs. That means a total of 4 fencing devices configured as follows:
- Node 1, PDU 1, PSU 1 @ port 10
- Node 1, PDU 2, PSU 2 @ port 10
- Node 2, PDU 1, PSU 1 @ port 11
- Node 2, PDU 2, PSU 2 @ port 11
The matching fencing agents are configured as follows:
[source,XML]
----
----
.Location Constraints
-To prevent STONITH from running a fencing agent on the very same node it is supposed to fence, constraints are placed on all the fencing primitives:
+To prevent STONITH from trying to run a fencing agent on the same node it is
+supposed to fence, constraints are placed on all the fencing primitives:
[source,XML]
----
-
-
-
-
-
-
-
-
+
+
+
+
+
+
+
+
----
.Fencing topology
Now that all the fencing resources are defined, it's time to create the right topology.
We want to fence using IPMI first and, if that does not work, fence both PDUs to effectively and surely kill the node.
[source,XML]
----
-
-
-
-
-
-
+
+
+
+
+
+
----
-Please note, in +fencing-topology+, the lowest +index+ value determines the priority of the first fencing method.
+Please note, in +fencing-topology+, the lowest +index+ value determines the priority of the first fencing method.
.Final configuration
Put together, the configuration looks like this:
[source,XML]
----
...
...
----
diff --git a/doc/Pacemaker_Explained/en-US/Ch-Utilization.txt b/doc/Pacemaker_Explained/en-US/Ch-Utilization.txt
index 5dee0a25e6..addcb2102a 100644
--- a/doc/Pacemaker_Explained/en-US/Ch-Utilization.txt
+++ b/doc/Pacemaker_Explained/en-US/Ch-Utilization.txt
@@ -1,221 +1,221 @@
= Utilization and Placement Strategy =
== Background ==
Pacemaker decides where to place a resource according to the resource
allocation scores on every node. The resource will be allocated to the
node where the resource has the highest score. If the resource allocation
scores on all the nodes are equal, by the `default` placement strategy,
Pacemaker will choose a node with the least number of allocated resources
for balancing the load. If the number of resources on each node is equal,
the first eligible node listed in cib will be chosen to run the resource.
However, resources differ: they may consume different amounts of a node's
capacities. So the load cannot be balanced ideally just by counting the
resources allocated to each node. Besides, if resources are placed such that
their combined requirements exceed the provided capacity, they may fail to
start completely or run with degraded performance.
To take these into account, Pacemaker allows you to specify the following
configurations:
. The `capacity` a certain node provides.
. The `capacity` a certain resource requires.
. An overall `strategy` for placement of resources.
== Utilization attributes ==
To configure the capacity a node provides and the resource's requirements,
use `utilization` attributes. You can name the `utilization` attributes
according to your preferences and define as many `name/value` pairs as your
configuration needs. However, the attributes' values must be integers.
First, specify the capacities the nodes provide:
[source,XML]
----
----
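The example node definitions are omitted above. A minimal sketch, assuming two hypothetical nodes advertising `cpu` and `memory` capacities (names and values are illustrative), could be:
[source,XML]
----
<node id="node1" type="normal" uname="node1">
  <utilization id="node1-utilization">
    <nvpair id="node1-utilization-cpu" name="cpu" value="2"/>
    <nvpair id="node1-utilization-memory" name="memory" value="2048"/>
  </utilization>
</node>
<node id="node2" type="normal" uname="node2">
  <utilization id="node2-utilization">
    <nvpair id="node2-utilization-cpu" name="cpu" value="4"/>
    <nvpair id="node2-utilization-memory" name="memory" value="4096"/>
  </utilization>
</node>
----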
Then, specify the capacities the resources require:
[source,XML]
----
----
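The example resource definitions are omitted above as well. A matching sketch, using the `rsc-small`, `rsc-medium` and `rsc-large` resources referenced later in this chapter (agent classes and values are illustrative), could be:
[source,XML]
----
<primitive id="rsc-small" class="ocf" provider="pacemaker" type="Dummy">
  <utilization id="rsc-small-utilization">
    <nvpair id="rsc-small-utilization-cpu" name="cpu" value="1"/>
    <nvpair id="rsc-small-utilization-memory" name="memory" value="1024"/>
  </utilization>
</primitive>
<primitive id="rsc-medium" class="ocf" provider="pacemaker" type="Dummy">
  <utilization id="rsc-medium-utilization">
    <nvpair id="rsc-medium-utilization-cpu" name="cpu" value="2"/>
    <nvpair id="rsc-medium-utilization-memory" name="memory" value="2048"/>
  </utilization>
</primitive>
<primitive id="rsc-large" class="ocf" provider="pacemaker" type="Dummy">
  <utilization id="rsc-large-utilization">
    <nvpair id="rsc-large-utilization-cpu" name="cpu" value="3"/>
    <nvpair id="rsc-large-utilization-memory" name="memory" value="3072"/>
  </utilization>
</primitive>
----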
A node is considered eligible for a resource if it has sufficient free
capacity to satisfy the resource's requirements. The nature of the required
or provided capacities is completely irrelevant to Pacemaker; it just makes
sure that all capacity requirements of a resource are satisfied before placing
the resource on a node.
== Placement Strategy ==
After you have configured the capacities your nodes provide and the
capacities your resources require, you need to set the `placement-strategy`
in the global cluster options, otherwise the capacity configurations have
`no effect`.
Four values are available for the `placement-strategy`:
`default`::
Utilization values are not taken into account at all.
Resources are allocated according to allocation scores; if scores are equal,
resources are distributed evenly across nodes.
`utilization`::
Utilization values are taken into account only when deciding whether a node
is considered eligible, i.e. whether it has sufficient free capacity to
satisfy the resource's requirements. However, load-balancing is still done
based on the number of resources allocated to a node.
`balanced`::
Utilization values are taken into account when deciding whether a node
is eligible to serve a resource; an attempt is made to spread the resources
evenly, optimizing resource performance.
`minimal`::
Utilization values are taken into account when deciding whether a node
is eligible to serve a resource; an attempt is made to concentrate the
resources on as few nodes as possible, thereby enabling possible power savings
on the remaining nodes.
Set `placement-strategy` with `crm_attribute`:
----
# crm_attribute --attr-name placement-strategy --attr-value balanced
----
Now Pacemaker will ensure the load from your resources will be distributed
evenly throughout the cluster - without the need for convoluted sets of
colocation constraints.
== Allocation Details ==
=== Which node is preferred to get consumed first when allocating resources? ===
- The node that is most healthy (which has the highest node weight) gets
consumed first.
- If their weights are equal:
* If `placement-strategy="default|utilization"`,
the node that has the least number of allocated resources gets consumed first.
** If their numbers of allocated resources are equal,
the first eligible node listed in cib gets consumed first.
* If `placement-strategy="balanced"`,
the node that has more free capacity gets consumed first.
** If the free capacities of the nodes are equal,
the node that has the least number of allocated resources gets consumed first.
*** If their numbers of allocated resources are equal,
the first eligible node listed in cib gets consumed first.
* If `placement-strategy="minimal"`,
the first eligible node listed in cib gets consumed first.
==== Which node has more free capacity? ====
This is quite clear if only one type of `capacity` is defined. If multiple
types of `capacity` are defined, for example:
- If `nodeA` has more free `cpus`, `nodeB` has more free `memory`,
their free capacities are equal.
- If `nodeA` has more free `cpus`, while `nodeB` has more free `memory` and `storage`,
`nodeB` has more free capacity.
=== Which resource is preferred to get assigned first? ===
- The resource that has the highest priority gets allocated first.
- If their priorities are equal, check if they are already running. The
resource that has the highest score on the node where it's running gets allocated
first (to prevent resource shuffling).
- If the scores above are equal or they are not running, the resource that has
the highest score on the preferred node gets allocated first.
- If the scores above are equal, the first runnable resource listed in cib gets allocated first.
== Limitations ==
The type of problem Pacemaker is dealing with here is known as the
http://en.wikipedia.org/wiki/Knapsack_problem[knapsack problem] and falls into
the http://en.wikipedia.org/wiki/NP-complete[NP-complete] category of computer
science problems - which is a fancy way of saying "it takes a really long time
to solve".
Clearly, in an HA cluster it's not acceptable to spend minutes, let alone hours
or days, finding an optimal solution while services remain unavailable.
So instead of trying to solve the problem completely, Pacemaker uses a
'best effort' algorithm to determine which node should host a particular
service. This means it arrives at a solution much faster than traditional
linear programming algorithms, but potentially at the price of leaving some
services stopped.
-In the contrived example above:
+In the contrived example at the start of this chapter:
- +rsc-small+ would be allocated to +node1+
- +rsc-medium+ would be allocated to +node2+
- +rsc-large+ would remain inactive
Which is not ideal.
== Strategies for Dealing with the Limitations ==
It might sound obvious, but if the physical capacity of your nodes is (close to)
maxed out by the cluster under normal conditions, then failover isn't going to
go well. Even without the Utilization feature, you'll start hitting timeouts and
getting secondary failures.
- Build some buffer into the capabilities advertised by the nodes.
Advertise slightly more resources than the nodes physically have, on the
(usually valid) assumption that a resource will not use 100% of the configured
amount of cpu/memory/etc `all` the time. This practice is also known as
'over-commit'.
- Specify resource priorities.
If the cluster is going to sacrifice services, it should be the ones you care
about (comparatively) the least. Ensure that resource priorities are properly
set, for example via the `priority` resource meta-attribute, so that your most
important resources are scheduled first (see the sketch below).
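For example, a higher `priority` could be given to an important resource through its meta-attributes; this is only a sketch with an illustrative resource and value:
[source,XML]
----
<primitive id="rsc-important" class="ocf" provider="pacemaker" type="Dummy">
  <meta_attributes id="rsc-important-meta">
    <!-- higher-priority resources are allocated first when capacity is scarce -->
    <nvpair id="rsc-important-meta-priority" name="priority" value="10"/>
  </meta_attributes>
</primitive>
----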