diff --git a/doc/Pacemaker_Explained/en-US/Ch-Resources.txt b/doc/Pacemaker_Explained/en-US/Ch-Resources.txt index 8eacb05f06..b4e9ebe79a 100644 --- a/doc/Pacemaker_Explained/en-US/Ch-Resources.txt +++ b/doc/Pacemaker_Explained/en-US/Ch-Resources.txt @@ -1,672 +1,672 @@ = Cluster Resources = == What is a Cluster Resource == indexterm:[Resource] The role of a resource agent is to abstract the service it provides and present a consistent view to the cluster, which allows the cluster to be agnostic about the resources it manages. The cluster doesn't need to understand how the resource works because it relies on the resource agent to do the right thing when given a +start+, +stop+ or +monitor+ command. -For this reason it is crucial that resource agents are well tested. +For this reason it is crucial that resource agents are well tested. Typically resource agents come in the form of shell scripts; however, they can be written using any technology (such as C, Python or Perl) that the author is comfortable with. [[s-resource-supported]] == Supported Resource Classes == indexterm:[Resource,class] There are six classes of agents supported by Pacemaker: * OCF * LSB * Upstart * Systemd * Fencing * Service indexterm:[Resource,Heartbeat] indexterm:[Heartbeat,Resources] Version 1 of Heartbeat came with its own style of resource agents and it is highly likely that many people have written their own agents based on its conventions. footnote:[ See http://wiki.linux-ha.org/HeartbeatResourceAgent for more information ] Although deprecated with the release of Heartbeat v2, they were supported by Pacemaker up until the release of 1.1.8 to enable administrators to continue to use these agents. === Open Cluster Framework === indexterm:[Resource,OCF] indexterm:[OCF,Resources] indexterm:[Open Cluster Framework,Resources] The OCF standard footnote:[ http://www.opencf.org/cgi-bin/viewcvs.cgi/specs/ra/resource-agent-api.txt?rev=HEAD - at least as it relates to resource agents. ] footnote:[ The Pacemaker implementation has been somewhat extended from the OCF Specs, but none of those changes are incompatible with the original OCF specification. ] is basically an extension of the Linux Standard Base conventions for init scripts to: * support parameters, * make them self-describing, and * make them extensible OCF specs have strict definitions of the exit codes that actions must return. footnote:[ Included with the cluster is the ocf-tester script, which can be useful in this regard. ] The cluster follows these specifications exactly, and giving the wrong exit code will cause the cluster to behave in ways you will likely find puzzling and annoying. In particular, the cluster needs to distinguish a completely stopped resource from one which is in some erroneous and indeterminate state. Parameters are passed to the script as environment variables, with the special prefix +OCF_RESKEY_+. So, a parameter which the user thinks of as +ip+ will be passed to the script as +OCF_RESKEY_ip+. The number and purpose of the parameters are completely arbitrary; however, your script should advertise any that it supports using the +meta-data+ command. - + The OCF class is the most preferred one as it is an industry standard, highly flexible (allowing parameters to be passed to agents in a non-positional manner) and self-describing. For more information, see the http://www.linux-ha.org/wiki/OCF_Resource_Agents[reference] and <>.
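To make the parameter and exit-code conventions concrete, here is a minimal, hypothetical sketch of an OCF-style agent (the +ip+ parameter and +my-daemon+ are invented for illustration; only the action names, the +OCF_RESKEY_+ prefix and the exit codes come from the OCF spec):

[source,Bash]
-------
#!/bin/sh
# The cluster exports each configured parameter with the OCF_RESKEY_ prefix,
# so a parameter the user knows as "ip" arrives as OCF_RESKEY_ip.
ip="${OCF_RESKEY_ip:-127.0.0.1}"

case "$1" in
    start)     my-daemon --listen "$ip" || exit 1; exit 0 ;;  # 0 = OCF_SUCCESS, 1 = OCF_ERR_GENERIC
    stop)      pkill -f my-daemon; exit 0 ;;                  # stopping a stopped resource must still succeed
    monitor)   pgrep -f my-daemon >/dev/null && exit 0        # running: OCF_SUCCESS
               exit 7 ;;                                      # 7 = OCF_NOT_RUNNING (cleanly stopped, not an error)
    meta-data) cat "${0%/*}/my-agent-metadata.xml"; exit 0 ;;
    *)         exit 3 ;;                                      # 3 = OCF_ERR_UNIMPLEMENTED
esac
-------

Note how +monitor+ returns +7+ rather than a generic error when the service is down: this is exactly the "completely stopped versus erroneous state" distinction the cluster relies on.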
=== Linux Standard Base === indexterm:[Resource,LSB] indexterm:[LSB,Resources] indexterm:[Linux Standard Base,Resources] LSB resource agents are those found in '/etc/init.d'. Generally they are provided by the OS/distribution and, in order to be used with the cluster, they must conform to the LSB Spec. footnote:[ See http://refspecs.linux-foundation.org/LSB_3.0.0/LSB-Core-generic/LSB-Core-generic/iniscrptact.html for the LSB Spec (as it relates to init scripts). ] Many distributions claim LSB compliance but ship with broken init scripts. For details on how to check if your init script is LSB-compatible, see <>. The most common problems are: * Not implementing the status operation at all * Not observing the correct exit status codes for start/stop/status actions * Starting a started resource returns an error (this violates the LSB spec) * Stopping a stopped resource returns an error (this violates the LSB spec) === Systemd === indexterm:[Resource,Systemd] indexterm:[Systemd,Resources] Some newer distributions have replaced the old http://en.wikipedia.org/wiki/Init#SysV-style[SYS-V] style of initialization daemons (and scripts) with an alternative called http://www.freedesktop.org/wiki/Software/systemd[Systemd]. Pacemaker is able to manage these services _if they are present_. Instead of +init scripts+, systemd has +unit files+. Generally the services (or unit files) are provided by the OS/distribution but there are some instructions for converting from init scripts at: http://0pointer.de/blog/projects/systemd-for-admins-3.html [NOTE] ====== Remember to make sure the computer is +not+ configured to start any services at boot time that should be controlled by the cluster. ====== === Upstart === indexterm:[Resource,Upstart] indexterm:[Upstart,Resources] Some newer distributions have replaced the old http://en.wikipedia.org/wiki/Init#SysV-style[SYS-V] style of initialization daemons (and scripts) with an alternative called http://upstart.ubuntu.com[Upstart]. Pacemaker is able to manage these services _if they are present_. Instead of +init scripts+, upstart has +jobs+. Generally the services (or jobs) are provided by the OS/distribution. [NOTE] ====== Remember to make sure the computer is +not+ configured to start any services at boot time that should be controlled by the cluster. ====== === System Services === indexterm:[Resource,System Services] indexterm:[System Service,Resources] Since there are now many "common" types of system services (+systemd+, +upstart+, and +lsb+), Pacemaker supports a special +service+ alias which intelligently figures out which one applies to a given cluster node. This is particularly useful when the cluster contains a mix of +systemd+, +upstart+, and +lsb+. In order, Pacemaker will try to find the named service as: . an LSB (SYS-V) init script . a Systemd unit file . an Upstart job
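A hedged illustration of the alias in use (the resource name and the +exim+ service name are arbitrary); with +class="service"+, each node resolves the name against whichever of the three service types it actually provides:

[source,XML]
-------
<primitive id="mail" class="service" type="exim"/>
-------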
=== STONITH === indexterm:[Resource,STONITH] indexterm:[STONITH,Resources] There is also an additional class, STONITH, which is used exclusively for fencing related resources. This is discussed later in <>. [[primitive-resource]] == Resource Properties == These values tell the cluster which script to use for the resource, where to find that script and what standards it conforms to. .Properties of a Primitive Resource [width="95%",cols="1m,6<",options="header",align="center"] |========================================================= |Field |Description |id |Your name for the resource indexterm:[id,Resource] indexterm:[Resource,Property,id] |class |The standard the script conforms to. Allowed values: +ocf+, +service+, +upstart+, +systemd+, +lsb+, +stonith+ indexterm:[class,Resource] indexterm:[Resource,Property,class] |type |The name of the Resource Agent you wish to use, e.g. _IPaddr_ or _Filesystem_ indexterm:[type,Resource] indexterm:[Resource,Property,type] |provider |The OCF spec allows multiple vendors to supply the same ResourceAgent. To use the OCF resource agents supplied with Heartbeat, you should specify +heartbeat+ here. indexterm:[provider,Resource] indexterm:[Resource,Property,provider] |========================================================= Resource definitions can be queried with the `crm_resource` tool. For example [source,C] # crm_resource --resource Email --query-xml might produce: .An example system resource ===== [source,XML]
<primitive id="Email" class="service" type="exim"/>
===== -[NOTE] +[NOTE] ===== One of the main drawbacks to system service resources (such as LSB, Systemd and Upstart) is that they do not allow any parameters! ===== .An example OCF resource ===== [source,XML] -------
<primitive id="Public-IP" class="ocf" type="IPaddr" provider="heartbeat">
   <instance_attributes id="params-public-ip">
      <nvpair id="public-ip-addr" name="ip" value="1.2.3.4"/>
   </instance_attributes>
</primitive>
------- =====
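For comparison, resources of the other classes just change +class+ and +type+ and omit +provider+, which is OCF-specific. A hypothetical systemd-managed web server (the +httpd+ unit name is illustrative) would be declared as:

[source,XML]
-------
<primitive id="WebServer" class="systemd" type="httpd"/>
-------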
[[s-resource-options]] == Resource Options == Options are used by the cluster to decide how your resource should behave and can be easily set using the `--meta` option of the `crm_resource` command. .Options for a Primitive Resource [width="95%",cols="1m,1,4<",options="header",align="center"] |========================================================= |Field |Default |Description - + |priority |+0+ |If not all resources can be active, the cluster will stop lower priority resources in order to keep higher priority ones active. indexterm:[priority,Resource Option] indexterm:[Resource,Option,priority] |target-role |+Started+ |What state should the cluster attempt to keep this resource in? Allowed values: * 'Stopped' - Force the resource to be stopped * 'Started' - Allow the resource to be started (in the case of <> resources, they will not be promoted to master) * 'Master' - Allow the resource to be started and, if appropriate, promoted indexterm:[target-role,Resource Option] indexterm:[Resource,Option,target-role] |is-managed |+TRUE+ |Is the cluster allowed to start and stop the resource? Allowed values: +true+, +false+ indexterm:[is-managed,Resource Option] indexterm:[Resource,Option,is-managed] |resource-stickiness |Calculated |How much does the resource prefer to stay where it is? Defaults to the value of +resource-stickiness+ in the +rsc_defaults+ section indexterm:[resource-stickiness,Resource Option] indexterm:[Resource,Option,resource-stickiness] |requires |Calculated |Under what conditions can the resource be started? ('Since 1.1.8') Defaults to +fencing+ unless +stonith-enabled+ is 'false' or +class+ is 'stonith' - under those conditions the default is +quorum+. Possible values: * 'nothing' - The resource can always be started * 'quorum' - The cluster can only start this resource if a majority of the configured nodes are active * 'fencing' - The cluster can only start this resource if a majority of the configured nodes are active _and_ any failed or unknown nodes have been powered off. * 'unfencing' - The cluster can only start this resource if a majority of the configured nodes are active _and_ any failed or unknown nodes have been powered off _and_ only on nodes that have been 'unfenced' indexterm:[requires,Resource Option] indexterm:[Resource,Option,requires] |migration-threshold |+INFINITY+ (disabled) |How many failures may occur for this resource on a node before this node is marked ineligible to host this resource. indexterm:[migration-threshold,Resource Option] indexterm:[Resource,Option,migration-threshold] |failure-timeout |+0+ (disabled) |How many seconds to wait before acting as if the failure had not occurred, and potentially allowing the resource back to the node on which it failed. indexterm:[failure-timeout,Resource Option] indexterm:[Resource,Option,failure-timeout] |multiple-active |+stop_start+ |What should the cluster do if it ever finds the resource active on more than one node? Allowed values: * 'block' - mark the resource as unmanaged * 'stop_only' - stop all active instances and leave them that way * 'stop_start' - stop all active instances and start the resource in one location only indexterm:[multiple-active,Resource Option] indexterm:[Resource,Option,multiple-active] |========================================================= If you performed the following commands on the previous LSB Email resource [source,C] ------- # crm_resource --meta --resource Email --set-parameter priority --property-value 100 # crm_resource --meta --resource Email --set-parameter multiple-active --property-value block ------- the resulting resource definition would be .An LSB resource with cluster options ===== [source,XML] -------
<primitive id="Email" class="lsb" type="exim">
   <meta_attributes id="meta-email">
      <nvpair id="meta-email-priority" name="priority" value="100"/>
      <nvpair id="meta-email-active" name="multiple-active" value="block"/>
   </meta_attributes>
</primitive>
------- ===== [[s-resource-defaults]] == Setting Global Defaults for Resource Options == To set a default value for a resource option, simply add it to the +rsc_defaults+ section with `crm_attribute`. Thus, [source,C] # crm_attribute --type rsc_defaults --attr-name is-managed --attr-value false would prevent the cluster from starting or stopping any of the resources in the configuration (unless of course the individual resources were specifically enabled and had +is-managed+ set to +true+). == Instance Attributes == The scripts of some resource classes (LSB not being one of them) can be given parameters which determine how they behave and which instance of a service they control. If your resource agent supports parameters, you can add them with the `crm_resource` command. For instance [source,C] # crm_resource --resource Public-IP --set-parameter ip --property-value 1.2.3.4 would create an entry in the resource like this: .An example OCF resource with instance attributes ===== [source,XML] -------
<primitive id="Public-IP" class="ocf" type="IPaddr" provider="heartbeat">
   <instance_attributes id="params-public-ip">
      <nvpair id="public-ip-addr" name="ip" value="1.2.3.4"/>
   </instance_attributes>
</primitive>
------- ===== For an OCF resource, the result would be an environment variable called +OCF_RESKEY_ip+ with a value of +1.2.3.4+. The list of instance attributes supported by an OCF script can be found by calling the resource script with the `meta-data` command. The output contains an XML description of all the supported attributes, their purpose and default values. - + .Displaying the metadata for the Dummy resource agent template ===== [source,C] ------- # export OCF_ROOT=/usr/lib/ocf # $OCF_ROOT/resource.d/pacemaker/Dummy meta-data ------- [source,XML] -------
<?xml version="1.0"?>
<!DOCTYPE resource-agent SYSTEM "ra-api-1.dtd">
<resource-agent name="Dummy" version="0.9">
<version>1.0</version>

<longdesc lang="en">
This is a Dummy Resource Agent. It does absolutely nothing except
keep track of whether its running or not.
Its purpose in life is for testing and to serve as a template for RA writers.
</longdesc>
<shortdesc lang="en">Dummy resource agent</shortdesc>

<parameters>
<parameter name="state" unique="1">
<longdesc lang="en">
Location to store the resource state in.
</longdesc>
<shortdesc lang="en">State file</shortdesc>
<content type="string" default="/var/run/Dummy-{OCF_RESOURCE_INSTANCE}.state" />
</parameter>

<parameter name="fake" unique="0">
<longdesc lang="en">
Dummy attribute that can be changed to cause a reload
</longdesc>
<shortdesc lang="en">Dummy attribute that can be changed to cause a reload</shortdesc>
<content type="string" default="dummy" />
</parameter>
</parameters>

<actions>
<action name="start"        timeout="20" />
<action name="stop"         timeout="20" />
<action name="monitor"      timeout="20" interval="10" depth="0" start-delay="0" />
<action name="reload"       timeout="20" />
<action name="migrate_to"   timeout="20" />
<action name="migrate_from" timeout="20" />
<action name="meta-data"    timeout="5" />
<action name="validate-all" timeout="20" />
</actions>
</resource-agent>
------- =====
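Before handing a new or modified agent to the cluster, it is worth exercising it by hand; the `ocf-tester` script mentioned earlier drives an agent through its actions with the parameters you supply. A hedged example (the resource name and parameter value are arbitrary):

[source,C]
-------
# ocf-tester -n test-ip -o ip=1.2.3.4 /usr/lib/ocf/resource.d/heartbeat/IPaddr
-------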
== Resource Operations == indexterm:[Resource,Action] === Monitoring Resources for Failure === By default, the cluster will not ensure your resources are still healthy. To instruct the cluster to do this, you need to add a +monitor+ operation to the resource's definition. - + .An OCF resource with a recurring health check ===== [source,XML] -------
<primitive id="Public-IP" class="ocf" type="IPaddr" provider="heartbeat">
   <operations>
      <op id="public-ip-check" name="monitor" interval="60s"/>
   </operations>
   <instance_attributes id="params-public-ip">
      <nvpair id="public-ip-addr" name="ip" value="1.2.3.4"/>
   </instance_attributes>
</primitive>
------- ===== .Properties of an Operation [width="95%",cols="1m,6<",options="header",align="center"] |========================================================= |Field |Description |id |Your name for the action. Must be unique. indexterm:[id,Action Property] indexterm:[Action,Property,id] |name |The action to perform. Common values: +monitor+, +start+, +stop+ indexterm:[name,Action Property] indexterm:[Action,Property,name] |interval |How frequently (in seconds) to perform the operation. Default value: +0+, meaning never. indexterm:[interval,Action Property] indexterm:[Action,Property,interval] |timeout |How long to wait before declaring the action has failed. indexterm:[timeout,Action Property] indexterm:[Action,Property,timeout] |on-fail |The action to take if this action ever fails. Allowed values: * 'ignore' - Pretend the resource did not fail * 'block' - Don't perform any further operations on the resource * 'stop' - Stop the resource and do not start it elsewhere * 'restart' - Stop the resource and start it again (possibly on a different node) * 'fence' - STONITH the node on which the resource failed * 'standby' - Move _all_ resources away from the node on which the resource failed The default for the +stop+ operation is +fence+ when STONITH is enabled and +block+ otherwise. All other operations default to +restart+. indexterm:[on-fail,Action Property] indexterm:[Action,Property,on-fail] |enabled |If +false+, the operation is treated as if it does not exist. Allowed values: +true+, +false+ indexterm:[enabled,Action Property] indexterm:[Action,Property,enabled] |========================================================= [[s-operation-defaults]] === Setting Global Defaults for Operations === To set a default value for an operation option, simply add it to the +op_defaults+ section with `crm_attribute`. Thus, [source,C] # crm_attribute --type op_defaults --attr-name timeout --attr-value 20s would default each operation's +timeout+ to 20 seconds. If an operation's definition also includes a value for +timeout+, then that value would be used instead (for that operation only).
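In the CIB, the command above simply stores a name/value pair in the +op_defaults+ section; a sketch of the result (the +id+ values are arbitrary):

[source,XML]
-------
<op_defaults>
   <meta_attributes id="op-defaults-options">
      <nvpair id="op-defaults-timeout" name="timeout" value="20s"/>
   </meta_attributes>
</op_defaults>
-------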
- + ==== When Resources Take a Long Time to Start/Stop ==== There are a number of implicit operations that the cluster will always perform - +start+, +stop+ and a non-recurring +monitor+ operation (used at startup to check the resource isn't already active). If one of these is taking too long, then you can create an entry for it and simply specify a new value. -.An OCF resource with custom timeouts for its implicit actions +.An OCF resource with custom timeouts for its implicit actions ===== [source,XML] -------
<primitive id="Public-IP" class="ocf" type="IPaddr" provider="heartbeat">
   <operations>
      <op id="public-ip-startup" name="monitor" interval="0" timeout="90s"/>
      <op id="public-ip-start" name="start" interval="0" timeout="180s"/>
      <op id="public-ip-stop" name="stop" interval="0" timeout="15min"/>
   </operations>
   <instance_attributes id="params-public-ip">
      <nvpair id="public-ip-addr" name="ip" value="1.2.3.4"/>
   </instance_attributes>
</primitive>
------- ===== ==== Multiple Monitor Operations ==== Provided no two operations (for a single resource) have the same name and interval, you can have as many monitor operations as you like. In this way, you can do a superficial health check every minute and progressively more intense ones at higher intervals. To tell the resource agent what kind of check to perform, you need to provide each monitor with a different value for a common parameter. The OCF standard creates a special parameter called +OCF_CHECK_LEVEL+ for this purpose and dictates that it is _"made available to the resource agent without the normal +OCF_RESKEY+ prefix"_. - + Whatever name you choose, you can specify it by adding an +instance_attributes+ block to the +op+ tag. Note that it is up to each resource agent to look for the parameter and decide how to use it. - + .An OCF resource with two recurring health checks, performing different levels of checks - specified via +OCF_CHECK_LEVEL+. ===== [source,XML] -------
<primitive id="Public-IP" class="ocf" type="IPaddr" provider="heartbeat">
   <operations>
      <op id="public-ip-health-60" name="monitor" interval="60">
         <instance_attributes id="params-public-ip-depth-60">
            <nvpair id="public-ip-depth-60" name="OCF_CHECK_LEVEL" value="10"/>
         </instance_attributes>
      </op>
      <op id="public-ip-health-300" name="monitor" interval="300">
         <instance_attributes id="params-public-ip-depth-300">
            <nvpair id="public-ip-depth-300" name="OCF_CHECK_LEVEL" value="20"/>
         </instance_attributes>
      </op>
   </operations>
   <instance_attributes id="params-public-ip">
      <nvpair id="public-ip-addr" name="ip" value="1.2.3.4"/>
   </instance_attributes>
</primitive>
------- =====
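On the agent side, the monitor action can then branch on the requested depth. A rough, hypothetical sketch (the daemon name and the deep-check helper are invented; the depth values match the example above):

[source,Bash]
-------
monitor() {
    # OCF_CHECK_LEVEL is delivered without the usual OCF_RESKEY_ prefix
    pgrep -f my-daemon >/dev/null || exit 7     # 7 = OCF_NOT_RUNNING
    if [ "${OCF_CHECK_LEVEL:-0}" -ge 20 ]; then
        run_deep_self_test || exit 1            # 1 = OCF_ERR_GENERIC (hypothetical helper)
    fi
    exit 0                                      # 0 = OCF_SUCCESS
}
-------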
==== Disabling a Monitor Operation ==== The easiest way to stop a recurring monitor is to just delete it. However, there can be times when you only want to disable it temporarily. In such cases, simply add +enabled="false"+ to the operation's definition. -.Example of an OCF resource with a disabled health check +.Example of an OCF resource with a disabled health check ===== [source,XML] -------
<primitive id="Public-IP" class="ocf" type="IPaddr" provider="heartbeat">
   <operations>
      <op id="public-ip-check" name="monitor" interval="60s" enabled="false"/>
   </operations>
   <instance_attributes id="params-public-ip">
      <nvpair id="public-ip-addr" name="ip" value="1.2.3.4"/>
   </instance_attributes>
</primitive>
------- ===== This can be achieved from the command-line by executing [source,C] # cibadmin -M -X '<op id="public-ip-check" enabled="false"/>' Once you've done whatever you needed to do, you can then re-enable it with [source,C] # cibadmin -M -X '<op id="public-ip-check" enabled="true"/>' diff --git a/doc/Pacemaker_Explained/en-US/Ch-Stonith.txt b/doc/Pacemaker_Explained/en-US/Ch-Stonith.txt index 1df1b9fe29..d4affa45b7 100644 --- a/doc/Pacemaker_Explained/en-US/Ch-Stonith.txt +++ b/doc/Pacemaker_Explained/en-US/Ch-Stonith.txt @@ -1,314 +1,207 @@ = Configure STONITH = -//// +//// We prefer [[ch-stonith]], but older versions of asciidoc don't deal well with that construct for chapter headings //// anchor:ch-stonith[Chapter 13, STONITH] indexterm:[STONITH, Configuration] == What Is STONITH == - STONITH is an acronym for Shoot-The-Other-Node-In-The-Head and it protects your data from being corrupted by rogue nodes or concurrent access. Just because a node is unresponsive, this doesn't mean it isn't accessing your data. The only way to be 100% sure that your data is safe is to use STONITH, so we can be certain that the node is truly offline before allowing the data to be accessed from another node. STONITH also has a role to play in the event that a clustered service cannot be stopped. In this case, the cluster uses STONITH to force the whole node offline, thereby making it safe to start the service elsewhere. == What STONITH Device Should You Use == It is crucial that the STONITH device can allow the cluster to differentiate between a node failure and a network one. The biggest mistake people make in choosing a STONITH device is to use a remote power switch (such as many on-board IPMI controllers) that shares power with the node it controls. In such cases, the cluster cannot be sure if the node is really offline, or active and suffering from a network fault. Likewise, any device that relies on the machine being active (such as SSH-based "devices" used during testing) is inappropriate. == Configuring STONITH == -ifdef::pcs[] -. Find the correct driver: +pcs stonith list+ - -. Find the parameters associated with the device: +pcs stonith describe + - -. Create a local config to make changes to +pcs cluster cib stonith_cfg+ +[NOTE] +=========== -. Create the fencing resource using +pcs -f stonith_cfg stonith create - [stonith device options]+ +Both configuration shells include functionality to simplify the +process below, particularly the step for deciding which parameters are +required. However, since this document deals only with core +components, you should refer to the Stonith chapter of +Clusters from +Scratch+ for those details. -. Set stonith-enable to true. +pcs -f stonith_cfg property set stonith-enabled=true+ -endif::pcs[] +=========== -ifdef::crmsh[] . Find the correct driver: +stonith_admin --list-installed+ -. Since every device is different, the parameters needed to configure - it will vary. To find out the parameters associated with the device, - run: +stonith_admin --metadata --agent type+ +. Find the required parameters associated with the device: +stonith_admin --metadata --agent <agent name>+ - The output should be XML formatted text containing additional - parameter descriptions. We will endevor to make the output more - friendly in a later version. - -. Enter the shell crm Create an editable copy of the existing - configuration +cib new stonith+ Create a fencing resource containing a - primitive resource with a class of stonith, a type of type and a - parameter for each of the values returned in step 2: +configure - primitive ...+ -endif::crmsh[] +. Create a file called +stonith.xml+ containing a primitive resource + with a class of 'stonith', a type of <agent name> and a parameter + for each of the values returned in step 2. . If the device does not know how to fence nodes based on their uname, you may also need to set the special +pcmk_host_map+ parameter; see the sketch after this list and +man stonithd+ for details. . If the device does not support the list command, you may also need to set the special +pcmk_host_list+ and/or +pcmk_host_check+ parameters. See +man stonithd+ for details. . If the device does not expect the victim to be specified with the port parameter, you may also need to set the special +pcmk_host_argument+ parameter. See +man stonithd+ for details. -ifdef::crmsh[] -. Upload it into the CIB from the shell: +cib commit stonith+ -endif::crmsh[] +. Upload it into the CIB using cibadmin: +cibadmin -C -o resources --xml-file stonith.xml+ -ifdef::pcs[] -. Commit the new configuration. +pcs cluster push cib stonith_cfg+ -endif::pcs[] +. Set stonith-enabled to true. +crm_attribute -t crm_config -n stonith-enabled -v true+ . Once the stonith resource is running, you can test it by executing +stonith_admin --reboot nodename+, though you might want to stop the cluster on that machine first.
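To illustrate the +pcmk_host_map+ step above (node names and port numbers here are invented), the mapping takes the form +node:port+, with multiple entries separated by semicolons:

[source,Bash]
----
pcmk_host_map="pcmk-1:1;pcmk-2:2"
----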
-== Example == +=== Example === Assuming we have a chassis containing four nodes and an IPMI device active on 10.0.0.1, then we would choose the fence_ipmilan driver in step 2 and obtain the following list of parameters: .Obtaining a list of STONITH Parameters -ifdef::pcs[] -[source,Bash] ---- -# pcs stonith describe fence_ipmilan -Stonith options for: fence_ipmilan - auth: IPMI Lan Auth type (md5, password, or none) - ipaddr: IPMI Lan IP to talk to - passwd: Password (if required) to control power on IPMI device - passwd_script: Script to retrieve password (if required) - lanplus: Use Lanplus - login: Username/Login (if required) to control power on IPMI device - action: Operation to perform. Valid operations: on, off, reboot, status, list, diag, monitor or metadata - timeout: Timeout (sec) for IPMI operation - cipher: Ciphersuite to use (same as ipmitool -C parameter) - method: Method to fence (onoff or cycle) - power_wait: Wait X seconds after on/off operation - delay: Wait X seconds before fencing is started - privlvl: Privilege level on IPMI device - verbose: Verbose mode ---- -endif::pcs[] - -ifdef::crmsh[] [source,C] ---- # stonith_admin --metadata -a fence_ipmilan ---- + [source,XML] ----
<?xml version="1.0" ?>
<resource-agent name="fence_ipmilan" shortdesc="Fence agent for IPMI over LAN">
<longdesc>
fence_ipmilan is an I/O Fencing agent which can be used with machines controlled by IPMI. This agent calls support software using ipmitool (http://ipmitool.sf.net/).

To use fence_ipmilan with HP iLO 3 you have to enable lanplus option (lanplus / -P) and increase wait after operation to 4 seconds (power_wait=4 / -T 4)
</longdesc>
<parameters>
  <parameter name="auth"><shortdesc lang="en">IPMI Lan Auth type (md5, password, or none)</shortdesc></parameter>
  <parameter name="ipaddr"><shortdesc lang="en">IPMI Lan IP to talk to</shortdesc></parameter>
  <parameter name="passwd"><shortdesc lang="en">Password (if required) to control power on IPMI device</shortdesc></parameter>
  <parameter name="passwd_script"><shortdesc lang="en">Script to retrieve password (if required)</shortdesc></parameter>
  <parameter name="lanplus"><shortdesc lang="en">Use Lanplus</shortdesc></parameter>
  <parameter name="login"><shortdesc lang="en">Username/Login (if required) to control power on IPMI device</shortdesc></parameter>
  <parameter name="action"><shortdesc lang="en">Operation to perform. Valid operations: on, off, reboot, status, list, diag, monitor or metadata</shortdesc></parameter>
  <parameter name="timeout"><shortdesc lang="en">Timeout (sec) for IPMI operation</shortdesc></parameter>
  <parameter name="cipher"><shortdesc lang="en">Ciphersuite to use (same as ipmitool -C parameter)</shortdesc></parameter>
  <parameter name="method"><shortdesc lang="en">Method to fence (onoff or cycle)</shortdesc></parameter>
  <parameter name="power_wait"><shortdesc lang="en">Wait X seconds after on/off operation</shortdesc></parameter>
  <parameter name="delay"><shortdesc lang="en">Wait X seconds before fencing is started</shortdesc></parameter>
  <parameter name="verbose"><shortdesc lang="en">Verbose mode</shortdesc></parameter>
</parameters>
<actions>
  <action name="on" /> <action name="off" /> <action name="reboot" /> <action name="status" /> <action name="list" /> <action name="diag" /> <action name="monitor" /> <action name="metadata" />
</actions>
</resource-agent>
---- -endif::crmsh[] from which we would create a STONITH resource fragment that might look -like this +like this: .Sample STONITH Resource -ifdef::pcs[] -[source,Bash] ----- -# pcs cluster cib stonith_cfg -# pcs -f stonith_cfg stonith create impi-fencing fence_ipmilan \ - pcmk_host_list="pcmk-1 pcmk-2" ipaddr=10.0.0.1 login=testuser \ - passwd=acd123 op monitor interval=60s -# pcs -f stonith_cfg stonith - impi-fencing (stonith:fence_ipmilan) Stopped ----- -endif::pcs[] - -ifdef::crmsh[] -[source,Bash] +[source,XML] ---- -# crm -crm(live)# cib new stonith -INFO: stonith shadow CIB created -crm(stonith)# configure primitive impi-fencing stonith::fence_ipmilan \ - params pcmk_host_list="pcmk-1 pcmk-2" ipaddr=10.0.0.1 login=testuser passwd=abc123 \ - op monitor interval="60s"
+<primitive id="ipmi-fencing" class="stonith" type="fence_ipmilan" >
+ <instance_attributes id="ipmi-fencing-params" >
+  <nvpair id="ipmi-fencing-pcmk_host_list" name="pcmk_host_list" value="pcmk-1 pcmk-2" />
+  <nvpair id="ipmi-fencing-ipaddr" name="ipaddr" value="10.0.0.1" />
+  <nvpair id="ipmi-fencing-login" name="login" value="testuser" />
+  <nvpair id="ipmi-fencing-passwd" name="passwd" value="abc123" />
+ </instance_attributes>
+ <operations >
+  <op id="ipmi-fencing-monitor-60s" interval="60s" name="monitor" />
+ </operations>
+</primitive>
---- -endif::crmsh[] And finally, since we disabled it earlier, we need to re-enable STONITH. -At this point we should have the following configuration. - -ifdef::pcs[] -[source,Bash] ----- -# pcs -f stonith_cfg property set stonith-enabled=true -# pcs -f stonith_cfg property -dc-version: 1.1.8-1.el7-60a19ed12fdb4d5c6a6b6767f52e5391e447fec0 -cluster-infrastructure: corosync -no-quorum-policy: ignore -stonith-enabled: true ----- -endif::pcs[] - -Now push the configuration into the cluster.
- -ifdef::pcs[] -[source,C] ----- -# pcs cluster push cib stonith_cfg ----- -endif::pcs[] -ifdef::crmsh[] [source,Bash] ---- -crm(stonith)# configure property stonith-enabled="true" -crm(stonith)# configure show -node pcmk-1 -node pcmk-2 -primitive WebData ocf:linbit:drbd \ - params drbd_resource="wwwdata" \ - op monitor interval="60s" -primitive WebFS ocf:heartbeat:Filesystem \ - params device="/dev/drbd/by-res/wwwdata" directory="/var/www/html" fstype="gfs2" -primitive WebSite ocf:heartbeat:apache \ - params configfile="/etc/httpd/conf/httpd.conf" \ - op monitor interval="1min" -primitive ClusterIP ocf:heartbeat:IPaddr2 \ - params ip="192.168.122.101" cidr_netmask="32" clusterip_hash="sourceip" \ - op monitor interval="30s" -primitive ipmi-fencing stonith::fence_ipmilan \ - params pcmk_host_list="pcmk-1 pcmk-2" ipaddr=10.0.0.1 login=testuser passwd=abc123 \ - op monitor interval="60s" -ms WebDataClone WebData \ - meta master-max="2" master-node-max="1" clone-max="2" clone-node-max="1" notify="true" -clone WebFSClone WebFS -clone WebIP ClusterIP \ - meta globally-unique="true" clone-max="2" clone-node-max="2" -clone WebSiteClone WebSite -colocation WebSite-with-WebFS inf: WebSiteClone WebFSClone -colocation fs_on_drbd inf: WebFSClone WebDataClone:Master -colocation website-with-ip inf: WebSiteClone WebIP -order WebFS-after-WebData inf: WebDataClone:promote WebFSClone:start -order WebSite-after-WebFS inf: WebFSClone WebSiteClone -order apache-after-ip inf: WebIP WebSiteClone -property $id="cib-bootstrap-options" \ - dc-version="1.1.5-bdd89e69ba545404d02445be1f3d72e6a203ba2f" \ - cluster-infrastructure="openais" \ - expected-quorum-votes="2" \ - stonith-enabled="true" \ - no-quorum-policy="ignore" -rsc_defaults $id="rsc-options" \ - resource-stickiness="100" -crm(stonith)# cib commit stonith -INFO: commited 'stonith' shadow CIB to the cluster -crm(stonith)# quit -bye +# crm_attribute -t crm_config -n stonith-enabled -v true ---- -endif::crmsh[]
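If you want to confirm the result, the current value can be read back with the same tool's query mode (`-G`):

[source,Bash]
----
# crm_attribute -t crm_config -n stonith-enabled -G
----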