diff --git a/doc/Pacemaker_Explained/en-US/Ap-Changes.txt b/doc/Pacemaker_Explained/en-US/Ap-Changes.txt deleted file mode 100644 index 3626753de6..0000000000 --- a/doc/Pacemaker_Explained/en-US/Ap-Changes.txt +++ /dev/null @@ -1,63 +0,0 @@ -[appendix] - - -== What Changed in 1.0 == - -=== New === - -* Failure timeouts. See <> -* New section for resource and operation defaults. See <> and <> -* Tool for making offline configuration changes. See <> -* +Rules, instance_attributes, meta_attributes+ and sets of operations can be defined once and referenced in multiple places. See <> -* The CIB now accepts XPath-based create/modify/delete operations. See the pass:[cibadmin] help text. -* Multi-dimensional colocation and ordering constraints. See <> and <> -* The ability to connect to the CIB from non-cluster machines. See <> -* Allow recurring actions to be triggered at known times. See <> - - -=== Changed === - -* Syntax -** All resource and cluster options now use dashes (-) instead of underscores (_) -** +master_slave+ was renamed to +master+ -** The +attributes+ container tag was removed -** The operation field +pre-req+ has been renamed +requires+ -** All operations must have an +interval+, +start+/+stop+ must have it set to zero -* The +stonith-enabled+ option now defaults to true. -* The cluster will refuse to start resources if +stonith-enabled+ is true (or unset) and no STONITH resources have been defined -* The attributes of colocation and ordering constraints were renamed for clarity. See <> and <> -* +resource-failure-stickiness+ has been replaced by +migration-threshold+. See <> -* The parameters for command-line tools have been made consistent -* Switched to 'RelaxNG' schema validation and 'libxml2' parser -** id fields are now XML IDs which have the following limitations: -*** id's cannot contain colons (:) -*** id's cannot begin with a number -*** id's must be globally unique (not just unique for that tag) -** Some fields (such as those in constraints that refer to resources) are IDREFs. -+ -This means that they must reference existing resources or objects in -order for the configuration to be valid. Removing an object which is -referenced elsewhere will therefore fail. -+ -** The CIB representation, from which a MD5 digest is calculated to verify CIBs on the nodes, has changed. -+ -This means that every CIB update will require a full refresh on any -upgraded nodes until the cluster is fully upgraded to 1.0. This will -result in significant performance degradation and it is therefore -highly inadvisable to run a mixed 1.0/0.6 cluster for any longer than -absolutely necessary. -+ -* Ping node information no longer needs to be added to _ha.cf_. -+ -Simply include the lists of hosts in your ping resource(s). - - -=== Removed === - - -* Syntax -** It is no longer possible to set resource meta options as top-level - attributes. Use meta attributes instead. -** Resource and operation defaults are no longer read from - +crm_config+. See <> and - <> instead. diff --git a/doc/Pacemaker_Explained/en-US/Ap-Upgrade-Config.txt b/doc/Pacemaker_Explained/en-US/Ap-Upgrade-Config.txt deleted file mode 100644 index 7f1eb06f37..0000000000 --- a/doc/Pacemaker_Explained/en-US/Ap-Upgrade-Config.txt +++ /dev/null @@ -1,130 +0,0 @@ -[appendix] - -== Upgrading the Configuration == - -This process was originally written for the upgrade from 0.6.'x' to 1.'y', -but the concepts should apply for any upgrade involving a change in -the XML schema version. 
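A quick way to tell whether such a schema change applies to your cluster is to look at the +validate-with+ attribute on the live CIB. A minimal sketch, assuming a running cluster with `cibadmin` in the PATH (on very old, pre-upgrade CIBs the attribute may be missing entirely):

----
# Print the schema the live CIB currently validates against; a 0.6-era value
# (or no validate-with attribute at all) means the configuration upgrade
# described in this appendix is still pending.
cibadmin --query | grep -o 'validate-with="[^"]*"'
----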
- -indexterm:[Upgrading the Configuration] -indexterm:[Configuration,Upgrading] - -=== Perform the upgrade === - -==== Upgrade the software ==== - -Refer to the appendix: <> - -==== Upgrade the Configuration ==== - -As XML is not the friendliest of languages, it is common for cluster -administrators to have scripted some of their activities. In such -cases, it is likely that those scripts will not work with the new XML -syntax. - -In order to support such environments, it is actually possible to -continue using the old XML syntax. - -The downside is, however, that not all the new features will be -available and there is a performance impact since the cluster must do -a non-persistent configuration upgrade before each transition. So -while using the old syntax is possible, it is not advisable to -continue using it indefinitely. - -Even if you wish to continue using the old syntax, it is advisable to -follow the upgrade procedure (except for the last step) to ensure that the -cluster is able to use your existing configuration (since it will perform much -the same task internally). - -. Create a shadow copy to work with -+ ------ -# crm_shadow --create upgrade06 ------ -. Verify the configuration is valid indexterm:[Configuration,Verify]indexterm:[Verify,Configuration] -+ ------ -# crm_verify --live-check ------ -. Fix any errors or warnings -. Perform the upgrade: -+ ------ -# cibadmin --upgrade ------ -. If this step fails, there are three main possibilities: -.. The configuration was not valid to start with - go back to step 2 -.. The transformation failed - report a bug or mailto:pacemaker@oss.clusterlabs.org?subject=Transformation%20failed%20during%20upgrade[email the project] -.. The transformation was successful but produced an invalid result footnote:[ -The most common reason is ID values being repeated or invalid. Pacemaker 1.0 is much stricter regarding this type of validation. -] -+ -If the result of the transformation is invalid, you may see a number of errors -from the validation library. If these are not helpful, visit the -http://clusterlabs.org/wiki/Validation_FAQ[Validation FAQ wiki page] and/or try -the procedure described below under <> -+ -. Check the changes -+ ------ -# crm_shadow --diff ------ -+ -If at this point there is anything about the upgrade that you wish to fine-tune (for example, to change some of the automatic IDs) now is the time to do so. Since the shadow configuration is not in use by the cluster, it is safe to edit the file manually: -+ ------ -# crm_shadow --edit ------ -+ -This will open the configuration in your favorite editor (whichever is -specified by the standard *$EDITOR* environment variable) -+ -. Preview how the cluster will react: -+ ------- -# crm_simulate --live-check --save-dotfile upgrade06.dot -S -# graphviz upgrade06.dot ------- -+ -Verify that either no resource actions will occur or that you are -happy with any that are scheduled. If the output contains actions you -do not expect (possibly due to changes to the score calculations), you -may need to make further manual changes. See -<> for further details on how to interpret -the output of `crm_simulate` and `graphviz`. -+ -. Upload the changes -+ ------ -# crm_shadow --commit upgrade06 --force ------ -+ -In the unlikely event this step fails, please report a bug. - -[[s-upgrade-config-manual]] -==== Manually Upgrading the Configuration ==== - -indexterm:[Configuration,Upgrade manually] -It is also possible to perform the configuration upgrade steps manually: - -. 
Locate the +upgrade06.xsl+ conversion script provided with the source code - (the https://github.com/ClusterLabs/pacemaker/tree/master/xml/upgrade06.xsl[latest version] is available via - git). - -. Convert the XML blob: indexterm:[XML,Convert] -+ ------ -# xsltproc /path/to/upgrade06.xsl config06.xml > config10.xml ------ -+ -. Locate the +pacemaker.rng+ script. -. Check the XML validity: indexterm:[Validate Configuration]indexterm:[Configuration,Validate XML] -+ ----- -# xmllint --relaxng /path/to/pacemaker.rng config10.xml ----- - -The advantage of this method is that it can be performed without the -cluster running and any validation errors should be more informative -(despite being generated by the same library!) since they include line -numbers. diff --git a/doc/Pacemaker_Explained/en-US/Ap-Upgrade.txt b/doc/Pacemaker_Explained/en-US/Ap-Upgrade.txt index 66f5cc5936..34632d7e5b 100644 --- a/doc/Pacemaker_Explained/en-US/Ap-Upgrade.txt +++ b/doc/Pacemaker_Explained/en-US/Ap-Upgrade.txt @@ -1,193 +1,386 @@ [appendix] +== Upgrading a Pacemaker Cluster == + [[ap-upgrade]] -== Upgrading Cluster Software == +=== Upgrading Cluster Software === There will always be an upgrade path from any pacemaker 1._x_ release to any other 1._y_ release. Consult the documentation for your messaging layer (Heartbeat or Corosync) to see whether upgrading them to a newer version is also supported. There are three approaches to upgrading your cluster software: * Complete Cluster Shutdown * Rolling (node by node) * Disconnect and Reattach Each method has advantages and disadvantages, some of which are listed in the table below, and you should choose the one most appropriate to your needs. .Upgrade Methods [width="95%",cols="6*",options="header",align="center"] |========================================================= |Type |Available between all software versions |Service Outage During Upgrade |Service Recovery During Upgrade |Exercises Failover Logic/Configuration |Allows change of cluster stack type indexterm:[Cluster,Switching between Stacks] indexterm:[Changing Cluster Stack] footnote:[For example, switching from Heartbeat to Corosync.] |Shutdown indexterm:[Upgrade,Shutdown] indexterm:[Shutdown Upgrade] |yes |always |N/A |no |yes |Rolling indexterm:[Upgrade,Rolling] indexterm:[Rolling Upgrade] |no |always |yes |yes |no |Reattach indexterm:[Upgrade,Reattach] indexterm:[Reattach Upgrade] |yes |only due to failure |no |no |yes |========================================================= -=== Complete Cluster Shutdown === +==== Complete Cluster Shutdown ==== In this scenario, one shuts down all cluster nodes and resources, then upgrades all the nodes before restarting the cluster. . On each node: .. Shutdown the cluster software (pacemaker and the messaging layer). .. Upgrade the Pacemaker software. This may also include upgrading the messaging layer and/or the underlying operating system. .. Check the configuration manually or with the `crm_verify` tool if available. . On each node: .. Start the cluster software. The messaging layer can be either Corosync or Heartbeat and does not need to be the same one before the upgrade. -=== Rolling (node by node) === +==== Rolling (node by node) ==== In this scenario, each node is removed from the cluster, upgraded and then brought back online until all nodes are running the newest version. Rolling upgrades should always be possible for pacemaker versions 1.0.0 and later. On each node: . 
Put the node into standby mode, and wait for any active resources to be moved cleanly to another node. . Shutdown the cluster software (pacemaker and the messaging layer) on the node. . Upgrade the Pacemaker software. This may also include upgrading the messaging layer and/or the underlying operating system. . If this is the first node to be upgraded, check the configuration manually or with the `crm_verify` tool if available. . Start the messaging layer. This must be the same messaging layer (Corosync or Heartbeat) that the rest of the cluster is using. Upgrading the messaging layer may also be possible; consult the documentation for those projects to see whether the two versions will be compatible. [NOTE] ==== Rolling upgrades were not always possible with older heartbeat and pacemaker versions. The table below shows which versions were compatible during rolling upgrades. Rolling upgrades that cross compatibility boundaries must be performed in multiple steps (for example, upgrading heartbeat 2.0.6 to heartbeat 2.1.3, and then upgrading again to pacemaker 0.6.6). Rolling upgrades from pacemaker 0._x_ to 1._y_ are not possible. .Version Compatibility Table [width="95%",cols="2*",options="header",align="center"] |========================================================= |Version being Installed |Oldest Compatible Version |Pacemaker 1.0.x |Pacemaker 1.0.0 |Pacemaker 0.7.x |Pacemaker 0.6 or Heartbeat 2.1.3 |Pacemaker 0.6.x |Heartbeat 2.0.8 |Heartbeat 2.1.3 (or less) |Heartbeat 2.0.4 |Heartbeat 2.0.4 (or less) |Heartbeat 2.0.0 |Heartbeat 2.0.0 |None. Use an alternate upgrade strategy. |========================================================= ==== -=== Disconnect and Reattach === +==== Disconnect and Reattach ==== The reattach method is a variant of a complete cluster shutdown, where the resources are left active and get re-detected when the cluster is restarted. . Tell the cluster to stop managing services. This is required to allow the services to remain active after the cluster shuts down. + ---- # crm_attribute -t crm_config -n is-managed-default -v false ---- . For any resource that has a value for +is-managed+, make sure it is set to +false+ so that the cluster will not stop it (replacing $rsc_id appropriately): + ---- # crm_resource -t primitive -r $rsc_id -p is-managed -v false ---- . On each node: .. Shutdown the cluster software (pacemaker and the messaging layer). .. Upgrade the Pacemaker software. This may also include upgrading the messaging layer and/or the underlying operating system. . Check the configuration manually or with the `crm_verify` tool if available. . On each node: .. Start the cluster software. The messaging layer can be either Corosync or Heartbeat and does not need to be the same one as before the upgrade. . Verify that the cluster re-detected all resources correctly. . Allow the cluster to resume managing resources again: + ---- # crm_attribute -t crm_config -n is-managed-default -v true ---- . For any resource that has a value for +is-managed+, reset it to +true+ (so the cluster can recover the service if it fails) if desired: + ---- # crm_resource -t primitive -r $rsc_id -p is-managed -v true ---- [NOTE] The oldest version of the CRM to support this upgrade type was in Heartbeat 2.0.4. [IMPORTANT] =========== Always check your existing configuration is still compatible with the version you are installing before starting the cluster. 
=========== + +=== Upgrading the Configuration === + +This process was originally written for the upgrade from 0.6.'x' to 1.'y', +but the concepts should apply for any upgrade involving a change in +the XML schema version. + +indexterm:[Upgrading the Configuration] +indexterm:[Configuration,Upgrading] + +==== Perform the upgrade ==== + +===== Upgrade the software ===== + +Refer to the appendix: <> + +===== Upgrade the Configuration ===== + +As XML is not the friendliest of languages, it is common for cluster +administrators to have scripted some of their activities. In such +cases, it is likely that those scripts will not work with the new XML +syntax. + +In order to support such environments, it is actually possible to +continue using the old XML syntax. + +The downside is, however, that not all the new features will be +available and there is a performance impact since the cluster must do +a non-persistent configuration upgrade before each transition. So +while using the old syntax is possible, it is not advisable to +continue using it indefinitely. + +Even if you wish to continue using the old syntax, it is advisable to +follow the upgrade procedure (except for the last step) to ensure that the +cluster is able to use your existing configuration (since it will perform much +the same task internally). + +. Create a shadow copy to work with ++ +----- +# crm_shadow --create upgrade06 +----- +. Verify the configuration is valid indexterm:[Configuration,Verify]indexterm:[Verify,Configuration] ++ +----- +# crm_verify --live-check +----- +. Fix any errors or warnings +. Perform the upgrade: ++ +----- +# cibadmin --upgrade +----- +. If this step fails, there are three main possibilities: +.. The configuration was not valid to start with - go back to step 2 +.. The transformation failed - report a bug or mailto:pacemaker@oss.clusterlabs.org?subject=Transformation%20failed%20during%20upgrade[email the project] +.. The transformation was successful but produced an invalid result footnote:[ +The most common reason is ID values being repeated or invalid. Pacemaker 1.0 is much stricter regarding this type of validation. +] ++ +If the result of the transformation is invalid, you may see a number of errors +from the validation library. If these are not helpful, visit the +http://clusterlabs.org/wiki/Validation_FAQ[Validation FAQ wiki page] and/or try +the procedure described below under <> ++ +. Check the changes ++ +----- +# crm_shadow --diff +----- ++ +If at this point there is anything about the upgrade that you wish to fine-tune (for example, to change some of the automatic IDs) now is the time to do so. Since the shadow configuration is not in use by the cluster, it is safe to edit the file manually: ++ +----- +# crm_shadow --edit +----- ++ +This will open the configuration in your favorite editor (whichever is +specified by the standard *$EDITOR* environment variable) ++ +. Preview how the cluster will react: ++ +------ +# crm_simulate --live-check --save-dotfile upgrade06.dot -S +# graphviz upgrade06.dot +------ ++ +Verify that either no resource actions will occur or that you are +happy with any that are scheduled. If the output contains actions you +do not expect (possibly due to changes to the score calculations), you +may need to make further manual changes. See +<> for further details on how to interpret +the output of `crm_simulate` and `graphviz`. ++ +. 
Upload the changes ++ +----- +# crm_shadow --commit upgrade06 --force +----- ++ +In the unlikely event this step fails, please report a bug. + +[[s-upgrade-config-manual]] +===== Manually Upgrading the Configuration ===== + +indexterm:[Configuration,Upgrade manually] +It is also possible to perform the configuration upgrade steps manually: + +. Locate the +upgrade06.xsl+ conversion script provided with the source code + (the https://github.com/ClusterLabs/pacemaker/tree/master/xml/upgrade06.xsl[latest version] is available via + git). + +. Convert the XML blob: indexterm:[XML,Convert] ++ +----- +# xsltproc /path/to/upgrade06.xsl config06.xml > config10.xml +----- ++ +. Locate the +pacemaker.rng+ script. +. Check the XML validity: indexterm:[Validate Configuration]indexterm:[Configuration,Validate XML] ++ +---- +# xmllint --relaxng /path/to/pacemaker.rng config10.xml +---- + +The advantage of this method is that it can be performed without the +cluster running and any validation errors should be more informative +(despite being generated by the same library!) since they include line +numbers. + + +=== What Changed in 1.0 === + +==== New ==== + +* Failure timeouts. See <> +* New section for resource and operation defaults. See <> and <> +* Tool for making offline configuration changes. See <> +* +Rules, instance_attributes, meta_attributes+ and sets of operations can be defined once and referenced in multiple places. See <> +* The CIB now accepts XPath-based create/modify/delete operations. See the pass:[cibadmin] help text. +* Multi-dimensional colocation and ordering constraints. See <> and <> +* The ability to connect to the CIB from non-cluster machines. See <> +* Allow recurring actions to be triggered at known times. See <> + + +==== Changed ==== + +* Syntax +** All resource and cluster options now use dashes (-) instead of underscores (_) +** +master_slave+ was renamed to +master+ +** The +attributes+ container tag was removed +** The operation field +pre-req+ has been renamed +requires+ +** All operations must have an +interval+, +start+/+stop+ must have it set to zero +* The +stonith-enabled+ option now defaults to true. +* The cluster will refuse to start resources if +stonith-enabled+ is true (or unset) and no STONITH resources have been defined +* The attributes of colocation and ordering constraints were renamed for clarity. See <> and <> +* +resource-failure-stickiness+ has been replaced by +migration-threshold+. See <> +* The parameters for command-line tools have been made consistent +* Switched to 'RelaxNG' schema validation and 'libxml2' parser +** id fields are now XML IDs which have the following limitations: +*** id's cannot contain colons (:) +*** id's cannot begin with a number +*** id's must be globally unique (not just unique for that tag) +** Some fields (such as those in constraints that refer to resources) are IDREFs. ++ +This means that they must reference existing resources or objects in +order for the configuration to be valid. Removing an object which is +referenced elsewhere will therefore fail. ++ +** The CIB representation, from which a MD5 digest is calculated to verify CIBs on the nodes, has changed. ++ +This means that every CIB update will require a full refresh on any +upgraded nodes until the cluster is fully upgraded to 1.0. This will +result in significant performance degradation and it is therefore +highly inadvisable to run a mixed 1.0/0.6 cluster for any longer than +absolutely necessary. 
++
+* Ping node information no longer needs to be added to _ha.cf_.
++
+Simply include the lists of hosts in your ping resource(s).
+
+
+==== Removed ====
+
+
+* Syntax
+** It is no longer possible to set resource meta options as top-level
+   attributes. Use meta attributes instead.
+** Resource and operation defaults are no longer read from
+   +crm_config+. See <> and
+   <> instead.
diff --git a/doc/Pacemaker_Explained/en-US/Pacemaker_Explained.xml b/doc/Pacemaker_Explained/en-US/Pacemaker_Explained.xml
index 991e002a3a..52f9236d8d 100644
--- a/doc/Pacemaker_Explained/en-US/Pacemaker_Explained.xml
+++ b/doc/Pacemaker_Explained/en-US/Pacemaker_Explained.xml
@@ -1,47 +1,45 @@
[Hunk body garbled in extraction: the XML markup was stripped, leaving only text content. The hunk removes two lines -- most likely the <xi:include> entries for the two appendix sources deleted by this patch -- and leaves the surrounding "Further Reading" context untouched: Project Website, Project Documentation, SUSE High Availability Guide, Heartbeat configuration, Corosync configuration.]
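The disconnect-and-reattach procedure earlier in this patch asks for +is-managed+ to be cleared on every resource that overrides the default before the cluster is shut down. A minimal sketch of that step, using only the commands already shown above; the resource names in the loop are placeholders for your own +$rsc_id+ values:

----
# Stop the cluster from managing services before the reattach upgrade.
crm_attribute -t crm_config -n is-managed-default -v false

# Clear any per-resource is-managed overrides as well; "my-ip" and
# "my-database" are example IDs only -- substitute your own resources.
for rsc_id in my-ip my-database; do
    crm_resource -t primitive -r "$rsc_id" -p is-managed -v false
done
----

After the upgrade, the same commands with +-v true+ hand control of the services back to the cluster, as described in the procedure above.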
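The shadow-CIB procedure in the new "Upgrading the Configuration" section can be run as a single session. A sketch of that session, assuming a shadow copy named +upgrade06+ and Graphviz installed to render the saved transition graph; the example uses Graphviz's `dot` command to produce an image (the listing above writes `graphviz`, but `dot` is the renderer typically shipped with Graphviz):

----
# crm_shadow --create opens a sub-shell; the commands below act on the
# shadow copy named "upgrade06", not on the live CIB.
crm_shadow --create upgrade06
crm_verify --live-check                  # fix any reported errors/warnings first
cibadmin --upgrade                       # transform the copy to the new schema
crm_shadow --diff                        # review the transformed result
crm_simulate --live-check --save-dotfile upgrade06.dot -S
dot -Tsvg upgrade06.dot -o upgrade06.svg # inspect the planned resource actions
crm_shadow --commit upgrade06 --force    # push the copy to the live cluster
----

Everything before the final commit operates on the shadow copy inside the sub-shell, so the earlier steps can be repeated and fine-tuned safely before anything reaches the running cluster.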