= Converting Between Cluster And Remote Nodes =
In a Pacemaker cluster, nodes can be full cluster nodes running Corosync, or lightweight Pacemaker Remote nodes. This page describes how to convert a cluster node to a remote node and vice versa, whether using the low-level Pacemaker command-line interface (CLI), `pcs`, or the `crm` shell.
Replace `$NODE_NAME` with the name of the node being converted in any commands below.
== Cluster node to remote node ==
1. **Install Pacemaker Remote** on the node to be converted if not already present (this varies by platform, for example `yum install pacemaker-remote`).
1. If you want resources on the node to continue running during the transition, either unmanage them, or put the entire cluster into **maintenance mode**.
* //CLI:// `crm_attribute --name=maintenance-mode --update=true`
* //pcs:// `pcs property set maintenance-mode=true`
* //crm:// `crm configure property maintenance-mode=true`
1. If you want to maintain any **permanent node attributes**, make a note of them (the query sketched after this list is one way). They will have to be manually set after the conversion.
1. **Stop the cluster** software on the node to be converted.
* //CLI:// `systemctl stop pacemaker corosync` (add `sbd` if used)
* //pcs:// `pcs cluster stop`
1. **Remove the node** from Corosync on all cluster nodes, then remove the node from Pacemaker.
* //CLI and crm:// Edit `/etc/corosync/corosync.conf` on all nodes to remove the node entry, run `corosync-cfgtool -R` on each active cluster node, then run `crm_node --force --remove=$NODE_NAME` on any active cluster node.
* //pcs:// `pcs cluster node remove $NODE_NAME`
1. Generate a Pacemaker Remote authentication key and copy it to all nodes, if not already present. Start the Pacemaker Remote daemon on the node to be converted, and enable it to start at boot if desired (which allows the cluster to reconnect to it after it is fenced). Then **add an ocf:pacemaker:remote resource** for the node to the cluster configuration. (The commands below use default values for parameters such as port, but they can easily be modified to use any desired values; see the relevant man pages for details. A consolidated, multi-line version of these commands appears after this list.)
* //CLI:// Run `dd if=/dev/random bs=512 count=1 of=/etc/pacemaker/authkey` on any one node, and `scp` that file to all nodes. Then run `systemctl enable --now pacemaker_remote` on the node to be converted, and run `cibadmin --create --scope resources -X '<primitive id="$NODE_NAME" class="ocf" provider="pacemaker" type="remote"><operations><op id="$NODE_NAME-monitor" name="monitor" interval="30s" timeout="15s"/></operations></primitive>'` on any cluster node.
* //pcs:// `pcs cluster node add-remote $NODE_NAME`
1. The remote node will be named the same as the resource ID. If this is different from the name used for the node as a cluster node, other parts of the configuration (such as constraints or rules) may need to be updated to use the remote node name. Any node attributes saved earlier can be set now.
1. Take the cluster out of maintenance mode.
* //CLI:// `crm_attribute --name=maintenance-mode --update=false`
* //pcs:// `pcs property set maintenance-mode=false`
* //crm:// `crm configure property maintenance-mode=false`
1. The cluster should start the remote connection and detect any resources still active on the node; the check sketched below is one way to confirm this.
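For step 3, one way to capture the permanent node attributes is to dump the `<nodes>` section of the CIB before stopping the cluster. A minimal sketch using the standard `cibadmin` tool:

```
# Permanent node attributes are stored in the CIB's <nodes> section,
# as <nvpair> entries inside the node's <instance_attributes> block.
# Note any such entries for the node being converted.
cibadmin --query --scope nodes
```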
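Here are the step 6 CLI commands expanded into a single sketch. The host names `node1`, `node2`, and `node3` are hypothetical stand-ins; adjust them for your cluster, and run the key-distribution part from a node that can `ssh` to the others:

```
NODE_NAME=node3   # hypothetical name of the node being converted

# 1. Create the authentication key on one cluster node and copy it to
#    every node, including the one being converted:
mkdir -p /etc/pacemaker
dd if=/dev/random bs=512 count=1 of=/etc/pacemaker/authkey
for host in node1 node2 "$NODE_NAME"; do
    ssh "$host" mkdir -p /etc/pacemaker
    scp /etc/pacemaker/authkey "$host:/etc/pacemaker/authkey"
done

# 2. Start pacemaker-remoted on the node being converted, now and at boot:
ssh "$NODE_NAME" systemctl enable --now pacemaker_remote

# 3. Define the connection resource from any cluster node. Double quotes
#    (with the inner quotes escaped) let the shell expand $NODE_NAME:
cibadmin --create --scope resources -X "
<primitive id=\"$NODE_NAME\" class=\"ocf\" provider=\"pacemaker\" type=\"remote\">
  <operations>
    <op id=\"$NODE_NAME-monitor\" name=\"monitor\" interval=\"30s\" timeout=\"15s\"/>
  </operations>
</primitive>"
```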
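Once maintenance mode is off, a quick way to confirm the conversion from any cluster node:

```
# The remote connection resource should be Started, and the converted
# node should appear as an online remote node in the status output:
crm_mon -1
```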
== Remote node to cluster node ==
1. **Install full Pacemaker** on the node to be converted if not already present (this varies by platform, for example `yum install pacemaker`).
1. If you want resources on the node to continue running during the transition, **unmanage** them. (Putting the entire cluster into maintenance mode is not recommended, because the remote connection itself must be stopped.)
* //CLI:// For each affected resource, set the `is-managed` meta-attribute to `false`, for example with `crm_resource --resource $RESOURCE --meta --set-parameter is-managed --parameter-value false`, or by editing the CIB with `cibadmin --modify`.
* //pcs:// For each affected resource, run `pcs resource unmanage $RESOURCE`, replacing `$RESOURCE` with the resource ID.
* //crm:// For each affected resource, run `crm resource unmanage $RESOURCE`, replacing `$RESOURCE` with the resource ID.
1. If you want to maintain any **permanent node attributes**, make a note of them. They will have to be manually set after the conversion.
1. **Remove the remote node's ocf:pacemaker:remote resource** and any other references to the node (including permanent node attributes, constraints, etc.) from the configuration, remove the node from Pacemaker, and stop Pacemaker Remote on the node (ensuring it is not enabled to start at boot).
* //CLI:// Run `cibadmin --delete` for each reference, then run `crm_node --force --remove=$NODE_NAME` on any cluster node, and finally run `systemctl disable --now pacemaker_remote` on the node being converted.
* //pcs:// `pcs cluster node delete-remote $NODE_NAME`
1. Add the node to Corosync on all nodes.
* //CLI:// Edit `/etc/corosync/corosync.conf` on one cluster node to add the new node (a sketch of the `nodelist` entry follows this list), `scp` it to all other cluster nodes and to the node being converted, then run `corosync-cfgtool -R` on each active cluster node. Finally, run `systemctl start pacemaker` on the node being converted (you can also run `systemctl enable pacemaker` to start Pacemaker at boot if desired).
* //pcs:// Run `pcs cluster node add $NODE_NAME --start` on any active cluster node. Add `--enable` if you want cluster services to start at boot.
1. Once the node has joined (see the check after this list), **re-manage** any resources you unmanaged earlier, for example with `pcs resource manage $RESOURCE` or `crm resource manage $RESOURCE`, and set any permanent node attributes noted earlier.
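For the `corosync.conf` edit above, the new node needs its own `node` entry in the `nodelist` section. A sketch, where the existing members and all `nodeid` values are illustrative:

```
nodelist {
    node {
        ring0_addr: node1
        name: node1
        nodeid: 1
    }
    node {
        ring0_addr: node2
        name: node2
        nodeid: 2
    }
    node {
        ring0_addr: $NODE_NAME
        name: $NODE_NAME
        nodeid: 3
    }
}
```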
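After Pacemaker starts on the converted node, you can confirm it joined as a full cluster member from any node:

```
# The converted node should now be listed as a full corosync member
# (with a node ID) rather than as a remote node:
crm_node -l
crm_mon -1
```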