diff --git a/doc/sphinx/Clusters_from_Scratch/fencing.rst b/doc/sphinx/Clusters_from_Scratch/fencing.rst
index 25975e0eab..3abc9d6c84 100644
--- a/doc/sphinx/Clusters_from_Scratch/fencing.rst
+++ b/doc/sphinx/Clusters_from_Scratch/fencing.rst
@@ -1,224 +1,215 @@
.. index:: fencing

Configure Fencing
-----------------

What is Fencing?
################

Fencing protects your data from being corrupted, and your application from
becoming unavailable, due to unintended concurrent access by rogue nodes.

Just because a node is unresponsive doesn't mean it has stopped accessing your
data. The only way to be 100% sure that your data is safe, is to use fencing
to ensure that the node is truly offline before allowing the data to be
accessed from another node.

Fencing also has a role to play in the event that a clustered service cannot
be stopped. In this case, the cluster uses fencing to force the whole node
offline, thereby making it safe to start the service elsewhere.

Fencing is also known as STONITH, an acronym for "Shoot The Other Node In The
Head", since the most popular form of fencing is cutting a host's power.

In order to guarantee the safety of your data [#]_, fencing is enabled by
default.

.. NOTE::

    It is possible to tell the cluster not to use fencing, by setting the
    **stonith-enabled** cluster option to false:

    .. code-block:: none

        [root@pcmk-1 ~]# pcs property set stonith-enabled=false
        [root@pcmk-1 ~]# crm_verify -L

    However, this is completely inappropriate for a production cluster. It
    tells the cluster to simply pretend that failed nodes are safely powered
    off. Some vendors will refuse to support clusters that have fencing
    disabled. Even disabling it for a test cluster means you won't be able to
    test real failure scenarios.

.. index::
   single: fencing; device

Choose a Fence Device
#####################

The two broad categories of fence device are power fencing, which cuts off
power to the target, and fabric fencing, which cuts off the target's access to
some critical resource, such as a shared disk or access to the local network.

Power fencing devices include:

* Intelligent power switches
* IPMI
* Hardware watchdog device (alone, or in combination with shared storage used
  as a "poison pill" mechanism)

Fabric fencing devices include:

* Shared storage that can be cut off for a target host by another host (for
  example, an external storage device that supports SCSI-3 persistent
  reservations)
* Intelligent network switches

Using IPMI as a power fencing device may seem like a good choice. However, if
the IPMI shares power and/or network access with the host (such as most
onboard IPMI controllers), a power or network failure will cause both the host
and its fencing device to fail. The cluster will be unable to recover, and
must stop all resources to avoid a possible split-brain situation.

Likewise, any device that relies on the machine being active (such as
SSH-based "devices" sometimes used during testing) is inappropriate, because
fencing will be required when the node is completely unresponsive.

Configure the Cluster for Fencing
#################################

#. Install the fence agent(s). To see what packages are available, run
   ``yum search fence-``. Be sure to install the package(s) on all cluster
   nodes.

#. Configure the fence device itself to be able to fence your nodes and accept
   fencing requests. This includes any necessary configuration on the device
   and on the nodes, and any firewall or SELinux changes needed. Test the
   communication between the device and your nodes.

#. Find the name of the correct fence agent: ``pcs stonith list``

#. Find the parameters associated with the device:
   ``pcs stonith describe <AGENT_NAME>``

#. Create a local copy of the CIB: ``pcs cluster cib stonith_cfg``

#. Create the fencing resource: ``pcs -f stonith_cfg stonith create
   <STONITH_ID> <STONITH_DEVICE_TYPE> [STONITH_DEVICE_OPTIONS]``

   Any flags that do not take arguments, such as ``--ssl``, should be passed
   as ``ssl=1``.

#. Enable fencing in the cluster:
   ``pcs -f stonith_cfg property set stonith-enabled=true``

#. If the device does not know how to fence nodes based on their cluster node
   name, you may also need to set the special **pcmk_host_map** parameter. See
   ``man pacemaker-fenced`` for details (a short sketch follows this list).

#. If the device does not support the **list** command, you may also need to
   set the special **pcmk_host_list** and/or **pcmk_host_check** parameters.
   See ``man pacemaker-fenced`` for details.

#. If the device does not expect the victim to be specified with the **port**
   parameter, you may also need to set the special **pcmk_host_argument**
   parameter. See ``man pacemaker-fenced`` for details.

#. Commit the new configuration: ``pcs cluster cib-push stonith_cfg``

#. Once the fence device resource is running, test it (you might want to stop
   the cluster on that machine first): ``stonith_admin --reboot <NODENAME>``
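
If you do need **pcmk_host_map**, its value is a semicolon-separated list of
``name:port`` pairs, as described in the agent metadata shown later in this
chapter. As a minimal sketch (the device name ``my-fence-device`` and the plug
numbers here are hypothetical, not part of this example), the mapping could be
added to an existing fence device resource like this:

.. code-block:: none

    [root@pcmk-1 ~]# pcs -f stonith_cfg stonith update my-fence-device \
          pcmk_host_map="pcmk-1:1;pcmk-2:2"

This would tell the cluster to use plug 1 when fencing pcmk-1, and plug 2 when
fencing pcmk-2.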

Example
#######

For this example, assume we have a chassis containing four nodes and a
separately powered IPMI device active on 10.0.0.1. Following the steps above
would go something like this:

-Step 1: Install the **fence-agents-ipmilan** package on both nodes.
+Step 1: Install the **fence-virt** package on both nodes.

Step 2: Configure the IP address, authentication credentials, etc. in the IPMI
device itself.

-Step 3: Choose the **fence_ipmilan** STONITH agent.
+Step 3: Choose the **fence_virt** STONITH agent.

Step 4: Obtain the agent's possible parameters:

.. code-block:: none

-   [root@pcmk-1 ~]# pcs stonith describe fence_ipmilan
-   fence_ipmilan - Fence agent for IPMI
-
-   fence_ipmilan is an I/O Fencing agentwhich can be used with machines controlled by IPMI.This agent calls support software ipmitool (http://ipmitool.sf.net/). WARNING! This fence agent might report success before the node is powered off. You should use -m/method onoff if your fence device works correctly with that option.
-
+   [root@pcmk-1 ~]# pcs stonith describe fence_virt
+   fence_virt - Fence agent for virtual machines
+
+   fence_virt is an I/O Fencing agent which can be used with virtual machines.
+
    Stonith options:
-     ipport: TCP/UDP port to use for connection with device
-     hexadecimal_kg: Hexadecimal-encoded Kg key for IPMIv2 authentication
-     port: IP address or hostname of fencing device (together with --port-as-ip)
-     inet6_only: Forces agent to use IPv6 addresses only
-     ipaddr: IP Address or Hostname
-     passwd_script: Script to retrieve password
-     method: Method to fence (onoff|cycle)
-     inet4_only: Forces agent to use IPv4 addresses only
-     passwd: Login password or passphrase
-     lanplus: Use Lanplus to improve security of connection
-     auth: IPMI Lan Auth type.
-     cipher: Ciphersuite to use (same as ipmitool -C parameter)
-     target: Bridge IPMI requests to the remote target address
-     privlvl: Privilege level on IPMI device
-     timeout: Timeout (sec) for IPMI operation
-     login: Login Name
-     verbose: Verbose mode
-     debug: Write debug information to given file
-     power_wait: Wait X seconds after issuing ON/OFF
-     login_timeout: Wait X seconds for cmd prompt after login
-     delay: Wait X seconds before fencing is started
-     power_timeout: Test X seconds for status change after ON/OFF
-     ipmitool_path: Path to ipmitool binary
-     shell_timeout: Wait X seconds for cmd prompt after issuing command
-     port_as_ip: Make "port/plug" to be an alias to IP address
-     retry_on: Count of attempts to retry power on
-     sudo: Use sudo (without password) when calling 3rd party sotfware.
-     priority: The priority of the stonith resource. Devices are tried in order of highest priority to lowest.
-     pcmk_host_map: A mapping of host names to ports numbers for devices that do not support host names. Eg. node1:1;node2:2,3 would tell the cluster to use port 1 for node1 and ports 2 and
-                    3 for node2
+     debug: Specify (stdin) or increment (command line) debug level
+     serial_device: Serial device (default=/dev/ttyS1)
+     serial_params: Serial Parameters (default=115200,8N1)
+     channel_address: VM Channel IP address (default=10.0.2.179)
+     ipport: TCP, Multicast, VMChannel, or VM socket port (default=1229)
+     port: Virtual Machine (domain name) to fence
+     timeout: Fencing timeout (in seconds; default=30)
+     ipaddr: IP address to connect to in TCP mode (default=127.0.0.1 / ::1)
+     vsock: vm socket CID to connect to in vsock mode
+     auth: Authentication (none, sha1, [sha256], sha512)
+     hash: Packet hash strength (none, sha1, [sha256], sha512)
+     key_file: Shared key file (default=/etc/cluster/fence_xvm.key)
+     delay: Fencing delay (in seconds; default=0)
+     domain: Virtual Machine (domain name) to fence (deprecated; use port)
+     pcmk_host_map: A mapping of host names to ports numbers for devices that do not support host names. Eg.
+                    node1:1;node2:2,3 would tell the cluster to use port 1 for node1 and ports 2 and 3 for node2
      pcmk_host_list: A list of machines controlled by this device (Optional unless pcmk_host_check=static-list).
-     pcmk_host_check: How to determine which machines are controlled by the device. Allowed values: dynamic-list (query the device), static-list (check the pcmk_host_list attribute), none
-                      (assume every device can fence every machine)
-     pcmk_delay_max: Enable a random delay for stonith actions and specify the maximum of random delay. This prevents double fencing when using slow devices such as sbd. Use this to enable a
-                     random delay for stonith actions. The overall delay is derived from this random delay value adding a static delay so that the sum is kept below the maximum delay.
-     pcmk_delay_base: Enable a base delay for stonith actions and specify base delay value. This prevents double fencing when different delays are configured on the nodes. Use this to enable
-                      a static delay for stonith actions. The overall delay is derived from a random delay value adding this static delay so that the sum is kept below the maximum delay.
-     pcmk_action_limit: The maximum number of actions can be performed in parallel on this device Pengine property concurrent-fencing=true needs to be configured first. Then use this to
-                        specify the maximum number of actions can be performed in parallel on this device. -1 is unlimited.
-
-   Default operations:
-     monitor: interval=60s
+     pcmk_host_check: How to determine which machines are controlled by the device. Allowed values: dynamic-list (query
+                      the device via the 'list' command), static-list (check the pcmk_host_list attribute), status
+                      (query the device via the 'status' command), none (assume every device can fence every machine)
+     pcmk_delay_max: Enable a random delay for stonith actions and specify the maximum of random delay. This prevents
+                     double fencing when using slow devices such as sbd. Use this to enable a random delay for stonith
+                     actions. The overall delay is derived from this random delay value adding a static delay so that
+                     the sum is kept below the maximum delay.
+     pcmk_delay_base: Enable a base delay for stonith actions and specify base delay value. This prevents double
+                      fencing when different delays are configured on the nodes. Use this to enable a static delay for
+                      stonith actions. The overall delay is derived from a random delay value adding this static delay
+                      so that the sum is kept below the maximum delay.
+     pcmk_action_limit: The maximum number of actions can be performed in parallel on this device Cluster property
+                        concurrent-fencing=true needs to be configured first. Then use this to specify the maximum
+                        number of actions can be performed in parallel on this device. -1 is unlimited.
+
+   Default operations:
+     monitor: interval=60s

Step 5: ``pcs cluster cib stonith_cfg``

Step 6: Here are example parameters for creating our fence device resource:

.. code-block:: none

-   [root@pcmk-1 ~]# pcs -f stonith_cfg stonith create ipmi-fencing fence_ipmilan \
-         pcmk_host_list="pcmk-1 pcmk-2" ipaddr=10.0.0.1 login=testuser \
-         passwd=acd123 op monitor interval=60s
+   [root@pcmk-1 ~]# pcs -f stonith_cfg stonith create my_stonith fence_virt \
+         pcmk_host_list="pcmk-1 pcmk-2" ipaddr=10.0.0.1 op monitor interval=60s
    [root@pcmk-1 ~]# pcs -f stonith_cfg stonith
-    ipmi-fencing   (stonith:fence_ipmilan):   Stopped
+    my_stonith     (stonith:fence_virt):      Stopped

Steps 7-10: Enable fencing in the cluster:

.. code-block:: none

    [root@pcmk-1 ~]# pcs -f stonith_cfg property set stonith-enabled=true
    [root@pcmk-1 ~]# pcs -f stonith_cfg property
    Cluster Properties:
     cluster-infrastructure: corosync
     cluster-name: mycluster
-    dc-version: 1.1.18-11.el7_5.3-2b07d5c5a9
+    dc-version: 2.0.5-4.el8-ba59be7122
     have-watchdog: false
     stonith-enabled: true

Step 11: ``pcs cluster cib-push stonith_cfg --config``

Step 12: Test:

.. code-block:: none

    [root@pcmk-1 ~]# pcs cluster stop pcmk-2
    [root@pcmk-1 ~]# stonith_admin --reboot pcmk-2

After a successful test, log in to any rebooted nodes, and start the cluster
(with ``pcs cluster start``).
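
As an optional sanity check after the test, you can ask the fencer what it did
to the target node (a minimal sketch, assuming ``stonith_admin``'s
``--history`` option is available in your Pacemaker version):

.. code-block:: none

    [root@pcmk-1 ~]# stonith_admin --history pcmk-2

.. [#] If the data is corrupt, there is little point in continuing to make it
       available.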