diff --git a/doc/sphinx/Clusters_from_Scratch/cluster-setup.rst b/doc/sphinx/Clusters_from_Scratch/cluster-setup.rst
index 6ed4d02054..96609a5a77 100644
--- a/doc/sphinx/Clusters_from_Scratch/cluster-setup.rst
+++ b/doc/sphinx/Clusters_from_Scratch/cluster-setup.rst
@@ -1,331 +1,331 @@
Set up a Cluster
----------------

Simplify Administration With a Cluster Shell
############################################

In the dark past, configuring Pacemaker required the administrator to read and
write XML. In true UNIX style, there were also a number of different commands
that specialized in different aspects of querying and updating the cluster.

In addition, the various components of the cluster stack (corosync, pacemaker,
etc.) had to be configured separately, with different configuration tools and
formats.

All of that has been greatly simplified with the creation of higher-level
tools, whether command-line or GUIs, that hide all the mess underneath.

Command-line cluster shells take all the individual aspects required for
managing and configuring a cluster, and pack them into one simple-to-use
command-line tool. They even allow you to queue up several changes and commit
them all at once.

Two popular command-line shells are ``pcs`` and ``crmsh``. Clusters from
Scratch is based on ``pcs`` because it comes with CentOS, but both have
similar functionality. Choosing a shell or GUI is a matter of personal
preference and what comes with (and perhaps is supported by) your choice of
operating system.

Install the Cluster Software
############################

Fire up a shell on both nodes and run the following to install pacemaker, pcs,
and some other command-line tools that will make our lives easier:

::

    # yum install -y pacemaker pcs psmisc policycoreutils-python

.. IMPORTANT::

    This document will show commands that need to be executed on both nodes
    with a simple ``#`` prompt. Be sure to run them on each node individually.

.. NOTE::

    This document uses ``pcs`` for cluster management. Other alternatives,
    such as ``crmsh``, are available, but their syntax will differ from the
    examples used here.

Configure the Cluster Software
##############################

Allow cluster services through firewall
_______________________________________

On each node, allow cluster-related services through the local firewall:

::

    # firewall-cmd --permanent --add-service=high-availability
    success
    # firewall-cmd --reload
    success

.. NOTE::

    If you are using iptables directly, or some other firewall solution
    besides firewalld, simply open the following ports, which can be used by
    various clustering components: TCP ports 2224, 3121, and 21064, and UDP
    port 5405.

    If you run into any problems during testing, you might want to disable the
    firewall and SELinux entirely until you have everything working. This may
    create significant security issues and should not be performed on machines
    that will be exposed to the outside world, but may be appropriate during
    development and testing on a protected host.

    To disable security measures:

    ::

        [root@pcmk-1 ~]# setenforce 0
        [root@pcmk-1 ~]# sed -i.bak "s/SELINUX=enforcing/SELINUX=permissive/g" /etc/selinux/config
        [root@pcmk-1 ~]# systemctl mask firewalld.service
        [root@pcmk-1 ~]# systemctl stop firewalld.service
        [root@pcmk-1 ~]# iptables --flush
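For example, if you are scripting the iptables-only case mentioned in the note
above, the port openings might look something like the following sketch (rule
ordering and how you persist the rules will depend on your distribution, so
adjust as needed):

::

    # iptables -A INPUT -p tcp --dport 2224 -j ACCEPT
    # iptables -A INPUT -p tcp --dport 3121 -j ACCEPT
    # iptables -A INPUT -p tcp --dport 21064 -j ACCEPT
    # iptables -A INPUT -p udp --dport 5405 -j ACCEPT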
Enable pcs Daemon
_________________

Before the cluster can be configured, the pcs daemon must be started and
enabled to start at boot time on each node. This daemon works with the pcs
command-line interface to manage synchronizing the corosync configuration
across all nodes in the cluster.

Start and enable the daemon by issuing the following commands on each node:

::

    # systemctl start pcsd.service
    # systemctl enable pcsd.service
    Created symlink from /etc/systemd/system/multi-user.target.wants/pcsd.service to /usr/lib/systemd/system/pcsd.service.

The installed packages will create a **hacluster** user with a disabled
password. While this is fine for running ``pcs`` commands locally, the account
needs a login password in order to perform such tasks as syncing the corosync
configuration, or starting and stopping the cluster on other nodes.

This tutorial will make use of such commands, so now we will set a password
for the **hacluster** user, using the same password on both nodes:

::

    # passwd hacluster
    Changing password for user hacluster.
    New password:
    Retype new password:
    passwd: all authentication tokens updated successfully.

.. NOTE::

    Alternatively, to script this process or set the password on a different
    machine from the one you're logged into, you can use the ``--stdin``
    option for ``passwd``:

    ::

        [root@pcmk-1 ~]# ssh pcmk-2 -- 'echo mysupersecretpassword | passwd --stdin hacluster'
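If you want to double-check that the daemon came up before continuing, you can
ask systemd directly and confirm that something is listening on pcsd's default
TCP port 2224 (this is just a sanity check, not a required step; the ``ss``
utility is assumed to be available, as it is on most current distributions):

::

    # systemctl is-active pcsd.service
    active
    # ss -tlnp | grep 2224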
Configure Corosync
__________________

On either node, use ``pcs cluster auth`` to authenticate as the **hacluster**
user:

::

    [root@pcmk-1 ~]# pcs cluster auth pcmk-1 pcmk-2
    Username: hacluster
    Password:
    pcmk-2: Authorized
    pcmk-1: Authorized

.. NOTE::

-    In Fedora 29 and CentOS 8.0, the command has been changed to `pcs host auth`:
+    In Fedora 29 and CentOS 8.0, the command has been changed to ``pcs host auth``:

    ::

        [root@pcmk-1 ~]# pcs host auth pcmk-1 pcmk-2
        Username: hacluster
        Password:
        pcmk-2: Authorized
        pcmk-1: Authorized

Next, use ``pcs cluster setup`` on the same node to generate and synchronize
the corosync configuration:

::

    [root@pcmk-1 ~]# pcs cluster setup --name mycluster pcmk-1 pcmk-2
    Destroying cluster on nodes: pcmk-1, pcmk-2...
    pcmk-2: Stopping Cluster (pacemaker)...
    pcmk-1: Stopping Cluster (pacemaker)...
    pcmk-1: Successfully destroyed cluster
    pcmk-2: Successfully destroyed cluster

    Sending 'pacemaker_remote authkey' to 'pcmk-1', 'pcmk-2'
    pcmk-2: successful distribution of the file 'pacemaker_remote authkey'
    pcmk-1: successful distribution of the file 'pacemaker_remote authkey'
    Sending cluster config files to the nodes...
    pcmk-1: Succeeded
    pcmk-2: Succeeded
    Synchronizing pcsd certificates on nodes pcmk-1, pcmk-2...
    pcmk-2: Success
    pcmk-1: Success
    Restarting pcsd on the nodes in order to reload the certificates...
    pcmk-2: Success
    pcmk-1: Success

.. NOTE::

    In Fedora 29 and CentOS 8.0, the syntax has been changed and the
    ``--name`` option has been dropped:

    ::

        [root@pcmk-1 ~]# pcs cluster setup mycluster pcmk-1 pcmk-2
        No addresses specified for host 'pcmk-1', using 'pcmk-1'
        No addresses specified for host 'pcmk-2', using 'pcmk-2'
        Destroying cluster on hosts: 'pcmk-1', 'pcmk-2'...
        pcmk-1: Successfully destroyed cluster
        pcmk-2: Successfully destroyed cluster
        Requesting remove 'pcsd settings' from 'pcmk-1', 'pcmk-2'
        pcmk-1: successful removal of the file 'pcsd settings'
        pcmk-2: successful removal of the file 'pcsd settings'
        Sending 'corosync authkey', 'pacemaker authkey' to 'pcmk-1', 'pcmk-2'
        pcmk-2: successful distribution of the file 'corosync authkey'
        pcmk-2: successful distribution of the file 'pacemaker authkey'
        pcmk-1: successful distribution of the file 'corosync authkey'
        pcmk-1: successful distribution of the file 'pacemaker authkey'
        Synchronizing pcsd SSL certificates on nodes 'pcmk-1', 'pcmk-2'...
        pcmk-1: Success
        pcmk-2: Success
        Sending 'corosync.conf' to 'pcmk-1', 'pcmk-2'
        pcmk-2: successful distribution of the file 'corosync.conf'
        pcmk-1: successful distribution of the file 'corosync.conf'
        Cluster has been successfully set up.

If you received an authorization error for either of those commands, make sure
you configured the **hacluster** user account on each node with the same
password.

.. NOTE::

    If you are not using ``pcs`` for cluster administration, follow whatever
    procedures are appropriate for your tools to create a corosync.conf and
    copy it to all nodes.

    The ``pcs`` command will configure corosync to use UDP unicast transport;
    if you choose to use multicast instead, choose a multicast address
    carefully [#]_.

The final corosync.conf configuration on each node should look something like
the sample in :ref:`sample-corosync-configuration`.
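If you are curious about what ``pcs`` actually generated, you can simply look
at the file on either node; with the two-node setup used here you should see
both nodes listed in the ``nodelist`` section (a quick peek only, not a
required step):

::

    # cat /etc/corosync/corosync.conf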
Explore pcs
###########

Start by taking some time to familiarize yourself with what ``pcs`` can do.

::

    [root@pcmk-1 ~]# pcs
    Usage: pcs [-f file] [-h] [commands]...
    Control and configure pacemaker and corosync.

    Options:
        -h, --help         Display usage and exit.
        -f file            Perform actions on file instead of active CIB.
        --debug            Print all network traffic and external commands run.
        --version          Print pcs version information. List pcs capabilities
                           if --full is specified.
        --request-timeout  Timeout for each outgoing request to another node in
                           seconds. Default is 60s.
        --force            Override checks and errors, the exact behavior
                           depends on the command. WARNING: Using the --force
                           option is strongly discouraged unless you know what
                           you are doing.

    Commands:
        cluster     Configure cluster options and nodes.
        resource    Manage cluster resources.
        stonith     Manage fence devices.
        constraint  Manage resource constraints.
        property    Manage pacemaker properties.
        acl         Manage pacemaker access control lists.
        qdevice     Manage quorum device provider on the local host.
        quorum      Manage cluster quorum settings.
        booth       Manage booth (cluster ticket manager).
        status      View cluster status.
        config      View and manage cluster configuration.
        pcsd        Manage pcs daemon.
        node        Manage cluster nodes.
        alert       Manage pacemaker alerts.

As you can see, the different aspects of cluster management are separated into
categories. To discover the functionality available in each of these
categories, one can issue the command ``pcs <category> help``. Below is an
example of all the options available under the status category.

::

    [root@pcmk-1 ~]# pcs status help

    Usage: pcs status [commands]...
    View current cluster and resource status
    Commands:
        [status] [--full | --hide-inactive]
            View all information about the cluster and resources (--full
            provides more details, --hide-inactive hides inactive resources).

        resources [<resource id> | --full | --groups | --hide-inactive]
            Show all currently configured resources or if a resource is
            specified show the options for the configured resource.  If --full
            is specified, all configured resource options will be displayed.
            If --groups is specified, only show groups (and their resources).
            If --hide-inactive is specified, only show active resources.

        groups
            View currently configured groups and their resources.

        cluster
            View current cluster status.

        corosync
            View current membership information as seen by corosync.

        quorum
            View current quorum status.

        qdevice <device model> [--full] [<cluster name>]
            Show runtime status of specified model of quorum device provider.
            Using --full will give more detailed output.  If <cluster name> is
            specified, only information about the specified cluster will be
            displayed.

        nodes [corosync | both | config]
            View current status of nodes from pacemaker.  If 'corosync' is
            specified, view current status of nodes from corosync instead.
            If 'both' is specified, view current status of nodes from both
            corosync & pacemaker.  If 'config' is specified, print nodes from
            corosync & pacemaker configuration.

        pcsd [<node>]...
            Show current status of pcsd on nodes specified, or on all nodes
            configured in the local cluster if no nodes are specified.

        xml
            View xml version of status (output from crm_mon -r -1 -X).

Additionally, if you are interested in the version and supported cluster
stack(s) available with your Pacemaker installation, run:

::

    [root@pcmk-1 ~]# pacemakerd --features
    Pacemaker 1.1.18-11.el7_5.3 (Build: 2b07d5c5a9)
     Supporting v3.0.14:  generated-manpages agent-manpages ncurses libqb-logging libqb-ipc systemd nagios corosync-native atomic-attrd acls

.. [#] For some subtle issues, see `Topics in High-Performance Messaging:
   Multicast Address Assignment `_ or the more detailed treatment in
   `Cisco's Guidelines for Enterprise IP Multicast Address Allocation `_.

diff --git a/doc/sphinx/Clusters_from_Scratch/verification.rst b/doc/sphinx/Clusters_from_Scratch/verification.rst
index f42deac924..2bdd431aeb 100644
--- a/doc/sphinx/Clusters_from_Scratch/verification.rst
+++ b/doc/sphinx/Clusters_from_Scratch/verification.rst
@@ -1,211 +1,211 @@
Start and Verify Cluster
------------------------

Start the Cluster
#################

Now that corosync is configured, it is time to start the cluster. The command
below will start corosync and pacemaker on both nodes in the cluster. If you
are issuing the start command from a different node than the one you ran the
``pcs cluster auth`` command on earlier, you must authenticate on the current
node you are logged into before you will be allowed to start the cluster.

::

    [root@pcmk-1 ~]# pcs cluster start --all
    pcmk-1: Starting Cluster...
    pcmk-2: Starting Cluster...

.. NOTE::

    An alternative to using the ``pcs cluster start --all`` command is to
    issue either of the below command sequences on each node in the cluster
    separately:

    ::

        # pcs cluster start
        Starting Cluster...

    or

    ::

        # systemctl start corosync.service
        # systemctl start pacemaker.service

.. IMPORTANT::

    In this example, we are not enabling the corosync and pacemaker services
    to start at boot. If a cluster node fails or is rebooted, you will need to
    run ``pcs cluster start <node>`` (or ``--all``) to start the cluster on
    it. While you could enable the services to start at boot, requiring a
    manual start of cluster services gives you the opportunity to do a
    post-mortem investigation of a node failure before returning it to the
    cluster.
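If, on the other hand, you decide that automatic startup is the right
trade-off for your environment, ``pcs`` can enable the underlying services on
every node for you. A minimal sketch (run from any one node):

::

    # pcs cluster enable --all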
Verify Corosync Installation
############################

First, use ``corosync-cfgtool`` to check whether cluster communication is
happy:

::

    [root@pcmk-1 ~]# corosync-cfgtool -s
    Printing ring status.
    Local node ID 1
    RING ID 0
            id      = 192.168.122.101
            status  = ring 0 active with no faults

We can see here that everything appears normal with our fixed IP address (not
a 127.0.0.x loopback address) listed as the **id**, and **no faults** for the
status.

If you see something different, you might want to start by checking the node's
network, firewall and SELinux configurations.

Next, check the membership and quorum APIs:

::

    [root@pcmk-1 ~]# corosync-cmapctl | grep members
    runtime.totem.pg.mrp.srp.members.1.config_version (u64) = 0
    runtime.totem.pg.mrp.srp.members.1.ip (str) = r(0) ip(192.168.122.101)
    runtime.totem.pg.mrp.srp.members.1.join_count (u32) = 1
    runtime.totem.pg.mrp.srp.members.1.status (str) = joined
    runtime.totem.pg.mrp.srp.members.2.config_version (u64) = 0
    runtime.totem.pg.mrp.srp.members.2.ip (str) = r(0) ip(192.168.122.102)
    runtime.totem.pg.mrp.srp.members.2.join_count (u32) = 1
    runtime.totem.pg.mrp.srp.members.2.status (str) = joined

    [root@pcmk-1 ~]# pcs status corosync

    Membership information
    ----------------------
        Nodeid      Votes Name
             1          1 pcmk-1 (local)
             2          1 pcmk-2

You should see both nodes have joined the cluster.

Verify Pacemaker Installation
#############################

Now that we have confirmed that Corosync is functional, we can check the rest
of the stack. Pacemaker has already been started, so verify the necessary
processes are running:

::

    [root@pcmk-1 ~]# ps axf
      PID TTY      STAT   TIME COMMAND
        2 ?        S      0:00 [kthreadd]
    ...lots of processes...
    11635 ?        SLsl   0:03 corosync
    11642 ?        Ss     0:00 /usr/sbin/pacemakerd -f
    11643 ?        Ss     0:00  \_ /usr/libexec/pacemaker/cib
    11644 ?        Ss     0:00  \_ /usr/libexec/pacemaker/stonithd
    11645 ?        Ss     0:00  \_ /usr/libexec/pacemaker/lrmd
    11646 ?        Ss     0:00  \_ /usr/libexec/pacemaker/attrd
    11647 ?        Ss     0:00  \_ /usr/libexec/pacemaker/pengine
    11648 ?        Ss     0:00  \_ /usr/libexec/pacemaker/crmd

If that looks OK, check the ``pcs status`` output:

::

    [root@pcmk-1 ~]# pcs status
    Cluster name: mycluster
    WARNING: no stonith devices and stonith-enabled is not false
    Stack: corosync
    Current DC: pcmk-2 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
    Last updated: Mon Sep 10 16:37:34 2018
    Last change: Mon Sep 10 16:30:53 2018 by hacluster via crmd on pcmk-2

    2 nodes configured
    0 resources configured

    Online: [ pcmk-1 pcmk-2 ]

    No resources

    Daemon Status:
      corosync: active/disabled
      pacemaker: active/disabled
      pcsd: active/enabled

Finally, ensure there are no start-up errors from corosync or pacemaker (aside
from messages relating to not having STONITH configured, which are OK at this
point):

::

    [root@pcmk-1 ~]# journalctl -b | grep -i error

.. NOTE::

    Other operating systems may report startup errors in other locations,
    for example ``/var/log/messages``.

Repeat these checks on the other node. The results should be the same.
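If you would rather not open a second terminal, you can run the same sanity
checks on the other node over ``ssh`` from the node you are already logged
into, for example (assuming root SSH access between the nodes, as used earlier
in this document):

::

    [root@pcmk-1 ~]# ssh pcmk-2 -- 'pcs status'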
Explore the Existing Configuration
##################################

For those who are not afraid of XML, you can see the raw cluster
-configuration and status by using the `pcs cluster cib` command.
+configuration and status by using the ``pcs cluster cib`` command.

.. topic:: The last XML you'll see in this document

    ::

        [root@pcmk-1 ~]# pcs cluster cib

    .. code-block:: xml

Before we make any changes, it's a good idea to check the validity of the
configuration.

::

    [root@pcmk-1 ~]# crm_verify -L -V
       error: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
       error: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
       error: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
    Errors found during check: config not valid

As you can see, the tool has found some errors. The cluster will not start any
resources until we configure STONITH.
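If you want to see which cluster property those messages are referring to, you
can ask ``pcs`` to list properties, including ones still at their default
values. A sketch (the exact ``pcs property`` syntax can vary slightly between
pcs versions):

::

    [root@pcmk-1 ~]# pcs property list --all | grep stonith-enabled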