diff --git a/doc/sphinx/Clusters_from_Scratch/ap-configuration.rst b/doc/sphinx/Clusters_from_Scratch/ap-configuration.rst
index 4e5a71e529..438eb8b78d 100644
--- a/doc/sphinx/Clusters_from_Scratch/ap-configuration.rst
+++ b/doc/sphinx/Clusters_from_Scratch/ap-configuration.rst
@@ -1,379 +1,364 @@
 Configuration Recap
 -------------------

 Final Cluster Configuration
 ###########################

-.. NOTE::
-
-    Because of `an open CentOS bug `_,
-    installing dlm is not trivial. This chapter will be updated once the bug
-    is resolved.
-
 .. code-block:: none

     [root@pcmk-1 ~]# pcs resource
-     Master/Slave Set: WebDataClone [WebData]
-         Masters: [ pcmk-1 pcmk-2 ]
-     Clone Set: dlm-clone [dlm]
-         Started: [ pcmk-1 pcmk-2 ]
-     ClusterIP	(ocf::heartbeat:IPaddr2):	Started pcmk-1
-     Clone Set: WebFS-clone [WebFS]
-         Started: [ pcmk-1 pcmk-2 ]
-     WebSite	(ocf::heartbeat:apache):	Started pcmk-1
+      * ClusterIP	(ocf::heartbeat:IPaddr2):	Started pcmk-1
+      * WebSite	(ocf::heartbeat:apache):	Started pcmk-1
+      * Clone Set: WebData-clone [WebData] (promotable):
+        * Masters: [ pcmk-1 pcmk-2 ]
+      * Clone Set: dlm-clone [dlm]:
+        * Started: [ pcmk-1 pcmk-2 ]
+      * Clone Set: WebFS-clone [WebFS]:
+        * Started: [ pcmk-1 pcmk-2 ]

 .. code-block:: none

     [root@pcmk-1 ~]# pcs resource op defaults
-    timeout: 240s
+    Meta Attrs: op_defaults-meta_attributes
+      timeout=240s

 .. code-block:: none

     [root@pcmk-1 ~]# pcs stonith
-     * my_stonith	(stonith:fence_virt):	Started pcmk-1
+      * ipmi-fencing	(stonith:fence_ipmilan):	Started pcmk-1

 .. code-block:: none

     [root@pcmk-1 ~]# pcs constraint
     Location Constraints:
     Ordering Constraints:
       start ClusterIP then start WebSite (kind:Mandatory)
-      promote WebDataClone then start WebFS-clone (kind:Mandatory)
+      promote WebData-clone then start WebFS-clone (kind:Mandatory)
       start WebFS-clone then start WebSite (kind:Mandatory)
       start dlm-clone then start WebFS-clone (kind:Mandatory)
     Colocation Constraints:
       WebSite with ClusterIP (score:INFINITY)
-      WebFS-clone with WebDataClone (score:INFINITY) (with-rsc-role:Master)
+      WebFS-clone with WebData-clone (score:INFINITY) (with-rsc-role:Master)
       WebSite with WebFS-clone (score:INFINITY)
       WebFS-clone with dlm-clone (score:INFINITY)
     Ticket Constraints:

 .. code-block:: none

     [root@pcmk-1 ~]# pcs status
     Cluster name: mycluster
     Stack: corosync
-    Current DC: pcmk-1 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
-    Last updated: Tue Sep 11 10:41:53 2018
-    Last change: Tue Sep 11 10:40:16 2018 by root via cibadmin on pcmk-1
+    Current DC: pcmk-1 (version 2.1.0-3.el8-7c3f660707) - partition with quorum
+    Last updated: Fri Jul 16 09:00:56 2021
+    Last change: Wed Jul 14 11:06:25 2021 by root via cibadmin on pcmk-1

     2 nodes configured
     11 resources configured

     Online: [ pcmk-1 pcmk-2 ]

-    Full list of resources:
-
-     my_stonith	(stonith:fence_virt):	Started pcmk-1
-     Master/Slave Set: WebDataClone [WebData]
-         Masters: [ pcmk-1 pcmk-2 ]
-     Clone Set: dlm-clone [dlm]
-         Started: [ pcmk-1 pcmk-2 ]
-     ClusterIP	(ocf::heartbeat:IPaddr2):	Started pcmk-1
-     Clone Set: WebFS-clone [WebFS]
-         Started: [ pcmk-1 pcmk-2 ]
-     WebSite	(ocf::heartbeat:apache):	Started pcmk-1
+    Full List of Resources:
+      * ipmi-fencing	(stonith:fence_ipmilan):	Started pcmk-1
+      * ClusterIP	(ocf::heartbeat:IPaddr2):	Started pcmk-1
+      * WebSite	(ocf::heartbeat:apache):	Started pcmk-1
+      * Clone Set: WebData-clone [WebData] (promotable):
+        * Masters: [ pcmk-1 pcmk-2 ]
+      * Clone Set: dlm-clone [dlm]:
+        * Started: [ pcmk-1 pcmk-2 ]
+      * Clone Set: WebFS-clone [WebFS]:
+        * Started: [ pcmk-1 pcmk-2 ]

     Daemon Status:
       corosync: active/disabled
       pacemaker: active/disabled
       pcsd: active/enabled

 .. code-block:: none

     [root@pcmk-1 ~]# pcs cluster cib --config

 .. code-block:: xml

     <!-- full CIB configuration XML, as output by "pcs cluster cib --config" -->
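+
+If you want to double-check the recap for configuration errors, the live
+CIB can be validated with the standard ``crm_verify`` tool (an optional
+step, not required by anything above):
+
+.. code-block:: none
+
+    [root@pcmk-1 ~]# crm_verify -L -V
+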
 Node List
 #########

 .. code-block:: none

     [root@pcmk-1 ~]# pcs status nodes
     Pacemaker Nodes:
      Online: pcmk-1 pcmk-2
      Standby:
+     Standby with resource(s) running:
      Maintenance:
      Offline:
     Pacemaker Remote Nodes:
      Online:
      Standby:
+     Standby with resource(s) running:
      Maintenance:
      Offline:

 Cluster Options
 ###############

 .. code-block:: none

     [root@pcmk-1 ~]# pcs property
     Cluster Properties:
      cluster-infrastructure: corosync
      cluster-name: mycluster
-     dc-version: 1.1.18-11.el7_5.3-2b07d5c5a9
+     dc-version: 2.1.0-3.el8-7c3f660707
      have-watchdog: false
-     last-lrm-refresh: 1536679009
      stonith-enabled: true

 The output shows state information automatically obtained about the cluster,
 including:

 * **cluster-infrastructure** - the cluster communications layer in use
 * **cluster-name** - the cluster name chosen by the administrator when the
   cluster was created
 * **dc-version** - the version (including upstream source-code hash) of
   Pacemaker used on the Designated Controller, which is the node elected to
   determine what actions are needed when events occur

 The output also shows options set by the administrator that control the way
 the cluster operates, including:

 * **stonith-enabled=true** - whether the cluster is allowed to use STONITH
   resources

 Resources
 #########

 Default Options
 _______________

 .. code-block:: none

     [root@pcmk-1 ~]# pcs resource defaults
-    resource-stickiness: 100
+    Meta Attrs: rsc_defaults-meta_attributes
+      resource-stickiness=100

-This shows cluster option defaults that apply to every resource that does not
-explicitly set the option itself. Above:
+This shows resource option defaults that apply to every resource that does
+not explicitly set the option itself. Above:

-* **resource-stickiness** - Specify the aversion to moving healthy resources to
-  other machines
+* **resource-stickiness** - specify how strongly a resource prefers to stay
+  on the node where it is running, discouraging the cluster from moving
+  healthy resources to other machines

 Fencing
 _______

 .. code-block:: none

-    [root@pcmk-1 ~]# pcs stonith show
-     * my_stonith	(stonith:fence_virt):	Started pcmk-1
-    [root@pcmk-1 ~]# pcs stonith show my_stonith
-    Resource: my_stonith (class=stonith type=fence_virt)
-     Attributes: ipaddr="10.0.0.1" login="testuser" passwd="acd123" pcmk_host_list="pcmk-1 pcmk-2"
-     Operations: monitor interval=60s (fence-monitor-interval-60s)
+    [root@pcmk-1 ~]# pcs stonith status
+      * ipmi-fencing	(stonith:fence_ipmilan):	Started pcmk-1
+    [root@pcmk-1 ~]# pcs stonith config
+    Resource: ipmi-fencing (class=stonith type=fence_ipmilan)
+      Attributes: ipaddr=10.0.0.1 login=testuser passwd=acd123 pcmk_host_list="pcmk-1 pcmk-2"
+      Operations: monitor interval=60s (ipmi-fencing-monitor-interval-60s)

 Service Address
 _______________

 Users of the services provided by the cluster require an unchanging address
 with which to access it.
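+
+As a reminder, an address like this one is created with a single command,
+whose attribute values match those shown in the configuration below:
+
+.. code-block:: none
+
+    [root@pcmk-1 ~]# pcs resource create ClusterIP ocf:heartbeat:IPaddr2 \
+        ip=192.168.122.120 cidr_netmask=24 op monitor interval=30s
+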
 .. code-block:: none

-    [root@pcmk-1 ~]# pcs resource show ClusterIP
+    [root@pcmk-1 ~]# pcs resource config ClusterIP
     Resource: ClusterIP (class=ocf provider=heartbeat type=IPaddr2)
-     Attributes: cidr_netmask=24 ip=192.168.122.120 clusterip_hash=sourceip
-     Meta Attrs: resource-stickiness=0
-     Operations: monitor interval=30s (ClusterIP-monitor-interval-30s)
-                 start interval=0s timeout=20s (ClusterIP-start-interval-0s)
-                 stop interval=0s timeout=20s (ClusterIP-stop-interval-0s)
+      Attributes: cidr_netmask=24 ip=192.168.122.120
+      Meta Attrs: resource-stickiness=0
+      Operations: monitor interval=30s (ClusterIP-monitor-interval-30s)
+                  start interval=0s timeout=20s (ClusterIP-start-interval-0s)
+                  stop interval=0s timeout=20s (ClusterIP-stop-interval-0s)

 DRBD - Shared Storage
 _____________________

 Here, we define the DRBD service and specify which DRBD resource (from
 /etc/drbd.d/\*.res) it should manage. We make it a promotable clone
 resource and, in order to have an active/active setup, allow both
 instances to be promoted at the same time. We also set the notify option
-so that the cluster will tell DRBD agent when its peer changes state.
+so that the cluster will tell the DRBD agent when its peer changes state.

 .. code-block:: none

-    [root@pcmk-1 ~]# pcs resource show WebDataClone
-    Clone: WebDataClone (promotable)
-     Meta Attrs: promoted-node-max=1 clone-max=2 notify=true promoted-max=2 clone-node-max=1
+    [root@pcmk-1 ~]# pcs resource config WebData-clone
+    Clone: WebData-clone
+      Meta Attrs: clone-max=2 clone-node-max=1 notify=true promotable=true promoted-max=2 promoted-node-max=1
     Resource: WebData (class=ocf provider=linbit type=drbd)
-     Attributes: drbd_resource=wwwdata
-     Operations: demote interval=0s timeout=90 (WebData-demote-interval-0s)
-                 monitor interval=60s (WebData-monitor-interval-60s)
-                 notify interval=0s timeout=90 (WebData-notify-interval-0s)
-                 promote interval=0s timeout=90 (WebData-promote-interval-0s)
-                 reload interval=0s timeout=30 (WebData-reload-interval-0s)
-                 start interval=0s timeout=240 (WebData-start-interval-0s)
-                 stop interval=0s timeout=100 (WebData-stop-interval-0s)
-    [root@pcmk-1 ~]# pcs constraint ref WebDataClone
-    Resource: WebDataClone
+      Attributes: drbd_resource=wwwdata
+      Operations: demote interval=0s timeout=90 (WebData-demote-interval-0s)
+                  monitor interval=60s (WebData-monitor-interval-60s)
+                  notify interval=0s timeout=90 (WebData-notify-interval-0s)
+                  promote interval=0s timeout=90 (WebData-promote-interval-0s)
+                  reload interval=0s timeout=30 (WebData-reload-interval-0s)
+                  start interval=0s timeout=240 (WebData-start-interval-0s)
+                  stop interval=0s timeout=100 (WebData-stop-interval-0s)
+    [root@pcmk-1 ~]# pcs constraint ref WebData-clone
+    Resource: WebData-clone
       colocation-WebFS-WebDataClone-INFINITY
       order-WebDataClone-WebFS-mandatory

 Cluster Filesystem
 __________________

 The cluster filesystem ensures that files are read and written correctly.
 We need to specify the block device (provided by DRBD), where we want it
 mounted and that we are using GFS2. Again, it is a clone because it is
 intended to be active on both nodes. The additional constraints ensure
 that it can only be started on nodes with active DLM and DRBD instances.
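+
+Those constraint commands have this shape, mirroring the ordering and
+colocation entries listed in the recap above (dlm-clone must be running
+before WebFS-clone may start, and on the same node):
+
+.. code-block:: none
+
+    [root@pcmk-1 ~]# pcs constraint order dlm-clone then WebFS-clone
+    [root@pcmk-1 ~]# pcs constraint colocation add WebFS-clone with dlm-clone INFINITY
+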
 .. code-block:: none

-    [root@pcmk-1 ~]# pcs resource show WebFS-clone
+    [root@pcmk-1 ~]# pcs resource config WebFS-clone
     Clone: WebFS-clone
      Resource: WebFS (class=ocf provider=heartbeat type=Filesystem)
       Attributes: device=/dev/drbd1 directory=/var/www/html fstype=gfs2
       Operations: monitor interval=20 timeout=40 (WebFS-monitor-interval-20)
                   notify interval=0s timeout=60 (WebFS-notify-interval-0s)
                   start interval=0s timeout=60 (WebFS-start-interval-0s)
                   stop interval=0s timeout=60 (WebFS-stop-interval-0s)
     [root@pcmk-1 ~]# pcs constraint ref WebFS-clone
     Resource: WebFS-clone
       colocation-WebFS-WebDataClone-INFINITY
       colocation-WebSite-WebFS-INFINITY
       colocation-WebFS-dlm-clone-INFINITY
       order-WebDataClone-WebFS-mandatory
       order-WebFS-WebSite-mandatory
       order-dlm-clone-WebFS-mandatory

 Apache
 ______

 Lastly, we have the actual service, Apache. We need only tell the cluster
 where to find its main configuration file and restrict it to running on
 a node that has the required filesystem mounted and the IP address active.

 .. code-block:: none

-    [root@pcmk-1 ~]# pcs resource show WebSite
+    [root@pcmk-1 ~]# pcs resource config WebSite
     Resource: WebSite (class=ocf provider=heartbeat type=apache)
      Attributes: configfile=/etc/httpd/conf/httpd.conf statusurl=http://localhost/server-status
      Operations: monitor interval=1min (WebSite-monitor-interval-1min)
                  start interval=0s timeout=40s (WebSite-start-interval-0s)
                  stop interval=0s timeout=60s (WebSite-stop-interval-0s)
     [root@pcmk-1 ~]# pcs constraint ref WebSite
     Resource: WebSite
       colocation-WebSite-ClusterIP-INFINITY
       colocation-WebSite-WebFS-INFINITY
       order-ClusterIP-WebSite-mandatory
       order-WebFS-WebSite-mandatory
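+
+The ``statusurl`` above assumes Apache serves a status page at
+``http://localhost/server-status``; the Apache chapter earlier in this
+guide enables it with a snippet along these lines:
+
+.. code-block:: none
+
+    <Location /server-status>
+        SetHandler server-status
+        Require local
+    </Location>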