diff --git a/doc/Clusters_from_Scratch/en-US/Book_Info.xml b/doc/Clusters_from_Scratch/en-US/Book_Info.xml
index 6f5917d72c..e436c02aac 100644
--- a/doc/Clusters_from_Scratch/en-US/Book_Info.xml
+++ b/doc/Clusters_from_Scratch/en-US/Book_Info.xml
@@ -1,61 +1,67 @@
%BOOK_ENTITIES;
]>
Clusters from Scratch
Creating Active/Passive and Active/Active Clusters on Fedora
Pacemaker
1.1
- 6
+
+ 8
0
The purpose of this document is to provide a start-to-finish guide to building an example active/passive cluster with Pacemaker and show how it can be converted to an active/active one.
The example cluster will use:
&DISTRO; &DISTRO_VERSION; as the host operating system
Corosync to provide messaging and membership services,
Pacemaker to perform resource management,
DRBD as a cost-effective alternative to shared storage,
GFS2 as the cluster filesystem (in active/active mode)
Given the graphical nature of the Fedora install process, a number of screenshots are included. However, the guide is primarily composed of commands, the reasons for executing them, and their expected outputs.
diff --git a/doc/Clusters_from_Scratch/en-US/Ch-Active-Active.txt b/doc/Clusters_from_Scratch/en-US/Ch-Active-Active.txt
index a23a74b44f..ca980c42fd 100644
--- a/doc/Clusters_from_Scratch/en-US/Ch-Active-Active.txt
+++ b/doc/Clusters_from_Scratch/en-US/Ch-Active-Active.txt
@@ -1,364 +1,380 @@
= Convert Cluster to Active/Active =
The primary requirement for an Active/Active cluster is that the data
required for your services is available, simultaneously, on both
machines. Pacemaker makes no requirement on how this is achieved; you
could use a SAN if you had one available, but since DRBD supports
multiple Primaries, we can continue to use it here.
== Install Cluster Filesystem Software ==
The only hitch is that we need to use a cluster-aware filesystem. The
one we used earlier with DRBD, ext4, is not one of those. Both OCFS2
and GFS2 are supported; here, we will use GFS2.
On both nodes, install the GFS2 command-line utilities and the
Distributed Lock Manager (DLM) required by cluster filesystems:
----
-# yum install -y gfs2-utils dlm kernel-modules-extra
+# yum install -y gfs2-utils dlm
----
== Configure the Cluster for the DLM ==
The DLM needs to run on both nodes, so we'll start by creating a resource for
it (using the *ocf:pacemaker:controld* resource script), and clone it:
----
[root@pcmk-1 ~]# pcs cluster cib dlm_cfg
[root@pcmk-1 ~]# pcs -f dlm_cfg resource create dlm ocf:pacemaker:controld op monitor interval=60s
[root@pcmk-1 ~]# pcs -f dlm_cfg resource clone dlm clone-max=2 clone-node-max=1
[root@pcmk-1 ~]# pcs -f dlm_cfg resource show
ClusterIP (ocf::heartbeat:IPaddr2): Started
WebSite (ocf::heartbeat:apache): Started
Master/Slave Set: WebDataClone [WebData]
Masters: [ pcmk-2 ]
Slaves: [ pcmk-1 ]
WebFS (ocf::heartbeat:Filesystem): Started
Clone Set: dlm-clone [dlm]
Stopped: [ pcmk-1 pcmk-2 ]
----
Activate our new configuration, and see how the cluster responds:
----
[root@pcmk-1 ~]# pcs cluster cib-push dlm_cfg
CIB updated
[root@pcmk-1 ~]# pcs status
Cluster name: mycluster
Last updated: Sat Dec 20 21:53:44 2014
Last change: Sat Dec 20 21:53:40 2014
Stack: corosync
Current DC: pcmk-1 (1) - partition with quorum
Version: 1.1.12-a9c8177
2 Nodes configured
8 Resources configured
Online: [ pcmk-1 pcmk-2 ]
Full list of resources:
ClusterIP (ocf::heartbeat:IPaddr2): Started pcmk-2
WebSite (ocf::heartbeat:apache): Started pcmk-2
Master/Slave Set: WebDataClone [WebData]
Masters: [ pcmk-2 ]
Slaves: [ pcmk-1 ]
WebFS (ocf::heartbeat:Filesystem): Started pcmk-2
ipmi-fencing (stonith:fence_ipmilan): Started pcmk-1
Clone Set: dlm-clone [dlm]
Started: [ pcmk-1 pcmk-2 ]
PCSD Status:
pcmk-1: Online
pcmk-2: Online
Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled
----
[[GFS2_prep]]
== Create and Populate GFS2 Filesystem ==
Before we do anything to the existing partition, we need to make sure it
is unmounted. We do this by telling the cluster to stop the WebFS resource.
This will ensure that other resources (in our case, Apache) using WebFS
are not only stopped, but stopped in the correct order.
----
[root@pcmk-1 ~]# pcs resource disable WebFS
[root@pcmk-1 ~]# pcs resource
ClusterIP (ocf::heartbeat:IPaddr2): Started
WebSite (ocf::heartbeat:apache): Stopped
Master/Slave Set: WebDataClone [WebData]
Masters: [ pcmk-2 ]
Slaves: [ pcmk-1 ]
WebFS (ocf::heartbeat:Filesystem): Stopped
Clone Set: dlm-clone [dlm]
Started: [ pcmk-1 pcmk-2 ]
----
You can see that both Apache and WebFS have been stopped,
and that *pcmk-2* is the current master for the DRBD device.
Now we can create a new GFS2 filesystem on the DRBD device.
[WARNING]
=========
This will erase all previous content stored on the DRBD device. Ensure
you have a copy of any important data.
=========
[IMPORTANT]
===========
Run the next command on whichever node has the DRBD Primary role.
Otherwise, you will receive the message:
-----
/dev/drbd1: Read-only file system
-----
===========
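If you are not sure which node currently holds the Primary role, one quick way
to check is `drbdadm role` (a sketch, using the *wwwdata* resource defined
earlier; the local role is listed first, so the Primary node reports
*Primary/Secondary*):
-----
[root@pcmk-2 ~]# drbdadm role wwwdata
Primary/Secondary
-----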
-----
[root@pcmk-2 ~]# mkfs.gfs2 -p lock_dlm -j 2 -t mycluster:web /dev/drbd1
It appears to contain an existing filesystem (ext4)
This will destroy any data on /dev/drbd1
Are you sure you want to proceed? [y/n]y
Device: /dev/drbd1
Block size: 4096
Device size: 1.00 GB (262127 blocks)
Filesystem size: 1.00 GB (262124 blocks)
Journals: 2
Resource groups: 3
Locking protocol: "lock_dlm"
Lock table: "mycluster:web"
UUID: b2b30e6c-8890-33fa-a1ba-3c70edd4b5f0
-----
The `mkfs.gfs2` command required a number of additional parameters:
* `-p lock_dlm` specifies that we want to use the
kernel's DLM.
* `-j 2` indicates that the filesystem should reserve enough
space for two journals (one for each node that will access the filesystem).
* `-t mycluster:web` specifies the lock table name. The format for
this field is +pass:[clustername:fsname]+. For
+pass:[clustername]+, we need to use the same
value we specified originally with `pcs cluster setup --name` (which is also
the value of *cluster_name* in +/etc/corosync/corosync.conf+).
If you are unsure what your cluster name is, you can look in
+/etc/corosync/corosync.conf+ or execute the command
`pcs cluster corosync pcmk-1 | grep cluster_name`.
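For example, on the cluster used in this guide, that command should report
something like the following (exact indentation will match your +corosync.conf+):
-----
[root@pcmk-1 ~]# pcs cluster corosync pcmk-1 | grep cluster_name
    cluster_name: mycluster
-----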
Now we can (re-)populate the new filesystem with data
(web pages). We'll create yet another variation on our home page.
-----
[root@pcmk-2 ~]# mount /dev/drbd1 /mnt
[root@pcmk-2 ~]# cat <<-END >/mnt/index.html
 <html>
  <body>My Test Site - GFS2</body>
 </html>
END
[root@pcmk-2 ~]# umount /dev/drbd1
[root@pcmk-2 ~]# drbdadm verify wwwdata
-----
== Reconfigure the Cluster for GFS2 ==
With the WebFS resource stopped, let's update the configuration.
----
[root@pcmk-1 ~]# pcs resource show WebFS
Resource: WebFS (class=ocf provider=heartbeat type=Filesystem)
Attributes: device=/dev/drbd1 directory=/var/www/html fstype=ext4
Meta Attrs: target-role=Stopped
Operations: start interval=0s timeout=60 (WebFS-start-timeout-60)
stop interval=0s timeout=60 (WebFS-stop-timeout-60)
monitor interval=20 timeout=40 (WebFS-monitor-interval-20)
----
The fstype option needs to be updated to *gfs2* instead of *ext4*.
----
[root@pcmk-1 ~]# pcs resource update WebFS fstype=gfs2
[root@pcmk-1 ~]# pcs resource show WebFS
Resource: WebFS (class=ocf provider=heartbeat type=Filesystem)
Attributes: device=/dev/drbd1 directory=/var/www/html fstype=gfs2
Meta Attrs: target-role=Stopped
Operations: start interval=0s timeout=60 (WebFS-start-timeout-60)
stop interval=0s timeout=60 (WebFS-stop-timeout-60)
monitor interval=20 timeout=40 (WebFS-monitor-interval-20)
----
GFS2 requires that DLM be running, so we also need to set up new colocation
and ordering constraints for it:
----
-[root@pcmk-1 ~]# pcs constraint colocation add WebFS dlm-clone INFINITY
+[root@pcmk-1 ~]# pcs constraint colocation add WebFS with dlm-clone INFINITY
[root@pcmk-1 ~]# pcs constraint order dlm-clone then WebFS
----
== Clone the IP address ==
There's no point making the services active on both locations if we can't
reach them both, so let's clone the IP address.
The *IPaddr2* resource agent has built-in intelligence for when it is configured
as a clone. It will utilize a multicast MAC address to have the local switch
send the relevant packets to all nodes in the cluster, together with *iptables
clusterip* rules on the nodes so that any given packet will be grabbed by
exactly one node. This will give us a simple but effective form of
load-balancing requests between our two nodes.
Let's start a new config, and clone our IP:
----
[root@pcmk-1 ~]# pcs cluster cib loadbalance_cfg
[root@pcmk-1 ~]# pcs -f loadbalance_cfg resource clone ClusterIP \
clone-max=2 clone-node-max=2 globally-unique=true
----
-* `clone-max=2` says packets will be split this many ways. This should
-equal the number of nodes that can host the IP.
+* `clone-max=2` tells the resource agent to split packets this many ways. This
+should equal the number of nodes that can host the IP.
* `clone-node-max=2` says that one node can run up to 2 instances
of the clone. This should also equal the number of nodes that can
host the IP, so that if any node goes down, another node can take over
the failed node's "request bucket". Otherwise, requests intended for
the failed node would be discarded.
* `globally-unique=true` tells the cluster that one clone isn't identical
-to another (each handles a different "request bucket").
+to another (each handles a different "bucket"). This also tells the resource
+agent to insert *iptables* rules so each host only processes packets in its
+bucket(s).
Notice that when the ClusterIP becomes a clone, the constraints
referencing ClusterIP now reference the clone. This is
done automatically by pcs.
----
[root@pcmk-1 ~]# pcs -f loadbalance_cfg constraint
Location Constraints:
Ordering Constraints:
start ClusterIP-clone then start WebSite (kind:Mandatory)
promote WebDataClone then start WebFS (kind:Mandatory)
start WebFS then start WebSite (kind:Mandatory)
start dlm-clone then start WebFS (kind:Mandatory)
Colocation Constraints:
WebSite with ClusterIP-clone (score:INFINITY)
WebFS with WebDataClone (score:INFINITY) (with-rsc-role:Master)
WebSite with WebFS (score:INFINITY)
WebFS with dlm-clone (score:INFINITY)
----
Now we must tell the resource how to decide which requests are
processed by which hosts. To do this, we specify the *clusterip_hash* parameter.
The value of *sourceip* means that the source IP address of incoming packets
will be hashed; each node will process a certain range of hashes.
----
[root@pcmk-1 ~]# pcs -f loadbalance_cfg resource update ClusterIP clusterip_hash=sourceip
----
Load our configuration to the cluster, and see how it responds.
-----
[root@pcmk-1 ~]# pcs cluster cib-push loadbalance_cfg
CIB updated
[root@pcmk-1 ~]# pcs status
Cluster name: mycluster
Last updated: Sat Dec 20 22:05:48 2014
Last change: Sat Dec 20 22:05:34 2014
Stack: corosync
Current DC: pcmk-1 (1) - partition with quorum
Version: 1.1.12-a9c8177
2 Nodes configured
9 Resources configured
Online: [ pcmk-1 pcmk-2 ]
Full list of resources:
WebSite (ocf::heartbeat:apache): Stopped
Master/Slave Set: WebDataClone [WebData]
Masters: [ pcmk-1 ]
Slaves: [ pcmk-2 ]
WebFS (ocf::heartbeat:Filesystem): Stopped
ipmi-fencing (stonith:fence_ipmilan): Started pcmk-1
Clone Set: dlm-clone [dlm]
Started: [ pcmk-1 pcmk-2 ]
Clone Set: ClusterIP-clone [ClusterIP] (unique)
ClusterIP:0 (ocf::heartbeat:IPaddr2): Started pcmk-1
ClusterIP:1 (ocf::heartbeat:IPaddr2): Started pcmk-2
PCSD Status:
pcmk-1: Online
pcmk-2: Online
Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled
-----
If desired, you can demonstrate that all request buckets are working
by using a tool such as `arping` from several source hosts
to see which host responds to each.
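As a rough sketch (substitute your actual ClusterIP address and the client's
network interface for the placeholders used here):
-----
# arping -c 3 -I eth0 192.168.122.120
-----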
== Clone the Filesystem and Apache Resources ==
Now that we have a cluster filesystem ready to go,
and our nodes can load-balance requests to a shared IP address,
we can configure the cluster so both nodes mount the filesystem
and respond to web requests.
Clone the filesystem and Apache resources in a new configuration.
Notice how pcs automatically updates the relevant constraints again.
----
[root@pcmk-1 ~]# pcs cluster cib active_cfg
[root@pcmk-1 ~]# pcs -f active_cfg resource clone WebFS
[root@pcmk-1 ~]# pcs -f active_cfg resource clone WebSite
[root@pcmk-1 ~]# pcs -f active_cfg constraint
Location Constraints:
Ordering Constraints:
start ClusterIP-clone then start WebSite-clone (kind:Mandatory)
promote WebDataClone then start WebFS-clone (kind:Mandatory)
start WebFS-clone then start WebSite-clone (kind:Mandatory)
start dlm-clone then start WebFS-clone (kind:Mandatory)
Colocation Constraints:
WebSite-clone with ClusterIP-clone (score:INFINITY)
WebFS-clone with WebDataClone (score:INFINITY) (with-rsc-role:Master)
WebSite-clone with WebFS-clone (score:INFINITY)
WebFS-clone with dlm-clone (score:INFINITY)
----
Tell the cluster that it is now allowed to promote both instances to be DRBD
Primary (aka. master).
-----
[root@pcmk-1 ~]# pcs -f active_cfg resource update WebDataClone master-max=2
-----
Finally, load our configuration to the cluster, and re-enable the WebFS resource
(which we disabled earlier).
-----
[root@pcmk-1 ~]# pcs cluster cib-push active_cfg
CIB updated
[root@pcmk-1 ~]# pcs resource enable WebFS
-----
After all the processes are started, the status should look similar to this.
-----
[root@pcmk-1 ~]# pcs resource
Master/Slave Set: WebDataClone [WebData]
Masters: [ pcmk-1 pcmk-2 ]
Clone Set: dlm-clone [dlm]
Started: [ pcmk-1 pcmk-2 ]
Clone Set: ClusterIP-clone [ClusterIP] (unique)
ClusterIP:0 (ocf::heartbeat:IPaddr2): Started
ClusterIP:1 (ocf::heartbeat:IPaddr2): Started
Clone Set: WebFS-clone [WebFS]
Started: [ pcmk-1 pcmk-2 ]
Clone Set: WebSite-clone [WebSite]
Started: [ pcmk-1 pcmk-2 ]
-----
+== Test Failover ==
+
Testing failover is left as an exercise for the reader.
For example, you can put one node into standby mode,
use `pcs status` to confirm that its ClusterIP clone was
moved to the other node, and use `arping` to verify that
packets are not being lost from any source host.
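A minimal version of that test might look like the following (the node chosen
here is arbitrary; compare the `pcs status` output before and after each step):
----
[root@pcmk-1 ~]# pcs cluster standby pcmk-2
[root@pcmk-1 ~]# pcs status
[root@pcmk-1 ~]# pcs cluster unstandby pcmk-2
----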
+
+[NOTE]
+====
+You may find that when a failed node rejoins the cluster,
+both ClusterIP clones stay on one node, due to the
+resource stickiness. While this works fine, it effectively eliminates
+load-balancing and returns the cluster to an active-passive setup again.
+You can avoid this by disabling stickiness for the IP address resource:
+----
+[root@pcmk-1 ~]# pcs resource meta ClusterIP resource-stickiness=0
+----
+====
diff --git a/doc/Clusters_from_Scratch/en-US/Ch-Apache.txt b/doc/Clusters_from_Scratch/en-US/Ch-Apache.txt
index ec35154091..cbb1669bdc 100644
--- a/doc/Clusters_from_Scratch/en-US/Ch-Apache.txt
+++ b/doc/Clusters_from_Scratch/en-US/Ch-Apache.txt
@@ -1,429 +1,429 @@
= Add Apache as a Cluster Service =
Now that we have a basic but functional active/passive two-node cluster,
we're ready to add some real services. We're going to start with Apache
because it is a feature of many clusters and relatively simple to
configure.
== Install Apache ==
Before continuing, we need to make sure Apache is installed on both
hosts. We also need the wget tool in order for the cluster to be able to check
the status of the Apache server.
----
# yum install -y httpd wget
----
[IMPORTANT]
====
Do *not* enable the httpd service. Services that are intended to
be managed via the cluster software should never be managed by the OS.
It is often useful, however, to manually start the service, verify that
it works, then stop it again, before adding it to the cluster. This
allows you to resolve any non-cluster-related problems before continuing.
Since this is a simple example, we'll skip that step here; a minimal sketch of
such a check follows this note.
====
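If you do want to run that quick sanity check, it might look something like
this (a sketch only; `systemctl status` should report the service as active
(running) before you stop it again):
----
# systemctl start httpd
# systemctl status httpd
# systemctl stop httpd
----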
== Create Website Documents ==
We need to create a page for Apache to serve. On Fedora, the
default Apache document root is /var/www/html, so we'll create an index file
there. For the moment, we will simplify things by serving a static site
and manually synchronizing the data between the two nodes, so run this command
on both nodes:
-----
# cat <<-END >/var/www/html/index.html
 <html>
  <body>My Test Site - $(hostname)</body>
 </html>
END
-----
== Enable the Apache status URL ==
In order to monitor the health of your Apache instance, and recover it if
it fails, the resource agent used by Pacemaker assumes the server-status
URL is available. On both nodes, enable the URL with:
----
# cat <<-END >/etc/httpd/conf.d/status.conf
 <Location /server-status>
    SetHandler server-status
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1
 </Location>
END
----
[NOTE]
======
If you are using a different operating system or an earlier version of Fedora,
server-status may already be enabled or may be configurable in a different
location.
======
== Configure the Cluster ==
At this point, Apache is ready to go, and all that needs to be done is to
add it to the cluster. Let's call the resource WebSite. We need to use
an OCF resource script called apache in the heartbeat namespace.
footnote:[Compare the key used here, *ocf:heartbeat:apache*, with the one we
used earlier for the IP address, *ocf:heartbeat:IPaddr2*]
The script's only required parameter is the path to the main Apache
configuration file, and we'll tell the cluster to check once a
minute that Apache is still running.
----
[root@pcmk-1 ~]# pcs resource create WebSite ocf:heartbeat:apache \
configfile=/etc/httpd/conf/httpd.conf \
statusurl="http://localhost/server-status" \
op monitor interval=1min
----
By default, the operation timeout for all resources' start, stop, and monitor
operations is 20 seconds. In many cases, this timeout period is less than
a particular resource's advised timeout period. For the purposes of this
tutorial, we will adjust the global operation timeout default to 240 seconds.
----
[root@pcmk-1 ~]# pcs resource op defaults timeout=240s
[root@pcmk-1 ~]# pcs resource op defaults
timeout: 240s
----
[NOTE]
======
In a production cluster, it is usually better to adjust each resource's
start, stop, and monitor timeouts to values that are appropriate to
the behavior observed in your environment, rather than adjust
the global default.
======
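For example, to give just WebSite's monitor operation a longer timeout, you
could run something like the following (the 40-second value is purely
illustrative):
----
[root@pcmk-1 ~]# pcs resource update WebSite op monitor interval=1min timeout=40s
----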
After a short delay, we should see the cluster start Apache.
-----
[root@pcmk-1 ~]# pcs status
Cluster name: mycluster
Last updated: Wed Dec 17 12:40:41 2014
Last change: Wed Dec 17 12:40:05 2014
Stack: corosync
Current DC: pcmk-2 (2) - partition with quorum
Version: 1.1.12-a9c8177
2 Nodes configured
2 Resources configured
Online: [ pcmk-1 pcmk-2 ]
Full list of resources:
ClusterIP (ocf::heartbeat:IPaddr2): Started pcmk-2
WebSite (ocf::heartbeat:apache): Started pcmk-1
PCSD Status:
pcmk-1: Online
pcmk-2: Online
Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled
-----
Wait a moment, the WebSite resource isn't running on the same host as our
IP address!
[NOTE]
======
If, in the `pcs status` output, you see the WebSite resource has
failed to start, then you've likely not enabled the status URL correctly.
You can check whether this is the problem by running:
....
wget -O - http://127.0.0.1/server-status
....
If you see *Connection refused* in the output, then this is likely the
problem. Ensure that *Allow from 127.0.0.1* is present for
the *<Location /server-status>* block.
======
== Ensure Resources Run on the Same Host ==
To reduce the load on any one machine, Pacemaker will generally try to
spread the configured resources across the cluster nodes. However, we
can tell the cluster that two resources are related and need to run on
the same host (or not at all). Here, we instruct the cluster that
WebSite can only run on the host that ClusterIP is active on.
To achieve this, we use a _colocation constraint_ that indicates it is
mandatory for WebSite to run on the same node as ClusterIP. The
"mandatory" part of the colocation constraint is indicated by using a
score of INFINITY. The INFINITY score also means that if ClusterIP is not
active anywhere, WebSite will not be permitted to run.
[NOTE]
=======
If ClusterIP is not active anywhere, WebSite will not be permitted to run
anywhere.
=======
[IMPORTANT]
===========
Colocation constraints are "directional", in that they imply certain
things about the order in which the two resources will have a location
chosen. In this case, we're saying that *WebSite* needs to be placed on the
same machine as *ClusterIP*, which implies that the cluster must know the
location of *ClusterIP* before choosing a location for *WebSite*.
===========
-----
-[root@pcmk-1 ~]# pcs constraint colocation add WebSite ClusterIP INFINITY
+[root@pcmk-1 ~]# pcs constraint colocation add WebSite with ClusterIP INFINITY
[root@pcmk-1 ~]# pcs constraint
Location Constraints:
Ordering Constraints:
Colocation Constraints:
WebSite with ClusterIP (score:INFINITY)
[root@pcmk-1 ~]# pcs status
Cluster name: mycluster
Last updated: Wed Dec 17 13:57:58 2014
Last change: Wed Dec 17 13:57:22 2014
Stack: corosync
Current DC: pcmk-2 (2) - partition with quorum
Version: 1.1.12-a9c8177
2 Nodes configured
2 Resources configured
Online: [ pcmk-1 pcmk-2 ]
Full list of resources:
ClusterIP (ocf::heartbeat:IPaddr2): Started pcmk-2
WebSite (ocf::heartbeat:apache): Started pcmk-2
PCSD Status:
pcmk-1: Online
pcmk-2: Online
Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled
-----
== Ensure Resources Start and Stop in Order ==
Like many services, Apache can be configured to bind to specific
IP addresses on a host or to the wildcard IP address. If Apache
binds to the wildcard, it doesn't matter whether an IP address
is added before or after Apache starts; Apache will respond on
that IP just the same. However, if Apache binds only to certain IP
address(es), the order matters: If the address is added after Apache
starts, Apache won't respond on that address.
To be sure our WebSite responds regardless of Apache's address configuration,
we need to make sure ClusterIP not only runs on the same node,
but starts before WebSite. A colocation constraint only ensures the
resources run together, not the order in which they are started and stopped.
We do this by adding an ordering constraint. By default, all order constraints
are mandatory, which means that the recovery of ClusterIP will also trigger the
recovery of WebSite.
-----
[root@pcmk-1 ~]# pcs constraint order ClusterIP then WebSite
Adding ClusterIP WebSite (kind: Mandatory) (Options: first-action=start then-action=start)
[root@pcmk-1 ~]# pcs constraint
Location Constraints:
Ordering Constraints:
start ClusterIP then start WebSite (kind:Mandatory)
Colocation Constraints:
WebSite with ClusterIP (score:INFINITY)
-----
== Prefer One Node Over Another ==
Pacemaker does not rely on any sort of hardware symmetry between nodes,
so it may well be that one machine is more powerful than the other. In
such cases, it makes sense to host the resources on the more powerful node if
it is available. To do this, we create a location constraint.
In the location constraint below, we are saying the WebSite resource
prefers the node pcmk-1 with a score of 50. Here, the score indicates
how badly we'd like the resource to run at this location.
-----
[root@pcmk-1 ~]# pcs constraint location WebSite prefers pcmk-1=50
[root@pcmk-1 ~]# pcs constraint
Location Constraints:
Resource: WebSite
Enabled on: pcmk-1 (score:50)
Ordering Constraints:
start ClusterIP then start WebSite (kind:Mandatory)
Colocation Constraints:
WebSite with ClusterIP (score:INFINITY)
[root@pcmk-1 ~]# pcs status
Cluster name: mycluster
Last updated: Wed Dec 17 14:11:49 2014
Last change: Wed Dec 17 14:11:20 2014
Stack: corosync
Current DC: pcmk-2 (2) - partition with quorum
Version: 1.1.12-a9c8177
2 Nodes configured
2 Resources configured
Online: [ pcmk-1 pcmk-2 ]
Full list of resources:
ClusterIP (ocf::heartbeat:IPaddr2): Started pcmk-2
WebSite (ocf::heartbeat:apache): Started pcmk-2
PCSD Status:
pcmk-1: Online
pcmk-2: Online
Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled
-----
Wait a minute, the resources are still on pcmk-2!
Even though WebSite now prefers to run on pcmk-1, that preference is
(intentionally) less than the resource stickiness (how much we
preferred not to have unnecessary downtime).
To see the current placement scores, you can use a tool called crm_simulate.
----
[root@pcmk-1 ~]# crm_simulate -sL
Current cluster status:
Online: [ pcmk-1 pcmk-2 ]
ClusterIP (ocf::heartbeat:IPaddr2): Started pcmk-2
WebSite (ocf::heartbeat:apache): Started pcmk-2
Allocation scores:
native_color: ClusterIP allocation score on pcmk-1: 50
native_color: ClusterIP allocation score on pcmk-2: 200
native_color: WebSite allocation score on pcmk-1: -INFINITY
native_color: WebSite allocation score on pcmk-2: 100
Transition Summary:
----
== Move Resources Manually ==
There are always times when an administrator needs to override the
cluster and force resources to move to a specific location. In this example,
we will force the WebSite to move to pcmk-1 by
updating our previous location constraint with a score of INFINITY.
-----
[root@pcmk-1 ~]# pcs constraint location WebSite prefers pcmk-1=INFINITY
[root@pcmk-1 ~]# pcs constraint
Location Constraints:
Resource: WebSite
Enabled on: pcmk-1 (score:INFINITY)
Ordering Constraints:
start ClusterIP then start WebSite (kind:Mandatory)
Colocation Constraints:
WebSite with ClusterIP (score:INFINITY)
[root@pcmk-1 ~]# pcs status
Cluster name: mycluster
Last updated: Wed Dec 17 14:19:34 2014
Last change: Wed Dec 17 14:18:37 2014
Stack: corosync
Current DC: pcmk-2 (2) - partition with quorum
Version: 1.1.12-a9c8177
2 Nodes configured
2 Resources configured
Online: [ pcmk-1 pcmk-2 ]
Full list of resources:
ClusterIP (ocf::heartbeat:IPaddr2): Started pcmk-1
WebSite (ocf::heartbeat:apache): Started pcmk-1
PCSD Status:
pcmk-1: Online
pcmk-2: Online
Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled
-----
Once we've finished whatever activity required us to move the
resources to pcmk-1 (in our case nothing), we can then allow the cluster
to resume normal operation by removing the new constraint. Since we previously
configured a default stickiness, the resources will remain on pcmk-1.
First, use the `--full` option to get the constraint's ID:
-----
[root@pcmk-1 ~]# pcs constraint --full
Location Constraints:
Resource: WebSite
Enabled on: pcmk-1 (score:INFINITY) (id:location-WebSite-pcmk-1-INFINITY)
Ordering Constraints:
start ClusterIP then start WebSite (kind:Mandatory) (id:order-ClusterIP-WebSite-mandatory)
Colocation Constraints:
WebSite with ClusterIP (score:INFINITY) (id:colocation-WebSite-ClusterIP-INFINITY)
-----
Then remove the desired constraint using its ID:
-----
[root@pcmk-1 ~]# pcs constraint remove location-WebSite-pcmk-1-INFINITY
[root@pcmk-1 ~]# pcs constraint
Location Constraints:
Ordering Constraints:
start ClusterIP then start WebSite (kind:Mandatory)
Colocation Constraints:
WebSite with ClusterIP (score:INFINITY)
-----
Note that the location constraint is now gone. If we check the cluster
status, we can also see that (as expected) the resources are still active
on pcmk-1.
-----
# pcs status
Cluster name: mycluster
Last updated: Wed Dec 17 14:25:21 2014
Last change: Wed Dec 17 14:24:29 2014
Stack: corosync
Current DC: pcmk-2 (2) - partition with quorum
Version: 1.1.12-a9c8177
2 Nodes configured
2 Resources configured
Online: [ pcmk-1 pcmk-2 ]
Full list of resources:
ClusterIP (ocf::heartbeat:IPaddr2): Started pcmk-1
WebSite (ocf::heartbeat:apache): Started pcmk-1
PCSD Status:
pcmk-1: Online
pcmk-2: Online
Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled
-----
diff --git a/doc/Clusters_from_Scratch/en-US/Ch-Shared-Storage.txt b/doc/Clusters_from_Scratch/en-US/Ch-Shared-Storage.txt
index 74fc67e53c..b5c87f5ec9 100644
--- a/doc/Clusters_from_Scratch/en-US/Ch-Shared-Storage.txt
+++ b/doc/Clusters_from_Scratch/en-US/Ch-Shared-Storage.txt
@@ -1,509 +1,509 @@
= Replicate Storage Using DRBD =
Even if you're serving up static websites, having to manually synchronize
the contents of that website to all the machines in the cluster is not
ideal. For dynamic websites, such as a wiki, it's not even an option. Not
everyone can afford network-attached storage, but somehow the data needs
to be kept in sync.
Enter DRBD, which can be thought of as network-based RAID-1.
footnote:[See http://www.drbd.org/ for details.]
== Install the DRBD Packages ==
DRBD itself is included in the upstream kernel,
footnote:[Since version 2.6.33]
but we do need some utilities to use it effectively. On both nodes, run:
----
# yum install -y drbd-pacemaker drbd-udev
----
== Allocate a Disk Volume for DRBD ==
DRBD will need its own block device on each node. This can be
a physical disk partition or logical volume, of whatever size
you need for your data. For this document, we will use a
1GiB logical volume, which is more than sufficient for a single HTML file and
(later) GFS2 metadata.
----
[root@pcmk-1 ~]# vgdisplay | grep -e Name -e Free
VG Name fedora-server_pcmk-1
Free PE / Size 511 / 2.00 GiB
[root@pcmk-1 ~]# lvcreate --name drbd-demo --size 1G fedora-server_pcmk-1
Logical volume "drbd-demo" created
[root@pcmk-1 ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
drbd-demo fedora-server_pcmk-1 -wi-a----- 1.00g
root fedora-server_pcmk-1 -wi-ao---- 5.00g
swap fedora-server_pcmk-1 -wi-ao---- 1.00g
----
Repeat this on the second node, making sure to use the same size.
----
[root@pcmk-1 ~]# ssh pcmk-2 -- lvcreate --name drbd-demo --size 1G fedora-server_pcmk-2
Logical volume "drbd-demo" created
----
== Configure DRBD ==
There is no series of commands for building a DRBD configuration, so simply
run this on both nodes to use this sample configuration:
----
# cat <<END >/etc/drbd.d/wwwdata.res
resource wwwdata {
protocol C;
meta-disk internal;
device /dev/drbd1;
syncer {
verify-alg sha1;
}
net {
allow-two-primaries;
}
on pcmk-1 {
disk /dev/fedora-server_pcmk-1/drbd-demo;
address 192.168.122.101:7789;
}
on pcmk-2 {
disk /dev/fedora-server_pcmk-2/drbd-demo;
address 192.168.122.102:7789;
}
}
END
----
[IMPORTANT]
=========
Edit the file to use the hostnames, IP addresses and logical volume paths
of your nodes if they differ from the ones used in this guide.
=========
[NOTE]
=======
Detailed information on the directives used in this configuration (and
other alternatives) is available at
http://www.drbd.org/users-guide/ch-configure.html
The *allow-two-primaries* option would not normally be used in
an active/passive cluster. We are adding it here for the convenience
of changing to an active/active cluster later.
=======
== Initialize DRBD ==
With the configuration in place, we can now get DRBD running.
These commands create the local metadata for the DRBD resource,
ensure the DRBD kernel module is loaded, and bring up the DRBD resource.
Run them on one node:
----
# drbdadm create-md wwwdata
initializing activity log
NOT initializing bitmap
Writing meta data...
New drbd meta data block successfully created.
# modprobe drbd
# drbdadm up wwwdata
----
We can confirm DRBD's status on this node:
----
# cat /proc/drbd
version: 8.4.5 (api:1/proto:86-101)
srcversion: 153833F4A69E341D3F3E707
1: cs:WFConnection ro:Secondary/Unknown ds:Inconsistent/DUnknown C r----s
ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:1048508
----
Because we have not yet initialized the data, this node's data
is marked as *Inconsistent*. Because we have not yet initialized
the second node, the local state is *WFConnection* (waiting for connection),
and the partner node's status is marked as *Unknown*.
Now, repeat the above commands on the second node. This time,
when we check the status, it shows:
----
# cat /proc/drbd
version: 8.4.5 (api:1/proto:86-101)
srcversion: 153833F4A69E341D3F3E707
1: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----
ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:1048508
----
You can see the state has changed to *Connected*, meaning the two DRBD nodes
are communicating properly, and both nodes are in *Secondary* role
with *Inconsistent* data.
To make the data consistent, we need to tell DRBD which node should be
considered to have the correct data. In this case, since we are creating
a new resource, both have garbage, so we'll just pick pcmk-1
and run this command on it:
----
[root@pcmk-1 ~]# drbdadm primary --force wwwdata
----
[NOTE]
======
In DRBD 8.3 and earlier, the equivalent command is:
----
[root@pcmk-1 ~]# drbdadm -- --overwrite-data-of-peer primary wwwdata
----
======
If we check the status immediately, we'll see something like this:
----
[root@pcmk-1 ~]# cat /proc/drbd
version: 8.4.5 (api:1/proto:86-101)
srcversion: 153833F4A69E341D3F3E707
1: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r-----
ns:2872 nr:0 dw:0 dr:3784 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:1045636
[>....................] sync'ed: 0.4% (1045636/1048508)K
finish: 0:10:53 speed: 1,436 (1,436) K/sec
----
We can see that this node has the *Primary* role, the partner node has
the *Secondary* role, this node's data is now considered *UpToDate*,
the partner node's data is still *Inconsistent*, and a progress bar
shows how far along the partner node is in synchronizing the data.
After a while, the sync should finish, and you'll see something like:
----
[root@pcmk-1 ~]# cat /proc/drbd
version: 8.4.5 (api:1/proto:86-101)
srcversion: 153833F4A69E341D3F3E707
1: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
ns:1048508 nr:0 dw:0 dr:1049420 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
----
Both sets of data are now *UpToDate*, and we can proceed to creating
and populating a filesystem for our WebSite resource's documents.
== Populate the DRBD Disk ==
On the node with the primary role (pcmk-1 in this example),
create a filesystem on the DRBD device:
----
[root@pcmk-1 ~]# mkfs.ext4 /dev/drbd1
mke2fs 1.42.11 (09-Jul-2014)
Creating filesystem with 262127 4k blocks and 65536 inodes
Filesystem UUID: 26879260-9077-4d6d-ad69-7d31d3d8d8d4
Superblock backups stored on blocks:
32768, 98304, 163840, 229376
Allocating group tables: done
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done
----
[NOTE]
====
In this example, we create an ext4 filesystem with no special options.
In a production environment, you should choose a filesystem type and
options that are suitable for your application.
====
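For instance, a production setup might add a filesystem label and shrink the
reserved block percentage. The options below are illustrative only (do not
re-run `mkfs` on the device you just formatted):
----
# mkfs.ext4 -L webdata -m 1 /dev/drbd1
----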
Mount the newly created filesystem, populate it with our web document,
then unmount it (the cluster will handle mounting and unmounting it later):
----
[root@pcmk-1 ~]# mount /dev/drbd1 /mnt
[root@pcmk-1 ~]# cat <<-END >/mnt/index.html
 <html>
  <body>My Test Site - DRBD</body>
 </html>
END
[root@pcmk-1 ~]# umount /dev/drbd1
----
== Configure the Cluster for the DRBD device ==
One handy feature `pcs` has is the ability to queue up several changes
into a file and commit those changes atomically. To do this, start by
populating the file with the current raw XML config from the CIB.
----
# pcs cluster cib drbd_cfg
----
Using the `pcs -f` option, make changes to the configuration saved
in the +drbd_cfg+ file. These changes will not be seen by the cluster until
the +drbd_cfg+ file is pushed into the live cluster's CIB later.
Here, we create a cluster resource for the DRBD device, and an additional _clone_
resource to allow the resource to run on both nodes at the same time.
----
[root@pcmk-1 ~]# pcs -f drbd_cfg resource create WebData ocf:linbit:drbd \
drbd_resource=wwwdata op monitor interval=60s
[root@pcmk-1 ~]# pcs -f drbd_cfg resource master WebDataClone WebData \
master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 \
notify=true
[root@pcmk-1 ~]# pcs -f drbd_cfg resource show
ClusterIP (ocf::heartbeat:IPaddr2): Started
WebSite (ocf::heartbeat:apache): Started
Master/Slave Set: WebDataClone [WebData]
Stopped: [ pcmk-1 pcmk-2 ]
----
After you are satisfied with all the changes, you can commit
them all at once by pushing the drbd_cfg file into the live CIB.
----
[root@pcmk-1 ~]# pcs cluster cib-push drbd_cfg
CIB updated
----
[NOTE]
====
Early versions of `pcs` required `push cib` in place of `cib-push` above.
====
Let's see what the cluster did with the new configuration:
----
[root@pcmk-1 ~]# pcs status
Cluster name: mycluster
Last updated: Wed Dec 17 16:39:43 2014
Last change: Wed Dec 17 16:39:30 2014
Stack: corosync
Current DC: pcmk-2 (2) - partition with quorum
Version: 1.1.12-a9c8177
2 Nodes configured
4 Resources configured
Online: [ pcmk-1 pcmk-2 ]
Full list of resources:
ClusterIP (ocf::heartbeat:IPaddr2): Started pcmk-1
WebSite (ocf::heartbeat:apache): Started pcmk-1
Master/Slave Set: WebDataClone [WebData]
Masters: [ pcmk-1 ]
Slaves: [ pcmk-2 ]
PCSD Status:
pcmk-1: Online
pcmk-2: Online
Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled
----
We can see that *WebDataClone* (our DRBD device) is running as master (DRBD's
primary role) on *pcmk-1* and slave (DRBD's secondary role) on *pcmk-2*.
[IMPORTANT]
====
The resource agent should load the DRBD module when needed if it's not already
loaded. If that does not happen, configure your operating system to load the
module at boot time. For Fedora 21, you would run this on both nodes:
----
# echo drbd >/etc/modules-load.d/drbd.conf
----
====
== Configure the Cluster for the Filesystem ==
Now that we have a working DRBD device, we need to mount its filesystem.
In addition to defining the filesystem, we also need to
tell the cluster where it can be located (only on the DRBD Primary)
and when it is allowed to start (after the Primary was promoted).
We are going to take a shortcut when creating the resource this time.
Instead of explicitly saying we want the *ocf:heartbeat:Filesystem* script, we
are only going to ask for *Filesystem*. We can do this because we know there is only
one resource script named *Filesystem* available to pacemaker, and that pcs is smart
enough to fill in the *ocf:heartbeat:* portion for us correctly in the configuration.
If there were multiple *Filesystem* scripts from different OCF providers, we would need
to specify the exact one we wanted.
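If you want to check whether a resource script name is ambiguous before relying
on this shortcut, `pcs resource list` accepts a filter; the output should look
roughly like this:
----
[root@pcmk-1 ~]# pcs resource list Filesystem
ocf:heartbeat:Filesystem - Manages filesystem mounts
----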
Once again, we will queue our changes to a file and then push the
new configuration to the cluster as the final step.
----
[root@pcmk-1 ~]# pcs cluster cib fs_cfg
[root@pcmk-1 ~]# pcs -f fs_cfg resource create WebFS Filesystem \
device="/dev/drbd1" directory="/var/www/html" \
fstype="ext4"
-[root@pcmk-1 ~]# pcs -f fs_cfg constraint colocation add WebFS WebDataClone INFINITY with-rsc-role=Master
+[root@pcmk-1 ~]# pcs -f fs_cfg constraint colocation add WebFS with WebDataClone INFINITY with-rsc-role=Master
[root@pcmk-1 ~]# pcs -f fs_cfg constraint order promote WebDataClone then start WebFS
Adding WebDataClone WebFS (kind: Mandatory) (Options: first-action=promote then-action=start)
----
We also need to tell the cluster that Apache needs to run on the same
machine as the filesystem and that it must be active before Apache can
start.
----
-[root@pcmk-1 ~]# pcs -f fs_cfg constraint colocation add WebSite WebFS INFINITY
+[root@pcmk-1 ~]# pcs -f fs_cfg constraint colocation add WebSite with WebFS INFINITY
[root@pcmk-1 ~]# pcs -f fs_cfg constraint order WebFS then WebSite
Adding WebFS WebSite (kind: Mandatory) (Options: first-action=start then-action=start)
----
Review the updated configuration.
----
[root@pcmk-1 ~]# pcs -f fs_cfg constraint
Location Constraints:
Ordering Constraints:
start ClusterIP then start WebSite (kind:Mandatory)
promote WebDataClone then start WebFS (kind:Mandatory)
start WebFS then start WebSite (kind:Mandatory)
Colocation Constraints:
WebSite with ClusterIP (score:INFINITY)
WebFS with WebDataClone (score:INFINITY) (with-rsc-role:Master)
WebSite with WebFS (score:INFINITY)
[root@pcmk-1 ~]# pcs -f fs_cfg resource show
ClusterIP (ocf::heartbeat:IPaddr2): Started
WebSite (ocf::heartbeat:apache): Started
Master/Slave Set: WebDataClone [WebData]
Masters: [ pcmk-1 ]
Slaves: [ pcmk-2 ]
WebFS (ocf::heartbeat:Filesystem): Stopped
----
After reviewing the new configuration, upload it and watch the
cluster put it into effect.
----
[root@pcmk-1 ~]# pcs cluster cib-push fs_cfg
[root@pcmk-1 ~]# pcs status
Cluster name: mycluster
Last updated: Wed Dec 17 17:02:45 2014
Last change: Wed Dec 17 17:02:42 2014
Stack: corosync
Current DC: pcmk-2 (2) - partition with quorum
Version: 1.1.12-a9c8177
2 Nodes configured
5 Resources configured
Online: [ pcmk-1 pcmk-2 ]
Full list of resources:
ClusterIP (ocf::heartbeat:IPaddr2): Started pcmk-1
WebSite (ocf::heartbeat:apache): Started pcmk-1
Master/Slave Set: WebDataClone [WebData]
Masters: [ pcmk-1 ]
Slaves: [ pcmk-2 ]
WebFS (ocf::heartbeat:Filesystem): Started pcmk-1
PCSD Status:
pcmk-1: Online
pcmk-2: Online
Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled
----
== Test Cluster Failover ==
Previously, we used `pcs cluster stop pcmk-1` to stop all cluster
services on *pcmk-1*, failing over the cluster resources, but there is another
way to safely simulate node failure.
We can put the node into _standby mode_. Nodes in this state continue to
run corosync and pacemaker but are not allowed to run resources. Any resources
found active there will be moved elsewhere. This feature can be particularly
useful when performing system administration tasks such as updating packages
used by cluster resources.
Put the active node into standby mode, and observe the cluster move all
the resources to the other node. The node's status will
change to indicate that it can no longer host resources.
----
[root@pcmk-1 ~]# pcs cluster standby pcmk-1
[root@pcmk-1 ~]# pcs status
Cluster name: mycluster
Last updated: Wed Dec 17 17:14:05 2014
Last change: Wed Dec 17 17:14:02 2014
Stack: corosync
Current DC: pcmk-2 (2) - partition with quorum
Version: 1.1.12-a9c8177
2 Nodes configured
5 Resources configured
Node pcmk-1 (1): standby
Online: [ pcmk-2 ]
Full list of resources:
ClusterIP (ocf::heartbeat:IPaddr2): Started pcmk-2
WebSite (ocf::heartbeat:apache): Started pcmk-2
Master/Slave Set: WebDataClone [WebData]
Masters: [ pcmk-2 ]
Stopped: [ pcmk-1 ]
WebFS (ocf::heartbeat:Filesystem): Started pcmk-2
PCSD Status:
pcmk-1: Online
pcmk-2: Online
Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled
----
Once we've done everything we needed to on pcmk-1 (in this case nothing,
we just wanted to see the resources move), we can allow the node to be a
full cluster member again.
----
[root@pcmk-1 ~]# pcs cluster unstandby pcmk-1
[root@pcmk-1 ~]# pcs status
Cluster name: mycluster
Last updated: Wed Dec 17 17:15:36 2014
Last change: Wed Dec 17 17:15:33 2014
Stack: corosync
Current DC: pcmk-2 (2) - partition with quorum
Version: 1.1.12-a9c8177
2 Nodes configured
5 Resources configured
Online: [ pcmk-1 pcmk-2 ]
Full list of resources:
ClusterIP (ocf::heartbeat:IPaddr2): Started pcmk-2
WebSite (ocf::heartbeat:apache): Started pcmk-2
Master/Slave Set: WebDataClone [WebData]
Masters: [ pcmk-2 ]
Slaves: [ pcmk-1 ]
WebFS (ocf::heartbeat:Filesystem): Started pcmk-2
PCSD Status:
pcmk-1: Online
pcmk-2: Online
Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled
----
Notice that *pcmk-1* is back to the *Online* state, and that the cluster resources
stay where they are due to our resource stickiness settings configured earlier.
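If you want to confirm the stickiness value in effect, you can list the
resource defaults (the output below assumes the default stickiness suggested
earlier in this guide; yours may differ):
----
[root@pcmk-1 ~]# pcs resource defaults
resource-stickiness: 100
----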
diff --git a/doc/Clusters_from_Scratch/en-US/Revision_History.xml b/doc/Clusters_from_Scratch/en-US/Revision_History.xml
index aa36a0a585..0df7bbc577 100644
--- a/doc/Clusters_from_Scratch/en-US/Revision_History.xml
+++ b/doc/Clusters_from_Scratch/en-US/Revision_History.xml
@@ -1,61 +1,62 @@
%BOOK_ENTITIES;
]>
+
Revision History
1-0
Mon May 17 2010
AndrewBeekhofandrew@beekhof.net
Import from Pages.app
2-0
Wed Sep 22 2010
RaoulScarazzinirasca@miamammausalinux.org
Italian translation
3-0
Wed Feb 9 2011
AndrewBeekhofandrew@beekhof.net
Updated for Fedora 13
4-0
Wed Oct 5 2011
AndrewBeekhofandrew@beekhof.net
Update the GFS2 section to use CMAN
5-0
Fri Feb 10 2012
AndrewBeekhofandrew@beekhof.net
Generate docbook content from asciidoc sources
6-0
Tues July 3 2012
AndrewBeekhofandrew@beekhof.net
Updated for Fedora 17
7-0
Fri Sept 14 2012
DavidVosseldvossel@redhat.com
Updated for pcs
8-0
- Fri Dec 19 2014
+ Mon Jan 05 2015
KenGaillotkgaillot@redhat.com
Updated for Fedora 21