diff --git a/doc/Clusters_from_Scratch/en-US/Ch-Active-Passive.txt b/doc/Clusters_from_Scratch/en-US/Ch-Active-Passive.txt
index 9e3a202969..95e01eae88 100644
--- a/doc/Clusters_from_Scratch/en-US/Ch-Active-Passive.txt
+++ b/doc/Clusters_from_Scratch/en-US/Ch-Active-Passive.txt
@@ -1,669 +1,669 @@
= Creating an Active/Passive Cluster =
== Exploring the Existing Configuration ==
When Pacemaker starts up, it automatically records the number and details
of the nodes in the cluster, as well as which stack is being used and the
version of Pacemaker.
This is what the base configuration should look like.
ifdef::pcs[]
[source,Bash]
----
# pcs status
Last updated: Fri Sep 14 10:12:01 2012
Last change: Fri Sep 14 09:51:55 2012 via crmd on pcmk-2
Stack: corosync
Current DC: pcmk-1 (1) - partition with quorum
Version: 1.1.8-1.el7-60a19ed12fdb4d5c6a6b6767f52e5391e447fec0
2 Nodes configured, unknown expected votes
0 Resources configured.
Online: [ pcmk-1 pcmk-2 ]
Full list of resources:
----
endif::[]
ifdef::crm[]
[source,Bash]
----
# crm configure show
node $id="1702537408" pcmk-1
node $id="1719314624" pcmk-2
property $id="cib-bootstrap-options" \
dc-version="1.1.7-2.fc17-ee0730e13d124c3d58f00016c3376a1de5323cff" \
cluster-infrastructure="corosync"
----
endif::[]
For those who are not afraid of XML, you can see the raw
configuration by appending "xml" to the previous command.
.The last XML you'll see in this document
ifdef::pcs[]
[source,Bash]
----
# pcs cluster cib
----
endif::[]
ifdef::crm[]
[source,Bash]
----
# crm configure show xml
----
endif::[]
Before we make any changes, it's a good idea to check the validity of
the configuration.
[source,Bash]
----
# crm_verify -L -V
error: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
error: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
error: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
Errors found during check: config not valid
-V may provide more details
----
As you can see, the tool has found some errors.
In order to guarantee the safety of your data
footnote:[If the data is corrupt, there is little point in continuing to make it available]
-, Pacemaker ships with STONITH
+, the default for STONITH
footnote:[A common node fencing mechanism. Used to ensure data integrity by powering off "bad" nodes]
-enabled. However it also knows when no STONITH configuration has been
+in Pacemaker is +enabled+. However, it also knows when no STONITH configuration has been
supplied and reports this as a problem (since the cluster would not be
able to make progress if a situation requiring node fencing arose).
For now, we will disable this feature and configure it later in the
Configuring STONITH section. It is important to note that the use of
STONITH is highly encouraged; turning it off tells the cluster to
simply pretend that failed nodes are safely powered off. Some vendors
will even refuse to support clusters that have it disabled.
-To disable STONITH, we set the stonith-enabled cluster option to
+To disable STONITH, we set the _stonith-enabled_ cluster option to
false.
ifdef::pcs[]
[source,Bash]
----
# pcs property set stonith-enabled=false
# crm_verify -L
----
endif::[]
ifdef::crm[]
[source,Bash]
----
# crm configure property stonith-enabled=false
# crm_verify -L
----
endif::[]
With the new cluster option set, the configuration is now valid.
[WARNING]
=========
The use of stonith-enabled=false is completely inappropriate for a
production cluster. We use it here to defer the discussion of its
configuration, which can differ widely from one installation to the
next. See <<_what_is_stonith>> for information on why STONITH is important
and details on how to configure it.
=========
== Adding a Resource ==
The first thing we should do is configure an IP address. Regardless of
where the cluster service(s) are running, we need a consistent address
to contact them on. Here I will choose and add 192.168.122.120 as the
floating address, give it the imaginative name ClusterIP and tell the
cluster to check that it is running every 30 seconds.
[IMPORTANT]
===========
The chosen address must not be one already associated with
a physical node.
===========
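One quick sanity check before proceeding is to ping the address you
intend to use. This is only a sketch (an unanswered ping does not
guarantee the address is free, but a reply proves it is already taken):
[source,Bash]
----
# ping -c 1 192.168.122.120
----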
ifdef::pcs[]
[source,Bash]
----
# pcs resource create ClusterIP ocf:heartbeat:IPaddr2 \
ip=192.168.122.120 cidr_netmask=32 op monitor interval=30s
----
endif::[]
ifdef::crm[]
[source,Bash]
----
# crm configure primitive ClusterIP ocf:heartbeat:IPaddr2 \
params ip=192.168.122.120 cidr_netmask=32 \
op monitor interval=30s
----
endif::[]
The other important piece of information here is ocf:heartbeat:IPaddr2.
This tells Pacemaker three things about the resource you want to
add. The first field, ocf, is the standard to which the resource
script conforms and where to find it. The second field is specific
to OCF resources and tells the cluster which namespace to find the
resource script in, in this case heartbeat. The last field indicates
the name of the resource script.
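For OCF resources, these three fields also map directly onto the
filesystem: the script behind ocf:heartbeat:IPaddr2 typically lives at
/usr/lib/ocf/resource.d/heartbeat/IPaddr2. As a sketch (assuming the
resource-agents package installed to that standard location), you can
even ask the agent to describe its own parameters:
[source,Bash]
----
# export OCF_ROOT=/usr/lib/ocf
# /usr/lib/ocf/resource.d/heartbeat/IPaddr2 meta-data
----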
ifdef::pcs[]
To obtain a list of the available resource standards (the ocf part of
ocf:heartbeat:IPaddr2), run
[source,Bash]
----
# pcs resource standards
ocf
lsb
service
systemd
stonith
----
To obtain a list of the available ocf resource providers (the heartbeat
part of ocf:heartbeat:IPaddr2), run
[source,Bash]
----
# pcs resource providers
heartbeat
linbit
pacemaker
redhat
----
Finally, if you want to see all the resource agents available for
a specific ocf provider (the IPaddr2 part of ocf:heartbeat:IPaddr2), run
[source,Bash]
----
# pcs resource agents ocf:heartbeat
AoEtarget
AudibleAlarm
CTDB
ClusterMon
Delay
Dummy
.
. (skipping lots of resources to save space)
.
IPaddr2
.
.
.
symlink
syslog-ng
tomcat
vmware
----
endif::[]
ifdef::crm[]
To obtain a list of the available resource classes, run
[source,Bash]
----
# crm ra classes
heartbeat
lsb
ocf / heartbeat pacemaker
stonith
----
To then find all the OCF resource agents provided by Pacemaker and
Heartbeat, run
[source,Bash]
----
# crm ra list ocf pacemaker
ClusterMon Dummy HealthCPU HealthSMART Stateful SysInfo
SystemHealth controld o2cb ping pingd
# crm ra list ocf heartbeat
AoEtarget AudibleAlarm CTDB ClusterMon
Delay Dummy EvmsSCC Evmsd
Filesystem ICP IPaddr IPaddr2
IPsrcaddr IPv6addr LVM LinuxSCSI
MailTo ManageRAID ManageVE Pure-FTPd
Raid1 Route SAPDatabase SAPInstance
SendArp ServeRAID SphinxSearchDaemon Squid
Stateful SysInfo VIPArip VirtualDomain
WAS WAS6 WinPopup Xen
Xinetd anything apache conntrackd
db2 drbd eDir88 ethmonitor
exportfs fio iSCSILogicalUnit iSCSITarget
ids iscsi jboss ldirectord
lxc mysql mysql-proxy nfsserver
nginx oracle oralsnr pgsql
pingd portblock postfix proftpd
rsyncd scsi2reservation sfex symlink
syslog-ng tomcat vmware
----
endif::[]
Now verify that the IP resource has been added and display the cluster's
status to see that it is now active.
ifdef::pcs[]
[source,Bash]
----
# pcs status
Last updated: Fri Sep 14 10:17:00 2012
Last change: Fri Sep 14 10:15:48 2012 via cibadmin on pcmk-1
Stack: corosync
Current DC: pcmk-1 (1) - partition with quorum
Version: 1.1.8-1.el7-60a19ed12fdb4d5c6a6b6767f52e5391e447fec0
2 Nodes configured, unknown expected votes
1 Resources configured.
Online: [ pcmk-1 pcmk-2 ]
Full list of resources:
ClusterIP (ocf::heartbeat:IPaddr2): Started pcmk-1
----
endif::[]
ifdef::crm[]
[source,Bash]
----
# crm configure show
node $id="1702537408" pcmk-1
node $id="1719314624" pcmk-2
primitive ClusterIP ocf:heartbeat:IPaddr2 \
params ip="192.168.122.120" cidr_netmask="32" \
op monitor interval="30s"
property $id="cib-bootstrap-options" \
dc-version="1.1.7-2.fc17-ee0730e13d124c3d58f00016c3376a1de5323cff" \
cluster-infrastructure="corosync" \
stonith-enabled="false"
# crm_mon -1
============
Last updated: Tue Apr 3 09:56:50 2012
Last change: Tue Apr 3 09:54:37 2012 via cibadmin on pcmk-1
Stack: corosync
Current DC: pcmk-1 (1702537408) - partition with quorum
Version: 1.1.7-2.fc17-ee0730e13d124c3d58f00016c3376a1de5323cff
2 Nodes configured, unknown expected votes
1 Resources configured.
============
Online: [ pcmk-1 pcmk-2 ]
ClusterIP (ocf::heartbeat:IPaddr2): Started pcmk-1
----
endif::[]
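If you would like independent confirmation that the address is really
live, ask the kernel on the node shown as running +ClusterIP+ (a
sketch; +eth0+ is an assumption based on the network setup in the
installation chapter):
[source,Bash]
----
# ip addr show eth0 | grep 192.168.122.120
----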
== Perform a Failover ==
Since this is a high-availability cluster, we should test failover of
our new resource before moving on.
First, find the node on which the IP address is running.
ifdef::pcs[]
[source,Bash]
----
# pcs status
Last updated: Fri Sep 14 10:17:00 2012
Last change: Fri Sep 14 10:15:48 2012 via cibadmin on pcmk-1
Stack: corosync
Current DC: pcmk-1 (1) - partition with quorum
Version: 1.1.8-1.el7-60a19ed12fdb4d5c6a6b6767f52e5391e447fec0
2 Nodes configured, unknown expected votes
1 Resources configured.
Online: [ pcmk-1 pcmk-2 ]
Full list of resources:
ClusterIP (ocf::heartbeat:IPaddr2): Started pcmk-1
----
endif::[]
ifdef::crm[]
[source,Bash]
----
# crm resource status ClusterIP
resource ClusterIP is running on: pcmk-1
----
endif::[]
Shut down Pacemaker and Corosync on that machine.
ifdef::pcs[]
[source,Bash]
----
# pcs cluster stop pcmk-1
Stopping Cluster...
----
Once Corosync is no longer running, go to the other node and check the
cluster status.
[source,Bash]
----
# pcs status
Last updated: Fri Sep 14 10:31:01 2012
Last change: Fri Sep 14 10:15:48 2012 via cibadmin on pcmk-1
Stack: corosync
Current DC: pcmk-2 (2) - partition WITHOUT quorum
Version: 1.1.8-1.el7-60a19ed12fdb4d5c6a6b6767f52e5391e447fec0
2 Nodes configured, unknown expected votes
1 Resources configured.
Online: [ pcmk-2 ]
OFFLINE: [ pcmk-1 ]
Full list of resources:
ClusterIP (ocf::heartbeat:IPaddr2): Stopped
----
endif::[]
ifdef::crm[]
[source,Bash]
----
# ssh pcmk-1 -- service pacemaker stop
# ssh pcmk-1 -- service corosync stop
----
Once Corosync is no longer running, go to the other node and check the
cluster status with crm_mon.
[source,Bash]
----
# crm_mon -1
============
Last updated: Tue Apr 3 10:01:28 2012
Last change: Tue Apr 3 09:54:39 2012 via cibadmin on pcmk-1
Stack: corosync
Current DC: pcmk-2 (1719314624) - partition WITHOUT quorum
Version: 1.1.7-2.fc17-ee0730e13d124c3d58f00016c3376a1de5323cff
2 Nodes configured, unknown expected votes
1 Resources configured.
============
Online: [ pcmk-2 ]
OFFLINE: [ pcmk-1 ]
----
endif::[]
There are two things to notice about the cluster's current
state. The first is that, as expected, +pcmk-1+ is now offline. However,
we can also see that +ClusterIP+ isn't running anywhere!
=== Quorum and Two-Node Clusters ===
This is because the cluster no longer has quorum, as can be seen by
the text "partition WITHOUT quorum" in the status output. In order
to reduce the possibility of data corruption, Pacemaker's default
behavior is to stop all resources if the cluster does not have quorum.
A cluster is said to have quorum when more than half the known or
expected nodes are online, or for the mathematically inclined,
whenever the following equation is true:
....
total_nodes < 2 * active_nodes
....
Therefore a two-node cluster only has quorum when both nodes are
running, which is no longer the case for our cluster (with only one of
our two nodes active, 2 < 2 * 1 is false). This would normally make
the creation of a two-node cluster pointless
footnote:[Actually some would argue that two-node clusters are always pointless, but that is an argument for another time]
; however, it is possible to control how Pacemaker behaves when quorum
is lost. In particular, we can tell the cluster to simply ignore
quorum altogether.
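Before changing anything, you can also confirm the quorum state
directly on the surviving node; the corosync-quorumtool utility ships
with the Corosync 2.x packages used here (a sketch; the exact output
varies by version):
[source,Bash]
----
# corosync-quorumtool -s
----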
ifdef::pcs[]
[source,Bash]
----
# pcs property set no-quorum-policy=ignore
# pcs property
dc-version: 1.1.8-1.el7-60a19ed12fdb4d5c6a6b6767f52e5391e447fec0
cluster-infrastructure: corosync
stonith-enabled: false
no-quorum-policy: ignore
----
endif::[]
ifdef::crm[]
[source,Bash]
----
# crm configure property no-quorum-policy=ignore
# crm configure show
node $id="1702537408" pcmk-1
node $id="1719314624" pcmk-2
primitive ClusterIP ocf:heartbeat:IPaddr2 \
params ip="192.168.122.120" cidr_netmask="32" \
op monitor interval="30s"
property $id="cib-bootstrap-options" \
dc-version="1.1.7-2.fc17-ee0730e13d124c3d58f00016c3376a1de5323cff" \
cluster-infrastructure="corosync" \
stonith-enabled="false" \
no-quorum-policy="ignore"
----
endif::[]
After a few moments, the cluster will start the IP address on the
remaining node. Note that the cluster still does not have quorum.
ifdef::pcs[]
[source,Bash]
----
# pcs status
Last updated: Fri Sep 14 10:38:11 2012
Last change: Fri Sep 14 10:37:53 2012 via cibadmin on pcmk-2
Stack: corosync
Current DC: pcmk-2 (2) - partition WITHOUT quorum
Version: 1.1.8-1.el7-60a19ed12fdb4d5c6a6b6767f52e5391e447fec0
2 Nodes configured, unknown expected votes
1 Resources configured.
Online: [ pcmk-2 ]
OFFLINE: [ pcmk-1 ]
Full list of resources:
ClusterIP (ocf::heartbeat:IPaddr2): Started pcmk-2
----
endif::[]
ifdef::crm[]
[source,Bash]
----
# crm_mon -1
============
Last updated: Tue Apr 3 10:02:46 2012
Last change: Tue Apr 3 10:02:08 2012 via cibadmin on pcmk-2
Stack: corosync
Current DC: pcmk-2 (1719314624) - partition WITHOUT quorum
Version: 1.1.7-2.fc17-ee0730e13d124c3d58f00016c3376a1de5323cff
2 Nodes configured, unknown expected votes
1 Resources configured.
============
Online: [ pcmk-2 ]
OFFLINE: [ pcmk-1 ]
ClusterIP (ocf::heartbeat:IPaddr2): Started pcmk-2
----
endif::[]
Now simulate node recovery by restarting the cluster stack on +pcmk-1+ and
check the cluster's status. Note: if you get an authentication error with
the 'pcs cluster start pcmk-1' command, you must first authenticate on the
node using the 'pcs cluster auth pcmk-1 pcmk-2' command discussed earlier.
ifdef::pcs[]
[source,Bash]
----
# pcs cluster start pcmk-1
Starting Cluster...
# pcs status
Last updated: Fri Sep 14 10:42:56 2012
Last change: Fri Sep 14 10:37:53 2012 via cibadmin on pcmk-2
Stack: corosync
Current DC: pcmk-2 (2) - partition with quorum
Version: 1.1.8-1.el7-60a19ed12fdb4d5c6a6b6767f52e5391e447fec0
2 Nodes configured, unknown expected votes
1 Resources configured.
Online: [ pcmk-1 pcmk-2 ]
Full list of resources:
ClusterIP (ocf::heartbeat:IPaddr2): Started pcmk-2
----
endif::[]
ifdef::crm[]
[source,Bash]
----
# service corosync start
Starting Corosync Cluster Engine (corosync): [ OK ]
# service pacemaker start
Starting Pacemaker Cluster Manager: [ OK ]
# crm_mon
============
Last updated: Fri Aug 28 15:32:13 2009
Stack: openais
Current DC: pcmk-2 - partition with quorum
Version: 1.1.5-bdd89e69ba545404d02445be1f3d72e6a203ba2f
2 Nodes configured, 2 expected votes
1 Resources configured.
============
Online: [ pcmk-1 pcmk-2 ]
ClusterIP (ocf::heartbeat:IPaddr): Started pcmk-2
----
endif::[]
[NOTE]
======
In the dark days, the cluster may have moved the IP back to its
original location (+pcmk-1+). Usually this is no longer the case.
======
=== Prevent Resources from Moving after Recovery ===
In most circumstances, it is highly desirable to prevent healthy
resources from being moved around the cluster. Moving resources almost
always requires a period of downtime. For complex services like Oracle
databases, this period can be quite long.
To address this, Pacemaker has the concept of resource stickiness,
which controls how much a service prefers to stay running where it
is. You may like to think of it as the "cost" of any downtime. By
default, Pacemaker assumes there is zero cost associated with moving
resources and will do so to achieve "optimal"
footnote:[It should be noted that Pacemaker's definition of
optimal may not always agree with that of a human. The order in which
Pacemaker processes lists of resources and nodes creates implicit
preferences in situations where the administrator has not explicitly
specified them]
resource placement. We can specify a different stickiness for every
resource, but it is often sufficient to change the default.
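Should you later want a different stickiness for a single resource, it
can be set as a meta attribute on that resource. A minimal sketch using
the low-level crm_resource tool (available regardless of which
management shell you prefer; the value 200 is just an example):
[source,Bash]
----
# crm_resource --meta --resource ClusterIP --set-parameter resource-stickiness --parameter-value 200
----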
ifdef::pcs[]
[source,Bash]
----
# pcs resource rsc defaults resource-stickiness=100
# pcs resource rsc defaults
resource-stickiness: 100
----
endif::[]
ifdef::crm[]
[source,Bash]
----
# crm configure rsc_defaults resource-stickiness=100
# crm configure show
node $id="1702537408" pcmk-1
node $id="1719314624" pcmk-2
primitive ClusterIP ocf:heartbeat:IPaddr2 \
params ip="192.168.122.120" cidr_netmask="32" \
op monitor interval="30s"
property $id="cib-bootstrap-options" \
dc-version="1.1.7-2.fc17-ee0730e13d124c3d58f00016c3376a1de5323cff" \
cluster-infrastructure="corosync" \
stonith-enabled="false" \
no-quorum-policy="ignore"
rsc_defaults $id="rsc-options" \
resource-stickiness="100"
----
endif::[]
diff --git a/doc/Clusters_from_Scratch/en-US/Ch-Installation.txt b/doc/Clusters_from_Scratch/en-US/Ch-Installation.txt
index 61621c9b6d..58af962602 100644
--- a/doc/Clusters_from_Scratch/en-US/Ch-Installation.txt
+++ b/doc/Clusters_from_Scratch/en-US/Ch-Installation.txt
@@ -1,1013 +1,1015 @@
= Installation =
== OS Installation ==
Detailed instructions for installing Fedora are available at
http://docs.fedoraproject.org/en-US/Fedora/17/html/Installation_Guide/ in a number of
languages. The abbreviated version is as follows...
Point your browser to http://fedoraproject.org/en/get-fedora-all,
locate the +Install Media+ section and download the install DVD that
matches your hardware.
Burn the disk image to a DVD
footnote:[http://docs.fedoraproject.org/en-US/Fedora/16/html/Burning_ISO_images_to_disc/index.html]
and boot from it, or use the image to boot a virtual machine.
After clicking through the welcome screen, select your language,
keyboard layout
footnote:[http://docs.fedoraproject.org/en-US/Fedora/16/html/Installation_Guide/sn-keyboard-x86.html]
and storage type.
footnote:[http://docs.fedoraproject.org/en-US/Fedora/16/html/Installation_Guide/Storage_Devices-x86.html]
Assign your machine a host name.
footnote:[http://docs.fedoraproject.org/en-US/Fedora/16/html/Installation_Guide/sn-Netconfig-x86.html]
I happen to control the clusterlabs.org domain name, so I will use
that here.
[IMPORTANT]
===========
Do not accept the default network settings.
Cluster machines should never obtain an IP address via DHCP.
Before clicking next, select +Configure Network+ to specify a fixed IPv4 address for +System eth0+.
Here I will use the internal addresses for the clusterlabs.org network.
image::images/Network.png["Custom network settings",align="center"]
Be sure to also enter the +Routes+ section and add an entry for your default gateway.
===========
You will then be prompted to indicate the machine's physical location
footnote:[http://docs.fedoraproject.org/en-US/Fedora/16/html/Installation_Guide/s1-timezone-x86.html]
and to supply a root password.
footnote:[http://docs.fedoraproject.org/en-US/Fedora/16/html/Installation_Guide/sn-account_configuration-x86.html]
Now select where you want Fedora installed.
footnote:[http://docs.fedoraproject.org/en-US/Fedora/16/html/Installation_Guide/s1-diskpartsetup-x86.html]
As I don’t care about any existing data, I will accept the default and
allow Fedora to use the complete drive.
[IMPORTANT]
===========
By default Fedora uses LVM for partitioning, which allows us to
dynamically change the amount of space allocated to a given partition.
However, by default it also allocates all free space to the +/+
(aka. +root+) partition, which cannot be dynamically _reduced_ in size
(dynamic increases are fine, by the way).
So if you plan on following the DRBD or GFS2 portions of this guide,
you should reserve at least 1GB of space on each machine from which to
create a shared volume. To do so, select the +Review and modify
partitioning layout+ checkbox before clicking +Next+. You will then
be given an opportunity to reduce the size of the +root+ partition.
===========
Next choose which software should be
installed. footnote:[http://docs.fedoraproject.org/en-US/Fedora/16/html/Installation_Guide/s1-pkgselection-x86.html]
Change the selection to Minimal so that we see everything that gets
installed. Don't enable updates yet; we'll do that (and install any
extra software we need) later. After you click next, Fedora will begin
installing.
Go grab something to drink; this may take a while.
Once the node reboots, you'll see a (possibly mangled) login prompt on
the console. Log in using +root+ and the password you created earlier.
image::images/Console.png["Initial Console",align="center"]
[NOTE]
======
That was the last screenshot; from here on, we're going to be working
exclusively from the terminal.
======
== Post Installation Tasks ==
=== Networking ===
Bring up the network and ensure it starts at boot
[source,Bash]
....
# service network start
# chkconfig network on
....
Check the machine has the static IP address you configured earlier
[source,Bash]
....
# ip addr
1: lo: mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:d7:d6:08 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.101/24 brd 192.168.122.255 scope global eth0
inet6 fe80::5054:ff:fed7:d608/64 scope link
valid_lft forever preferred_lft forever
....
Now check the default route setting:
[source,Bash]
....
[root@pcmk-1 ~]# ip route
default via 192.168.122.1 dev eth0
192.168.122.0/24 dev eth0 proto kernel scope link src 192.168.122.101
....
If there is no line beginning with +default via+, then you may need to add a line such as
GATEWAY=192.168.122.1
to '/etc/sysconfig/network' and restart the network.
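For example (a sketch, assuming the gateway used throughout this guide):
[source,Bash]
....
# echo "GATEWAY=192.168.122.1" >> /etc/sysconfig/network
# service network restart
....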
Now check for connectivity to the outside world. Start small by
testing whether we can reach the gateway we configured.
[source,Bash]
....
# ping -c 1 192.168.122.1
PING 192.168.122.1 (192.168.122.1) 56(84) bytes of data.
64 bytes from 192.168.122.1: icmp_req=1 ttl=64 time=0.249 ms
--- 192.168.122.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms
....
Now try something external; choose a location you know will be available.
[source,Bash]
....
# ping -c 1 www.google.com
PING www.l.google.com (173.194.72.106) 56(84) bytes of data.
64 bytes from tf-in-f106.1e100.net (173.194.72.106): icmp_req=1 ttl=41 time=167 ms
--- www.l.google.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 167.618/167.618/167.618/0.000 ms
....
=== Leaving the Console ===
The console isn't a very friendly place to work from, so we will now
switch to accessing the machine remotely via SSH, where we can
use copy & paste, etc.
First we check that we can see the newly installed machine at all:
[source,Bash]
....
beekhof@f16 ~ # ping -c 1 192.168.122.101
PING 192.168.122.101 (192.168.122.101) 56(84) bytes of data.
64 bytes from 192.168.122.101: icmp_req=1 ttl=64 time=1.01 ms
--- 192.168.122.101 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.012/1.012/1.012/0.000 ms
....
Next we log in via SSH
[source,Bash]
....
beekhof@f16 ~ # ssh -l root 192.168.122.101
root@192.168.122.101's password:
Last login: Fri Mar 30 19:41:19 2012 from 192.168.122.1
[root@pcmk-1 ~]#
....
=== Security Shortcuts ===
To simplify this guide and focus on the aspects directly connected to
clustering, we will now disable the machine's firewall and SELinux.
[WARNING]
===========
Both of these actions create significant security issues
and should not be performed on machines that will be exposed to the
outside world.
===========
[IMPORTANT]
===========
TODO: Create an Appendix that deals with (at least) re-enabling the firewall.
===========
[source,Bash]
----
# setenforce 0
# sed -i.bak "s/SELINUX=enforcing/SELINUX=permissive/g" /etc/selinux/config
# systemctl disable iptables.service
rm '/etc/systemd/system/basic.target.wants/iptables.service'
# systemctl stop iptables.service
----
=== Short Node Names ===
During installation, we filled in the machine's fully qualified domain
name (FQDN), which can be rather long when it appears in cluster logs and
status output. See for yourself how the machine identifies itself:
(((Nodes, short name)))
[source,Bash]
----
# uname -n
pcmk-1.clusterlabs.org
# dnsdomainname
clusterlabs.org
----
(((Nodes, Domain name (Query))))
The output from the second command is fine, but we really don't need the
domain name included in the basic host details. To address this, we need
to update /etc/sysconfig/network. This is what it should look like before
we start.
[source,Bash]
----
# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=pcmk-1.clusterlabs.org
GATEWAY=192.168.122.1
----
All we need to do now is strip off the domain name portion, which is
stored elsewhere anyway.
[source,Bash]
----
# sed -i.sed 's/\.[a-z].*//g' /etc/sysconfig/network
----
Now confirm the change was successful. The revised file contents should
look something like this.
[source,Bash]
----
# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=pcmk-1
GATEWAY=192.168.122.1
----
However, we're not finished. The machine won't normally see the shortened
host name until it reboots, but we can force it to update.
[source,Bash]
----
# source /etc/sysconfig/network
# hostname $HOSTNAME
----
(((Nodes, Domain name (Remove from host name))))
Now check the machine is using the correct names
[source,Bash]
----
# uname -n
pcmk-1
# dnsdomainname
clusterlabs.org
----
=== NTP ===
It is highly recommended to enable NTP on your cluster nodes. Doing so
ensures all nodes agree on the current time and makes reading log files
significantly easier.
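One way to do this on Fedora is sketched below (assuming the ntp
package, which provides ntpd.service on Fedora 17; chrony is an
equally valid alternative):
[source,Bash]
----
# yum install -y ntp
# systemctl enable ntpd.service
# systemctl start ntpd.service
----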
== Before You Continue ==
Repeat the Installation steps so far, so that you have two Fedora
nodes ready to have the cluster software installed.
For the purposes of this document, the additional node is called
pcmk-2 with address 192.168.122.102.
=== Finalize Networking ===
Confirm that you can communicate between the two new nodes:
[source,Bash]
----
# ping -c 3 192.168.122.102
PING 192.168.122.102 (192.168.122.102) 56(84) bytes of data.
64 bytes from 192.168.122.102: icmp_seq=1 ttl=64 time=0.343 ms
64 bytes from 192.168.122.102: icmp_seq=2 ttl=64 time=0.402 ms
64 bytes from 192.168.122.102: icmp_seq=3 ttl=64 time=0.558 ms
--- 192.168.122.102 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.343/0.434/0.558/0.092 ms
----
Now we need to make sure we can communicate with the machines by their
names. If you have a DNS server, add entries for the two
machines. Otherwise, you'll need to add the machines to '/etc/hosts'.
Below are the entries for my cluster nodes:
[source,Bash]
----
# grep pcmk /etc/hosts
192.168.122.101 pcmk-1.clusterlabs.org pcmk-1
192.168.122.102 pcmk-2.clusterlabs.org pcmk-2
----
We can now verify the setup by again using ping:
[source,Bash]
----
# ping -c 3 pcmk-2
PING pcmk-2.clusterlabs.org (192.168.122.102) 56(84) bytes of data.
64 bytes from pcmk-2.clusterlabs.org (192.168.122.102): icmp_seq=1 ttl=64 time=0.164 ms
64 bytes from pcmk-2.clusterlabs.org (192.168.122.102): icmp_seq=2 ttl=64 time=0.475 ms
64 bytes from pcmk-2.clusterlabs.org (192.168.122.102): icmp_seq=3 ttl=64 time=0.186 ms
--- pcmk-2.clusterlabs.org ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2001ms
rtt min/avg/max/mdev = 0.164/0.275/0.475/0.141 ms
----
=== Configure SSH ===
SSH is a convenient and secure way to copy files and perform commands
remotely. For the purposes of this guide, we will create a key without a
password (using the -N option) so that we can perform remote actions
without being prompted.
(((SSH)))
[WARNING]
=========
Unprotected SSH keys, those without a password, are not recommended for servers exposed to the outside world.
We use them here only to simplify the demo.
=========
Create a new key and allow anyone with that key to log in:
.Creating and Activating a new SSH Key
[source,Bash]
----
# ssh-keygen -t dsa -f ~/.ssh/id_dsa -N ""
Generating public/private dsa key pair.
Your identification has been saved in /root/.ssh/id_dsa.
Your public key has been saved in /root/.ssh/id_dsa.pub.
The key fingerprint is:
91:09:5c:82:5a:6a:50:08:4e:b2:0c:62:de:cc:74:44 root@pcmk-1.clusterlabs.org
The key's randomart image is:
+--[ DSA 1024]----+
|==.ooEo.. |
|X O + .o o |
| * A + |
| + . |
| . S |
| |
| |
| |
| |
+-----------------+
# cp .ssh/id_dsa.pub .ssh/authorized_keys
----
(((Creating and Activating a new SSH Key)))
Install the key on the other nodes and test that you can now run commands
remotely, without being prompted.
.Installing the SSH Key on Another Host
[source,Bash]
----
# scp -r .ssh pcmk-2:
The authenticity of host 'pcmk-2 (192.168.122.102)' can't be established.
RSA key fingerprint is b1:2b:55:93:f1:d9:52:2b:0f:f2:8a:4e:ae:c6:7c:9a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'pcmk-2,192.168.122.102' (RSA) to the list of known hosts.root@pcmk-2's password:
id_dsa.pub 100% 616 0.6KB/s 00:00
id_dsa 100% 672 0.7KB/s 00:00
known_hosts 100% 400 0.4KB/s 00:00
authorized_keys 100% 616 0.6KB/s 00:00
-# ssh pcmk-2 -- uname -npcmk-2
+# ssh pcmk-2 -- uname -n
+pcmk-2
#
----
== Cluster Software Installation ==
=== Install the Cluster Software ===
Since version 12, Fedora comes with recent versions of everything you
need, so simply fire up the shell and run:
[source,Bash]
----
# yum install -y pacemaker corosync
fedora/metalink | 38 kB 00:00
fedora | 4.2 kB 00:00
fedora/primary_db | 14 MB 00:21
updates/metalink | 2.7 kB 00:00
updates | 2.6 kB 00:00
updates/primary_db | 1.2 kB 00:00
updates-testing/metalink | 28 kB 00:00
updates-testing | 4.5 kB 00:00
updates-testing/primary_db | 4.5 MB 00:12
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package corosync.x86_64 0:1.99.9-1.fc17 will be installed
--> Processing Dependency: corosynclib = 1.99.9-1.fc17 for package: corosync-1.99.9-1.fc17.x86_64
--> Processing Dependency: libxslt for package: corosync-1.99.9-1.fc17.x86_64
--> Processing Dependency: libvotequorum.so.5(COROSYNC_VOTEQUORUM_1.0)(64bit) for package: corosync-1.99.9-1.fc17.x86_64
--> Processing Dependency: libquorum.so.5(COROSYNC_QUORUM_1.0)(64bit) for package: corosync-1.99.9-1.fc17.x86_64
--> Processing Dependency: libcpg.so.4(COROSYNC_CPG_1.0)(64bit) for package: corosync-1.99.9-1.fc17.x86_64
--> Processing Dependency: libcmap.so.4(COROSYNC_CMAP_1.0)(64bit) for package: corosync-1.99.9-1.fc17.x86_64
--> Processing Dependency: libcfg.so.6(COROSYNC_CFG_0.82)(64bit) for package: corosync-1.99.9-1.fc17.x86_64
--> Processing Dependency: libvotequorum.so.5()(64bit) for package: corosync-1.99.9-1.fc17.x86_64
--> Processing Dependency: libtotem_pg.so.5()(64bit) for package: corosync-1.99.9-1.fc17.x86_64
--> Processing Dependency: libquorum.so.5()(64bit) for package: corosync-1.99.9-1.fc17.x86_64
--> Processing Dependency: libqb.so.0()(64bit) for package: corosync-1.99.9-1.fc17.x86_64
--> Processing Dependency: libnetsnmp.so.30()(64bit) for package: corosync-1.99.9-1.fc17.x86_64
--> Processing Dependency: libcpg.so.4()(64bit) for package: corosync-1.99.9-1.fc17.x86_64
--> Processing Dependency: libcorosync_common.so.4()(64bit) for package: corosync-1.99.9-1.fc17.x86_64
--> Processing Dependency: libcmap.so.4()(64bit) for package: corosync-1.99.9-1.fc17.x86_64
--> Processing Dependency: libcfg.so.6()(64bit) for package: corosync-1.99.9-1.fc17.x86_64
---> Package pacemaker.x86_64 0:1.1.7-2.fc17 will be installed
--> Processing Dependency: pacemaker-libs = 1.1.7-2.fc17 for package: pacemaker-1.1.7-2.fc17.x86_64
--> Processing Dependency: pacemaker-cluster-libs = 1.1.7-2.fc17 for package: pacemaker-1.1.7-2.fc17.x86_64
--> Processing Dependency: pacemaker-cli = 1.1.7-2.fc17 for package: pacemaker-1.1.7-2.fc17.x86_64
--> Processing Dependency: resource-agents for package: pacemaker-1.1.7-2.fc17.x86_64
--> Processing Dependency: perl(Getopt::Long) for package: pacemaker-1.1.7-2.fc17.x86_64
--> Processing Dependency: libgnutls.so.26(GNUTLS_1_4)(64bit) for package: pacemaker-1.1.7-2.fc17.x86_64
--> Processing Dependency: cluster-glue for package: pacemaker-1.1.7-2.fc17.x86_64
--> Processing Dependency: /usr/bin/perl for package: pacemaker-1.1.7-2.fc17.x86_64
--> Processing Dependency: libtransitioner.so.1()(64bit) for package: pacemaker-1.1.7-2.fc17.x86_64
--> Processing Dependency: libstonithd.so.1()(64bit) for package: pacemaker-1.1.7-2.fc17.x86_64
--> Processing Dependency: libstonith.so.1()(64bit) for package: pacemaker-1.1.7-2.fc17.x86_64
--> Processing Dependency: libplumb.so.2()(64bit) for package: pacemaker-1.1.7-2.fc17.x86_64
--> Processing Dependency: libpils.so.2()(64bit) for package: pacemaker-1.1.7-2.fc17.x86_64
--> Processing Dependency: libpengine.so.3()(64bit) for package: pacemaker-1.1.7-2.fc17.x86_64
--> Processing Dependency: libpe_status.so.3()(64bit) for package: pacemaker-1.1.7-2.fc17.x86_64
--> Processing Dependency: libpe_rules.so.2()(64bit) for package: pacemaker-1.1.7-2.fc17.x86_64
--> Processing Dependency: libltdl.so.7()(64bit) for package: pacemaker-1.1.7-2.fc17.x86_64
--> Processing Dependency: liblrm.so.2()(64bit) for package: pacemaker-1.1.7-2.fc17.x86_64
--> Processing Dependency: libgnutls.so.26()(64bit) for package: pacemaker-1.1.7-2.fc17.x86_64
--> Processing Dependency: libcrmcommon.so.2()(64bit) for package: pacemaker-1.1.7-2.fc17.x86_64
--> Processing Dependency: libcrmcluster.so.1()(64bit) for package: pacemaker-1.1.7-2.fc17.x86_64
--> Processing Dependency: libcib.so.1()(64bit) for package: pacemaker-1.1.7-2.fc17.x86_64
--> Running transaction check
---> Package cluster-glue.x86_64 0:1.0.6-9.fc17.1 will be installed
--> Processing Dependency: perl-TimeDate for package: cluster-glue-1.0.6-9.fc17.1.x86_64
--> Processing Dependency: libOpenIPMIutils.so.0()(64bit) for package: cluster-glue-1.0.6-9.fc17.1.x86_64
--> Processing Dependency: libOpenIPMIposix.so.0()(64bit) for package: cluster-glue-1.0.6-9.fc17.1.x86_64
--> Processing Dependency: libOpenIPMI.so.0()(64bit) for package: cluster-glue-1.0.6-9.fc17.1.x86_64
---> Package cluster-glue-libs.x86_64 0:1.0.6-9.fc17.1 will be installed
---> Package corosynclib.x86_64 0:1.99.9-1.fc17 will be installed
--> Processing Dependency: librdmacm.so.1(RDMACM_1.0)(64bit) for package: corosynclib-1.99.9-1.fc17.x86_64
--> Processing Dependency: libibverbs.so.1(IBVERBS_1.1)(64bit) for package: corosynclib-1.99.9-1.fc17.x86_64
--> Processing Dependency: libibverbs.so.1(IBVERBS_1.0)(64bit) for package: corosynclib-1.99.9-1.fc17.x86_64
--> Processing Dependency: librdmacm.so.1()(64bit) for package: corosynclib-1.99.9-1.fc17.x86_64
--> Processing Dependency: libibverbs.so.1()(64bit) for package: corosynclib-1.99.9-1.fc17.x86_64
---> Package gnutls.x86_64 0:2.12.17-1.fc17 will be installed
--> Processing Dependency: libtasn1.so.3(LIBTASN1_0_3)(64bit) for package: gnutls-2.12.17-1.fc17.x86_64
--> Processing Dependency: libtasn1.so.3()(64bit) for package: gnutls-2.12.17-1.fc17.x86_64
--> Processing Dependency: libp11-kit.so.0()(64bit) for package: gnutls-2.12.17-1.fc17.x86_64
---> Package libqb.x86_64 0:0.11.1-1.fc17 will be installed
---> Package libtool-ltdl.x86_64 0:2.4.2-3.fc17 will be installed
---> Package libxslt.x86_64 0:1.1.26-9.fc17 will be installed
---> Package net-snmp-libs.x86_64 1:5.7.1-4.fc17 will be installed
---> Package pacemaker-cli.x86_64 0:1.1.7-2.fc17 will be installed
---> Package pacemaker-cluster-libs.x86_64 0:1.1.7-2.fc17 will be installed
---> Package pacemaker-libs.x86_64 0:1.1.7-2.fc17 will be installed
---> Package perl.x86_64 4:5.14.2-211.fc17 will be installed
--> Processing Dependency: perl-libs = 4:5.14.2-211.fc17 for package: 4:perl-5.14.2-211.fc17.x86_64
--> Processing Dependency: perl(threads::shared) >= 1.21 for package: 4:perl-5.14.2-211.fc17.x86_64
--> Processing Dependency: perl(Socket) >= 1.3 for package: 4:perl-5.14.2-211.fc17.x86_64
--> Processing Dependency: perl(Scalar::Util) >= 1.10 for package: 4:perl-5.14.2-211.fc17.x86_64
--> Processing Dependency: perl(File::Spec) >= 0.8 for package: 4:perl-5.14.2-211.fc17.x86_64
--> Processing Dependency: perl-macros for package: 4:perl-5.14.2-211.fc17.x86_64
--> Processing Dependency: perl-libs for package: 4:perl-5.14.2-211.fc17.x86_64
--> Processing Dependency: perl(threads::shared) for package: 4:perl-5.14.2-211.fc17.x86_64
--> Processing Dependency: perl(threads) for package: 4:perl-5.14.2-211.fc17.x86_64
--> Processing Dependency: perl(Socket) for package: 4:perl-5.14.2-211.fc17.x86_64
--> Processing Dependency: perl(Scalar::Util) for package: 4:perl-5.14.2-211.fc17.x86_64
--> Processing Dependency: perl(Pod::Simple) for package: 4:perl-5.14.2-211.fc17.x86_64
--> Processing Dependency: perl(Module::Pluggable) for package: 4:perl-5.14.2-211.fc17.x86_64
--> Processing Dependency: perl(List::Util) for package: 4:perl-5.14.2-211.fc17.x86_64
--> Processing Dependency: perl(File::Spec::Unix) for package: 4:perl-5.14.2-211.fc17.x86_64
--> Processing Dependency: perl(File::Spec::Functions) for package: 4:perl-5.14.2-211.fc17.x86_64
--> Processing Dependency: perl(File::Spec) for package: 4:perl-5.14.2-211.fc17.x86_64
--> Processing Dependency: perl(Cwd) for package: 4:perl-5.14.2-211.fc17.x86_64
--> Processing Dependency: perl(Carp) for package: 4:perl-5.14.2-211.fc17.x86_64
--> Processing Dependency: libperl.so()(64bit) for package: 4:perl-5.14.2-211.fc17.x86_64
---> Package resource-agents.x86_64 0:3.9.2-2.fc17.1 will be installed
--> Processing Dependency: /usr/sbin/rpc.nfsd for package: resource-agents-3.9.2-2.fc17.1.x86_64
--> Processing Dependency: /usr/sbin/rpc.mountd for package: resource-agents-3.9.2-2.fc17.1.x86_64
--> Processing Dependency: /usr/sbin/ethtool for package: resource-agents-3.9.2-2.fc17.1.x86_64
--> Processing Dependency: /sbin/rpc.statd for package: resource-agents-3.9.2-2.fc17.1.x86_64
--> Processing Dependency: /sbin/quotaon for package: resource-agents-3.9.2-2.fc17.1.x86_64
--> Processing Dependency: /sbin/quotacheck for package: resource-agents-3.9.2-2.fc17.1.x86_64
--> Processing Dependency: /sbin/mount.nfs4 for package: resource-agents-3.9.2-2.fc17.1.x86_64
--> Processing Dependency: /sbin/mount.nfs for package: resource-agents-3.9.2-2.fc17.1.x86_64
--> Processing Dependency: /sbin/mount.cifs for package: resource-agents-3.9.2-2.fc17.1.x86_64
--> Processing Dependency: /sbin/fsck.xfs for package: resource-agents-3.9.2-2.fc17.1.x86_64
--> Processing Dependency: libnet.so.1()(64bit) for package: resource-agents-3.9.2-2.fc17.1.x86_64
--> Running transaction check
---> Package OpenIPMI-libs.x86_64 0:2.0.18-13.fc17 will be installed
---> Package cifs-utils.x86_64 0:5.3-2.fc17 will be installed
--> Processing Dependency: libtalloc.so.2(TALLOC_2.0.2)(64bit) for package: cifs-utils-5.3-2.fc17.x86_64
--> Processing Dependency: keyutils for package: cifs-utils-5.3-2.fc17.x86_64
--> Processing Dependency: libwbclient.so.0()(64bit) for package: cifs-utils-5.3-2.fc17.x86_64
--> Processing Dependency: libtalloc.so.2()(64bit) for package: cifs-utils-5.3-2.fc17.x86_64
---> Package ethtool.x86_64 2:3.2-2.fc17 will be installed
---> Package libibverbs.x86_64 0:1.1.6-2.fc17 will be installed
---> Package libnet.x86_64 0:1.1.5-3.fc17 will be installed
---> Package librdmacm.x86_64 0:1.0.15-1.fc17 will be installed
---> Package libtasn1.x86_64 0:2.12-1.fc17 will be installed
---> Package nfs-utils.x86_64 1:1.2.5-12.fc17 will be installed
--> Processing Dependency: rpcbind for package: 1:nfs-utils-1.2.5-12.fc17.x86_64
--> Processing Dependency: libtirpc for package: 1:nfs-utils-1.2.5-12.fc17.x86_64
--> Processing Dependency: libnfsidmap for package: 1:nfs-utils-1.2.5-12.fc17.x86_64
--> Processing Dependency: libgssglue.so.1(libgssapi_CITI_2)(64bit) for package: 1:nfs-utils-1.2.5-12.fc17.x86_64
--> Processing Dependency: libgssglue for package: 1:nfs-utils-1.2.5-12.fc17.x86_64
--> Processing Dependency: libevent for package: 1:nfs-utils-1.2.5-12.fc17.x86_64
--> Processing Dependency: libtirpc.so.1()(64bit) for package: 1:nfs-utils-1.2.5-12.fc17.x86_64
--> Processing Dependency: libnfsidmap.so.0()(64bit) for package: 1:nfs-utils-1.2.5-12.fc17.x86_64
--> Processing Dependency: libgssglue.so.1()(64bit) for package: 1:nfs-utils-1.2.5-12.fc17.x86_64
--> Processing Dependency: libevent-2.0.so.5()(64bit) for package: 1:nfs-utils-1.2.5-12.fc17.x86_64
---> Package p11-kit.x86_64 0:0.12-1.fc17 will be installed
---> Package perl-Carp.noarch 0:1.22-2.fc17 will be installed
---> Package perl-Module-Pluggable.noarch 1:3.90-211.fc17 will be installed
---> Package perl-PathTools.x86_64 0:3.33-211.fc17 will be installed
---> Package perl-Pod-Simple.noarch 1:3.16-211.fc17 will be installed
--> Processing Dependency: perl(Pod::Escapes) >= 1.04 for package: 1:perl-Pod-Simple-3.16-211.fc17.noarch
---> Package perl-Scalar-List-Utils.x86_64 0:1.25-1.fc17 will be installed
---> Package perl-Socket.x86_64 0:2.001-1.fc17 will be installed
---> Package perl-TimeDate.noarch 1:1.20-6.fc17 will be installed
---> Package perl-libs.x86_64 4:5.14.2-211.fc17 will be installed
---> Package perl-macros.x86_64 4:5.14.2-211.fc17 will be installed
---> Package perl-threads.x86_64 0:1.86-2.fc17 will be installed
---> Package perl-threads-shared.x86_64 0:1.40-2.fc17 will be installed
---> Package quota.x86_64 1:4.00-3.fc17 will be installed
--> Processing Dependency: quota-nls = 1:4.00-3.fc17 for package: 1:quota-4.00-3.fc17.x86_64
--> Processing Dependency: tcp_wrappers for package: 1:quota-4.00-3.fc17.x86_64
---> Package xfsprogs.x86_64 0:3.1.8-1.fc17 will be installed
--> Running transaction check
---> Package keyutils.x86_64 0:1.5.5-2.fc17 will be installed
---> Package libevent.x86_64 0:2.0.14-2.fc17 will be installed
---> Package libgssglue.x86_64 0:0.3-1.fc17 will be installed
---> Package libnfsidmap.x86_64 0:0.25-1.fc17 will be installed
---> Package libtalloc.x86_64 0:2.0.7-4.fc17 will be installed
---> Package libtirpc.x86_64 0:0.2.2-2.1.fc17 will be installed
---> Package libwbclient.x86_64 1:3.6.3-81.fc17.1 will be installed
---> Package perl-Pod-Escapes.noarch 1:1.04-211.fc17 will be installed
---> Package quota-nls.noarch 1:4.00-3.fc17 will be installed
---> Package rpcbind.x86_64 0:0.2.0-16.fc17 will be installed
---> Package tcp_wrappers.x86_64 0:7.6-69.fc17 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
==============================================================================================
Package Arch Version Repository Size
=====================================================================================
Installing:
corosync x86_64 1.99.9-1.fc17 updates-testing 159 k
pacemaker x86_64 1.1.7-2.fc17 updates-testing 362 k
Installing for dependencies:
OpenIPMI-libs x86_64 2.0.18-13.fc17 fedora 466 k
cifs-utils x86_64 5.3-2.fc17 updates-testing 66 k
cluster-glue x86_64 1.0.6-9.fc17.1 fedora 229 k
cluster-glue-libs x86_64 1.0.6-9.fc17.1 fedora 121 k
corosynclib x86_64 1.99.9-1.fc17 updates-testing 96 k
ethtool x86_64 2:3.2-2.fc17 fedora 94 k
gnutls x86_64 2.12.17-1.fc17 fedora 385 k
keyutils x86_64 1.5.5-2.fc17 fedora 49 k
libevent x86_64 2.0.14-2.fc17 fedora 160 k
libgssglue x86_64 0.3-1.fc17 fedora 24 k
libibverbs x86_64 1.1.6-2.fc17 fedora 44 k
libnet x86_64 1.1.5-3.fc17 fedora 54 k
libnfsidmap x86_64 0.25-1.fc17 fedora 34 k
libqb x86_64 0.11.1-1.fc17 updates-testing 68 k
librdmacm x86_64 1.0.15-1.fc17 fedora 27 k
libtalloc x86_64 2.0.7-4.fc17 fedora 22 k
libtasn1 x86_64 2.12-1.fc17 updates-testing 319 k
libtirpc x86_64 0.2.2-2.1.fc17 fedora 78 k
libtool-ltdl x86_64 2.4.2-3.fc17 fedora 45 k
libwbclient x86_64 1:3.6.3-81.fc17.1 updates-testing 68 k
libxslt x86_64 1.1.26-9.fc17 fedora 416 k
net-snmp-libs x86_64 1:5.7.1-4.fc17 fedora 713 k
nfs-utils x86_64 1:1.2.5-12.fc17 fedora 311 k
p11-kit x86_64 0.12-1.fc17 updates-testing 36 k
pacemaker-cli x86_64 1.1.7-2.fc17 updates-testing 368 k
pacemaker-cluster-libs x86_64 1.1.7-2.fc17 updates-testing 77 k
pacemaker-libs x86_64 1.1.7-2.fc17 updates-testing 322 k
perl x86_64 4:5.14.2-211.fc17 fedora 10 M
perl-Carp noarch 1.22-2.fc17 fedora 17 k
perl-Module-Pluggable noarch 1:3.90-211.fc17 fedora 47 k
perl-PathTools x86_64 3.33-211.fc17 fedora 105 k
perl-Pod-Escapes noarch 1:1.04-211.fc17 fedora 40 k
perl-Pod-Simple noarch 1:3.16-211.fc17 fedora 223 k
perl-Scalar-List-Utils x86_64 1.25-1.fc17 updates-testing 33 k
perl-Socket x86_64 2.001-1.fc17 updates-testing 44 k
perl-TimeDate noarch 1:1.20-6.fc17 fedora 43 k
perl-libs x86_64 4:5.14.2-211.fc17 fedora 628 k
perl-macros x86_64 4:5.14.2-211.fc17 fedora 32 k
perl-threads x86_64 1.86-2.fc17 fedora 47 k
perl-threads-shared x86_64 1.40-2.fc17 fedora 36 k
quota x86_64 1:4.00-3.fc17 fedora 160 k
quota-nls noarch 1:4.00-3.fc17 fedora 74 k
resource-agents x86_64 3.9.2-2.fc17.1 fedora 466 k
rpcbind x86_64 0.2.0-16.fc17 fedora 52 k
tcp_wrappers x86_64 7.6-69.fc17 fedora 72 k
xfsprogs x86_64 3.1.8-1.fc17 updates-testing 715 k
Transaction Summary
=====================================================================================
Install 2 Packages (+46 Dependent packages)
Total download size: 18 M
Installed size: 59 M
Downloading Packages:
(1/48): OpenIPMI-libs-2.0.18-13.fc17.x86_64.rpm | 466 kB 00:00
warning: rpmts_HdrFromFdno: Header V3 RSA/SHA256 Signature, key ID 1aca3465: NOKEY
Public key for OpenIPMI-libs-2.0.18-13.fc17.x86_64.rpm is not installed
(2/48): cifs-utils-5.3-2.fc17.x86_64.rpm | 66 kB 00:01
Public key for cifs-utils-5.3-2.fc17.x86_64.rpm is not installed
(3/48): cluster-glue-1.0.6-9.fc17.1.x86_64.rpm | 229 kB 00:00
(4/48): cluster-glue-libs-1.0.6-9.fc17.1.x86_64.rpm | 121 kB 00:00
(5/48): corosync-1.99.9-1.fc17.x86_64.rpm | 159 kB 00:01
(6/48): corosynclib-1.99.9-1.fc17.x86_64.rpm | 96 kB 00:00
(7/48): ethtool-3.2-2.fc17.x86_64.rpm | 94 kB 00:00
(8/48): gnutls-2.12.17-1.fc17.x86_64.rpm | 385 kB 00:00
(9/48): keyutils-1.5.5-2.fc17.x86_64.rpm | 49 kB 00:00
(10/48): libevent-2.0.14-2.fc17.x86_64.rpm | 160 kB 00:00
(11/48): libgssglue-0.3-1.fc17.x86_64.rpm | 24 kB 00:00
(12/48): libibverbs-1.1.6-2.fc17.x86_64.rpm | 44 kB 00:00
(13/48): libnet-1.1.5-3.fc17.x86_64.rpm | 54 kB 00:00
(14/48): libnfsidmap-0.25-1.fc17.x86_64.rpm | 34 kB 00:00
(15/48): libqb-0.11.1-1.fc17.x86_64.rpm | 68 kB 00:01
(16/48): librdmacm-1.0.15-1.fc17.x86_64.rpm | 27 kB 00:00
(17/48): libtalloc-2.0.7-4.fc17.x86_64.rpm | 22 kB 00:00
(18/48): libtasn1-2.12-1.fc17.x86_64.rpm | 319 kB 00:02
(19/48): libtirpc-0.2.2-2.1.fc17.x86_64.rpm | 78 kB 00:00
(20/48): libtool-ltdl-2.4.2-3.fc17.x86_64.rpm | 45 kB 00:00
(21/48): libwbclient-3.6.3-81.fc17.1.x86_64.rpm | 68 kB 00:00
(22/48): libxslt-1.1.26-9.fc17.x86_64.rpm | 416 kB 00:00
(23/48): net-snmp-libs-5.7.1-4.fc17.x86_64.rpm | 713 kB 00:01
(24/48): nfs-utils-1.2.5-12.fc17.x86_64.rpm | 311 kB 00:00
(25/48): p11-kit-0.12-1.fc17.x86_64.rpm | 36 kB 00:01
(26/48): pacemaker-1.1.7-2.fc17.x86_64.rpm | 362 kB 00:02
(27/48): pacemaker-cli-1.1.7-2.fc17.x86_64.rpm | 368 kB 00:02
(28/48): pacemaker-cluster-libs-1.1.7-2.fc17.x86_64.rpm | 77 kB 00:00
(29/48): pacemaker-libs-1.1.7-2.fc17.x86_64.rpm | 322 kB 00:01
(30/48): perl-5.14.2-211.fc17.x86_64.rpm | 10 MB 00:15
(31/48): perl-Carp-1.22-2.fc17.noarch.rpm | 17 kB 00:00
(32/48): perl-Module-Pluggable-3.90-211.fc17.noarch.rpm | 47 kB 00:00
(33/48): perl-PathTools-3.33-211.fc17.x86_64.rpm | 105 kB 00:00
(34/48): perl-Pod-Escapes-1.04-211.fc17.noarch.rpm | 40 kB 00:00
(35/48): perl-Pod-Simple-3.16-211.fc17.noarch.rpm | 223 kB 00:00
(36/48): perl-Scalar-List-Utils-1.25-1.fc17.x86_64.rpm | 33 kB 00:01
(37/48): perl-Socket-2.001-1.fc17.x86_64.rpm | 44 kB 00:00
(38/48): perl-TimeDate-1.20-6.fc17.noarch.rpm | 43 kB 00:00
(39/48): perl-libs-5.14.2-211.fc17.x86_64.rpm | 628 kB 00:00
(40/48): perl-macros-5.14.2-211.fc17.x86_64.rpm | 32 kB 00:00
(41/48): perl-threads-1.86-2.fc17.x86_64.rpm | 47 kB 00:00
(42/48): perl-threads-shared-1.40-2.fc17.x86_64.rpm | 36 kB 00:00
(43/48): quota-4.00-3.fc17.x86_64.rpm | 160 kB 00:00
(44/48): quota-nls-4.00-3.fc17.noarch.rpm | 74 kB 00:00
(45/48): resource-agents-3.9.2-2.fc17.1.x86_64.rpm | 466 kB 00:00
(46/48): rpcbind-0.2.0-16.fc17.x86_64.rpm | 52 kB 00:00
(47/48): tcp_wrappers-7.6-69.fc17.x86_64.rpm | 72 kB 00:00
(48/48): xfsprogs-3.1.8-1.fc17.x86_64.rpm | 715 kB 00:03
---------------------------------------------------------------------------------------
Total 333 kB/s | 18 MB 00:55
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-x86_64
Importing GPG key 0x1ACA3465:
Userid : "Fedora (17) "
Fingerprint: cac4 3fb7 74a4 a673 d81c 5de7 50e9 4c99 1aca 3465
Package : fedora-release-17-0.8.noarch (@anaconda-0)
From : /etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-x86_64
Running Transaction Check
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Installing : libqb-0.11.1-1.fc17.x86_64 1/48
Installing : libtool-ltdl-2.4.2-3.fc17.x86_64 2/48
Installing : cluster-glue-libs-1.0.6-9.fc17.1.x86_64 3/48
Installing : libxslt-1.1.26-9.fc17.x86_64 4/48
Installing : 1:perl-Pod-Escapes-1.04-211.fc17.noarch 5/48
Installing : perl-threads-1.86-2.fc17.x86_64 6/48
Installing : 4:perl-macros-5.14.2-211.fc17.x86_64 7/48
Installing : 1:perl-Pod-Simple-3.16-211.fc17.noarch 8/48
Installing : perl-Socket-2.001-1.fc17.x86_64 9/48
Installing : perl-Carp-1.22-2.fc17.noarch 10/48
Installing : 4:perl-libs-5.14.2-211.fc17.x86_64 11/48
Installing : perl-threads-shared-1.40-2.fc17.x86_64 12/48
Installing : perl-Scalar-List-Utils-1.25-1.fc17.x86_64 13/48
Installing : 1:perl-Module-Pluggable-3.90-211.fc17.noarch 14/48
Installing : perl-PathTools-3.33-211.fc17.x86_64 15/48
Installing : 4:perl-5.14.2-211.fc17.x86_64 16/48
Installing : libibverbs-1.1.6-2.fc17.x86_64 17/48
Installing : keyutils-1.5.5-2.fc17.x86_64 18/48
Installing : libgssglue-0.3-1.fc17.x86_64 19/48
Installing : libtirpc-0.2.2-2.1.fc17.x86_64 20/48
Installing : 1:net-snmp-libs-5.7.1-4.fc17.x86_64 21/48
Installing : rpcbind-0.2.0-16.fc17.x86_64 22/48
Installing : librdmacm-1.0.15-1.fc17.x86_64 23/48
Installing : corosynclib-1.99.9-1.fc17.x86_64 24/48
Installing : corosync-1.99.9-1.fc17.x86_64 25/48
error reading information on service corosync: No such file or directory
Installing : 1:perl-TimeDate-1.20-6.fc17.noarch 26/48
Installing : 1:quota-nls-4.00-3.fc17.noarch 27/48
Installing : tcp_wrappers-7.6-69.fc17.x86_64 28/48
Installing : 1:quota-4.00-3.fc17.x86_64 29/48
Installing : libnfsidmap-0.25-1.fc17.x86_64 30/48
Installing : 1:libwbclient-3.6.3-81.fc17.1.x86_64 31/48
Installing : libnet-1.1.5-3.fc17.x86_64 32/48
Installing : 2:ethtool-3.2-2.fc17.x86_64 33/48
Installing : libevent-2.0.14-2.fc17.x86_64 34/48
Installing : 1:nfs-utils-1.2.5-12.fc17.x86_64 35/48
Installing : libtalloc-2.0.7-4.fc17.x86_64 36/48
Installing : cifs-utils-5.3-2.fc17.x86_64 37/48
Installing : libtasn1-2.12-1.fc17.x86_64 38/48
Installing : OpenIPMI-libs-2.0.18-13.fc17.x86_64 39/48
Installing : cluster-glue-1.0.6-9.fc17.1.x86_64 40/48
Installing : p11-kit-0.12-1.fc17.x86_64 41/48
Installing : gnutls-2.12.17-1.fc17.x86_64 42/48
Installing : pacemaker-libs-1.1.7-2.fc17.x86_64 43/48
Installing : pacemaker-cluster-libs-1.1.7-2.fc17.x86_64 44/48
Installing : pacemaker-cli-1.1.7-2.fc17.x86_64 45/48
Installing : xfsprogs-3.1.8-1.fc17.x86_64 46/48
Installing : resource-agents-3.9.2-2.fc17.1.x86_64 47/48
Installing : pacemaker-1.1.7-2.fc17.x86_64 48/48
Verifying : xfsprogs-3.1.8-1.fc17.x86_64 1/48
Verifying : 1:net-snmp-libs-5.7.1-4.fc17.x86_64 2/48
Verifying : corosync-1.99.9-1.fc17.x86_64 3/48
Verifying : cluster-glue-1.0.6-9.fc17.1.x86_64 4/48
Verifying : perl-PathTools-3.33-211.fc17.x86_64 5/48
Verifying : p11-kit-0.12-1.fc17.x86_64 6/48
Verifying : 1:perl-Pod-Simple-3.16-211.fc17.noarch 7/48
Verifying : OpenIPMI-libs-2.0.18-13.fc17.x86_64 8/48
Verifying : libtasn1-2.12-1.fc17.x86_64 9/48
Verifying : perl-threads-1.86-2.fc17.x86_64 10/48
Verifying : 1:perl-Pod-Escapes-1.04-211.fc17.noarch 11/48
Verifying : pacemaker-1.1.7-2.fc17.x86_64 12/48
Verifying : 4:perl-5.14.2-211.fc17.x86_64 13/48
Verifying : gnutls-2.12.17-1.fc17.x86_64 14/48
Verifying : perl-threads-shared-1.40-2.fc17.x86_64 15/48
Verifying : 4:perl-macros-5.14.2-211.fc17.x86_64 16/48
Verifying : 1:perl-Module-Pluggable-3.90-211.fc17.noarch 17/48
Verifying : 1:nfs-utils-1.2.5-12.fc17.x86_64 18/48
Verifying : cluster-glue-libs-1.0.6-9.fc17.1.x86_64 19/48
Verifying : pacemaker-libs-1.1.7-2.fc17.x86_64 20/48
Verifying : libtalloc-2.0.7-4.fc17.x86_64 21/48
Verifying : libevent-2.0.14-2.fc17.x86_64 22/48
Verifying : perl-Socket-2.001-1.fc17.x86_64 23/48
Verifying : libgssglue-0.3-1.fc17.x86_64 24/48
Verifying : perl-Carp-1.22-2.fc17.noarch 25/48
Verifying : libtirpc-0.2.2-2.1.fc17.x86_64 26/48
Verifying : 2:ethtool-3.2-2.fc17.x86_64 27/48
Verifying : 4:perl-libs-5.14.2-211.fc17.x86_64 28/48
Verifying : libxslt-1.1.26-9.fc17.x86_64 29/48
Verifying : rpcbind-0.2.0-16.fc17.x86_64 30/48
Verifying : librdmacm-1.0.15-1.fc17.x86_64 31/48
Verifying : resource-agents-3.9.2-2.fc17.1.x86_64 32/48
Verifying : 1:quota-4.00-3.fc17.x86_64 33/48
Verifying : 1:perl-TimeDate-1.20-6.fc17.noarch 34/48
Verifying : perl-Scalar-List-Utils-1.25-1.fc17.x86_64 35/48
Verifying : libtool-ltdl-2.4.2-3.fc17.x86_64 36/48
Verifying : pacemaker-cluster-libs-1.1.7-2.fc17.x86_64 37/48
Verifying : cifs-utils-5.3-2.fc17.x86_64 38/48
Verifying : libnet-1.1.5-3.fc17.x86_64 39/48
Verifying : corosynclib-1.99.9-1.fc17.x86_64 40/48
Verifying : libqb-0.11.1-1.fc17.x86_64 41/48
Verifying : 1:libwbclient-3.6.3-81.fc17.1.x86_64 42/48
Verifying : libnfsidmap-0.25-1.fc17.x86_64 43/48
Verifying : tcp_wrappers-7.6-69.fc17.x86_64 44/48
Verifying : keyutils-1.5.5-2.fc17.x86_64 45/48
Verifying : libibverbs-1.1.6-2.fc17.x86_64 46/48
Verifying : 1:quota-nls-4.00-3.fc17.noarch 47/48
Verifying : pacemaker-cli-1.1.7-2.fc17.x86_64 48/48
Installed:
corosync.x86_64 0:1.99.9-1.fc17 pacemaker.x86_64 0:1.1.7-2.fc17
Dependency Installed:
OpenIPMI-libs.x86_64 0:2.0.18-13.fc17 cifs-utils.x86_64 0:5.3-2.fc17
cluster-glue.x86_64 0:1.0.6-9.fc17.1 cluster-glue-libs.x86_64 0:1.0.6-9.fc17.1
corosynclib.x86_64 0:1.99.9-1.fc17 ethtool.x86_64 2:3.2-2.fc17
gnutls.x86_64 0:2.12.17-1.fc17 keyutils.x86_64 0:1.5.5-2.fc17
libevent.x86_64 0:2.0.14-2.fc17 libgssglue.x86_64 0:0.3-1.fc17
libibverbs.x86_64 0:1.1.6-2.fc17 libnet.x86_64 0:1.1.5-3.fc17
libnfsidmap.x86_64 0:0.25-1.fc17 libqb.x86_64 0:0.11.1-1.fc17
librdmacm.x86_64 0:1.0.15-1.fc17 libtalloc.x86_64 0:2.0.7-4.fc17
libtasn1.x86_64 0:2.12-1.fc17 libtirpc.x86_64 0:0.2.2-2.1.fc17
libtool-ltdl.x86_64 0:2.4.2-3.fc17 libwbclient.x86_64 1:3.6.3-81.fc17.1
libxslt.x86_64 0:1.1.26-9.fc17 net-snmp-libs.x86_64 1:5.7.1-4.fc17
nfs-utils.x86_64 1:1.2.5-12.fc17 p11-kit.x86_64 0:0.12-1.fc17
pacemaker-cli.x86_64 0:1.1.7-2.fc17 pacemaker-cluster-libs.x86_64 0:1.1.7-2.fc17
pacemaker-libs.x86_64 0:1.1.7-2.fc17 perl.x86_64 4:5.14.2-211.fc17
perl-Carp.noarch 0:1.22-2.fc17 perl-Module-Pluggable.noarch 1:3.90-211.fc17
perl-PathTools.x86_64 0:3.33-211.fc17 perl-Pod-Escapes.noarch 1:1.04-211.fc17
perl-Pod-Simple.noarch 1:3.16-211.fc17 perl-Scalar-List-Utils.x86_64 0:1.25-1.fc17
perl-Socket.x86_64 0:2.001-1.fc17 perl-TimeDate.noarch 1:1.20-6.fc17
perl-libs.x86_64 4:5.14.2-211.fc17 perl-macros.x86_64 4:5.14.2-211.fc17
perl-threads.x86_64 0:1.86-2.fc17 perl-threads-shared.x86_64 0:1.40-2.fc17
quota.x86_64 1:4.00-3.fc17 quota-nls.noarch 1:4.00-3.fc17
resource-agents.x86_64 0:3.9.2-2.fc17.1 rpcbind.x86_64 0:0.2.0-16.fc17
tcp_wrappers.x86_64 0:7.6-69.fc17 xfsprogs.x86_64 0:3.1.8-1.fc17
Complete!
[root@pcmk-1 ~]#
----
Now install the cluster software on the second node.
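Since we set up password-less SSH earlier, one convenient way to do that
from pcmk-1 is sketched below:
[source,Bash]
----
# ssh pcmk-2 -- yum install -y pacemaker corosync
----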
ifdef::pcs[]
=== Install the Cluster Management Software ===
The pcs CLI, coupled with the pcs daemon, creates a cluster
management system capable of managing all aspects of the cluster stack
across all nodes from a single location.
[source,Bash]
----
# yum install -y pcs
----
Make sure to install the pcs packages on both nodes.
endif::[]
== Setup ==
ifdef::pcs[]
=== Enable pcs Daemon ===
Before the cluster can be configured, the pcs daemon must be started and enabled
to start at boot on each node. This daemon works with the pcs CLI to manage
syncing the corosync configuration across all the nodes in the cluster.
Start and enable the daemon by issuing the following commands on each node.
[source,Bash]
----
# systemctl start pcsd.service
# systemctl enable pcsd.service
----
-Now setup a common pcs user account on each node in the cluster using the
-pcs_passwd command. In the example below, the user account 'pcmk' is created.
-You will be asked to supply a password. Make sure the username and password is
+Now set up a common pcs user account on each node in the cluster using
+the pcs_passwd command. In the example below, the user account 'pcmk'
+is created. You will be asked to supply a password (or supply one
+with the -p option). Make sure the username and password are
consistent across all the nodes.
[source,Bash]
----
# pcs_passwd pcmk
password:
----
The pcs daemon account is required on each node to enable remote pcs command
authentication. While the pcs CLI can be used locally without setting
up a pcs daemon user account, pcs features that require access to remote
nodes (such as syncing the corosync config, or starting/stopping the cluster on remote
nodes) will be unavailable. This tutorial will make use of these remote access commands.
endif::[]
=== Configuring Corosync ===
ifdef::pcs[]
In the past, at this point in the tutorial an explanation of how to
configure and propagate corosync's /etc/corosync/corosync.conf file would
be necessary. Using pcs with the pcs daemon greatly simplifies this process
by generating corosync.conf on all the nodes in the cluster with a single
command. The only thing required to achieve this is to authenticate as the
pcs user 'pcmk' on one of the nodes in the cluster, and then issue the
'pcs cluster setup' command with a list of all the node names in the cluster.
[source,Bash]
----
# pcs cluster auth pcmk-1 pcmk-2
Username: pcmk
Password:
pcmk-1: Authorized
pcmk-2: Authorized
-# pcs cluster setup pcmk pcmk-1 pcmk-2
+# pcs cluster setup mycluster pcmk-1 pcmk-2
pcmk-1: Succeeded
pcmk-2: Succeeded
----
That's it. Corosync is configured across the cluster. If you received an
authorization error for either of those commands, make sure you set up the
'pcmk' user account using the pcs_passwd command on every node in the cluster
with the same password.
endif::[]
ifdef::crm[]
Choose a port number and multi-cast footnote:[http://en.wikipedia.org/wiki/Multicast]
address footnote:[http://en.wikipedia.org/wiki/Multicast_address].
Be sure that the values you chose do not conflict with any existing clusters you might have.
For advice on choosing a multi-cast address, see
http://www.29west.com/docs/THPM/multicast-address-assignment.html
For this document, I have chosen port 4000 and used 239.255.1.1 as the multi-cast address.
[IMPORTANT]
===========
The instructions below only apply for a machine with a single NIC. If you
have a more complicated setup, you should edit the configuration
manually.
===========
[source,Bash]
----
# export ais_port=4000
# export ais_mcast=239.255.1.1
----
Next we automatically determine the host's address. The command below grabs
the broadcast address of the last configured interface and converts it into
the corresponding network address (here, 192.168.122.255 becomes
192.168.122.0). By not using the full address, we make the configuration
suitable to be copied to other nodes.
[source,Bash]
----
# export ais_addr=`ip addr | grep "inet " | tail -n 1 | awk '{print $4}' | sed s/255/0/`
----
Display and verify the configuration options
[source,Bash]
----
# env | grep ais_
ais_mcast=239.255.1.1
ais_port=4000
ais_addr=192.168.122.0
----
Once you're happy with the chosen values, update the Corosync
configuration
[source,Bash]
----
# cp /etc/corosync/corosync.conf.example /etc/corosync/corosync.conf
# sed -i.bak "s/.*mcastaddr:.*/mcastaddr:\ $ais_mcast/g" /etc/corosync/corosync.conf
# sed -i.bak "s/.*mcastport:.*/mcastport:\ $ais_port/g" /etc/corosync/corosync.conf
# sed -i.bak "s/.*\tbindnetaddr:.*/bindnetaddr:\ $ais_addr/g" /etc/corosync/corosync.conf
----
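It is worth confirming the substitutions took effect before moving on. Given
the values above, the edited lines should now look something like this:
[source,Bash]
----
# grep -e mcastaddr -e mcastport -e bindnetaddr /etc/corosync/corosync.conf
mcastaddr: 239.255.1.1
mcastport: 4000
bindnetaddr: 192.168.122.0
----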
Lastly, you'll need to enable quorum
[source,Bash]
....
cat << END >> /etc/corosync/corosync.conf
quorum {
provider: corosync_votequorum
expected_votes: 2
}
END
....
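A quick way to double-check that the quorum section was appended correctly
is to print the end of the file, which should show the block you just added:
[source,Bash]
----
# tail -n 4 /etc/corosync/corosync.conf
----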
endif::[]
The final /etc/corosync/corosync.conf configuration on each node should look
something like the sample in Appendix B, Sample Corosync Configuration.
[IMPORTANT]
===========
Pacemaker used to obtain membership and quorum from a custom Corosync plugin.
This plugin also had the capability to start Pacemaker automatically when Corosync was started.
Neither behavior is possible with Corosync 2.0 and beyond as support for plugins was removed.
Instead, Pacemaker must be started as a separate job/initscript.
-Also, since Pacemaker used to use the plugin for message routing, a node using the plugin (Corosync prior to 2.0) cannot talk to one that isn't (Corosync 2.0+).
+Also, since Pacemaker made use of the plugin for message routing, a node using the plugin (Corosync prior to 2.0) cannot talk to one that isn't (Corosync 2.0+).
Rolling upgrades between these versions are therefore not possible and an alternate strategy footnote:[http://www.clusterlabs.org/doc/en-US/Pacemaker/1.1/html/Pacemaker_Explained/ap-upgrade.html] must be used.
===========
ifdef::crm[]
=== Propagate the Configuration ===
Now we need to copy the changes so far to the other node:
[source,Bash]
----
# for f in /etc/corosync/corosync.conf /etc/hosts; do scp $f pcmk-2:$f ; done
corosync.conf 100% 1528 1.5KB/s 00:00
hosts 100% 281 0.3KB/s 00:00
#
----
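To be certain the two nodes now hold identical copies, a checksum comparison
is a cheap sanity check:
[source,Bash]
----
# for f in /etc/corosync/corosync.conf /etc/hosts; do md5sum $f; ssh pcmk-2 -- md5sum $f; done
----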
endif::[]
diff --git a/doc/Clusters_from_Scratch/en-US/Ch-Tools.txt b/doc/Clusters_from_Scratch/en-US/Ch-Tools.txt
index 280dbdcd28..0f4839a268 100644
--- a/doc/Clusters_from_Scratch/en-US/Ch-Tools.txt
+++ b/doc/Clusters_from_Scratch/en-US/Ch-Tools.txt
@@ -1,158 +1,163 @@
= Pacemaker Tools =
== Using Pacemaker Tools ==
-ifdef::pcs[]
In the dark past, configuring Pacemaker required the administrator to
read and write XML. In true UNIX style, there were also a number of
different commands that specialized in different aspects of querying
and updating the cluster.
-All of that has been greatly simplified with the creation of pcs. The pcs
-cluster management tool takes all the individual aspects required
-for managing and configuring a cluster, and packs them into one simple
-to use command line tool.
+All of that has been greatly simplified with the creation of unified
+command-line shells (and GUIs) that hide all the messy XML
+scaffolding.
+
+These shells take all the individual aspects required for managing and
+configuring a cluster, and pack them into one simple-to-use command
+line tool.
+
+They even allow you to queue up several changes at once and commit
+them atomically.
+
+There are currently two command-line shells in common use: `pcs` and
+the `crm shell`.
+
+[NOTE]
+===========
+The two shells share many concepts but their scope, layout and syntax
+do differ, so make sure you read the version of this guide that
+corresponds to the software installed on your system.
+===========
+
+ifdef::pcs[]
+This edition of Clusters from Scratch is based on `pcs`.
+Start by taking some time to familiarize yourself with what it can do.
-Take some time to familiarize yourself with what pcs can do.
+[IMPORTANT]
+===========
+Since `pcs` has the ability to manage all aspects of the cluster (both
+corosync and pacemaker), it requires a specific cluster stack to be in
+use (corosync 2.0 with votequorum, plus Pacemaker version >= 1.8).
+===========
[source,Bash]
----
# pcs
Control and configure pacemaker and corosync.
Options:
-h Display usage and exit
-f file Perform actions on file instead of active CIB
Commands:
resource Manage cluster resources
cluster Configure cluster options and nodes
stonith Configure fence devices
property Set pacemaker properties
constraint Set resource constraints
status View cluster status
----
As you can see, the different aspects of cluster management are broken
up into categories: resource, cluster, stonith, property, constraint,
and status. To discover the functionality available in each of these
categories, one can issue the command 'pcs <category> help'. Below
is an example of all the options available under the status category.
[source,Bash]
----
# pcs status help
Usage: pcs status [commands]...
View current cluster and resource status
Commands:
status
View all information about the cluster and resources
status resources
View current status of cluster resources
status groups
View currently configured groups and their resources
status cluster
View current cluster status
status corosync
View current corosync status
status nodes [corosync]
View current status of nodes from pacemaker, or if corosync is
specified, print nodes currently configured in corosync
status actions
View failed actions
status pcsd ...
Show the current status of pcsd on the specified nodes
status xml
View xml version of status (output from crm_mon -r -1 -X)
----
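Any of these subcommands can be invoked directly. For example, to see the
node list as reported by pacemaker:
[source,Bash]
----
# pcs status nodes
----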
Additionally, if you are interested in the version and supported cluster
stack(s) of your Pacemaker installation, the pacemakerd --features option
is available to you.
pass:[# pacemakerd --features]
------------------
sys::[pacemakerd --features]
------------------
[NOTE]
======
If the SNMP and/or email options are not listed, then Pacemaker was not
built to support them. This may be by the choice of your distribution or
the required libraries may not have been available. Please contact
whoever supplied you with the packages for more details.
======
-While not covered in this tutorial, it is worth noting that there is
-another popular cluster management tool that predates pcs called crm.
-Besides having differing command syntax, there are several functional
-differences between these two tools. The crm shell is limited to
-configuring Pacemaker only, while pcs has the ability to manage
-all aspects of the cluster (corosync and pacemaker). As a result
-of the additional functionality present in pcs, the pcs tool requires
-a specific cluster stack to be in use, (corosync 2.0 with votequorum +
-Pacemaker version >= 1.8). The crm shell lacks such a requirement.
-
endif::[]
ifdef::crm[]
-In the dark past, configuring Pacemaker required the administrator to
-read and write XML. In true UNIX style, there were also a number of
-different commands that specialized in different aspects of querying
-and updating the cluster.
-
-Since Pacemaker 1.0, this has all changed and we have an integrated,
-scriptable, cluster shell that hides all the messy XML scaffolding. It
-even allows you to queue up several changes at once and commit them
-atomically.
-
-Take some time to familiarize yourself with what it can do.
+This edition of Clusters from Scratch is based on the `crm` shell.
+Start by taking some time to familiarize yourself with what it can do.
pass:[# crm --help]
The primary tool for monitoring the status of the cluster is crm_mon
(also available as crm status). It can be run in a variety of modes
and has a number of output options. To find out about any of the tools
that come with Pacemaker, simply invoke them with the --help option or
consult the included man pages. Both sets of output are generated from the
tool itself, and so will always be in sync with each other and with the
tool.
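For example, a one-off snapshot of the current cluster state (the -1 option
tells crm_mon to print the status once and exit rather than running
interactively):
[source,Bash]
----
# crm_mon -1
----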
Additionally, the Pacemaker version and supported cluster stack(s) are
available via the --features option to pacemakerd.
pass:[# pacemakerd --features]
------------------
sys::[pacemakerd --features]
------------------
pass:[# pacemakerd --help]
------------------
sys::[pacemakerd --help]
------------------
pass:[# crm_mon --help]
------------------
sys::[crm_mon --help]
------------------
[NOTE]
======
If the SNMP and/or email options are not listed, then Pacemaker was not
built to support them. This may be by the choice of your distribution or
the required libraries may not have been available. Please contact
whoever supplied you with the packages for more details.
======
endif::[]