diff --git a/doc/Clusters_from_Scratch/en-US/Ch-Installation.txt b/doc/Clusters_from_Scratch/en-US/Ch-Installation.txt
index 8f86914c2a..fadb91c7a1 100644
--- a/doc/Clusters_from_Scratch/en-US/Ch-Installation.txt
+++ b/doc/Clusters_from_Scratch/en-US/Ch-Installation.txt
@@ -1,1099 +1,1099 @@
= Installation =

== OS Installation ==

Detailed instructions for installing Fedora are available at
http://docs.fedoraproject.org/en-US/Fedora/17/html/Installation_Guide/
in a number of languages. The abbreviated version is as follows...

Point your browser to http://fedoraproject.org/en/get-fedora-all,
locate the +Install Media+ section and download the install DVD that
matches your hardware.

Burn the disk image to a DVD
footnote:[http://docs.fedoraproject.org/en-US/Fedora/16/html/Burning_ISO_images_to_disc/index.html]
and boot from it, or use the image to boot a virtual machine.

After clicking through the welcome screen, select your language,
keyboard layout
footnote:[http://docs.fedoraproject.org/en-US/Fedora/16/html/Installation_Guide/sn-keyboard-x86.html]
and storage type.
footnote:[http://docs.fedoraproject.org/en-US/Fedora/16/html/Installation_Guide/Storage_Devices-x86.html]

Assign your machine a host name.
footnote:[http://docs.fedoraproject.org/en-US/Fedora/16/html/Installation_Guide/sn-Netconfig-x86.html]
I happen to control the clusterlabs.org domain name, so I will use that here.

[IMPORTANT]
===========
Do not accept the default network settings. Cluster machines should
never obtain an IP address via DHCP.

When you are presented with the +Configure Network+ advanced option,
select that option before continuing with the installation process to
specify a fixed IPv4 address for +System eth0+. Be sure to also enter
the +Routes+ section and add an entry for your default gateway.

-image::images/Network.png["Custom network settings",align="center"]
+image::images/Network.png["Custom network settings",align="center",scaledwidth="65%"]

If you miss this step, it can easily be configured after installation:
navigate to +system settings+ and select +network+. From there you can
select which device to configure.
===========

You will then be prompted to indicate the machine's physical location
footnote:[http://docs.fedoraproject.org/en-US/Fedora/16/html/Installation_Guide/s1-timezone-x86.html]
and to supply a root password.
footnote:[http://docs.fedoraproject.org/en-US/Fedora/16/html/Installation_Guide/sn-account_configuration-x86.html]

Now select where you want Fedora installed.
footnote:[http://docs.fedoraproject.org/en-US/Fedora/16/html/Installation_Guide/s1-diskpartsetup-x86.html]
As I don't care about any existing data, I will accept the default and
allow Fedora to use the complete drive.

[IMPORTANT]
===========
By default Fedora uses LVM for partitioning, which allows us to
dynamically change the amount of space allocated to a given partition.

However, by default it also allocates all free space to the +/+
(aka. +root+) partition, which cannot be dynamically _reduced_ in size
(dynamic increases are fine, by the way).

So if you plan on following the DRBD or GFS2 portions of this guide,
you should reserve at least 1GB of space on each machine from which to
create a shared volume. To do so, select the +Review and modify
partitioning layout+ checkbox before clicking +Next+. You will then be
given an opportunity to reduce the size of the +root+ partition.
===========
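If you forget to reserve space here, it is worth checking after
installation whether the volume group has anything left over. A minimal
sketch using the standard LVM reporting tools (look at the +VFree+
column):

[source,C]
----
# vgs   # free space remaining in each volume group
# lvs   # size of each logical volume
----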
Next choose which software should be installed.
footnote:[http://docs.fedoraproject.org/en-US/Fedora/16/html/Installation_Guide/s1-pkgselection-x86.html]
Change the selection to Minimal so that we see everything that gets
installed. Don't enable updates yet; we'll do that (and install any
extra software we need) later.

After you click next, Fedora will begin installing. Go grab something
to drink; this may take a while.

Once the node reboots, you'll see a (possibly mangled) login prompt on
the console. Log in using +root+ and the password you created earlier.

-image::images/Console.png["Initial Console",align="center"]
+image::images/Console.png["Initial Console",align="center",scaledwidth="65%"]

[NOTE]
======
From here on in we're going to be working exclusively from the terminal.
======

== Post Installation Tasks ==

=== Networking ===

Bring up the network and ensure it starts at boot

[source,C]
-----
# service network start
# chkconfig network on
-----

Check the machine has the static IP address you configured earlier

[source,C]
-----
# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:d7:d6:08 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.101/24 brd 192.168.122.255 scope global eth0
    inet6 fe80::5054:ff:fed7:d608/64 scope link
       valid_lft forever preferred_lft forever
-----

Now check the default route setting:

[source,C]
-----
[root@pcmk-1 ~]# ip route
default via 192.168.122.1 dev eth0
192.168.122.0/24 dev eth0 proto kernel scope link src 192.168.122.101
-----

If there is no line beginning with +default via+, then you may need to
add a line such as

[source,Bash]
GATEWAY=192.168.122.1

to '/etc/sysconfig/network' and restart the network.
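For example, assuming the gateway used above, you could append the
entry and restart the network like this:

[source,C]
----
# echo "GATEWAY=192.168.122.1" >> /etc/sysconfig/network
# service network restart
----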
Now check for connectivity to the outside world. Start small by
testing if we can reach the gateway we configured.

[source,C]
-----
# ping -c 1 192.168.122.1
PING 192.168.122.1 (192.168.122.1) 56(84) bytes of data.
64 bytes from 192.168.122.1: icmp_req=1 ttl=64 time=0.249 ms

--- 192.168.122.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms
-----

Now try something external. Choose a location you know will be available.

[source,C]
-----
# ping -c 1 www.google.com
PING www.l.google.com (173.194.72.106) 56(84) bytes of data.
64 bytes from tf-in-f106.1e100.net (173.194.72.106): icmp_req=1 ttl=41 time=167 ms

--- www.l.google.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 167.618/167.618/167.618/0.000 ms
-----

=== Leaving the Console ===

The console isn't a very friendly place to work from, so we will now
switch to accessing the machine remotely via SSH, where we can use
copy & paste, etc.

First we check that we can see the newly installed machine at all:

[source,C]
-----
beekhof@f16 ~ # ping -c 1 192.168.122.101
PING 192.168.122.101 (192.168.122.101) 56(84) bytes of data.
64 bytes from 192.168.122.101: icmp_req=1 ttl=64 time=1.01 ms

--- 192.168.122.101 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.012/1.012/1.012/0.000 ms
-----

Next we log in via SSH

[source,C]
-----
beekhof@f16 ~ # ssh -l root 192.168.122.101
root@192.168.122.101's password:
Last login: Fri Mar 30 19:41:19 2012 from 192.168.122.1
[root@pcmk-1 ~]#
-----

=== Security Shortcuts ===

To simplify this guide and focus on the aspects directly connected to
clustering, we will now disable the machine's firewall and SELinux.

[WARNING]
===========
Both of these actions create significant security issues and should
not be performed on machines that will be exposed to the outside world.
===========

[IMPORTANT]
===========
TODO: Create an Appendix that deals with (at least) re-enabling the firewall.
===========

[source,C]
----
# setenforce 0
# sed -i.bak "s/SELINUX=enforcing/SELINUX=permissive/g" /etc/selinux/config
# systemctl disable iptables.service
# rm '/etc/systemd/system/basic.target.wants/iptables.service'
# systemctl stop iptables.service
----

=== Short Node Names ===

During installation, we filled in the machine's fully qualified domain
name (FQDN), which can be rather long when it appears in cluster logs
and status output. See for yourself how the machine identifies itself:
(((Nodes, short name)))

[source,C]
----
# uname -n
pcmk-1.clusterlabs.org
# dnsdomainname
clusterlabs.org
----
(((Nodes, Domain name (Query))))

The output from the second command is fine, but we really don't need
the domain name included in the basic host details. To address this,
we need to update '/etc/sysconfig/network'. This is what it should look
like before we start.

[source,C]
----
# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=pcmk-1.clusterlabs.org
GATEWAY=192.168.122.1
----

All we need to do now is strip off the domain name portion, which is
stored elsewhere anyway.

[source,C]
----
# sed -i.sed 's/\.[a-z].*//g' /etc/sysconfig/network
----

Now confirm the change was successful. The revised file contents should
look something like this.

[source,C]
----
# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=pcmk-1
GATEWAY=192.168.122.1
----

However, we're not finished. The machine won't normally use the
shortened host name until it reboots, but we can force it to update.

[source,C]
----
# source /etc/sysconfig/network
# hostname $HOSTNAME
----
(((Nodes, Domain name (Remove from host name))))

Now check that the machine is using the correct names

[source,C]
----
# uname -n
pcmk-1
# dnsdomainname
clusterlabs.org
----

=== NTP ===

It is highly recommended to enable NTP on your cluster nodes. Doing so
ensures all nodes agree on the current time and makes reading log files
significantly easier.
footnote:[http://docs.fedoraproject.org/en-US/Fedora/17/html-single/System_Administrators_Guide/index.html#ch-Configuring_the_Date_and_Time]
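On Fedora 17 this amounts to installing and enabling an NTP daemon on
each node. A minimal sketch, assuming the stock +ntp+ package:

[source,C]
----
# yum install -y ntp
# systemctl enable ntpd.service
# systemctl start ntpd.service
----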
== Before You Continue ==

Repeat the Installation steps so far, so that you have two Fedora
nodes ready to have the cluster software installed.

For the purposes of this document, the additional node is called
pcmk-2 with address 192.168.122.102.

=== Finalize Networking ===

Confirm that you can communicate between the two new nodes:

[source,C]
----
# ping -c 3 192.168.122.102
PING 192.168.122.102 (192.168.122.102) 56(84) bytes of data.
64 bytes from 192.168.122.102: icmp_seq=1 ttl=64 time=0.343 ms
64 bytes from 192.168.122.102: icmp_seq=2 ttl=64 time=0.402 ms
64 bytes from 192.168.122.102: icmp_seq=3 ttl=64 time=0.558 ms

--- 192.168.122.102 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.343/0.434/0.558/0.092 ms
----

Now we need to make sure we can communicate with the machines by name.
If you have a DNS server, add additional entries for the two machines.
Otherwise, you'll need to add the machines to '/etc/hosts'.
Below are the entries for my cluster nodes:

[source,C]
----
# grep pcmk /etc/hosts
192.168.122.101 pcmk-1.clusterlabs.org pcmk-1
192.168.122.102 pcmk-2.clusterlabs.org pcmk-2
----

We can now verify the setup by again using ping:

[source,C]
----
# ping -c 3 pcmk-2
PING pcmk-2.clusterlabs.org (192.168.122.102) 56(84) bytes of data.
64 bytes from pcmk-2.clusterlabs.org (192.168.122.102): icmp_seq=1 ttl=64 time=0.164 ms
64 bytes from pcmk-2.clusterlabs.org (192.168.122.102): icmp_seq=2 ttl=64 time=0.475 ms
64 bytes from pcmk-2.clusterlabs.org (192.168.122.102): icmp_seq=3 ttl=64 time=0.186 ms

--- pcmk-2.clusterlabs.org ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2001ms
rtt min/avg/max/mdev = 0.164/0.275/0.475/0.141 ms
----

=== Configure SSH ===

SSH is a convenient and secure way to copy files and run commands
remotely. For the purposes of this guide, we will create a key without a
password (using the -N option) so that we can perform remote actions
without being prompted.

(((SSH)))

[WARNING]
=========
Unprotected SSH keys (those without a password) are not recommended for
servers exposed to the outside world. We use them here only to simplify
the demo.
=========

Create a new key and allow anyone with that key to log in:

.Creating and Activating a new SSH Key
[source,C]
----
# ssh-keygen -t dsa -f ~/.ssh/id_dsa -N ""
Generating public/private dsa key pair.
Your identification has been saved in /root/.ssh/id_dsa.
Your public key has been saved in /root/.ssh/id_dsa.pub.
The key fingerprint is:
91:09:5c:82:5a:6a:50:08:4e:b2:0c:62:de:cc:74:44 root@pcmk-1.clusterlabs.org
The key's randomart image is:
+--[ DSA 1024]----+
|==.ooEo..        |
|X O + .o o       |
| * A    +        |
|  +      .       |
| .      S        |
|                 |
|                 |
|                 |
|                 |
+-----------------+

# cp .ssh/id_dsa.pub .ssh/authorized_keys
----
(((Creating and Activating a new SSH Key)))

Install the key on the other nodes and test that you can now run
commands remotely, without being prompted

.Installing the SSH Key on Another Host
[source,C]
----
# scp -r .ssh pcmk-2:
The authenticity of host 'pcmk-2 (192.168.122.102)' can't be established.
RSA key fingerprint is b1:2b:55:93:f1:d9:52:2b:0f:f2:8a:4e:ae:c6:7c:9a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'pcmk-2,192.168.122.102' (RSA) to the list of known hosts.
root@pcmk-2's password:
id_dsa.pub                           100%  616     0.6KB/s   00:00
id_dsa                               100%  672     0.7KB/s   00:00
known_hosts                          100%  400     0.4KB/s   00:00
authorized_keys                      100%  616     0.6KB/s   00:00
# ssh pcmk-2 -- uname -n
pcmk-2
#
----

== Cluster Software Installation ==

=== Install the Cluster Software ===

Since version 12, Fedora comes with recent versions of everything you
need, so simply fire up a shell on all your nodes and run:

[source,C]
----
[ALL] # yum install -y pacemaker corosync
----
.....
fedora/metalink | 38 kB 00:00 fedora | 4.2 kB 00:00 fedora/primary_db | 14 MB 00:21 updates/metalink | 2.7 kB 00:00 updates | 2.6 kB 00:00 updates/primary_db | 1.2 kB 00:00 updates-testing/metalink | 28 kB 00:00 updates-testing | 4.5 kB 00:00 updates-testing/primary_db | 4.5 MB 00:12 Setting up Install Process Resolving Dependencies --> Running transaction check ---> Package corosync.x86_64 0:1.99.9-1.fc17 will be installed --> Processing Dependency: corosynclib = 1.99.9-1.fc17 for package: corosync-1.99.9-1.fc17.x86_64 --> Processing Dependency: libxslt for package: corosync-1.99.9-1.fc17.x86_64 --> Processing Dependency: libvotequorum.so.5(COROSYNC_VOTEQUORUM_1.0)(64bit) for package: corosync-1.99.9-1.fc17.x86_64 --> Processing Dependency: libquorum.so.5(COROSYNC_QUORUM_1.0)(64bit) for package: corosync-1.99.9-1.fc17.x86_64 --> Processing Dependency: libcpg.so.4(COROSYNC_CPG_1.0)(64bit) for package: corosync-1.99.9-1.fc17.x86_64 --> Processing Dependency: libcmap.so.4(COROSYNC_CMAP_1.0)(64bit) for package: corosync-1.99.9-1.fc17.x86_64 --> Processing Dependency: libcfg.so.6(COROSYNC_CFG_0.82)(64bit) for package: corosync-1.99.9-1.fc17.x86_64 --> Processing Dependency: libvotequorum.so.5()(64bit) for package: corosync-1.99.9-1.fc17.x86_64 --> Processing Dependency: libtotem_pg.so.5()(64bit) for package: corosync-1.99.9-1.fc17.x86_64 --> Processing Dependency: libquorum.so.5()(64bit) for package: corosync-1.99.9-1.fc17.x86_64 --> Processing Dependency: libqb.so.0()(64bit) for package: corosync-1.99.9-1.fc17.x86_64 --> Processing Dependency: libnetsnmp.so.30()(64bit) for package: corosync-1.99.9-1.fc17.x86_64 --> Processing Dependency: libcpg.so.4()(64bit) for package: corosync-1.99.9-1.fc17.x86_64 --> Processing Dependency: libcorosync_common.so.4()(64bit) for package: corosync-1.99.9-1.fc17.x86_64 --> Processing Dependency: libcmap.so.4()(64bit) for package: corosync-1.99.9-1.fc17.x86_64 --> Processing Dependency: libcfg.so.6()(64bit) for package: corosync-1.99.9-1.fc17.x86_64 ---> Package pacemaker.x86_64 0:1.1.7-2.fc17 will be installed --> Processing Dependency: pacemaker-libs = 1.1.7-2.fc17 for package: pacemaker-1.1.7-2.fc17.x86_64 --> Processing Dependency: pacemaker-cluster-libs = 1.1.7-2.fc17 for package: pacemaker-1.1.7-2.fc17.x86_64 --> Processing Dependency: pacemaker-cli = 1.1.7-2.fc17 for package: pacemaker-1.1.7-2.fc17.x86_64 --> Processing Dependency: resource-agents for package: pacemaker-1.1.7-2.fc17.x86_64 --> Processing Dependency: perl(Getopt::Long) for package: pacemaker-1.1.7-2.fc17.x86_64 --> Processing Dependency: libgnutls.so.26(GNUTLS_1_4)(64bit) for package: pacemaker-1.1.7-2.fc17.x86_64 --> Processing Dependency: cluster-glue for package: pacemaker-1.1.7-2.fc17.x86_64 --> Processing Dependency: /usr/bin/perl for package: pacemaker-1.1.7-2.fc17.x86_64 --> Processing Dependency: libtransitioner.so.1()(64bit) for package: pacemaker-1.1.7-2.fc17.x86_64 --> Processing Dependency: libstonithd.so.1()(64bit) for package: pacemaker-1.1.7-2.fc17.x86_64 --> Processing Dependency: libstonith.so.1()(64bit) for package: pacemaker-1.1.7-2.fc17.x86_64 --> Processing Dependency: libplumb.so.2()(64bit) for package: pacemaker-1.1.7-2.fc17.x86_64 --> Processing Dependency: libpils.so.2()(64bit) for package: pacemaker-1.1.7-2.fc17.x86_64 --> Processing Dependency: libpengine.so.3()(64bit) for package: pacemaker-1.1.7-2.fc17.x86_64 --> Processing Dependency: libpe_status.so.3()(64bit) for package: pacemaker-1.1.7-2.fc17.x86_64 --> Processing Dependency: libpe_rules.so.2()(64bit) for 
package: pacemaker-1.1.7-2.fc17.x86_64 --> Processing Dependency: libltdl.so.7()(64bit) for package: pacemaker-1.1.7-2.fc17.x86_64 --> Processing Dependency: liblrm.so.2()(64bit) for package: pacemaker-1.1.7-2.fc17.x86_64 --> Processing Dependency: libgnutls.so.26()(64bit) for package: pacemaker-1.1.7-2.fc17.x86_64 --> Processing Dependency: libcrmcommon.so.2()(64bit) for package: pacemaker-1.1.7-2.fc17.x86_64 --> Processing Dependency: libcrmcluster.so.1()(64bit) for package: pacemaker-1.1.7-2.fc17.x86_64 --> Processing Dependency: libcib.so.1()(64bit) for package: pacemaker-1.1.7-2.fc17.x86_64 --> Running transaction check ---> Package cluster-glue.x86_64 0:1.0.6-9.fc17.1 will be installed --> Processing Dependency: perl-TimeDate for package: cluster-glue-1.0.6-9.fc17.1.x86_64 --> Processing Dependency: libOpenIPMIutils.so.0()(64bit) for package: cluster-glue-1.0.6-9.fc17.1.x86_64 --> Processing Dependency: libOpenIPMIposix.so.0()(64bit) for package: cluster-glue-1.0.6-9.fc17.1.x86_64 --> Processing Dependency: libOpenIPMI.so.0()(64bit) for package: cluster-glue-1.0.6-9.fc17.1.x86_64 ---> Package cluster-glue-libs.x86_64 0:1.0.6-9.fc17.1 will be installed ---> Package corosynclib.x86_64 0:1.99.9-1.fc17 will be installed --> Processing Dependency: librdmacm.so.1(RDMACM_1.0)(64bit) for package: corosynclib-1.99.9-1.fc17.x86_64 --> Processing Dependency: libibverbs.so.1(IBVERBS_1.1)(64bit) for package: corosynclib-1.99.9-1.fc17.x86_64 --> Processing Dependency: libibverbs.so.1(IBVERBS_1.0)(64bit) for package: corosynclib-1.99.9-1.fc17.x86_64 --> Processing Dependency: librdmacm.so.1()(64bit) for package: corosynclib-1.99.9-1.fc17.x86_64 --> Processing Dependency: libibverbs.so.1()(64bit) for package: corosynclib-1.99.9-1.fc17.x86_64 ---> Package gnutls.x86_64 0:2.12.17-1.fc17 will be installed --> Processing Dependency: libtasn1.so.3(LIBTASN1_0_3)(64bit) for package: gnutls-2.12.17-1.fc17.x86_64 --> Processing Dependency: libtasn1.so.3()(64bit) for package: gnutls-2.12.17-1.fc17.x86_64 --> Processing Dependency: libp11-kit.so.0()(64bit) for package: gnutls-2.12.17-1.fc17.x86_64 ---> Package libqb.x86_64 0:0.11.1-1.fc17 will be installed ---> Package libtool-ltdl.x86_64 0:2.4.2-3.fc17 will be installed ---> Package libxslt.x86_64 0:1.1.26-9.fc17 will be installed ---> Package net-snmp-libs.x86_64 1:5.7.1-4.fc17 will be installed ---> Package pacemaker-cli.x86_64 0:1.1.7-2.fc17 will be installed ---> Package pacemaker-cluster-libs.x86_64 0:1.1.7-2.fc17 will be installed ---> Package pacemaker-libs.x86_64 0:1.1.7-2.fc17 will be installed ---> Package perl.x86_64 4:5.14.2-211.fc17 will be installed --> Processing Dependency: perl-libs = 4:5.14.2-211.fc17 for package: 4:perl-5.14.2-211.fc17.x86_64 --> Processing Dependency: perl(threads::shared) >= 1.21 for package: 4:perl-5.14.2-211.fc17.x86_64 --> Processing Dependency: perl(Socket) >= 1.3 for package: 4:perl-5.14.2-211.fc17.x86_64 --> Processing Dependency: perl(Scalar::Util) >= 1.10 for package: 4:perl-5.14.2-211.fc17.x86_64 --> Processing Dependency: perl(File::Spec) >= 0.8 for package: 4:perl-5.14.2-211.fc17.x86_64 --> Processing Dependency: perl-macros for package: 4:perl-5.14.2-211.fc17.x86_64 --> Processing Dependency: perl-libs for package: 4:perl-5.14.2-211.fc17.x86_64 --> Processing Dependency: perl(threads::shared) for package: 4:perl-5.14.2-211.fc17.x86_64 --> Processing Dependency: perl(threads) for package: 4:perl-5.14.2-211.fc17.x86_64 --> Processing Dependency: perl(Socket) for package: 4:perl-5.14.2-211.fc17.x86_64 --> 
Processing Dependency: perl(Scalar::Util) for package: 4:perl-5.14.2-211.fc17.x86_64 --> Processing Dependency: perl(Pod::Simple) for package: 4:perl-5.14.2-211.fc17.x86_64 --> Processing Dependency: perl(Module::Pluggable) for package: 4:perl-5.14.2-211.fc17.x86_64 --> Processing Dependency: perl(List::Util) for package: 4:perl-5.14.2-211.fc17.x86_64 --> Processing Dependency: perl(File::Spec::Unix) for package: 4:perl-5.14.2-211.fc17.x86_64 --> Processing Dependency: perl(File::Spec::Functions) for package: 4:perl-5.14.2-211.fc17.x86_64 --> Processing Dependency: perl(File::Spec) for package: 4:perl-5.14.2-211.fc17.x86_64 --> Processing Dependency: perl(Cwd) for package: 4:perl-5.14.2-211.fc17.x86_64 --> Processing Dependency: perl(Carp) for package: 4:perl-5.14.2-211.fc17.x86_64 --> Processing Dependency: libperl.so()(64bit) for package: 4:perl-5.14.2-211.fc17.x86_64 ---> Package resource-agents.x86_64 0:3.9.2-2.fc17.1 will be installed --> Processing Dependency: /usr/sbin/rpc.nfsd for package: resource-agents-3.9.2-2.fc17.1.x86_64 --> Processing Dependency: /usr/sbin/rpc.mountd for package: resource-agents-3.9.2-2.fc17.1.x86_64 --> Processing Dependency: /usr/sbin/ethtool for package: resource-agents-3.9.2-2.fc17.1.x86_64 --> Processing Dependency: /sbin/rpc.statd for package: resource-agents-3.9.2-2.fc17.1.x86_64 --> Processing Dependency: /sbin/quotaon for package: resource-agents-3.9.2-2.fc17.1.x86_64 --> Processing Dependency: /sbin/quotacheck for package: resource-agents-3.9.2-2.fc17.1.x86_64 --> Processing Dependency: /sbin/mount.nfs4 for package: resource-agents-3.9.2-2.fc17.1.x86_64 --> Processing Dependency: /sbin/mount.nfs for package: resource-agents-3.9.2-2.fc17.1.x86_64 --> Processing Dependency: /sbin/mount.cifs for package: resource-agents-3.9.2-2.fc17.1.x86_64 --> Processing Dependency: /sbin/fsck.xfs for package: resource-agents-3.9.2-2.fc17.1.x86_64 --> Processing Dependency: libnet.so.1()(64bit) for package: resource-agents-3.9.2-2.fc17.1.x86_64 --> Running transaction check ---> Package OpenIPMI-libs.x86_64 0:2.0.18-13.fc17 will be installed ---> Package cifs-utils.x86_64 0:5.3-2.fc17 will be installed --> Processing Dependency: libtalloc.so.2(TALLOC_2.0.2)(64bit) for package: cifs-utils-5.3-2.fc17.x86_64 --> Processing Dependency: keyutils for package: cifs-utils-5.3-2.fc17.x86_64 --> Processing Dependency: libwbclient.so.0()(64bit) for package: cifs-utils-5.3-2.fc17.x86_64 --> Processing Dependency: libtalloc.so.2()(64bit) for package: cifs-utils-5.3-2.fc17.x86_64 ---> Package ethtool.x86_64 2:3.2-2.fc17 will be installed ---> Package libibverbs.x86_64 0:1.1.6-2.fc17 will be installed ---> Package libnet.x86_64 0:1.1.5-3.fc17 will be installed ---> Package librdmacm.x86_64 0:1.0.15-1.fc17 will be installed ---> Package libtasn1.x86_64 0:2.12-1.fc17 will be installed ---> Package nfs-utils.x86_64 1:1.2.5-12.fc17 will be installed --> Processing Dependency: rpcbind for package: 1:nfs-utils-1.2.5-12.fc17.x86_64 --> Processing Dependency: libtirpc for package: 1:nfs-utils-1.2.5-12.fc17.x86_64 --> Processing Dependency: libnfsidmap for package: 1:nfs-utils-1.2.5-12.fc17.x86_64 --> Processing Dependency: libgssglue.so.1(libgssapi_CITI_2)(64bit) for package: 1:nfs-utils-1.2.5-12.fc17.x86_64 --> Processing Dependency: libgssglue for package: 1:nfs-utils-1.2.5-12.fc17.x86_64 --> Processing Dependency: libevent for package: 1:nfs-utils-1.2.5-12.fc17.x86_64 --> Processing Dependency: libtirpc.so.1()(64bit) for package: 1:nfs-utils-1.2.5-12.fc17.x86_64 --> Processing 
Dependency: libnfsidmap.so.0()(64bit) for package: 1:nfs-utils-1.2.5-12.fc17.x86_64 --> Processing Dependency: libgssglue.so.1()(64bit) for package: 1:nfs-utils-1.2.5-12.fc17.x86_64 --> Processing Dependency: libevent-2.0.so.5()(64bit) for package: 1:nfs-utils-1.2.5-12.fc17.x86_64 ---> Package p11-kit.x86_64 0:0.12-1.fc17 will be installed ---> Package perl-Carp.noarch 0:1.22-2.fc17 will be installed ---> Package perl-Module-Pluggable.noarch 1:3.90-211.fc17 will be installed ---> Package perl-PathTools.x86_64 0:3.33-211.fc17 will be installed ---> Package perl-Pod-Simple.noarch 1:3.16-211.fc17 will be installed --> Processing Dependency: perl(Pod::Escapes) >= 1.04 for package: 1:perl-Pod-Simple-3.16-211.fc17.noarch ---> Package perl-Scalar-List-Utils.x86_64 0:1.25-1.fc17 will be installed ---> Package perl-Socket.x86_64 0:2.001-1.fc17 will be installed ---> Package perl-TimeDate.noarch 1:1.20-6.fc17 will be installed ---> Package perl-libs.x86_64 4:5.14.2-211.fc17 will be installed ---> Package perl-macros.x86_64 4:5.14.2-211.fc17 will be installed ---> Package perl-threads.x86_64 0:1.86-2.fc17 will be installed ---> Package perl-threads-shared.x86_64 0:1.40-2.fc17 will be installed ---> Package quota.x86_64 1:4.00-3.fc17 will be installed --> Processing Dependency: quota-nls = 1:4.00-3.fc17 for package: 1:quota-4.00-3.fc17.x86_64 --> Processing Dependency: tcp_wrappers for package: 1:quota-4.00-3.fc17.x86_64 ---> Package xfsprogs.x86_64 0:3.1.8-1.fc17 will be installed --> Running transaction check ---> Package keyutils.x86_64 0:1.5.5-2.fc17 will be installed ---> Package libevent.x86_64 0:2.0.14-2.fc17 will be installed ---> Package libgssglue.x86_64 0:0.3-1.fc17 will be installed ---> Package libnfsidmap.x86_64 0:0.25-1.fc17 will be installed ---> Package libtalloc.x86_64 0:2.0.7-4.fc17 will be installed ---> Package libtirpc.x86_64 0:0.2.2-2.1.fc17 will be installed ---> Package libwbclient.x86_64 1:3.6.3-81.fc17.1 will be installed ---> Package perl-Pod-Escapes.noarch 1:1.04-211.fc17 will be installed ---> Package quota-nls.noarch 1:4.00-3.fc17 will be installed ---> Package rpcbind.x86_64 0:0.2.0-16.fc17 will be installed ---> Package tcp_wrappers.x86_64 0:7.6-69.fc17 will be installed --> Finished Dependency Resolution Dependencies Resolved ===================================================================================== Package Arch Version Repository Size ===================================================================================== Installing: corosync x86_64 1.99.9-1.fc17 updates-testing 159 k pacemaker x86_64 1.1.7-2.fc17 updates-testing 362 k Installing for dependencies: OpenIPMI-libs x86_64 2.0.18-13.fc17 fedora 466 k cifs-utils x86_64 5.3-2.fc17 updates-testing 66 k cluster-glue x86_64 1.0.6-9.fc17.1 fedora 229 k cluster-glue-libs x86_64 1.0.6-9.fc17.1 fedora 121 k corosynclib x86_64 1.99.9-1.fc17 updates-testing 96 k ethtool x86_64 2:3.2-2.fc17 fedora 94 k gnutls x86_64 2.12.17-1.fc17 fedora 385 k keyutils x86_64 1.5.5-2.fc17 fedora 49 k libevent x86_64 2.0.14-2.fc17 fedora 160 k libgssglue x86_64 0.3-1.fc17 fedora 24 k libibverbs x86_64 1.1.6-2.fc17 fedora 44 k libnet x86_64 1.1.5-3.fc17 fedora 54 k libnfsidmap x86_64 0.25-1.fc17 fedora 34 k libqb x86_64 0.11.1-1.fc17 updates-testing 68 k librdmacm x86_64 1.0.15-1.fc17 fedora 27 k libtalloc x86_64 2.0.7-4.fc17 fedora 22 k libtasn1 x86_64 2.12-1.fc17 updates-testing 319 k libtirpc x86_64 0.2.2-2.1.fc17 fedora 78 k libtool-ltdl x86_64 2.4.2-3.fc17 fedora 45 k libwbclient x86_64 1:3.6.3-81.fc17.1 updates-testing 68 
k libxslt x86_64 1.1.26-9.fc17 fedora 416 k net-snmp-libs x86_64 1:5.7.1-4.fc17 fedora 713 k nfs-utils x86_64 1:1.2.5-12.fc17 fedora 311 k p11-kit x86_64 0.12-1.fc17 updates-testing 36 k pacemaker-cli x86_64 1.1.7-2.fc17 updates-testing 368 k pacemaker-cluster-libs x86_64 1.1.7-2.fc17 updates-testing 77 k pacemaker-libs x86_64 1.1.7-2.fc17 updates-testing 322 k perl x86_64 4:5.14.2-211.fc17 fedora 10 M perl-Carp noarch 1.22-2.fc17 fedora 17 k perl-Module-Pluggable noarch 1:3.90-211.fc17 fedora 47 k perl-PathTools x86_64 3.33-211.fc17 fedora 105 k perl-Pod-Escapes noarch 1:1.04-211.fc17 fedora 40 k perl-Pod-Simple noarch 1:3.16-211.fc17 fedora 223 k perl-Scalar-List-Utils x86_64 1.25-1.fc17 updates-testing 33 k perl-Socket x86_64 2.001-1.fc17 updates-testing 44 k perl-TimeDate noarch 1:1.20-6.fc17 fedora 43 k perl-libs x86_64 4:5.14.2-211.fc17 fedora 628 k perl-macros x86_64 4:5.14.2-211.fc17 fedora 32 k perl-threads x86_64 1.86-2.fc17 fedora 47 k perl-threads-shared x86_64 1.40-2.fc17 fedora 36 k quota x86_64 1:4.00-3.fc17 fedora 160 k quota-nls noarch 1:4.00-3.fc17 fedora 74 k resource-agents x86_64 3.9.2-2.fc17.1 fedora 466 k rpcbind x86_64 0.2.0-16.fc17 fedora 52 k tcp_wrappers x86_64 7.6-69.fc17 fedora 72 k xfsprogs x86_64 3.1.8-1.fc17 updates-testing 715 k Transaction Summary ===================================================================================== Install 2 Packages (+46 Dependent packages) Total download size: 18 M Installed size: 59 M Downloading Packages: (1/48): OpenIPMI-libs-2.0.18-13.fc17.x86_64.rpm | 466 kB 00:00 warning: rpmts_HdrFromFdno: Header V3 RSA/SHA256 Signature, key ID 1aca3465: NOKEY Public key for OpenIPMI-libs-2.0.18-13.fc17.x86_64.rpm is not installed (2/48): cifs-utils-5.3-2.fc17.x86_64.rpm | 66 kB 00:01 Public key for cifs-utils-5.3-2.fc17.x86_64.rpm is not installed (3/48): cluster-glue-1.0.6-9.fc17.1.x86_64.rpm | 229 kB 00:00 (4/48): cluster-glue-libs-1.0.6-9.fc17.1.x86_64.rpm | 121 kB 00:00 (5/48): corosync-1.99.9-1.fc17.x86_64.rpm | 159 kB 00:01 (6/48): corosynclib-1.99.9-1.fc17.x86_64.rpm | 96 kB 00:00 (7/48): ethtool-3.2-2.fc17.x86_64.rpm | 94 kB 00:00 (8/48): gnutls-2.12.17-1.fc17.x86_64.rpm | 385 kB 00:00 (9/48): keyutils-1.5.5-2.fc17.x86_64.rpm | 49 kB 00:00 (10/48): libevent-2.0.14-2.fc17.x86_64.rpm | 160 kB 00:00 (11/48): libgssglue-0.3-1.fc17.x86_64.rpm | 24 kB 00:00 (12/48): libibverbs-1.1.6-2.fc17.x86_64.rpm | 44 kB 00:00 (13/48): libnet-1.1.5-3.fc17.x86_64.rpm | 54 kB 00:00 (14/48): libnfsidmap-0.25-1.fc17.x86_64.rpm | 34 kB 00:00 (15/48): libqb-0.11.1-1.fc17.x86_64.rpm | 68 kB 00:01 (16/48): librdmacm-1.0.15-1.fc17.x86_64.rpm | 27 kB 00:00 (17/48): libtalloc-2.0.7-4.fc17.x86_64.rpm | 22 kB 00:00 (18/48): libtasn1-2.12-1.fc17.x86_64.rpm | 319 kB 00:02 (19/48): libtirpc-0.2.2-2.1.fc17.x86_64.rpm | 78 kB 00:00 (20/48): libtool-ltdl-2.4.2-3.fc17.x86_64.rpm | 45 kB 00:00 (21/48): libwbclient-3.6.3-81.fc17.1.x86_64.rpm | 68 kB 00:00 (22/48): libxslt-1.1.26-9.fc17.x86_64.rpm | 416 kB 00:00 (23/48): net-snmp-libs-5.7.1-4.fc17.x86_64.rpm | 713 kB 00:01 (24/48): nfs-utils-1.2.5-12.fc17.x86_64.rpm | 311 kB 00:00 (25/48): p11-kit-0.12-1.fc17.x86_64.rpm | 36 kB 00:01 (26/48): pacemaker-1.1.7-2.fc17.x86_64.rpm | 362 kB 00:02 (27/48): pacemaker-cli-1.1.7-2.fc17.x86_64.rpm | 368 kB 00:02 (28/48): pacemaker-cluster-libs-1.1.7-2.fc17.x86_64.rpm | 77 kB 00:00 (29/48): pacemaker-libs-1.1.7-2.fc17.x86_64.rpm | 322 kB 00:01 (30/48): perl-5.14.2-211.fc17.x86_64.rpm | 10 MB 00:15 (31/48): perl-Carp-1.22-2.fc17.noarch.rpm | 17 kB 00:00 (32/48): 
perl-Module-Pluggable-3.90-211.fc17.noarch.rpm | 47 kB 00:00 (33/48): perl-PathTools-3.33-211.fc17.x86_64.rpm | 105 kB 00:00 (34/48): perl-Pod-Escapes-1.04-211.fc17.noarch.rpm | 40 kB 00:00 (35/48): perl-Pod-Simple-3.16-211.fc17.noarch.rpm | 223 kB 00:00 (36/48): perl-Scalar-List-Utils-1.25-1.fc17.x86_64.rpm | 33 kB 00:01 (37/48): perl-Socket-2.001-1.fc17.x86_64.rpm | 44 kB 00:00 (38/48): perl-TimeDate-1.20-6.fc17.noarch.rpm | 43 kB 00:00 (39/48): perl-libs-5.14.2-211.fc17.x86_64.rpm | 628 kB 00:00 (40/48): perl-macros-5.14.2-211.fc17.x86_64.rpm | 32 kB 00:00 (41/48): perl-threads-1.86-2.fc17.x86_64.rpm | 47 kB 00:00 (42/48): perl-threads-shared-1.40-2.fc17.x86_64.rpm | 36 kB 00:00 (43/48): quota-4.00-3.fc17.x86_64.rpm | 160 kB 00:00 (44/48): quota-nls-4.00-3.fc17.noarch.rpm | 74 kB 00:00 (45/48): resource-agents-3.9.2-2.fc17.1.x86_64.rpm | 466 kB 00:00 (46/48): rpcbind-0.2.0-16.fc17.x86_64.rpm | 52 kB 00:00 (47/48): tcp_wrappers-7.6-69.fc17.x86_64.rpm | 72 kB 00:00 (48/48): xfsprogs-3.1.8-1.fc17.x86_64.rpm | 715 kB 00:03 ---------------------------------------------------------------------------------------- Total 333 kB/s | 18 MB 00:55 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-x86_64 Importing GPG key 0x1ACA3465: Userid : "Fedora (17) " Fingerprint: cac4 3fb7 74a4 a673 d81c 5de7 50e9 4c99 1aca 3465 Package : fedora-release-17-0.8.noarch (@anaconda-0) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-x86_64 Running Transaction Check Running Transaction Test Transaction Test Succeeded Running Transaction Installing : libqb-0.11.1-1.fc17.x86_64 1/48 Installing : libtool-ltdl-2.4.2-3.fc17.x86_64 2/48 Installing : cluster-glue-libs-1.0.6-9.fc17.1.x86_64 3/48 Installing : libxslt-1.1.26-9.fc17.x86_64 4/48 Installing : 1:perl-Pod-Escapes-1.04-211.fc17.noarch 5/48 Installing : perl-threads-1.86-2.fc17.x86_64 6/48 Installing : 4:perl-macros-5.14.2-211.fc17.x86_64 7/48 Installing : 1:perl-Pod-Simple-3.16-211.fc17.noarch 8/48 Installing : perl-Socket-2.001-1.fc17.x86_64 9/48 Installing : perl-Carp-1.22-2.fc17.noarch 10/48 Installing : 4:perl-libs-5.14.2-211.fc17.x86_64 11/48 Installing : perl-threads-shared-1.40-2.fc17.x86_64 12/48 Installing : perl-Scalar-List-Utils-1.25-1.fc17.x86_64 13/48 Installing : 1:perl-Module-Pluggable-3.90-211.fc17.noarch 14/48 Installing : perl-PathTools-3.33-211.fc17.x86_64 15/48 Installing : 4:perl-5.14.2-211.fc17.x86_64 16/48 Installing : libibverbs-1.1.6-2.fc17.x86_64 17/48 Installing : keyutils-1.5.5-2.fc17.x86_64 18/48 Installing : libgssglue-0.3-1.fc17.x86_64 19/48 Installing : libtirpc-0.2.2-2.1.fc17.x86_64 20/48 Installing : 1:net-snmp-libs-5.7.1-4.fc17.x86_64 21/48 Installing : rpcbind-0.2.0-16.fc17.x86_64 22/48 Installing : librdmacm-1.0.15-1.fc17.x86_64 23/48 Installing : corosynclib-1.99.9-1.fc17.x86_64 24/48 Installing : corosync-1.99.9-1.fc17.x86_64 25/48 error reading information on service corosync: No such file or directory Installing : 1:perl-TimeDate-1.20-6.fc17.noarch 26/48 Installing : 1:quota-nls-4.00-3.fc17.noarch 27/48 Installing : tcp_wrappers-7.6-69.fc17.x86_64 28/48 Installing : 1:quota-4.00-3.fc17.x86_64 29/48 Installing : libnfsidmap-0.25-1.fc17.x86_64 30/48 Installing : 1:libwbclient-3.6.3-81.fc17.1.x86_64 31/48 Installing : libnet-1.1.5-3.fc17.x86_64 32/48 Installing : 2:ethtool-3.2-2.fc17.x86_64 33/48 Installing : libevent-2.0.14-2.fc17.x86_64 34/48 Installing : 1:nfs-utils-1.2.5-12.fc17.x86_64 35/48 Installing : libtalloc-2.0.7-4.fc17.x86_64 36/48 Installing : cifs-utils-5.3-2.fc17.x86_64 37/48 Installing : 
libtasn1-2.12-1.fc17.x86_64 38/48 Installing : OpenIPMI-libs-2.0.18-13.fc17.x86_64 39/48 Installing : cluster-glue-1.0.6-9.fc17.1.x86_64 40/48 Installing : p11-kit-0.12-1.fc17.x86_64 41/48 Installing : gnutls-2.12.17-1.fc17.x86_64 42/48 Installing : pacemaker-libs-1.1.7-2.fc17.x86_64 43/48 Installing : pacemaker-cluster-libs-1.1.7-2.fc17.x86_64 44/48 Installing : pacemaker-cli-1.1.7-2.fc17.x86_64 45/48 Installing : xfsprogs-3.1.8-1.fc17.x86_64 46/48 Installing : resource-agents-3.9.2-2.fc17.1.x86_64 47/48 Installing : pacemaker-1.1.7-2.fc17.x86_64 48/48 Verifying : xfsprogs-3.1.8-1.fc17.x86_64 1/48 Verifying : 1:net-snmp-libs-5.7.1-4.fc17.x86_64 2/48 Verifying : corosync-1.99.9-1.fc17.x86_64 3/48 Verifying : cluster-glue-1.0.6-9.fc17.1.x86_64 4/48 Verifying : perl-PathTools-3.33-211.fc17.x86_64 5/48 Verifying : p11-kit-0.12-1.fc17.x86_64 6/48 Verifying : 1:perl-Pod-Simple-3.16-211.fc17.noarch 7/48 Verifying : OpenIPMI-libs-2.0.18-13.fc17.x86_64 8/48 Verifying : libtasn1-2.12-1.fc17.x86_64 9/48 Verifying : perl-threads-1.86-2.fc17.x86_64 10/48 Verifying : 1:perl-Pod-Escapes-1.04-211.fc17.noarch 11/48 Verifying : pacemaker-1.1.7-2.fc17.x86_64 12/48 Verifying : 4:perl-5.14.2-211.fc17.x86_64 13/48 Verifying : gnutls-2.12.17-1.fc17.x86_64 14/48 Verifying : perl-threads-shared-1.40-2.fc17.x86_64 15/48 Verifying : 4:perl-macros-5.14.2-211.fc17.x86_64 16/48 Verifying : 1:perl-Module-Pluggable-3.90-211.fc17.noarch 17/48 Verifying : 1:nfs-utils-1.2.5-12.fc17.x86_64 18/48 Verifying : cluster-glue-libs-1.0.6-9.fc17.1.x86_64 19/48 Verifying : pacemaker-libs-1.1.7-2.fc17.x86_64 20/48 Verifying : libtalloc-2.0.7-4.fc17.x86_64 21/48 Verifying : libevent-2.0.14-2.fc17.x86_64 22/48 Verifying : perl-Socket-2.001-1.fc17.x86_64 23/48 Verifying : libgssglue-0.3-1.fc17.x86_64 24/48 Verifying : perl-Carp-1.22-2.fc17.noarch 25/48 Verifying : libtirpc-0.2.2-2.1.fc17.x86_64 26/48 Verifying : 2:ethtool-3.2-2.fc17.x86_64 27/48 Verifying : 4:perl-libs-5.14.2-211.fc17.x86_64 28/48 Verifying : libxslt-1.1.26-9.fc17.x86_64 29/48 Verifying : rpcbind-0.2.0-16.fc17.x86_64 30/48 Verifying : librdmacm-1.0.15-1.fc17.x86_64 31/48 Verifying : resource-agents-3.9.2-2.fc17.1.x86_64 32/48 Verifying : 1:quota-4.00-3.fc17.x86_64 33/48 Verifying : 1:perl-TimeDate-1.20-6.fc17.noarch 34/48 Verifying : perl-Scalar-List-Utils-1.25-1.fc17.x86_64 35/48 Verifying : libtool-ltdl-2.4.2-3.fc17.x86_64 36/48 Verifying : pacemaker-cluster-libs-1.1.7-2.fc17.x86_64 37/48 Verifying : cifs-utils-5.3-2.fc17.x86_64 38/48 Verifying : libnet-1.1.5-3.fc17.x86_64 39/48 Verifying : corosynclib-1.99.9-1.fc17.x86_64 40/48 Verifying : libqb-0.11.1-1.fc17.x86_64 41/48 Verifying : 1:libwbclient-3.6.3-81.fc17.1.x86_64 42/48 Verifying : libnfsidmap-0.25-1.fc17.x86_64 43/48 Verifying : tcp_wrappers-7.6-69.fc17.x86_64 44/48 Verifying : keyutils-1.5.5-2.fc17.x86_64 45/48 Verifying : libibverbs-1.1.6-2.fc17.x86_64 46/48 Verifying : 1:quota-nls-4.00-3.fc17.noarch 47/48 Verifying : pacemaker-cli-1.1.7-2.fc17.x86_64 48/48 Installed: corosync.x86_64 0:1.99.9-1.fc17 pacemaker.x86_64 0:1.1.7-2.fc17 Dependency Installed: OpenIPMI-libs.x86_64 0:2.0.18-13.fc17 cifs-utils.x86_64 0:5.3-2.fc17 cluster-glue.x86_64 0:1.0.6-9.fc17.1 cluster-glue-libs.x86_64 0:1.0.6-9.fc17.1 corosynclib.x86_64 0:1.99.9-1.fc17 ethtool.x86_64 2:3.2-2.fc17 gnutls.x86_64 0:2.12.17-1.fc17 keyutils.x86_64 0:1.5.5-2.fc17 libevent.x86_64 0:2.0.14-2.fc17 libgssglue.x86_64 0:0.3-1.fc17 libibverbs.x86_64 0:1.1.6-2.fc17 libnet.x86_64 0:1.1.5-3.fc17 libnfsidmap.x86_64 0:0.25-1.fc17 libqb.x86_64 0:0.11.1-1.fc17 
librdmacm.x86_64 0:1.0.15-1.fc17        libtalloc.x86_64 0:2.0.7-4.fc17
libtasn1.x86_64 0:2.12-1.fc17           libtirpc.x86_64 0:0.2.2-2.1.fc17
libtool-ltdl.x86_64 0:2.4.2-3.fc17      libwbclient.x86_64 1:3.6.3-81.fc17.1
libxslt.x86_64 0:1.1.26-9.fc17          net-snmp-libs.x86_64 1:5.7.1-4.fc17
nfs-utils.x86_64 1:1.2.5-12.fc17        p11-kit.x86_64 0:0.12-1.fc17
pacemaker-cli.x86_64 0:1.1.7-2.fc17     pacemaker-cluster-libs.x86_64 0:1.1.7-2.fc17
pacemaker-libs.x86_64 0:1.1.7-2.fc17    perl.x86_64 4:5.14.2-211.fc17
perl-Carp.noarch 0:1.22-2.fc17          perl-Module-Pluggable.noarch 1:3.90-211.fc17
perl-PathTools.x86_64 0:3.33-211.fc17   perl-Pod-Escapes.noarch 1:1.04-211.fc17
perl-Pod-Simple.noarch 1:3.16-211.fc17  perl-Scalar-List-Utils.x86_64 0:1.25-1.fc17
perl-Socket.x86_64 0:2.001-1.fc17       perl-TimeDate.noarch 1:1.20-6.fc17
perl-libs.x86_64 4:5.14.2-211.fc17      perl-macros.x86_64 4:5.14.2-211.fc17
perl-threads.x86_64 0:1.86-2.fc17       perl-threads-shared.x86_64 0:1.40-2.fc17
quota.x86_64 1:4.00-3.fc17              quota-nls.noarch 1:4.00-3.fc17
resource-agents.x86_64 0:3.9.2-2.fc17.1 rpcbind.x86_64 0:0.2.0-16.fc17
tcp_wrappers.x86_64 0:7.6-69.fc17       xfsprogs.x86_64 0:3.1.8-1.fc17

Complete!
[root@pcmk-1 ~]#
.....

Now install the cluster software on the second node.
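Because we set up SSH keys earlier, this can be done without leaving
pcmk-1. For example, a quick sketch using the same package list:

[source,C]
----
# ssh pcmk-2 -- yum install -y pacemaker corosync
----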
ifdef::pcs[]
=== Install the Cluster Management Software ===

The pcs command-line tool, coupled with the pcs daemon, creates a
cluster management system capable of managing all aspects of the
cluster stack across all nodes from a single location.

[source,C]
----
[ALL] # yum install -y pcs
----

Make sure to install the pcs packages on both nodes.
endif::[]

== Setup ==

ifdef::pcs[]
=== Enable pcs Daemon ===

Before the cluster can be configured, the pcs daemon must be started
and enabled to start at boot on each node. This daemon works with the
pcs command-line tool to keep the corosync configuration in sync across
all the nodes in the cluster.

Start and enable the daemon by issuing the following commands on each node.

[source,C]
----
# systemctl start pcsd.service
# systemctl enable pcsd.service
----

Now we need a way for `pcs` to talk to itself on other nodes in the
cluster. This is necessary in order to perform tasks such as syncing
the corosync configuration, or starting and stopping the cluster on
remote nodes.

While `pcs` can be used locally without setting up these user accounts,
this tutorial will make use of these remote access commands, so we will
set a password for the 'hacluster' user. It's probably best if the
password is consistent across all the nodes. As 'root', run:

[source,C]
----
# passwd hacluster
password:
----

Alternatively, to script this process or set the password on a machine
other than the one you're logged into, you can use the `--stdin` option
for `passwd`:

[source,C]
----
# ssh pcmk-2 -- 'echo redhat1 | passwd --stdin hacluster'
----
endif::[]

ifdef::crmsh[]
=== Preparation - Multicast ===

Choose a port number and http://en.wikipedia.org/wiki/Multicast[multi-cast] address.
http://en.wikipedia.org/wiki/Multicast_address[]

Be sure that the values you choose do not conflict with any existing
clusters you might have. For this document, I have chosen port '4000'
and used '239.255.1.1' as the multi-cast address.
endif::[]

=== Notes on Multicast Address Assignment ===

There are several subtle points worth considering when choosing or
assigning multicast addresses for corosync.
footnote:[This information is borrowed from the now-defunct 29west documentation: http://web.archive.org/web/20101211210054/http://29west.com/docs/THPM/multicast-address-assignment.html]

. Avoid '224.0.0.x'
+
Traffic to addresses of the form '224.0.0.x' is often flooded to all
switch ports. This address range is reserved for link-local uses. Many
routing protocols assume that all traffic within this range will be
received by all routers on the network. Hence switches (at least all
Cisco ones) flood traffic within this range. The flooding behavior
overrides the normal selective forwarding behavior of a
multicast-aware switch (e.g. IGMP snooping, CGMP, etc.).

. Watch for '32:1' overlap
+
32 non-contiguous IP multicast addresses are mapped onto each Ethernet
multicast address. A receiver that joins a single IP multicast group
implicitly joins 31 others due to this overlap. Of course, filtering
in the operating system discards undesired multicast traffic from
applications, but NIC bandwidth and CPU resources are nonetheless
consumed discarding it. The overlap occurs in the 5 high-order bits,
so it's best to use the 23 low-order bits to make distinct multicast
streams unique. For example, IP multicast addresses in the range
'239.0.0.0' to '239.127.255.255' all map to unique Ethernet multicast
addresses. However, IP multicast address '239.128.0.0' maps to the
same Ethernet multicast address as '239.0.0.0', '239.128.0.1' maps to
the same Ethernet multicast address as '239.0.0.1', etc.

. Avoid 'x.0.0.y' and 'x.128.0.y'
+
Combining the above two considerations, it's best to avoid using IP
multicast addresses of the form 'x.0.0.y' and 'x.128.0.y' since they
all map onto the range of Ethernet multicast addresses that are
flooded to all switch ports.

. Watch for address assignment conflicts
+
http://www.iana.org/[IANA] administers
http://www.iana.org/assignments/multicast-addresses[Internet multicast addresses].
Potential conflicts with Internet multicast address assignments can be
avoided by using http://www.ietf.org/rfc/rfc3180.txt[GLOP addressing]
(http://en.wikipedia.org/wiki/Autonomous_system_%28Internet%29[AS]
required) or http://www.ietf.org/rfc/rfc2365.txt[administratively
scoped] addresses. Such addresses can be safely used on a network
connected to the Internet without fear of conflict with multicast
sources originating on the Internet. Administratively scoped addresses
are roughly analogous to the unicast address space for
http://www.ietf.org/rfc/rfc1918.txt[private internets]. Site-local
multicast addresses are of the form '239.255.x.y', but can grow down
to '239.252.x.y' if needed. Organization-local multicast addresses are
of the form '239.192-251.x.y', but can grow down to '239.x.y.z' if
needed.

For a more detailed treatment (57 pages!), see
http://www.cisco.com/en/US/tech/tk828/technologies_white_paper09186a00802d4643.shtml[Cisco's
Guidelines for Enterprise IP Multicast Address Allocation] paper.

=== Configuring Corosync ===

ifdef::pcs[]
In the past, this is the point in the tutorial where we would have
explained how to configure and propagate corosync's
'/etc/corosync.conf' file by hand. Using pcs together with the pcs
daemon greatly simplifies the process: a single command generates
'corosync.conf' across all the nodes in the cluster. The only thing
required to achieve this is to authenticate as the pcs user
'hacluster' on one of the nodes in the cluster, and then issue the
'pcs cluster setup' command with a list of all the node names in the
cluster.

[source,C]
----
# pcs cluster auth pcmk-1 pcmk-2
Username: hacluster
Password:
pcmk-1: Authorized
pcmk-2: Authorized

# pcs cluster setup mycluster pcmk-1 pcmk-2
pcmk-1: Succeeded
pcmk-2: Succeeded
----

That's it. Corosync is configured across the cluster.

If you received an authorization error for either of those commands,
make sure you set up the 'hacluster' user account and password on
every node in the cluster with the same password.
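If you are curious what was generated, the result can be inspected on
any node; it lives at the same path used throughout this guide:

[source,C]
----
# cat /etc/corosync/corosync.conf
----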
endif::[]

ifdef::crmsh[]
[IMPORTANT]
===========
The instructions below apply only to a machine with a single NIC. If you
have a more complicated setup, you should edit the configuration
manually.
===========

[source,C]
----
# export ais_port=4000
# export ais_mcast=239.255.1.1
----

Next we automatically determine the host's address. By not using the
full address, we make the configuration suitable to be copied to other
nodes.

[source,Bash]
----
export ais_addr=`ip addr | grep "inet " | tail -n 1 | awk '{print $4}' | sed s/255/0/g`
----

Display and verify the configuration options

[source,Bash]
----
# env | grep ais_
ais_mcast=239.255.1.1
ais_port=4000
ais_addr=192.168.122.0
----

Once you're happy with the chosen values, update the Corosync
configuration

[source,C]
----
# cp /etc/corosync/corosync.conf.example /etc/corosync/corosync.conf
# sed -i.bak "s/.*mcastaddr:.*/mcastaddr:\ $ais_mcast/g" /etc/corosync/corosync.conf
# sed -i.bak "s/.*mcastport:.*/mcastport:\ $ais_port/g" /etc/corosync/corosync.conf
# sed -i.bak "s/.*\tbindnetaddr:.*/bindnetaddr:\ $ais_addr/g" /etc/corosync/corosync.conf
----

Lastly, you'll need to enable quorum

[source,Bash]
-----
cat << END >> /etc/corosync/corosync.conf
quorum {
    provider: corosync_votequorum
    expected_votes: 2
}
END
-----
endif::[]

The final '/etc/corosync/corosync.conf' configuration on each node
should look something like the sample in Appendix B, Sample Corosync
Configuration.

[IMPORTANT]
===========
Pacemaker used to obtain membership and quorum from a custom Corosync
plugin. This plugin also had the capability to start Pacemaker
automatically when Corosync was started. Neither behavior is possible
with Corosync 2.0 and beyond, as support for plugins was removed.

Instead, Pacemaker must be started as a separate service. Also, since
Pacemaker made use of the plugin for message routing, a node using the
plugin (Corosync prior to 2.0) cannot talk to one that isn't (Corosync
2.0+). Rolling upgrades between these versions are therefore not
possible, and an alternate strategy
footnote:[http://www.clusterlabs.org/doc/en-US/Pacemaker/1.1/html/Pacemaker_Explained/ap-upgrade.html]
must be used.
===========

ifdef::crmsh[]
=== Propagate the Configuration ===

Now we need to copy the changes so far to the other node:

[source,C]
----
# for f in /etc/corosync/corosync.conf /etc/hosts; do scp $f pcmk-2:$f ; done
corosync.conf                            100% 1528     1.5KB/s   00:00
hosts                                    100%  281     0.3KB/s   00:00
#
----
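To confirm that the copies really match, comparing checksums is a
quick sanity check; for example:

[source,C]
----
# md5sum /etc/corosync/corosync.conf
# ssh pcmk-2 -- md5sum /etc/corosync/corosync.conf
----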
endif::[]

diff --git a/doc/Clusters_from_Scratch/en-US/Ch-Intro.txt b/doc/Clusters_from_Scratch/en-US/Ch-Intro.txt
index eaa69e6a17..b665aadd97 100644
--- a/doc/Clusters_from_Scratch/en-US/Ch-Intro.txt
+++ b/doc/Clusters_from_Scratch/en-US/Ch-Intro.txt
@@ -1,155 +1,155 @@
= Read-Me-First =

== The Scope of this Document ==

Computer clusters can be used to provide highly available services or
resources. The redundancy of multiple machines is used to guard
against failures of many types.

This document will walk through the installation and setup of simple
clusters using the Fedora distribution, version 17.

The clusters described here will use Pacemaker and Corosync to provide
resource management and messaging. Required packages and modifications
to their configuration files are described, along with the use of the
Pacemaker command line tool for generating the XML used for cluster
control.

Pacemaker is a central component and provides the resource management
required in these systems. This management includes detecting and
recovering from the failure of various nodes, resources and services
under its control.

When more in-depth information is required, and for real-world usage,
please refer to the http://www.clusterlabs.org/doc/[Pacemaker Explained]
manual.

== What Is Pacemaker? ==

Pacemaker is a cluster resource manager. It achieves maximum
availability for your cluster services (aka. resources) by detecting
and recovering from node- and resource-level failures, making use of
the messaging and membership capabilities provided by your preferred
cluster infrastructure (either Corosync or Heartbeat).

Pacemaker's key features include:

* Detection and recovery of node and service-level failures
* Storage agnostic, no requirement for shared storage
* Resource agnostic, anything that can be scripted can be clustered
* Supports STONITH for ensuring data integrity
* Supports large and small clusters
* Supports both quorate and resource-driven clusters
* Supports practically any redundancy configuration
* Automatically replicated configuration that can be updated from any node
* Ability to specify cluster-wide service ordering, colocation and anti-colocation
* Support for advanced service types
** Clones: for services which need to be active on multiple nodes
** Multi-state: for services with multiple modes (eg. master/slave, primary/secondary)
* Unified, scriptable, cluster management tools.

== Pacemaker Architecture ==

At the highest level, the cluster is made up of three pieces:

* Non-cluster aware components (illustrated in green). These pieces
  include the resources themselves, scripts that start, stop and
  monitor them, and also a local daemon that masks the differences
  between the different standards these scripts implement.

* Resource management. Pacemaker provides the brain (illustrated in
  blue) that processes and reacts to events regarding the cluster.
  These events include nodes joining or leaving the cluster; resource
  events caused by failures, maintenance or scheduled activities; and
  other administrative actions. Pacemaker will compute the ideal state
  of the cluster and plot a path to achieve it after any of these
  events. This may include moving resources, stopping nodes and even
  forcing them offline with remote power switches.

* Low-level infrastructure. Corosync provides reliable messaging,
  membership and quorum information about the cluster (illustrated in
  red).

.Conceptual Stack Overview
-image::images/pcmk-overview.png["Conceptual overview of the cluster stack",align="center"]
+image::images/pcmk-overview.png["Conceptual overview of the cluster stack",align="center",scaledwidth="65%"]

When combined with Corosync, Pacemaker also supports popular open
source cluster filesystems.
footnote:[Even though Pacemaker also supports Heartbeat, the
filesystems need to use the stack for messaging and membership and
Corosync seems to be what they're standardizing on. Technically it
would be possible for them to support Heartbeat as well, however there
seems little interest in this.]
Due to recent standardization within the cluster filesystem community,
cluster filesystems make use of a common distributed lock manager,
which uses Corosync for its messaging capabilities and Pacemaker for
its membership (which nodes are up or down) and fencing services.

.The Pacemaker Stack
-image::images/pcmk-stack.png["The Pacemaker StackThe Pacemaker stack when running on Corosync",align="center"]
+image::images/pcmk-stack.png["The Pacemaker stack when running on Corosync",align="center",scaledwidth="65%"]

=== Internal Components ===

Pacemaker itself is composed of four key components (illustrated below
in the same color scheme as the previous diagram):

* CIB (aka. Cluster Information Base)
* CRMd (aka. Cluster Resource Management daemon)
* PEngine (aka. PE or Policy Engine)
* STONITHd

.Internal Components
-image::images/pcmk-internals.png["Subsystems of a Pacemaker cluster running on Corosync",align="center"]
+image::images/pcmk-internals.png["Subsystems of a Pacemaker cluster running on Corosync",align="center",scaledwidth="65%"]

The CIB uses XML to represent both the cluster's configuration and the
current state of all resources in the cluster. The contents of the CIB
are automatically kept in sync across the entire cluster and are used
by the PEngine to compute the ideal state of the cluster and how it
should be achieved.

This list of instructions is then fed to the DC (Designated
Co-ordinator). Pacemaker centralizes all cluster decision-making by
electing one of the CRMd instances to act as a master. Should the
elected CRMd process, or the node it is on, fail, a new one is quickly
established.

The DC carries out the PEngine's instructions in the required order by
passing them to either the LRMd (Local Resource Management daemon) or
CRMd peers on other nodes via the cluster messaging infrastructure
(which in turn passes them on to their LRMd process).

The peer nodes all report the results of their operations back to the
DC and, based on the expected and actual results, will either execute
any actions that needed to wait for the previous one to complete, or
abort processing and ask the PEngine to recalculate the ideal cluster
state based on the unexpected results.

In some cases, it may be necessary to power off nodes in order to
protect shared data or complete resource recovery. For this, Pacemaker
comes with STONITHd. STONITH is an acronym for
Shoot-The-Other-Node-In-The-Head and is usually implemented with a
remote power switch. In Pacemaker, STONITH devices are modeled as
resources (and configured in the CIB) so that they can be easily
monitored for failure; however, STONITHd takes care of understanding
the STONITH topology, such that its clients simply request that a node
be fenced and it does the rest.

== Types of Pacemaker Clusters ==

Pacemaker makes no assumptions about your environment; this allows it
to support practically any
http://en.wikipedia.org/wiki/High-availability_cluster#Node_configurations[redundancy
configuration], including Active/Active, Active/Passive, N+1, N+M,
N-to-1 and N-to-N.

In this document we will focus on the setup of a highly available
Apache web server with an Active/Passive cluster using DRBD and Ext4
to store data. Then, we will upgrade this cluster to Active/Active
using GFS2.
.Active/Passive Redundancy
-image::images/pcmk-active-passive.png["Two-node Active/Passive clusters using Pacemaker and DRBD are a cost-effective solution for many High Availability situations",align="center"]
+image::images/pcmk-active-passive.png["Two-node Active/Passive clusters using Pacemaker and DRBD are a cost-effective solution for many High Availability situations",align="center",scaledwidth="65%"]

.N to N Redundancy
-image::images/pcmk-active-active.png["When shared storage is available, every node can potentially be used for failover. Pacemaker can even run multiple copies of services to spread out the workload",align="center"]
+image::images/pcmk-active-active.png["When shared storage is available, every node can potentially be used for failover. Pacemaker can even run multiple copies of services to spread out the workload",align="center",scaledwidth="65%"]