diff --git a/doc/Pacemaker_Remote/en-US/Ch-Baremetal-Tutorial.txt b/doc/Pacemaker_Remote/en-US/Ch-Baremetal-Tutorial.txt index e33f5cb6a0..9bf6dc6c20 100644 --- a/doc/Pacemaker_Remote/en-US/Ch-Baremetal-Tutorial.txt +++ b/doc/Pacemaker_Remote/en-US/Ch-Baremetal-Tutorial.txt @@ -1,251 +1,230 @@ = Baremetal Walk-through = +What this tutorial is:+ This tutorial is an in-depth walk-through of how to get pacemaker to integrate a baremetal remote-node into the cluster as a node capable of running cluster resources. +What this tutorial is not:+ This tutorial is not a realistic deployment scenario. The steps shown here are meant to get users familiar with the concept of remote-nodes as quickly as possible. -== Step 1: Setup == - This tutorial requires three machines. Two machines to act as cluster-nodes and a third to act as the baremetal remote-node. -This tutorial was tested using Fedora 18 on both the cluster-nodes and baremetal remote-node. Anything that is capable of running pacemaker v1.1.11 or greater will do though. An installation guide for installing Fedora 18 can be found here, http://docs.fedoraproject.org/en-US/Fedora/18/html/Installation_Guide/. +This tutorial was tested using Fedora 20 on both the cluster-nodes and baremetal remote-node. Anything that is capable of running pacemaker v1.1.11 or greater will do though. An installation guide for installing Fedora 20 can be found here, http://docs.fedoraproject.org/en-US/Fedora/20/html/Installation_Guide/. -Fedora 18 (or similar distro) host preparation steps. +Fedora 20 (or similar distro) host preparation steps. -=== SElinux and Firewall Considerations === +== SElinux and Firewall Considerations == In order to simplify this tutorial we will disable selinux and the firewall on all the nodes. +WARNING:+ These actions create a significant security risk on machines exposed to the outside world. [source,C] ---- # setenforce 0 # sed -i.bak "s/SELINUX=enforcing/SELINUX=permissive/g" /etc/selinux/config # firewall-cmd --add-port 3121/tcp --permanent # systemctl disable iptables.service # systemctl disable ip6tables.service # rm '/etc/systemd/system/basic.target.wants/iptables.service' # rm '/etc/systemd/system/basic.target.wants/ip6tables.service' # systemctl stop iptables.service # systemctl stop ip6tables.service ---- -=== Setup Pacemaker Remote on Baremetal remote-node === +== Setup Pacemaker Remote on Baremetal remote-node == On the baremetal remote-node machine run these commands to generate an authkey and copy it to the /etc/pacemaker folder. [source,C] ---- # mkdir /etc/pacemaker # dd if=/dev/urandom of=/etc/pacemaker/authkey bs=4096 count=1 ---- Make sure to distribute this key to both of the cluster-nodes as well. All the nodes must have the same /etc/pacemaker/authkey installed for the communication to work correctly. Now install and start the pacemaker_remote daemon on the baremetal remote-node. [source,C] ---- # yum install -y pacemaker-remote resource-agents pcs # systemctl enable pacemaker_remote.service # systemctl start pacemaker_remote.service ---- Verify the start is successful. [source,C] ---- # systemctl status pacemaker_remote pacemaker_remote.service - Pacemaker Remote Service Loaded: loaded (/usr/lib/systemd/system/pacemaker_remote.service; enabled) Active: active (running) since Thu 2013-03-14 18:24:04 EDT; 2min 8s ago Main PID: 1233 (pacemaker_remot) CGroup: name=systemd:/system/pacemaker_remote.service └─1233 /usr/sbin/pacemaker_remoted Mar 14 18:24:04 remote1 systemd[1]: Starting Pacemaker Remote Service... 
Mar 14 18:24:04 remote1 systemd[1]: Started Pacemaker Remote Service. Mar 14 18:24:04 remote1 pacemaker_remoted[1233]: notice: lrmd_init_remote_tls_server: Starting a tls listener on port 3121. ---- -=== Verify cluster-node Connection to baremetal-node === +== Verify cluster-node Connection to baremetal-node == Before moving forward it's worth going ahead and verifying the cluster-nodes can contact the baremetal node on port 3121. Here's a trick you can use. Connect using telnet from each of the cluster-nodes. The connection will get destroyed, but how it is destroyed tells you whether it worked or not. First add the baremetal remote-node's hostname (we're using remote1 in this tutorial) to the cluster-nodes' /etc/hosts files if you haven't already. This is required unless you have dns setup in a way where remote1's address can be discovered. Execute the following on each cluster-node, replacing the ip address with the actual ip address of the baremetal remote-node. [source,C] ---- # cat << END >> /etc/hosts 192.168.122.10 remote1 END ---- If running the telnet command on one of the cluster-nodes results in this output before disconnecting, the connection works. [source,C] ---- # telnet remote1 3121 Trying 192.168.122.10... Connected to remote1. Escape character is '^]'. Connection closed by foreign host. ---- If you see this, the connection is not working. [source,C] ---- # telnet remote1 3121 Trying 192.168.122.10... telnet: connect to address 192.168.122.10: No route to host ---- Once you can successfully connect to the baremetal remote-node from the both cluster-nodes, move on to setting up pacemaker on the cluster-nodes. -=== Install cluster-node Software === +== Install cluster-node Software == On the two cluster-nodes install the following packages. [source,C] ---- # yum install -y pacemaker corosync pcs resource-agents ---- -=== Setup Corosync on cluster-nodes === - -On one of the cluster nodes, execute the following. - -[source,C] ----- -# export corosync_addr=`ip addr | grep "inet " | tail -n 1 | awk '{print $4}' | sed s/255/0/g` ----- +== Setup Corosync on cluster-nodes == -Display and verify that address is correct +Corosync handles pacemaker's cluster membership and messaging. The corosync config file is located in /etc/corosync/corosync.conf. That config file must be initialized with information about the two cluster-nodes before pacemaker can start. +To initialize the corosync config file, execute the following pcs command on both nodes filling in the information in <> with your nodes' information. [source,C] ---- -# echo $corosync_addr +# pcs cluster setup --local mycluster ---- -In many cases the address will be 192.168.1.0 if you are behind a standard home router. - -Now copy over the example corosync.conf. This code will inject your bindaddress and enable the vote quorum api which is required by pacemaker. - +A recent syntax change in pcs may cause the above command to fail. If so try this alternative. [source,C] ---- -# cp /etc/corosync/corosync.conf.example /etc/corosync/corosync.conf -# sed -i.bak "s/.*\tbindnetaddr:.*/bindnetaddr:\ $corosync_addr/g" /etc/corosync/corosync.conf -# cat << END >> /etc/corosync/corosync.conf -quorum { - provider: corosync_votequorum - expected_votes: 2 - two_node: 1 -} -END +# pcs cluster setup --force --local --name mycluster ---- -Make sure to copy the newly created /etc/corosync/corosync.conf file to the second cluster-node before continuing. 
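Once the setup command has been run on both cluster-nodes, it is worth a quick sanity check that the generated config actually lists both nodes. The check below is a minimal sketch; the node names node1 and node2 and the exact layout of the generated file are assumptions based on this tutorial's examples, so expect the details to differ on your systems.
[source,C]
----
# grep ring0_addr /etc/corosync/corosync.conf
  ring0_addr: node1
  ring0_addr: node2
----
If either node is missing here, pacemaker will not be able to form the two-node membership described in the next section.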
- -=== Start Pacemaker on cluster-nodes === +== Start Pacemaker on cluster-nodes == Start the cluster stack on both cluster nodes using the following command. [source,C] ---- # pcs cluster start ---- Verify corosync membership [source,C] ---- # pcs status corosync Membership information Nodeid Votes Name 1795270848 1 node1 (local) ---- Verify pacemaker status. At first the 'pcs cluster status' output will look like this. [source,C] ---- # pcs status Last updated: Thu Mar 14 12:26:00 2013 Last change: Thu Mar 14 12:25:55 2013 via crmd on example-host Stack: corosync Current DC: Version: 1.1.11 1 Nodes configured, unknown expected votes 0 Resources configured. ---- After about a minute you should see your two cluster-nodes come online. [source,C] ---- # pcs status Last updated: Thu Mar 14 12:28:23 2013 Last change: Thu Mar 14 12:25:55 2013 via crmd on node1 Stack: corosync Current DC: node1 (1795270848) - partition with quorum Version: 1.1.11 2 Nodes configured, unknown expected votes 0 Resources configured. Online: [ node1 node2 ] ---- For the sake of this tutorial, we are going to disable stonith to avoid having to cover fencing device configuration. [source,C] ---- # pcs property set stonith-enabled=false ---- -=== Integrate Baremetal remote-node into Cluster === +== Integrate Baremetal remote-node into Cluster == Integrating a baremetal remote-node into the cluster is achieved through the creation of a remote-node connection resource. The remote-node connection resource both establishes the connection to the remote-node and defines that the remote-node exists. Note that this resource is actually internal to Pacemaker's crmd component. A metadata file for this resource can be found in the /usr/lib/ocf/resource.d/pacemaker/remote file that describes what options are available, but there is no actual ocf:pacemaker:remote resource agent script that performs any work. Define the remote-node connection resource to our baremetal remote-node, remote1, using the following command. [source,C] ---- # pcs resource create remote1 ocf:pacemaker:remote ---- That's it. After a moment you should see the remote-node come online. [source,C] ---- Last updated: Fri Oct 18 18:47:21 2013 Last change: Fri Oct 18 18:46:14 2013 via cibadmin on node1 Stack: corosync Current DC: node1 (1) - partition with quorum Version: 1.1.11 3 Nodes configured 1 Resources configured Online: [ node1 node2 ] RemoteOnline: [ remote1 ] remote1 (ocf::pacemaker:remote): Started node1 ---- -=== Starting Resources on baremetal remote-node === +== Starting Resources on baremetal remote-node == +"Warning: Never involve a remote-node connection resource in a resource group, colocation, or order constraint"+ Once the baremetal remote-node is integrated into the cluster, starting resources on a baremetal remote-node is the exact same as the cluster nodes. Refer to the Clusters from Scratch document for examples on resource creation. http://clusterlabs.org/doc/ -=== Fencing baremetal remote-nodes === +== Fencing baremetal remote-nodes == The cluster understands how to fence baremetal remote-nodes and can use standard fencing devices to do so. No special considerations are required. Note however that remote-nodes can never initiate a fencing action. Only cluster-nodes are capable of actually executing the fencing operation on another node. 
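To make the earlier "Starting Resources on baremetal remote-node" section concrete, here is a minimal sketch using the ocf:pacemaker:Dummy test agent. The resource name FAKE1 and the location preference are illustrative additions, not part of the original walk-through, and they assume the remote-node is named remote1 as in this tutorial.
[source,C]
----
# pcs resource create FAKE1 ocf:pacemaker:Dummy
# pcs constraint location FAKE1 prefers remote1
----
After a moment, 'pcs status' should show FAKE1 started on remote1, exactly as it would for a resource placed on a cluster-node.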
-=== Accessing Cluster Tools from a Baremetal remote-node === +== Accessing Cluster Tools from a Baremetal remote-node == Besides allowing the cluster to manage resources on a remote-node, pacemaker_remote has one other trick. +The pacemaker_remote daemon allows nearly all the pacemaker tools (crm_resource, crm_mon, crm_attribute, crm_master) to work on remote nodes natively.+ Try it, run +crm_mon+ or +pcs status+ on the baremetal node after pacemaker has integrated the remote-node into the cluster. These tools just work. This means resource agents such as master/slave resources which need access to tools like crm_master work seamlessly on the remote-nodes. diff --git a/doc/Pacemaker_Remote/en-US/Ch-Future.txt b/doc/Pacemaker_Remote/en-US/Ch-Future.txt index 2c493067e6..43d136c4e8 100644 --- a/doc/Pacemaker_Remote/en-US/Ch-Future.txt +++ b/doc/Pacemaker_Remote/en-US/Ch-Future.txt @@ -1,16 +1,17 @@ = Future Features = Basic KVM and Linux container integration was the first phase of development for pacemaker_remote and was completed for Pacemaker v1.1.10. Here are some planned features that expand upon this initial functionality. == Libvirt Sandbox Support == Once the libvirt-sandbox project is integrated with pacemaker_remote, we will gain the ability to perform per-resource linux container isolation with very little performance impact. This functionality will allow resources living on a single node to be isolated from one another. At that point CPU and memory limits could be set per-resource dynamically just using the cluster config. == Bare-metal Support == +"This feature has already been introduced into Pacemaker's master github branch and is scheduled for Pacemaker v1.1.11"+ The pacemaker_remote daemon already has the ability to run on bare-metal hardware nodes, but the policy engine logic for integrating bare-metal nodes is not complete. There are some complications involved with understanding a bare-metal node's state that virtual nodes don't have. Once this logic is complete, pacemaker will be able to integrate bare-metal nodes in the same way virtual remote-nodes currently are. Some special considerations for fencing will need to be addressed. == KVM Migration Support == ++"This feature has already been introduced into Pacemaker's master github branch and is scheduled for Pacemaker v1.1.12"+ Pacemaker's policy engine is limited in its ability to perform live migrations of KVM resources when resource dependencies are involved. This limitation affects how resources living within a KVM remote-node are handled when a live migration takes place. Currently when a live migration is performed on a KVM remote-node, all the resources within that remote-node have to be stopped before the migration takes place and started once again after migration has finished. This policy engine limitation is fully explained in this bug report, http://bugs.clusterlabs.org/show_bug.cgi?id=5055#c3 diff --git a/doc/Pacemaker_Remote/en-US/Ch-KVM-Tutorial.txt b/doc/Pacemaker_Remote/en-US/Ch-KVM-Tutorial.txt index fe0077524f..adf3422cb3 100644 --- a/doc/Pacemaker_Remote/en-US/Ch-KVM-Tutorial.txt +++ b/doc/Pacemaker_Remote/en-US/Ch-KVM-Tutorial.txt @@ -1,483 +1,467 @@ = KVM Walk-through = +What this tutorial is:+ This tutorial is an in-depth walk-through of how to get pacemaker to manage a KVM guest instance and integrate that guest into the cluster as a remote-node. +What this tutorial is not:+ This tutorial is not a realistic deployment scenario. 
The steps shown here are meant to get users familiar with the concept of remote-nodes as quickly as possible. == Step 1: Setup the Host == -This tutorial was created using Fedora 18 on the host and guest nodes. Anything that is capable of running libvirt and pacemaker v1.1.10 or greater will do though. An installation guide for installing Fedora 18 can be found here, http://docs.fedoraproject.org/en-US/Fedora/18/html/Installation_Guide/. +This tutorial was created using Fedora 20 on the host and guest nodes. Anything that is capable of running libvirt and pacemaker v1.1.10 or greater will do though. An installation guide for installing Fedora 20 can be found here, http://docs.fedoraproject.org/en-US/Fedora/20/html/Installation_Guide/. -Fedora 18 (or similar distro) host preparation steps. +Fedora 20 (or similar distro) host preparation steps. === SElinux and Firewall === In order to simply this tutorial we will disable the selinux and the firewall on the host. +WARNING:+ These actions will open a significant security threat to machines exposed to the outside world. [source,C] ---- # setenforce 0 # sed -i.bak "s/SELINUX=enforcing/SELINUX=permissive/g" /etc/selinux/config # systemctl disable iptables.service # systemctl disable ip6tables.service # rm '/etc/systemd/system/basic.target.wants/iptables.service' # rm '/etc/systemd/system/basic.target.wants/ip6tables.service' # systemctl stop iptables.service # systemctl stop ip6tables.service ---- === Install Cluster Software === [source,C] ---- # yum install -y pacemaker corosync pcs resource-agents ---- === Setup Corosync === -Running the command below will attempt to detect the network address corosync should bind to. +Corosync handles pacemaker's cluster membership and messaging. The corosync config file is located in /etc/corosync/corosync.conf. That config file must be initialized with information about the cluster-nodes before pacemaker can start. +To initialize the corosync config file, execute the following pcs command on both nodes filling in the information in <> with your nodes' information. [source,C] ---- -# export corosync_addr=`ip addr | grep "inet " | tail -n 1 | awk '{print $4}' | sed s/255/0/g` +# pcs cluster setup --local mycluster ---- -Display and verify that address is correct - -[source,C] ----- -# echo $corosync_addr ----- - -In many cases the address will be 192.168.1.0 if you are behind a standard home router. - -Now copy over the example corosync.conf. This code will inject your bindaddress and enable the vote quorum api which is required by pacemaker. - +A recent syntax change in pcs may cause the above command to fail. If so try this alternative. [source,C] ---- -# cp /etc/corosync/corosync.conf.example /etc/corosync/corosync.conf -# sed -i.bak "s/.*\tbindnetaddr:.*/bindnetaddr:\ $corosync_addr/g" /etc/corosync/corosync.conf -# cat << END >> /etc/corosync/corosync.conf -quorum { - provider: corosync_votequorum - expected_votes: 2 -} -END +# pcs cluster setup --force --local --name mycluster ---- === Verify Cluster Software === Start the cluster [source,C] ---- # pcs cluster start ---- Verify corosync membership [source,C] ---- # pcs status corosync Membership information Nodeid Votes Name 1795270848 1 example-host (local) ---- Verify pacemaker status. At first the 'pcs cluster status' output will look like this. 
[source,C] ---- # pcs status Last updated: Thu Mar 14 12:26:00 2013 Last change: Thu Mar 14 12:25:55 2013 via crmd on example-host Stack: corosync Current DC: Version: 1.1.10 1 Nodes configured, unknown expected votes 0 Resources configured. ---- After about a minute you should see your host as a single node in the cluster. [source,C] ---- # pcs status Last updated: Thu Mar 14 12:28:23 2013 Last change: Thu Mar 14 12:25:55 2013 via crmd on example-host Stack: corosync Current DC: example-host (1795270848) - partition WITHOUT quorum Version: 1.1.8-9b13ea1 1 Nodes configured, unknown expected votes 0 Resources configured. Online: [ example-host ] ---- Go ahead and stop the cluster for now after verifying everything is in order. [source,C] ---- # pcs cluster stop ---- === Install Virtualization Software === [source,C] ---- # yum install -y kvm libvirt qemu-system qemu-kvm bridge-utils virt-manager # systemctl enable libvirtd.service ---- reboot the host == Step2: Create the KVM guest == I am not going to outline the installation steps required to create a kvm guest. There are plenty of tutorials available elsewhere that do that. I recommend using a Fedora 18 or greater distro as your guest as that is what I am testing this tutorial with. === Setup Guest Network === Run the commands below to set up a static ip address (192.168.122.10) and hostname (guest1). [source,C] ---- export remote_hostname=guest1 export remote_ip=192.168.122.10 export remote_gateway=192.168.122.1 yum remove -y NetworkManager rm -f /etc/hostname cat << END >> /etc/hostname $remote_hostname END hostname $remote_hostname cat << END >> /etc/sysconfig/network HOSTNAME=$remote_hostname GATEWAY=$remote_gateway END sed -i.bak "s/.*BOOTPROTO=.*/BOOTPROTO=none/g" /etc/sysconfig/network-scripts/ifcfg-eth0 cat << END >> /etc/sysconfig/network-scripts/ifcfg-eth0 IPADDR0=$remote_ip PREFIX0=24 GATEWAY0=$remote_gateway DNS1=$remote_gateway END systemctl restart network systemctl enable network.service systemctl enable sshd systemctl start sshd echo "checking connectivity" ping www.google.com ---- To simplify the tutorial we'll go ahead and disable selinux on the guest. We'll also need to poke a hole through the firewall on port 3121 (the default port for pacemaker_remote) so the host can contact the guest. [source,C] ---- # setenforce 0 # sed -i.bak "s/SELINUX=enforcing/SELINUX=permissive/g" /etc/selinux/config # firewall-cmd --add-port 3121/tcp --permanent ---- If you still encounter connection issues just disable iptables and ipv6tables on the guest like we did on the host to guarantee you'll be able to contact the guest from the host. At this point you should be able to ssh into the guest from the host. === Setup Pacemaker Remote === On the +HOST+ machine run these commands to generate an authkey and copy it to the /etc/pacemaker folder on both the host and guest. [source,C] ---- # mkdir /etc/pacemaker # dd if=/dev/urandom of=/etc/pacemaker/authkey bs=4096 count=1 # scp -r /etc/pacemaker root@192.168.122.10:/etc/ ---- Now on the +GUEST+ install pacemaker-remote package and enable the daemon to run at startup. In the commands below you will notice the 'pacemaker' and 'pacemaker_remote' packages are being installed. The 'pacemaker' package is not required. The only reason it is being installed for this tutorial is because it contains the a 'Dummy' resource agent we will be using later on to test the remote-node. 
[source,C] ---- # yum install -y pacemaker pacemaker-remote resource-agents # systemctl enable pacemaker_remote.service ---- Now start pacemaker_remote on the guest and verify the start was successful. [source,C] ---- # systemctl start pacemaker_remote.service # systemctl status pacemaker_remote pacemaker_remote.service - Pacemaker Remote Service Loaded: loaded (/usr/lib/systemd/system/pacemaker_remote.service; enabled) Active: active (running) since Thu 2013-03-14 18:24:04 EDT; 2min 8s ago Main PID: 1233 (pacemaker_remot) CGroup: name=systemd:/system/pacemaker_remote.service └─1233 /usr/sbin/pacemaker_remoted Mar 14 18:24:04 guest1 systemd[1]: Starting Pacemaker Remote Service... Mar 14 18:24:04 guest1 systemd[1]: Started Pacemaker Remote Service. Mar 14 18:24:04 guest1 pacemaker_remoted[1233]: notice: lrmd_init_remote_tls_server: Starting a tls listener on port 3121. ---- === Verify Host Connection to Guest === Before moving forward it's worth going ahead and verifying the host can contact the guest on port 3121. Here's a trick you can use. Connect using telnet from the host. The connection will get destroyed, but how it is destroyed tells you whether it worked or not. First add guest1 to the host machine's /etc/hosts file if you haven't already. This is required unless you have dns setup in a way where guest1's address can be discovered. [source,C] ---- # cat << END >> /etc/hosts 192.168.122.10 guest1 END ---- If running the telnet command on the host results in this output before disconnecting, the connection works. [source,C] ---- # telnet guest1 3121 Trying 192.168.122.10... Connected to guest1. Escape character is '^]'. Connection closed by foreign host. ---- If you see this, the connection is not working. [source,C] ---- # telnet guest1 3121 Trying 192.168.122.10... telnet: connect to address 192.168.122.10: No route to host ---- Once you can successfully connect to the guest from the host, shut down the guest. Pacemaker will be managing the virtual machine from this point forward. == Step 3: Integrate KVM guest into Cluster. == Now the fun part, integrating the virtual machine you've just created into the cluster. It is incredibly simple. === Start the Cluster === On the host, start pacemaker. [source,C] ---- # pcs cluster start ---- Wait for the host to become the DC. The output of 'pcs status' should look similar to this after about a minute. [source,C] ---- Last updated: Thu Mar 14 16:41:22 2013 Last change: Thu Mar 14 16:41:08 2013 via crmd on example-host Stack: corosync Current DC: example-host (1795270848) - partition WITHOUT quorum Version: 1.1.10 1 Nodes configured, unknown expected votes 0 Resources configured. Online: [ example-host ] ---- Now enable the cluster to work without quorum or stonith. This is required just for the sake of getting this tutorial to work with a single cluster-node. [source,C] ---- # pcs property set stonith-enabled=false # pcs property set no-quorum-policy=ignore ---- === Integrate KVM Guest as remote-node === If you didn't already do this earlier in the verify host to guest connection section, add the KVM guest's ip to the host's /etc/hosts file so we can connect by hostname. The command below will do that if you used the same ip address I used earlier. [source,C] ---- # cat << END >> /etc/hosts 192.168.122.10 guest1 END ---- We will use the +VirtualDomain+ resource agent for the management of the virtual machine. This agent requires the virtual machine's xml config to be dumped to a file on disk. 
To do this, pick out the name of the virtual machine you just created from the output of this list. [source,C] ---- # virsh list --all Id Name State ______________________________________________ - guest1 shut off ---- In my case I named it guest1. Dump the xml to a file somewhere on the host using the following command. [source,C] ---- # virsh dumpxml guest1 > /root/guest1.xml ---- Now just register the resource with pacemaker and you're set! [source,C] ---- # pcs resource create vm-guest1 VirtualDomain hypervisor="qemu:///system" config="/root/guest1.xml" meta remote-node=guest1 ---- Once the 'vm-guest1' resource is started you will see 'guest1' appear in the 'pcs status' output as a node. The final 'pcs status' output should look something like this. [source,C] ---- Last updated: Fri Mar 15 09:30:30 2013 Last change: Thu Mar 14 17:21:35 2013 via cibadmin on example-host Stack: corosync Current DC: example-host (1795270848) - partition WITHOUT quorum Version: 1.1.10 2 Nodes configured, unknown expected votes 2 Resources configured. Online: [ example-host guest1 ] Full list of resources: vm-guest1 (ocf::heartbeat:VirtualDomain): Started example-host ---- === Starting Resources on KVM Guest === The commands below demonstrate how resources can be executed on both the remote-node and the cluster-node. Create a few Dummy resources. Dummy resources are real resource agents used just for testing purposes. They actually execute on the host they are assigned to just like an apache server or database would, except their execution just means a file was created. When the resource is stopped, the file it created is removed. [source,C] ---- # pcs resource create FAKE1 ocf:pacemaker:Dummy # pcs resource create FAKE2 ocf:pacemaker:Dummy # pcs resource create FAKE3 ocf:pacemaker:Dummy # pcs resource create FAKE4 ocf:pacemaker:Dummy # pcs resource create FAKE5 ocf:pacemaker:Dummy ---- Now check your 'pcs status' output. In the resource section you should see something like the following, where some of the resources got started on the cluster-node, and some started on the remote-node. [source,C] ---- Full list of resources: vm-guest1 (ocf::heartbeat:VirtualDomain): Started example-host FAKE1 (ocf::pacemaker:Dummy): Started guest1 FAKE2 (ocf::pacemaker:Dummy): Started guest1 FAKE3 (ocf::pacemaker:Dummy): Started example-host FAKE4 (ocf::pacemaker:Dummy): Started guest1 FAKE5 (ocf::pacemaker:Dummy): Started example-host ---- The remote-node, 'guest1', reacts just like any other node in the cluster. For example, pick out a resource that is running on your cluster-node. For my purposes I am picking FAKE3 from the output above. We can force FAKE3 to run on 'guest1' in the exact same way we would any other node. [source,C] ---- # pcs constraint location FAKE3 prefers guest1 ---- Now looking at the bottom of the 'pcs status' output you'll see FAKE3 is on 'guest1'. [source,C] ---- Full list of resources: vm-guest1 (ocf::heartbeat:VirtualDomain): Started example-host FAKE1 (ocf::pacemaker:Dummy): Started guest1 FAKE2 (ocf::pacemaker:Dummy): Started guest1 FAKE3 (ocf::pacemaker:Dummy): Started guest1 FAKE4 (ocf::pacemaker:Dummy): Started example-host FAKE5 (ocf::pacemaker:Dummy): Started example-host ---- === Testing Remote-node Recovery and Fencing === Pacemaker's policy engine is smart enough to know fencing remote-nodes associated with a virtual machine means shutting off/rebooting the virtual machine. No special configuration is necessary to make this happen. 
If you are interested in testing this functionality out, try stopping the guest's pacemaker_remote daemon. This would be the equivalent of abruptly terminating a cluster-node's corosync membership without properly shutting it down. ssh into the guest and run this command. [source,C] ---- # kill -9 `pidof pacemaker_remoted` ---- After a few seconds or so you'll see this in your 'pcs status' output. The 'guest1' node will be shown as offline as it is being recovered. [source,C] ---- Last updated: Fri Mar 15 11:00:31 2013 Last change: Fri Mar 15 09:54:16 2013 via cibadmin on example-host Stack: corosync Current DC: example-host (1795270848) - partition WITHOUT quorum Version: 1.1.10 2 Nodes configured, unknown expected votes 7 Resources configured. Online: [ example-host ] OFFLINE: [ guest1 ] Full list of resources: vm-guest1 (ocf::heartbeat:VirtualDomain): Started example-host FAKE1 (ocf::pacemaker:Dummy): Stopped FAKE2 (ocf::pacemaker:Dummy): Stopped FAKE3 (ocf::pacemaker:Dummy): Stopped FAKE4 (ocf::pacemaker:Dummy): Started example-host FAKE5 (ocf::pacemaker:Dummy): Started example-host Failed actions: guest1_monitor_30000 (node=example-host, call=3, rc=7, status=complete): not running ---- Once recovery of the guest is complete, you'll see it automatically get re-integrated into the cluster. The final 'pcs status' output should look something like this. [source,C] ---- Last updated: Fri Mar 15 11:03:17 2013 Last change: Fri Mar 15 09:54:16 2013 via cibadmin on example-host Stack: corosync Current DC: example-host (1795270848) - partition WITHOUT quorum Version: 1.1.10 2 Nodes configured, unknown expected votes 7 Resources configured. Online: [ example-host guest1 ] Full list of resources: vm-guest1 (ocf::heartbeat:VirtualDomain): Started example-host FAKE1 (ocf::pacemaker:Dummy): Started guest1 FAKE2 (ocf::pacemaker:Dummy): Started guest1 FAKE3 (ocf::pacemaker:Dummy): Started guest1 FAKE4 (ocf::pacemaker:Dummy): Started example-host FAKE5 (ocf::pacemaker:Dummy): Started example-host Failed actions: guest1_monitor_30000 (node=example-host, call=3, rc=7, status=complete): not running ---- === Accessing Cluster Tools from Remote-node === Besides just allowing the cluster to manage resources on a remote-node, pacemaker_remote has one other trick. +The pacemaker_remote daemon allows nearly all the pacemaker tools (crm_resource, crm_mon, crm_attribute, crm_master) to work on remote nodes natively.+ Try it, run +crm_mon+ or +pcs status+ on the guest after pacemaker has integrated the remote-node into the cluster. These tools just work. This means resource agents such as master/slave resources which need access to tools like crm_master work seamlessly on the remote-nodes. diff --git a/doc/Pacemaker_Remote/en-US/Ch-LXC-Tutorial.txt b/doc/Pacemaker_Remote/en-US/Ch-LXC-Tutorial.txt index f6be9cec0a..9b14effe09 100644 --- a/doc/Pacemaker_Remote/en-US/Ch-LXC-Tutorial.txt +++ b/doc/Pacemaker_Remote/en-US/Ch-LXC-Tutorial.txt @@ -1,330 +1,314 @@ = Linux Container (LXC) Walk-through = +Warning: Continued development in the VirtualDomain agent, libvirt, and the lxc_autogen script has rendered this tutorial (in its current form) obsolete.+ The high level approach of this tutorial remains accurate, but many of the specifics related to configuring the lxc environment have changed. This walk-through needs to be updated to reflect the current tested methodology. 
+What this tutorial is:+ This tutorial demonstrates how pacemaker_remote can be used with Linux containers (managed by libvirt-lxc) to run cluster resources in an isolated environment. +What this tutorial is not:+ This tutorial is not a realistic deployment scenario. The steps shown here are meant to introduce users to the concept of managing Linux container environments with Pacemaker. == Step 1: Setup LXC Host == This tutorial was tested with Fedora 18. Anything that is capable of running libvirt and pacemaker v1.1.10 or greater will do though. An installation guide for installing Fedora 18 can be found here, http://docs.fedoraproject.org/en-US/Fedora/18/html/Installation_Guide/. Fedora 18 (or similar distro) host preparation steps. === SElinux and Firewall Rules === In order to simply this tutorial we will disable the selinux and the firewall on the host. WARNING: These actions pose a significant security issues to machines exposed to the outside world. Basically, just don't do this on your production system. [source,C] ---- # setenforce 0 # sed -i.bak "s/SELINUX=enforcing/SELINUX=permissive/g" /etc/selinux/config # firewall-cmd --add-port 3121/tcp --permanent # systemctl disable iptables.service # systemctl disable ip6tables.service # rm '/etc/systemd/system/basic.target.wants/iptables.service' # rm '/etc/systemd/system/basic.target.wants/ip6tables.service' # systemctl stop iptables.service # systemctl stop ip6tables.service ---- === Install Cluster Software on Host === [source,C] ---- # yum install -y pacemaker pacemaker-remote corosync pcs resource-agents ---- === Configure Corosync === -Running the command below will attempt to detect the network address corosync should bind to. +Corosync handles pacemaker's cluster membership and messaging. The corosync config file is located in /etc/corosync/corosync.conf. That config file must be initialized with information about the cluster-nodes before pacemaker can start. +To initialize the corosync config file, execute the following pcs command on both nodes filling in the information in <> with your nodes' information. [source,C] ---- -# export corosync_addr=`ip addr | grep "inet " | tail -n 1 | awk '{print $4}' | sed s/255/0/g` +# pcs cluster setup --local mycluster ---- -Display and verify the address is correct - -[source,C] ----- -# echo $corosync_addr ----- - -In most cases the address will be 192.168.1.0 if you are behind a standard home router. - -Now copy over the example corosync.conf. This code will inject your bindaddress and enable the vote quorum api which is required by pacemaker. - +A recent syntax change in pcs may cause the above command to fail. If so try this alternative. [source,C] ---- -# cp /etc/corosync/corosync.conf.example /etc/corosync/corosync.conf -# sed -i.bak "s/.*\tbindnetaddr:.*/bindnetaddr:\ $corosync_addr/g" /etc/corosync/corosync.conf -# cat << END >> /etc/corosync/corosync.conf -quorum { - provider: corosync_votequorum - expected_votes: 2 -} -END +# pcs cluster setup --force --local --name mycluster ---- === Verify Cluster === Start the cluster [source,C] ---- # pcs cluster start ---- Verify corosync membership [source,C] ---- # pcs status corosync Membership information Nodeid Votes Name 1795270848 1 example-host (local) ---- Verify pacemaker status. At first the 'pcs cluster status' output will look like this. 
[source,C] ---- # pcs status Last updated: Thu Mar 14 12:26:00 2013 Last change: Thu Mar 14 12:25:55 2013 via crmd on example-host Stack: corosync Current DC: Version: 1.1.10 1 Nodes configured, unknown expected votes 0 Resources configured. ---- After about a minute you should see your host as a single node in the cluster. [source,C] ---- # pcs status Last updated: Thu Mar 14 12:28:23 2013 Last change: Thu Mar 14 12:25:55 2013 via crmd on example-host Stack: corosync Current DC: example-host (1795270848) - partition WITHOUT quorum Version: 1.1.8-9b13ea1 1 Nodes configured, unknown expected votes 0 Resources configured. Online: [ example-host ] ---- Go ahead and stop the cluster for now after verifying everything is in order. [source,C] ---- # pcs cluster stop ---- == Step 2: Setup LXC Environment == === Install Libvirt LXC software === [source,C] ---- # yum install -y libvirt libvirt-daemon-lxc wget # systemctl enable libvirtd ---- At this point, restart the host. === Generate Libvirt LXC domains === I've attempted to simply this tutorial by creating a script to auto generate the libvirt-lxc xml domain definitions. Download the script to whatever directory you want the containers to live in. In this example I am using the /root/lxc/ directory. [source,C] ---- # mkdir /root/lxc/ # cd /root/lxc/ # wget https://raw.github.com/davidvossel/pcmk-lxc-autogen/master/lxc-autogen # chmod 755 lxc-autogen ---- Now execute the script. [source,C] ---- # ./lxc-autogen ---- After executing the script you will see a bunch of directories and xml files are generated. Those xml files are the libvirt-lxc domain definitions, and the directories are used as some special mount points for each container. If you open up one of the xml files you'll be able to see how the cpu, memory, and filesystem resources for the container are defined. You can use the libvirt-lxc driver's documentation found here, http://libvirt.org/drvlxc.html, as a reference to help understand all the parts of the xml file. The lxc-autogen script is not complicated and is worth exploring in order to grasp how the environment is generated. It is worth noting that this environment is dependent on use of libvirt's default network interface. Verify the commands below look the same as your environment. The default network address 192.168.122.1 should have been generated by automatically when you installed the virtualization software. [source,C] ---- # virsh net-list Name State Autostart Persistent ________________________________________________________ default active yes yes # virsh net-dumpxml default | grep -e "ip address=" ---- === Generate the Authkey === Generate the authkey used to secure connections between the host and the lxc guest pacemaker_remote instances. This is sort of a funny case because the lxc guests and the host will share the same key file in the /etc/pacemaker/ directory. If in a different deployment where the lxc guests do not share the host's /etc/pacemaker directory, this key will have to be copied into each lxc guest. [source,C] ---- # dd if=/dev/urandom of=/etc/pacemaker/authkey bs=4096 count=1 ---- == Step 3: Integrate LXC guests into Cluster. == === Start Cluster === On the host, start pacemaker. [source,C] ---- # pcs cluster start ---- Wait for the host to become the DC. The output of 'pcs status' should look similar to this after about a minute. 
[source,C] ---- Last updated: Thu Mar 14 16:41:22 2013 Last change: Thu Mar 14 16:41:08 2013 via crmd on example-host Stack: corosync Current DC: example-host (1795270848) - partition WITHOUT quorum Version: 1.1.10 1 Nodes configured, unknown expected votes 0 Resources configured. Online: [ example-host ] ---- Now enable the cluster to work without quorum or stonith. This is required just for the sake of getting this tutorial to work with a single cluster-node. [source,C] ---- # pcs property set stonith-enabled=false # pcs property set no-quorum-policy=ignore ---- === Integrate LXC Guests as remote-nodes === If you ran the 'lxc-autogen' script with default parameters, 3 lxc domain definitions were created as .xml files. If you used the same directory I used for the lxc environment, the config files will be located in /root/lxc. Replace the 'config' parameters in the following pcs commands if your directory is different. The pcs commands below each configure an lxc guest as a remote-node in pacemaker. Behind the scenes each lxc guest is launching an instance of pacemaker_remote allowing pacemaker to integrate the lxc guests as remote-nodes. The meta-attribute 'remote-node=' used in each command is what tells pacemaker that the lxc guest is both a resource and a remote-node capable of running resources. In this case, the 'remote-node' attribute also indicates to pacemaker that it can contact each lxc's pacemaker_remote service by using the remote-node name as the hostname. If you look in the /etc/hosts file you will see entries for each lxc guest. These entries were auto-generated earlier by the 'lxc-autogen' script. [source,C] ---- # pcs resource create container1 VirtualDomain force_stop="true" hypervisor="lxc:///" config="/root/lxc/lxc1.xml" meta remote-node=lxc1 # pcs resource create container2 VirtualDomain force_stop="true" hypervisor="lxc:///" config="/root/lxc/lxc2.xml" meta remote-node=lxc2 # pcs resource create container3 VirtualDomain force_stop="true" hypervisor="lxc:///" config="/root/lxc/lxc3.xml" meta remote-node=lxc3 ---- After creating the container resources, your 'pcs status' should look like this. [source,C] ---- Last updated: Mon Mar 18 17:15:46 2013 Last change: Mon Mar 18 17:15:26 2013 via cibadmin on guest1 Stack: corosync Current DC: example-host (175810752) - partition WITHOUT quorum Version: 1.1.10 4 Nodes configured, unknown expected votes 6 Resources configured. Online: [ example-host lxc1 lxc2 lxc3 ] Full list of resources: container3 (ocf::heartbeat:VirtualDomain): Started example-host container1 (ocf::heartbeat:VirtualDomain): Started example-host container2 (ocf::heartbeat:VirtualDomain): Started example-host ---- === Starting Resources on LXC Guests === Now that the lxc guests are integrated into the cluster, let's generate some Dummy resources to run on them. Dummy resources are real resource agents used just for testing purposes. They actually execute on the node they are assigned to just like an apache server or database would, except their execution just means a file was created. When the resource is stopped, the file it created is removed. [source,C] ---- # pcs resource create FAKE1 ocf:pacemaker:Dummy # pcs resource create FAKE2 ocf:pacemaker:Dummy # pcs resource create FAKE3 ocf:pacemaker:Dummy # pcs resource create FAKE4 ocf:pacemaker:Dummy # pcs resource create FAKE5 ocf:pacemaker:Dummy ---- After creating the Dummy resources you will see that the resources got distributed among all the nodes. The 'pcs status' output should look similar to this. 
[source,C] ---- Last updated: Mon Mar 18 17:31:54 2013 Last change: Mon Mar 18 17:31:05 2013 via cibadmin on example-host Stack: corosync Current DC: example-host (175810752) - partition WITHOUT quorum Version: 1.1.10 4 Nodes configured, unknown expected votes 11 Resources configured. Online: [ example-host lxc1 lxc2 lxc3 ] Full list of resources: container3 (ocf::heartbeat:VirtualDomain): Started example-host container1 (ocf::heartbeat:VirtualDomain): Started example-host container2 (ocf::heartbeat:VirtualDomain): Started example-host FAKE1 (ocf::pacemaker:Dummy): Started lxc1 FAKE2 (ocf::pacemaker:Dummy): Started lxc2 FAKE3 (ocf::pacemaker:Dummy): Started lxc3 FAKE4 (ocf::pacemaker:Dummy): Started lxc1 FAKE5 (ocf::pacemaker:Dummy): Started lxc2 ---- To witness that Dummy agents are running within the lxc guests, browse one of the lxc domain's filesystem folders. Each lxc guest has a custom mount point for the '/var/run/' directory, which is the location the Dummy resources write their state files to. [source,C] ---- # ls lxc1-filesystem/var/run/ Dummy-FAKE4.state Dummy-FAKE.state ---- If you are curious, take a look at lxc1.xml to see how the filesystem is mounted. === Testing LXC Guest Failure === You will be able to see each pacemaker_remoted process running in each lxc guest from the host machine. [source,C] ---- # ps -A | grep -e pacemaker_remote* 9142 pts/2 00:00:00 pacemaker_remot 10148 pts/4 00:00:00 pacemaker_remot 10942 pts/6 00:00:00 pacemaker_remot ---- To see how the cluster reacts to a failed lxc guest, try killing one of the pacemaker_remote instances. [source,C] ---- # kill -9 9142 ---- After a few moments the lxc guest that was running that instance of pacemaker_remote will be recovered along with all the resources running within that container. diff --git a/doc/Pacemaker_Remote/en-US/Pacemaker_Remote.ent b/doc/Pacemaker_Remote/en-US/Pacemaker_Remote.ent index 65d8badd4f..be6171c50d 100644 --- a/doc/Pacemaker_Remote/en-US/Pacemaker_Remote.ent +++ b/doc/Pacemaker_Remote/en-US/Pacemaker_Remote.ent @@ -1,6 +1,6 @@ - +