diff --git a/doc/Pacemaker_Remote/en-US/Ch-KVM-Tutorial.txt b/doc/Pacemaker_Remote/en-US/Ch-KVM-Tutorial.txt
index 328a52eb0f..72a9076592 100644
--- a/doc/Pacemaker_Remote/en-US/Ch-KVM-Tutorial.txt
+++ b/doc/Pacemaker_Remote/en-US/Ch-KVM-Tutorial.txt
@@ -1,477 +1,583 @@
= Guest Node Walk-through =

*What this tutorial is:* An in-depth walk-through of how to get Pacemaker to
manage a KVM guest instance and integrate that guest into the cluster as a
guest node.

*What this tutorial is not:* A realistic deployment scenario. The steps shown
here are meant to get users familiar with the concept of guest nodes as
quickly as possible.

== Configure the Physical Host ==

-=== SElinux and Firewall ===
+[NOTE]
+======
+For this example, we will use a single physical host named *example-host*.
+A production cluster would likely have multiple physical hosts, in which case
+you would run the commands here on each one, unless noted otherwise.
+======

-In order to simplify this tutorial, we will disable SELinux and the local
-firewall on the host. This may create significant security issues and should
-not be performed on machines that will be exposed to the outside world, but may
-be appropriate during development and testing on a protected host.
+=== Configure Firewall on Host ===
+
+On the physical host, allow cluster-related services through the local
+firewall:
----
-# setenforce 0
-# sed -i.bak "s/SELINUX=enforcing/SELINUX=permissive/g" /etc/selinux/config
-# systemctl disable firewalld.service
-# systemctl stop firewalld.service
-# iptables --flush
+# firewall-cmd --permanent --add-service=high-availability
+success
+# firewall-cmd --reload
+success
----
+
+[NOTE]
+======
+If you are using iptables directly, or some other firewall solution besides
+firewalld, simply open the following ports, which can be used by various
+clustering components: TCP ports 2224, 3121, and 21064, and UDP port 5405.
+
+If you run into any problems during testing, you might want to disable
+the firewall and SELinux entirely until you have everything working.
+This may create significant security issues and should not be performed on
+machines that will be exposed to the outside world, but may be appropriate
+during development and testing on a protected host.
+
+To disable security measures:
+----
+# setenforce 0
+# sed -i.bak "s/SELINUX=enforcing/SELINUX=permissive/g" /etc/selinux/config
+# systemctl disable firewalld.service
+# systemctl stop firewalld.service
+# iptables --flush
+----
+======
+
=== Install Cluster Software ===

----
# yum install -y pacemaker corosync pcs resource-agents
----

=== Configure Corosync ===

Corosync handles pacemaker's cluster membership and messaging. The corosync
-config file is located in /etc/corosync/corosync.conf. That config file must be
-initialized with information about the cluster nodes before pacemaker can
+config file is located in +/etc/corosync/corosync.conf+. That config file must
+be initialized with information about the cluster nodes before pacemaker can
start.

-To initialize the corosync config file, execute the following pcs command on both nodes filling in the information in <> with your nodes' information.
+To initialize the corosync config file, execute the following `pcs` command,
+replacing the cluster name and hostname as desired:
----
-# pcs cluster setup --force --local --name mycluster
+# pcs cluster setup --force --local --name mycluster example-host
----

+[NOTE]
+======
+If you have multiple physical hosts, you would execute the setup command on
+only one host, but list all of them at the end of the command.
+======
+
+=== Configure Pacemaker for Remote Node Communication ===
+
+Create a place to hold an authentication key for use with pacemaker_remote:
+----
+# mkdir -p --mode=0750 /etc/pacemaker
+# chgrp haclient /etc/pacemaker
+----
+
+Generate a key:
+----
+# dd if=/dev/urandom of=/etc/pacemaker/authkey bs=4096 count=1
+----
+
+[NOTE]
+======
+If you have multiple physical hosts, you would generate the key on only one
+host, and copy it to the same location on all hosts.
+======
+
=== Verify Cluster Software ===

Start the cluster
----
# pcs cluster start
----

Verify corosync membership
-----
+....
# pcs status corosync

Membership information
+----------------------
    Nodeid      Votes Name
-1795270848          1 example-host (local)
-----
+         1          1 example-host (local)
+....

Verify pacemaker status. At first, the output will look like this:
----
# pcs status
+Cluster name: mycluster
+WARNING: no stonith devices and stonith-enabled is not false
+Last updated: Fri Oct  9 15:18:32 2015          Last change: Fri Oct  9 12:42:21 2015 by root via cibadmin on example-host
+Stack: corosync
+Current DC: NONE
+1 node and 0 resources configured
+
+Node example-host: UNCLEAN (offline)
+
+Full list of resources:
+
+
+PCSD Status:
+  example-host: Online
+
+Daemon Status:
+  corosync: active/disabled
+  pacemaker: active/disabled
+  pcsd: active/enabled
+----
+
+After a short amount of time, you should see your host as a single node in the
+cluster:
+----
+# pcs status
+Cluster name: mycluster
+WARNING: no stonith devices and stonith-enabled is not false
+Last updated: Fri Oct  9 15:20:05 2015          Last change: Fri Oct  9 12:42:21 2015 by root via cibadmin on example-host
+Stack: corosync
+Current DC: example-host (version 1.1.13-a14efad) - partition WITHOUT quorum
+1 node and 0 resources configured
-
- Last updated: Thu Mar 14 12:26:00 2013
- Last change: Thu Mar 14 12:25:55 2013 via crmd on example-host
- Stack: corosync
- Current DC:
- Version: 1.1.10
- 1 Nodes configured, unknown expected votes
- 0 Resources configured.
+Online: [ example-host ]
+
+Full list of resources:
+
+
+PCSD Status:
+  example-host: Online
+
+Daemon Status:
+  corosync: active/disabled
+  pacemaker: active/disabled
+  pcsd: active/enabled
----

-After about a minute you should see your host as a single node in the cluster.
+=== Disable STONITH and Quorum ===
+
+Now, enable the cluster to work without quorum or stonith. This is required
+for the sake of getting this tutorial to work with a single cluster node.

+----
+# pcs property set stonith-enabled=false
+# pcs property set no-quorum-policy=ignore
+----
+
+[WARNING]
+=========
+The use of `stonith-enabled=false` is completely inappropriate for a production
+cluster. It tells the cluster to simply pretend that failed nodes are safely
+powered off. Some vendors will refuse to support clusters that have STONITH
+disabled. We disable STONITH here only to focus the discussion on
+pacemaker_remote, and to be able to use a single physical host in the example.
+=========
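+
+You can verify that both properties are now set. A quick check (output
+abbreviated, and its exact format may differ slightly between pcs versions):
+----
+# pcs property list
+Cluster Properties:
+ no-quorum-policy: ignore
+ stonith-enabled: false
+----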
+
+Now, the status output should look similar to this:
----
# pcs status
+Cluster name: mycluster
+Last updated: Fri Oct  9 15:22:49 2015          Last change: Fri Oct  9 15:22:46 2015 by root via cibadmin on example-host
+Stack: corosync
+Current DC: example-host (version 1.1.13-a14efad) - partition with quorum
+1 node and 0 resources configured
+
+Online: [ example-host ]
+
+Full list of resources:
+
- Last updated: Thu Mar 14 12:28:23 2013
- Last change: Thu Mar 14 12:25:55 2013 via crmd on example-host
- Stack: corosync
- Current DC: example-host (1795270848) - partition WITHOUT quorum
- Version: 1.1.8-9b13ea1
- 1 Nodes configured, unknown expected votes
- 0 Resources configured.
+PCSD Status:
+  example-host: Online
- Online: [ example-host ]
+Daemon Status:
+  corosync: active/disabled
+  pacemaker: active/disabled
+  pcsd: active/enabled
----

Go ahead and stop the cluster for now after verifying everything is in order.
----
-# pcs cluster stop
+# pcs cluster stop --force
----

=== Install Virtualization Software ===

----
# yum install -y kvm libvirt qemu-system qemu-kvm bridge-utils virt-manager
# systemctl enable libvirtd.service
----

-reboot the host
+Reboot the host.

[NOTE]
======
While KVM is used in this example, any virtualization platform with a Pacemaker
resource agent can be used to create a guest node. The resource agent needs
only to support usual commands (start, stop, etc.); Pacemaker implements the
*remote-node* meta-attribute, independent of the agent.
======

== Configure the KVM guest ==

=== Create Guest ===

-I am not going to outline the installation steps required to create a KVM
+We will not outline here the installation steps required to create a KVM
guest. There are plenty of tutorials available elsewhere that do that.
+Just be sure to configure the guest with a hostname and a static IP address
+(as an example here, we will use guest1 and 192.168.122.10).

-=== Configure Guest Network ===
+=== Configure Firewall on Guest ===

-Run the commands below to set up a static ip address (192.168.122.10) and hostname (guest1).
+On each guest, allow cluster-related services through the local firewall,
+following the same procedure as in <<_configure_firewall_on_host>>.

-----
-export remote_hostname=guest1
-export remote_ip=192.168.122.10
-export remote_gateway=192.168.122.1
-
-yum remove -y NetworkManager
-
-rm -f /etc/hostname
-cat << END >> /etc/hostname
-$remote_hostname
-END
-
-hostname $remote_hostname
+=== Verify Connectivity ===

-cat << END >> /etc/sysconfig/network
-HOSTNAME=$remote_hostname
-GATEWAY=$remote_gateway
-END
-
-sed -i.bak "s/.*BOOTPROTO=.*/BOOTPROTO=none/g" /etc/sysconfig/network-scripts/ifcfg-eth0
-
-cat << END >> /etc/sysconfig/network-scripts/ifcfg-eth0
-IPADDR0=$remote_ip
-PREFIX0=24
-GATEWAY0=$remote_gateway
-DNS1=$remote_gateway
-END
+At this point, you should be able to ping and ssh into guests from hosts, and
+vice versa.
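+
+For example, from the host, using the example IP address from above:
+----
+# ping -c 1 192.168.122.10
+# ssh root@192.168.122.10 hostname
+guest1
+----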
-systemctl restart network
-systemctl enable network.service
-systemctl enable sshd
-systemctl start sshd
+=== Configure pacemaker_remote ===
+
-echo "checking connectivity"
-ping www.google.com
+Install pacemaker_remote, and enable it to run at start-up. Here, we also
+install the pacemaker package; it is not required, but it contains the Dummy
+resource agent that we will use later for testing.
----
-
-To simplify the tutorial we'll go ahead and disable selinux on the guest. We'll also need to poke a hole through the firewall on port 3121 (the default port for pacemaker_remote) so the host can contact the guest.
-
+# yum install -y pacemaker pacemaker-remote resource-agents
+# systemctl enable pacemaker_remote.service
----
-# setenforce 0
-# sed -i.bak "s/SELINUX=enforcing/SELINUX=permissive/g" /etc/selinux/config
-
-# firewall-cmd --add-port 3121/tcp --permanent
-----
-
-If you still encounter connection issues, just disable firewalld on the guest
-like we did on the host, to guarantee you'll be able to contact the guest from
-the host.
-
-At this point you should be able to ssh into the guest from the host.
-
-=== Configure pacemaker_remote ===
-
-On the 'host' machine, run these commands to generate an authkey and copy it to
-the /etc/pacemaker folder on both the host and guest.
+Copy the authentication key from a host:
----
# mkdir -p --mode=0750 /etc/pacemaker
# chgrp haclient /etc/pacemaker
-# dd if=/dev/urandom of=/etc/pacemaker/authkey bs=4096 count=1
-# scp -r /etc/pacemaker root@192.168.122.10:/etc/
+# scp root@example-host:/etc/pacemaker/authkey /etc/pacemaker
----

-Now on the 'guest', install the pacemaker-remote package, and enable the daemon
-to run at startup. In the commands below, you will notice the pacemaker
-package is also installed. It is not required; the only reason it is being
-installed for this tutorial is because it contains the Dummy resource agent
-that we will use later for testing.
-
+Start pacemaker_remote, and verify the start was successful:
----
-# yum install -y pacemaker pacemaker-remote resource-agents
-# systemctl enable pacemaker_remote.service
-----
-
-Now start pacemaker_remote on the guest and verify the start was successful.
-
-----
-# systemctl start pacemaker_remote.service
-
+# systemctl start pacemaker_remote
# systemctl status pacemaker_remote

 pacemaker_remote.service - Pacemaker Remote Service
          Loaded: loaded (/usr/lib/systemd/system/pacemaker_remote.service; enabled)
          Active: active (running) since Thu 2013-03-14 18:24:04 EDT; 2min 8s ago
        Main PID: 1233 (pacemaker_remot)
          CGroup: name=systemd:/system/pacemaker_remote.service
                  └─1233 /usr/sbin/pacemaker_remoted

Mar 14 18:24:04 guest1 systemd[1]: Starting Pacemaker Remote Service...
Mar 14 18:24:04 guest1 systemd[1]: Started Pacemaker Remote Service.
Mar 14 18:24:04 guest1 pacemaker_remoted[1233]: notice: lrmd_init_remote_tls_server: Starting a tls listener on port 3121.
----

=== Verify Host Connection to Guest ===

Before moving forward, it's worth verifying that the host can contact the guest
on port 3121. Here's a trick you can use. Connect using ssh from the host. The
connection will get destroyed, but how it is destroyed tells you whether it
worked or not.

-First add guest1 to the host machine's /etc/hosts file if you haven't already. This is required unless you have dns setup in a way where guest1's address can be discovered.
+First, add guest1 to the host machine's +/etc/hosts+ file if you haven't
+already. This is required unless you have DNS set up in a way where guest1's
+address can be discovered.

----
# cat << END >> /etc/hosts
192.168.122.10    guest1
END
----

If running the ssh command on one of the cluster nodes results in this
output before disconnecting, the connection works.
----
# ssh -p 3121 guest1
ssh_exchange_identification: read: Connection reset by peer
----

-If you see this, the connection is not working.
+If you see one of these, the connection is not working:
----
# ssh -p 3121 guest1
ssh: connect to host guest1 port 3121: No route to host
----
+----
+# ssh -p 3121 guest1
+ssh: connect to host guest1 port 3121: Connection refused
+----
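+
+A 'Connection refused' usually means pacemaker_remote is not running or not
+listening on port 3121. One quick check on the guest (`ss` is used here;
+`netstat -tnlp` shows similar information):
+----
+# ss -tnlp | grep 3121
+----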

-Once you can successfully connect to the guest from the host, shutdown the
-guest. Pacemaker will be managing the virtual machine from this point forward.
+Once you can successfully connect to the guest from the host, shut down the
+guest. Pacemaker will be managing the virtual machine from this point forward.

== Integrate Guest into Cluster ==

Now the fun part, integrating the virtual machine you've just created into the
cluster. It is incredibly simple.

=== Start the Cluster ===

On the host, start pacemaker.

----
# pcs cluster start
----

Wait for the host to become the DC. The output of `pcs status` should look
-similar to this after about a minute.
-
-----
-Last updated: Thu Mar 14 16:41:22 2013
-Last change: Thu Mar 14 16:41:08 2013 via crmd on example-host
-Stack: corosync
-Current DC: example-host (1795270848) - partition WITHOUT quorum
-Version: 1.1.10
-1 Nodes configured, unknown expected votes
-0 Resources configured.
-
-
-Online: [ example-host ]
-----
-
-Now enable the cluster to work without quorum or stonith. This is required
-just for the sake of getting this tutorial to work with a single cluster node.
-
-----
-# pcs property set stonith-enabled=false
-# pcs property set no-quorum-policy=ignore
-----
+as it did in <<_disable_stonith_and_quorum>>.

=== Integrate as Guest Node ===

-If you didn't already do this earlier in the verify host to guest connection section, add the KVM guest's ip to the host's /etc/hosts file so we can connect by hostname. The command below will do that if you used the same ip address I used earlier.
-
+If you didn't already do this earlier in <<_verify_host_connection_to_guest>>,
+add the KVM guest's IP address to the host's +/etc/hosts+ file so we can
+connect by hostname. For this example:
----
# cat << END >> /etc/hosts
192.168.122.10    guest1
END
----

We will use the *VirtualDomain* resource agent for the management of the
virtual machine. This agent requires the virtual machine's XML config to be
dumped to a file on disk. To do this, pick out the name of the virtual machine
you just created from the output of this list.

....
# virsh list --all
 Id    Name                           State
----------------------------------------------------
 -     guest1                         shut off
....

-In my case I named it guest1. Dump the xml to a file somewhere on the host using the following command.
+In this example, we named it guest1. Dump the guest's XML to a file on the
+host:
----
-# virsh dumpxml guest1 > /root/guest1.xml
+# virsh dumpxml guest1 > /etc/pacemaker/guest1.xml
----

Now just register the resource with pacemaker and you're set!
----
-# pcs resource create vm-guest1 VirtualDomain hypervisor="qemu:///system" config="/root/guest1.xml" meta remote-node=guest1
+# pcs resource create vm-guest1 VirtualDomain hypervisor="qemu:///system" \
+  config="/etc/pacemaker/guest1.xml" meta remote-node=guest1
----

+[NOTE]
+======
+This example puts the guest XML under /etc/pacemaker because the
+permissions and SELinux labeling should not need any changes.
+If you run into trouble with this or any step, try disabling SELinux
+with `setenforce 0`. If that fixes the problem, see the SELinux
+documentation for how to troubleshoot, in case you wish to re-enable SELinux.
+======
+
+[NOTE]
+======
+Pacemaker will automatically monitor pacemaker_remote connections for failure,
+so it is not necessary to create a recurring monitor on the VirtualDomain
+resource.
+======
+
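+If you want to double-check the resource definition, including the
+*remote-node* meta-attribute, you can display it (output not shown here):
+----
+# pcs resource show vm-guest1
+----
+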
Once the *vm-guest1* resource is started you will see *guest1* appear in the
`pcs status` output as a node. The final `pcs status` output should look
something like this.

----
-Last updated: Fri Mar 15 09:30:30 2013
-Last change: Thu Mar 14 17:21:35 2013 via cibadmin on example-host
+# pcs status
+Cluster name: mycluster
+Last updated: Fri Oct  9 18:00:45 2015          Last change: Fri Oct  9 17:53:44 2015 by root via crm_resource on example-host
Stack: corosync
-Current DC: example-host (1795270848) - partition WITHOUT quorum
-Version: 1.1.10
-2 Nodes configured, unknown expected votes
-2 Resources configured.
-
+Current DC: example-host (version 1.1.13-a14efad) - partition with quorum
+2 nodes and 2 resources configured

-Online: [ example-host guest1 ]
+Online: [ example-host ]
+GuestOnline: [ guest1@example-host ]

Full list of resources:

 vm-guest1     (ocf::heartbeat:VirtualDomain): Started example-host
+
+PCSD Status:
+  example-host: Online
+
+Daemon Status:
+  corosync: active/disabled
+  pacemaker: active/disabled
+  pcsd: active/enabled
----

=== Starting Resources on KVM Guest ===

The commands below demonstrate how resources can be executed on both the
guest node and the cluster node.

Create a few Dummy resources. Dummy resources are real resource agents used
just for testing purposes. They actually execute on the host they are assigned
to just like an apache server or database would, except their execution just
-means a file was created. When the resource is stopped, that the file it
-created is removed.
+means a file was created. When the resource is stopped, the file it created
+is removed.

----
# pcs resource create FAKE1 ocf:pacemaker:Dummy
# pcs resource create FAKE2 ocf:pacemaker:Dummy
# pcs resource create FAKE3 ocf:pacemaker:Dummy
# pcs resource create FAKE4 ocf:pacemaker:Dummy
# pcs resource create FAKE5 ocf:pacemaker:Dummy
----

Now check your `pcs status` output. In the resource section, you should see
something like the following, where some of the resources started on the
cluster node, and some started on the guest node.

----
Full list of resources:

 vm-guest1     (ocf::heartbeat:VirtualDomain): Started example-host
 FAKE1 (ocf::pacemaker:Dummy): Started guest1
 FAKE2 (ocf::pacemaker:Dummy): Started guest1
 FAKE3 (ocf::pacemaker:Dummy): Started example-host
 FAKE4 (ocf::pacemaker:Dummy): Started guest1
 FAKE5 (ocf::pacemaker:Dummy): Started example-host
----

The guest node, *guest1*, reacts just like any other node in the cluster. For
example, pick out a resource that is running on your cluster node.
-For my purposes, I am picking FAKE3 from the output above. We can force FAKE3
+For this example, we will pick FAKE3 from the output above. We can force FAKE3
to run on *guest1* in the exact same way we would any other node.

----
-# pcs constraint FAKE3 prefers guest1
+# pcs constraint location FAKE3 prefers guest1
----

Now, looking at the bottom of the `pcs status` output you'll see FAKE3 is on
*guest1*.

----
Full list of resources:

 vm-guest1     (ocf::heartbeat:VirtualDomain): Started example-host
 FAKE1 (ocf::pacemaker:Dummy): Started guest1
 FAKE2 (ocf::pacemaker:Dummy): Started guest1
 FAKE3 (ocf::pacemaker:Dummy): Started guest1
 FAKE4 (ocf::pacemaker:Dummy): Started example-host
 FAKE5 (ocf::pacemaker:Dummy): Started example-host
----
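+
+To let FAKE3 move freely again later, you can remove the constraint. The ID
+shown below is the one pcs typically generates for this constraint; run the
+list command first to confirm the actual ID on your system:
+----
+# pcs constraint list --full
+# pcs constraint remove location-FAKE3-guest1-INFINITY
+----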

=== Testing Recovery and Fencing ===

-Pacemaker's policy engine is smart enough to know fencing guest nodes
-associated with a virtual machine means shutting off/rebooting the virtual
-machine. No special configuration is necessary to make this happen. If you
-are interested in testing this functionality out, trying stopping the guest's
-pacemaker_remote daemon. This would be equivalent of abruptly terminating a
-cluster node's corosync membership without properly shutting it down.
+Pacemaker's policy engine is smart enough to know that fencing a guest node
+associated with a virtual machine means shutting off/rebooting the virtual
+machine. No special configuration is necessary to make this happen. If you
+are interested in testing this functionality out, try stopping the guest's
+pacemaker_remote daemon. This would be the equivalent of abruptly terminating
+a cluster node's corosync membership without properly shutting it down.

ssh into the guest and run this command.

----
# kill -9 `pidof pacemaker_remoted`
----

-After a few seconds or so, you'll see this in your `pcs status` output. The
-*guest1* node will be show as offline as it is being recovered.
-
+Within a few seconds, your `pcs status` output will show a monitor failure,
+and the *guest1* node will not be shown while it is being recovered.
----
-Last updated: Fri Mar 15 11:00:31 2013
-Last change: Fri Mar 15 09:54:16 2013 via cibadmin on example-host
+# pcs status
+Cluster name: mycluster
+Last updated: Fri Oct  9 18:08:35 2015          Last change: Fri Oct  9 18:07:00 2015 by root via cibadmin on example-host
Stack: corosync
-Current DC: example-host (1795270848) - partition WITHOUT quorum
-Version: 1.1.10
-2 Nodes configured, unknown expected votes
-7 Resources configured.
-
+Current DC: example-host (version 1.1.13-a14efad) - partition with quorum
+2 nodes and 7 resources configured

Online: [ example-host ]
-OFFLINE: [ guest1 ]

Full list of resources:

-    vm-guest1 (ocf::heartbeat:VirtualDomain): Started example-host
-    FAKE1 (ocf::pacemaker:Dummy): Stopped
-    FAKE2 (ocf::pacemaker:Dummy): Stopped
-    FAKE3 (ocf::pacemaker:Dummy): Stopped
+ vm-guest1     (ocf::heartbeat:VirtualDomain): Started example-host
+ FAKE1 (ocf::pacemaker:Dummy): Stopped
+ FAKE2 (ocf::pacemaker:Dummy): Stopped
+ FAKE3 (ocf::pacemaker:Dummy): Stopped
 FAKE4 (ocf::pacemaker:Dummy): Started example-host
 FAKE5 (ocf::pacemaker:Dummy): Started example-host

-Failed actions:
-    guest1_monitor_30000 (node=example-host, call=3, rc=7, status=complete): not running
+Failed Actions:
+* guest1_monitor_30000 on example-host 'unknown error' (1): call=8, status=Error, exitreason='none',
+    last-rc-change='Fri Oct  9 18:08:29 2015', queued=0ms, exec=0ms
+
+
+PCSD Status:
+  example-host: Online
+
+Daemon Status:
+  corosync: active/disabled
+  pacemaker: active/disabled
+  pcsd: active/enabled
----

+[NOTE]
+======
+A guest node involves two resources: the one you explicitly configured, which
+creates the guest, and an implicit one that Pacemaker creates for the
+pacemaker_remote connection, named after the value of the explicit resource's
+*remote-node* meta-attribute. When we killed pacemaker_remote, it was the
+implicit resource that failed, which is why the failed action starts with
+*guest1* rather than *vm-guest1*.
+======
+
Once recovery of the guest is complete, you'll see it automatically get
re-integrated into the cluster. The final `pcs status` output should look
something like this.
----
-Last updated: Fri Mar 15 11:03:17 2013
-Last change: Fri Mar 15 09:54:16 2013 via cibadmin on example-host
+Cluster name: mycluster
+Last updated: Fri Oct  9 18:18:30 2015          Last change: Fri Oct  9 18:07:00 2015 by root via cibadmin on example-host
Stack: corosync
-Current DC: example-host (1795270848) - partition WITHOUT quorum
-Version: 1.1.10
-2 Nodes configured, unknown expected votes
-7 Resources configured.
+Current DC: example-host (version 1.1.13-a14efad) - partition with quorum
+2 nodes and 7 resources configured

-
-Online: [ example-host guest1 ]
+Online: [ example-host ]
+GuestOnline: [ guest1@example-host ]

Full list of resources:

 vm-guest1     (ocf::heartbeat:VirtualDomain): Started example-host
-    FAKE1 (ocf::pacemaker:Dummy): Started guest1
-    FAKE2 (ocf::pacemaker:Dummy): Started guest1
-    FAKE3 (ocf::pacemaker:Dummy): Started guest1
+ FAKE1 (ocf::pacemaker:Dummy): Started guest1
+ FAKE2 (ocf::pacemaker:Dummy): Started guest1
+ FAKE3 (ocf::pacemaker:Dummy): Started guest1
 FAKE4 (ocf::pacemaker:Dummy): Started example-host
 FAKE5 (ocf::pacemaker:Dummy): Started example-host

-Failed actions:
-    guest1_monitor_30000 (node=example-host, call=3, rc=7, status=complete): not running
+Failed Actions:
+* guest1_monitor_30000 on example-host 'unknown error' (1): call=8, status=Error, exitreason='none',
+    last-rc-change='Fri Oct  9 18:08:29 2015', queued=0ms, exec=0ms
+
+
+PCSD Status:
+  example-host: Online
+
+Daemon Status:
+  corosync: active/disabled
+  pacemaker: active/disabled
+  pcsd: active/enabled
+----
+
+Normally, once you've investigated and addressed a failed action, you can
+clear the failure. However, Pacemaker does not yet support cleanup for the
+implicitly created connection resource while the explicit resource is active.
+If you want to clear the failed action from the status output, stop the guest
+resource before clearing it. For example:
+----
+# pcs resource disable vm-guest1 --wait
+# pcs resource cleanup guest1
+# pcs resource enable vm-guest1
+----

=== Accessing Cluster Tools from Guest Node ===

Besides allowing the cluster to manage resources on a guest node,
pacemaker_remote has one other trick. The pacemaker_remote daemon allows
nearly all the pacemaker tools (`crm_resource`, `crm_mon`, `crm_attribute`,
`crm_master`, etc.) to work on guest nodes natively.

Try it: Run `crm_mon` on the guest after pacemaker has integrated the guest
node into the cluster. These tools just work.
-This means resource agents such as master/slave resources which need access
-to tools like `crm_master` work seamlessly on the guest nodes.
+This means that resource agents for master/slave resources, which need access
+to tools like `crm_master`, work seamlessly on the guest nodes.

Higher-level command shells such as `pcs` may have partial support on guest
nodes, but it is recommended to run them from a cluster node.
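+
+As another quick test from the guest, you can set and read back a transient
+node attribute (the attribute name *my-test-attr* is arbitrary, used here only
+for illustration):
+----
+# crm_attribute --node guest1 --name my-test-attr --update 1 --lifetime reboot
+# crm_attribute --node guest1 --name my-test-attr --query --lifetime reboot
+----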