== Organizational ==
=== Who can edit this wiki? ===
Anyone in the Developers group can edit this wiki. If you are not in the group but would like editing privileges, please email webmaster@clusterlabs.org.
=== Why was Pacemaker started? ===
Pacemaker grew out of the Heartbeat project. See the [[Pacemaker#Project_History|project history]] for more details.
=== How did Pacemaker get its name? ===
First of all, it's not called the CRM (for Cluster Resource Manager) because of the [[https://en.wikipedia.org/wiki/CRM|abundance of terms]] that are commonly abbreviated to those three letters.
The Pacemaker name came from [[http://khamsouk.souvanlasy.com/|Kham]], a good friend of Pacemaker author Andrew Beekhof's, and was originally used by a Java GUI Beekhof was prototyping in early 2007. The GUI was abandoned, and when it came time to choose a name for this project, Lars suggested it was an even better fit for an independent CRM.
The idea stems from the analogy between the role of this software and that of the little device that keeps the human heart pumping. Pacemaker monitors the cluster and intervenes when necessary to ensure the smooth operation of the services it provides.
=== What is Pacemaker's relationship with Corosync? ===
Pacemaker keeps your applications running when they or the machines they're running on fail. However, it can't do this without connectivity to the other machines in the cluster -- a significant problem in its own right.
[[http://www.corosync.org|Corosync]] provides a mechanism to reliably send messages between nodes, notifications when nodes join and leave the cluster, and a list of active nodes that is consistent throughout the cluster.
=== Is there any documentation? ===
Yes, see the [[https://www.ClusterLabs.org/pacemaker/doc|Pacemaker documentation set]].
=== Where should I ask questions? ===
Basic questions can often be answered on the [[ClusterLabs IRC channel]], but sending them to the relevant [[Mailing_lists|mailing list]] is always a good idea so that everyone can benefit from the answer.
=== Do I need shared storage? ===
No. We can help manage it if you have some, but Pacemaker itself has no need for shared storage.
=== Which cluster filesystems does Pacemaker support? ===
Pacemaker supports the popular [[http://oss.oracle.com/projects/ocfs2/|OCFS2]] and [[http://www.redhat.com/gfs/|GFS2]] filesystems. As you'd expect, you can use them on top of real disks or network block devices like [[http://www.linbit.com/en/products-services/drbd/|DRBD]].
=== What kind of applications can I manage with Pacemaker? ===
Pacemaker is application-agnostic, meaning anything that can be scripted can be made highly available - provided the script conforms to one of the supported standards: [[https://www.ClusterLabs.org/pacemaker/doc/en-US/Pacemaker/1.1-crmsh/html/Pacemaker_Explained/_linux_standard_base.html|LSB]], [[https://www.ClusterLabs.org/pacemaker/doc/en-US/Pacemaker/1.1-crmsh/html/Pacemaker_Explained/s-resource-supported.html#_open_cluster_framework|OCF]], [[https://www.ClusterLabs.org/pacemaker/doc/en-US/Pacemaker/1.1-crmsh/html/Pacemaker_Explained/_systemd.html|Systemd]], or [[https://www.ClusterLabs.org/pacemaker/doc/en-US/Pacemaker/1.1-crmsh/html/Pacemaker_Explained/_upstart.html|Upstart]].
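As a rough illustration of the OCF contract, here is a minimal, hypothetical agent sketch in Python. Real agents are usually shell scripts that source `$OCF_ROOT/lib/heartbeat/ocf-shellfuncs` and must also implement `meta-data` and `validate-all`; the state file here merely simulates a service:

```python
import os
import sys
import tempfile

# OCF exit codes (subset of the standard set)
OCF_SUCCESS = 0
OCF_ERR_UNIMPLEMENTED = 3
OCF_NOT_RUNNING = 7

# Hypothetical state file standing in for a real service's running state
STATE = os.path.join(tempfile.gettempdir(), "demo-agent.state")

def start():
    open(STATE, "w").close()          # pretend to start the service
    return OCF_SUCCESS

def stop():
    if os.path.exists(STATE):
        os.remove(STATE)              # pretend to stop the service
    return OCF_SUCCESS

def monitor():
    # Pacemaker's probes call this action; OCF_NOT_RUNNING (7) means
    # "cleanly stopped", which is what a probe expects on a fresh node.
    return OCF_SUCCESS if os.path.exists(STATE) else OCF_NOT_RUNNING

ACTIONS = {"start": start, "stop": stop, "monitor": monitor}

if __name__ == "__main__" and len(sys.argv) > 1:
    sys.exit(ACTIONS.get(sys.argv[1], lambda: OCF_ERR_UNIMPLEMENTED)())
```

The key point is the exit-code contract: Pacemaker decides what to do next based entirely on these return values, which is why a correct monitor action matters so much (see the question on multiply active resources below).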
=== Do I need a fencing device? ===
Yes. Fencing is the only 100% reliable way to ensure the integrity of your data and that applications are active on only one host. Although Pacemaker is technically able to function without fencing, there are good reasons SUSE and Red Hat will not support such a configuration.
=== Do I need to know XML to configure Pacemaker? ===
No. Although Pacemaker uses XML as its native configuration format, at least two CLIs and four GUIs exist that present the configuration in a human-friendly format.
=== How do I synchronize the cluster configuration? ===
Any changes to Pacemaker's configuration are automatically replicated to other machines. The configuration is also versioned, so any offline machines will be updated when they return.
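The versioning mentioned above lives in attributes on the CIB's root element. A sketch of what the top of a CIB looks like (attribute values are illustrative):

```
<cib admin_epoch="0" epoch="42" num_updates="3"
     validate-with="pacemaker-3.0" crm_feature_set="3.0.14">
  <configuration>
    <!-- cluster options, nodes, resources, constraints ... -->
  </configuration>
  <status/>
</cib>
```

Versions are compared in order: `admin_epoch`, then `epoch` (incremented on configuration changes), then `num_updates` (incremented on status changes); the copy with the highest version wins when a node rejoins.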
=== Should I choose pcs or crm shell? ===
Arguably the best advice is to use whichever one comes with your distro: that is the one that will be tailored to that environment, receive regular bug fixes, and be featured in the documentation.
Of course, for years people have been side-loading all of Pacemaker onto enterprise distros that didn't ship it, so doing the same for just a configuration tool should be easy if your favorite distro does not ship your favorite tool.
=== What if my question isn't here? ===
See our [[https://www.ClusterLabs.org/help.html|help]] page and let us know!
=== What versions of Pacemaker are supported? ===
When seeking assistance, please try to ensure you have one of the versions supported directly by the project.
Please refer to the [[Releases]] page for further details including the schedule of planned releases.
{{:ReleaseMatrix}}
== Technical ==
=== How do I install Pacemaker? ===
Installation from source and from pre-built packages is described on the [[Install]] page.
=== Can I use Pacemaker with Corosync 2.x and later? ===
Yes. This is the only option supported in Pacemaker 2.0.0 and later. See the [[http://www.clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html/Clusters_from_Scratch/_cluster_software_installation.html|documentation]] for details.
=== Can I use Pacemaker with Heartbeat? ===
Only with Pacemaker versions less than 2.0.0. See [[http://www.linux-ha.org/wiki/Configuration|Linux-HA documentation]] for details.
=== Can I use Pacemaker with CMAN? ===
Only with Pacemaker versions greater than or equal to 1.1.5 and less than 2.0.0. See the [[https://www.clusterlabs.org/pacemaker/doc/en-US/Pacemaker/1.1-plugin/html/Clusters_from_Scratch/ch08s02.html|documentation]] for details.
=== Can I use Pacemaker with Corosync 1.x? ===
Only with Pacemaker versions less than 2.0.0. You will need to configure Corosync to load Pacemaker's custom plugin. See the [[http://www.clusterlabs.org/doc/en-US/Pacemaker/1.1-plugin/html/Clusters_from_Scratch/s-configure-corosync.html|documentation]] for details.
=== Can I mix different cluster layers in the same cluster? ===
No.
=== Where can I get the source code? ===
* The source code can be browsed at [[https://github.com/ClusterLabs/pacemaker/|GitHub]] or downloaded as a [[https://github.com/ClusterLabs/pacemaker/tarball/main|tarball]].
* Alternatively, you can get a full copy of the Git repository by executing `git clone https://github.com/ClusterLabs/pacemaker.git` (GitHub no longer serves the unauthenticated `git://` protocol)
=== Where can I get pre-built packages? ===
Most users should be able to install Pacemaker directly from their distribution.
Pacemaker currently ships with [[https://fedoraproject.org/|Fedora]] (since 12), [[https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux|Red Hat Enterprise Linux]] (since 6.0), [[https://www.opensuse.org|openSUSE]] (since 11.0), [[https://www.debian.org/|Debian]] (since "Squeeze"), [[https://www.ubuntu.com/|Ubuntu LTS]] (since 10.04 "Lucid Lynx") and as a key component of the [[https://www.novell.com/products/highavailability/|High Availability Extension]] for [[http://www.novell.com/linux|SUSE Linux Enterprise Server 11]] (//available free of charge to existing SLES10 customers//).
Users of other distributions should refer to our [[Install]] page.
=== How do I test my Cluster? ===
Pacemaker comes with a Cluster Test Suite (CTS for short), which is an integral part of our release testing.
Traditionally CTS has been hard to set up and use; however, a new tool, [[https://github.com/ClusterLabs/pacemaker/tree/main/cts/lab/cluster_test|cluster_test]], has been written to simplify the process. Please give it a try and send feedback via the mailing list.
=== What are multiply active resources? ===
Pacemaker will try to determine which resources are active on a node when it joins the cluster. To do this, it sends what we call a probe, using the resource agent's monitor operation. There are two common reasons for seeing a log message about a resource being multiply active:
* Your resource really is active on more than one node
** Ensure the service is //not// enabled to start at system boot
** Ensure administrators do not start the service manually anywhere
** Did Pacemaker suffer an internal failure? If so, please check the [[Help:Contents]] page and report it
* Your resource agent doesn't implement the monitor operation correctly
** Make sure your resource agent conforms to the OCF standard by using the ocf-tester script
You may also want to read the [[https://clusterlabs.org/doc/en-US/Pacemaker/1.1/html/Pacemaker_Explained/s-resource-options.html|documentation]] for the **multiple-active** option which controls what Pacemaker does when it encounters this condition.
=== I killed a node but the cluster didn't recover ===
One of the most common reasons for this is the way quorum is calculated for a 2-node cluster: Corosync 2 and later do not pretend that 2-node clusters always have quorum (although the votequorum `two_node` option can be used to select the desired behavior).
In order to have quorum, //more// than half of the total number of cluster nodes need to be online. Clearly this is not the case when a node failure occurs in a 2-node cluster.
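The majority rule above is simple arithmetic; a quick sketch (illustrative only, not Pacemaker code):

```python
def has_quorum(online: int, total: int) -> bool:
    """Standard majority quorum: strictly more than half of all
    configured cluster nodes must be online."""
    return online > total // 2
```

In a 2-node cluster, `has_quorum(1, 2)` is false, which is why the surviving node stops providing services by default.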
If you want to allow the remaining node to provide all the cluster services, you need to set the `no-quorum-policy` to `ignore`.
```
crm configure property no-quorum-policy=ignore
```
Just be sure to set up fencing to ensure data integrity.
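Alternatively, with Corosync 2 or later, the `two_node` votequorum option handles this case at the membership layer (a fragment of `corosync.conf`; adapt to your environment):

```
quorum {
    provider: corosync_votequorum
    two_node: 1
}
```

Note that `two_node` enables `wait_for_all` by default, so both nodes must be seen at least once at startup before either will provide services.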
== Features ==
=== Colocation Sets ===
The //sequential// option does not refer to ordering. Instead it tells Pacemaker to create a colocation chain between the members of the set. For example:
```
colocation myset inf: app1 app2 app3 app4
```
is the equivalent of
```
colocation myset-1 inf: app2 app1
colocation myset-2 inf: app3 app2
colocation myset-3 inf: app4 app3
```
(app4 -> app3 -> app2 -> app1)
Putting them in brackets sets //sequential=false// and removes the internal constraints. So:
```
colocation myset inf: app1 ( app2 app3 app4 )
```
is actually the equivalent of
```
colocation myset-1 inf: app2 app1
colocation myset-2 inf: app3 app1
colocation myset-3 inf: app4 app1
```
(app2 -> app1, app3 -> app1, app4 -> app1)
The difference has implications when there is a failure. With //sequential// turned **on**, a failure in //app2// results in //app3// and //app4// also being restarted. However, with //sequential// turned **off**, a failure in //app2// does not affect //app3// or //app4//. In both cases, a failure in //app1// results in all resources being restarted.
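To make those failure semantics concrete, the two constraint sets can be modeled as "X depends on Y" edges, where a failure restarts the failed resource plus everything that transitively depends on it (an illustrative sketch, not Pacemaker's actual scheduler):

```python
def restarted_by_failure(failed, depends_on):
    """Return the set of resources restarted when `failed` fails: the
    failed resource plus everything transitively depending on it.
    depends_on maps X -> Y, meaning "X is colocated with (depends on) Y"."""
    restarted = {failed}
    changed = True
    while changed:
        changed = False
        for res, dep in depends_on.items():
            if dep in restarted and res not in restarted:
                restarted.add(res)
                changed = True
    return restarted

# sequential=true:  app4 -> app3 -> app2 -> app1
chain = {"app2": "app1", "app3": "app2", "app4": "app3"}
# sequential=false: app2 -> app1, app3 -> app1, app4 -> app1
star = {"app2": "app1", "app3": "app1", "app4": "app1"}
```

A failure of //app2// restarts app2, app3 and app4 under the chain but only app2 under the star, while a failure of //app1// restarts all four resources in both cases.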