diff --git a/src/faq.html b/src/faq.html
index a5bb811..00b466b 100644
--- a/src/faq.html
+++ b/src/faq.html
@@ -1,132 +1,132 @@
---
layout: default
title: FAQ
---

Frequently Asked Questions

Q: Where can I get Pacemaker?

A: Pacemaker ships as part of most modern distributions, so you can usually just install it with your favorite package manager.
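
For example (a sketch; exact package names and managers vary by distribution and release):

[Fedora]   # yum install pacemaker pcs
[Ubuntu]   # apt-get install pacemaker
[openSUSE] # zypper install pacemaker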

If all else fails, you can try installing from source.

Q: Is there any documentation?

A: Yes. You can find the set relevant to your version in our documentation index.

Q: Where should I ask questions?

A: Often basic questions can be answered on IRC, but sending them to the mailing list is always a good idea so that everyone can benefit from the answer.

Q: Do I need shared storage?

A: No. We can help manage it if you have some, but Pacemaker itself has no need for shared storage.

Q: Which cluster filesystems does Pacemaker support?

A: Pacemaker supports the popular OCFS2 and GFS2 filesystems. As you'd expect, you can use them on top of real disks or network block devices like DRBD.

Q: What kind of applications can I manage with Pacemaker?

A: Pacemaker is application-agnostic, meaning anything that can be scripted can be made highly available, provided the script conforms to one of the supported standards: LSB, OCF, Systemd, or Upstart.
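
For illustration, here is a minimal sketch of what an OCF-style agent can look like. The "my_app" name and state file are placeholders; a real agent must also emit full metadata XML and perform real health checks:

#!/bin/sh
# Minimal OCF-style resource agent sketch ("my_app" is hypothetical).
STATE=/var/run/my_app.state
case "$1" in
    start)     touch "$STATE" ;;                 # a real agent starts the service here
    stop)      rm -f "$STATE" ;;                 # stop must succeed even if already stopped
    monitor)   [ -f "$STATE" ] || exit 7 ;;      # 7 = OCF_NOT_RUNNING
    meta-data) echo '<resource-agent name="my_app"/>' ;;  # real agents print full metadata
    *)         exit 3 ;;                         # 3 = OCF_ERR_UNIMPLEMENTED
esac
exit 0                                           # 0 = OCF_SUCCESS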

Q: Can I use Pacemaker with Heartbeat?

A: Only Pacemaker versions less than 2.0.0. See this documentation for details.

Q: Can I use Pacemaker with CMAN?

A: Only Pacemaker versions greater than or equal to 1.1.5 and less than 2.0.0. See the documentation for details.

Q: Can I use Pacemaker with Corosync 1.x?

A: Only Pacemaker versions less than 2.0.0. You will need to configure Corosync to load Pacemaker's custom plugin. See the documentation for details.

Q: Can I use Pacemaker with Corosync 2.x or greater?

A: Yes. This is the only option supported by Pacemaker 2.0.0 and greater. See the documentation for details.

Q: Do I need a fencing device?

A: Yes. Fencing is the only 100% reliable way to ensure the integrity of your data and that applications are only active on one host. Although Pacemaker is technically able to function without fencing, there are good reasons SUSE and Red Hat will not support such a configuration.
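
As an illustration, configuring an IPMI-based fence device with pcs might look like this (the device name, address, and credentials are placeholders; check your fence agent's metadata for the parameters it actually accepts):

[ONE] # pcs stonith create my_ipmi fence_ipmilan ipaddr=10.0.0.9 login=admin passwd=secret pcmk_host_list="node1"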

Q: Do I need to know XML to configure Pacemaker?

A: No. Although Pacemaker uses XML as its native configuration format, there are two CLIs and at least four GUIs that present the configuration in a human-friendly format.
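
For example, the same hypothetical IP address resource can live in the CIB as raw XML or be created with a single CLI command (the name and address here are made up):

<primitive id="my_ip" class="ocf" provider="heartbeat" type="IPaddr2">
  <instance_attributes id="my_ip-attrs">
    <nvpair id="my_ip-ip" name="ip" value="192.0.2.10"/>
  </instance_attributes>
</primitive>

[ONE] # pcs resource create my_ip ocf:heartbeat:IPaddr2 ip=192.0.2.10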

Q: How do I synchronize the cluster configuration?

A: Any changes to Pacemaker's configuration are automatically replicated to other machines. The configuration is also versioned, so any offline machines will be updated when they return.
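
You can see the version counters yourself: they live on the cib element at the top of the configuration (the output below is illustrative; your attribute values will differ):

[ONE] # cibadmin --query | head -n 1
<cib admin_epoch="0" epoch="42" num_updates="3" ...>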

Q: Should I choose pcs or crmsh?

A: Arguably the best advice is to use whichever one comes with your distro. That is the one that will be tailored to that environment, receive regular bugfixes, and feature in the documentation.

Of course, for years people have been side-loading all of Pacemaker onto enterprise distros that didn't ship it, so doing the same for just a configuration tool should be easy if your favorite distro does not ship your favorite tool.

Q: What if my question isn't here?

A: See the getting help section and let us know!

diff --git a/src/quickstart-redhat-6.html b/src/quickstart-redhat-6.html
index ae2934a..d62f038 100644
--- a/src/quickstart-redhat-6.html
+++ b/src/quickstart-redhat-6.html
@@ -1,199 +1,199 @@
---
layout: pacemaker
title: RHEL 6 Quickstart
---
{% include quickstart-common.html %}

RHEL 6.4 onwards

Install

Pacemaker ships as part of the Red Hat High Availability Add-on. The easiest way to try it out on RHEL is to install it from the Scientific Linux or CentOS repositories.

If you are already running CentOS or Scientific Linux, you can skip this step. Otherwise, to teach the machine where to find the CentOS packages, run:

[ALL] # cat <<'EOF' > /etc/yum.repos.d/centos.repo
[centos-6-base]
name=CentOS-$releasever - Base
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
#baseurl=http://mirror.centos.org/centos/$releasever/os/$basearch/
enabled=1
EOF

Next we use yum to install pacemaker and some other packages we will need:

[ALL] # yum install pacemaker cman pcs ccs resource-agents

Configure Cluster Membership and Messaging

The supported stack on RHEL6 is based on CMAN, so that's what Pacemaker uses too.

We now create a CMAN cluster and populate it with some nodes. Note that the name cannot exceed 15 characters (we'll use 'pacemaker1').

[ONE] # ccs -f /etc/cluster/cluster.conf --createcluster pacemaker1
[ONE] # ccs -f /etc/cluster/cluster.conf --addnode node1
[ONE] # ccs -f /etc/cluster/cluster.conf --addnode node2

Next we need to teach CMAN how to send its fencing requests to Pacemaker. We do this regardless of whether or not fencing is enabled within Pacemaker.

[ONE] # ccs -f /etc/cluster/cluster.conf --addfencedev pcmk agent=fence_pcmk
[ONE] # ccs -f /etc/cluster/cluster.conf --addmethod pcmk-redirect node1
[ONE] # ccs -f /etc/cluster/cluster.conf --addmethod pcmk-redirect node2
[ONE] # ccs -f /etc/cluster/cluster.conf --addfenceinst pcmk node1 pcmk-redirect port=node1
[ONE] # ccs -f /etc/cluster/cluster.conf --addfenceinst pcmk node2 pcmk-redirect port=node2

Now copy /etc/cluster/cluster.conf to all the other nodes that will be part of the cluster.
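
One simple way to do that, assuming root SSH access between the nodes:

[ONE] # scp /etc/cluster/cluster.conf node2:/etc/cluster/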

Start the Cluster

CMAN was originally written for rgmanager and assumes the cluster should not start until the node has quorum, so before we try to start the cluster, we need to disable this behavior:

[ALL] # echo "CMAN_QUORUM_TIMEOUT=0" >> /etc/sysconfig/cman

Now, on each machine, run:

[ALL] # service cman start
[ALL] # service pacemaker start

A note for users of prior RHEL versions

The original cluster shell (crmsh) is no longer available on RHEL. To help people make the transition, there is a quick reference guide for those wanting to know the pcs equivalents of various crmsh commands.
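
A few illustrative pairs (see the reference guide for the full list; "my_rsc" is a placeholder resource name):

crm status                ->  pcs status
crm configure show        ->  pcs config
crm resource stop my_rsc  ->  pcs resource disable my_rsc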

Set Cluster Options

With so many devices and possible topologies, it is nearly impossible to cover fencing in a document like this. For now we will disable it.

[ONE] # pcs property set stonith-enabled=false

One of the most common ways to deploy Pacemaker is in a 2-node configuration. However, quorum as a concept makes no sense in this scenario (because you only have it when more than half the nodes are available), so we'll disable it too.

[ONE] # pcs property set no-quorum-policy=ignore

For demonstration purposes, we will force the cluster to move services after a single failure:

[ONE] # pcs resource defaults migration-threshold=1

Add a Resource

Let's add a cluster service. To make things easy, we'll choose one that doesn't require any configuration and works everywhere. Here's the command:

[ONE] # pcs resource create my_first_svc Dummy op monitor interval=120s

"my_first_svc" is the name the service will be known as.

"ocf:pacemaker:Dummy" tells Pacemaker which script to use (Dummy - an agent that's useful as a template and for guides like this one), which namespace it is in (pacemaker), and what standard it conforms to (OCF).

"op monitor interval=120s" tells Pacemaker to check the health of this service every 2 minutes by calling the agent's monitor action.

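If 2 minutes turns out to be too slow for your taste, the interval can be changed later; a sketch (exact syntax may vary between pcs versions):

[ONE] # pcs resource update my_first_svc op monitor interval=60s
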
You should now be able to see the service running using:

[ONE] # pcs status

or

[ONE] # crm_mon -1

Simulate a Service Failure

We can simulate an error by telling the service to stop directly (without telling the cluster):

[ONE] # crm_resource --resource my_first_svc --force-stop

If you now run crm_mon in interactive mode (the default), you should see (within the monitor interval of 2 minutes) the cluster notice that my_first_svc failed and move it to another node.
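
Once you have watched the failover, you can tell the cluster to forget the failure (the resource name is the one we created above):

[ONE] # pcs resource cleanup my_first_svc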

Next Steps

diff --git a/src/quickstart-redhat.html b/src/quickstart-redhat.html
index b007c23..e629b56 100644
--- a/src/quickstart-redhat.html
+++ b/src/quickstart-redhat.html
@@ -1,168 +1,168 @@
---
layout: pacemaker
title: RHEL 7 Quickstart
---
{% include quickstart-common.html %}

RHEL 7

Install

Pacemaker ships as part of the Red Hat High Availability Add-on. The easiest way to try it out on RHEL is to install it from the Scientific Linux or CentOS repositories.

If you are already running CentOS or Scientific Linux, you can skip this step. Otherwise, to teach the machine where to find the CentOS packages, run:

[ALL] # cat <<'EOF' > /etc/yum.repos.d/centos.repo
[centos-7-base]
name=CentOS-$releasever - Base
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
#baseurl=http://mirror.centos.org/centos/$releasever/os/$basearch/
enabled=1
EOF

Next we use yum to install pacemaker and some other packages we will need:

[ALL] # yum install pacemaker pcs resource-agents

Create the Cluster

The supported stack on RHEL7 is based on Corosync 2, so that's what Pacemaker uses too.

First, make sure that the pcs daemon is running on every node:

[ALL] # systemctl start pcsd.service
[ALL] # systemctl enable pcsd.service

Then we set up the authentication needed for pcs.

[ALL] # echo CHANGEME | passwd --stdin hacluster
[ONE] # pcs cluster auth node1 node2 -u hacluster -p CHANGEME --force

We now create a cluster and populate it with some nodes. Note that the name cannot exceed 15 characters (we'll use 'pacemaker1').

[ONE] # pcs cluster setup --force --name pacemaker1 node1 node2

Start the Cluster

[ONE] # pcs cluster start --all

Set Cluster Options

With so many devices and possible topologies, it is nearly impossible to cover fencing in a document like this. For now we will disable it.

[ONE] # pcs property set stonith-enabled=false

One of the most common ways to deploy Pacemaker is in a 2-node configuration. However, quorum as a concept makes no sense in this scenario (because you only have it when more than half the nodes are available), so we'll disable it too.

[ONE] # pcs property set no-quorum-policy=ignore

For demonstration purposes, we will force the cluster to move services after a single failure:

[ONE] # pcs resource defaults migration-threshold=1

Add a Resource

Let's add a cluster service. To make things easy, we'll choose one that doesn't require any configuration and works everywhere. Here's the command:

[ONE] # pcs resource create my_first_svc Dummy op monitor interval=120s

"my_first_svc" is the name the service will be known as.

"ocf:pacemaker:Dummy" tells Pacemaker which script to use (Dummy - an agent that's useful as a template and for guides like this one), which namespace it is in (pacemaker), and what standard it conforms to (OCF).

"op monitor interval=120s" tells Pacemaker to check the health of this service every 2 minutes by calling the agent's monitor action.

You should now be able to see the service running using:

[ONE] # pcs status

or

[ONE] # crm_mon -1

Simulate a Service Failure

We can simulate an error by telling the service to stop directly (without telling the cluster):

[ONE] # crm_resource --resource my_first_svc --force-stop

If you now run crm_mon in interactive mode (the default), you should see (within the monitor interval of 2 minutes) the cluster notice that my_first_svc failed and move it to another node.
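
Because we set migration-threshold=1, the failure counts against the resource. You can inspect and then reset that count (commands assume the resource name used above):

[ONE] # pcs resource failcount show my_first_svc
[ONE] # pcs resource cleanup my_first_svc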

Next Steps