diff --git a/src/faq.html b/src/faq.html
index 74b140c..a5bb811 100644
--- a/src/faq.html
+++ b/src/faq.html
@@ -1,133 +1,132 @@
---
layout: default
title: FAQ
---

Frequently Asked Questions

Q: Where can I get Pacemaker?

A: Pacemaker ships as part of most modern distributions, so you can usually just install it with your favorite package manager.
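For example (illustrative only; exact package names, and which extra tools ship alongside Pacemaker, vary by distribution and release):

# dnf install pacemaker pcs          (RHEL, CentOS, Fedora)
# zypper install pacemaker crmsh     (SUSE, openSUSE)
# apt install pacemaker crmsh        (Debian, Ubuntu)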

If all else fails, you can try installing from source.

Q: Is there any documentation?

A: Yes. You can find the set relevant to your version in our documentation index.

Q: Where should I ask questions?

A: Often basic questions can be answered on irc, but sending them to the mailing list is always a good idea so that everyone can benefit from the answer.

Q: Do I need shared storage?

A: No. We can help manage it if you have some, but Pacemaker itself has no need for shared storage.

Q: Which cluster filesystems does Pacemaker support?

A: Pacemaker supports the popular OCFS2 and GFS2 filesystems. As you'd expect, you can use them on top of real disks or network block devices like DRBD.

Q: What kind of applications can I manage with Pacemaker?

A: Pacemaker is application-agnostic, meaning anything that can be scripted can be made highly available, provided the script conforms to one of the supported standards: LSB, OCF, Systemd, or Upstart.

Q: Can I use Pacemaker with Heartbeat?

A: Only Pacemaker versions less than 2.0.0. See this documentation for details.

Q: Can I use Pacemaker with CMAN?

A: Only Pacemaker versions greater than or equal to 1.1.5 and less than 2.0.0. See the documentation for details.

Q: Can I use Pacemaker with Corosync 1.x?

A: Only Pacemaker versions less than 2.0.0. You will need to configure Corosync to load Pacemaker's custom plugin. See the documentation for details.

Q: Can I use Pacemaker with Corosync 2.x or greater?

A: Yes. This is the only option supported by Pacemaker 2.0.0 and greater. See the documentation for details.

Q: Do I need a fencing device?

A: Yes. Fencing is the only 100% reliable way to ensure the integrity of your data and that applications are only active on one host. Although Pacemaker is technically able to function without fencing, there are good reasons why SUSE and Red Hat will not support such a configuration.

Q: Do I need to know XML to configure Pacemaker?

A: No. Although Pacemaker uses XML as its native configuration format, there are two CLIs and at least four GUIs that present the configuration in a human-friendly format.
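For example, on a machine with crmsh installed you can view the configuration in crmsh's compact syntax, and only dump the underlying XML if you ever want to see it (which CLI you have depends on your distribution):

# crm configure show     (human-friendly crmsh syntax)
# cibadmin --query       (the same configuration as raw XML)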

Q: How do I synchronize the cluster configuration?

A: Any changes to Pacemaker's configuration are automatically replicated to other machines. The configuration is also versioned, so any offline machines will be updated when they return.
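If you are curious, you can see the version counters on any node; the root <cib> element carries epoch and num_updates attributes that increase with every change (a quick check, not something you normally need to do):

# cibadmin --query | head -n 1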

Q: Should I choose pcs or crmsh?

A: Arguably the best advice is to use whichever one comes with your distro. That is the one that will be tailored to that environment, receive regular bug fixes, and be featured in the distro's documentation.

Of course, for years people have been side-loading all of Pacemaker onto enterprise distros that didn't ship it, so doing the same for just a configuration tool should be easy if your favorite distro does not ship your favorite tool.
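To give a sense of how similar they are, here is the same change expressed in each tool (illustrative; each command assumes the respective tool is installed):

# pcs property set stonith-enabled=false
# crm configure property stonith-enabled=false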

Q: What if my question isn't here?

A: See the getting help section and let us know!

diff --git a/src/quickstart-suse-11.html b/src/quickstart-suse-11.html
index e8a07b0..a6d9ba1 100644
--- a/src/quickstart-suse-11.html
+++ b/src/quickstart-suse-11.html
@@ -1,129 +1,129 @@
---
layout: pacemaker
title: SLES 11 Quickstart
---
{% include quickstart-common.html %}

SLES 11

Install

Pacemaker ships as part of the SUSE High Availability Extension. To install, follow the provided documentation. It is also available in openSUSE Leap and openSUSE Tumbleweed (for openSUSE, see the SLES 12 Quickstart guide).

Create the Cluster

The supported stack on SLES11 is based on Corosync/OpenAIS.

To get started, install the cluster stack on all nodes.

[ALL] # zypper install ha-cluster-bootstrap

First we initialize the cluster on the first machine (node1):

[ONE] # ha-cluster-init

Now we can join the cluster from the second machine (node2):

[TWO] # ha-cluster-join -c node1

These two steps create and start a basic cluster together with the HAWK web interface. If given additional arguments, ha-cluster-init can also configure STONITH and OCFS2 as part of initial configuration.

For more details on ha-cluster-init, see the output of ha-cluster-init --help.

Set Cluster Options

For demonstration purposes, we will force the cluster to move services after a single failure:

[ONE] # crm configure property migration-threshold=1

Add a Resource

Let's add a cluster service. To keep things simple, we'll choose one that doesn't require any configuration and works everywhere. Here's the command:

[ONE] # crm configure primitive my_first_svc ocf:pacemaker:Dummy op monitor interval=120s

"my_first_svc" is the name the service will be known as.

"ocf:pacemaker:Dummy" tells Pacemaker which script to use (Dummy - an agent that's useful as a template and for guides like this one), which namespace it is in (pacemaker) and what standard it conforms to - (OCF). + (OCF).

"op monitor interval=120s" tells Pacemaker to check the health of this service every 2 minutes by calling the agent's monitor action.

You should now be able to see the service running using:

[ONE] # crm status

Simulate a Service Failure

We can simulate an error by telling the service to stop directly (without telling the cluster):

[ONE] # crm_resource --resource my_first_svc --force-stop

If you now run crm_mon in interactive mode (the default), you should see, within the 2-minute monitor interval, the cluster notice that my_first_svc failed and move it to another node.
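For example (press Ctrl-C to exit the interactive view):

[ONE] # crm_mon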

You can also watch the transition from the HAWK dashboard, by going to https://node1:7630.

Next Steps

diff --git a/src/quickstart-suse.html b/src/quickstart-suse.html
index 764f731..b03fde2 100644
--- a/src/quickstart-suse.html
+++ b/src/quickstart-suse.html
@@ -1,131 +1,131 @@
---
layout: pacemaker
title: SLES 12 Quickstart
---
{% include quickstart-common.html %}

SLES 12

Install

Pacemaker ships as part of the SUSE High Availability Extension. To install, follow the provided documentation. It is also available in openSUSE Leap and openSUSE Tumbleweed.

Create the Cluster

The supported stack on SLES12 is based on Corosync 2.x.

To get started, install the cluster stack on all nodes.

[ALL] # zypper install ha-cluster-bootstrap

First we initialize the cluster on the first machine (node1):

[ONE] # ha-cluster-init

Now we can join the cluster from the second machine (node2):

[TWO] # ha-cluster-join -c node1

These two steps create and start a basic cluster together with the HAWK web interface. If given additional arguments, ha-cluster-init can also configure STONITH, OCFS2 and an administration IP address as part of initial configuration. It is also possible to choose whether to use multicast or unicast for corosync communication.

For more details on ha-cluster-init, see the output of ha-cluster-init --help.

Set Cluster Options

For demonstration purposes, we will force the cluster to move services after a single failure:

[ONE] # crm configure property migration-threshold=1

Add a Resource

Let's add a cluster service. To keep things simple, we'll choose one that doesn't require any configuration and works everywhere. Here's the command:

[ONE] # crm configure primitive my_first_svc Dummy op monitor interval=120s

"my_first_svc" is the name the service will be known as.

"Dummy" tells Pacemaker which script to use (Dummy - an agent that's useful as a template and for guides like this one), which namespace it is in (pacemaker) and what standard it conforms to - (OCF). + (OCF).

"op monitor interval=120s" tells Pacemaker to check the health of this service every 2 minutes by calling the agent's monitor action.

You should now be able to see the service running using:

[ONE] # crm status

Simulate a Service Failure

We can simulate an error by telling the service to stop directly (without telling the cluster):

[ONE] # crm_resource --resource my_first_svc --force-stop

If you now run crm_mon in interactive mode (the default), you should see, within the 2-minute monitor interval, the cluster notice that my_first_svc failed and move it to another node.
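For example (press Ctrl-C to exit the interactive view):

[ONE] # crm_mon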

You can also watch the transition from the HAWK dashboard, by going to https://node1:7630.

Next Steps

diff --git a/src/quickstart-ubuntu.html b/src/quickstart-ubuntu.html
index 729d50e..d76b298 100644
--- a/src/quickstart-ubuntu.html
+++ b/src/quickstart-ubuntu.html
@@ -1,153 +1,153 @@
---
layout: pacemaker
title: Ubuntu Quickstart
---
{% include quickstart-common.html %}

Ubuntu

Ubuntu appears to have switched to Corosync 2 for its LTS releases.

We use aptitude to install pacemaker and the other packages we will need:

[ALL] # aptitude install pacemaker corosync fence-agents

Configure Cluster Membership and Messaging

Since the pcs tool from RHEL does not exist on Ubuntu, we will create the corosync configuration file on both machines manually:

[ALL] # cat <<EOF > /etc/corosync/corosync.conf
totem {
    version: 2
    secauth: off
    cluster_name: pacemaker1
    transport: udpu
}

nodelist {
    node {
        ring0_addr: node1
        nodeid: 101
    }

    node {
        ring0_addr: node2
        nodeid: 102
    }
}

quorum {
    provider: corosync_votequorum
    two_node: 1
    wait_for_all: 1
    last_man_standing: 1
    auto_tie_breaker: 0
}
EOF

Start the Cluster

On each machine, run:

[ALL] # service pacemaker start
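Once the daemons are running on both machines, a quick sanity check (not part of the original steps) is to confirm that both nodes show up as online:

[ONE] # crm status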

Set Cluster Options

With so many devices and possible topologies, it is nearly impossible to cover fencing in a document like this. For now we will disable it.

[ONE] # crm configure property stonith-enabled=false

One of the most common ways to deploy Pacemaker is in a 2-node configuration. However, quorum as a concept makes no sense in this scenario (because you only have it when more than half the nodes are available), so we'll disable it too.

[ONE] # crm configure property no-quorum-policy=ignore

For demonstration purposes, we will force the cluster to move services after a single failure:

[ONE] # crm configure property migration-threshold=1

Add a Resource

Let's add a cluster service. To keep things simple, we'll choose one that doesn't require any configuration and works everywhere. Here's the command:

[ONE] # crm configure primitive my_first_svc ocf:pacemaker:Dummy op monitor interval=120s

"my_first_svc" is the name the service will be known as.

"ocf:pacemaker:Dummy" tells Pacemaker which script to use (Dummy - an agent that's useful as a template and for guides like this one), which namespace it is in (pacemaker) and what standard it conforms to - (OCF). + (OCF).

"op monitor interval=120s" tells Pacemaker to check the health of this service every 2 minutes by calling the agent's monitor action.

You should now be able to see the service running using:

[ONE] # crm_mon -1

Simulate a Service Failure

We can simulate an error by telling the service to stop directly (without telling the cluster):

[ONE] # crm_resource --resource my_first_svc --force-stop

If you now run crm_mon in interactive mode (the default), you should see, within the 2-minute monitor interval, the cluster notice that my_first_svc failed and move it to another node.
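For example (press Ctrl-C to exit the interactive view):

[ONE] # crm_mon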

Next Steps