diff --git a/.gitignore b/.gitignore index ef74526..e9d342b 100644 --- a/.gitignore +++ b/.gitignore @@ -1,38 +1,34 @@ # generated by jekyll html/*.html html/*.txt -html/*/*/index.php html/assets/ -html/doc/index.php +html/doc/ +html/pacemaker/*/index.php html/polls/index.html src/.*-cache/ src/.jekyll-metadata # generated by pacemaker make targets -html/abi/pacemaker/*/ -html/doc/acls.* -html/doc/build-1.1-*.txt -html/doc/crm_fencing.* -html/doc/en-US/ -html/doc/fr/ -html/doc/it-IT/ -html/doc/ro-RO/ -html/doc/zh-CN/ -html/doxygen/pacemaker/*/ -html/global/pacemaker/*/ -html/man/pacemaker/*.[78].html +html/pacemaker/abi/*/ +html/pacemaker/doc/*/ +html/pacemaker/doc/acls.* +html/pacemaker/doc/build-1.1-*.txt +html/pacemaker/doc/crm_fencing.* +html/pacemaker/doxygen/*/ +html/pacemaker/global/*/ +html/pacemaker/man/*.[78].html # provided by mediawiki mediawiki123/ # not version-controlled *~ *.swp html/doc/Two-Stacks.pdf html/images/ include/wiki.clusterlabs.org/secrets.php # not ClusterLabs-related beekhof.net/ html/Pictures/ html/rpm-test* diff --git a/README.md b/README.md index d7f9e6f..6ebf129 100644 --- a/README.md +++ b/README.md @@ -1,93 +1,98 @@ # ClusterLabs.org website ## Installing Jekyll ClusterLabs.org is partially generated by jekyll. Installing jekyll requires the following dependencies: * nodejs * npm * ruby * ruby-devel * rubygems * rubygem-bundler * rubygem-rdiscount Once you have those, change to the `src` directory and run `bundle install`. ## Using Jekyll ClusterLabs.org's jekyll source is under the `src` directory. Jekyll will generate static content to the html directory. To generate content in a checkout for development and testing, change to the `src` directory and run `bundle exec jekyll build` (to merely generate content) or `bundle exec jekyll serve` (to generate and test via a local server). 
To generate content on the production site, run `JEKYLL_ENV=production jekyll build` (which will enable such things as site analytics and asset digests). If `src/Gemfile` changes, re-run `bundle install` afterward. ## Images, stylesheets and JavaScripts We use the jekyll-assets plugin to manage "assets" such as images, stylesheets, and JavaScript. One advantage is that digest hashes are automatically added to the generated filenames when in production mode. This allows "cache busting" when an asset changes, so we can use long cache times on the server end. Another advantage is that sources are minified when in production mode. How CSS is managed: * `src/_assets/css/main.scss` is just a list of imports * `src/_assets/css/_*.scss` contain the CSS to be imported by `main.scss` * jekyll will generate `html/assets/main.css` (or `main-_HASH_.css`) as the combination of all imports * web pages can reference the stylesheet via `{% css main %}` JavaScript is managed similarly: * `src/_assets/js/main.js` is just a list of requires * `src/_assets/js/*.js` contain the JavaScript to be required by `main.js` * jekyll will copy these to `html/assets` * jekyll will generate `html/assets/main.js` (or `main-_HASH_.js`) as the combination of all JavaScript * web pages can reference the script via `{% js main %}` How images are managed: * `src/_assets/images/*` are our images * web pages can add an img tag using `{% img _NAME_._EXT_ %}` * web pages can reference a path to an image (e.g. in a link's href) using `{% asset_path _NAME_._EXT_ %}` * CSS can reference a path to an image using `url(asset_path("_NAME_._EXT_"))` * only images that are referenced in one of these ways will be deployed to the website, so `_assets` may contain image sources such as SVGs that do not need to be deployed * Tip: http://compresspng.com/ can often compress PNGs extremely well ## Site icons Site icons used to be easy, right? `favicon.ico` seems downright traditional. 
Unfortunately, site icons have become an ugly mess of incompatible proprietary extensions. Even `favicon.ico` is just a proprietary extension (and obsolete, as well). Now, there are also `apple-touch-icon[-NxN][-precomposed].png` (with at least _12_ different sizes!), `browserconfig.xml`, `manifest.json`, link tags with `rel=(icon|shortcut icon|apple-touch-icon-*)`, and Windows Phone tile overlay divs. If you want to be discouraged and confused, see: * http://stackoverflow.com/questions/23849377/html-5-favicon-support * https://mathiasbynens.be/notes/touch-icons * https://css-tricks.com/favicon-quiz/ There is no way to handle the mess universally. In particular, some devices do much better when different icon sizes are provided and listed in the HTML as link tags, and will pick the size needed, whereas other devices will download every single icon listed in those link tags, crippling page performance -- not to mention the overhead that listing two dozen icon sizes adds to the HTML. We've chosen a simple approach: provide two site icons, a 16x16 `favicon.ico`, and a 180x180 `apple-touch-icon.png`, both listed in link tags in the HTML. Most browsers/devices will choose one of these and scale it as needed. + +## Web server configuration + +The clusterlabs.org web server is configured to redirect certain old URLs to +their new locations, so be careful about renaming files. 
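The build workflow described in the "Using Jekyll" section above boils down to a short command sequence. This is a sketch, assuming a checkout where `src/Gemfile` exists and Ruby with bundler is installed:

```shell
# Development build: generate static content into ../html
cd src
bundle install              # re-run whenever src/Gemfile changes
bundle exec jekyll build    # or: bundle exec jekyll serve

# Production build: enables site analytics and asset digests
JEKYLL_ENV=production bundle exec jekyll build
```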
diff --git a/html/doc/Cluster_from_Scratch.pdf b/html/doc/Cluster_from_Scratch.pdf deleted file mode 120000 index 8f35861..0000000 --- a/html/doc/Cluster_from_Scratch.pdf +++ /dev/null @@ -1 +0,0 @@ -en-US/Pacemaker/1.1/pdf/Clusters_from_Scratch/Pacemaker-1.1-Clusters_from_Scratch-en-US.pdf \ No newline at end of file diff --git a/html/doc/Clusters_from_Scratch-1.0-GFS2.pdf b/html/pacemaker/doc/Clusters_from_Scratch-1.0-GFS2.pdf similarity index 100% rename from html/doc/Clusters_from_Scratch-1.0-GFS2.pdf rename to html/pacemaker/doc/Clusters_from_Scratch-1.0-GFS2.pdf diff --git a/html/doc/Clusters_from_Scratch-1.0-OCFS2.pdf b/html/pacemaker/doc/Clusters_from_Scratch-1.0-OCFS2.pdf similarity index 100% rename from html/doc/Clusters_from_Scratch-1.0-OCFS2.pdf rename to html/pacemaker/doc/Clusters_from_Scratch-1.0-OCFS2.pdf diff --git a/html/doc/Colocation_Explained.pdf b/html/pacemaker/doc/Colocation_Explained.pdf similarity index 100% rename from html/doc/Colocation_Explained.pdf rename to html/pacemaker/doc/Colocation_Explained.pdf diff --git a/html/doc/Colocation_Explained_White.pdf b/html/pacemaker/doc/Colocation_Explained_White.pdf similarity index 100% rename from html/doc/Colocation_Explained_White.pdf rename to html/pacemaker/doc/Colocation_Explained_White.pdf diff --git a/html/doc/Ordering_Explained.pdf b/html/pacemaker/doc/Ordering_Explained.pdf similarity index 100% rename from html/doc/Ordering_Explained.pdf rename to html/pacemaker/doc/Ordering_Explained.pdf diff --git a/html/doc/Ordering_Explained_White.pdf b/html/pacemaker/doc/Ordering_Explained_White.pdf similarity index 100% rename from html/doc/Ordering_Explained_White.pdf rename to html/pacemaker/doc/Ordering_Explained_White.pdf diff --git a/html/doc/build-1.0.txt b/html/pacemaker/doc/build-1.0.txt similarity index 100% rename from html/doc/build-1.0.txt rename to html/pacemaker/doc/build-1.0.txt diff --git a/html/doc/build-1.1-plugin.txt b/html/pacemaker/doc/build-1.1-plugin.txt similarity index 
100% rename from html/doc/build-1.1-plugin.txt rename to html/pacemaker/doc/build-1.1-plugin.txt diff --git a/html/doc/build-1.1.txt b/html/pacemaker/doc/build-1.1.txt similarity index 100% rename from html/doc/build-1.1.txt rename to html/pacemaker/doc/build-1.1.txt diff --git a/html/doc/desc-1.1-crmsh.txt b/html/pacemaker/doc/desc-1.1-crmsh.txt similarity index 100% rename from html/doc/desc-1.1-crmsh.txt rename to html/pacemaker/doc/desc-1.1-crmsh.txt diff --git a/html/doc/desc-1.1-pcs.txt b/html/pacemaker/doc/desc-1.1-pcs.txt similarity index 100% rename from html/doc/desc-1.1-pcs.txt rename to html/pacemaker/doc/desc-1.1-pcs.txt diff --git a/html/doc/desc-1.1-plugin.txt b/html/pacemaker/doc/desc-1.1-plugin.txt similarity index 100% rename from html/doc/desc-1.1-plugin.txt rename to html/pacemaker/doc/desc-1.1-plugin.txt diff --git a/html/doc/title-1.0.txt b/html/pacemaker/doc/title-1.0.txt similarity index 100% rename from html/doc/title-1.0.txt rename to html/pacemaker/doc/title-1.0.txt diff --git a/html/doc/title-1.1-crmsh.txt b/html/pacemaker/doc/title-1.1-crmsh.txt similarity index 100% rename from html/doc/title-1.1-crmsh.txt rename to html/pacemaker/doc/title-1.1-crmsh.txt diff --git a/html/doc/title-1.1-pcs.txt b/html/pacemaker/doc/title-1.1-pcs.txt similarity index 100% rename from html/doc/title-1.1-pcs.txt rename to html/pacemaker/doc/title-1.1-pcs.txt diff --git a/html/doc/title-1.1-plugin.txt b/html/pacemaker/doc/title-1.1-plugin.txt similarity index 100% rename from html/doc/title-1.1-plugin.txt rename to html/pacemaker/doc/title-1.1-plugin.txt diff --git a/html/pacemaker/index.html b/html/pacemaker/index.html new file mode 100644 index 0000000..3641585 --- /dev/null +++ b/html/pacemaker/index.html @@ -0,0 +1,199 @@ + + + + ClusterLabs > Pacemaker + + + + + + + + + + + + + + + + + +
+ +
+
+ + + + + + + + + +
+ + + +
+ "The definitive open-source high-availability stack for the Linux + platform builds upon the Pacemaker cluster resource manager." + -- LINUX Journal, + "Ahead + of the Pack: the Pacemaker High-Availability Stack" +
+ + +

Features

+
    +
  • Detection and recovery of machine and application-level failures
  • +
  • Supports practically any redundancy configuration
  • +
  • Supports both quorate and resource-driven clusters
  • +
  • Configurable strategies for dealing with quorum loss (when multiple machines fail)
  • +
  • Supports application startup/shutdown ordering, regardless of which machine(s) the applications are on
  • +
  • Supports applications that must/must-not run on the same machine
  • +
  • Supports applications which need to be active on multiple machines
  • +
  • Supports applications with multiple modes (e.g. master/slave)
  • +
  • Provably correct response to any failure or cluster state. The + cluster's response to any stimuli can be tested offline + before the condition exists
  • +
+ +

Background

+ + Black Duck Open Hub project report for pacemaker + +

+ Pacemaker has been around + since 2004 + and is primarily a collaborative effort + between Red Hat + and SuSE. However, we also + receive considerable help and support from the folks + at LinBit and the community in + general. +

+

+ The core Pacemaker team is made up of full-time developers from + Australia, the Czech Republic, the USA, and Germany. Contributions to the code or + documentation are always welcome. +

+

+ Pacemaker ships with most modern Linux distributions and has been + deployed in many critical environments including Deutsche + Flugsicherung GmbH + (DFS) + which uses Pacemaker to ensure + its air traffic + control systems are always available. +

+

+ Currently Andrew Beekhof is + the project lead for Pacemaker. +

+
+ + +
+
+ + + + + + +
+ + + + + + + + + + diff --git a/src/_config.yml b/src/_config.yml index bc6665a..3a07a41 100644 --- a/src/_config.yml +++ b/src/_config.yml @@ -1,55 +1,52 @@ # Welcome to Jekyll! # # This config file is meant for settings that affect your whole blog, values # which you are expected to set up once and rarely edit after that. If you find # yourself editing these this file very often, consider using Jekyll's data files # feature for the data you need to update frequently. # # For technical reasons, this file is *NOT* reloaded automatically when you use # 'bundle exec jekyll serve'. If you change this file, please restart the server process. # Site settings # These are used to personalize your new site. If you look in the HTML files, # you will see them accessed via {{ site.title }}, {{ site.email }}, and so on. # You can create any custom variable you would like, and they will be accessible # in the templates via {{ site.myvariable }}. title: ClusterLabs email: andrew@beekhof.net description: Community hub for open-source high-availability software -url: http://www.clusterlabs.org/ +url: https://www.clusterlabs.org/ google_analytics: UA-8156370-1 # Build settings theme: minima destination: ../html gems: - jekyll-assets - font-awesome-sass include: - - abi - doc - - doxygen - - global - - man + - pacemaker - polls exclude: - Gemfile - Gemfile.lock - LICENSE.theme # All content generated outside of jekyll, or not yet converted to jekyll, # must be listed here, or jekyll will erase it when building the site. # Though not documented as such, the values here function as prefix matches. 
keep_files: - - abi - - doc - - doxygen - - global - images - - man + - pacemaker/abi + - pacemaker/doc + - pacemaker/doxygen + - pacemaker/global + - pacemaker/man - Pictures - rpm-test - rpm-test-next - rpm-test-rhel diff --git a/src/_includes/sidebar.html b/src/_includes/sidebar.html index e633082..2a561f1 100644 --- a/src/_includes/sidebar.html +++ b/src/_includes/sidebar.html @@ -1,49 +1,49 @@ diff --git a/src/_layouts/home.html b/src/_layouts/home.html index 87cb70f..fb480da 100644 --- a/src/_layouts/home.html +++ b/src/_layouts/home.html @@ -1,203 +1,213 @@ --- layout: clusterlabs ---

Quick Overview

{% img Deploy-small.png %}

Deploy

We support many deployment scenarios, from the simplest 2-node standby cluster to a 32-node active/active configuration. We can also dramatically reduce hardware costs by allowing several active/passive clusters to be combined and share a common backup node.

{% img Monitor-small.png %}

Monitor

We monitor the system for both hardware and software failures. In the event of a failure, we will automatically recover your application and make sure it is available from one of the remaining machines in the cluster.

{% img Recover-small.png %}

Recover

After a failure, we use advanced algorithms to quickly determine the optimum locations for services based on relative node preferences and/or requirements to run with other cluster services (we call these "constraints").

Why clusters

At its core, a cluster is a distributed finite state machine capable of co-ordinating the startup and recovery of inter-related services across a set of machines.

System HA is possible without a cluster manager, but using one saves many headaches anyway

Even a distributed and/or replicated application that is able to survive the failure of one or more components can benefit from a higher level cluster:

While SYS-V init replacements like systemd can provide deterministic recovery of a complex stack of services, the recovery is limited to one machine and lacks the context of what is happening on other machines - context that is crucial to determining the difference between a local failure, a clean startup, or recovery after a total site failure.

Features

The ClusterLabs stack, incorporating Corosync - and Pacemaker defines + and Pacemaker defines an Open Source, High Availability cluster offering suitable for both small and large deployments.

Components

"The definitive open-source high-availability stack for the Linux platform builds upon the Pacemaker cluster resource manager."
-- LINUX Journal, "Ahead of the Pack: the Pacemaker High-Availability Stack"

A Pacemaker stack is built on five core components:

We describe each of these in more detail as well as other optional components such as CLIs and GUIs.

Background

Pacemaker has been around since 2004 and is primarily a collaborative effort - between Red Hat - and SUSE, however we also + between Red Hat + and SUSE, however we also receive considerable help and support from the folks - at LinBit and the community in + at LinBit and the community in general.

"Pacemaker cluster stack is the state-of-the-art high availability and load balancing stack for the Linux platform."
-- OpenStack documentation

Corosync also began life in 2004 but was then part of the OpenAIS project. - It is primarily a Red - Hat initiative, however we also receive considerable - help and support from the folks in the community. + It is primarily a Red Hat initiative, + with considerable help and support from the folks in the community.

The core ClusterLabs team is made up of full-time developers from Australia, Austria, Canada, China, the Czech Republic, England, Germany, Sweden and the USA. Contributions to the code or documentation are always welcome.

The ClusterLabs stack ships with most modern enterprise distributions and has been deployed in many critical environments including Deutsche Flugsicherung GmbH (DFS) which uses Pacemaker to ensure its air traffic control systems are always available.

diff --git a/src/components.html b/src/components.html index 473baac..e69c430 100644 --- a/src/components.html +++ b/src/components.html @@ -1,178 +1,179 @@ --- layout: pacemaker title: Components ---

Core Components

-

Pacemaker

+

Pacemaker

At its core, Pacemaker is a distributed finite state machine capable of co-ordinating the startup and recovery of inter-related services across a set of machines.

Pacemaker understands many different resource types (OCF, SYSV, systemd) and can accurately model the relationships between them (colocation, ordering).

It can even use technology such as Docker to automatically isolate the resources managed by the cluster.

-

Corosync

+

Corosync

Corosync APIs provide membership (a list of peers), messaging (the ability to talk to processes on those peers), and quorum (do we have a majority) capabilities to projects such as Apache Qpid and Pacemaker.

-

libQB

+

libQB

libqb is a library whose primary purpose is to provide reusable, high-performance client-server features. It provides high-performance logging, tracing, IPC, and poll.

The initial features of libqb come from the parts of corosync that were thought to be useful to other projects.

Resource Agents

Resource agents are the abstraction that allows Pacemaker to manage services it knows nothing about. They contain the logic for what to do when the cluster wishes to start, stop or check the health of a service.

This particular set of agents conforms to the Open Cluster Framework (OCF) specification. A guide to writing agents is also available.
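The start/stop/health-check logic described above can be sketched as a minimal OCF-style agent. This is illustrative only: a real agent also implements actions such as `meta-data` and `validate-all` per the OCF spec, and "dummyd" and its pidfile path are made-up names for the example.

```shell
#!/bin/sh
# Minimal sketch of an OCF-style resource agent (illustrative only).
# Real agents also implement meta-data, validate-all, and proper
# process management; "dummyd" is a hypothetical service.

PIDFILE="${PIDFILE:-/tmp/dummyd.pid}"

# Subset of OCF exit codes
OCF_SUCCESS=0
OCF_ERR_GENERIC=1
OCF_NOT_RUNNING=7

dummy_start() {
    # A real agent would launch the service and wait for it to be ready
    echo $$ > "$PIDFILE" || return $OCF_ERR_GENERIC
    return $OCF_SUCCESS
}

dummy_stop() {
    # Stopping an already-stopped resource must still succeed
    rm -f "$PIDFILE"
    return $OCF_SUCCESS
}

dummy_monitor() {
    # The cluster calls this periodically to check service health
    [ -f "$PIDFILE" ] && return $OCF_SUCCESS
    return $OCF_NOT_RUNNING
}

if [ $# -gt 0 ]; then
    case "$1" in
        start)   dummy_start ;;
        stop)    dummy_stop ;;
        monitor) dummy_monitor ;;
        *)       exit 3 ;;   # OCF_ERR_UNIMPLEMENTED
    esac
fi
```

Pacemaker invokes such a script with an action argument (e.g. `start` or `monitor`) and interprets the exit code; `OCF_NOT_RUNNING` (7) from `monitor` is what tells the cluster that recovery is needed.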

Fence Agents

Fence agents are the abstraction that allows Pacemaker to isolate badly behaving nodes. They achieve this by either powering off the node or disabling its access to the network and/or shared storage.

Many types of network power switches exist, and you will want to choose the one(s) that match your hardware. Please be aware that some (ones that don't lose power when the machine goes down) are better than others.

Agents are generally expected to expose OCF-compliant metadata.

OCF specification

The original documentation that sparked a lot of this work. Mostly we only use the "RA" specification. Efforts are underway to revive the process for updating and modernizing the spec.

Configuration Tools

Pacemaker's internal configuration format is XML, which is great for machines but terrible for humans.

The community's best minds have created GUIs and shells to hide the XML and allow the configuration to be viewed and updated in a more human-friendly format.

Command Line Interfaces (Shells)

-

crmsh

+

crmsh

The original configuration shell for Pacemaker. Written and actively maintained by SUSE, it may be used either as an interactive shell with tab completion, for single commands directly on the shell's command line, or as a batch-mode scripting tool. Documentation for crmsh can be - found here. + found here.

pcs

An alternate vision for a full cluster lifecycle configuration shell and web based GUI. Handles everything from cluster installation through to resource configuration and status.

GUI Tools

pygui

The original GUI for Pacemaker, written in Python by IBM China. Mostly deprecated on SLES in favor of Hawk.

-

Hawk

+

Hawk

Hawk is a web-based GUI for managing and monitoring Pacemaker HA clusters. It is generally intended to be run on every node in the cluster, so that you can just point your web browser at any node to access it. - There is a usage guide at hawk-guide.readthedocs.org, and it is + There is a usage guide at + hawk-guide.readthedocs.io, and it is documented as part of the - - SUSE Linux Enterprise High Availability Extension documentation + SUSE + Linux Enterprise High Availability Extension documentation

LCMC

The Linux Cluster Management Console (LCMC) is a GUI with an innovative approach for representing the status of and relationships between cluster services. It uses SSH to let you install, configure and manage clusters from your desktop.

pcs

An alternate vision for a full cluster lifecycle configuration shell and web based GUI. Handles everything from cluster installation through to resource configuration and status.

Striker

Striker is the user interface for the Anvil! (virtual) server platform and the ScanCore autonomous self-defence and alert system.

Other Add-ons

booth

The Booth cluster ticket manager extends Pacemaker to support geographically distributed clustering. It does this by managing the granting and revoking of 'tickets', which authorize one of the cluster sites, potentially in geographically dispersed locations, to run certain resources.

sbd

SBD provides a node fencing mechanism through the exchange of messages via shared block storage such as a SAN, iSCSI, or FCoE device. This isolates the fencing mechanism from changes in firmware version or dependencies on specific firmware controllers, and it can be used as a STONITH mechanism in all configurations that have reliable shared storage. It can also be used as a pure watchdog-based fencing mechanism.

diff --git a/src/corosync.html b/src/corosync.html index b051e68..69878ca 100644 --- a/src/corosync.html +++ b/src/corosync.html @@ -1,56 +1,56 @@ --- layout: default title: Corosync ---

Virtual synchrony

A closed process group communication model with virtual synchrony guarantees for creating replicated state machines.

Availability

A simple availability manager that restarts the application process when it has failed.

Information

An in-memory configuration and statistics database that provides the ability to set and retrieve information and to receive notifications when it changes.

Quorum

A quorum system that notifies applications when quorum is achieved or lost.

diff --git a/src/developers.html b/src/developers.html index 179a247..2a7582d 100644 --- a/src/developers.html +++ b/src/developers.html @@ -1,56 +1,56 @@ --- layout: pacemaker title: Developers ---

Automated Integration Testing

Pre-built Packages

Recent versions of all major Linux distributions provide Pacemaker as part of their usual repositories, so you can usually just launch your favorite package manager. One exception is Debian 8 ("jessie"), which had packaging issues not resolved by the release deadline. For more information, see the Debian-HA team.

Release History

{% js pcmk_versions %}
diff --git a/src/doc/index.html b/src/doc/index.html new file mode 100644 index 0000000..fd5cf64 --- /dev/null +++ b/src/doc/index.html @@ -0,0 +1,25 @@ +--- +layout: pacemaker +title: Documentation +--- +
+ +
+

General

+
+ +

+ The ClusterLabs wiki has + how-to's, tips, and other information that doesn't make it into the project + manuals. +

+ +
+

Project-specific

+
+ + + +
diff --git a/src/doc/index.php b/src/doc/index.php deleted file mode 100644 index cddaabe..0000000 --- a/src/doc/index.php +++ /dev/null @@ -1,165 +0,0 @@ ---- -layout: pacemaker -title: ClusterLabs - Pacemaker Documentation ---- -
-

-The following Pacemaker documentation was generated from the upstream sources. -

- -

Where to Start

-

- - If you're new to Pacemaker or clustering in general, the best - place to start is the Clusters from Scratch guide. This - document walks you step-by-step through the installation and - configuration of a High Availability cluster with Pacemaker. - It even makes the common configuration mistakes so that it can - demonstrate how to fix them. - -

- -

- - On the otherhand, if you're looking for an exhaustive reference of - all Pacemaker's options and features, try Pacemaker - Explained. It's dry, but should have the answers you're - looking for. - -

- -

- There is also a project wiki with plenty of - examples and - howto guides which - the wider community is encouraged to update and add to. -

- -

Unversioned documentation

-
-

General Concepts

- - - - - - - - - - - - - - - - - - - -
Ordering Explained[pdf][print]
Colocation Explained[pdf][print]
Configuring Fencing with crmsh[html]
ACL Guide[html]
-
- -"; - echo "

"; - foreach (glob("title-$version.txt") as $filename) { - readfile($filename); - } - echo "

"; - foreach (glob("desc-$version.txt") as $filename) { - readfile($filename); - } - echo "
"; - foreach (glob("build-$version.txt") as $filename) { - readfile($filename); - } - echo "
"; - - $langs = array(); - // for now, show only US English; other translations haven't been maintained - foreach (glob("$base/en-US/Pacemaker/$version") as $item) { - $langs[] = basename(dirname(dirname($item))); - } - - $books = array(); - foreach (glob("$base/en-US/Pacemaker/$version/pdf/*") as $filename) { - $books[] = basename($filename); - } - - echo ''; - foreach ($books as $b) { - foreach ($langs as $lang) { - if (glob("$base/$lang/Pacemaker/$version/pdf/$b/*-$lang.pdf")) { - echo '"; - - echo '"; - } - } - } - echo "
'.str_replace("_", " ", $b)." ($lang)'; - foreach (glob("$base/$lang/Pacemaker/$version/epub/$b/*.epub") as $filename) { - echo " [epub]"; - } - foreach (glob("$base/$lang/Pacemaker/$version/pdf/$b/*.pdf") as $filename) { - echo " [pdf]"; - } - foreach (glob("$base/$lang/Pacemaker/$version/html/$b/index.html") as $filename) { - echo " [html]"; - } - foreach (glob("$base/$lang/Pacemaker/$version/html-single/$b/index.html") as $filename) { - echo " [html-single]"; - } - foreach (glob("$base/$lang/Pacemaker/$version/txt/$b/*.txt") as $filename) { - echo " [txt]"; - } - echo "
"; - echo "
"; - } - -$docs = array(); - -foreach (glob("*.html") as $file) { - $fields = explode(".", $file, -1); - $docs[] = implode(".", $fields); -} - -foreach (glob("*.pdf") as $file) { - $fields = explode(".", $file, -1); - $docs[] = implode(".", $fields); -} - - -echo "

Versioned documentation

"; -foreach(get_versions(".") as $v) { - docs_for_version(".", $v); -} - -?> - -

Deprecated Documentation

-
-

Pacemaker 1.0 with OpenAIS

- - - - - - - - - -
Clusters from Scratch - Pacemaker 1.0 & GFS2[pdf]
Clusters from Scratch - Pacemaker 1.0 & OCFS2[pdf]
-
- diff --git a/src/faq.html b/src/faq.html index af5a55a..8ac5ed4 100644 --- a/src/faq.html +++ b/src/faq.html @@ -1,141 +1,141 @@ --- layout: default title: FAQ ---

Frequently Asked Questions

Q: Where can I get Pacemaker?

A: Pacemaker ships as part of most modern distributions, so you can usually just launch your favorite package manager on:

If all else fails, you can try installing from source.

Q: Is there any documentation?

A: Yes. You can find the set relevant to your version in our documentation index.

Q: Where should I ask questions?

A: Often basic questions can be answered on irc, but sending them to the - mailing list is + mailing list is always a good idea so that everyone can benefit from the answer.

Q: Do I need shared storage?

A: No. We can help manage it if you have some, but Pacemaker itself has no need for shared storage.

Q: Which cluster filesystems does Pacemaker support?

A: Pacemaker supports the - popular OCFS2 - and GFS2 + popular OCFS2 + and GFS2 filesystems. As you'd expect, you can use them on top of real disks or network block devices - like DRBD. + like DRBD.

Q: What kind of applications can I manage with Pacemaker?

A: Pacemaker is application agnostic, meaning anything that can be scripted can be made highly available - provided the script conforms to one of the supported standards: - LSB, - OCF, - Systemd, - or Upstart. + LSB, + OCF, + Systemd, + or Upstart.

Q: Can I use Pacemaker with Heartbeat?

A: Yes. Pacemaker started off life as part of the Heartbeat project and continues to support it as an alternative to Corosync. See this documentation for more details

Q: Can I use Pacemaker with CMAN?

A: Yes. Pacemaker added support for CMAN v3 in version 1.1.5 to better integrate with distros that have traditionally shipped and/or supported the RHCS cluster stack instead of Pacemaker. This is particularly relevant for those looking to use GFS2 or OCFS2. See - the documentation + the documentation for more details

Q: Can I use Pacemaker with Corosync 1.x?

A: Yes. You will need to configure Corosync to load Pacemaker's custom plugin to provide the membership and quorum information we require. See - the documentation for more details.

+ the documentation for more details.

Q: Can I use Pacemaker with Corosync 2.x?

A: Yes. Pacemaker can obtain the membership and quorum information it requires directly from Corosync in this configuration. See - the documentation for more details.

+ the documentation for more details.

Q: Do I need a fencing device?

A: Yes. Fencing is the only 100% reliable way to ensure the integrity of your data and that applications are only active on one host. Although Pacemaker is technically able to function without fencing, there are good reasons why SUSE and Red Hat will not support such a configuration.

Q: Do I need to know XML to configure Pacemaker?

A: No. Although Pacemaker uses XML as its native configuration format, there exist two CLIs and at least four GUIs that present the configuration in a human-friendly format.

Q: How do I synchronize the cluster configuration?

A: Any changes to Pacemaker's configuration are automatically replicated to other machines. The configuration is also versioned, so any offline machines will be updated when they return.

Q: Should I choose pcs or crmsh?

A: Arguably the best advice is to use whichever one comes with your distro. This is the one that will be tailored to that environment, receive regular bugfixes and feature in the documentation.

Of course, for years people have been side-loading all of Pacemaker onto enterprise distros that didn't ship it, so doing the same for just a configuration tool should be easy if your favorite distro does not ship your favorite tool.

Q: What if my question isn't here?

A: See the getting help section and let us know!

diff --git a/src/help.html b/src/help.html index c24b2a2..f88cb84 100644 --- a/src/help.html +++ b/src/help.html @@ -1,166 +1,161 @@ --- layout: pacemaker title: Help ---

Getting Help

You can stay up to date with the Pacemaker project by subscribing to our - news and/or - site updates feeds. + site updates feeds.

A good first step is always to check out the FAQ - and documentation. Otherwise, many + and documentation. Otherwise, many members of the community hang out on irc and are happy to answer questions. We are spread out over many timezones though (and have day jobs), so you may need to be patient when waiting for a reply.

Extended or complex issues might be better sent to the - relevant mailing list(s) + relevant mailing list(s) (you'll need to subscribe in order to send messages). Don't worry if you pick the wrong one, many of us are on multiple lists and someone will suggest a more appropriate forum if necessary.

People new to the project, or Open Source generally, are encouraged to read Getting Answers by Mike Ash from Rogue Amoeba. It provides some very good tips on effective communication with groups such as this one. Following the advice it contains will greatly increase the chance of a quick and helpful reply.

Bugs and other problems can also be reported - via Bugzilla. + via Bugzilla.

Or if you already know the solution, submit a patch against - our GitHub + our GitHub repository.

The development of most of the ClusterLabs-related projects takes place as part of the ClusterLabs organization on GitHub, and the source code and issue trackers for these projects can be found there.

Providing Help

If you find this project useful, you may want to consider supporting its future development. There are a number of ways to support the project (in no particular order):

Thank you for using Pacemaker

Professional Support

Does your company provide Pacemaker training or support? Let us know!

diff --git a/src/abi/pacemaker/index.php b/src/pacemaker/abi/index.php similarity index 82% rename from src/abi/pacemaker/index.php rename to src/pacemaker/abi/index.php index c9946cd..a6d8657 100644 --- a/src/abi/pacemaker/index.php +++ b/src/pacemaker/abi/index.php @@ -1,82 +1,81 @@ --- layout: pacemaker -title: ClusterLabs - Pacemaker ABI Compatibility +title: Pacemaker ABI Compatibility ---

-This page details the ABI compatability between the listed -Pacemaker versions. -The reports are generated with the -ABI Compliance -Checker that ships with Fedora + This page details ABI compatibility between the listed + Pacemaker versions. Reports are generated using the + ABI Compliance Checker + that ships with Fedora.

/", "", $line); $compat_reports[$i]["status"] = preg_replace("/<\/td>.*/", "", $compat_reports[$i]["status"]); break; } } fclose($file_handle); ++$i; } usort($compat_reports, "sorter"); foreach ($compat_reports as $item) { $report = $item["report"]; $filename = $item["filename"]; $from = $item["from"]; $to = $item["to"]; $status = $item["status"]; echo " "; echo " "; echo " "; echo " "; echo " "; echo " "; } ?>

ABI Compatibility Table

Version Reference Version Status Report
$to $from $status report
diff --git a/src/pacemaker/doc/index.php b/src/pacemaker/doc/index.php new file mode 100644 index 0000000..8843be1 --- /dev/null +++ b/src/pacemaker/doc/index.php @@ -0,0 +1,169 @@ +--- +layout: pacemaker +title: Pacemaker Documentation +--- +
+ +

+ Most of the documentation listed here was generated from the Pacemaker + sources. +

+ +
+

Where to Start

+
+

+ If you're new to Pacemaker or clustering in general, the best place to + start is Clusters from Scratch, which walks you step-by-step through + the installation and configuration of a high-availability cluster with + Pacemaker. It even makes common configuration mistakes so that it can + demonstrate how to fix them. +

+ +

+ On the other hand, if you're looking for an exhaustive reference of all + of Pacemaker's options and features, try Pacemaker Explained. It's + dry, but should have the answers you're looking for. +

+ +

+ There is also a project wiki + with examples, how-to guides, and other information that doesn't make it + into the manuals. +

+ +
+

Unversioned documentation

+
+ +
+

General Concepts

+ + + + + + + + + + + + + + + + + + + +
Ordering Explained[pdf][print]
Colocation Explained[pdf][print]
Configuring Fencing with crmsh[html]
ACL Guide[html]
+
+ + "; + echo "

"; + foreach (glob("title-$version.txt") as $filename) { + readfile($filename); + } + echo "

"; + foreach (glob("desc-$version.txt") as $filename) { + readfile($filename); + } + echo "
"; + foreach (glob("build-$version.txt") as $filename) { + readfile($filename); + } + echo "
"; + + $langs = array(); + // for now, show only US English; other translations haven't been maintained + //foreach (glob("$base/*/Pacemaker/$version") as $item) { + // $langs[] = basename(dirname(dirname($item))); + //} + $langs[] = "en-US"; + + $books = array(); + foreach (glob("$base/en-US/Pacemaker/$version/pdf/*") as $filename) { + $books[] = basename($filename); + } + + echo ''; + foreach ($books as $b) { + foreach ($langs as $lang) { + if (glob("$base/$lang/Pacemaker/$version/pdf/$b/*-$lang.pdf")) { + echo '"; + + echo '"; + } + } + } + echo "
'.str_replace("_", " ", $b)." ($lang)'; + foreach (glob("$base/$lang/Pacemaker/$version/epub/$b/*.epub") as $filename) { + echo " [epub]"; + } + foreach (glob("$base/$lang/Pacemaker/$version/pdf/$b/*.pdf") as $filename) { + echo " [pdf]"; + } + foreach (glob("$base/$lang/Pacemaker/$version/html/$b/index.html") as $filename) { + echo " [html]"; + } + foreach (glob("$base/$lang/Pacemaker/$version/html-single/$b/index.html") as $filename) { + echo " [html-single]"; + } + foreach (glob("$base/$lang/Pacemaker/$version/txt/$b/*.txt") as $filename) { + echo " [txt]"; + } + echo "
"; + echo "
"; + } + + $docs = array(); + + foreach (glob("*.html") as $file) { + $fields = explode(".", $file, -1); + $docs[] = implode(".", $fields); + } + + foreach (glob("*.pdf") as $file) { + $fields = explode(".", $file, -1); + $docs[] = implode(".", $fields); + } + + + echo "
\n

Versioned documentation

\n
"; + foreach(get_versions(".") as $v) { + docs_for_version(".", $v); + } + + ?> + +
+

Deprecated Documentation

+
+
+

Pacemaker 1.0 with OpenAIS

+ + + + + + + + + +
Clusters from Scratch - Pacemaker 1.0 & GFS2[pdf]
Clusters from Scratch - Pacemaker 1.0 & OCFS2[pdf]
+
+ + diff --git a/src/doxygen/pacemaker/index.php b/src/pacemaker/doxygen/index.php similarity index 93% rename from src/doxygen/pacemaker/index.php rename to src/pacemaker/doxygen/index.php index a81d1b0..b238f68 100644 --- a/src/doxygen/pacemaker/index.php +++ b/src/pacemaker/doxygen/index.php @@ -1,35 +1,35 @@ --- layout: pacemaker -title: ClusterLabs - Pacemaker API Documentation +title: Pacemaker API Documentation ---

Pacemaker API Documentation

"; $runs = glob("*"); array_multisort(array_map('filemtime', $runs), /*SORT_ASC*/SORT_DESC, $runs); foreach ($runs as $hash) { if (strstr($hash, "index")) { continue; } if (strstr($hash, "-")) { $title = "Version"; $path = "releases/tag"; } else { $title = "Commit"; $path = "commit"; } echo "
  • $title $hash"; echo " [Main Page]"; echo " [API List]"; echo " [Source]"; echo "
  • "; } echo ""; ?>
    diff --git a/src/global/pacemaker/index.php b/src/pacemaker/global/index.php similarity index 93% rename from src/global/pacemaker/index.php rename to src/pacemaker/global/index.php index d4be8d0..4178f88 100644 --- a/src/global/pacemaker/index.php +++ b/src/pacemaker/global/index.php @@ -1,35 +1,35 @@ --- layout: pacemaker -title: ClusterLabs - Annotated Pacemaker Sources +title: Annotated Pacemaker Sources ---

    Annotated Pacemaker Sources

    "; $runs = glob("*"); array_multisort(array_map('filemtime', $runs), /*SORT_ASC*/SORT_DESC, $runs); foreach ($runs as $hash) { if (strstr($hash, "index")) { continue; } if (strstr($hash, "-")) { $title = "Version"; $path = "releases/tag"; } else { $title = "Commit"; $path = "commit"; } echo "
  • $title $hash"; echo " [Annotated]"; echo " [Download]"; echo "
  • "; } echo ""; ?>
    diff --git a/src/pacemaker.html b/src/pacemaker/index.html similarity index 74% rename from src/pacemaker.html rename to src/pacemaker/index.html index 49c3d65..008365c 100644 --- a/src/pacemaker.html +++ b/src/pacemaker/index.html @@ -1,88 +1,88 @@ --- layout: default title: Pacemaker ---
    "The definitive open-source high-availability stack for the Linux platform builds upon the Pacemaker cluster resource manager." -- LINUX Journal, "Ahead of the Pack: the Pacemaker High-Availability Stack"

    Features

    Background

    Black Duck Open Hub project report for pacemaker

    Pacemaker has been around since 2004 and is primarily a collaborative effort - between Red Hat + between Red Hat and SuSE. However, we also receive considerable help and support from the folks at LinBit and the community in general.

    The core Pacemaker team is made up of full-time developers from Australia, the Czech Republic, the USA, and Germany. Contributions to the code or documentation are always welcome.

    Pacemaker ships with most modern Linux distributions and has been deployed in many critical environments including Deutsche Flugsicherung GmbH - (DFS) + (DFS) which uses Pacemaker to ensure its air traffic control systems are always available.

    Currently Andrew Beekhof is the project lead for Pacemaker.

    diff --git a/src/man/pacemaker/index.php b/src/pacemaker/man/index.php similarity index 90% rename from src/man/pacemaker/index.php rename to src/pacemaker/man/index.php index cc269c4..edac999 100644 --- a/src/man/pacemaker/index.php +++ b/src/pacemaker/man/index.php @@ -1,230 +1,214 @@ --- layout: pacemaker -title: ClusterLabs - Pacemaker Manual Pages +title: Pacemaker Manual Pages ---

    Pacemaker Command Line Tools

    Tool Summary

    DESCRIPTION')) { $line = fgets($file_handle); $line = fgets($file_handle); while (!feof($file_handle)) { $line = fgets($file_handle); if(strstr($line, 'OPTIONS')) { $done = 1; break; } else { echo $line; } } if($done) { break; } } } fclose($file_handle); } $mans = glob("*.8.html"); foreach ($mans as $m) { $fields = explode(".", $m, 3); $base = $fields[0]; echo '
    '; echo ''; echo "$base"; echo ''; echo ''; get_desc($m); echo ''; echo '
    '; } ?>

    The Right Tool for the Job

    Pacemaker ships with a comprehensive set of tools that assist you in managing your cluster from the command line. Here we introduce the tools needed for managing the cluster configuration in the CIB and the cluster resources.

    The following list presents several tasks related to cluster management and briefly introduces the tools to use to accomplish these tasks:

    Monitoring the Cluster's Status

    The crm_mon command allows you to monitor your cluster's status and configuration. Its output includes the number of nodes, uname, uuid, status, the resources configured in your cluster, and the current status of each. The output of crm_mon can be displayed at the console or printed into an HTML file. When provided with a cluster configuration file without the status section, crm_mon creates an overview of nodes and resources as specified in the file. See crm_mon(8) for a detailed introduction to this tool's usage and command syntax.
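As a quick sketch of the two output modes described above (the HTML path is just an example; see crm_mon(8) for the authoritative option list):

```shell
# Print the cluster status once and exit, instead of the default refresh loop
crm_mon -1

# Write the status as a periodically refreshed HTML page instead of to the console
crm_mon --as-html /var/www/html/cluster-status.html
```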

    Managing the CIB

    The cibadmin command is the low-level administrative command for manipulating the Pacemaker CIB. It can be used to dump all or part of the CIB, update all or part of it, modify all or part of it, delete the entire CIB, or perform miscellaneous CIB administrative operations. See cibadmin(8) for a detailed introduction to this tool's usage and command syntax.
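A minimal sketch of the dump/replace operations mentioned above (the filename is an example; option names per cibadmin(8)):

```shell
# Dump the entire CIB as XML
cibadmin --query

# Dump only one section of the CIB; -o restricts the operation to that section
cibadmin --query -o resources

# Replace the resources section with the contents of a file
cibadmin --replace -o resources --xml-file resources.xml
```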

    Managing Configuration Changes

    The crm_diff command assists you in creating and applying XML patches. This can be useful for visualizing the changes between two versions of the cluster configuration or saving changes so they can be applied at a later time using cibadmin(8). See crm_diff(8) for a detailed introduction to this tool's usage and command syntax.
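The save-and-apply-later workflow can be sketched as follows (filenames are examples):

```shell
# Save the current configuration, then edit a working copy
cibadmin --query > old.xml
cp old.xml new.xml
#   (edit new.xml here)

# Generate an XML patch describing the difference between the two versions
crm_diff --original old.xml --new new.xml > patch.xml

# Apply the saved patch at a later time
cibadmin --patch --xml-file patch.xml
```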

    Manipulating CIB Attributes

    The crm_attribute command lets you query and manipulate node attributes and cluster configuration options that are used in the CIB. See crm_attribute(8) for a detailed introduction to this tool's usage and command syntax.
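For example (the node name and attribute name below are illustrative, not special values):

```shell
# Query the value of a cluster option
crm_attribute --name stonith-enabled --query

# Set an attribute on a particular node
crm_attribute --node node1 --name my-attribute --update somevalue
```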

    Validating the Cluster Configuration

The crm_verify command checks the configuration database (CIB) for consistency and other problems. It can check a file containing the configuration or connect to a running cluster. It reports two classes of problems: errors must be fixed before Pacemaker can work properly, while resolving warnings is left to the administrator. crm_verify assists in creating new or modified configurations. You can take a local copy of a CIB from the running cluster, edit it, validate it using crm_verify, then put the new configuration into effect using cibadmin. See crm_verify(8) for a detailed introduction to this tool's usage and command syntax.
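The copy-edit-validate-activate cycle described above looks roughly like this (the filename is an example):

```shell
# Take a local copy of the running cluster's CIB
cibadmin --query > cib.xml
#   (edit cib.xml here)

# Validate the edited copy before activating it
crm_verify --xml-file cib.xml

# If validation passes, put the new configuration into effect
cibadmin --replace --xml-file cib.xml
```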

    Managing Resource Configurations

    The crm_resource command performs various resource-related actions on the cluster. It lets you modify the definition of configured resources, start and stop resources, or delete and migrate resources between nodes. See crm_resource(8) for a detailed introduction to this tool's usage and command syntax.
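As a brief illustration (resource and node names are examples from this guide):

```shell
# Show which node a resource is currently running on
crm_resource --resource my_first_svc --locate

# Move the resource to another node
crm_resource --resource my_first_svc --move --node node2
```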

    Managing Resource Fail Counts

    The crm_failcount command queries the number of failures per resource on a given node. This tool can also be used to reset the failcount, allowing the resource to again run on nodes where it had failed too often. See crm_failcount(8) for a detailed introduction to this tool's usage and command syntax.
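A sketch of querying and resetting a fail count (resource and node names are examples):

```shell
# Query the fail count for a resource on a given node
crm_failcount --query --resource my_first_svc --node node1

# Reset the fail count so the resource may run there again
crm_failcount --delete --resource my_first_svc --node node1
```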

    -
    - Generate and Retrieve Node UUIDs -
    -
    -

    UUIDs are used to identify cluster nodes to ensure that - they can always be uniquely identified. The command - crm_uuid displays the - UUID of the node on which it is run. In very rare - circumstances, it may be necessary to set a node's UUID - to a known value. This can also be achieved with - crm_uuid , but you - should use this command with extreme caution. For more - information, refer to - crm_uuid(8).

    -
    Managing a Node's Standby Status

The crm_standby command can manipulate a node's standby attribute. Any node in standby mode is no longer eligible to host resources, and any resources that are there must be moved. Standby mode can be useful for performing maintenance tasks, such as kernel updates. Remove the standby attribute from the node when it should become a fully active member of the cluster again. See crm_standby(8) for a detailed introduction to this tool's usage and command syntax.
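For instance (exact flags have varied between Pacemaker versions, so verify against crm_standby(8) on your release):

```shell
# Put node1 into standby; its resources will be moved elsewhere
crm_standby --node node1 -v on

# After maintenance, delete the attribute so the node can host resources again
crm_standby --node node1 -D
```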

    diff --git a/src/polls/index.html b/src/polls/index.html index a245d20..4a83c92 100644 --- a/src/polls/index.html +++ b/src/polls/index.html @@ -1,42 +1,42 @@ --- layout: pacemaker -title: ClusterLabs - Polls +title: Polls ---

    Polls

    Surveys

    diff --git a/src/quickstart-redhat-6.html b/src/quickstart-redhat-6.html index d1f1d3f..ae2934a 100644 --- a/src/quickstart-redhat-6.html +++ b/src/quickstart-redhat-6.html @@ -1,197 +1,199 @@ --- layout: pacemaker title: RHEL 6 Quickstart ---
    {% include quickstart-common.html %}

    RHEL 6.4 onwards

    Install

    - Pacemaker ships as part of the Red - Hat High - Availability Add-on. The easiest way to try it out on RHEL is to install it from the Scientific Linux or CentOS repositories. + Pacemaker ships as part of the Red Hat + High Availability Add-on. + The easiest way to try it out on RHEL is to install it from the + Scientific Linux + or CentOS repositories.

    If you are already running CentOS or Scientific Linux, you can skip this step. Otherwise, to teach the machine where to find the CentOS packages, run:

[ALL] # cat <<'EOF' > /etc/yum.repos.d/centos.repo [centos-6-base] name=CentOS-$releasever - Base mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os #baseurl=http://mirror.centos.org/centos/$releasever/os/$basearch/ enabled=1 EOF

Next we use yum to install pacemaker and some other packages we will need:

    [ALL] # yum install pacemaker cman pcs ccs resource-agents

    Configure Cluster Membership and Messaging

The supported stack on RHEL6 is based on CMAN, so that's what Pacemaker uses too.

    We now create a CMAN cluster and populate it with some nodes. Note that the name cannot exceed 15 characters (we'll use 'pacemaker1').

    [ONE] # ccs -f /etc/cluster/cluster.conf --createcluster pacemaker1 [ONE] # ccs -f /etc/cluster/cluster.conf --addnode node1 [ONE] # ccs -f /etc/cluster/cluster.conf --addnode node2

Next we need to teach CMAN how to send its fencing requests to Pacemaker. We do this regardless of whether or not fencing is enabled within Pacemaker.

    [ONE] # ccs -f /etc/cluster/cluster.conf --addfencedev pcmk agent=fence_pcmk [ONE] # ccs -f /etc/cluster/cluster.conf --addmethod pcmk-redirect node1 [ONE] # ccs -f /etc/cluster/cluster.conf --addmethod pcmk-redirect node2 [ONE] # ccs -f /etc/cluster/cluster.conf --addfenceinst pcmk node1 pcmk-redirect port=node1 [ONE] # ccs -f /etc/cluster/cluster.conf --addfenceinst pcmk node2 pcmk-redirect port=node2

    Now copy /etc/cluster/cluster.conf to all the other nodes that will be part of the cluster.
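One way to do that, repeated for each remaining node (the node name is an example):

```shell
# Copy the CMAN configuration from the node where it was created
scp /etc/cluster/cluster.conf node2:/etc/cluster/cluster.conf
```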

    Start the Cluster

    CMAN was originally written for rgmanager and assumes the cluster should not start until the node has - quorum, + quorum, so before we try to start the cluster, we need to disable this behavior:

    [ALL] # echo "CMAN_QUORUM_TIMEOUT=0" >> /etc/sysconfig/cman

    Now, on each machine, run:

    [ALL] # service cman start [ALL] # service pacemaker start

    A note for users of prior RHEL versions

The original cluster shell (crmsh) is no longer available on RHEL. To help people make the transition, there is a quick reference guide showing the pcs equivalents of various crmsh commands.

    Set Cluster Options

With so many devices and possible topologies, it is nearly impossible to include fencing configuration in a document like this. For now, we will disable it.

    [ONE] # pcs property set stonith-enabled=false

One of the most common ways to deploy Pacemaker is in a 2-node configuration. However, quorum as a concept makes no sense in this scenario (you only have quorum when more than half the nodes are available), so we'll disable it too.

    [ONE] # pcs property set no-quorum-policy=ignore

    For demonstration purposes, we will force the cluster to move services after a single failure:

    [ONE] # pcs resource defaults migration-threshold=1

    Add a Resource

Let's add a cluster service. To make things easy, we'll choose one that doesn't require any configuration and works everywhere. Here's the command:

    [ONE] # pcs resource create my_first_svc Dummy op monitor interval=120s

    "my_first_svc" is the name the service will be known as.

    "ocf:pacemaker:Dummy" tells Pacemaker which script to use (Dummy - an agent that's useful as a template and for guides like this one), which namespace it is in (pacemaker) and what standard it conforms to - (OCF). + (OCF).

    "op monitor interval=120s" tells Pacemaker to check the health of this service every 2 minutes by calling the agent's monitor action.

    You should now be able to see the service running using:

    [ONE] # pcs status

    or

    [ONE] # crm_mon -1

    Simulate a Service Failure

    We can simulate an error by telling the service to stop directly (without telling the cluster):

    [ONE] # crm_resource --resource my_first_svc --force-stop

If you now run crm_mon in interactive mode (the default), you should see (within the monitor interval of 2 minutes) the cluster notice that my_first_svc failed and move it to another node.

    Next Steps

    diff --git a/src/quickstart-redhat.html b/src/quickstart-redhat.html index 062ac23..19e9305 100644 --- a/src/quickstart-redhat.html +++ b/src/quickstart-redhat.html @@ -1,158 +1,160 @@ --- layout: pacemaker title: RHEL 7 Quickstart ---
    {% include quickstart-common.html %}

    RHEL 7

    Install

    - Pacemaker ships as part of the Red - Hat High - Availability Add-on. The easiest way to try it out on RHEL is to install it from the Scientific Linux or CentOS repositories. + Pacemaker ships as part of the Red Hat + High Availability Add-on. + The easiest way to try it out on RHEL is to install it from the + Scientific Linux + or CentOS repositories.

    If you are already running CentOS or Scientific Linux, you can skip this step. Otherwise, to teach the machine where to find the CentOS packages, run:

[ALL] # cat <<'EOF' > /etc/yum.repos.d/centos.repo [centos-7-base] name=CentOS-$releasever - Base mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os #baseurl=http://mirror.centos.org/centos/$releasever/os/$basearch/ enabled=1 EOF

Next we use yum to install pacemaker and some other packages we will need:

    [ALL] # yum install pacemaker pcs resource-agents

    Create the Cluster

The supported stack on RHEL7 is based on Corosync 2, so that's what Pacemaker uses too.

    First we set up the authentication needed for pcs.

    [ALL] # echo CHANGEME | passwd --stdin hacluster [ONE] # pcs cluster auth node1 node2 -u hacluster -p CHANGEME --force

    We now create a cluster and populate it with some nodes. Note that the name cannot exceed 15 characters (we'll use 'pacemaker1').

    [ONE] # pcs cluster setup --force --name pacemaker1 node1 node2

    Start the Cluster

    [ONE] # pcs cluster start --all

    Set Cluster Options

With so many devices and possible topologies, it is nearly impossible to include fencing configuration in a document like this. For now, we will disable it.

    [ONE] # pcs property set stonith-enabled=false

One of the most common ways to deploy Pacemaker is in a 2-node configuration. However, quorum as a concept makes no sense in this scenario (you only have quorum when more than half the nodes are available), so we'll disable it too.

    [ONE] # pcs property set no-quorum-policy=ignore

    For demonstration purposes, we will force the cluster to move services after a single failure:

    [ONE] # pcs resource defaults migration-threshold=1

    Add a Resource

Let's add a cluster service. To make things easy, we'll choose one that doesn't require any configuration and works everywhere. Here's the command:

    [ONE] # pcs resource create my_first_svc Dummy op monitor interval=120s

    "my_first_svc" is the name the service will be known as.

    "ocf:pacemaker:Dummy" tells Pacemaker which script to use (Dummy - an agent that's useful as a template and for guides like this one), which namespace it is in (pacemaker) and what standard it conforms to - (OCF). + (OCF).

    "op monitor interval=120s" tells Pacemaker to check the health of this service every 2 minutes by calling the agent's monitor action.

    You should now be able to see the service running using:

    [ONE] # pcs status

    or

    [ONE] # crm_mon -1

    Simulate a Service Failure

    We can simulate an error by telling the service to stop directly (without telling the cluster):

    [ONE] # crm_resource --resource my_first_svc --force-stop

    If you now run crm_mon in interactive mode (the default), you should see (within the monitor interval of 2 minutes) the cluster notice that my_first_svc failed and move it to another node.

    Next Steps

    diff --git a/src/quickstart-suse-11.html b/src/quickstart-suse-11.html index 15d33e5..e8a07b0 100644 --- a/src/quickstart-suse-11.html +++ b/src/quickstart-suse-11.html @@ -1,129 +1,129 @@ --- layout: pacemaker title: SLES 11 Quickstart ---
    {% include quickstart-common.html %}

    SLES 11

    Install

Pacemaker ships as part of the SUSE High Availability Extension. To install, follow the provided documentation. It is also available in openSUSE Leap and openSUSE Tumbleweed (for openSUSE, see the SLES 12 Quickstart guide).

    Create the Cluster

    The supported stack on SLES11 is based on Corosync/OpenAIS.

    To get started, install the cluster stack on all nodes.

    [ALL] # zypper install ha-cluster-bootstrap

    First we initialize the cluster on the first machine (node1):

    [ONE] # ha-cluster-init

    Now we can join the cluster from the second machine (node2):

    [ONE] # ha-cluster-join -c node1

    These two steps create and start a basic cluster together with the HAWK web interface. If given additional arguments, ha-cluster-init can also configure STONITH and OCFS2 as part of initial configuration.

    For more details on ha-cluster-init, see the output of ha-cluster-init --help.

    Set Cluster Options

    For demonstration purposes, we will force the cluster to move services after a single failure:

    [ONE] # crm configure property migration-threshold=1

    Add a Resource

Let's add a cluster service. To make things easy, we'll choose one that doesn't require any configuration and works everywhere. Here's the command:

    [ONE] # crm configure primitive my_first_svc ocf:pacemaker:Dummy op monitor interval=120s

    "my_first_svc" is the name the service will be known as.

    "ocf:pacemaker:Dummy" tells Pacemaker which script to use (Dummy - an agent that's useful as a template and for guides like this one), which namespace it is in (pacemaker) and what standard it conforms to - (OCF). + (OCF).

    "op monitor interval=120s" tells Pacemaker to check the health of this service every 2 minutes by calling the agent's monitor action.

    You should now be able to see the service running using:

    [ONE] # crm status

    Simulate a Service Failure

We can simulate an error by telling the service to stop directly (without telling the cluster):

    [ONE] # crm_resource --resource my_first_svc --force-stop

If you now run crm_mon in interactive mode (the default), you should see (within the monitor interval of 2 minutes) the cluster notice that my_first_svc failed and move it to another node.

    You can also watch the transition from the HAWK dashboard, by going to https://node1:7630.

    Next Steps

    diff --git a/src/quickstart-suse.html b/src/quickstart-suse.html index d7d4601..764f731 100644 --- a/src/quickstart-suse.html +++ b/src/quickstart-suse.html @@ -1,131 +1,131 @@ --- layout: pacemaker title: SLES 12 Quickstart ---
    {% include quickstart-common.html %}

    SLES 12

    Install

    Pacemaker ships as part of the SUSE High Availability Extension. To install, follow the provided documentation. It is also available in openSUSE Leap and openSUSE Tumbleweed.

    Create the Cluster

    The supported stack on SLES12 is based on Corosync 2.x.

    To get started, install the cluster stack on all nodes.

    [ALL] # zypper install ha-cluster-bootstrap

    First we initialize the cluster on the first machine (node1):

    [ONE] # ha-cluster-init

    Now we can join the cluster from the second machine (node2):

    [ONE] # ha-cluster-join -c node1

    These two steps create and start a basic cluster together with the HAWK web interface. If given additional arguments, ha-cluster-init can also configure STONITH, OCFS2 and an administration IP address as part of initial configuration. It is also possible to choose whether to use multicast or unicast for corosync communication.

    For more details on ha-cluster-init, see the output of ha-cluster-init --help.

    Set Cluster Options

    For demonstration purposes, we will force the cluster to move services after a single failure:

    [ONE] # crm configure property migration-threshold=1

    Add a Resource

Let's add a cluster service. To make things easy, we'll choose one that doesn't require any configuration and works everywhere. Here's the command:

    [ONE] # crm configure primitive my_first_svc Dummy op monitor interval=120s

    "my_first_svc" is the name the service will be known as.

    "Dummy" tells Pacemaker which script to use (Dummy - an agent that's useful as a template and for guides like this one), which namespace it is in (pacemaker) and what standard it conforms to - (OCF). + (OCF).

    "op monitor interval=120s" tells Pacemaker to check the health of this service every 2 minutes by calling the agent's monitor action.

    You should now be able to see the service running using:

    [ONE] # crm status

    Simulate a Service Failure

We can simulate an error by telling the service to stop directly (without telling the cluster):

    [ONE] # crm_resource --resource my_first_svc --force-stop

If you now run crm_mon in interactive mode (the default), you should see (within the monitor interval of 2 minutes) the cluster notice that my_first_svc failed and move it to another node.

    You can also watch the transition from the HAWK dashboard, by going to https://node1:7630.

    Next Steps

    diff --git a/src/quickstart-ubuntu.html b/src/quickstart-ubuntu.html index 940ea4b..729d50e 100644 --- a/src/quickstart-ubuntu.html +++ b/src/quickstart-ubuntu.html @@ -1,153 +1,153 @@ --- layout: pacemaker title: Ubuntu Quickstart ---
    {% include quickstart-common.html %}

    Ubuntu

Ubuntu appears to have switched to Corosync 2 for its LTS releases.

We use aptitude to install pacemaker and some other packages we will need:

    [ALL] # aptitude install pacemaker corosync fence-agents

    Configure Cluster Membership and Messaging

Since the pcs tool from RHEL does not exist on Ubuntu, we will create the corosync configuration file on both machines manually:

[ALL] # cat <<EOF > /etc/corosync/corosync.conf totem { version: 2 secauth: off cluster_name: pacemaker1 transport: udpu } nodelist { node { ring0_addr: node1 nodeid: 101 } node { ring0_addr: node2 nodeid: 102 } } quorum { provider: corosync_votequorum two_node: 1 wait_for_all: 1 last_man_standing: 1 auto_tie_breaker: 0 } EOF

    Start the Cluster

    On each machine, run:

    [ALL] # service pacemaker start

    Set Cluster Options

With so many devices and possible topologies, it is nearly impossible to include fencing configuration in a document like this. For now, we will disable it.

    [ONE] # crm configure property stonith-enabled=false

One of the most common ways to deploy Pacemaker is in a 2-node configuration. However, quorum as a concept makes no sense in this scenario (you only have quorum when more than half the nodes are available), so we'll disable it too.

    [ONE] # crm configure property no-quorum-policy=ignore

    For demonstration purposes, we will force the cluster to move services after a single failure:

    [ONE] # crm configure property migration-threshold=1

    Add a Resource

Let's add a cluster service. To make things easy, we'll choose one that doesn't require any configuration and works everywhere. Here's the command:

    [ONE] # crm configure primitive my_first_svc ocf:pacemaker:Dummy op monitor interval=120s

    "my_first_svc" is the name the service will be known as.

    "ocf:pacemaker:Dummy" tells Pacemaker which script to use (Dummy - an agent that's useful as a template and for guides like this one), which namespace it is in (pacemaker) and what standard it conforms to - (OCF). + (OCF).

    "op monitor interval=120s" tells Pacemaker to check the health of this service every 2 minutes by calling the agent's monitor action.

    You should now be able to see the service running using:

    [ONE] # crm_mon -1

    Simulate a Service Failure

We can simulate an error by telling the service to stop directly (without telling the cluster):

    [ONE] # crm_resource --resource my_first_svc --force-stop

If you now run crm_mon in interactive mode (the default), you should see (within the monitor interval of 2 minutes) the cluster notice that my_first_svc failed and move it to another node.

    Next Steps