diff --git a/README.md b/README.md
index cf528aa..ef3178c 100644
--- a/README.md
+++ b/README.md
@@ -1,128 +1,139 @@
# ClusterLabs.org website
## Installing Jekyll
-ClusterLabs,org is partially generated by jekyll. Installing jekyll requires
-the following dependencies:
+ClusterLabs.org is partially generated by Jekyll, a Ruby gem. Jekyll requires the following dependencies:
* nodejs
* npm
-* ruby
-* ruby-devel
-* rubygems
-* rubygem-bundler
-* rubygem-rdiscount
+* Ruby 2.5.9
+* Bundler 1.16.1
-Once you have those, change to the `src` directory and run `bundle install`.
+Once you have these four dependencies, install Jekyll by changing to the `src` directory and running `bundle install`.
+
+Note:
+
+While the installation of nodejs and npm is trivial (with dnf), the installation of Ruby 2.5.9 and Bundler 1.16.1 can be more finicky. Here are some instructions that might work for you:
+
+* Install Ruby Version Manager and its dependencies: C/C++ build tools, zlib-devel, and autoconf.
+ * `sudo dnf install rvm`
+ * `sudo dnf install gcc gcc-c++`
+ * `sudo dnf install zlib-devel`
+ * `sudo dnf install autoconf`
+* Then, use rvm to install OpenSSL, install Ruby 2.5.9, and switch to it.
+ * `rvm pkg install openssl`
+ * `rvm install 2.5.9 --rubygems 2.6.10 --with-openssl-dir=$HOME/.rvm/usr && rvm use 2.5.9`
+* Then, use gem to install Bundler 1.16.1.
+ * `gem install bundler:1.16.1`
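Once the toolchain is in place, a quick sanity check avoids confusing `bundle install` failures later. This is a minimal sketch: the sample `ruby_v` assignment stands in for real `ruby -v` output, so the parsing is visible; replace it with the command shown in the comment.

```shell
# Sketch: verify the active Ruby before running `bundle install`.
# In real use, replace the sample assignment with: ruby_v="$(ruby -v)"
ruby_v="ruby 2.5.9p229 (2021-04-05 revision 67939) [x86_64-linux]"
case "$ruby_v" in
  "ruby 2.5.9"*) ruby_ok=yes ;;   # expected version is active
  *)             ruby_ok=no ;;    # wrong Ruby; re-run `rvm use 2.5.9`
esac
echo "ruby_ok=$ruby_ok"
```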
## Using Jekyll
ClusterLabs.org's Jekyll source is under the `src` directory. Jekyll will
generate static content into the `html` directory.
To generate content in a checkout for development and testing, change to the
`src` directory and run `bundle exec jekyll build` (to merely generate content)
or `bundle exec jekyll serve` (to generate and test via a local server).
To generate content on the production site, run
`JEKYLL_ENV=production bundle exec jekyll build` (which will enable such things
as asset digests).
If `src/Gemfile` changes, re-run `bundle install` afterward.
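The two build modes differ only in the `JEKYLL_ENV` prefix; a tiny wrapper (illustrative, not part of the repo) makes the choice explicit:

```shell
# Sketch: pick the build command for dev vs. production (run from `src`).
mode="${1:-dev}"   # pass "production" when building the live site
if [ "$mode" = "production" ]; then
  cmd="JEKYLL_ENV=production bundle exec jekyll build"
else
  cmd="bundle exec jekyll build"
fi
echo "$cmd"        # then run it, e.g.: eval "$cmd"
```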
### Updating Ruby gems
Display Ruby dependencies, with their current versions:
`bundle list`
Show available updates:
`bundle outdated`
Show where a gem comes from:
`bundle info $GEM`
Update one gem and its dependencies (this updates `Gemfile.lock`, which must be committed):
`bundle update $GEM`
If a gem can't be updated because it doesn't support the local Ruby version or
the installable versions of other gems, or if you need to raise a dependency's
version to fix a security issue, you can edit `Gemfile` to add a version
restriction like:
gem "gem-name", "2.7.0" -> exact version
gem "gem-name", ">= 2.0.2", "< 5.0" -> version within range
gem "gem-name", "~> 0.6.0" -> last number may increase
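In context, a restricted entry sits alongside the others in `src/Gemfile`. The gem names below are hypothetical, one per restriction style:

```ruby
# Hypothetical Gemfile fragment illustrating each restriction style:
source "https://rubygems.org"

gem "example-exact", "2.7.0"             # exactly 2.7.0
gem "example-range", ">= 2.0.2", "< 5.0" # any version within the range
gem "example-tilde", "~> 0.6.0"          # 0.6.x only; last number may increase
```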
## Images, stylesheets and JavaScripts
We use the jekyll-assets plugin to manage "assets" such as images, stylesheets,
and JavaScript. One advantage is that digest hashes are automatically added to
the generated filenames when in production mode. This allows "cache busting"
when an asset changes, so we can use long cache times on the server end.
Another advantage is that sources are minified when in production mode.
How CSS is managed:
* CSS is generated from SASS sources
* `src/_assets/stylesheets/main.scss` is just a list of imports
* all other *.scss files beneath `src/_assets/stylesheets` contain the SASS to
be imported by `main.scss`
* Jekyll will generate `html/assets/main.css` (or `main-_HASH_.css`) as the
combination of all imports
* web pages can reference the stylesheet via `{% stylesheet main %}`
JavaScript is managed similarly:
* `src/_assets/javascripts/main.js` is just a list of requires
* `src/_assets/javascripts/*.js` contain the JavaScript to be required by
`main.js`
* Jekyll will copy these to `html/assets`
* Jekyll will generate `html/assets/main.js` (or `main-_HASH_.js`) as the
combination of all JavaScript
* web pages can reference the script via `{% javascript main %}`
How images are managed:
* `src/_assets/images/*` are our images
* web pages can add an img tag using `{% image _NAME_._EXT_ %}`
* web pages can reference a path to an image (e.g. in a link's href)
using `{% asset_path _NAME_._EXT_ %}`
* CSS can reference a path to an image using
`url(asset_path("_NAME_._EXT_"))`
* only images that are referenced in one of these ways will be deployed
to the website, so `_assets` may contain image sources such as SVGs
that do not need to be deployed
* Tip: http://compresspng.com/ can often compress PNGs extremely well
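Taken together, a page or layout might use these tags like this (a hypothetical fragment; `logo.png` is an illustrative file name, not necessarily one in the repo):

```html
<!-- Hypothetical layout fragment using the jekyll-assets tags described above -->
<head>
  {% stylesheet main %}
  {% javascript main %}
</head>
<body>
  {% image logo.png %}   <!-- emits an img tag for the asset -->
  <a href="{% asset_path logo.png %}">Full-size image</a>
</body>
```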
## Site icons
Site icons used to be easy, right? `favicon.ico` seems downright traditional.
Unfortunately, site icons have become an ugly mess of incompatible proprietary
extensions. Even `favicon.ico` is just a proprietary extension (and obsolete, as
well). Now, there are also `apple-touch-icon[-NxN][-precomposed].png` (with at
least _12_ different sizes!), `browserconfig.xml`, `manifest.json`,
link tags with `rel=(icon|shortcut icon|apple-touch-icon-*)`, and Windows Phone
tile overlay divs.
If you want to be discouraged and confused, see:
* http://stackoverflow.com/questions/23849377/html-5-favicon-support
* https://mathiasbynens.be/notes/touch-icons
* https://css-tricks.com/favicon-quiz/
There is no way to handle the mess universally. In particular, some devices do
much better when different icon sizes are provided and listed in the HTML as
link tags, and will pick the size needed, whereas other devices will download
every single icon listed in those link tags, crippling page performance -- not
to mention the overhead that listing two dozen icon sizes adds to the HTML.
We've chosen a simple approach: provide two site icons, a 16x16 `favicon.ico`,
and a 180x180 `apple-touch-icon.png`, both listed in link tags in the HTML.
Most browsers/devices will choose one of these and scale it as needed.
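In the HTML head, the two icons are referenced with link tags along these lines (a sketch; the actual paths depend on where the icons are deployed):

```html
<!-- Sketch: the two link tags described above; paths are illustrative -->
<link rel="shortcut icon" href="/favicon.ico">
<link rel="apple-touch-icon" sizes="180x180" href="/apple-touch-icon.png">
```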
## Web server configuration
The clusterlabs.org web server is configured to redirect certain old URLs to
their new locations, so be careful about renaming files.
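For example, a renamed page would typically get a server-side rule along these lines (hypothetical Apache `mod_alias` syntax and paths; the real server configuration is maintained separately):

```apache
# Hypothetical httpd rule: keep an old URL working after a rename
Redirect permanent /quickstart.html /pacemaker/quickstart/
```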
diff --git a/src/_layouts/home.html b/src/_layouts/home.html
index 6db2f10..65f2917 100644
--- a/src/_layouts/home.html
+++ b/src/_layouts/home.html
@@ -1,207 +1,204 @@
---
layout: clusterlabs
---
The ClusterLabs stack unifies a large group of Open Source projects related to
High Availability into a cluster offering suitable
for both small and large deployments. Together,
Corosync,
Pacemaker,
DRBD,
ScanCore,
and many other projects have been enabling detection and recovery of
machine and application-level failures in production clusters since
1999. The ClusterLabs stack supports practically any redundancy
configuration imaginable.
-
- {% image clusterlabs3.svg %}
-
{% image Deploy-small.png %}
Deploy
We support many deployment scenarios, from the simplest
2-node standby cluster to a 32-node active/active
configuration.
We can also dramatically reduce hardware costs by allowing
several active/passive clusters to be combined and share a common
backup node.
{% image Monitor-small.png %}
Monitor
We monitor the system for both hardware and software failures.
In the event of a failure, we will automatically recover
your application and make sure it is available from one
of the remaining machines in the cluster.
{% image Recover-small.png %}
Recover
After a failure, we use advanced algorithms to quickly
determine the optimum locations for services based on
relative node preferences and/or requirements to run with
other cluster services (we call these "constraints").
At its core, a cluster is a distributed finite state
machine capable of co-ordinating the startup and recovery
of inter-related services across a set of machines.
System HA is possible without a cluster manager, but using one saves many headaches anyway.
Even a distributed and/or replicated application that is
able to survive the failure of one or more components can
benefit from a higher level cluster:
- awareness of other applications in the stack
- a shared quorum implementation and calculation
- data integrity through fencing (a non-responsive process may still be doing something)
- automated recovery of instances to ensure capacity
While SysV init replacements like systemd can provide
deterministic recovery of a complex stack of services, the
recovery is limited to one machine and lacks the context
of what is happening on other machines - context that is
crucial to distinguish a local failure from a clean
startup or from recovery after a total site failure.
"The definitive open-source high-availability stack for the Linux
platform builds upon the Pacemaker cluster resource manager."
-- LINUX Journal,
"Ahead
of the Pack: the Pacemaker High-Availability Stack"
A Pacemaker stack is built on five core components:
- libqb - core services (logging, IPC, etc.)
- Corosync - Membership, messaging and quorum
- Resource agents - A collection of scripts that interact with the underlying services managed by the cluster
- Fencing agents - A collection of scripts that interact with network power switches and SAN devices to isolate cluster members
- Pacemaker itself
We describe each of these in more detail, as well as optional components such as CLIs and GUIs.
Pacemaker has been around
since 2004
and is primarily a collaborative effort
between Red Hat
and SUSE; however, we also
receive considerable help and support from the folks
at LinBit and the community in
general.
Corosync also began life in 2004
but was then part of the OpenAIS project.
It is primarily a Red Hat initiative,
with considerable help and support from the folks in the community.
The core ClusterLabs team is made up of full-time
developers from Australia, Austria, Canada, China, Czech
Republic, England, Germany, Sweden and the USA. Contributions to
the code or documentation are always welcome.
The ClusterLabs stack ships with most modern enterprise
distributions and has been deployed in many critical
environments including Deutsche Flugsicherung GmbH
(DFS)
which uses Pacemaker to ensure
its air
traffic control systems are always available.