
diff --git a/.gitignore b/.gitignore
index b98b675309..70dab3d3b0 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,250 +1,251 @@
# Common
\#*
.\#*
GPATH
GRTAGS
GTAGS
TAGS
Makefile
Makefile.in
.deps
.dirstamp
.libs
*.pc
*.pyc
*.bz2
*.tar.gz
*.tgz
*.la
*.lo
*.o
*~
*.gcda
*.gcno
# Autobuild
aclocal.m4
autoconf
autoheader
autom4te.cache/
automake
build.counter
compile
/confdefs.h
config.guess
config.log
config.status
config.sub
configure
/conftest*
depcomp
install-sh
include/stamp-*
libtool
libtool.m4
ltdl.m4
libltdl
ltmain.sh
missing
py-compile
/m4/argz.m4
/m4/ltargz.m4
/m4/ltoptions.m4
/m4/ltsugar.m4
/m4/ltversion.m4
/m4/lt~obsolete.m4
test-driver
ylwrap
# Configure targets
/cts/CTS.py
/cts/CTSlab.py
/cts/CTSvars.py
/cts/LSBDummy
/cts/OCFIPraTest.py
/cts/benchmark/clubench
/cts/cluster_test
/cts/cts
/cts/cts-cli
/cts/cts-coverage
/cts/cts-exec
/cts/cts-fencing
/cts/cts-log-watcher
/cts/cts-regression
/cts/cts-scheduler
/cts/cts-support
/cts/fence_dummy
/cts/lxc_autogen.sh
/cts/pacemaker-cts-dummyd
/cts/pacemaker-cts-dummyd@.service
/daemons/execd/pacemaker_remote
/daemons/execd/pacemaker_remote.service
/daemons/fenced/fence_legacy
/daemons/pacemakerd/pacemaker
/daemons/pacemakerd/pacemaker.combined.upstart
/daemons/pacemakerd/pacemaker.service
/daemons/pacemakerd/pacemaker.upstart
/doc/Doxyfile
/extra/logrotate/pacemaker
/extra/resources/ClusterMon
/extra/resources/HealthSMART
/extra/resources/SysInfo
/extra/resources/ifspeed
/extra/resources/o2cb
include/config.h
include/config.h.in
include/crm_config.h
publican.cfg
/tools/cibsecret
/tools/crm_error
/tools/crm_failcount
/tools/crm_master
/tools/crm_mon.service
/tools/crm_mon.upstart
/tools/crm_report
/tools/crm_rule
/tools/crm_standby
/tools/pcmk_simtimes
/tools/report.collector
/tools/report.common
# Build targets
*.7
*.7.xml
*.7.html
*.8
*.8.xml
*.8.html
/daemons/attrd/pacemaker-attrd
/daemons/based/pacemaker-based
/daemons/based/cibmon
/daemons/controld/pacemaker-controld
/daemons/execd/cts-exec-helper
/daemons/execd/pacemaker-execd
/daemons/execd/pacemaker-remoted
/daemons/fenced/cts-fence-helper
/daemons/fenced/pacemaker-fenced
/daemons/fenced/pacemaker-fenced.xml
/daemons/pacemakerd/pacemakerd
/daemons/schedulerd/pacemaker-schedulerd
/daemons/schedulerd/pacemaker-schedulerd.xml
/doc/*/tmp/**
/doc/*/publish
/doc/*.build
/doc/*/en-US/Ap-*.xml
/doc/*/en-US/Ch-*.xml
/doc/.ABI-build
/doc/HTML
/doc/abi_dumps
/doc/abi-check
/doc/api/*
/doc/compat_reports
/doc/crm_fencing.html
/doc/publican-catalog*
/doc/shared/en-US/*.xml
-/doc/shared/en-US/images/pcmk-*.png
-/doc/shared/en-US/images/Policy-Engine-*.png
/doc/sphinx/*/_build
/doc/sphinx/*/conf.py
+/doc/sphinx/shared/images/*.png
/lib/common/md5.c
/maint/testcc_helper.cc
/maint/testcc_*_h
/maint/mocked/based
scratch
/tools/attrd_updater
/tools/cibadmin
/tools/crmadmin
/tools/crm_attribute
/tools/crm_diff
/tools/crm_mon
/tools/crm_node
/tools/crm_resource
/tools/crm_shadow
/tools/crm_simulate
/tools/crm_ticket
/tools/crm_verify
/tools/iso8601
/tools/stonith_admin
xml/crm.dtd
xml/pacemaker*.rng
xml/versions.rng
xml/api/api-result*.rng
lib/gnu/libgnu.a
lib/gnu/stdalign.h
*.coverity
# Packager artifacts
*.rpm
/mock
/pacemaker.spec
/rpm/[A-Z]*
# make dist/export working directory
pacemaker-[a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9]
# Test detritus
/cts/.regression.failed.diff
/cts/scheduler/*.ref
/cts/scheduler/*.up
/cts/scheduler/*.up.err
/cts/scheduler/bug-rh-1097457.log
/cts/scheduler/bug-rh-1097457.trs
/cts/scheduler/shadow.*
/cts/test-suite.log
/lib/*/tests/*/*.log
/lib/*/tests/*/*_test
/lib/*/tests/*/*.trs
/xml/test-*/*.up
/xml/test-*/*.up.err
/xml/assets/*.rng
/xml/assets/diffview.js
/xml/assets/xmlcatalog
# Release maintenance detritus
/maint/gnulib
# Formerly built files (helps when jumping back and forth in checkout)
/.ABI-build
/Doxyfile
/HTML
/abi_dumps
/abi-check
/compat_reports
/attrd
/cib
/coverage.sh
/crmd
/cts/HBDummy
/doc/Clusters_from_Scratch.txt
/doc/Pacemaker_Explained.txt
/doc/acls.html
+/doc/shared/en-US/images/pcmk-*.png
+/doc/shared/en-US/images/Policy-Engine-*.png
/fencing
/lib/common/tests/flags/pcmk__clear_flags_as
/lib/common/tests/flags/pcmk__set_flags_as
/lib/common/tests/flags/pcmk_all_flags_set
/lib/common/tests/flags/pcmk_any_flags_set
/lib/common/tests/operations/parse_op_key
/lib/common/tests/strings/pcmk__btoa
/lib/common/tests/strings/pcmk__parse_ll_range
/lib/common/tests/strings/pcmk__scan_double
/lib/common/tests/strings/pcmk__str_any_of
/lib/common/tests/strings/pcmk__strcmp
/lib/common/tests/strings/pcmk__char_in_any_str
/lib/common/tests/utils/pcmk_str_is_infinity
/lib/common/tests/utils/pcmk_str_is_minus_infinity
/lib/pengine/tests/rules/pe_cron_range_satisfied
/lrmd
/mcp
/pacemaker-*.spec
/pengine
#Other
coverity-*
logs
*.patch
*.diff
*.sed
*.orig
*.rej
*.swp
diff --git a/configure.ac b/configure.ac
index cb8754907e..69398fba4d 100644
--- a/configure.ac
+++ b/configure.ac
@@ -1,2127 +1,2127 @@
dnl
dnl autoconf for Pacemaker
dnl
dnl Copyright 2009-2020 the Pacemaker project contributors
dnl
dnl The version control history for this file may have further details.
dnl
dnl This source code is licensed under the GNU General Public License version 2
dnl or later (GPLv2+) WITHOUT ANY WARRANTY.
dnl ===============================================
dnl Bootstrap
dnl ===============================================
AC_PREREQ(2.64)
AC_CONFIG_MACRO_DIR([m4])
AC_DEFUN([AC_DATAROOTDIR_CHECKED])
dnl Suggested structure:
dnl information on the package
dnl checks for programs
dnl checks for libraries
dnl checks for header files
dnl checks for types
dnl checks for structures
dnl checks for compiler characteristics
dnl checks for library functions
dnl checks for system services
m4_include([version.m4])
AC_INIT([pacemaker], VERSION_NUMBER, [users@clusterlabs.org], [pacemaker],
PCMK_URL)
PCMK_FEATURES=""
AC_CONFIG_AUX_DIR(.)
AC_CANONICAL_HOST
dnl Where #defines go (e.g. `AC_CHECK_HEADERS' below)
dnl
dnl Internal header: include/config.h
dnl - Contains ALL defines
dnl - include/config.h.in is generated automatically by autoheader
dnl - NOT to be included in any header files except crm_internal.h
dnl (which is also not to be included in any other header files)
dnl
dnl External header: include/crm_config.h
dnl - Contains a subset of defines checked here
dnl - Manually edit include/crm_config.h.in to have configure include
dnl new defines
dnl - Should not include HAVE_* defines
dnl - Safe to include anywhere
AC_CONFIG_HEADERS([include/config.h include/crm_config.h])
dnl 1.11: minimum automake version required
dnl foreign: don't require GNU-standard top-level files
dnl tar-ustar: use (older) POSIX variant of generated tar rather than v7
dnl silent-rules: allow "--enable-silent-rules" (no-op in 1.13+)
dnl subdir-objects: keep .o's with their .c's (no-op in 2.0+)
AM_INIT_AUTOMAKE([1.11 foreign tar-ustar silent-rules subdir-objects])
dnl Require pkg-config (with a minimum version)
PKG_PROG_PKG_CONFIG(0.18)
AS_IF([test "x${PKG_CONFIG}" != x], [],
[AC_MSG_ERROR([pkgconfig must be installed to build ${PACKAGE}])])
dnl PKG_NOARCH_INSTALLDIR is not available prior to pkg-config 0.27 and
dnl pkgconf 0.8.10 (uncomment next line to mimic that scenario)
dnl m4_ifdef([PKG_NOARCH_INSTALLDIR], [m4_undefine([PKG_NOARCH_INSTALLDIR])])
m4_ifndef([PKG_NOARCH_INSTALLDIR], [
AC_DEFUN([PKG_NOARCH_INSTALLDIR], [
AC_SUBST([noarch_pkgconfigdir], ['${datadir}/pkgconfig'])
])
])
PKG_NOARCH_INSTALLDIR
dnl Example 2.4. Silent Custom Rule to Generate a File
dnl %-bar.pc: %.pc
dnl $(AM_V_GEN)$(LN_S) $(notdir $^) $@
dnl Versioned attributes implementation is not yet production-ready
AC_DEFINE_UNQUOTED(ENABLE_VERSIONED_ATTRS, 0, [Enable versioned attributes])
CC_IN_CONFIGURE=yes
export CC_IN_CONFIGURE
LDD=ldd
GLIB_TESTS
dnl ========================================================================
dnl Compiler characteristics
dnl ========================================================================
AC_PROG_CC dnl Can force other with environment variable "CC".
AC_PROG_CC_STDC
AC_PROG_CXX dnl C++ is not needed for build, just maintainer utilities
dnl We use md5.c from gnulib, which has its own m4 macros. Per its docs:
dnl "The macro gl_EARLY must be called as soon as possible after verifying that
dnl the C compiler is working. ... The core part of the gnulib checks are done
dnl by the macro gl_INIT." In addition, prevent gnulib from introducing OpenSSL
dnl as a dependency.
gl_EARLY
gl_SET_CRYPTO_CHECK_DEFAULT([no])
gl_INIT
# --enable-new-dtags: Use RUNPATH instead of RPATH.
# It is necessary to have this done before libtool does linker detection.
# See also: https://github.com/kronosnet/kronosnet/issues/107
AX_CHECK_LINK_FLAG([-Wl,--enable-new-dtags],
[AM_LDFLAGS=-Wl,--enable-new-dtags],
[AC_MSG_ERROR(["Linker support for --enable-new-dtags is required"])])
AC_SUBST([AM_LDFLAGS])
saved_LDFLAGS="$LDFLAGS"
LDFLAGS="$AM_LDFLAGS $LDFLAGS"
LT_INIT([dlopen])
LDFLAGS="$saved_LDFLAGS"
LTDL_INIT([convenience])
AC_TYPE_SIZE_T
AC_CHECK_SIZEOF(char)
AC_CHECK_SIZEOF(short)
AC_CHECK_SIZEOF(int)
AC_CHECK_SIZEOF(long)
AC_CHECK_SIZEOF(long long)
dnl ===============================================
dnl Helpers
dnl ===============================================
cc_supports_flag() {
local CFLAGS="-Werror $@"
AC_MSG_CHECKING(whether $CC supports "$@")
AC_COMPILE_IFELSE([AC_LANG_PROGRAM([[ ]], [[ ]])],
[RC=0; AC_MSG_RESULT(yes)],
[RC=1; AC_MSG_RESULT(no)])
return $RC
}
# Some tests need to use their own CFLAGS
cc_temp_flags() {
ac_save_CFLAGS="$CFLAGS"
CFLAGS="$*"
}
cc_restore_flags() {
CFLAGS=$ac_save_CFLAGS
}
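dnl A minimal usage sketch for the helpers above (illustrative only):
dnl     cc_temp_flags "-O0 -g"               # temporarily replace CFLAGS
dnl     cc_supports_flag -Wextra && FOO=yes  # probe a flag (under -Werror)
dnl     cc_restore_flags                     # restore the saved CFLAGS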
dnl ===============================================
dnl Configure Options
dnl ===============================================
dnl Actual library checks come later, but pkg-config can be used here to grab
dnl external values to use as defaults for configure options
dnl --enable-* options
AC_ARG_ENABLE([ansi],
[AS_HELP_STRING([--enable-ansi],
[force GCC to compile to ANSI standard for older compilers. @<:@no@:>@])],
)
AC_ARG_ENABLE([fatal-warnings],
[AS_HELP_STRING([--enable-fatal-warnings],
[enable pedantic and fatal warnings for gcc @<:@yes@:>@])],
)
AC_ARG_ENABLE([quiet],
[AS_HELP_STRING([--enable-quiet],
[suppress make output unless there is an error @<:@no@:>@])],
)
AC_ARG_ENABLE([no-stack],
[AS_HELP_STRING([--enable-no-stack],
[build only the scheduler and its requirements @<:@no@:>@])],
)
AC_ARG_ENABLE([upstart],
[AS_HELP_STRING([--enable-upstart],
[enable support for managing resources via Upstart @<:@try@:>@])],
[],
[enable_upstart=try],
)
AC_ARG_ENABLE([systemd],
[AS_HELP_STRING([--enable-systemd],
[enable support for managing resources via systemd @<:@try@:>@])],
[],
[enable_systemd=try],
)
AC_ARG_ENABLE([hardening],
[AS_HELP_STRING([--enable-hardening],
[harden the resulting executables/libraries @<:@try@:>@])],
[ HARDENING="${enableval}" ],
[ HARDENING=try ],
)
# By default, we add symlinks at the pre-2.0.0 daemon name locations, so that:
# (1) tools that directly invoke those names for metadata etc. will still work
# (2) this installation can be used in a bundle container image used with
# cluster hosts running Pacemaker 1.1.17+
# If you know your target systems will not have any need for it, you can
# disable this option. Once the above use cases are no longer in wide use, we
# can disable this option by default, and once we no longer want to support
# them at all, we can drop the option altogether.
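# For example (illustrative), a build for hosts known not to need the old
# names could be configured with:
#     ./configure --disable-legacy-links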
AC_ARG_ENABLE([legacy-links],
[AS_HELP_STRING([--enable-legacy-links],
[add symlinks for old daemon names @<:@yes@:>@])],
[ LEGACY_LINKS="${enableval}" ],
[ LEGACY_LINKS=yes ],
)
AM_CONDITIONAL(BUILD_LEGACY_LINKS, test "x${LEGACY_LINKS}" = "xyes")
dnl --with-* options
AC_DEFUN([VERSION_ARG],
[AC_ARG_WITH([version],
[AS_HELP_STRING([--with-version=VERSION],
[override package version @<:@$1@:>@])],
[ PACKAGE_VERSION="$withval" ])]
)
VERSION_ARG(VERSION_NUMBER)
AC_ARG_WITH([corosync],
[AS_HELP_STRING([--with-corosync],
[support the Corosync messaging and membership layer])],
[ SUPPORT_CS=$withval ],
[ SUPPORT_CS=try ],
)
AC_ARG_WITH([nagios],
[AS_HELP_STRING([--with-nagios],
[support nagios remote monitoring])],
[ SUPPORT_NAGIOS=$withval ],
[ SUPPORT_NAGIOS=try ],
)
AC_ARG_WITH([nagios-plugin-dir],
[AS_HELP_STRING([--with-nagios-plugin-dir=DIR],
[directory for nagios plugins @<:@LIBEXECDIR/nagios/plugins@:>@])],
[ NAGIOS_PLUGIN_DIR="$withval" ]
)
AC_ARG_WITH([nagios-metadata-dir],
[AS_HELP_STRING([--with-nagios-metadata-dir=DIR],
[directory for nagios plugins metadata @<:@DATADIR/nagios/plugins-metadata@:>@])],
[ NAGIOS_METADATA_DIR="$withval" ]
)
AC_ARG_WITH([acl],
[AS_HELP_STRING([--with-acl],
[support CIB ACL])],
[ SUPPORT_ACL=$withval ],
[ SUPPORT_ACL=yes ],
)
AC_ARG_WITH([cibsecrets],
[AS_HELP_STRING([--with-cibsecrets],
[support separate file for CIB secrets])],
[ SUPPORT_CIBSECRETS=$withval ],
[ SUPPORT_CIBSECRETS=no ],
)
PCMK_GNUTLS_PRIORITIES="NORMAL"
AC_ARG_WITH([gnutls-priorities],
[AS_HELP_STRING([--with-gnutls-priorities],
[default GnuTLS cipher priorities @<:@NORMAL@:>@])],
[ test x"$withval" = x"no" || PCMK_GNUTLS_PRIORITIES="$withval" ]
)
INITDIR=""
AC_ARG_WITH([initdir],
[AS_HELP_STRING([--with-initdir=DIR],
[directory for init (rc) scripts])],
[ INITDIR="$withval" ]
)
systemdsystemunitdir="${systemdsystemunitdir-}"
AC_ARG_WITH([systemdsystemunitdir],
[AS_HELP_STRING([--with-systemdsystemunitdir=DIR],
[directory for systemd unit files (advanced option: must match what systemd uses)])],
[ systemdsystemunitdir="$withval" ]
)
SUPPORT_PROFILING=0
AC_ARG_WITH([profiling],
[AS_HELP_STRING([--with-profiling],
[disable optimizations for effective profiling])],
[ SUPPORT_PROFILING=$withval ]
)
AC_ARG_WITH([coverage],
[AS_HELP_STRING([--with-coverage],
[disable optimizations for effective coverage testing])],
[ SUPPORT_COVERAGE=$withval ]
)
PUBLICAN_BRAND="common"
AC_ARG_WITH([brand],
[AS_HELP_STRING([--with-brand=brand],
[brand to use for generated documentation (set empty for no docs) @<:@common@:>@])],
[ test x"$withval" = x"no" || PUBLICAN_BRAND="$withval" ]
)
AC_SUBST(PUBLICAN_BRAND)
BUG_URL=""
AC_ARG_WITH([bug-url],
[AS_HELP_STRING([--with-bug-url=DIR],
[address where users should submit bug reports @<:@https://bugs.clusterlabs.org/enter_bug.cgi?product=Pacemaker@:>@])],
[ BUG_URL="$withval" ]
)
CONFIGDIR=""
AC_ARG_WITH([configdir],
[AS_HELP_STRING([--with-configdir=DIR],
[directory for Pacemaker configuration file @<:@SYSCONFDIR/sysconfig@:>@])],
[ CONFIGDIR="$withval" ]
)
CRM_LOG_DIR=""
AC_ARG_WITH([logdir],
[AS_HELP_STRING([--with-logdir=DIR],
[directory for Pacemaker log file @<:@LOCALSTATEDIR/log/pacemaker@:>@])],
[ CRM_LOG_DIR="$withval" ]
)
CRM_BUNDLE_DIR=""
AC_ARG_WITH([bundledir],
[AS_HELP_STRING([--with-bundledir=DIR],
[directory for Pacemaker bundle logs @<:@LOCALSTATEDIR/log/pacemaker/bundles@:>@])],
[ CRM_BUNDLE_DIR="$withval" ]
)
AC_ARG_WITH([sanitizers],
[AS_HELP_STRING([--with-sanitizers=...,...],
[enable SANitizer build, do *NOT* use for production. Only ASAN/UBSAN/TSAN are currently supported])],
[ SANITIZERS="$withval" ],
[ SANITIZERS="" ])
dnl The not-yet-released autoconf 2.70 will have a --runstatedir option.
dnl Until that's available, emulate it with our own --with-runstatedir.
pcmk_runstatedir=""
AC_ARG_WITH([runstatedir],
[AS_HELP_STRING([--with-runstatedir=DIR],
[modifiable per-process data @<:@LOCALSTATEDIR/run@:>@ (ignored if --runstatedir is available)])],
[ pcmk_runstatedir="$withval" ]
)
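dnl For example (illustrative): ./configure --with-runstatedir=/run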
dnl This defaults to /usr/lib rather than libdir because it's determined by the
dnl OCF project and not pacemaker. Even if a user wants to install pacemaker to
dnl /usr/local or such, the OCF agents will be expected in their usual
dnl location. However, we do give the user the option to override it.
OCF_ROOT_DIR="/usr/lib/ocf"
AC_ARG_WITH([ocfdir],
[AS_HELP_STRING([--with-ocfdir=DIR],
[OCF resource agent root directory (advanced option: changing this may break other cluster components unless similarly configured) @<:@/usr/lib/ocf@:>@])],
[ OCF_ROOT_DIR="$withval" ]
)
AC_SUBST(OCF_ROOT_DIR)
dnl Get default from fence-agents if available
PKG_CHECK_VAR([FA_PREFIX], [fence-agents], [prefix],
[PCMK__FENCE_BINDIR="${FA_PREFIX}/sbin"],
[PCMK__FENCE_BINDIR="$sbindir"])
AC_ARG_WITH([fence-bindir],
[AS_HELP_STRING([--with-fence-bindir=DIR], m4_normalize([
directory for executable fence agents @<:@value from fence-agents
package if available otherwise SBINDIR@:>@]))],
[ PCMK__FENCE_BINDIR="$withval" ]
)
AC_SUBST(PCMK__FENCE_BINDIR)
CRM_DAEMON_USER=""
AC_ARG_WITH([daemon-user],
[AS_HELP_STRING([--with-daemon-user=USER],
[user to run unprivileged Pacemaker daemons as (advanced option: changing this may break other cluster components unless similarly configured) @<:@hacluster@:>@])],
[ CRM_DAEMON_USER="$withval" ]
)
CRM_DAEMON_GROUP=""
AC_ARG_WITH([daemon-group],
[AS_HELP_STRING([--with-daemon-group=GROUP],
[group to run unprivileged Pacemaker daemons as (advanced option: changing this may break other cluster components unless similarly configured) @<:@haclient@:>@])],
[ CRM_DAEMON_GROUP="$withval" ]
)
dnl Deprecated options
AC_ARG_WITH([pkg-name],
[AS_HELP_STRING([--with-pkg-name=name],
[deprecated and unused (will be removed in a future release)])],
)
AC_ARG_WITH([pkgname],
[AS_HELP_STRING([--with-pkgname=name],
[deprecated and unused (will be removed in a future release)])],
)
dnl ===============================================
dnl General Processing
dnl ===============================================
AC_DEFINE_UNQUOTED(PACEMAKER_VERSION, "$PACKAGE_VERSION",
[Current pacemaker version])
PACKAGE_SERIES=`echo $PACKAGE_VERSION | awk -F. '{ print $1"."$2 }'`
AC_SUBST(PACKAGE_SERIES)
AC_SUBST(PACKAGE_VERSION)
AC_PROG_LN_S
AC_PROG_MKDIR_P
if cc_supports_flag -Werror; then
WERROR="-Werror"
else
WERROR=""
fi
# Normalize enable_fatal_warnings (defaulting to yes, when compiler supports it)
if test "x${enable_fatal_warnings}" != "xno" ; then
if test "$GCC" = "yes" && test "x${WERROR}" != "x" ; then
enable_fatal_warnings=yes
else
AC_MSG_NOTICE(Compiler does not support fatal warnings)
enable_fatal_warnings=no
fi
fi
INIT_EXT=""
echo Our Host OS: $host_os/$host
AC_MSG_NOTICE(Sanitizing prefix: ${prefix})
case $prefix in
NONE)
prefix=/usr
dnl Fix default variables - "prefix" variable if not specified
if test "$localstatedir" = "\${prefix}/var"; then
localstatedir="/var"
fi
if test "$sysconfdir" = "\${prefix}/etc"; then
sysconfdir="/etc"
fi
;;
esac
AC_MSG_NOTICE(Sanitizing exec_prefix: ${exec_prefix})
case $exec_prefix in
prefix|NONE)
exec_prefix=$prefix
;;
esac
AC_MSG_NOTICE(Sanitizing INITDIR: ${INITDIR})
case $INITDIR in
prefix) INITDIR=$prefix;;
"")
AC_MSG_CHECKING(which init (rc) directory to use)
for initdir in /etc/init.d /etc/rc.d/init.d /sbin/init.d \
/usr/local/etc/rc.d /etc/rc.d
do
if
test -d $initdir
then
INITDIR=$initdir
break
fi
done
AC_MSG_RESULT($INITDIR)
;;
esac
AC_SUBST(INITDIR)
AC_MSG_NOTICE(Sanitizing libdir: ${libdir})
case $libdir in
prefix|NONE)
AC_MSG_CHECKING(which lib directory to use)
for aDir in lib64 lib
do
trydir="${exec_prefix}/${aDir}"
if
test -d ${trydir}
then
libdir=${trydir}
break
fi
done
AC_MSG_RESULT($libdir);
;;
esac
dnl Expand autoconf variables so that we don't end up with '${prefix}'
dnl in #defines and python scripts
dnl NOTE: Autoconf deliberately leaves them unexpanded to allow
dnl make exec_prefix=/foo install
dnl No longer being able to do this seems like no great loss to me...
eval prefix="`eval echo ${prefix}`"
eval exec_prefix="`eval echo ${exec_prefix}`"
eval bindir="`eval echo ${bindir}`"
eval sbindir="`eval echo ${sbindir}`"
eval libexecdir="`eval echo ${libexecdir}`"
eval datadir="`eval echo ${datadir}`"
eval sysconfdir="`eval echo ${sysconfdir}`"
eval sharedstatedir="`eval echo ${sharedstatedir}`"
eval localstatedir="`eval echo ${localstatedir}`"
eval libdir="`eval echo ${libdir}`"
eval includedir="`eval echo ${includedir}`"
eval oldincludedir="`eval echo ${oldincludedir}`"
eval infodir="`eval echo ${infodir}`"
eval mandir="`eval echo ${mandir}`"
dnl Home-grown variables
if test "x${runstatedir}" = "x"; then
if test "x${pcmk_runstatedir}" = "x"; then
runstatedir="${localstatedir}/run"
else
runstatedir="${pcmk_runstatedir}"
fi
fi
eval runstatedir="$(eval echo ${runstatedir})"
AC_DEFINE_UNQUOTED([PCMK_RUN_DIR], ["$runstatedir"],
[Location for modifiable per-process data])
AC_SUBST(runstatedir)
eval INITDIR="${INITDIR}"
eval docdir="`eval echo ${docdir}`"
if test x"${docdir}" = x""; then
docdir=${datadir}/doc/${PACKAGE}-${VERSION}
fi
AC_SUBST(docdir)
if test x"${CONFIGDIR}" = x""; then
CONFIGDIR="${sysconfdir}/sysconfig"
fi
AC_SUBST(CONFIGDIR)
if test x"${CRM_LOG_DIR}" = x""; then
CRM_LOG_DIR="${localstatedir}/log/pacemaker"
fi
AC_DEFINE_UNQUOTED(CRM_LOG_DIR,"$CRM_LOG_DIR", Location for Pacemaker log file)
AC_SUBST(CRM_LOG_DIR)
if test x"${CRM_BUNDLE_DIR}" = x""; then
CRM_BUNDLE_DIR="${localstatedir}/log/pacemaker/bundles"
fi
AC_DEFINE_UNQUOTED(CRM_BUNDLE_DIR,"$CRM_BUNDLE_DIR", Location for Pacemaker bundle logs)
AC_SUBST(CRM_BUNDLE_DIR)
eval PCMK__FENCE_BINDIR="`eval echo ${PCMK__FENCE_BINDIR}`"
AC_DEFINE_UNQUOTED(PCMK__FENCE_BINDIR,"$PCMK__FENCE_BINDIR",
[Location for executable fence agents])
if test x"${PCMK_GNUTLS_PRIORITIES}" = x""; then
AC_MSG_ERROR([Empty string not applicable with --with-gnutls-priorities])
fi
AC_DEFINE_UNQUOTED([PCMK_GNUTLS_PRIORITIES], ["$PCMK_GNUTLS_PRIORITIES"],
[GnuTLS cipher priorities])
if test x"${BUG_URL}" = x""; then
BUG_URL="https://bugs.clusterlabs.org/enter_bug.cgi?product=Pacemaker"
fi
AC_SUBST(BUG_URL)
for j in prefix exec_prefix bindir sbindir libexecdir datadir sysconfdir \
sharedstatedir localstatedir libdir includedir oldincludedir infodir \
mandir INITDIR docdir CONFIGDIR
do
dirname=`eval echo '${'${j}'}'`
if
test ! -d "$dirname"
then
AC_MSG_WARN([$j directory ($dirname) does not exist!])
fi
done
us_auth=
AC_CHECK_HEADER([sys/socket.h], [
AC_CHECK_DECL([SO_PEERCRED], [
# Linux
AC_CHECK_TYPE([struct ucred], [
us_auth=peercred_ucred;
AC_DEFINE([US_AUTH_PEERCRED_UCRED], [1],
[Define if Unix socket auth method is
getsockopt(s, SO_PEERCRED, &ucred, ...)])
], [
# OpenBSD
AC_CHECK_TYPE([struct sockpeercred], [
us_auth=localpeercred_sockpeercred;
AC_DEFINE([US_AUTH_PEERCRED_SOCKPEERCRED], [1],
[Define if Unix socket auth method is
getsockopt(s, SO_PEERCRED, &sockpeercred, ...)])
], [], [[#include <sys/socket.h>]])
], [[#define _GNU_SOURCE
#include <sys/socket.h>]])
], [], [[#include <sys/socket.h>]])
])
if test -z "${us_auth}"; then
# FreeBSD
AC_CHECK_DECL([getpeereid], [
us_auth=getpeereid;
AC_DEFINE([US_AUTH_GETPEEREID], [1],
[Define if Unix socket auth method is
getpeereid(s, &uid, &gid)])
], [
# Solaris/OpenIndiana
AC_CHECK_DECL([getpeerucred], [
us_auth=getpeerucred;
AC_DEFINE([US_AUTH_GETPEERUCRED], [1],
[Define if Unix socket auth method is
getpeercred(s, &ucred)])
], [
AC_MSG_ERROR([No way to authenticate a Unix socket peer])
], [[#include <ucred.h>]])
])
fi
dnl This OS-based decision-making is poor autotools practice;
dnl feature-based mechanisms are strongly preferred.
dnl
dnl So keep this section to a bare minimum; regard as a "necessary evil".
case "$host_os" in
*bsd*)
AC_DEFINE_UNQUOTED(ON_BSD, 1, Compiling for BSD platform)
INIT_EXT=".sh"
;;
*solaris*)
AC_DEFINE_UNQUOTED(ON_SOLARIS, 1, Compiling for Solaris platform)
;;
*linux*)
AC_DEFINE_UNQUOTED(ON_LINUX, 1, Compiling for Linux platform)
;;
darwin*)
AC_DEFINE_UNQUOTED(ON_DARWIN, 1, Compiling for Darwin platform)
LIBS="$LIBS -L${prefix}/lib"
CFLAGS="$CFLAGS -I${prefix}/include"
;;
esac
AC_SUBST(INIT_EXT)
AC_MSG_NOTICE(Host CPU: $host_cpu)
case "$host_cpu" in
ppc64|powerpc64)
case $CFLAGS in
*powerpc64*)
;;
*)
if test "$GCC" = yes; then
CFLAGS="$CFLAGS -m64"
fi
;;
esac
;;
esac
# C99 doesn't guarantee the uint64_t type and related format specifiers, but
# our prerequisites, corosync and libqb, use them widely, so the target
# platforms are already constrained to "64bit-clean" ones (which doesn't
# imply a native 64-bit width), and hence we deliberately refrain from
# artificial surrogates (sans manipulation through cached values).
AC_CACHE_VAL(
[pcmk_cv_decl_inttypes],
[
AC_CHECK_DECLS(
[PRIu64, PRIu32, PRIx32,
SCNu64],
[pcmk_cv_decl_inttypes="PRIu64 PRIu32 PRIx32 SCNu64"],
[
# react only to a cached "no" result, and error out accordingly
if test "x$ac_cv_have_decl_PRIu64" = xno; then
AC_MSG_ERROR([lack of inttypes.h based specifier serving uint64_t (PRIu64)])
elif test "x$ac_cv_have_decl_PRIu32" = xno; then
AC_MSG_ERROR([lack of inttypes.h based specifier serving uint32_t (PRIu32)])
elif test "x$ac_cv_have_decl_PRIx32" = xno; then
AC_MSG_ERROR([lack of inttypes.h based hexa specifier serving uint32_t (PRIx32)])
elif test "x$ac_cv_have_decl_SCNu64" = xno; then
AC_MSG_ERROR([lack of inttypes.h based specifier gathering uint64_t (SCNu64)])
fi
],
[[#include <inttypes.h>]]
)
]
)
(
set $pcmk_cv_decl_inttypes
AC_DEFINE_UNQUOTED([U64T], [$1], [Correct format specifier for U64T])
AC_DEFINE_UNQUOTED([U32T], [$2], [Correct format specifier for U32T])
AC_DEFINE_UNQUOTED([X32T], [$3], [Correct format specifier for X32T])
AC_DEFINE_UNQUOTED([U64TS], [$4], [Correct format specifier for U64TS])
)
dnl ===============================================
dnl Program Paths
dnl ===============================================
PATH="$PATH:/sbin:/usr/sbin:/usr/local/sbin:/usr/local/bin"
export PATH
dnl Replacing AC_PROG_LIBTOOL with AC_CHECK_PROG because LIBTOOL
dnl was NOT being expanded all the time thus causing things to fail.
AC_CHECK_PROGS(LIBTOOL, glibtool libtool libtool15 libtool13)
dnl Pacemaker's executable python scripts will invoke the python specified by
dnl configure's PYTHON variable. If not specified, AM_PATH_PYTHON will check a
dnl built-in list with (unversioned) "python" having precedence. To configure
dnl Pacemaker to use a specific python interpreter version, define PYTHON
dnl when calling configure, for example: ./configure PYTHON=/usr/bin/python3.6
dnl Ensure PYTHON is an absolute path
if test x"${PYTHON}" != x""; then
AC_PATH_PROG([PYTHON], [$PYTHON])
fi
case "x$PYTHON" in
x*python3*|x*platform-python*)
dnl When used with Python 3, Pacemaker requires a minimum of 3.2
AM_PATH_PYTHON([3.2])
;;
*)
dnl Otherwise, Pacemaker requires a minimum of 2.7
AM_PATH_PYTHON([2.7])
;;
esac
AC_PATH_PROGS([ASCIIDOC_CONV], [asciidoc asciidoctor])
AC_PATH_PROG([HELP2MAN], [help2man])
AC_PATH_PROG([PUBLICAN], [publican])
AC_PATH_PROG([SPHINX], [sphinx-build])
AC_PATH_PROG([INKSCAPE], [inkscape])
AC_PATH_PROG([XSLTPROC], [xsltproc])
AC_PATH_PROG([XMLCATALOG], [xmlcatalog])
dnl BASH is already an environment variable, so use something else
AC_PATH_PROG([BASH_PATH], [bash])
AC_PATH_PROGS(VALGRIND_BIN, valgrind, /usr/bin/valgrind)
AC_DEFINE_UNQUOTED(VALGRIND_BIN, "$VALGRIND_BIN", Valgrind command)
if test x"${LIBTOOL}" = x""; then
AC_MSG_ERROR(You need (g)libtool installed in order to build ${PACKAGE})
fi
dnl Bash is needed for building man pages and running regression tests
if test x"${BASH_PATH}" = x""; then
AC_MSG_ERROR(bash must be installed in order to build ${PACKAGE})
fi
AM_CONDITIONAL(BUILD_HELP, test x"${HELP2MAN}" != x"")
if test x"${HELP2MAN}" != x""; then
PCMK_FEATURES="$PCMK_FEATURES generated-manpages"
fi
MANPAGE_XSLT=""
if test x"${XSLTPROC}" != x""; then
AC_MSG_CHECKING(docbook to manpage transform)
# first try to figure out correct template using xmlcatalog query,
# resort to extensive (semi-deterministic) file search if that fails
DOCBOOK_XSL_URI='http://docbook.sourceforge.net/release/xsl/current'
DOCBOOK_XSL_PATH='manpages/docbook.xsl'
MANPAGE_XSLT=$(${XMLCATALOG} "" ${DOCBOOK_XSL_URI}/${DOCBOOK_XSL_PATH} \
| sed -n 's|^file://||p;q')
if test x"${MANPAGE_XSLT}" = x""; then
DIRS=$(find "${datadir}" -name $(basename $(dirname ${DOCBOOK_XSL_PATH})) \
-type d | LC_ALL=C sort)
XSLT=$(basename ${DOCBOOK_XSL_PATH})
for d in ${DIRS}; do
if test -f "${d}/${XSLT}"; then
MANPAGE_XSLT="${d}/${XSLT}"
break
fi
done
fi
fi
AC_MSG_RESULT($MANPAGE_XSLT)
AC_SUBST(MANPAGE_XSLT)
AM_CONDITIONAL(BUILD_XML_HELP, test x"${MANPAGE_XSLT}" != x"")
if test x"${MANPAGE_XSLT}" != x""; then
PCMK_FEATURES="$PCMK_FEATURES agent-manpages"
fi
AM_CONDITIONAL([IS_ASCIIDOC], [echo "${ASCIIDOC_CONV}" | grep -Eq 'asciidoc$'])
AM_CONDITIONAL([BUILD_ASCIIDOC], [test "x${ASCIIDOC_CONV}" != x])
if test "x${ASCIIDOC_CONV}" != x; then
PCMK_FEATURES="$PCMK_FEATURES ascii-docs"
fi
publican_intree_brand=no
if test x"${PUBLICAN_BRAND}" != x"" \
&& test x"${PUBLICAN}" != x"" \
&& test x"${INKSCAPE}" != x""; then
dnl special handling for clusterlabs brand (possibly in-tree version used)
test "${PUBLICAN_BRAND}" != "clusterlabs" \
|| test -d /usr/share/publican/Common_Content/clusterlabs
if test $? -ne 0; then
dnl Unknown option: brand_dir vs. Option brand_dir requires an argument
if ${PUBLICAN} build --brand_dir 2>&1 | grep -Eq 'brand_dir$'; then
AC_MSG_WARN([Cannot use in-tree clusterlabs brand, resorting to common])
PUBLICAN_BRAND=common
else
publican_intree_brand=yes
fi
fi
AC_MSG_NOTICE([Enabling Publican-generated documentation using ${PUBLICAN_BRAND} brand])
PCMK_FEATURES="$PCMK_FEATURES publican-docs"
fi
AM_CONDITIONAL([BUILD_DOCBOOK],
[test x"${PUBLICAN_BRAND}" != x"" \
&& test x"${PUBLICAN}" != x"" \
&& test x"${INKSCAPE}" != x""])
AM_CONDITIONAL([PUBLICAN_INTREE_BRAND],
[test x"${publican_intree_brand}" = x"yes"])
AM_CONDITIONAL([BUILD_SPHINX_DOCS],
- [test x"${SPHINX}" != x""])
+ [test x"${SPHINX}" != x"" && test x"${INKSCAPE}" != x""])
dnl Pacemaker's shell scripts (and thus man page builders) rely on GNU getopt
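dnl The check below relies on `getopt -T`, which exits with status 4 only
dnl in the GNU-compatible (enhanced) implementation.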
AC_MSG_CHECKING([for GNU-compatible getopt])
IFS_orig=$IFS
IFS=:
for PATH_DIR in $PATH; do
IFS=$IFS_orig
GETOPT_PATH="${PATH_DIR}/getopt"
if test -f "$GETOPT_PATH" && test -x "$GETOPT_PATH" ; then
$GETOPT_PATH -T >/dev/null 2>/dev/null
if test $? -eq 4; then
break
fi
fi
GETOPT_PATH=""
done
IFS=$IFS_orig
if test -n "$GETOPT_PATH"; then
AC_MSG_RESULT([$GETOPT_PATH])
else
AC_MSG_RESULT([no])
AC_MSG_ERROR(Pacemaker build requires a GNU-compatible getopt)
fi
AC_SUBST([GETOPT_PATH])
dnl ========================================================================
dnl checks for library functions to replace them
dnl
dnl NoSuchFunctionName:
dnl is a dummy function which no system supplies. It is here to make
dnl the system compile semi-correctly on OpenBSD which doesn't know
dnl how to create an empty archive
dnl
dnl scandir: Only on BSD.
dnl System-V systems may have it, but hidden and/or deprecated.
dnl A replacement function is supplied for it.
dnl
dnl setenv: is some bsdish function that should also be avoided (use
dnl putenv instead)
dnl On the other hand, putenv doesn't provide the right API for the
dnl code and has memory leaks designed in (sigh...)
dnl A replacement function is supplied for it.
dnl
dnl strerror: returns a string that corresponds to an errno.
dnl A replacement function is supplied for it.
dnl
dnl strnlen: is a gnu function similar to strlen, but safer.
dnl We wrote a tolerably-fast replacement function for it.
dnl
dnl strndup: is a gnu function similar to strdup, but safer.
dnl We wrote a tolerably-fast replacement function for it.
AC_REPLACE_FUNCS(alphasort NoSuchFunctionName scandir setenv strerror strchrnul unsetenv strnlen strndup)
dnl ===============================================
dnl Libraries
dnl ===============================================
AC_CHECK_LIB(socket, socket) dnl -lsocket
AC_CHECK_LIB(c, dlopen) dnl if dlopen is in libc...
AC_CHECK_LIB(dl, dlopen) dnl -ldl (for Linux)
AC_CHECK_LIB(rt, sched_getscheduler) dnl -lrt (for Tru64)
AC_CHECK_LIB(gnugetopt, getopt_long) dnl -lgnugetopt ( if available )
AC_CHECK_LIB(pam, pam_start) dnl -lpam (if available)
AC_CHECK_FUNCS([sched_setscheduler])
if test "$ac_cv_func_sched_setscheduler" != yes; then
PC_LIBS_RT=""
else
PC_LIBS_RT="-lrt"
fi
AC_SUBST(PC_LIBS_RT)
AC_CHECK_LIB(uuid, uuid_parse) dnl load the library if necessary
AC_CHECK_FUNCS(uuid_unparse) dnl OSX ships uuid_* as standard functions
AC_CHECK_HEADERS(uuid/uuid.h)
if test "x$ac_cv_func_uuid_unparse" != xyes; then
AC_MSG_ERROR(You do not have the libuuid development package installed)
fi
# Require glib 2.16.0 (2008-03) or later for g_hash_table_iter_init() etc.
PKG_CHECK_MODULES([GLIB], [glib-2.0 >= 2.16.0],
[CPPFLAGS="${CPPFLAGS} ${GLIB_CFLAGS}"
LIBS="${LIBS} ${GLIB_LIBS}"])
#
# Where is dlopen?
#
if test "$ac_cv_lib_c_dlopen" = yes; then
LIBADD_DL=""
elif test "$ac_cv_lib_dl_dlopen" = yes; then
LIBADD_DL=-ldl
else
LIBADD_DL=${lt_cv_dlopen_libs}
fi
dnl ========================================================================
dnl Headers
dnl ========================================================================
# Some distributions insert #warnings into deprecated headers. If we will
# enable fatal warnings for the build, then enable them for the header checks
# as well, otherwise the build could fail even though the header check
# succeeds. (We should probably be doing this in more places.)
if test "x${enable_fatal_warnings}" = xyes ; then
cc_temp_flags "$CFLAGS $WERROR"
fi
AC_CHECK_HEADERS(arpa/inet.h)
AC_CHECK_HEADERS(ctype.h)
AC_CHECK_HEADERS(dirent.h)
AC_CHECK_HEADERS(errno.h)
AC_CHECK_HEADERS(getopt.h)
AC_CHECK_HEADERS(glib.h)
AC_CHECK_HEADERS(grp.h)
AC_CHECK_HEADERS(limits.h)
AC_CHECK_HEADERS(linux/swab.h)
AC_CHECK_HEADERS(malloc.h)
AC_CHECK_HEADERS(netdb.h)
AC_CHECK_HEADERS(netinet/in.h)
AC_CHECK_HEADERS(netinet/ip.h)
AC_CHECK_HEADERS(pwd.h)
AC_CHECK_HEADERS(sgtty.h)
AC_CHECK_HEADERS(signal.h)
AC_CHECK_HEADERS(stdarg.h)
AC_CHECK_HEADERS(stddef.h)
AC_CHECK_HEADERS(stdio.h)
AC_CHECK_HEADERS(stdlib.h)
AC_CHECK_HEADERS(string.h)
AC_CHECK_HEADERS(strings.h)
AC_CHECK_HEADERS(sys/dir.h)
AC_CHECK_HEADERS(sys/ioctl.h)
AC_CHECK_HEADERS(sys/param.h)
AC_CHECK_HEADERS(sys/reboot.h)
AC_CHECK_HEADERS(sys/resource.h)
AC_CHECK_HEADERS(sys/socket.h)
AC_CHECK_HEADERS(sys/signalfd.h)
AC_CHECK_HEADERS(sys/sockio.h)
AC_CHECK_HEADERS(sys/stat.h)
AC_CHECK_HEADERS(sys/time.h)
AC_CHECK_HEADERS(sys/types.h)
AC_CHECK_HEADERS(sys/utsname.h)
AC_CHECK_HEADERS(sys/wait.h)
AC_CHECK_HEADERS(time.h)
AC_CHECK_HEADERS(unistd.h)
if test "x${enable_fatal_warnings}" = xyes ; then
cc_restore_flags
fi
dnl These headers need prerequisites before the tests will pass
dnl AC_CHECK_HEADERS(net/if.h)
PKG_CHECK_MODULES(LIBXML2, [libxml-2.0],
[CPPFLAGS="${CPPFLAGS} ${LIBXML2_CFLAGS}"
LIBS="${LIBS} ${LIBXML2_LIBS}"])
AC_CHECK_HEADERS(libxml/xpath.h)
if test "$ac_cv_header_libxml_xpath_h" != "yes"; then
AC_MSG_ERROR(libxml development headers not found)
fi
AC_CHECK_LIB(xslt, xsltApplyStylesheet, [],
AC_MSG_ERROR(Unsupported libxslt library version))
AC_CHECK_HEADERS(libxslt/xslt.h)
if test "$ac_cv_header_libxslt_xslt_h" != "yes"; then
AC_MSG_ERROR(libxslt development headers not found)
fi
AC_CACHE_CHECK(whether __progname and __progname_full are available,
pf_cv_var_progname,
AC_TRY_LINK([extern char *__progname, *__progname_full;],
[__progname = "foo"; __progname_full = "foo bar";],
pf_cv_var_progname="yes", pf_cv_var_progname="no"))
if test "$pf_cv_var_progname" = "yes"; then
AC_DEFINE(HAVE___PROGNAME,1,[ ])
fi
dnl ========================================================================
dnl Generic declarations
dnl ========================================================================
AC_CHECK_DECLS([CLOCK_MONOTONIC], [], [], [[
#include <time.h>
]])
dnl ========================================================================
dnl Structures
dnl ========================================================================
AC_CHECK_MEMBERS([struct tm.tm_gmtoff],,,[[#include <time.h>]])
AC_CHECK_MEMBER([struct dirent.d_type],
AC_DEFINE(HAVE_STRUCT_DIRENT_D_TYPE,1,[Define this if struct dirent has d_type]),,
[#include <dirent.h>])
dnl ========================================================================
dnl Functions
dnl ========================================================================
AC_CHECK_FUNCS(getopt, AC_DEFINE(HAVE_DECL_GETOPT, 1, [Have getopt function]))
AC_CHECK_FUNCS(nanosleep, AC_DEFINE(HAVE_DECL_NANOSLEEP, 1, [Have nanosleep function]))
AC_CACHE_CHECK(whether sscanf supports %m,
pf_cv_var_sscanf,
AC_RUN_IFELSE([AC_LANG_SOURCE([[
#include <stdio.h>
const char *s = "some-command-line-arg";
int main(int argc, char **argv) {
char *name = NULL;
int n = sscanf(s, "%ms", &name);
return n == 1 ? 0 : 1;
}
]])],
pf_cv_var_sscanf="yes", pf_cv_var_sscanf="no", pf_cv_var_sscanf="no"))
if test "$pf_cv_var_sscanf" = "yes"; then
AC_DEFINE(SSCANF_HAS_M, 1, [ ])
fi
dnl ========================================================================
dnl bzip2
dnl ========================================================================
AC_CHECK_HEADERS(bzlib.h)
AC_CHECK_LIB(bz2, BZ2_bzBuffToBuffCompress)
if test x$ac_cv_lib_bz2_BZ2_bzBuffToBuffCompress != xyes ; then
AC_MSG_ERROR(BZ2 libraries not found)
fi
if test x$ac_cv_header_bzlib_h != xyes; then
AC_MSG_ERROR(BZ2 Development headers not found)
fi
dnl ========================================================================
dnl sighandler_t is missing from Illumos, Solaris11 systems
dnl ========================================================================
AC_MSG_CHECKING([for sighandler_t])
AC_TRY_COMPILE([#include <signal.h>],[sighandler_t *f;],
has_sighandler_t=yes,has_sighandler_t=no)
AC_MSG_RESULT($has_sighandler_t)
if test "$has_sighandler_t" = "yes" ; then
AC_DEFINE( HAVE_SIGHANDLER_T, 1, [Define if sighandler_t available] )
fi
dnl ========================================================================
dnl ncurses
dnl ========================================================================
dnl
dnl A few OSes (e.g. Linux) deliver a default "ncurses" alongside "curses".
dnl Many non-Linux deliver "curses"; sites may add "ncurses".
dnl
dnl However, the source-code recommendation for both is to #include "curses.h"
dnl (i.e. "ncurses" still wants the include to be simple, no-'n', "curses.h").
dnl
dnl ncurses takes precedence.
dnl
AC_CHECK_HEADERS(curses.h)
AC_CHECK_HEADERS(curses/curses.h)
AC_CHECK_HEADERS(ncurses.h)
AC_CHECK_HEADERS(ncurses/ncurses.h)
dnl Although n-library is preferred, only look for it if the n-header was found.
CURSESLIBS=''
PC_NAME_CURSES=""
PC_LIBS_CURSES=""
if test "$ac_cv_header_ncurses_h" = "yes"; then
AC_CHECK_LIB(ncurses, printw,
[AC_DEFINE(HAVE_LIBNCURSES,1, have ncurses library)])
CURSESLIBS=`$PKG_CONFIG --libs ncurses` || CURSESLIBS='-lncurses'
PC_NAME_CURSES="ncurses"
fi
if test "$ac_cv_header_ncurses_ncurses_h" = "yes"; then
AC_CHECK_LIB(ncurses, printw,
[AC_DEFINE(HAVE_LIBNCURSES,1, have ncurses library)])
CURSESLIBS=`$PKG_CONFIG --libs ncurses` || CURSESLIBS='-lncurses'
PC_NAME_CURSES="ncurses"
fi
dnl Only look for non-n-library if there was no n-library.
if test X"$CURSESLIBS" = X"" -a "$ac_cv_header_curses_h" = "yes"; then
AC_CHECK_LIB(curses, printw,
[CURSESLIBS='-lcurses'; AC_DEFINE(HAVE_LIBCURSES,1, have curses library)])
PC_LIBS_CURSES="$CURSESLIBS"
fi
dnl Only look for non-n-library if there was no n-library.
if test X"$CURSESLIBS" = X"" -a "$ac_cv_header_curses_curses_h" = "yes"; then
AC_CHECK_LIB(curses, printw,
[CURSESLIBS='-lcurses'; AC_DEFINE(HAVE_LIBCURSES,1, have curses library)])
PC_LIBS_CURSES="$CURSESLIBS"
fi
if test "x$CURSESLIBS" != "x"; then
PCMK_FEATURES="$PCMK_FEATURES ncurses"
fi
dnl Check for printw() prototype compatibility
if test X"$CURSESLIBS" != X"" && cc_supports_flag -Wcast-qual; then
ac_save_LIBS=$LIBS
LIBS="$CURSESLIBS"
cc_temp_flags "-Wcast-qual $WERROR"
# avoid broken test because of hardened build environment in Fedora 23+
# - https://fedoraproject.org/wiki/Changes/Harden_All_Packages
# - https://bugzilla.redhat.com/1297985
if cc_supports_flag -fPIC; then
CFLAGS="$CFLAGS -fPIC"
fi
AC_MSG_CHECKING(whether printw() requires argument of "const char *")
AC_LINK_IFELSE(
[AC_LANG_PROGRAM([
#if defined(HAVE_NCURSES_H)
# include <ncurses.h>
#elif defined(HAVE_NCURSES_NCURSES_H)
# include <ncurses/ncurses.h>
#elif defined(HAVE_CURSES_H)
# include <curses.h>
#endif
],
[printw((const char *)"Test");]
)],
[pcmk_cv_compatible_printw=yes],
[pcmk_cv_compatible_printw=no]
)
LIBS=$ac_save_LIBS
cc_restore_flags
AC_MSG_RESULT([$pcmk_cv_compatible_printw])
if test "$pcmk_cv_compatible_printw" = no; then
AC_MSG_WARN([The printw() function of your ncurses or curses library is old; disabling use of the library. If you want to use this library anyway, please update to a newer version (ncurses 5.4 or later is recommended). You can get the library from http://www.gnu.org/software/ncurses/.])
AC_MSG_NOTICE([Disabling curses])
AC_DEFINE(HAVE_INCOMPATIBLE_PRINTW, 1, [Do we have incompatible printw() in curses library?])
fi
fi
AC_SUBST(CURSESLIBS)
AC_SUBST(PC_NAME_CURSES)
AC_SUBST(PC_LIBS_CURSES)
dnl ========================================================================
dnl Profiling and GProf
dnl ========================================================================
AC_MSG_NOTICE(Old CFLAGS: $CFLAGS)
case $SUPPORT_COVERAGE in
1|yes|true)
SUPPORT_PROFILING=1
PCMK_FEATURES="$PCMK_FEATURES coverage"
CFLAGS="$CFLAGS -fprofile-arcs -ftest-coverage"
dnl During linking, make sure to specify -lgcov or -coverage
;;
esac
case $SUPPORT_PROFILING in
1|yes|true)
SUPPORT_PROFILING=1
dnl Disable various compiler optimizations
CFLAGS="$CFLAGS -fno-omit-frame-pointer -fno-inline -fno-builtin "
dnl CFLAGS="$CFLAGS -fno-inline-functions -fno-default-inline -fno-inline-functions-called-once -fno-optimize-sibling-calls"
dnl Turn off optimization so tools can get accurate line numbers
CFLAGS=`echo $CFLAGS | sed -e 's/-O.\ //g' -e 's/-Wp,-D_FORTIFY_SOURCE=.\ //g' -e 's/-D_FORTIFY_SOURCE=.\ //g'`
CFLAGS="$CFLAGS -O0 -g3 -gdwarf-2"
dnl Update features
PCMK_FEATURES="$PCMK_FEATURES profile"
;;
*)
SUPPORT_PROFILING=0
;;
esac
AC_MSG_NOTICE(New CFLAGS: $CFLAGS)
AC_DEFINE_UNQUOTED(SUPPORT_PROFILING, $SUPPORT_PROFILING, Support for profiling)
dnl ========================================================================
dnl Cluster infrastructure - LibQB
dnl ========================================================================
if test x${enable_no_stack} = xyes; then
SUPPORT_CS=no
fi
PKG_CHECK_MODULES(libqb, libqb >= 0.13)
CPPFLAGS="$libqb_CFLAGS $CPPFLAGS"
LIBS="$libqb_LIBS $LIBS"
dnl libqb 2.02+ (2020-10)
AC_CHECK_FUNCS(qb_ipcc_auth_get,
AC_DEFINE(HAVE_IPCC_AUTH_GET, 1,
[Have qb_ipcc_auth_get function]))
PCMK_FEATURES="$PCMK_FEATURES libqb-logging libqb-ipc"
dnl libqb 0.17.0+ (2014-02)
AC_CHECK_FUNCS(qb_ipcs_connection_get_buffer_size,
AC_DEFINE(HAVE_IPCS_GET_BUFFER_SIZE, 1,
[Have qb_ipcs_connection_get_buffer_size function]))
dnl libqb 2.0.0+ (2020-05)
CHECK_ENUM_VALUE([qb/qblog.h],[qb_log_conf],[QB_LOG_CONF_MAX_LINE_LEN])
CHECK_ENUM_VALUE([qb/qblog.h],[qb_log_conf],[QB_LOG_CONF_ELLIPSIS])
dnl Support Linux-HA fence agents if available
if test "$cross_compiling" != "yes"; then
CPPFLAGS="$CPPFLAGS -I${prefix}/include/heartbeat"
fi
AC_CHECK_HEADERS(stonith/stonith.h)
if test "$ac_cv_header_stonith_stonith_h" = "yes"; then
dnl On Debian, AC_CHECK_LIB fails if a library has any unresolved symbols,
dnl so check for all the dependencies (so they're added to LIBS) before checking for -lplumb
AC_CHECK_LIB(pils, PILLoadPlugin)
AC_CHECK_LIB(plumb, G_main_add_IPC_Channel)
PCMK_FEATURES="$PCMK_FEATURES lha-fencing"
fi
AM_CONDITIONAL([BUILD_LHA_SUPPORT], [test "$ac_cv_header_stonith_stonith_h" = "yes"])
dnl ===============================================
dnl Variables needed for substitution
dnl ===============================================
CRM_SCHEMA_DIRECTORY="${datadir}/pacemaker"
AC_DEFINE_UNQUOTED(CRM_SCHEMA_DIRECTORY,"$CRM_SCHEMA_DIRECTORY", Location for the Pacemaker Relax-NG Schema)
AC_SUBST(CRM_SCHEMA_DIRECTORY)
CRM_CORE_DIR="${localstatedir}/lib/pacemaker/cores"
AC_DEFINE_UNQUOTED(CRM_CORE_DIR,"$CRM_CORE_DIR", Location to store core files produced by Pacemaker daemons)
AC_SUBST(CRM_CORE_DIR)
if test x"${CRM_DAEMON_USER}" = x""; then
CRM_DAEMON_USER="hacluster"
fi
AC_DEFINE_UNQUOTED(CRM_DAEMON_USER,"$CRM_DAEMON_USER", User to run Pacemaker daemons as)
AC_SUBST(CRM_DAEMON_USER)
if test x"${CRM_DAEMON_GROUP}" = x""; then
CRM_DAEMON_GROUP="haclient"
fi
AC_DEFINE_UNQUOTED(CRM_DAEMON_GROUP,"$CRM_DAEMON_GROUP", Group to run Pacemaker daemons as)
AC_SUBST(CRM_DAEMON_GROUP)
CRM_PACEMAKER_DIR=${localstatedir}/lib/pacemaker
AC_DEFINE_UNQUOTED(CRM_PACEMAKER_DIR,"$CRM_PACEMAKER_DIR", Location to store directory produced by Pacemaker daemons)
AC_SUBST(CRM_PACEMAKER_DIR)
CRM_BLACKBOX_DIR=${localstatedir}/lib/pacemaker/blackbox
AC_DEFINE_UNQUOTED(CRM_BLACKBOX_DIR,"$CRM_BLACKBOX_DIR", Where to keep blackbox dumps)
AC_SUBST(CRM_BLACKBOX_DIR)
PE_STATE_DIR="${localstatedir}/lib/pacemaker/pengine"
AC_DEFINE_UNQUOTED(PE_STATE_DIR,"$PE_STATE_DIR", Where to keep scheduler outputs)
AC_SUBST(PE_STATE_DIR)
CRM_CONFIG_DIR="${localstatedir}/lib/pacemaker/cib"
AC_DEFINE_UNQUOTED(CRM_CONFIG_DIR,"$CRM_CONFIG_DIR", Where to keep configuration files)
AC_SUBST(CRM_CONFIG_DIR)
CRM_CONFIG_CTS="${localstatedir}/lib/pacemaker/cts"
AC_DEFINE_UNQUOTED(CRM_CONFIG_CTS,"$CRM_CONFIG_CTS", Where to keep cts stateful data)
AC_SUBST(CRM_CONFIG_CTS)
CRM_DAEMON_DIR="${libexecdir}/pacemaker"
AC_DEFINE_UNQUOTED(CRM_DAEMON_DIR,"$CRM_DAEMON_DIR", Location for Pacemaker daemons)
AC_SUBST(CRM_DAEMON_DIR)
CRM_STATE_DIR="${runstatedir}/crm"
AC_DEFINE_UNQUOTED([CRM_STATE_DIR], ["$CRM_STATE_DIR"],
[Where to keep state files and sockets])
AC_SUBST(CRM_STATE_DIR)
CRM_RSCTMP_DIR="${runstatedir}/resource-agents"
AC_DEFINE_UNQUOTED(CRM_RSCTMP_DIR,"$CRM_RSCTMP_DIR", Where resource agents should keep state files)
AC_SUBST(CRM_RSCTMP_DIR)
PACEMAKER_CONFIG_DIR="${sysconfdir}/pacemaker"
AC_DEFINE_UNQUOTED(PACEMAKER_CONFIG_DIR,"$PACEMAKER_CONFIG_DIR", Where to keep configuration files like authkey)
AC_SUBST(PACEMAKER_CONFIG_DIR)
OCF_RA_DIR="$OCF_ROOT_DIR/resource.d"
AC_DEFINE_UNQUOTED(OCF_RA_DIR,"$OCF_RA_DIR", Location for OCF RAs)
AC_SUBST(OCF_RA_DIR)
AC_DEFINE_UNQUOTED(SBIN_DIR,"$sbindir",[Location for system binaries])
AC_PATH_PROGS(GIT, git false)
AC_MSG_CHECKING(build version)
BUILD_VERSION=$Format:%h$
if test $BUILD_VERSION != ":%h$"; then
AC_MSG_RESULT(archive hash: $BUILD_VERSION)
elif test -x $GIT -a -d .git; then
BUILD_VERSION=`$GIT log --pretty="format:%h" -n 1`
AC_MSG_RESULT(git hash: $BUILD_VERSION)
else
# The current directory name makes a reasonable default
# Most generated archives will include the hash or tag
BASE=`basename $PWD`
BUILD_VERSION=`echo $BASE | sed s:.*[[Pp]]acemaker-::`
AC_MSG_RESULT(directory based hash: $BUILD_VERSION)
fi
AC_DEFINE_UNQUOTED(BUILD_VERSION, "$BUILD_VERSION", Build version)
AC_SUBST(BUILD_VERSION)
HAVE_dbus=1
PKG_CHECK_MODULES([DBUS], [dbus-1],
[CPPFLAGS="${CPPFLAGS} ${DBUS_CFLAGS}"],
[HAVE_dbus=0])
AC_DEFINE_UNQUOTED(SUPPORT_DBUS, $HAVE_dbus, Support dbus)
AM_CONDITIONAL(BUILD_DBUS, test $HAVE_dbus = 1)
AC_CHECK_TYPES([DBusBasicValue],,,[[#include <dbus/dbus.h>]])
if test $HAVE_dbus = 0; then
PC_NAME_DBUS=""
else
PC_NAME_DBUS="dbus-1"
fi
AC_SUBST(PC_NAME_DBUS)
if test "x${enable_systemd}" != xno; then
if test $HAVE_dbus = 0; then
if test "x${enable_systemd}" = xyes; then
AC_MSG_FAILURE([cannot enable systemd without DBus])
else
enable_systemd=no
fi
fi
if test $(echo "$CPPFLAGS" | grep -q PCMK_TIME_EMERGENCY_CGT) \
|| test "x$ac_cv_have_decl_CLOCK_MONOTONIC" = xno; then
if test "x${enable_systemd}" = xyes; then
AC_MSG_FAILURE([cannot enable systemd without clock_gettime(CLOCK_MONOTONIC, ...)])
else
enable_systemd=no
fi
fi
if test "x${enable_systemd}" = xtry; then
AC_MSG_CHECKING([for systemd version query result via dbus-send])
ret=$({ dbus-send --system --print-reply \
--dest=org.freedesktop.systemd1 \
/org/freedesktop/systemd1 \
org.freedesktop.DBus.Properties.Get \
string:org.freedesktop.systemd1.Manager \
string:Version 2>/dev/null \
|| echo "this borked"; } | tail -n1)
# sanitize output a bit (we are interested just in the value, not the type);
# ret is intentionally unquoted so as to normalize whitespace
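# hypothetical successful reply tail: '   variant       string "245"',
# which the normalization below reduces to: string "245"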
ret=$(echo ${ret} | cut -d' ' -f2-)
AC_MSG_RESULT([${ret}])
if test "x${ret}" != xborked \
|| systemctl --version 2>/dev/null | grep -q systemd; then
enable_systemd=yes
else
enable_systemd=no
fi
fi
fi
AC_MSG_CHECKING([whether to enable support for managing resources via systemd])
AC_MSG_RESULT([${enable_systemd}])
HAVE_systemd=0
if test "x${enable_systemd}" = xyes; then
HAVE_systemd=1
PCMK_FEATURES="$PCMK_FEATURES systemd"
AC_MSG_CHECKING([which system unit file directory to use])
PKG_CHECK_VAR([systemdsystemunitdir], [systemd], [systemdsystemunitdir])
AC_MSG_RESULT([${systemdsystemunitdir}])
if test "x${systemdsystemunitdir}" = x""; then
AC_MSG_FAILURE([cannot enable systemd when systemdsystemunitdir unresolved])
fi
fi
AC_SUBST([systemdsystemunitdir])
AC_DEFINE_UNQUOTED(SUPPORT_SYSTEMD, $HAVE_systemd, Support systemd based system services)
AM_CONDITIONAL(BUILD_SYSTEMD, test $HAVE_systemd = 1)
AC_SUBST(SUPPORT_SYSTEMD)
if test "x${enable_upstart}" != xno; then
if test $HAVE_dbus = 0; then
if test "x${enable_upstart}" = xyes; then
AC_MSG_FAILURE([cannot enable Upstart without DBus])
else
enable_upstart=no
fi
fi
if test "x${enable_upstart}" = xtry; then
AC_MSG_CHECKING([for Upstart version query result via dbus-send])
ret=$({ dbus-send --system --print-reply --dest=com.ubuntu.Upstart \
/com/ubuntu/Upstart org.freedesktop.DBus.Properties.Get \
string:com.ubuntu.Upstart0_6 string:version 2>/dev/null \
|| echo "this borked"; } | tail -n1)
# sanitize output a bit (we are interested just in the value, not the type);
# ret is intentionally unquoted so as to normalize whitespace
ret=$(echo ${ret} | cut -d' ' -f2-)
AC_MSG_RESULT([${ret}])
if test "x${ret}" != xborked \
|| initctl --version 2>/dev/null | grep -q upstart; then
enable_upstart=yes
else
enable_upstart=no
fi
fi
fi
AC_MSG_CHECKING([whether to enable support for managing resources via Upstart])
AC_MSG_RESULT([${enable_upstart}])
HAVE_upstart=0
if test "x${enable_upstart}" = xyes; then
HAVE_upstart=1
PCMK_FEATURES="$PCMK_FEATURES upstart"
fi
AC_DEFINE_UNQUOTED(SUPPORT_UPSTART, $HAVE_upstart, Support upstart based system services)
AM_CONDITIONAL(BUILD_UPSTART, test $HAVE_upstart = 1)
AC_SUBST(SUPPORT_UPSTART)
case $SUPPORT_NAGIOS in
1|yes|true)
if test $(echo "CPPFLAGS" | grep -q PCMK_TIME_EMERGENCY_CGT) \
|| test "x$ac_cv_have_decl_CLOCK_MONOTONIC" = xno; then
AC_MSG_FAILURE([cannot enable nagios without clock_gettime(CLOCK_MONOTONIC, ...)])
fi
SUPPORT_NAGIOS=1
;;
try)
if test $(echo "CPPFLAGS" | grep -q PCMK_TIME_EMERGENCY_CGT) \
|| test "x$ac_cv_have_decl_CLOCK_MONOTONIC" = xno; then
SUPPORT_NAGIOS=0
else
SUPPORT_NAGIOS=1
fi
;;
*)
SUPPORT_NAGIOS=0
;;
esac
if test $SUPPORT_NAGIOS = 1; then
PCMK_FEATURES="$PCMK_FEATURES nagios"
fi
AC_DEFINE_UNQUOTED(SUPPORT_NAGIOS, $SUPPORT_NAGIOS, Support nagios plugins)
AM_CONDITIONAL(BUILD_NAGIOS, test $SUPPORT_NAGIOS = 1)
if test x"$NAGIOS_PLUGIN_DIR" = x""; then
NAGIOS_PLUGIN_DIR="${libexecdir}/nagios/plugins"
fi
AC_DEFINE_UNQUOTED(NAGIOS_PLUGIN_DIR, "$NAGIOS_PLUGIN_DIR", Directory for nagios plugins)
AC_SUBST(NAGIOS_PLUGIN_DIR)
if test x"$NAGIOS_METADATA_DIR" = x""; then
NAGIOS_METADATA_DIR="${datadir}/nagios/plugins-metadata"
fi
AC_DEFINE_UNQUOTED(NAGIOS_METADATA_DIR, "$NAGIOS_METADATA_DIR", Directory for nagios plugins metadata)
AC_SUBST(NAGIOS_METADATA_DIR)
STACKS=""
CLUSTERLIBS=""
PC_NAME_CLUSTER=""
dnl ========================================================================
dnl Cluster stack - Corosync
dnl ========================================================================
dnl Normalize the values
case $SUPPORT_CS in
1|yes|true)
SUPPORT_CS=yes
missingisfatal=1
;;
try)
missingisfatal=0
;;
*)
SUPPORT_CS=no
;;
esac
AC_MSG_CHECKING(for native corosync)
COROSYNC_LIBS=""
if test $SUPPORT_CS = no; then
AC_MSG_RESULT(no (disabled))
SUPPORT_CS=0
else
AC_MSG_RESULT($SUPPORT_CS)
SUPPORT_CS=1
PKG_CHECK_MODULES(cpg, libcpg) dnl Fatal
PKG_CHECK_MODULES(cfg, libcfg) dnl Fatal
PKG_CHECK_MODULES(cmap, libcmap) dnl Fatal
PKG_CHECK_MODULES(quorum, libquorum) dnl Fatal
PKG_CHECK_MODULES(libcorosync_common, libcorosync_common) dnl Fatal
CFLAGS="$CFLAGS $libqb_CFLAGS $cpg_CFLAGS $cfg_CFLAGS $cmap_CFLAGS $quorum_CFLAGS $libcorosync_common_CFLAGS"
COROSYNC_LIBS="$COROSYNC_LIBS $cpg_LIBS $cfg_LIBS $cmap_LIBS $quorum_LIBS $libcorosync_common_LIBS"
CLUSTERLIBS="$CLUSTERLIBS $COROSYNC_LIBS"
PC_NAME_CLUSTER="$PC_NAME_CLUSTER libcfg libcmap libcorosync_common libcpg libquorum"
STACKS="$STACKS corosync-native"
fi
AC_DEFINE_UNQUOTED(SUPPORT_COROSYNC, $SUPPORT_CS, Support the Corosync messaging and membership layer)
AM_CONDITIONAL(BUILD_CS_SUPPORT, test $SUPPORT_CS = 1)
AC_SUBST(SUPPORT_COROSYNC)
dnl
dnl Cluster stack - Sanity
dnl
if test x${enable_no_stack} = xyes; then
AC_MSG_NOTICE(No cluster stack supported, building only the scheduler)
PCMK_FEATURES="$PCMK_FEATURES no-cluster-stack"
else
AC_MSG_CHECKING(for supported stacks)
if test x"$STACKS" = x; then
AC_MSG_FAILURE(You must support at least one cluster stack)
fi
AC_MSG_RESULT($STACKS)
PCMK_FEATURES="$PCMK_FEATURES $STACKS"
fi
PCMK_FEATURES="$PCMK_FEATURES atomic-attrd"
AC_SUBST(CLUSTERLIBS)
AC_SUBST(PC_NAME_CLUSTER)
dnl ========================================================================
dnl ACL
dnl ========================================================================
case $SUPPORT_ACL in
1|yes|true)
missingisfatal=1
;;
try)
missingisfatal=0
;;
*)
SUPPORT_ACL=no
;;
esac
AC_MSG_CHECKING(for acl support)
if test $SUPPORT_ACL = no; then
AC_MSG_RESULT(no (disabled))
SUPPORT_ACL=0
else
AC_MSG_RESULT($SUPPORT_ACL)
AC_CHECK_FUNCS(qb_ipcs_connection_auth_set, SUPPORT_ACL=1, SUPPORT_ACL=0)
if test $SUPPORT_ACL = 0; then
if test $missingisfatal = 0; then
AC_MSG_WARN(Unable to support ACL. You need to use libqb > 0.13.0)
else
AC_MSG_FAILURE(Unable to support ACL. You need to use libqb > 0.13.0)
fi
fi
fi
if test $SUPPORT_ACL = 1; then
PCMK_FEATURES="$PCMK_FEATURES acls"
fi
AM_CONDITIONAL(ENABLE_ACL, test "$SUPPORT_ACL" = "1")
AC_DEFINE_UNQUOTED(ENABLE_ACL, $SUPPORT_ACL, Build in support for CIB ACL)
dnl ========================================================================
dnl CIB secrets
dnl ========================================================================
case $SUPPORT_CIBSECRETS in
1|yes|true|try)
SUPPORT_CIBSECRETS=1
;;
*)
SUPPORT_CIBSECRETS=0
;;
esac
AC_DEFINE_UNQUOTED(SUPPORT_CIBSECRETS, $SUPPORT_CIBSECRETS, Support CIB secrets)
AM_CONDITIONAL(BUILD_CIBSECRETS, test $SUPPORT_CIBSECRETS = 1)
if test $SUPPORT_CIBSECRETS = 1; then
PCMK_FEATURES="$PCMK_FEATURES cibsecrets"
LRM_CIBSECRETS_DIR="${localstatedir}/lib/pacemaker/lrm/secrets"
AC_DEFINE_UNQUOTED(LRM_CIBSECRETS_DIR,"$LRM_CIBSECRETS_DIR", Location for CIB secrets)
AC_SUBST(LRM_CIBSECRETS_DIR)
fi
dnl ========================================================================
dnl GnuTLS
dnl ========================================================================
dnl gnutls_priority_set_direct available since 2.1.7 (released 2007-11-29)
AC_CHECK_LIB(gnutls, gnutls_priority_set_direct)
if test "$ac_cv_lib_gnutls_gnutls_priority_set_direct" != ""; then
AC_CHECK_HEADERS(gnutls/gnutls.h)
AC_CHECK_FUNCS([gnutls_sec_param_to_pk_bits]) dnl since 2.12.0 (2011-03-24)
if test "$ac_cv_header_gnutls_gnutls_h" != "yes"; then
PC_NAME_GNUTLS=""
else
PC_NAME_GNUTLS="gnutls"
fi
AC_SUBST(PC_NAME_GNUTLS)
fi
dnl ========================================================================
dnl PAM
dnl ========================================================================
AC_CHECK_HEADERS(security/pam_appl.h pam/pam_appl.h)
dnl ========================================================================
dnl System Health
dnl ========================================================================
dnl Check if servicelog development package is installed
SERVICELOG=servicelog-1
SERVICELOG_EXISTS="no"
AC_MSG_CHECKING(for $SERVICELOG packages)
if
$PKG_CONFIG --exists $SERVICELOG
then
PKG_CHECK_MODULES([SERVICELOG], [servicelog-1])
SERVICELOG_EXISTS="yes"
fi
AC_MSG_RESULT($SERVICELOG_EXISTS)
AM_CONDITIONAL(BUILD_SERVICELOG, test "$SERVICELOG_EXISTS" = "yes")
dnl Check if OpenIPMI packages and servicelog are installed
OPENIPMI="OpenIPMI OpenIPMIposix"
OPENIPMI_SERVICELOG_EXISTS="no"
AC_MSG_CHECKING(for $SERVICELOG $OPENIPMI packages)
if
$PKG_CONFIG --exists $OPENIPMI $SERVICELOG
then
PKG_CHECK_MODULES([OPENIPMI_SERVICELOG],[OpenIPMI OpenIPMIposix])
OPENIPMI_SERVICELOG_EXISTS="yes"
fi
AC_MSG_RESULT($OPENIPMI_SERVICELOG_EXISTS)
AM_CONDITIONAL(BUILD_OPENIPMI_SERVICELOG, test "$OPENIPMI_SERVICELOG_EXISTS" = "yes")
# --- ASAN/UBSAN/TSAN (see man gcc) ---
# When using sanitizers, the -fsanitize option must be passed to both
# CFLAGS and LDFLAGS, and must come first in each list, or there will be
# runtime issues (for example, the user would have to LD_PRELOAD asan for
# it to work properly).
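# A hypothetical invocation (never for production builds):
#     ./configure --with-sanitizers=asan,ubsan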
if test -n "${SANITIZERS}"; then
SANITIZERS=$(echo $SANITIZERS | sed -e 's/,/ /g')
for SANITIZER in $SANITIZERS; do
case $SANITIZER in
asan|ASAN)
SANITIZERS_CFLAGS="$SANITIZERS_CFLAGS -fsanitize=address"
SANITIZERS_LDFLAGS="$SANITIZERS_LDFLAGS -fsanitize=address -lasan"
AC_CHECK_LIB([asan],[main],,AC_MSG_ERROR([Unable to find libasan]))
;;
ubsan|UBSAN)
SANITIZERS_CFLAGS="$SANITIZERS_CFLAGS -fsanitize=undefined"
SANITIZERS_LDFLAGS="$SANITIZERS_LDFLAGS -fsanitize=undefined -lubsan"
AC_CHECK_LIB([ubsan],[main],,AC_MSG_ERROR([Unable to find libubsan]))
;;
tsan|TSAN)
SANITIZERS_CFLAGS="$SANITIZERS_CFLAGS -fsanitize=thread"
SANITIZERS_LDFLAGS="$SANITIZERS_LDFLAGS -fsanitize=thread -ltsan"
AC_CHECK_LIB([tsan],[main],,AC_MSG_ERROR([Unable to find libtsan]))
;;
esac
done
fi
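# As a sketch of the net effect (illustrative, not exhaustive): with
# SANITIZERS set to "asan,ubsan", the loop above yields
#   SANITIZERS_CFLAGS="-fsanitize=address -fsanitize=undefined"
#   SANITIZERS_LDFLAGS="-fsanitize=address -lasan -fsanitize=undefined -lubsan"
# both of which are prepended to CFLAGS/LDFLAGS in the "Compiler flags"
# section below.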
dnl ========================================================================
dnl Compiler flags
dnl ========================================================================
dnl Make sure that CFLAGS is not exported. If the user did
dnl not have CFLAGS in their environment then this should have
dnl no effect. However if CFLAGS was exported from the user's
dnl environment, then the new CFLAGS will also be exported
dnl to sub processes.
if export | fgrep " CFLAGS=" > /dev/null; then
SAVED_CFLAGS="$CFLAGS"
unset CFLAGS
CFLAGS="$SAVED_CFLAGS"
unset SAVED_CFLAGS
fi
AC_ARG_VAR([CFLAGS_HARDENED_LIB], [extra C compiler flags for hardened libraries])
AC_ARG_VAR([LDFLAGS_HARDENED_LIB], [extra linker flags for hardened libraries])
AC_ARG_VAR([CFLAGS_HARDENED_EXE], [extra C compiler flags for hardened executables])
AC_ARG_VAR([LDFLAGS_HARDENED_EXE], [extra linker flags for hardened executables])
CC_EXTRAS=""
if test "$GCC" != yes; then
CFLAGS="$CFLAGS -g"
else
CFLAGS="$CFLAGS -ggdb"
dnl When we don't have diagnostic push / pull, we can't explicitly disable
dnl checking for nonliteral formats in the places where they occur on
dnl purpose, so we disable nonliteral format checking globally, since we
dnl abort on warnings.
dnl What makes things really ugly is that nonliteral format checking is
dnl available as a separate switch in very modern gcc, but for older gcc
dnl it is part of -Wformat=2.
dnl So: if we have push/pull, we can enable -Wformat=2 -Wformat-nonliteral;
dnl if we don't have push/pull but do have -Wformat-nonliteral, we can
dnl enable -Wformat=2; otherwise, neither.
gcc_diagnostic_push_pull=no
cc_temp_flags "$CFLAGS $WERROR"
AC_MSG_CHECKING([for gcc diagnostic push / pull])
AC_COMPILE_IFELSE([AC_LANG_PROGRAM([[
#pragma GCC diagnostic push
#pragma GCC diagnostic pop
]])],
[
AC_MSG_RESULT([yes])
gcc_diagnostic_push_pull=yes
], AC_MSG_RESULT([no]))
cc_restore_flags
if cc_supports_flag "-Wformat-nonliteral"; then
gcc_format_nonliteral=yes
else
gcc_format_nonliteral=no
fi
# We had to eliminate -Wnested-externs because of libtool changes
# Make sure to order options so that earlier ones are prerequisites
# of later ones (e.g., -Wformat-nonliteral requires -Wformat).
EXTRA_FLAGS="-fgnu89-inline
-Wall
-Waggregate-return
-Wbad-function-cast
-Wcast-align
-Wdeclaration-after-statement
-Wendif-labels
-Wfloat-equal
-Wformat-security
-Wmissing-prototypes
-Wmissing-declarations
-Wnested-externs
-Wno-long-long
-Wno-strict-aliasing
-Wpointer-arith
-Wstrict-prototypes
-Wwrite-strings
-Wunused-but-set-variable
-Wunsigned-char"
if test "x$gcc_diagnostic_push_pull" = "xyes"; then
AC_DEFINE([GCC_FORMAT_NONLITERAL_CHECKING_ENABLED], [],
[gcc can complain about nonliterals in format])
EXTRA_FLAGS="$EXTRA_FLAGS
-Wformat=2
-Wformat-nonliteral"
else
if test "x$gcc_format_nonliteral" = "xyes"; then
EXTRA_FLAGS="$EXTRA_FLAGS -Wformat=2"
fi
fi
# Additional warnings it might be nice to enable one day
# -Wshadow
# -Wunreachable-code
for j in $EXTRA_FLAGS
do
if
cc_supports_flag $CC_EXTRAS $j
then
CC_EXTRAS="$CC_EXTRAS $j"
fi
done
if test "x${enable_ansi}" = xyes && cc_supports_flag -std=iso9899:199409 ; then
AC_MSG_NOTICE(Enabling ANSI Compatibility)
CC_EXTRAS="$CC_EXTRAS -ansi -D_GNU_SOURCE -DANSI_ONLY"
fi
AC_MSG_NOTICE(Activated additional gcc flags: ${CC_EXTRAS})
fi
dnl
dnl Hardening flags
dnl
dnl The prime control of whether to apply (targeted) hardening build flags,
dnl and which ones, is the --{enable,disable}-hardening option passed to
dnl ./configure:
dnl
dnl --enable-hardening=try (default):
dnl   if any of the CFLAGS_HARDENED_EXE, LDFLAGS_HARDENED_EXE,
dnl   CFLAGS_HARDENED_LIB or LDFLAGS_HARDENED_LIB environment variables
dnl   (see below) is set and non-null, all these custom flags (even the
dnl   unset ones) are used as they are, otherwise a best effort is made to
dnl   offer reasonably strong hardening in several categories (RELRO, PIE,
dnl   "bind now", stack protector) according to what the selected toolchain
dnl   can offer
dnl
dnl --enable-hardening:
dnl same effect as --enable-hardening=try when the environment variables
dnl in question are suppressed
dnl
dnl --disable-hardening:
dnl do not apply any targeted hardening measures at all
dnl
dnl The user-injected environment variables that regulate the hardening in
dnl default case are as follows:
dnl
dnl * CFLAGS_HARDENED_EXE, LDFLAGS_HARDENED_EXE
dnl compiler and linker flags (respectively) for daemon programs
dnl (pacemakerd, pacemaker-attrd, pacemaker-controld, pacemaker-execd,
dnl cib, stonithd, pacemaker-remoted, pacemaker-schedulerd)
dnl
dnl * CFLAGS_HARDENED_LIB, LDFLAGS_HARDENED_LIB
dnl compiler and linker flags (respectively) for libraries linked
dnl with the daemon programs
dnl
dnl Note that these are purposely targeted variables (addressing particular
dnl targets all over the scattered Makefiles) and have no effect outside of
dnl the predestined scope (e.g., CLI utilities). For a global reach,
dnl use CFLAGS, LDFLAGS, etc. as usual.
dnl
dnl For guidance on the suitable flags consult, for instance:
dnl https://fedoraproject.org/wiki/Changes/Harden_All_Packages#Detailed_Harden_Flags_Description
dnl https://owasp.org/index.php/C-Based_Toolchain_Hardening#GCC.2FBinutils
dnl
if test "x${HARDENING}" != "xtry"; then
unset CFLAGS_HARDENED_EXE
unset CFLAGS_HARDENED_LIB
unset LDFLAGS_HARDENED_EXE
unset LDFLAGS_HARDENED_LIB
fi
if test "x${HARDENING}" = "xno"; then
AC_MSG_NOTICE([Hardening: explicitly disabled])
elif test "x${HARDENING}" = "xyes" \
|| test "$(env | grep -Ec '^(C|LD)FLAGS_HARDENED_(EXE|LIB)=.')" = 0; then
dnl We'll figure out on our own...
CFLAGS_HARDENED_EXE=
CFLAGS_HARDENED_LIB=
LDFLAGS_HARDENED_EXE=
LDFLAGS_HARDENED_LIB=
relro=0
pie=0
bindnow=0
# daemons incl. libs: partial RELRO
flag="-Wl,-z,relro"
CC_CHECK_LDFLAGS(["${flag}"],
[LDFLAGS_HARDENED_EXE="${LDFLAGS_HARDENED_EXE} ${flag}";
LDFLAGS_HARDENED_LIB="${LDFLAGS_HARDENED_LIB} ${flag}";
relro=1])
# daemons: PIE for both CFLAGS and LDFLAGS
if cc_supports_flag -fPIE; then
flag="-pie"
CC_CHECK_LDFLAGS(["${flag}"],
[CFLAGS_HARDENED_EXE="${CFLAGS_HARDENED_EXE} -fPIE";
LDFLAGS_HARDENED_EXE="${LDFLAGS_HARDENED_EXE} ${flag}";
pie=1])
fi
# daemons incl. libs: full RELRO if sensible + as-needed linking
# so as to possibly mitigate startup performance
# hit caused by excessive linking with unneeded
# libraries
if test "${relro}" = 1 && test "${pie}" = 1; then
flag="-Wl,-z,now"
CC_CHECK_LDFLAGS(["${flag}"],
[LDFLAGS_HARDENED_EXE="${LDFLAGS_HARDENED_EXE} ${flag}";
LDFLAGS_HARDENED_LIB="${LDFLAGS_HARDENED_LIB} ${flag}";
bindnow=1])
fi
if test "${bindnow}" = 1; then
flag="-Wl,--as-needed"
CC_CHECK_LDFLAGS(["${flag}"],
[LDFLAGS_HARDENED_EXE="${LDFLAGS_HARDENED_EXE} ${flag}";
LDFLAGS_HARDENED_LIB="${LDFLAGS_HARDENED_LIB} ${flag}"])
fi
# universal: prefer strong > all > default stack protector if possible
flag=
if cc_supports_flag -fstack-protector-strong; then
flag="-fstack-protector-strong"
elif cc_supports_flag -fstack-protector-all; then
flag="-fstack-protector-all"
elif cc_supports_flag -fstack-protector; then
flag="-fstack-protector"
fi
if test -n "${flag}"; then
CC_EXTRAS="${CC_EXTRAS} ${flag}"
stackprot=1
fi
if test "${relro}" = 1 \
|| test "${pie}" = 1 \
|| test "${stackprot}" = 1; then
AC_MSG_NOTICE([Hardening: relro=${relro} pie=${pie} bindnow=${bindnow} stackprot=${flag}])
else
AC_MSG_WARN([Hardening: no suitable features in the toolchain detected])
fi
else
AC_MSG_NOTICE([Hardening: using custom flags])
fi
CFLAGS="$SANITIZERS_CFLAGS $CFLAGS $CC_EXTRAS"
LDFLAGS="$SANITIZERS_LDFLAGS $LDFLAGS"
CFLAGS_HARDENED_EXE="$SANITIZERS_CFLAGS $CFLAGS_HARDENED_EXE"
LDFLAGS_HARDENED_EXE="$SANITIZERS_LDFLAGS $LDFLAGS_HARDENED_EXE"
NON_FATAL_CFLAGS="$CFLAGS"
AC_SUBST(NON_FATAL_CFLAGS)
dnl
dnl We reset CFLAGS to include our warnings *after* all function
dnl checking goes on, so that our warning flags don't keep the
dnl AC_*FUNCS() calls above from working. In particular, -Werror will
dnl *always* cause us troubles if we set it before here.
dnl
dnl
if test "x${enable_fatal_warnings}" = xyes ; then
AC_MSG_NOTICE(Enabling Fatal Warnings)
CFLAGS="$CFLAGS $WERROR"
fi
AC_SUBST(CFLAGS)
dnl This is useful for use in Makefiles that need to remove one specific flag
CFLAGS_COPY="$CFLAGS"
AC_SUBST(CFLAGS_COPY)
AC_SUBST(LIBADD_DL) dnl extra flags for dynamic linking libraries
AC_SUBST(LOCALE)
dnl Options for cleaning up the compiler output
QUIET_LIBTOOL_OPTS=""
QUIET_MAKE_OPTS=""
if test "x${enable_quiet}" = "xyes"; then
QUIET_LIBTOOL_OPTS="--silent"
QUIET_MAKE_OPTS="-s" # POSIX compliant
fi
AC_MSG_RESULT(Suppress make details: ${enable_quiet})
dnl Put the above variables to use
LIBTOOL="${LIBTOOL} --tag=CC \$(QUIET_LIBTOOL_OPTS)"
MAKEFLAGS="${MAKEFLAGS} ${QUIET_MAKE_OPTS}"
AC_SUBST(CC)
AC_SUBST(MAKEFLAGS)
AC_SUBST(LIBTOOL)
AC_SUBST(QUIET_LIBTOOL_OPTS)
AC_DEFINE_UNQUOTED(CRM_FEATURES, "$PCMK_FEATURES", Set of enabled features)
AC_SUBST(PCMK_FEATURES)
dnl Files we output that need to be executable
AC_CONFIG_FILES([cts/CTSlab.py], [chmod +x cts/CTSlab.py])
AC_CONFIG_FILES([cts/LSBDummy], [chmod +x cts/LSBDummy])
AC_CONFIG_FILES([cts/OCFIPraTest.py], [chmod +x cts/OCFIPraTest.py])
AC_CONFIG_FILES([cts/cluster_test], [chmod +x cts/cluster_test])
AC_CONFIG_FILES([cts/cts], [chmod +x cts/cts])
AC_CONFIG_FILES([cts/cts-cli], [chmod +x cts/cts-cli])
AC_CONFIG_FILES([cts/cts-coverage], [chmod +x cts/cts-coverage])
AC_CONFIG_FILES([cts/cts-exec], [chmod +x cts/cts-exec])
AC_CONFIG_FILES([cts/cts-fencing], [chmod +x cts/cts-fencing])
AC_CONFIG_FILES([cts/cts-log-watcher], [chmod +x cts/cts-log-watcher])
AC_CONFIG_FILES([cts/cts-regression], [chmod +x cts/cts-regression])
AC_CONFIG_FILES([cts/cts-scheduler], [chmod +x cts/cts-scheduler])
AC_CONFIG_FILES([cts/cts-support], [chmod +x cts/cts-support])
AC_CONFIG_FILES([cts/lxc_autogen.sh], [chmod +x cts/lxc_autogen.sh])
AC_CONFIG_FILES([cts/benchmark/clubench], [chmod +x cts/benchmark/clubench])
AC_CONFIG_FILES([cts/fence_dummy], [chmod +x cts/fence_dummy])
AC_CONFIG_FILES([cts/pacemaker-cts-dummyd], [chmod +x cts/pacemaker-cts-dummyd])
AC_CONFIG_FILES([daemons/fenced/fence_legacy], [chmod +x daemons/fenced/fence_legacy])
AC_CONFIG_FILES([doc/abi-check], [chmod +x doc/abi-check])
AC_CONFIG_FILES([extra/resources/ClusterMon], [chmod +x extra/resources/ClusterMon])
AC_CONFIG_FILES([extra/resources/HealthSMART], [chmod +x extra/resources/HealthSMART])
AC_CONFIG_FILES([extra/resources/SysInfo], [chmod +x extra/resources/SysInfo])
AC_CONFIG_FILES([extra/resources/ifspeed], [chmod +x extra/resources/ifspeed])
AC_CONFIG_FILES([extra/resources/o2cb], [chmod +x extra/resources/o2cb])
AC_CONFIG_FILES([tools/crm_failcount], [chmod +x tools/crm_failcount])
AC_CONFIG_FILES([tools/crm_master], [chmod +x tools/crm_master])
AC_CONFIG_FILES([tools/crm_report], [chmod +x tools/crm_report])
AC_CONFIG_FILES([tools/crm_standby], [chmod +x tools/crm_standby])
AC_CONFIG_FILES([tools/cibsecret], [chmod +x tools/cibsecret])
AC_CONFIG_FILES([tools/pcmk_simtimes], [chmod +x tools/pcmk_simtimes])
dnl Other files we output
AC_CONFIG_FILES(Makefile \
cts/Makefile \
cts/CTS.py \
cts/CTSvars.py \
cts/benchmark/Makefile \
cts/pacemaker-cts-dummyd@.service \
daemons/Makefile \
daemons/attrd/Makefile \
daemons/based/Makefile \
daemons/controld/Makefile \
daemons/execd/Makefile \
daemons/execd/pacemaker_remote \
daemons/execd/pacemaker_remote.service \
daemons/fenced/Makefile \
daemons/pacemakerd/Makefile \
daemons/pacemakerd/pacemaker \
daemons/pacemakerd/pacemaker.service \
daemons/pacemakerd/pacemaker.upstart \
daemons/pacemakerd/pacemaker.combined.upstart \
daemons/schedulerd/Makefile \
devel/Makefile \
doc/Doxyfile \
doc/Makefile \
doc/Clusters_from_Scratch/publican.cfg \
doc/Pacemaker_Administration/publican.cfg \
doc/Pacemaker_Development/publican.cfg \
doc/Pacemaker_Explained/publican.cfg \
doc/Pacemaker_Remote/publican.cfg \
doc/sphinx/Makefile \
extra/Makefile \
extra/alerts/Makefile \
extra/resources/Makefile \
extra/logrotate/Makefile \
extra/logrotate/pacemaker \
include/Makefile \
include/crm/Makefile \
include/crm/cib/Makefile \
include/crm/common/Makefile \
include/crm/cluster/Makefile \
include/crm/fencing/Makefile \
include/crm/pengine/Makefile \
include/pcmki/Makefile \
replace/Makefile \
lib/Makefile \
lib/libpacemaker.pc \
lib/pacemaker.pc \
lib/pacemaker-cib.pc \
lib/pacemaker-lrmd.pc \
lib/pacemaker-service.pc \
lib/pacemaker-pe_rules.pc \
lib/pacemaker-pe_status.pc \
lib/pacemaker-fencing.pc \
lib/pacemaker-cluster.pc \
lib/common/Makefile \
lib/common/tests/Makefile \
lib/common/tests/agents/Makefile \
lib/common/tests/cmdline/Makefile \
lib/common/tests/flags/Makefile \
lib/common/tests/operations/Makefile \
lib/common/tests/strings/Makefile \
lib/common/tests/utils/Makefile \
lib/cluster/Makefile \
lib/cib/Makefile \
lib/gnu/Makefile \
lib/pacemaker/Makefile \
lib/pengine/Makefile \
lib/pengine/tests/Makefile \
lib/pengine/tests/rules/Makefile \
lib/fencing/Makefile \
lib/lrmd/Makefile \
lib/services/Makefile \
maint/Makefile \
tests/Makefile \
tools/Makefile \
tools/report.collector \
tools/report.common \
tools/crm_mon.service \
tools/crm_mon.upstart \
xml/Makefile \
xml/pacemaker-schemas.pc \
)
dnl Now process the entire list of files added by previous
dnl calls to AC_CONFIG_FILES()
AC_OUTPUT()
dnl *****************
dnl Configure summary
dnl *****************
AC_MSG_RESULT([])
AC_MSG_RESULT([$PACKAGE configuration:])
AC_MSG_RESULT([ Version = ${VERSION} (Build: $BUILD_VERSION)])
AC_MSG_RESULT([ Features =${PCMK_FEATURES}])
AC_MSG_RESULT([])
AC_MSG_RESULT([ Prefix = ${prefix}])
AC_MSG_RESULT([ Executables = ${sbindir}])
AC_MSG_RESULT([ Man pages = ${mandir}])
AC_MSG_RESULT([ Libraries = ${libdir}])
AC_MSG_RESULT([ Header files = ${includedir}])
AC_MSG_RESULT([ Arch-independent files = ${datadir}])
AC_MSG_RESULT([ State information = ${localstatedir}])
AC_MSG_RESULT([ System configuration = ${sysconfdir}])
AC_MSG_RESULT([])
AC_MSG_RESULT([ HA group name = ${CRM_DAEMON_GROUP}])
AC_MSG_RESULT([ HA user name = ${CRM_DAEMON_USER}])
AC_MSG_RESULT([])
AC_MSG_RESULT([ CFLAGS = ${CFLAGS}])
AC_MSG_RESULT([ CFLAGS_HARDENED_EXE = ${CFLAGS_HARDENED_EXE}])
AC_MSG_RESULT([ CFLAGS_HARDENED_LIB = ${CFLAGS_HARDENED_LIB}])
AC_MSG_RESULT([ LDFLAGS_HARDENED_EXE = ${LDFLAGS_HARDENED_EXE}])
AC_MSG_RESULT([ LDFLAGS_HARDENED_LIB = ${LDFLAGS_HARDENED_LIB}])
AC_MSG_RESULT([ Libraries = ${LIBS}])
AC_MSG_RESULT([ Stack Libraries = ${CLUSTERLIBS}])
AC_MSG_RESULT([ Unix socket auth method = ${us_auth}])
diff --git a/doc/Makefile.am b/doc/Makefile.am
index 76f89082ea..a37554f7de 100644
--- a/doc/Makefile.am
+++ b/doc/Makefile.am
@@ -1,458 +1,408 @@
#
-# Copyright 2003-2019 the Pacemaker project contributors
+# Copyright 2003-2020 the Pacemaker project contributors
#
# The version control history for this file may have further details.
#
# This source code is licensed under the GNU General Public License version 2
# or later (GPLv2+) WITHOUT ANY WARRANTY.
#
include $(top_srcdir)/mk/common.mk
# Deprecated plaintext documents (dynamically converted to HTML)
DEPRECATED_ORIGINAL = crm_fencing.txt
DEPRECATED_GENERATED =
if BUILD_ASCIIDOC
DEPRECATED_GENERATED += $(DEPRECATED_ORIGINAL:%.txt=%.html)
endif
DEPRECATED_ALL = $(DEPRECATED_ORIGINAL) $(DEPRECATED_GENERATED)
# Current documentation based on asciidoc, DocBook, and pandoc
-BOOKS = Clusters_from_Scratch \
- Pacemaker_Administration \
- Pacemaker_Development \
- Pacemaker_Explained \
- Pacemaker_Remote
+BOOKS =
doc_DATA = $(DEPRECATED_ALL)
noinst_SCRIPTS = abi-check
# The sphinx docs are not yet built by default, but we still want to distribute
# them so configure can build the Makefile in a distribution.
DIST_SUBDIRS = sphinx
-EXTRA_DIST = $(DEPRECATED_ORIGINAL) $(SHARED_TXT) $(PNGS_ORIGINAL) $(DOTS) $(SVGS)
+EXTRA_DIST = $(DEPRECATED_ORIGINAL) $(SHARED_TXT)
EXTRA_DIST += $(CFS_TXT) $(CFS_XML_ONLY)
EXTRA_DIST += $(PA_TXT) $(PA_XML_ONLY)
EXTRA_DIST += $(PD_TXT) $(PD_XML_ONLY)
EXTRA_DIST += $(PE_TXT) $(PE_XML_ONLY)
EXTRA_DIST += $(PR_TXT) $(PR_XML_ONLY)
EXTRA_DIST += pcs-crmsh-quick-ref.md
# toplevel rsync destination for www targets (without trailing slash)
RSYNC_DEST ?= root@www.clusterlabs.org:/var/www/html
# recursive, preserve symlinks/permissions/times, verbose, compress,
# don't cross filesystems, sparse, show progress
RSYNC_OPTS = -rlptvzxS --progress
LAST_RELEASE ?= Pacemaker-$(VERSION)
TAG ?= $(shell [ -n "`git tag --points-at HEAD | head -1`" ] \
&& ( git tag --points-at HEAD | head -1 ) \
|| git log --pretty=format:%H -n 1 HEAD)
# What formats to build by default: pdf,html,html-single,html-desktop,epub
DOCBOOK_FORMATS := html-desktop
# What languages to build and upload to website by default
# (currently only en-US because translations are out of date)
DOCBOOK_LANGS := en-US
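# Example command-line override (values are illustrative):
#   make DOCBOOK_FORMATS="pdf,html-single" DOCBOOK_LANGS="en-US" all-local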
-# @TODO We could simplify this (and .gitignore) by establishing a convention
-# that original image source begins with an uppercase letter and generated
-# files with lowercase.
-
-# Scheduler transition graphs
-# @TODO Add original XML, and generate DOTs via crm_simulate
-DOTS = $(wildcard shared/en-US/images/*.dot)
-
-# Vector sources for images
-# @TODO Generate transition SVGs from DOTs via dot
-SVGS = $(wildcard shared/en-US/images/pcmk-*.svg) \
- $(DOTS:%.dot=%.svg)
-
-# Final images (some originally in PNG, others generated from SVG)
-PNGS_ORIGINAL = Pacemaker_Remote/en-US/images/pcmk-ha-cluster-stack.png \
- Pacemaker_Remote/en-US/images/pcmk-ha-remote-stack.png \
- shared/en-US/images/Console.png \
- shared/en-US/images/Editing-eth0.png \
- shared/en-US/images/Installer.png \
- shared/en-US/images/Network.png \
- shared/en-US/images/Partitioning.png \
- shared/en-US/images/Welcome.png \
- shared/en-US/images/resource-set.png \
- shared/en-US/images/three-sets.png \
- shared/en-US/images/two-sets.png
-PNGS_GENERATED = $(SVGS:%.svg=%-small.png) \
- $(SVGS:%.svg=%.png) \
- $(SVGS:%.svg=%-large.png)
-PNGS = $(PNGS_ORIGINAL) $(PNGS_GENERATED)
-
-graphics: $(PNGS)
-
-
-# two-phased attempts for Inkscape pre-1.0 and 1.0+ (upcoming) discrepancy
-%.png: %.svg
- $(AM_V_GEN) { $(INKSCAPE) --export-dpi=90 -C --export-png=$@ $< \
- || $(INKSCAPE) --export-dpi=90 -C --export-filename=$@ $<; } $(PCMK_quiet)
-
-%-small.png: %.svg
- $(AM_V_GEN) { $(INKSCAPE) --export-dpi=45 -C --export-png=$@ $< \
- || $(INKSCAPE) --export-dpi=45 -C --export-filename=$@ $<; } $(PCMK_quiet)
-
-%-large.png: %.svg
- $(AM_V_GEN) { $(INKSCAPE) --export-dpi=180 -C --export-png=$@ $< \
- || $(INKSCAPE) --export-dpi=180 -C --export-filename=$@ $<; } $(PCMK_quiet)
-
if IS_ASCIIDOC
ASCIIDOC_HTML_ARGS = --unsafe --backend=xhtml11
ASCIIDOC_DBOOK_ARGS = -b docbook -d book
else
ASCIIDOC_HTML_ARGS = --backend=html5
ASCIIDOC_DBOOK_ARGS = -b docbook45 -d book
endif
%.html: %.txt
$(AM_V_GEN)$(ASCIIDOC_CONV) $(ASCIIDOC_HTML_ARGS) --out-file=$@ $< $(PCMK_quiet)
#
# Generate DocBook XML from asciidoc text.
#
# Build each chapter as a book (since the numbering isn't right for
# articles and only books can have appendices) and then strip out the
# bits we don't want or need.
#
# XXX Sequence of tr/sed commands should be replaced with a single XSLT
#
%.xml: %.txt
$(AM_V_at)$(MKDIR_P) $(shell dirname $@) # might not exist in VPATH build
$(AM_V_at)$(ASCIIDOC_CONV) $(ASCIIDOC_DBOOK_ARGS) -o - $< | tr -d '\036\r' >$@-t # Convert, fix line endings
$(AM_V_at)sed -i 's/\ lang="en"//' $@-t # Never specify a language in the chapters
$(AM_V_at)sed -i 's/simpara/para/g' $@-t # publican doesn't correctly render footnotes with simpara
$(AM_V_at)sed -i 's/.*<date>.*//g' $@-t # Remove dangling tag
$(AM_V_at)sed -i 's/.*preface>//g' $@-t # Remove preface elements
$(AM_V_at)sed -i 's:<title></title>::g' $@-t # Remove empty title
$(AM_V_at)sed -i 's/chapter/section/g' $@-t # Chapters become sections, so that books can become chapters
$(AM_V_at)sed -i 's/<.*bookinfo.*>//g' $@-t # Strip out bookinfo, we don't need it
$(AM_V_at)! grep -q "<appendix" $@-t || sed -i \
's/.*book>//;tb;bf;:b;N;s/.*<title>.*<\/title>.*//;tb;/<appendix/{:i;n;/<\/appendix/{p;d};bi};bb;:f;p;d' \
$@-t # We just want the appendix tag (asciidoctor adds non-empty book-level title)
$(AM_V_at)sed -i 's/book>/chapter>/g' $@-t # Rename to chapter (won't trigger if previous sed did)
$(AM_V_GEN)mv $@-t $@
# For Makefile debugging
.PHONY: vars
vars:
@echo DEPRECATED_ORIGINAL=\'$(DEPRECATED_ORIGINAL)\'
@echo DEPRECATED_GENERATED=\'$(DEPRECATED_GENERATED)\'
@echo BOOKS=\'$(BOOKS)\'
@echo LAST_RELEASE=\'$(LAST_RELEASE)\'
@echo TAG=\'$(TAG)\'
.PHONY: deprecated-upload
deprecated-upload: $(DEPRECATED_ALL)
rsync $(RSYNC_OPTS) $(DEPRECATED_ALL) "$(RSYNC_DEST)/$(PACKAGE)/doc/"
.PHONY: deprecated-clean
deprecated-clean:
-rm -f $(DEPRECATED_GENERATED)
# publican-clusterlabs/xsl/{html,html-single,pdf}.xsl refer to URIs
# requiring Internet access, hence we shadow them with an XML catalog-based
# redirect to the local files shipped with the Publican installation;
# this is what newer Publican normally does with the system-wide catalog
# upon its installation, but let's provide compatibility for older
# or badly installed instances (by adding the created file to
# XML_CATALOG_FILES for the libxml2 backing Publican, as a fallback);
# note that the nextCatalog arrangement is needed to overcome
# https://rt.cpan.org/Public/Bug/Display.html?id=113781
publican-catalog-fallback:
@exec >$@-t \
&& echo '<catalog xmlns="urn:oasis:names:tc:entity:xmlns:xml:catalog">' \
&& echo '<rewriteURI uriStartString="https://fedorahosted.org/released/publican/xsl/docbook4/" rewritePrefix="file:///usr/share/publican/xsl/"/>' \
&& echo '</catalog>'
$(AM_V_GEN)mv $@-t $@
publican-catalog: publican-catalog-fallback
@exec >$@-t \
&& echo '<catalog xmlns="urn:oasis:names:tc:entity:xmlns:xml:catalog">' \
&& echo '<nextCatalog catalog="file:///etc/xml/catalog"/>' \
&& echo '<nextCatalog catalog="file://$(CURDIR)/$<"/>' \
&& echo '</catalog>'
$(AM_V_GEN)mv $@-t $@
COMMON_XML = Author_Group.xml Book_Info.xml Revision_History.xml
SHARED_TXT=$(wildcard shared/en-US/*.txt)
SHARED_XML=$(SHARED_TXT:%.txt=%.xml)
if PUBLICAN_INTREE_BRAND
PUBLICAN_INTREE_DEPS = publican-catalog
PUBLICAN_INTREE_ENV = XML_CATALOG_FILES="$(CURDIR)/publican-catalog"
PUBLICAN_INTREE_OPT = --brand_dir="$(top_srcdir)/publican-clusterlabs"
else
PUBLICAN_INTREE_DEPS =
PUBLICAN_INTREE_ENV =
PUBLICAN_INTREE_OPT =
endif
# Clusters From Scratch
CFS_SHARED_TXT = $(addprefix shared/en-US/,pacemaker-intro.txt)
CFS_SHARED_XML = $(CFS_SHARED_TXT:%.txt=%.xml)
CFS_TXT = $(wildcard Clusters_from_Scratch/en-US/*.txt)
CFS_XML_GEN = $(CFS_TXT:%.txt=%.xml)
CFS_XML_ONLY = $(addprefix $(srcdir)/Clusters_from_Scratch/en-US/,$(COMMON_XML) \
Clusters_from_Scratch.ent \
Clusters_from_Scratch.xml \
Preface.xml)
CFS_DEPS = $(PNGS) $(CFS_SHARED_XML) $(CFS_XML_ONLY) $(CFS_XML_GEN)
# We have to hardcode the book name
# With '%' the test for 'newness' fails
Clusters_from_Scratch.build: $(CFS_DEPS) $(PUBLICAN_INTREE_DEPS)
@echo "Building $(@:%.build=%) because of $?"
rm -rf "$(@:%.build=%)/publish"/* "$(@:%.build=%)/tmp"
$(AM_V_PUB)cd $(@:%.build=%) && RPM_BUILD_DIR="" $(PUBLICAN_INTREE_ENV) \
$(PUBLICAN) build --src_dir="$(srcdir)" --publish \
--langs="$(DOCBOOK_LANGS)" --formats="$(DOCBOOK_FORMATS)" \
$(PUBLICAN_INTREE_OPT) $(PCMK_quiet)
rm -rf "$(@:%.build=%)/tmp"
touch "$@"
# Pacemaker Administration
PA_TXT = $(wildcard Pacemaker_Administration/en-US/*.txt)
PA_XML_GEN = $(PA_TXT:%.txt=%.xml)
PA_XML_ONLY = $(addprefix $(srcdir)/Pacemaker_Administration/en-US/,$(COMMON_XML) \
Pacemaker_Administration.ent \
Pacemaker_Administration.xml \
Preface.xml)
PA_DEPS = $(PA_XML_ONLY) $(PA_XML_GEN)
# We have to hardcode the book name
# With '%' the test for 'newness' fails
Pacemaker_Administration.build: $(PA_DEPS) $(PUBLICAN_INTREE_DEPS)
@echo Building $(@:%.build=%) because of $?
rm -rf $(@:%.build=%)/publish/*
$(AM_V_PUB)cd $(@:%.build=%) && RPM_BUILD_DIR="" $(PUBLICAN_INTREE_ENV) \
$(PUBLICAN) build --src_dir="$(srcdir)" --publish \
--langs="$(DOCBOOK_LANGS)" --formats="$(DOCBOOK_FORMATS)" \
$(PUBLICAN_INTREE_OPT) $(PCMK_quiet)
rm -rf $(@:%.build=%)/tmp
touch "$@"
# Pacemaker Development
PD_TXT = $(wildcard Pacemaker_Development/en-US/*.txt)
PD_XML_GEN = $(PD_TXT:%.txt=%.xml)
PD_XML_ONLY = $(addprefix $(srcdir)/Pacemaker_Development/en-US/,$(COMMON_XML) \
Pacemaker_Development.ent \
Pacemaker_Development.xml)
PD_DEPS = $(PD_XML_ONLY) $(PD_XML_GEN)
# We have to hardcode the book name
# With '%' the test for 'newness' fails
Pacemaker_Development.build: $(PD_DEPS) $(PUBLICAN_INTREE_DEPS)
@echo Building $(@:%.build=%) because of $?
rm -rf $(@:%.build=%)/publish/* $(@:%.build=%)/tmp
$(AM_V_PUB)cd $(@:%.build=%) && RPM_BUILD_DIR="" $(PUBLICAN_INTREE_ENV) \
$(PUBLICAN) build --src_dir="$(srcdir)" --publish \
--langs="$(DOCBOOK_LANGS)" --formats="$(DOCBOOK_FORMATS)" \
$(PUBLICAN_INTREE_OPT) $(PCMK_quiet)
rm -rf $(@:%.build=%)/tmp
touch "$@"
# Pacemaker Explained
PE_SHARED_TXT = $(addprefix shared/en-US/,pacemaker-intro.txt)
PE_SHARED_XML = $(PE_SHARED_TXT:%.txt=%.xml)
PE_TXT = $(wildcard Pacemaker_Explained/en-US/*.txt)
PE_XML_GEN = $(PE_TXT:%.txt=%.xml)
PE_XML_ONLY = $(addprefix $(srcdir)/Pacemaker_Explained/en-US/,$(COMMON_XML) \
Pacemaker_Explained.ent \
Pacemaker_Explained.xml \
Preface.xml)
PE_DEPS = $(PNGS) $(PE_SHARED_XML) $(PE_XML_ONLY) $(PE_XML_GEN)
# We have to hardcode the book name
# With '%' the test for 'newness' fails
Pacemaker_Explained.build: $(PE_DEPS) $(PUBLICAN_INTREE_DEPS)
@echo Building $(@:%.build=%) because of $?
rm -rf $(@:%.build=%)/publish/* $(@:%.build=%)/tmp
$(AM_V_PUB)cd $(@:%.build=%) && RPM_BUILD_DIR="" $(PUBLICAN_INTREE_ENV) \
$(PUBLICAN) build --src_dir="$(srcdir)" --publish \
--langs="$(DOCBOOK_LANGS)" --formats="$(DOCBOOK_FORMATS)" \
$(PUBLICAN_INTREE_OPT) $(PCMK_quiet)
rm -rf $(@:%.build=%)/tmp
touch "$@"
# Pacemaker Remote
PR_TXT = $(wildcard Pacemaker_Remote/en-US/*.txt)
PR_XML_GEN = $(PR_TXT:%.txt=%.xml)
PR_XML_ONLY = $(addprefix $(srcdir)/Pacemaker_Remote/en-US/,$(COMMON_XML) \
Pacemaker_Remote.ent \
Pacemaker_Remote.xml)
PR_DEPS = $(PNGS) $(PR_XML_ONLY) $(PR_XML_GEN)
# We have to hardcode the book name
# With '%' the test for 'newness' fails
Pacemaker_Remote.build: $(PR_DEPS) $(PUBLICAN_INTREE_DEPS)
@echo Building $(@:%.build=%) because of $?
rm -rf $(@:%.build=%)/publish/* $(@:%.build=%)/tmp
$(AM_V_PUB)cd $(@:%.build=%) && RPM_BUILD_DIR="" $(PUBLICAN_INTREE_ENV) \
$(PUBLICAN) build --src_dir="$(srcdir)" --publish \
--langs="$(DOCBOOK_LANGS)" --formats="$(DOCBOOK_FORMATS)" \
$(PUBLICAN_INTREE_OPT) $(PCMK_quiet)
rm -rf $(@:%.build=%)/tmp
touch "$@"
# Build all books for upload to ClusterLabs
.PHONY: books
books: books-clean
if BUILD_DOCBOOK
for book in $(BOOKS); do \
sed -i.sed 's@^brand:.*@brand: clusterlabs@' $$book/publican.cfg; \
done
$(MAKE) $(AM_MAKEFLAGS) DOCBOOK_FORMATS="pdf,html,html-single,epub" \
DOCBOOK_LANGS="$(DOCBOOK_LANGS)" all-local
endif
.PHONY: books-upload
books-upload: books
if BUILD_DOCBOOK
@echo Uploading current $(PACKAGE_SERIES) documentation set to clusterlabs.org
@for book in $(BOOKS); do \
echo Uploading $$book...; \
echo "Generated on `date` from version: $(shell git log --pretty="format:%h %d" -n 1)" \
>> $$book/publish/build-$(PACKAGE_SERIES).txt; \
rsync $(RSYNC_OPTS) $$book/publish/* "$(RSYNC_DEST)/$(PACKAGE)/doc/"; \
done
endif
.PHONY: books-clean
books-clean:
-for book in $(BOOKS); do \
rm -rf $$book/tmp $$book/publish; \
done
-rm -f $(PNGS_GENERATED) \
$(SHARED_XML) \
$(CFS_XML_GEN) \
$(PA_XML_GEN) \
$(PD_XML_GEN) \
$(PE_XML_GEN) \
$(PR_XML_GEN) \
publican-catalog-fallback \
publican-catalog
if BUILD_DOCBOOK
all-local: $(BOOKS:%=%.build) */publican.cfg
install-data-local: all-local
for book in $(BOOKS); do \
filelist=`find $$book/publish/* -print`; \
for f in $$filelist; do \
p=`echo $$f | sed s:publish/:: | sed s:Pacemaker/::`; \
if [ -d $$f ]; then \
$(INSTALL) -d -m 775 $(DESTDIR)$(docdir)/$$p; \
else \
$(INSTALL) -m 644 $$f $(DESTDIR)$(docdir)/$$p; \
fi \
done; \
done
endif
BRAND_DEPS = $(wildcard publican-clusterlabs/en-US/*.png) \
$(wildcard publican-clusterlabs/en-US/*.xml)
brand-build: $(BRAND_DEPS)
cd publican-clusterlabs && publican build --formats=xml --langs=all --publish
brand: brand-build
@echo "Installing branded content..."
cd publican-clusterlabs && sudo publican install_brand --path=$(datadir)/publican/Common_Content
brand-rpm-clean:
-find publican-clusterlabs -name "*.noarch.rpm" -exec rm -f \{\} \;
brand-rpm-build: brand-rpm-clean brand-build
cd publican-clusterlabs && \
$(PUBLICAN) --src_dir="$(srcdir)" package --binary
brand-rpm-install: brand-rpm-build
find publican-clusterlabs -name "*.noarch.rpm" -exec sudo rpm -Uvh --force \{\} \;
pdf:
$(MAKE) $(AM_MAKEFLAGS) DOCBOOK_FORMATS="pdf" all-local
# Annotated source code as HTML
global:
$(MAKE) $(AM_MAKEFLAGS) -C .. clean-generic
cd .. && gtags -q && htags -sanhIT doc
global-upload: global
rsync $(RSYNC_OPTS) HTML/ "$(RSYNC_DEST)/$(PACKAGE)/global/$(TAG)/"
global-clean:
-rm -rf HTML
# Man pages as HTML
%.8.html: %.8
groff -mandoc `man -w ./$<` -T html > $@
%.7.html: %.7
groff -mandoc `man -w ./$<` -T html > $@
manhtml:
$(MAKE) $(AM_MAKEFLAGS) -C .. all
find .. -name "[a-z]*.[78]" -exec $(MAKE) $(AM_MAKEFLAGS) \{\}.html \;
manhtml-upload: manhtml
find .. -name "[a-z]*.[78].html" -exec \
rsync $(RSYNC_OPTS) \{\} "$(RSYNC_DEST)/$(PACKAGE)/man/" \;
manhtml-clean:
-find .. -name "[a-z]*.[78].html" -exec rm \{\} \;
# API documentation as HTML
doxygen: Doxyfile
doxygen Doxyfile
doxygen-upload: doxygen
rsync $(RSYNC_OPTS) api/html/ "$(RSYNC_DEST)/$(PACKAGE)/doxygen/$(TAG)/"
doxygen-clean:
-rm -rf api
# ABI compatibility report as HTML
abi: abi-check
./abi-check $(PACKAGE) $(LAST_RELEASE) $(TAG)
abi-www:
export RSYNC_DEST=$(RSYNC_DEST); ./abi-check -u $(PACKAGE) $(LAST_RELEASE) $(TAG)
abi-clean:
-rm -rf abi_dumps compat_reports
# All HTML documentation (except ABI compatibility, which is run separately)
.PHONY: www
www: clean-local deprecated-upload manhtml-upload global-upload doxygen-upload books-upload
clean-local: brand-rpm-clean global-clean manhtml-clean doxygen-clean abi-clean books-clean deprecated-clean
diff --git a/doc/shared/en-US/images/Network.png b/doc/shared/en-US/images/Network.png
deleted file mode 100644
index dadb162e67..0000000000
Binary files a/doc/shared/en-US/images/Network.png and /dev/null differ
diff --git a/doc/shared/en-US/images/Console.png b/doc/sphinx/Clusters_from_Scratch/images/Console.png
similarity index 100%
rename from doc/shared/en-US/images/Console.png
rename to doc/sphinx/Clusters_from_Scratch/images/Console.png
diff --git a/doc/shared/en-US/images/Editing-eth0.png b/doc/sphinx/Clusters_from_Scratch/images/Editing-eth0.png
similarity index 100%
rename from doc/shared/en-US/images/Editing-eth0.png
rename to doc/sphinx/Clusters_from_Scratch/images/Editing-eth0.png
diff --git a/doc/shared/en-US/images/Installer.png b/doc/sphinx/Clusters_from_Scratch/images/Installer.png
similarity index 100%
rename from doc/shared/en-US/images/Installer.png
rename to doc/sphinx/Clusters_from_Scratch/images/Installer.png
diff --git a/doc/shared/en-US/images/Partitioning.png b/doc/sphinx/Clusters_from_Scratch/images/Partitioning.png
similarity index 100%
rename from doc/shared/en-US/images/Partitioning.png
rename to doc/sphinx/Clusters_from_Scratch/images/Partitioning.png
diff --git a/doc/shared/en-US/images/Welcome.png b/doc/sphinx/Clusters_from_Scratch/images/Welcome.png
similarity index 100%
rename from doc/shared/en-US/images/Welcome.png
rename to doc/sphinx/Clusters_from_Scratch/images/Welcome.png
diff --git a/doc/sphinx/Clusters_from_Scratch/installation.rst b/doc/sphinx/Clusters_from_Scratch/installation.rst
index 249d58be38..f5dcc7d1e8 100644
--- a/doc/sphinx/Clusters_from_Scratch/installation.rst
+++ b/doc/sphinx/Clusters_from_Scratch/installation.rst
@@ -1,408 +1,408 @@
Installation
------------
Install |CFS_DISTRO| |CFS_DISTRO_VER|
################################################################################################
Boot the Install Image
______________________
Download the 4GB |CFS_DISTRO| |CFS_DISTRO_VER| `DVD ISO <http://isoredirect.centos.org/centos/7/isos/x86_64/CentOS-7-x86_64-DVD-1804.iso>`_.
Use the image to boot a virtual machine, or burn it to a DVD or USB drive and
boot a physical server from that.
After starting the installation, select your language and keyboard layout at
the welcome screen.
-.. figure:: ../../shared/en-US/images/Welcome.png
+.. figure:: images/Welcome.png
:scale: 80%
:width: 1024
:height: 800
:align: center
:alt: Installation Welcome Screen
|CFS_DISTRO| |CFS_DISTRO_VER| Installation Welcome Screen
Installation Options
____________________
At this point, you get a chance to tweak the default installation options.
-.. figure:: ../../shared/en-US/images/Installer.png
+.. figure:: images/Installer.png
:scale: 80%
:width: 1024
:height: 800
:align: center
:alt: Installation Summary Screen
|CFS_DISTRO| |CFS_DISTRO_VER| Installation Summary Screen
Ignore the **SOFTWARE SELECTION** section (try saying that 10 times quickly). The
**Infrastructure Server** environment does have add-ons with much of the software
we need, but we will leave it as a **Minimal Install** here, so that we can see
exactly what software is required later.
Configure Network
_________________
In the **NETWORK & HOSTNAME** section:
- Edit **Host Name:** as desired. For this example, we will use
**pcmk-1.localdomain**.
- Select your network device, press **Configure...**, and manually assign a fixed
IP address. For this example, we'll use 192.168.122.101 under **IPv4 Settings**
(with an appropriate netmask, gateway and DNS server).
- Flip the switch to turn your network device on, and press **Done**.
-.. figure:: ../../shared/en-US/images/Editing-eth0.png
+.. figure:: images/Editing-eth0.png
:scale: 80%
:width: 1024
:height: 800
:align: center
:alt: Editing eth0
|CFS_DISTRO| |CFS_DISTRO_VER| Network Interface Screen
.. IMPORTANT::
Do not accept the default network settings.
Cluster machines should never obtain an IP address via DHCP, because
DHCP's periodic address renewal will interfere with corosync.
Configure Disk
______________
By default, the installer's automatic partitioning will use LVM (which allows
us to dynamically change the amount of space allocated to a given partition).
However, it allocates all free space to the ``/`` (aka. **root**) partition, which
cannot be reduced in size later (dynamic increases are fine).
In order to follow the DRBD and GFS2 portions of this guide, we need to reserve
space on each machine for a replicated volume.
Enter the **INSTALLATION DESTINATION** section, ensure the hard drive you want to
install to is selected, select **I will configure partitioning**, and press **Done**.
In the **MANUAL PARTITIONING** screen that comes next, click the option to create
mountpoints automatically. Select the ``/`` mountpoint, and reduce the desired
capacity by 1GiB or so. Select **Modify...** by the volume group name, and change
the **Size policy:** to **As large as possible**, to make the reclaimed space
available inside the LVM volume group. We'll add the additional volume later.
-.. figure:: ../../shared/en-US/images/Partitioning.png
+.. figure:: images/Partitioning.png
:scale: 80%
:width: 1024
:height: 800
:align: center
:alt: Manual Partitioning Screen
|CFS_DISTRO| |CFS_DISTRO_VER| Manual Partitioning Screen
Press **Done**, then **Accept changes**.
Configure Time Synchronization
______________________________
It is highly recommended to enable NTP on your cluster nodes. Doing so
ensures all nodes agree on the current time and makes reading log files
significantly easier.
|CFS_DISTRO| will enable NTP automatically. If you want to change any time-related
settings (such as time zone or NTP server), you can do this in the
**TIME & DATE** section.
Finish Install
______________
Select **Begin Installation**. Once it completes, set a root password, and reboot
as instructed. For the purposes of this document, it is not necessary to create
any additional users. After the node reboots, you'll see a login prompt on
the console. Login using **root** and the password you created earlier.
-.. figure:: ../../shared/en-US/images/Console.png
+.. figure:: images/Console.png
:scale: 80%
:width: 1024
:height: 768
:align: center
:alt: Console Prompt
|CFS_DISTRO| |CFS_DISTRO_VER| Console Prompt
.. NOTE::
From here on, we're going to be working exclusively from the terminal.
Configure the OS
################
Verify Networking
_________________
Ensure that the machine has the static IP address you configured earlier.
.. code-block:: none
[root@pcmk-1 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 52:54:00:8e:eb:41 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.101/24 brd 192.168.122.255 scope global noprefixroute eth0
valid_lft forever preferred_lft forever
inet6 fe80::e45:c99b:34c0:c657/64 scope link noprefixroute
valid_lft forever preferred_lft forever
.. NOTE::
If you ever need to change the node's IP address from the command line, follow
these instructions, replacing **${device}** with the name of your network device:
.. code-block:: none
[root@pcmk-1 ~]# vi /etc/sysconfig/network-scripts/ifcfg-${device} # manually edit as desired
[root@pcmk-1 ~]# nmcli dev disconnect ${device}
[root@pcmk-1 ~]# nmcli con reload ${device}
[root@pcmk-1 ~]# nmcli con up ${device}
This makes **NetworkManager** aware that a change was made to the config file.
Next, ensure that the routes are as expected:
.. code-block:: none
[root@pcmk-1 ~]# ip route
default via 192.168.122.1 dev eth0 proto static metric 100
192.168.122.0/24 dev eth0 proto kernel scope link src 192.168.122.101 metric 100
If there is no line beginning with **default via**, then you may need to add a line such as
``GATEWAY="192.168.122.1"``
to the device configuration using the same process as described above for
changing the IP address.
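For illustration (a sketch only; the address matches this example's network),
the resulting entry in the device configuration would look like:
.. code-block:: none
[root@pcmk-1 ~]# grep GATEWAY /etc/sysconfig/network-scripts/ifcfg-${device}
GATEWAY="192.168.122.1"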
Now, check for connectivity to the outside world. Start small by
testing whether we can reach the gateway we configured.
.. code-block:: none
[root@pcmk-1 ~]# ping -c 1 192.168.122.1
PING 192.168.122.1 (192.168.122.1) 56(84) bytes of data.
64 bytes from 192.168.122.1: icmp_seq=1 ttl=64 time=0.254 ms
--- 192.168.122.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms
Now try something external; choose a location you know should be available.
.. code-block:: none
[root@pcmk-1 ~]# ping -c 1 www.clusterlabs.org
PING oss-uk-1.clusterlabs.org (109.74.197.241) 56(84) bytes of data.
64 bytes from oss-uk-1.clusterlabs.org (109.74.197.241): icmp_seq=1 ttl=49 time=333 ms
--- oss-uk-1.clusterlabs.org ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 333.204/333.204/333.204/0.000 ms
Login Remotely
______________
The console isn't a very friendly place to work from, so we will now
switch to accessing the machine remotely via SSH where we can
use copy and paste, etc.
From another host, check whether we can see the new host at all:
.. code-block:: none
beekhof@f16 ~ # ping -c 1 192.168.122.101
PING 192.168.122.101 (192.168.122.101) 56(84) bytes of data.
64 bytes from 192.168.122.101: icmp_req=1 ttl=64 time=1.01 ms
--- 192.168.122.101 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.012/1.012/1.012/0.000 ms
Next, login as root via SSH.
.. code-block:: none
beekhof@f16 ~ # ssh -l root 192.168.122.101
The authenticity of host '192.168.122.101 (192.168.122.101)' can't be established.
ECDSA key fingerprint is 6e:b7:8f:e2:4c:94:43:54:a8:53:cc:20:0f:29:a4:e0.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.122.101' (ECDSA) to the list of known hosts.
root@192.168.122.101's password:
Last login: Tue Aug 11 13:14:39 2015
[root@pcmk-1 ~]#
Apply Updates
_____________
Apply any package updates released since your installation image was created:
.. code-block:: none
[root@pcmk-1 ~]# yum update
.. index::
single: node; short name
Use Short Node Names
____________________
During installation, we filled in the machine's fully qualified domain
name (FQDN), which can be rather long when it appears in cluster logs and
status output. See for yourself how the machine identifies itself:
.. code-block:: none
[root@pcmk-1 ~]# uname -n
pcmk-1.localdomain
We can use the `hostnamectl` tool to strip off the domain name:
.. code-block:: none
[root@pcmk-1 ~]# hostnamectl set-hostname $(uname -n | sed s/\\..*//)
Now, check that the machine is using the correct name:
.. code-block:: none
[root@pcmk-1 ~]# uname -n
pcmk-1
You may want to reboot to ensure all updates take effect.
Repeat for Second Node
######################
Repeat the Installation steps so far, so that you have two
nodes ready to have the cluster software installed.
For the purposes of this document, the additional node is called
pcmk-2 with address 192.168.122.102.
Configure Communication Between Nodes
#####################################
Configure Host Name Resolution
______________________________
Confirm that you can communicate between the two new nodes:
.. code-block:: none
[root@pcmk-1 ~]# ping -c 3 192.168.122.102
PING 192.168.122.102 (192.168.122.102) 56(84) bytes of data.
64 bytes from 192.168.122.102: icmp_seq=1 ttl=64 time=0.343 ms
64 bytes from 192.168.122.102: icmp_seq=2 ttl=64 time=0.402 ms
64 bytes from 192.168.122.102: icmp_seq=3 ttl=64 time=0.558 ms
--- 192.168.122.102 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.343/0.434/0.558/0.092 ms
Now we need to make sure we can communicate with the machines by their
name. If you have a DNS server, add additional entries for the two
machines. Otherwise, you'll need to add the machines to ``/etc/hosts``
on both nodes. Below are the entries for my cluster nodes:
.. code-block:: none
[root@pcmk-1 ~]# grep pcmk /etc/hosts
192.168.122.101 pcmk-1.clusterlabs.org pcmk-1
192.168.122.102 pcmk-2.clusterlabs.org pcmk-2
We can now verify the setup by again using ping:
.. code-block:: none
[root@pcmk-1 ~]# ping -c 3 pcmk-2
PING pcmk-2.clusterlabs.org (192.168.122.101) 56(84) bytes of data.
64 bytes from pcmk-1.clusterlabs.org (192.168.122.101): icmp_seq=1 ttl=64 time=0.164 ms
64 bytes from pcmk-1.clusterlabs.org (192.168.122.101): icmp_seq=2 ttl=64 time=0.475 ms
64 bytes from pcmk-1.clusterlabs.org (192.168.122.101): icmp_seq=3 ttl=64 time=0.186 ms
--- pcmk-2.clusterlabs.org ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2001ms
rtt min/avg/max/mdev = 0.164/0.275/0.475/0.141 ms
.. index:: SSH
Configure SSH
_____________
SSH is a convenient and secure way to copy files and perform commands
remotely. For the purposes of this guide, we will create a key without a
password (using the -N option) so that we can perform remote actions
without being prompted.
.. WARNING::
Unprotected SSH keys (those without a password) are not recommended for
servers exposed to the outside world. We use them here only to simplify
the demo.
Create a new key and allow anyone with that key to log in:
.. index::
single: SSH; key
.. topic:: Creating and Activating a new SSH Key
.. code-block:: none
[root@pcmk-1 ~]# ssh-keygen -t dsa -f ~/.ssh/id_dsa -N ""
Generating public/private dsa key pair.
Your identification has been saved in /root/.ssh/id_dsa.
Your public key has been saved in /root/.ssh/id_dsa.pub.
The key fingerprint is:
91:09:5c:82:5a:6a:50:08:4e:b2:0c:62:de:cc:74:44 root@pcmk-1.clusterlabs.org
The key's randomart image is:
+--[ DSA 1024]----+
|==.ooEo.. |
|X O + .o o |
| * A + |
| + . |
| . S |
| |
| |
| |
| |
+-----------------+
[root@pcmk-1 ~]# cp ~/.ssh/id_dsa.pub ~/.ssh/authorized_keys
Install the key on the other node:
.. code-block:: none
[root@pcmk-1 ~]# scp -r ~/.ssh pcmk-2:
The authenticity of host 'pcmk-2 (192.168.122.102)' can't be established.
ECDSA key fingerprint is SHA256:63xNPkPYq98rYznf3T9QYJAzlaGiAsSgFVNHOZjPWqc.
ECDSA key fingerprint is MD5:d9:bf:6e:32:88:be:47:3d:96:f1:96:27:65:05:0b:c3.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'pcmk-2,192.168.122.102' (ECDSA) to the list of known hosts.
root@pcmk-2's password:
id_dsa
id_dsa.pub
authorized_keys
known_hosts
Test that you can now run commands remotely, without being prompted:
.. code-block:: none
[root@pcmk-1 ~]# ssh pcmk-2 -- uname -n
pcmk-2
diff --git a/doc/sphinx/Makefile.am b/doc/sphinx/Makefile.am
index 91543bab58..664bbbcf9c 100644
--- a/doc/sphinx/Makefile.am
+++ b/doc/sphinx/Makefile.am
@@ -1,125 +1,169 @@
#
# Copyright 2003-2020 the Pacemaker project contributors
#
# The version control history for this file may have further details.
#
# This source code is licensed under the GNU General Public License version 2
# or later (GPLv2+) WITHOUT ANY WARRANTY.
#
include $(top_srcdir)/mk/common.mk
# Things you might want to override on the command line
# Books to generate
BOOKS ?= Clusters_from_Scratch \
Pacemaker_Administration \
Pacemaker_Development \
Pacemaker_Explained \
Pacemaker_Remote
# Output formats to generate. Possible values:
# html (multiple HTML files)
# dirhtml (HTML files named index.html in multiple directories)
# singlehtml (a single large HTML file)
# text
# pdf
# epub
# latex
# linkcheck (not actually a format; check validity of external links)
#
# The results will end up in <book>/_build/<format>
BOOK_FORMATS ?= html
# Set to "a4" or "letter" if building latex format
PAPER ?= letter
# Additional options for sphinx-build
SPHINXFLAGS ?=
# toplevel rsync destination for www targets (without trailing slash)
RSYNC_DEST ?= root@www.clusterlabs.org:/var/www/html
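# Hypothetical command-line override of the settings above (variable names
# as defined here; values are illustrative):
#   make BOOKS="Clusters_from_Scratch" BOOK_FORMATS="html,pdf" PAPER="a4"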
# End of useful overrides
-EXTRA_DIST = $(wildcard */*.rst)
+# Example scheduler transition graphs
+# @TODO The original CIB XML for these is long lost. Ideally, we would recreate
+# something similar and keep those here instead of the DOTs (or use a couple of
+# scheduler regression test inputs instead), then regenerate the SVG
+# equivalents using crm_simulate and dot when making a release.
+DOTS = $(wildcard shared/images/*.dot)
+
+# Vector sources for generated PNGs (including SVG equivalents of DOTS, created
+# manually using dot)
+SVGS = $(wildcard shared/images/pcmk-*.svg) $(DOTS:%.dot=%.svg)
+
+# PNG images generated from SVGS
+#
+# These will not be accessible in a VPATH build, which will generate warnings
+# when building the documentation, but the make will still succeed. It is
+# nontrivial to get them working for VPATH builds and not worth the effort.
+PNGS_GENERATED = $(SVGS:%.svg=%.png)
+
+# Original PNG image sources
+PNGS_Clusters_from_Scratch = $(wildcard Clusters_from_Scratch/images/*.png)
+PNGS_Pacemaker_Explained = $(wildcard Pacemaker_Explained/images/*.png)
+PNGS_Pacemaker_Remote = $(wildcard Pacemaker_Remote/images/*.png)
+
+EXTRA_DIST = $(wildcard */*.rst) $(DOTS) $(SVGS) \
+ $(PNGS_Clusters_from_Scratch) \
+ $(PNGS_Pacemaker_Explained) \
+ $(PNGS_Pacemaker_Remote)
# recursive, preserve symlinks/permissions/times, verbose, compress,
# don't cross filesystems, sparse, show progress
RSYNC_OPTS = -rlptvzxS --progress
BOOK_RSYNC_DEST = $(RSYNC_DEST)/$(PACKAGE)/doc/$(PACKAGE_SERIES)
TAG ?= $(shell [ -n "`git tag --points-at HEAD | head -1`" ] \
&& ( git tag --points-at HEAD | head -1 ) \
|| git log --pretty=format:Pacemaker-2.0.3-%h -n 1 HEAD)
BOOK = none
-DEPS_Clusters_from_Scratch = shared/pacemaker-intro.rst
-DEPS_Pacemaker_Administration = shared/pacemaker-intro.rst
+DEPS_intro = shared/pacemaker-intro.rst $(PNGS_GENERATED)
+
+DEPS_Clusters_from_Scratch = $(DEPS_intro) $(PNGS_Clusters_from_Scratch)
+DEPS_Pacemaker_Administration = $(DEPS_intro)
DEPS_Pacemaker_Development =
-DEPS_Pacemaker_Explained = shared/pacemaker-intro.rst
-DEPS_Pacemaker_Remote = $(wildcard $(srcdir)/Pacemaker_Remote/images/*.png)
+DEPS_Pacemaker_Explained = $(DEPS_intro) $(PNGS_Pacemaker_Explained)
+DEPS_Pacemaker_Remote = $(PNGS_Pacemaker_Remote)
if BUILD_SPHINX_DOCS
+INKSCAPE_CMD = $(INKSCAPE) --export-dpi=90 -C
+
+# Pattern rule to generate PNGs from SVGs
+# (--export-png works with Inkscape <1.0, --export-filename with >=1.0;
+# create the destination directory in case this is a VPATH build)
+%.png: %.svg
+ $(AM_V_at)-$(MKDIR_P) "$(shell dirname "$@")"
+ $(AM_V_GEN) { \
+ $(INKSCAPE_CMD) --export-png="$@" "$<" 2>/dev/null \
+ || $(INKSCAPE_CMD) --export-filename="$@" "$<"; \
+ } $(PCMK_quiet)
+
# Create a book's Sphinx configuration.
# Create the book directory in case this is a VPATH build.
$(BOOKS:%=%/conf.py): conf.py.in
$(AM_V_at)-$(MKDIR_P) "$(@:%/conf.py=%)"
$(AM_V_GEN)sed \
-e 's/%VERSION%/$(VERSION)/g' \
-e 's/%BOOK_ID%/$(@:%/conf.py=%)/g' \
-e 's/%BOOK_TITLE%/$(subst _, ,$(@:%/conf.py=%))/g' \
+ -e 's#%SRC_DIR%#$(abs_srcdir)#g' \
$(<) > "$@"
$(BOOK)/_build: _static/pacemaker.css $(BOOK)/conf.py $(DEPS_$(BOOK)) $(wildcard $(srcdir)/$(BOOK)/*.rst)
@echo 'Building "$(subst _, ,$(BOOK))" because of $?' $(PCMK_quiet)
$(AM_V_at)rm -rf "$@"
$(AM_V_BOOK)for format in $(BOOK_FORMATS); do \
echo -e "\n * Building $$format" $(PCMK_quiet); \
doctrees="doctrees"; \
real_format="$$format"; \
case "$$format" in \
pdf) real_format="latex" ;; \
gettext) doctrees="gettext-doctrees" ;; \
esac; \
$(SPHINX) -b "$$real_format" -d "$@/$$doctrees" \
-c "$(builddir)/$(BOOK)" \
-D latex_paper_size=$(PAPER) $(SPHINXFLAGS) \
"$(srcdir)/$(BOOK)" "$@/$$format" \
$(PCMK_quiet); \
if [ "$$format" = "pdf" ]; then \
$(MAKE) $(AM_MAKEFLAGS) -C "$@/$$format" \
all-pdf; \
fi; \
done
endif
.PHONY: books-upload
books-upload: all
if BUILD_SPHINX_DOCS
@echo "Uploading $(PACKAGE_SERIES) documentation set"
@for book in $(BOOKS); do \
echo " * $$book"; \
buildfile="$$book/_build/build-$(PACKAGE_SERIES).txt"; \
echo "Generated on `date --utc` from version $(TAG)" \
> "$$buildfile"; \
rsync $(RSYNC_OPTS) "$$buildfile" \
$(BOOK_FORMATS:%=$$book/_build/%) \
"$(BOOK_RSYNC_DEST)/$$book/"; \
done
endif
all-local:
if BUILD_SPHINX_DOCS
@for book in $(BOOKS); do \
$(MAKE) $(AM_MAKEFLAGS) BOOK=$$book \
PAPER="$(PAPER)" SPHINXFLAGS="$(SPHINXFLAGS)" \
BOOK_FORMATS="$(BOOK_FORMATS)" $$book/_build; \
done
endif
clean-local:
- $(AM_V_at)-rm -rf $(BOOKS:%="$(builddir)/%/_build") $(BOOKS:%="$(builddir)/%/conf.py")
+ $(AM_V_at)-rm -rf \
+ $(BOOKS:%="$(builddir)/%/_build") \
+ $(BOOKS:%="$(builddir)/%/conf.py") \
+ $(PNGS_GENERATED)
diff --git a/doc/sphinx/Pacemaker_Administration/tools.rst b/doc/sphinx/Pacemaker_Administration/tools.rst
index 64fd4789ae..5899467e91 100644
--- a/doc/sphinx/Pacemaker_Administration/tools.rst
+++ b/doc/sphinx/Pacemaker_Administration/tools.rst
@@ -1,570 +1,570 @@
.. index:: command-line tool
Using Pacemaker Command-Line Tools
----------------------------------
.. index::
single: command-line tool; output format
.. _cmdline_output:
Controlling Command Line Output
###############################
Some of the pacemaker command line utilities have been converted to a new
output system. Among these tools are ``crm_mon`` and ``stonith_admin``. This
is an ongoing project, and more tools will be converted over time. This system
lets you control the formatting of output with ``--output-as=`` and the
destination of output with ``--output-to=``.
The available formats vary by tool, but at least plain text and XML are
supported by all tools that use the new system. The default format is plain
text. The default destination is stdout but can be redirected to any file.
Some formats support command line options for changing the style of the output.
For instance:
.. code-block:: none
# crm_mon --help-output
Usage:
crm_mon [OPTION?]
Provides a summary of cluster's current state.
Outputs varying levels of detail in a number of different formats.
Output Options:
--output-as=FORMAT Specify output format as one of: console (default), html, text, xml
--output-to=DEST Specify file name for output (or "-" for stdout)
--html-cgi Add text needed to use output in a CGI program
--html-stylesheet=URI Link to an external CSS stylesheet
--html-title=TITLE Page title
--text-fancy Use more highly formatted output
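For example, to save the current cluster status as XML to a file (an
illustrative invocation; the destination path is arbitrary):
.. code-block:: none
# crm_mon --output-as=xml --output-to=/tmp/cluster-status.xml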
.. index::
single: crm_mon
single: command-line tool; crm_mon
.. _crm_mon:
Monitor a Cluster with crm_mon
##############################
The ``crm_mon`` utility displays the current state of an active cluster. It can
show the cluster status organized by node or by resource, and can be used in
either single-shot or dynamically updating mode. It can also display operations
performed and information about failures.
Using this tool, you can examine the state of the cluster for irregularities,
and see how it responds when you cause or simulate failures.
See the manual page or the output of ``crm_mon --help`` for a full description
of its many options.
.. topic:: Sample output from crm_mon -1
.. code-block:: none
Cluster Summary:
* Stack: corosync
* Current DC: node2 (version 2.0.0-1) - partition with quorum
* Last updated: Mon Jan 29 12:18:42 2018
* Last change: Mon Jan 29 12:18:40 2018 by root via crm_attribute on node3
* 5 nodes configured
* 2 resources configured
Node List:
* Online: [ node1 node2 node3 node4 node5 ]
* Active resources:
* Fencing (stonith:fence_xvm): Started node1
* IP (ocf:heartbeat:IPaddr2): Started node2
.. topic:: Sample output from crm_mon -n -1
.. code-block:: none
Cluster Summary:
* Stack: corosync
* Current DC: node2 (version 2.0.0-1) - partition with quorum
* Last updated: Mon Jan 29 12:21:48 2018
* Last change: Mon Jan 29 12:18:40 2018 by root via crm_attribute on node3
* 5 nodes configured
* 2 resources configured
* Node List:
* Node node1: online
* Fencing (stonith:fence_xvm): Started
* Node node2: online
* IP (ocf:heartbeat:IPaddr2): Started
* Node node3: online
* Node node4: online
* Node node5: online
As mentioned in an earlier chapter, the DC is the node where decisions are
made. The cluster elects a node to be DC as needed. The only significance of
the choice of DC to an administrator is the fact that its logs will have the
most information about why decisions were made.
.. index::
pair: crm_mon; CSS
.. _crm_mon_css:
Styling crm_mon HTML output
___________________________
Various parts of ``crm_mon``'s HTML output have a CSS class associated with
them. Not everything does, but some of the most interesting portions do. In
the following example, the status of each node has an ``online`` class and the
details of each resource have an ``rsc-ok`` class.
.. code-block:: html
<h2>Node List</h2>
<ul>
<li>
<span>Node: cluster01</span><span class="online"> online</span>
</li>
<li><ul><li><span class="rsc-ok">ping (ocf::pacemaker:ping): Started</span></li></ul></li>
<li>
<span>Node: cluster02</span><span class="online"> online</span>
</li>
<li><ul><li><span class="rsc-ok">ping (ocf::pacemaker:ping): Started</span></li></ul></li>
</ul>
By default, a stylesheet for styling these classes is included in the head of
the HTML output. The relevant portion of this stylesheet that would be used
in the above example is:
.. code-block:: css
<style>
.online { color: green }
.rsc-ok { color: green }
</style>
If you want to override some or all of the styling, simply create your own
stylesheet, place it on a web server, and pass ``--html-stylesheet=<URL>``
to ``crm_mon``. The link is added after the default stylesheet, so your
changes take precedence. You don't need to duplicate the entire default.
Only include what you want to change.
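For example (an illustrative sketch; the URL and color are placeholders), a
stylesheet containing only:
.. code-block:: css
.online { color: blue }
could be hosted at ``https://example.com/custom.css`` and passed via
``--html-stylesheet=https://example.com/custom.css``; node status would then
be rendered in blue while all other default styling is kept.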
.. index::
single: cibadmin
single: command-line tool; cibadmin
.. _cibadmin:
Edit the CIB XML with cibadmin
##############################
The most flexible tool for modifying the configuration is Pacemaker's
``cibadmin`` command. With ``cibadmin``, you can query, add, remove, update
or replace any part of the configuration. All changes take effect immediately,
so there is no need to perform a reload-like operation.
The simplest way of using ``cibadmin`` is to use it to save the current
configuration to a temporary file, edit that file with your favorite
text or XML editor, and then upload the revised configuration.
.. topic:: Safely using an editor to modify the cluster configuration
.. code-block:: none
# cibadmin --query > tmp.xml
# vi tmp.xml
# cibadmin --replace --xml-file tmp.xml
Some of the better XML editors can make use of a RELAX NG schema to
help make sure any changes you make are valid. The schema describing
the configuration can be found in ``pacemaker.rng``, which may be
deployed in a location such as ``/usr/share/pacemaker`` depending on your
operating system distribution and how you installed the software.
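If your editor does not support schema validation, you can also check an
edited file yourself before uploading it. A minimal sketch using ``xmllint``
(part of libxml2, not Pacemaker; adjust the schema path for your
distribution):

.. topic:: Validating an edited configuration against the schema

   .. code-block:: none

      # xmllint --relaxng /usr/share/pacemaker/pacemaker.rng --noout tmp.xml
      tmp.xml validates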
If you want to modify just one section of the configuration, you can
query and replace just that section to avoid modifying any others.
.. topic:: Safely using an editor to modify only the resources section
.. code-block:: none
# cibadmin --query --scope resources > tmp.xml
# vi tmp.xml
# cibadmin --replace --scope resources --xml-file tmp.xml
To quickly delete a part of the configuration, identify the object you wish to
delete by XML tag and id. For example, you might search the CIB for all
STONITH-related configuration:
.. topic:: Searching for STONITH-related configuration items
.. code-block:: none
# cibadmin --query | grep stonith
<nvpair id="cib-bootstrap-options-stonith-action" name="stonith-action" value="reboot"/>
<nvpair id="cib-bootstrap-options-stonith-enabled" name="stonith-enabled" value="1"/>
<primitive id="child_DoFencing" class="stonith" type="external/vmware">
<lrm_resource id="child_DoFencing:0" type="external/vmware" class="stonith">
<lrm_resource id="child_DoFencing:0" type="external/vmware" class="stonith">
<lrm_resource id="child_DoFencing:1" type="external/vmware" class="stonith">
<lrm_resource id="child_DoFencing:0" type="external/vmware" class="stonith">
<lrm_resource id="child_DoFencing:2" type="external/vmware" class="stonith">
<lrm_resource id="child_DoFencing:0" type="external/vmware" class="stonith">
<lrm_resource id="child_DoFencing:3" type="external/vmware" class="stonith">
If you wanted to delete the ``primitive`` tag with id ``child_DoFencing``,
you would run:
.. code-block:: none
# cibadmin --delete --xml-text '<primitive id="child_DoFencing"/>'
See the cibadmin man page for more options.
.. warning::
Never edit the live ``cib.xml`` file directly. Pacemaker will detect such
changes and refuse to use the configuration.
.. index::
single: crm_shadow
single: command-line tool; crm_shadow
.. _crm_shadow:
Batch Configuration Changes with crm_shadow
###########################################
Often, it is desirable to preview the effects of a series of configuration
changes before updating the live configuration all at once. For this purpose,
``crm_shadow`` creates a "shadow" copy of the configuration and arranges for
all the command-line tools to use it.
To begin, invoke ``crm_shadow --create`` with a name of your choice, and
follow the on-screen instructions. Shadow copies are identified with a name
to make it possible to have more than one.
.. warning::
Read this section and the on-screen instructions carefully; failure to do so
could result in destroying the cluster's active configuration!
.. topic:: Creating and displaying the active sandbox
.. code-block:: none
# crm_shadow --create test
Setting up shadow instance
Type Ctrl-D to exit the crm_shadow shell
shadow[test]:
shadow[test] # crm_shadow --which
test
From this point on, all cluster commands will automatically use the shadow copy
instead of talking to the cluster's active configuration. Once you have
finished experimenting, you can either make the changes active via the
``--commit`` option, or discard them using the ``--delete`` option. Again, be
sure to follow the on-screen instructions carefully!
For a full list of ``crm_shadow`` options and commands, invoke it with the
``--help`` option.
.. topic:: Use sandbox to make multiple changes all at once, discard them, and verify real configuration is untouched
.. code-block:: none
shadow[test] # crm_failcount -r rsc_c001n01 -G
scope=status name=fail-count-rsc_c001n01 value=0
shadow[test] # crm_standby --node c001n02 -v on
shadow[test] # crm_standby --node c001n02 -G
scope=nodes name=standby value=on
shadow[test] # cibadmin --erase --force
shadow[test] # cibadmin --query
<cib crm_feature_set="3.0.14" validate-with="pacemaker-3.0" epoch="112" num_updates="2" admin_epoch="0" cib-last-written="Mon Jan 8 23:26:47 2018" update-origin="rhel7-1" update-client="crm_node" update-user="root" have-quorum="1" dc-uuid="1">
<configuration>
<crm_config/>
<nodes/>
<resources/>
<constraints/>
</configuration>
<status/>
</cib>
shadow[test] # crm_shadow --delete test --force
Now type Ctrl-D to exit the crm_shadow shell
shadow[test] # exit
# crm_shadow --which
No active shadow configuration defined
# cibadmin -Q
<cib crm_feature_set="3.0.14" validate-with="pacemaker-3.0" epoch="110" num_updates="2" admin_epoch="0" cib-last-written="Mon Jan 8 23:26:47 2018" update-origin="rhel7-1" update-client="crm_node" update-user="root" have-quorum="1">
<configuration>
<crm_config>
<cluster_property_set id="cib-bootstrap-options">
<nvpair id="cib-bootstrap-1" name="stonith-enabled" value="1"/>
<nvpair id="cib-bootstrap-2" name="pe-input-series-max" value="30000"/>
See the next section, :ref:`crm_simulate`, for how to test your changes before
committing them to the live cluster.
.. index::
single: crm_simulate
single: command-line tool; crm_simulate
.. _crm_simulate:
Simulate Cluster Activity with crm_simulate
###########################################
The command-line tool ``crm_simulate`` shows the results of the same logic
the cluster itself uses to respond to a particular cluster configuration and
status.
As always, the man page is the primary documentation, and should be consulted
for further details. This section aims for a better conceptual explanation and
practical examples.
Replaying cluster decision-making logic
_______________________________________
At any given time, one node in a Pacemaker cluster will be elected DC, and that
node will run Pacemaker's scheduler to make decisions.
Each time decisions need to be made (a "transition"), the DC will have log
messages like "Calculated transition ... saving inputs in ..." with a file
name. You can grab the named file and replay the cluster logic to see why
particular decisions were made. The file contains the live cluster
configuration at that moment, so you can also look at it directly to see the
value of node attributes, etc., at that time.
The simplest usage is (replacing $FILENAME with the actual file name):
.. topic:: Simulate cluster response to a given CIB
.. code-block:: none
# crm_simulate --simulate --xml-file $FILENAME
That will show the cluster state when the process started, the actions that
need to be taken ("Transition Summary"), and the resulting cluster state if the
actions succeed. Most actions will have a brief description of why they were
required.
The transition inputs may be compressed. ``crm_simulate`` can handle these
compressed files directly, though if you want to edit the file, you'll need to
uncompress it first.
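Compressed inputs are typically bzip2 files, so a standard decompression tool
works. A minimal sketch (the file name is a placeholder for the one named in
your logs):

.. topic:: Uncompressing a transition input for editing

   .. code-block:: none

      # bunzip2 pe-input-123.bz2
      # vi pe-input-123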
You can do the same simulation for the live cluster configuration at the
current moment. This is useful mainly when using ``crm_shadow`` to create a
sandbox version of the CIB; the ``--live-check`` option will use the shadow CIB
if one is in effect.
.. topic:: Simulate cluster response to current live CIB or shadow CIB
.. code-block:: none
# crm_simulate --simulate --live-check
Why decisions were made
_______________________
Getting further insight into the "why" becomes user-unfriendly very quickly. If
you add the ``--show-scores`` option, you will also see all the scores that
went into the decision-making. The node with the highest cumulative score for a
resource will run it. You can look for ``-INFINITY`` scores in particular to
see where complete bans came into effect.
You can also add ``-VVVV`` to get more detailed messages about what's happening
under the hood. You can even add up to two more V's, but that's usually useful
only if you're a masochist or tracing through the source code.
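For example, to replay a saved transition with scoring detail and verbose
messages in one go (both options were introduced above):

.. topic:: Replaying a transition with scores and verbose messages

   .. code-block:: none

      # crm_simulate --simulate --xml-file $FILENAME --show-scores -VVVV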
Visualizing the action sequence
_______________________________
Another handy feature is the ability to generate a visual graph of the actions
needed, using the ``--dot-file`` option. This relies on the separate
Graphviz [#]_ project.
.. topic:: Generate a visual graph of cluster actions from a saved CIB
.. code-block:: none
# crm_simulate --simulate --xml-file $FILENAME --dot-file $FILENAME.dot
# dot $FILENAME.dot -Tsvg > $FILENAME.svg
``$FILENAME.dot`` will contain a Graphviz representation of the cluster's
response to your changes, including all actions with their ordering
dependencies.
``$FILENAME.svg`` will be the same information in a standard graphical format
that you can view in your browser or other app of choice. You could, of course,
use other ``dot`` options to generate other formats.
How to interpret the graphical output:
* Bubbles indicate actions, and arrows indicate ordering dependencies
* Resource actions have text of the form
``<RESOURCE>_<ACTION>_<INTERVAL_IN_MS> <NODE>`` indicating that the
specified action will be executed for the specified resource on the
specified node, once if the interval is 0 or at the specified recurring
interval otherwise
* Actions with black text will be sent to the executor (that is, the
appropriate agent will be invoked)
* Actions with orange text are "pseudo" actions that the cluster uses
internally for ordering but require no real activity
* Actions with a solid green border are part of the transition (that is, the
cluster will attempt to execute them in the given order -- though a
transition can be interrupted by action failure or new events)
* Dashed arrows indicate dependencies that are not present in the transition
graph
* Actions with a dashed border will not be executed. If the dashed border is
blue, the cluster does not feel the action needs to be executed. If the
dashed border is red, the cluster would like to execute the action but
cannot. Any actions depending on an action with a dashed border will not be
able to execute.
* Loops should not happen, and should be reported as a bug if found.
.. topic:: Small Cluster Transition
- .. image:: ../../shared/en-US/images/Policy-Engine-small.png
+ .. image:: ../shared/images/Policy-Engine-small.png
:alt: An example transition graph as represented by Graphviz
:height: 325
:width: 1161
:scale: 75 %
:align: center
In the above example, it appears that a new node, ``pcmk-2``, has come online
and that the cluster is checking to make sure ``rsc1``, ``rsc2`` and ``rsc3``
are not already running there (indicated by the ``rscN_monitor_0`` entries).
Once it did that, and assuming the resources were not active there, it would
have liked to stop ``rsc1`` and ``rsc2`` on ``pcmk-1`` and move them to
``pcmk-2``. However, there appears to be some problem and the cluster cannot or
is not permitted to perform the stop actions, which implies it also cannot
perform the start actions. For some reason, the cluster does not want to start
``rsc3`` anywhere.
.. topic:: Complex Cluster Transition
- .. image:: ../../shared/en-US/images/Policy-Engine-big.png
+ .. image:: ../shared/images/Policy-Engine-big.png
:alt: Complex transition graph that you're not expected to be able to read
:width: 1455
:height: 1945
:scale: 75 %
:align: center
What-if scenarios
_________________
You can make changes to the saved or shadow CIB and simulate it again, to see
how Pacemaker would react differently. You can edit the XML by hand, use
command-line tools such as ``cibadmin`` with either a shadow CIB or the
``CIB_file`` environment variable set to the filename, or use higher-level tool
support (see the man pages of the specific tool you're using for how to perform
actions on a saved CIB file rather than the live CIB).
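As a hypothetical sketch (the file name is illustrative), you could flip a
cluster option in a saved CIB with a standard tool and then re-run the
simulation to compare the outcome:

.. topic:: Editing a saved CIB via CIB_file and simulating the result

   .. code-block:: none

      # CIB_file=/tmp/test-cib.xml crm_attribute --name maintenance-mode --update true
      # crm_simulate --simulate --xml-file /tmp/test-cib.xml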
You can also inject node failures and/or action failures into the simulation;
see the ``crm_simulate`` man page for more details.
This capability is useful when using a shadow CIB to edit the configuration.
Before committing the changes to the live cluster with ``crm_shadow --commit``,
you can use ``crm_simulate`` to see how the cluster will react to the changes.
.. _attrd_updater:
.. _crm_attribute:
.. index::
single: attrd_updater
single: command-line tool; attrd_updater
single: crm_attribute
single: command-line tool; crm_attribute
Manage Node Attributes, Cluster Options and Defaults with crm_attribute and attrd_updater
#########################################################################################
``crm_attribute`` and ``attrd_updater`` are confusingly similar tools with subtle
differences.
``attrd_updater`` can query and update node attributes. ``crm_attribute`` can query
and update not only node attributes, but also cluster options, resource
defaults, and operation defaults.
To understand the differences, it helps to understand the various types of node
attribute.
.. table:: **Types of Node Attributes**
+-----------+----------+-------------------+------------------+----------------+----------------+
| Type | Recorded | Recorded in | Survive full | Manageable by | Manageable by |
| | in CIB? | attribute manager | cluster restart? | crm_attribute? | attrd_updater? |
| | | memory? | | | |
+===========+==========+===================+==================+================+================+
| permanent | yes | no | yes | yes | no |
+-----------+----------+-------------------+------------------+----------------+----------------+
| transient | yes | yes | no | yes | yes |
+-----------+----------+-------------------+------------------+----------------+----------------+
| private | no | yes | no | no | yes |
+-----------+----------+-------------------+------------------+----------------+----------------+
As you can see from the table above, ``crm_attribute`` can manage permanent and
transient node attributes, while ``attrd_updater`` can manage transient and
private node attributes.
The difference between the two tools lies mainly in *how* they update node
attributes: ``attrd_updater`` always contacts the Pacemaker attribute manager
directly, while ``crm_attribute`` will contact the attribute manager only for
transient node attributes, and will instead modify the CIB directly for
permanent node attributes (and for transient node attributes when unable to
contact the attribute manager).
By contacting the attribute manager directly, ``attrd_updater`` can change
an attribute's "dampening" (whether changes are immediately flushed to the CIB
or after a specified amount of time, to minimize disk writes for frequent
changes), set private node attributes (which are never written to the CIB), and
set attributes for nodes that don't yet exist.
By modifying the CIB directly, ``crm_attribute`` can set permanent node
attributes (which are only in the CIB and not managed by the attribute
manager), and can be used with saved CIB files and shadow CIBs.
Regardless of how a transient node attribute is set, it is synchronized between
the CIB and the attribute manager on all nodes.
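A short sketch of each tool in action (the node and attribute names are
illustrative):

.. topic:: Setting node attributes with both tools

   .. code-block:: none

      # crm_attribute --type nodes --node node1 --name location --update office
      # attrd_updater --node node1 --name pingd --update 100
      # attrd_updater --node node1 --name my-private-attr --update 1 --private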
.. index::
single: crm_failcount
single: command-line tool; crm_failcount
single: crm_node
single: command-line tool; crm_node
single: crm_report
single: command-line tool; crm_report
single: crm_standby
single: command-line tool; crm_standby
single: crm_verify
single: command-line tool; crm_verify
single: stonith_admin
single: command-line tool; stonith_admin
Other Commonly Used Tools
#########################
Other command-line tools include:
* ``crm_failcount``: query or delete resource fail counts
* ``crm_node``: manage cluster nodes
* ``crm_report``: generate a detailed cluster report for bug submissions
* ``crm_resource``: manage cluster resources
* ``crm_standby``: manage standby status of nodes
* ``crm_verify``: validate a CIB
* ``stonith_admin``: manage fencing devices
See the manual pages for details.
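A few quick examples (the resource name is illustrative):

.. topic:: Common one-off queries

   .. code-block:: none

      # crm_node --name
      # crm_verify --live-check
      # crm_failcount --resource myrsc -G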
.. rubric:: Footnotes
.. [#] Graph visualization software. See http://www.graphviz.org/ for details.
diff --git a/doc/sphinx/Pacemaker_Explained/constraints.rst b/doc/sphinx/Pacemaker_Explained/constraints.rst
index 98d272257f..c321eb971b 100644
--- a/doc/sphinx/Pacemaker_Explained/constraints.rst
+++ b/doc/sphinx/Pacemaker_Explained/constraints.rst
@@ -1,986 +1,986 @@
.. index::
single: constraint
single: resource; constraint
.. _constraints:
Resource Constraints
--------------------
.. index::
single: resource; score
single: node; score
Scores
######
Scores of all kinds are integral to how the cluster works.
Practically everything from moving a resource to deciding which
resource to stop in a degraded cluster is achieved by manipulating
scores in some way.
Scores are calculated per resource and node. Any node with a
negative score for a resource can't run that resource. The cluster
places a resource on the node with the highest score for it.
Infinity Math
_____________
Pacemaker implements **INFINITY** (or equivalently, **+INFINITY**) internally as a
score of 1,000,000. Addition and subtraction with it follow these three basic
rules:
* Any value + **INFINITY** = **INFINITY**
* Any value - **INFINITY** = -**INFINITY**
* **INFINITY** - **INFINITY** = **-INFINITY**
.. note::
What if you want to use a score higher than 1,000,000? Typically this possibility
arises when someone wants to base the score on some external metric that might
go above 1,000,000.
The short answer is you can't.
The long answer is that it is sometimes possible to work around this limitation
creatively. You may be able to set the score to some computed value based on
the external metric rather than use the metric directly. For nodes, you can
store the metric as a node attribute, and query the attribute when computing
the score (possibly as part of a custom resource agent), as sketched below.
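As a hypothetical illustration (the helper command and attribute name are
invented for this sketch), a monitoring script could clamp the metric into
the valid score range before storing it as a node attribute:

.. topic:: Clamping an external metric into a node attribute

   .. code-block:: none

      # METRIC=$(my-metric-command)    # hypothetical helper returning an integer
      # crm_attribute --type nodes --node node1 --name capped-metric \
          --update $((METRIC > 999999 ? 999999 : METRIC))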
.. _location-constraint:
.. index::
single: location constraint
single: constraint; location
Deciding Which Nodes a Resource Can Run On
##########################################
*Location constraints* tell the cluster which nodes a resource can run on.
There are two alternative strategies. One way is to say that, by default,
resources can run anywhere, and then the location constraints specify nodes
that are not allowed (an *opt-out* cluster). The other way is to start with
nothing able to run anywhere, and use location constraints to selectively
enable allowed nodes (an *opt-in* cluster).
Whether you should choose opt-in or opt-out depends on your
personal preference and the make-up of your cluster. If most of your
resources can run on most of the nodes, then an opt-out arrangement is
likely to result in a simpler configuration. On the other hand, if
most resources can only run on a small subset of nodes, an opt-in
configuration might be simpler.
.. index::
pair: XML element; rsc_location
single: constraint; location
single: constraint; rsc_location
Location Properties
___________________
.. table:: **Attributes of a rsc_location Element**
+--------------------+---------+----------------------------------------------------------------------------------------------+
| Attribute | Default | Description |
+====================+=========+==============================================================================================+
| id | | .. index:: |
| | | single: rsc_location; attribute, id |
| | | single: attribute; id (rsc_location) |
| | | single: id; rsc_location attribute |
| | | |
| | | A unique name for the constraint (required) |
+--------------------+---------+----------------------------------------------------------------------------------------------+
| rsc | | .. index:: |
| | | single: rsc_location; attribute, rsc |
| | | single: attribute; rsc (rsc_location) |
| | | single: rsc; rsc_location attribute |
| | | |
| | | The name of the resource to which this constraint |
| | | applies. A location constraint must either have a |
| | | ``rsc``, have a ``rsc-pattern``, or contain at |
| | | least one resource set. |
+--------------------+---------+----------------------------------------------------------------------------------------------+
| rsc-pattern | | .. index:: |
| | | single: rsc_location; attribute, rsc-pattern |
| | | single: attribute; rsc-pattern (rsc_location) |
| | | single: rsc-pattern; rsc_location attribute |
| | | |
| | | A pattern matching the names of resources to which |
| | | this constraint applies. The syntax is the same as |
| | | `POSIX <http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap09.html#tag_09_04>`_ |
| | | extended regular expressions, with the addition of an |
| | | initial *!* indicating that resources *not* matching |
| | | the pattern are selected. If the regular expression |
| | | contains submatches, and the constraint is governed by |
| | | a :ref:`rule <rules>`, the submatches can be |
| | | referenced as **%0** through **%9** in the rule's |
| | | ``score-attribute`` or a rule expression's ``attribute``. |
| | | A location constraint must either have a ``rsc``, have a |
| | | ``rsc-pattern``, or contain at least one resource set. |
+--------------------+---------+----------------------------------------------------------------------------------------------+
| node | | .. index:: |
| | | single: rsc_location; attribute, node |
| | | single: attribute; node (rsc_location) |
| | | single: node; rsc_location attribute |
| | | |
| | | The name of the node to which this constraint applies. |
| | | A location constraint must either have a ``node`` and |
| | | ``score``, or contain at least one rule. |
+--------------------+---------+----------------------------------------------------------------------------------------------+
| score | | .. index:: |
| | | single: rsc_location; attribute, score |
| | | single: attribute; score (rsc_location) |
| | | single: score; rsc_location attribute |
| | | |
| | | Positive values indicate a preference for running the |
| | | affected resource(s) on ``node`` -- the higher the value, |
| | | the stronger the preference. Negative values indicate |
| | | the resource(s) should avoid this node (a value of |
| | | **-INFINITY** changes "should" to "must"). A location |
| | | constraint must either have a ``node`` and ``score``, |
| | | or contain at least one rule. |
+--------------------+---------+----------------------------------------------------------------------------------------------+
| resource-discovery | always | .. index:: |
| | | single: rsc_location; attribute, resource-discovery |
| | | single: attribute; resource-discovery (rsc_location) |
| | | single: resource-discovery; rsc_location attribute |
| | | |
| | | Whether Pacemaker should perform resource discovery |
| | | (that is, check whether the resource is already running) |
| | | for this resource on this node. This should normally be |
| | | left as the default, so that rogue instances of a |
| | | service can be stopped when they are running where they |
| | | are not supposed to be. However, there are two |
| | | situations where disabling resource discovery is a good |
| | | idea: when a service is not installed on a node, |
| | | discovery might return an error (properly written OCF |
| | | agents will not, so this is usually only seen with other |
| | | agent types); and when Pacemaker Remote is used to scale |
| | | a cluster to hundreds of nodes, limiting resource |
| | | discovery to allowed nodes can significantly boost |
| | | performance. |
| | | |
| | | * ``always:`` Always perform resource discovery for |
| | | the specified resource on this node. |
| | | |
| | | * ``never:`` Never perform resource discovery for the |
| | | specified resource on this node. This option should |
| | | generally be used with a -INFINITY score, although |
| | | that is not strictly required. |
| | | |
| | | * ``exclusive:`` Perform resource discovery for the |
| | | specified resource only on this node (and other nodes |
| | | similarly marked as ``exclusive``). Multiple location |
| | | constraints using ``exclusive`` discovery for the |
| | | same resource across different nodes creates a subset |
| | | of nodes resource-discovery is exclusive to. If a |
| | | resource is marked for ``exclusive`` discovery on one |
| | | or more nodes, that resource is only allowed to be |
| | | placed within that subset of nodes. |
+--------------------+---------+----------------------------------------------------------------------------------------------+
.. warning::
Setting ``resource-discovery`` to ``never`` or ``exclusive`` removes Pacemaker's
ability to detect and stop unwanted instances of a service running
where it's not supposed to be. It is up to the system administrator (you!)
to make sure that the service can *never* be active on nodes without
``resource-discovery`` (such as by leaving the relevant software uninstalled).
.. index::
single: Asymmetrical Clusters
single: Opt-In Clusters
Asymmetrical "Opt-In" Clusters
______________________________
To create an opt-in cluster, start by preventing resources from running anywhere
by default:
.. code-block:: none
# crm_attribute --name symmetric-cluster --update false
Then start enabling nodes. The following fragment says that the web
server prefers **sles-1**, the database prefers **sles-2** and both can
fail over to **sles-3** if their most preferred node fails.
.. topic:: Opt-in location constraints for two resources
.. code-block:: xml
<constraints>
<rsc_location id="loc-1" rsc="Webserver" node="sles-1" score="200"/>
<rsc_location id="loc-2" rsc="Webserver" node="sles-3" score="0"/>
<rsc_location id="loc-3" rsc="Database" node="sles-2" score="200"/>
<rsc_location id="loc-4" rsc="Database" node="sles-3" score="0"/>
</constraints>
.. index::
single: Symmetrical Clusters
single: Opt-Out Clusters
Symmetrical "Opt-Out" Clusters
______________________________
To create an opt-out cluster, start by allowing resources to run
anywhere by default:
.. code-block:: none
# crm_attribute --name symmetric-cluster --update true
Then start disabling nodes. The following fragment is the equivalent
of the above opt-in configuration.
.. topic:: Opt-out location constraints for two resources
.. code-block:: xml
<constraints>
<rsc_location id="loc-1" rsc="Webserver" node="sles-1" score="200"/>
<rsc_location id="loc-2-do-not-run" rsc="Webserver" node="sles-2" score="-INFINITY"/>
<rsc_location id="loc-3-do-not-run" rsc="Database" node="sles-1" score="-INFINITY"/>
<rsc_location id="loc-4" rsc="Database" node="sles-2" score="200"/>
</constraints>
.. _node-score-equal:
What if Two Nodes Have the Same Score
_____________________________________
If two nodes have the same score, then the cluster will choose one.
This choice may seem random and may not be what was intended; however,
the cluster was not given enough information to know any better.
.. topic:: Constraints where a resource prefers two nodes equally
.. code-block:: xml
<constraints>
<rsc_location id="loc-1" rsc="Webserver" node="sles-1" score="INFINITY"/>
<rsc_location id="loc-2" rsc="Webserver" node="sles-2" score="INFINITY"/>
<rsc_location id="loc-3" rsc="Database" node="sles-1" score="500"/>
<rsc_location id="loc-4" rsc="Database" node="sles-2" score="300"/>
<rsc_location id="loc-5" rsc="Database" node="sles-2" score="200"/>
</constraints>
In the example above, assuming no other constraints and an inactive
cluster, **Webserver** would probably be placed on **sles-1** and **Database** on
**sles-2**. It would likely have placed **Webserver** based on the node's
uname and **Database** based on the desire to spread the resource load
evenly across the cluster. However, other factors can also be involved
in more complex configurations.
.. index::
single: constraint; ordering
single: resource; start order
.. _s-resource-ordering:
Specifying the Order in which Resources Should Start/Stop
#########################################################
*Ordering constraints* tell the cluster the order in which certain
resource actions should occur.
.. important::
Ordering constraints affect *only* the ordering of resource actions;
they do *not* require that the resources be placed on the
same node. If you want resources to be started on the same node
*and* in a specific order, you need both an ordering constraint *and*
a colocation constraint (see :ref:`s-resource-colocation`), or
alternatively, a group (see :ref:`group-resources`).
.. index::
pair: XML element; rsc_order
pair: constraint; ordering
Ordering Properties
___________________
.. table:: **Attributes of a rsc_order Element**
+--------------+----------------------------+-------------------------------------------------------------------+
| Field | Default | Description |
+==============+============================+===================================================================+
| id | | .. index:: |
| | | single: rsc_order; attribute, id |
| | | single: attribute; id (rsc_order) |
| | | single: id; rsc_order attribute |
| | | |
| | | A unique name for the constraint |
+--------------+----------------------------+-------------------------------------------------------------------+
| first | | .. index:: |
| | | single: rsc_order; attribute, first |
| | | single: attribute; first (rsc_order) |
| | | single: first; rsc_order attribute |
| | | |
| | | Name of the resource that the ``then`` resource |
| | | depends on |
+--------------+----------------------------+-------------------------------------------------------------------+
| then | | .. index:: |
| | | single: rsc_order; attribute, then |
| | | single: attribute; then (rsc_order) |
| | | single: then; rsc_order attribute |
| | | |
| | | Name of the dependent resource |
+--------------+----------------------------+-------------------------------------------------------------------+
| first-action | start | .. index:: |
| | | single: rsc_order; attribute, first-action |
| | | single: attribute; first-action (rsc_order) |
| | | single: first-action; rsc_order attribute |
| | | |
| | | The action that the ``first`` resource must complete |
| | | before ``then-action`` can be initiated for the ``then`` |
| | | resource. Allowed values: ``start``, ``stop``, |
| | | ``promote``, ``demote``. |
+--------------+----------------------------+-------------------------------------------------------------------+
| then-action | value of ``first-action`` | .. index:: |
| | | single: rsc_order; attribute, then-action |
| | | single: attribute; then-action (rsc_order) |
|              |                            | single: then-action; rsc_order attribute                          |
| | | |
| | | The action that the ``then`` resource can execute only |
| | | after the ``first-action`` on the ``first`` resource has |
| | | completed. Allowed values: ``start``, ``stop``, |
| | | ``promote``, ``demote``. |
+--------------+----------------------------+-------------------------------------------------------------------+
| kind | Mandatory | .. index:: |
| | | single: rsc_order; attribute, kind |
| | | single: attribute; kind (rsc_order) |
| | | single: kind; rsc_order attribute |
| | | |
| | | How to enforce the constraint. Allowed values: |
| | | |
| | | * ``Mandatory:`` ``then-action`` will never be initiated |
| | | for the ``then`` resource unless and until ``first-action`` |
| | | successfully completes for the ``first`` resource. |
| | | |
| | | * ``Optional:`` The constraint applies only if both specified |
| | | resource actions are scheduled in the same transition |
| | | (that is, in response to the same cluster state). This |
| | | means that ``then-action`` is allowed on the ``then`` |
| | | resource regardless of the state of the ``first`` resource, |
| | | but if both actions happen to be scheduled at the same time, |
| | | they will be ordered. |
| | | |
| | | * ``Serialize:`` Ensure that the specified actions are never |
| | | performed concurrently for the specified resources. |
| | | ``First-action`` and ``then-action`` can be executed in either |
| | | order, but one must complete before the other can be initiated. |
| | | An example use case is when resource start-up puts a high load |
| | | on the host. |
+--------------+----------------------------+-------------------------------------------------------------------+
| symmetrical | TRUE for ``Mandatory`` and | .. index:: |
| | ``Optional`` kinds. FALSE | single: rsc_order; attribute, symmetrical |
|              | for ``Serialize`` kind.    | single: attribute; symmetrical (rsc_order)                        |
| | | single: symmetrical; rsc_order attribute |
| | | |
| | | If true, the reverse of the constraint applies for the |
| | | opposite action (for example, if B starts after A starts, |
| | | then B stops before A stops). ``Serialize`` orders cannot |
| | | be symmetrical. |
+--------------+----------------------------+-------------------------------------------------------------------+
``Promote`` and ``demote`` apply to the master role of :ref:`promotable <s-resource-promotable>`
resources.
Optional and mandatory ordering
_______________________________
Here is an example of ordering constraints where **Database** *must* start before
**Webserver**, and **IP** *should* start before **Webserver** if they both need to be
started:
.. topic:: Optional and mandatory ordering constraints
.. code-block:: xml
<constraints>
<rsc_order id="order-1" first="IP" then="Webserver" kind="Optional"/>
<rsc_order id="order-2" first="Database" then="Webserver" kind="Mandatory" />
</constraints>
Because the above example lets ``symmetrical`` default to TRUE, **Webserver**
must be stopped before **Database** can be stopped, and **Webserver** should be
stopped before **IP** if they both need to be stopped.
.. index::
single: constraint; colocation
single: resource; location relative to other resources
.. _s-resource-colocation:
Placing Resources Relative to other Resources
#############################################
*Colocation constraints* tell the cluster that the location of one resource
depends on the location of another one.
Colocation has an important side-effect: it affects the order in which
resources are assigned to a node. Think about it: You can't place A relative to
B unless you know where B is [#]_.
So when you are creating colocation constraints, it is important to
consider whether you should colocate A with B, or B with A.
Another thing to keep in mind is that, assuming A is colocated with
B, the cluster will take into account A's preferences when
deciding which node to choose for B.
For a detailed look at exactly how this occurs, see
`Colocation Explained <http://clusterlabs.org/doc/Colocation_Explained.pdf>`_.
.. important::
Colocation constraints affect *only* the placement of resources; they do *not*
require that the resources be started in a particular order. If you want
resources to be started on the same node *and* in a specific order, you need
both an ordering constraint (see :ref:`s-resource-ordering`) *and* a colocation
constraint, or alternatively, a group (see :ref:`group-resources`).
.. index::
pair: XML element; rsc_colocation
pair: constraint; colocation
Colocation Properties
_____________________
.. table:: **Attributes of a rsc_colocation Constraint**
+----------------+---------+--------------------------------------------------------+
| Field | Default | Description |
+================+=========+========================================================+
| id | | .. index:: |
| | | single: rsc_colocation; attribute, id |
| | | single: attribute; id (rsc_colocation) |
| | | single: id; rsc_colocation attribute |
| | | |
| | | A unique name for the constraint (required). |
+----------------+---------+--------------------------------------------------------+
| rsc | | .. index:: |
| | | single: rsc_colocation; attribute, rsc |
| | | single: attribute; rsc (rsc_colocation) |
| | | single: rsc; rsc_colocation attribute |
| | | |
| | | The name of a resource that should be located |
| | | relative to ``with-rsc`` (required). |
+----------------+---------+--------------------------------------------------------+
| with-rsc | | .. index:: |
| | | single: rsc_colocation; attribute, with-rsc |
| | | single: attribute; with-rsc (rsc_colocation) |
| | | single: with-rsc; rsc_colocation attribute |
| | | |
| | | The name of the resource used as the colocation |
| | | target. The cluster will decide where to put this |
| | | resource first and then decide where to put |
| | | ``rsc`` (required). |
+----------------+---------+--------------------------------------------------------+
| node-attribute | #uname | .. index:: |
| | | single: rsc_colocation; attribute, node-attribute |
| | | single: attribute; node-attribute (rsc_colocation) |
| | | single: node-attribute; rsc_colocation attribute |
| | | |
| | | The node attribute that must be the same on the |
| | | node running ``rsc`` and the node running ``with-rsc`` |
| | | for the constraint to be satisfied. (For details, |
| | | see :ref:`s-coloc-attribute`.) |
+----------------+---------+--------------------------------------------------------+
| score | | .. index:: |
| | | single: rsc_colocation; attribute, score |
| | | single: attribute; score (rsc_colocation) |
| | | single: score; rsc_colocation attribute |
| | | |
| | | Positive values indicate the resources should run on |
| | | the same node. Negative values indicate the resources |
| | | should run on different nodes. Values of |
| | | +/- **INFINITY** change "should" to "must". |
+----------------+---------+--------------------------------------------------------+
Mandatory Placement
___________________
Mandatory placement occurs when the constraint's score is
**+INFINITY** or **-INFINITY**. In such cases, if the constraint can't be
satisfied, then the **rsc** resource is not permitted to run. For
``score=INFINITY``, this includes cases where the ``with-rsc`` resource is
not active.
If you need resource **A** to always run on the same machine as
resource **B**, you would add the following constraint:
.. topic:: Mandatory colocation constraint for two resources
.. code-block:: xml
<rsc_colocation id="colocate" rsc="A" with-rsc="B" score="INFINITY"/>
Remember, because **INFINITY** was used, if **B** can't run on any
of the cluster nodes (for whatever reason) then **A** will not
be allowed to run. Whether **A** is running or not has no effect on **B**.
Alternatively, you may want the opposite -- that **A** *cannot*
run on the same machine as **B**. In this case, use ``score="-INFINITY"``.
.. topic:: Mandatory anti-colocation constraint for two resources
.. code-block:: xml
<rsc_colocation id="anti-colocate" rsc="A" with-rsc="B" score="-INFINITY"/>
Again, by specifying **-INFINITY**, the constraint is binding. So if the
only place left to run is where **B** already is, then **A** may not run anywhere.
As with **INFINITY**, **B** can run even if **A** is stopped. However, in this
case **A** also can run if **B** is stopped, because it still meets the
constraint of **A** and **B** not running on the same node.
Advisory Placement
__________________
If mandatory placement is about "must" and "must not", then advisory
placement is the "I'd prefer if" alternative. For constraints with
scores greater than **-INFINITY** and less than **INFINITY**, the cluster
will try to accommodate your wishes but may ignore them if the
alternative is to stop some of the cluster resources.
As in life, where if enough people prefer something it effectively
becomes mandatory, advisory colocation constraints can combine with
other elements of the configuration to behave as if they were
mandatory.
.. topic:: Advisory colocation constraint for two resources
.. code-block:: xml
<rsc_colocation id="colocate-maybe" rsc="A" with-rsc="B" score="500"/>
.. _s-coloc-attribute:
Colocation by Node Attribute
____________________________
The ``node-attribute`` property of a colocation constraint allows you to express
the requirement, "these resources must be on similar nodes".
As an example, imagine that you have two Storage Area Networks (SANs) that are
not controlled by the cluster, and each node is connected to one or the other.
You may have two resources **r1** and **r2** such that **r2** needs to use the same
SAN as **r1**, but doesn't necessarily have to be on the same exact node.
In such a case, you could define a :ref:`node attribute <node_attributes>` named
**san**, with the value **san1** or **san2** on each node as appropriate. Then, you
could colocate **r2** with **r1** using ``node-attribute`` set to **san**.
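A minimal sketch of this setup, using the attribute and resource names from
the example above (the node names are illustrative):

.. topic:: Colocating by node attribute

   .. code-block:: none

      # crm_attribute --type nodes --node node1 --name san --update san1
      # crm_attribute --type nodes --node node2 --name san --update san2
      # cibadmin --create --scope constraints --xml-text \
          '<rsc_colocation id="colocate-san" rsc="r2" with-rsc="r1" score="INFINITY" node-attribute="san"/>'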
.. _s-resource-sets:
Resource Sets
#############
.. index::
single: constraint; resource set
single: resource; resource set
*Resource sets* allow multiple resources to be affected by a single constraint.
.. topic:: A set of 3 resources
.. code-block:: xml
<resource_set id="resource-set-example">
<resource_ref id="A"/>
<resource_ref id="B"/>
<resource_ref id="C"/>
</resource_set>
Resource sets are valid inside ``rsc_location``, ``rsc_order``
(see :ref:`s-resource-sets-ordering`), ``rsc_colocation``
(see :ref:`s-resource-sets-colocation`), and ``rsc_ticket``
(see :ref:`ticket-constraints`) constraints.
A resource set has a number of properties that can be set, though not all
have an effect in all contexts.
.. index::
pair: XML element; resource_set
.. topic:: **Attributes of a resource_set Element**
+-------------+---------+--------------------------------------------------------+
| Field | Default | Description |
+=============+=========+========================================================+
| id | | .. index:: |
| | | single: resource_set; attribute, id |
| | | single: attribute; id (resource_set) |
| | | single: id; resource_set attribute |
| | | |
| | | A unique name for the set |
+-------------+---------+--------------------------------------------------------+
| sequential | true | .. index:: |
| | | single: resource_set; attribute, sequential |
| | | single: attribute; sequential (resource_set) |
| | | single: sequential; resource_set attribute |
| | | |
| | | Whether the members of the set must be acted on in |
| | | order. Meaningful within ``rsc_order`` and |
| | | ``rsc_colocation``. |
+-------------+---------+--------------------------------------------------------+
| require-all | true | .. index:: |
| | | single: resource_set; attribute, require-all |
| | | single: attribute; require-all (resource_set) |
| | | single: require-all; resource_set attribute |
| | | |
| | | Whether all members of the set must be active before |
| | | continuing. With the current implementation, the |
| | | cluster may continue even if only one member of the |
| | | set is started, but if more than one member of the set |
| | | is starting at the same time, the cluster will still |
| | | wait until all of those have started before continuing |
| | | (this may change in future versions). Meaningful |
| | | within ``rsc_order``. |
+-------------+---------+--------------------------------------------------------+
| role | | .. index:: |
| | | single: resource_set; attribute, role |
| | | single: attribute; role (resource_set) |
| | | single: role; resource_set attribute |
| | | |
| | | Limit the effect of the constraint to the specified |
| | | role. Meaningful within ``rsc_location``, |
| | | ``rsc_colocation`` and ``rsc_ticket``. |
+-------------+---------+--------------------------------------------------------+
| action | | .. index:: |
| | | single: resource_set; attribute, action |
| | | single: attribute; action (resource_set) |
| | | single: action; resource_set attribute |
| | | |
| | | Limit the effect of the constraint to the specified |
| | | action. Meaningful within ``rsc_order``. |
+-------------+---------+--------------------------------------------------------+
| score | | .. index:: |
| | | single: resource_set; attribute, score |
| | | single: attribute; score (resource_set) |
| | | single: score; resource_set attribute |
| | | |
| | | *Advanced use only.* Use a specific score for this |
| | | set within the constraint. |
+-------------+---------+--------------------------------------------------------+
.. _s-resource-sets-ordering:
Ordering Sets of Resources
##########################
A common situation is for an administrator to create a chain of ordered
resources, such as:
.. topic:: A chain of ordered resources
.. code-block:: xml
<constraints>
<rsc_order id="order-1" first="A" then="B" />
<rsc_order id="order-2" first="B" then="C" />
<rsc_order id="order-3" first="C" then="D" />
</constraints>
.. topic:: Visual representation of the four resources' start order for the above constraints
- .. image:: ../../shared/en-US/images/resource-set.png
+ .. image:: images/resource-set.png
:alt: Ordered set
Ordered Set
___________
To simplify this situation, resource sets (see :ref:`s-resource-sets`) can be used
within ordering constraints:
.. topic:: A chain of ordered resources expressed as a set
.. code-block:: xml
<constraints>
<rsc_order id="order-1">
<resource_set id="ordered-set-example" sequential="true">
<resource_ref id="A"/>
<resource_ref id="B"/>
<resource_ref id="C"/>
<resource_ref id="D"/>
</resource_set>
</rsc_order>
</constraints>
While the set-based format is not less verbose, it is significantly easier to
get right and maintain.
.. important::
If you use a higher-level tool, pay attention to how it exposes this
functionality. Depending on the tool, creating a set **A B** may be equivalent to
**A then B**, or **B then A**.
Ordering Multiple Sets
______________________
The syntax can be expanded to allow sets of resources to be ordered relative to
each other, where the members of each individual set may be ordered or
unordered (controlled by the ``sequential`` property). In the example below, **A**
and **B** can both start in parallel, as can **C** and **D**, however **C** and
**D** can only start once *both* **A** *and* **B** are active.
.. topic:: Ordered sets of unordered resources
.. code-block:: xml
<constraints>
<rsc_order id="order-1">
<resource_set id="ordered-set-1" sequential="false">
<resource_ref id="A"/>
<resource_ref id="B"/>
</resource_set>
<resource_set id="ordered-set-2" sequential="false">
<resource_ref id="C"/>
<resource_ref id="D"/>
</resource_set>
</rsc_order>
</constraints>
.. topic:: Visual representation of the start order for two ordered sets of
unordered resources
- .. image:: ../../shared/en-US/images/two-sets.png
+ .. image:: images/two-sets.png
:alt: Two ordered sets
Of course either set -- or both sets -- of resources can also be internally
ordered (by setting ``sequential="true"``) and there is no limit to the number
of sets that can be specified.
.. topic:: Advanced use of set ordering - Three ordered sets, two of which are
internally unordered
.. code-block:: xml
<constraints>
<rsc_order id="order-1">
<resource_set id="ordered-set-1" sequential="false">
<resource_ref id="A"/>
<resource_ref id="B"/>
</resource_set>
<resource_set id="ordered-set-2" sequential="true">
<resource_ref id="C"/>
<resource_ref id="D"/>
</resource_set>
<resource_set id="ordered-set-3" sequential="false">
<resource_ref id="E"/>
<resource_ref id="F"/>
</resource_set>
</rsc_order>
</constraints>
.. topic:: Visual representation of the start order for the three sets defined above
- .. image:: ../../shared/en-US/images/three-sets.png
+ .. image:: images/three-sets.png
:alt: Three ordered sets
.. important::
An ordered set with ``sequential=false`` makes sense only if there is another
set in the constraint. Otherwise, the constraint has no effect.
Resource Set OR Logic
_____________________
The unordered set logic discussed so far has all been "AND" logic. To
illustrate this, take the three-set figure in the previous section. Those sets
can be expressed as **(A and B) then (C) then (D) then (E and F)**.
Say for example we want to change the first set, **(A and B)**, to use "OR" logic
so the sets look like this: **(A or B) then (C) then (D) then (E and F)**. This
functionality can be achieved through the use of the ``require-all`` option.
This option defaults to TRUE, which is why the "AND" logic is used by default.
Setting ``require-all=false`` means only one resource in the set needs to be
started before continuing on to the next set.
.. topic:: Resource Set "OR" logic: Three ordered sets, where the first set is
internally unordered with "OR" logic
.. code-block:: xml
<constraints>
<rsc_order id="order-1">
<resource_set id="ordered-set-1" sequential="false" require-all="false">
<resource_ref id="A"/>
<resource_ref id="B"/>
</resource_set>
<resource_set id="ordered-set-2" sequential="true">
<resource_ref id="C"/>
<resource_ref id="D"/>
</resource_set>
<resource_set id="ordered-set-3" sequential="false">
<resource_ref id="E"/>
<resource_ref id="F"/>
</resource_set>
</rsc_order>
</constraints>
.. important::
An ordered set with ``require-all=false`` makes sense only in conjunction with
``sequential=false``. Think of it like this: ``sequential=false`` modifies the set
to be an unordered set using "AND" logic by default, and adding
``require-all=false`` flips the unordered set's "AND" logic to "OR" logic.
.. _s-resource-sets-colocation:
Colocating Sets of Resources
############################
Another common situation is for an administrator to create a set of
colocated resources.
The simplest way to do this is to define a resource group (see
:ref:`group-resources`), but that cannot always accurately express the desired
relationships. For example, maybe the resources do not need to be ordered.
Another way would be to define each relationship as an individual constraint,
but that causes a difficult-to-follow constraint explosion as the number of
resources and combinations grow.
.. topic:: Colocation chain as individual constraints, where A is placed first,
then B, then C, then D
.. code-block:: xml
<constraints>
<rsc_colocation id="coloc-1" rsc="D" with-rsc="C" score="INFINITY"/>
<rsc_colocation id="coloc-2" rsc="C" with-rsc="B" score="INFINITY"/>
<rsc_colocation id="coloc-3" rsc="B" with-rsc="A" score="INFINITY"/>
</constraints>
To express complicated relationships with a simplified syntax [#]_,
:ref:`resource sets <s-resource-sets>` can be used within colocation constraints.
.. topic:: Equivalent colocation chain expressed using **resource_set**
.. code-block:: xml
<constraints>
<rsc_colocation id="coloc-1" score="INFINITY" >
<resource_set id="colocated-set-example" sequential="true">
<resource_ref id="A"/>
<resource_ref id="B"/>
<resource_ref id="C"/>
<resource_ref id="D"/>
</resource_set>
</rsc_colocation>
</constraints>
.. note::
Within a ``resource_set``, the resources are listed in the order they are
*placed*, which is the reverse of the order in which they are *colocated*.
In the above example, resource **A** is placed before resource **B**, which is
the same as saying resource **B** is colocated with resource **A**.
As with individual constraints, a resource that can't be active prevents any
resource that must be colocated with it from being active. In both of the two
previous examples, if **B** is unable to run, then both **C** and by inference **D**
must remain stopped.
.. important::
If you use a higher-level tool, pay attention to how it exposes this
functionality. Depending on the tool, creating a set **A B** may be equivalent to
**A with B**, or **B with A**.
Resource sets can also be used to tell the cluster that entire *sets* of
resources must be colocated relative to each other, while the individual
members within any one set may or may not be colocated relative to each other
(determined by the set's ``sequential`` property).
In the following example, resources **B**, **C**, and **D** will each be colocated
with **A** (which will be placed first). **A** must be able to run in order for any
of the resources to run, but any of **B**, **C**, or **D** may be stopped without
affecting any of the others.
.. topic:: Using colocated sets to specify a shared dependency
.. code-block:: xml
<constraints>
<rsc_colocation id="coloc-1" score="INFINITY" >
<resource_set id="colocated-set-2" sequential="false">
<resource_ref id="B"/>
<resource_ref id="C"/>
<resource_ref id="D"/>
</resource_set>
<resource_set id="colocated-set-1" sequential="true">
<resource_ref id="A"/>
</resource_set>
</rsc_colocation>
</constraints>
.. note::
Pay close attention to the order in which resources and sets are listed.
While the members of any one sequential set are placed first to last (i.e., the
colocation dependency is last with first), multiple sets are placed last to
first (i.e., the colocation dependency is first with last).
.. important::
A colocated set with ``sequential="false"`` makes sense only if there is
another set in the constraint. Otherwise, the constraint has no effect.
There is no inherent limit to the number and size of the sets used.
The only thing that matters is that in order for any member of one set
in the constraint to be active, all members of sets listed after it must also
be active (and naturally on the same node); and if a set has ``sequential="true"``,
then in order for one member of that set to be active, all members listed
before it must also be active.
If desired, you can restrict the dependency to instances of promotable clone
resources that are in a specific role, using the set's ``role`` property.
.. topic:: Colocation in which the members of the middle set have no interdependencies,
and the last set listed applies only to instances in the master role
.. code-block:: xml
<constraints>
<rsc_colocation id="coloc-1" score="INFINITY" >
<resource_set id="colocated-set-1" sequential="true">
<resource_ref id="F"/>
<resource_ref id="G"/>
</resource_set>
<resource_set id="colocated-set-2" sequential="false">
<resource_ref id="C"/>
<resource_ref id="D"/>
<resource_ref id="E"/>
</resource_set>
<resource_set id="colocated-set-3" sequential="true" role="Master">
<resource_ref id="A"/>
<resource_ref id="B"/>
</resource_set>
</rsc_colocation>
</constraints>
.. topic:: Visual representation of the above example (resources are placed from
left to right)
- .. image:: ../../shared/en-US/images/pcmk-colocated-sets.png
+ .. image:: ../shared/images/pcmk-colocated-sets.png
:alt: Colocation chain
.. note::
Unlike ordered sets, colocated sets do not use the ``require-all`` option.
.. [#] While the human brain is sophisticated enough to read the constraint
in any order and choose the correct one depending on the situation,
the cluster is not quite so smart. Yet.
.. [#] which is not the same as saying easy to follow
diff --git a/doc/shared/en-US/images/resource-set.png b/doc/sphinx/Pacemaker_Explained/images/resource-set.png
similarity index 100%
rename from doc/shared/en-US/images/resource-set.png
rename to doc/sphinx/Pacemaker_Explained/images/resource-set.png
diff --git a/doc/shared/en-US/images/three-sets.png b/doc/sphinx/Pacemaker_Explained/images/three-sets.png
similarity index 100%
rename from doc/shared/en-US/images/three-sets.png
rename to doc/sphinx/Pacemaker_Explained/images/three-sets.png
diff --git a/doc/shared/en-US/images/two-sets.png b/doc/sphinx/Pacemaker_Explained/images/two-sets.png
similarity index 100%
rename from doc/shared/en-US/images/two-sets.png
rename to doc/sphinx/Pacemaker_Explained/images/two-sets.png
diff --git a/doc/sphinx/conf.py.in b/doc/sphinx/conf.py.in
index ff01143980..04924e1d27 100644
--- a/doc/sphinx/conf.py.in
+++ b/doc/sphinx/conf.py.in
@@ -1,314 +1,314 @@
""" Sphinx configuration for Pacemaker documentation
"""
__copyright__ = "Copyright 2020 the Pacemaker project contributors"
__license__ = "GNU General Public License version 2 or later (GPLv2+) WITHOUT ANY WARRANTY"
# This file is execfile()d with the current directory set to its containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import datetime
import os
import sys
# Variables that can be used later in this file
authors = "the Pacemaker project contributors"
year = datetime.datetime.now().year
doc_license = "Creative Commons Attribution-ShareAlike International Public License"
doc_license += " version 4.0 or later (CC-BY-SA v4.0+)"
# rST markup to insert at beginning of every document; mainly used for
#
# .. |<abbr>| replace:: <Full text>
#
# where occurrences of |<abbr>| in the rST will be substituted with <Full text>
rst_prolog="""
.. |CFS_DISTRO| replace:: CentOS
.. |CFS_DISTRO_VER| replace:: 7.5
.. |REMOTE_DISTRO| replace:: CentOS
.. |REMOTE_DISTRO_VER| replace:: 7.4
"""
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#sys.path.insert(0, os.path.abspath('.'))
# -- General configuration -----------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = []
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = '%BOOK_ID%'
copyright = "2009-%s %s. Released under the terms of the %s" % (year, authors, doc_license)
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The full version, including alpha/beta/rc tags.
release = '%VERSION%'
# The short X.Y version.
version = release.rsplit('.', 1)[0]
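# For example, if release is '2.0.4' (an illustrative value), then
# release.rsplit('.', 1) yields ['2.0', '4'], so version becomes '2.0'.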
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['_build']
# The reST default role (used for this markup: `text`) to use for all documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'vs'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# -- Options for HTML output ---------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'pyramid'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []
html_style = 'pacemaker.css'
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
html_title = "%BOOK_TITLE%"
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
-html_static_path = [ '../_static' ]
+html_static_path = [ '%SRC_DIR%/_static' ]
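# (Like %VERSION% and %BOOK_ID% above, %SRC_DIR% is presumably substituted
# with a concrete path when conf.py is generated from this .in template.)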
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_domain_indices = True
# If false, no index is generated.
#html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'Pacemakerdoc'
# -- Options for LaTeX output --------------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#'preamble': '',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass [howto/manual]).
latex_documents = [
('index', '%BOOK_ID%.tex', '%BOOK_TITLE%', authors, 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# If true, show page references after internal links.
#latex_show_pagerefs = False
# If true, show URL addresses after external links.
#latex_show_urls = False
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_domain_indices = True
# -- Options for manual page output --------------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', '%BOOK_ID%', 'Part of the Pacemaker documentation set', [authors], 8)
]
# If true, show URL addresses after external links.
#man_show_urls = False
# -- Options for Texinfo output ------------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index', '%BOOK_ID%', '%BOOK_TITLE%', authors, '%BOOK_TITLE%',
'Pacemaker is an advanced, scalable high-availability cluster resource manager.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
#texinfo_appendices = []
# If false, no module index is generated.
#texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'
# -- Options for Epub output ---------------------------------------------------
# Bibliographic Dublin Core info.
epub_title = '%BOOK_TITLE%'
epub_author = authors
epub_publisher = 'ClusterLabs.org'
epub_copyright = copyright
# The language of the text. It defaults to the language option
# or en if the language is not set.
#epub_language = ''
# The scheme of the identifier. Typical schemes are ISBN or URL.
epub_scheme = 'URL'
# The unique identifier of the text. This can be an ISBN number
# or the project homepage.
epub_identifier = 'http://www.clusterlabs.org/pacemaker/doc/2.0/%BOOK_ID%/epub/%BOOK_ID%.epub'
# A unique identification for the text.
epub_uid = 'ClusterLabs.org-Pacemaker-%BOOK_ID%'
# A tuple containing the cover image and cover page html template filenames.
#epub_cover = ()
# HTML files that should be inserted before the pages created by sphinx.
# The format is a list of tuples containing the path and title.
#epub_pre_files = []
# HTML files that should be inserted after the pages created by sphinx.
# The format is a list of tuples containing the path and title.
#epub_post_files = []
# A list of files that should not be packed into the epub file.
epub_exclude_files = [
'_static/doctools.js',
'_static/jquery.js',
'_static/searchtools.js',
'_static/underscore.js',
'_static/basic.css',
'_static/websupport.js',
'search.html',
]
# The depth of the table of contents in toc.ncx.
#epub_tocdepth = 3
# Allow duplicate toc entries.
#epub_tocdup = True
diff --git a/doc/shared/en-US/images/Policy-Engine-big.dot b/doc/sphinx/shared/images/Policy-Engine-big.dot
similarity index 100%
rename from doc/shared/en-US/images/Policy-Engine-big.dot
rename to doc/sphinx/shared/images/Policy-Engine-big.dot
diff --git a/doc/shared/en-US/images/Policy-Engine-big.svg b/doc/sphinx/shared/images/Policy-Engine-big.svg
similarity index 100%
rename from doc/shared/en-US/images/Policy-Engine-big.svg
rename to doc/sphinx/shared/images/Policy-Engine-big.svg
diff --git a/doc/shared/en-US/images/Policy-Engine-small.dot b/doc/sphinx/shared/images/Policy-Engine-small.dot
similarity index 100%
rename from doc/shared/en-US/images/Policy-Engine-small.dot
rename to doc/sphinx/shared/images/Policy-Engine-small.dot
diff --git a/doc/shared/en-US/images/Policy-Engine-small.svg b/doc/sphinx/shared/images/Policy-Engine-small.svg
similarity index 100%
rename from doc/shared/en-US/images/Policy-Engine-small.svg
rename to doc/sphinx/shared/images/Policy-Engine-small.svg
diff --git a/doc/shared/en-US/images/pcmk-active-active.svg b/doc/sphinx/shared/images/pcmk-active-active.svg
similarity index 100%
rename from doc/shared/en-US/images/pcmk-active-active.svg
rename to doc/sphinx/shared/images/pcmk-active-active.svg
diff --git a/doc/shared/en-US/images/pcmk-active-passive.svg b/doc/sphinx/shared/images/pcmk-active-passive.svg
similarity index 100%
rename from doc/shared/en-US/images/pcmk-active-passive.svg
rename to doc/sphinx/shared/images/pcmk-active-passive.svg
diff --git a/doc/shared/en-US/images/pcmk-colocated-sets.svg b/doc/sphinx/shared/images/pcmk-colocated-sets.svg
similarity index 100%
rename from doc/shared/en-US/images/pcmk-colocated-sets.svg
rename to doc/sphinx/shared/images/pcmk-colocated-sets.svg
diff --git a/doc/shared/en-US/images/pcmk-internals.svg b/doc/sphinx/shared/images/pcmk-internals.svg
similarity index 100%
rename from doc/shared/en-US/images/pcmk-internals.svg
rename to doc/sphinx/shared/images/pcmk-internals.svg
diff --git a/doc/shared/en-US/images/pcmk-overview.svg b/doc/sphinx/shared/images/pcmk-overview.svg
similarity index 100%
rename from doc/shared/en-US/images/pcmk-overview.svg
rename to doc/sphinx/shared/images/pcmk-overview.svg
diff --git a/doc/shared/en-US/images/pcmk-shared-failover.svg b/doc/sphinx/shared/images/pcmk-shared-failover.svg
similarity index 100%
rename from doc/shared/en-US/images/pcmk-shared-failover.svg
rename to doc/sphinx/shared/images/pcmk-shared-failover.svg
diff --git a/doc/shared/en-US/images/pcmk-stack.svg b/doc/sphinx/shared/images/pcmk-stack.svg
similarity index 100%
rename from doc/shared/en-US/images/pcmk-stack.svg
rename to doc/sphinx/shared/images/pcmk-stack.svg
diff --git a/doc/sphinx/shared/pacemaker-intro.rst b/doc/sphinx/shared/pacemaker-intro.rst
index 37c39afd56..a56e53fed7 100644
--- a/doc/sphinx/shared/pacemaker-intro.rst
+++ b/doc/sphinx/shared/pacemaker-intro.rst
@@ -1,201 +1,201 @@
What Is Pacemaker?
####################
Pacemaker is a high-availability *cluster resource manager* -- software that
runs on a set of hosts (a *cluster* of *nodes*) in order to preserve integrity
and minimize downtime of desired services (*resources*). [#]_ It is maintained
by the `ClusterLabs <https://www.ClusterLabs.org/>`_ community.
Pacemaker's key features include:
* Detection of and recovery from node- and service-level failures
* Ability to ensure data integrity by fencing faulty nodes
* Support for one or more nodes per cluster
* Support for multiple resource interface standards (anything that can be
scripted can be clustered)
* Support (but no requirement) for shared storage
* Support for practically any redundancy configuration (active/passive, N+1,
etc.)
* Automatically replicated configuration that can be updated from any node
* Ability to specify cluster-wide relationships between services,
such as ordering, colocation and anti-colocation
* Support for advanced service types, such as *clones* (services that need to
be active on multiple nodes), *promotable clones* (clones that can run in
one of two roles), and containerized services
* Unified, scriptable cluster management tools
.. note:: Fencing
*Fencing*, also known as *STONITH* (an acronym for Shoot The Other Node In
The Head), is the ability to ensure that a node cannot possibly be running a
service. This is accomplished via *fence devices* such as
intelligent power switches that cut power to the target, or intelligent
network switches that cut the target's access to the local network.
Pacemaker represents fence devices as a special class of resource.
A cluster cannot safely recover from certain failure conditions, such as an
unresponsive node, without fencing.
Cluster Architecture
____________________
At a high level, a cluster can be viewed as having these parts (which together
are often referred to as the *cluster stack*):
* **Resources:** These are the reason for the cluster's being -- the services
that need to be kept highly available.
* **Resource agents:** These are scripts or operating system components that
start, stop, and monitor resources, given a set of resource parameters.
These provide a uniform interface between Pacemaker and the managed
services.
* **Fence agents:** These are scripts that execute node fencing actions,
given a target and fence device parameters.
* **Cluster membership layer:** This component provides reliable messaging,
membership, and quorum information about the cluster. Currently, Pacemaker
supports `Corosync <http://www.corosync.org/>`_ as this layer.
* **Cluster resource manager:** Pacemaker provides the brain that processes
and reacts to events that occur in the cluster. These events may include
nodes joining or leaving the cluster; resource events caused by failures,
maintenance, or scheduled activities; and other administrative actions.
To achieve the desired availability, Pacemaker may start and stop resources
and fence nodes.
* **Cluster tools:** These provide an interface for users to interact with the
cluster. Various command-line and graphical (GUI) interfaces are available.
Most managed services are not, themselves, cluster-aware. However, many popular
open-source cluster filesystems make use of a common *Distributed Lock
Manager* (DLM), which makes direct use of Corosync for its messaging and
membership capabilities and Pacemaker for the ability to fence nodes.
-.. image:: ../../shared/en-US/images/pcmk-stack.png
+.. image:: ../shared/images/pcmk-stack.png
:alt: Example cluster stack
:scale: 75 %
:align: center
Pacemaker Architecture
______________________
Pacemaker itself is composed of multiple daemons that work together:
* pacemakerd
* pacemaker-attrd
* pacemaker-based
* pacemaker-controld
* pacemaker-execd
* pacemaker-fenced
* pacemaker-schedulerd
-.. image:: ../../shared/en-US/images/pcmk-internals.png
+.. image:: ../shared/images/pcmk-internals.png
:alt: Pacemaker software components
:scale: 65 %
:align: center
The Pacemaker master process (pacemakerd) spawns all the other daemons, and
respawns them if they unexpectedly exit.
The *Cluster Information Base* (CIB) is an
`XML <https://en.wikipedia.org/wiki/XML>`_ representation of the cluster's
configuration and the state of all nodes and resources. The *CIB manager*
(pacemaker-based) keeps the CIB synchronized across the cluster, and handles
requests to modify it.
The *attribute manager* (pacemaker-attrd) maintains a database of attributes
for all nodes, keeps it synchronized across the cluster, and handles requests
to modify them. These attributes are usually recorded in the CIB.
Given a snapshot of the CIB as input, the *scheduler* (pacemaker-schedulerd)
determines what actions are necessary to achieve the desired state of the
cluster.
The *local executor* (pacemaker-execd) handles requests to execute
resource agents on the local cluster node, and returns the result.
The *fencer* (pacemaker-fenced) handles requests to fence nodes. Given a target
node, the fencer decides which cluster node(s) should execute which fencing
device(s), calls the necessary fencing agents (either directly, or via
requests to the fencer peers on other nodes), and returns the result.
The *controller* (pacemaker-controld) is Pacemaker's coordinator, maintaining a
consistent view of the cluster membership and orchestrating all the other
components.
Pacemaker centralizes cluster decision-making by electing one of the controller
instances as the 'Designated Controller' ('DC'). Should the elected DC process
(or the node it is on) fail, a new one is quickly established. The DC responds
to cluster events by taking a current snapshot of the CIB, feeding it to the
scheduler, then asking the executors (either directly on the local node, or via
requests to controller peers on other nodes) and the fencer to execute any
necessary actions.
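To make that flow concrete, here is a minimal, purely illustrative sketch in
pseudo-Python -- it is not Pacemaker's code or API, just the shape of the
snapshot -> scheduler -> dispatch cycle described above:

.. code-block:: python

   from dataclasses import dataclass

   @dataclass
   class Action:
       kind: str    # e.g. "start", "stop", or "fence"
       node: str    # node that should carry out the action
       target: str  # resource name (or node name, for "fence")

   def schedule(cib_snapshot):
       """Stand-in for the scheduler: compare desired and actual state."""
       actions = []
       for rsc, state in cib_snapshot["resources"].items():
           if state["desired"] == "started" and state["actual"] == "stopped":
               actions.append(Action("start", state["node"], rsc))
       return actions

   def dc_handle_event(cib_snapshot):
       """Conceptual DC cycle: snapshot the CIB, schedule, dispatch."""
       for action in schedule(cib_snapshot):
           # A real DC would ask the local executor, a controller peer,
           # or the fencer; here we only print the decision.
           print(f"{action.kind} {action.target} on {action.node}")

   # Hypothetical snapshot: one resource that should be running but is not.
   dc_handle_event({"resources": {
       "WebServer": {"desired": "started", "actual": "stopped",
                     "node": "node2"},
   }})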
.. note:: **Old daemon names**
The Pacemaker daemons were renamed in version 2.0. You may still find
references to the old names, especially in documentation targeted to
version 1.1.
.. table::
+-------------------+---------------------+
| Old name | New name |
+===================+=====================+
| attrd | pacemaker-attrd |
+-------------------+---------------------+
| cib | pacemaker-based |
+-------------------+---------------------+
| crmd | pacemaker-controld |
+-------------------+---------------------+
| lrmd | pacemaker-execd |
+-------------------+---------------------+
| stonithd | pacemaker-fenced |
+-------------------+---------------------+
| pacemaker_remoted | pacemaker-remoted |
+-------------------+---------------------+
Node Redundancy Designs
_______________________
Pacemaker supports practically any `node redundancy configuration
<https://en.wikipedia.org/wiki/High-availability_cluster#Node_configurations>`_
including *Active/Active*, *Active/Passive*, *N+1*, *N+M*, *N-to-1* and
*N-to-N*.
Active/passive clusters with two (or more) nodes using Pacemaker and
`DRBD <https://en.wikipedia.org/wiki/Distributed_Replicated_Block_Device>`_ are
a cost-effective high-availability solution for many situations. One of the
nodes provides the desired services, and if it fails, the other node takes
over.
-.. image:: ../../shared/en-US/images/pcmk-active-passive.png
+.. image:: ../shared/images/pcmk-active-passive.png
:alt: Active/Passive Redundancy
:align: center
:scale: 75 %
Pacemaker also supports multiple nodes in a shared-failover design, reducing
hardware costs by allowing several active/passive clusters to be combined and
share a common backup node.
-.. image:: ../../shared/en-US/images/pcmk-shared-failover.png
+.. image:: ../shared/images/pcmk-shared-failover.png
:alt: Shared Failover
:align: center
:scale: 75 %
When shared storage is available, every node can potentially be used for
failover. Pacemaker can even run multiple copies of services to spread out the
workload. This is sometimes called *N-to-N* redundancy.
-.. image:: ../../shared/en-US/images/pcmk-active-active.png
+.. image:: ../shared/images/pcmk-active-active.png
:alt: N to N Redundancy
:align: center
:scale: 75 %
.. rubric:: Footnotes
.. [#] *Cluster* is sometimes used in other contexts to refer to hosts grouped
together for other purposes, such as high-performance computing (HPC),
but Pacemaker is not intended for those purposes.