diff --git a/.gitignore b/.gitignore index abf43bc60b..ac6bfd4156 100644 --- a/.gitignore +++ b/.gitignore @@ -1,168 +1,171 @@ # Common \#* .\#* GPATH GRTAGS GTAGS TAGS Makefile Makefile.in .deps .libs *.pc *.pyc *.bz2 *.rpm *.la *.lo *.o *~ *.gcda *.gcno # Autobuild aclocal.m4 autoconf autoheader autom4te.cache/ automake build.counter compile config.guess config.log config.status config.sub configure depcomp install-sh include/stamp-* libltdl.tar libtool libtool.m4 ltdl.m4 ltmain.sh missing py-compile m4/ltoptions.m4 m4/ltsugar.m4 m4/ltversion.m4 m4/lt~obsolete.m4 test-driver ylwrap # Configure targets Doxyfile coverage.sh cts/CTSvars.py cts/LSBDummy cts/benchmark/clubench cts/lxc_autogen.sh extra/logrotate/pacemaker include/config.h include/config.h.in include/crm_config.h lrmd/pacemaker_remote +lrmd/pacemaker_remoted lrmd/pacemaker_remote.service mcp/pacemaker mcp/pacemaker.combined.upstart mcp/pacemaker.service mcp/pacemaker.upstart pengine/regression.core.sh publican.cfg shell/modules/help.py shell/modules/ra.py shell/modules/ui.py shell/modules/vars.py tools/cibsecret tools/coverage.sh +tools/crm_error tools/crm_mon.upstart tools/crm_report tools/report.common lrmd/regression.py fencing/regression.py # Build targets *.7 *.7.xml *.7.html *.8 *.8.xml *.8.html +attrd/attrd doc/*/en-US/images/*.png doc/*/tmp/** doc/*/publish cib/cib cib/cibmon cib/cibpipe crmd/atest crmd/crmd doc/Clusters_from_Scratch.txt doc/Pacemaker_Explained.txt doc/acls.html doc/crm_fencing.html fencing/stonith-test fencing/stonith_admin fencing/stonithd fencing/stonithd.xml lrmd/lrmd lrmd/lrmd_test mcp/pacemakerd pengine/pengine pengine/pengine.xml pengine/ptest shell/regression/testcases/confbasic-xml.filter scratch -tools/attrd tools/attrd_updater tools/cibadmin tools/crm_attribute tools/crm_diff tools/crm_mon tools/crm_node tools/crm_resource tools/crm_shadow tools/crm_simulate tools/crm_uuid tools/crm_verify tools/crmadmin tools/iso8601 tools/crm_ticket tools/report.collector.1 xml/crm.dtd -xml/pacemaker.rng +xml/pacemaker*.rng +xml/versions.rng extra/rgmanager/ccs2cib extra/rgmanager/ccs_flatten extra/rgmanager/disable_rgmanager doc/Clusters_from_Scratch.build doc/Clusters_from_Scratch/en-US/Ap-*.xml doc/Clusters_from_Scratch/en-US/Ch-*.xml doc/Pacemaker_Explained.build doc/Pacemaker_Explained/en-US/Ch-*.xml doc/Pacemaker_Explained/en-US/Ap-*.xml doc/Pacemaker_Remote.build doc/Pacemaker_Remote/en-US/Ch-*.xml lib/gnu/libgnu.a lib/gnu/stdalign.h *.coverity #Other mock HTML -pacemaker.spec +pacemaker*.spec pengine/.regression.failed.diff ClusterLabs-pacemaker-*.tar.gz coverity-* compat_reports .ABI-build abi_dumps logs *.patch *.diff *.sed *.orig *.rej *.swp pengine/test10/shadow.* diff --git a/README.markdown b/README.markdown index 8d57f0f842..e4f911b7e3 100644 --- a/README.markdown +++ b/README.markdown @@ -1,92 +1,94 @@ # Pacemaker ## What is Pacemaker? Pacemaker is an advanced, scalable High-Availability cluster resource manager for Linux-HA (Heartbeat) and/or Corosync. It supports "n-node" clusters with significant capabilities for managing resources and dependencies. It will run scripts at initialization, when machines go up or down, when related resources fail and can be configured to periodically check resource health. ## For more information look at: * [Website](http://www.clusterlabs.org) * [Issues/Bugs](http://bugs.clusterlabs.org) * [Mailing list](http://oss.clusterlabs.org/mailman/listinfo/pacemaker). 
* [Documentation](http://www.clusterlabs.org/doc) ## User interfaces / shells There are multiple user interfaces for Pacemaker, both command line tools, graphical user interfaces and web frontends. The _crm shell_ used to be included in the Pacemaker source tree, but is now maintained as a separate project. This is not meant to be an exhaustive list: * _crmsh_: https://crmsh.github.io/ * _pcs_: https://github.com/feist/pcs/ * _LCMC_: http://lcmc.sourceforge.net/ * _hawk_: https://github.com/ClusterLabs/hawk ## Build Dependencies * automake * autoconf * libtool-ltdl-devel * libuuid-devel * pkgconfig * python * glib2-devel * libxml2-devel * libxslt-devel * python-devel * gcc-c++ * bzip2-devel * gnutls-devel * pam-devel * libqb-devel ## Cluster Stack Dependencies (Pick at least one) * clusterlib-devel (CMAN) * corosynclib-devel (Corosync) * heartbeat-devel (Heartbeat) ## Optional Build Dependencies * ncurses-devel * openssl-devel * libselinux-devel +* systemd-devel +* dbus-devel * cluster-glue-libs-devel (LHA style fencing agents) * libesmtp-devel (Email alerts) * lm_sensors-devel (SNMP alerts) * net-snmp-devel (SNMP alerts) * asciidoc (documentation) * help2man (documentation) * publican (documentation) * inkscape (documentation) * docbook-style-xsl (documentation) ## Source Control (GIT) git clone git://github.com/ClusterLabs/pacemaker.git [See Github](https://github.com/ClusterLabs/pacemaker) ## Installing from source $ ./autogen.sh $ ./configure $ make $ sudo make install ## How you can help If you find this project useful, you may want to consider supporting its future development. There are a number of ways to support the project. * Test and report issues. * Tick something off our [todo list](https://github.com/ClusterLabs/pacemaker/blob/master/TODO.markdown) * Help others on the [mailing list](http://oss.clusterlabs.org/mailman/listinfo/pacemaker). * Contribute documentation, examples and test cases. * Contribute patches. * Spread the word. diff --git a/configure.ac b/configure.ac index e460ae9931..a45af59a55 100644 --- a/configure.ac +++ b/configure.ac @@ -1,1920 +1,1920 @@ dnl dnl autoconf for Pacemaker dnl dnl License: GNU General Public License (GPL) dnl =============================================== dnl Bootstrap dnl =============================================== AC_PREREQ(2.59) dnl Suggested structure: dnl information on the package dnl checks for programs dnl checks for libraries dnl checks for header files dnl checks for types dnl checks for structures dnl checks for compiler characteristics dnl checks for library functions dnl checks for system services m4_include([version.m4]) AC_INIT([pacemaker], VERSION_NUMBER, pacemaker@oss.clusterlabs.org,pacemaker,http://clusterlabs.org) PCMK_FEATURES="" HB_PKG=heartbeat AC_CONFIG_AUX_DIR(.) AC_CANONICAL_HOST dnl Where #defines go (e.g. 
`AC_CHECK_HEADERS' below) dnl dnl Internal header: include/config.h dnl - Contains ALL defines dnl - include/config.h.in is generated automatically by autoheader dnl - NOT to be included in any header files except lha_internal.h dnl (which is also not to be included in any other header files) dnl dnl External header: include/crm_config.h dnl - Contains a subset of defines checked here dnl - Manually edit include/crm_config.h.in to have configure include dnl new defines dnl - Should not include HAVE_* defines dnl - Safe to include anywhere AM_CONFIG_HEADER(include/config.h include/crm_config.h) ALL_LINGUAS="en fr" AC_ARG_WITH(version, [ --with-version=version Override package version (if you're a packager needing to pretend) ], [ PACKAGE_VERSION="$withval" ]) AC_ARG_WITH(pkg-name, [ --with-pkg-name=name Override package name (if you're a packager needing to pretend) ], [ PACKAGE_NAME="$withval" ]) dnl Older distros may need: AM_INIT_AUTOMAKE($PACKAGE_NAME, $PACKAGE_VERSION) AM_INIT_AUTOMAKE AC_DEFINE_UNQUOTED(PACEMAKER_VERSION, "$PACKAGE_VERSION", Current pacemaker version) PACKAGE_SERIES=`echo $PACKAGE_VERSION | awk -F. '{ print $1"."$2 }'` AC_SUBST(PACKAGE_SERIES) AC_SUBST(PACKAGE_VERSION) dnl automake >= 1.11 offers --enable-silent-rules for suppressing the output from dnl normal compilation. When a failure occurs, it will then display the full dnl command line dnl Wrap in m4_ifdef to avoid breaking on older platforms m4_ifdef([AM_SILENT_RULES],[AM_SILENT_RULES([yes])]) dnl Example 2.4. Silent Custom Rule to Generate a File dnl %-bar.pc: %.pc dnl $(AM_V_GEN)$(LN_S) $(notdir $^) $@ CC_IN_CONFIGURE=yes export CC_IN_CONFIGURE LDD=ldd BUILD_ATOMIC_ATTRD=1 dnl ======================================================================== dnl Compiler characteristics dnl ======================================================================== AC_PROG_CC dnl Can force other with environment variable "CC". AM_PROG_CC_C_O AC_PROG_CC_STDC gl_EARLY gl_INIT AC_LIBTOOL_DLOPEN dnl Enable dlopen support... 
AC_LIBLTDL_CONVENIENCE dnl make libltdl a convenience lib AC_PROG_LIBTOOL AC_PROG_YACC AM_PROG_LEX AC_C_STRINGIZE AC_TYPE_SIZE_T AC_CHECK_SIZEOF(char) AC_CHECK_SIZEOF(short) AC_CHECK_SIZEOF(int) AC_CHECK_SIZEOF(long) AC_CHECK_SIZEOF(long long) AC_STRUCT_TIMEZONE dnl =============================================== dnl Helpers dnl =============================================== cc_supports_flag() { local CFLAGS="-Werror $@" AC_MSG_CHECKING(whether $CC supports "$@") AC_COMPILE_IFELSE([AC_LANG_PROGRAM([[ ]], [[ ]])], [RC=0; AC_MSG_RESULT(yes)],[RC=1; AC_MSG_RESULT(no)]) return $RC } try_extract_header_define() { AC_MSG_CHECKING(if $2 in $1 exists) Cfile=$srcdir/extract_define.$2.${$} printf "#include \n" > ${Cfile}.c printf "#include <%s>\n" $1 >> ${Cfile}.c printf "int main(int argc, char **argv) {\n" >> ${Cfile}.c printf "#ifdef %s\n" $2 >> ${Cfile}.c printf "printf(\"%%s\", %s);\n" $2 >> ${Cfile}.c printf "#endif \n return 0; }\n" >> ${Cfile}.c $CC $CFLAGS ${Cfile}.c -o ${Cfile} 2>/dev/null value= if test -x ${Cfile}; then value=`${Cfile} 2>/dev/null` fi if test x"${value}" == x""; then value=$3 AC_MSG_RESULT(default: $value) else AC_MSG_RESULT($value) fi printf $value rm -rf ${Cfile}.c ${Cfile} ${Cfile}.dSYM ${Cfile}.gcno } extract_header_define() { AC_MSG_CHECKING(for $2 in $1) Cfile=$srcdir/extract_define.$2.${$} printf "#include \n" > ${Cfile}.c printf "#include <%s>\n" $1 >> ${Cfile}.c printf "int main(int argc, char **argv) { printf(\"%%s\", %s); return 0; }\n" $2 >> ${Cfile}.c $CC $CFLAGS ${Cfile}.c -o ${Cfile} value=`${Cfile}` AC_MSG_RESULT($value) printf $value rm -rf ${Cfile}.c ${Cfile} ${Cfile}.dSYM ${Cfile}.gcno } dnl =============================================== dnl Configure Options dnl =============================================== dnl Some systems, like Solaris require a custom package name AC_ARG_WITH(pkgname, [ --with-pkgname=name name for pkg (typically for Solaris) ], [ PKGNAME="$withval" ], [ PKGNAME="LXHAhb" ], ) AC_SUBST(PKGNAME) AC_ARG_ENABLE([ansi], [ --enable-ansi force GCC to compile to ANSI/ANSI standard for older compilers. [default=no]]) AC_ARG_ENABLE([fatal-warnings], [ --enable-fatal-warnings very pedantic and fatal warnings for gcc [default=yes]]) AC_ARG_ENABLE([quiet], [ --enable-quiet Supress make output unless there is an error [default=no]]) AC_ARG_ENABLE([thread-safe], [ --enable-thread-safe Enable some client libraries to be thread safe. [default=no]]) AC_ARG_ENABLE([bundled-ltdl], [ --enable-bundled-ltdl Configure, build and install the standalone ltdl library bundled with ${PACKAGE} [default=no]]) LTDL_LIBS="" AC_ARG_ENABLE([no-stack], [ --enable-no-stack Only build the Policy Engine and pieces needed to support it [default=no]]) AC_ARG_ENABLE([upstart], [ --enable-upstart Do not build support for the Upstart init system [default=yes]]) AC_ARG_ENABLE([systemd], [ --enable-systemd Do not build support for the Systemd init system [default=yes]]) AC_ARG_WITH(ais, [ --with-ais Support the Corosync messaging and membership layer ], [ SUPPORT_CS=$withval ], [ SUPPORT_CS=try ], ) AC_ARG_WITH(corosync, [ --with-corosync Support the Corosync messaging and membership layer ], [ SUPPORT_CS=$withval ] dnl initialized in AC_ARG_WITH(ais...) 
already, dnl don't reset to try if it was given as --without-ais ) AC_ARG_WITH(heartbeat, [ --with-heartbeat Support the Heartbeat messaging and membership layer ], [ SUPPORT_HEARTBEAT=$withval ], [ SUPPORT_HEARTBEAT=try ], ) AC_ARG_WITH(cman, [ --with-cman Support the consumption of membership and quorum from cman ], [ SUPPORT_CMAN=$withval ], [ SUPPORT_CMAN=try ], ) AC_ARG_WITH(cpg, [ --with-cs-quorum Support the consumption of membership and quorum from corosync ], [ SUPPORT_CS_QUORUM=$withval ], [ SUPPORT_CS_QUORUM=try ], ) AC_ARG_WITH(nagios, [ --with-nagios Support nagios remote monitoring ], [ SUPPORT_NAGIOS=$withval ], [ SUPPORT_NAGIOS=try ], ) AC_ARG_WITH(nagios-plugin-dir, [ --with-nagios-plugin-dir=DIR Directory for nagios plugins [${NAGIOS_PLUGIN_DIR}]], [ NAGIOS_PLUGIN_DIR="$withval" ] ) AC_ARG_WITH(nagios-metadata-dir, [ --with-nagios-metadata-dir=DIR Directory for nagios plugins metadata [${NAGIOS_METADATA_DIR}]], [ NAGIOS_METADATA_DIR="$withval" ] ) AC_ARG_WITH(snmp, [ --with-snmp Support the SNMP protocol ], [ SUPPORT_SNMP=$withval ], [ SUPPORT_SNMP=try ], ) AC_ARG_WITH(esmtp, [ --with-esmtp Support the sending mail notifications with the esmtp library ], [ SUPPORT_ESMTP=$withval ], [ SUPPORT_ESMTP=try ], ) AC_ARG_WITH(acl, [ --with-acl Support CIB ACL ], [ SUPPORT_ACL=$withval ], [ SUPPORT_ACL=yes ], ) AC_ARG_WITH(cibsecrets, [ --with-cibsecrets Support CIB secrets ], [ SUPPORT_CIBSECRETS=$withval ], [ SUPPORT_CIBSECRETS=no ], ) CSPREFIX="" AC_ARG_WITH(ais-prefix, [ --with-ais-prefix=DIR Prefix used when Corosync was installed [$prefix]], [ CSPREFIX=$withval ], [ CSPREFIX=$prefix ]) LCRSODIR="" AC_ARG_WITH(lcrso-dir, [ --with-lcrso-dir=DIR Corosync lcrso files. ], [ LCRSODIR="$withval" ]) INITDIR="" AC_ARG_WITH(initdir, [ --with-initdir=DIR directory for init (rc) scripts [${INITDIR}]], [ INITDIR="$withval" ]) SUPPORT_PROFILING=0 AC_ARG_WITH(profiling, [ --with-profiling Disable optimizations for effective profiling ], [ SUPPORT_PROFILING=$withval ]) AC_ARG_WITH(coverage, [ --with-coverage Disable optimizations for effective profiling ], [ SUPPORT_COVERAGE=$withval ]) PUBLICAN_BRAND="common" AC_ARG_WITH(brand, [ --with-brand=brand Brand to use for generated documentation [$PUBLICAN_BRAND]], [ PUBLICAN_BRAND="$withval" ]) AC_SUBST(PUBLICAN_BRAND) ASCIIDOC_CLI_TYPE="pcs" AC_ARG_WITH(doc-cli, [ --with-doc-cli=cli_type CLI type to use for generated documentation. 
[$ASCIIDOC_CLI_TYPE]], [ ASCIIDOC_CLI_TYPE="$withval" ]) AC_SUBST(ASCIIDOC_CLI_TYPE) dnl =============================================== dnl General Processing dnl =============================================== AC_SUBST(HB_PKG) INIT_EXT="" echo Our Host OS: $host_os/$host AC_MSG_NOTICE(Sanitizing prefix: ${prefix}) case $prefix in NONE) prefix=/usr dnl Fix default variables - "prefix" variable if not specified if test "$localstatedir" = "\${prefix}/var"; then localstatedir="/var" fi if test "$sysconfdir" = "\${prefix}/etc"; then sysconfdir="/etc" fi ;; esac AC_MSG_NOTICE(Sanitizing exec_prefix: ${exec_prefix}) case $exec_prefix in dnl For consistency with Heartbeat, map NONE->$prefix NONE) exec_prefix=$prefix;; prefix) exec_prefix=$prefix;; esac AC_MSG_NOTICE(Sanitizing ais_prefix: ${CSPREFIX}) case $CSPREFIX in dnl For consistency with Heartbeat, map NONE->$prefix NONE) CSPREFIX=$prefix;; prefix) CSPREFIX=$prefix;; esac AC_MSG_NOTICE(Sanitizing INITDIR: ${INITDIR}) case $INITDIR in prefix) INITDIR=$prefix;; "") AC_MSG_CHECKING(which init (rc) directory to use) for initdir in /etc/init.d /etc/rc.d/init.d /sbin/init.d \ /usr/local/etc/rc.d /etc/rc.d do if test -d $initdir then INITDIR=$initdir break fi done AC_MSG_RESULT($INITDIR);; esac AC_SUBST(INITDIR) AC_MSG_NOTICE(Sanitizing libdir: ${libdir}) case $libdir in dnl For consistency with Heartbeat, map NONE->$prefix *prefix*|NONE) AC_MSG_CHECKING(which lib directory to use) for aDir in lib64 lib do trydir="${exec_prefix}/${aDir}" if test -d ${trydir} then libdir=${trydir} break fi done AC_MSG_RESULT($libdir); ;; esac dnl Expand autoconf variables so that we dont end up with '${prefix}' dnl in #defines and python scripts dnl NOTE: Autoconf deliberately leaves them unexpanded to allow dnl make exec_prefix=/foo install dnl No longer being able to do this seems like no great loss to me... eval prefix="`eval echo ${prefix}`" eval exec_prefix="`eval echo ${exec_prefix}`" eval bindir="`eval echo ${bindir}`" eval sbindir="`eval echo ${sbindir}`" eval libexecdir="`eval echo ${libexecdir}`" eval datadir="`eval echo ${datadir}`" eval sysconfdir="`eval echo ${sysconfdir}`" eval sharedstatedir="`eval echo ${sharedstatedir}`" eval localstatedir="`eval echo ${localstatedir}`" eval libdir="`eval echo ${libdir}`" eval includedir="`eval echo ${includedir}`" eval oldincludedir="`eval echo ${oldincludedir}`" eval infodir="`eval echo ${infodir}`" eval mandir="`eval echo ${mandir}`" dnl Home-grown variables eval INITDIR="${INITDIR}" eval docdir="`eval echo ${docdir}`" if test x"${docdir}" = x""; then docdir=${datadir}/doc/${PACKAGE}-${VERSION} #docdir=${datadir}/doc/packages/${PACKAGE} fi AC_SUBST(docdir) for j in prefix exec_prefix bindir sbindir libexecdir datadir sysconfdir \ sharedstatedir localstatedir libdir includedir oldincludedir infodir \ mandir INITDIR docdir do dirname=`eval echo '${'${j}'}'` if test ! -d "$dirname" then AC_MSG_WARN([$j directory ($dirname) does not exist!]) fi done dnl This OS-based decision-making is poor autotools practice; dnl feature-based mechanisms are strongly preferred. dnl dnl So keep this section to a bare minimum; regard as a "necessary evil". 
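The `eval prefix="\`eval echo ${prefix}\`"` block a little earlier in this hunk deals with autoconf leaving directory defaults nested inside one another (for example `datadir` defaults to the literal string `${prefix}/share`), which would otherwise end up verbatim in #defines and in the generated Python scripts. A minimal plain-shell sketch of that double-expansion trick, using illustrative values rather than anything taken from the patch:

    #!/bin/sh
    prefix=/usr
    datadir='${prefix}/share'        # autoconf-style nested default

    echo "before: $datadir"          # prints: ${prefix}/share

    # The command substitution runs the inner eval first, expanding the
    # embedded ${prefix}; the result is then assigned back, as configure.ac does.
    eval datadir="`eval echo ${datadir}`"

    echo "after:  $datadir"          # prints: /usr/share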
case "$host_os" in *bsd*) AC_DEFINE_UNQUOTED(ON_BSD, 1, Compiling for BSD platform) LIBS="-L/usr/local/lib" CPPFLAGS="$CPPFLAGS -I/usr/local/include" INIT_EXT=".sh" ;; *solaris*) AC_DEFINE_UNQUOTED(ON_SOLARIS, 1, Compiling for Solaris platform) ;; *linux*) AC_DEFINE_UNQUOTED(ON_LINUX, 1, Compiling for Linux platform) CFLAGS="$CFLAGS -I${prefix}/include" ;; darwin*) AC_DEFINE_UNQUOTED(ON_DARWIN, 1, Compiling for Darwin platform) LIBS="$LIBS -L${prefix}/lib" CFLAGS="$CFLAGS -I${prefix}/include" ;; esac dnl Eventually remove this CFLAGS="$CFLAGS -I${prefix}/include/heartbeat" AC_SUBST(INIT_EXT) AC_MSG_NOTICE(Host CPU: $host_cpu) case "$host_cpu" in ppc64|powerpc64) case $CFLAGS in *powerpc64*) ;; *) if test "$GCC" = yes; then CFLAGS="$CFLAGS -m64" fi ;; esac esac AC_MSG_CHECKING(which format is needed to print uint64_t) ac_save_CFLAGS=$CFLAGS CFLAGS="-Wall -Werror" AC_COMPILE_IFELSE( [AC_LANG_PROGRAM( [ #include #include #include ], [ int max = 512; uint64_t bignum = 42; char *buffer = malloc(max); const char *random = "random"; snprintf(buffer, max-1, "", bignum, random); fprintf(stderr, "Result: %s\n", buffer); ] )], [U64T="%lu"], [U64T="%llu"] ) CFLAGS=$ac_save_CFLAGS AC_MSG_RESULT($U64T) AC_DEFINE_UNQUOTED(U64T, "$U64T", Correct printf format for logging uint64_t) dnl =============================================== dnl Program Paths dnl =============================================== PATH="$PATH:/sbin:/usr/sbin:/usr/local/sbin:/usr/local/bin" export PATH dnl Replacing AC_PROG_LIBTOOL with AC_CHECK_PROG because LIBTOOL dnl was NOT being expanded all the time thus causing things to fail. AC_CHECK_PROGS(LIBTOOL, glibtool libtool libtool15 libtool13) AM_PATH_PYTHON AC_CHECK_PROGS(MAKE, gmake make) AC_PATH_PROGS(HTML2TXT, lynx w3m) AC_PATH_PROGS(HELP2MAN, help2man) AC_PATH_PROGS(POD2MAN, pod2man, pod2man) AC_PATH_PROGS(ASCIIDOC, asciidoc) AC_PATH_PROGS(PUBLICAN, publican) AC_PATH_PROGS(INKSCAPE, inkscape) AC_PATH_PROGS(XSLTPROC, xsltproc) AC_PATH_PROGS(FOP, fop) AC_PATH_PROGS(SSH, ssh, /usr/bin/ssh) AC_PATH_PROGS(SCP, scp, /usr/bin/scp) AC_PATH_PROGS(TAR, tar) AC_PATH_PROGS(MD5, md5) AC_PATH_PROGS(TEST, test) AC_PATH_PROGS(PKGCONFIG, pkg-config) AC_PATH_PROGS(XML2CONFIG, xml2-config) AC_PATH_PROGS(VALGRIND_BIN, valgrind, /usr/bin/valgrind) AC_DEFINE_UNQUOTED(VALGRIND_BIN, "$VALGRIND_BIN", Valgrind command) dnl Disable these until we decide if the stonith config file should be supported dnl AC_PATH_PROGS(BISON, bison) dnl AC_PATH_PROGS(FLEX, flex) dnl AC_PATH_PROGS(HAVE_YACC, $YACC) if test x"${LIBTOOL}" = x""; then AC_MSG_ERROR(You need (g)libtool installed in order to build ${PACKAGE}) fi if test x"${MAKE}" = x""; then AC_MSG_ERROR(You need (g)make installed in order to build ${PACKAGE}) fi AM_CONDITIONAL(BUILD_HELP, test x"${HELP2MAN}" != x"") if test x"${HELP2MAN}" != x""; then PCMK_FEATURES="$PCMK_FEATURES generated-manpages" fi MANPAGE_XSLT="" if test x"${XSLTPROC}" != x""; then AC_MSG_CHECKING(docbook to manpage transform) XSLT=`find ${datadir} -name docbook.xsl` for xsl in $XSLT; do dname=`dirname $xsl` bname=`basename $dname` if test "$bname" = "manpages"; then MANPAGE_XSLT="$xsl" break fi done fi AC_MSG_RESULT($MANPAGE_XSLT) AC_SUBST(MANPAGE_XSLT) AM_CONDITIONAL(BUILD_XML_HELP, test x"${MANPAGE_XSLT}" != x"") if test x"${MANPAGE_XSLT}" != x""; then PCMK_FEATURES="$PCMK_FEATURES agent-manpages" fi AM_CONDITIONAL(BUILD_ASCIIDOC, test x"${ASCIIDOC}" != x"") if test x"${ASCIIDOC}" != x""; then PCMK_FEATURES="$PCMK_FEATURES ascii-docs" fi SUPPORT_STONITH_CONFIG=0 if test x"${HAVE_YACC}" != 
x"" -a x"${FLEX}" != x"" -a x"${BISON}" != x""; then SUPPORT_STONITH_CONFIG=1 PCMK_FEATURES="$PCMK_FEATURES st-conf" fi AM_CONDITIONAL(BUILD_STONITH_CONFIG, test $SUPPORT_STONITH_CONFIG = 1) AC_DEFINE_UNQUOTED(SUPPORT_STONITH_CONFIG, $SUPPORT_STONITH_CONFIG, Support a stand-alone stonith config file in addition to the CIB) AM_CONDITIONAL(BUILD_DOCBOOK, test x"${PUBLICAN}" != x"" -a x"${INKSCAPE}" != x"") if test x"${PUBLICAN}" != x"" -a x"${INKSCAPE}" != x""; then AC_MSG_NOTICE(Enabling publican) PCMK_FEATURES="$PCMK_FEATURES publican-docs" fi dnl ======================================================================== dnl checks for library functions to replace them dnl dnl NoSuchFunctionName: dnl is a dummy function which no system supplies. It is here to make dnl the system compile semi-correctly on OpenBSD which doesn't know dnl how to create an empty archive dnl dnl scandir: Only on BSD. dnl System-V systems may have it, but hidden and/or deprecated. dnl A replacement function is supplied for it. dnl dnl setenv: is some bsdish function that should also be avoided (use dnl putenv instead) dnl On the other hand, putenv doesn't provide the right API for the dnl code and has memory leaks designed in (sigh...) Fortunately this dnl A replacement function is supplied for it. dnl dnl strerror: returns a string that corresponds to an errno. dnl A replacement function is supplied for it. dnl dnl strnlen: is a gnu function similar to strlen, but safer. dnl We wrote a tolearably-fast replacement function for it. dnl dnl strndup: is a gnu function similar to strdup, but safer. dnl We wrote a tolearably-fast replacement function for it. AC_REPLACE_FUNCS(alphasort NoSuchFunctionName scandir setenv strerror strchrnul unsetenv strnlen strndup) dnl =============================================== dnl Libraries dnl =============================================== AC_CHECK_LIB(socket, socket) dnl -lsocket AC_CHECK_LIB(c, dlopen) dnl if dlopen is in libc... AC_CHECK_LIB(dl, dlopen) dnl -ldl (for Linux) AC_CHECK_LIB(rt, sched_getscheduler) dnl -lrt (for Tru64) AC_CHECK_LIB(gnugetopt, getopt_long) dnl -lgnugetopt ( if available ) AC_CHECK_LIB(pam, pam_start) dnl -lpam (if available) AC_CHECK_FUNCS([sched_setscheduler]) AC_CHECK_LIB(uuid, uuid_parse) dnl load the library if necessary AC_CHECK_FUNCS(uuid_unparse) dnl OSX ships uuid_* as standard functions AC_CHECK_HEADERS(uuid/uuid.h) if test "x$ac_cv_func_uuid_unparse" != xyes; then AC_MSG_ERROR(You do not have the libuuid development package installed) fi if test x"${PKGCONFIG}" = x""; then AC_MSG_ERROR(You need pkgconfig installed in order to build ${PACKAGE}) fi if test "x${enable_thread_safe}" = "xyes"; then GPKGNAME="gthread-2.0" else GPKGNAME="glib-2.0" fi if $PKGCONFIG --exists $GPKGNAME then GLIBCONFIG="$PKGCONFIG $GPKGNAME" else set -x echo PKG_CONFIG_PATH=$PKG_CONFIG_PATH $PKGCONFIG --exists $GPKGNAME; echo $? $PKGCONFIG --cflags $GPKGNAME; echo $? $PKGCONFIG $GPKGNAME; echo $? set +x AC_MSG_ERROR(You need glib2-devel installed in order to build ${PACKAGE}) fi AC_MSG_RESULT(using $GLIBCONFIG) # # Where is dlopen? # if test "$ac_cv_lib_c_dlopen" = yes; then LIBADD_DL="" elif test "$ac_cv_lib_dl_dlopen" = yes; then LIBADD_DL=-ldl else LIBADD_DL=${lt_cv_dlopen_libs} fi dnl dnl Check for location of gettext dnl dnl On at least Solaris 2.x, where it is in libc, specifying lintl causes dnl grief. Ensure minimal result, not the sum of all possibilities. dnl And do libc first. 
dnl Known examples: dnl c: Linux, Solaris 2.6+ dnl intl: BSD, AIX AC_CHECK_LIB(c, gettext) if test x$ac_cv_lib_c_gettext != xyes; then AC_CHECK_LIB(intl, gettext) fi if test x$ac_cv_lib_c_gettext != xyes -a x$ac_cv_lib_intl_gettext != xyes; then AC_MSG_ERROR(You need gettext installed in order to build ${PACKAGE}) fi if test "X$GLIBCONFIG" != X; then AC_MSG_CHECKING(for special glib includes: ) GLIBHEAD=`$GLIBCONFIG --cflags` AC_MSG_RESULT($GLIBHEAD) CPPFLAGS="$CPPFLAGS $GLIBHEAD" AC_MSG_CHECKING(for glib library flags) GLIBLIB=`$GLIBCONFIG --libs` AC_MSG_RESULT($GLIBLIB) LIBS="$LIBS $GLIBLIB" fi dnl FreeBSD needs -lcompat for ftime() used by lrmd.c AC_CHECK_LIB([compat], [ftime], [COMPAT_LIBS='-lcompat']) AC_SUBST(COMPAT_LIBS) dnl ======================================================================== dnl Headers dnl ======================================================================== AC_HEADER_STDC AC_CHECK_HEADERS(arpa/inet.h) AC_CHECK_HEADERS(asm/types.h) AC_CHECK_HEADERS(assert.h) AC_CHECK_HEADERS(auth-client.h) AC_CHECK_HEADERS(ctype.h) AC_CHECK_HEADERS(dirent.h) AC_CHECK_HEADERS(errno.h) AC_CHECK_HEADERS(fcntl.h) AC_CHECK_HEADERS(getopt.h) AC_CHECK_HEADERS(glib.h) AC_CHECK_HEADERS(grp.h) AC_CHECK_HEADERS(limits.h) AC_CHECK_HEADERS(linux/errqueue.h) AC_CHECK_HEADERS(linux/swab.h) AC_CHECK_HEADERS(malloc.h) AC_CHECK_HEADERS(netdb.h) AC_CHECK_HEADERS(netinet/in.h) AC_CHECK_HEADERS(netinet/ip.h) AC_CHECK_HEADERS(pam/pam_appl.h) AC_CHECK_HEADERS(pthread.h) AC_CHECK_HEADERS(pwd.h) AC_CHECK_HEADERS(security/pam_appl.h) AC_CHECK_HEADERS(sgtty.h) AC_CHECK_HEADERS(signal.h) AC_CHECK_HEADERS(stdarg.h) AC_CHECK_HEADERS(stddef.h) AC_CHECK_HEADERS(stdio.h) AC_CHECK_HEADERS(stdlib.h) AC_CHECK_HEADERS(string.h) AC_CHECK_HEADERS(strings.h) AC_CHECK_HEADERS(sys/dir.h) AC_CHECK_HEADERS(sys/ioctl.h) AC_CHECK_HEADERS(sys/param.h) AC_CHECK_HEADERS(sys/poll.h) AC_CHECK_HEADERS(sys/reboot.h) AC_CHECK_HEADERS(sys/resource.h) AC_CHECK_HEADERS(sys/select.h) AC_CHECK_HEADERS(sys/socket.h) AC_CHECK_HEADERS(sys/signalfd.h) AC_CHECK_HEADERS(sys/sockio.h) AC_CHECK_HEADERS(sys/stat.h) AC_CHECK_HEADERS(sys/time.h) AC_CHECK_HEADERS(sys/timeb.h) AC_CHECK_HEADERS(sys/types.h) AC_CHECK_HEADERS(sys/uio.h) AC_CHECK_HEADERS(sys/un.h) AC_CHECK_HEADERS(sys/utsname.h) AC_CHECK_HEADERS(sys/wait.h) AC_CHECK_HEADERS(time.h) AC_CHECK_HEADERS(unistd.h) AC_CHECK_HEADERS(winsock.h) dnl These headers need prerequisits before the tests will pass dnl AC_CHECK_HEADERS(net/if.h) dnl AC_CHECK_HEADERS(netinet/icmp6.h) dnl AC_CHECK_HEADERS(netinet/ip6.h) dnl AC_CHECK_HEADERS(netinet/ip_icmp.h) AC_MSG_CHECKING(for special libxml2 includes) if test "x$XML2CONFIG" = "x"; then AC_MSG_ERROR(libxml2 config not found) else XML2HEAD="`$XML2CONFIG --cflags`" AC_MSG_RESULT($XML2HEAD) AC_CHECK_LIB(xml2, xmlReadMemory) AC_CHECK_LIB(xslt, xsltApplyStylesheet) fi CPPFLAGS="$CPPFLAGS $XML2HEAD" AC_CHECK_HEADERS(libxml/xpath.h) AC_CHECK_HEADERS(libxslt/xslt.h) if test "$ac_cv_header_libxml_xpath_h" != "yes"; then AC_MSG_ERROR(The libxml developement headers were not found) fi if test "$ac_cv_header_libxslt_xslt_h" != "yes"; then AC_MSG_ERROR(The libxslt developement headers were not found) fi dnl ======================================================================== dnl Structures dnl ======================================================================== AC_CHECK_MEMBERS([struct tm.tm_gmtoff],,,[[#include ]]) AC_CHECK_MEMBERS([lrm_op_t.rsc_deleted],,,[[#include ]]) AC_CHECK_MEMBER([struct dirent.d_type], 
AC_DEFINE(HAVE_STRUCT_DIRENT_D_TYPE,1,[Define this if struct dirent has d_type]),, [#include ]) dnl ======================================================================== dnl Functions dnl ======================================================================== AC_CHECK_FUNCS(g_log_set_default_handler) AC_CHECK_FUNCS(getopt, AC_DEFINE(HAVE_DECL_GETOPT, 1, [Have getopt function])) AC_CHECK_FUNCS(nanosleep, AC_DEFINE(HAVE_DECL_NANOSLEEP, 1, [Have nanosleep function])) dnl ======================================================================== dnl ltdl dnl ======================================================================== AC_CHECK_LIB(ltdl, lt_dlopen, [LTDL_foo=1]) if test "x${enable_bundled_ltdl}" = "xyes"; then if test $ac_cv_lib_ltdl_lt_dlopen = yes; then AC_MSG_NOTICE([Disabling usage of installed ltdl]) fi ac_cv_lib_ltdl_lt_dlopen=no fi LIBLTDL_DIR="" if test $ac_cv_lib_ltdl_lt_dlopen != yes ; then AC_MSG_NOTICE([Installing local ltdl]) LIBLTDL_DIR=libltdl ( cd $srcdir ; $TAR -xvf libltdl.tar ) if test "$?" -ne 0; then AC_MSG_ERROR([$TAR of libltdl.tar in $srcdir failed]) fi AC_CONFIG_SUBDIRS(libltdl) else LIBS="$LIBS -lltdl" AC_MSG_NOTICE([Using installed ltdl]) INCLTDL="" LIBLTDL="" fi AC_SUBST(INCLTDL) AC_SUBST(LIBLTDL) AC_SUBST(LIBLTDL_DIR) dnl ======================================================================== dnl bzip2 dnl ======================================================================== AC_CHECK_HEADERS(bzlib.h) AC_CHECK_LIB(bz2, BZ2_bzBuffToBuffCompress) if test x$ac_cv_lib_bz2_BZ2_bzBuffToBuffCompress != xyes ; then AC_MSG_ERROR(BZ2 libraries not found) fi if test x$ac_cv_header_bzlib_h != xyes; then AC_MSG_ERROR(BZ2 Development headers not found) fi dnl ======================================================================== dnl sighandler_t is missing from Illumos, Solaris11 systems dnl ======================================================================== AC_MSG_CHECKING([for sighandler_t]) AC_TRY_COMPILE([#include ],[sighandler_t *f;], has_sighandler_t=yes,has_sighandler_t=no) AC_MSG_RESULT($has_sighandler_t) if test "$has_sighandler_t" = "yes" ; then AC_DEFINE( HAVE_SIGHANDLER_T, 1, [Define if sighandler_t available] ) fi dnl ======================================================================== dnl ncurses dnl ======================================================================== dnl dnl A few OSes (e.g. Linux) deliver a default "ncurses" alongside "curses". dnl Many non-Linux deliver "curses"; sites may add "ncurses". dnl dnl However, the source-code recommendation for both is to #include "curses.h" dnl (i.e. "ncurses" still wants the include to be simple, no-'n', "curses.h"). dnl dnl ncurse takes precedence. dnl AC_CHECK_HEADERS(curses.h) AC_CHECK_HEADERS(curses/curses.h) AC_CHECK_HEADERS(ncurses.h) AC_CHECK_HEADERS(ncurses/ncurses.h) dnl Although n-library is preferred, only look for it if the n-header was found. CURSESLIBS='' if test "$ac_cv_header_ncurses_h" = "yes"; then AC_CHECK_LIB(ncurses, printw, [AC_DEFINE(HAVE_LIBNCURSES,1, have ncurses library)] ) - CURSESLIBS=`$PKGCONFIG --libs ncurses`; + CURSESLIBS=`$PKGCONFIG --libs ncurses` || CURSESLIBS='-lncurses' fi if test "$ac_cv_header_ncurses_ncurses_h" = "yes"; then AC_CHECK_LIB(ncurses, printw, [AC_DEFINE(HAVE_LIBNCURSES,1, have ncurses library)] ) - CURSESLIBS=`$PKGCONFIG --libs ncurses`; + CURSESLIBS=`$PKGCONFIG --libs ncurses` || CURSESLIBS='-lncurses' fi dnl Only look for non-n-library if there was no n-library. 
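The two modified CURSESLIBS assignments in this hunk stop assuming that `$PKGCONFIG --libs ncurses` always succeeds: when no ncurses.pc file is installed, pkg-config exits non-zero and the new `|| CURSESLIBS='-lncurses'` falls back to linking the library by its conventional name. A standalone sketch of that probe-with-fallback idiom (the package name is only an example):

    #!/bin/sh
    PKGCONFIG=pkg-config

    # Prefer whatever link flags pkg-config reports; if the query fails
    # (no .pc file, or no pkg-config at all), fall back to a plain -lncurses.
    CURSESLIBS=`$PKGCONFIG --libs ncurses` || CURSESLIBS='-lncurses'

    echo "curses link flags: $CURSESLIBS"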
if test X"$CURSESLIBS" = X"" -a "$ac_cv_header_curses_h" = "yes"; then AC_CHECK_LIB(curses, printw, [CURSESLIBS='-lcurses'; AC_DEFINE(HAVE_LIBCURSES,1, have curses library)] ) fi dnl Only look for non-n-library if there was no n-library. if test X"$CURSESLIBS" = X"" -a "$ac_cv_header_curses_curses_h" = "yes"; then AC_CHECK_LIB(curses, printw, [CURSESLIBS='-lcurses'; AC_DEFINE(HAVE_LIBCURSES,1, have curses library)] ) fi if test "x$CURSESLIBS" != "x"; then PCMK_FEATURES="$PCMK_FEATURES ncurses" fi dnl Check for printw() prototype compatibility if test X"$CURSESLIBS" != X"" && cc_supports_flag -Wcast-qual && cc_supports_flag -Werror; then AC_MSG_CHECKING(whether printw() requires argument of "const char *") ac_save_LIBS=$LIBS LIBS="$CURSESLIBS $LIBS" ac_save_CFLAGS=$CFLAGS CFLAGS="-Wcast-qual -Werror" AC_LINK_IFELSE( [AC_LANG_PROGRAM( [ #if defined(HAVE_NCURSES_H) # include #elif defined(HAVE_NCURSES_NCURSES_H) # include #elif defined(HAVE_CURSES_H) # include #endif ], [printw((const char *)"Test");] )], [ac_cv_compatible_printw=yes], [ac_cv_compatible_printw=no] ) LIBS=$ac_save_LIBS CFLAGS=$ac_save_CFLAGS AC_MSG_RESULT([$ac_cv_compatible_printw]) if test "$ac_cv_compatible_printw" = no; then AC_MSG_WARN([The printw() function of your ncurses or curses library is old, we will disable usage of the library. If you want to use this library anyway, please update to newer version of the library, ncurses 5.4 or later is recommended. You can get the library from http://www.gnu.org/software/ncurses/.]) AC_MSG_NOTICE([Disabling curses]) AC_DEFINE(HAVE_INCOMPATIBLE_PRINTW, 1, [Do we have incompatible printw() in curses library?]) fi fi AC_SUBST(CURSESLIBS) dnl ======================================================================== dnl Profiling and GProf dnl ======================================================================== AC_MSG_NOTICE(Old CFLAGS: $CFLAGS) case $SUPPORT_COVERAGE in 1|yes|true) SUPPORT_PROFILING=1 PCMK_FEATURES="$PCMK_FEATURES coverage" CFLAGS="$CFLAGS -fprofile-arcs -ftest-coverage" dnl During linking, make sure to specify -lgcov or -coverage dnl Enable gprof #LIBS="$LIBS -pg" #CFLAGS="$CFLAGS -pg" ;; esac case $SUPPORT_PROFILING in 1|yes|true) SUPPORT_PROFILING=1 dnl Disable various compiler optimizations CFLAGS="$CFLAGS -fno-omit-frame-pointer -fno-inline -fno-builtin " dnl CFLAGS="$CFLAGS -fno-inline-functions -fno-default-inline -fno-inline-functions-called-once -fno-optimize-sibling-calls" dnl Turn off optimization so tools can get accurate line numbers CFLAGS=`echo $CFLAGS | sed -e 's/-O.\ //g' -e 's/-Wp,-D_FORTIFY_SOURCE=.\ //g' -e 's/-D_FORTIFY_SOURCE=.\ //g'` CFLAGS="$CFLAGS -O0 -g3 -gdwarf-2" dnl Update features PCMK_FEATURES="$PCMK_FEATURES profile" ;; *) SUPPORT_PROFILING=0;; esac AC_MSG_NOTICE(New CFLAGS: $CFLAGS) AC_DEFINE_UNQUOTED(SUPPORT_PROFILING, $SUPPORT_PROFILING, Support for profiling) dnl ======================================================================== dnl Cluster infrastructure - Heartbeat / LibQB dnl ======================================================================== dnl Compatability checks AC_CHECK_MEMBERS([struct lrm_ops.fail_rsc],,,[[#include ]]) if test x${enable_no_stack} = xyes; then SUPPORT_HEARTBEAT=no SUPPORT_CS=no fi PKG_CHECK_MODULES(libqb, libqb, HAVE_libqb=1, HAVE_libqb=0) AC_CHECK_HEADERS(qb/qbipc_common.h) AC_CHECK_LIB(qb, qb_ipcs_connection_auth_set) LIBQB_LOG=1 PCMK_FEATURES="$PCMK_FEATURES libqb-logging libqb-ipc" AC_CHECK_FUNCS(qb_ipcs_connection_get_buffer_size, AC_DEFINE(HAVE_IPCS_GET_BUFFER_SIZE, 1, [Have 
qb_ipcc_get_buffer_size function])) if ! pkg-config --atleast-version 0.13 libqb then AC_MSG_FAILURE(Version of libqb is too old: v0.13 or greater requried) fi LIBS="$LIBS $libqb_LIBS" AC_CHECK_HEADERS(heartbeat/hb_config.h) AC_CHECK_HEADERS(heartbeat/glue_config.h) AC_CHECK_HEADERS(stonith/stonith.h) AC_CHECK_HEADERS(agent_config.h) GLUE_HEADER=none HAVE_GLUE=0 if test "$ac_cv_header_heartbeat_glue_config_h" = "yes"; then GLUE_HEADER=glue_config.h HAVE_GLUE=1 elif test "$ac_cv_header_heartbeat_hb_config_h" = "yes"; then GLUE_HEADER=hb_config.h HAVE_GLUE=1 else AC_MSG_WARN(cluster-glue development headers were not found) fi if test "$ac_cv_header_stonith_stonith_h" = "yes"; then PCMK_FEATURES="$PCMK_FEATURES lha-fencing" fi if test $HAVE_GLUE = 1; then dnl On Debian, AC_CHECK_LIBS fail if a library has any unresolved symbols dnl So check for all the depenancies (so they're added to LIBS) before checking for -lplumb AC_CHECK_LIB(pils, PILLoadPlugin) AC_CHECK_LIB(plumb, G_main_add_IPC_Channel) fi dnl =============================================== dnl Variables needed for substitution dnl =============================================== CRM_DTD_DIRECTORY="${datadir}/pacemaker" AC_DEFINE_UNQUOTED(CRM_DTD_DIRECTORY,"$CRM_DTD_DIRECTORY", Location for the Pacemaker Relax-NG Schema) AC_SUBST(CRM_DTD_DIRECTORY) CRM_CORE_DIR=`try_extract_header_define $GLUE_HEADER HA_COREDIR ${localstatedir}/lib/pacemaker/cores` AC_DEFINE_UNQUOTED(CRM_CORE_DIR,"$CRM_CORE_DIR", Location to store core files produced by Pacemaker daemons) AC_SUBST(CRM_CORE_DIR) CRM_DAEMON_USER=`try_extract_header_define $GLUE_HEADER HA_CCMUSER hacluster` AC_DEFINE_UNQUOTED(CRM_DAEMON_USER,"$CRM_DAEMON_USER", User to run Pacemaker daemons as) AC_SUBST(CRM_DAEMON_USER) CRM_DAEMON_GROUP=`try_extract_header_define $GLUE_HEADER HA_APIGROUP haclient` AC_DEFINE_UNQUOTED(CRM_DAEMON_GROUP,"$CRM_DAEMON_GROUP", Group to run Pacemaker daemons as) AC_SUBST(CRM_DAEMON_GROUP) CRM_STATE_DIR=${localstatedir}/run/crm AC_DEFINE_UNQUOTED(CRM_STATE_DIR,"$CRM_STATE_DIR", Where to keep state files and sockets) AC_SUBST(CRM_STATE_DIR) CRM_BLACKBOX_DIR=${localstatedir}/lib/pacemaker/blackbox AC_DEFINE_UNQUOTED(CRM_BLACKBOX_DIR,"$CRM_BLACKBOX_DIR", Where to keep blackbox dumps) AC_SUBST(CRM_BLACKBOX_DIR) PE_STATE_DIR="${localstatedir}/lib/pacemaker/pengine" AC_DEFINE_UNQUOTED(PE_STATE_DIR,"$PE_STATE_DIR", Where to keep PEngine outputs) AC_SUBST(PE_STATE_DIR) CRM_CONFIG_DIR="${localstatedir}/lib/pacemaker/cib" AC_DEFINE_UNQUOTED(CRM_CONFIG_DIR,"$CRM_CONFIG_DIR", Where to keep configuration files) AC_SUBST(CRM_CONFIG_DIR) CRM_CONFIG_CTS="${localstatedir}/lib/pacemaker/cts" AC_DEFINE_UNQUOTED(CRM_CONFIG_CTS,"$CRM_CONFIG_CTS", Where to keep cts stateful data) AC_SUBST(CRM_CONFIG_CTS) CRM_LEGACY_CONFIG_DIR="${localstatedir}/lib/heartbeat/crm" AC_DEFINE_UNQUOTED(CRM_LEGACY_CONFIG_DIR,"$CRM_LEGACY_CONFIG_DIR", Where Pacemaker used to keep configuration files) AC_SUBST(CRM_LEGACY_CONFIG_DIR) CRM_DAEMON_DIR="${libexecdir}/pacemaker" AC_DEFINE_UNQUOTED(CRM_DAEMON_DIR,"$CRM_DAEMON_DIR", Location for Pacemaker daemons) AC_SUBST(CRM_DAEMON_DIR) HB_DAEMON_DIR=`try_extract_header_define $GLUE_HEADER HA_LIBHBDIR $libdir/heartbeat` AC_DEFINE_UNQUOTED(HB_DAEMON_DIR,"$HB_DAEMON_DIR", Location Heartbeat expects Pacemaker daemons to be in) AC_SUBST(HB_DAEMON_DIR) dnl Needed so that the Corosync plugin can clear out the directory as Heartbeat does HA_STATE_DIR=`try_extract_header_define $GLUE_HEADER HA_VARRUNDIR ${localstatedir}/run` AC_DEFINE_UNQUOTED(HA_STATE_DIR,"$HA_STATE_DIR", 
Where Heartbeat keeps state files and sockets) AC_SUBST(HA_STATE_DIR) CRM_RSCTMP_DIR=`try_extract_header_define agent_config.h HA_RSCTMPDIR $HA_STATE_DIR/resource-agents` AC_MSG_CHECKING(Scratch dir for resource agents) AC_MSG_RESULT($CRM_RSCTMP_DIR) AC_DEFINE_UNQUOTED(CRM_RSCTMP_DIR,"$CRM_RSCTMP_DIR", Where resource agents should keep state files) AC_SUBST(CRM_RSCTMP_DIR) dnl Needed for the location of hostcache in CTS.py HA_VARLIBHBDIR=`try_extract_header_define $GLUE_HEADER HA_VARLIBHBDIR ${localstatedir}/lib/heartbeat` AC_SUBST(HA_VARLIBHBDIR) AC_DEFINE_UNQUOTED(UUID_FILE,"$localstatedir/lib/heartbeat/hb_uuid", Location of Heartbeat's UUID file) OCF_ROOT_DIR=`try_extract_header_define $GLUE_HEADER OCF_ROOT_DIR /usr/lib/ocf` if test "X$OCF_ROOT_DIR" = X; then AC_MSG_ERROR(Could not locate OCF directory) fi AC_SUBST(OCF_ROOT_DIR) OCF_RA_DIR=`try_extract_header_define $GLUE_HEADER OCF_RA_DIR $OCF_ROOT_DIR/resource.d` AC_DEFINE_UNQUOTED(OCF_RA_DIR,"$OCF_RA_DIR", Location for OCF RAs) AC_SUBST(OCF_RA_DIR) RH_STONITH_DIR="$sbindir" AC_DEFINE_UNQUOTED(RH_STONITH_DIR,"$RH_STONITH_DIR", Location for Red Hat Stonith agents) RH_STONITH_PREFIX="fence_" AC_DEFINE_UNQUOTED(RH_STONITH_PREFIX,"$RH_STONITH_PREFIX", Prefix for Red Hat Stonith agents) AC_PATH_PROGS(GIT, git false) AC_MSG_CHECKING(build version) BUILD_VERSION=$Format:%h$ if test $BUILD_VERSION != ":%h$"; then AC_MSG_RESULT(archive hash: $BUILD_VERSION) elif test -x $GIT -a -d .git; then BUILD_VERSION=`$GIT log --pretty="format:%h" -n 1` AC_MSG_RESULT(git hash: $BUILD_VERSION) else # The current directory name make a reasonable default # Most generated archives will include the hash or tag BASE=`basename $PWD` BUILD_VERSION=`echo $BASE | sed s:.*[[Pp]]acemaker-::` AC_MSG_RESULT(directory based hash: $BUILD_VERSION) fi AC_DEFINE_UNQUOTED(BUILD_VERSION, "$BUILD_VERSION", Build version) AC_SUBST(BUILD_VERSION) HAVE_dbus=1 HAVE_upstart=0 HAVE_systemd=0 PKG_CHECK_MODULES(DBUS, dbus-1, ,HAVE_dbus=0) AC_DEFINE_UNQUOTED(SUPPORT_DBUS, $HAVE_dbus, Support dbus) AM_CONDITIONAL(BUILD_DBUS, test $HAVE_dbus = 1) if test $HAVE_dbus = 1; then CFLAGS="$CFLAGS `$PKGCONFIG --cflags dbus-1`" fi DBUS_LIBS="$CFLAGS `$PKGCONFIG --libs dbus-1`" AC_SUBST(DBUS_LIBS) AC_CHECK_TYPES([DBusBasicValue],,,[[#include ]]) if test $HAVE_dbus = 1 -a "x${enable_upstart}" != xno; then HAVE_upstart=1 PCMK_FEATURES="$PCMK_FEATURES upstart" fi AC_DEFINE_UNQUOTED(SUPPORT_UPSTART, $HAVE_upstart, Support upstart based system services) AM_CONDITIONAL(BUILD_UPSTART, test $HAVE_upstart = 1) if $PKGCONFIG --exists systemd then systemdunitdir=`$PKGCONFIG --variable=systemdsystemunitdir systemd` AC_SUBST(systemdunitdir) else enable_systemd=no fi if test $HAVE_dbus = 1 -a "x${enable_systemd}" != xno; then if test -n "$systemdunitdir" -a "x$systemdunitdir" != xno; then HAVE_systemd=1 PCMK_FEATURES="$PCMK_FEATURES systemd" fi fi AC_DEFINE_UNQUOTED(SUPPORT_SYSTEMD, $HAVE_systemd, Support systemd based system services) AM_CONDITIONAL(BUILD_SYSTEMD, test $HAVE_systemd = 1) case $SUPPORT_NAGIOS in 1|yes|true|try) SUPPORT_NAGIOS=1;; *) SUPPORT_NAGIOS=0;; esac if test $SUPPORT_NAGIOS = 1; then PCMK_FEATURES="$PCMK_FEATURES nagios" fi AC_DEFINE_UNQUOTED(SUPPORT_NAGIOS, $SUPPORT_NAGIOS, Support nagios plugins) AM_CONDITIONAL(BUILD_NAGIOS, test $SUPPORT_NAGIOS = 1) if test x"$NAGIOS_PLUGIN_DIR" = x""; then NAGIOS_PLUGIN_DIR="${libexecdir}/nagios/plugins" fi AC_DEFINE_UNQUOTED(NAGIOS_PLUGIN_DIR, "$NAGIOS_PLUGIN_DIR", Directory for nagios plugins) AC_SUBST(NAGIOS_PLUGIN_DIR) if test 
x"$NAGIOS_METADATA_DIR" = x""; then NAGIOS_METADATA_DIR="${datadir}/nagios/plugins-metadata" fi AC_DEFINE_UNQUOTED(NAGIOS_METADATA_DIR, "$NAGIOS_METADATA_DIR", Directory for nagios plugins metadata) AC_SUBST(NAGIOS_METADATA_DIR) STACKS="" CLUSTERLIBS="" dnl ======================================================================== dnl Cluster stack - Heartbeat dnl ======================================================================== case $SUPPORT_HEARTBEAT in 1|yes|true|try) AC_MSG_CHECKING(for heartbeat support) AC_CHECK_LIB(hbclient, ll_cluster_new, [SUPPORT_HEARTBEAT=1], [if test $SUPPORT_HEARTBEAT != try; then AC_MSG_FAILURE(Unable to support Heartbeat: client libraries not found) fi]) if test $SUPPORT_HEARTBEAT = 1 ; then STACKS="$STACKS heartbeat" dnl objdump -x ${libdir}/libccmclient.so | grep SONAME | awk '{print $2}' AC_DEFINE_UNQUOTED(CCM_LIBRARY, "libccmclient.so.1", Library to load for ccm support) AC_DEFINE_UNQUOTED(HEARTBEAT_LIBRARY, "libhbclient.so.1", Library to load for heartbeat support) BUILD_ATOMIC_ATTRD=0 else SUPPORT_HEARTBEAT=0 fi ;; *) SUPPORT_HEARTBEAT=0;; esac AM_CONDITIONAL(BUILD_HEARTBEAT_SUPPORT, test $SUPPORT_HEARTBEAT = 1) AC_DEFINE_UNQUOTED(SUPPORT_HEARTBEAT, $SUPPORT_HEARTBEAT, Support the Heartbeat messaging and membership layer) AC_SUBST(SUPPORT_HEARTBEAT) dnl ======================================================================== dnl Cluster stack - Corosync dnl ======================================================================== dnl Normalize the values case $SUPPORT_CS in 1|yes|true) SUPPORT_CS=yes missingisfatal=1;; try) missingisfatal=0;; *) SUPPORT_CS=no;; esac AC_MSG_CHECKING(for native corosync) COROSYNC_LIBS="" CS_USES_LIBQB=0 PCMK_SERVICE_ID=9 LCRSODIR="$libdir" if test $SUPPORT_CS = no; then AC_MSG_RESULT(no (disabled)) SUPPORT_CS=0 else AC_MSG_RESULT($SUPPORT_CS, with '$CSPREFIX') PKG_CHECK_MODULES(cpg, libcpg) dnl Fatal PKG_CHECK_MODULES(cfg, libcfg) dnl Fatal PKG_CHECK_MODULES(cmap, libcmap, HAVE_cmap=1, HAVE_cmap=0) PKG_CHECK_MODULES(cman, libcman, HAVE_cman=1, HAVE_cman=0) PKG_CHECK_MODULES(confdb, libconfdb, HAVE_confdb=1, HAVE_confdb=0) PKG_CHECK_MODULES(fenced, libfenced, HAVE_fenced=1, HAVE_fenced=0) PKG_CHECK_MODULES(quorum, libquorum, HAVE_quorum=1, HAVE_quorum=0) PKG_CHECK_MODULES(oldipc, libcoroipcc, HAVE_oldipc=1, HAVE_oldipc=0) if test $HAVE_oldipc = 1; then SUPPORT_CS=1 CFLAGS="$CFLAGS $oldipc_FLAGS $cpg_FLAGS $cfg_FLAGS" COROSYNC_LIBS="$COROSYNC_LIBS $oldipc_LIBS $cpg_LIBS $cfg_LIBS" elif test $HAVE_libqb = 1; then SUPPORT_CS=1 CS_USES_LIBQB=1 CFLAGS="$CFLAGS $libqb_FLAGS $cpg_FLAGS $cfg_FLAGS" COROSYNC_LIBS="$COROSYNC_LIBS $libqb_LIBS $cpg_LIBS $cfg_LIBS" AC_CHECK_LIB(corosync_common, cs_strerror) else aisreason="corosync/libqb IPC libraries not found by pkg_config" fi AC_DEFINE_UNQUOTED(HAVE_CONFDB, $HAVE_confdb, Have the old herarchial Corosync config API) AC_DEFINE_UNQUOTED(HAVE_CMAP, $HAVE_cmap, Have the new non-herarchial Corosync config API) fi if test $SUPPORT_CS = 1 -a x$HAVE_oldipc = x0 ; then dnl Support for plugins was removed about the time the IPC was dnl moved to libqb. 
dnl The only option now is the built-in quorum API CFLAGS="$CFLAGS $cmap_CFLAGS $quorum_CFLAGS" COROSYNC_LIBS="$COROSYNC_LIBS $cmap_LIBS $quorum_LIBS" STACKS="$STACKS corosync-native" AC_DEFINE_UNQUOTED(SUPPORT_CS_QUORUM, 1, Support the consumption of membership and quorum from corosync) fi SUPPORT_PLUGIN=0 if test $SUPPORT_CS = 1 -a x$HAVE_confdb = x1; then dnl Need confdb to support cman and the plugins SUPPORT_PLUGIN=1 BUILD_ATOMIC_ATTRD=0 LCRSODIR=`$PKGCONFIG corosync --variable=lcrsodir` STACKS="$STACKS corosync-plugin" COROSYNC_LIBS="$COROSYNC_LIBS $confdb_LIBS" if test $SUPPORT_CMAN != no; then if test $HAVE_cman = 1 -a $HAVE_fenced = 1; then SUPPORT_CMAN=1 STACKS="$STACKS cman" CFLAGS="$CFLAGS $cman_FLAGS $fenced_FLAGS" COROSYNC_LIBS="$COROSYNC_LIBS $cman_LIBS $fenced_LIBS" fi fi fi dnl Normalize SUPPORT_CS and SUPPORT_CMAN for use with #if directives if test $SUPPORT_CMAN != 1; then SUPPORT_CMAN=0 fi if test $SUPPORT_CS = 1; then CLUSTERLIBS="$CLUSTERLIBS $COROSYNC_LIBS" elif test $SUPPORT_CS != 0; then SUPPORT_CS=0 if test $missingisfatal = 0; then AC_MSG_WARN(Unable to support Corosync: $aisreason) else AC_MSG_FAILURE(Unable to support Corosync: $aisreason) fi fi AC_DEFINE_UNQUOTED(SUPPORT_COROSYNC, $SUPPORT_CS, Support the Corosync messaging and membership layer) AC_DEFINE_UNQUOTED(SUPPORT_CMAN, $SUPPORT_CMAN, Support the consumption of membership and quorum from cman) AC_DEFINE_UNQUOTED(CS_USES_LIBQB, $CS_USES_LIBQB, Does corosync use libqb for its ipc) AC_DEFINE_UNQUOTED(PCMK_SERVICE_ID, $PCMK_SERVICE_ID, Corosync service number) AC_DEFINE_UNQUOTED(SUPPORT_PLUGIN, $SUPPORT_PLUGIN, Support the Pacemaker plugin for Corosync) AM_CONDITIONAL(BUILD_CS_SUPPORT, test $SUPPORT_CS = 1) AM_CONDITIONAL(BUILD_CS_PLUGIN, test $SUPPORT_PLUGIN = 1) AM_CONDITIONAL(BUILD_CMAN, test $SUPPORT_CMAN = 1) AM_CONDITIONAL(BUILD_ATOMIC_ATTRD, test $BUILD_ATOMIC_ATTRD = 1) AC_DEFINE_UNQUOTED(HAVE_ATOMIC_ATTRD, $BUILD_ATOMIC_ATTRD, Support the new atomic attrd) AC_SUBST(SUPPORT_CMAN) AC_SUBST(SUPPORT_CS) dnl dnl Cluster stack - Sanity dnl if test x${enable_no_stack} = xyes; then AC_MSG_NOTICE(No cluster stack supported. Just building the Policy Engine) PCMK_FEATURES="$PCMK_FEATURES no-cluster-stack" else AC_MSG_CHECKING(for supported stacks) if test x"$STACKS" = x; then AC_MSG_FAILURE(You must support at least one cluster stack (heartbeat or corosync) ) fi AC_MSG_RESULT($STACKS) PCMK_FEATURES="$PCMK_FEATURES $STACKS" fi if test ${BUILD_ATOMIC_ATTRD} = 1; then PCMK_FEATURES="$PCMK_FEATURES atomic-attrd" fi AC_SUBST(CLUSTERLIBS) AC_SUBST(LCRSODIR) dnl ======================================================================== dnl SNMP dnl ======================================================================== case $SUPPORT_SNMP in 1|yes|true) missingisfatal=1;; try) missingisfatal=0;; *) SUPPORT_SNMP=no;; esac SNMPLIBS="" AC_MSG_CHECKING(for snmp support) if test $SUPPORT_SNMP = no; then AC_MSG_RESULT(no (disabled)) SUPPORT_SNMP=0 else SNMPCONFIG="" AC_MSG_RESULT($SUPPORT_SNMP) AC_CHECK_HEADERS(net-snmp/net-snmp-config.h) if test "x${ac_cv_header_net_snmp_net_snmp_config_h}" != "xyes"; then SUPPORT_SNMP="no" fi if test $SUPPORT_SNMP != no; then AC_PATH_PROGS(SNMPCONFIG, net-snmp-config) if test "X${SNMPCONFIG}" = "X"; then AC_MSG_RESULT(You need the net_snmp development package to continue.) 
SUPPORT_SNMP=no fi fi if test $SUPPORT_SNMP != no; then AC_MSG_CHECKING(for special snmp libraries) SNMPLIBS=`$SNMPCONFIG --agent-libs` AC_MSG_RESULT($SNMPLIBS) fi if test $SUPPORT_SNMP != no; then savedLibs=$LIBS LIBS="$LIBS $SNMPLIBS" dnl On many systems libcrypto is needed when linking against libsnmp. dnl Check to see if it exists, and if so use it. dnl AC_CHECK_LIB(crypto, CRYPTO_free, CRYPTOLIB="-lcrypto",) dnl AC_SUBST(CRYPTOLIB) AC_CHECK_FUNCS(netsnmp_transport_open_client) if test $ac_cv_func_netsnmp_transport_open_client != yes; then AC_CHECK_FUNCS(netsnmp_tdomain_transport) if test $ac_cv_func_netsnmp_tdomain_transport != yes; then SUPPORT_SNMP=no else AC_DEFINE_UNQUOTED(NETSNMPV53, 1, [Use the older 5.3 version of the net-snmp API]) fi fi LIBS=$savedLibs fi if test $SUPPORT_SNMP = no; then SNMPLIBS="" SUPPORT_SNMP=0 if test $missingisfatal = 0; then AC_MSG_WARN(Unable to support SNMP) else AC_MSG_FAILURE(Unable to support SNMP) fi else SUPPORT_SNMP=1 fi fi if test $SUPPORT_SNMP = 1; then PCMK_FEATURES="$PCMK_FEATURES snmp" fi AC_SUBST(SNMPLIBS) AM_CONDITIONAL(ENABLE_SNMP, test "$SUPPORT_SNMP" = "1") AC_DEFINE_UNQUOTED(ENABLE_SNMP, $SUPPORT_SNMP, Build in support for sending SNMP traps) dnl ======================================================================== dnl ESMTP dnl ======================================================================== case $SUPPORT_ESMTP in 1|yes|true) missingisfatal=1;; try) missingisfatal=0;; *) SUPPORT_ESMTP=no;; esac ESMTPLIB="" AC_MSG_CHECKING(for esmtp support) if test $SUPPORT_ESMTP = no; then AC_MSG_RESULT(no (disabled)) SUPPORT_ESMTP=0 else ESMTPCONFIG="" AC_MSG_RESULT($SUPPORT_ESMTP) AC_CHECK_HEADERS(libesmtp.h) if test "x${ac_cv_header_libesmtp_h}" != "xyes"; then ENABLE_ESMTP="no" fi if test $SUPPORT_ESMTP != no; then AC_PATH_PROGS(ESMTPCONFIG, libesmtp-config) if test "X${ESMTPCONFIG}" = "X"; then AC_MSG_RESULT(You need the libesmtp development package to continue.) SUPPORT_ESMTP=no fi fi if test $SUPPORT_ESMTP != no; then AC_MSG_CHECKING(for special esmtp libraries) ESMTPLIBS=`$ESMTPCONFIG --libs | tr '\n' ' '` AC_MSG_RESULT($ESMTPLIBS) fi if test $SUPPORT_ESMTP = no; then SUPPORT_ESMTP=0 if test $missingisfatal = 0; then AC_MSG_WARN(Unable to support ESMTP) else AC_MSG_FAILURE(Unable to support ESMTP) fi else SUPPORT_ESMTP=1 PCMK_FEATURES="$PCMK_FEATURES libesmtp" fi fi AC_SUBST(ESMTPLIBS) AM_CONDITIONAL(ENABLE_ESMTP, test "$SUPPORT_ESMTP" = "1") AC_DEFINE_UNQUOTED(ENABLE_ESMTP, $SUPPORT_ESMTP, Build in support for sending mail notifications with ESMTP) dnl ======================================================================== dnl ACL dnl ======================================================================== case $SUPPORT_ACL in 1|yes|true) missingisfatal=1;; try) missingisfatal=0;; *) SUPPORT_ACL=no;; esac AC_MSG_CHECKING(for acl support) if test $SUPPORT_ACL = no; then AC_MSG_RESULT(no (disabled)) SUPPORT_ACL=0 else AC_MSG_RESULT($SUPPORT_ACL) SUPPORT_ACL=1 AC_CHECK_LIB(qb, qb_ipcs_connection_auth_set) if test $ac_cv_lib_qb_qb_ipcs_connection_auth_set != yes; then SUPPORT_ACL=0 fi if test $SUPPORT_ACL = 0; then if test $missingisfatal = 0; then AC_MSG_WARN(Unable to support ACL. You need to use libqb > 0.13.0) else AC_MSG_FAILURE(Unable to support ACL. 
You need to use libqb > 0.13.0) fi fi fi if test $SUPPORT_ACL = 1; then PCMK_FEATURES="$PCMK_FEATURES acls" fi AM_CONDITIONAL(ENABLE_ACL, test "$SUPPORT_ACL" = "1") AC_DEFINE_UNQUOTED(ENABLE_ACL, $SUPPORT_ACL, Build in support for CIB ACL) dnl ======================================================================== dnl CIB secrets dnl ======================================================================== case $SUPPORT_CIBSECRETS in 1|yes|true|try) SUPPORT_CIBSECRETS=1;; *) SUPPORT_CIBSECRETS=0;; esac AC_DEFINE_UNQUOTED(SUPPORT_CIBSECRETS, $SUPPORT_CIBSECRETS, Support CIB secrets) AM_CONDITIONAL(BUILD_CIBSECRETS, test $SUPPORT_CIBSECRETS = 1) if test $SUPPORT_CIBSECRETS = 1; then PCMK_FEATURES="$PCMK_FEATURES cibsecrets" LRM_CIBSECRETS_DIR="${localstatedir}/lib/pacemaker/lrm/secrets" AC_DEFINE_UNQUOTED(LRM_CIBSECRETS_DIR,"$LRM_CIBSECRETS_DIR", Location for CIB secrets) AC_SUBST(LRM_CIBSECRETS_DIR) LRM_LEGACY_CIBSECRETS_DIR="${localstatedir}/lib/heartbeat/lrm/secrets" AC_DEFINE_UNQUOTED(LRM_LEGACY_CIBSECRETS_DIR,"$LRM_LEGACY_CIBSECRETS_DIR", Legacy location for CIB secrets) AC_SUBST(LRM_LEGACY_CIBSECRETS_DIR) fi dnl ======================================================================== dnl GnuTLS dnl ======================================================================== AC_CHECK_HEADERS(gnutls/gnutls.h) AC_CHECK_HEADERS(security/pam_appl.h pam/pam_appl.h) dnl GNUTLS library: Attempt to determine by 'libgnutls-config' program. dnl If no 'libgnutls-config', try traditional autoconf means. AC_PATH_PROGS(LIBGNUTLS_CONFIG, libgnutls-config) if test -n "$LIBGNUTLS_CONFIG"; then AC_MSG_CHECKING(for gnutls header flags) GNUTLSHEAD="`$LIBGNUTLS_CONFIG --cflags`"; AC_MSG_RESULT($GNUTLSHEAD) AC_MSG_CHECKING(for gnutls library flags) GNUTLSLIBS="`$LIBGNUTLS_CONFIG --libs`"; AC_MSG_RESULT($GNUTLSLIBS) fi AC_CHECK_LIB(gnutls, gnutls_init) AC_CHECK_FUNCS(gnutls_priority_set_direct) AC_SUBST(GNUTLSHEAD) AC_SUBST(GNUTLSLIBS) dnl ======================================================================== dnl System Health dnl ======================================================================== dnl Check if servicelog development package is installed SERVICELOG=servicelog-1 SERVICELOG_EXISTS="no" AC_MSG_CHECKING(for $SERVICELOG packages) if $PKGCONFIG --exists $SERVICELOG then PKG_CHECK_MODULES([SERVICELOG], [servicelog-1]) SERVICELOG_EXISTS="yes" fi AC_MSG_RESULT($SERVICELOG_EXISTS) AM_CONDITIONAL(BUILD_SERVICELOG, test "$SERVICELOG_EXISTS" = "yes") dnl Check if OpenIMPI packages and servicelog are installed OPENIPMI="OpenIPMI OpenIPMIposix" OPENIPMI_SERVICELOG_EXISTS="no" AC_MSG_CHECKING(for $SERVICELOG $OPENIPMI packages) if $PKGCONFIG --exists $OPENIPMI $SERVICELOG then PKG_CHECK_MODULES([OPENIPMI_SERVICELOG],[OpenIPMI OpenIPMIposix]) OPENIPMI_SERVICELOG_EXISTS="yes" fi AC_MSG_RESULT($OPENIPMI_SERVICELOG_EXISTS) AM_CONDITIONAL(BUILD_OPENIPMI_SERVICELOG, test "$OPENIPMI_SERVICELOG_EXISTS" = "yes") dnl ======================================================================== dnl Compiler flags dnl ======================================================================== dnl Make sure that CFLAGS is not exported. If the user did dnl not have CFLAGS in their environment then this should have dnl no effect. However if CFLAGS was exported from the user's dnl environment, then the new CFLAGS will also be exported dnl to sub processes. 
CC_ERRORS="" CC_EXTRAS="" if export | fgrep " CFLAGS=" > /dev/null; then SAVED_CFLAGS="$CFLAGS" unset CFLAGS CFLAGS="$SAVED_CFLAGS" unset SAVED_CFLAGS fi if test "$GCC" != yes; then CFLAGS="$CFLAGS -g" enable_fatal_warnings=no else CFLAGS="$CFLAGS -ggdb" # We had to eliminate -Wnested-externs because of libtool changes EXTRA_FLAGS="-fgnu89-inline -Wall -Waggregate-return -Wbad-function-cast -Wcast-align -Wdeclaration-after-statement -Wendif-labels -Wfloat-equal -Wformat=2 -Wformat-security -Wformat-nonliteral -Wmissing-prototypes -Wmissing-declarations -Wnested-externs -Wno-long-long -Wno-strict-aliasing -Wpointer-arith -Wstrict-prototypes -Wwrite-strings -Wunused-but-set-variable -Wunsigned-char" # Additional warnings it might be nice to enable one day # -Wshadow # -Wunreachable-code case "$host_os" in *solaris*) ;; *) EXTRA_FLAGS="$EXTRA_FLAGS -fstack-protector-all" ;; esac for j in $EXTRA_FLAGS do if cc_supports_flag $j then CC_EXTRAS="$CC_EXTRAS $j" fi done dnl In lib/ais/Makefile.am there's a gcc option available as of v4.x GCC_MAJOR=`gcc -v 2>&1 | awk 'END{print $3}' | sed 's/[.].*//'` AM_CONDITIONAL(GCC_4, test "${GCC_MAJOR}" = 4) dnl System specific options case "$host_os" in *linux*|*bsd*) if test "${enable_fatal_warnings}" = "unknown"; then enable_fatal_warnings=yes fi ;; esac if test "x${enable_fatal_warnings}" != xno && cc_supports_flag -Werror ; then enable_fatal_warnings=yes else enable_fatal_warnings=no fi if test "x${enable_ansi}" = xyes && cc_supports_flag -std=iso9899:199409 ; then AC_MSG_NOTICE(Enabling ANSI Compatibility) CC_EXTRAS="$CC_EXTRAS -ansi -D_GNU_SOURCE -DANSI_ONLY" fi AC_MSG_NOTICE(Activated additional gcc flags: ${CC_EXTRAS}) fi CFLAGS="$CFLAGS $CC_EXTRAS" NON_FATAL_CFLAGS="$CFLAGS" AC_SUBST(NON_FATAL_CFLAGS) dnl dnl We reset CFLAGS to include our warnings *after* all function dnl checking goes on, so that our warning flags don't keep the dnl AC_*FUNCS() calls above from working. In particular, -Werror will dnl *always* cause us troubles if we set it before here. dnl dnl if test "x${enable_fatal_warnings}" = xyes ; then AC_MSG_NOTICE(Enabling Fatal Warnings) CFLAGS="$CFLAGS -Werror" fi AC_SUBST(CFLAGS) dnl This is useful for use in Makefiles that need to remove one specific flag CFLAGS_COPY="$CFLAGS" AC_SUBST(CFLAGS_COPY) AC_SUBST(LIBADD_DL) dnl extra flags for dynamic linking libraries AC_SUBST(LIBADD_INTL) dnl extra flags for GNU gettext stuff... 
AC_SUBST(LOCALE) dnl Options for cleaning up the compiler output QUIET_LIBTOOL_OPTS="" QUIET_MAKE_OPTS="" if test "x${enable_quiet}" = "xyes"; then QUIET_LIBTOOL_OPTS="--quiet" QUIET_MAKE_OPTS="--quiet" fi AC_MSG_RESULT(Supress make details: ${enable_quiet}) dnl Put the above variables to use LIBTOOL="${LIBTOOL} --tag=CC \$(QUIET_LIBTOOL_OPTS)" MAKE="${MAKE} \$(QUIET_MAKE_OPTS)" AC_SUBST(CC) AC_SUBST(MAKE) AC_SUBST(LIBTOOL) AC_SUBST(QUIET_MAKE_OPTS) AC_SUBST(QUIET_LIBTOOL_OPTS) AC_DEFINE_UNQUOTED(CRM_FEATURES, "$PCMK_FEATURES", Set of enabled features) AC_SUBST(PCMK_FEATURES) dnl The Makefiles and shell scripts we output AC_CONFIG_FILES(Makefile \ Doxyfile \ coverage.sh \ cts/Makefile \ cts/CTSvars.py \ cts/LSBDummy \ cts/benchmark/Makefile \ cts/benchmark/clubench \ cts/lxc_autogen.sh \ cib/Makefile \ attrd/Makefile \ crmd/Makefile \ pengine/Makefile \ pengine/regression.core.sh \ doc/Makefile \ doc/Pacemaker_Explained/publican.cfg \ doc/Clusters_from_Scratch/publican.cfg \ doc/Pacemaker_Remote/publican.cfg \ include/Makefile \ include/crm/Makefile \ include/crm/cib/Makefile \ include/crm/common/Makefile \ include/crm/cluster/Makefile \ include/crm/fencing/Makefile \ include/crm/pengine/Makefile \ replace/Makefile \ lib/Makefile \ lib/pacemaker.pc \ lib/pacemaker-cib.pc \ lib/pacemaker-lrmd.pc \ lib/pacemaker-service.pc \ lib/pacemaker-pengine.pc \ lib/pacemaker-fencing.pc \ lib/pacemaker-cluster.pc \ lib/ais/Makefile \ lib/common/Makefile \ lib/cluster/Makefile \ lib/cib/Makefile \ lib/pengine/Makefile \ lib/transition/Makefile \ lib/fencing/Makefile \ lib/lrmd/Makefile \ lib/services/Makefile \ mcp/Makefile \ mcp/pacemaker \ mcp/pacemaker.service \ mcp/pacemaker.upstart \ mcp/pacemaker.combined.upstart \ fencing/Makefile \ fencing/regression.py \ lrmd/Makefile \ lrmd/regression.py \ lrmd/pacemaker_remote.service \ lrmd/pacemaker_remote \ extra/Makefile \ extra/resources/Makefile \ extra/rgmanager/Makefile \ extra/logrotate/Makefile \ extra/logrotate/pacemaker \ tools/Makefile \ tools/crm_report \ tools/report.common \ tools/cibsecret \ tools/crm_mon.upstart \ xml/Makefile \ lib/gnu/Makefile \ ) dnl Now process the entire list of files added by previous dnl calls to AC_CONFIG_FILES() AC_OUTPUT() dnl ***************** dnl Configure summary dnl ***************** AC_MSG_RESULT([]) AC_MSG_RESULT([$PACKAGE configuration:]) AC_MSG_RESULT([ Version = ${VERSION} (Build: $BUILD_VERSION)]) AC_MSG_RESULT([ Features =${PCMK_FEATURES}]) AC_MSG_RESULT([]) AC_MSG_RESULT([ Prefix = ${prefix}]) AC_MSG_RESULT([ Executables = ${sbindir}]) AC_MSG_RESULT([ Man pages = ${mandir}]) AC_MSG_RESULT([ Libraries = ${libdir}]) AC_MSG_RESULT([ Header files = ${includedir}]) AC_MSG_RESULT([ Arch-independent files = ${datadir}]) AC_MSG_RESULT([ State information = ${localstatedir}]) AC_MSG_RESULT([ System configuration = ${sysconfdir}]) AC_MSG_RESULT([ Corosync Plugins = ${LCRSODIR}]) AC_MSG_RESULT([]) AC_MSG_RESULT([ Use system LTDL = ${ac_cv_lib_ltdl_lt_dlopen}]) AC_MSG_RESULT([]) AC_MSG_RESULT([ HA group name = ${CRM_DAEMON_GROUP}]) AC_MSG_RESULT([ HA user name = ${CRM_DAEMON_USER}]) AC_MSG_RESULT([]) AC_MSG_RESULT([ CFLAGS = ${CFLAGS}]) AC_MSG_RESULT([ Libraries = ${LIBS}]) AC_MSG_RESULT([ Stack Libraries = ${CLUSTERLIBS}]) diff --git a/cts/CTSlab.py b/cts/CTSlab.py index 9b336a5beb..a3494e1b4e 100755 --- a/cts/CTSlab.py +++ b/cts/CTSlab.py @@ -1,165 +1,158 @@ #!/usr/bin/python '''CTS: Cluster Testing System: Lab environment module ''' __copyright__ = ''' Copyright (C) 2001,2005 Alan Robertson Licensed under the 
GNU GPL. ''' # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 2 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA. from UserDict import UserDict import sys, types, string, string, signal, os, socket pdir = os.path.dirname(sys.path[0]) sys.path.insert(0, pdir) # So that things work from the source directory try: - from cts.CTSvars import * - from cts.CM_ais import * - from cts.CM_lha import crm_lha - from cts.CTSaudits import AuditList - from cts.CTStests import TestList + from cts.CTSvars import * + from cts.CM_ais import * + from cts.CM_lha import crm_lha + from cts.CTSaudits import AuditList + from cts.CTStests import TestList from cts.CTSscenarios import * from cts.logging import LogFactory - -except ImportError: - sys.stderr.write("abort: couldn't find cts libraries in [%s]\n" % +except ImportError as e: + sys.stderr.write("abort: %s\n" % e) + sys.stderr.write("check your install and PYTHONPATH; couldn't find cts libraries in:\n%s\n" % ' '.join(sys.path)) - sys.stderr.write("(check your install and PYTHONPATH)\n") - - # Now do it again to get more details - from cts.CTSvars import * - from cts.CM_ais import * - from cts.CM_lha import crm_lha - from cts.CTSaudits import AuditList - from cts.CTStests import TestList - from cts.CTSscenarios import * - from cts.logging import LogFactory - sys.exit(-1) + sys.exit(1) +# These are globals so they can be used by the signal handler. 
cm = None scenario = None - LogFactory().add_stderr() + + def sig_handler(signum, frame) : LogFactory().log("Interrupted by signal %d"%signum) if scenario: scenario.summarize() if signum == 15 : if scenario: scenario.TearDown() sys.exit(1) + if __name__ == '__main__': Environment = CtsLab(sys.argv[1:]) NumIter = Environment["iterations"] Tests = [] # Set the signal handler signal.signal(15, sig_handler) signal.signal(10, sig_handler) # Create the Cluster Manager object if Environment["Stack"] == "heartbeat": cm = crm_lha(Environment) elif Environment["Stack"] == "openais (whitetank)": cm = crm_whitetank(Environment) elif Environment["Stack"] == "corosync 2.x": cm = crm_mcp(Environment) elif Environment["Stack"] == "corosync (cman)": cm = crm_cman(Environment) elif Environment["Stack"] == "corosync (plugin v1)": cm = crm_cs_v1(Environment) elif Environment["Stack"] == "corosync (plugin v0)": cm = crm_cs_v0(Environment) else: LogFactory().log("Unknown stack: "+Environment["stack"]) sys.exit(1) if Environment["TruncateLog"] == 1: - Environment.log("Truncating %s" % LogFile) - lf = open(LogFile, "w"); - if lf != None: - lf.truncate(0) - lf.close() + if Environment["OutputFile"] is None: + LogFactory().log("Ignoring truncate request because no output file specified") + else: + LogFactory().log("Truncating %s" % Environment["OutputFile"]) + with open(Environment["OutputFile"], "w") as outputfile: + outputfile.truncate(0) Audits = AuditList(cm) if Environment["ListTests"] == 1: Tests = TestList(cm, Audits) LogFactory().log("Total %d tests"%len(Tests)) for test in Tests : LogFactory().log(str(test.name)); sys.exit(0) elif len(Environment["tests"]) == 0: Tests = TestList(cm, Audits) else: Chosen = Environment["tests"] for TestCase in Chosen: match = None for test in TestList(cm, Audits): if test.name == TestCase: match = test if not match: usage("--choose: No applicable/valid tests chosen") else: Tests.append(match) # Scenario selection if Environment["scenario"] == "basic-sanity": scenario = RandomTests(cm, [ BasicSanityCheck(Environment) ], Audits, Tests) elif Environment["scenario"] == "all-once": NumIter = len(Tests) scenario = AllOnce( cm, [ BootCluster(Environment), PacketLoss(Environment) ], Audits, Tests) elif Environment["scenario"] == "sequence": scenario = Sequence( cm, [ BootCluster(Environment), PacketLoss(Environment) ], Audits, Tests) elif Environment["scenario"] == "boot": scenario = Boot(cm, [ LeaveBooted(Environment)], Audits, []) else: scenario = RandomTests( cm, [ BootCluster(Environment), PacketLoss(Environment) ], Audits, Tests) LogFactory().log(">>>>>>>>>>>>>>>> BEGINNING " + repr(NumIter) + " TESTS ") LogFactory().log("Stack: %s (%s)" % (Environment["Stack"], Environment["Name"])) LogFactory().log("Schema: %s" % Environment["Schema"]) LogFactory().log("Scenario: %s" % scenario.__doc__) LogFactory().log("CTS Master: %s" % Environment["cts-master"]) LogFactory().log("CTS Logfile: %s" % Environment["OutputFile"]) LogFactory().log("Random Seed: %s" % Environment["RandSeed"]) LogFactory().log("Syslog variant: %s" % Environment["syslogd"].strip()) LogFactory().log("System log files: %s" % Environment["LogFileName"]) -# Environment.log(" ") if Environment.has_key("IPBase"): LogFactory().log("Base IP for resources: %s" % Environment["IPBase"]) LogFactory().log("Cluster starts at boot: %d" % Environment["at-boot"]) Environment.dump() rc = Environment.run(scenario, NumIter) sys.exit(rc) diff --git a/cts/README b/cts/README index 5fa4841e42..b2ff427de5 100644 --- a/cts/README +++ 
b/cts/README @@ -1,192 +1,138 @@ -BASIC REQUIREMENTS BEFORE STARTING: - -Three or more machines: one test exerciser and two or more test cluster machines. - - The two test cluster machines need to be on the same subnet - and they should have journalling filesystems for - all of their filesystems other than /boot - You also need a number of free IP addresses on that subnet to test - mutual IP address takeover - - The test exerciser machine doesn't need to be on the same subnet - as the test cluster machines. Minimal demands are made on the - exerciser machine - it just has to stay up during the tests. - However, it does need to have a current copy of the cts test - scripts. It is worth noting that these scripts are coordinated - with particular versions of Pacemaker, so that in general you - have to have the same version of test scripts as the rest of - Pacemaker. + + PACEMAKER + CLUSTER TEST SUITE (CTS) + + +Purpose +------- + +CTS thoroughly exercises a Pacemaker test cluster by running a randomized +series of predefined tests on the cluster. CTS can be run against a +pre-existing cluster configuration or (more typically) can overwrite the existing +configuration with a test configuration. + + +Requirements +------------ + +* Three or more machines (one test exerciser and two or more test cluster + machines). + +* The test cluster machines should be on the same subnet and have journalling + filesystems (ext3, ext4, xfs, etc.) for all of their filesystems other than + /boot. You also need a number of free IP addresses on that subnet if you + intend to test mutual IP address takeover. + +* The test exerciser machine doesn't need to be on the same subnet as the test + cluster machines. Minimal demands are made on the exerciser machine - it + just has to stay up during the tests. + +* It helps a lot in tracking problems if all machines' clocks are closely + synchronized. NTP does this automatically, but you can do it by hand if you + want. + +* The exerciser needs to be able to ssh over to the cluster nodes as root + without a password challenge. Configure ssh accordingly (see the Mini-HOWTO + at the end of this document for more details). + +* The exerciser needs to be able to resolve the machine names of the + test cluster - either by DNS or by /etc/hosts. + +Preparation +----------- -Install Pacemaker on all machines. +Install Pacemaker (including CTS) on all machines. These scripts are +coordinated with particular versions of Pacemaker, so you need the same version +of CTS as the rest of Pacemaker, and the same versions of Pacemaker and CTS +on both the test exerciser and the test cluster machines. Configure cluster communications (Corosync, CMAN or Heartbeat) on the cluster machines and verify everything works. NOTE: Do not run the cluster on the test exerciser machine. NOTE: Wherever machine names are mentioned in these configuration files, they must match the machines' `uname -n` name. This may or may not match the machines' FQDN (fully qualified domain name) - it depends on how you (and your OS) have named the machines. -It helps a lot in tracking problems if the three machines' clocks are -closely synchronized. xntpd does this, but you can do it by hand if -you want. - -Make sure all your filesystems are journalling filesystems (/boot can be -ext2 if you want). This means filesystems like ext3.
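As a rough sanity check of the requirements above (name resolution, clock synchronization and passwordless root ssh), a loop like the following can be run from the exerciser; the pcmk-* names are placeholders for your own cluster nodes:

    for n in pcmk-1 pcmk-2 pcmk-3; do
        getent hosts $n              # resolvable via DNS or /etc/hosts?
        ssh -l root $n date +%s      # passwordless root ssh; prints the node's clock in epoch seconds
    done
    date +%s                         # the exerciser's own clock, for comparison

Large differences between the timestamps indicate clock skew worth fixing before running tests.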
+Run CTS +------- -Here's what you need to do to run CTS: +Now assuming you did all this, what you need to do is run CTSlab.py: -The exerciser needs to be able to ssh over to the cluster nodes as root -without a password challenge. Configure ssh accordingly. - (see the Mini-HOWTOs at the end for more details) - -The exerciser needs to be able to resolve the machine names of the -test cluster - either by DNS or by /etc/hosts. + python ./CTSlab.py [options] number-of-tests-to-run +You must specify which nodes are part of the cluster with --nodes, e.g.: -Now assuming you did all this, what you need to do is run CTSlab.py + --node "pcmk-1 pcmk-2 pcmk-3" - python ./CTSlab.py [options] number-of-tests-to-run +Most people will want to save the output with --outputfile, e.g.: -You must specify which nodes are part of the cluster: - --nodes, eg. --node "pcmk-1 pcmk-2 pcmk-3" + --outputfile ~/cts.log -Most people will want to save the output: - --outputfile, eg. --outputfile ~/cts.log +Unless you want to test your pre-existing cluster configuration, you also want: -Unless you want to test your own cluster configuration, you will also want: --clobber-cib --populate-resources - --test-ip-base, eg. --test-ip-base 192.168.9.100 + --test-ip-base $IP # e.g. --test-ip-base 192.168.9.100 - and configure some sort of fencing: - --stonith, eg. --stonith rhcs to use fence_xvm or --stonith lha to use external/ssh +and configure some sort of fencing: + + --stonith $TYPE # e.g. "--stonith rhcs" to use fence_xvm or "--stonith lha" to use external/ssh A complete command line might look like: python ./CTSlab.py --nodes "pcmk-1 pcmk-2 pcmk-3" --outputfile ~/cts.log \ --clobber-cib --populate-resources --test-ip-base 192.168.9.100 \ --stonith rhcs 50 +For more options, use the --help option. -For other options, use the --help option and see the Mini-HOWTOs at the end for more details on setting up external/ssh. +To extract the result of a particular test, run: -HINT: To extract the result of a particular test, run: crm_report -T $test +Mini-HOWTO: Allow passwordless remote SSH connections +----------------------------------------------------- +The CTS scripts run "ssh -l root" so you don't have to do any of your testing +logged in as root on the test machine. Here is how to allow such connections +without requiring a password to be entered each time: -============== -Mini-HOWTOs: -============== - --------------------------------------------------------------------------------- -How to make OpenSSH allow you to login as root across the network without -a password. --------------------------------------------------------------------------------- - -All our scripts run ssh -l root, so you don't have to do any of your testing -logged in as root on the test machine - -1) Grab your key from the exerciser machine: - - take the single line out of ~/.ssh/identity.pub - and put it into root's authorized_keys file. - [This has changed to: copying the line from ~/.ssh/id_dsa.pub into - root's authorized_keys file ] +* On your test exerciser, create an SSH key if you do not already have one. + Most commonly, SSH keys will be in your ~/.ssh directory, with the + private key file not having an extension, and the public key file + named the same with the extension ".pub" (for example, ~/.ssh/id_dsa.pub). 
- NOTE: If you don't have an id_dsa.pub file, create it by running: + If you don't already have a key, you can create one with: ssh-keygen -t dsa -2) Run this command on each of the cluster machines as root: - -ssh -v -l myid ererciser-machine cat /home/myid/.ssh/identity.pub \ - >> ~root/.ssh/authorized_keys - -[For most people, this has changed to: - ssh -v -l myid exerciser-machine cat /home/myid/.ssh/id_dsa.pub \ - >> ~root/.ssh/authorized_keys -] - - You will probably have to provide your password, and possibly say - "yes" to some questions about accepting the identity of the - test machines - -3) You must also do the corresponding update for the exerciser - machine itself as root: - - cat /home/myid/.ssh/identity.pub >> ~root/.ssh/authorized_keys - - To test this, try this command from the exerciser machine for each - of your cluster machines, and for the exerciser machine itself. - -ssh -l root cluster-machine - -If this works without prompting for a password, you're in business... -If not, you need to look at the ssh/openssh documentation and the output from -the -v options above... - --------------------------------------------------------------------------------- -How to configure OpenSSH for StonithdTest --------------------------------------------------------------------------------- - -This configure enables cluster machines to ssh over to each other without a -password challenge. - -1) On each of the cluster machines, grab your key: - - take the single line out of ~/.ssh/identity.pub - and put it into root's authorized_keys file. - [This has changed to: copying the line from ~/.ssh/id_dsa.pub into - root's authorized_keys file ] - - NOTE: If you don't have an id_dsa.pub file, create it by running: - - ssh-keygen -t dsa - -2) Run this command on each of the cluster machines as root: - -ssh -v -l myid cluster_machine_1 cat /home/myid/.ssh/identity.pub \ - >> ~root/.ssh/authorized_keys - -ssh -v -l myid cluster_machine_2 cat /home/myid/.ssh/identity.pub \ - >> ~root/.ssh/authorized_keys - -...... - -ssh -v -l myid cluster_machine_n cat /home/myid/.ssh/identity.pub \ - >> ~root/.ssh/authorized_keys - -[For most people, this has changed to: - ssh -v -l myid cluster_machine cat /home/myid/.ssh/id_dsa.pub \ - >> ~root/.ssh/authorized_keys -] +* From your test exerciser, authorize your SSH public key for root on all test + machines (both the exerciser and the cluster test machines): - You will probably have to provide your password, and possibly say - "yes" to some questions about accepting the identity of the - test machines + ssh-copy-id -i ~/.ssh/id_dsa.pub root@$MACHINE -To test this, try this command from any machine for each -of other cluster machines, and for the machine itself. + You will probably have to provide your password, and possibly say + "yes" to some questions about accepting the identity of the test machines. - ssh -l root cluster-machine + The above assumes you have a DSA SSH key in the specified location; + if you have some other type of key (RSA, ECDSA, etc.), use its file name + in the -i option above. -This should work without prompting for a password, -If not, you need to look at the ssh/openssh documentation and the output from -the -v options above... + If you have an old version of SSH that doesn't have ssh-copy-id, + you can take the single line out of your public key file + (e.g. ~/.ssh/identity.pub or ~/.ssh/id_dsa.pub) and manually add it to + root's ~/.ssh/authorized_keys file on each test machine. 
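As a minimal sketch of the above (assuming a DSA key and the pcmk-* node names used in the example command line earlier), the key can be distributed in one loop from the exerciser:

    # cluster nodes plus the exerciser itself (both need the key authorized)
    for machine in pcmk-1 pcmk-2 pcmk-3 $(uname -n); do
        ssh-copy-id -i ~/.ssh/id_dsa.pub root@$machine
    done

Each iteration may prompt for a password and ask you to accept the host key; once it completes, the test step that follows should succeed without a password.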
-3) Make sure the 'at' daemon is enabled on the test cluster machines +* To test, try this command from the exerciser machine for each + of your cluster machines, and for the exerciser machine itself. -This is normally the 'atd' service started by /etc/init.d/atd). This -doesn't mean just start it, it means enable it to start on every boot -into your default init state (probably either 3 or 5). + ssh -l root $MACHINE -Usually this can be achieved with: - chkconfig --add atd - chkconfig atd on + If this works without prompting for a password, you're in business. + If not, look at the documentation for your version of ssh. diff --git a/cts/environment.py b/cts/environment.py index 2b2a3438e2..e76c36d29e 100644 --- a/cts/environment.py +++ b/cts/environment.py @@ -1,677 +1,680 @@ ''' Classes related to producing and searching logs ''' __copyright__=''' Copyright (C) 2014 Andrew Beekhof Licensed under the GNU GPL. ''' # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 2 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA. import types, string, select, sys, time, re, os, struct, signal, socket import time, syslog, random, traceback, base64, pickle, binascii, fcntl from cts.remote import * class Environment: def __init__(self, args): - print repr(self) self.data = {} self.Nodes = [] self["DeadTime"] = 300 self["StartTime"] = 300 self["StableTime"] = 30 self["tests"] = [] self["IPagent"] = "IPaddr2" self["DoStandby"] = 1 self["DoFencing"] = 1 self["XmitLoss"] = "0.0" self["RecvLoss"] = "0.0" self["ClobberCIB"] = 0 self["CIBfilename"] = None self["CIBResource"] = 0 self["DoBSC"] = 0 self["use_logd"] = 0 self["oprofile"] = [] self["warn-inactive"] = 0 self["ListTests"] = 0 self["benchmark"] = 0 self["LogWatcher"] = "any" self["SyslogFacility"] = "daemon" self["LogFileName"] = "/var/log/messages" self["Schema"] = "pacemaker-2.0" self["Stack"] = "corosync" self["stonith-type"] = "external/ssh" self["stonith-params"] = "hostlist=all,livedangerously=yes" self["loop-minutes"] = 60 self["valgrind-prefix"] = None self["valgrind-procs"] = "cib crmd attrd pengine stonith-ng" self["valgrind-opts"] = """--leak-check=full --show-reachable=yes --trace-children=no --num-callers=25 --gen-suppressions=all --suppressions="""+CTSvars.CTS_home+"""/cts.supp""" self["experimental-tests"] = 0 self["container-tests"] = 0 self["valgrind-tests"] = 0 self["unsafe-tests"] = 1 self["loop-tests"] = 1 self["scenario"] = "random" self["stats"] = 0 self["docker"] = 0 self.RandomGen = random.Random() self.logger = LogFactory() self.SeedRandom() self.rsh = RemoteFactory().getInstance() self.target = "localhost" self.parse_args(args) self.discover() self.validate() def SeedRandom(self, seed=None): if not seed: seed = int(time.time()) if self.has_key("RandSeed"): self.logger.log("New random seed is: " + str(seed)) else: self.logger.log("Random seed is: " + str(seed)) self["RandSeed"] = seed self.RandomGen.seed(str(seed)) def dump(self): keys = [] for 
key in self.data.keys(): keys.append(key) keys.sort() for key in keys: self.logger.debug("Environment["+key+"]:\t"+str(self[key])) def keys(self): return self.data.keys() def has_key(self, key): if key == "nodes": return True return self.data.has_key(key) def __getitem__(self, key): if key == "nodes": return self.Nodes elif key == "Name": return self.get_stack_short() elif self.data.has_key(key): return self.data[key] else: return None def __setitem__(self, key, value): if key == "Stack": self.set_stack(value) elif key == "node-limit": self.data[key] = value self.filter_nodes() elif key == "nodes": self.Nodes = [] for node in value: # I don't think I need the IP address, etc. but this validates # the node name against /etc/hosts and/or DNS, so it's a # GoodThing(tm). try: n = node.strip() if self.data["docker"] == 0: gethostbyname_ex(n) self.Nodes.append(n) except: self.logger.log(node+" not found in DNS... aborting") raise self.filter_nodes() else: self.data[key] = value def RandomNode(self): '''Choose a random node from the cluster''' return self.RandomGen.choice(self["nodes"]) def set_stack(self, name): # Normalize stack names if name == "heartbeat" or name == "lha": self.data["Stack"] = "heartbeat" elif name == "openais" or name == "ais" or name == "whitetank": self.data["Stack"] = "openais (whitetank)" elif name == "corosync" or name == "cs" or name == "mcp": self.data["Stack"] = "corosync 2.x" elif name == "cman": self.data["Stack"] = "corosync (cman)" elif name == "v1": self.data["Stack"] = "corosync (plugin v1)" elif name == "v0": self.data["Stack"] = "corosync (plugin v0)" else: print "Unknown stack: "+name sys.exit(1) def get_stack_short(self): # Create the Cluster Manager object if not self.data.has_key("Stack"): return "unknown" elif self.data["Stack"] == "heartbeat": return "crm-lha" elif self.data["Stack"] == "corosync 2.x": if self["docker"]: return "crm-mcp-docker" else: return "crm-mcp" elif self.data["Stack"] == "corosync (cman)": return "crm-cman" elif self.data["Stack"] == "corosync (plugin v1)": return "crm-plugin-v1" elif self.data["Stack"] == "corosync (plugin v0)": return "crm-plugin-v0" else: LogFactory().log("Unknown stack: "+self.data["stack"]) sys.exit(1) def detect_syslog(self): # Detect syslog variant if not self.has_key("syslogd"): if self["have_systemd"]: # Systemd self["syslogd"] = self.rsh(self.target, "systemctl list-units | grep syslog.*\.service.*active.*running | sed 's:.service.*::'", stdout=1).strip() else: # SYS-V self["syslogd"] = self.rsh(self.target, "chkconfig --list | grep syslog.*on | awk '{print $1}' | head -n 1", stdout=1).strip() if not self.has_key("syslogd") or not self["syslogd"]: # default self["syslogd"] = "rsyslog" def detect_at_boot(self): # Detect if the cluster starts at boot if not self.has_key("at-boot"): atboot = 0 if self["have_systemd"]: # Systemd atboot = atboot or not self.rsh(self.target, "systemctl is-enabled heartbeat.service") atboot = atboot or not self.rsh(self.target, "systemctl is-enabled corosync.service") atboot = atboot or not self.rsh(self.target, "systemctl is-enabled pacemaker.service") else: # SYS-V atboot = atboot or not self.rsh(self.target, "chkconfig --list | grep -e corosync.*on -e heartbeat.*on -e pacemaker.*on") self["at-boot"] = atboot def detect_ip_offset(self): # Try to determin an offset for IPaddr resources if self["CIBResource"] and not self.has_key("IPBase"): network=self.rsh(self.target, "ip addr | grep inet | grep -v -e link -e inet6 -e '/32' -e ' lo' | awk '{print $2}'", stdout=1).strip() 
self["IPBase"] = self.rsh(self.target, "nmap -sn -n %s | grep 'scan report' | awk '{print $NF}' | sed 's:(::' | sed 's:)::' | sort -V | tail -n 1" % network, stdout=1).strip() if not self["IPBase"]: self["IPBase"] = " fe80::1234:56:7890:1000" self.logger.log("Could not determine an offset for IPaddr resources. Perhaps nmap is not installed on the nodes.") self.logger.log("Defaulting to '%s', use --test-ip-base to override" % self["IPBase"]) elif int(self["IPBase"].split('.')[3]) >= 240: self.logger.log("Could not determine an offset for IPaddr resources. Upper bound is too high: %s %s" % (self["IPBase"], self["IPBase"].split('.')[3])) self["IPBase"] = " fe80::1234:56:7890:1000" self.logger.log("Defaulting to '%s', use --test-ip-base to override" % self["IPBase"]) def filter_nodes(self): if self["node-limit"] > 0: if len(self["nodes"]) > self["node-limit"]: self.logger.log("Limiting the number of nodes configured=%d (max=%d)" %(len(self["nodes"]), self["node-limit"])) while len(self["nodes"]) > self["node-limit"]: self["nodes"].pop(len(self["nodes"])-1) def validate(self): if len(self["nodes"]) < 1: print "No nodes specified!" sys.exit(1) def discover(self): self.target = random.Random().choice(self["nodes"]) master = socket.gethostname() # Use the IP where possible to avoid name lookup failures for ip in socket.gethostbyname_ex(master)[2]: if ip != "127.0.0.1": master = ip break; self["cts-master"] = master if not self.has_key("have_systemd"): self["have_systemd"] = not self.rsh(self.target, "systemctl list-units") self.detect_syslog() self.detect_at_boot() self.detect_ip_offset() self.validate() def parse_args(self, args): skipthis=None if not args: args=sys.argv[1:] for i in range(0, len(args)): if skipthis: skipthis=None continue elif args[i] == "-l" or args[i] == "--limit-nodes": skipthis=1 self["node-limit"] = int(args[i+1]) elif args[i] == "-r" or args[i] == "--populate-resources": self["CIBResource"] = 1 self["ClobberCIB"] = 1 elif args[i] == "--outputfile": skipthis=1 self["OutputFile"] = args[i+1] LogFactory().add_file(self["OutputFile"]) elif args[i] == "-L" or args[i] == "--logfile": skipthis=1 self["LogWatcher"] = "remote" self["LogAuditDisabled"] = 1 self["LogFileName"] = args[i+1] elif args[i] == "--ip" or args[i] == "--test-ip-base": skipthis=1 self["IPBase"] = args[i+1] self["CIBResource"] = 1 self["ClobberCIB"] = 1 elif args[i] == "--oprofile": skipthis=1 self["oprofile"] = args[i+1].split(' ') elif args[i] == "--trunc": self["TruncateLog"]=1 elif args[i] == "--list-tests" or args[i] == "--list" : self["ListTests"]=1 elif args[i] == "--benchmark": self["benchmark"]=1 elif args[i] == "--bsc": self["DoBSC"] = 1 self["scenario"] = "basic-sanity" elif args[i] == "--qarsh": RemoteFactory().enable_qarsh() elif args[i] == "--docker": self["docker"] = 1 RemoteFactory().enable_docker() elif args[i] == "--stonith" or args[i] == "--fencing": skipthis=1 if args[i+1] == "1" or args[i+1] == "yes": self["DoFencing"]=1 elif args[i+1] == "0" or args[i+1] == "no": self["DoFencing"]=0 elif args[i+1] == "rhcs" or args[i+1] == "xvm" or args[i+1] == "virt": self["DoStonith"]=1 self["stonith-type"] = "fence_xvm" self["stonith-params"] = "pcmk_arg_map=domain:uname,delay=0" elif args[i+1] == "docker": self["DoStonith"]=1 self["stonith-type"] = "fence_docker_cts" elif args[i+1] == "scsi": self["DoStonith"]=1 self["stonith-type"] = "fence_scsi" self["stonith-params"] = "delay=0" elif args[i+1] == "ssh" or args[i+1] == "lha": self["DoStonith"]=1 self["stonith-type"] = "external/ssh" 
self["stonith-params"] = "hostlist=all,livedangerously=yes" elif args[i+1] == "north": self["DoStonith"]=1 self["stonith-type"] = "fence_apc" self["stonith-params"] = "ipaddr=north-apc,login=apc,passwd=apc,pcmk_host_map=north-01:2;north-02:3;north-03:4;north-04:5;north-05:6;north-06:7;north-07:9;north-08:10;north-09:11;north-10:12;north-11:13;north-12:14;north-13:15;north-14:18;north-15:17;north-16:19;" elif args[i+1] == "south": self["DoStonith"]=1 self["stonith-type"] = "fence_apc" self["stonith-params"] = "ipaddr=south-apc,login=apc,passwd=apc,pcmk_host_map=south-01:2;south-02:3;south-03:4;south-04:5;south-05:6;south-06:7;south-07:9;south-08:10;south-09:11;south-10:12;south-11:13;south-12:14;south-13:15;south-14:18;south-15:17;south-16:19;" elif args[i+1] == "east": self["DoStonith"]=1 self["stonith-type"] = "fence_apc" self["stonith-params"] = "ipaddr=east-apc,login=apc,passwd=apc,pcmk_host_map=east-01:2;east-02:3;east-03:4;east-04:5;east-05:6;east-06:7;east-07:9;east-08:10;east-09:11;east-10:12;east-11:13;east-12:14;east-13:15;east-14:18;east-15:17;east-16:19;" elif args[i+1] == "west": self["DoStonith"]=1 self["stonith-type"] = "fence_apc" self["stonith-params"] = "ipaddr=west-apc,login=apc,passwd=apc,pcmk_host_map=west-01:2;west-02:3;west-03:4;west-04:5;west-05:6;west-06:7;west-07:9;west-08:10;west-09:11;west-10:12;west-11:13;west-12:14;west-13:15;west-14:18;west-15:17;west-16:19;" elif args[i+1] == "openstack": self["DoStonith"]=1 self["stonith-type"] = "fence_openstack" print "Obtaining OpenStack credentials from the current environment" self["stonith-params"] = "region=%s,tenant=%s,auth=%s,user=%s,password=%s" % ( os.environ['OS_REGION_NAME'], os.environ['OS_TENANT_NAME'], os.environ['OS_AUTH_URL'], os.environ['OS_USERNAME'], os.environ['OS_PASSWORD'] ) elif args[i+1] == "rhevm": self["DoStonith"]=1 self["stonith-type"] = "fence_rhevm" print "Obtaining RHEV-M credentials from the current environment" self["stonith-params"] = "login=%s,passwd=%s,ipaddr=%s,ipport=%s,ssl=1,shell_timeout=10" % ( os.environ['RHEVM_USERNAME'], os.environ['RHEVM_PASSWORD'], os.environ['RHEVM_SERVER'], os.environ['RHEVM_PORT'], ) else: self.usage(args[i+1]) elif args[i] == "--stonith-type": self["stonith-type"] = args[i+1] skipthis=1 elif args[i] == "--stonith-args": self["stonith-params"] = args[i+1] skipthis=1 elif args[i] == "--standby": skipthis=1 if args[i+1] == "1" or args[i+1] == "yes": self["DoStandby"] = 1 elif args[i+1] == "0" or args[i+1] == "no": self["DoStandby"] = 0 else: self.usage(args[i+1]) elif args[i] == "--clobber-cib" or args[i] == "-c": self["ClobberCIB"] = 1 elif args[i] == "--cib-filename": skipthis=1 self["CIBfilename"] = args[i+1] elif args[i] == "--xmit-loss": try: float(args[i+1]) except ValueError: print ("--xmit-loss parameter should be float") self.usage(args[i+1]) skipthis=1 self["XmitLoss"] = args[i+1] elif args[i] == "--recv-loss": try: float(args[i+1]) except ValueError: print ("--recv-loss parameter should be float") self.usage(args[i+1]) skipthis=1 self["RecvLoss"] = args[i+1] elif args[i] == "--choose": skipthis=1 self["tests"].append(args[i+1]) self["scenario"] = "sequence" elif args[i] == "--nodes": skipthis=1 self["nodes"] = args[i+1].split(' ') elif args[i] == "-g" or args[i] == "--group" or args[i] == "--dsh-group": skipthis=1 self["OutputFile"] = "%s/cluster-%s.log" % (os.environ['HOME'], args[i+1]) LogFactory().add_file(self["OutputFile"], "CTS") dsh_file = "%s/.dsh/group/%s" % (os.environ['HOME'], args[i+1]) # Hacks to make my life easier if args[i+1] == 
"r6": self["Stack"] = "cman" self["DoStonith"]=1 self["stonith-type"] = "fence_xvm" self["stonith-params"] = "delay=0" self["IPBase"] = " fe80::1234:56:7890:4000" elif args[i+1] == "virt1": self["Stack"] = "corosync" self["DoStonith"]=1 self["stonith-type"] = "fence_xvm" self["stonith-params"] = "delay=0" self["IPBase"] = " fe80::1234:56:7890:1000" elif args[i+1] == "east16" or args[i+1] == "nsew": self["Stack"] = "corosync" self["DoStonith"]=1 self["stonith-type"] = "fence_apc" self["stonith-params"] = "ipaddr=east-apc,login=apc,passwd=apc,pcmk_host_map=east-01:2;east-02:3;east-03:4;east-04:5;east-05:6;east-06:7;east-07:9;east-08:10;east-09:11;east-10:12;east-11:13;east-12:14;east-13:15;east-14:18;east-15:17;east-16:19;" self["IPBase"] = " fe80::1234:56:7890:2000" if args[i+1] == "east16": # Requires newer python than available via nsew self["IPagent"] = "Dummy" elif args[i+1] == "corosync8": self["Stack"] = "corosync" self["DoStonith"]=1 self["stonith-type"] = "fence_rhevm" print "Obtaining RHEV-M credentials from the current environment" self["stonith-params"] = "login=%s,passwd=%s,ipaddr=%s,ipport=%s,ssl=1,shell_timeout=10" % ( os.environ['RHEVM_USERNAME'], os.environ['RHEVM_PASSWORD'], os.environ['RHEVM_SERVER'], os.environ['RHEVM_PORT'], ) self["IPBase"] = " fe80::1234:56:7890:3000" if os.path.isfile(dsh_file): self["nodes"] = [] f = open(dsh_file, 'r') for line in f: l = line.strip().rstrip() if not l.startswith('#'): self["nodes"].append(l) f.close() else: print("Unknown DSH group: %s" % args[i+1]) elif args[i] == "--syslog-facility" or args[i] == "--facility": skipthis=1 self["SyslogFacility"] = args[i+1] elif args[i] == "--seed": skipthis=1 self.SeedRandom(args[i+1]) elif args[i] == "--warn-inactive": self["warn-inactive"] = 1 elif args[i] == "--schema": skipthis=1 self["Schema"] = args[i+1] elif args[i] == "--ais": self["Stack"] = "openais" elif args[i] == "--at-boot" or args[i] == "--cluster-starts-at-boot": skipthis=1 if args[i+1] == "1" or args[i+1] == "yes": self["at-boot"] = 1 elif args[i+1] == "0" or args[i+1] == "no": self["at-boot"] = 0 else: self.usage(args[i+1]) elif args[i] == "--heartbeat" or args[i] == "--lha": self["Stack"] = "heartbeat" elif args[i] == "--hae": self["Stack"] = "openais" self["Schema"] = "hae" elif args[i] == "--stack": if args[i+1] == "fedora" or args[i+1] == "fedora-17" or args[i+1] == "fedora-18": self["Stack"] = "corosync" elif args[i+1] == "rhel-6": self["Stack"] = "cman" elif args[i+1] == "rhel-7": self["Stack"] = "corosync" else: self["Stack"] = args[i+1] skipthis=1 elif args[i] == "--once": self["scenario"] = "all-once" elif args[i] == "--boot": self["scenario"] = "boot" elif args[i] == "--valgrind-tests": self["valgrind-tests"] = 1 elif args[i] == "--no-loop-tests": self["loop-tests"] = 0 elif args[i] == "--loop-minutes": skipthis=1 try: self["loop-minutes"]=int(args[i+1]) except ValueError: self.usage(args[i]) elif args[i] == "--no-unsafe-tests": self["unsafe-tests"] = 0 elif args[i] == "--experimental-tests": self["experimental-tests"] = 1 elif args[i] == "--container-tests": self["container-tests"] = 1 elif args[i] == "--set": skipthis=1 (name, value) = args[i+1].split('=') self[name] = value print "Setting %s = %s" % (name, value) + elif args[i] == "--help": + self.usage(args[i], 0) + elif args[i] == "--": break else: try: NumIter=int(args[i]) self["iterations"] = NumIter except ValueError: self.usage(args[i]) - def usage(arg, status=1): - print "Illegal argument %s" % (arg) + def usage(self, arg, status=1): + if status: + print 
"Illegal argument %s" % arg print "usage: " + sys.argv[0] +" [options] number-of-iterations" print "\nCommon options: " print "\t [--nodes 'node list'] list of cluster nodes separated by whitespace" print "\t [--group | -g 'name'] use the nodes listed in the named DSH group (~/.dsh/groups/$name)" print "\t [--limit-nodes max] only use the first 'max' cluster nodes supplied with --nodes" print "\t [--stack (v0|v1|cman|corosync|heartbeat|openais)] which cluster stack is installed" print "\t [--list-tests] list the valid tests" print "\t [--benchmark] add the timing information" print "\t " print "Options that CTS will usually auto-detect correctly: " print "\t [--logfile path] where should the test software look for logs from cluster nodes" print "\t [--syslog-facility name] which syslog facility should the test software log to" print "\t [--at-boot (1|0)] does the cluster software start at boot time" print "\t [--test-ip-base ip] offset for generated IP address resources" print "\t " print "Options for release testing: " print "\t [--populate-resources | -r] generate a sample configuration" print "\t [--choose name] run only the named test" print "\t [--stonith (1 | 0 | yes | no | rhcs | ssh)]" print "\t [--once] run all valid tests once" print "\t " print "Additional (less common) options: " print "\t [--clobber-cib | -c ] erase any existing configuration" print "\t [--outputfile path] optional location for the test software to write logs to" print "\t [--trunc] truncate logfile before starting" print "\t [--xmit-loss lost-rate(0.0-1.0)]" print "\t [--recv-loss lost-rate(0.0-1.0)]" print "\t [--standby (1 | 0 | yes | no)]" print "\t [--fencing (1 | 0 | yes | no | rhcs | lha | openstack )]" print "\t [--stonith-type type]" print "\t [--stonith-args name=value]" print "\t [--bsc]" print "\t [--no-loop-tests] dont run looping/time-based tests" print "\t [--no-unsafe-tests] dont run tests that are unsafe for use with ocfs2/drbd" print "\t [--valgrind-tests] include tests using valgrind" print "\t [--experimental-tests] include experimental tests" print "\t [--container-tests] include pacemaker_remote tests that run in lxc container resources" print "\t [--oprofile 'node list'] list of cluster nodes to run oprofile on]" print "\t [--qarsh] use the QARSH backdoor to access nodes instead of SSH" print "\t [--docker] Indicates nodes are docker nodes." 
print "\t [--seed random_seed]" print "\t [--set option=value]" print "\t " print "\t Example: " print "\t python sys.argv[0] -g virt1 --stack cs -r --stonith ssh --schema pacemaker-1.0 500" sys.exit(status) class EnvFactory: instance = None def __init__(self): pass def getInstance(self, args=None): if not EnvFactory.instance: EnvFactory.instance = Environment(args) return EnvFactory.instance diff --git a/cts/patterns.py b/cts/patterns.py index 13307e5e62..fe5299a5fd 100644 --- a/cts/patterns.py +++ b/cts/patterns.py @@ -1,532 +1,532 @@ from UserDict import UserDict import sys, time, types, syslog, os, struct, string, signal, traceback, warnings, socket from cts.CTSvars import * patternvariants = {} class BasePatterns: def __init__(self, name): self.name = name patternvariants[name] = self self.ignore = [] self.BadNews = [] self.components = {} self.commands = { "StatusCmd" : "crmadmin -t 60000 -S %s 2>/dev/null", "CibQuery" : "cibadmin -Ql", "CibAddXml" : "cibadmin --modify -c --xml-text %s", "CibDelXpath" : "cibadmin --delete --xpath %s", # 300,000 == 5 minutes "RscRunning" : CTSvars.CRM_DAEMON_DIR + "/lrmd_test -R -r %s", "CIBfile" : "%s:"+CTSvars.CRM_CONFIG_DIR+"/cib.xml", "TmpDir" : "/tmp", "BreakCommCmd" : "iptables -A INPUT -s %s -j DROP >/dev/null 2>&1", "FixCommCmd" : "iptables -D INPUT -s %s -j DROP >/dev/null 2>&1", # tc qdisc add dev lo root handle 1: cbq avpkt 1000 bandwidth 1000mbit # tc class add dev lo parent 1: classid 1:1 cbq rate "$RATE"kbps allot 17000 prio 5 bounded isolated # tc filter add dev lo parent 1: protocol ip prio 16 u32 match ip dst 127.0.0.1 match ip sport $PORT 0xFFFF flowid 1:1 # tc qdisc add dev lo parent 1: netem delay "$LATENCY"msec "$(($LATENCY/4))"msec 10% 2> /dev/null > /dev/null "ReduceCommCmd" : "", "RestoreCommCmd" : "tc qdisc del dev lo root", "UUIDQueryCmd" : "crmadmin -N", "MaintenanceModeOn" : "cibadmin --modify -c --xml-text ''", "MaintenanceModeOff" : "cibadmin --delete --xpath \"//nvpair[@name='maintenance-mode']\"", "StandbyCmd" : "crm_attribute -VQ -U %s -n standby -l forever -v %s 2>/dev/null", "StandbyQueryCmd" : "crm_attribute -QG -U %s -n standby -l forever -d off 2>/dev/null", } self.search = { "Pat:DC_IDLE" : "crmd.*State transition.*-> S_IDLE", # This wont work if we have multiple partitions "Pat:Local_started" : "%s\W.*The local CRM is operational", "Pat:Slave_started" : "%s\W.*State transition.*-> S_NOT_DC", "Pat:Master_started": "%s\W.*State transition.*-> S_IDLE", "Pat:We_stopped" : "heartbeat.*%s.*Heartbeat shutdown complete", "Pat:Logd_stopped" : "%s\W.*logd:.*Exiting write process", "Pat:They_stopped" : "%s\W.*LOST:.* %s ", "Pat:They_dead" : "node %s.*: is dead", "Pat:TransitionComplete" : "Transition status: Complete: complete", "Pat:Fencing_start" : "Initiating remote operation .* for %s", "Pat:Fencing_ok" : "stonith.*remote_op_done:.*Operation .* of %s by .*: OK", "Pat:RscOpOK" : "process_lrm_event:.*Operation %s_%s.*ok.*confirmed", "Pat:RscRemoteOpOK" : "process_lrm_event:.*Operation %s_%s.*ok.*node=%s, .*confirmed.*true", "Pat:NodeFenced" : "tengine_stonith_notify:.*Peer %s was terminated .*: OK", "Pat:FenceOpOK" : "Operation .* for host '%s' with device .* returned: 0", } def get_component(self, key): if self.components.has_key(key): return self.components[key] print "Unknown component '%s' for %s" % (key, self.name) return [] def get_patterns(self, key): if key == "BadNews": return self.BadNews elif key == "BadNewsIgnore": return self.ignore elif key == "Commands": return self.commands elif key == "Search": return 
self.search elif key == "Components": return self.components def __getitem__(self, key): if key == "Name": return self.name elif self.commands.has_key(key): return self.commands[key] elif self.search.has_key(key): return self.search[key] else: print "Unknown template '%s' for %s" % (key, self.name) return None class crm_lha(BasePatterns): def __init__(self, name): BasePatterns.__init__(self, name) self.commands.update({ "StartCmd" : "service heartbeat start > /dev/null 2>&1", "StopCmd" : "service heartbeat stop > /dev/null 2>&1", "EpocheCmd" : "crm_node -H -e", "QuorumCmd" : "crm_node -H -q", "ParitionCmd" : "crm_node -H -p", }) self.search.update({ # Patterns to look for in the log files for various occasions... "Pat:ChildKilled" : "%s\W.*heartbeat.*%s.*killed by signal 9", "Pat:ChildRespawn" : "%s\W.*heartbeat.*Respawning client.*%s", "Pat:ChildExit" : "(ERROR|error): Client .* exited with return code", }) self.BadNews = [ r"error:", r"crit:", r"ERROR:", r"CRIT:", r"Shutting down...NOW", r"Timer I_TERMINATE just popped", r"input=I_ERROR", r"input=I_FAIL", r"input=I_INTEGRATED cause=C_TIMER_POPPED", r"input=I_FINALIZED cause=C_TIMER_POPPED", r"input=I_ERROR", r", exiting\.", r"WARN.*Ignoring HA message.*vote.*not in our membership list", r"pengine.*Attempting recovery of resource", r"is taking more than 2x its timeout", r"Confirm not received from", r"Welcome reply not received from", r"Attempting to schedule .* after a stop", r"Resource .* was active at shutdown", r"duplicate entries for call_id", r"Search terminated:", r"No need to invoke the TE", r"global_timer_callback:", r"Faking parameter digest creation", r"Parameters to .* action changed:", r"Parameters to .* changed", ] self.ignore = [ "(ERROR|error): crm_abort:.*crm_glib_handler: ", "(ERROR|error): Message hist queue is filling up", "stonithd.*CRIT: external_hostlist:.*'vmware gethosts' returned an empty hostlist", "stonithd.*(ERROR|error): Could not list nodes for stonith RA external/vmware.", "pengine.*Preventing .* from re-starting", ] class crm_cs_v0(BasePatterns): def __init__(self, name): BasePatterns.__init__(self, name) self.commands.update({ "EpocheCmd" : "crm_node -e --openais", "QuorumCmd" : "crm_node -q --openais", "ParitionCmd" : "crm_node -p --openais", "StartCmd" : "service corosync start", "StopCmd" : "service corosync stop", }) self.search.update({ # The next pattern is too early # "Pat:We_stopped" : "%s.*Service engine unloaded: Pacemaker Cluster Manager", # The next pattern would be preferred, but it doesn't always come out # "Pat:We_stopped" : "%s.*Corosync Cluster Engine exiting with status", "Pat:We_stopped" : "%s\W.*Service engine unloaded: corosync cluster quorum service", "Pat:They_stopped" : "%s\W.*crmd.*Node %s\[.*state is now lost", "Pat:They_dead" : "corosync:.*Node %s is now: lost", "Pat:ChildExit" : "Child process .* exited", "Pat:ChildKilled" : "%s\W.*corosync.*Child process %s terminated with signal 9", "Pat:ChildRespawn" : "%s\W.*corosync.*Respawning failed child process: %s", + + "Pat:InfraUp" : "%s\W.*corosync.*Initializing transport", + "Pat:PacemakerUp" : "%s\W.*pacemakerd.*Starting Pacemaker", }) self.ignore = [ r"crm_mon:", r"crmadmin:", r"update_trace_data", r"async_notify:.*strange, client not found", r"Parse error: Ignoring unknown option .*nodename", r"error: log_operation:.*Operation 'reboot' .* with device 'FencingFail' returned:", r"Child process .* terminated with signal 9", r"getinfo response error: 1$", "sbd.* error: inquisitor_child: DEBUG MODE IS ACTIVE", "sbd.* pcmk: error: 
crm_ipc_read: Connection to cib_ro failed", "sbd.* pcmk: error: mainloop_gio_callback: Connection to cib_ro.* closed .I/O condition=17", ] self.BadNews = [ r"error:", r"crit:", r"ERROR:", r"CRIT:", r"Shutting down...NOW", r"Timer I_TERMINATE just popped", r"input=I_ERROR", r"input=I_FAIL", r"input=I_INTEGRATED cause=C_TIMER_POPPED", r"input=I_FINALIZED cause=C_TIMER_POPPED", r"input=I_ERROR", r", exiting\.", r"(WARN|warn).*Ignoring HA message.*vote.*not in our membership list", r"pengine.*Attempting recovery of resource", r"is taking more than 2x its timeout", r"Confirm not received from", r"Welcome reply not received from", r"Attempting to schedule .* after a stop", r"Resource .* was active at shutdown", r"duplicate entries for call_id", r"Search terminated:", r":global_timer_callback", r"Faking parameter digest creation", r"Parameters to .* action changed:", r"Parameters to .* changed", r"The .* process .* terminated with signal", r"Child process .* terminated with signal", r"LogActions:.*Recover", r"rsyslogd.* imuxsock lost .* messages from pid .* due to rate-limiting", r"Peer is not part of our cluster", r"We appear to be in an election loop", r"Unknown node -> we will not deliver message", r"crm_write_blackbox", r"pacemakerd.*Could not connect to Cluster Configuration Database API", r"Receiving messages from a node we think is dead", r"share the same cluster nodeid", r"share the same name", #r"crm_ipc_send:.*Request .* failed", #r"crm_ipc_send:.*Sending to .* is disabled until pending reply is received", # Not inherently bad, but worth tracking #r"No need to invoke the TE", #r"ping.*: DEBUG: Updated connected = 0", #r"Digest mis-match:", r"te_graph_trigger:.*Transition failed: terminated", r"process_ping_reply", r"warn.*:retrieveCib", #r"Executing .* fencing operation", #r"fence_pcmk.* Call to fence", #r"fence_pcmk", r"cman killed by node", r"Election storm", r"stalled the FSA with pending inputs", ] self.components["common-ignore"] = [ "Pending action:", "error: crm_log_message_adv:", "resources were active at shutdown", "pending LRM operations at shutdown", "Lost connection to the CIB service", "Connection to the CIB terminated...", "Sending message to CIB service FAILED", "apply_xml_diff:.*Diff application failed!", "crmd.*Action A_RECOVER .* not supported", "unconfirmed_actions:.*Waiting on .* unconfirmed actions", "cib_native_msgready:.*Message pending on command channel", "crmd.*do_exit:.*Performing A_EXIT_1 - forcefully exiting the CRMd", "verify_stopped:.*Resource .* was active at shutdown. 
You may ignore this error if it is unmanaged.", "error: attrd_connection_destroy:.*Lost connection to attrd", "info: te_fence_node:.*Executing .* fencing operation", "crm_write_blackbox:", # "error: native_create_actions: Resource .*stonith::.* is active on 2 nodes attempting recovery", # "error: process_pe_message: Transition .* ERRORs found during PE processing", ] self.components["corosync-ignore"] = [ r"error: pcmk_cpg_dispatch:.*Connection to the CPG API failed: Library error", r"The .* process .* exited", r"pacemakerd.*error: pcmk_child_exit:.*Child process .* exited", r"cib.*error: cib_cs_destroy:.*Corosync connection lost", r"stonith-ng.*error: stonith_peer_cs_destroy:.*Corosync connection terminated", r"The cib process .* exited: Invalid argument", r"The attrd process .* exited: Transport endpoint is not connected", r"The crmd process .* exited: Link has been severed", r"error: pcmk_child_exit:.*Child process cib .* exited: Invalid argument", r"error: pcmk_child_exit:.*Child process attrd .* exited: Transport endpoint is not connected", r"error: pcmk_child_exit:.*Child process crmd .* exited: Link has been severed", r"lrmd.*error: crm_ipc_read:.*Connection to stonith-ng failed", r"lrmd.*error: mainloop_gio_callback:.*Connection to stonith-ng.* closed", r"lrmd.*error: stonith_connection_destroy_cb:.*LRMD lost STONITH connection", r"crmd.*do_state_transition:.*State transition .* S_RECOVERY", r"crmd.*error: do_log:.*FSA: Input I_ERROR", r"crmd.*error: do_log:.*FSA: Input I_TERMINATE", r"crmd.*error: pcmk_cman_dispatch:.*Connection to cman failed", r"crmd.*error: crmd_fast_exit:.*Could not recover from internal error", r"error: crm_ipc_read:.*Connection to cib_shm failed", r"error: mainloop_gio_callback:.*Connection to cib_shm.* closed", r"error: stonith_connection_failed:.*STONITH connection failed", ] self.components["corosync"] = [ r"pacemakerd.*error: cfg_connection_destroy:.*Connection destroyed", r"pacemakerd.*error: mcp_cpg_destroy:.*Connection destroyed", r"crit: attrd_(cs|cpg)_destroy:.*Lost connection to Corosync service", r"stonith_peer_cs_destroy:.*Corosync connection terminated", r"cib_cs_destroy:.*Corosync connection lost! 
Exiting.", r"crmd_(cs|quorum)_destroy:.*connection terminated", r"pengine.*Scheduling Node .* for STONITH", r"tengine_stonith_notify:.*Peer .* was terminated .*: OK", ] self.components["cib-ignore"] = [ "lrmd.*Connection to stonith-ng failed", "lrmd.*Connection to stonith-ng.* closed", "lrmd.*LRMD lost STONITH connection", "lrmd.*STONITH connection failed, finalizing .* pending operations", ] self.components["cib"] = [ "State transition .* S_RECOVERY", "Respawning .* crmd", "Respawning .* attrd", "Connection to cib_.* failed", "Connection to cib_.* closed", "Connection to the CIB terminated...", "(Child process|The) crmd .* exited: Generic Pacemaker error", "(Child process|The) attrd .* exited: (Connection reset by peer|Transport endpoint is not connected)", "Lost connection to CIB service", "crmd.*Input I_TERMINATE from do_recover", "crmd.*I_ERROR.*crmd_cib_connection_destroy", "crmd.*Could not recover from internal error", ] self.components["lrmd"] = [ "State transition .* S_RECOVERY", "LRM Connection failed", "Respawning .* crmd", "Connection to lrmd failed", "Connection to lrmd.* closed", "crmd.*I_ERROR.*lrm_connection_destroy", "(Child process|The) crmd .* exited: Generic Pacemaker error", "crmd.*Input I_TERMINATE from do_recover", "crmd.*Could not recover from internal error", ] self.components["lrmd-ignore"] = [] self.components["crmd"] = [ # "WARN: determine_online_status: Node .* is unclean", # "Scheduling Node .* for STONITH", # "Executing .* fencing operation", # Only if the node wasn't the DC: "State transition S_IDLE", "State transition .* -> S_IDLE", ] self.components["crmd-ignore"] = [] self.components["attrd"] = [] self.components["attrd-ignore"] = [] self.components["pengine"] = [ "State transition .* S_RECOVERY", "Respawning .* crmd", "(The|Child process) crmd .* exited: Generic Pacemaker error", "Connection to pengine failed", "Connection to pengine.* closed", "Connection to the Policy Engine failed", "crmd.*I_ERROR.*save_cib_contents", "crmd.*Input I_TERMINATE from do_recover", "crmd.*Could not recover from internal error", ] self.components["pengine-ignore"] = [] self.components["stonith"] = [ "Connection to stonith-ng failed", "LRMD lost STONITH connection", "Connection to stonith-ng.* closed", "Fencing daemon connection failed", "crmd.*stonith_api_add_notification:.*Callback already present", ] self.components["stonith-ignore"] = [ "LogActions: Recover Fencing", "Updating failcount for Fencing", "error: crm_ipc_read: Connection to stonith-ng failed", "error: mainloop_gio_callback: Connection to stonith-ng.*closed (I/O condition=17)", "crit: tengine_stonith_connection_destroy: Fencing daemon connection failed", "error: te_connect_stonith:.*Sign-in failed: triggered a retry", "STONITH connection failed, finalizing .* pending operations.", "process_lrm_event:.*Operation Fencing.* Error", ] self.components["stonith-ignore"].extend(self.components["common-ignore"]) class crm_mcp(crm_cs_v0): ''' The crm version 4 cluster manager class. It implements the things we need to talk to and manipulate crm clusters running on top of native corosync (no plugins) ''' def __init__(self, name): crm_cs_v0.__init__(self, name) self.commands.update({ "StartCmd" : "service corosync start && service pacemaker start", "StopCmd" : "service pacemaker stop; service pacemaker_remote stop; service corosync stop", "EpocheCmd" : "crm_node -e", "QuorumCmd" : "crm_node -q", "ParitionCmd" : "crm_node -p", }) self.search.update({ # Close enough... 
"Corosync Cluster Engine exiting normally" isn't printed - # reliably and there's little interest in doing anything it + # reliably and there's little interest in doing anything about it "Pat:We_stopped" : "%s\W.*Unloading all Corosync service engines", "Pat:They_stopped" : "%s\W.*crmd.*Node %s\[.*state is now lost", "Pat:They_dead" : "crmd.*Node %s\[.*state is now lost", "Pat:ChildExit" : "The .* process exited", "Pat:ChildKilled" : "%s\W.*pacemakerd.*The %s process .* terminated with signal 9", "Pat:ChildRespawn" : "%s\W.*pacemakerd.*Respawning failed child process: %s", - - "Pat:InfraUp" : "%s\W.*corosync.*Initializing transport", - "Pat:PacemakerUp" : "%s\W.*pacemakerd.*Starting Pacemaker", }) # if self.Env["have_systemd"]: # self.update({ # # When systemd is in use, we can look for this instead # "Pat:We_stopped" : "%s.*Stopped Corosync Cluster Engine", # }) class crm_mcp_docker(crm_mcp): ''' The crm version 4 cluster manager class. It implements the things we need to talk to and manipulate crm clusters running on top of native corosync (no plugins) ''' def __init__(self, name): crm_mcp.__init__(self, name) self.commands.update({ "StartCmd" : "pcmk_start", "StopCmd" : "pcmk_stop", }) class crm_cman(crm_cs_v0): ''' The crm version 3 cluster manager class. It implements the things we need to talk to and manipulate crm clusters running on top of openais ''' def __init__(self, name): crm_cs_v0.__init__(self, name) self.commands.update({ "StartCmd" : "service pacemaker start", "StopCmd" : "service pacemaker stop; service pacemaker_remote stop", "EpocheCmd" : "crm_node -e --cman", "QuorumCmd" : "crm_node -q --cman", "ParitionCmd" : "crm_node -p --cman", "Pat:We_stopped" : "%s.*Unloading all Corosync service engines", "Pat:They_stopped" : "%s\W.*crmd.*Node %s\[.*state is now lost", "Pat:They_dead" : "crmd.*Node %s\[.*state is now lost", "Pat:ChildKilled" : "%s\W.*pacemakerd.*The %s process .* terminated with signal 9", "Pat:ChildRespawn" : "%s\W.*pacemakerd.*Respawning failed child process: %s", }) class PatternSelector: def __init__(self, name=None): self.name = name self.base = BasePatterns("crm-base") if not name: crm_cs_v0("crm-plugin-v0") crm_cman("crm-cman") crm_mcp("crm-mcp") crm_lha("crm-lha") elif name == "crm-lha": crm_lha(name) elif name == "crm-plugin-v0": crm_cs_v0(name) elif name == "crm-cman": crm_cman(name) elif name == "crm-mcp": crm_mcp(name) elif name == "crm-mcp-docker": crm_mcp_docker(name) def get_variant(self, variant): if patternvariants.has_key(variant): return patternvariants[variant] print "defaulting to crm-base for %s" % variant return self.base def get_patterns(self, variant, kind): return self.get_variant(variant).get_patterns(kind) def get_template(self, variant, key): v = self.get_variant(variant) return v[key] def get_component(self, variant, kind): return self.get_variant(variant).get_component(kind) def __getitem__(self, key): return self.get_template(self.name, key) # python cts/CTSpatt.py -k crm-mcp -t StartCmd if __name__ == '__main__': pdir=os.path.dirname(sys.path[0]) sys.path.insert(0, pdir) # So that things work from the source directory from cts.CTSvars import * kind=None template=None skipthis=None args=sys.argv[1:] for i in range(0, len(args)): if skipthis: skipthis=None continue elif args[i] == "-k" or args[i] == "--kind": skipthis=1 kind = args[i+1] elif args[i] == "-t" or args[i] == "--template": skipthis=1 template = args[i+1] else: print "Illegal argument " + args[i] print PatternSelector(kind)[template] diff --git 
a/doc/Clusters_from_Scratch/en-US/Ch-Stonith.txt b/doc/Clusters_from_Scratch/en-US/Ch-Stonith.txt index 230a208b52..0d67ecd90b 100644 --- a/doc/Clusters_from_Scratch/en-US/Ch-Stonith.txt +++ b/doc/Clusters_from_Scratch/en-US/Ch-Stonith.txt @@ -1,139 +1,140 @@ -[[_what_is_stonith]] = Configure STONITH = +== What is STONITH? == + STONITH (Shoot The Other Node In The Head, a.k.a. fencing) protects your data from being corrupted by rogue nodes or unintended concurrent access. Just because a node is unresponsive doesn't mean it has stopped accessing your data. The only way to be 100% sure that your data is safe is to use STONITH to ensure that the node is truly offline before allowing the data to be accessed from another node. STONITH also has a role to play in the event that a clustered service cannot be stopped. In this case, the cluster uses STONITH to force the whole node offline, thereby making it safe to start the service elsewhere. == Choose a STONITH Device == It is crucial that your STONITH device allows the cluster to differentiate between a node failure and a network failure. The biggest mistake people make in choosing a STONITH device is to use a remote power switch (such as many on-board IPMI controllers) that shares power with the node it controls. In such cases, the cluster cannot be sure whether the node is really offline or active and suffering from a network fault. Likewise, any device that relies on the machine being active (such as SSH-based "devices" used during testing) is inappropriate. == Configure the Cluster for STONITH == . Configure the STONITH device itself to be able to fence your nodes and accept fencing requests. . Install the STONITH agent(s). To see what packages are available, run `yum search fence-agents fence-virt`. Be sure to install the package(s) on all cluster nodes. . Find the correct STONITH agent script: `pcs stonith list` . Find the parameters associated with the device: +pcs stonith describe pass:[agent_name]+ . Create a local copy of the CIB: `pcs cluster cib stonith_cfg` . Create the fencing resource: +pcs -f stonith_cfg stonith create pass:[stonith_id stonith_device_type [stonith_device_options]]+ . Enable STONITH in the cluster: `pcs -f stonith_cfg property set stonith-enabled=true` . If the device does not know how to fence nodes based on their uname, you may also need to set the special *pcmk_host_map* parameter. See `man stonithd` for details. . If the device does not support the *list* command, you may also need to set the special *pcmk_host_list* and/or *pcmk_host_check* parameters. See `man stonithd` for details. . If the device does not expect the victim to be specified with the *port* parameter, you may also need to set the special *pcmk_host_argument* parameter. See `man stonithd` for details. . Commit the new configuration: `pcs cluster cib-push stonith_cfg` . Once the STONITH resource is running, test it (you might want to stop the cluster on that machine first): +stonith_admin --reboot pass:[nodename]+ == Example == For this example, assume we have a chassis containing four nodes and an IPMI device active on 10.0.0.1. Following the steps above would go something like this: Step 1: Configure the IP address, authentication credentials, etc. in the IPMI device itself. Step 2: Install the *fence-agents-ipmilan* package on both nodes. Step 3: Choose the *fence_ipmilan* STONITH agent.
Step 4: Obtain the agent's possible parameters: ---- [root@pcmk-1 ~]# pcs stonith describe fence_ipmilan Stonith options for: fence_ipmilan ipport: TCP/UDP port to use for connection with device inet6_only: Forces agent to use IPv6 addresses only ipaddr (required): IP Address or Hostname passwd_script: Script to retrieve password method: Method to fence (onoff|cycle) inet4_only: Forces agent to use IPv4 addresses only passwd: Login password or passphrase lanplus: Use Lanplus to improve security of connection auth: IPMI Lan Auth type. cipher: Ciphersuite to use (same as ipmitool -C parameter) privlvl: Privilege level on IPMI device action (required): Fencing Action login: Login Name verbose: Verbose mode debug: Write debug information to given file version: Display version information and exit help: Display help and exit power_wait: Wait X seconds after issuing ON/OFF login_timeout: Wait X seconds for cmd prompt after login power_timeout: Test X seconds for status change after ON/OFF delay: Wait X seconds before fencing is started ipmitool_path: Path to ipmitool binary shell_timeout: Wait X seconds for cmd prompt after issuing command retry_on: Count of attempts to retry power on sudo: Use sudo (without password) when calling 3rd party sotfware. stonith-timeout: How long to wait for the STONITH action to complete per a stonith device. priority: The priority of the stonith resource. Devices are tried in order of highest priority to lowest. pcmk_host_map: A mapping of host names to ports numbers for devices that do not support host names. pcmk_host_list: A list of machines controlled by this device (Optional unless pcmk_host_check=static-list). pcmk_host_check: How to determine which machines are controlled by the device. ---- Step 5: `pcs cluster cib stonith_cfg` Step 6: Here are example parameters for creating our STONITH resource: ---- # pcs -f stonith_cfg stonith create ipmi-fencing fence_ipmilan \ pcmk_host_list="pcmk-1 pcmk-2" ipaddr=10.0.0.1 login=testuser \ passwd=acd123 op monitor interval=60s # pcs -f stonith_cfg stonith ipmi-fencing (stonith:fence_ipmilan): Stopped ---- Steps 7-10: Enable STONITH in the cluster: ---- # pcs -f stonith_cfg property set stonith-enabled=true # pcs -f stonith_cfg property Cluster Properties: cluster-infrastructure: corosync cluster-name: mycluster dc-version: 1.1.12-a9c8177 have-watchdog: false stonith-enabled: true ---- Step 11: `pcs cluster cib-push stonith_cfg` diff --git a/include/crm/crm.h b/include/crm/crm.h index 41279b02b8..37bc5ce8d9 100644 --- a/include/crm/crm.h +++ b/include/crm/crm.h @@ -1,200 +1,201 @@ /* * Copyright (C) 2004 Andrew Beekhof * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public * License as published by the Free Software Foundation; either * version 2 of the License, or (at your option) any later version. * * This software is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU * General Public License for more details. 
* * You should have received a copy of the GNU General Public * License along with this library; if not, write to the Free Software * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA */ #ifndef CRM__H # define CRM__H /** * \file * \brief A dumping ground * \ingroup core */ # include # include # include # include # undef MIN # undef MAX # include # include # define CRM_FEATURE_SET "3.0.9" # define EOS '\0' # define DIMOF(a) ((int) (sizeof(a)/sizeof(a[0])) ) # ifndef MAX_NAME # define MAX_NAME 256 # endif # ifndef __GNUC__ # define __builtin_expect(expr, result) (expr) # endif /* Some handy macros used by the Linux kernel */ # define __likely(expr) __builtin_expect(expr, 1) # define __unlikely(expr) __builtin_expect(expr, 0) # define CRM_META "CRM_meta" extern char *crm_system_name; /* *INDENT-OFF* */ /* Clean these up at some point, some probably should be runtime options */ # define SOCKET_LEN 1024 # define APPNAME_LEN 256 # define MAX_IPC_FAIL 5 # define MAX_IPC_DELAY 120 # define DAEMON_RESPAWN_STOP 100 # define MSG_LOG 1 # define DOT_FSA_ACTIONS 1 # define DOT_ALL_FSA_INPUTS 1 /* #define FSA_TRACE 1 */ # define INFINITY_S "INFINITY" # define MINUS_INFINITY_S "-INFINITY" # define INFINITY 1000000 /* Sub-systems */ # define CRM_SYSTEM_DC "dc" # define CRM_SYSTEM_DCIB "dcib" /* The master CIB */ # define CRM_SYSTEM_CIB "cib" # define CRM_SYSTEM_CRMD "crmd" # define CRM_SYSTEM_LRMD "lrmd" # define CRM_SYSTEM_PENGINE "pengine" # define CRM_SYSTEM_TENGINE "tengine" # define CRM_SYSTEM_STONITHD "stonithd" # define CRM_SYSTEM_MCP "pacemakerd" /* Valid operations */ # define CRM_OP_NOOP "noop" # define CRM_OP_JOIN_ANNOUNCE "join_announce" # define CRM_OP_JOIN_OFFER "join_offer" # define CRM_OP_JOIN_REQUEST "join_request" # define CRM_OP_JOIN_ACKNAK "join_ack_nack" # define CRM_OP_JOIN_CONFIRM "join_confirm" # define CRM_OP_DIE "die_no_respawn" # define CRM_OP_RETRIVE_CIB "retrieve_cib" # define CRM_OP_PING "ping" # define CRM_OP_THROTTLE "throttle" # define CRM_OP_VOTE "vote" # define CRM_OP_NOVOTE "no-vote" # define CRM_OP_HELLO "hello" # define CRM_OP_HBEAT "dc_beat" # define CRM_OP_PECALC "pe_calc" # define CRM_OP_ABORT "abort" # define CRM_OP_QUIT "quit" # define CRM_OP_LOCAL_SHUTDOWN "start_shutdown" # define CRM_OP_SHUTDOWN_REQ "req_shutdown" # define CRM_OP_SHUTDOWN "do_shutdown" # define CRM_OP_FENCE "stonith" # define CRM_OP_EVENTCC "event_cc" # define CRM_OP_TEABORT "te_abort" # define CRM_OP_TEABORTED "te_abort_confirmed" /* we asked */ # define CRM_OP_TE_HALT "te_halt" # define CRM_OP_TECOMPLETE "te_complete" # define CRM_OP_TETIMEOUT "te_timeout" # define CRM_OP_TRANSITION "transition" # define CRM_OP_REGISTER "register" # define CRM_OP_IPC_FWD "ipc_fwd" # define CRM_OP_DEBUG_UP "debug_inc" # define CRM_OP_DEBUG_DOWN "debug_dec" # define CRM_OP_INVOKE_LRM "lrm_invoke" # define CRM_OP_LRM_REFRESH "lrm_refresh" /* Deprecated */ # define CRM_OP_LRM_QUERY "lrm_query" # define CRM_OP_LRM_DELETE "lrm_delete" # define CRM_OP_LRM_FAIL "lrm_fail" # define CRM_OP_PROBED "probe_complete" # define CRM_OP_NODES_PROBED "probe_nodes_complete" # define CRM_OP_REPROBE "probe_again" # define CRM_OP_CLEAR_FAILCOUNT "clear_failcount" # define CRM_OP_RELAXED_SET "one-or-more" +# define CRM_OP_RELAXED_CLONE "clone-one-or-more" # define CRM_OP_RM_NODE_CACHE "rm_node_cache" # define CRMD_JOINSTATE_DOWN "down" # define CRMD_JOINSTATE_PENDING "pending" # define CRMD_JOINSTATE_MEMBER "member" # define CRMD_JOINSTATE_NACK "banned" # define CRMD_ACTION_DELETE "delete" # define 
CRMD_ACTION_CANCEL "cancel" # define CRMD_ACTION_MIGRATE "migrate_to" # define CRMD_ACTION_MIGRATED "migrate_from" # define CRMD_ACTION_START "start" # define CRMD_ACTION_STARTED "running" # define CRMD_ACTION_STOP "stop" # define CRMD_ACTION_STOPPED "stopped" # define CRMD_ACTION_PROMOTE "promote" # define CRMD_ACTION_PROMOTED "promoted" # define CRMD_ACTION_DEMOTE "demote" # define CRMD_ACTION_DEMOTED "demoted" # define CRMD_ACTION_NOTIFY "notify" # define CRMD_ACTION_NOTIFIED "notified" # define CRMD_ACTION_STATUS "monitor" /* short names */ # define RSC_DELETE CRMD_ACTION_DELETE # define RSC_CANCEL CRMD_ACTION_CANCEL # define RSC_MIGRATE CRMD_ACTION_MIGRATE # define RSC_MIGRATED CRMD_ACTION_MIGRATED # define RSC_START CRMD_ACTION_START # define RSC_STARTED CRMD_ACTION_STARTED # define RSC_STOP CRMD_ACTION_STOP # define RSC_STOPPED CRMD_ACTION_STOPPED # define RSC_PROMOTE CRMD_ACTION_PROMOTE # define RSC_PROMOTED CRMD_ACTION_PROMOTED # define RSC_DEMOTE CRMD_ACTION_DEMOTE # define RSC_DEMOTED CRMD_ACTION_DEMOTED # define RSC_NOTIFY CRMD_ACTION_NOTIFY # define RSC_NOTIFIED CRMD_ACTION_NOTIFIED # define RSC_STATUS CRMD_ACTION_STATUS /* *INDENT-ON* */ typedef GList *GListPtr; # include # include # include # define crm_str_hash g_str_hash_traditional guint crm_strcase_hash(gconstpointer v); guint g_str_hash_traditional(gconstpointer v); #endif diff --git a/lib/common/xml.c b/lib/common/xml.c index 2c476dc830..2fa80ec5b2 100644 --- a/lib/common/xml.c +++ b/lib/common/xml.c @@ -1,6074 +1,6075 @@ /* * Copyright (C) 2004 Andrew Beekhof * * This library is free software; you can redistribute it and/or * modify it under the terms of the GNU Lesser General Public * License as published by the Free Software Foundation; either * version 2.1 of the License, or (at your option) any later version. * * This library is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU * Lesser General Public License for more details. * * You should have received a copy of the GNU Lesser General Public * License along with this library; if not, write to the Free Software * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA */ #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #if HAVE_BZLIB_H # include #endif #if HAVE_LIBXML2 # include # include # include #endif #if HAVE_LIBXSLT # include # include #endif #define XML_BUFFER_SIZE 4096 #define XML_PARSER_DEBUG 0 #define BEST_EFFORT_STATUS 0 void xml_log(int priority, const char *fmt, ...) G_GNUC_PRINTF(2, 3); static inline int __get_prefix(const char *prefix, xmlNode *xml, char *buffer, int offset); void xml_log(int priority, const char *fmt, ...) 
{ va_list ap; va_start(ap, fmt); qb_log_from_external_source_va(__FUNCTION__, __FILE__, fmt, priority, __LINE__, 0, ap); va_end(ap); } typedef struct { xmlRelaxNGPtr rng; xmlRelaxNGValidCtxtPtr valid; xmlRelaxNGParserCtxtPtr parser; } relaxng_ctx_cache_t; struct schema_s { int type; float version; char *name; char *location; char *transform; int after_transform; void *cache; }; typedef struct { int found; const char *string; } filter_t; enum xml_private_flags { xpf_none = 0x0000, xpf_dirty = 0x0001, xpf_deleted = 0x0002, xpf_created = 0x0004, xpf_modified = 0x0008, xpf_tracking = 0x0010, xpf_processed = 0x0020, xpf_skip = 0x0040, xpf_moved = 0x0080, xpf_acl_enabled = 0x0100, xpf_acl_read = 0x0200, xpf_acl_write = 0x0400, xpf_acl_deny = 0x0800, xpf_acl_create = 0x1000, xpf_acl_denied = 0x2000, }; typedef struct xml_private_s { long check; uint32_t flags; char *user; GListPtr acls; GListPtr deleted_paths; } xml_private_t; typedef struct xml_acl_s { enum xml_private_flags mode; char *xpath; } xml_acl_t; /* *INDENT-OFF* */ static filter_t filter[] = { { 0, XML_ATTR_ORIGIN }, { 0, XML_CIB_ATTR_WRITTEN }, { 0, XML_ATTR_UPDATE_ORIG }, { 0, XML_ATTR_UPDATE_CLIENT }, { 0, XML_ATTR_UPDATE_USER }, }; /* *INDENT-ON* */ static struct schema_s *known_schemas = NULL; static int xml_schema_max = 0; static xmlNode *subtract_xml_comment(xmlNode * parent, xmlNode * left, xmlNode * right, gboolean * changed); static xmlNode *find_xml_comment(xmlNode * root, xmlNode * search_comment); static int add_xml_comment(xmlNode * parent, xmlNode * target, xmlNode * update); static bool __xml_acl_check(xmlNode *xml, const char *name, enum xml_private_flags mode); const char *__xml_acl_to_text(enum xml_private_flags flags); static int xml_latest_schema_index(void) { return xml_schema_max - 4; } static int xml_minimum_schema_index(void) { static int best = 0; if(best == 0) { int lpc = 0; float target = 0.0; best = xml_latest_schema_index(); target = floor(known_schemas[best].version); for(lpc = best; lpc > 0; lpc--) { if(known_schemas[lpc].version < target) { return best; } else { best = lpc; } } best = xml_latest_schema_index(); } return best; } const char * xml_latest_schema(void) { return get_schema_name(xml_latest_schema_index()); } #define CHUNK_SIZE 1024 static inline bool TRACKING_CHANGES(xmlNode *xml) { if(xml == NULL || xml->doc == NULL || xml->doc->_private == NULL) { return FALSE; } else if(is_set(((xml_private_t *)xml->doc->_private)->flags, xpf_tracking)) { return TRUE; } return FALSE; } #define buffer_print(buffer, max, offset, fmt, args...) 
do { \ int rc = (max); \ if(buffer) { \ rc = snprintf((buffer) + (offset), (max) - (offset), fmt, ##args); \ } \ if(buffer && rc < 0) { \ crm_perror(LOG_ERR, "snprintf failed at offset %d", offset); \ (buffer)[(offset)] = 0; \ } else if(rc >= ((max) - (offset))) { \ char *tmp = NULL; \ (max) = QB_MAX(CHUNK_SIZE, (max) * 2); \ tmp = realloc_safe((buffer), (max) + 1); \ CRM_ASSERT(tmp); \ (buffer) = tmp; \ } else { \ offset += rc; \ break; \ } \ } while(1); static void insert_prefix(int options, char **buffer, int *offset, int *max, int depth) { if (options & xml_log_option_formatted) { size_t spaces = 2 * depth; if ((*buffer) == NULL || spaces >= ((*max) - (*offset))) { (*max) = QB_MAX(CHUNK_SIZE, (*max) * 2); (*buffer) = realloc_safe((*buffer), (*max) + 1); } memset((*buffer) + (*offset), ' ', spaces); (*offset) += spaces; } } static const char * get_schema_root(void) { static const char *base = NULL; if (base == NULL) { base = getenv("PCMK_schema_directory"); } if (base == NULL || strlen(base) == 0) { base = CRM_DTD_DIRECTORY; } return base; } static char * get_schema_path(const char *name, const char *file) { const char *base = get_schema_root(); if(file) { return g_strdup_printf("%s/%s", base, file); } return g_strdup_printf("%s/%s.rng", base, name); } static int schema_filter(const struct dirent * a) { int rc = 0; float version = 0; if(strstr(a->d_name, "pacemaker-") != a->d_name) { /* crm_trace("%s - wrong prefix", a->d_name); */ } else if(strstr(a->d_name, ".rng") == NULL) { /* crm_trace("%s - wrong suffix", a->d_name); */ } else if(sscanf(a->d_name, "pacemaker-%f.rng", &version) == 0) { /* crm_trace("%s - wrong format", a->d_name); */ } else if(strcmp(a->d_name, "pacemaker-1.1.rng") == 0) { /* crm_trace("%s - hack", a->d_name); */ } else { /* crm_debug("%s - candidate", a->d_name); */ rc = 1; } return rc; } static int schema_sort(const struct dirent ** a, const struct dirent **b) { int rc = 0; float a_version = 0.0; float b_version = 0.0; sscanf(a[0]->d_name, "pacemaker-%f.rng", &a_version); sscanf(b[0]->d_name, "pacemaker-%f.rng", &b_version); if(a_version > b_version) { rc = 1; } else if(a_version < b_version) { rc = -1; } /* crm_trace("%s (%f) vs. 
%s (%f) : %d", a[0]->d_name, a_version, b[0]->d_name, b_version, rc); */ return rc; } static void __xml_schema_add( int type, float version, const char *name, const char *location, const char *transform, int after_transform) { int last = xml_schema_max; xml_schema_max++; known_schemas = realloc_safe(known_schemas, xml_schema_max*sizeof(struct schema_s)); CRM_ASSERT(known_schemas != NULL); memset(known_schemas+last, 0, sizeof(struct schema_s)); known_schemas[last].type = type; known_schemas[last].after_transform = after_transform; if(version > 0.0) { known_schemas[last].version = version; known_schemas[last].name = g_strdup_printf("pacemaker-%.1f", version); known_schemas[last].location = g_strdup_printf("%s.rng", known_schemas[last].name); } else { char dummy[1024]; CRM_ASSERT(name); CRM_ASSERT(location); sscanf(name, "%[^-]-%f", dummy, &version); known_schemas[last].version = version; known_schemas[last].name = strdup(name); known_schemas[last].location = strdup(location); } if(transform) { known_schemas[last].transform = strdup(transform); } if(after_transform == 0) { after_transform = xml_schema_max; } known_schemas[last].after_transform = after_transform; if(known_schemas[last].after_transform < 0) { crm_debug("Added supported schema %d: %s (%s)", last, known_schemas[last].name, known_schemas[last].location); } else if(known_schemas[last].transform) { crm_debug("Added supported schema %d: %s (%s upgrades to %d with %s)", last, known_schemas[last].name, known_schemas[last].location, known_schemas[last].after_transform, known_schemas[last].transform); } else { crm_debug("Added supported schema %d: %s (%s upgrades to %d)", last, known_schemas[last].name, known_schemas[last].location, known_schemas[last].after_transform); } } static int __xml_build_schema_list(void) { int lpc, max; const char *base = get_schema_root(); struct dirent **namelist = NULL; max = scandir(base, &namelist, schema_filter, schema_sort); __xml_schema_add(1, 0.0, "pacemaker-0.6", "crm.dtd", "upgrade06.xsl", 3); __xml_schema_add(1, 0.0, "transitional-0.6", "crm-transitional.dtd", "upgrade06.xsl", 3); __xml_schema_add(2, 0.0, "pacemaker-0.7", "pacemaker-1.0.rng", NULL, 0); if (max < 0) { crm_notice("scandir(%s) failed: %s (%d)", base, strerror(errno), errno); } else { for (lpc = 0; lpc < max; lpc++) { int next = 0; float version = 0.0; char *transform = NULL; sscanf(namelist[lpc]->d_name, "pacemaker-%f.rng", &version); if((lpc + 1) < max) { float next_version = 0.0; sscanf(namelist[lpc+1]->d_name, "pacemaker-%f.rng", &next_version); if(floor(version) < floor(next_version)) { struct stat s; char *xslt = NULL; transform = g_strdup_printf("upgrade-%.1f.xsl", version); xslt = get_schema_path(NULL, transform); if(stat(xslt, &s) != 0) { crm_err("Transform %s not found", xslt); free(xslt); __xml_schema_add(2, version, NULL, NULL, NULL, -1); break; } else { free(xslt); } } } else { next = -1; } __xml_schema_add(2, version, NULL, NULL, transform, next); free(namelist[lpc]); free(transform); } } /* 1.1 was the old name for -next */ __xml_schema_add(2, 0.0, "pacemaker-1.1", "pacemaker-next.rng", NULL, 0); __xml_schema_add(2, 0.0, "pacemaker-next", "pacemaker-next.rng", NULL, -1); __xml_schema_add(0, 0.0, "none", "N/A", NULL, -1); free(namelist); return TRUE; } static void set_parent_flag(xmlNode *xml, long flag) { for(; xml; xml = xml->parent) { xml_private_t *p = xml->_private; if(p == NULL) { /* During calls to xmlDocCopyNode(), _private will be unset for parent nodes */ } else { p->flags |= flag; /* crm_trace("Setting flag %x 
due to %s[@id=%s]", flag, xml->name, ID(xml)); */ } } } static void set_doc_flag(xmlNode *xml, long flag) { if(xml && xml->doc && xml->doc->_private){ /* During calls to xmlDocCopyNode(), xml->doc may be unset */ xml_private_t *p = xml->doc->_private; p->flags |= flag; /* crm_trace("Setting flag %x due to %s[@id=%s]", flag, xml->name, ID(xml)); */ } } static void __xml_node_dirty(xmlNode *xml) { set_doc_flag(xml, xpf_dirty); set_parent_flag(xml, xpf_dirty); } static void __xml_node_clean(xmlNode *xml) { xmlNode *cIter = NULL; xml_private_t *p = xml->_private; if(p) { p->flags = 0; } for (cIter = __xml_first_child(xml); cIter != NULL; cIter = __xml_next(cIter)) { __xml_node_clean(cIter); } } static void crm_node_created(xmlNode *xml) { xmlNode *cIter = NULL; xml_private_t *p = xml->_private; if(p && TRACKING_CHANGES(xml)) { if(is_not_set(p->flags, xpf_created)) { p->flags |= xpf_created; __xml_node_dirty(xml); } for (cIter = __xml_first_child(xml); cIter != NULL; cIter = __xml_next(cIter)) { crm_node_created(cIter); } } } static void crm_attr_dirty(xmlAttr *a) { xmlNode *parent = a->parent; xml_private_t *p = NULL; p = a->_private; p->flags |= (xpf_dirty|xpf_modified); p->flags = (p->flags & ~xpf_deleted); /* crm_trace("Setting flag %x due to %s[@id=%s, @%s=%s]", */ /* xpf_dirty, parent?parent->name:NULL, ID(parent), a->name, a->children->content); */ __xml_node_dirty(parent); } int get_tag_name(const char *input, size_t offset, size_t max); int get_attr_name(const char *input, size_t offset, size_t max); int get_attr_value(const char *input, size_t offset, size_t max); gboolean can_prune_leaf(xmlNode * xml_node); void diff_filter_context(int context, int upper_bound, int lower_bound, xmlNode * xml_node, xmlNode * parent); int in_upper_context(int depth, int context, xmlNode * xml_node); int add_xml_object(xmlNode * parent, xmlNode * target, xmlNode * update, gboolean as_diff); static inline const char * crm_attr_value(xmlAttr * attr) { if (attr == NULL || attr->children == NULL) { return NULL; } return (const char *)attr->children->content; } static inline xmlAttr * crm_first_attr(xmlNode * xml) { if (xml == NULL) { return NULL; } return xml->properties; } #define XML_PRIVATE_MAGIC (long) 0x81726354 static void __xml_acl_free(void *data) { if(data) { xml_acl_t *acl = data; free(acl->xpath); free(acl); } } static void __xml_private_clean(xml_private_t *p) { if(p) { CRM_ASSERT(p->check == XML_PRIVATE_MAGIC); free(p->user); p->user = NULL; if(p->acls) { g_list_free_full(p->acls, __xml_acl_free); p->acls = NULL; } if(p->deleted_paths) { g_list_free_full(p->deleted_paths, free); p->deleted_paths = NULL; } } } static void __xml_private_free(xml_private_t *p) { __xml_private_clean(p); free(p); } static void pcmkDeregisterNode(xmlNodePtr node) { __xml_private_free(node->_private); } static void pcmkRegisterNode(xmlNodePtr node) { xml_private_t *p = NULL; switch(node->type) { case XML_ELEMENT_NODE: case XML_DOCUMENT_NODE: case XML_ATTRIBUTE_NODE: case XML_COMMENT_NODE: p = calloc(1, sizeof(xml_private_t)); p->check = XML_PRIVATE_MAGIC; /* Flags will be reset if necessary when tracking is enabled */ p->flags |= (xpf_dirty|xpf_created); node->_private = p; break; case XML_TEXT_NODE: case XML_DTD_NODE: break; default: /* Ignore */ crm_trace("Ignoring %p %d", node, node->type); CRM_LOG_ASSERT(node->type == XML_ELEMENT_NODE); break; } if(p && TRACKING_CHANGES(node)) { /* XML_ELEMENT_NODE doesn't get picked up here, node->doc is * not hooked up at the point we are called */ set_doc_flag(node, xpf_dirty); 
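pcmkRegisterNode() and pcmkDeregisterNode() are installed as libxml2's global node allocation hooks; that is how every element Pacemaker creates picks up the xml_private_t used for the dirty/created/ACL flags above. The standalone sketch below (plain libxml2, not Pacemaker code; the demo_* names are invented for illustration) shows the same hook mechanism in isolation:

----
/* Attach a private bookkeeping struct to every element libxml2 allocates,
 * and free it again when the node is destroyed -- the pattern behind
 * pcmkRegisterNode()/pcmkDeregisterNode(). */
#include <stdio.h>
#include <stdlib.h>
#include <libxml/tree.h>
#include <libxml/globals.h>

typedef struct { unsigned int flags; } demo_private_t;

static void demo_register(xmlNodePtr node)
{
    if (node->type == XML_ELEMENT_NODE) {
        node->_private = calloc(1, sizeof(demo_private_t));
    }
}

static void demo_deregister(xmlNodePtr node)
{
    free(node->_private);
    node->_private = NULL;
}

int main(void)
{
    xmlRegisterNodeDefault(demo_register);
    xmlDeregisterNodeDefault(demo_deregister);

    xmlDocPtr doc = xmlNewDoc(BAD_CAST "1.0");
    xmlNodePtr root = xmlNewDocNode(doc, NULL, BAD_CAST "cib", NULL);
    xmlDocSetRootElement(doc, root);

    printf("private data attached: %s\n", root->_private ? "yes" : "no");

    xmlFreeDoc(doc);   /* demo_deregister() runs for each freed node */
    return 0;
}
----

As in pcmkRegisterNode(), only node types that actually carry tracking state are given a private block; everything else is left untouched.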
__xml_node_dirty(node); } } static xml_acl_t * __xml_acl_create(xmlNode * xml, xmlNode *target, enum xml_private_flags mode) { xml_acl_t *acl = NULL; xml_private_t *p = NULL; const char *tag = crm_element_value(xml, XML_ACL_ATTR_TAG); const char *ref = crm_element_value(xml, XML_ACL_ATTR_REF); const char *xpath = crm_element_value(xml, XML_ACL_ATTR_XPATH); if(tag == NULL) { /* Compatability handling for pacemaker < 1.1.12 */ tag = crm_element_value(xml, XML_ACL_ATTR_TAGv1); } if(ref == NULL) { /* Compatability handling for pacemaker < 1.1.12 */ ref = crm_element_value(xml, XML_ACL_ATTR_REFv1); } if(target == NULL || target->doc == NULL || target->doc->_private == NULL){ CRM_ASSERT(target); CRM_ASSERT(target->doc); CRM_ASSERT(target->doc->_private); return NULL; } else if (tag == NULL && ref == NULL && xpath == NULL) { crm_trace("No criteria %p", xml); return NULL; } p = target->doc->_private; acl = calloc(1, sizeof(xml_acl_t)); if (acl) { const char *attr = crm_element_value(xml, XML_ACL_ATTR_ATTRIBUTE); acl->mode = mode; if(xpath) { acl->xpath = strdup(xpath); crm_trace("Using xpath: %s", acl->xpath); } else { int offset = 0; char buffer[XML_BUFFER_SIZE]; if(tag) { offset += snprintf(buffer + offset, XML_BUFFER_SIZE - offset, "//%s", tag); } else { offset += snprintf(buffer + offset, XML_BUFFER_SIZE - offset, "//*"); } if(ref || attr) { offset += snprintf(buffer + offset, XML_BUFFER_SIZE - offset, "["); } if(ref) { offset += snprintf(buffer + offset, XML_BUFFER_SIZE - offset, "@id='%s'", ref); } if(ref && attr) { offset += snprintf(buffer + offset, XML_BUFFER_SIZE - offset, " and "); } if(attr) { offset += snprintf(buffer + offset, XML_BUFFER_SIZE - offset, "@%s", attr); } if(ref || attr) { offset += snprintf(buffer + offset, XML_BUFFER_SIZE - offset, "]"); } CRM_LOG_ASSERT(offset > 0); acl->xpath = strdup(buffer); crm_trace("Built xpath: %s", acl->xpath); } p->acls = g_list_append(p->acls, acl); } return acl; } static gboolean __xml_acl_parse_entry(xmlNode * acl_top, xmlNode * acl_entry, xmlNode *target) { xmlNode *child = NULL; for (child = __xml_first_child(acl_entry); child; child = __xml_next(child)) { const char *tag = crm_element_name(child); const char *kind = crm_element_value(child, XML_ACL_ATTR_KIND); if (strcmp(XML_ACL_TAG_PERMISSION, tag) == 0){ tag = kind; } crm_trace("Processing %s %p", tag, child); if(tag == NULL) { CRM_ASSERT(tag != NULL); } else if (strcmp(XML_ACL_TAG_ROLE_REF, tag) == 0 || strcmp(XML_ACL_TAG_ROLE_REFv1, tag) == 0) { const char *ref_role = crm_element_value(child, XML_ATTR_ID); if (ref_role) { xmlNode *role = NULL; for (role = __xml_first_child(acl_top); role; role = __xml_next(role)) { if (strcmp(XML_ACL_TAG_ROLE, (const char *)role->name) == 0) { const char *role_id = crm_element_value(role, XML_ATTR_ID); if (role_id && strcmp(ref_role, role_id) == 0) { crm_debug("Unpacking referenced role: %s", role_id); __xml_acl_parse_entry(acl_top, role, target); break; } } } } } else if (strcmp(XML_ACL_TAG_READ, tag) == 0) { __xml_acl_create(child, target, xpf_acl_read); } else if (strcmp(XML_ACL_TAG_WRITE, tag) == 0) { __xml_acl_create(child, target, xpf_acl_write); } else if (strcmp(XML_ACL_TAG_DENY, tag) == 0) { __xml_acl_create(child, target, xpf_acl_deny); } else { crm_warn("Unknown ACL entry: %s/%s", tag, kind); } } return TRUE; } /* */ const char * __xml_acl_to_text(enum xml_private_flags flags) { if(is_set(flags, xpf_acl_deny)) { return "deny"; } if(is_set(flags, xpf_acl_write)) { return "read/write"; } if(is_set(flags, xpf_acl_read)) { return "read"; } 
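__xml_acl_create() reduces each ACL permission to one XPath expression assembled from the optional object tag, reference id, and attribute name. The fragment below is a simplified stand-in for that buffer construction (acl_xpath() is an invented name, not the real function), useful for seeing which nodes a given rule will match:

----
/* Mirror of the XPath assembly in __xml_acl_create(): tag -> element name,
 * ref -> id predicate, attr -> attribute predicate. */
#include <stdio.h>

static void acl_xpath(char *buf, size_t len,
                      const char *tag, const char *ref, const char *attr)
{
    int off = 0;

    off += snprintf(buf + off, len - off, "//%s", tag ? tag : "*");
    if (ref || attr) { off += snprintf(buf + off, len - off, "["); }
    if (ref)         { off += snprintf(buf + off, len - off, "@id='%s'", ref); }
    if (ref && attr) { off += snprintf(buf + off, len - off, " and "); }
    if (attr)        { off += snprintf(buf + off, len - off, "@%s", attr); }
    if (ref || attr) { off += snprintf(buf + off, len - off, "]"); }
}

int main(void)
{
    char buf[256];

    acl_xpath(buf, sizeof(buf), "primitive", "dummy", NULL);
    printf("%s\n", buf);    /* //primitive[@id='dummy'] */

    acl_xpath(buf, sizeof(buf), NULL, NULL, "password");
    printf("%s\n", buf);    /* //*[@password] */

    return 0;
}
----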
return "none"; } static void __xml_acl_apply(xmlNode *xml) { GListPtr aIter = NULL; xml_private_t *p = NULL; xmlXPathObjectPtr xpathObj = NULL; if(xml_acl_enabled(xml) == FALSE) { p = xml->doc->_private; crm_trace("Not applying ACLs for %s", p->user); return; } p = xml->doc->_private; for(aIter = p->acls; aIter != NULL; aIter = aIter->next) { int max = 0, lpc = 0; xml_acl_t *acl = aIter->data; xpathObj = xpath_search(xml, acl->xpath); max = numXpathResults(xpathObj); for(lpc = 0; lpc < max; lpc++) { xmlNode *match = getXpathResult(xpathObj, lpc); char *path = xml_get_path(match); p = match->_private; crm_trace("Applying %x to %s for %s", acl->mode, path, acl->xpath); #ifdef SUSE_ACL_COMPAT if(is_not_set(p->flags, acl->mode)) { if(is_set(p->flags, xpf_acl_read) || is_set(p->flags, xpf_acl_write) || is_set(p->flags, xpf_acl_deny)) { crm_config_warn("Configuration element %s is matched by multiple ACL rules, only the first applies ('%s' wins over '%s')", path, __xml_acl_to_text(p->flags), __xml_acl_to_text(acl->mode)); free(path); continue; } } #endif p->flags |= acl->mode; free(path); } crm_trace("Now enforcing ACL: %s (%d matches)", acl->xpath, max); freeXpathObject(xpathObj); } p = xml->_private; if(is_not_set(p->flags, xpf_acl_read) && is_not_set(p->flags, xpf_acl_write)) { p->flags |= xpf_acl_deny; p = xml->doc->_private; crm_info("Enforcing default ACL for %s to %s", p->user, crm_element_name(xml)); } } static void __xml_acl_unpack(xmlNode *source, xmlNode *target, const char *user) { #if ENABLE_ACL xml_private_t *p = NULL; if(target == NULL || target->doc == NULL || target->doc->_private == NULL) { return; } p = target->doc->_private; if(pcmk_acl_required(user) == FALSE) { crm_trace("no acls needed for '%s'", user); } else if(p->acls == NULL) { xmlNode *acls = get_xpath_object("//"XML_CIB_TAG_ACLS, source, LOG_TRACE); free(p->user); p->user = strdup(user); if(acls) { xmlNode *child = NULL; for (child = __xml_first_child(acls); child; child = __xml_next(child)) { const char *tag = crm_element_name(child); if (strcmp(tag, XML_ACL_TAG_USER) == 0 || strcmp(tag, XML_ACL_TAG_USERv1) == 0) { const char *id = crm_element_value(child, XML_ATTR_ID); if(id && strcmp(id, user) == 0) { crm_debug("Unpacking ACLs for %s", id); __xml_acl_parse_entry(acls, child, target); } } } } } #endif } static inline bool __xml_acl_mode_test(enum xml_private_flags allowed, enum xml_private_flags requested) { if(is_set(allowed, xpf_acl_deny)) { return FALSE; } else if(is_set(allowed, requested)) { return TRUE; } else if(is_set(requested, xpf_acl_read) && is_set(allowed, xpf_acl_write)) { return TRUE; } else if(is_set(requested, xpf_acl_create) && is_set(allowed, xpf_acl_write)) { return TRUE; } else if(is_set(requested, xpf_acl_create) && is_set(allowed, xpf_created)) { return TRUE; } return FALSE; } /* rc = TRUE if orig_cib has been filtered * That means '*result' rather than 'xml' should be exploited afterwards */ static bool __xml_purge_attributes(xmlNode *xml) { xmlNode *child = NULL; xmlAttr *xIter = NULL; bool readable_children = FALSE; xml_private_t *p = xml->_private; if(__xml_acl_mode_test(p->flags, xpf_acl_read)) { crm_trace("%s is readable", crm_element_name(xml), ID(xml)); return TRUE; } xIter = crm_first_attr(xml); while(xIter != NULL) { xmlAttr *tmp = xIter; const char *prop_name = (const char *)xIter->name; xIter = xIter->next; if (strcmp(prop_name, XML_ATTR_ID) == 0) { continue; } xmlUnsetProp(xml, tmp->name); } child = __xml_first_child(xml); while ( child != NULL ) { xmlNode *tmp = child; child = 
__xml_next(child); readable_children |= __xml_purge_attributes(tmp); } if(readable_children == FALSE) { free_xml(xml); /* Nothing readable under here, purge completely */ } return readable_children; } bool xml_acl_filtered_copy(const char *user, xmlNode* acl_source, xmlNode *xml, xmlNode ** result) { GListPtr aIter = NULL; xmlNode *target = NULL; xml_private_t *p = NULL; xml_private_t *doc = NULL; *result = NULL; if(xml == NULL || pcmk_acl_required(user) == FALSE) { crm_trace("no acls needed for '%s'", user); return FALSE; } crm_trace("filtering copy of %p for '%s'", xml, user); target = copy_xml(xml); if(target == NULL) { return TRUE; } __xml_acl_unpack(acl_source, target, user); set_doc_flag(target, xpf_acl_enabled); __xml_acl_apply(target); doc = target->doc->_private; for(aIter = doc->acls; aIter != NULL && target; aIter = aIter->next) { int max = 0; xml_acl_t *acl = aIter->data; if(acl->mode != xpf_acl_deny) { /* Nothing to do */ } else if(acl->xpath) { int lpc = 0; xmlXPathObjectPtr xpathObj = xpath_search(target, acl->xpath); max = numXpathResults(xpathObj); for(lpc = 0; lpc < max; lpc++) { xmlNode *match = getXpathResult(xpathObj, lpc); crm_trace("Purging attributes from %s", acl->xpath); if(__xml_purge_attributes(match) == FALSE && match == target) { crm_trace("No access to the entire document for %s", user); freeXpathObject(xpathObj); return TRUE; } } crm_trace("Enforced ACL %s (%d matches)", acl->xpath, max); freeXpathObject(xpathObj); } } p = target->_private; if(is_set(p->flags, xpf_acl_deny) && __xml_purge_attributes(target) == FALSE) { crm_trace("No access to the entire document for %s", user); return TRUE; } if(doc->acls) { g_list_free_full(doc->acls, __xml_acl_free); doc->acls = NULL; } else { crm_trace("Ordinary user '%s' cannot access the CIB without any defined ACLs", doc->user); free_xml(target); target = NULL; } if(target) { *result = target; } return TRUE; } static void __xml_acl_post_process(xmlNode * xml) { xmlNode *cIter = __xml_first_child(xml); xml_private_t *p = xml->_private; if(is_set(p->flags, xpf_created)) { xmlAttr *xIter = NULL; /* Always allow new scaffolding, ie. 
node with no attributes or only an 'id' */ for (xIter = crm_first_attr(xml); xIter != NULL; xIter = xIter->next) { const char *prop_name = (const char *)xIter->name; if (strcmp(prop_name, XML_ATTR_ID) == 0) { /* Delay the acl check */ continue; } else if(__xml_acl_check(xml, NULL, xpf_acl_write)) { crm_trace("Creation of %s=%s is allowed", crm_element_name(xml), ID(xml)); break; } else { char *path = xml_get_path(xml); crm_trace("Cannot add new node %s at %s", crm_element_name(xml), path); if(xml != xmlDocGetRootElement(xml->doc)) { xmlUnlinkNode(xml); xmlFreeNode(xml); } free(path); return; } } } while (cIter != NULL) { xmlNode *child = cIter; cIter = __xml_next(cIter); /* In case it is free'd */ __xml_acl_post_process(child); } } bool xml_acl_denied(xmlNode *xml) { if(xml && xml->doc && xml->doc->_private){ xml_private_t *p = xml->doc->_private; return is_set(p->flags, xpf_acl_denied); } return FALSE; } void xml_acl_disable(xmlNode *xml) { if(xml_acl_enabled(xml)) { xml_private_t *p = xml->doc->_private; /* Catch anything that was created but shouldn't have been */ __xml_acl_apply(xml); __xml_acl_post_process(xml); clear_bit(p->flags, xpf_acl_enabled); } } bool xml_acl_enabled(xmlNode *xml) { if(xml && xml->doc && xml->doc->_private){ xml_private_t *p = xml->doc->_private; return is_set(p->flags, xpf_acl_enabled); } return FALSE; } void xml_track_changes(xmlNode * xml, const char *user, xmlNode *acl_source, bool enforce_acls) { xml_accept_changes(xml); crm_trace("Tracking changes%s to %p", enforce_acls?" with ACLs":"", xml); set_doc_flag(xml, xpf_tracking); if(enforce_acls) { if(acl_source == NULL) { acl_source = xml; } set_doc_flag(xml, xpf_acl_enabled); __xml_acl_unpack(acl_source, xml, user); __xml_acl_apply(xml); } } bool xml_tracking_changes(xmlNode * xml) { if(xml == NULL) { return FALSE; } else if(is_set(((xml_private_t *)xml->doc->_private)->flags, xpf_tracking)) { return TRUE; } return FALSE; } bool xml_document_dirty(xmlNode *xml) { if(xml != NULL && xml->doc && xml->doc->_private) { xml_private_t *doc = xml->doc->_private; return is_set(doc->flags, xpf_dirty); } return FALSE; } /* */ static int __xml_offset(xmlNode *xml) { int position = 0; xmlNode *cIter = NULL; for(cIter = xml; cIter->prev; cIter = cIter->prev) { xml_private_t *p = ((xmlNode*)cIter->prev)->_private; if(is_not_set(p->flags, xpf_skip)) { position++; } } return position; } static int __xml_offset_no_deletions(xmlNode *xml) { int position = 0; xmlNode *cIter = NULL; for(cIter = xml; cIter->prev; cIter = cIter->prev) { xml_private_t *p = ((xmlNode*)cIter->prev)->_private; if(is_not_set(p->flags, xpf_deleted)) { position++; } } return position; } static void __xml_build_changes(xmlNode * xml, xmlNode *patchset) { xmlNode *cIter = NULL; xmlAttr *pIter = NULL; xmlNode *change = NULL; xml_private_t *p = xml->_private; if(patchset && is_set(p->flags, xpf_created)) { int offset = 0; char buffer[XML_BUFFER_SIZE]; if(__get_prefix(NULL, xml->parent, buffer, offset) > 0) { int position = __xml_offset_no_deletions(xml); change = create_xml_node(patchset, XML_DIFF_CHANGE); crm_xml_add(change, XML_DIFF_OP, "create"); crm_xml_add(change, XML_DIFF_PATH, buffer); crm_xml_add_int(change, XML_DIFF_POSITION, position); add_node_copy(change, xml); } return; } for (pIter = crm_first_attr(xml); pIter != NULL; pIter = pIter->next) { xmlNode *attr = NULL; p = pIter->_private; if(is_not_set(p->flags, xpf_deleted) && is_not_set(p->flags, xpf_dirty)) { continue; } if(change == NULL) { int offset = 0; char buffer[XML_BUFFER_SIZE]; 
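The tracking entry points introduced here (xml_track_changes(), xml_tracking_changes(), xml_document_dirty(), xml_accept_changes()) are what the patchset generation below builds on. A minimal usage sketch, assuming libcrmcommon's XML support has already been initialized (normally a side effect of crm_log_init()); demo_edit() is an invented name and error handling is omitted:

----
/* Put a CIB copy under change tracking, edit it, and either log and keep
 * the changes or leave the document untouched if nothing became dirty. */
#include <syslog.h>
#include <crm/crm.h>
#include <crm/common/xml.h>

static void demo_edit(xmlNode *cib)
{
    xmlNode *cfg = NULL;

    xml_track_changes(cib, NULL, NULL, FALSE);   /* start recording, no ACLs */

    cfg = first_named_child(cib, "configuration");
    if (cfg) {
        /* marks the attribute, cfg and all of its ancestors dirty */
        crm_xml_add(cfg, "description", "edited");
    }

    if (xml_document_dirty(cib)) {
        xml_log_changes(LOG_INFO, __FUNCTION__, cib);  /* "+ ..." / "-- ..." summary */
        xml_accept_changes(cib);                       /* keep content, clear flags */
    }
}
----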
if(__get_prefix(NULL, xml, buffer, offset) > 0) { change = create_xml_node(patchset, XML_DIFF_CHANGE); crm_xml_add(change, XML_DIFF_OP, "modify"); crm_xml_add(change, XML_DIFF_PATH, buffer); change = create_xml_node(change, XML_DIFF_LIST); } } attr = create_xml_node(change, XML_DIFF_ATTR); crm_xml_add(attr, XML_NVPAIR_ATTR_NAME, (const char *)pIter->name); if(p->flags & xpf_deleted) { crm_xml_add(attr, XML_DIFF_OP, "unset"); } else { const char *value = crm_element_value(xml, (const char *)pIter->name); crm_xml_add(attr, XML_DIFF_OP, "set"); crm_xml_add(attr, XML_NVPAIR_ATTR_VALUE, value); } } if(change) { xmlNode *result = NULL; change = create_xml_node(change->parent, XML_DIFF_RESULT); result = create_xml_node(change, (const char *)xml->name); for (pIter = crm_first_attr(xml); pIter != NULL; pIter = pIter->next) { const char *value = crm_element_value(xml, (const char *)pIter->name); p = pIter->_private; if (is_not_set(p->flags, xpf_deleted)) { crm_xml_add(result, (const char *)pIter->name, value); } } } for (cIter = __xml_first_child(xml); cIter != NULL; cIter = __xml_next(cIter)) { __xml_build_changes(cIter, patchset); } p = xml->_private; if(patchset && is_set(p->flags, xpf_moved)) { int offset = 0; char buffer[XML_BUFFER_SIZE]; crm_trace("%s.%s moved to position %d", xml->name, ID(xml), __xml_offset(xml)); if(__get_prefix(NULL, xml, buffer, offset) > 0) { change = create_xml_node(patchset, XML_DIFF_CHANGE); crm_xml_add(change, XML_DIFF_OP, "move"); crm_xml_add(change, XML_DIFF_PATH, buffer); crm_xml_add_int(change, XML_DIFF_POSITION, __xml_offset_no_deletions(xml)); } } } static void __xml_accept_changes(xmlNode * xml) { xmlNode *cIter = NULL; xmlAttr *pIter = NULL; xml_private_t *p = xml->_private; p->flags = xpf_none; pIter = crm_first_attr(xml); while (pIter != NULL) { const xmlChar *name = pIter->name; p = pIter->_private; pIter = pIter->next; if(p->flags & xpf_deleted) { xml_remove_prop(xml, (const char *)name); } else { p->flags = xpf_none; } } for (cIter = __xml_first_child(xml); cIter != NULL; cIter = __xml_next(cIter)) { __xml_accept_changes(cIter); } } static bool is_config_change(xmlNode *xml) { GListPtr gIter = NULL; xml_private_t *p = NULL; xmlNode *config = first_named_child(xml, XML_CIB_TAG_CONFIGURATION); if(config) { p = config->_private; } if(p && is_set(p->flags, xpf_dirty)) { return TRUE; } if(xml->doc && xml->doc->_private) { p = xml->doc->_private; for(gIter = p->deleted_paths; gIter; gIter = gIter->next) { char *path = gIter->data; if(strstr(path, "/"XML_TAG_CIB"/"XML_CIB_TAG_CONFIGURATION) != NULL) { return TRUE; } } } return FALSE; } static void xml_repair_v1_diff(xmlNode * last, xmlNode * next, xmlNode * local_diff, gboolean changed) { int lpc = 0; xmlNode *cib = NULL; xmlNode *diff_child = NULL; const char *tag = NULL; const char *vfields[] = { XML_ATTR_GENERATION_ADMIN, XML_ATTR_GENERATION, XML_ATTR_NUMUPDATES, }; if (local_diff == NULL) { crm_trace("Nothing to do"); return; } tag = "diff-removed"; diff_child = find_xml_node(local_diff, tag, FALSE); if (diff_child == NULL) { diff_child = create_xml_node(local_diff, tag); } tag = XML_TAG_CIB; cib = find_xml_node(diff_child, tag, FALSE); if (cib == NULL) { cib = create_xml_node(diff_child, tag); } for(lpc = 0; last && lpc < DIMOF(vfields); lpc++){ const char *value = crm_element_value(last, vfields[lpc]); crm_xml_add(diff_child, vfields[lpc], value); if(changed || lpc == 2) { crm_xml_add(cib, vfields[lpc], value); } } tag = "diff-added"; diff_child = find_xml_node(local_diff, tag, FALSE); if (diff_child == 
NULL) { diff_child = create_xml_node(local_diff, tag); } tag = XML_TAG_CIB; cib = find_xml_node(diff_child, tag, FALSE); if (cib == NULL) { cib = create_xml_node(diff_child, tag); } for(lpc = 0; next && lpc < DIMOF(vfields); lpc++){ const char *value = crm_element_value(next, vfields[lpc]); crm_xml_add(diff_child, vfields[lpc], value); } if (next) { xmlAttrPtr xIter = NULL; for (xIter = next->properties; xIter; xIter = xIter->next) { const char *p_name = (const char *)xIter->name; const char *p_value = crm_element_value(next, p_name); xmlSetProp(cib, (const xmlChar *)p_name, (const xmlChar *)p_value); } } crm_log_xml_explicit(local_diff, "Repaired-diff"); } static xmlNode * -xml_create_patchset_v1(xmlNode *source, xmlNode *target, bool config) +xml_create_patchset_v1(xmlNode *source, xmlNode *target, bool config, bool with_digest) { - xmlNode *patchset = diff_xml_object(source, target, TRUE); + xmlNode *patchset = diff_xml_object(source, target, !with_digest); if(patchset) { CRM_LOG_ASSERT(xml_document_dirty(target)); xml_repair_v1_diff(source, target, patchset, config); crm_xml_add(patchset, "format", "1"); } return patchset; } static xmlNode * xml_create_patchset_v2(xmlNode *source, xmlNode *target) { int lpc = 0; GListPtr gIter = NULL; xml_private_t *doc = NULL; xmlNode *v = NULL; xmlNode *version = NULL; xmlNode *patchset = NULL; const char *vfields[] = { XML_ATTR_GENERATION_ADMIN, XML_ATTR_GENERATION, XML_ATTR_NUMUPDATES, }; CRM_ASSERT(target); if(xml_document_dirty(target) == FALSE) { return NULL; } CRM_ASSERT(target->doc); doc = target->doc->_private; patchset = create_xml_node(NULL, XML_TAG_DIFF); crm_xml_add_int(patchset, "format", 2); version = create_xml_node(patchset, XML_DIFF_VERSION); v = create_xml_node(version, XML_DIFF_VSOURCE); for(lpc = 0; lpc < DIMOF(vfields); lpc++){ const char *value = crm_element_value(source, vfields[lpc]); if(value == NULL) { value = "1"; } crm_xml_add(v, vfields[lpc], value); } v = create_xml_node(version, XML_DIFF_VTARGET); for(lpc = 0; lpc < DIMOF(vfields); lpc++){ const char *value = crm_element_value(target, vfields[lpc]); if(value == NULL) { value = "1"; } crm_xml_add(v, vfields[lpc], value); } for(gIter = doc->deleted_paths; gIter; gIter = gIter->next) { xmlNode *change = create_xml_node(patchset, XML_DIFF_CHANGE); crm_xml_add(change, XML_DIFF_OP, "delete"); crm_xml_add(change, XML_DIFF_PATH, gIter->data); } __xml_build_changes(target, patchset); return patchset; } static gboolean patch_legacy_mode(void) { static gboolean init = TRUE; static gboolean legacy = FALSE; if(init) { init = FALSE; legacy = daemon_option_enabled("cib", "legacy"); if(legacy) { crm_notice("Enabled legacy mode"); } } return legacy; } xmlNode * xml_create_patchset(int format, xmlNode *source, xmlNode *target, bool *config_changed, bool manage_version, bool with_digest) { int counter = 0; bool config = FALSE; xmlNode *patch = NULL; const char *version = crm_element_value(source, XML_ATTR_CRM_VERSION); xml_acl_disable(target); if(xml_document_dirty(target) == FALSE) { crm_trace("No change %d", format); return NULL; /* No change */ } config = is_config_change(target); if(config_changed) { *config_changed = config; } if(manage_version && config) { crm_trace("Config changed %d", format); crm_xml_add(target, XML_ATTR_NUMUPDATES, "0"); crm_element_value_int(target, XML_ATTR_GENERATION, &counter); crm_xml_add_int(target, XML_ATTR_GENERATION, counter+1); } else if(manage_version) { crm_trace("Status changed %d", format); crm_element_value_int(target, XML_ATTR_NUMUPDATES, 
&counter); crm_xml_add_int(target, XML_ATTR_NUMUPDATES, counter+1); } if(format == 0) { if(patch_legacy_mode()) { format = 1; } else if(compare_version("3.0.8", version) < 0) { format = 2; } else { format = 1; } crm_trace("Using patch format %d for version: %s", format, version); } switch(format) { case 1: - patch = xml_create_patchset_v1(source, target, config); with_digest = TRUE; + patch = xml_create_patchset_v1(source, target, config, with_digest); break; case 2: patch = xml_create_patchset_v2(source, target); break; default: crm_err("Unknown patch format: %d", format); return NULL; } if(patch && with_digest) { char *digest = calculate_xml_versioned_digest(target, FALSE, TRUE, version); crm_xml_add(patch, XML_ATTR_DIGEST, digest); free(digest); } return patch; } static void __xml_log_element(int log_level, const char *file, const char *function, int line, const char *prefix, xmlNode * data, int depth, int options); void xml_log_patchset(uint8_t log_level, const char *function, xmlNode * patchset) { int format = 1; xmlNode *child = NULL; xmlNode *added = NULL; xmlNode *removed = NULL; gboolean is_first = TRUE; int add[] = { 0, 0, 0 }; int del[] = { 0, 0, 0 }; const char *fmt = NULL; const char *digest = NULL; int options = xml_log_option_formatted; static struct qb_log_callsite *patchset_cs = NULL; if (patchset_cs == NULL) { patchset_cs = qb_log_callsite_get(function, __FILE__, "xml-patchset", log_level, __LINE__, 0); } if (patchset == NULL) { crm_trace("Empty patch"); return; } else if (log_level == 0) { /* Log to stdout */ } else if (crm_is_callsite_active(patchset_cs, log_level, 0) == FALSE) { return; } xml_patch_versions(patchset, add, del); fmt = crm_element_value(patchset, "format"); digest = crm_element_value(patchset, XML_ATTR_DIGEST); if (add[2] != del[2] || add[1] != del[1] || add[0] != del[0]) { do_crm_log_alias(log_level, __FILE__, function, __LINE__, "Diff: --- %d.%d.%d %s", del[0], del[1], del[2], fmt); do_crm_log_alias(log_level, __FILE__, function, __LINE__, "Diff: +++ %d.%d.%d %s", add[0], add[1], add[2], digest); } else if (patchset != NULL && (add[0] || add[1] || add[2])) { do_crm_log_alias(log_level, __FILE__, function, __LINE__, "%s: Local-only Change: %d.%d.%d", function ? 
function : "", add[0], add[1], add[2]); } crm_element_value_int(patchset, "format", &format); if(format == 2) { xmlNode *change = NULL; for (change = __xml_first_child(patchset); change != NULL; change = __xml_next(change)) { const char *op = crm_element_value(change, XML_DIFF_OP); const char *xpath = crm_element_value(change, XML_DIFF_PATH); if(op == NULL) { } else if(strcmp(op, "create") == 0) { int lpc = 0, max = 0; char *prefix = g_strdup_printf("++ %s: ", xpath); max = strlen(prefix); __xml_log_element(log_level, __FILE__, function, __LINE__, prefix, change->children, 0, xml_log_option_formatted|xml_log_option_open); for(lpc = 2; lpc < max; lpc++) { prefix[lpc] = ' '; } __xml_log_element(log_level, __FILE__, function, __LINE__, prefix, change->children, 0, xml_log_option_formatted|xml_log_option_close|xml_log_option_children); free(prefix); } else if(strcmp(op, "move") == 0) { do_crm_log_alias(log_level, __FILE__, function, __LINE__, "+~ %s moved to offset %s", xpath, crm_element_value(change, XML_DIFF_POSITION)); } else if(strcmp(op, "modify") == 0) { xmlNode *clist = first_named_child(change, XML_DIFF_LIST); char buffer_set[XML_BUFFER_SIZE]; char buffer_unset[XML_BUFFER_SIZE]; int o_set = 0; int o_unset = 0; buffer_set[0] = 0; buffer_unset[0] = 0; for (child = __xml_first_child(clist); child != NULL; child = __xml_next(child)) { const char *name = crm_element_value(child, "name"); op = crm_element_value(child, XML_DIFF_OP); if(op == NULL) { } else if(strcmp(op, "set") == 0) { const char *value = crm_element_value(child, "value"); if(o_set > 0) { o_set += snprintf(buffer_set + o_set, XML_BUFFER_SIZE - o_set, ", "); } o_set += snprintf(buffer_set + o_set, XML_BUFFER_SIZE - o_set, "@%s=%s", name, value); } else if(strcmp(op, "unset") == 0) { if(o_unset > 0) { o_unset += snprintf(buffer_unset + o_unset, XML_BUFFER_SIZE - o_unset, ", "); } o_unset += snprintf(buffer_unset + o_unset, XML_BUFFER_SIZE - o_unset, "@%s", name); } } if(o_set) { do_crm_log_alias(log_level, __FILE__, function, __LINE__, "+ %s: %s", xpath, buffer_set); } if(o_unset) { do_crm_log_alias(log_level, __FILE__, function, __LINE__, "-- %s: %s", xpath, buffer_unset); } } else if(strcmp(op, "delete") == 0) { do_crm_log_alias(log_level, __FILE__, function, __LINE__, "-- %s", xpath); } } return; } if (log_level < LOG_DEBUG || function == NULL) { options |= xml_log_option_diff_short; } removed = find_xml_node(patchset, "diff-removed", FALSE); for (child = __xml_first_child(removed); child != NULL; child = __xml_next(child)) { log_data_element(log_level, __FILE__, function, __LINE__, "- ", child, 0, options | xml_log_option_diff_minus); if (is_first) { is_first = FALSE; } else { do_crm_log_alias(log_level, __FILE__, function, __LINE__, " --- "); } } is_first = TRUE; added = find_xml_node(patchset, "diff-added", FALSE); for (child = __xml_first_child(added); child != NULL; child = __xml_next(child)) { log_data_element(log_level, __FILE__, function, __LINE__, "+ ", child, 0, options | xml_log_option_diff_plus); if (is_first) { is_first = FALSE; } else { do_crm_log_alias(log_level, __FILE__, function, __LINE__, " +++ "); } } } void xml_log_changes(uint8_t log_level, const char *function, xmlNode * xml) { GListPtr gIter = NULL; xml_private_t *doc = NULL; CRM_ASSERT(xml); CRM_ASSERT(xml->doc); doc = xml->doc->_private; if(is_not_set(doc->flags, xpf_dirty)) { return; } for(gIter = doc->deleted_paths; gIter; gIter = gIter->next) { do_crm_log_alias(log_level, __FILE__, function, __LINE__, "-- %s", (char*)gIter->data); } 
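Once a tracked document has been modified, xml_create_patchset() turns the accumulated flags into a patchset and xml_log_patchset() (or a plain dump) makes it readable. A sketch of the producer side, under the same initialized-libcrmcommon assumption; demo_make_patch() is an invented name and dump_xml_formatted() is the usual libcrmcommon pretty-printer:

----
/* Build a format-2 patchset describing how 'target' (tracked and already
 * modified) differs from 'source' (an untouched copy taken beforehand). */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <crm/crm.h>
#include <crm/common/xml.h>

static xmlNode *demo_make_patch(xmlNode *source, xmlNode *target)
{
    bool config_changed = false;
    xmlNode *patch = xml_create_patchset(2 /* force format 2 */, source, target,
                                         &config_changed,
                                         true  /* bump epoch/num_updates */,
                                         false /* skip the digest */);

    if (patch) {
        char *text = dump_xml_formatted(patch);
        printf("config changed: %s\n%s", config_changed ? "yes" : "no", text);
        free(text);
    }
    return patch;
}
----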
log_data_element(log_level, __FILE__, function, __LINE__, "+ ", xml, 0, xml_log_option_formatted|xml_log_option_dirty_add); } void xml_accept_changes(xmlNode * xml) { xmlNode *top = NULL; xml_private_t *doc = NULL; if(xml == NULL) { return; } crm_trace("Accepting changes to %p", xml); doc = xml->doc->_private; top = xmlDocGetRootElement(xml->doc); __xml_private_clean(xml->doc->_private); if(is_not_set(doc->flags, xpf_dirty)) { doc->flags = xpf_none; return; } doc->flags = xpf_none; __xml_accept_changes(top); } /* Simplified version for applying v1-style XML patches */ static void __subtract_xml_object(xmlNode * target, xmlNode * patch) { xmlNode *patch_child = NULL; xmlNode *cIter = NULL; xmlAttrPtr xIter = NULL; char *id = NULL; const char *name = NULL; const char *value = NULL; if (target == NULL || patch == NULL) { return; } if (target->type == XML_COMMENT_NODE) { gboolean dummy; subtract_xml_comment(target->parent, target, patch, &dummy); } name = crm_element_name(target); CRM_CHECK(name != NULL, return); CRM_CHECK(safe_str_eq(crm_element_name(target), crm_element_name(patch)), return); CRM_CHECK(safe_str_eq(ID(target), ID(patch)), return); /* check for XML_DIFF_MARKER in a child */ id = crm_element_value_copy(target, XML_ATTR_ID); value = crm_element_value(patch, XML_DIFF_MARKER); if (value != NULL && strcmp(value, "removed:top") == 0) { crm_trace("We are the root of the deletion: %s.id=%s", name, id); free_xml(target); free(id); return; } for (xIter = crm_first_attr(patch); xIter != NULL; xIter = xIter->next) { const char *p_name = (const char *)xIter->name; - xml_remove_prop(target, p_name); + /* Removing and then restoring the id field would change the ordering of properties */ + if (safe_str_neq(p_name, XML_ATTR_ID)) { + xml_remove_prop(target, p_name); + } } - /* Restore the id field, it is never allowed to change */ - crm_xml_add(target, XML_ATTR_ID, id); /* changes to child objects */ cIter = __xml_first_child(target); while (cIter) { xmlNode *target_child = cIter; cIter = __xml_next(cIter); if (target_child->type == XML_COMMENT_NODE) { patch_child = find_xml_comment(patch, target_child); } else { patch_child = find_entity(patch, crm_element_name(target_child), ID(target_child)); } __subtract_xml_object(target_child, patch_child); } free(id); } static void __add_xml_object(xmlNode * parent, xmlNode * target, xmlNode * patch) { xmlNode *patch_child = NULL; xmlNode *target_child = NULL; xmlAttrPtr xIter = NULL; const char *id = NULL; const char *name = NULL; const char *value = NULL; if (patch == NULL) { return; } else if (parent == NULL && target == NULL) { return; } /* check for XML_DIFF_MARKER in a child */ value = crm_element_value(patch, XML_DIFF_MARKER); if (target == NULL && value != NULL && strcmp(value, "added:top") == 0) { id = ID(patch); name = crm_element_name(patch); crm_trace("We are the root of the addition: %s.id=%s", name, id); add_node_copy(parent, patch); return; } else if(target == NULL) { id = ID(patch); name = crm_element_name(patch); crm_err("Could not locate: %s.id=%s", name, id); return; } if (target->type == XML_COMMENT_NODE) { add_xml_comment(parent, target, patch); } name = crm_element_name(target); CRM_CHECK(name != NULL, return); CRM_CHECK(safe_str_eq(crm_element_name(target), crm_element_name(patch)), return); CRM_CHECK(safe_str_eq(ID(target), ID(patch)), return); for (xIter = crm_first_attr(patch); xIter != NULL; xIter = xIter->next) { const char *p_name = (const char *)xIter->name; const char *p_value = crm_element_value(patch, p_name); 
xml_remove_prop(target, p_name); /* Preserve the patch order */ crm_xml_add(target, p_name, p_value); } /* changes to child objects */ for (patch_child = __xml_first_child(patch); patch_child != NULL; patch_child = __xml_next(patch_child)) { if (patch_child->type == XML_COMMENT_NODE) { target_child = find_xml_comment(target, patch_child); } else { target_child = find_entity(target, crm_element_name(patch_child), ID(patch_child)); } __add_xml_object(target, target_child, patch_child); } } bool xml_patch_versions(xmlNode *patchset, int add[3], int del[3]) { int lpc = 0; int format = 1; xmlNode *tmp = NULL; const char *vfields[] = { XML_ATTR_GENERATION_ADMIN, XML_ATTR_GENERATION, XML_ATTR_NUMUPDATES, }; crm_element_value_int(patchset, "format", &format); switch(format) { case 1: tmp = find_xml_node(patchset, "diff-removed", FALSE); tmp = find_xml_node(tmp, "cib", FALSE); if(tmp == NULL) { /* Revert to the diff-removed line */ tmp = find_xml_node(patchset, "diff-removed", FALSE); } break; case 2: tmp = find_xml_node(patchset, "version", FALSE); tmp = find_xml_node(tmp, "source", FALSE); break; default: crm_warn("Unknown patch format: %d", format); return -EINVAL; } if (tmp) { for(lpc = 0; lpc < DIMOF(vfields); lpc++) { crm_element_value_int(tmp, vfields[lpc], &(del[lpc])); crm_trace("Got %d for del[%s]", del[lpc], vfields[lpc]); } } switch(format) { case 1: tmp = find_xml_node(patchset, "diff-added", FALSE); tmp = find_xml_node(tmp, "cib", FALSE); if(tmp == NULL) { /* Revert to the diff-added line */ tmp = find_xml_node(patchset, "diff-added", FALSE); } break; case 2: tmp = find_xml_node(patchset, "version", FALSE); tmp = find_xml_node(tmp, "target", FALSE); break; default: crm_warn("Unknown patch format: %d", format); return -EINVAL; } if (tmp) { for(lpc = 0; lpc < DIMOF(vfields); lpc++) { crm_element_value_int(tmp, vfields[lpc], &(add[lpc])); crm_trace("Got %d for add[%s]", add[lpc], vfields[lpc]); } } return pcmk_ok; } static int xml_patch_version_check(xmlNode *xml, xmlNode *patchset, int format) { int lpc = 0; bool changed = FALSE; int this[] = { 0, 0, 0 }; int add[] = { 0, 0, 0 }; int del[] = { 0, 0, 0 }; const char *vfields[] = { XML_ATTR_GENERATION_ADMIN, XML_ATTR_GENERATION, XML_ATTR_NUMUPDATES, }; for(lpc = 0; lpc < DIMOF(vfields); lpc++) { crm_element_value_int(xml, vfields[lpc], &(this[lpc])); crm_trace("Got %d for this[%s]", this[lpc], vfields[lpc]); if (this[lpc] < 0) { this[lpc] = 0; } } /* Set some defaults in case nothing is present */ add[0] = this[0]; add[1] = this[1]; add[2] = this[2] + 1; for(lpc = 0; lpc < DIMOF(vfields); lpc++) { del[lpc] = this[lpc]; } xml_patch_versions(patchset, add, del); for(lpc = 0; lpc < DIMOF(vfields); lpc++) { if(this[lpc] < del[lpc]) { crm_debug("Current %s is too low (%d < %d)", vfields[lpc], this[lpc], del[lpc]); return -pcmk_err_diff_resync; } else if(this[lpc] > del[lpc]) { crm_info("Current %s is too high (%d > %d)", vfields[lpc], this[lpc], del[lpc]); return -pcmk_err_old_data; } } for(lpc = 0; lpc < DIMOF(vfields); lpc++) { if(add[lpc] > del[lpc]) { changed = TRUE; } } if(changed == FALSE) { crm_notice("Versions did not change in patch %d.%d.%d", add[0], add[1], add[2]); return -pcmk_err_old_data; } crm_debug("Can apply patch %d.%d.%d to %d.%d.%d", add[0], add[1], add[2], this[0], this[1], this[2]); return pcmk_ok; } static int xml_apply_patchset_v1(xmlNode *xml, xmlNode *patchset, bool check_version) { int rc = pcmk_ok; int root_nodes_seen = 0; char *version = crm_element_value_copy(xml, XML_ATTR_CRM_VERSION); xmlNode *child_diff = 
NULL; xmlNode *added = find_xml_node(patchset, "diff-added", FALSE); xmlNode *removed = find_xml_node(patchset, "diff-removed", FALSE); xmlNode *old = copy_xml(xml); crm_trace("Substraction Phase"); for (child_diff = __xml_first_child(removed); child_diff != NULL; child_diff = __xml_next(child_diff)) { CRM_CHECK(root_nodes_seen == 0, rc = FALSE); if (root_nodes_seen == 0) { __subtract_xml_object(xml, child_diff); } root_nodes_seen++; } if (root_nodes_seen > 1) { crm_err("(-) Diffs cannot contain more than one change set... saw %d", root_nodes_seen); rc = -ENOTUNIQ; } root_nodes_seen = 0; crm_trace("Addition Phase"); if (rc == pcmk_ok) { xmlNode *child_diff = NULL; for (child_diff = __xml_first_child(added); child_diff != NULL; child_diff = __xml_next(child_diff)) { CRM_CHECK(root_nodes_seen == 0, rc = FALSE); if (root_nodes_seen == 0) { __add_xml_object(NULL, xml, child_diff); } root_nodes_seen++; } } if (root_nodes_seen > 1) { crm_err("(+) Diffs cannot contain more than one change set... saw %d", root_nodes_seen); rc = -ENOTUNIQ; } purge_diff_markers(xml); /* Purge prior to checking the digest */ free_xml(old); free(version); return rc; } static xmlNode * __first_xml_child_match(xmlNode *parent, const char *name, const char *id) { xmlNode *cIter = NULL; for (cIter = __xml_first_child(parent); cIter != NULL; cIter = __xml_next(cIter)) { if(strcmp((const char *)cIter->name, name) != 0) { continue; } else if(id) { const char *cid = ID(cIter); if(cid == NULL || strcmp(cid, id) != 0) { continue; } } return cIter; } return NULL; } static xmlNode * __xml_find_path(xmlNode *top, const char *key) { xmlNode *target = (xmlNode*)top->doc; char *id = malloc(XML_BUFFER_SIZE); char *tag = malloc(XML_BUFFER_SIZE); char *section = malloc(XML_BUFFER_SIZE); char *current = strdup(key); char *remainder = malloc(XML_BUFFER_SIZE); int rc = 0; while(current) { rc = sscanf (current, "/%[^/]%s", section, remainder); if(rc <= 0) { crm_trace("Done"); break; } else if(rc > 2) { crm_trace("Aborting on %s", current); target = NULL; break; } else if(tag && section) { int f = sscanf (section, "%[^[][@id='%[^']", tag, id); switch(f) { case 1: target = __first_xml_child_match(target, tag, NULL); break; case 2: target = __first_xml_child_match(target, tag, id); break; default: crm_trace("Aborting on %s", section); target = NULL; break; } if(rc == 1 || target == NULL) { crm_trace("Done"); break; } else { char *tmp = current; current = remainder; remainder = tmp; } } } if(target) { char *path = (char *)xmlGetNodePath(target); crm_trace("Found %s for %s", path, key); free(path); } else { crm_debug("No match for %s", key); } free(remainder); free(current); free(section); free(tag); free(id); return target; } static int xml_apply_patchset_v2(xmlNode *xml, xmlNode *patchset, bool check_version) { int rc = pcmk_ok; xmlNode *change = NULL; for (change = __xml_first_child(patchset); change != NULL; change = __xml_next(change)) { xmlNode *match = NULL; const char *op = crm_element_value(change, XML_DIFF_OP); const char *xpath = crm_element_value(change, XML_DIFF_PATH); crm_trace("Processing %s %s", change->name, op); if(op == NULL) { continue; } #if 0 match = get_xpath_object(xpath, xml, LOG_TRACE); #else match = __xml_find_path(xml, xpath); #endif crm_trace("Performing %s on %s with %p", op, xpath, match); if(match == NULL && strcmp(op, "delete") == 0) { crm_debug("No %s match for %s in %p", op, xpath, xml->doc); continue; } else if(match == NULL) { crm_err("No %s match for %s in %p", op, xpath, xml->doc); rc = 
-pcmk_err_diff_failed; continue; } else if(strcmp(op, "create") == 0) { int position = 0; xmlNode *child = NULL; xmlNode *match_child = NULL; match_child = match->children; crm_element_value_int(change, XML_DIFF_POSITION, &position); while(match_child && position != __xml_offset(match_child)) { match_child = match_child->next; } child = xmlDocCopyNode(change->children, match->doc, 1); if(match_child) { crm_trace("Adding %s at position %d", child->name, position); xmlAddPrevSibling(match_child, child); } else if(match->last) { /* Add to the end */ crm_trace("Adding %s at position %d (end)", child->name, position); xmlAddNextSibling(match->last, child); } else { crm_trace("Adding %s at position %d (first)", child->name, position); CRM_LOG_ASSERT(position == 0); xmlAddChild(match, child); } crm_node_created(child); } else if(strcmp(op, "move") == 0) { int position = 0; crm_element_value_int(change, XML_DIFF_POSITION, &position); if(position != __xml_offset(match)) { xmlNode *match_child = NULL; int p = position; if(p > __xml_offset(match)) { p++; /* Skip ourselves */ } CRM_ASSERT(match->parent != NULL); match_child = match->parent->children; while(match_child && p != __xml_offset(match_child)) { match_child = match_child->next; } crm_trace("Moving %s to position %d (was %d, prev %p, %s %p)", match->name, position, __xml_offset(match), match->prev, match_child?"next":"last", match_child?match_child:match->parent->last); if(match_child) { xmlAddPrevSibling(match_child, match); } else { CRM_ASSERT(match->parent->last != NULL); xmlAddNextSibling(match->parent->last, match); } } else { crm_trace("%s is already in position %d", match->name, position); } if(position != __xml_offset(match)) { crm_err("Moved %s.%d to position %d instead of %d (%p)", match->name, ID(match), __xml_offset(match), position, match->prev); rc = -pcmk_err_diff_failed; } } else if(strcmp(op, "delete") == 0) { free_xml(match); } else if(strcmp(op, "modify") == 0) { xmlAttr *pIter = crm_first_attr(match); xmlNode *attrs = __xml_first_child(first_named_child(change, XML_DIFF_RESULT)); if(attrs == NULL) { rc = -ENOMSG; continue; } while(pIter != NULL) { const char *name = (const char *)pIter->name; pIter = pIter->next; xml_remove_prop(match, name); } for (pIter = crm_first_attr(attrs); pIter != NULL; pIter = pIter->next) { const char *name = (const char *)pIter->name; const char *value = crm_element_value(attrs, name); crm_xml_add(match, name, value); } } else { crm_err("Unknown operation: %s", op); } } return rc; } int xml_apply_patchset(xmlNode *xml, xmlNode *patchset, bool check_version) { int format = 1; int rc = pcmk_ok; xmlNode *old = NULL; const char *digest = crm_element_value(patchset, XML_ATTR_DIGEST); if(patchset == NULL) { return rc; } xml_log_patchset(LOG_TRACE, __FUNCTION__, patchset); crm_element_value_int(patchset, "format", &format); if(check_version) { rc = xml_patch_version_check(xml, patchset, format); if(rc != pcmk_ok) { return rc; } } if(digest) { /* Make it available for logging if the result doesn't have the expected digest */ old = copy_xml(xml); } if(rc == pcmk_ok) { switch(format) { case 1: rc = xml_apply_patchset_v1(xml, patchset, check_version); break; case 2: rc = xml_apply_patchset_v2(xml, patchset, check_version); break; default: crm_err("Unknown patch format: %d", format); rc = -EINVAL; } } if(rc == pcmk_ok && digest) { static struct qb_log_callsite *digest_cs = NULL; char *new_digest = NULL; char *version = crm_element_value_copy(xml, XML_ATTR_CRM_VERSION); if (digest_cs == NULL) { digest_cs = 
qb_log_callsite_get(__func__, __FILE__, "diff-digest", LOG_TRACE, __LINE__, crm_trace_nonlog); } new_digest = calculate_xml_versioned_digest(xml, FALSE, TRUE, version); if (safe_str_neq(new_digest, digest)) { crm_info("v%d digest mis-match: expected %s, calculated %s", format, digest, new_digest); rc = -pcmk_err_diff_failed; if (digest_cs && digest_cs->targets) { save_xml_to_file(old, "PatchDigest:input", NULL); save_xml_to_file(xml, "PatchDigest:result", NULL); save_xml_to_file(patchset,"PatchDigest:diff", NULL); } else { crm_trace("%p %0.6x", digest_cs, digest_cs ? digest_cs->targets : 0); } } else { crm_trace("v%d digest matched: expected %s, calculated %s", format, digest, new_digest); } free(new_digest); free(version); } free_xml(old); return rc; } xmlNode * find_xml_node(xmlNode * root, const char *search_path, gboolean must_find) { xmlNode *a_child = NULL; const char *name = "NULL"; if (root != NULL) { name = crm_element_name(root); } if (search_path == NULL) { crm_warn("Will never find "); return NULL; } for (a_child = __xml_first_child(root); a_child != NULL; a_child = __xml_next(a_child)) { if (strcmp((const char *)a_child->name, search_path) == 0) { /* crm_trace("returning node (%s).", crm_element_name(a_child)); */ return a_child; } } if (must_find) { crm_warn("Could not find %s in %s.", search_path, name); } else if (root != NULL) { crm_trace("Could not find %s in %s.", search_path, name); } else { crm_trace("Could not find %s in .", search_path); } return NULL; } xmlNode * find_entity(xmlNode * parent, const char *node_name, const char *id) { xmlNode *a_child = NULL; for (a_child = __xml_first_child(parent); a_child != NULL; a_child = __xml_next(a_child)) { /* Uncertain if node_name == NULL check is strictly necessary here */ if (node_name == NULL || strcmp((const char *)a_child->name, node_name) == 0) { const char *cid = ID(a_child); if (id == NULL || (cid != NULL && strcmp(id, cid) == 0)) { return a_child; } } } crm_trace("node <%s id=%s> not found in %s.", node_name, id, crm_element_name(parent)); return NULL; } void copy_in_properties(xmlNode * target, xmlNode * src) { if (src == NULL) { crm_warn("No node to copy properties from"); } else if (target == NULL) { crm_err("No node to copy properties into"); } else { xmlAttrPtr pIter = NULL; for (pIter = crm_first_attr(src); pIter != NULL; pIter = pIter->next) { const char *p_name = (const char *)pIter->name; const char *p_value = crm_attr_value(pIter); expand_plus_plus(target, p_name, p_value); } } return; } void fix_plus_plus_recursive(xmlNode * target) { /* TODO: Remove recursion and use xpath searches for value++ */ xmlNode *child = NULL; xmlAttrPtr pIter = NULL; for (pIter = crm_first_attr(target); pIter != NULL; pIter = pIter->next) { const char *p_name = (const char *)pIter->name; const char *p_value = crm_attr_value(pIter); expand_plus_plus(target, p_name, p_value); } for (child = __xml_first_child(target); child != NULL; child = __xml_next(child)) { fix_plus_plus_recursive(child); } } void expand_plus_plus(xmlNode * target, const char *name, const char *value) { int offset = 1; int name_len = 0; int int_value = 0; int value_len = 0; const char *old_value = NULL; if (value == NULL || name == NULL) { return; } old_value = crm_element_value(target, name); if (old_value == NULL) { /* if no previous value, set unexpanded */ goto set_unexpanded; } else if (strstr(value, name) != value) { goto set_unexpanded; } name_len = strlen(name); value_len = strlen(value); if (value_len < (name_len + 2) || value[name_len] != '+' || 
(value[name_len + 1] != '+' && value[name_len + 1] != '=')) { goto set_unexpanded; } /* if we are expanding ourselves, * then no previous value was set and leave int_value as 0 */ if (old_value != value) { int_value = char2score(old_value); } if (value[name_len + 1] != '+') { const char *offset_s = value + (name_len + 2); offset = char2score(offset_s); } int_value += offset; if (int_value > INFINITY) { int_value = (int)INFINITY; } crm_xml_add_int(target, name, int_value); return; set_unexpanded: if (old_value == value) { /* the old value is already set, nothing to do */ return; } crm_xml_add(target, name, value); return; } xmlDoc * getDocPtr(xmlNode * node) { xmlDoc *doc = NULL; CRM_CHECK(node != NULL, return NULL); doc = node->doc; if (doc == NULL) { doc = xmlNewDoc((const xmlChar *)"1.0"); xmlDocSetRootElement(doc, node); xmlSetTreeDoc(node, doc); } return doc; } xmlNode * add_node_copy(xmlNode * parent, xmlNode * src_node) { xmlNode *child = NULL; xmlDoc *doc = getDocPtr(parent); CRM_CHECK(src_node != NULL, return NULL); child = xmlDocCopyNode(src_node, doc, 1); xmlAddChild(parent, child); crm_node_created(child); return child; } int add_node_nocopy(xmlNode * parent, const char *name, xmlNode * child) { add_node_copy(parent, child); free_xml(child); return 1; } static bool __xml_acl_check(xmlNode *xml, const char *name, enum xml_private_flags mode) { CRM_ASSERT(xml); CRM_ASSERT(xml->doc); CRM_ASSERT(xml->doc->_private); #if ENABLE_ACL { if(TRACKING_CHANGES(xml) && xml_acl_enabled(xml)) { int offset = 0; xmlNode *parent = xml; char buffer[XML_BUFFER_SIZE]; xml_private_t *docp = xml->doc->_private; if(docp->acls == NULL) { crm_trace("Ordinary user %s cannot access the CIB without any defined ACLs", docp->user); set_doc_flag(xml, xpf_acl_denied); return FALSE; } offset = __get_prefix(NULL, xml, buffer, offset); if(name) { offset += snprintf(buffer + offset, XML_BUFFER_SIZE - offset, "[@%s]", name); } CRM_LOG_ASSERT(offset > 0); /* Walk the tree upwards looking for xml_acl_* flags * - Creating an attribute requires write permissions for the node * - Creating a child requires write permissions for the parent */ if(name) { xmlAttr *attr = xmlHasProp(xml, (const xmlChar *)name); if(attr && mode == xpf_acl_create) { mode = xpf_acl_write; } } while(parent && parent->_private) { xml_private_t *p = parent->_private; if(__xml_acl_mode_test(p->flags, mode)) { return TRUE; } else if(is_set(p->flags, xpf_acl_deny)) { crm_trace("%x access denied to %s: parent", mode, buffer); set_doc_flag(xml, xpf_acl_denied); return FALSE; } parent = parent->parent; } crm_trace("%x access denied to %s: default", mode, buffer); set_doc_flag(xml, xpf_acl_denied); return FALSE; } } #endif return TRUE; } const char * crm_xml_add(xmlNode * node, const char *name, const char *value) { bool dirty = FALSE; xmlAttr *attr = NULL; CRM_CHECK(node != NULL, return NULL); CRM_CHECK(name != NULL, return NULL); if (value == NULL) { return NULL; } #if XML_PARANOIA_CHECKS { const char *old_value = NULL; old_value = crm_element_value(node, name); /* Could be re-setting the same value */ CRM_CHECK(old_value != value, crm_err("Cannot reset %s with crm_xml_add(%s)", name, value); return value); } #endif if(TRACKING_CHANGES(node)) { const char *old = crm_element_value(node, name); if(old == NULL || value == NULL || strcmp(old, value) != 0) { dirty = TRUE; } } if(dirty && __xml_acl_check(node, name, xpf_acl_create) == FALSE) { crm_trace("Cannot add %s=%s to %s", name, value, node->name); return NULL; } attr = xmlSetProp(node, (const xmlChar 
*)name, (const xmlChar *)value); if(dirty) { crm_attr_dirty(attr); } CRM_CHECK(attr && attr->children && attr->children->content, return NULL); return (char *)attr->children->content; } const char * crm_xml_replace(xmlNode * node, const char *name, const char *value) { bool dirty = FALSE; xmlAttr *attr = NULL; const char *old_value = NULL; CRM_CHECK(node != NULL, return NULL); CRM_CHECK(name != NULL && name[0] != 0, return NULL); old_value = crm_element_value(node, name); /* Could be re-setting the same value */ CRM_CHECK(old_value != value, return value); if(__xml_acl_check(node, name, xpf_acl_write) == FALSE) { /* Create a fake object linked to doc->_private instead? */ crm_trace("Cannot replace %s=%s to %s", name, value, node->name); return NULL; } else if (old_value != NULL && value == NULL) { xml_remove_prop(node, name); return NULL; } else if (value == NULL) { return NULL; } if(TRACKING_CHANGES(node)) { if(old_value == NULL || value == NULL || strcmp(old_value, value) != 0) { dirty = TRUE; } } attr = xmlSetProp(node, (const xmlChar *)name, (const xmlChar *)value); if(dirty) { crm_attr_dirty(attr); } CRM_CHECK(attr && attr->children && attr->children->content, return NULL); return (char *)attr->children->content; } const char * crm_xml_add_int(xmlNode * node, const char *name, int value) { char *number = crm_itoa(value); const char *added = crm_xml_add(node, name, number); free(number); return added; } xmlNode * create_xml_node(xmlNode * parent, const char *name) { xmlDoc *doc = NULL; xmlNode *node = NULL; if (name == NULL || name[0] == 0) { CRM_CHECK(name != NULL && name[0] == 0, return NULL); return NULL; } if (parent == NULL) { doc = xmlNewDoc((const xmlChar *)"1.0"); node = xmlNewDocRawNode(doc, NULL, (const xmlChar *)name, NULL); xmlDocSetRootElement(doc, node); } else { doc = getDocPtr(parent); node = xmlNewDocRawNode(doc, NULL, (const xmlChar *)name, NULL); xmlAddChild(parent, node); } crm_node_created(node); return node; } static inline int __get_prefix(const char *prefix, xmlNode *xml, char *buffer, int offset) { const char *id = ID(xml); if(offset == 0 && prefix == NULL && xml->parent) { offset = __get_prefix(NULL, xml->parent, buffer, offset); } if(id) { offset += snprintf(buffer + offset, XML_BUFFER_SIZE - offset, "/%s[@id='%s']", (const char *)xml->name, id); } else if(xml->name) { offset += snprintf(buffer + offset, XML_BUFFER_SIZE - offset, "/%s", (const char *)xml->name); } return offset; } char * xml_get_path(xmlNode *xml) { int offset = 0; char buffer[XML_BUFFER_SIZE]; if(__get_prefix(NULL, xml, buffer, offset) > 0) { return strdup(buffer); } return NULL; } void free_xml(xmlNode * child) { if (child != NULL) { xmlNode *top = NULL; xmlDoc *doc = child->doc; xml_private_t *p = child->_private; if (doc != NULL) { top = xmlDocGetRootElement(doc); } if (doc != NULL && top == child) { /* Free everything */ xmlFreeDoc(doc); } else if(__xml_acl_check(child, NULL, xpf_acl_write) == FALSE) { int offset = 0; char buffer[XML_BUFFER_SIZE]; __get_prefix(NULL, child, buffer, offset); crm_trace("Cannot remove %s %x", buffer, p->flags); return; } else { if(doc && TRACKING_CHANGES(child) && is_not_set(p->flags, xpf_created)) { int offset = 0; char buffer[XML_BUFFER_SIZE]; if(__get_prefix(NULL, child, buffer, offset) > 0) { crm_trace("Deleting %s %p from %p", buffer, child, doc); p = doc->_private; p->deleted_paths = g_list_append(p->deleted_paths, strdup(buffer)); set_doc_flag(child, xpf_dirty); } } /* Free this particular subtree * Make sure to unlink it from the parent first */ 
xmlUnlinkNode(child); xmlFreeNode(child); } } } xmlNode * copy_xml(xmlNode * src) { xmlDoc *doc = xmlNewDoc((const xmlChar *)"1.0"); xmlNode *copy = xmlDocCopyNode(src, doc, 1); xmlDocSetRootElement(doc, copy); xmlSetTreeDoc(copy, doc); return copy; } static void crm_xml_err(void *ctx, const char *msg, ...) G_GNUC_PRINTF(2, 3); static void crm_xml_err(void *ctx, const char *msg, ...) { int len = 0; va_list args; char *buf = NULL; static int buffer_len = 0; static char *buffer = NULL; static struct qb_log_callsite *xml_error_cs = NULL; va_start(args, msg); len = vasprintf(&buf, msg, args); if(xml_error_cs == NULL) { xml_error_cs = qb_log_callsite_get( __func__, __FILE__, "xml library error", LOG_TRACE, __LINE__, crm_trace_nonlog); } if (strchr(buf, '\n')) { buf[len - 1] = 0; if (buffer) { crm_err("XML Error: %s%s", buffer, buf); free(buffer); } else { crm_err("XML Error: %s", buf); } if (xml_error_cs && xml_error_cs->targets) { crm_abort(__FILE__, __PRETTY_FUNCTION__, __LINE__, "xml library error", TRUE, TRUE); } buffer = NULL; buffer_len = 0; } else if (buffer == NULL) { buffer_len = len; buffer = buf; buf = NULL; } else { buffer = realloc_safe(buffer, 1 + buffer_len + len); memcpy(buffer + buffer_len, buf, len); buffer_len += len; buffer[buffer_len] = 0; } va_end(args); free(buf); } xmlNode * string2xml(const char *input) { xmlNode *xml = NULL; xmlDocPtr output = NULL; xmlParserCtxtPtr ctxt = NULL; xmlErrorPtr last_error = NULL; if (input == NULL) { crm_err("Can't parse NULL input"); return NULL; } /* create a parser context */ ctxt = xmlNewParserCtxt(); CRM_CHECK(ctxt != NULL, return NULL); /* xmlCtxtUseOptions(ctxt, XML_PARSE_NOBLANKS|XML_PARSE_RECOVER); */ xmlCtxtResetLastError(ctxt); xmlSetGenericErrorFunc(ctxt, crm_xml_err); /* initGenericErrorDefaultFunc(crm_xml_err); */ output = xmlCtxtReadDoc(ctxt, (const xmlChar *)input, NULL, NULL, XML_PARSE_NOBLANKS | XML_PARSE_RECOVER); if (output) { xml = xmlDocGetRootElement(output); } last_error = xmlCtxtGetLastError(ctxt); if (last_error && last_error->code != XML_ERR_OK) { /* crm_abort(__FILE__,__FUNCTION__,__LINE__, "last_error->code != XML_ERR_OK", TRUE, TRUE); */ /* * http://xmlsoft.org/html/libxml-xmlerror.html#xmlErrorLevel * http://xmlsoft.org/html/libxml-xmlerror.html#xmlParserErrors */ crm_warn("Parsing failed (domain=%d, level=%d, code=%d): %s", last_error->domain, last_error->level, last_error->code, last_error->message); if (last_error->code == XML_ERR_DOCUMENT_EMPTY) { CRM_LOG_ASSERT("Cannot parse an empty string"); } else if (last_error->code != XML_ERR_DOCUMENT_END) { crm_err("Couldn't%s parse %d chars: %s", xml ? 
" fully" : "", (int)strlen(input), input); if (xml != NULL) { crm_log_xml_err(xml, "Partial"); } } else { int len = strlen(input); int lpc = 0; while(lpc < len) { crm_warn("Parse error[+%.3d]: %.80s", lpc, input+lpc); lpc += 80; } CRM_LOG_ASSERT("String parsing error"); } } xmlFreeParserCtxt(ctxt); return xml; } xmlNode * stdin2xml(void) { size_t data_length = 0; size_t read_chars = 0; char *xml_buffer = NULL; xmlNode *xml_obj = NULL; do { size_t next = XML_BUFFER_SIZE + data_length + 1; if(next <= 0) { crm_err("Buffer size exceeded at: %l + %d", data_length, XML_BUFFER_SIZE); break; } xml_buffer = realloc_safe(xml_buffer, next); read_chars = fread(xml_buffer + data_length, 1, XML_BUFFER_SIZE, stdin); data_length += read_chars; } while (read_chars > 0); if (data_length == 0) { crm_warn("No XML supplied on stdin"); free(xml_buffer); return NULL; } xml_buffer[data_length] = '\0'; xml_obj = string2xml(xml_buffer); free(xml_buffer); crm_log_xml_trace(xml_obj, "Created fragment"); return xml_obj; } static char * decompress_file(const char *filename) { char *buffer = NULL; #if HAVE_BZLIB_H int rc = 0; size_t length = 0, read_len = 0; BZFILE *bz_file = NULL; FILE *input = fopen(filename, "r"); if (input == NULL) { crm_perror(LOG_ERR, "Could not open %s for reading", filename); return NULL; } bz_file = BZ2_bzReadOpen(&rc, input, 0, 0, NULL, 0); if (rc != BZ_OK) { BZ2_bzReadClose(&rc, bz_file); return NULL; } rc = BZ_OK; while (rc == BZ_OK) { buffer = realloc_safe(buffer, XML_BUFFER_SIZE + length + 1); read_len = BZ2_bzRead(&rc, bz_file, buffer + length, XML_BUFFER_SIZE); crm_trace("Read %ld bytes from file: %d", (long)read_len, rc); if (rc == BZ_OK || rc == BZ_STREAM_END) { length += read_len; } } buffer[length] = '\0'; if (rc != BZ_STREAM_END) { crm_err("Couldnt read compressed xml from file"); free(buffer); buffer = NULL; } BZ2_bzReadClose(&rc, bz_file); fclose(input); #else crm_err("Cannot read compressed files:" " bzlib was not available at compile time"); #endif return buffer; } void strip_text_nodes(xmlNode * xml) { xmlNode *iter = xml->children; while (iter) { xmlNode *next = iter->next; switch (iter->type) { case XML_TEXT_NODE: /* Remove it */ xmlUnlinkNode(iter); xmlFreeNode(iter); break; case XML_ELEMENT_NODE: /* Search it */ strip_text_nodes(iter); break; default: /* Leave it */ break; } iter = next; } } xmlNode * filename2xml(const char *filename) { xmlNode *xml = NULL; xmlDocPtr output = NULL; const char *match = NULL; xmlParserCtxtPtr ctxt = NULL; xmlErrorPtr last_error = NULL; static int xml_options = XML_PARSE_NOBLANKS | XML_PARSE_RECOVER; /* create a parser context */ ctxt = xmlNewParserCtxt(); CRM_CHECK(ctxt != NULL, return NULL); /* xmlCtxtUseOptions(ctxt, XML_PARSE_NOBLANKS|XML_PARSE_RECOVER); */ xmlCtxtResetLastError(ctxt); xmlSetGenericErrorFunc(ctxt, crm_xml_err); /* initGenericErrorDefaultFunc(crm_xml_err); */ if (filename) { match = strstr(filename, ".bz2"); } if (filename == NULL) { /* STDIN_FILENO == fileno(stdin) */ output = xmlCtxtReadFd(ctxt, STDIN_FILENO, "unknown.xml", NULL, xml_options); } else if (match == NULL || match[4] != 0) { output = xmlCtxtReadFile(ctxt, filename, NULL, xml_options); } else { char *input = decompress_file(filename); output = xmlCtxtReadDoc(ctxt, (const xmlChar *)input, NULL, NULL, xml_options); free(input); } if (output && (xml = xmlDocGetRootElement(output))) { strip_text_nodes(xml); } last_error = xmlCtxtGetLastError(ctxt); if (last_error && last_error->code != XML_ERR_OK) { /* crm_abort(__FILE__,__FUNCTION__,__LINE__, "last_error->code 
!= XML_ERR_OK", TRUE, TRUE); */ /* * http://xmlsoft.org/html/libxml-xmlerror.html#xmlErrorLevel * http://xmlsoft.org/html/libxml-xmlerror.html#xmlParserErrors */ crm_err("Parsing failed (domain=%d, level=%d, code=%d): %s", last_error->domain, last_error->level, last_error->code, last_error->message); if (last_error && last_error->code != XML_ERR_OK) { crm_err("Couldn't%s parse %s", xml ? " fully" : "", filename); if (xml != NULL) { crm_log_xml_err(xml, "Partial"); } } } xmlFreeParserCtxt(ctxt); return xml; } static int write_xml_stream(xmlNode * xml_node, const char *filename, FILE * stream, gboolean compress) { int res = 0; char *buffer = NULL; unsigned int out = 0; static mode_t cib_mode = S_IRUSR | S_IWUSR; CRM_CHECK(stream != NULL, return -1); crm_trace("Writing XML out to %s", filename); if (xml_node == NULL) { crm_err("Cannot write NULL to %s", filename); fclose(stream); return -1; } crm_log_xml_trace(xml_node, "Writing out"); if(strstr(filename, "cib") != NULL) { /* Only CIB's need this field written */ time_t now = time(NULL); char *now_str = ctime(&now); now_str[24] = EOS; /* replace the newline */ crm_xml_add(xml_node, XML_CIB_ATTR_WRITTEN, now_str); /* establish the correct permissions */ fchmod(fileno(stream), cib_mode); } buffer = dump_xml_formatted(xml_node); CRM_CHECK(buffer != NULL && strlen(buffer) > 0, crm_log_xml_warn(xml_node, "dump:failed"); goto bail); if (compress) { #if HAVE_BZLIB_H int rc = BZ_OK; unsigned int in = 0; BZFILE *bz_file = NULL; bz_file = BZ2_bzWriteOpen(&rc, stream, 5, 0, 30); if (rc != BZ_OK) { crm_err("bzWriteOpen failed: %d", rc); } else { BZ2_bzWrite(&rc, bz_file, buffer, strlen(buffer)); if (rc != BZ_OK) { crm_err("bzWrite() failed: %d", rc); } } if (rc == BZ_OK) { BZ2_bzWriteClose(&rc, bz_file, 0, &in, &out); if (rc != BZ_OK) { crm_err("bzWriteClose() failed: %d", rc); out = -1; } else { crm_trace("%s: In: %d, out: %d", filename, in, out); } } #else crm_err("Cannot write compressed files:" " bzlib was not available at compile time"); #endif } if (out <= 0) { res = fprintf(stream, "%s", buffer); if (res < 0) { crm_perror(LOG_ERR, "Cannot write output to %s", filename); goto bail; } } bail: if (fflush(stream) != 0) { crm_perror(LOG_ERR, "fflush for %s failed:", filename); res = -1; } if (fsync(fileno(stream)) < 0) { crm_perror(LOG_ERR, "fsync for %s failed:", filename); res = -1; } fclose(stream); crm_trace("Saved %d bytes to the Cib as XML", res); free(buffer); return res; } int write_xml_fd(xmlNode * xml_node, const char *filename, int fd, gboolean compress) { FILE *stream = NULL; CRM_CHECK(fd > 0, return -1); stream = fdopen(fd, "w"); return write_xml_stream(xml_node, filename, stream, compress); } int write_xml_file(xmlNode * xml_node, const char *filename, gboolean compress) { FILE *stream = NULL; stream = fopen(filename, "w"); return write_xml_stream(xml_node, filename, stream, compress); } xmlNode * get_message_xml(xmlNode * msg, const char *field) { xmlNode *tmp = first_named_child(msg, field); return __xml_first_child(tmp); } gboolean add_message_xml(xmlNode * msg, const char *field, xmlNode * xml) { xmlNode *holder = create_xml_node(msg, field); add_node_copy(holder, xml); return TRUE; } static char * crm_xml_escape_shuffle(char *text, int start, int *length, const char *replace) { int lpc; int offset = strlen(replace) - 1; /* We have space for 1 char already */ *length += offset; text = realloc_safe(text, *length); for (lpc = (*length) - 1; lpc > (start + offset); lpc--) { text[lpc] = text[lpc - offset]; } memcpy(text + start, replace, 
offset + 1); return text; } char * crm_xml_escape(const char *text) { int index; int changes = 0; int length = 1 + strlen(text); char *copy = strdup(text); /* * When xmlCtxtReadDoc() parses < and friends in a * value, it converts them to their human readable * form. * * If one uses xmlNodeDump() to convert it back to a * string, all is well, because special characters are * converted back to their escape sequences. * * However xmlNodeDump() is randomly dog slow, even with the same * input. So we need to replicate the escapeing in our custom * version so that the result can be re-parsed by xmlCtxtReadDoc() * when necessary. */ for (index = 0; index < length; index++) { switch (copy[index]) { case 0: break; case '<': copy = crm_xml_escape_shuffle(copy, index, &length, "<"); changes++; break; case '>': copy = crm_xml_escape_shuffle(copy, index, &length, ">"); changes++; break; case '"': copy = crm_xml_escape_shuffle(copy, index, &length, """); changes++; break; case '\'': copy = crm_xml_escape_shuffle(copy, index, &length, "'"); changes++; break; case '&': copy = crm_xml_escape_shuffle(copy, index, &length, "&"); changes++; break; case '\t': /* Might as well just expand to a few spaces... */ copy = crm_xml_escape_shuffle(copy, index, &length, " "); changes++; break; case '\n': /* crm_trace("Convert: \\%.3o", copy[index]); */ copy = crm_xml_escape_shuffle(copy, index, &length, "\\n"); changes++; break; case '\r': copy = crm_xml_escape_shuffle(copy, index, &length, "\\r"); changes++; break; /* For debugging... case '\\': crm_trace("Passthrough: \\%c", copy[index+1]); break; */ default: /* Check for and replace non-printing characters with their octal equivalent */ if(copy[index] < ' ' || copy[index] > '~') { char *replace = g_strdup_printf("\\%.3o", copy[index]); /* crm_trace("Convert to octal: \\%.3o", copy[index]); */ copy = crm_xml_escape_shuffle(copy, index, &length, replace); free(replace); changes++; } } } if (changes) { crm_trace("Dumped '%s'", copy); } return copy; } static inline void dump_xml_attr(xmlAttrPtr attr, int options, char **buffer, int *offset, int *max) { char *p_value = NULL; const char *p_name = NULL; CRM_ASSERT(buffer != NULL); if (attr == NULL || attr->children == NULL) { return; } p_name = (const char *)attr->name; p_value = crm_xml_escape((const char *)attr->children->content); buffer_print(*buffer, *max, *offset, " %s=\"%s\"", p_name, p_value); free(p_value); } static void __xml_log_element(int log_level, const char *file, const char *function, int line, const char *prefix, xmlNode * data, int depth, int options) { int max = 0; int offset = 0; const char *name = NULL; const char *hidden = NULL; xmlNode *child = NULL; xmlAttrPtr pIter = NULL; if(data == NULL) { return; } name = crm_element_name(data); if(is_set(options, xml_log_option_open)) { char *buffer = NULL; insert_prefix(options, &buffer, &offset, &max, depth); if(data->type == XML_COMMENT_NODE) { buffer_print(buffer, max, offset, ""); } else { buffer_print(buffer, max, offset, "<%s", name); } hidden = crm_element_value(data, "hidden"); for (pIter = crm_first_attr(data); pIter != NULL; pIter = pIter->next) { xml_private_t *p = pIter->_private; const char *p_name = (const char *)pIter->name; const char *p_value = crm_attr_value(pIter); char *p_copy = NULL; if(is_set(p->flags, xpf_deleted)) { continue; } else if ((is_set(options, xml_log_option_diff_plus) || is_set(options, xml_log_option_diff_minus)) && strcmp(XML_DIFF_MARKER, p_name) == 0) { continue; } else if (hidden != NULL && p_name[0] != 0 && 
strstr(hidden, p_name) != NULL) { p_copy = strdup("*****"); } else { p_copy = crm_xml_escape(p_value); } buffer_print(buffer, max, offset, " %s=\"%s\"", p_name, p_copy); free(p_copy); } if(xml_has_children(data) == FALSE) { buffer_print(buffer, max, offset, "/>"); } else if(is_set(options, xml_log_option_children)) { buffer_print(buffer, max, offset, ">"); } else { buffer_print(buffer, max, offset, "/>"); } do_crm_log_alias(log_level, file, function, line, "%s %s", prefix, buffer); free(buffer); } if(data->type == XML_COMMENT_NODE) { return; } else if(xml_has_children(data) == FALSE) { return; } else if(is_set(options, xml_log_option_children)) { offset = 0; max = 0; for (child = __xml_first_child(data); child != NULL; child = __xml_next(child)) { __xml_log_element(log_level, file, function, line, prefix, child, depth + 1, options|xml_log_option_open|xml_log_option_close); } } if(is_set(options, xml_log_option_close)) { char *buffer = NULL; insert_prefix(options, &buffer, &offset, &max, depth); buffer_print(buffer, max, offset, "", name); do_crm_log_alias(log_level, file, function, line, "%s %s", prefix, buffer); free(buffer); } } static void __xml_log_change_element(int log_level, const char *file, const char *function, int line, const char *prefix, xmlNode * data, int depth, int options) { xml_private_t *p; char *prefix_m = NULL; xmlNode *child = NULL; xmlAttrPtr pIter = NULL; if(data == NULL) { return; } p = data->_private; prefix_m = strdup(prefix); prefix_m[1] = '+'; if(is_set(p->flags, xpf_dirty) && is_set(p->flags, xpf_created)) { /* Continue and log full subtree */ __xml_log_element(log_level, file, function, line, prefix_m, data, depth, options|xml_log_option_open|xml_log_option_close|xml_log_option_children); } else if(is_set(p->flags, xpf_dirty)) { char *spaces = calloc(80, 1); int s_count = 0, s_max = 80; char *prefix_del = NULL; char *prefix_moved = NULL; const char *flags = prefix; insert_prefix(options, &spaces, &s_count, &s_max, depth); prefix_del = strdup(prefix); prefix_del[0] = '-'; prefix_del[1] = '-'; prefix_moved = strdup(prefix); prefix_moved[1] = '~'; if(is_set(p->flags, xpf_moved)) { flags = prefix_moved; } else { flags = prefix; } __xml_log_element(log_level, file, function, line, flags, data, depth, options|xml_log_option_open); for (pIter = crm_first_attr(data); pIter != NULL; pIter = pIter->next) { const char *aname = (const char*)pIter->name; p = pIter->_private; if(is_set(p->flags, xpf_deleted)) { const char *value = crm_element_value(data, aname); flags = prefix_del; do_crm_log_alias(log_level, file, function, line, "%s %s @%s=%s", flags, spaces, aname, value); } else if(is_set(p->flags, xpf_dirty)) { const char *value = crm_element_value(data, aname); if(is_set(p->flags, xpf_created)) { flags = prefix_m; } else if(is_set(p->flags, xpf_modified)) { flags = prefix; } else if(is_set(p->flags, xpf_moved)) { flags = prefix_moved; } else { flags = prefix; } do_crm_log_alias(log_level, file, function, line, "%s %s @%s=%s", flags, spaces, aname, value); } } free(prefix_moved); free(prefix_del); free(spaces); for (child = __xml_first_child(data); child != NULL; child = __xml_next(child)) { __xml_log_change_element(log_level, file, function, line, prefix, child, depth + 1, options); } __xml_log_element(log_level, file, function, line, prefix, data, depth, options|xml_log_option_close); } else { for (child = __xml_first_child(data); child != NULL; child = __xml_next(child)) { __xml_log_change_element(log_level, file, function, line, prefix, child, depth + 1, options); 
} } free(prefix_m); } void log_data_element(int log_level, const char *file, const char *function, int line, const char *prefix, xmlNode * data, int depth, int options) { xmlNode *a_child = NULL; char *prefix_m = NULL; if (prefix == NULL) { prefix = ""; } /* Since we use the same file and line, to avoid confusing libqb, we need to use the same format strings */ if (data == NULL) { do_crm_log_alias(log_level, file, function, line, "%s: %s", prefix, "No data to dump as XML"); return; } if(is_set(options, xml_log_option_dirty_add) || is_set(options, xml_log_option_dirty_add)) { __xml_log_change_element(log_level, file, function, line, prefix, data, depth, options); return; } if (is_set(options, xml_log_option_formatted)) { if (is_set(options, xml_log_option_diff_plus) && (data->children == NULL || crm_element_value(data, XML_DIFF_MARKER))) { options |= xml_log_option_diff_all; prefix_m = strdup(prefix); prefix_m[1] = '+'; prefix = prefix_m; } else if (is_set(options, xml_log_option_diff_minus) && (data->children == NULL || crm_element_value(data, XML_DIFF_MARKER))) { options |= xml_log_option_diff_all; prefix_m = strdup(prefix); prefix_m[1] = '-'; prefix = prefix_m; } } if (is_set(options, xml_log_option_diff_short) && is_not_set(options, xml_log_option_diff_all)) { /* Still searching for the actual change */ for (a_child = __xml_first_child(data); a_child != NULL; a_child = __xml_next(a_child)) { log_data_element(log_level, file, function, line, prefix, a_child, depth + 1, options); } return; } __xml_log_element(log_level, file, function, line, prefix, data, depth, options|xml_log_option_open|xml_log_option_close|xml_log_option_children); free(prefix_m); } static void dump_filtered_xml(xmlNode * data, int options, char **buffer, int *offset, int *max) { int lpc; xmlAttrPtr xIter = NULL; static int filter_len = DIMOF(filter); for (lpc = 0; options && lpc < filter_len; lpc++) { filter[lpc].found = FALSE; } for (xIter = crm_first_attr(data); xIter != NULL; xIter = xIter->next) { bool skip = FALSE; const char *p_name = (const char *)xIter->name; for (lpc = 0; skip == FALSE && lpc < filter_len; lpc++) { if (filter[lpc].found == FALSE && strcmp(p_name, filter[lpc].string) == 0) { filter[lpc].found = TRUE; skip = TRUE; break; } } if (skip == FALSE) { dump_xml_attr(xIter, options, buffer, offset, max); } } } static void dump_xml(xmlNode * data, int options, char **buffer, int *offset, int *max, int depth); static void dump_xml_element(xmlNode * data, int options, char **buffer, int *offset, int *max, int depth) { const char *name = NULL; CRM_ASSERT(max != NULL); CRM_ASSERT(offset != NULL); CRM_ASSERT(buffer != NULL); if (data == NULL) { crm_trace("Nothing to dump"); return; } if (*buffer == NULL) { *offset = 0; *max = 0; } name = crm_element_name(data); CRM_ASSERT(name != NULL); insert_prefix(options, buffer, offset, max, depth); buffer_print(*buffer, *max, *offset, "<%s", name); if (options & xml_log_option_filtered) { dump_filtered_xml(data, options, buffer, offset, max); } else { xmlAttrPtr xIter = NULL; for (xIter = crm_first_attr(data); xIter != NULL; xIter = xIter->next) { dump_xml_attr(xIter, options, buffer, offset, max); } } if (data->children == NULL) { buffer_print(*buffer, *max, *offset, "/>"); } else { buffer_print(*buffer, *max, *offset, ">"); } if (options & xml_log_option_formatted) { buffer_print(*buffer, *max, *offset, "\n"); } if (data->children) { xmlNode *xChild = NULL; for (xChild = __xml_first_child(data); xChild != NULL; xChild = __xml_next(xChild)) { dump_xml(xChild, 
options, buffer, offset, max, depth + 1); } insert_prefix(options, buffer, offset, max, depth); buffer_print(*buffer, *max, *offset, "", name); if (options & xml_log_option_formatted) { buffer_print(*buffer, *max, *offset, "\n"); } } } static void dump_xml_comment(xmlNode * data, int options, char **buffer, int *offset, int *max, int depth) { CRM_ASSERT(max != NULL); CRM_ASSERT(offset != NULL); CRM_ASSERT(buffer != NULL); if (data == NULL) { crm_trace("Nothing to dump"); return; } if (*buffer == NULL) { *offset = 0; *max = 0; } insert_prefix(options, buffer, offset, max, depth); buffer_print(*buffer, *max, *offset, ""); if (options & xml_log_option_formatted) { buffer_print(*buffer, *max, *offset, "\n"); } } static void dump_xml(xmlNode * data, int options, char **buffer, int *offset, int *max, int depth) { #if 0 if (is_not_set(options, xml_log_option_filtered)) { /* Turning this code on also changes the PE tests for some reason * (not just newlines). Figure out why before considering to * enable this permanently. * * It exists to help debug slowness in xmlNodeDump() and * potentially if we ever want to go back to it. * * In theory its a good idea (reuse) but our custom version does * better for the filtered case and avoids the final strdup() for * everything */ time_t now, next; xmlDoc *doc = NULL; xmlBuffer *xml_buffer = NULL; *buffer = NULL; doc = getDocPtr(data); /* doc will only be NULL if data is */ CRM_CHECK(doc != NULL, return); now = time(NULL); xml_buffer = xmlBufferCreate(); CRM_ASSERT(xml_buffer != NULL); /* The default allocator XML_BUFFER_ALLOC_EXACT does far too many * realloc()s and it can take upwards of 18 seconds (yes, seconds) * to dump a 28kb tree which XML_BUFFER_ALLOC_DOUBLEIT can do in * less than 1 second. * * We could also use xmlBufferCreateSize() to start with a * sane-ish initial size and avoid the first few doubles. 
*/ xmlBufferSetAllocationScheme(xml_buffer, XML_BUFFER_ALLOC_DOUBLEIT); *max = xmlNodeDump(xml_buffer, doc, data, 0, (options & xml_log_option_formatted)); if (*max > 0) { *buffer = strdup((char *)xml_buffer->content); } next = time(NULL); if ((now + 1) < next) { crm_log_xml_trace(data, "Long time"); crm_err("xmlNodeDump() -> %dbytes took %ds", *max, next - now); } xmlBufferFree(xml_buffer); return; } #endif switch(data->type) { case XML_ELEMENT_NODE: /* Handle below */ dump_xml_element(data, options, buffer, offset, max, depth); break; case XML_TEXT_NODE: /* Ignore */ return; case XML_COMMENT_NODE: dump_xml_comment(data, options, buffer, offset, max, depth); break; default: crm_warn("Unhandled type: %d", data->type); return; /* XML_ATTRIBUTE_NODE = 2 XML_CDATA_SECTION_NODE = 4 XML_ENTITY_REF_NODE = 5 XML_ENTITY_NODE = 6 XML_PI_NODE = 7 XML_DOCUMENT_NODE = 9 XML_DOCUMENT_TYPE_NODE = 10 XML_DOCUMENT_FRAG_NODE = 11 XML_NOTATION_NODE = 12 XML_HTML_DOCUMENT_NODE = 13 XML_DTD_NODE = 14 XML_ELEMENT_DECL = 15 XML_ATTRIBUTE_DECL = 16 XML_ENTITY_DECL = 17 XML_NAMESPACE_DECL = 18 XML_XINCLUDE_START = 19 XML_XINCLUDE_END = 20 XML_DOCB_DOCUMENT_NODE = 21 */ } } static void fix_digest_buffer(char **buffer, int *offset, int *max, char c) { buffer_print(*buffer, *max, *offset, "%c", c); } static char * dump_xml_for_digest(xmlNode * an_xml_node) { char *buffer = NULL; int offset = 0, max = 0; /* for compatability with the old result which is used for v1 digests */ fix_digest_buffer(&buffer, &offset, &max, ' '); dump_xml(an_xml_node, 0, &buffer, &offset, &max, 0); fix_digest_buffer(&buffer, &offset, &max, '\n'); return buffer; } char * dump_xml_formatted(xmlNode * an_xml_node) { char *buffer = NULL; int offset = 0, max = 0; dump_xml(an_xml_node, xml_log_option_formatted, &buffer, &offset, &max, 0); return buffer; } char * dump_xml_unformatted(xmlNode * an_xml_node) { char *buffer = NULL; int offset = 0, max = 0; dump_xml(an_xml_node, 0, &buffer, &offset, &max, 0); return buffer; } gboolean xml_has_children(const xmlNode * xml_root) { if (xml_root != NULL && xml_root->children != NULL) { return TRUE; } return FALSE; } int crm_element_value_int(xmlNode * data, const char *name, int *dest) { const char *value = crm_element_value(data, name); CRM_CHECK(dest != NULL, return -1); if (value) { *dest = crm_int_helper(value, NULL); return 0; } return -1; } int crm_element_value_const_int(const xmlNode * data, const char *name, int *dest) { return crm_element_value_int((xmlNode *) data, name, dest); } const char * crm_element_value_const(const xmlNode * data, const char *name) { return crm_element_value((xmlNode *) data, name); } char * crm_element_value_copy(xmlNode * data, const char *name) { char *value_copy = NULL; const char *value = crm_element_value(data, name); if (value != NULL) { value_copy = strdup(value); } return value_copy; } void xml_remove_prop(xmlNode * obj, const char *name) { if(__xml_acl_check(obj, NULL, xpf_acl_write) == FALSE) { crm_trace("Cannot remove %s from %s", name, obj->name); } else if(TRACKING_CHANGES(obj)) { /* Leave in place (marked for removal) until after the diff is calculated */ xml_private_t *p = NULL; xmlAttr *attr = xmlHasProp(obj, (const xmlChar *)name); p = attr->_private; set_parent_flag(obj, xpf_dirty); p->flags |= xpf_deleted; /* crm_trace("Setting flag %x due to %s[@id=%s].%s", xpf_dirty, obj->name, ID(obj), name); */ } else { xmlUnsetProp(obj, (const xmlChar *)name); } } void purge_diff_markers(xmlNode * a_node) { xmlNode *child = NULL; CRM_CHECK(a_node != NULL, 
return); xml_remove_prop(a_node, XML_DIFF_MARKER); for (child = __xml_first_child(a_node); child != NULL; child = __xml_next(child)) { purge_diff_markers(child); } } void save_xml_to_file(xmlNode * xml, const char *desc, const char *filename) { char *f = NULL; if (filename == NULL) { char *uuid = crm_generate_uuid(); f = g_strdup_printf("/tmp/%s", uuid); filename = f; free(uuid); } crm_info("Saving %s to %s", desc, filename); write_xml_file(xml, filename, FALSE); g_free(f); } gboolean apply_xml_diff(xmlNode * old, xmlNode * diff, xmlNode ** new) { gboolean result = TRUE; int root_nodes_seen = 0; static struct qb_log_callsite *digest_cs = NULL; const char *digest = crm_element_value(diff, XML_ATTR_DIGEST); const char *version = crm_element_value(diff, XML_ATTR_CRM_VERSION); xmlNode *child_diff = NULL; xmlNode *added = find_xml_node(diff, "diff-added", FALSE); xmlNode *removed = find_xml_node(diff, "diff-removed", FALSE); CRM_CHECK(new != NULL, return FALSE); if (digest_cs == NULL) { digest_cs = qb_log_callsite_get(__func__, __FILE__, "diff-digest", LOG_TRACE, __LINE__, crm_trace_nonlog); } crm_trace("Substraction Phase"); for (child_diff = __xml_first_child(removed); child_diff != NULL; child_diff = __xml_next(child_diff)) { CRM_CHECK(root_nodes_seen == 0, result = FALSE); if (root_nodes_seen == 0) { *new = subtract_xml_object(NULL, old, child_diff, FALSE, NULL, NULL); } root_nodes_seen++; } if (root_nodes_seen == 0) { *new = copy_xml(old); } else if (root_nodes_seen > 1) { crm_err("(-) Diffs cannot contain more than one change set..." " saw %d", root_nodes_seen); result = FALSE; } root_nodes_seen = 0; crm_trace("Addition Phase"); if (result) { xmlNode *child_diff = NULL; for (child_diff = __xml_first_child(added); child_diff != NULL; child_diff = __xml_next(child_diff)) { CRM_CHECK(root_nodes_seen == 0, result = FALSE); if (root_nodes_seen == 0) { add_xml_object(NULL, *new, child_diff, TRUE); } root_nodes_seen++; } } if (root_nodes_seen > 1) { crm_err("(+) Diffs cannot contain more than one change set..." " saw %d", root_nodes_seen); result = FALSE; } else if (result && digest) { char *new_digest = NULL; purge_diff_markers(*new); /* Purge now so the diff is ok */ new_digest = calculate_xml_versioned_digest(*new, FALSE, TRUE, version); if (safe_str_neq(new_digest, digest)) { crm_info("Digest mis-match: expected %s, calculated %s", digest, new_digest); result = FALSE; crm_trace("%p %0.6x", digest_cs, digest_cs ? 
digest_cs->targets : 0); if (digest_cs && digest_cs->targets) { save_xml_to_file(old, "diff:original", NULL); save_xml_to_file(diff, "diff:input", NULL); save_xml_to_file(*new, "diff:new", NULL); } } else { crm_trace("Digest matched: expected %s, calculated %s", digest, new_digest); } free(new_digest); } else if (result) { purge_diff_markers(*new); /* Purge now so the diff is ok */ } return result; } static void __xml_diff_object(xmlNode * old, xmlNode * new) { xmlNode *cIter = NULL; xmlAttr *pIter = NULL; CRM_CHECK(new != NULL, return); if(old == NULL) { crm_node_created(new); __xml_acl_post_process(new); /* Check creation is allowed */ return; } else { xml_private_t *p = new->_private; if(p->flags & xpf_processed) { /* Avoid re-comparing nodes */ return; } p->flags |= xpf_processed; } for (pIter = crm_first_attr(new); pIter != NULL; pIter = pIter->next) { xml_private_t *p = pIter->_private; /* Assume everything was just created and take it from there */ p->flags |= xpf_created; } for (pIter = crm_first_attr(old); pIter != NULL; ) { xmlAttr *prop = pIter; xml_private_t *p = NULL; const char *name = (const char *)pIter->name; const char *old_value = crm_element_value(old, name); xmlAttr *exists = xmlHasProp(new, pIter->name); pIter = pIter->next; if(exists == NULL) { p = new->doc->_private; /* Prevent the dirty flag being set recursively upwards */ clear_bit(p->flags, xpf_tracking); exists = xmlSetProp(new, (const xmlChar *)name, (const xmlChar *)old_value); set_bit(p->flags, xpf_tracking); p = exists->_private; p->flags = 0; crm_trace("Lost %s@%s=%s", old->name, name, old_value); xml_remove_prop(new, name); } else { int p_new = __xml_offset((xmlNode*)exists); int p_old = __xml_offset((xmlNode*)prop); const char *value = crm_element_value(new, name); p = exists->_private; p->flags = (p->flags & ~xpf_created); if(strcmp(value, old_value) != 0) { /* Restore the original value, so we can call crm_xml_add() whcih checks ACLs */ char *vcopy = crm_element_value_copy(new, name); crm_trace("Modified %s@%s %s->%s", old->name, name, old_value, vcopy); xmlSetProp(new, prop->name, (const xmlChar *)old_value); crm_xml_add(new, name, vcopy); free(vcopy); } else if(p_old != p_new) { crm_info("Moved %s@%s (%d -> %d)", old->name, name, p_old, p_new); __xml_node_dirty(new); p->flags |= xpf_dirty|xpf_moved; if(p_old > p_new) { p = prop->_private; p->flags |= xpf_skip; } else { p = exists->_private; p->flags |= xpf_skip; } } } } for (pIter = crm_first_attr(new); pIter != NULL; ) { xmlAttr *prop = pIter; xml_private_t *p = pIter->_private; pIter = pIter->next; if(is_set(p->flags, xpf_created)) { char *name = strdup((const char *)prop->name); char *value = crm_element_value_copy(new, name); crm_trace("Created %s@%s=%s", new->name, name, value); /* Remove plus create wont work as it will modify the relative attribute ordering */ if(__xml_acl_check(new, name, xpf_acl_write)) { crm_attr_dirty(prop); } else { xmlUnsetProp(new, prop->name); /* Remove - change not allowed */ } free(value); free(name); } } for (cIter = __xml_first_child(old); cIter != NULL; ) { xmlNode *old_child = cIter; xmlNode *new_child = find_entity(new, crm_element_name(cIter), ID(cIter)); cIter = __xml_next(cIter); if(new_child) { __xml_diff_object(old_child, new_child); } else { xml_private_t *p = old_child->_private; /* Create then free (which will check the acls if necessary) */ xmlNode *candidate = add_node_copy(new, old_child); xmlNode *top = xmlDocGetRootElement(candidate->doc); __xml_node_clean(candidate); __xml_acl_apply(top); /* Make 
sure any ACLs are applied to 'candidate' */ free_xml(candidate); if(NULL == find_entity(new, crm_element_name(old_child), ID(old_child))) { p->flags |= xpf_skip; } } } for (cIter = __xml_first_child(new); cIter != NULL; ) { xmlNode *new_child = cIter; xmlNode *old_child = find_entity(old, crm_element_name(cIter), ID(cIter)); cIter = __xml_next(cIter); if(old_child == NULL) { xml_private_t *p = new_child->_private; p->flags |= xpf_skip; __xml_diff_object(old_child, new_child); } else { /* Check for movement, we already checked for differences */ int p_new = __xml_offset(new_child); int p_old = __xml_offset(old_child); xml_private_t *p = new_child->_private; if(p_old != p_new) { crm_info("%s.%s moved from %d to %d - %d", new_child->name, ID(new_child), p_old, p_new); p->flags |= xpf_moved; if(p_old > p_new) { p = old_child->_private; p->flags |= xpf_skip; } else { p = new_child->_private; p->flags |= xpf_skip; } } } } } void xml_calculate_changes(xmlNode * old, xmlNode * new) { CRM_CHECK(safe_str_eq(crm_element_name(old), crm_element_name(new)), return); CRM_CHECK(safe_str_eq(ID(old), ID(new)), return); if(xml_tracking_changes(new) == FALSE) { xml_track_changes(new, NULL, NULL, FALSE); } __xml_diff_object(old, new); } xmlNode * diff_xml_object(xmlNode * old, xmlNode * new, gboolean suppress) { xmlNode *tmp1 = NULL; xmlNode *diff = create_xml_node(NULL, "diff"); xmlNode *removed = create_xml_node(diff, "diff-removed"); xmlNode *added = create_xml_node(diff, "diff-added"); crm_xml_add(diff, XML_ATTR_CRM_VERSION, CRM_FEATURE_SET); tmp1 = subtract_xml_object(removed, old, new, FALSE, NULL, "removed:top"); if (suppress && tmp1 != NULL && can_prune_leaf(tmp1)) { free_xml(tmp1); } tmp1 = subtract_xml_object(added, new, old, TRUE, NULL, "added:top"); if (suppress && tmp1 != NULL && can_prune_leaf(tmp1)) { free_xml(tmp1); } if (added->children == NULL && removed->children == NULL) { free_xml(diff); diff = NULL; } return diff; } gboolean can_prune_leaf(xmlNode * xml_node) { xmlNode *cIter = NULL; xmlAttrPtr pIter = NULL; gboolean can_prune = TRUE; const char *name = crm_element_name(xml_node); if (safe_str_eq(name, XML_TAG_RESOURCE_REF) || safe_str_eq(name, XML_CIB_TAG_OBJ_REF) || safe_str_eq(name, XML_ACL_TAG_ROLE_REF) || safe_str_eq(name, XML_ACL_TAG_ROLE_REFv1)) { return FALSE; } for (pIter = crm_first_attr(xml_node); pIter != NULL; pIter = pIter->next) { const char *p_name = (const char *)pIter->name; if (strcmp(p_name, XML_ATTR_ID) == 0) { continue; } can_prune = FALSE; } cIter = __xml_first_child(xml_node); while (cIter) { xmlNode *child = cIter; cIter = __xml_next(cIter); if (can_prune_leaf(child)) { free_xml(child); } else { can_prune = FALSE; } } return can_prune; } void diff_filter_context(int context, int upper_bound, int lower_bound, xmlNode * xml_node, xmlNode * parent) { xmlNode *us = NULL; xmlNode *child = NULL; xmlAttrPtr pIter = NULL; xmlNode *new_parent = parent; const char *name = crm_element_name(xml_node); CRM_CHECK(xml_node != NULL && name != NULL, return); us = create_xml_node(parent, name); for (pIter = crm_first_attr(xml_node); pIter != NULL; pIter = pIter->next) { const char *p_name = (const char *)pIter->name; const char *p_value = crm_attr_value(pIter); lower_bound = context; crm_xml_add(us, p_name, p_value); } if (lower_bound >= 0 || upper_bound >= 0) { crm_xml_add(us, XML_ATTR_ID, ID(xml_node)); new_parent = us; } else { upper_bound = in_upper_context(0, context, xml_node); if (upper_bound >= 0) { crm_xml_add(us, XML_ATTR_ID, ID(xml_node)); new_parent = us; } else { 
free_xml(us); us = NULL; } } for (child = __xml_first_child(us); child != NULL; child = __xml_next(child)) { diff_filter_context(context, upper_bound - 1, lower_bound - 1, child, new_parent); } } int in_upper_context(int depth, int context, xmlNode * xml_node) { if (context == 0) { return 0; } if (xml_node->properties) { return depth; } else if (depth < context) { xmlNode *child = NULL; for (child = __xml_first_child(xml_node); child != NULL; child = __xml_next(child)) { if (in_upper_context(depth + 1, context, child)) { return depth; } } } return 0; } static xmlNode * find_xml_comment(xmlNode * root, xmlNode * search_comment) { xmlNode *a_child = NULL; CRM_CHECK(search_comment->type == XML_COMMENT_NODE, return NULL); for (a_child = __xml_first_child(root); a_child != NULL; a_child = __xml_next(a_child)) { if (a_child->type != XML_COMMENT_NODE) { continue; } if (safe_str_eq((const char *)a_child->content, (const char *)search_comment->content)) { return a_child; } } return NULL; } static xmlNode * subtract_xml_comment(xmlNode * parent, xmlNode * left, xmlNode * right, gboolean * changed) { CRM_CHECK(left != NULL, return NULL); CRM_CHECK(left->type == XML_COMMENT_NODE, return NULL); if (right == NULL || safe_str_neq((const char *)left->content, (const char *)right->content)) { xmlNode *deleted = NULL; deleted = add_node_copy(parent, left); *changed = TRUE; return deleted; } return NULL; } xmlNode * subtract_xml_object(xmlNode * parent, xmlNode * left, xmlNode * right, gboolean full, gboolean * changed, const char *marker) { gboolean dummy = FALSE; gboolean skip = FALSE; xmlNode *diff = NULL; xmlNode *right_child = NULL; xmlNode *left_child = NULL; xmlAttrPtr xIter = NULL; const char *id = NULL; const char *name = NULL; const char *value = NULL; const char *right_val = NULL; int lpc = 0; static int filter_len = DIMOF(filter); if (changed == NULL) { changed = &dummy; } if (left == NULL) { return NULL; } if (left->type == XML_COMMENT_NODE) { return subtract_xml_comment(parent, left, right, changed); } id = ID(left); if (right == NULL) { xmlNode *deleted = NULL; crm_trace("Processing <%s id=%s> (complete copy)", crm_element_name(left), id); deleted = add_node_copy(parent, left); crm_xml_add(deleted, XML_DIFF_MARKER, marker); *changed = TRUE; return deleted; } name = crm_element_name(left); CRM_CHECK(name != NULL, return NULL); CRM_CHECK(safe_str_eq(crm_element_name(left), crm_element_name(right)), return NULL); /* check for XML_DIFF_MARKER in a child */ value = crm_element_value(right, XML_DIFF_MARKER); if (value != NULL && strcmp(value, "removed:top") == 0) { crm_trace("We are the root of the deletion: %s.id=%s", name, id); *changed = TRUE; return NULL; } /* Avoiding creating the full heirarchy would save even more work here */ diff = create_xml_node(parent, name); /* Reset filter */ for (lpc = 0; lpc < filter_len; lpc++) { filter[lpc].found = FALSE; } /* changes to child objects */ for (left_child = __xml_first_child(left); left_child != NULL; left_child = __xml_next(left_child)) { gboolean child_changed = FALSE; if (left_child->type == XML_COMMENT_NODE) { right_child = find_xml_comment(right, left_child); } else { right_child = find_entity(right, crm_element_name(left_child), ID(left_child)); } subtract_xml_object(diff, left_child, right_child, full, &child_changed, marker); if (child_changed) { *changed = TRUE; } } if (*changed == FALSE) { /* Nothing to do */ } else if (full) { xmlAttrPtr pIter = NULL; for (pIter = crm_first_attr(left); pIter != NULL; pIter = pIter->next) { const char 
*p_name = (const char *)pIter->name; const char *p_value = crm_attr_value(pIter); xmlSetProp(diff, (const xmlChar *)p_name, (const xmlChar *)p_value); } /* We already have everything we need... */ goto done; } else if (id) { xmlSetProp(diff, (const xmlChar *)XML_ATTR_ID, (const xmlChar *)id); } /* changes to name/value pairs */ for (xIter = crm_first_attr(left); xIter != NULL; xIter = xIter->next) { const char *prop_name = (const char *)xIter->name; if (strcmp(prop_name, XML_ATTR_ID) == 0) { continue; } skip = FALSE; for (lpc = 0; skip == FALSE && lpc < filter_len; lpc++) { if (filter[lpc].found == FALSE && strcmp(prop_name, filter[lpc].string) == 0) { filter[lpc].found = TRUE; skip = TRUE; break; } } if (skip) { continue; } right_val = crm_element_value(right, prop_name); if (right_val == NULL) { /* new */ *changed = TRUE; if (full) { xmlAttrPtr pIter = NULL; for (pIter = crm_first_attr(left); pIter != NULL; pIter = pIter->next) { const char *p_name = (const char *)pIter->name; const char *p_value = crm_attr_value(pIter); xmlSetProp(diff, (const xmlChar *)p_name, (const xmlChar *)p_value); } break; } else { const char *left_value = crm_element_value(left, prop_name); xmlSetProp(diff, (const xmlChar *)prop_name, (const xmlChar *)value); crm_xml_add(diff, prop_name, left_value); } } else { /* Only now do we need the left value */ const char *left_value = crm_element_value(left, prop_name); if (strcmp(left_value, right_val) == 0) { /* unchanged */ } else { *changed = TRUE; if (full) { xmlAttrPtr pIter = NULL; crm_trace("Changes detected to %s in <%s id=%s>", prop_name, crm_element_name(left), id); for (pIter = crm_first_attr(left); pIter != NULL; pIter = pIter->next) { const char *p_name = (const char *)pIter->name; const char *p_value = crm_attr_value(pIter); xmlSetProp(diff, (const xmlChar *)p_name, (const xmlChar *)p_value); } break; } else { crm_trace("Changes detected to %s (%s -> %s) in <%s id=%s>", prop_name, left_value, right_val, crm_element_name(left), id); crm_xml_add(diff, prop_name, left_value); } } } } if (*changed == FALSE) { free_xml(diff); return NULL; } else if (full == FALSE && id) { crm_xml_add(diff, XML_ATTR_ID, id); } done: return diff; } static int add_xml_comment(xmlNode * parent, xmlNode * target, xmlNode * update) { CRM_CHECK(update != NULL, return 0); CRM_CHECK(update->type == XML_COMMENT_NODE, return 0); if (target == NULL) { target = find_xml_comment(parent, update); } if (target == NULL) { add_node_copy(parent, update); /* We wont reach here currently */ } else if (safe_str_neq((const char *)target->content, (const char *)update->content)) { xmlFree(target->content); target->content = xmlStrdup(update->content); } return 0; } int add_xml_object(xmlNode * parent, xmlNode * target, xmlNode * update, gboolean as_diff) { xmlNode *a_child = NULL; const char *object_id = NULL; const char *object_name = NULL; #if XML_PARSE_DEBUG crm_log_xml_trace("update:", update); crm_log_xml_trace("target:", target); #endif CRM_CHECK(update != NULL, return 0); if (update->type == XML_COMMENT_NODE) { return add_xml_comment(parent, target, update); } object_name = crm_element_name(update); object_id = ID(update); CRM_CHECK(object_name != NULL, return 0); if (target == NULL && object_id == NULL) { /* placeholder object */ target = find_xml_node(parent, object_name, FALSE); } else if (target == NULL) { target = find_entity(parent, object_name, object_id); } if (target == NULL) { target = create_xml_node(parent, object_name); CRM_CHECK(target != NULL, return 0); #if XML_PARSER_DEBUG 
crm_trace("Added <%s%s%s/>", crm_str(object_name), object_id ? " id=" : "", object_id ? object_id : ""); } else { crm_trace("Found node <%s%s%s/> to update", crm_str(object_name), object_id ? " id=" : "", object_id ? object_id : ""); #endif } CRM_CHECK(safe_str_eq(crm_element_name(target), crm_element_name(update)), return 0); if (as_diff == FALSE) { /* So that expand_plus_plus() gets called */ copy_in_properties(target, update); } else { /* No need for expand_plus_plus(), just raw speed */ xmlAttrPtr pIter = NULL; for (pIter = crm_first_attr(update); pIter != NULL; pIter = pIter->next) { const char *p_name = (const char *)pIter->name; const char *p_value = crm_attr_value(pIter); /* Remove it first so the ordering of the update is preserved */ xmlUnsetProp(target, (const xmlChar *)p_name); xmlSetProp(target, (const xmlChar *)p_name, (const xmlChar *)p_value); } } for (a_child = __xml_first_child(update); a_child != NULL; a_child = __xml_next(a_child)) { #if XML_PARSER_DEBUG crm_trace("Updating child <%s id=%s>", crm_element_name(a_child), ID(a_child)); #endif add_xml_object(target, NULL, a_child, as_diff); } #if XML_PARSER_DEBUG crm_trace("Finished with <%s id=%s>", crm_str(object_name), crm_str(object_id)); #endif return 0; } gboolean update_xml_child(xmlNode * child, xmlNode * to_update) { gboolean can_update = TRUE; xmlNode *child_of_child = NULL; CRM_CHECK(child != NULL, return FALSE); CRM_CHECK(to_update != NULL, return FALSE); if (safe_str_neq(crm_element_name(to_update), crm_element_name(child))) { can_update = FALSE; } else if (safe_str_neq(ID(to_update), ID(child))) { can_update = FALSE; } else if (can_update) { #if XML_PARSER_DEBUG crm_log_xml_trace(child, "Update match found..."); #endif add_xml_object(NULL, child, to_update, FALSE); } for (child_of_child = __xml_first_child(child); child_of_child != NULL; child_of_child = __xml_next(child_of_child)) { /* only update the first one */ if (can_update) { break; } can_update = update_xml_child(child_of_child, to_update); } return can_update; } int find_xml_children(xmlNode ** children, xmlNode * root, const char *tag, const char *field, const char *value, gboolean search_matches) { int match_found = 0; CRM_CHECK(root != NULL, return FALSE); CRM_CHECK(children != NULL, return FALSE); if (tag != NULL && safe_str_neq(tag, crm_element_name(root))) { } else if (value != NULL && safe_str_neq(value, crm_element_value(root, field))) { } else { if (*children == NULL) { *children = create_xml_node(NULL, __FUNCTION__); } add_node_copy(*children, root); match_found = 1; } if (search_matches || match_found == 0) { xmlNode *child = NULL; for (child = __xml_first_child(root); child != NULL; child = __xml_next(child)) { match_found += find_xml_children(children, child, tag, field, value, search_matches); } } return match_found; } gboolean replace_xml_child(xmlNode * parent, xmlNode * child, xmlNode * update, gboolean delete_only) { gboolean can_delete = FALSE; xmlNode *child_of_child = NULL; const char *up_id = NULL; const char *child_id = NULL; const char *right_val = NULL; CRM_CHECK(child != NULL, return FALSE); CRM_CHECK(update != NULL, return FALSE); up_id = ID(update); child_id = ID(child); if (up_id == NULL || (child_id && strcmp(child_id, up_id) == 0)) { can_delete = TRUE; } if (safe_str_neq(crm_element_name(update), crm_element_name(child))) { can_delete = FALSE; } if (can_delete && delete_only) { xmlAttrPtr pIter = NULL; for (pIter = crm_first_attr(update); pIter != NULL; pIter = pIter->next) { const char *p_name = (const char 
*)pIter->name; const char *p_value = crm_attr_value(pIter); right_val = crm_element_value(child, p_name); if (safe_str_neq(p_value, right_val)) { can_delete = FALSE; } } } if (can_delete && parent != NULL) { crm_log_xml_trace(child, "Delete match found..."); if (delete_only || update == NULL) { free_xml(child); } else { xmlNode *tmp = copy_xml(update); xmlDoc *doc = tmp->doc; xmlNode *old = NULL; xml_accept_changes(tmp); old = xmlReplaceNode(child, tmp); if(xml_tracking_changes(tmp)) { /* Replaced sections may have included relevant ACLs */ __xml_acl_apply(tmp); } xml_calculate_changes(old, tmp); xmlDocSetRootElement(doc, old); free_xml(old); } child = NULL; return TRUE; } else if (can_delete) { crm_log_xml_debug(child, "Cannot delete the search root"); can_delete = FALSE; } child_of_child = __xml_first_child(child); while (child_of_child) { xmlNode *next = __xml_next(child_of_child); can_delete = replace_xml_child(child, child_of_child, update, delete_only); /* only delete the first one */ if (can_delete) { child_of_child = NULL; } else { child_of_child = next; } } return can_delete; } void hash2nvpair(gpointer key, gpointer value, gpointer user_data) { const char *name = key; const char *s_value = value; xmlNode *xml_node = user_data; xmlNode *xml_child = create_xml_node(xml_node, XML_CIB_TAG_NVPAIR); crm_xml_add(xml_child, XML_ATTR_ID, name); crm_xml_add(xml_child, XML_NVPAIR_ATTR_NAME, name); crm_xml_add(xml_child, XML_NVPAIR_ATTR_VALUE, s_value); crm_trace("dumped: name=%s value=%s", name, s_value); } void hash2smartfield(gpointer key, gpointer value, gpointer user_data) { const char *name = key; const char *s_value = value; xmlNode *xml_node = user_data; if (isdigit(name[0])) { xmlNode *tmp = create_xml_node(xml_node, XML_TAG_PARAM); crm_xml_add(tmp, XML_NVPAIR_ATTR_NAME, name); crm_xml_add(tmp, XML_NVPAIR_ATTR_VALUE, s_value); } else if (crm_element_value(xml_node, name) == NULL) { crm_xml_add(xml_node, name, s_value); crm_trace("dumped: %s=%s", name, s_value); } else { crm_trace("duplicate: %s=%s", name, s_value); } } void hash2field(gpointer key, gpointer value, gpointer user_data) { const char *name = key; const char *s_value = value; xmlNode *xml_node = user_data; if (crm_element_value(xml_node, name) == NULL) { crm_xml_add(xml_node, name, s_value); } else { crm_trace("duplicate: %s=%s", name, s_value); } } void hash2metafield(gpointer key, gpointer value, gpointer user_data) { char *crm_name = NULL; if (key == NULL || value == NULL) { return; } else if (((char *)key)[0] == '#') { return; } else if (strstr(key, ":")) { return; } crm_name = crm_meta_name(key); hash2field(crm_name, value, user_data); free(crm_name); } GHashTable * xml2list(xmlNode * parent) { xmlNode *child = NULL; xmlAttrPtr pIter = NULL; xmlNode *nvpair_list = NULL; GHashTable *nvpair_hash = g_hash_table_new_full(crm_str_hash, g_str_equal, g_hash_destroy_str, g_hash_destroy_str); CRM_CHECK(parent != NULL, return nvpair_hash); nvpair_list = find_xml_node(parent, XML_TAG_ATTRS, FALSE); if (nvpair_list == NULL) { crm_trace("No attributes in %s", crm_element_name(parent)); crm_log_xml_trace(parent, "No attributes for resource op"); } crm_log_xml_trace(nvpair_list, "Unpacking"); for (pIter = crm_first_attr(nvpair_list); pIter != NULL; pIter = pIter->next) { const char *p_name = (const char *)pIter->name; const char *p_value = crm_attr_value(pIter); crm_trace("Added %s=%s", p_name, p_value); g_hash_table_insert(nvpair_hash, strdup(p_name), strdup(p_value)); } for (child = __xml_first_child(nvpair_list); child != NULL; 
child = __xml_next(child)) { if (strcmp((const char *)child->name, XML_TAG_PARAM) == 0) { const char *key = crm_element_value(child, XML_NVPAIR_ATTR_NAME); const char *value = crm_element_value(child, XML_NVPAIR_ATTR_VALUE); crm_trace("Added %s=%s", key, value); if (key != NULL && value != NULL) { g_hash_table_insert(nvpair_hash, strdup(key), strdup(value)); } } } return nvpair_hash; } typedef struct name_value_s { const char *name; const void *value; } name_value_t; static gint sort_pairs(gconstpointer a, gconstpointer b) { int rc = 0; const name_value_t *pair_a = a; const name_value_t *pair_b = b; CRM_ASSERT(a != NULL); CRM_ASSERT(pair_a->name != NULL); CRM_ASSERT(b != NULL); CRM_ASSERT(pair_b->name != NULL); rc = strcmp(pair_a->name, pair_b->name); if (rc < 0) { return -1; } else if (rc > 0) { return 1; } return 0; } static void dump_pair(gpointer data, gpointer user_data) { name_value_t *pair = data; xmlNode *parent = user_data; crm_xml_add(parent, pair->name, pair->value); } xmlNode * sorted_xml(xmlNode * input, xmlNode * parent, gboolean recursive) { xmlNode *child = NULL; GListPtr sorted = NULL; GListPtr unsorted = NULL; name_value_t *pair = NULL; xmlNode *result = NULL; const char *name = NULL; xmlAttrPtr pIter = NULL; CRM_CHECK(input != NULL, return NULL); name = crm_element_name(input); CRM_CHECK(name != NULL, return NULL); result = create_xml_node(parent, name); for (pIter = crm_first_attr(input); pIter != NULL; pIter = pIter->next) { const char *p_name = (const char *)pIter->name; const char *p_value = crm_attr_value(pIter); pair = calloc(1, sizeof(name_value_t)); pair->name = p_name; pair->value = p_value; unsorted = g_list_prepend(unsorted, pair); pair = NULL; } sorted = g_list_sort(unsorted, sort_pairs); g_list_foreach(sorted, dump_pair, result); g_list_free_full(sorted, free); for (child = __xml_first_child(input); child != NULL; child = __xml_next(child)) { if (recursive) { sorted_xml(child, result, recursive); } else { add_node_copy(result, child); } } return result; } /* "c048eae664dba840e1d2060f00299e9d" */ static char * calculate_xml_digest_v1(xmlNode * input, gboolean sort, gboolean ignored) { char *digest = NULL; char *buffer = NULL; xmlNode *copy = NULL; if (sort) { crm_trace("Sorting xml..."); copy = sorted_xml(input, NULL, TRUE); crm_trace("Done"); input = copy; } buffer = dump_xml_for_digest(input); CRM_CHECK(buffer != NULL && strlen(buffer) > 0, free_xml(copy); free(buffer); return NULL); digest = crm_md5sum(buffer); crm_log_xml_trace(input, "digest:source"); free(buffer); free_xml(copy); return digest; } static char * calculate_xml_digest_v2(xmlNode * source, gboolean do_filter) { char *digest = NULL; char *buffer = NULL; int offset, max; static struct qb_log_callsite *digest_cs = NULL; crm_trace("Begin digest %s", do_filter?"filtered":""); if (do_filter && BEST_EFFORT_STATUS) { /* Exclude the status calculation from the digest * * This doesn't mean it wont be sync'd, we just wont be paranoid * about it being an _exact_ copy * * We don't need it to be exact, since we throw it away and regenerate * from our peers whenever a new DC is elected anyway * * Importantly, this reduces the amount of XML to copy+export as * well as the amount of data for MD5 needs to operate on */ } else { dump_xml(source, do_filter ? 
xml_log_option_filtered : 0, &buffer, &offset, &max, 0); } CRM_ASSERT(buffer != NULL); digest = crm_md5sum(buffer); if (digest_cs == NULL) { digest_cs = qb_log_callsite_get(__func__, __FILE__, "cib-digest", LOG_TRACE, __LINE__, crm_trace_nonlog); } if (digest_cs && digest_cs->targets) { char *trace_file = crm_concat("/tmp/digest", digest, '-'); crm_trace("Saving %s.%s.%s to %s", crm_element_value(source, XML_ATTR_GENERATION_ADMIN), crm_element_value(source, XML_ATTR_GENERATION), crm_element_value(source, XML_ATTR_NUMUPDATES), trace_file); save_xml_to_file(source, "digest input", trace_file); free(trace_file); } free(buffer); crm_trace("End digest"); return digest; } char * calculate_on_disk_digest(xmlNode * input) { /* Always use the v1 format for on-disk digests * a) it's a compatibility nightmare * b) we only use this once at startup, all other * invocations are in a separate child process */ return calculate_xml_digest_v1(input, FALSE, FALSE); } char * calculate_operation_digest(xmlNode * input, const char *version) { /* We still need the sorting for parameter digests */ return calculate_xml_digest_v1(input, TRUE, FALSE); } char * calculate_xml_versioned_digest(xmlNode * input, gboolean sort, gboolean do_filter, const char *version) { /* * The sorting associated with v1 digest creation accounted for 23% of * the CIB's CPU usage on the server. v2 drops this. * * The filtering accounts for an additional 2.5% and we may want to * remove it in future. * * v2 also uses the xmlBuffer contents directly to avoid additional copying */ if (version == NULL || compare_version("3.0.5", version) > 0) { crm_trace("Using v1 digest algorithm for %s", crm_str(version)); return calculate_xml_digest_v1(input, sort, do_filter); } crm_trace("Using v2 digest algorithm for %s", crm_str(version)); return calculate_xml_digest_v2(input, do_filter); } static gboolean validate_with_dtd(xmlDocPtr doc, gboolean to_logs, const char *dtd_file) { gboolean valid = TRUE; xmlDtdPtr dtd = NULL; xmlValidCtxtPtr cvp = NULL; CRM_CHECK(doc != NULL, return FALSE); CRM_CHECK(dtd_file != NULL, return FALSE); dtd = xmlParseDTD(NULL, (const xmlChar *)dtd_file); if(dtd == NULL) { crm_err("Could not locate/parse DTD: %s", dtd_file); return TRUE; } cvp = xmlNewValidCtxt(); if(cvp) { if (to_logs) { cvp->userData = (void *)LOG_ERR; cvp->error = (xmlValidityErrorFunc) xml_log; cvp->warning = (xmlValidityWarningFunc) xml_log; } else { cvp->userData = (void *)stderr; cvp->error = (xmlValidityErrorFunc) fprintf; cvp->warning = (xmlValidityWarningFunc) fprintf; } if (!xmlValidateDtd(cvp, doc, dtd)) { valid = FALSE; } xmlFreeValidCtxt(cvp); } else { crm_err("Internal error: No valid context"); } xmlFreeDtd(dtd); return valid; } xmlNode * first_named_child(xmlNode * parent, const char *name) { xmlNode *match = NULL; for (match = __xml_first_child(parent); match != NULL; match = __xml_next(match)) { /* * name == NULL gives first child regardless of name; this is * semantically incorrect in this function, but may be necessary * due to prior use of xml_child_iter_filter */ if (name == NULL || strcmp((const char *)match->name, name) == 0) { return match; } } return NULL; } #if 0 static void relaxng_invalid_stderr(void *userData, xmlErrorPtr error) { /* Structure xmlError struct _xmlError { int domain : What part of the library raised this er int code : The error code, e.g. 
an xmlParserError char * message : human-readable informative error messag xmlErrorLevel level : how consequent is the error char * file : the filename int line : the line number if available char * str1 : extra string information char * str2 : extra string information char * str3 : extra string information int int1 : extra number information int int2 : column number of the error or 0 if N/A void * ctxt : the parser context if available void * node : the node in the tree } */ crm_err("Structured error: line=%d, level=%d %s", error->line, error->level, error->message); } #endif static gboolean validate_with_relaxng(xmlDocPtr doc, gboolean to_logs, const char *relaxng_file, relaxng_ctx_cache_t ** cached_ctx) { int rc = 0; gboolean valid = TRUE; relaxng_ctx_cache_t *ctx = NULL; CRM_CHECK(doc != NULL, return FALSE); CRM_CHECK(relaxng_file != NULL, return FALSE); if (cached_ctx && *cached_ctx) { ctx = *cached_ctx; } else { crm_info("Creating RNG parser context"); ctx = calloc(1, sizeof(relaxng_ctx_cache_t)); xmlLoadExtDtdDefaultValue = 1; ctx->parser = xmlRelaxNGNewParserCtxt(relaxng_file); CRM_CHECK(ctx->parser != NULL, goto cleanup); if (to_logs) { xmlRelaxNGSetParserErrors(ctx->parser, (xmlRelaxNGValidityErrorFunc) xml_log, (xmlRelaxNGValidityWarningFunc) xml_log, GUINT_TO_POINTER(LOG_ERR)); } else { xmlRelaxNGSetParserErrors(ctx->parser, (xmlRelaxNGValidityErrorFunc) fprintf, (xmlRelaxNGValidityWarningFunc) fprintf, stderr); } ctx->rng = xmlRelaxNGParse(ctx->parser); CRM_CHECK(ctx->rng != NULL, crm_err("Could not find/parse %s", relaxng_file); goto cleanup); ctx->valid = xmlRelaxNGNewValidCtxt(ctx->rng); CRM_CHECK(ctx->valid != NULL, goto cleanup); if (to_logs) { xmlRelaxNGSetValidErrors(ctx->valid, (xmlRelaxNGValidityErrorFunc) xml_log, (xmlRelaxNGValidityWarningFunc) xml_log, GUINT_TO_POINTER(LOG_ERR)); } else { xmlRelaxNGSetValidErrors(ctx->valid, (xmlRelaxNGValidityErrorFunc) fprintf, (xmlRelaxNGValidityWarningFunc) fprintf, stderr); } } /* xmlRelaxNGSetValidStructuredErrors( */ /* valid, relaxng_invalid_stderr, valid); */ xmlLineNumbersDefault(1); rc = xmlRelaxNGValidateDoc(ctx->valid, doc); if (rc > 0) { valid = FALSE; } else if (rc < 0) { crm_err("Internal libxml error during validation\n"); } cleanup: if (cached_ctx) { *cached_ctx = ctx; } else { if (ctx->parser != NULL) { xmlRelaxNGFreeParserCtxt(ctx->parser); } if (ctx->valid != NULL) { xmlRelaxNGFreeValidCtxt(ctx->valid); } if (ctx->rng != NULL) { xmlRelaxNGFree(ctx->rng); } free(ctx); } return valid; } void crm_xml_init(void) { static bool init = TRUE; if(init) { init = FALSE; /* The default allocator XML_BUFFER_ALLOC_EXACT does far too many * realloc_safe()s and it can take upwards of 18 seconds (yes, seconds) * to dump a 28kb tree which XML_BUFFER_ALLOC_DOUBLEIT can do in * less than 1 second. 
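* (In practice the EXACT scheme reallocates and copies the buffer for almost every append, which is roughly quadratic work on a large tree, while the doubling scheme amortises those copies; that is why the difference above is so dramatic.)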
*/ xmlSetBufferAllocationScheme(XML_BUFFER_ALLOC_DOUBLEIT); /* Populate and free the _private field when nodes are created and destroyed */ xmlDeregisterNodeDefault(pcmkDeregisterNode); xmlRegisterNodeDefault(pcmkRegisterNode); __xml_build_schema_list(); } } void crm_xml_cleanup(void) { int lpc = 0; relaxng_ctx_cache_t *ctx = NULL; crm_info("Cleaning up memory from libxml2"); for (; lpc < xml_schema_max; lpc++) { switch (known_schemas[lpc].type) { case 0: /* None */ break; case 1: /* DTD - Not cached */ break; case 2: /* RNG - Cached */ ctx = (relaxng_ctx_cache_t *) known_schemas[lpc].cache; if (ctx == NULL) { break; } if (ctx->parser != NULL) { xmlRelaxNGFreeParserCtxt(ctx->parser); } if (ctx->valid != NULL) { xmlRelaxNGFreeValidCtxt(ctx->valid); } if (ctx->rng != NULL) { xmlRelaxNGFree(ctx->rng); } free(ctx); known_schemas[lpc].cache = NULL; break; default: break; } free(known_schemas[lpc].name); free(known_schemas[lpc].location); free(known_schemas[lpc].transform); } free(known_schemas); xsltCleanupGlobals(); xmlCleanupParser(); } static gboolean validate_with(xmlNode * xml, int method, gboolean to_logs) { xmlDocPtr doc = NULL; gboolean valid = FALSE; int type = 0; char *file = NULL; if(method < 0) { return FALSE; } type = known_schemas[method].type; if(type == 0) { return TRUE; } CRM_CHECK(xml != NULL, return FALSE); doc = getDocPtr(xml); file = get_schema_path(known_schemas[method].name, known_schemas[method].location); crm_trace("Validating with: %s (type=%d)", crm_str(file), type); switch (type) { case 1: valid = validate_with_dtd(doc, to_logs, file); break; case 2: valid = validate_with_relaxng(doc, to_logs, file, (relaxng_ctx_cache_t **) & (known_schemas[method].cache)); break; default: crm_err("Unknown validator type: %d", type); break; } free(file); return valid; } #include static void dump_file(const char *filename) { FILE *fp = NULL; int ch, line = 0; CRM_CHECK(filename != NULL, return); fp = fopen(filename, "r"); CRM_CHECK(fp != NULL, return); fprintf(stderr, "%4d ", ++line); do { ch = getc(fp); if (ch == EOF) { putc('\n', stderr); break; } else if (ch == '\n') { fprintf(stderr, "\n%4d ", ++line); } else { putc(ch, stderr); } } while (1); fclose(fp); } gboolean validate_xml_verbose(xmlNode * xml_blob) { int fd = 0; xmlDoc *doc = NULL; xmlNode *xml = NULL; gboolean rc = FALSE; char *filename = strdup(CRM_STATE_DIR "/cib-invalid.XXXXXX"); umask(S_IWGRP | S_IWOTH | S_IROTH); fd = mkstemp(filename); write_xml_fd(xml_blob, filename, fd, FALSE); dump_file(filename); doc = xmlParseFile(filename); xml = xmlDocGetRootElement(doc); rc = validate_xml(xml, NULL, FALSE); free_xml(xml); unlink(filename); free(filename); return rc; } gboolean validate_xml(xmlNode * xml_blob, const char *validation, gboolean to_logs) { int version = 0; if (validation == NULL) { validation = crm_element_value(xml_blob, XML_ATTR_VALIDATION); } if (validation == NULL) { int lpc = 0; bool valid = FALSE; validation = crm_element_value(xml_blob, "ignore-dtd"); if (crm_is_true(validation)) { /* Legacy compatibilty */ crm_xml_add(xml_blob, XML_ATTR_VALIDATION, "none"); return TRUE; } /* Work it out */ for (lpc = 0; lpc < xml_schema_max; lpc++) { if(validate_with(xml_blob, lpc, FALSE)) { valid = TRUE; crm_xml_add(xml_blob, XML_ATTR_VALIDATION, known_schemas[lpc].name); crm_info("XML validated against %s", known_schemas[lpc].name); if(known_schemas[lpc].after_transform == 0) { break; } } } return valid; } version = get_schema_version(validation); if (strcmp(validation, "none") == 0) { return TRUE; } else if(version < 
xml_schema_max) { return validate_with(xml_blob, version, to_logs); } crm_err("Unknown validator: %s", validation); return FALSE; } #if HAVE_LIBXSLT static xmlNode * apply_transformation(xmlNode * xml, const char *transform) { char *xform = NULL; xmlNode *out = NULL; xmlDocPtr res = NULL; xmlDocPtr doc = NULL; xsltStylesheet *xslt = NULL; CRM_CHECK(xml != NULL, return FALSE); doc = getDocPtr(xml); xform = get_schema_path(NULL, transform); xmlLoadExtDtdDefaultValue = 1; xmlSubstituteEntitiesDefault(1); xslt = xsltParseStylesheetFile((const xmlChar *)xform); CRM_CHECK(xslt != NULL, goto cleanup); res = xsltApplyStylesheet(xslt, doc, NULL); CRM_CHECK(res != NULL, goto cleanup); out = xmlDocGetRootElement(res); cleanup: if (xslt) { xsltFreeStylesheet(xslt); } free(xform); return out; } #endif const char * get_schema_name(int version) { if (version < 0 || version >= xml_schema_max) { return "unknown"; } return known_schemas[version].name; } int get_schema_version(const char *name) { int lpc = 0; if(name == NULL) { name = "none"; } for (; lpc < xml_schema_max; lpc++) { if (safe_str_eq(name, known_schemas[lpc].name)) { return lpc; } } return -1; } /* set which validation to use */ #include int update_validation(xmlNode ** xml_blob, int *best, int max, gboolean transform, gboolean to_logs) { xmlNode *xml = NULL; char *value = NULL; int max_stable_schemas = xml_latest_schema_index(); int lpc = 0, match = -1, rc = pcmk_ok; CRM_CHECK(best != NULL, return -EINVAL); CRM_CHECK(xml_blob != NULL, return -EINVAL); CRM_CHECK(*xml_blob != NULL, return -EINVAL); *best = 0; xml = *xml_blob; value = crm_element_value_copy(xml, XML_ATTR_VALIDATION); if (value != NULL) { match = get_schema_version(value); lpc = match; if (lpc >= 0 && transform == FALSE) { lpc++; } else if (lpc < 0) { crm_debug("Unknown validation type"); lpc = 0; } } if (match >= max_stable_schemas) { /* nothing to do */ free(value); *best = match; return pcmk_ok; } while(lpc <= max_stable_schemas) { gboolean valid = TRUE; crm_debug("Testing '%s' validation (%d of %d)", known_schemas[lpc].name ? known_schemas[lpc].name : "", lpc, max_stable_schemas); valid = validate_with(xml, lpc, to_logs); if (valid) { *best = lpc; } else { crm_trace("%s validation failed", known_schemas[lpc].name ? known_schemas[lpc].name : ""); } if (valid && transform) { xmlNode *upgrade = NULL; int next = known_schemas[lpc].after_transform; if (next < 0) { crm_trace("Stopping at %s", known_schemas[lpc].name); break; } else if (max > 0 && lpc == max) { crm_trace("Upgrade limit reached at %s (lpc=%d, next=%d, max=%d)", known_schemas[lpc].name, lpc, next, max); break; } else if (max > 0 && next > max) { crm_debug("Upgrade limit reached at %s (lpc=%d, next=%d, max=%d)", known_schemas[lpc].name, lpc, next, max); break; } else if (known_schemas[lpc].transform == NULL) { crm_notice("%s-style configuration is also valid for %s", known_schemas[lpc].name, known_schemas[next].name); if (validate_with(xml, next, to_logs)) { crm_debug("Configuration valid for schema: %s", known_schemas[next].name); lpc = next; *best = next; rc = pcmk_ok; } else { crm_info("Configuration not valid for schema: %s", known_schemas[next].name); } } else { crm_debug("Upgrading %s-style configuration to %s with %s", known_schemas[lpc].name, known_schemas[next].name, known_schemas[lpc].transform ? 
known_schemas[lpc].transform : "no-op"); #if HAVE_LIBXSLT upgrade = apply_transformation(xml, known_schemas[lpc].transform); #endif if (upgrade == NULL) { crm_err("Transformation %s failed", known_schemas[lpc].transform); rc = -pcmk_err_transform_failed; } else if (validate_with(upgrade, next, to_logs)) { crm_info("Transformation %s successful", known_schemas[lpc].transform); lpc = next; *best = next; free_xml(xml); xml = upgrade; rc = pcmk_ok; } else { crm_err("Transformation %s did not produce a valid configuration", known_schemas[lpc].transform); crm_log_xml_info(upgrade, "transform:bad"); free_xml(upgrade); rc = -pcmk_err_schema_validation; } } } if (*best > match) { crm_info("%s the configuration from %s to %s", transform?"Transformed":"Upgraded", value ? value : "", known_schemas[*best].name); crm_xml_add(xml, XML_ATTR_VALIDATION, known_schemas[*best].name); } *xml_blob = xml; free(value); return rc; } /* * From xpath2.c * * All the elements returned by an XPath query are pointers to * elements from the tree *except* namespace nodes where the XPath * semantic is different from the implementation in libxml2 tree. * As a result, when a returned node set is freed by * xmlXPathFreeObject(), that routine must check the * element type. But a node from the returned set may have been removed * by xmlNodeSetContent() resulting in access to freed data. * * This can be exercised by running * valgrind xpath2 test3.xml '//discarded' discarded * * There are two ways around it: * - make a copy of the pointers to the nodes from the result set * then call xmlXPathFreeObject() and then modify the nodes * or * - remove the references from the node set, if they are not namespace nodes, before calling xmlXPathFreeObject(). */ void freeXpathObject(xmlXPathObjectPtr xpathObj) { int lpc, max = numXpathResults(xpathObj); if(xpathObj == NULL) { return; } for(lpc = 0; lpc < max; lpc++) { if (xpathObj->nodesetval->nodeTab[lpc] && xpathObj->nodesetval->nodeTab[lpc]->type != XML_NAMESPACE_DECL) { xpathObj->nodesetval->nodeTab[lpc] = NULL; } } /* _Now_ it's safe to free it */ xmlXPathFreeObject(xpathObj); } xmlNode * getXpathResult(xmlXPathObjectPtr xpathObj, int index) { xmlNode *match = NULL; int max = numXpathResults(xpathObj); CRM_CHECK(index >= 0, return NULL); CRM_CHECK(xpathObj != NULL, return NULL); if (index >= max) { crm_err("Requested index %d of only %d items", index, max); return NULL; } else if(xpathObj->nodesetval->nodeTab[index] == NULL) { /* Previously requested */ return NULL; } match = xpathObj->nodesetval->nodeTab[index]; CRM_CHECK(match != NULL, return NULL); if (xpathObj->nodesetval->nodeTab[index]->type != XML_NAMESPACE_DECL) { /* See the comment for freeXpathObject() */ xpathObj->nodesetval->nodeTab[index] = NULL; } if (match->type == XML_DOCUMENT_NODE) { /* Will happen if section = '/' */ match = match->children; } else if (match->type != XML_ELEMENT_NODE && match->parent && match->parent->type == XML_ELEMENT_NODE) { /* returning the parent instead */ match = match->parent; } else if (match->type != XML_ELEMENT_NODE) { /* We only support searching nodes */ crm_err("We only support %d not %d", XML_ELEMENT_NODE, match->type); match = NULL; } return match; } /* the caller needs to check if the result contains a xmlDocPtr or xmlNodePtr */ xmlXPathObjectPtr xpath_search(xmlNode * xml_top, const char *path) { xmlDocPtr doc = NULL; xmlXPathObjectPtr xpathObj = NULL; xmlXPathContextPtr xpathCtx = NULL; const xmlChar *xpathExpr = (const xmlChar *)path; CRM_CHECK(path != NULL, return NULL); 
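/* Results from this search are expected to be released with freeXpathObject() (defined above), which clears non-namespace entries from the node set before calling xmlXPathFreeObject(); individual hits are normally fetched via numXpathResults() and getXpathResult(). */ 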
CRM_CHECK(xml_top != NULL, return NULL); CRM_CHECK(strlen(path) > 0, return NULL); doc = getDocPtr(xml_top); xpathCtx = xmlXPathNewContext(doc); CRM_ASSERT(xpathCtx != NULL); xpathObj = xmlXPathEvalExpression(xpathExpr, xpathCtx); xmlXPathFreeContext(xpathCtx); return xpathObj; } gboolean cli_config_update(xmlNode ** xml, int *best_version, gboolean to_logs) { gboolean rc = TRUE; const char *value = crm_element_value(*xml, XML_ATTR_VALIDATION); int version = get_schema_version(value); int min_version = xml_minimum_schema_index(); if (version < min_version) { xmlNode *converted = NULL; converted = copy_xml(*xml); update_validation(&converted, &version, 0, TRUE, to_logs); value = crm_element_value(converted, XML_ATTR_VALIDATION); if (version < min_version) { if (to_logs) { crm_config_err("Your current configuration could only be upgraded to %s... " "the minimum requirement is %s.\n", crm_str(value), get_schema_name(min_version)); } else { fprintf(stderr, "Your current configuration could only be upgraded to %s... " "the minimum requirement is %s.\n", crm_str(value), get_schema_name(min_version)); } free_xml(converted); converted = NULL; rc = FALSE; } else { free_xml(*xml); *xml = converted; if (version < xml_latest_schema_index()) { crm_config_warn("Your configuration was internally updated to %s... " "which is acceptable but not the most recent", get_schema_name(version)); } else if (to_logs) { crm_info("Your configuration was internally updated to the latest version (%s)", get_schema_name(version)); } } } else if (version >= get_schema_version("none")) { if (to_logs) { crm_config_warn("Configuration validation is currently disabled." " It is highly encouraged and prevents many common cluster issues."); } else { fprintf(stderr, "Configuration validation is currently disabled." 
" It is highly encouraged and prevents many common cluster issues.\n"); } } if (best_version) { *best_version = version; } return rc; } xmlNode * expand_idref(xmlNode * input, xmlNode * top) { const char *tag = NULL; const char *ref = NULL; xmlNode *result = input; char *xpath_string = NULL; if (result == NULL) { return NULL; } else if (top == NULL) { top = input; } tag = crm_element_name(result); ref = crm_element_value(result, XML_ATTR_IDREF); if (ref != NULL) { int xpath_max = 512, offset = 0; xpath_string = calloc(1, xpath_max); offset += snprintf(xpath_string + offset, xpath_max - offset, "//%s[@id='%s']", tag, ref); CRM_LOG_ASSERT(offset > 0); result = get_xpath_object(xpath_string, top, LOG_ERR); if (result == NULL) { char *nodePath = (char *)xmlGetNodePath(top); crm_err("No match for %s found in %s: Invalid configuration", xpath_string, crm_str(nodePath)); free(nodePath); } } free(xpath_string); return result; } xmlNode * get_xpath_object_relative(const char *xpath, xmlNode * xml_obj, int error_level) { int len = 0; xmlNode *result = NULL; char *xpath_full = NULL; char *xpath_prefix = NULL; if (xml_obj == NULL || xpath == NULL) { return NULL; } xpath_prefix = (char *)xmlGetNodePath(xml_obj); len += strlen(xpath_prefix); len += strlen(xpath); xpath_full = strdup(xpath_prefix); xpath_full = realloc_safe(xpath_full, len + 1); strncat(xpath_full, xpath, len); result = get_xpath_object(xpath_full, xml_obj, error_level); free(xpath_prefix); free(xpath_full); return result; } xmlNode * get_xpath_object(const char *xpath, xmlNode * xml_obj, int error_level) { int max; xmlNode *result = NULL; xmlXPathObjectPtr xpathObj = NULL; char *nodePath = NULL; char *matchNodePath = NULL; if (xpath == NULL) { return xml_obj; /* or return NULL? */ } xpathObj = xpath_search(xml_obj, xpath); nodePath = (char *)xmlGetNodePath(xml_obj); max = numXpathResults(xpathObj); if (max < 1) { do_crm_log(error_level, "No match for %s in %s", xpath, crm_str(nodePath)); crm_log_xml_explicit(xml_obj, "Unexpected Input"); } else if (max > 1) { int lpc = 0; do_crm_log(error_level, "Too many matches for %s in %s", xpath, crm_str(nodePath)); for (lpc = 0; lpc < max; lpc++) { xmlNode *match = getXpathResult(xpathObj, lpc); CRM_LOG_ASSERT(match != NULL); if(match != NULL) { matchNodePath = (char *)xmlGetNodePath(match); do_crm_log(error_level, "%s[%d] = %s", xpath, lpc, crm_str(matchNodePath)); free(matchNodePath); } } crm_log_xml_explicit(xml_obj, "Bad Input"); } else { result = getXpathResult(xpathObj, 0); } freeXpathObject(xpathObj); free(nodePath); return result; } const char * crm_element_value(xmlNode * data, const char *name) { xmlAttr *attr = NULL; if (data == NULL) { crm_err("Couldn't find %s in NULL", name ? name : ""); CRM_LOG_ASSERT(data != NULL); return NULL; } else if (name == NULL) { crm_err("Couldn't find NULL in %s", crm_element_name(data)); return NULL; } attr = xmlHasProp(data, (const xmlChar *)name); if (attr == NULL || attr->children == NULL) { return NULL; } return (const char *)attr->children->content; } diff --git a/pengine/constraints.c b/pengine/constraints.c index a2ce9c4d7c..d479394a1d 100644 --- a/pengine/constraints.c +++ b/pengine/constraints.c @@ -1,2757 +1,2787 @@ /* * Copyright (C) 2004 Andrew Beekhof * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public * License as published by the Free Software Foundation; either * version 2 of the License, or (at your option) any later version. 
* * This software is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU * General Public License for more details. * * You should have received a copy of the GNU General Public * License along with this library; if not, write to the Free Software * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA */ #include #include #include #include #include #include #include #include #include #include #include #include #include #include enum pe_order_kind { pe_order_kind_optional, pe_order_kind_mandatory, pe_order_kind_serialize, }; #define EXPAND_CONSTRAINT_IDREF(__set, __rsc, __name) do { \ __rsc = pe_find_constraint_resource(data_set->resources, __name); \ if(__rsc == NULL) { \ crm_config_err("%s: No resource found for %s", __set, __name); \ return FALSE; \ } \ } while(0) enum pe_ordering get_flags(const char *id, enum pe_order_kind kind, const char *action_first, const char *action_then, gboolean invert); enum pe_ordering get_asymmetrical_flags(enum pe_order_kind kind); static rsc_to_node_t *generate_location_rule(resource_t * rsc, xmlNode * rule_xml, const char *discovery, pe_working_set_t * data_set); gboolean unpack_constraints(xmlNode * xml_constraints, pe_working_set_t * data_set) { xmlNode *xml_obj = NULL; xmlNode *lifetime = NULL; for (xml_obj = __xml_first_child(xml_constraints); xml_obj != NULL; xml_obj = __xml_next(xml_obj)) { const char *id = crm_element_value(xml_obj, XML_ATTR_ID); if (id == NULL) { crm_config_err("Constraint <%s...> must have an id", crm_element_name(xml_obj)); continue; } crm_trace("Processing constraint %s %s", crm_element_name(xml_obj), id); lifetime = first_named_child(xml_obj, "lifetime"); if (lifetime) { crm_config_warn("Support for the lifetime tag, used by %s, is deprecated." 
" The rules it contains should instead be direct decendants of the constraint object", id); } if (test_ruleset(lifetime, NULL, data_set->now) == FALSE) { crm_info("Constraint %s %s is not active", crm_element_name(xml_obj), id); } else if (safe_str_eq(XML_CONS_TAG_RSC_ORDER, crm_element_name(xml_obj))) { unpack_rsc_order(xml_obj, data_set); } else if (safe_str_eq(XML_CONS_TAG_RSC_DEPEND, crm_element_name(xml_obj))) { unpack_rsc_colocation(xml_obj, data_set); } else if (safe_str_eq(XML_CONS_TAG_RSC_LOCATION, crm_element_name(xml_obj))) { unpack_location(xml_obj, data_set); } else if (safe_str_eq(XML_CONS_TAG_RSC_TICKET, crm_element_name(xml_obj))) { unpack_rsc_ticket(xml_obj, data_set); } else { pe_err("Unsupported constraint type: %s", crm_element_name(xml_obj)); } } return TRUE; } static const char * invert_action(const char *action) { if (safe_str_eq(action, RSC_START)) { return RSC_STOP; } else if (safe_str_eq(action, RSC_STOP)) { return RSC_START; } else if (safe_str_eq(action, RSC_PROMOTE)) { return RSC_DEMOTE; } else if (safe_str_eq(action, RSC_DEMOTE)) { return RSC_PROMOTE; } else if (safe_str_eq(action, RSC_PROMOTED)) { return RSC_DEMOTED; } else if (safe_str_eq(action, RSC_DEMOTED)) { return RSC_PROMOTED; } else if (safe_str_eq(action, RSC_STARTED)) { return RSC_STOPPED; } else if (safe_str_eq(action, RSC_STOPPED)) { return RSC_STARTED; } crm_config_warn("Unknown action: %s", action); return NULL; } static enum pe_order_kind get_ordering_type(xmlNode * xml_obj) { enum pe_order_kind kind_e = pe_order_kind_mandatory; const char *kind = crm_element_value(xml_obj, XML_ORDER_ATTR_KIND); if (kind == NULL) { const char *score = crm_element_value(xml_obj, XML_RULE_ATTR_SCORE); kind_e = pe_order_kind_mandatory; if (score) { int score_i = char2score(score); if (score_i == 0) { kind_e = pe_order_kind_optional; } /* } else if(rsc_then->variant == pe_native && rsc_first->variant > pe_group) { */ /* kind_e = pe_order_kind_optional; */ } } else if (safe_str_eq(kind, "Mandatory")) { kind_e = pe_order_kind_mandatory; } else if (safe_str_eq(kind, "Optional")) { kind_e = pe_order_kind_optional; } else if (safe_str_eq(kind, "Serialize")) { kind_e = pe_order_kind_serialize; } else { const char *id = crm_element_value(xml_obj, XML_ATTR_ID); crm_config_err("Constraint %s: Unknown type '%s'", id, kind); } return kind_e; } static resource_t * pe_find_constraint_resource(GListPtr rsc_list, const char *id) { GListPtr rIter = NULL; for (rIter = rsc_list; id && rIter; rIter = rIter->next) { resource_t *parent = rIter->data; resource_t *match = parent->fns->find_rsc(parent, id, NULL, pe_find_renamed | pe_find_current); if (match != NULL) { if(safe_str_neq(match->id, id)) { /* We found an instance of a clone instead */ match = uber_parent(match); crm_debug("Found %s for %s", match->id, id); } return match; } } crm_trace("No match for %s", id); return NULL; } static gboolean pe_find_constraint_tag(pe_working_set_t * data_set, const char * id, tag_t ** tag) { gboolean rc = FALSE; *tag = NULL; rc = g_hash_table_lookup_extended(data_set->template_rsc_sets, id, NULL, (gpointer*) tag); if (rc == FALSE) { rc = g_hash_table_lookup_extended(data_set->tags, id, NULL, (gpointer*) tag); if (rc == FALSE) { crm_config_warn("No template/tag named '%s'", id); return FALSE; } else if (*tag == NULL) { crm_config_warn("No resource is tagged with '%s'", id); return FALSE; } } else if (*tag == NULL) { crm_config_warn("No resource is derived from template '%s'", id); return FALSE; } return rc; } static gboolean 
valid_resource_or_tag(pe_working_set_t * data_set, const char * id, resource_t ** rsc, tag_t ** tag) { gboolean rc = FALSE; if (rsc) { *rsc = NULL; *rsc = pe_find_constraint_resource(data_set->resources, id); if (*rsc) { return TRUE; } } if (tag) { *tag = NULL; rc = pe_find_constraint_tag(data_set, id, tag); } return rc; } static gboolean unpack_simple_rsc_order(xmlNode * xml_obj, pe_working_set_t * data_set) { int order_id = 0; resource_t *rsc_then = NULL; resource_t *rsc_first = NULL; gboolean invert_bool = TRUE; + gboolean require_all = TRUE; enum pe_order_kind kind = pe_order_kind_mandatory; enum pe_ordering cons_weight = pe_order_optional; const char *id_first = NULL; const char *id_then = NULL; const char *action_then = NULL; const char *action_first = NULL; const char *instance_then = NULL; const char *instance_first = NULL; + const char *require_all_s = NULL; const char *id = crm_element_value(xml_obj, XML_ATTR_ID); const char *invert = crm_element_value(xml_obj, XML_CONS_ATTR_SYMMETRICAL); crm_str_to_boolean(invert, &invert_bool); if (xml_obj == NULL) { crm_config_err("No constraint object to process."); return FALSE; } else if (id == NULL) { crm_config_err("%s constraint must have an id", crm_element_name(xml_obj)); return FALSE; } id_then = crm_element_value(xml_obj, XML_ORDER_ATTR_THEN); id_first = crm_element_value(xml_obj, XML_ORDER_ATTR_FIRST); action_then = crm_element_value(xml_obj, XML_ORDER_ATTR_THEN_ACTION); action_first = crm_element_value(xml_obj, XML_ORDER_ATTR_FIRST_ACTION); instance_then = crm_element_value(xml_obj, XML_ORDER_ATTR_THEN_INSTANCE); instance_first = crm_element_value(xml_obj, XML_ORDER_ATTR_FIRST_INSTANCE); if (action_first == NULL) { action_first = RSC_START; } if (action_then == NULL) { action_then = action_first; } if (id_then == NULL || id_first == NULL) { crm_config_err("Constraint %s needs two sides lh: %s rh: %s", id, crm_str(id_then), crm_str(id_first)); return FALSE; } rsc_then = pe_find_constraint_resource(data_set->resources, id_then); rsc_first = pe_find_constraint_resource(data_set->resources, id_first); if (rsc_then == NULL) { crm_config_err("Constraint %s: no resource found for name '%s'", id, id_then); return FALSE; } else if (rsc_first == NULL) { crm_config_err("Constraint %s: no resource found for name '%s'", id, id_first); return FALSE; } else if (instance_then && rsc_then->variant < pe_clone) { crm_config_err("Invalid constraint '%s':" " Resource '%s' is not a clone but instance %s was requested", id, id_then, instance_then); return FALSE; } else if (instance_first && rsc_first->variant < pe_clone) { crm_config_err("Invalid constraint '%s':" " Resource '%s' is not a clone but instance %s was requested", id, id_first, instance_first); return FALSE; } if (instance_then) { rsc_then = find_clone_instance(rsc_then, instance_then, data_set); if (rsc_then == NULL) { crm_config_warn("Invalid constraint '%s': No instance '%s' of '%s'", id, instance_then, id_then); return FALSE; } } if (instance_first) { rsc_first = find_clone_instance(rsc_first, instance_first, data_set); if (rsc_first == NULL) { crm_config_warn("Invalid constraint '%s': No instance '%s' of '%s'", id, instance_first, id_first); return FALSE; } } + require_all_s = crm_element_value(xml_obj, "require-all"); + if (require_all_s + && crm_is_true(require_all_s) == FALSE + && rsc_first->variant >= pe_clone) { + + require_all = FALSE; + } + cons_weight = pe_order_optional; kind = get_ordering_type(xml_obj); if (kind == pe_order_kind_optional && rsc_then->restart_type == 
pe_restart_restart) { crm_trace("Upgrade : recovery - implies right"); cons_weight |= pe_order_implies_then; } if (invert_bool == FALSE) { cons_weight |= get_asymmetrical_flags(kind); } else { cons_weight |= get_flags(id, kind, action_first, action_then, FALSE); } - order_id = new_rsc_order(rsc_first, action_first, rsc_then, action_then, cons_weight, data_set); + + if (require_all == FALSE) { + GListPtr rIter = NULL; + char *task = crm_concat(CRM_OP_RELAXED_CLONE, id, ':'); + action_t *unordered_action = get_pseudo_op(task, data_set); + free(task); + + update_action_flags(unordered_action, pe_action_requires_any); + + for (rIter = rsc_first->children; id && rIter; rIter = rIter->next) { + resource_t *child = rIter->data; + + custom_action_order(child, generate_op_key(child->id, action_first, 0), NULL, + NULL, NULL, unordered_action, + pe_order_one_or_more | pe_order_implies_then_printed, data_set); + } + + order_id = custom_action_order(NULL, NULL, unordered_action, + rsc_then, generate_op_key(rsc_then->id, action_then, 0), NULL, + cons_weight | pe_order_runnable_left, data_set); + } else { + order_id = new_rsc_order(rsc_first, action_first, rsc_then, action_then, cons_weight, data_set); + } pe_rsc_trace(rsc_first, "order-%d (%s): %s_%s before %s_%s flags=0x%.6x", order_id, id, rsc_first->id, action_first, rsc_then->id, action_then, cons_weight); if (invert_bool == FALSE) { return TRUE; } else if (invert && kind == pe_order_kind_serialize) { crm_config_warn("Cannot invert serialized constraint set %s", id); return TRUE; } else if (kind == pe_order_kind_serialize) { return TRUE; } action_then = invert_action(action_then); action_first = invert_action(action_first); if (action_then == NULL || action_first == NULL) { crm_config_err("Cannot invert rsc_order constraint %s." 
" Please specify the inverse manually.", id); return TRUE; } cons_weight = pe_order_optional; if (kind == pe_order_kind_optional && rsc_then->restart_type == pe_restart_restart) { crm_trace("Upgrade : recovery - implies left"); cons_weight |= pe_order_implies_first; } cons_weight |= get_flags(id, kind, action_first, action_then, TRUE); + order_id = new_rsc_order(rsc_then, action_then, rsc_first, action_first, cons_weight, data_set); pe_rsc_trace(rsc_then, "order-%d (%s): %s_%s before %s_%s flags=0x%.6x", order_id, id, rsc_then->id, action_then, rsc_first->id, action_first, cons_weight); return TRUE; } static gboolean expand_tags_in_sets(xmlNode * xml_obj, xmlNode ** expanded_xml, pe_working_set_t * data_set) { xmlNode *new_xml = NULL; xmlNode *set = NULL; gboolean any_refs = FALSE; const char *cons_id = NULL; *expanded_xml = NULL; if (xml_obj == NULL) { crm_config_err("No constraint object to process."); return FALSE; } new_xml = copy_xml(xml_obj); cons_id = ID(new_xml); for (set = __xml_first_child(new_xml); set != NULL; set = __xml_next(set)) { xmlNode *xml_rsc = NULL; GListPtr tag_refs = NULL; GListPtr gIter = NULL; if (safe_str_neq((const char *)set->name, XML_CONS_TAG_RSC_SET)) { continue; } for (xml_rsc = __xml_first_child(set); xml_rsc != NULL; xml_rsc = __xml_next(xml_rsc)) { resource_t *rsc = NULL; tag_t *tag = NULL; const char *id = ID(xml_rsc); if (safe_str_neq((const char *)xml_rsc->name, XML_TAG_RESOURCE_REF)) { continue; } if (valid_resource_or_tag(data_set, id, &rsc, &tag) == FALSE) { crm_config_err("Constraint '%s': Invalid reference to '%s'", cons_id, id); free_xml(new_xml); return FALSE; } else if (rsc) { continue; } else if (tag) { /* The resource_ref under the resource_set references a template/tag */ xmlNode *last_ref = xml_rsc; /* A sample: Original XML: Now we are appending rsc2 and rsc3 which are tagged with tag1 right after it: */ for (gIter = tag->refs; gIter != NULL; gIter = gIter->next) { const char *obj_ref = (const char *) gIter->data; xmlNode *new_rsc_ref = NULL; new_rsc_ref = xmlNewDocRawNode(getDocPtr(set), NULL, (const xmlChar *)XML_TAG_RESOURCE_REF, NULL); crm_xml_add(new_rsc_ref, XML_ATTR_ID, obj_ref); xmlAddNextSibling(last_ref, new_rsc_ref); last_ref = new_rsc_ref; } any_refs = TRUE; /* Do not directly free ''. That would break the further __xml_next(xml_rsc)) and cause "Invalid read" seen by valgrind. So just record it into a hash table for freeing it later. 
*/ tag_refs = g_list_append(tag_refs, xml_rsc); } } /* Now free '', and finally get: */ for (gIter = tag_refs; gIter != NULL; gIter = gIter->next) { xmlNode *tag_ref = gIter->data; free_xml(tag_ref); } g_list_free(tag_refs); } if (any_refs) { *expanded_xml = new_xml; } else { free_xml(new_xml); } return TRUE; } static gboolean tag_to_set(xmlNode * xml_obj, xmlNode ** rsc_set, const char * attr, gboolean convert_rsc, pe_working_set_t * data_set) { const char *cons_id = NULL; const char *id = NULL; resource_t *rsc = NULL; tag_t *tag = NULL; *rsc_set = NULL; if (xml_obj == NULL) { crm_config_err("No constraint object to process."); return FALSE; } if (attr == NULL) { crm_config_err("No attribute name to process."); return FALSE; } cons_id = crm_element_value(xml_obj, XML_ATTR_ID); if (cons_id == NULL) { crm_config_err("%s constraint must have an id", crm_element_name(xml_obj)); return FALSE; } id = crm_element_value(xml_obj, attr); if (id == NULL) { return TRUE; } if (valid_resource_or_tag(data_set, id, &rsc, &tag) == FALSE) { crm_config_err("Constraint '%s': Invalid reference to '%s'", cons_id, id); return FALSE; } else if (tag) { GListPtr gIter = NULL; /* A template/tag is referenced by the "attr" attribute (first, then, rsc or with-rsc). Add the template/tag's corresponding "resource_set" which contains the resources derived from it or tagged with it under the constraint. */ *rsc_set = create_xml_node(xml_obj, XML_CONS_TAG_RSC_SET); crm_xml_add(*rsc_set, XML_ATTR_ID, id); for (gIter = tag->refs; gIter != NULL; gIter = gIter->next) { const char *obj_ref = (const char *) gIter->data; xmlNode *rsc_ref = NULL; rsc_ref = create_xml_node(*rsc_set, XML_TAG_RESOURCE_REF); crm_xml_add(rsc_ref, XML_ATTR_ID, obj_ref); } /* Set sequential="false" for the resource_set */ crm_xml_add(*rsc_set, "sequential", XML_BOOLEAN_FALSE); } else if (rsc && convert_rsc) { /* Even a regular resource is referenced by "attr", convert it into a resource_set. Because the other side of the constraint could be a template/tag reference. 
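For example (illustrative only): if a colocation names rsc="rsc1" and with-rsc="tag1", then "rsc1" is given a one-member resource_set of its own here, so that both sides of the constraint are expressed with resource_set semantics. 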
*/ xmlNode *rsc_ref = NULL; *rsc_set = create_xml_node(xml_obj, XML_CONS_TAG_RSC_SET); crm_xml_add(*rsc_set, XML_ATTR_ID, id); rsc_ref = create_xml_node(*rsc_set, XML_TAG_RESOURCE_REF); crm_xml_add(rsc_ref, XML_ATTR_ID, id); } else { return TRUE; } /* Remove the "attr" attribute referencing the template/tag */ if (*rsc_set) { xml_remove_prop(xml_obj, attr); } return TRUE; } gboolean unpack_rsc_location(xmlNode * xml_obj, resource_t * rsc_lh, const char * role, const char * score, pe_working_set_t * data_set); static gboolean unpack_simple_location(xmlNode * xml_obj, pe_working_set_t * data_set) { const char *id = crm_element_value(xml_obj, XML_ATTR_ID); const char *value = crm_element_value(xml_obj, XML_COLOC_ATTR_SOURCE); if(value) { resource_t *rsc_lh = pe_find_constraint_resource(data_set->resources, value); return unpack_rsc_location(xml_obj, rsc_lh, NULL, NULL, data_set); } value = crm_element_value(xml_obj, XML_COLOC_ATTR_SOURCE"-pattern"); if(value) { regex_t *r_patt = calloc(1, sizeof(regex_t)); bool invert = FALSE; GListPtr rIter = NULL; if(value[0] == '!') { value++; invert = TRUE; } if (regcomp(r_patt, value, REG_EXTENDED)) { crm_config_err("Bad regex '%s' for constraint '%s'\n", value, id); regfree(r_patt); free(r_patt); return FALSE; } for (rIter = data_set->resources; rIter; rIter = rIter->next) { resource_t *r = rIter->data; int status = regexec(r_patt, r->id, 0, NULL, 0); if(invert == FALSE && status == 0) { crm_debug("'%s' matched '%s' for %s", r->id, value, id); unpack_rsc_location(xml_obj, r, NULL, NULL, data_set); } if(invert && status != 0) { crm_debug("'%s' is an inverted match of '%s' for %s", r->id, value, id); unpack_rsc_location(xml_obj, r, NULL, NULL, data_set); } else { crm_trace("'%s' does not match '%s' for %s", r->id, value, id); } } regfree(r_patt); free(r_patt); } return FALSE; } gboolean unpack_rsc_location(xmlNode * xml_obj, resource_t * rsc_lh, const char * role, const char * score, pe_working_set_t * data_set) { gboolean empty = TRUE; rsc_to_node_t *location = NULL; const char *id_lh = crm_element_value(xml_obj, XML_COLOC_ATTR_SOURCE); const char *id = crm_element_value(xml_obj, XML_ATTR_ID); const char *node = crm_element_value(xml_obj, XML_CIB_TAG_NODE); const char *discovery = crm_element_value(xml_obj, XML_LOCATION_ATTR_DISCOVERY); if (rsc_lh == NULL) { /* only a warn as BSC adds the constraint then the resource */ crm_config_warn("No resource (con=%s, rsc=%s)", id, id_lh); return FALSE; } if (score == NULL) { score = crm_element_value(xml_obj, XML_RULE_ATTR_SCORE); } if (node != NULL && score != NULL) { int score_i = char2score(score); node_t *match = pe_find_node(data_set->nodes, node); if (!match) { return FALSE; } location = rsc2node_new(id, rsc_lh, score_i, discovery, match, data_set); } else { xmlNode *rule_xml = NULL; for (rule_xml = __xml_first_child(xml_obj); rule_xml != NULL; rule_xml = __xml_next(rule_xml)) { if (crm_str_eq((const char *)rule_xml->name, XML_TAG_RULE, TRUE)) { empty = FALSE; crm_trace("Unpacking %s/%s", id, ID(rule_xml)); generate_location_rule(rsc_lh, rule_xml, discovery, data_set); } } if (empty) { crm_config_err("Invalid location constraint %s:" " rsc_location must contain at least one rule", ID(xml_obj)); } } if (role == NULL) { role = crm_element_value(xml_obj, XML_RULE_ATTR_ROLE); } if (location && role) { if (text2role(role) == RSC_ROLE_UNKNOWN) { pe_err("Invalid constraint %s: Bad role %s", id, role); return FALSE; } else { enum rsc_role_e r = text2role(role); switch(r) { case RSC_ROLE_UNKNOWN: case 
RSC_ROLE_STARTED: case RSC_ROLE_SLAVE: /* Applies to all */ location->role_filter = RSC_ROLE_UNKNOWN; break; default: location->role_filter = r; break; } } } return TRUE; } static gboolean unpack_location_tags(xmlNode * xml_obj, xmlNode ** expanded_xml, pe_working_set_t * data_set) { const char *id = NULL; const char *id_lh = NULL; const char *state_lh = NULL; resource_t *rsc_lh = NULL; tag_t *tag_lh = NULL; xmlNode *new_xml = NULL; xmlNode *rsc_set_lh = NULL; gboolean any_sets = FALSE; *expanded_xml = NULL; if (xml_obj == NULL) { crm_config_err("No constraint object to process."); return FALSE; } id = crm_element_value(xml_obj, XML_ATTR_ID); if (id == NULL) { crm_config_err("%s constraint must have an id", crm_element_name(xml_obj)); return FALSE; } /* Attempt to expand any template/tag references in possible resource sets. */ expand_tags_in_sets(xml_obj, &new_xml, data_set); if (new_xml) { /* There are resource sets referencing templates. Return with the expanded XML. */ crm_log_xml_trace(new_xml, "Expanded rsc_location..."); *expanded_xml = new_xml; return TRUE; } id_lh = crm_element_value(xml_obj, XML_COLOC_ATTR_SOURCE); if (id_lh == NULL) { return TRUE; } if (valid_resource_or_tag(data_set, id_lh, &rsc_lh, &tag_lh) == FALSE) { crm_config_err("Constraint '%s': Invalid reference to '%s'", id, id_lh); return FALSE; } else if (rsc_lh) { /* No template is referenced. */ return TRUE; } state_lh = crm_element_value(xml_obj, XML_RULE_ATTR_ROLE); new_xml = copy_xml(xml_obj); /* Convert the template/tag reference in "rsc" into a resource_set under the rsc_location constraint. */ if (tag_to_set(new_xml, &rsc_set_lh, XML_COLOC_ATTR_SOURCE, FALSE, data_set) == FALSE) { free_xml(new_xml); return FALSE; } if (rsc_set_lh) { if (state_lh) { /* A "rsc-role" is specified. Move it into the converted resource_set as a "role"" attribute. 
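For example (illustrative only): a rsc-role of "Master" on the original rsc_location reappears as role="Master" on the generated resource_set. 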
*/ crm_xml_add(rsc_set_lh, "role", state_lh); xml_remove_prop(new_xml, XML_RULE_ATTR_ROLE); } any_sets = TRUE; } if (any_sets) { crm_log_xml_trace(new_xml, "Expanded rsc_location..."); *expanded_xml = new_xml; } else { free_xml(new_xml); } return TRUE; } static gboolean unpack_location_set(xmlNode * location, xmlNode * set, pe_working_set_t * data_set) { xmlNode *xml_rsc = NULL; resource_t *resource = NULL; const char *set_id = ID(set); const char *role = crm_element_value(set, "role"); const char *local_score = crm_element_value(set, XML_RULE_ATTR_SCORE); if (set == NULL) { crm_config_err("No resource_set object to process."); return FALSE; } if (set_id == NULL) { crm_config_err("resource_set must have an id"); return FALSE; } for (xml_rsc = __xml_first_child(set); xml_rsc != NULL; xml_rsc = __xml_next(xml_rsc)) { if (crm_str_eq((const char *)xml_rsc->name, XML_TAG_RESOURCE_REF, TRUE)) { EXPAND_CONSTRAINT_IDREF(set_id, resource, ID(xml_rsc)); unpack_rsc_location(location, resource, role, local_score, data_set); } } return TRUE; } gboolean unpack_location(xmlNode * xml_obj, pe_working_set_t * data_set) { xmlNode *set = NULL; gboolean any_sets = FALSE; xmlNode *orig_xml = NULL; xmlNode *expanded_xml = NULL; const char *id = crm_element_value(xml_obj, XML_ATTR_ID); gboolean rc = TRUE; if (xml_obj == NULL) { crm_config_err("No rsc_location constraint object to process."); return FALSE; } if (id == NULL) { crm_config_err("%s constraint must have an id", crm_element_name(xml_obj)); return FALSE; } rc = unpack_location_tags(xml_obj, &expanded_xml, data_set); if (expanded_xml) { orig_xml = xml_obj; xml_obj = expanded_xml; } else if (rc == FALSE) { return FALSE; } for (set = __xml_first_child(xml_obj); set != NULL; set = __xml_next(set)) { if (crm_str_eq((const char *)set->name, XML_CONS_TAG_RSC_SET, TRUE)) { any_sets = TRUE; set = expand_idref(set, data_set->input); if (unpack_location_set(xml_obj, set, data_set) == FALSE) { return FALSE; } } } if (expanded_xml) { free_xml(expanded_xml); xml_obj = orig_xml; } if (any_sets == FALSE) { return unpack_simple_location(xml_obj, data_set); } return TRUE; } static int get_node_score(const char *rule, const char *score, gboolean raw, node_t * node) { int score_f = 0; if (score == NULL) { pe_err("Rule %s: no score specified. 
Assuming 0.", rule); } else if (raw) { score_f = char2score(score); } else { const char *attr_score = g_hash_table_lookup(node->details->attrs, score); if (attr_score == NULL) { crm_debug("Rule %s: node %s did not have a value for %s", rule, node->details->uname, score); score_f = -INFINITY; } else { crm_debug("Rule %s: node %s had value %s for %s", rule, node->details->uname, attr_score, score); score_f = char2score(attr_score); } } return score_f; } static rsc_to_node_t * generate_location_rule(resource_t * rsc, xmlNode * rule_xml, const char *discovery, pe_working_set_t * data_set) { const char *rule_id = NULL; const char *score = NULL; const char *boolean = NULL; const char *role = NULL; GListPtr gIter = NULL; GListPtr match_L = NULL; gboolean do_and = TRUE; gboolean accept = TRUE; gboolean raw_score = TRUE; rsc_to_node_t *location_rule = NULL; rule_xml = expand_idref(rule_xml, data_set->input); rule_id = crm_element_value(rule_xml, XML_ATTR_ID); boolean = crm_element_value(rule_xml, XML_RULE_ATTR_BOOLEAN_OP); role = crm_element_value(rule_xml, XML_RULE_ATTR_ROLE); crm_trace("Processing rule: %s", rule_id); if (role != NULL && text2role(role) == RSC_ROLE_UNKNOWN) { pe_err("Bad role specified for %s: %s", rule_id, role); return NULL; } score = crm_element_value(rule_xml, XML_RULE_ATTR_SCORE); if (score == NULL) { score = crm_element_value(rule_xml, XML_RULE_ATTR_SCORE_ATTRIBUTE); if (score == NULL) { score = crm_element_value(rule_xml, XML_RULE_ATTR_SCORE_MANGLED); } if (score != NULL) { raw_score = FALSE; } } if (safe_str_eq(boolean, "or")) { do_and = FALSE; } location_rule = rsc2node_new(rule_id, rsc, 0, discovery, NULL, data_set); if (location_rule == NULL) { return NULL; } if (role != NULL) { crm_trace("Setting role filter: %s", role); location_rule->role_filter = text2role(role); if (location_rule->role_filter == RSC_ROLE_SLAVE) { /* Any master/slave cannot be promoted without being a slave first * Ergo, any constraint for the slave role applies to every role */ location_rule->role_filter = RSC_ROLE_UNKNOWN; } } if (do_and) { GListPtr gIter = NULL; match_L = node_list_dup(data_set->nodes, TRUE, FALSE); for (gIter = match_L; gIter != NULL; gIter = gIter->next) { node_t *node = (node_t *) gIter->data; node->weight = get_node_score(rule_id, score, raw_score, node); } } for (gIter = data_set->nodes; gIter != NULL; gIter = gIter->next) { int score_f = 0; node_t *node = (node_t *) gIter->data; accept = test_rule(rule_xml, node->details->attrs, RSC_ROLE_UNKNOWN, data_set->now); crm_trace("Rule %s %s on %s", ID(rule_xml), accept ? 
"passed" : "failed", node->details->uname); score_f = get_node_score(rule_id, score, raw_score, node); /* if(accept && score_f == -INFINITY) { */ /* accept = FALSE; */ /* } */ if (accept) { node_t *local = pe_find_node_id(match_L, node->details->id); if (local == NULL && do_and) { continue; } else if (local == NULL) { local = node_copy(node); match_L = g_list_append(match_L, local); } if (do_and == FALSE) { local->weight = merge_weights(local->weight, score_f); } crm_trace("node %s now has weight %d", node->details->uname, local->weight); } else if (do_and && !accept) { /* remove it */ node_t *delete = pe_find_node_id(match_L, node->details->id); if (delete != NULL) { match_L = g_list_remove(match_L, delete); crm_trace("node %s did not match", node->details->uname); } free(delete); } } location_rule->node_list_rh = match_L; if (location_rule->node_list_rh == NULL) { crm_trace("No matching nodes for rule %s", rule_id); return NULL; } crm_trace("%s: %d nodes matched", rule_id, g_list_length(location_rule->node_list_rh)); return location_rule; } static gint sort_cons_priority_lh(gconstpointer a, gconstpointer b) { const rsc_colocation_t *rsc_constraint1 = (const rsc_colocation_t *)a; const rsc_colocation_t *rsc_constraint2 = (const rsc_colocation_t *)b; if (a == NULL) { return 1; } if (b == NULL) { return -1; } CRM_ASSERT(rsc_constraint1->rsc_lh != NULL); CRM_ASSERT(rsc_constraint1->rsc_rh != NULL); if (rsc_constraint1->rsc_lh->priority > rsc_constraint2->rsc_lh->priority) { return -1; } if (rsc_constraint1->rsc_lh->priority < rsc_constraint2->rsc_lh->priority) { return 1; } /* Process clones before primitives and groups */ if (rsc_constraint1->rsc_lh->variant > rsc_constraint2->rsc_lh->variant) { return -1; } else if (rsc_constraint1->rsc_lh->variant < rsc_constraint2->rsc_lh->variant) { return 1; } return strcmp(rsc_constraint1->rsc_lh->id, rsc_constraint2->rsc_lh->id); } static gint sort_cons_priority_rh(gconstpointer a, gconstpointer b) { const rsc_colocation_t *rsc_constraint1 = (const rsc_colocation_t *)a; const rsc_colocation_t *rsc_constraint2 = (const rsc_colocation_t *)b; if (a == NULL) { return 1; } if (b == NULL) { return -1; } CRM_ASSERT(rsc_constraint1->rsc_lh != NULL); CRM_ASSERT(rsc_constraint1->rsc_rh != NULL); if (rsc_constraint1->rsc_rh->priority > rsc_constraint2->rsc_rh->priority) { return -1; } if (rsc_constraint1->rsc_rh->priority < rsc_constraint2->rsc_rh->priority) { return 1; } /* Process clones before primitives and groups */ if (rsc_constraint1->rsc_rh->variant > rsc_constraint2->rsc_rh->variant) { return -1; } else if (rsc_constraint1->rsc_rh->variant < rsc_constraint2->rsc_rh->variant) { return 1; } return strcmp(rsc_constraint1->rsc_rh->id, rsc_constraint2->rsc_rh->id); } gboolean rsc_colocation_new(const char *id, const char *node_attr, int score, resource_t * rsc_lh, resource_t * rsc_rh, const char *state_lh, const char *state_rh, pe_working_set_t * data_set) { rsc_colocation_t *new_con = NULL; if (rsc_lh == NULL) { crm_config_err("No resource found for LHS %s", id); return FALSE; } else if (rsc_rh == NULL) { crm_config_err("No resource found for RHS of %s", id); return FALSE; } new_con = calloc(1, sizeof(rsc_colocation_t)); if (new_con == NULL) { return FALSE; } if (state_lh == NULL || safe_str_eq(state_lh, RSC_ROLE_STARTED_S)) { state_lh = RSC_ROLE_UNKNOWN_S; } if (state_rh == NULL || safe_str_eq(state_rh, RSC_ROLE_STARTED_S)) { state_rh = RSC_ROLE_UNKNOWN_S; } new_con->id = id; new_con->rsc_lh = rsc_lh; new_con->rsc_rh = rsc_rh; new_con->score = score; 
new_con->role_lh = text2role(state_lh); new_con->role_rh = text2role(state_rh); new_con->node_attribute = node_attr; if (node_attr == NULL) { node_attr = "#" XML_ATTR_UNAME; } pe_rsc_trace(rsc_lh, "%s ==> %s (%s %d)", rsc_lh->id, rsc_rh->id, node_attr, score); rsc_lh->rsc_cons = g_list_insert_sorted(rsc_lh->rsc_cons, new_con, sort_cons_priority_rh); rsc_rh->rsc_cons_lhs = g_list_insert_sorted(rsc_rh->rsc_cons_lhs, new_con, sort_cons_priority_lh); data_set->colocation_constraints = g_list_append(data_set->colocation_constraints, new_con); if (score <= -INFINITY) { new_rsc_order(rsc_lh, CRMD_ACTION_STOP, rsc_rh, CRMD_ACTION_START, pe_order_anti_colocation, data_set); new_rsc_order(rsc_rh, CRMD_ACTION_STOP, rsc_lh, CRMD_ACTION_START, pe_order_anti_colocation, data_set); } return TRUE; } /* LHS before RHS */ int new_rsc_order(resource_t * lh_rsc, const char *lh_task, resource_t * rh_rsc, const char *rh_task, enum pe_ordering type, pe_working_set_t * data_set) { char *lh_key = NULL; char *rh_key = NULL; CRM_CHECK(lh_rsc != NULL, return -1); CRM_CHECK(lh_task != NULL, return -1); CRM_CHECK(rh_rsc != NULL, return -1); CRM_CHECK(rh_task != NULL, return -1); /* We no longer need to test if these reference stonith resources * now that stonithd has access to them even when they're not "running" * if (validate_order_resources(lh_rsc, lh_task, rh_rsc, rh_task)) { return -1; } */ lh_key = generate_op_key(lh_rsc->id, lh_task, 0); rh_key = generate_op_key(rh_rsc->id, rh_task, 0); return custom_action_order(lh_rsc, lh_key, NULL, rh_rsc, rh_key, NULL, type, data_set); } static char * task_from_action_or_key(action_t *action, const char *key) { char *res = NULL; char *rsc_id = NULL; char *op_type = NULL; int interval = 0; if (action) { res = strdup(action->task); } else if (key) { int rc = 0; rc = parse_op_key(key, &rsc_id, &op_type, &interval); if (rc == TRUE) { res = op_type; op_type = NULL; } free(rsc_id); free(op_type); } return res; } /* when order constraints are made between two resources start and stop actions * those constraints have to be mirrored against the corresponding * migration actions to ensure start/stop ordering is preserved during * a migration */ static void handle_migration_ordering(order_constraint_t *order, pe_working_set_t *data_set) { char *lh_task = NULL; char *rh_task = NULL; gboolean rh_migratable; gboolean lh_migratable; if (order->lh_rsc == NULL || order->rh_rsc == NULL) { return; } else if (order->lh_rsc == order->rh_rsc) { return; /* don't mess with those constraints built between parent * resources and the children */ } else if (is_parent(order->lh_rsc, order->rh_rsc)) { return; } else if (is_parent(order->rh_rsc, order->lh_rsc)) { return; } lh_migratable = is_set(order->lh_rsc->flags, pe_rsc_allow_migrate); rh_migratable = is_set(order->rh_rsc->flags, pe_rsc_allow_migrate); /* one of them has to be migratable for * the migrate ordering logic to be applied */ if (lh_migratable == FALSE && rh_migratable == FALSE) { return; } /* at this point we have two resources which allow migrations that have an * order dependency set between them. If those order dependencies involve * start/stop actions, we need to mirror the corresponding migrate actions * so order will be preserved. 
*/ lh_task = task_from_action_or_key(order->lh_action, order->lh_action_task); rh_task = task_from_action_or_key(order->rh_action, order->rh_action_task); if (lh_task == NULL || rh_task == NULL) { goto cleanup_order; } if (safe_str_eq(lh_task, RSC_START) && safe_str_eq(rh_task, RSC_START)) { int flags = pe_order_optional; if (lh_migratable && rh_migratable) { /* A start then B start * A migrate_from then B migrate_to */ custom_action_order(order->lh_rsc, generate_op_key(order->lh_rsc->id, RSC_MIGRATED, 0), NULL, order->rh_rsc, generate_op_key(order->rh_rsc->id, RSC_MIGRATE, 0), NULL, flags, data_set); } if (rh_migratable) { if (lh_migratable) { flags |= pe_order_apply_first_non_migratable; } /* A start then B start * A start then B migrate_to... only if A start is not a part of a migration*/ custom_action_order(order->lh_rsc, generate_op_key(order->lh_rsc->id, RSC_START, 0), NULL, order->rh_rsc, generate_op_key(order->rh_rsc->id, RSC_MIGRATE, 0), NULL, flags, data_set); } } else if (rh_migratable == TRUE && safe_str_eq(lh_task, RSC_STOP) && safe_str_eq(rh_task, RSC_STOP)) { int flags = pe_order_optional; if (lh_migratable) { flags |= pe_order_apply_first_non_migratable; } /* rh side is at the bottom of the stack during a stop. If we have a constraint * stop B then stop A, if B is migrating via stop/start, and A is migrating using migration actions, * we need to enforce that A's migrate_to action occurs after B's stop action. */ custom_action_order(order->lh_rsc, generate_op_key(order->lh_rsc->id, RSC_STOP, 0), NULL, order->rh_rsc, generate_op_key(order->rh_rsc->id, RSC_MIGRATE, 0), NULL, flags, data_set); /* We need to build the stop constraint against migrate_from as well * to account for partial migrations. */ if (order->rh_rsc->partial_migration_target) { custom_action_order(order->lh_rsc, generate_op_key(order->lh_rsc->id, RSC_STOP, 0), NULL, order->rh_rsc, generate_op_key(order->rh_rsc->id, RSC_MIGRATED, 0), NULL, flags, data_set); } } cleanup_order: free(lh_task); free(rh_task); } /* LHS before RHS */ int custom_action_order(resource_t * lh_rsc, char *lh_action_task, action_t * lh_action, resource_t * rh_rsc, char *rh_action_task, action_t * rh_action, enum pe_ordering type, pe_working_set_t * data_set) { order_constraint_t *order = NULL; if (lh_rsc == NULL && lh_action) { lh_rsc = lh_action->rsc; } if (rh_rsc == NULL && rh_action) { rh_rsc = rh_action->rsc; } if ((lh_action == NULL && lh_rsc == NULL) || (rh_action == NULL && rh_rsc == NULL)) { crm_config_err("Invalid inputs %p.%p %p.%p", lh_rsc, lh_action, rh_rsc, rh_action); free(lh_action_task); free(rh_action_task); return -1; } order = calloc(1, sizeof(order_constraint_t)); order->id = data_set->order_id++; order->type = type; order->lh_rsc = lh_rsc; order->rh_rsc = rh_rsc; order->lh_action = lh_action; order->rh_action = rh_action; order->lh_action_task = lh_action_task; order->rh_action_task = rh_action_task; if (order->lh_action_task == NULL && lh_action) { order->lh_action_task = strdup(lh_action->uuid); } if (order->rh_action_task == NULL && rh_action) { order->rh_action_task = strdup(rh_action->uuid); } if (order->lh_rsc == NULL && lh_action) { order->lh_rsc = lh_action->rsc; } if (order->rh_rsc == NULL && rh_action) { order->rh_rsc = rh_action->rsc; } data_set->ordering_constraints = g_list_prepend(data_set->ordering_constraints, order); handle_migration_ordering(order, data_set); return order->id; } enum pe_ordering get_asymmetrical_flags(enum pe_order_kind kind) { enum pe_ordering flags = pe_order_optional; if (kind == 
pe_order_kind_mandatory) { flags |= pe_order_asymmetrical; } else if (kind == pe_order_kind_serialize) { flags |= pe_order_serialize_only; } return flags; } enum pe_ordering get_flags(const char *id, enum pe_order_kind kind, const char *action_first, const char *action_then, gboolean invert) { enum pe_ordering flags = pe_order_optional; if (invert && kind == pe_order_kind_mandatory) { crm_trace("Upgrade %s: implies left", id); flags |= pe_order_implies_first; } else if (kind == pe_order_kind_mandatory) { crm_trace("Upgrade %s: implies right", id); flags |= pe_order_implies_then; if (safe_str_eq(action_first, RSC_START) || safe_str_eq(action_first, RSC_PROMOTE)) { crm_trace("Upgrade %s: runnable", id); flags |= pe_order_runnable_left; } } else if (kind == pe_order_kind_serialize) { flags |= pe_order_serialize_only; } return flags; } static gboolean unpack_order_set(xmlNode * set, enum pe_order_kind kind, resource_t ** rsc, action_t ** begin, action_t ** end, action_t ** inv_begin, action_t ** inv_end, const char *symmetrical, pe_working_set_t * data_set) { xmlNode *xml_rsc = NULL; GListPtr set_iter = NULL; GListPtr resources = NULL; resource_t *last = NULL; resource_t *resource = NULL; int local_kind = kind; gboolean sequential = FALSE; enum pe_ordering flags = pe_order_optional; char *key = NULL; const char *id = ID(set); const char *action = crm_element_value(set, "action"); const char *sequential_s = crm_element_value(set, "sequential"); const char *kind_s = crm_element_value(set, XML_ORDER_ATTR_KIND); /* char *pseudo_id = NULL; char *end_id = NULL; char *begin_id = NULL; */ if (action == NULL) { action = RSC_START; } if (kind_s) { local_kind = get_ordering_type(set); } if (sequential_s == NULL) { sequential_s = "1"; } sequential = crm_is_true(sequential_s); if (crm_is_true(symmetrical)) { flags = get_flags(id, local_kind, action, action, FALSE); } else { flags = get_asymmetrical_flags(local_kind); } for (xml_rsc = __xml_first_child(set); xml_rsc != NULL; xml_rsc = __xml_next(xml_rsc)) { if (crm_str_eq((const char *)xml_rsc->name, XML_TAG_RESOURCE_REF, TRUE)) { EXPAND_CONSTRAINT_IDREF(id, resource, ID(xml_rsc)); resources = g_list_append(resources, resource); } } if (g_list_length(resources) == 1) { crm_trace("Single set: %s", id); *rsc = resource; *end = NULL; *begin = NULL; *inv_end = NULL; *inv_begin = NULL; goto done; } /* pseudo_id = crm_concat(id, action, '-'); end_id = crm_concat(pseudo_id, "end", '-'); begin_id = crm_concat(pseudo_id, "begin", '-'); */ *rsc = NULL; /* *end = get_pseudo_op(end_id, data_set); *begin = get_pseudo_op(begin_id, data_set); free(pseudo_id); free(begin_id); free(end_id); */ set_iter = resources; while (set_iter != NULL) { resource = (resource_t *) set_iter->data; set_iter = set_iter->next; key = generate_op_key(resource->id, action, 0); /* custom_action_order(NULL, NULL, *begin, resource, strdup(key), NULL, flags|pe_order_implies_first_printed, data_set); custom_action_order(resource, strdup(key), NULL, NULL, NULL, *end, flags|pe_order_implies_then_printed, data_set); */ if (local_kind == pe_order_kind_serialize) { /* Serialize before everything that comes after */ GListPtr gIter = NULL; for (gIter = set_iter; gIter != NULL; gIter = gIter->next) { resource_t *then_rsc = (resource_t *) gIter->data; char *then_key = generate_op_key(then_rsc->id, action, 0); custom_action_order(resource, strdup(key), NULL, then_rsc, then_key, NULL, flags, data_set); } } else if (sequential) { if (last != NULL) { new_rsc_order(last, action, resource, action, flags, 
data_set); } last = resource; } free(key); } if (crm_is_true(symmetrical) == FALSE) { goto done; } else if (symmetrical && local_kind == pe_order_kind_serialize) { crm_config_warn("Cannot invert serialized constraint set %s", id); goto done; } else if (local_kind == pe_order_kind_serialize) { goto done; } last = NULL; action = invert_action(action); /* pseudo_id = crm_concat(id, action, '-'); end_id = crm_concat(pseudo_id, "end", '-'); begin_id = crm_concat(pseudo_id, "begin", '-'); *inv_end = get_pseudo_op(end_id, data_set); *inv_begin = get_pseudo_op(begin_id, data_set); free(pseudo_id); free(begin_id); free(end_id); */ flags = get_flags(id, local_kind, action, action, TRUE); set_iter = resources; while (set_iter != NULL) { resource = (resource_t *) set_iter->data; set_iter = set_iter->next; /* key = generate_op_key(resource->id, action, 0); custom_action_order(NULL, NULL, *inv_begin, resource, strdup(key), NULL, flags|pe_order_implies_first_printed, data_set); custom_action_order(resource, key, NULL, NULL, NULL, *inv_end, flags|pe_order_implies_then_printed, data_set); */ if (sequential) { if (last != NULL) { new_rsc_order(resource, action, last, action, flags, data_set); } last = resource; } } done: g_list_free(resources); return TRUE; } static gboolean order_rsc_sets(const char *id, xmlNode * set1, xmlNode * set2, enum pe_order_kind kind, pe_working_set_t * data_set, gboolean invert, gboolean symmetrical) { xmlNode *xml_rsc = NULL; + xmlNode *xml_rsc_2 = NULL; resource_t *rsc_1 = NULL; resource_t *rsc_2 = NULL; const char *action_1 = crm_element_value(set1, "action"); const char *action_2 = crm_element_value(set2, "action"); const char *sequential_1 = crm_element_value(set1, "sequential"); const char *sequential_2 = crm_element_value(set2, "sequential"); const char *require_all_s = crm_element_value(set1, "require-all"); gboolean require_all = require_all_s ? crm_is_true(require_all_s) : TRUE; enum pe_ordering flags = pe_order_none; if (action_1 == NULL) { action_1 = RSC_START; }; if (action_2 == NULL) { action_2 = RSC_START; }; if (invert) { action_1 = invert_action(action_1); action_2 = invert_action(action_2); } if(safe_str_eq(RSC_STOP, action_1) || safe_str_eq(RSC_DEMOTE, action_1)) { /* Assuming: A -> ( B || C) -> D * The one-or-more logic only applies during the start/promote phase * During shutdown neither B nor can shutdown until D is down, so simply turn require_all back on. */ require_all = TRUE; } if (symmetrical == FALSE) { flags = get_asymmetrical_flags(kind); } else { flags = get_flags(id, kind, action_2, action_1, invert); } /* If we have an un-ordered set1, whether it is sequential or not is irrelevant in regards to set2. */ if (!require_all) { char *task = crm_concat(CRM_OP_RELAXED_SET, ID(set1), ':'); action_t *unordered_action = get_pseudo_op(task, data_set); free(task); update_action_flags(unordered_action, pe_action_requires_any); for (xml_rsc = __xml_first_child(set1); xml_rsc != NULL; xml_rsc = __xml_next(xml_rsc)) { - xmlNode *xml_rsc_2 = NULL; - if (!crm_str_eq((const char *)xml_rsc->name, XML_TAG_RESOURCE_REF, TRUE)) { continue; } EXPAND_CONSTRAINT_IDREF(id, rsc_1, ID(xml_rsc)); /* Add an ordering constraint between every element in set1 and the pseudo action. * If any action in set1 is runnable the pseudo action will be runnable. 
*/ custom_action_order(rsc_1, generate_op_key(rsc_1->id, action_1, 0), NULL, NULL, NULL, unordered_action, pe_order_one_or_more | pe_order_implies_then_printed, data_set); + } + for (xml_rsc_2 = __xml_first_child(set2); xml_rsc_2 != NULL; xml_rsc_2 = __xml_next(xml_rsc_2)) { + if (!crm_str_eq((const char *)xml_rsc_2->name, XML_TAG_RESOURCE_REF, TRUE)) { + continue; + } - for (xml_rsc_2 = __xml_first_child(set2); xml_rsc_2 != NULL; - xml_rsc_2 = __xml_next(xml_rsc_2)) { - if (!crm_str_eq((const char *)xml_rsc_2->name, XML_TAG_RESOURCE_REF, TRUE)) { - continue; - } - - EXPAND_CONSTRAINT_IDREF(id, rsc_2, ID(xml_rsc_2)); + EXPAND_CONSTRAINT_IDREF(id, rsc_2, ID(xml_rsc_2)); - /* Add an ordering constraint between the pseudo action and every element in set2. - * If the pseudo action is runnable, every action in set2 will be runnable */ - custom_action_order(NULL, NULL, unordered_action, - rsc_2, generate_op_key(rsc_2->id, action_2, 0), NULL, - flags | pe_order_runnable_left, data_set); - } + /* Add an ordering constraint between the pseudo action and every element in set2. + * If the pseudo action is runnable, every action in set2 will be runnable */ + custom_action_order(NULL, NULL, unordered_action, + rsc_2, generate_op_key(rsc_2->id, action_2, 0), NULL, + flags | pe_order_runnable_left, data_set); } return TRUE; } if (crm_is_true(sequential_1)) { if (invert == FALSE) { /* get the last one */ const char *rid = NULL; for (xml_rsc = __xml_first_child(set1); xml_rsc != NULL; xml_rsc = __xml_next(xml_rsc)) { if (crm_str_eq((const char *)xml_rsc->name, XML_TAG_RESOURCE_REF, TRUE)) { rid = ID(xml_rsc); } } EXPAND_CONSTRAINT_IDREF(id, rsc_1, rid); } else { /* get the first one */ for (xml_rsc = __xml_first_child(set1); xml_rsc != NULL; xml_rsc = __xml_next(xml_rsc)) { if (crm_str_eq((const char *)xml_rsc->name, XML_TAG_RESOURCE_REF, TRUE)) { EXPAND_CONSTRAINT_IDREF(id, rsc_1, ID(xml_rsc)); break; } } } } if (crm_is_true(sequential_2)) { if (invert == FALSE) { /* get the first one */ for (xml_rsc = __xml_first_child(set2); xml_rsc != NULL; xml_rsc = __xml_next(xml_rsc)) { if (crm_str_eq((const char *)xml_rsc->name, XML_TAG_RESOURCE_REF, TRUE)) { EXPAND_CONSTRAINT_IDREF(id, rsc_2, ID(xml_rsc)); break; } } } else { /* get the last one */ const char *rid = NULL; for (xml_rsc = __xml_first_child(set2); xml_rsc != NULL; xml_rsc = __xml_next(xml_rsc)) { if (crm_str_eq((const char *)xml_rsc->name, XML_TAG_RESOURCE_REF, TRUE)) { rid = ID(xml_rsc); } } EXPAND_CONSTRAINT_IDREF(id, rsc_2, rid); } } if (rsc_1 != NULL && rsc_2 != NULL) { new_rsc_order(rsc_1, action_1, rsc_2, action_2, flags, data_set); } else if (rsc_1 != NULL) { for (xml_rsc = __xml_first_child(set2); xml_rsc != NULL; xml_rsc = __xml_next(xml_rsc)) { if (crm_str_eq((const char *)xml_rsc->name, XML_TAG_RESOURCE_REF, TRUE)) { EXPAND_CONSTRAINT_IDREF(id, rsc_2, ID(xml_rsc)); new_rsc_order(rsc_1, action_1, rsc_2, action_2, flags, data_set); } } } else if (rsc_2 != NULL) { xmlNode *xml_rsc = NULL; for (xml_rsc = __xml_first_child(set1); xml_rsc != NULL; xml_rsc = __xml_next(xml_rsc)) { if (crm_str_eq((const char *)xml_rsc->name, XML_TAG_RESOURCE_REF, TRUE)) { EXPAND_CONSTRAINT_IDREF(id, rsc_1, ID(xml_rsc)); new_rsc_order(rsc_1, action_1, rsc_2, action_2, flags, data_set); } } } else { for (xml_rsc = __xml_first_child(set1); xml_rsc != NULL; xml_rsc = __xml_next(xml_rsc)) { if (crm_str_eq((const char *)xml_rsc->name, XML_TAG_RESOURCE_REF, TRUE)) { xmlNode *xml_rsc_2 = NULL; EXPAND_CONSTRAINT_IDREF(id, rsc_1, ID(xml_rsc)); for (xml_rsc_2 = 
__xml_first_child(set2); xml_rsc_2 != NULL; xml_rsc_2 = __xml_next(xml_rsc_2)) { if (crm_str_eq((const char *)xml_rsc_2->name, XML_TAG_RESOURCE_REF, TRUE)) { EXPAND_CONSTRAINT_IDREF(id, rsc_2, ID(xml_rsc_2)); new_rsc_order(rsc_1, action_1, rsc_2, action_2, flags, data_set); } } } } } return TRUE; } static gboolean unpack_order_tags(xmlNode * xml_obj, xmlNode ** expanded_xml, pe_working_set_t * data_set) { const char *id = NULL; const char *id_first = NULL; const char *id_then = NULL; const char *action_first = NULL; const char *action_then = NULL; resource_t *rsc_first = NULL; resource_t *rsc_then = NULL; tag_t *tag_first = NULL; tag_t *tag_then = NULL; xmlNode *new_xml = NULL; xmlNode *rsc_set_first = NULL; xmlNode *rsc_set_then = NULL; gboolean any_sets = FALSE; *expanded_xml = NULL; if (xml_obj == NULL) { crm_config_err("No constraint object to process."); return FALSE; } id = crm_element_value(xml_obj, XML_ATTR_ID); if (id == NULL) { crm_config_err("%s constraint must have an id", crm_element_name(xml_obj)); return FALSE; } /* Attempt to expand any template/tag references in possible resource sets. */ expand_tags_in_sets(xml_obj, &new_xml, data_set); if (new_xml) { /* There are resource sets referencing templates/tags. Return with the expanded XML. */ crm_log_xml_trace(new_xml, "Expanded rsc_order..."); *expanded_xml = new_xml; return TRUE; } id_first = crm_element_value(xml_obj, XML_ORDER_ATTR_FIRST); id_then = crm_element_value(xml_obj, XML_ORDER_ATTR_THEN); if (id_first == NULL || id_then == NULL) { return TRUE; } if (valid_resource_or_tag(data_set, id_first, &rsc_first, &tag_first) == FALSE) { crm_config_err("Constraint '%s': Invalid reference to '%s'", id, id_first); return FALSE; } if (valid_resource_or_tag(data_set, id_then, &rsc_then, &tag_then) == FALSE) { crm_config_err("Constraint '%s': Invalid reference to '%s'", id, id_then); return FALSE; } if (rsc_first && rsc_then) { /* Neither side references any template/tag. */ return TRUE; } action_first = crm_element_value(xml_obj, XML_ORDER_ATTR_FIRST_ACTION); action_then = crm_element_value(xml_obj, XML_ORDER_ATTR_THEN_ACTION); new_xml = copy_xml(xml_obj); /* Convert the template/tag reference in "first" into a resource_set under the order constraint. */ if (tag_to_set(new_xml, &rsc_set_first, XML_ORDER_ATTR_FIRST, TRUE, data_set) == FALSE) { free_xml(new_xml); return FALSE; } if (rsc_set_first) { if (action_first) { /* A "first-action" is specified. Move it into the converted resource_set as an "action" attribute. */ crm_xml_add(rsc_set_first, "action", action_first); xml_remove_prop(new_xml, XML_ORDER_ATTR_FIRST_ACTION); } any_sets = TRUE; } /* Convert the template/tag reference in "then" into a resource_set under the order constraint. */ if (tag_to_set(new_xml, &rsc_set_then, XML_ORDER_ATTR_THEN, TRUE, data_set) == FALSE) { free_xml(new_xml); return FALSE; } if (rsc_set_then) { if (action_then) { /* A "then-action" is specified. Move it into the converted resource_set as an "action" attribute. 
*/ crm_xml_add(rsc_set_then, "action", action_then); xml_remove_prop(new_xml, XML_ORDER_ATTR_THEN_ACTION); } any_sets = TRUE; } if (any_sets) { crm_log_xml_trace(new_xml, "Expanded rsc_order..."); *expanded_xml = new_xml; } else { free_xml(new_xml); } return TRUE; } gboolean unpack_rsc_order(xmlNode * xml_obj, pe_working_set_t * data_set) { gboolean any_sets = FALSE; resource_t *rsc = NULL; /* resource_t *last_rsc = NULL; */ action_t *set_end = NULL; action_t *set_begin = NULL; action_t *set_inv_end = NULL; action_t *set_inv_begin = NULL; xmlNode *set = NULL; xmlNode *last = NULL; xmlNode *orig_xml = NULL; xmlNode *expanded_xml = NULL; /* action_t *last_end = NULL; action_t *last_begin = NULL; action_t *last_inv_end = NULL; action_t *last_inv_begin = NULL; */ const char *id = crm_element_value(xml_obj, XML_ATTR_ID); const char *invert = crm_element_value(xml_obj, XML_CONS_ATTR_SYMMETRICAL); enum pe_order_kind kind = get_ordering_type(xml_obj); gboolean invert_bool = TRUE; gboolean rc = TRUE; if (invert == NULL) { invert = "true"; } invert_bool = crm_is_true(invert); rc = unpack_order_tags(xml_obj, &expanded_xml, data_set); if (expanded_xml) { orig_xml = xml_obj; xml_obj = expanded_xml; } else if (rc == FALSE) { return FALSE; } for (set = __xml_first_child(xml_obj); set != NULL; set = __xml_next(set)) { if (crm_str_eq((const char *)set->name, XML_CONS_TAG_RSC_SET, TRUE)) { any_sets = TRUE; set = expand_idref(set, data_set->input); if (unpack_order_set(set, kind, &rsc, &set_begin, &set_end, &set_inv_begin, &set_inv_end, invert, data_set) == FALSE) { return FALSE; /* Expand orders in order_rsc_sets() instead of via pseudo actions. */ /* } else if(last) { const char *set_action = crm_element_value(set, "action"); const char *last_action = crm_element_value(last, "action"); enum pe_ordering flags = get_flags(id, kind, last_action, set_action, FALSE); if(!set_action) { set_action = RSC_START; } if(!last_action) { last_action = RSC_START; } if(rsc == NULL && last_rsc == NULL) { order_actions(last_end, set_begin, flags); } else { custom_action_order( last_rsc, null_or_opkey(last_rsc, last_action), last_end, rsc, null_or_opkey(rsc, set_action), set_begin, flags, data_set); } if(crm_is_true(invert)) { set_action = invert_action(set_action); last_action = invert_action(last_action); flags = get_flags(id, kind, last_action, set_action, TRUE); if(rsc == NULL && last_rsc == NULL) { order_actions(last_inv_begin, set_inv_end, flags); } else { custom_action_order( last_rsc, null_or_opkey(last_rsc, last_action), last_inv_begin, rsc, null_or_opkey(rsc, set_action), set_inv_end, flags, data_set); } } */ } else if ( /* never called -- Now call it for supporting clones in resource sets */ last) { if (order_rsc_sets(id, last, set, kind, data_set, FALSE, invert_bool) == FALSE) { return FALSE; } if (invert_bool && order_rsc_sets(id, set, last, kind, data_set, TRUE, invert_bool) == FALSE) { return FALSE; } } last = set; /* last_rsc = rsc; last_end = set_end; last_begin = set_begin; last_inv_end = set_inv_end; last_inv_begin = set_inv_begin; */ } } if (expanded_xml) { free_xml(expanded_xml); xml_obj = orig_xml; } if (any_sets == FALSE) { return unpack_simple_rsc_order(xml_obj, data_set); } return TRUE; } static gboolean unpack_colocation_set(xmlNode * set, int score, pe_working_set_t * data_set) { xmlNode *xml_rsc = NULL; resource_t *with = NULL; resource_t *resource = NULL; const char *set_id = ID(set); const char *role = crm_element_value(set, "role"); const char *sequential = crm_element_value(set, "sequential"); 
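    /* Sketch of the kind of XML this function unpacks (IDs here are
     * hypothetical, for illustration only):
     *   <rsc_colocation id="coloc-1" score="INFINITY">
     *     <resource_set id="coloc-1-set" sequential="true" role="Master">
     *       <resource_ref id="rscA"/>
     *       <resource_ref id="rscB"/>
     *     </resource_set>
     *   </rsc_colocation>
     * With a non-negative score and the default ordering ("group"), each member
     * is colocated with the member listed before it; with a negative score,
     * every member is anti-colocated with every other member of the set. */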
const char *ordering = crm_element_value(set, "ordering"); int local_score = score; const char *score_s = crm_element_value(set, XML_RULE_ATTR_SCORE); if (score_s) { local_score = char2score(score_s); } if(ordering == NULL) { ordering = "group"; } if (sequential != NULL && crm_is_true(sequential) == FALSE) { return TRUE; } else if (local_score >= 0 && safe_str_eq(ordering, "group")) { for (xml_rsc = __xml_first_child(set); xml_rsc != NULL; xml_rsc = __xml_next(xml_rsc)) { if (crm_str_eq((const char *)xml_rsc->name, XML_TAG_RESOURCE_REF, TRUE)) { EXPAND_CONSTRAINT_IDREF(set_id, resource, ID(xml_rsc)); if (with != NULL) { pe_rsc_trace(resource, "Colocating %s with %s", resource->id, with->id); rsc_colocation_new(set_id, NULL, local_score, resource, with, role, role, data_set); } with = resource; } } } else if (local_score >= 0) { resource_t *last = NULL; for (xml_rsc = __xml_first_child(set); xml_rsc != NULL; xml_rsc = __xml_next(xml_rsc)) { if (crm_str_eq((const char *)xml_rsc->name, XML_TAG_RESOURCE_REF, TRUE)) { EXPAND_CONSTRAINT_IDREF(set_id, resource, ID(xml_rsc)); if (last != NULL) { pe_rsc_trace(resource, "Colocating %s with %s", last->id, resource->id); rsc_colocation_new(set_id, NULL, local_score, last, resource, role, role, data_set); } last = resource; } } } else { /* Anti-colocating with every prior resource is * the only way to ensure the intuitive result * (ie. that no-one in the set can run with anyone * else in the set) */ for (xml_rsc = __xml_first_child(set); xml_rsc != NULL; xml_rsc = __xml_next(xml_rsc)) { if (crm_str_eq((const char *)xml_rsc->name, XML_TAG_RESOURCE_REF, TRUE)) { xmlNode *xml_rsc_with = NULL; EXPAND_CONSTRAINT_IDREF(set_id, resource, ID(xml_rsc)); for (xml_rsc_with = __xml_first_child(set); xml_rsc_with != NULL; xml_rsc_with = __xml_next(xml_rsc_with)) { if (crm_str_eq((const char *)xml_rsc_with->name, XML_TAG_RESOURCE_REF, TRUE)) { if (safe_str_eq(resource->id, ID(xml_rsc_with))) { break; } else if (resource == NULL) { crm_config_err("%s: No resource found for %s", set_id, ID(xml_rsc_with)); return FALSE; } EXPAND_CONSTRAINT_IDREF(set_id, with, ID(xml_rsc_with)); pe_rsc_trace(resource, "Anti-Colocating %s with %s", resource->id, with->id); rsc_colocation_new(set_id, NULL, local_score, resource, with, role, role, data_set); } } } } } return TRUE; } static gboolean colocate_rsc_sets(const char *id, xmlNode * set1, xmlNode * set2, int score, pe_working_set_t * data_set) { xmlNode *xml_rsc = NULL; resource_t *rsc_1 = NULL; resource_t *rsc_2 = NULL; const char *role_1 = crm_element_value(set1, "role"); const char *role_2 = crm_element_value(set2, "role"); const char *sequential_1 = crm_element_value(set1, "sequential"); const char *sequential_2 = crm_element_value(set2, "sequential"); if (sequential_1 == NULL || crm_is_true(sequential_1)) { /* get the first one */ for (xml_rsc = __xml_first_child(set1); xml_rsc != NULL; xml_rsc = __xml_next(xml_rsc)) { if (crm_str_eq((const char *)xml_rsc->name, XML_TAG_RESOURCE_REF, TRUE)) { EXPAND_CONSTRAINT_IDREF(id, rsc_1, ID(xml_rsc)); break; } } } if (sequential_2 == NULL || crm_is_true(sequential_2)) { /* get the last one */ const char *rid = NULL; for (xml_rsc = __xml_first_child(set2); xml_rsc != NULL; xml_rsc = __xml_next(xml_rsc)) { if (crm_str_eq((const char *)xml_rsc->name, XML_TAG_RESOURCE_REF, TRUE)) { rid = ID(xml_rsc); } } EXPAND_CONSTRAINT_IDREF(id, rsc_2, rid); } if (rsc_1 != NULL && rsc_2 != NULL) { rsc_colocation_new(id, NULL, score, rsc_1, rsc_2, role_1, role_2, data_set); } else if (rsc_1 != NULL) { for 
(xml_rsc = __xml_first_child(set2); xml_rsc != NULL; xml_rsc = __xml_next(xml_rsc)) { if (crm_str_eq((const char *)xml_rsc->name, XML_TAG_RESOURCE_REF, TRUE)) { EXPAND_CONSTRAINT_IDREF(id, rsc_2, ID(xml_rsc)); rsc_colocation_new(id, NULL, score, rsc_1, rsc_2, role_1, role_2, data_set); } } } else if (rsc_2 != NULL) { for (xml_rsc = __xml_first_child(set1); xml_rsc != NULL; xml_rsc = __xml_next(xml_rsc)) { if (crm_str_eq((const char *)xml_rsc->name, XML_TAG_RESOURCE_REF, TRUE)) { EXPAND_CONSTRAINT_IDREF(id, rsc_1, ID(xml_rsc)); rsc_colocation_new(id, NULL, score, rsc_1, rsc_2, role_1, role_2, data_set); } } } else { for (xml_rsc = __xml_first_child(set1); xml_rsc != NULL; xml_rsc = __xml_next(xml_rsc)) { if (crm_str_eq((const char *)xml_rsc->name, XML_TAG_RESOURCE_REF, TRUE)) { xmlNode *xml_rsc_2 = NULL; EXPAND_CONSTRAINT_IDREF(id, rsc_1, ID(xml_rsc)); for (xml_rsc_2 = __xml_first_child(set2); xml_rsc_2 != NULL; xml_rsc_2 = __xml_next(xml_rsc_2)) { if (crm_str_eq((const char *)xml_rsc_2->name, XML_TAG_RESOURCE_REF, TRUE)) { EXPAND_CONSTRAINT_IDREF(id, rsc_2, ID(xml_rsc_2)); rsc_colocation_new(id, NULL, score, rsc_1, rsc_2, role_1, role_2, data_set); } } } } } return TRUE; } static gboolean unpack_simple_colocation(xmlNode * xml_obj, pe_working_set_t * data_set) { int score_i = 0; const char *id = crm_element_value(xml_obj, XML_ATTR_ID); const char *score = crm_element_value(xml_obj, XML_RULE_ATTR_SCORE); const char *id_lh = crm_element_value(xml_obj, XML_COLOC_ATTR_SOURCE); const char *id_rh = crm_element_value(xml_obj, XML_COLOC_ATTR_TARGET); const char *state_lh = crm_element_value(xml_obj, XML_COLOC_ATTR_SOURCE_ROLE); const char *state_rh = crm_element_value(xml_obj, XML_COLOC_ATTR_TARGET_ROLE); const char *instance_lh = crm_element_value(xml_obj, XML_COLOC_ATTR_SOURCE_INSTANCE); const char *instance_rh = crm_element_value(xml_obj, XML_COLOC_ATTR_TARGET_INSTANCE); const char *attr = crm_element_value(xml_obj, XML_COLOC_ATTR_NODE_ATTR); const char *symmetrical = crm_element_value(xml_obj, XML_CONS_ATTR_SYMMETRICAL); resource_t *rsc_lh = pe_find_constraint_resource(data_set->resources, id_lh); resource_t *rsc_rh = pe_find_constraint_resource(data_set->resources, id_rh); if (rsc_lh == NULL) { crm_config_err("Invalid constraint '%s': No resource named '%s'", id, id_lh); return FALSE; } else if (rsc_rh == NULL) { crm_config_err("Invalid constraint '%s': No resource named '%s'", id, id_rh); return FALSE; } else if (instance_lh && rsc_lh->variant < pe_clone) { crm_config_err ("Invalid constraint '%s': Resource '%s' is not a clone but instance %s was requested", id, id_lh, instance_lh); return FALSE; } else if (instance_rh && rsc_rh->variant < pe_clone) { crm_config_err ("Invalid constraint '%s': Resource '%s' is not a clone but instance %s was requested", id, id_rh, instance_rh); return FALSE; } if (instance_lh) { rsc_lh = find_clone_instance(rsc_lh, instance_lh, data_set); if (rsc_lh == NULL) { crm_config_warn("Invalid constraint '%s': No instance '%s' of '%s'", id, instance_lh, id_lh); return FALSE; } } if (instance_rh) { rsc_rh = find_clone_instance(rsc_rh, instance_rh, data_set); if (rsc_rh == NULL) { crm_config_warn("Invalid constraint '%s': No instance '%s' of '%s'", id, instance_rh, id_rh); return FALSE; } } if (crm_is_true(symmetrical)) { crm_config_warn("The %s colocation constraint attribute has been removed." 
" It didn't do what you think it did anyway.", XML_CONS_ATTR_SYMMETRICAL); } if (score) { score_i = char2score(score); } rsc_colocation_new(id, attr, score_i, rsc_lh, rsc_rh, state_lh, state_rh, data_set); return TRUE; } static gboolean unpack_colocation_tags(xmlNode * xml_obj, xmlNode ** expanded_xml, pe_working_set_t * data_set) { const char *id = NULL; const char *id_lh = NULL; const char *id_rh = NULL; const char *state_lh = NULL; const char *state_rh = NULL; resource_t *rsc_lh = NULL; resource_t *rsc_rh = NULL; tag_t *tag_lh = NULL; tag_t *tag_rh = NULL; xmlNode *new_xml = NULL; xmlNode *rsc_set_lh = NULL; xmlNode *rsc_set_rh = NULL; gboolean any_sets = FALSE; *expanded_xml = NULL; if (xml_obj == NULL) { crm_config_err("No constraint object to process."); return FALSE; } id = crm_element_value(xml_obj, XML_ATTR_ID); if (id == NULL) { crm_config_err("%s constraint must have an id", crm_element_name(xml_obj)); return FALSE; } /* Attempt to expand any template/tag references in possible resource sets. */ expand_tags_in_sets(xml_obj, &new_xml, data_set); if (new_xml) { /* There are resource sets referencing templates/tags. Return with the expanded XML. */ crm_log_xml_trace(new_xml, "Expanded rsc_colocation..."); *expanded_xml = new_xml; return TRUE; } id_lh = crm_element_value(xml_obj, XML_COLOC_ATTR_SOURCE); id_rh = crm_element_value(xml_obj, XML_COLOC_ATTR_TARGET); if (id_lh == NULL || id_rh == NULL) { return TRUE; } if (valid_resource_or_tag(data_set, id_lh, &rsc_lh, &tag_lh) == FALSE) { crm_config_err("Constraint '%s': Invalid reference to '%s'", id, id_lh); return FALSE; } if (valid_resource_or_tag(data_set, id_rh, &rsc_rh, &tag_rh) == FALSE) { crm_config_err("Constraint '%s': Invalid reference to '%s'", id, id_rh); return FALSE; } if (rsc_lh && rsc_rh) { /* Neither side references any template/tag. */ return TRUE; } if (tag_lh && tag_rh) { /* A colocation constraint between two templates/tags makes no sense. */ crm_config_err("Either LHS or RHS of %s should be a normal resource instead of a template/tag", id); return FALSE; } state_lh = crm_element_value(xml_obj, XML_COLOC_ATTR_SOURCE_ROLE); state_rh = crm_element_value(xml_obj, XML_COLOC_ATTR_TARGET_ROLE); new_xml = copy_xml(xml_obj); /* Convert the template/tag reference in "rsc" into a resource_set under the colocation constraint. */ if (tag_to_set(new_xml, &rsc_set_lh, XML_COLOC_ATTR_SOURCE, TRUE, data_set) == FALSE) { free_xml(new_xml); return FALSE; } if (rsc_set_lh) { if (state_lh) { /* A "rsc-role" is specified. Move it into the converted resource_set as a "role"" attribute. */ crm_xml_add(rsc_set_lh, "role", state_lh); xml_remove_prop(new_xml, XML_COLOC_ATTR_SOURCE_ROLE); } any_sets = TRUE; } /* Convert the template/tag reference in "with-rsc" into a resource_set under the colocation constraint. */ if (tag_to_set(new_xml, &rsc_set_rh, XML_COLOC_ATTR_TARGET, TRUE, data_set) == FALSE) { free_xml(new_xml); return FALSE; } if (rsc_set_rh) { if (state_rh) { /* A "with-rsc-role" is specified. Move it into the converted resource_set as a "role"" attribute. 
*/ crm_xml_add(rsc_set_rh, "role", state_rh); xml_remove_prop(new_xml, XML_COLOC_ATTR_TARGET_ROLE); } any_sets = TRUE; } if (any_sets) { crm_log_xml_trace(new_xml, "Expanded rsc_colocation..."); *expanded_xml = new_xml; } else { free_xml(new_xml); } return TRUE; } gboolean unpack_rsc_colocation(xmlNode * xml_obj, pe_working_set_t * data_set) { int score_i = 0; xmlNode *set = NULL; xmlNode *last = NULL; gboolean any_sets = FALSE; xmlNode *orig_xml = NULL; xmlNode *expanded_xml = NULL; const char *id = crm_element_value(xml_obj, XML_ATTR_ID); const char *score = crm_element_value(xml_obj, XML_RULE_ATTR_SCORE); gboolean rc = TRUE; if (score) { score_i = char2score(score); } rc = unpack_colocation_tags(xml_obj, &expanded_xml, data_set); if (expanded_xml) { orig_xml = xml_obj; xml_obj = expanded_xml; } else if (rc == FALSE) { return FALSE; } for (set = __xml_first_child(xml_obj); set != NULL; set = __xml_next(set)) { if (crm_str_eq((const char *)set->name, XML_CONS_TAG_RSC_SET, TRUE)) { any_sets = TRUE; set = expand_idref(set, data_set->input); if (unpack_colocation_set(set, score_i, data_set) == FALSE) { return FALSE; } else if (last && colocate_rsc_sets(id, last, set, score_i, data_set) == FALSE) { return FALSE; } last = set; } } if (expanded_xml) { free_xml(expanded_xml); xml_obj = orig_xml; } if (any_sets == FALSE) { return unpack_simple_colocation(xml_obj, data_set); } return TRUE; } gboolean rsc_ticket_new(const char *id, resource_t * rsc_lh, ticket_t * ticket, const char *state_lh, const char *loss_policy, pe_working_set_t * data_set) { rsc_ticket_t *new_rsc_ticket = NULL; if (rsc_lh == NULL) { crm_config_err("No resource found for LHS %s", id); return FALSE; } new_rsc_ticket = calloc(1, sizeof(rsc_ticket_t)); if (new_rsc_ticket == NULL) { return FALSE; } if (state_lh == NULL || safe_str_eq(state_lh, RSC_ROLE_STARTED_S)) { state_lh = RSC_ROLE_UNKNOWN_S; } new_rsc_ticket->id = id; new_rsc_ticket->ticket = ticket; new_rsc_ticket->rsc_lh = rsc_lh; new_rsc_ticket->role_lh = text2role(state_lh); if (safe_str_eq(loss_policy, "fence")) { crm_debug("On loss of ticket '%s': Fence the nodes running %s (%s)", new_rsc_ticket->ticket->id, new_rsc_ticket->rsc_lh->id, role2text(new_rsc_ticket->role_lh)); new_rsc_ticket->loss_policy = loss_ticket_fence; } else if (safe_str_eq(loss_policy, "freeze")) { crm_debug("On loss of ticket '%s': Freeze %s (%s)", new_rsc_ticket->ticket->id, new_rsc_ticket->rsc_lh->id, role2text(new_rsc_ticket->role_lh)); new_rsc_ticket->loss_policy = loss_ticket_freeze; } else if (safe_str_eq(loss_policy, "demote")) { crm_debug("On loss of ticket '%s': Demote %s (%s)", new_rsc_ticket->ticket->id, new_rsc_ticket->rsc_lh->id, role2text(new_rsc_ticket->role_lh)); new_rsc_ticket->loss_policy = loss_ticket_demote; } else if (safe_str_eq(loss_policy, "stop")) { crm_debug("On loss of ticket '%s': Stop %s (%s)", new_rsc_ticket->ticket->id, new_rsc_ticket->rsc_lh->id, role2text(new_rsc_ticket->role_lh)); new_rsc_ticket->loss_policy = loss_ticket_stop; } else { if (new_rsc_ticket->role_lh == RSC_ROLE_MASTER) { crm_debug("On loss of ticket '%s': Default to demote %s (%s)", new_rsc_ticket->ticket->id, new_rsc_ticket->rsc_lh->id, role2text(new_rsc_ticket->role_lh)); new_rsc_ticket->loss_policy = loss_ticket_demote; } else { crm_debug("On loss of ticket '%s': Default to stop %s (%s)", new_rsc_ticket->ticket->id, new_rsc_ticket->rsc_lh->id, role2text(new_rsc_ticket->role_lh)); new_rsc_ticket->loss_policy = loss_ticket_stop; } } pe_rsc_trace(rsc_lh, "%s (%s) ==> %s", rsc_lh->id, 
role2text(new_rsc_ticket->role_lh), ticket->id); rsc_lh->rsc_tickets = g_list_append(rsc_lh->rsc_tickets, new_rsc_ticket); data_set->ticket_constraints = g_list_append(data_set->ticket_constraints, new_rsc_ticket); if (new_rsc_ticket->ticket->granted == FALSE || new_rsc_ticket->ticket->standby) { rsc_ticket_constraint(rsc_lh, new_rsc_ticket, data_set); } return TRUE; }
static gboolean unpack_rsc_ticket_set(xmlNode * set, ticket_t * ticket, const char *loss_policy, pe_working_set_t * data_set) { xmlNode *xml_rsc = NULL; resource_t *resource = NULL; const char *set_id = ID(set); const char *role = crm_element_value(set, "role"); if (set == NULL) { crm_config_err("No resource_set object to process."); return FALSE; } if (set_id == NULL) { crm_config_err("resource_set must have an id"); return FALSE; } if (ticket == NULL) { crm_config_err("No dependent ticket specified for '%s'", set_id); return FALSE; } for (xml_rsc = __xml_first_child(set); xml_rsc != NULL; xml_rsc = __xml_next(xml_rsc)) { if (crm_str_eq((const char *)xml_rsc->name, XML_TAG_RESOURCE_REF, TRUE)) { EXPAND_CONSTRAINT_IDREF(set_id, resource, ID(xml_rsc)); pe_rsc_trace(resource, "Resource '%s' depends on ticket '%s'", resource->id, ticket->id); rsc_ticket_new(set_id, resource, ticket, role, loss_policy, data_set); } } return TRUE; }
static gboolean unpack_simple_rsc_ticket(xmlNode * xml_obj, pe_working_set_t * data_set) { const char *id = crm_element_value(xml_obj, XML_ATTR_ID); const char *ticket_str = crm_element_value(xml_obj, XML_TICKET_ATTR_TICKET); const char *loss_policy = crm_element_value(xml_obj, XML_TICKET_ATTR_LOSS_POLICY); ticket_t *ticket = NULL; const char *id_lh = crm_element_value(xml_obj, XML_COLOC_ATTR_SOURCE); const char *state_lh = crm_element_value(xml_obj, XML_COLOC_ATTR_SOURCE_ROLE); const char *instance_lh = crm_element_value(xml_obj, XML_COLOC_ATTR_SOURCE_INSTANCE); resource_t *rsc_lh = NULL; if (xml_obj == NULL) { crm_config_err("No rsc_ticket constraint object to process."); return FALSE; } if (id == NULL) { crm_config_err("%s constraint must have an id", crm_element_name(xml_obj)); return FALSE; } if (ticket_str == NULL) { crm_config_err("Invalid constraint '%s': No ticket specified", id); return FALSE; } else { ticket = g_hash_table_lookup(data_set->tickets, ticket_str); } if (ticket == NULL) { crm_config_err("Invalid constraint '%s': No ticket named '%s'", id, ticket_str); return FALSE; } if (id_lh == NULL) { crm_config_err("Invalid constraint '%s': No resource specified", id); return FALSE; } else { rsc_lh = pe_find_constraint_resource(data_set->resources, id_lh); } if (rsc_lh == NULL) { crm_config_err("Invalid constraint '%s': No resource named '%s'", id, id_lh); return FALSE; } else if (instance_lh && rsc_lh->variant < pe_clone) { crm_config_err ("Invalid constraint '%s': Resource '%s' is not a clone but instance %s was requested", id, id_lh, instance_lh); return FALSE; } if (instance_lh) { rsc_lh = find_clone_instance(rsc_lh, instance_lh, data_set); if (rsc_lh == NULL) { crm_config_warn("Invalid constraint '%s': No instance '%s' of '%s'", id, instance_lh, id_lh); return FALSE; } } rsc_ticket_new(id, rsc_lh, ticket, state_lh, loss_policy, data_set); return TRUE; }
static gboolean unpack_rsc_ticket_tags(xmlNode * xml_obj, xmlNode ** expanded_xml, pe_working_set_t * data_set) { const char *id = NULL; const char *id_lh = NULL; const char *state_lh = NULL; resource_t *rsc_lh = NULL; tag_t *tag_lh = NULL; xmlNode *new_xml = NULL; xmlNode *rsc_set_lh = NULL; gboolean any_sets = FALSE; *expanded_xml = 
NULL; if (xml_obj == NULL) { crm_config_err("No constraint object to process."); return FALSE; } id = crm_element_value(xml_obj, XML_ATTR_ID); if (id == NULL) { crm_config_err("%s constraint must have an id", crm_element_name(xml_obj)); return FALSE; } /* Attempt to expand any template/tag references in possible resource sets. */ expand_tags_in_sets(xml_obj, &new_xml, data_set); if (new_xml) { /* There are resource sets referencing templates/tags. Return with the expanded XML. */ crm_log_xml_trace(new_xml, "Expanded rsc_ticket..."); *expanded_xml = new_xml; return TRUE; } id_lh = crm_element_value(xml_obj, XML_COLOC_ATTR_SOURCE); if (id_lh == NULL) { return TRUE; } if (valid_resource_or_tag(data_set, id_lh, &rsc_lh, &tag_lh) == FALSE) { crm_config_err("Constraint '%s': Invalid reference to '%s'", id, id_lh); return FALSE; } else if (rsc_lh) { /* No template/tag is referenced. */ return TRUE; } state_lh = crm_element_value(xml_obj, XML_COLOC_ATTR_SOURCE_ROLE); new_xml = copy_xml(xml_obj); /* Convert the template/tag reference in "rsc" into a resource_set under the rsc_ticket constraint. */ if (tag_to_set(new_xml, &rsc_set_lh, XML_COLOC_ATTR_SOURCE, FALSE, data_set) == FALSE) { free_xml(new_xml); return FALSE; } if (rsc_set_lh) { if (state_lh) { /* A "rsc-role" is specified. Move it into the converted resource_set as a "role"" attribute. */ crm_xml_add(rsc_set_lh, "role", state_lh); xml_remove_prop(new_xml, XML_COLOC_ATTR_SOURCE_ROLE); } any_sets = TRUE; } if (any_sets) { crm_log_xml_trace(new_xml, "Expanded rsc_ticket..."); *expanded_xml = new_xml; } else { free_xml(new_xml); } return TRUE; } gboolean unpack_rsc_ticket(xmlNode * xml_obj, pe_working_set_t * data_set) { xmlNode *set = NULL; gboolean any_sets = FALSE; const char *id = crm_element_value(xml_obj, XML_ATTR_ID); const char *ticket_str = crm_element_value(xml_obj, XML_TICKET_ATTR_TICKET); const char *loss_policy = crm_element_value(xml_obj, XML_TICKET_ATTR_LOSS_POLICY); ticket_t *ticket = NULL; xmlNode *orig_xml = NULL; xmlNode *expanded_xml = NULL; gboolean rc = TRUE; if (xml_obj == NULL) { crm_config_err("No rsc_ticket constraint object to process."); return FALSE; } if (id == NULL) { crm_config_err("%s constraint must have an id", crm_element_name(xml_obj)); return FALSE; } if (data_set->tickets == NULL) { data_set->tickets = g_hash_table_new_full(crm_str_hash, g_str_equal, g_hash_destroy_str, destroy_ticket); } if (ticket_str == NULL) { crm_config_err("Invalid constraint '%s': No ticket specified", id); return FALSE; } else { ticket = g_hash_table_lookup(data_set->tickets, ticket_str); } if (ticket == NULL) { ticket = ticket_new(ticket_str, data_set); if (ticket == NULL) { return FALSE; } } rc = unpack_rsc_ticket_tags(xml_obj, &expanded_xml, data_set); if (expanded_xml) { orig_xml = xml_obj; xml_obj = expanded_xml; } else if (rc == FALSE) { return FALSE; } for (set = __xml_first_child(xml_obj); set != NULL; set = __xml_next(set)) { if (crm_str_eq((const char *)set->name, XML_CONS_TAG_RSC_SET, TRUE)) { any_sets = TRUE; set = expand_idref(set, data_set->input); if (unpack_rsc_ticket_set(set, ticket, loss_policy, data_set) == FALSE) { return FALSE; } } } if (expanded_xml) { free_xml(expanded_xml); xml_obj = orig_xml; } if (any_sets == FALSE) { return unpack_simple_rsc_ticket(xml_obj, data_set); } return TRUE; } gboolean is_active(rsc_to_node_t * cons) { return TRUE; } diff --git a/pengine/native.c b/pengine/native.c index 62639d037e..3f1807e5b5 100644 --- a/pengine/native.c +++ b/pengine/native.c @@ -1,3243 +1,3276 @@ /* * 
Copyright (C) 2004 Andrew Beekhof * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public * License as published by the Free Software Foundation; either * version 2 of the License, or (at your option) any later version. * * This software is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU * General Public License for more details. * * You should have received a copy of the GNU General Public * License along with this library; if not, write to the Free Software * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA */ #include #include #include #include #include #include #include /* #define DELETE_THEN_REFRESH 1 // The crmd will remove the resource from the CIB itself, making this redundant */ #define INFINITY_HACK (INFINITY * -100) #define VARIANT_NATIVE 1 #include gboolean update_action(action_t * then); void native_rsc_colocation_rh_must(resource_t * rsc_lh, gboolean update_lh, resource_t * rsc_rh, gboolean update_rh); void native_rsc_colocation_rh_mustnot(resource_t * rsc_lh, gboolean update_lh, resource_t * rsc_rh, gboolean update_rh); void Recurring(resource_t * rsc, action_t * start, node_t * node, pe_working_set_t * data_set); void RecurringOp(resource_t * rsc, action_t * start, node_t * node, xmlNode * operation, pe_working_set_t * data_set); void Recurring_Stopped(resource_t * rsc, action_t * start, node_t * node, pe_working_set_t * data_set); void RecurringOp_Stopped(resource_t * rsc, action_t * start, node_t * node, xmlNode * operation, pe_working_set_t * data_set); void pe_post_notify(resource_t * rsc, node_t * node, action_t * op, notify_data_t * n_data, pe_working_set_t * data_set); gboolean DeleteRsc(resource_t * rsc, node_t * node, gboolean optional, pe_working_set_t * data_set); gboolean StopRsc(resource_t * rsc, node_t * next, gboolean optional, pe_working_set_t * data_set); gboolean StartRsc(resource_t * rsc, node_t * next, gboolean optional, pe_working_set_t * data_set); gboolean DemoteRsc(resource_t * rsc, node_t * next, gboolean optional, pe_working_set_t * data_set); gboolean PromoteRsc(resource_t * rsc, node_t * next, gboolean optional, pe_working_set_t * data_set); gboolean RoleError(resource_t * rsc, node_t * next, gboolean optional, pe_working_set_t * data_set); gboolean NullOp(resource_t * rsc, node_t * next, gboolean optional, pe_working_set_t * data_set); /* *INDENT-OFF* */ enum rsc_role_e rsc_state_matrix[RSC_ROLE_MAX][RSC_ROLE_MAX] = { /* Current State */ /* Next State: Unknown Stopped Started Slave Master */ /* Unknown */ { RSC_ROLE_UNKNOWN, RSC_ROLE_STOPPED, RSC_ROLE_STOPPED, RSC_ROLE_STOPPED, RSC_ROLE_STOPPED, }, /* Stopped */ { RSC_ROLE_STOPPED, RSC_ROLE_STOPPED, RSC_ROLE_STARTED, RSC_ROLE_SLAVE, RSC_ROLE_SLAVE, }, /* Started */ { RSC_ROLE_STOPPED, RSC_ROLE_STOPPED, RSC_ROLE_STARTED, RSC_ROLE_SLAVE, RSC_ROLE_MASTER, }, /* Slave */ { RSC_ROLE_STOPPED, RSC_ROLE_STOPPED, RSC_ROLE_STOPPED, RSC_ROLE_SLAVE, RSC_ROLE_MASTER, }, /* Master */ { RSC_ROLE_STOPPED, RSC_ROLE_SLAVE, RSC_ROLE_SLAVE, RSC_ROLE_SLAVE, RSC_ROLE_MASTER, }, }; gboolean (*rsc_action_matrix[RSC_ROLE_MAX][RSC_ROLE_MAX])(resource_t*,node_t*,gboolean,pe_working_set_t*) = { /* Current State */ /* Next State: Unknown Stopped Started Slave Master */ /* Unknown */ { RoleError, StopRsc, RoleError, RoleError, RoleError, }, /* Stopped */ { RoleError, NullOp, StartRsc, StartRsc, RoleError, }, 
/* Started */ { RoleError, StopRsc, NullOp, NullOp, PromoteRsc, }, /* Slave */ { RoleError, StopRsc, StopRsc, NullOp, PromoteRsc, }, /* Master */ { RoleError, DemoteRsc, DemoteRsc, DemoteRsc, NullOp, }, }; /* *INDENT-ON* */ struct capacity_data { node_t *node; resource_t *rsc; gboolean is_enough; }; static action_t * get_first_named_action(resource_t * rsc, const char *action, gboolean only_valid, node_t * current);
static void check_capacity(gpointer key, gpointer value, gpointer user_data) { int required = 0; int remaining = 0; struct capacity_data *data = user_data; required = crm_parse_int(value, "0"); remaining = crm_parse_int(g_hash_table_lookup(data->node->details->utilization, key), "0"); if (required > remaining) { CRM_ASSERT(data->rsc); CRM_ASSERT(data->node); pe_rsc_debug(data->rsc, "Node %s does not have enough %s for resource %s: required=%d remaining=%d", data->node->details->uname, (char *)key, data->rsc->id, required, remaining); data->is_enough = FALSE; } }
static gboolean have_enough_capacity(node_t * node, resource_t * rsc) { struct capacity_data data; data.node = node; data.rsc = rsc; data.is_enough = TRUE; g_hash_table_foreach(rsc->utilization, check_capacity, &data); return data.is_enough; }
static gboolean native_choose_node(resource_t * rsc, node_t * prefer, pe_working_set_t * data_set) { /* 1. Sort by weight 2. color.chosen_node = the node (of those with the highest weight) with the fewest resources 3. remove color.chosen_node from all other colors */ int alloc_details = scores_log_level + 1; GListPtr nodes = NULL; node_t *chosen = NULL; int lpc = 0; int multiple = 0; int length = 0; gboolean result = FALSE; if (safe_str_neq(data_set->placement_strategy, "default")) { GListPtr gIter = NULL; for (gIter = data_set->nodes; gIter != NULL; gIter = gIter->next) { node_t *node = (node_t *) gIter->data; if (have_enough_capacity(node, rsc) == FALSE) { pe_rsc_debug(rsc, "Resource %s cannot be allocated to node %s: not enough capacity", rsc->id, node->details->uname); resource_location(rsc, node, -INFINITY, "__limit_utilization_", data_set); } } dump_node_scores(alloc_details, rsc, "Post-utilization", rsc->allowed_nodes); } length = g_hash_table_size(rsc->allowed_nodes); if (is_not_set(rsc->flags, pe_rsc_provisional)) { return rsc->allocated_to ? TRUE : FALSE; } if (prefer) { chosen = g_hash_table_lookup(rsc->allowed_nodes, prefer->details->id); if (chosen && chosen->weight >= 0 && can_run_resources(chosen)) { pe_rsc_trace(rsc, "Using preferred node %s for %s instead of choosing from %d candidates", chosen->details->uname, rsc->id, length); } else if (chosen && chosen->weight < 0) { pe_rsc_trace(rsc, "Preferred node %s for %s was unavailable", chosen->details->uname, rsc->id); chosen = NULL; } else if (chosen && can_run_resources(chosen)) { pe_rsc_trace(rsc, "Preferred node %s for %s was unsuitable", chosen->details->uname, rsc->id); chosen = NULL; } else { pe_rsc_trace(rsc, "Preferred node %s for %s was unknown", prefer->details->uname, rsc->id); } } if (chosen == NULL && rsc->allowed_nodes) { nodes = g_hash_table_get_values(rsc->allowed_nodes); nodes = g_list_sort_with_data(nodes, sort_node_weight, g_list_nth_data(rsc->running_on, 0)); chosen = g_list_nth_data(nodes, 0); pe_rsc_trace(rsc, "Chose node %s for %s from %d candidates", chosen ? 
chosen->details->uname : "", rsc->id, length); if (chosen && chosen->weight > 0 && can_run_resources(chosen)) { node_t *running = g_list_nth_data(rsc->running_on, 0); if (running && can_run_resources(running) == FALSE) { pe_rsc_trace(rsc, "Current node for %s (%s) can't run resources", rsc->id, running->details->uname); running = NULL; } for (lpc = 1; lpc < length && running; lpc++) { node_t *tmp = g_list_nth_data(nodes, lpc); if (tmp->weight == chosen->weight) { multiple++; if (tmp->details == running->details) { /* prefer the existing node if scores are equal */ chosen = tmp; } } } } } if (multiple > 1) { int log_level = LOG_INFO; static char score[33]; score2char_stack(chosen->weight, score, sizeof(score)); if (chosen->weight >= INFINITY) { log_level = LOG_WARNING; } do_crm_log(log_level, "%d nodes with equal score (%s) for" " running %s resources. Chose %s.", multiple, score, rsc->id, chosen->details->uname); } result = native_assign_node(rsc, nodes, chosen, FALSE); g_list_free(nodes); return result; } static int node_list_attr_score(GHashTable * list, const char *attr, const char *value) { GHashTableIter iter; node_t *node = NULL; int best_score = -INFINITY; const char *best_node = NULL; if (attr == NULL) { attr = "#" XML_ATTR_UNAME; } g_hash_table_iter_init(&iter, list); while (g_hash_table_iter_next(&iter, NULL, (void **)&node)) { int weight = node->weight; if (can_run_resources(node) == FALSE) { weight = -INFINITY; } if (weight > best_score || best_node == NULL) { const char *tmp = g_hash_table_lookup(node->details->attrs, attr); if (safe_str_eq(value, tmp)) { best_score = weight; best_node = node->details->uname; } } } if (safe_str_neq(attr, "#" XML_ATTR_UNAME)) { crm_info("Best score for %s=%s was %s with %d", attr, value, best_node ? best_node : "", best_score); } return best_score; } static void node_hash_update(GHashTable * list1, GHashTable * list2, const char *attr, float factor, gboolean only_positive) { int score = 0; int new_score = 0; GHashTableIter iter; node_t *node = NULL; if (attr == NULL) { attr = "#" XML_ATTR_UNAME; } g_hash_table_iter_init(&iter, list1); while (g_hash_table_iter_next(&iter, NULL, (void **)&node)) { CRM_LOG_ASSERT(node != NULL); if(node == NULL) { continue; }; score = node_list_attr_score(list2, attr, g_hash_table_lookup(node->details->attrs, attr)); new_score = merge_weights(factor * score, node->weight); if (factor < 0 && score < 0) { /* Negative preference for a node with a negative score * should not become a positive preference * * TODO - Decide if we want to filter only if weight == -INFINITY * */ crm_trace("%s: Filtering %d + %f*%d (factor * score)", node->details->uname, node->weight, factor, score); } else if (node->weight == INFINITY_HACK) { crm_trace("%s: Filtering %d + %f*%d (node < 0)", node->details->uname, node->weight, factor, score); } else if (only_positive && new_score < 0 && node->weight > 0) { node->weight = INFINITY_HACK; crm_trace("%s: Filtering %d + %f*%d (score > 0)", node->details->uname, node->weight, factor, score); } else if (only_positive && new_score < 0 && node->weight == 0) { crm_trace("%s: Filtering %d + %f*%d (score == 0)", node->details->uname, node->weight, factor, score); } else { crm_trace("%s: %d + %f*%d", node->details->uname, node->weight, factor, score); node->weight = new_score; } } } GHashTable * node_hash_dup(GHashTable * hash) { /* Hack! 
*/ GListPtr list = g_hash_table_get_values(hash); GHashTable *result = node_hash_from_list(list); g_list_free(list); return result; } GHashTable * native_merge_weights(resource_t * rsc, const char *rhs, GHashTable * nodes, const char *attr, float factor, enum pe_weights flags) { return rsc_merge_weights(rsc, rhs, nodes, attr, factor, flags); } GHashTable * rsc_merge_weights(resource_t * rsc, const char *rhs, GHashTable * nodes, const char *attr, float factor, enum pe_weights flags) { GHashTable *work = NULL; int multiplier = 1; if (factor < 0) { multiplier = -1; } if (is_set(rsc->flags, pe_rsc_merging)) { pe_rsc_info(rsc, "%s: Breaking dependency loop at %s", rhs, rsc->id); return nodes; } set_bit(rsc->flags, pe_rsc_merging); if (is_set(flags, pe_weights_init)) { if (rsc->variant == pe_group && rsc->children) { GListPtr last = rsc->children; while (last->next != NULL) { last = last->next; } pe_rsc_trace(rsc, "Merging %s as a group %p %p", rsc->id, rsc->children, last); work = rsc_merge_weights(last->data, rhs, NULL, attr, factor, flags); } else { work = node_hash_dup(rsc->allowed_nodes); } clear_bit(flags, pe_weights_init); } else if (rsc->variant == pe_group && rsc->children) { GListPtr iter = rsc->children; pe_rsc_trace(rsc, "%s: Combining scores from %d children of %s", rhs, g_list_length(iter), rsc->id); work = node_hash_dup(nodes); for(iter = rsc->children; iter->next != NULL; iter = iter->next) { work = rsc_merge_weights(iter->data, rhs, work, attr, factor, flags); } } else { pe_rsc_trace(rsc, "%s: Combining scores from %s", rhs, rsc->id); work = node_hash_dup(nodes); node_hash_update(work, rsc->allowed_nodes, attr, factor, is_set(flags, pe_weights_positive)); } if (is_set(flags, pe_weights_rollback) && can_run_any(work) == FALSE) { pe_rsc_info(rsc, "%s: Rolling back scores from %s", rhs, rsc->id); g_hash_table_destroy(work); clear_bit(rsc->flags, pe_rsc_merging); return nodes; } if (can_run_any(work)) { GListPtr gIter = NULL; if (is_set(flags, pe_weights_forward)) { gIter = rsc->rsc_cons; crm_trace("Checking %d additional colocation constraints", g_list_length(gIter)); } else if(rsc->variant == pe_group && rsc->children) { GListPtr last = rsc->children; while (last->next != NULL) { last = last->next; } gIter = ((resource_t*)last->data)->rsc_cons_lhs; crm_trace("Checking %d additional optional group colocation constraints from %s", g_list_length(gIter), ((resource_t*)last->data)->id); } else { gIter = rsc->rsc_cons_lhs; crm_trace("Checking %d additional optional colocation constraints %s", g_list_length(gIter), rsc->id); } for (; gIter != NULL; gIter = gIter->next) { resource_t *other = NULL; rsc_colocation_t *constraint = (rsc_colocation_t *) gIter->data; if (is_set(flags, pe_weights_forward)) { other = constraint->rsc_rh; } else { other = constraint->rsc_lh; } pe_rsc_trace(rsc, "Applying %s (%s)", constraint->id, other->id); work = rsc_merge_weights(other, rhs, work, constraint->node_attribute, multiplier * (float)constraint->score / INFINITY, flags|pe_weights_rollback); dump_node_scores(LOG_TRACE, NULL, rhs, work); } } if (is_set(flags, pe_weights_positive)) { node_t *node = NULL; GHashTableIter iter; g_hash_table_iter_init(&iter, work); while (g_hash_table_iter_next(&iter, NULL, (void **)&node)) { if (node->weight == INFINITY_HACK) { node->weight = 1; } } } if (nodes) { g_hash_table_destroy(nodes); } clear_bit(rsc->flags, pe_rsc_merging); return work; } node_t * native_color(resource_t * rsc, node_t * prefer, pe_working_set_t * data_set) { GListPtr gIter = NULL; int alloc_details 
= scores_log_level + 1; if (rsc->parent && is_not_set(rsc->parent->flags, pe_rsc_allocating)) { /* never allocate children on their own */ pe_rsc_debug(rsc, "Escalating allocation of %s to its parent: %s", rsc->id, rsc->parent->id); rsc->parent->cmds->allocate(rsc->parent, prefer, data_set); } if (is_not_set(rsc->flags, pe_rsc_provisional)) { return rsc->allocated_to; } if (is_set(rsc->flags, pe_rsc_allocating)) { pe_rsc_debug(rsc, "Dependency loop detected involving %s", rsc->id); return NULL; } set_bit(rsc->flags, pe_rsc_allocating); print_resource(alloc_details, "Allocating: ", rsc, FALSE); dump_node_scores(alloc_details, rsc, "Pre-allloc", rsc->allowed_nodes); for (gIter = rsc->rsc_cons; gIter != NULL; gIter = gIter->next) { rsc_colocation_t *constraint = (rsc_colocation_t *) gIter->data; GHashTable *archive = NULL; resource_t *rsc_rh = constraint->rsc_rh; pe_rsc_trace(rsc, "%s: Pre-Processing %s (%s, %d, %s)", rsc->id, constraint->id, rsc_rh->id, constraint->score, role2text(constraint->role_lh)); if (constraint->role_lh >= RSC_ROLE_MASTER || (constraint->score < 0 && constraint->score > -INFINITY)) { archive = node_hash_dup(rsc->allowed_nodes); } rsc_rh->cmds->allocate(rsc_rh, NULL, data_set); rsc->cmds->rsc_colocation_lh(rsc, rsc_rh, constraint); if (archive && can_run_any(rsc->allowed_nodes) == FALSE) { pe_rsc_info(rsc, "%s: Rolling back scores from %s", rsc->id, rsc_rh->id); g_hash_table_destroy(rsc->allowed_nodes); rsc->allowed_nodes = archive; archive = NULL; } if (archive) { g_hash_table_destroy(archive); } } dump_node_scores(alloc_details, rsc, "Post-coloc", rsc->allowed_nodes); for (gIter = rsc->rsc_cons_lhs; gIter != NULL; gIter = gIter->next) { rsc_colocation_t *constraint = (rsc_colocation_t *) gIter->data; rsc->allowed_nodes = constraint->rsc_lh->cmds->merge_weights(constraint->rsc_lh, rsc->id, rsc->allowed_nodes, constraint->node_attribute, (float)constraint->score / INFINITY, pe_weights_rollback); } print_resource(LOG_DEBUG_2, "Allocating: ", rsc, FALSE); if (rsc->next_role == RSC_ROLE_STOPPED) { pe_rsc_trace(rsc, "Making sure %s doesn't get allocated", rsc->id); /* make sure it doesnt come up again */ resource_location(rsc, NULL, -INFINITY, XML_RSC_ATTR_TARGET_ROLE, data_set); } else if(rsc->next_role > rsc->role && is_set(data_set->flags, pe_flag_have_quorum) == FALSE && data_set->no_quorum_policy == no_quorum_freeze) { crm_notice("Resource %s cannot be elevated from %s to %s: no-quorum-policy=freeze", rsc->id, role2text(rsc->role), role2text(rsc->next_role)); rsc->next_role = rsc->role; } dump_node_scores(show_scores ? 0 : scores_log_level, rsc, __FUNCTION__, rsc->allowed_nodes); if (is_set(data_set->flags, pe_flag_stonith_enabled) && is_set(data_set->flags, pe_flag_have_stonith_resource) == FALSE) { clear_bit(rsc->flags, pe_rsc_managed); } if (is_not_set(rsc->flags, pe_rsc_managed)) { const char *reason = NULL; node_t *assign_to = NULL; rsc->next_role = rsc->role; if (rsc->running_on == NULL) { reason = "inactive"; } else if (rsc->role == RSC_ROLE_MASTER) { assign_to = rsc->running_on->data; reason = "master"; } else if (is_set(rsc->flags, pe_rsc_failed)) { assign_to = rsc->running_on->data; reason = "failed"; } else { assign_to = rsc->running_on->data; reason = "active"; } pe_rsc_info(rsc, "Unmanaged resource %s allocated to %s: %s", rsc->id, assign_to ? 
assign_to->details->uname : "'nowhere'", reason); native_assign_node(rsc, NULL, assign_to, TRUE); } else if (is_set(data_set->flags, pe_flag_stop_everything)) { pe_rsc_debug(rsc, "Forcing %s to stop", rsc->id); native_assign_node(rsc, NULL, NULL, TRUE); } else if (is_set(rsc->flags, pe_rsc_provisional) && native_choose_node(rsc, prefer, data_set)) { pe_rsc_trace(rsc, "Allocated resource %s to %s", rsc->id, rsc->allocated_to->details->uname); } else if (rsc->allocated_to == NULL) { if (is_not_set(rsc->flags, pe_rsc_orphan)) { pe_rsc_info(rsc, "Resource %s cannot run anywhere", rsc->id); } else if (rsc->running_on != NULL) { pe_rsc_info(rsc, "Stopping orphan resource %s", rsc->id); } } else { pe_rsc_debug(rsc, "Pre-Allocated resource %s to %s", rsc->id, rsc->allocated_to->details->uname); } clear_bit(rsc->flags, pe_rsc_allocating); print_resource(LOG_DEBUG_3, "Allocated ", rsc, TRUE); if (rsc->is_remote_node) { node_t *remote_node = pe_find_node(data_set->nodes, rsc->id); CRM_ASSERT(remote_node != NULL); if (rsc->allocated_to && rsc->next_role != RSC_ROLE_STOPPED) { crm_trace("Setting remote node %s to ONLINE", remote_node->details->id); remote_node->details->online = TRUE; /* We shouldn't consider an unseen remote-node unclean if we are going * to try and connect to it. Otherwise we get an unnecessary fence */ if (remote_node->details->unseen == TRUE) { remote_node->details->unclean = FALSE; } } else { crm_trace("Setting remote node %s to SHUTDOWN. next role = %s, allocated=%s", remote_node->details->id, role2text(rsc->next_role), rsc->allocated_to ? "true" : "false"); remote_node->details->shutdown = TRUE; } } return rsc->allocated_to; } static gboolean is_op_dup(resource_t * rsc, const char *name, const char *interval) { gboolean dup = FALSE; const char *id = NULL; const char *value = NULL; xmlNode *operation = NULL; CRM_ASSERT(rsc); for (operation = __xml_first_child(rsc->ops_xml); operation != NULL; operation = __xml_next(operation)) { if (crm_str_eq((const char *)operation->name, "op", TRUE)) { value = crm_element_value(operation, "name"); if (safe_str_neq(value, name)) { continue; } value = crm_element_value(operation, XML_LRM_ATTR_INTERVAL); if (value == NULL) { value = "0"; } if (safe_str_neq(value, interval)) { continue; } if (id == NULL) { id = ID(operation); } else { crm_config_err("Operation %s is a duplicate of %s", ID(operation), id); crm_config_err ("Do not use the same (name, interval) combination more than once per resource"); dup = TRUE; } } } return dup; } void RecurringOp(resource_t * rsc, action_t * start, node_t * node, xmlNode * operation, pe_working_set_t * data_set) { char *key = NULL; const char *name = NULL; const char *value = NULL; const char *interval = NULL; const char *node_uname = NULL; unsigned long long interval_ms = 0; action_t *mon = NULL; gboolean is_optional = TRUE; GListPtr possible_matches = NULL; /* Only process for the operations without role="Stopped" */ value = crm_element_value(operation, "role"); if (value && text2role(value) == RSC_ROLE_STOPPED) { return; } CRM_ASSERT(rsc); pe_rsc_trace(rsc, "Creating recurring action %s for %s in role %s on %s", ID(operation), rsc->id, role2text(rsc->next_role), node ? 
node->details->uname : "n/a"); if (node != NULL) { node_uname = node->details->uname; } interval = crm_element_value(operation, XML_LRM_ATTR_INTERVAL); interval_ms = crm_get_interval(interval); if (interval_ms == 0) { return; } name = crm_element_value(operation, "name"); if (is_op_dup(rsc, name, interval)) { return; } if (safe_str_eq(name, RSC_STOP) || safe_str_eq(name, RSC_START) || safe_str_eq(name, RSC_DEMOTE) || safe_str_eq(name, RSC_PROMOTE) ) { crm_config_err("Invalid recurring action %s wth name: '%s'", ID(operation), name); return; } key = generate_op_key(rsc->id, name, interval_ms); if (find_rsc_op_entry(rsc, key) == NULL) { /* disabled */ free(key); return; } if (start != NULL) { pe_rsc_trace(rsc, "Marking %s %s due to %s", key, is_set(start->flags, pe_action_optional) ? "optional" : "manditory", start->uuid); is_optional = (rsc->cmds->action_flags(start, NULL) & pe_action_optional); } else { pe_rsc_trace(rsc, "Marking %s optional", key); is_optional = TRUE; } /* start a monitor for an already active resource */ possible_matches = find_actions_exact(rsc->actions, key, node); if (possible_matches == NULL) { is_optional = FALSE; pe_rsc_trace(rsc, "Marking %s manditory: not active", key); } else { g_list_free(possible_matches); } if ((rsc->next_role == RSC_ROLE_MASTER && value == NULL) || (value != NULL && text2role(value) != rsc->next_role)) { int log_level = LOG_DEBUG_2; const char *result = "Ignoring"; if (is_optional) { char *local_key = strdup(key); log_level = LOG_INFO; result = "Cancelling"; /* its running : cancel it */ mon = custom_action(rsc, local_key, RSC_CANCEL, node, FALSE, TRUE, data_set); free(mon->task); free(mon->cancel_task); mon->task = strdup(RSC_CANCEL); mon->cancel_task = strdup(name); add_hash_param(mon->meta, XML_LRM_ATTR_INTERVAL, interval); add_hash_param(mon->meta, XML_LRM_ATTR_TASK, name); local_key = NULL; switch (rsc->role) { case RSC_ROLE_SLAVE: case RSC_ROLE_STARTED: if (rsc->next_role == RSC_ROLE_MASTER) { local_key = promote_key(rsc); } else if (rsc->next_role == RSC_ROLE_STOPPED) { local_key = stop_key(rsc); } break; case RSC_ROLE_MASTER: local_key = demote_key(rsc); break; default: break; } if (local_key) { custom_action_order(rsc, NULL, mon, rsc, local_key, NULL, pe_order_runnable_left, data_set); } mon = NULL; } do_crm_log(log_level, "%s action %s (%s vs. %s)", result, key, value ? 
value : role2text(RSC_ROLE_SLAVE), role2text(rsc->next_role)); free(key); key = NULL; return; } mon = custom_action(rsc, key, name, node, is_optional, TRUE, data_set); key = mon->uuid; if (is_optional) { pe_rsc_trace(rsc, "%s\t %s (optional)", crm_str(node_uname), mon->uuid); } if (start == NULL || is_set(start->flags, pe_action_runnable) == FALSE) { pe_rsc_debug(rsc, "%s\t %s (cancelled : start un-runnable)", crm_str(node_uname), mon->uuid); update_action_flags(mon, pe_action_runnable | pe_action_clear); } else if (node == NULL || node->details->online == FALSE || node->details->unclean) { pe_rsc_debug(rsc, "%s\t %s (cancelled : no node available)", crm_str(node_uname), mon->uuid); update_action_flags(mon, pe_action_runnable | pe_action_clear); } else if (is_set(mon->flags, pe_action_optional) == FALSE) { pe_rsc_info(rsc, " Start recurring %s (%llus) for %s on %s", mon->task, interval_ms / 1000, rsc->id, crm_str(node_uname)); } if (rsc->next_role == RSC_ROLE_MASTER) { char *running_master = crm_itoa(PCMK_OCF_RUNNING_MASTER); add_hash_param(mon->meta, XML_ATTR_TE_TARGET_RC, running_master); free(running_master); } if (node == NULL || is_set(rsc->flags, pe_rsc_managed)) { custom_action_order(rsc, start_key(rsc), NULL, NULL, strdup(key), mon, pe_order_implies_then | pe_order_runnable_left, data_set); if (rsc->next_role == RSC_ROLE_MASTER) { custom_action_order(rsc, promote_key(rsc), NULL, rsc, NULL, mon, pe_order_optional | pe_order_runnable_left, data_set); } else if (rsc->role == RSC_ROLE_MASTER) { custom_action_order(rsc, demote_key(rsc), NULL, rsc, NULL, mon, pe_order_optional | pe_order_runnable_left, data_set); } } } void Recurring(resource_t * rsc, action_t * start, node_t * node, pe_working_set_t * data_set) { if (is_not_set(rsc->flags, pe_rsc_maintenance) && (node == NULL || node->details->maintenance == FALSE)) { xmlNode *operation = NULL; for (operation = __xml_first_child(rsc->ops_xml); operation != NULL; operation = __xml_next(operation)) { if (crm_str_eq((const char *)operation->name, "op", TRUE)) { RecurringOp(rsc, start, node, operation, data_set); } } } } void RecurringOp_Stopped(resource_t * rsc, action_t * start, node_t * node, xmlNode * operation, pe_working_set_t * data_set) { char *key = NULL; const char *name = NULL; const char *role = NULL; const char *interval = NULL; const char *node_uname = NULL; unsigned long long interval_ms = 0; GListPtr possible_matches = NULL; GListPtr gIter = NULL; /* TODO: Support of non-unique clone */ if (is_set(rsc->flags, pe_rsc_unique) == FALSE) { return; } /* Only process for the operations with role="Stopped" */ role = crm_element_value(operation, "role"); if (role == NULL || text2role(role) != RSC_ROLE_STOPPED) { return; } pe_rsc_trace(rsc, "Creating recurring actions %s for %s in role %s on nodes where it'll not be running", ID(operation), rsc->id, role2text(rsc->next_role)); if (node != NULL) { node_uname = node->details->uname; } interval = crm_element_value(operation, XML_LRM_ATTR_INTERVAL); interval_ms = crm_get_interval(interval); if (interval_ms == 0) { return; } name = crm_element_value(operation, "name"); if (is_op_dup(rsc, name, interval)) { return; } if (safe_str_eq(name, RSC_STOP) || safe_str_eq(name, RSC_START) || safe_str_eq(name, RSC_DEMOTE) || safe_str_eq(name, RSC_PROMOTE) ) { crm_config_err("Invalid recurring action %s wth name: '%s'", ID(operation), name); return; } key = generate_op_key(rsc->id, name, interval_ms); if (find_rsc_op_entry(rsc, key) == NULL) { /* disabled */ free(key); return; } /* if the monitor 
exists on the node where the resource will be running, cancel it */ if (node != NULL) { possible_matches = find_actions_exact(rsc->actions, key, node); if (possible_matches) { action_t *cancel_op = NULL; char *local_key = strdup(key); g_list_free(possible_matches); cancel_op = custom_action(rsc, local_key, RSC_CANCEL, node, FALSE, TRUE, data_set); free(cancel_op->task); free(cancel_op->cancel_task); cancel_op->task = strdup(RSC_CANCEL); cancel_op->cancel_task = strdup(name); add_hash_param(cancel_op->meta, XML_LRM_ATTR_INTERVAL, interval); add_hash_param(cancel_op->meta, XML_LRM_ATTR_TASK, name); local_key = NULL; if (rsc->next_role == RSC_ROLE_STARTED || rsc->next_role == RSC_ROLE_SLAVE) { /* rsc->role == RSC_ROLE_STOPPED: cancel the monitor before start */ /* rsc->role == RSC_ROLE_STARTED: for a migration, cancel the monitor on the target node before start */ custom_action_order(rsc, NULL, cancel_op, rsc, start_key(rsc), NULL, pe_order_runnable_left, data_set); } pe_rsc_info(rsc, "Cancel action %s (%s vs. %s) on %s", key, role, role2text(rsc->next_role), crm_str(node_uname)); } } for (gIter = data_set->nodes; gIter != NULL; gIter = gIter->next) { node_t *stop_node = (node_t *) gIter->data; const char *stop_node_uname = stop_node->details->uname; gboolean is_optional = TRUE; gboolean probe_is_optional = TRUE; gboolean stop_is_optional = TRUE; action_t *stopped_mon = NULL; char *rc_inactive = NULL; GListPtr probe_complete_ops = NULL; GListPtr stop_ops = NULL; GListPtr local_gIter = NULL; char *stop_op_key = NULL; if (node_uname && safe_str_eq(stop_node_uname, node_uname)) { continue; } pe_rsc_trace(rsc, "Creating recurring action %s for %s on %s", ID(operation), rsc->id, crm_str(stop_node_uname)); /* start a monitor for an already stopped resource */ possible_matches = find_actions_exact(rsc->actions, key, stop_node); if (possible_matches == NULL) { pe_rsc_trace(rsc, "Marking %s manditory on %s: not active", key, crm_str(stop_node_uname)); is_optional = FALSE; } else { pe_rsc_trace(rsc, "Marking %s optional on %s: already active", key, crm_str(stop_node_uname)); is_optional = TRUE; g_list_free(possible_matches); } stopped_mon = custom_action(rsc, strdup(key), name, stop_node, is_optional, TRUE, data_set); rc_inactive = crm_itoa(PCMK_OCF_NOT_RUNNING); add_hash_param(stopped_mon->meta, XML_ATTR_TE_TARGET_RC, rc_inactive); free(rc_inactive); probe_complete_ops = find_actions(data_set->actions, CRM_OP_PROBED, NULL); for (local_gIter = probe_complete_ops; local_gIter != NULL; local_gIter = local_gIter->next) { action_t *probe_complete = (action_t *) local_gIter->data; if (probe_complete->node == NULL) { if (is_set(probe_complete->flags, pe_action_optional) == FALSE) { probe_is_optional = FALSE; } if (is_set(probe_complete->flags, pe_action_runnable) == FALSE) { crm_debug("%s\t %s (cancelled : probe un-runnable)", crm_str(stop_node_uname), stopped_mon->uuid); update_action_flags(stopped_mon, pe_action_runnable | pe_action_clear); } if (is_set(rsc->flags, pe_rsc_managed)) { custom_action_order(NULL, NULL, probe_complete, NULL, strdup(key), stopped_mon, pe_order_optional, data_set); } break; } } if (probe_complete_ops) { g_list_free(probe_complete_ops); } stop_op_key = stop_key(rsc); stop_ops = find_actions_exact(rsc->actions, stop_op_key, stop_node); for (local_gIter = stop_ops; local_gIter != NULL; local_gIter = local_gIter->next) { action_t *stop = (action_t *) local_gIter->data; if (is_set(stop->flags, pe_action_optional) == FALSE) { stop_is_optional = FALSE; } if (is_set(stop->flags, 
pe_action_runnable) == FALSE) { crm_debug("%s\t %s (cancelled : stop un-runnable)", crm_str(stop_node_uname), stopped_mon->uuid); update_action_flags(stopped_mon, pe_action_runnable | pe_action_clear); } if (is_set(rsc->flags, pe_rsc_managed)) { custom_action_order(rsc, strdup(stop_op_key), stop, NULL, strdup(key), stopped_mon, pe_order_implies_then | pe_order_runnable_left, data_set); } } if (stop_ops) { g_list_free(stop_ops); } free(stop_op_key); if (is_optional == FALSE && probe_is_optional && stop_is_optional && is_set(rsc->flags, pe_rsc_managed) == FALSE) { pe_rsc_trace(rsc, "Marking %s optional on %s due to unmanaged", key, crm_str(stop_node_uname)); update_action_flags(stopped_mon, pe_action_optional); } if (is_set(stopped_mon->flags, pe_action_optional)) { pe_rsc_trace(rsc, "%s\t %s (optional)", crm_str(stop_node_uname), stopped_mon->uuid); } if (stop_node->details->online == FALSE || stop_node->details->unclean) { pe_rsc_debug(rsc, "%s\t %s (cancelled : no node available)", crm_str(stop_node_uname), stopped_mon->uuid); update_action_flags(stopped_mon, pe_action_runnable | pe_action_clear); } if (is_set(stopped_mon->flags, pe_action_runnable) && is_set(stopped_mon->flags, pe_action_optional) == FALSE) { crm_notice(" Start recurring %s (%llus) for %s on %s", stopped_mon->task, interval_ms / 1000, rsc->id, crm_str(stop_node_uname)); } } free(key); } void Recurring_Stopped(resource_t * rsc, action_t * start, node_t * node, pe_working_set_t * data_set) { if (is_not_set(rsc->flags, pe_rsc_maintenance) && (node == NULL || node->details->maintenance == FALSE)) { xmlNode *operation = NULL; for (operation = __xml_first_child(rsc->ops_xml); operation != NULL; operation = __xml_next(operation)) { if (crm_str_eq((const char *)operation->name, "op", TRUE)) { RecurringOp_Stopped(rsc, start, node, operation, data_set); } } } } static void handle_migration_actions(resource_t * rsc, node_t *current, node_t *chosen, pe_working_set_t * data_set) { action_t *migrate_to = NULL; action_t *migrate_from = NULL; action_t *start = NULL; action_t *stop = NULL; gboolean partial = rsc->partial_migration_target ? TRUE : FALSE; pe_rsc_trace(rsc, "Processing migration actions %s moving from %s to %s . partial migration = %s", rsc->id, current->details->id, chosen->details->id, partial ? 
"TRUE" : "FALSE"); start = start_action(rsc, chosen, TRUE); stop = stop_action(rsc, current, TRUE); if (partial == FALSE) { migrate_to = custom_action(rsc, generate_op_key(rsc->id, RSC_MIGRATE, 0), RSC_MIGRATE, current, TRUE, TRUE, data_set); } migrate_from = custom_action(rsc, generate_op_key(rsc->id, RSC_MIGRATED, 0), RSC_MIGRATED, chosen, TRUE, TRUE, data_set); if ((migrate_to && migrate_from) || (migrate_from && partial)) { set_bit(start->flags, pe_action_migrate_runnable); set_bit(stop->flags, pe_action_migrate_runnable); update_action_flags(start, pe_action_pseudo); /* easier than trying to delete it from the graph */ /* order probes before migrations */ if (partial) { set_bit(migrate_from->flags, pe_action_migrate_runnable); migrate_from->needs = start->needs; custom_action_order(rsc, generate_op_key(rsc->id, RSC_STATUS, 0), NULL, rsc, generate_op_key(rsc->id, RSC_MIGRATED, 0), NULL, pe_order_optional, data_set); } else { set_bit(migrate_from->flags, pe_action_migrate_runnable); set_bit(migrate_to->flags, pe_action_migrate_runnable); migrate_to->needs = start->needs; custom_action_order(rsc, generate_op_key(rsc->id, RSC_STATUS, 0), NULL, rsc, generate_op_key(rsc->id, RSC_MIGRATE, 0), NULL, pe_order_optional, data_set); custom_action_order(rsc, generate_op_key(rsc->id, RSC_MIGRATE, 0), NULL, rsc, generate_op_key(rsc->id, RSC_MIGRATED, 0), NULL, pe_order_optional | pe_order_implies_first_migratable, data_set); } custom_action_order(rsc, generate_op_key(rsc->id, RSC_MIGRATED, 0), NULL, rsc, generate_op_key(rsc->id, RSC_STOP, 0), NULL, pe_order_optional | pe_order_implies_first_migratable, data_set); custom_action_order(rsc, generate_op_key(rsc->id, RSC_MIGRATED, 0), NULL, rsc, generate_op_key(rsc->id, RSC_START, 0), NULL, pe_order_optional | pe_order_implies_first_migratable | pe_order_pseudo_left, data_set); } if (migrate_to) { add_hash_param(migrate_to->meta, XML_LRM_ATTR_MIGRATE_SOURCE, current->details->uname); add_hash_param(migrate_to->meta, XML_LRM_ATTR_MIGRATE_TARGET, chosen->details->uname); /* pcmk remote connections don't require pending to be recorded in cib. * We can optimize cib writes by only setting PENDING for non pcmk remote * connection resources */ if (rsc->is_remote_node == FALSE) { /* migrate_to takes place on the source node, but can * have an effect on the target node depending on how * the agent is written. Because of this, we have to maintain * a record that the migrate_to occurred incase the source node * loses membership while the migrate_to action is still in-flight. */ add_hash_param(migrate_to->meta, XML_OP_ATTR_PENDING, "true"); } } if (migrate_from) { add_hash_param(migrate_from->meta, XML_LRM_ATTR_MIGRATE_SOURCE, current->details->uname); add_hash_param(migrate_from->meta, XML_LRM_ATTR_MIGRATE_TARGET, chosen->details->uname); } } void native_create_actions(resource_t * rsc, pe_working_set_t * data_set) { action_t *start = NULL; node_t *chosen = NULL; node_t *current = NULL; gboolean need_stop = FALSE; gboolean is_moving = FALSE; gboolean allow_migrate = is_set(rsc->flags, pe_rsc_allow_migrate) ? 
TRUE : FALSE; GListPtr gIter = NULL; int num_active_nodes = 0; enum rsc_role_e role = RSC_ROLE_UNKNOWN; enum rsc_role_e next_role = RSC_ROLE_UNKNOWN; CRM_ASSERT(rsc); chosen = rsc->allocated_to; if (chosen != NULL && rsc->next_role == RSC_ROLE_UNKNOWN) { rsc->next_role = RSC_ROLE_STARTED; pe_rsc_trace(rsc, "Fixed next_role: unknown -> %s", role2text(rsc->next_role)); } else if (rsc->next_role == RSC_ROLE_UNKNOWN) { rsc->next_role = RSC_ROLE_STOPPED; pe_rsc_trace(rsc, "Fixed next_role: unknown -> %s", role2text(rsc->next_role)); } pe_rsc_trace(rsc, "Processing state transition for %s %p: %s->%s", rsc->id, rsc, role2text(rsc->role), role2text(rsc->next_role)); if (rsc->running_on) { current = rsc->running_on->data; } for (gIter = rsc->running_on; gIter != NULL; gIter = gIter->next) { node_t *n = (node_t *) gIter->data; if (rsc->partial_migration_source && (n->details == rsc->partial_migration_source->details)) { current = rsc->partial_migration_source; } num_active_nodes++; } get_rsc_attributes(rsc->parameters, rsc, chosen, data_set); for (gIter = rsc->dangling_migrations; gIter != NULL; gIter = gIter->next) { node_t *current = (node_t *) gIter->data; action_t *stop = stop_action(rsc, current, FALSE); set_bit(stop->flags, pe_action_dangle); pe_rsc_trace(rsc, "Forcing a cleanup of %s on %s", rsc->id, current->details->uname); if (is_set(data_set->flags, pe_flag_remove_after_stop)) { DeleteRsc(rsc, current, FALSE, data_set); } } if (num_active_nodes > 1) { if (num_active_nodes == 2 && chosen && rsc->partial_migration_target && rsc->partial_migration_source && (current->details == rsc->partial_migration_source->details) && (chosen->details == rsc->partial_migration_target->details)) { /* Here the chosen node is still the migration target from a partial * migration. Attempt to continue the migration instead of recovering * by stopping the resource everywhere and starting it on a single node. */ pe_rsc_trace(rsc, "Will attempt to continue with a partial migration to target %s from %s", rsc->partial_migration_target->details->id, rsc->partial_migration_source->details->id); } else { const char *type = crm_element_value(rsc->xml, XML_ATTR_TYPE); const char *class = crm_element_value(rsc->xml, XML_AGENT_ATTR_CLASS); if(rsc->partial_migration_target && rsc->partial_migration_source) { crm_notice("Resource %s can no longer migrate to %s. Stopping on %s too", rsc->id, rsc->partial_migration_target->details->uname, rsc->partial_migration_source->details->uname); } else { pe_proc_err("Resource %s (%s::%s) is active on %d nodes %s", rsc->id, class, type, num_active_nodes, recovery2text(rsc->recovery_type)); crm_warn("See %s for more information.", "http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active"); } if (rsc->recovery_type == recovery_stop_start) { need_stop = TRUE; } /* If by chance a partial migration is in process, * but the migration target is not chosen still, clear all * partial migration data. 
*/ rsc->partial_migration_source = rsc->partial_migration_target = NULL; allow_migrate = FALSE; } } if (is_set(rsc->flags, pe_rsc_start_pending)) { start = start_action(rsc, chosen, TRUE); set_bit(start->flags, pe_action_print_always); } if (current && chosen && current->details != chosen->details) { pe_rsc_trace(rsc, "Moving %s", rsc->id); is_moving = TRUE; need_stop = TRUE; } else if (is_set(rsc->flags, pe_rsc_failed)) { pe_rsc_trace(rsc, "Recovering %s", rsc->id); need_stop = TRUE; } else if (is_set(rsc->flags, pe_rsc_block)) { pe_rsc_trace(rsc, "Block %s", rsc->id); need_stop = TRUE; } else if (rsc->role > RSC_ROLE_STARTED && current != NULL && chosen != NULL) { /* Recovery of a promoted resource */ start = start_action(rsc, chosen, TRUE); if (is_set(start->flags, pe_action_optional) == FALSE) { pe_rsc_trace(rsc, "Forced start %s", rsc->id); need_stop = TRUE; } } pe_rsc_trace(rsc, "Creating actions for %s: %s->%s", rsc->id, role2text(rsc->role), role2text(rsc->next_role)); role = rsc->role; /* Potentially optional steps for bringing the resource down and back up to the same level */ while (role != RSC_ROLE_STOPPED) { next_role = rsc_state_matrix[role][RSC_ROLE_STOPPED]; pe_rsc_trace(rsc, "Down: Executing: %s->%s (%s)%s", role2text(role), role2text(next_role), rsc->id, need_stop ? " required" : ""); if (rsc_action_matrix[role][next_role] (rsc, current, !need_stop, data_set) == FALSE) { break; } role = next_role; } while (rsc->role <= rsc->next_role && role != rsc->role && is_not_set(rsc->flags, pe_rsc_block)) { next_role = rsc_state_matrix[role][rsc->role]; pe_rsc_trace(rsc, "Up: Executing: %s->%s (%s)%s", role2text(role), role2text(next_role), rsc->id, need_stop ? " required" : ""); if (rsc_action_matrix[role][next_role] (rsc, chosen, !need_stop, data_set) == FALSE) { break; } role = next_role; } role = rsc->role; /* Required steps from this role to the next */ while (role != rsc->next_role) { next_role = rsc_state_matrix[role][rsc->next_role]; pe_rsc_trace(rsc, "Role: Executing: %s->%s via %s (%s)", role2text(role), role2text(rsc->next_role), role2text(next_role), rsc->id); if (rsc_action_matrix[role][next_role] (rsc, chosen, FALSE, data_set) == FALSE) { break; } role = next_role; } if(is_set(rsc->flags, pe_rsc_block)) { pe_rsc_trace(rsc, "No additional monitor ops for blocked resource"); } else if (rsc->next_role != RSC_ROLE_STOPPED || is_set(rsc->flags, pe_rsc_managed) == FALSE) { pe_rsc_trace(rsc, "Monitor ops for active resource"); start = start_action(rsc, chosen, TRUE); Recurring(rsc, start, chosen, data_set); Recurring_Stopped(rsc, start, chosen, data_set); } else { pe_rsc_trace(rsc, "Monitor ops for inactive resource"); Recurring_Stopped(rsc, NULL, NULL, data_set); } /* If we are stuck in a partial migration and the target of that partial * migration no longer matches the chosen target, a full stop/start is required */ if (rsc->partial_migration_target && (chosen == NULL || rsc->partial_migration_target->details != chosen->details)) { pe_rsc_trace(rsc, "Not allowing partial migration to continue.
%s", rsc->id); allow_migrate = FALSE; } else if (is_moving == FALSE || is_not_set(rsc->flags, pe_rsc_managed) || is_set(rsc->flags, pe_rsc_failed) || is_set(rsc->flags, pe_rsc_start_pending) || (current->details->unclean == TRUE) || rsc->next_role < RSC_ROLE_STARTED) { allow_migrate = FALSE; } if (allow_migrate) { handle_migration_actions(rsc, current, chosen, data_set); } } static void rsc_avoids_remote_nodes(resource_t *rsc) { GHashTableIter iter; node_t *node = NULL; g_hash_table_iter_init(&iter, rsc->allowed_nodes); while (g_hash_table_iter_next(&iter, NULL, (void **)&node)) { if (node->details->remote_rsc) { node->weight = -INFINITY; } } } void native_internal_constraints(resource_t * rsc, pe_working_set_t * data_set) { /* This function is on the critical path and worth optimizing as much as possible */ resource_t *top = uber_parent(rsc); int type = pe_order_optional | pe_order_implies_then | pe_order_restart; gboolean is_stonith = is_set(rsc->flags, pe_rsc_fence_device); custom_action_order(rsc, generate_op_key(rsc->id, RSC_STOP, 0), NULL, rsc, generate_op_key(rsc->id, RSC_START, 0), NULL, type, data_set); if (top->variant == pe_master || rsc->role > RSC_ROLE_SLAVE) { custom_action_order(rsc, generate_op_key(rsc->id, RSC_DEMOTE, 0), NULL, rsc, generate_op_key(rsc->id, RSC_STOP, 0), NULL, pe_order_implies_first_master, data_set); custom_action_order(rsc, generate_op_key(rsc->id, RSC_START, 0), NULL, rsc, generate_op_key(rsc->id, RSC_PROMOTE, 0), NULL, pe_order_runnable_left, data_set); } if (is_stonith == FALSE && is_set(data_set->flags, pe_flag_enable_unfencing) && is_set(rsc->flags, pe_rsc_needs_unfencing) && is_not_set(rsc->flags, pe_rsc_have_unfencing)) { /* Check if the node needs to be unfenced first */ node_t *node = NULL; GHashTableIter iter; if(rsc != top) { /* Only create these constraints once, rsc is almost certainly cloned */ clear_bit_recursive(top, pe_rsc_have_unfencing); } g_hash_table_iter_init(&iter, rsc->allowed_nodes); while (g_hash_table_iter_next(&iter, NULL, (void **)&node)) { action_t *unfence = pe_fence_op(node, "on", TRUE, data_set); custom_action_order(top, generate_op_key(top->id, top == rsc?RSC_STOP:RSC_STOPPED, 0), NULL, NULL, strdup(unfence->uuid), unfence, pe_order_optional, data_set); crm_debug("Stopping %s prior to unfencing %s", top->id, unfence->uuid); custom_action_order(NULL, strdup(unfence->uuid), unfence, top, generate_op_key(top->id, RSC_START, 0), NULL, pe_order_implies_then_on_node, data_set); } } if (is_not_set(rsc->flags, pe_rsc_managed)) { pe_rsc_trace(rsc, "Skipping fencing constraints for unmanaged resource: %s", rsc->id); return; } { action_t *all_stopped = get_pseudo_op(ALL_STOPPED, data_set); custom_action_order(rsc, stop_key(rsc), NULL, NULL, strdup(all_stopped->task), all_stopped, pe_order_implies_then | pe_order_runnable_left, data_set); } if (g_hash_table_size(rsc->utilization) > 0 && safe_str_neq(data_set->placement_strategy, "default")) { GHashTableIter iter; node_t *next = NULL; GListPtr gIter = NULL; pe_rsc_trace(rsc, "Creating utilization constraints for %s - strategy: %s", rsc->id, data_set->placement_strategy); for (gIter = rsc->running_on; gIter != NULL; gIter = gIter->next) { node_t *current = (node_t *) gIter->data; char *load_stopped_task = crm_concat(LOAD_STOPPED, current->details->uname, '_'); action_t *load_stopped = get_pseudo_op(load_stopped_task, data_set); if (load_stopped->node == NULL) { load_stopped->node = node_copy(current); update_action_flags(load_stopped, pe_action_optional | pe_action_clear); } 
custom_action_order(rsc, stop_key(rsc), NULL, NULL, load_stopped_task, load_stopped, pe_order_load, data_set); } g_hash_table_iter_init(&iter, rsc->allowed_nodes); while (g_hash_table_iter_next(&iter, NULL, (void **)&next)) { char *load_stopped_task = crm_concat(LOAD_STOPPED, next->details->uname, '_'); action_t *load_stopped = get_pseudo_op(load_stopped_task, data_set); if (load_stopped->node == NULL) { load_stopped->node = node_copy(next); update_action_flags(load_stopped, pe_action_optional | pe_action_clear); } custom_action_order(NULL, strdup(load_stopped_task), load_stopped, rsc, start_key(rsc), NULL, pe_order_load, data_set); custom_action_order(NULL, strdup(load_stopped_task), load_stopped, rsc, generate_op_key(rsc->id, RSC_MIGRATE, 0), NULL, pe_order_load, data_set); free(load_stopped_task); } } if (rsc->container) { resource_t *remote_rsc = NULL; /* find out if the container is associated with remote node connection resource */ if (rsc->container->is_remote_node) { remote_rsc = rsc->container; } else if (rsc->is_remote_node == FALSE) { remote_rsc = rsc_contains_remote_node(data_set, rsc->container); } /* if the container is a remote-node, force the resource within the container * instead of colocating the resource with the container. */ if (remote_rsc) { GHashTableIter iter; node_t *node = NULL; g_hash_table_iter_init(&iter, rsc->allowed_nodes); while (g_hash_table_iter_next(&iter, NULL, (void **)&node)) { if (node->details->remote_rsc != remote_rsc) { node->weight = -INFINITY; } } } else { crm_trace("Generating order and colocation rules for rsc %s with container %s", rsc->id, rsc->container->id); custom_action_order(rsc->container, generate_op_key(rsc->container->id, RSC_START, 0), NULL, rsc, generate_op_key(rsc->id, RSC_START, 0), NULL, pe_order_implies_then | pe_order_runnable_left, data_set); custom_action_order(rsc, generate_op_key(rsc->id, RSC_STOP, 0), NULL, rsc->container, generate_op_key(rsc->container->id, RSC_STOP, 0), NULL, pe_order_implies_first, data_set); rsc_colocation_new("resource-with-containter", NULL, INFINITY, rsc, rsc->container, NULL, NULL, data_set); } } if (rsc->is_remote_node || is_stonith) { /* don't allow remote nodes to run stonith devices * or remote connection resources.*/ rsc_avoids_remote_nodes(rsc); } /* If this rsc is a remote connection resource associated * with a container ( which will most likely be a virtual guest ) * do not allow the container to live on any remote-nodes. * remote-nodes managing nested remote-nodes should not be allowed. 
*/ if (rsc->is_remote_node && rsc->container) { rsc_avoids_remote_nodes(rsc->container); } } void native_rsc_colocation_lh(resource_t * rsc_lh, resource_t * rsc_rh, rsc_colocation_t * constraint) { if (rsc_lh == NULL) { pe_err("rsc_lh was NULL for %s", constraint->id); return; } else if (constraint->rsc_rh == NULL) { pe_err("rsc_rh was NULL for %s", constraint->id); return; } pe_rsc_trace(rsc_lh, "Processing colocation constraint between %s and %s", rsc_lh->id, rsc_rh->id); rsc_rh->cmds->rsc_colocation_rh(rsc_lh, rsc_rh, constraint); } enum filter_colocation_res { influence_nothing = 0, influence_rsc_location, influence_rsc_priority, }; static enum filter_colocation_res filter_colocation_constraint(resource_t * rsc_lh, resource_t * rsc_rh, rsc_colocation_t * constraint) { if (constraint->score == 0) { return influence_nothing; } /* rh side must be allocated before we can process constraint */ if (is_set(rsc_rh->flags, pe_rsc_provisional)) { return influence_nothing; } if ((constraint->role_lh >= RSC_ROLE_SLAVE) && rsc_lh->parent && rsc_lh->parent->variant == pe_master && is_not_set(rsc_lh->flags, pe_rsc_provisional)) { /* LH and RH resources have already been allocated, place the correct * priority oh LH rsc for the given multistate resource role */ return influence_rsc_priority; } if (is_not_set(rsc_lh->flags, pe_rsc_provisional)) { /* error check */ struct node_shared_s *details_lh; struct node_shared_s *details_rh; if ((constraint->score > -INFINITY) && (constraint->score < INFINITY)) { return influence_nothing; } details_rh = rsc_rh->allocated_to ? rsc_rh->allocated_to->details : NULL; details_lh = rsc_lh->allocated_to ? rsc_lh->allocated_to->details : NULL; if (constraint->score == INFINITY && details_lh != details_rh) { crm_err("%s and %s are both allocated" " but to different nodes: %s vs. %s", rsc_lh->id, rsc_rh->id, details_lh ? details_lh->uname : "n/a", details_rh ? details_rh->uname : "n/a"); } else if (constraint->score == -INFINITY && details_lh == details_rh) { crm_err("%s and %s are both allocated" " but to the SAME node: %s", rsc_lh->id, rsc_rh->id, details_rh ? 
details_rh->uname : "n/a"); } return influence_nothing; } if (constraint->score > 0 && constraint->role_lh != RSC_ROLE_UNKNOWN && constraint->role_lh != rsc_lh->next_role) { crm_trace("LH: Skipping constraint: \"%s\" state filter nextrole is %s", role2text(constraint->role_lh), role2text(rsc_lh->next_role)); return influence_nothing; } if (constraint->score > 0 && constraint->role_rh != RSC_ROLE_UNKNOWN && constraint->role_rh != rsc_rh->next_role) { crm_trace("RH: Skipping constraint: \"%s\" state filter", role2text(constraint->role_rh)); return FALSE; } if (constraint->score < 0 && constraint->role_lh != RSC_ROLE_UNKNOWN && constraint->role_lh == rsc_lh->next_role) { crm_trace("LH: Skipping -ve constraint: \"%s\" state filter", role2text(constraint->role_lh)); return influence_nothing; } if (constraint->score < 0 && constraint->role_rh != RSC_ROLE_UNKNOWN && constraint->role_rh == rsc_rh->next_role) { crm_trace("RH: Skipping -ve constraint: \"%s\" state filter", role2text(constraint->role_rh)); return influence_nothing; } return influence_rsc_location; } static void influence_priority(resource_t * rsc_lh, resource_t * rsc_rh, rsc_colocation_t * constraint) { const char *rh_value = NULL; const char *lh_value = NULL; const char *attribute = "#id"; int score_multiplier = 1; if (constraint->node_attribute != NULL) { attribute = constraint->node_attribute; } if (!rsc_rh->allocated_to || !rsc_lh->allocated_to) { return; } lh_value = g_hash_table_lookup(rsc_lh->allocated_to->details->attrs, attribute); rh_value = g_hash_table_lookup(rsc_rh->allocated_to->details->attrs, attribute); if (!safe_str_eq(lh_value, rh_value)) { if(constraint->score == INFINITY && constraint->role_lh == RSC_ROLE_MASTER) { rsc_lh->priority = -INFINITY; } return; } if (constraint->role_rh && (constraint->role_rh != rsc_rh->next_role)) { return; } if (constraint->role_lh == RSC_ROLE_SLAVE) { score_multiplier = -1; } rsc_lh->priority = merge_weights(score_multiplier * constraint->score, rsc_lh->priority); } static void colocation_match(resource_t * rsc_lh, resource_t * rsc_rh, rsc_colocation_t * constraint) { const char *tmp = NULL; const char *value = NULL; const char *attribute = "#id"; GHashTable *work = NULL; gboolean do_check = FALSE; GHashTableIter iter; node_t *node = NULL; if (constraint->node_attribute != NULL) { attribute = constraint->node_attribute; } if (rsc_rh->allocated_to) { value = g_hash_table_lookup(rsc_rh->allocated_to->details->attrs, attribute); do_check = TRUE; } else if (constraint->score < 0) { /* nothing to do: * anti-colocation with something thats not running */ return; } work = node_hash_dup(rsc_lh->allowed_nodes); g_hash_table_iter_init(&iter, work); while (g_hash_table_iter_next(&iter, NULL, (void **)&node)) { tmp = g_hash_table_lookup(node->details->attrs, attribute); if (do_check && safe_str_eq(tmp, value)) { if (constraint->score < INFINITY) { pe_rsc_trace(rsc_lh, "%s: %s.%s += %d", constraint->id, rsc_lh->id, node->details->uname, constraint->score); node->weight = merge_weights(constraint->score, node->weight); } } else if (do_check == FALSE || constraint->score >= INFINITY) { pe_rsc_trace(rsc_lh, "%s: %s.%s -= %d (%s)", constraint->id, rsc_lh->id, node->details->uname, constraint->score, do_check ? 
"failed" : "unallocated"); node->weight = merge_weights(-constraint->score, node->weight); } } if (can_run_any(work) || constraint->score <= -INFINITY || constraint->score >= INFINITY) { g_hash_table_destroy(rsc_lh->allowed_nodes); rsc_lh->allowed_nodes = work; work = NULL; } else { static char score[33]; score2char_stack(constraint->score, score, sizeof(score)); pe_rsc_info(rsc_lh, "%s: Rolling back scores from %s (%d, %s)", rsc_lh->id, rsc_rh->id, do_check, score); } if (work) { g_hash_table_destroy(work); } } void native_rsc_colocation_rh(resource_t * rsc_lh, resource_t * rsc_rh, rsc_colocation_t * constraint) { enum filter_colocation_res filter_results; CRM_ASSERT(rsc_lh); CRM_ASSERT(rsc_rh); filter_results = filter_colocation_constraint(rsc_lh, rsc_rh, constraint); pe_rsc_trace(rsc_lh, "%sColocating %s with %s (%s, weight=%d, filter=%d)", constraint->score >= 0 ? "" : "Anti-", rsc_lh->id, rsc_rh->id, constraint->id, constraint->score, filter_results); switch (filter_results) { case influence_rsc_priority: influence_priority(rsc_lh, rsc_rh, constraint); break; case influence_rsc_location: pe_rsc_trace(rsc_lh, "%sColocating %s with %s (%s, weight=%d)", constraint->score >= 0 ? "" : "Anti-", rsc_lh->id, rsc_rh->id, constraint->id, constraint->score); colocation_match(rsc_lh, rsc_rh, constraint); break; case influence_nothing: default: return; } } static gboolean filter_rsc_ticket(resource_t * rsc_lh, rsc_ticket_t * rsc_ticket) { if (rsc_ticket->role_lh != RSC_ROLE_UNKNOWN && rsc_ticket->role_lh != rsc_lh->role) { pe_rsc_trace(rsc_lh, "LH: Skipping constraint: \"%s\" state filter", role2text(rsc_ticket->role_lh)); return FALSE; } return TRUE; } void rsc_ticket_constraint(resource_t * rsc_lh, rsc_ticket_t * rsc_ticket, pe_working_set_t * data_set) { if (rsc_ticket == NULL) { pe_err("rsc_ticket was NULL"); return; } if (rsc_lh == NULL) { pe_err("rsc_lh was NULL for %s", rsc_ticket->id); return; } if (rsc_ticket->ticket->granted && rsc_ticket->ticket->standby == FALSE) { return; } if (rsc_lh->children) { GListPtr gIter = rsc_lh->children; pe_rsc_trace(rsc_lh, "Processing ticket dependencies from %s", rsc_lh->id); for (; gIter != NULL; gIter = gIter->next) { resource_t *child_rsc = (resource_t *) gIter->data; rsc_ticket_constraint(child_rsc, rsc_ticket, data_set); } return; } pe_rsc_trace(rsc_lh, "%s: Processing ticket dependency on %s (%s, %s)", rsc_lh->id, rsc_ticket->ticket->id, rsc_ticket->id, role2text(rsc_ticket->role_lh)); if (rsc_ticket->ticket->granted == FALSE && g_list_length(rsc_lh->running_on) > 0) { GListPtr gIter = NULL; switch (rsc_ticket->loss_policy) { case loss_ticket_stop: resource_location(rsc_lh, NULL, -INFINITY, "__loss_of_ticket__", data_set); break; case loss_ticket_demote: /*Promotion score will be set to -INFINITY in master_promotion_order() */ if (rsc_ticket->role_lh != RSC_ROLE_MASTER) { resource_location(rsc_lh, NULL, -INFINITY, "__loss_of_ticket__", data_set); } break; case loss_ticket_fence: if (filter_rsc_ticket(rsc_lh, rsc_ticket) == FALSE) { return; } resource_location(rsc_lh, NULL, -INFINITY, "__loss_of_ticket__", data_set); for (gIter = rsc_lh->running_on; gIter != NULL; gIter = gIter->next) { node_t *node = (node_t *) gIter->data; pe_fence_node(data_set, node, "deadman ticket lost"); } break; case loss_ticket_freeze: if (filter_rsc_ticket(rsc_lh, rsc_ticket) == FALSE) { return; } if (g_list_length(rsc_lh->running_on) > 0) { clear_bit(rsc_lh->flags, pe_rsc_managed); set_bit(rsc_lh->flags, pe_rsc_block); } break; } } else if (rsc_ticket->ticket->granted == 
FALSE) { if (rsc_ticket->role_lh != RSC_ROLE_MASTER || rsc_ticket->loss_policy == loss_ticket_stop) { resource_location(rsc_lh, NULL, -INFINITY, "__no_ticket__", data_set); } } else if (rsc_ticket->ticket->standby) { if (rsc_ticket->role_lh != RSC_ROLE_MASTER || rsc_ticket->loss_policy == loss_ticket_stop) { resource_location(rsc_lh, NULL, -INFINITY, "__ticket_standby__", data_set); } } } enum pe_action_flags native_action_flags(action_t * action, node_t * node) { return action->flags; } enum pe_graph_flags native_update_actions(action_t * first, action_t * then, node_t * node, enum pe_action_flags flags, enum pe_action_flags filter, enum pe_ordering type) { /* flags == get_action_flags(first, then_node) called from update_action() */ enum pe_graph_flags changed = pe_graph_none; enum pe_action_flags then_flags = then->flags; enum pe_action_flags first_flags = first->flags; crm_trace( "Testing %s on %s (0x%.6x) with %s 0x%.6x %x %x", first->uuid, first->node ? first->node->details->uname : "[none]", first->flags, then->uuid, then->flags); if (type & pe_order_asymmetrical) { resource_t *then_rsc = then->rsc; enum rsc_role_e then_rsc_role = then_rsc ? then_rsc->fns->state(then_rsc, TRUE) : 0; if (!then_rsc) { /* ignore */ } else if ((then_rsc_role == RSC_ROLE_STOPPED) && safe_str_eq(then->task, RSC_STOP)) { /* ignore... if 'then' is supposed to be stopped after 'first', but * then is already stopped, there is nothing to be done when non-symmetrical. */ } else if ((then_rsc_role >= RSC_ROLE_STARTED) && safe_str_eq(then->task, RSC_START)) { /* ignore... if 'then' is supposed to be started after 'first', but * then is already started, there is nothing to be done when non-symmetrical. */ } else if (!(first->flags & pe_action_runnable)) { /* prevent 'then' action from happening if 'first' is not runnable and * 'then' has not yet occurred. */ pe_clear_action_bit(then, pe_action_runnable); pe_clear_action_bit(then, pe_action_optional); pe_rsc_trace(then->rsc, "Unset optional and runnable on %s", then->uuid); } else { /* ignore... then is allowed to start/stop if it wants to. 
*/ } } if (type & pe_order_implies_first) { if ((filter & pe_action_optional) && (flags & pe_action_optional) == 0) { pe_rsc_trace(first->rsc, "Unset optional on %s because of %s", first->uuid, then->uuid); pe_clear_action_bit(first, pe_action_optional); } if (is_set(flags, pe_action_migrate_runnable) && is_set(then->flags, pe_action_migrate_runnable) == FALSE && is_set(then->flags, pe_action_optional) == FALSE) { pe_rsc_trace(first->rsc, "Unset migrate runnable on %s because of %s", first->uuid, then->uuid); pe_clear_action_bit(first, pe_action_migrate_runnable); } } if (type & pe_order_implies_first_master) { if ((filter & pe_action_optional) && ((then->flags & pe_action_optional) == FALSE) && then->rsc && (then->rsc->role == RSC_ROLE_MASTER)) { pe_clear_action_bit(first, pe_action_optional); if (is_set(first->flags, pe_action_migrate_runnable) && is_set(then->flags, pe_action_migrate_runnable) == FALSE) { pe_rsc_trace(first->rsc, "Unset migrate runnable on %s because of %s", first->uuid, then->uuid); pe_clear_action_bit(first, pe_action_migrate_runnable); } pe_rsc_trace(then->rsc, "Unset optional on %s because of %s", first->uuid, then->uuid); } } if ((type & pe_order_implies_first_migratable) && is_set(filter, pe_action_optional)) { if (((then->flags & pe_action_migrate_runnable) == FALSE) || ((then->flags & pe_action_runnable) == FALSE)) { pe_rsc_trace(then->rsc, "Unset runnable on %s because %s is neither runnable or migratable", first->uuid, then->uuid); pe_clear_action_bit(first, pe_action_runnable); } if ((then->flags & pe_action_optional) == 0) { pe_rsc_trace(then->rsc, "Unset optional on %s because %s is not optional", first->uuid, then->uuid); pe_clear_action_bit(first, pe_action_optional); } } if ((type & pe_order_pseudo_left) && is_set(filter, pe_action_optional)) { if ((first->flags & pe_action_runnable) == FALSE) { pe_clear_action_bit(then, pe_action_migrate_runnable); pe_clear_action_bit(then, pe_action_pseudo); pe_rsc_trace(then->rsc, "Unset pseudo on %s because %s is not runnable", then->uuid, first->uuid); } } if (is_set(type, pe_order_runnable_left) && is_set(filter, pe_action_runnable) && is_set(then->flags, pe_action_runnable) && is_set(flags, pe_action_runnable) == FALSE) { pe_rsc_trace(then->rsc, "Unset runnable on %s because of %s", then->uuid, first->uuid); pe_clear_action_bit(then, pe_action_runnable); pe_clear_action_bit(then, pe_action_migrate_runnable); } if (is_set(type, pe_order_implies_then) && is_set(filter, pe_action_optional) && is_set(then->flags, pe_action_optional) && is_set(flags, pe_action_optional) == FALSE) { /* in this case, treat migrate_runnable as if first is optional */ if (is_set(first->flags, pe_action_migrate_runnable) == FALSE) { pe_rsc_trace(then->rsc, "Unset optional on %s because of %s", then->uuid, first->uuid); pe_clear_action_bit(then, pe_action_optional); } } if (is_set(type, pe_order_restart)) { const char *reason = NULL; CRM_ASSERT(first->rsc && first->rsc->variant == pe_native); CRM_ASSERT(then->rsc && then->rsc->variant == pe_native); if ((filter & pe_action_runnable) && (then->flags & pe_action_runnable) == 0 && (then->rsc->flags & pe_rsc_managed)) { reason = "shutdown"; } if ((filter & pe_action_optional) && (then->flags & pe_action_optional) == 0) { reason = "recover"; } if (reason && is_set(first->flags, pe_action_optional)) { if (is_set(first->flags, pe_action_runnable) || is_not_set(then->flags, pe_action_optional)) { pe_rsc_trace(first->rsc, "Handling %s: %s -> %s", reason, first->uuid, then->uuid); 
pe_clear_action_bit(first, pe_action_optional); } } if (reason && is_not_set(first->flags, pe_action_optional) && is_not_set(first->flags, pe_action_runnable)) { pe_rsc_trace(then->rsc, "Handling %s: %s -> %s", reason, first->uuid, then->uuid); pe_clear_action_bit(then, pe_action_runnable); } if (reason && is_not_set(first->flags, pe_action_optional) && is_set(first->flags, pe_action_migrate_runnable) && is_not_set(then->flags, pe_action_migrate_runnable)) { pe_clear_action_bit(first, pe_action_migrate_runnable); } } if (then_flags != then->flags) { changed |= pe_graph_updated_then; pe_rsc_trace(then->rsc, "Then: Flags for %s on %s are now 0x%.6x (was 0x%.6x) because of %s 0x%.6x", then->uuid, then->node ? then->node->details->uname : "[none]", then->flags, then_flags, first->uuid, first->flags); if(then->rsc && then->rsc->parent) { /* "X_stop then X_start" doesn't get handled for cloned groups unless we do this */ update_action(then); } } if (first_flags != first->flags) { changed |= pe_graph_updated_first; pe_rsc_trace(first->rsc, "First: Flags for %s on %s are now 0x%.6x (was 0x%.6x) because of %s 0x%.6x", first->uuid, first->node ? first->node->details->uname : "[none]", first->flags, first_flags, then->uuid, then->flags); } return changed; } void native_rsc_location(resource_t * rsc, rsc_to_node_t * constraint) { GListPtr gIter = NULL; GHashTableIter iter; node_t *node = NULL; if (constraint == NULL) { pe_err("Constraint is NULL"); return; } else if (rsc == NULL) { pe_err("LHS of rsc_to_node (%s) is NULL", constraint->id); return; } pe_rsc_trace(rsc, "Applying %s (%s) to %s", constraint->id, role2text(constraint->role_filter), rsc->id); /* take "lifetime" into account */ if (constraint->role_filter > RSC_ROLE_UNKNOWN && constraint->role_filter != rsc->next_role) { pe_rsc_debug(rsc, "Constraint (%s) is not active (role : %s vs. %s)", constraint->id, role2text(constraint->role_filter), role2text(rsc->next_role)); return; } else if (is_active(constraint) == FALSE) { pe_rsc_trace(rsc, "Constraint (%s) is not active", constraint->id); return; } if (constraint->node_list_rh == NULL) { pe_rsc_trace(rsc, "RHS of constraint %s is NULL", constraint->id); return; } for (gIter = constraint->node_list_rh; gIter != NULL; gIter = gIter->next) { node_t *node = (node_t *) gIter->data; node_t *other_node = NULL; other_node = (node_t *) pe_hash_table_lookup(rsc->allowed_nodes, node->details->id); if (other_node != NULL) { pe_rsc_trace(rsc, "%s + %s: %d + %d", node->details->uname, other_node->details->uname, node->weight, other_node->weight); other_node->weight = merge_weights(other_node->weight, node->weight); } else { other_node = node_copy(node); g_hash_table_insert(rsc->allowed_nodes, (gpointer) other_node->details->id, other_node); } if (other_node->rsc_discover_mode < constraint->discover_mode) { if (constraint->discover_mode == discover_exclusive) { rsc->exclusive_discover = TRUE; } /* exclusive > never > always... 
always is default */ other_node->rsc_discover_mode = constraint->discover_mode; } } g_hash_table_iter_init(&iter, rsc->allowed_nodes); while (g_hash_table_iter_next(&iter, NULL, (void **)&node)) { pe_rsc_trace(rsc, "%s + %s : %d", rsc->id, node->details->uname, node->weight); } } void native_expand(resource_t * rsc, pe_working_set_t * data_set) { GListPtr gIter = NULL; CRM_ASSERT(rsc); pe_rsc_trace(rsc, "Processing actions from %s", rsc->id); for (gIter = rsc->actions; gIter != NULL; gIter = gIter->next) { action_t *action = (action_t *) gIter->data; crm_trace("processing action %d for rsc=%s", action->id, rsc->id); graph_element_from_action(action, data_set); } for (gIter = rsc->children; gIter != NULL; gIter = gIter->next) { resource_t *child_rsc = (resource_t *) gIter->data; child_rsc->cmds->expand(child_rsc, data_set); } } #define log_change(fmt, args...) do { \ if(terminal) { \ printf(" * "fmt"\n", ##args); \ } else { \ crm_notice(fmt, ##args); \ } \ } while(0) #define STOP_SANITY_ASSERT(lineno) do { \ if(current && current->details->unclean) { \ /* It will be a pseduo op */ \ } else if(stop == NULL) { \ crm_err("%s:%d: No stop action exists for %s", __FUNCTION__, lineno, rsc->id); \ CRM_ASSERT(stop != NULL); \ } else if(is_set(stop->flags, pe_action_optional)) { \ crm_err("%s:%d: Action %s is still optional", __FUNCTION__, lineno, stop->uuid); \ CRM_ASSERT(is_not_set(stop->flags, pe_action_optional)); \ } \ } while(0) void LogActions(resource_t * rsc, pe_working_set_t * data_set, gboolean terminal) { node_t *next = NULL; node_t *current = NULL; action_t *stop = NULL; action_t *start = NULL; action_t *demote = NULL; action_t *promote = NULL; char *key = NULL; gboolean moving = FALSE; GListPtr possible_matches = NULL; if (rsc->children) { GListPtr gIter = NULL; for (gIter = rsc->children; gIter != NULL; gIter = gIter->next) { resource_t *child_rsc = (resource_t *) gIter->data; LogActions(child_rsc, data_set, terminal); } return; } next = rsc->allocated_to; if (rsc->running_on) { if (g_list_length(rsc->running_on) > 1 && rsc->partial_migration_source) { current = rsc->partial_migration_source; } else { current = rsc->running_on->data; } if (rsc->role == RSC_ROLE_STOPPED) { /* * This can occur when resources are being recovered * We fiddle with the current role in native_create_actions() */ rsc->role = RSC_ROLE_STARTED; } } if (current == NULL && is_set(rsc->flags, pe_rsc_orphan)) { /* Don't log stopped orphans */ return; } if (is_not_set(rsc->flags, pe_rsc_managed) || (current == NULL && next == NULL)) { pe_rsc_info(rsc, "Leave %s\t(%s%s)", rsc->id, role2text(rsc->role), is_not_set(rsc->flags, pe_rsc_managed) ? 
" unmanaged" : ""); return; } if (current != NULL && next != NULL && safe_str_neq(current->details->id, next->details->id)) { moving = TRUE; } key = start_key(rsc); possible_matches = find_actions(rsc->actions, key, next); free(key); if (possible_matches) { start = possible_matches->data; g_list_free(possible_matches); } key = stop_key(rsc); if(start == NULL || is_set(start->flags, pe_action_runnable) == FALSE) { possible_matches = find_actions(rsc->actions, key, NULL); } else { possible_matches = find_actions(rsc->actions, key, current); } free(key); if (possible_matches) { stop = possible_matches->data; g_list_free(possible_matches); } key = promote_key(rsc); possible_matches = find_actions(rsc->actions, key, next); free(key); if (possible_matches) { promote = possible_matches->data; g_list_free(possible_matches); } key = demote_key(rsc); possible_matches = find_actions(rsc->actions, key, next); free(key); if (possible_matches) { demote = possible_matches->data; g_list_free(possible_matches); } if (rsc->role == rsc->next_role) { action_t *migrate_to = NULL; key = generate_op_key(rsc->id, RSC_MIGRATED, 0); possible_matches = find_actions(rsc->actions, key, next); free(key); if (possible_matches) { migrate_to = possible_matches->data; } CRM_CHECK(next != NULL,); if (next == NULL) { } else if (migrate_to && is_set(migrate_to->flags, pe_action_runnable) && current) { log_change("Migrate %s\t(%s %s -> %s)", rsc->id, role2text(rsc->role), current->details->uname, next->details->uname); } else if (is_set(rsc->flags, pe_rsc_reload)) { log_change("Reload %s\t(%s %s)", rsc->id, role2text(rsc->role), next->details->uname); } else if (start == NULL || is_set(start->flags, pe_action_optional)) { pe_rsc_info(rsc, "Leave %s\t(%s %s)", rsc->id, role2text(rsc->role), next->details->uname); } else if (start && is_set(start->flags, pe_action_runnable) == FALSE) { log_change("Stop %s\t(%s %s%s)", rsc->id, role2text(rsc->role), current?current->details->uname:"N/A", stop && is_not_set(stop->flags, pe_action_runnable) ? " - blocked" : ""); STOP_SANITY_ASSERT(__LINE__); } else if (moving && current) { log_change("%s %s\t(%s %s -> %s)", is_set(rsc->flags, pe_rsc_failed) ? "Recover" : "Move ", rsc->id, role2text(rsc->role), current->details->uname, next->details->uname); } else if (is_set(rsc->flags, pe_rsc_failed)) { log_change("Recover %s\t(%s %s)", rsc->id, role2text(rsc->role), next->details->uname); STOP_SANITY_ASSERT(__LINE__); } else { log_change("Restart %s\t(%s %s)", rsc->id, role2text(rsc->role), next->details->uname); /* STOP_SANITY_ASSERT(__LINE__); False positive for migrate-fail-7 */ } g_list_free(possible_matches); return; } if (rsc->role > RSC_ROLE_SLAVE && rsc->role > rsc->next_role) { CRM_CHECK(current != NULL,); if (current != NULL) { gboolean allowed = FALSE; if (demote != NULL && (demote->flags & pe_action_runnable)) { allowed = TRUE; } log_change("Demote %s\t(%s -> %s %s%s)", rsc->id, role2text(rsc->role), role2text(rsc->next_role), current->details->uname, allowed ? 
"" : " - blocked"); if (stop != NULL && is_not_set(stop->flags, pe_action_optional) && rsc->next_role > RSC_ROLE_STOPPED && moving == FALSE) { if (is_set(rsc->flags, pe_rsc_failed)) { log_change("Recover %s\t(%s %s)", rsc->id, role2text(rsc->role), next->details->uname); STOP_SANITY_ASSERT(__LINE__); } else if (is_set(rsc->flags, pe_rsc_reload)) { log_change("Reload %s\t(%s %s)", rsc->id, role2text(rsc->role), next->details->uname); } else { log_change("Restart %s\t(%s %s)", rsc->id, role2text(rsc->next_role), next->details->uname); STOP_SANITY_ASSERT(__LINE__); } } } } else if (rsc->next_role == RSC_ROLE_STOPPED) { GListPtr gIter = NULL; CRM_CHECK(current != NULL,); key = stop_key(rsc); for (gIter = rsc->running_on; gIter != NULL; gIter = gIter->next) { node_t *node = (node_t *) gIter->data; action_t *stop_op = NULL; gboolean allowed = FALSE; possible_matches = find_actions(rsc->actions, key, node); if (possible_matches) { stop_op = possible_matches->data; g_list_free(possible_matches); } if (stop_op && (stop_op->flags & pe_action_runnable)) { STOP_SANITY_ASSERT(__LINE__); allowed = TRUE; } log_change("Stop %s\t(%s%s)", rsc->id, node->details->uname, allowed ? "" : " - blocked"); } free(key); } if (moving) { log_change("Move %s\t(%s %s -> %s)", rsc->id, role2text(rsc->next_role), current->details->uname, next->details->uname); STOP_SANITY_ASSERT(__LINE__); } if (rsc->role == RSC_ROLE_STOPPED) { gboolean allowed = FALSE; if (start && (start->flags & pe_action_runnable)) { allowed = TRUE; } CRM_CHECK(next != NULL,); if (next != NULL) { log_change("Start %s\t(%s%s)", rsc->id, next->details->uname, allowed ? "" : " - blocked"); } if (allowed == FALSE) { return; } } if (rsc->next_role > RSC_ROLE_SLAVE && rsc->role < rsc->next_role) { gboolean allowed = FALSE; CRM_LOG_ASSERT(next); if (stop != NULL && is_not_set(stop->flags, pe_action_optional) && rsc->role > RSC_ROLE_STOPPED) { if (is_set(rsc->flags, pe_rsc_failed)) { log_change("Recover %s\t(%s %s)", rsc->id, role2text(rsc->role), next?next->details->uname:NULL); STOP_SANITY_ASSERT(__LINE__); } else if (is_set(rsc->flags, pe_rsc_reload)) { log_change("Reload %s\t(%s %s)", rsc->id, role2text(rsc->role), next?next->details->uname:NULL); STOP_SANITY_ASSERT(__LINE__); } else { log_change("Restart %s\t(%s %s)", rsc->id, role2text(rsc->role), next?next->details->uname:NULL); STOP_SANITY_ASSERT(__LINE__); } } if (promote && (promote->flags & pe_action_runnable)) { allowed = TRUE; } log_change("Promote %s\t(%s -> %s %s%s)", rsc->id, role2text(rsc->role), role2text(rsc->next_role), next?next->details->uname:NULL, allowed ? 
"" : " - blocked"); } } gboolean StopRsc(resource_t * rsc, node_t * next, gboolean optional, pe_working_set_t * data_set) { GListPtr gIter = NULL; CRM_ASSERT(rsc); pe_rsc_trace(rsc, "%s", rsc->id); for (gIter = rsc->running_on; gIter != NULL; gIter = gIter->next) { node_t *current = (node_t *) gIter->data; action_t *stop; if (rsc->partial_migration_target) { if (rsc->partial_migration_target->details == current->details) { pe_rsc_trace(rsc, "Filtered %s -> %s %s", current->details->uname, next->details->uname, rsc->id); continue; } else { pe_rsc_trace(rsc, "Forced on %s %s", current->details->uname, rsc->id); optional = FALSE; } } pe_rsc_trace(rsc, "%s on %s", rsc->id, current->details->uname); stop = stop_action(rsc, current, optional); if (is_not_set(rsc->flags, pe_rsc_managed)) { update_action_flags(stop, pe_action_runnable | pe_action_clear); } if (is_set(data_set->flags, pe_flag_remove_after_stop)) { DeleteRsc(rsc, current, optional, data_set); } } return TRUE; } gboolean StartRsc(resource_t * rsc, node_t * next, gboolean optional, pe_working_set_t * data_set) { action_t *start = NULL; CRM_ASSERT(rsc); pe_rsc_trace(rsc, "%s on %s %d", rsc->id, next ? next->details->uname : "N/A", optional); start = start_action(rsc, next, TRUE); if (is_set(start->flags, pe_action_runnable) && optional == FALSE) { update_action_flags(start, pe_action_optional | pe_action_clear); } return TRUE; } gboolean PromoteRsc(resource_t * rsc, node_t * next, gboolean optional, pe_working_set_t * data_set) { char *key = NULL; GListPtr gIter = NULL; gboolean runnable = TRUE; GListPtr action_list = NULL; CRM_ASSERT(rsc); CRM_CHECK(next != NULL, return FALSE); pe_rsc_trace(rsc, "%s on %s", rsc->id, next->details->uname); key = start_key(rsc); action_list = find_actions_exact(rsc->actions, key, next); free(key); for (gIter = action_list; gIter != NULL; gIter = gIter->next) { action_t *start = (action_t *) gIter->data; if (is_set(start->flags, pe_action_runnable) == FALSE) { runnable = FALSE; } } g_list_free(action_list); if (runnable) { promote_action(rsc, next, optional); return TRUE; } pe_rsc_debug(rsc, "%s\tPromote %s (canceled)", next->details->uname, rsc->id); key = promote_key(rsc); action_list = find_actions_exact(rsc->actions, key, next); free(key); for (gIter = action_list; gIter != NULL; gIter = gIter->next) { action_t *promote = (action_t *) gIter->data; update_action_flags(promote, pe_action_runnable | pe_action_clear); } g_list_free(action_list); return TRUE; } gboolean DemoteRsc(resource_t * rsc, node_t * next, gboolean optional, pe_working_set_t * data_set) { GListPtr gIter = NULL; CRM_ASSERT(rsc); pe_rsc_trace(rsc, "%s", rsc->id); /* CRM_CHECK(rsc->next_role == RSC_ROLE_SLAVE, return FALSE); */ for (gIter = rsc->running_on; gIter != NULL; gIter = gIter->next) { node_t *current = (node_t *) gIter->data; pe_rsc_trace(rsc, "%s on %s", rsc->id, next ? next->details->uname : "N/A"); demote_action(rsc, current, optional); } return TRUE; } gboolean RoleError(resource_t * rsc, node_t * next, gboolean optional, pe_working_set_t * data_set) { CRM_ASSERT(rsc); crm_err("%s on %s", rsc->id, next ? 
next->details->uname : "N/A"); CRM_CHECK(FALSE, return FALSE); return FALSE; } gboolean NullOp(resource_t * rsc, node_t * next, gboolean optional, pe_working_set_t * data_set) { CRM_ASSERT(rsc); pe_rsc_trace(rsc, "%s", rsc->id); return FALSE; } gboolean DeleteRsc(resource_t * rsc, node_t * node, gboolean optional, pe_working_set_t * data_set) { if (is_set(rsc->flags, pe_rsc_failed)) { pe_rsc_trace(rsc, "Resource %s not deleted from %s: failed", rsc->id, node->details->uname); return FALSE; } else if (node == NULL) { pe_rsc_trace(rsc, "Resource %s not deleted: NULL node", rsc->id); return FALSE; } else if (node->details->unclean || node->details->online == FALSE) { pe_rsc_trace(rsc, "Resource %s not deleted from %s: unrunnable", rsc->id, node->details->uname); return FALSE; } crm_notice("Removing %s from %s", rsc->id, node->details->uname); delete_action(rsc, node, optional); new_rsc_order(rsc, RSC_STOP, rsc, RSC_DELETE, optional ? pe_order_implies_then : pe_order_optional, data_set); new_rsc_order(rsc, RSC_DELETE, rsc, RSC_START, optional ? pe_order_implies_then : pe_order_optional, data_set); return TRUE; } #include <../lib/pengine/unpack.h> #define set_char(x) last_rsc_id[lpc] = x; complete = TRUE; static char * increment_clone(char *last_rsc_id) { int lpc = 0; int len = 0; char *tmp = NULL; gboolean complete = FALSE; CRM_CHECK(last_rsc_id != NULL, return NULL); if (last_rsc_id != NULL) { len = strlen(last_rsc_id); } lpc = len - 1; while (complete == FALSE && lpc > 0) { switch (last_rsc_id[lpc]) { case 0: lpc--; break; case '0': set_char('1'); break; case '1': set_char('2'); break; case '2': set_char('3'); break; case '3': set_char('4'); break; case '4': set_char('5'); break; case '5': set_char('6'); break; case '6': set_char('7'); break; case '7': set_char('8'); break; case '8': set_char('9'); break; case '9': last_rsc_id[lpc] = '0'; lpc--; break; case ':': tmp = last_rsc_id; last_rsc_id = calloc(1, len + 2); memcpy(last_rsc_id, tmp, len); last_rsc_id[++lpc] = '1'; last_rsc_id[len] = '0'; last_rsc_id[len + 1] = 0; complete = TRUE; free(tmp); break; default: crm_err("Unexpected char: %c (%d)", last_rsc_id[lpc], lpc); return NULL; break; } } return last_rsc_id; } static node_t * probe_grouped_clone(resource_t * rsc, node_t * node, pe_working_set_t * data_set) { node_t *running = NULL; resource_t *top = uber_parent(rsc); if (running == NULL && is_set(top->flags, pe_rsc_unique) == FALSE) { /* Annoyingly we also need to check any other clone instances * Clumsy, but it will work. 
* * An alternative would be to update known_on for every peer * during process_rsc_state() * * This code desperately needs optimization * ptest -x with 100 nodes, 100 clones and clone-max=10: * No probes O(25s) * Detection without clone loop O(3m) * Detection with clone loop O(8m) ptest[32211]: 2010/02/18_14:27:55 CRIT: stage5: Probing for unknown resources ptest[32211]: 2010/02/18_14:33:39 CRIT: stage5: Done ptest[32211]: 2010/02/18_14:35:05 CRIT: stage7: Updating action states ptest[32211]: 2010/02/18_14:35:05 CRIT: stage7: Done */ char *clone_id = clone_zero(rsc->id); resource_t *peer = pe_find_resource(top->children, clone_id); while (peer && running == NULL) { running = pe_hash_table_lookup(peer->known_on, node->details->id); if (running != NULL) { /* we already know the status of the resource on this node */ pe_rsc_trace(rsc, "Skipping active clone: %s", rsc->id); free(clone_id); return running; } clone_id = increment_clone(clone_id); peer = pe_find_resource(data_set->resources, clone_id); } free(clone_id); } return running; } gboolean native_create_probe(resource_t * rsc, node_t * node, action_t * complete, gboolean force, pe_working_set_t * data_set) { char *key = NULL; action_t *probe = NULL; node_t *running = NULL; node_t *allowed = NULL; resource_t *top = uber_parent(rsc); static const char *rc_master = NULL; static const char *rc_inactive = NULL; if (rc_inactive == NULL) { rc_inactive = crm_itoa(PCMK_OCF_NOT_RUNNING); rc_master = crm_itoa(PCMK_OCF_RUNNING_MASTER); } CRM_CHECK(node != NULL, return FALSE); if (force == FALSE && is_not_set(data_set->flags, pe_flag_startup_probes)) { pe_rsc_trace(rsc, "Skipping active resource detection for %s", rsc->id); return FALSE; } else if (force == FALSE && is_container_remote_node(node)) { pe_rsc_trace(rsc, "Skipping active resource detection for %s on container %s", rsc->id, node->details->id); return FALSE; } if (is_remote_node(node)) { const char *class = crm_element_value(rsc->xml, XML_AGENT_ATTR_CLASS); if (safe_str_eq(class, "stonith")) { pe_rsc_trace(rsc, "Skipping probe for %s on node %s, remote-nodes do not run stonith agents.", rsc->id, node->details->id); return FALSE; } else if (rsc_contains_remote_node(data_set, rsc)) { pe_rsc_trace(rsc, "Skipping probe for %s on node %s, remote-nodes can not run resources that contain connection resources.", rsc->id, node->details->id); return FALSE; } else if (rsc->is_remote_node) { pe_rsc_trace(rsc, "Skipping probe for %s on node %s, remote-nodes can not run connection resources", rsc->id, node->details->id); return FALSE; } } if (rsc->children) { GListPtr gIter = NULL; gboolean any_created = FALSE; for (gIter = rsc->children; gIter != NULL; gIter = gIter->next) { resource_t *child_rsc = (resource_t *) gIter->data; any_created = child_rsc->cmds->create_probe(child_rsc, node, complete, force, data_set) || any_created; } return any_created; } else if (rsc->container) { pe_rsc_trace(rsc, "Skipping %s: it is within container %s", rsc->id, rsc->container->id); return FALSE; } if (is_set(rsc->flags, pe_rsc_orphan)) { pe_rsc_trace(rsc, "Skipping orphan: %s", rsc->id); return FALSE; } running = g_hash_table_lookup(rsc->known_on, node->details->id); if (running == NULL && is_set(rsc->flags, pe_rsc_unique) == FALSE) { /* Anonymous clones */ if (rsc->parent == top) { running = g_hash_table_lookup(rsc->parent->known_on, node->details->id); } else { /* Grouped anonymous clones need extra special handling */ running = probe_grouped_clone(rsc, node, data_set); } } if (force == FALSE && running != NULL) { 
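/* Note: for anonymous clones, 'running' may have been set above from the parent clone's known_on table or, for grouped anonymous clones, by probe_grouped_clone(), which walks the peer instances by stepping the instance suffix with increment_clone() (e.g. an id such as "child:9" becomes "child:10"). */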
/* we already know the status of the resource on this node */ pe_rsc_trace(rsc, "Skipping active: %s on %s", rsc->id, node->details->uname); return FALSE; } allowed = g_hash_table_lookup(rsc->allowed_nodes, node->details->id); if (rsc->exclusive_discover || top->exclusive_discover) { if (allowed == NULL) { /* exclusive discover is enabled and this node is not in the allowed list. */ return FALSE; } else if (allowed->rsc_discover_mode != discover_exclusive) { /* exclusive discover is enabled and this node is not marked * as a node this resource should be discovered on */ return FALSE; } } if (allowed && allowed->rsc_discover_mode == discover_never) { /* this resource is marked as not needing to be discovered on this node */ return FALSE; } key = generate_op_key(rsc->id, RSC_STATUS, 0); probe = custom_action(rsc, key, RSC_STATUS, node, FALSE, TRUE, data_set); update_action_flags(probe, pe_action_optional | pe_action_clear); /* If enabled, require unfencing before probing any fence devices * but ensure it happens after any resources that require * unfencing have been probed. * * Doing it the other way (requiring unfencing after probing * resources that need it) would result in the node being * unfenced, and all its resources being stopped, whenever a new * resource is added. Which would be highly suboptimal. * * So essentially, at the point the fencing device(s) have been * probed, we know the state of all resources that require * unfencing and that unfencing occurred. */ if(is_set(rsc->flags, pe_rsc_fence_device) && is_set(data_set->flags, pe_flag_enable_unfencing)) { trigger_unfencing(NULL, node, "node discovery", probe, data_set); probe->priority = INFINITY; /* Ensure this runs if unfencing succeeds */ } else if(is_set(rsc->flags, pe_rsc_needs_unfencing)) { action_t *unfence = pe_fence_op(node, "on", TRUE, data_set); order_actions(probe, unfence, pe_order_optional); } /* * We need to know if it's running_on (not just known_on) this node * to correctly determine the target rc. */ running = pe_find_node_id(rsc->running_on, node->details->id); if (running == NULL) { add_hash_param(probe->meta, XML_ATTR_TE_TARGET_RC, rc_inactive); } else if (rsc->role == RSC_ROLE_MASTER) { add_hash_param(probe->meta, XML_ATTR_TE_TARGET_RC, rc_master); } pe_rsc_debug(rsc, "Probing %s on %s (%s)", rsc->id, node->details->uname, role2text(rsc->role)); if(is_set(rsc->flags, pe_rsc_fence_device) && is_set(data_set->flags, pe_flag_enable_unfencing)) { /* Normally rsc.start depends on probe complete which depends * on rsc.probe. But this can't be the case in this scenario as * it would create graph loops. * * So instead we explicitly order 'rsc.probe then rsc.start' */ custom_action_order(rsc, NULL, probe, rsc, generate_op_key(rsc->id, RSC_START, 0), NULL, pe_order_optional, data_set); } else { order_actions(probe, complete, pe_order_implies_then); } return TRUE; } static void native_start_constraints(resource_t * rsc, action_t * stonith_op, gboolean is_stonith, pe_working_set_t * data_set) { node_t *target = stonith_op ? 
stonith_op->node : NULL; GListPtr gIter = NULL; action_t *all_stopped = get_pseudo_op(ALL_STOPPED, data_set); action_t *stonith_done = get_pseudo_op(STONITH_DONE, data_set); for (gIter = rsc->actions; gIter != NULL; gIter = gIter->next) { action_t *action = (action_t *) gIter->data; if(action->needs == rsc_req_nothing) { } else if (action->needs == rsc_req_stonith) { order_actions(stonith_done, action, pe_order_optional); } else if (target != NULL && safe_str_eq(action->task, RSC_START) && NULL == pe_hash_table_lookup(rsc->known_on, target->details->id)) { /* if known == NULL, then we don't know if * the resource is active on the node * we're about to shoot * * in this case, regardless of action->needs, * the only safe option is to wait until * the node is shot before doing anything * with the resource * * it's analogous to waiting for all the probes * for rscX to complete before starting rscX * * the most likely explanation is that the * DC died and took its status with it */ pe_rsc_debug(rsc, "Ordering %s after %s recovery", action->uuid, target->details->uname); order_actions(all_stopped, action, pe_order_optional | pe_order_runnable_left); } } } +static GListPtr +find_fence_target_node_actions(GListPtr search_list, const char *key, node_t *fence_target, pe_working_set_t *data_set) +{ + GListPtr gIter = NULL; + GListPtr result_list = find_actions(search_list, key, fence_target); + + /* find stop actions for this rsc on any container nodes running on + * the fencing target node */ + for (gIter = fence_target->details->running_rsc; gIter != NULL; gIter = gIter->next) { + GListPtr iter = NULL; + GListPtr tmp_list = NULL; + resource_t *tmp_rsc = (resource_t *) gIter->data; + node_t *container_node = NULL; + + /* found a container node that lives on the host node + * that is getting fenced. Find stop actions for our rsc that live on + * the container node as well. These stop operations are also + * implied by fencing of the host cluster node.
*/ + if (tmp_rsc->is_remote_node && tmp_rsc->container != NULL) { + container_node = pe_find_node(data_set->nodes, tmp_rsc->id); + } + if (container_node) { + tmp_list = find_actions(search_list, key, container_node); + } + for (iter = tmp_list; iter != NULL; iter = iter->next) { + result_list = g_list_prepend(result_list, (action_t *) iter->data); + } + g_list_free(tmp_list); + } + + return result_list; +} + static void native_stop_constraints(resource_t * rsc, action_t * stonith_op, gboolean is_stonith, pe_working_set_t * data_set) { char *key = NULL; GListPtr gIter = NULL; GListPtr action_list = NULL; resource_t *top = uber_parent(rsc); key = stop_key(rsc); - action_list = find_actions(rsc->actions, key, stonith_op->node); + action_list = find_fence_target_node_actions(rsc->actions, key, stonith_op->node, data_set); free(key); /* add the stonith OP as a stop pre-req and the mark the stop * as a pseudo op - since its now redundant */ for (gIter = action_list; gIter != NULL; gIter = gIter->next) { action_t *action = (action_t *) gIter->data; if (action->node->details->online && action->node->details->unclean == FALSE && is_set(rsc->flags, pe_rsc_failed)) { continue; } if (is_set(rsc->flags, pe_rsc_failed)) { crm_notice("Stop of failed resource %s is" " implicit after %s is fenced", rsc->id, action->node->details->uname); } else { crm_info("%s is implicit after %s is fenced", action->uuid, action->node->details->uname); } /* the stop would never complete and is * now implied by the stonith operation */ update_action_flags(action, pe_action_pseudo); update_action_flags(action, pe_action_runnable); update_action_flags(action, pe_action_implied_by_stonith); { enum pe_ordering flags = pe_order_optional; action_t *parent_stop = find_first_action(top->actions, NULL, RSC_STOP, NULL); if(stonith_op->node->details->remote_rsc) { flags |= pe_order_preserve; } order_actions(stonith_op, action, flags); order_actions(stonith_op, parent_stop, flags); } if (is_set(rsc->flags, pe_rsc_notify)) { /* Create a second notification that will be delivered * immediately after the node is fenced * * Basic problem: * - C is a clone active on the node to be shot and stopping on another * - R is a resource that depends on C * * + C.stop depends on R.stop * + C.stopped depends on STONITH * + C.notify depends on C.stopped * + C.healthy depends on C.notify * + R.stop depends on C.healthy * * The extra notification here changes * + C.healthy depends on C.notify * into: * + C.healthy depends on C.notify' * + C.notify' depends on STONITH' * thus breaking the loop */ notify_data_t *n_data = create_notification_boundaries(rsc, RSC_STOP, NULL, stonith_op, data_set); crm_info("Creating secondary notification for %s", action->uuid); collect_notification_data(rsc, TRUE, FALSE, n_data); g_hash_table_insert(n_data->keys, strdup("notify_stop_resource"), strdup(rsc->id)); g_hash_table_insert(n_data->keys, strdup("notify_stop_uname"), strdup(action->node->details->uname)); create_notifications(uber_parent(rsc), n_data, data_set); free_notification_data(n_data); } /* From Bug #1601, successful fencing must be an input to a failed resources stop action. However given group(rA, rB) running on nodeX and B.stop has failed, A := stop healthy resource (rA.stop) B := stop failed resource (pseudo operation B.stop) C := stonith nodeX A requires B, B requires C, C requires A This loop would prevent the cluster from making progress. This block creates the "C requires A" dependency and therefore must (at least for now) be disabled. 
Instead, run the block above and treat all resources on nodeX as B would be (marked as a pseudo op depending on the STONITH). TODO: Break the "A requires B" dependency in update_action() and re-enable this block } else if(is_stonith == FALSE) { crm_info("Moving healthy resource %s" " off %s before fencing", rsc->id, node->details->uname); * stop healthy resources before the * stonith op * custom_action_order( rsc, stop_key(rsc), NULL, NULL,strdup(CRM_OP_FENCE),stonith_op, pe_order_optional, data_set); */ } g_list_free(action_list); key = demote_key(rsc); - action_list = find_actions(rsc->actions, key, stonith_op->node); + action_list = find_fence_target_node_actions(rsc->actions, key, stonith_op->node, data_set); free(key); for (gIter = action_list; gIter != NULL; gIter = gIter->next) { action_t *action = (action_t *) gIter->data; if (action->node->details->online == FALSE || action->node->details->unclean == TRUE || is_set(rsc->flags, pe_rsc_failed)) { if (is_set(rsc->flags, pe_rsc_failed)) { pe_rsc_info(rsc, "Demote of failed resource %s is" " implict after %s is fenced", rsc->id, action->node->details->uname); } else { pe_rsc_info(rsc, "%s is implicit after %s is fenced", action->uuid, action->node->details->uname); } /* the stop would never complete and is * now implied by the stonith operation */ crm_trace("here - 1"); update_action_flags(action, pe_action_pseudo); update_action_flags(action, pe_action_runnable); if (is_stonith == FALSE) { order_actions(stonith_op, action, pe_order_preserve|pe_order_optional); } } } g_list_free(action_list); } void rsc_stonith_ordering(resource_t * rsc, action_t * stonith_op, pe_working_set_t * data_set) { gboolean is_stonith = FALSE; if (rsc->children) { GListPtr gIter = NULL; for (gIter = rsc->children; gIter != NULL; gIter = gIter->next) { resource_t *child_rsc = (resource_t *) gIter->data; rsc_stonith_ordering(child_rsc, stonith_op, data_set); } return; } if (is_not_set(rsc->flags, pe_rsc_managed)) { pe_rsc_trace(rsc, "Skipping fencing constraints for unmanaged resource: %s", rsc->id); return; } /* Start constraints */ native_start_constraints(rsc, stonith_op, is_stonith, data_set); /* Stop constraints */ if (stonith_op) { native_stop_constraints(rsc, stonith_op, is_stonith, data_set); } } enum stack_activity { stack_stable = 0, stack_starting = 1, stack_stopping = 2, stack_middle = 4, }; static action_t * get_first_named_action(resource_t * rsc, const char *action, gboolean only_valid, node_t * current) { action_t *a = NULL; GListPtr action_list = NULL; char *key = generate_op_key(rsc->id, action, 0); action_list = find_actions(rsc->actions, key, current); if (action_list == NULL || action_list->data == NULL) { crm_trace("%s: no %s action", rsc->id, action); free(key); return NULL; } a = action_list->data; g_list_free(action_list); if (only_valid && is_set(a->flags, pe_action_pseudo)) { crm_trace("%s: pseudo", key); a = NULL; } else if (only_valid && is_not_set(a->flags, pe_action_runnable)) { crm_trace("%s: runnable", key); a = NULL; } free(key); return a; } static void ReloadRsc(resource_t * rsc, action_t * stop, action_t * start, pe_working_set_t * data_set) { action_t *action = NULL; action_t *rewrite = NULL; if (is_not_set(rsc->flags, pe_rsc_try_reload)) { return; } else if (is_not_set(stop->flags, pe_action_optional)) { pe_rsc_trace(rsc, "%s: stop action", rsc->id); return; } else if (is_not_set(start->flags, pe_action_optional)) { pe_rsc_trace(rsc, "%s: start action", rsc->id); return; } pe_rsc_trace(rsc, "%s on %s", rsc->id, 
stop->node->details->uname); action = get_first_named_action(rsc, RSC_PROMOTE, TRUE, NULL); if (action && is_set(action->flags, pe_action_optional) == FALSE) { update_action_flags(action, pe_action_pseudo); } action = get_first_named_action(rsc, RSC_DEMOTE, TRUE, NULL); if (action && is_set(action->flags, pe_action_optional) == FALSE) { rewrite = action; update_action_flags(stop, pe_action_pseudo); } else { rewrite = start; } pe_rsc_info(rsc, "Rewriting %s of %s on %s as a reload", rewrite->task, rsc->id, stop->node->details->uname); set_bit(rsc->flags, pe_rsc_reload); update_action_flags(rewrite, pe_action_optional | pe_action_clear); free(rewrite->uuid); free(rewrite->task); rewrite->task = strdup("reload"); rewrite->uuid = generate_op_key(rsc->id, rewrite->task, 0); } void rsc_reload(resource_t * rsc, pe_working_set_t * data_set) { GListPtr gIter = NULL; action_t *stop = NULL; action_t *start = NULL; if(is_set(rsc->flags, pe_rsc_munging)) { return; } set_bit(rsc->flags, pe_rsc_munging); if (rsc->children) { for (gIter = rsc->children; gIter != NULL; gIter = gIter->next) { resource_t *child_rsc = (resource_t *) gIter->data; rsc_reload(child_rsc, data_set); } return; } else if (rsc->variant > pe_native) { return; } pe_rsc_trace(rsc, "Processing %s", rsc->id); stop = get_first_named_action(rsc, RSC_STOP, TRUE, rsc->running_on ? rsc->running_on->data : NULL); start = get_first_named_action(rsc, RSC_START, TRUE, NULL); if (is_not_set(rsc->flags, pe_rsc_managed) || is_set(rsc->flags, pe_rsc_failed) || is_set(rsc->flags, pe_rsc_start_pending) || rsc->next_role < RSC_ROLE_STARTED) { pe_rsc_trace(rsc, "%s: general resource state: flags=0x%.16llx", rsc->id, rsc->flags); return; } if (stop != NULL && is_set(stop->flags, pe_action_optional) && is_set(rsc->flags, pe_rsc_try_reload)) { ReloadRsc(rsc, stop, start, data_set); } } void native_append_meta(resource_t * rsc, xmlNode * xml) { char *value = g_hash_table_lookup(rsc->meta, XML_RSC_ATTR_INCARNATION); if (value) { char *name = NULL; name = crm_meta_name(XML_RSC_ATTR_INCARNATION); crm_xml_add(xml, name, value); free(name); } value = g_hash_table_lookup(rsc->meta, XML_RSC_ATTR_REMOTE_NODE); if (value) { char *name = NULL; name = crm_meta_name(XML_RSC_ATTR_REMOTE_NODE); crm_xml_add(xml, name, value); free(name); } } diff --git a/pengine/regression.sh b/pengine/regression.sh index 3a244d7416..9172acba77 100755 --- a/pengine/regression.sh +++ b/pengine/regression.sh @@ -1,783 +1,794 @@ #!/bin/bash # Copyright (C) 2004 Andrew Beekhof # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public # License as published by the Free Software Foundation; either # version 2 of the License, or (at your option) any later version. # # This software is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # General Public License for more details. # # You should have received a copy of the GNU General Public # License along with this library; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA # core=`dirname $0` . $core/regression.core.sh || exit 1 create_mode="true" info Generating test outputs for these tests... # do_test file description info Done. 
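# Arguments given after the description (e.g. -t "<date>" or --rc <n> as used below) are passed through to do_test, which is provided by regression.core.sh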
echo "" info Performing the following tests from $io_dir create_mode="false" echo "" do_test simple1 "Offline " do_test simple2 "Start " do_test simple3 "Start 2 " do_test simple4 "Start Failed" do_test simple6 "Stop Start " do_test simple7 "Shutdown " #do_test simple8 "Stonith " #do_test simple9 "Lower version" #do_test simple10 "Higher version" do_test simple11 "Priority (ne)" do_test simple12 "Priority (eq)" do_test simple8 "Stickiness" echo "" do_test group1 "Group " do_test group2 "Group + Native " do_test group3 "Group + Group " do_test group4 "Group + Native (nothing)" do_test group5 "Group + Native (move) " do_test group6 "Group + Group (move) " do_test group7 "Group colocation" do_test group13 "Group colocation (cant run)" do_test group8 "Group anti-colocation" do_test group9 "Group recovery" do_test group10 "Group partial recovery" do_test group11 "Group target_role" do_test group14 "Group stop (graph terminated)" do_test group15 "-ve group colocation" do_test bug-1573 "Partial stop of a group with two children" do_test bug-1718 "Mandatory group ordering - Stop group_FUN" do_test bug-lf-2613 "Move group on failure" do_test bug-lf-2619 "Move group on clone failure" do_test group-fail "Ensure stop order is preserved for partially active groups" do_test group-unmanaged "No need to restart r115 because r114 is unmanaged" do_test group-unmanaged-stopped "Make sure r115 is stopped when r114 fails" do_test group-dependants "Account for the location preferences of things colocated with a group" echo "" do_test rsc_dep1 "Must not " do_test rsc_dep3 "Must " do_test rsc_dep5 "Must not 3 " do_test rsc_dep7 "Must 3 " do_test rsc_dep10 "Must (but cant)" do_test rsc_dep2 "Must (running) " do_test rsc_dep8 "Must (running : alt) " do_test rsc_dep4 "Must (running + move)" do_test asymmetric "Asymmetric - require explicit location constraints" echo "" do_test orphan-0 "Orphan ignore" do_test orphan-1 "Orphan stop" do_test orphan-2 "Orphan stop, remove failcount" echo "" do_test params-0 "Params: No change" do_test params-1 "Params: Changed" do_test params-2 "Params: Resource definition" do_test params-4 "Params: Reload" do_test params-5 "Params: Restart based on probe digest" do_test novell-251689 "Resource definition change + target_role=stopped" do_test bug-lf-2106 "Restart all anonymous clone instances after config change" do_test params-6 "Params: Detect reload in previously migrated resource" do_test nvpair-id-ref "Support id-ref in nvpair with optional name" echo "" do_test target-0 "Target Role : baseline" do_test target-1 "Target Role : master" do_test target-2 "Target Role : invalid" echo "" do_test base-score "Set a node's default score for all nodes" echo "" do_test date-1 "Dates" -t "2005-020" do_test date-2 "Date Spec - Pass" -t "2005-020T12:30" do_test date-3 "Date Spec - Fail" -t "2005-020T11:30" do_test origin "Timing of recurring operations" -t "2014-05-07 00:28:00" do_test probe-0 "Probe (anon clone)" do_test probe-1 "Pending Probe" do_test probe-2 "Correctly re-probe cloned groups" do_test probe-3 "Probe (pending node)" do_test probe-4 "Probe (pending node + stopped resource)" --rc 4 do_test standby "Standby" do_test comments "Comments" echo "" do_test one-or-more-0 "Everything starts" do_test one-or-more-1 "Nothing starts because of A" do_test one-or-more-2 "D can start because of C" do_test one-or-more-3 "D cannot start because of B and C" do_test one-or-more-4 "D cannot start because of target-role" do_test one-or-more-5 "Start A and F even though C and D are stopped" do_test 
one-or-more-6 "Leave A running even though B is stopped" do_test one-or-more-7 "Leave A running even though C is stopped" do_test bug-5140-require-all-false "Allow basegrp:0 to stop" +do_test clone-require-all-1 "clone B starts node 3 and 4" +do_test clone-require-all-2 "clone B remains stopped everywhere" +do_test clone-require-all-3 "clone B stops everywhere because A stops everywhere" +do_test clone-require-all-4 "clone B remains on node 3 and 4 with only one instance of A remaining." +do_test clone-require-all-5 "clone B starts on node 1 3 and 4" +do_test clone-require-all-6 "clone B remains active after shutting down instances of A" +do_test clone-require-all-7 "clone A and B both start at the same time. all instances of A start before B." +do_test clone-require-all-no-interleave-1 "C starts everywhere after A and B" +do_test clone-require-all-no-interleave-2 "C starts on nodes 1, 2, and 4 with only one active instance of B" +do_test clone-require-all-no-interleave-3 "C remains active when instance of B is stopped on one node and started on another." echo "" do_test order1 "Order start 1 " do_test order2 "Order start 2 " do_test order3 "Order stop " do_test order4 "Order (multiple) " do_test order5 "Order (move) " do_test order6 "Order (move w/ restart) " do_test order7 "Order (manditory) " do_test order-optional "Order (score=0) " do_test order-required "Order (score=INFINITY) " do_test bug-lf-2171 "Prevent group start when clone is stopped" do_test order-clone "Clone ordering should be able to prevent startup of dependant clones" do_test order-sets "Ordering for resource sets" do_test order-serialize "Serialize resources without inhibiting migration" do_test order-serialize-set "Serialize a set of resources without inhibiting migration" do_test clone-order-primitive "Order clone start after a primitive" do_test clone-order-16instances "Verify ordering of 16 cloned resources" do_test order-optional-keyword "Order (optional keyword)" do_test order-mandatory "Order (mandatory keyword)" do_test bug-lf-2493 "Don't imply colocation requirements when applying ordering constraints with clones" do_test ordered-set-basic-startup "Constraint set with default order settings." do_test ordered-set-natural "Allow natural set ordering" do_test order-wrong-kind "Order (error)" echo "" do_test coloc-loop "Colocation - loop" do_test coloc-many-one "Colocation - many-to-one" do_test coloc-list "Colocation - many-to-one with list" do_test coloc-group "Colocation - groups" do_test coloc-slave-anti "Anti-colocation with slave shouldn't prevent master colocation" do_test coloc-attr "Colocation based on node attributes" do_test coloc-negative-group "Negative colocation with a group" do_test coloc-intra-set "Intra-set colocation" do_test bug-lf-2435 "Colocation sets with a negative score" do_test coloc-clone-stays-active "Ensure clones don't get stopped/demoted because a dependant must stop" do_test coloc_fp_logic "Verify floating point calculations in colocation are working" do_test colo_master_w_native "cl#5070 - Verify promotion order is affected when colocating master to native rsc." do_test colo_slave_w_native "cl#5070 - Verify promotion order is affected when colocating slave to native rsc." 
do_test anti-colocation-order "cl#5187 - Prevent resources in an anti-colocation from even temporarily running on a same node" echo "" do_test rsc-sets-seq-true "Resource Sets - sequential=false" do_test rsc-sets-seq-false "Resource Sets - sequential=true" do_test rsc-sets-clone "Resource Sets - Clone" do_test rsc-sets-master "Resource Sets - Master" do_test rsc-sets-clone-1 "Resource Sets - Clone (lf#2404)" #echo "" #do_test agent1 "version: lt (empty)" #do_test agent2 "version: eq " #do_test agent3 "version: gt " echo "" do_test attrs1 "string: eq (and) " do_test attrs2 "string: lt / gt (and)" do_test attrs3 "string: ne (or) " do_test attrs4 "string: exists " do_test attrs5 "string: not_exists " do_test attrs6 "is_dc: true " do_test attrs7 "is_dc: false " do_test attrs8 "score_attribute " do_test per-node-attrs "Per node resource parameters" echo "" do_test mon-rsc-1 "Schedule Monitor - start" do_test mon-rsc-2 "Schedule Monitor - move " do_test mon-rsc-3 "Schedule Monitor - pending start " do_test mon-rsc-4 "Schedule Monitor - move/pending start" echo "" do_test rec-rsc-0 "Resource Recover - no start " do_test rec-rsc-1 "Resource Recover - start " do_test rec-rsc-2 "Resource Recover - monitor " do_test rec-rsc-3 "Resource Recover - stop - ignore" do_test rec-rsc-4 "Resource Recover - stop - block " do_test rec-rsc-5 "Resource Recover - stop - fence " do_test rec-rsc-6 "Resource Recover - multiple - restart" do_test rec-rsc-7 "Resource Recover - multiple - stop " do_test rec-rsc-8 "Resource Recover - multiple - block " do_test rec-rsc-9 "Resource Recover - group/group" do_test monitor-recovery "on-fail=block + resource recovery detected by recurring monitor" do_test stop-failure-no-quorum "Stop failure without quorum" do_test stop-failure-no-fencing "Stop failure without fencing available" do_test stop-failure-with-fencing "Stop failure with fencing available" echo "" do_test quorum-1 "No quorum - ignore" do_test quorum-2 "No quorum - freeze" do_test quorum-3 "No quorum - stop " do_test quorum-4 "No quorum - start anyway" do_test quorum-5 "No quorum - start anyway (group)" do_test quorum-6 "No quorum - start anyway (clone)" do_test bug-cl-5212 "No promotion with no-quorum-policy=freeze" echo "" do_test rec-node-1 "Node Recover - Startup - no fence" do_test rec-node-2 "Node Recover - Startup - fence " do_test rec-node-3 "Node Recover - HA down - no fence" do_test rec-node-4 "Node Recover - HA down - fence " do_test rec-node-5 "Node Recover - CRM down - no fence" do_test rec-node-6 "Node Recover - CRM down - fence " do_test rec-node-7 "Node Recover - no quorum - ignore " do_test rec-node-8 "Node Recover - no quorum - freeze " do_test rec-node-9 "Node Recover - no quorum - stop " do_test rec-node-10 "Node Recover - no quorum - stop w/fence" do_test rec-node-11 "Node Recover - CRM down w/ group - fence " do_test rec-node-12 "Node Recover - nothing active - fence " do_test rec-node-13 "Node Recover - failed resource + shutdown - fence " do_test rec-node-15 "Node Recover - unknown lrm section" do_test rec-node-14 "Serialize all stonith's" echo "" do_test multi1 "Multiple Active (stop/start)" echo "" do_test migrate-begin "Normal migration" do_test migrate-success "Completed migration" do_test migrate-partial-1 "Completed migration, missing stop on source" do_test migrate-partial-2 "Successful migrate_to only" do_test migrate-partial-3 "Successful migrate_to only, target down" do_test migrate-partial-4 "Migrate from the correct host after migrate_to+migrate_from" do_test 
bug-5186-partial-migrate "Handle partial migration when src node loses membership" do_test migrate-fail-2 "Failed migrate_from" do_test migrate-fail-3 "Failed migrate_from + stop on source" do_test migrate-fail-4 "Failed migrate_from + stop on target - ideally we wouldn't need to re-stop on target" do_test migrate-fail-5 "Failed migrate_from + stop on source and target" do_test migrate-fail-6 "Failed migrate_to" do_test migrate-fail-7 "Failed migrate_to + stop on source" do_test migrate-fail-8 "Failed migrate_to + stop on target - ideally we wouldn't need to re-stop on target" do_test migrate-fail-9 "Failed migrate_to + stop on source and target" do_test migrate-stop "Migration in a stopping stack" do_test migrate-start "Migration in a starting stack" do_test migrate-stop_start "Migration in a restarting stack" do_test migrate-stop-complex "Migration in a complex stopping stack" do_test migrate-start-complex "Migration in a complex starting stack" do_test migrate-stop-start-complex "Migration in a complex moving stack" do_test migrate-shutdown "Order the post-migration 'stop' before node shutdown" do_test migrate-1 "Migrate (migrate)" do_test migrate-2 "Migrate (stable)" do_test migrate-3 "Migrate (failed migrate_to)" do_test migrate-4 "Migrate (failed migrate_from)" do_test novell-252693 "Migration in a stopping stack" do_test novell-252693-2 "Migration in a starting stack" do_test novell-252693-3 "Non-Migration in a starting and stopping stack" do_test bug-1820 "Migration in a group" do_test bug-1820-1 "Non-migration in a group" do_test migrate-5 "Primitive migration with a clone" do_test migrate-fencing "Migration after Fencing" do_test migrate-both-vms "Migrate two VMs that have no colocation" do_test 1-a-then-bm-move-b "Advanced migrate logic. A then B. migrate B." do_test 2-am-then-b-move-a "Advanced migrate logic, A then B, migrate A without stopping B" do_test 3-am-then-bm-both-migrate "Advanced migrate logic. A then B. migrate both" do_test 4-am-then-bm-b-not-migratable "Advanced migrate logic, A then B, B not migratable" do_test 5-am-then-bm-a-not-migratable "Advanced migrate logic. A then B. 
move both, a not migratable" do_test 6-migrate-group "Advanced migrate logic, migrate a group" do_test 7-migrate-group-one-unmigratable "Advanced migrate logic, migrate group mixed with allow-migrate true/false" do_test 8-am-then-bm-a-migrating-b-stopping "Advanced migrate logic, A then B, A migrating, B stopping" do_test 9-am-then-bm-b-migrating-a-stopping "Advanced migrate logic, A then B, B migrate, A stopping" do_test 10-a-then-bm-b-move-a-clone "Advanced migrate logic, A clone then B, migrate B while stopping A" do_test 11-a-then-bm-b-move-a-clone-starting "Advanced migrate logic, A clone then B, B moving while A is start/stopping" #echo "" #do_test complex1 "Complex " do_test bug-lf-2422 "Dependancy on partially active group - stop ocfs:*" echo "" do_test clone-anon-probe-1 "Probe the correct (anonymous) clone instance for each node" do_test clone-anon-probe-2 "Avoid needless re-probing of anonymous clones" do_test clone-anon-failcount "Merge failcounts for anonymous clones" do_test inc0 "Incarnation start" do_test inc1 "Incarnation start order" do_test inc2 "Incarnation silent restart, stop, move" do_test inc3 "Inter-incarnation ordering, silent restart, stop, move" do_test inc4 "Inter-incarnation ordering, silent restart, stop, move (ordered)" do_test inc5 "Inter-incarnation ordering, silent restart, stop, move (restart 1)" do_test inc6 "Inter-incarnation ordering, silent restart, stop, move (restart 2)" do_test inc7 "Clone colocation" do_test inc8 "Clone anti-colocation" do_test inc9 "Non-unique clone" do_test inc10 "Non-unique clone (stop)" do_test inc11 "Primitive colocation with clones" do_test inc12 "Clone shutdown" do_test cloned-group "Make sure only the correct number of cloned groups are started" do_test cloned-group-stop "Ensure stopping qpidd also stops glance and cinder" do_test clone-no-shuffle "Dont prioritize allocation of instances that must be moved" do_test clone-max-zero "Orphan processing with clone-max=0" do_test clone-anon-dup "Bug LF#2087 - Correctly parse the state of anonymous clones that are active more than once per node" do_test bug-lf-2160 "Dont shuffle clones due to colocation" do_test bug-lf-2213 "clone-node-max enforcement for cloned groups" do_test bug-lf-2153 "Clone ordering constraints" do_test bug-lf-2361 "Ensure clones observe mandatory ordering constraints if the LHS is unrunnable" do_test bug-lf-2317 "Avoid needless restart of primitive depending on a clone" do_test clone-colocate-instance-1 "Colocation with a specific clone instance (negative example)" do_test clone-colocate-instance-2 "Colocation with a specific clone instance" do_test clone-order-instance "Ordering with specific clone instances" do_test bug-lf-2453 "Enforce mandatory clone ordering without colocation" do_test bug-lf-2508 "Correctly reconstruct the status of anonymous cloned groups" do_test bug-lf-2544 "Balanced clone placement" do_test bug-lf-2445 "Redistribute clones with node-max > 1 and stickiness = 0" do_test bug-lf-2574 "Avoid clone shuffle" do_test bug-lf-2581 "Avoid group restart due to unrelated clone (re)start" do_test bug-cl-5168 "Don't shuffle clones" do_test bug-cl-5170 "Prevent clone from starting with on-fail=block" do_test clone-fail-block-colocation "Move colocated group when failed clone has on-fail=block" do_test clone-interleave-1 "Clone-3 cannot start on pcmk-1 due to interleaved ordering (no colocation)" do_test clone-interleave-2 "Clone-3 must stop on pcmk-1 due to interleaved ordering (no colocation)" do_test clone-interleave-3 "Clone-3 must be 
recovered on pcmk-1 due to interleaved ordering (no colocation)" echo "" do_test unfence-startup "Clean unfencing" do_test unfence-definition "Unfencing when the agent changes" do_test unfence-parameters "Unfencing when the agent parameters change" echo "" do_test master-0 "Stopped -> Slave" do_test master-1 "Stopped -> Promote" do_test master-2 "Stopped -> Promote : notify" do_test master-3 "Stopped -> Promote : master location" do_test master-4 "Started -> Promote : master location" do_test master-5 "Promoted -> Promoted" do_test master-6 "Promoted -> Promoted (2)" do_test master-7 "Promoted -> Fenced" do_test master-8 "Promoted -> Fenced -> Moved" do_test master-9 "Stopped + Promotable + No quorum" do_test master-10 "Stopped -> Promotable : notify with monitor" do_test master-11 "Stopped -> Promote : colocation" do_test novell-239082 "Demote/Promote ordering" do_test novell-239087 "Stable master placement" do_test master-12 "Promotion based solely on rsc_location constraints" do_test master-13 "Include preferences of colocated resources when placing master" do_test master-demote "Ordering when actions depend on demoting a slave resource" do_test master-ordering "Prevent resources from starting that need a master" do_test bug-1765 "Master-Master Colocation (don't stop the slaves)" do_test master-group "Promotion of cloned groups" do_test bug-lf-1852 "Don't shuffle master/slave instances unnecessarily" do_test master-failed-demote "Don't retry failed demote actions" do_test master-failed-demote-2 "Don't retry failed demote actions (notify=false)" do_test master-depend "Ensure resources that depend on the master don't get allocated until the master does" do_test master-reattach "Re-attach to a running master" do_test master-allow-start "Don't include master score if it would prevent allocation" do_test master-colocation "Allow master instances placement to be influenced by colocation constraints" do_test master-pseudo "Make sure promote/demote pseudo actions are created correctly" do_test master-role "Prevent target-role from promoting more than master-max instances" do_test bug-lf-2358 "Master-Master anti-colocation" do_test master-promotion-constraint "Mandatory master colocation constraints" do_test unmanaged-master "Ensure role is preserved for unmanaged resources" do_test master-unmanaged-monitor "Start the correct monitor operation for unmanaged masters" do_test master-demote-2 "Demote does not clear past failure" do_test master-move "Move master based on failure of colocated group" do_test master-probed-score "Observe the promotion score of probed resources" do_test colocation_constraint_stops_master "cl#5054 - Ensure master is demoted when stopped by colocation constraint" do_test colocation_constraint_stops_slave "cl#5054 - Ensure slave is not demoted when stopped by colocation constraint" do_test order_constraint_stops_master "cl#5054 - Ensure master is demoted when stopped by order constraint" do_test order_constraint_stops_slave "cl#5054 - Ensure slave is not demoted when stopped by order constraint" do_test master_monitor_restart "cl#5072 - Ensure master monitor operation will start after promotion."
do_test bug-rh-880249 "Handle replacement of an m/s resource with a primitive" do_test bug-5143-ms-shuffle "Prevent master shuffling due to promotion score" do_test master-demote-block "Block promotion if demote fails with on-fail=block" do_test master-dependant-ban "Don't stop instances from being active because a dependant is banned from that host" do_test master-stop "Stop instances due to location constraint with role=Started" do_test master-partially-demoted-group "Allow partially demoted group to finish demoting" do_test bug-cl-5213 "Ensure role colocation with -INFINITY is enforced" do_test bug-cl-5219 "Allow unrelated resources with a common colocation target to remain promoted" do_test master-asymmetrical-order "Fix the behaviors of multi-state resources with asymmetrical ordering" echo "" do_test history-1 "Correctly parse stateful-1 resource state" echo "" do_test managed-0 "Managed (reference)" do_test managed-1 "Not managed - down " do_test managed-2 "Not managed - up " do_test bug-5028 "Shutdown should block if anything depends on an unmanaged resource" do_test bug-5028-detach "Ensure detach still works" do_test bug-5028-bottom "Ensure shutdown still blocks if the blocked resource is at the bottom of the stack" do_test unmanaged-stop-1 "cl#5155 - Block the stop of resources if any depending resource is unmanaged " do_test unmanaged-stop-2 "cl#5155 - Block the stop of resources if the first resource in a mandatory stop order is unmanaged " do_test unmanaged-stop-3 "cl#5155 - Block the stop of resources if any depending resource in a group is unmanaged " do_test unmanaged-stop-4 "cl#5155 - Block the stop of resources if any depending resource in the middle of a group is unmanaged " do_test unmanaged-block-restart "Block restart of resources if any dependent resource in a group is unmanaged" echo "" do_test interleave-0 "Interleave (reference)" do_test interleave-1 "coloc - not interleaved" do_test interleave-2 "coloc - interleaved " do_test interleave-3 "coloc - interleaved (2)" do_test interleave-pseudo-stop "Interleaved clone during stonith" do_test interleave-stop "Interleaved clone during stop" do_test interleave-restart "Interleaved clone during dependancy restart" echo "" do_test notify-0 "Notify reference" do_test notify-1 "Notify simple" do_test notify-2 "Notify simple, confirm" do_test notify-3 "Notify move, confirm" do_test novell-239079 "Notification priority" #do_test notify-2 "Notify - 764" echo "" do_test 594 "OSDL #594 - Unrunnable actions scheduled in transition" do_test 662 "OSDL #662 - Two resources start on one node when incarnation_node_max = 1" do_test 696 "OSDL #696 - CRM starts stonith RA without monitor" do_test 726 "OSDL #726 - Attempting to schedule rsc_posic041_monitor_5000 _after_ a stop" do_test 735 "OSDL #735 - Correctly detect that rsc_hadev1 is stopped on hadev3" do_test 764 "OSDL #764 - Missing monitor op for DoFencing:child_DoFencing:1" do_test 797 "OSDL #797 - Assert triggered: task_id_i > max_call_id" do_test 829 "OSDL #829" do_test 994 "OSDL #994 - Stopping the last resource in a resource group causes the entire group to be restarted" do_test 994-2 "OSDL #994 - with a dependant resource" do_test 1360 "OSDL #1360 - Clone stickiness" do_test 1484 "OSDL #1484 - on_fail=stop" do_test 1494 "OSDL #1494 - Clone stability" do_test unrunnable-1 "Unrunnable" do_test stonith-0 "Stonith loop - 1" do_test stonith-1 "Stonith loop - 2" do_test stonith-2 "Stonith loop - 3" do_test stonith-3 "Stonith startup" do_test stonith-4 "Stonith node state" --rc 4 
do_test bug-1572-1 "Recovery of groups depending on master/slave" do_test bug-1572-2 "Recovery of groups depending on master/slave when the master is never re-promoted" do_test bug-1685 "Depends-on-master ordering" do_test bug-1822 "Dont promote partially active groups" do_test bug-pm-11 "New resource added to a m/s group" do_test bug-pm-12 "Recover only the failed portion of a cloned group" do_test bug-n-387749 "Don't shuffle clone instances" do_test bug-n-385265 "Don't ignore the failure stickiness of group children - resource_idvscommon should stay stopped" do_test bug-n-385265-2 "Ensure groups are migrated instead of remaining partially active on the current node" do_test bug-lf-1920 "Correctly handle probes that find active resources" do_test bnc-515172 "Location constraint with multiple expressions" do_test colocate-primitive-with-clone "Optional colocation with a clone" do_test use-after-free-merge "Use-after-free in native_merge_weights" do_test bug-lf-2551 "STONITH ordering for stop" do_test bug-lf-2606 "Stonith implies demote" do_test bug-lf-2474 "Ensure resource op timeout takes precedence over op_defaults" do_test bug-suse-707150 "Prevent vm-01 from starting due to colocation/ordering" do_test bug-5014-A-start-B-start "Verify when A starts B starts using symmetrical=false" do_test bug-5014-A-stop-B-started "Verify when A stops B does not stop if it has already started using symmetric=false" do_test bug-5014-A-stopped-B-stopped "Verify when A is stopped and B has not started, B does not start before A using symmetric=false" do_test bug-5014-CthenAthenB-C-stopped "Verify when C then A is symmetrical=true, A then B is symmetric=false, and C is stopped that nothing starts." do_test bug-5014-CLONE-A-start-B-start "Verify when A starts B starts using clone resources with symmetric=false" do_test bug-5014-CLONE-A-stop-B-started "Verify when A stops B does not stop if it has already started using clone resources with symmetric=false." do_test bug-5014-GROUP-A-start-B-start "Verify when A starts B starts when using group resources with symmetric=false." do_test bug-5014-GROUP-A-stopped-B-started "Verify when A stops B does not stop if it has already started using group resources with symmetric=false." do_test bug-5014-GROUP-A-stopped-B-stopped "Verify when A is stopped and B has not started, B does not start before A using group resources with symmetric=false." do_test bug-5014-ordered-set-symmetrical-false "Verify ordered sets work with symmetrical=false" do_test bug-5014-ordered-set-symmetrical-true "Verify ordered sets work with symmetrical=true" do_test bug-5007-masterslave_colocation "Verify use of colocation scores other than INFINITY and -INFINITY work on multi-state resources." do_test bug-5038 "Prevent restart of anonymous clones when clone-max decreases" do_test bug-5025-1 "Automatically clean up failcount after resource config change with reload" do_test bug-5025-2 "Make sure clear failcount action isn't set when config does not change." do_test bug-5025-3 "Automatically clean up failcount after resource config change with restart" do_test bug-5025-4 "Clear failcount when last failure is a start op and rsc attributes changed." 
do_test failcount "Ensure failcounts are correctly expired" do_test failcount-block "Ensure failcounts are not expired when on-fail=block is present" do_test monitor-onfail-restart "bug-5058 - Monitor failure with on-fail set to restart" do_test monitor-onfail-stop "bug-5058 - Monitor failure with on-fail set to stop" do_test bug-5059 "No need to restart p_stateful1:*" do_test bug-5069-op-enabled "Test on-fail=ignore with failure when monitor is enabled." do_test bug-5069-op-disabled "Test on-fail=ignore with failure when monitor is disabled." do_test obsolete-lrm-resource "cl#5115 - Do not use obsolete lrm_resource sections" do_test expire-non-blocked-failure "Ignore failure-timeout only if the failed operation has on-fail=block" do_test ignore_stonith_rsc_order1 "cl#5056- Ignore order constraint between stonith and non-stonith rsc." do_test ignore_stonith_rsc_order2 "cl#5056- Ignore order constraint with group rsc containing mixed stonith and non-stonith." do_test ignore_stonith_rsc_order3 "cl#5056- Ignore order constraint, stonith clone and mixed group" do_test ignore_stonith_rsc_order4 "cl#5056- Ignore order constraint, stonith clone and clone with nested mixed group" do_test honor_stonith_rsc_order1 "cl#5056- Honor order constraint, stonith clone and pure stonith group(single rsc)." do_test honor_stonith_rsc_order2 "cl#5056- Honor order constraint, stonith clone and pure stonith group(multiple rsc)" do_test honor_stonith_rsc_order3 "cl#5056- Honor order constraint, stonith clones with nested pure stonith group." do_test honor_stonith_rsc_order4 "cl#5056- Honor order constraint, between two native stonith rscs." do_test probe-timeout "cl#5099 - Default probe timeout" echo "" do_test systemhealth1 "System Health () #1" do_test systemhealth2 "System Health () #2" do_test systemhealth3 "System Health () #3" do_test systemhealthn1 "System Health (None) #1" do_test systemhealthn2 "System Health (None) #2" do_test systemhealthn3 "System Health (None) #3" do_test systemhealthm1 "System Health (Migrate On Red) #1" do_test systemhealthm2 "System Health (Migrate On Red) #2" do_test systemhealthm3 "System Health (Migrate On Red) #3" do_test systemhealtho1 "System Health (Only Green) #1" do_test systemhealtho2 "System Health (Only Green) #2" do_test systemhealtho3 "System Health (Only Green) #3" do_test systemhealthp1 "System Health (Progressive) #1" do_test systemhealthp2 "System Health (Progressive) #2" do_test systemhealthp3 "System Health (Progressive) #3" echo "" do_test utilization "Placement Strategy - utilization" do_test minimal "Placement Strategy - minimal" do_test balanced "Placement Strategy - balanced" echo "" do_test placement-stickiness "Optimized Placement Strategy - stickiness" do_test placement-priority "Optimized Placement Strategy - priority" do_test placement-location "Optimized Placement Strategy - location" do_test placement-capacity "Optimized Placement Strategy - capacity" echo "" do_test utilization-order1 "Utilization Order - Simple" do_test utilization-order2 "Utilization Order - Complex" do_test utilization-order3 "Utilization Order - Migrate" do_test utilization-order4 "Utilization Order - Live Migration (bnc#695440)" do_test utilization-shuffle "Don't displace prmExPostgreSQLDB2 on act2, Start prmExPostgreSQLDB1 on act3" do_test load-stopped-loop "Avoid transition loop due to load_stopped (cl#5044)" echo "" do_test reprobe-target_rc "Ensure correct target_rc for reprobe of inactive resources" do_test node-maintenance-1 "cl#5128 - Node maintenance" do_test
node-maintenance-2 "cl#5128 - Node maintenance (coming out of maintenance mode)" do_test rsc-maintenance "Per-resource maintenance" echo "" do_test not-installed-agent "The resource agent is missing" do_test not-installed-tools "Something the resource agent needs is missing" echo "" do_test stopped-monitor-00 "Stopped Monitor - initial start" do_test stopped-monitor-01 "Stopped Monitor - failed started" do_test stopped-monitor-02 "Stopped Monitor - started multi-up" do_test stopped-monitor-03 "Stopped Monitor - stop started" do_test stopped-monitor-04 "Stopped Monitor - failed stop" do_test stopped-monitor-05 "Stopped Monitor - start unmanaged" do_test stopped-monitor-06 "Stopped Monitor - unmanaged multi-up" do_test stopped-monitor-07 "Stopped Monitor - start unmanaged multi-up" do_test stopped-monitor-08 "Stopped Monitor - migrate" do_test stopped-monitor-09 "Stopped Monitor - unmanage started" do_test stopped-monitor-10 "Stopped Monitor - unmanaged started multi-up" do_test stopped-monitor-11 "Stopped Monitor - stop unmanaged started" do_test stopped-monitor-12 "Stopped Monitor - unmanaged started multi-up (targer-role="Stopped")" do_test stopped-monitor-20 "Stopped Monitor - initial stop" do_test stopped-monitor-21 "Stopped Monitor - stopped single-up" do_test stopped-monitor-22 "Stopped Monitor - stopped multi-up" do_test stopped-monitor-23 "Stopped Monitor - start stopped" do_test stopped-monitor-24 "Stopped Monitor - unmanage stopped" do_test stopped-monitor-25 "Stopped Monitor - unmanaged stopped multi-up" do_test stopped-monitor-26 "Stopped Monitor - start unmanaged stopped" do_test stopped-monitor-27 "Stopped Monitor - unmanaged stopped multi-up (target-role="Started")" do_test stopped-monitor-30 "Stopped Monitor - new node started" do_test stopped-monitor-31 "Stopped Monitor - new node stopped" echo"" do_test ticket-primitive-1 "Ticket - Primitive (loss-policy=stop, initial)" do_test ticket-primitive-2 "Ticket - Primitive (loss-policy=stop, granted)" do_test ticket-primitive-3 "Ticket - Primitive (loss-policy-stop, revoked)" do_test ticket-primitive-4 "Ticket - Primitive (loss-policy=demote, initial)" do_test ticket-primitive-5 "Ticket - Primitive (loss-policy=demote, granted)" do_test ticket-primitive-6 "Ticket - Primitive (loss-policy=demote, revoked)" do_test ticket-primitive-7 "Ticket - Primitive (loss-policy=fence, initial)" do_test ticket-primitive-8 "Ticket - Primitive (loss-policy=fence, granted)" do_test ticket-primitive-9 "Ticket - Primitive (loss-policy=fence, revoked)" do_test ticket-primitive-10 "Ticket - Primitive (loss-policy=freeze, initial)" do_test ticket-primitive-11 "Ticket - Primitive (loss-policy=freeze, granted)" do_test ticket-primitive-12 "Ticket - Primitive (loss-policy=freeze, revoked)" do_test ticket-primitive-13 "Ticket - Primitive (loss-policy=stop, standby, granted)" do_test ticket-primitive-14 "Ticket - Primitive (loss-policy=stop, granted, standby)" do_test ticket-primitive-15 "Ticket - Primitive (loss-policy=stop, standby, revoked)" do_test ticket-primitive-16 "Ticket - Primitive (loss-policy=demote, standby, granted)" do_test ticket-primitive-17 "Ticket - Primitive (loss-policy=demote, granted, standby)" do_test ticket-primitive-18 "Ticket - Primitive (loss-policy=demote, standby, revoked)" do_test ticket-primitive-19 "Ticket - Primitive (loss-policy=fence, standby, granted)" do_test ticket-primitive-20 "Ticket - Primitive (loss-policy=fence, granted, standby)" do_test ticket-primitive-21 "Ticket - Primitive (loss-policy=fence, standby, 
revoked)" do_test ticket-primitive-22 "Ticket - Primitive (loss-policy=freeze, standby, granted)" do_test ticket-primitive-23 "Ticket - Primitive (loss-policy=freeze, granted, standby)" do_test ticket-primitive-24 "Ticket - Primitive (loss-policy=freeze, standby, revoked)" echo"" do_test ticket-group-1 "Ticket - Group (loss-policy=stop, initial)" do_test ticket-group-2 "Ticket - Group (loss-policy=stop, granted)" do_test ticket-group-3 "Ticket - Group (loss-policy-stop, revoked)" do_test ticket-group-4 "Ticket - Group (loss-policy=demote, initial)" do_test ticket-group-5 "Ticket - Group (loss-policy=demote, granted)" do_test ticket-group-6 "Ticket - Group (loss-policy=demote, revoked)" do_test ticket-group-7 "Ticket - Group (loss-policy=fence, initial)" do_test ticket-group-8 "Ticket - Group (loss-policy=fence, granted)" do_test ticket-group-9 "Ticket - Group (loss-policy=fence, revoked)" do_test ticket-group-10 "Ticket - Group (loss-policy=freeze, initial)" do_test ticket-group-11 "Ticket - Group (loss-policy=freeze, granted)" do_test ticket-group-12 "Ticket - Group (loss-policy=freeze, revoked)" do_test ticket-group-13 "Ticket - Group (loss-policy=stop, standby, granted)" do_test ticket-group-14 "Ticket - Group (loss-policy=stop, granted, standby)" do_test ticket-group-15 "Ticket - Group (loss-policy=stop, standby, revoked)" do_test ticket-group-16 "Ticket - Group (loss-policy=demote, standby, granted)" do_test ticket-group-17 "Ticket - Group (loss-policy=demote, granted, standby)" do_test ticket-group-18 "Ticket - Group (loss-policy=demote, standby, revoked)" do_test ticket-group-19 "Ticket - Group (loss-policy=fence, standby, granted)" do_test ticket-group-20 "Ticket - Group (loss-policy=fence, granted, standby)" do_test ticket-group-21 "Ticket - Group (loss-policy=fence, standby, revoked)" do_test ticket-group-22 "Ticket - Group (loss-policy=freeze, standby, granted)" do_test ticket-group-23 "Ticket - Group (loss-policy=freeze, granted, standby)" do_test ticket-group-24 "Ticket - Group (loss-policy=freeze, standby, revoked)" echo"" do_test ticket-clone-1 "Ticket - Clone (loss-policy=stop, initial)" do_test ticket-clone-2 "Ticket - Clone (loss-policy=stop, granted)" do_test ticket-clone-3 "Ticket - Clone (loss-policy-stop, revoked)" do_test ticket-clone-4 "Ticket - Clone (loss-policy=demote, initial)" do_test ticket-clone-5 "Ticket - Clone (loss-policy=demote, granted)" do_test ticket-clone-6 "Ticket - Clone (loss-policy=demote, revoked)" do_test ticket-clone-7 "Ticket - Clone (loss-policy=fence, initial)" do_test ticket-clone-8 "Ticket - Clone (loss-policy=fence, granted)" do_test ticket-clone-9 "Ticket - Clone (loss-policy=fence, revoked)" do_test ticket-clone-10 "Ticket - Clone (loss-policy=freeze, initial)" do_test ticket-clone-11 "Ticket - Clone (loss-policy=freeze, granted)" do_test ticket-clone-12 "Ticket - Clone (loss-policy=freeze, revoked)" do_test ticket-clone-13 "Ticket - Clone (loss-policy=stop, standby, granted)" do_test ticket-clone-14 "Ticket - Clone (loss-policy=stop, granted, standby)" do_test ticket-clone-15 "Ticket - Clone (loss-policy=stop, standby, revoked)" do_test ticket-clone-16 "Ticket - Clone (loss-policy=demote, standby, granted)" do_test ticket-clone-17 "Ticket - Clone (loss-policy=demote, granted, standby)" do_test ticket-clone-18 "Ticket - Clone (loss-policy=demote, standby, revoked)" do_test ticket-clone-19 "Ticket - Clone (loss-policy=fence, standby, granted)" do_test ticket-clone-20 "Ticket - Clone (loss-policy=fence, granted, standby)" do_test 
ticket-clone-21 "Ticket - Clone (loss-policy=fence, standby, revoked)" do_test ticket-clone-22 "Ticket - Clone (loss-policy=freeze, standby, granted)" do_test ticket-clone-23 "Ticket - Clone (loss-policy=freeze, granted, standby)" do_test ticket-clone-24 "Ticket - Clone (loss-policy=freeze, standby, revoked)" echo"" do_test ticket-master-1 "Ticket - Master (loss-policy=stop, initial)" do_test ticket-master-2 "Ticket - Master (loss-policy=stop, granted)" do_test ticket-master-3 "Ticket - Master (loss-policy-stop, revoked)" do_test ticket-master-4 "Ticket - Master (loss-policy=demote, initial)" do_test ticket-master-5 "Ticket - Master (loss-policy=demote, granted)" do_test ticket-master-6 "Ticket - Master (loss-policy=demote, revoked)" do_test ticket-master-7 "Ticket - Master (loss-policy=fence, initial)" do_test ticket-master-8 "Ticket - Master (loss-policy=fence, granted)" do_test ticket-master-9 "Ticket - Master (loss-policy=fence, revoked)" do_test ticket-master-10 "Ticket - Master (loss-policy=freeze, initial)" do_test ticket-master-11 "Ticket - Master (loss-policy=freeze, granted)" do_test ticket-master-12 "Ticket - Master (loss-policy=freeze, revoked)" do_test ticket-master-13 "Ticket - Master (loss-policy=stop, standby, granted)" do_test ticket-master-14 "Ticket - Master (loss-policy=stop, granted, standby)" do_test ticket-master-15 "Ticket - Master (loss-policy=stop, standby, revoked)" do_test ticket-master-16 "Ticket - Master (loss-policy=demote, standby, granted)" do_test ticket-master-17 "Ticket - Master (loss-policy=demote, granted, standby)" do_test ticket-master-18 "Ticket - Master (loss-policy=demote, standby, revoked)" do_test ticket-master-19 "Ticket - Master (loss-policy=fence, standby, granted)" do_test ticket-master-20 "Ticket - Master (loss-policy=fence, granted, standby)" do_test ticket-master-21 "Ticket - Master (loss-policy=fence, standby, revoked)" do_test ticket-master-22 "Ticket - Master (loss-policy=freeze, standby, granted)" do_test ticket-master-23 "Ticket - Master (loss-policy=freeze, granted, standby)" do_test ticket-master-24 "Ticket - Master (loss-policy=freeze, standby, revoked)" echo "" do_test ticket-rsc-sets-1 "Ticket - Resource sets (1 ticket, initial)" do_test ticket-rsc-sets-2 "Ticket - Resource sets (1 ticket, granted)" do_test ticket-rsc-sets-3 "Ticket - Resource sets (1 ticket, revoked)" do_test ticket-rsc-sets-4 "Ticket - Resource sets (2 tickets, initial)" do_test ticket-rsc-sets-5 "Ticket - Resource sets (2 tickets, granted)" do_test ticket-rsc-sets-6 "Ticket - Resource sets (2 tickets, granted)" do_test ticket-rsc-sets-7 "Ticket - Resource sets (2 tickets, revoked)" do_test ticket-rsc-sets-8 "Ticket - Resource sets (1 ticket, standby, granted)" do_test ticket-rsc-sets-9 "Ticket - Resource sets (1 ticket, granted, standby)" do_test ticket-rsc-sets-10 "Ticket - Resource sets (1 ticket, standby, revoked)" do_test ticket-rsc-sets-11 "Ticket - Resource sets (2 tickets, standby, granted)" do_test ticket-rsc-sets-12 "Ticket - Resource sets (2 tickets, standby, granted)" do_test ticket-rsc-sets-13 "Ticket - Resource sets (2 tickets, granted, standby)" do_test ticket-rsc-sets-14 "Ticket - Resource sets (2 tickets, standby, revoked)" do_test cluster-specific-params "Cluster-specific instance attributes based on rules" do_test site-specific-params "Site-specific instance attributes based on rules" echo "" do_test template-1 "Template - 1" do_test template-2 "Template - 2" do_test template-3 "Template - 3 (merge operations)" do_test template-coloc-1 
"Template - Colocation 1" do_test template-coloc-2 "Template - Colocation 2" do_test template-coloc-3 "Template - Colocation 3" do_test template-order-1 "Template - Order 1" do_test template-order-2 "Template - Order 2" do_test template-order-3 "Template - Order 3" do_test template-ticket "Template - Ticket" do_test template-rsc-sets-1 "Template - Resource Sets 1" do_test template-rsc-sets-2 "Template - Resource Sets 2" do_test template-rsc-sets-3 "Template - Resource Sets 3" do_test template-rsc-sets-4 "Template - Resource Sets 4" do_test template-clone-primitive "Cloned primitive from template" do_test template-clone-group "Cloned group from template" do_test location-sets-templates "Resource sets and templates - Location" do_test tags-coloc-order-1 "Tags - Colocation and Order (Simple)" do_test tags-coloc-order-2 "Tags - Colocation and Order (Resource Sets with Templates)" do_test tags-location "Tags - Location" do_test tags-ticket "Tags - Ticket" echo "" do_test container-1 "Container - initial" do_test container-2 "Container - monitor failed" do_test container-3 "Container - stop failed" do_test container-4 "Container - reached migration-threshold" do_test container-group-1 "Container in group - initial" do_test container-group-2 "Container in group - monitor failed" do_test container-group-3 "Container in group - stop failed" do_test container-group-4 "Container in group - reached migration-threshold" do_test container-is-remote-node "Place resource within container when container is remote-node" do_test bug-rh-1097457 "Kill user defined container/contents ordering" echo "" do_test whitebox-fail1 "Fail whitebox container rsc." do_test whitebox-fail2 "Fail whitebox container rsc lrmd connection." do_test whitebox-fail3 "Failed containers should not run nested on remote nodes." do_test whitebox-start "Start whitebox container with resources assigned to it" do_test whitebox-stop "Stop whitebox container with resources assigned to it" do_test whitebox-move "Move whitebox container with resources assigned to it" do_test whitebox-asymmetric "Verify connection rsc opts-in based on container resource" do_test whitebox-ms-ordering "Verify promote/demote can not occur before connection is established" do_test whitebox-orphaned "Properly shutdown orphaned whitebox container" do_test whitebox-orphan-ms "Properly tear down orphan ms resources on remote-nodes" do_test whitebox-unexpectedly-running "Recover container nodes the cluster did not start." do_test whitebox-migrate1 "Migrate both container and connection resource" +do_test whitebox-imply-stop-on-fence "imply stop action on container node rsc when host node is fenced" echo "" do_test remote-startup-probes "Baremetal remote-node startup probes" do_test remote-startup "Startup a newly discovered remote-nodes with no status." do_test remote-fence-unclean "Fence unclean baremetal remote-node" do_test remote-fence-unclean2 "Fence baremetal remote-node after cluster node fails and connection can not be recovered" do_test remote-move "Move remote-node connection resource" do_test remote-disable "Disable a baremetal remote-node" do_test remote-orphaned "Properly shutdown orphaned connection resource" do_test remote-recover "Recover connection resource after cluster-node fails." do_test remote-stale-node-entry "Make sure we properly handle leftover remote-node entries in the node section" do_test remote-partial-migrate "Make sure partial migrations are handled before ops on the remote node." 
do_test remote-partial-migrate2 "Make sure partial migration target is prefered for remote connection." do_test remote-recover-fail "Make sure start failure causes fencing if rsc are active on remote." do_test remote-start-fail "Make sure a start failure does not result in fencing if no active resources are on remote." do_test remote-unclean2 "Make monitor failure always results in fencing, even if no rsc are active on remote." echo "" do_test resource-discovery "Exercises resource-discovery location constraint option." do_test rsc-discovery-per-node "Disable resource discovery per node" echo "" test_results diff --git a/pengine/test10/clone-require-all-1.dot b/pengine/test10/clone-require-all-1.dot new file mode 100644 index 0000000000..9856969d7f --- /dev/null +++ b/pengine/test10/clone-require-all-1.dot @@ -0,0 +1,15 @@ + digraph "g" { +"B-clone_running_0" [ style=bold color="green" fontcolor="orange"] +"B-clone_start_0" -> "B-clone_running_0" [ style = bold] +"B-clone_start_0" -> "B:1_start_0 rhel7-auto4" [ style = bold] +"B-clone_start_0" -> "B_start_0 rhel7-auto3" [ style = bold] +"B-clone_start_0" [ style=bold color="green" fontcolor="orange"] +"B:1_monitor_10000 rhel7-auto4" [ style=bold color="green" fontcolor="black"] +"B:1_start_0 rhel7-auto4" -> "B-clone_running_0" [ style = bold] +"B:1_start_0 rhel7-auto4" -> "B:1_monitor_10000 rhel7-auto4" [ style = bold] +"B:1_start_0 rhel7-auto4" [ style=bold color="green" fontcolor="black"] +"B_monitor_10000 rhel7-auto3" [ style=bold color="green" fontcolor="black"] +"B_start_0 rhel7-auto3" -> "B-clone_running_0" [ style = bold] +"B_start_0 rhel7-auto3" -> "B_monitor_10000 rhel7-auto3" [ style = bold] +"B_start_0 rhel7-auto3" [ style=bold color="green" fontcolor="black"] +} diff --git a/pengine/test10/clone-require-all-1.exp b/pengine/test10/clone-require-all-1.exp new file mode 100644 index 0000000000..c2d1abde74 --- /dev/null +++ b/pengine/test10/clone-require-all-1.exp @@ -0,0 +1,80 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/pengine/test10/clone-require-all-1.scores b/pengine/test10/clone-require-all-1.scores new file mode 100644 index 0000000000..fe3ce21e89 --- /dev/null +++ b/pengine/test10/clone-require-all-1.scores @@ -0,0 +1,77 @@ +Allocation scores: +clone_color: A-clone allocation score on rhel7-auto1: 0 +clone_color: A-clone allocation score on rhel7-auto2: 0 +clone_color: A-clone allocation score on rhel7-auto3: -INFINITY +clone_color: A-clone allocation score on rhel7-auto4: -INFINITY +clone_color: A:0 allocation score on rhel7-auto1: 1 +clone_color: A:0 allocation score on rhel7-auto2: 0 +clone_color: A:0 allocation score on rhel7-auto3: -INFINITY +clone_color: A:0 allocation score on rhel7-auto4: -INFINITY +clone_color: A:1 allocation score on rhel7-auto1: 0 +clone_color: A:1 allocation score on rhel7-auto2: 1 +clone_color: A:1 allocation score on rhel7-auto3: -INFINITY +clone_color: A:1 allocation score on rhel7-auto4: -INFINITY +clone_color: A:2 allocation score on rhel7-auto1: 0 +clone_color: A:2 allocation score on rhel7-auto2: 0 +clone_color: A:2 allocation score on rhel7-auto3: -INFINITY +clone_color: A:2 allocation score on rhel7-auto4: -INFINITY +clone_color: A:3 allocation score on rhel7-auto1: 0 +clone_color: A:3 allocation score on rhel7-auto2: 0 +clone_color: A:3 allocation score on rhel7-auto3: -INFINITY +clone_color: A:3 allocation score on rhel7-auto4: -INFINITY +clone_color: 
B-clone allocation score on rhel7-auto1: -INFINITY +clone_color: B-clone allocation score on rhel7-auto2: -INFINITY +clone_color: B-clone allocation score on rhel7-auto3: 0 +clone_color: B-clone allocation score on rhel7-auto4: 0 +clone_color: B:0 allocation score on rhel7-auto1: -INFINITY +clone_color: B:0 allocation score on rhel7-auto2: -INFINITY +clone_color: B:0 allocation score on rhel7-auto3: 0 +clone_color: B:0 allocation score on rhel7-auto4: 0 +clone_color: B:1 allocation score on rhel7-auto1: -INFINITY +clone_color: B:1 allocation score on rhel7-auto2: -INFINITY +clone_color: B:1 allocation score on rhel7-auto3: 0 +clone_color: B:1 allocation score on rhel7-auto4: 0 +clone_color: B:2 allocation score on rhel7-auto1: -INFINITY +clone_color: B:2 allocation score on rhel7-auto2: -INFINITY +clone_color: B:2 allocation score on rhel7-auto3: 0 +clone_color: B:2 allocation score on rhel7-auto4: 0 +clone_color: B:3 allocation score on rhel7-auto1: -INFINITY +clone_color: B:3 allocation score on rhel7-auto2: -INFINITY +clone_color: B:3 allocation score on rhel7-auto3: 0 +clone_color: B:3 allocation score on rhel7-auto4: 0 +native_color: A:0 allocation score on rhel7-auto1: 1 +native_color: A:0 allocation score on rhel7-auto2: -INFINITY +native_color: A:0 allocation score on rhel7-auto3: -INFINITY +native_color: A:0 allocation score on rhel7-auto4: -INFINITY +native_color: A:1 allocation score on rhel7-auto1: 0 +native_color: A:1 allocation score on rhel7-auto2: 1 +native_color: A:1 allocation score on rhel7-auto3: -INFINITY +native_color: A:1 allocation score on rhel7-auto4: -INFINITY +native_color: A:2 allocation score on rhel7-auto1: -INFINITY +native_color: A:2 allocation score on rhel7-auto2: -INFINITY +native_color: A:2 allocation score on rhel7-auto3: -INFINITY +native_color: A:2 allocation score on rhel7-auto4: -INFINITY +native_color: A:3 allocation score on rhel7-auto1: -INFINITY +native_color: A:3 allocation score on rhel7-auto2: -INFINITY +native_color: A:3 allocation score on rhel7-auto3: -INFINITY +native_color: A:3 allocation score on rhel7-auto4: -INFINITY +native_color: B:0 allocation score on rhel7-auto1: -INFINITY +native_color: B:0 allocation score on rhel7-auto2: -INFINITY +native_color: B:0 allocation score on rhel7-auto3: 0 +native_color: B:0 allocation score on rhel7-auto4: 0 +native_color: B:1 allocation score on rhel7-auto1: -INFINITY +native_color: B:1 allocation score on rhel7-auto2: -INFINITY +native_color: B:1 allocation score on rhel7-auto3: -INFINITY +native_color: B:1 allocation score on rhel7-auto4: 0 +native_color: B:2 allocation score on rhel7-auto1: -INFINITY +native_color: B:2 allocation score on rhel7-auto2: -INFINITY +native_color: B:2 allocation score on rhel7-auto3: -INFINITY +native_color: B:2 allocation score on rhel7-auto4: -INFINITY +native_color: B:3 allocation score on rhel7-auto1: -INFINITY +native_color: B:3 allocation score on rhel7-auto2: -INFINITY +native_color: B:3 allocation score on rhel7-auto3: -INFINITY +native_color: B:3 allocation score on rhel7-auto4: -INFINITY +native_color: shooter allocation score on rhel7-auto1: 0 +native_color: shooter allocation score on rhel7-auto2: 0 +native_color: shooter allocation score on rhel7-auto3: 0 +native_color: shooter allocation score on rhel7-auto4: 0 diff --git a/pengine/test10/clone-require-all-1.summary b/pengine/test10/clone-require-all-1.summary new file mode 100644 index 0000000000..2cbb97dce3 --- /dev/null +++ b/pengine/test10/clone-require-all-1.summary @@ -0,0 +1,34 @@ + +Current 
cluster status: +Online: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ] + + shooter (stonith:fence_xvm): Started rhel7-auto1 + Clone Set: A-clone [A] + Started: [ rhel7-auto1 rhel7-auto2 ] + Stopped: [ rhel7-auto3 rhel7-auto4 ] + Clone Set: B-clone [B] + Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ] + +Transition Summary: + * Start B:0 (rhel7-auto3) + * Start B:1 (rhel7-auto4) + +Executing cluster transition: + * Pseudo action: B-clone_start_0 + * Resource action: B start on rhel7-auto3 + * Resource action: B start on rhel7-auto4 + * Pseudo action: B-clone_running_0 + * Resource action: B monitor=10000 on rhel7-auto3 + * Resource action: B monitor=10000 on rhel7-auto4 + +Revised cluster status: +Online: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ] + + shooter (stonith:fence_xvm): Started rhel7-auto1 + Clone Set: A-clone [A] + Started: [ rhel7-auto1 rhel7-auto2 ] + Stopped: [ rhel7-auto3 rhel7-auto4 ] + Clone Set: B-clone [B] + Started: [ rhel7-auto3 rhel7-auto4 ] + Stopped: [ rhel7-auto1 rhel7-auto2 ] + diff --git a/pengine/test10/clone-require-all-1.xml b/pengine/test10/clone-require-all-1.xml new file mode 100644 index 0000000000..724fac1e28 --- /dev/null +++ b/pengine/test10/clone-require-all-1.xml @@ -0,0 +1,152 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/pengine/test10/clone-require-all-2.dot b/pengine/test10/clone-require-all-2.dot new file mode 100644 index 0000000000..4f830cec41 --- /dev/null +++ b/pengine/test10/clone-require-all-2.dot @@ -0,0 +1,42 @@ + digraph "g" { +"A-clone_running_0" [ style=dashed color="red" fontcolor="orange"] +"A-clone_start_0" -> "A-clone_running_0" [ style = dashed] +"A-clone_start_0" -> "A_start_0 " [ style = dashed] +"A-clone_start_0" [ style=dashed color="red" fontcolor="orange"] +"A-clone_stop_0" -> "A-clone_stopped_0" [ style = bold] +"A-clone_stop_0" -> "A_stop_0 rhel7-auto1" [ style = bold] +"A-clone_stop_0" -> "A_stop_0 rhel7-auto2" [ style = bold] +"A-clone_stop_0" [ style=bold color="green" fontcolor="orange"] +"A-clone_stopped_0" -> "A-clone_start_0" [ style = dashed] +"A-clone_stopped_0" [ style=bold color="green" fontcolor="orange"] +"A_start_0 " -> "A-clone_running_0" [ style = dashed] +"A_start_0 " [ style=dashed color="red" fontcolor="black"] +"A_stop_0 rhel7-auto1" -> "A-clone_stopped_0" [ style = bold] +"A_stop_0 rhel7-auto1" -> "A_start_0 " [ style = dashed] +"A_stop_0 rhel7-auto1" -> "all_stopped" [ style = bold] +"A_stop_0 rhel7-auto1" [ style=bold color="green" fontcolor="black"] +"A_stop_0 rhel7-auto2" -> "A-clone_stopped_0" [ style = bold] +"A_stop_0 rhel7-auto2" -> "A_start_0 " [ style = dashed] +"A_stop_0 rhel7-auto2" -> "all_stopped" [ style = bold] +"A_stop_0 rhel7-auto2" [ style=bold color="green" fontcolor="black"] +"B-clone_running_0" [ style=dashed color="red" fontcolor="orange"] +"B-clone_start_0" -> "B-clone_running_0" [ style = dashed] +"B-clone_start_0" -> "B:1_start_0 rhel7-auto3" [ style = dashed] +"B-clone_start_0" -> "B_start_0 rhel7-auto4" [ style = dashed] +"B-clone_start_0" [ style=dashed color="red" fontcolor="orange"] +"B:1_monitor_10000 rhel7-auto3" [ style=dashed color="red" fontcolor="black"] +"B:1_start_0 rhel7-auto3" -> "B-clone_running_0" [ style = dashed] +"B:1_start_0 rhel7-auto3" -> "B:1_monitor_10000 
rhel7-auto3" [ style = dashed] +"B:1_start_0 rhel7-auto3" [ style=dashed color="red" fontcolor="black"] +"B_monitor_10000 rhel7-auto4" [ style=dashed color="red" fontcolor="black"] +"B_start_0 rhel7-auto4" -> "B-clone_running_0" [ style = dashed] +"B_start_0 rhel7-auto4" -> "B_monitor_10000 rhel7-auto4" [ style = dashed] +"B_start_0 rhel7-auto4" [ style=dashed color="red" fontcolor="black"] +"all_stopped" [ style=bold color="green" fontcolor="orange"] +"shooter_monitor_60000 rhel7-auto3" [ style=bold color="green" fontcolor="black"] +"shooter_start_0 rhel7-auto3" -> "shooter_monitor_60000 rhel7-auto3" [ style = bold] +"shooter_start_0 rhel7-auto3" [ style=bold color="green" fontcolor="black"] +"shooter_stop_0 rhel7-auto1" -> "all_stopped" [ style = bold] +"shooter_stop_0 rhel7-auto1" -> "shooter_start_0 rhel7-auto3" [ style = bold] +"shooter_stop_0 rhel7-auto1" [ style=bold color="green" fontcolor="black"] +} diff --git a/pengine/test10/clone-require-all-2.exp b/pengine/test10/clone-require-all-2.exp new file mode 100644 index 0000000000..a5ad63feb5 --- /dev/null +++ b/pengine/test10/clone-require-all-2.exp @@ -0,0 +1,107 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/pengine/test10/clone-require-all-2.scores b/pengine/test10/clone-require-all-2.scores new file mode 100644 index 0000000000..cdbf611ff5 --- /dev/null +++ b/pengine/test10/clone-require-all-2.scores @@ -0,0 +1,77 @@ +Allocation scores: +clone_color: A-clone allocation score on rhel7-auto1: 0 +clone_color: A-clone allocation score on rhel7-auto2: 0 +clone_color: A-clone allocation score on rhel7-auto3: -INFINITY +clone_color: A-clone allocation score on rhel7-auto4: -INFINITY +clone_color: A:0 allocation score on rhel7-auto1: 1 +clone_color: A:0 allocation score on rhel7-auto2: 0 +clone_color: A:0 allocation score on rhel7-auto3: -INFINITY +clone_color: A:0 allocation score on rhel7-auto4: -INFINITY +clone_color: A:1 allocation score on rhel7-auto1: 0 +clone_color: A:1 allocation score on rhel7-auto2: 1 +clone_color: A:1 allocation score on rhel7-auto3: -INFINITY +clone_color: A:1 allocation score on rhel7-auto4: -INFINITY +clone_color: A:2 allocation score on rhel7-auto1: 0 +clone_color: A:2 allocation score on rhel7-auto2: 0 +clone_color: A:2 allocation score on rhel7-auto3: -INFINITY +clone_color: A:2 allocation score on rhel7-auto4: -INFINITY +clone_color: A:3 allocation score on rhel7-auto1: 0 +clone_color: A:3 allocation score on rhel7-auto2: 0 +clone_color: A:3 allocation score on rhel7-auto3: -INFINITY +clone_color: A:3 allocation score on rhel7-auto4: -INFINITY +clone_color: B-clone allocation score on rhel7-auto1: -INFINITY +clone_color: B-clone allocation score on rhel7-auto2: -INFINITY +clone_color: B-clone allocation score on rhel7-auto3: 0 +clone_color: B-clone allocation score on rhel7-auto4: 0 +clone_color: B:0 allocation score on rhel7-auto1: -INFINITY +clone_color: B:0 allocation score on rhel7-auto2: -INFINITY +clone_color: B:0 allocation score on rhel7-auto3: 0 +clone_color: B:0 allocation score on rhel7-auto4: 0 +clone_color: B:1 allocation score on rhel7-auto1: -INFINITY +clone_color: B:1 allocation score on rhel7-auto2: -INFINITY +clone_color: B:1 allocation score on rhel7-auto3: 0 +clone_color: B:1 allocation score on rhel7-auto4: 0 +clone_color: B:2 allocation score on rhel7-auto1: -INFINITY +clone_color: B:2 allocation 
score on rhel7-auto2: -INFINITY +clone_color: B:2 allocation score on rhel7-auto3: 0 +clone_color: B:2 allocation score on rhel7-auto4: 0 +clone_color: B:3 allocation score on rhel7-auto1: -INFINITY +clone_color: B:3 allocation score on rhel7-auto2: -INFINITY +clone_color: B:3 allocation score on rhel7-auto3: 0 +clone_color: B:3 allocation score on rhel7-auto4: 0 +native_color: A:0 allocation score on rhel7-auto1: -INFINITY +native_color: A:0 allocation score on rhel7-auto2: -INFINITY +native_color: A:0 allocation score on rhel7-auto3: -INFINITY +native_color: A:0 allocation score on rhel7-auto4: -INFINITY +native_color: A:1 allocation score on rhel7-auto1: -INFINITY +native_color: A:1 allocation score on rhel7-auto2: -INFINITY +native_color: A:1 allocation score on rhel7-auto3: -INFINITY +native_color: A:1 allocation score on rhel7-auto4: -INFINITY +native_color: A:2 allocation score on rhel7-auto1: -INFINITY +native_color: A:2 allocation score on rhel7-auto2: -INFINITY +native_color: A:2 allocation score on rhel7-auto3: -INFINITY +native_color: A:2 allocation score on rhel7-auto4: -INFINITY +native_color: A:3 allocation score on rhel7-auto1: -INFINITY +native_color: A:3 allocation score on rhel7-auto2: -INFINITY +native_color: A:3 allocation score on rhel7-auto3: -INFINITY +native_color: A:3 allocation score on rhel7-auto4: -INFINITY +native_color: B:0 allocation score on rhel7-auto1: -INFINITY +native_color: B:0 allocation score on rhel7-auto2: -INFINITY +native_color: B:0 allocation score on rhel7-auto3: 0 +native_color: B:0 allocation score on rhel7-auto4: 0 +native_color: B:1 allocation score on rhel7-auto1: -INFINITY +native_color: B:1 allocation score on rhel7-auto2: -INFINITY +native_color: B:1 allocation score on rhel7-auto3: 0 +native_color: B:1 allocation score on rhel7-auto4: -INFINITY +native_color: B:2 allocation score on rhel7-auto1: -INFINITY +native_color: B:2 allocation score on rhel7-auto2: -INFINITY +native_color: B:2 allocation score on rhel7-auto3: -INFINITY +native_color: B:2 allocation score on rhel7-auto4: -INFINITY +native_color: B:3 allocation score on rhel7-auto1: -INFINITY +native_color: B:3 allocation score on rhel7-auto2: -INFINITY +native_color: B:3 allocation score on rhel7-auto3: -INFINITY +native_color: B:3 allocation score on rhel7-auto4: -INFINITY +native_color: shooter allocation score on rhel7-auto1: 0 +native_color: shooter allocation score on rhel7-auto2: 0 +native_color: shooter allocation score on rhel7-auto3: 0 +native_color: shooter allocation score on rhel7-auto4: 0 diff --git a/pengine/test10/clone-require-all-2.summary b/pengine/test10/clone-require-all-2.summary new file mode 100644 index 0000000000..d4b2519247 --- /dev/null +++ b/pengine/test10/clone-require-all-2.summary @@ -0,0 +1,41 @@ + +Current cluster status: +Node rhel7-auto1 (1): standby +Node rhel7-auto2 (2): standby +Online: [ rhel7-auto3 rhel7-auto4 ] + + shooter (stonith:fence_xvm): Started rhel7-auto1 + Clone Set: A-clone [A] + Started: [ rhel7-auto1 rhel7-auto2 ] + Stopped: [ rhel7-auto3 rhel7-auto4 ] + Clone Set: B-clone [B] + Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ] + +Transition Summary: + * Move shooter (Started rhel7-auto1 -> rhel7-auto3) + * Stop A:0 (rhel7-auto1) + * Stop A:1 (rhel7-auto2) + * Start B:0 (rhel7-auto4 - blocked) + * Start B:1 (rhel7-auto3 - blocked) + +Executing cluster transition: + * Resource action: shooter stop on rhel7-auto1 + * Pseudo action: A-clone_stop_0 + * Resource action: shooter start on rhel7-auto3 + * Resource action: A 
stop on rhel7-auto1 + * Resource action: A stop on rhel7-auto2 + * Pseudo action: A-clone_stopped_0 + * Pseudo action: all_stopped + * Resource action: shooter monitor=60000 on rhel7-auto3 + +Revised cluster status: +Node rhel7-auto1 (1): standby +Node rhel7-auto2 (2): standby +Online: [ rhel7-auto3 rhel7-auto4 ] + + shooter (stonith:fence_xvm): Started rhel7-auto3 + Clone Set: A-clone [A] + Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ] + Clone Set: B-clone [B] + Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ] + diff --git a/pengine/test10/clone-require-all-2.xml b/pengine/test10/clone-require-all-2.xml new file mode 100644 index 0000000000..1fd9576626 --- /dev/null +++ b/pengine/test10/clone-require-all-2.xml @@ -0,0 +1,160 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/pengine/test10/clone-require-all-3.dot b/pengine/test10/clone-require-all-3.dot new file mode 100644 index 0000000000..d93693d439 --- /dev/null +++ b/pengine/test10/clone-require-all-3.dot @@ -0,0 +1,57 @@ + digraph "g" { +"A-clone_running_0" [ style=dashed color="red" fontcolor="orange"] +"A-clone_start_0" -> "A-clone_running_0" [ style = dashed] +"A-clone_start_0" -> "A_start_0 " [ style = dashed] +"A-clone_start_0" [ style=dashed color="red" fontcolor="orange"] +"A-clone_stop_0" -> "A-clone_stopped_0" [ style = bold] +"A-clone_stop_0" -> "A_stop_0 rhel7-auto1" [ style = bold] +"A-clone_stop_0" -> "A_stop_0 rhel7-auto2" [ style = bold] +"A-clone_stop_0" [ style=bold color="green" fontcolor="orange"] +"A-clone_stopped_0" -> "A-clone_start_0" [ style = dashed] +"A-clone_stopped_0" [ style=bold color="green" fontcolor="orange"] +"A_start_0 " -> "A-clone_running_0" [ style = dashed] +"A_start_0 " [ style=dashed color="red" fontcolor="black"] +"A_stop_0 rhel7-auto1" -> "A-clone_stopped_0" [ style = bold] +"A_stop_0 rhel7-auto1" -> "A_start_0 " [ style = dashed] +"A_stop_0 rhel7-auto1" -> "all_stopped" [ style = bold] +"A_stop_0 rhel7-auto1" [ style=bold color="green" fontcolor="black"] +"A_stop_0 rhel7-auto2" -> "A-clone_stopped_0" [ style = bold] +"A_stop_0 rhel7-auto2" -> "A_start_0 " [ style = dashed] +"A_stop_0 rhel7-auto2" -> "all_stopped" [ style = bold] +"A_stop_0 rhel7-auto2" [ style=bold color="green" fontcolor="black"] +"B-clone_running_0" [ style=dashed color="red" fontcolor="orange"] +"B-clone_start_0" -> "B-clone_running_0" [ style = dashed] +"B-clone_start_0" -> "B_start_0 rhel7-auto3" [ style = dashed] +"B-clone_start_0" -> "B_start_0 rhel7-auto4" [ style = dashed] +"B-clone_start_0" [ style=dashed color="red" fontcolor="orange"] +"B-clone_stop_0" -> "B-clone_stopped_0" [ style = bold] +"B-clone_stop_0" -> "B_stop_0 rhel7-auto3" [ style = bold] +"B-clone_stop_0" -> "B_stop_0 rhel7-auto4" [ style = bold] +"B-clone_stop_0" [ style=bold color="green" fontcolor="orange"] +"B-clone_stopped_0" -> "A-clone_stop_0" [ style = bold] +"B-clone_stopped_0" -> "B-clone_start_0" [ style = dashed] +"B-clone_stopped_0" [ style=bold color="green" fontcolor="orange"] +"B_monitor_10000 rhel7-auto3" [ style=dashed color="red" fontcolor="black"] +"B_monitor_10000 rhel7-auto4" [ style=dashed color="red" fontcolor="black"] +"B_start_0 rhel7-auto3" -> "B-clone_running_0" [ style = dashed] +"B_start_0 rhel7-auto3" -> 
"B_monitor_10000 rhel7-auto3" [ style = dashed] +"B_start_0 rhel7-auto3" [ style=dashed color="red" fontcolor="black"] +"B_start_0 rhel7-auto4" -> "B-clone_running_0" [ style = dashed] +"B_start_0 rhel7-auto4" -> "B_monitor_10000 rhel7-auto4" [ style = dashed] +"B_start_0 rhel7-auto4" [ style=dashed color="red" fontcolor="black"] +"B_stop_0 rhel7-auto3" -> "B-clone_stopped_0" [ style = bold] +"B_stop_0 rhel7-auto3" -> "B_start_0 rhel7-auto3" [ style = dashed] +"B_stop_0 rhel7-auto3" -> "all_stopped" [ style = bold] +"B_stop_0 rhel7-auto3" [ style=bold color="green" fontcolor="black"] +"B_stop_0 rhel7-auto4" -> "B-clone_stopped_0" [ style = bold] +"B_stop_0 rhel7-auto4" -> "B_start_0 rhel7-auto4" [ style = dashed] +"B_stop_0 rhel7-auto4" -> "all_stopped" [ style = bold] +"B_stop_0 rhel7-auto4" [ style=bold color="green" fontcolor="black"] +"all_stopped" [ style=bold color="green" fontcolor="orange"] +"shooter_monitor_60000 rhel7-auto3" [ style=bold color="green" fontcolor="black"] +"shooter_start_0 rhel7-auto3" -> "shooter_monitor_60000 rhel7-auto3" [ style = bold] +"shooter_start_0 rhel7-auto3" [ style=bold color="green" fontcolor="black"] +"shooter_stop_0 rhel7-auto1" -> "all_stopped" [ style = bold] +"shooter_stop_0 rhel7-auto1" -> "shooter_start_0 rhel7-auto3" [ style = bold] +"shooter_stop_0 rhel7-auto1" [ style=bold color="green" fontcolor="black"] +} diff --git a/pengine/test10/clone-require-all-3.exp b/pengine/test10/clone-require-all-3.exp new file mode 100644 index 0000000000..02eea7adb2 --- /dev/null +++ b/pengine/test10/clone-require-all-3.exp @@ -0,0 +1,169 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/pengine/test10/clone-require-all-3.scores b/pengine/test10/clone-require-all-3.scores new file mode 100644 index 0000000000..814a972883 --- /dev/null +++ b/pengine/test10/clone-require-all-3.scores @@ -0,0 +1,77 @@ +Allocation scores: +clone_color: A-clone allocation score on rhel7-auto1: 0 +clone_color: A-clone allocation score on rhel7-auto2: 0 +clone_color: A-clone allocation score on rhel7-auto3: -INFINITY +clone_color: A-clone allocation score on rhel7-auto4: -INFINITY +clone_color: A:0 allocation score on rhel7-auto1: 1 +clone_color: A:0 allocation score on rhel7-auto2: 0 +clone_color: A:0 allocation score on rhel7-auto3: -INFINITY +clone_color: A:0 allocation score on rhel7-auto4: -INFINITY +clone_color: A:1 allocation score on rhel7-auto1: 0 +clone_color: A:1 allocation score on rhel7-auto2: 1 +clone_color: A:1 allocation score on rhel7-auto3: -INFINITY +clone_color: A:1 allocation score on rhel7-auto4: -INFINITY +clone_color: A:2 allocation score on rhel7-auto1: 0 +clone_color: A:2 allocation score on rhel7-auto2: 0 +clone_color: A:2 allocation score on rhel7-auto3: -INFINITY +clone_color: A:2 allocation score on rhel7-auto4: -INFINITY +clone_color: A:3 allocation score on rhel7-auto1: 0 +clone_color: A:3 allocation score on rhel7-auto2: 0 +clone_color: A:3 allocation score on rhel7-auto3: -INFINITY +clone_color: A:3 allocation score on rhel7-auto4: -INFINITY +clone_color: B-clone allocation score on rhel7-auto1: -INFINITY +clone_color: B-clone allocation score on rhel7-auto2: -INFINITY +clone_color: B-clone allocation score on rhel7-auto3: 0 +clone_color: B-clone 
allocation score on rhel7-auto4: 0 +clone_color: B:0 allocation score on rhel7-auto1: -INFINITY +clone_color: B:0 allocation score on rhel7-auto2: -INFINITY +clone_color: B:0 allocation score on rhel7-auto3: 1 +clone_color: B:0 allocation score on rhel7-auto4: 0 +clone_color: B:1 allocation score on rhel7-auto1: -INFINITY +clone_color: B:1 allocation score on rhel7-auto2: -INFINITY +clone_color: B:1 allocation score on rhel7-auto3: 0 +clone_color: B:1 allocation score on rhel7-auto4: 1 +clone_color: B:2 allocation score on rhel7-auto1: -INFINITY +clone_color: B:2 allocation score on rhel7-auto2: -INFINITY +clone_color: B:2 allocation score on rhel7-auto3: 0 +clone_color: B:2 allocation score on rhel7-auto4: 0 +clone_color: B:3 allocation score on rhel7-auto1: -INFINITY +clone_color: B:3 allocation score on rhel7-auto2: -INFINITY +clone_color: B:3 allocation score on rhel7-auto3: 0 +clone_color: B:3 allocation score on rhel7-auto4: 0 +native_color: A:0 allocation score on rhel7-auto1: -INFINITY +native_color: A:0 allocation score on rhel7-auto2: -INFINITY +native_color: A:0 allocation score on rhel7-auto3: -INFINITY +native_color: A:0 allocation score on rhel7-auto4: -INFINITY +native_color: A:1 allocation score on rhel7-auto1: -INFINITY +native_color: A:1 allocation score on rhel7-auto2: -INFINITY +native_color: A:1 allocation score on rhel7-auto3: -INFINITY +native_color: A:1 allocation score on rhel7-auto4: -INFINITY +native_color: A:2 allocation score on rhel7-auto1: -INFINITY +native_color: A:2 allocation score on rhel7-auto2: -INFINITY +native_color: A:2 allocation score on rhel7-auto3: -INFINITY +native_color: A:2 allocation score on rhel7-auto4: -INFINITY +native_color: A:3 allocation score on rhel7-auto1: -INFINITY +native_color: A:3 allocation score on rhel7-auto2: -INFINITY +native_color: A:3 allocation score on rhel7-auto3: -INFINITY +native_color: A:3 allocation score on rhel7-auto4: -INFINITY +native_color: B:0 allocation score on rhel7-auto1: -INFINITY +native_color: B:0 allocation score on rhel7-auto2: -INFINITY +native_color: B:0 allocation score on rhel7-auto3: 1 +native_color: B:0 allocation score on rhel7-auto4: -INFINITY +native_color: B:1 allocation score on rhel7-auto1: -INFINITY +native_color: B:1 allocation score on rhel7-auto2: -INFINITY +native_color: B:1 allocation score on rhel7-auto3: 0 +native_color: B:1 allocation score on rhel7-auto4: 1 +native_color: B:2 allocation score on rhel7-auto1: -INFINITY +native_color: B:2 allocation score on rhel7-auto2: -INFINITY +native_color: B:2 allocation score on rhel7-auto3: -INFINITY +native_color: B:2 allocation score on rhel7-auto4: -INFINITY +native_color: B:3 allocation score on rhel7-auto1: -INFINITY +native_color: B:3 allocation score on rhel7-auto2: -INFINITY +native_color: B:3 allocation score on rhel7-auto3: -INFINITY +native_color: B:3 allocation score on rhel7-auto4: -INFINITY +native_color: shooter allocation score on rhel7-auto1: 0 +native_color: shooter allocation score on rhel7-auto2: 0 +native_color: shooter allocation score on rhel7-auto3: 0 +native_color: shooter allocation score on rhel7-auto4: 0 diff --git a/pengine/test10/clone-require-all-3.summary b/pengine/test10/clone-require-all-3.summary new file mode 100644 index 0000000000..68191b168a --- /dev/null +++ b/pengine/test10/clone-require-all-3.summary @@ -0,0 +1,46 @@ + +Current cluster status: +Node rhel7-auto1 (1): standby +Node rhel7-auto2 (2): standby +Online: [ rhel7-auto3 rhel7-auto4 ] + + shooter (stonith:fence_xvm): Started rhel7-auto1 + 
Clone Set: A-clone [A] + Started: [ rhel7-auto1 rhel7-auto2 ] + Stopped: [ rhel7-auto3 rhel7-auto4 ] + Clone Set: B-clone [B] + Started: [ rhel7-auto3 rhel7-auto4 ] + Stopped: [ rhel7-auto1 rhel7-auto2 ] + +Transition Summary: + * Move shooter (Started rhel7-auto1 -> rhel7-auto3) + * Stop A:0 (rhel7-auto1) + * Stop A:1 (rhel7-auto2) + * Stop B:0 (Started rhel7-auto3) + * Stop B:1 (Started rhel7-auto4) + +Executing cluster transition: + * Resource action: shooter stop on rhel7-auto1 + * Pseudo action: B-clone_stop_0 + * Resource action: shooter start on rhel7-auto3 + * Resource action: B stop on rhel7-auto3 + * Resource action: B stop on rhel7-auto4 + * Pseudo action: B-clone_stopped_0 + * Resource action: shooter monitor=60000 on rhel7-auto3 + * Pseudo action: A-clone_stop_0 + * Resource action: A stop on rhel7-auto1 + * Resource action: A stop on rhel7-auto2 + * Pseudo action: A-clone_stopped_0 + * Pseudo action: all_stopped + +Revised cluster status: +Node rhel7-auto1 (1): standby +Node rhel7-auto2 (2): standby +Online: [ rhel7-auto3 rhel7-auto4 ] + + shooter (stonith:fence_xvm): Started rhel7-auto3 + Clone Set: A-clone [A] + Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ] + Clone Set: B-clone [B] + Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ] + diff --git a/pengine/test10/clone-require-all-3.xml b/pengine/test10/clone-require-all-3.xml new file mode 100644 index 0000000000..b04f0e2602 --- /dev/null +++ b/pengine/test10/clone-require-all-3.xml @@ -0,0 +1,160 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/pengine/test10/clone-require-all-4.dot b/pengine/test10/clone-require-all-4.dot new file mode 100644 index 0000000000..4b9521fa00 --- /dev/null +++ b/pengine/test10/clone-require-all-4.dot @@ -0,0 +1,24 @@ + digraph "g" { +"A-clone_running_0" [ style=bold color="green" fontcolor="orange"] +"A-clone_start_0" -> "A-clone_running_0" [ style = bold] +"A-clone_start_0" -> "A_start_0 " [ style = dashed] +"A-clone_start_0" [ style=bold color="green" fontcolor="orange"] +"A-clone_stop_0" -> "A-clone_stopped_0" [ style = bold] +"A-clone_stop_0" -> "A_stop_0 rhel7-auto1" [ style = bold] +"A-clone_stop_0" [ style=bold color="green" fontcolor="orange"] +"A-clone_stopped_0" -> "A-clone_start_0" [ style = bold] +"A-clone_stopped_0" [ style=bold color="green" fontcolor="orange"] +"A_start_0 " -> "A-clone_running_0" [ style = dashed] +"A_start_0 " [ style=dashed color="red" fontcolor="black"] +"A_stop_0 rhel7-auto1" -> "A-clone_stopped_0" [ style = bold] +"A_stop_0 rhel7-auto1" -> "A_start_0 " [ style = dashed] +"A_stop_0 rhel7-auto1" -> "all_stopped" [ style = bold] +"A_stop_0 rhel7-auto1" [ style=bold color="green" fontcolor="black"] +"all_stopped" [ style=bold color="green" fontcolor="orange"] +"shooter_monitor_60000 rhel7-auto2" [ style=bold color="green" fontcolor="black"] +"shooter_start_0 rhel7-auto2" -> "shooter_monitor_60000 rhel7-auto2" [ style = bold] +"shooter_start_0 rhel7-auto2" [ style=bold color="green" fontcolor="black"] +"shooter_stop_0 rhel7-auto1" -> "all_stopped" [ style = bold] +"shooter_stop_0 rhel7-auto1" -> "shooter_start_0 rhel7-auto2" [ style = bold] +"shooter_stop_0 rhel7-auto1" [ style=bold color="green" fontcolor="black"] +} diff --git 
a/pengine/test10/clone-require-all-4.exp b/pengine/test10/clone-require-all-4.exp new file mode 100644 index 0000000000..53c152975b --- /dev/null +++ b/pengine/test10/clone-require-all-4.exp @@ -0,0 +1,112 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/pengine/test10/clone-require-all-4.scores b/pengine/test10/clone-require-all-4.scores new file mode 100644 index 0000000000..c602a1d74b --- /dev/null +++ b/pengine/test10/clone-require-all-4.scores @@ -0,0 +1,77 @@ +Allocation scores: +clone_color: A-clone allocation score on rhel7-auto1: 0 +clone_color: A-clone allocation score on rhel7-auto2: 0 +clone_color: A-clone allocation score on rhel7-auto3: -INFINITY +clone_color: A-clone allocation score on rhel7-auto4: -INFINITY +clone_color: A:0 allocation score on rhel7-auto1: 1 +clone_color: A:0 allocation score on rhel7-auto2: 0 +clone_color: A:0 allocation score on rhel7-auto3: -INFINITY +clone_color: A:0 allocation score on rhel7-auto4: -INFINITY +clone_color: A:1 allocation score on rhel7-auto1: 0 +clone_color: A:1 allocation score on rhel7-auto2: 1 +clone_color: A:1 allocation score on rhel7-auto3: -INFINITY +clone_color: A:1 allocation score on rhel7-auto4: -INFINITY +clone_color: A:2 allocation score on rhel7-auto1: 0 +clone_color: A:2 allocation score on rhel7-auto2: 0 +clone_color: A:2 allocation score on rhel7-auto3: -INFINITY +clone_color: A:2 allocation score on rhel7-auto4: -INFINITY +clone_color: A:3 allocation score on rhel7-auto1: 0 +clone_color: A:3 allocation score on rhel7-auto2: 0 +clone_color: A:3 allocation score on rhel7-auto3: -INFINITY +clone_color: A:3 allocation score on rhel7-auto4: -INFINITY +clone_color: B-clone allocation score on rhel7-auto1: -INFINITY +clone_color: B-clone allocation score on rhel7-auto2: -INFINITY +clone_color: B-clone allocation score on rhel7-auto3: 0 +clone_color: B-clone allocation score on rhel7-auto4: 0 +clone_color: B:0 allocation score on rhel7-auto1: -INFINITY +clone_color: B:0 allocation score on rhel7-auto2: -INFINITY +clone_color: B:0 allocation score on rhel7-auto3: 1 +clone_color: B:0 allocation score on rhel7-auto4: 0 +clone_color: B:1 allocation score on rhel7-auto1: -INFINITY +clone_color: B:1 allocation score on rhel7-auto2: -INFINITY +clone_color: B:1 allocation score on rhel7-auto3: 0 +clone_color: B:1 allocation score on rhel7-auto4: 1 +clone_color: B:2 allocation score on rhel7-auto1: -INFINITY +clone_color: B:2 allocation score on rhel7-auto2: -INFINITY +clone_color: B:2 allocation score on rhel7-auto3: 0 +clone_color: B:2 allocation score on rhel7-auto4: 0 +clone_color: B:3 allocation score on rhel7-auto1: -INFINITY +clone_color: B:3 allocation score on rhel7-auto2: -INFINITY +clone_color: B:3 allocation score on rhel7-auto3: 0 +clone_color: B:3 allocation score on rhel7-auto4: 0 +native_color: A:0 allocation score on rhel7-auto1: -INFINITY +native_color: A:0 allocation score on rhel7-auto2: -INFINITY +native_color: A:0 allocation score on rhel7-auto3: -INFINITY +native_color: A:0 allocation score on rhel7-auto4: -INFINITY +native_color: A:1 allocation score on rhel7-auto1: -INFINITY +native_color: A:1 allocation score on rhel7-auto2: 1 +native_color: A:1 allocation score on rhel7-auto3: -INFINITY +native_color: A:1 allocation score on rhel7-auto4: -INFINITY +native_color: A:2 allocation score on rhel7-auto1: -INFINITY 
+native_color: A:2 allocation score on rhel7-auto2: -INFINITY +native_color: A:2 allocation score on rhel7-auto3: -INFINITY +native_color: A:2 allocation score on rhel7-auto4: -INFINITY +native_color: A:3 allocation score on rhel7-auto1: -INFINITY +native_color: A:3 allocation score on rhel7-auto2: -INFINITY +native_color: A:3 allocation score on rhel7-auto3: -INFINITY +native_color: A:3 allocation score on rhel7-auto4: -INFINITY +native_color: B:0 allocation score on rhel7-auto1: -INFINITY +native_color: B:0 allocation score on rhel7-auto2: -INFINITY +native_color: B:0 allocation score on rhel7-auto3: 1 +native_color: B:0 allocation score on rhel7-auto4: 0 +native_color: B:1 allocation score on rhel7-auto1: -INFINITY +native_color: B:1 allocation score on rhel7-auto2: -INFINITY +native_color: B:1 allocation score on rhel7-auto3: -INFINITY +native_color: B:1 allocation score on rhel7-auto4: 1 +native_color: B:2 allocation score on rhel7-auto1: -INFINITY +native_color: B:2 allocation score on rhel7-auto2: -INFINITY +native_color: B:2 allocation score on rhel7-auto3: -INFINITY +native_color: B:2 allocation score on rhel7-auto4: -INFINITY +native_color: B:3 allocation score on rhel7-auto1: -INFINITY +native_color: B:3 allocation score on rhel7-auto2: -INFINITY +native_color: B:3 allocation score on rhel7-auto3: -INFINITY +native_color: B:3 allocation score on rhel7-auto4: -INFINITY +native_color: shooter allocation score on rhel7-auto1: 0 +native_color: shooter allocation score on rhel7-auto2: 0 +native_color: shooter allocation score on rhel7-auto3: 0 +native_color: shooter allocation score on rhel7-auto4: 0 diff --git a/pengine/test10/clone-require-all-4.summary b/pengine/test10/clone-require-all-4.summary new file mode 100644 index 0000000000..49ae3bd87e --- /dev/null +++ b/pengine/test10/clone-require-all-4.summary @@ -0,0 +1,40 @@ + +Current cluster status: +Node rhel7-auto1 (1): standby +Online: [ rhel7-auto2 rhel7-auto3 rhel7-auto4 ] + + shooter (stonith:fence_xvm): Started rhel7-auto1 + Clone Set: A-clone [A] + Started: [ rhel7-auto1 rhel7-auto2 ] + Stopped: [ rhel7-auto3 rhel7-auto4 ] + Clone Set: B-clone [B] + Started: [ rhel7-auto3 rhel7-auto4 ] + Stopped: [ rhel7-auto1 rhel7-auto2 ] + +Transition Summary: + * Move shooter (Started rhel7-auto1 -> rhel7-auto2) + * Stop A:0 (rhel7-auto1) + +Executing cluster transition: + * Resource action: shooter stop on rhel7-auto1 + * Pseudo action: A-clone_stop_0 + * Resource action: shooter start on rhel7-auto2 + * Resource action: A stop on rhel7-auto1 + * Pseudo action: A-clone_stopped_0 + * Pseudo action: A-clone_start_0 + * Pseudo action: all_stopped + * Resource action: shooter monitor=60000 on rhel7-auto2 + * Pseudo action: A-clone_running_0 + +Revised cluster status: +Node rhel7-auto1 (1): standby +Online: [ rhel7-auto2 rhel7-auto3 rhel7-auto4 ] + + shooter (stonith:fence_xvm): Started rhel7-auto2 + Clone Set: A-clone [A] + Started: [ rhel7-auto2 ] + Stopped: [ rhel7-auto1 rhel7-auto3 rhel7-auto4 ] + Clone Set: B-clone [B] + Started: [ rhel7-auto3 rhel7-auto4 ] + Stopped: [ rhel7-auto1 rhel7-auto2 ] + diff --git a/pengine/test10/clone-require-all-4.xml b/pengine/test10/clone-require-all-4.xml new file mode 100644 index 0000000000..8bfd27fa34 --- /dev/null +++ b/pengine/test10/clone-require-all-4.xml @@ -0,0 +1,156 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/pengine/test10/clone-require-all-5.dot b/pengine/test10/clone-require-all-5.dot new file mode 100644 index 0000000000..ce5a59373d --- /dev/null +++ b/pengine/test10/clone-require-all-5.dot @@ -0,0 +1,31 @@ + digraph "g" { +"A-clone_running_0" [ style=bold color="green" fontcolor="orange"] +"A-clone_start_0" -> "A-clone_running_0" [ style = bold] +"A-clone_start_0" -> "A_start_0 rhel7-auto3" [ style = bold] +"A-clone_start_0" [ style=bold color="green" fontcolor="orange"] +"A_monitor_10000 rhel7-auto3" [ style=bold color="green" fontcolor="black"] +"A_start_0 rhel7-auto3" -> "A-clone_running_0" [ style = bold] +"A_start_0 rhel7-auto3" -> "A_monitor_10000 rhel7-auto3" [ style = bold] +"A_start_0 rhel7-auto3" -> "clone-one-or-more:order-A-clone-B-clone-mandatory" [ style = bold] +"A_start_0 rhel7-auto3" [ style=bold color="green" fontcolor="black"] +"B-clone_running_0" [ style=bold color="green" fontcolor="orange"] +"B-clone_start_0" -> "B-clone_running_0" [ style = bold] +"B-clone_start_0" -> "B:1_start_0 rhel7-auto3" [ style = bold] +"B-clone_start_0" -> "B:2_start_0 rhel7-auto1" [ style = bold] +"B-clone_start_0" -> "B_start_0 rhel7-auto4" [ style = bold] +"B-clone_start_0" [ style=bold color="green" fontcolor="orange"] +"B:1_monitor_10000 rhel7-auto3" [ style=bold color="green" fontcolor="black"] +"B:1_start_0 rhel7-auto3" -> "B-clone_running_0" [ style = bold] +"B:1_start_0 rhel7-auto3" -> "B:1_monitor_10000 rhel7-auto3" [ style = bold] +"B:1_start_0 rhel7-auto3" [ style=bold color="green" fontcolor="black"] +"B:2_monitor_10000 rhel7-auto1" [ style=bold color="green" fontcolor="black"] +"B:2_start_0 rhel7-auto1" -> "B-clone_running_0" [ style = bold] +"B:2_start_0 rhel7-auto1" -> "B:2_monitor_10000 rhel7-auto1" [ style = bold] +"B:2_start_0 rhel7-auto1" [ style=bold color="green" fontcolor="black"] +"B_monitor_10000 rhel7-auto4" [ style=bold color="green" fontcolor="black"] +"B_start_0 rhel7-auto4" -> "B-clone_running_0" [ style = bold] +"B_start_0 rhel7-auto4" -> "B_monitor_10000 rhel7-auto4" [ style = bold] +"B_start_0 rhel7-auto4" [ style=bold color="green" fontcolor="black"] +"clone-one-or-more:order-A-clone-B-clone-mandatory" -> "B-clone_start_0" [ style = bold] +"clone-one-or-more:order-A-clone-B-clone-mandatory" [ style=bold color="green" fontcolor="orange"] +} diff --git a/pengine/test10/clone-require-all-5.exp b/pengine/test10/clone-require-all-5.exp new file mode 100644 index 0000000000..abb4d2516e --- /dev/null +++ b/pengine/test10/clone-require-all-5.exp @@ -0,0 +1,174 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/pengine/test10/clone-require-all-5.scores b/pengine/test10/clone-require-all-5.scores new file mode 100644 index 0000000000..cca67398e1 --- /dev/null +++ b/pengine/test10/clone-require-all-5.scores @@ -0,0 +1,77 @@ +Allocation scores: +clone_color: A-clone allocation score on rhel7-auto1: 0 +clone_color: A-clone allocation score on rhel7-auto2: 0 +clone_color: A-clone allocation score on rhel7-auto3: 0 +clone_color: A-clone allocation score on rhel7-auto4: -INFINITY +clone_color: A:0 allocation score on rhel7-auto1: 1 +clone_color: A:0 allocation score on 
rhel7-auto2: 0 +clone_color: A:0 allocation score on rhel7-auto3: 0 +clone_color: A:0 allocation score on rhel7-auto4: -INFINITY +clone_color: A:1 allocation score on rhel7-auto1: 0 +clone_color: A:1 allocation score on rhel7-auto2: 1 +clone_color: A:1 allocation score on rhel7-auto3: 0 +clone_color: A:1 allocation score on rhel7-auto4: -INFINITY +clone_color: A:2 allocation score on rhel7-auto1: 0 +clone_color: A:2 allocation score on rhel7-auto2: 0 +clone_color: A:2 allocation score on rhel7-auto3: 0 +clone_color: A:2 allocation score on rhel7-auto4: -INFINITY +clone_color: A:3 allocation score on rhel7-auto1: 0 +clone_color: A:3 allocation score on rhel7-auto2: 0 +clone_color: A:3 allocation score on rhel7-auto3: 0 +clone_color: A:3 allocation score on rhel7-auto4: -INFINITY +clone_color: B-clone allocation score on rhel7-auto1: 0 +clone_color: B-clone allocation score on rhel7-auto2: -INFINITY +clone_color: B-clone allocation score on rhel7-auto3: 0 +clone_color: B-clone allocation score on rhel7-auto4: 0 +clone_color: B:0 allocation score on rhel7-auto1: 0 +clone_color: B:0 allocation score on rhel7-auto2: -INFINITY +clone_color: B:0 allocation score on rhel7-auto3: 0 +clone_color: B:0 allocation score on rhel7-auto4: 0 +clone_color: B:1 allocation score on rhel7-auto1: 0 +clone_color: B:1 allocation score on rhel7-auto2: -INFINITY +clone_color: B:1 allocation score on rhel7-auto3: 0 +clone_color: B:1 allocation score on rhel7-auto4: 0 +clone_color: B:2 allocation score on rhel7-auto1: 0 +clone_color: B:2 allocation score on rhel7-auto2: -INFINITY +clone_color: B:2 allocation score on rhel7-auto3: 0 +clone_color: B:2 allocation score on rhel7-auto4: 0 +clone_color: B:3 allocation score on rhel7-auto1: 0 +clone_color: B:3 allocation score on rhel7-auto2: -INFINITY +clone_color: B:3 allocation score on rhel7-auto3: 0 +clone_color: B:3 allocation score on rhel7-auto4: 0 +native_color: A:0 allocation score on rhel7-auto1: 1 +native_color: A:0 allocation score on rhel7-auto2: -INFINITY +native_color: A:0 allocation score on rhel7-auto3: 0 +native_color: A:0 allocation score on rhel7-auto4: -INFINITY +native_color: A:1 allocation score on rhel7-auto1: 0 +native_color: A:1 allocation score on rhel7-auto2: 1 +native_color: A:1 allocation score on rhel7-auto3: 0 +native_color: A:1 allocation score on rhel7-auto4: -INFINITY +native_color: A:2 allocation score on rhel7-auto1: -INFINITY +native_color: A:2 allocation score on rhel7-auto2: -INFINITY +native_color: A:2 allocation score on rhel7-auto3: 0 +native_color: A:2 allocation score on rhel7-auto4: -INFINITY +native_color: A:3 allocation score on rhel7-auto1: -INFINITY +native_color: A:3 allocation score on rhel7-auto2: -INFINITY +native_color: A:3 allocation score on rhel7-auto3: -INFINITY +native_color: A:3 allocation score on rhel7-auto4: -INFINITY +native_color: B:0 allocation score on rhel7-auto1: 0 +native_color: B:0 allocation score on rhel7-auto2: -INFINITY +native_color: B:0 allocation score on rhel7-auto3: 0 +native_color: B:0 allocation score on rhel7-auto4: 0 +native_color: B:1 allocation score on rhel7-auto1: 0 +native_color: B:1 allocation score on rhel7-auto2: -INFINITY +native_color: B:1 allocation score on rhel7-auto3: 0 +native_color: B:1 allocation score on rhel7-auto4: -INFINITY +native_color: B:2 allocation score on rhel7-auto1: 0 +native_color: B:2 allocation score on rhel7-auto2: -INFINITY +native_color: B:2 allocation score on rhel7-auto3: -INFINITY +native_color: B:2 allocation score on rhel7-auto4: -INFINITY 
+native_color: B:3 allocation score on rhel7-auto1: -INFINITY +native_color: B:3 allocation score on rhel7-auto2: -INFINITY +native_color: B:3 allocation score on rhel7-auto3: -INFINITY +native_color: B:3 allocation score on rhel7-auto4: -INFINITY +native_color: shooter allocation score on rhel7-auto1: 0 +native_color: shooter allocation score on rhel7-auto2: 0 +native_color: shooter allocation score on rhel7-auto3: 0 +native_color: shooter allocation score on rhel7-auto4: 0 diff --git a/pengine/test10/clone-require-all-5.summary b/pengine/test10/clone-require-all-5.summary new file mode 100644 index 0000000000..2820093bbe --- /dev/null +++ b/pengine/test10/clone-require-all-5.summary @@ -0,0 +1,43 @@ + +Current cluster status: +Online: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ] + + shooter (stonith:fence_xvm): Started rhel7-auto1 + Clone Set: A-clone [A] + Started: [ rhel7-auto1 rhel7-auto2 ] + Stopped: [ rhel7-auto3 rhel7-auto4 ] + Clone Set: B-clone [B] + Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ] + +Transition Summary: + * Start A:2 (rhel7-auto3) + * Start B:0 (rhel7-auto4) + * Start B:1 (rhel7-auto3) + * Start B:2 (rhel7-auto1) + +Executing cluster transition: + * Pseudo action: A-clone_start_0 + * Resource action: A start on rhel7-auto3 + * Pseudo action: A-clone_running_0 + * Pseudo action: clone-one-or-more:order-A-clone-B-clone-mandatory + * Resource action: A monitor=10000 on rhel7-auto3 + * Pseudo action: B-clone_start_0 + * Resource action: B start on rhel7-auto4 + * Resource action: B start on rhel7-auto3 + * Resource action: B start on rhel7-auto1 + * Pseudo action: B-clone_running_0 + * Resource action: B monitor=10000 on rhel7-auto4 + * Resource action: B monitor=10000 on rhel7-auto3 + * Resource action: B monitor=10000 on rhel7-auto1 + +Revised cluster status: +Online: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ] + + shooter (stonith:fence_xvm): Started rhel7-auto1 + Clone Set: A-clone [A] + Started: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 ] + Stopped: [ rhel7-auto4 ] + Clone Set: B-clone [B] + Started: [ rhel7-auto1 rhel7-auto3 rhel7-auto4 ] + Stopped: [ rhel7-auto2 ] + diff --git a/pengine/test10/clone-require-all-5.xml b/pengine/test10/clone-require-all-5.xml new file mode 100644 index 0000000000..1079dae0cb --- /dev/null +++ b/pengine/test10/clone-require-all-5.xml @@ -0,0 +1,150 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/pengine/test10/clone-require-all-6.dot b/pengine/test10/clone-require-all-6.dot new file mode 100644 index 0000000000..3ee5c89ec1 --- /dev/null +++ b/pengine/test10/clone-require-all-6.dot @@ -0,0 +1,23 @@ + digraph "g" { +"A-clone_running_0" [ style=bold color="green" fontcolor="orange"] +"A-clone_start_0" -> "A-clone_running_0" [ style = bold] +"A-clone_start_0" -> "A_start_0 " [ style = dashed] +"A-clone_start_0" [ style=bold color="green" fontcolor="orange"] +"A-clone_stop_0" -> "A-clone_stopped_0" [ style = bold] +"A-clone_stop_0" -> "A_stop_0 rhel7-auto1" [ style = bold] +"A-clone_stop_0" -> "A_stop_0 rhel7-auto3" [ style = bold] +"A-clone_stop_0" [ style=bold color="green" fontcolor="orange"] +"A-clone_stopped_0" -> "A-clone_start_0" [ style = bold] +"A-clone_stopped_0" [ style=bold color="green" fontcolor="orange"] +"A_start_0 " -> 
"A-clone_running_0" [ style = dashed] +"A_start_0 " [ style=dashed color="red" fontcolor="black"] +"A_stop_0 rhel7-auto1" -> "A-clone_stopped_0" [ style = bold] +"A_stop_0 rhel7-auto1" -> "A_start_0 " [ style = dashed] +"A_stop_0 rhel7-auto1" -> "all_stopped" [ style = bold] +"A_stop_0 rhel7-auto1" [ style=bold color="green" fontcolor="black"] +"A_stop_0 rhel7-auto3" -> "A-clone_stopped_0" [ style = bold] +"A_stop_0 rhel7-auto3" -> "A_start_0 " [ style = dashed] +"A_stop_0 rhel7-auto3" -> "all_stopped" [ style = bold] +"A_stop_0 rhel7-auto3" [ style=bold color="green" fontcolor="black"] +"all_stopped" [ style=bold color="green" fontcolor="orange"] +} diff --git a/pengine/test10/clone-require-all-6.exp b/pengine/test10/clone-require-all-6.exp new file mode 100644 index 0000000000..8cc4c5785a --- /dev/null +++ b/pengine/test10/clone-require-all-6.exp @@ -0,0 +1,93 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/pengine/test10/clone-require-all-6.scores b/pengine/test10/clone-require-all-6.scores new file mode 100644 index 0000000000..4e8ee5a504 --- /dev/null +++ b/pengine/test10/clone-require-all-6.scores @@ -0,0 +1,77 @@ +Allocation scores: +clone_color: A-clone allocation score on rhel7-auto1: -INFINITY +clone_color: A-clone allocation score on rhel7-auto2: 0 +clone_color: A-clone allocation score on rhel7-auto3: -INFINITY +clone_color: A-clone allocation score on rhel7-auto4: -INFINITY +clone_color: A:0 allocation score on rhel7-auto1: -INFINITY +clone_color: A:0 allocation score on rhel7-auto2: 0 +clone_color: A:0 allocation score on rhel7-auto3: -INFINITY +clone_color: A:0 allocation score on rhel7-auto4: -INFINITY +clone_color: A:1 allocation score on rhel7-auto1: -INFINITY +clone_color: A:1 allocation score on rhel7-auto2: 1 +clone_color: A:1 allocation score on rhel7-auto3: -INFINITY +clone_color: A:1 allocation score on rhel7-auto4: -INFINITY +clone_color: A:2 allocation score on rhel7-auto1: -INFINITY +clone_color: A:2 allocation score on rhel7-auto2: 0 +clone_color: A:2 allocation score on rhel7-auto3: -INFINITY +clone_color: A:2 allocation score on rhel7-auto4: -INFINITY +clone_color: A:3 allocation score on rhel7-auto1: -INFINITY +clone_color: A:3 allocation score on rhel7-auto2: 0 +clone_color: A:3 allocation score on rhel7-auto3: -INFINITY +clone_color: A:3 allocation score on rhel7-auto4: -INFINITY +clone_color: B-clone allocation score on rhel7-auto1: 0 +clone_color: B-clone allocation score on rhel7-auto2: -INFINITY +clone_color: B-clone allocation score on rhel7-auto3: 0 +clone_color: B-clone allocation score on rhel7-auto4: 0 +clone_color: B:0 allocation score on rhel7-auto1: 1 +clone_color: B:0 allocation score on rhel7-auto2: -INFINITY +clone_color: B:0 allocation score on rhel7-auto3: 0 +clone_color: B:0 allocation score on rhel7-auto4: 0 +clone_color: B:1 allocation score on rhel7-auto1: 0 +clone_color: B:1 allocation score on rhel7-auto2: -INFINITY +clone_color: B:1 allocation score on rhel7-auto3: 1 +clone_color: B:1 allocation score on rhel7-auto4: 0 +clone_color: B:2 allocation score on rhel7-auto1: 0 +clone_color: B:2 allocation score on rhel7-auto2: -INFINITY +clone_color: B:2 allocation score on rhel7-auto3: 0 +clone_color: B:2 allocation score on rhel7-auto4: 1 +clone_color: B:3 allocation score on rhel7-auto1: 0 +clone_color: B:3 allocation score on rhel7-auto2: -INFINITY +clone_color: B:3 allocation score 
on rhel7-auto3: 0 +clone_color: B:3 allocation score on rhel7-auto4: 0 +native_color: A:0 allocation score on rhel7-auto1: -INFINITY +native_color: A:0 allocation score on rhel7-auto2: -INFINITY +native_color: A:0 allocation score on rhel7-auto3: -INFINITY +native_color: A:0 allocation score on rhel7-auto4: -INFINITY +native_color: A:1 allocation score on rhel7-auto1: -INFINITY +native_color: A:1 allocation score on rhel7-auto2: 1 +native_color: A:1 allocation score on rhel7-auto3: -INFINITY +native_color: A:1 allocation score on rhel7-auto4: -INFINITY +native_color: A:2 allocation score on rhel7-auto1: -INFINITY +native_color: A:2 allocation score on rhel7-auto2: -INFINITY +native_color: A:2 allocation score on rhel7-auto3: -INFINITY +native_color: A:2 allocation score on rhel7-auto4: -INFINITY +native_color: A:3 allocation score on rhel7-auto1: -INFINITY +native_color: A:3 allocation score on rhel7-auto2: -INFINITY +native_color: A:3 allocation score on rhel7-auto3: -INFINITY +native_color: A:3 allocation score on rhel7-auto4: -INFINITY +native_color: B:0 allocation score on rhel7-auto1: 1 +native_color: B:0 allocation score on rhel7-auto2: -INFINITY +native_color: B:0 allocation score on rhel7-auto3: -INFINITY +native_color: B:0 allocation score on rhel7-auto4: -INFINITY +native_color: B:1 allocation score on rhel7-auto1: 0 +native_color: B:1 allocation score on rhel7-auto2: -INFINITY +native_color: B:1 allocation score on rhel7-auto3: 1 +native_color: B:1 allocation score on rhel7-auto4: 0 +native_color: B:2 allocation score on rhel7-auto1: 0 +native_color: B:2 allocation score on rhel7-auto2: -INFINITY +native_color: B:2 allocation score on rhel7-auto3: -INFINITY +native_color: B:2 allocation score on rhel7-auto4: 1 +native_color: B:3 allocation score on rhel7-auto1: -INFINITY +native_color: B:3 allocation score on rhel7-auto2: -INFINITY +native_color: B:3 allocation score on rhel7-auto3: -INFINITY +native_color: B:3 allocation score on rhel7-auto4: -INFINITY +native_color: shooter allocation score on rhel7-auto1: 0 +native_color: shooter allocation score on rhel7-auto2: 0 +native_color: shooter allocation score on rhel7-auto3: 0 +native_color: shooter allocation score on rhel7-auto4: 0 diff --git a/pengine/test10/clone-require-all-6.summary b/pengine/test10/clone-require-all-6.summary new file mode 100644 index 0000000000..6561ea3020 --- /dev/null +++ b/pengine/test10/clone-require-all-6.summary @@ -0,0 +1,36 @@ + +Current cluster status: +Online: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ] + + shooter (stonith:fence_xvm): Started rhel7-auto1 + Clone Set: A-clone [A] + Started: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 ] + Stopped: [ rhel7-auto4 ] + Clone Set: B-clone [B] + Started: [ rhel7-auto1 rhel7-auto3 rhel7-auto4 ] + Stopped: [ rhel7-auto2 ] + +Transition Summary: + * Stop A:0 (rhel7-auto1) + * Stop A:2 (rhel7-auto3) + +Executing cluster transition: + * Pseudo action: A-clone_stop_0 + * Resource action: A stop on rhel7-auto1 + * Resource action: A stop on rhel7-auto3 + * Pseudo action: A-clone_stopped_0 + * Pseudo action: A-clone_start_0 + * Pseudo action: all_stopped + * Pseudo action: A-clone_running_0 + +Revised cluster status: +Online: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ] + + shooter (stonith:fence_xvm): Started rhel7-auto1 + Clone Set: A-clone [A] + Started: [ rhel7-auto2 ] + Stopped: [ rhel7-auto1 rhel7-auto3 rhel7-auto4 ] + Clone Set: B-clone [B] + Started: [ rhel7-auto1 rhel7-auto3 rhel7-auto4 ] + Stopped: [ rhel7-auto2 ] + diff --git 
a/pengine/test10/clone-require-all-6.xml b/pengine/test10/clone-require-all-6.xml new file mode 100644 index 0000000000..5222b337c8 --- /dev/null +++ b/pengine/test10/clone-require-all-6.xml @@ -0,0 +1,153 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/pengine/test10/clone-require-all-7.dot b/pengine/test10/clone-require-all-7.dot new file mode 100644 index 0000000000..baa87a29f2 --- /dev/null +++ b/pengine/test10/clone-require-all-7.dot @@ -0,0 +1,51 @@ + digraph "g" { +"A-clone_running_0" [ style=bold color="green" fontcolor="orange"] +"A-clone_start_0" -> "A-clone_running_0" [ style = bold] +"A-clone_start_0" -> "A:0_start_0 rhel7-auto2" [ style = bold] +"A-clone_start_0" -> "A:1_start_0 rhel7-auto1" [ style = bold] +"A-clone_start_0" [ style=bold color="green" fontcolor="orange"] +"A:0_monitor_0 rhel7-auto2" -> "probe_complete rhel7-auto2" [ style = bold] +"A:0_monitor_0 rhel7-auto2" [ style=bold color="green" fontcolor="black"] +"A:0_monitor_0 rhel7-auto3" -> "probe_complete rhel7-auto3" [ style = bold] +"A:0_monitor_0 rhel7-auto3" [ style=bold color="green" fontcolor="black"] +"A:0_monitor_0 rhel7-auto4" -> "probe_complete rhel7-auto4" [ style = bold] +"A:0_monitor_0 rhel7-auto4" [ style=bold color="green" fontcolor="black"] +"A:0_monitor_10000 rhel7-auto2" [ style=bold color="green" fontcolor="black"] +"A:0_start_0 rhel7-auto2" -> "A-clone_running_0" [ style = bold] +"A:0_start_0 rhel7-auto2" -> "A:0_monitor_10000 rhel7-auto2" [ style = bold] +"A:0_start_0 rhel7-auto2" -> "clone-one-or-more:order-A-clone-B-clone-mandatory" [ style = bold] +"A:0_start_0 rhel7-auto2" [ style=bold color="green" fontcolor="black"] +"A:1_monitor_0 rhel7-auto1" -> "probe_complete rhel7-auto1" [ style = bold] +"A:1_monitor_0 rhel7-auto1" [ style=bold color="green" fontcolor="black"] +"A:1_monitor_10000 rhel7-auto1" [ style=bold color="green" fontcolor="black"] +"A:1_start_0 rhel7-auto1" -> "A-clone_running_0" [ style = bold] +"A:1_start_0 rhel7-auto1" -> "A:1_monitor_10000 rhel7-auto1" [ style = bold] +"A:1_start_0 rhel7-auto1" -> "clone-one-or-more:order-A-clone-B-clone-mandatory" [ style = bold] +"A:1_start_0 rhel7-auto1" [ style=bold color="green" fontcolor="black"] +"B-clone_running_0" [ style=bold color="green" fontcolor="orange"] +"B-clone_start_0" -> "B-clone_running_0" [ style = bold] +"B-clone_start_0" -> "B:1_start_0 rhel7-auto4" [ style = bold] +"B-clone_start_0" -> "B_start_0 rhel7-auto3" [ style = bold] +"B-clone_start_0" [ style=bold color="green" fontcolor="orange"] +"B:1_monitor_10000 rhel7-auto4" [ style=bold color="green" fontcolor="black"] +"B:1_start_0 rhel7-auto4" -> "B-clone_running_0" [ style = bold] +"B:1_start_0 rhel7-auto4" -> "B:1_monitor_10000 rhel7-auto4" [ style = bold] +"B:1_start_0 rhel7-auto4" [ style=bold color="green" fontcolor="black"] +"B_monitor_10000 rhel7-auto3" [ style=bold color="green" fontcolor="black"] +"B_start_0 rhel7-auto3" -> "B-clone_running_0" [ style = bold] +"B_start_0 rhel7-auto3" -> "B_monitor_10000 rhel7-auto3" [ style = bold] +"B_start_0 rhel7-auto3" [ style=bold color="green" fontcolor="black"] +"clone-one-or-more:order-A-clone-B-clone-mandatory" -> "B-clone_start_0" [ style = bold] +"clone-one-or-more:order-A-clone-B-clone-mandatory" [ style=bold color="green" 
fontcolor="orange"] +"probe_complete rhel7-auto1" -> "probe_complete" [ style = bold] +"probe_complete rhel7-auto1" [ style=bold color="green" fontcolor="black"] +"probe_complete rhel7-auto2" -> "probe_complete" [ style = bold] +"probe_complete rhel7-auto2" [ style=bold color="green" fontcolor="black"] +"probe_complete rhel7-auto3" -> "probe_complete" [ style = bold] +"probe_complete rhel7-auto3" [ style=bold color="green" fontcolor="black"] +"probe_complete rhel7-auto4" -> "probe_complete" [ style = bold] +"probe_complete rhel7-auto4" [ style=bold color="green" fontcolor="black"] +"probe_complete" -> "A:0_start_0 rhel7-auto2" [ style = bold] +"probe_complete" -> "A:1_start_0 rhel7-auto1" [ style = bold] +"probe_complete" [ style=bold color="green" fontcolor="orange"] +} diff --git a/pengine/test10/clone-require-all-7.exp b/pengine/test10/clone-require-all-7.exp new file mode 100644 index 0000000000..d37a82f943 --- /dev/null +++ b/pengine/test10/clone-require-all-7.exp @@ -0,0 +1,288 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/pengine/test10/clone-require-all-7.scores b/pengine/test10/clone-require-all-7.scores new file mode 100644 index 0000000000..8c61fd10fa --- /dev/null +++ b/pengine/test10/clone-require-all-7.scores @@ -0,0 +1,77 @@ +Allocation scores: +clone_color: A-clone allocation score on rhel7-auto1: 0 +clone_color: A-clone allocation score on rhel7-auto2: 0 +clone_color: A-clone allocation score on rhel7-auto3: -INFINITY +clone_color: A-clone allocation score on rhel7-auto4: -INFINITY +clone_color: A:0 allocation score on rhel7-auto1: 0 +clone_color: A:0 allocation score on rhel7-auto2: 0 +clone_color: A:0 allocation score on rhel7-auto3: -INFINITY +clone_color: A:0 allocation score on rhel7-auto4: -INFINITY +clone_color: A:1 allocation score on rhel7-auto1: 0 +clone_color: A:1 allocation score on rhel7-auto2: 0 +clone_color: A:1 allocation score on rhel7-auto3: -INFINITY +clone_color: A:1 allocation score on rhel7-auto4: -INFINITY +clone_color: A:2 allocation score on rhel7-auto1: 0 +clone_color: A:2 allocation score on rhel7-auto2: 0 +clone_color: A:2 allocation score on rhel7-auto3: -INFINITY +clone_color: A:2 allocation score on rhel7-auto4: -INFINITY +clone_color: A:3 allocation score on rhel7-auto1: 0 +clone_color: A:3 allocation score on rhel7-auto2: 0 +clone_color: A:3 allocation score on rhel7-auto3: -INFINITY +clone_color: A:3 allocation score on rhel7-auto4: -INFINITY +clone_color: B-clone allocation score on rhel7-auto1: -INFINITY +clone_color: B-clone allocation score on rhel7-auto2: -INFINITY +clone_color: B-clone allocation score on rhel7-auto3: 0 +clone_color: B-clone allocation score on rhel7-auto4: 0 +clone_color: B:0 allocation score on rhel7-auto1: -INFINITY +clone_color: B:0 allocation score on rhel7-auto2: -INFINITY +clone_color: B:0 allocation score on rhel7-auto3: 0 +clone_color: B:0 allocation score on rhel7-auto4: 0 +clone_color: B:1 allocation score on rhel7-auto1: -INFINITY +clone_color: B:1 allocation score on 
rhel7-auto2: -INFINITY +clone_color: B:1 allocation score on rhel7-auto3: 0 +clone_color: B:1 allocation score on rhel7-auto4: 0 +clone_color: B:2 allocation score on rhel7-auto1: -INFINITY +clone_color: B:2 allocation score on rhel7-auto2: -INFINITY +clone_color: B:2 allocation score on rhel7-auto3: 0 +clone_color: B:2 allocation score on rhel7-auto4: 0 +clone_color: B:3 allocation score on rhel7-auto1: -INFINITY +clone_color: B:3 allocation score on rhel7-auto2: -INFINITY +clone_color: B:3 allocation score on rhel7-auto3: 0 +clone_color: B:3 allocation score on rhel7-auto4: 0 +native_color: A:0 allocation score on rhel7-auto1: 0 +native_color: A:0 allocation score on rhel7-auto2: 0 +native_color: A:0 allocation score on rhel7-auto3: -INFINITY +native_color: A:0 allocation score on rhel7-auto4: -INFINITY +native_color: A:1 allocation score on rhel7-auto1: 0 +native_color: A:1 allocation score on rhel7-auto2: -INFINITY +native_color: A:1 allocation score on rhel7-auto3: -INFINITY +native_color: A:1 allocation score on rhel7-auto4: -INFINITY +native_color: A:2 allocation score on rhel7-auto1: -INFINITY +native_color: A:2 allocation score on rhel7-auto2: -INFINITY +native_color: A:2 allocation score on rhel7-auto3: -INFINITY +native_color: A:2 allocation score on rhel7-auto4: -INFINITY +native_color: A:3 allocation score on rhel7-auto1: -INFINITY +native_color: A:3 allocation score on rhel7-auto2: -INFINITY +native_color: A:3 allocation score on rhel7-auto3: -INFINITY +native_color: A:3 allocation score on rhel7-auto4: -INFINITY +native_color: B:0 allocation score on rhel7-auto1: -INFINITY +native_color: B:0 allocation score on rhel7-auto2: -INFINITY +native_color: B:0 allocation score on rhel7-auto3: 0 +native_color: B:0 allocation score on rhel7-auto4: 0 +native_color: B:1 allocation score on rhel7-auto1: -INFINITY +native_color: B:1 allocation score on rhel7-auto2: -INFINITY +native_color: B:1 allocation score on rhel7-auto3: -INFINITY +native_color: B:1 allocation score on rhel7-auto4: 0 +native_color: B:2 allocation score on rhel7-auto1: -INFINITY +native_color: B:2 allocation score on rhel7-auto2: -INFINITY +native_color: B:2 allocation score on rhel7-auto3: -INFINITY +native_color: B:2 allocation score on rhel7-auto4: -INFINITY +native_color: B:3 allocation score on rhel7-auto1: -INFINITY +native_color: B:3 allocation score on rhel7-auto2: -INFINITY +native_color: B:3 allocation score on rhel7-auto3: -INFINITY +native_color: B:3 allocation score on rhel7-auto4: -INFINITY +native_color: shooter allocation score on rhel7-auto1: 0 +native_color: shooter allocation score on rhel7-auto2: 0 +native_color: shooter allocation score on rhel7-auto3: 0 +native_color: shooter allocation score on rhel7-auto4: 0 diff --git a/pengine/test10/clone-require-all-7.summary b/pengine/test10/clone-require-all-7.summary new file mode 100644 index 0000000000..411f738e5d --- /dev/null +++ b/pengine/test10/clone-require-all-7.summary @@ -0,0 +1,47 @@ + +Current cluster status: +Online: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ] + + shooter (stonith:fence_xvm): Started rhel7-auto1 + Clone Set: A-clone [A] + Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ] + Clone Set: B-clone [B] + Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ] + +Transition Summary: + * Start A:0 (rhel7-auto2) + * Start A:1 (rhel7-auto1) + * Start B:0 (rhel7-auto3) + * Start B:1 (rhel7-auto4) + +Executing cluster transition: + * Resource action: A:0 monitor on rhel7-auto4 + * Resource action: A:0 monitor on 
rhel7-auto3 + * Resource action: A:0 monitor on rhel7-auto2 + * Resource action: A:1 monitor on rhel7-auto1 + * Pseudo action: A-clone_start_0 + * Pseudo action: probe_complete + * Resource action: A:0 start on rhel7-auto2 + * Resource action: A:1 start on rhel7-auto1 + * Pseudo action: A-clone_running_0 + * Pseudo action: clone-one-or-more:order-A-clone-B-clone-mandatory + * Resource action: A:0 monitor=10000 on rhel7-auto2 + * Resource action: A:1 monitor=10000 on rhel7-auto1 + * Pseudo action: B-clone_start_0 + * Resource action: B start on rhel7-auto3 + * Resource action: B start on rhel7-auto4 + * Pseudo action: B-clone_running_0 + * Resource action: B monitor=10000 on rhel7-auto3 + * Resource action: B monitor=10000 on rhel7-auto4 + +Revised cluster status: +Online: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ] + + shooter (stonith:fence_xvm): Started rhel7-auto1 + Clone Set: A-clone [A] + Started: [ rhel7-auto1 rhel7-auto2 ] + Stopped: [ rhel7-auto3 rhel7-auto4 ] + Clone Set: B-clone [B] + Started: [ rhel7-auto3 rhel7-auto4 ] + Stopped: [ rhel7-auto1 rhel7-auto2 ] + diff --git a/pengine/test10/clone-require-all-7.xml b/pengine/test10/clone-require-all-7.xml new file mode 100644 index 0000000000..6aea47b643 --- /dev/null +++ b/pengine/test10/clone-require-all-7.xml @@ -0,0 +1,138 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/pengine/test10/clone-require-all-no-interleave-1.dot b/pengine/test10/clone-require-all-no-interleave-1.dot new file mode 100644 index 0000000000..d03703b627 --- /dev/null +++ b/pengine/test10/clone-require-all-no-interleave-1.dot @@ -0,0 +1,40 @@ + digraph "g" { +"A-clone_running_0" -> "B-clone_start_0" [ style = bold] +"A-clone_running_0" [ style=bold color="green" fontcolor="orange"] +"A-clone_start_0" -> "A-clone_running_0" [ style = bold] +"A-clone_start_0" -> "A_start_0 rhel7-auto3" [ style = bold] +"A-clone_start_0" [ style=bold color="green" fontcolor="orange"] +"A_monitor_10000 rhel7-auto3" [ style=bold color="green" fontcolor="black"] +"A_start_0 rhel7-auto3" -> "A-clone_running_0" [ style = bold] +"A_start_0 rhel7-auto3" -> "A_monitor_10000 rhel7-auto3" [ style = bold] +"A_start_0 rhel7-auto3" [ style=bold color="green" fontcolor="black"] +"B-clone_running_0" [ style=bold color="green" fontcolor="orange"] +"B-clone_start_0" -> "B-clone_running_0" [ style = bold] +"B-clone_start_0" -> "B_start_0 rhel7-auto3" [ style = bold] +"B-clone_start_0" [ style=bold color="green" fontcolor="orange"] +"B_monitor_10000 rhel7-auto3" [ style=bold color="green" fontcolor="black"] +"B_start_0 rhel7-auto3" -> "B-clone_running_0" [ style = bold] +"B_start_0 rhel7-auto3" -> "B_monitor_10000 rhel7-auto3" [ style = bold] +"B_start_0 rhel7-auto3" -> "clone-one-or-more:order-B-clone-C-clone-mandatory" [ style = bold] +"B_start_0 rhel7-auto3" [ style=bold color="green" fontcolor="black"] +"C-clone_running_0" [ style=bold color="green" fontcolor="orange"] +"C-clone_start_0" -> "C-clone_running_0" [ style = bold] +"C-clone_start_0" -> "C:1_start_0 rhel7-auto1" [ style = bold] +"C-clone_start_0" -> "C:2_start_0 rhel7-auto3" [ style = bold] +"C-clone_start_0" -> "C_start_0 rhel7-auto2" [ style = bold] +"C-clone_start_0" [ style=bold color="green" fontcolor="orange"] +"C:1_monitor_10000 rhel7-auto1" [ style=bold color="green" 
fontcolor="black"] +"C:1_start_0 rhel7-auto1" -> "C-clone_running_0" [ style = bold] +"C:1_start_0 rhel7-auto1" -> "C:1_monitor_10000 rhel7-auto1" [ style = bold] +"C:1_start_0 rhel7-auto1" [ style=bold color="green" fontcolor="black"] +"C:2_monitor_10000 rhel7-auto3" [ style=bold color="green" fontcolor="black"] +"C:2_start_0 rhel7-auto3" -> "C-clone_running_0" [ style = bold] +"C:2_start_0 rhel7-auto3" -> "C:2_monitor_10000 rhel7-auto3" [ style = bold] +"C:2_start_0 rhel7-auto3" [ style=bold color="green" fontcolor="black"] +"C_monitor_10000 rhel7-auto2" [ style=bold color="green" fontcolor="black"] +"C_start_0 rhel7-auto2" -> "C-clone_running_0" [ style = bold] +"C_start_0 rhel7-auto2" -> "C_monitor_10000 rhel7-auto2" [ style = bold] +"C_start_0 rhel7-auto2" [ style=bold color="green" fontcolor="black"] +"clone-one-or-more:order-B-clone-C-clone-mandatory" -> "C-clone_start_0" [ style = bold] +"clone-one-or-more:order-B-clone-C-clone-mandatory" [ style=bold color="green" fontcolor="orange"] +} diff --git a/pengine/test10/clone-require-all-no-interleave-1.exp b/pengine/test10/clone-require-all-no-interleave-1.exp new file mode 100644 index 0000000000..7048f513ec --- /dev/null +++ b/pengine/test10/clone-require-all-no-interleave-1.exp @@ -0,0 +1,227 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/pengine/test10/clone-require-all-no-interleave-1.scores b/pengine/test10/clone-require-all-no-interleave-1.scores new file mode 100644 index 0000000000..f6d3232ea0 --- /dev/null +++ b/pengine/test10/clone-require-all-no-interleave-1.scores @@ -0,0 +1,113 @@ +Allocation scores: +clone_color: A-clone allocation score on rhel7-auto1: -INFINITY +clone_color: A-clone allocation score on rhel7-auto2: -INFINITY +clone_color: A-clone allocation score on rhel7-auto3: 0 +clone_color: A-clone allocation score on rhel7-auto4: 0 +clone_color: A:0 allocation score on rhel7-auto1: -INFINITY +clone_color: A:0 allocation score on rhel7-auto2: -INFINITY +clone_color: A:0 allocation score on rhel7-auto3: 0 +clone_color: A:0 allocation score on rhel7-auto4: 0 +clone_color: A:1 allocation score on rhel7-auto1: -INFINITY +clone_color: A:1 allocation score on rhel7-auto2: -INFINITY +clone_color: A:1 allocation score on rhel7-auto3: 0 +clone_color: A:1 allocation score on rhel7-auto4: 0 +clone_color: A:2 allocation score on rhel7-auto1: -INFINITY +clone_color: A:2 allocation score on rhel7-auto2: -INFINITY +clone_color: A:2 allocation score on rhel7-auto3: 0 +clone_color: A:2 allocation score on rhel7-auto4: 0 +clone_color: A:3 allocation score on rhel7-auto1: -INFINITY +clone_color: A:3 allocation score on rhel7-auto2: -INFINITY +clone_color: A:3 allocation score on rhel7-auto3: 0 +clone_color: A:3 allocation score on rhel7-auto4: 0 +clone_color: B-clone allocation score on rhel7-auto1: 0 +clone_color: B-clone allocation score on rhel7-auto2: 0 +clone_color: B-clone allocation score on rhel7-auto3: 0 +clone_color: B-clone allocation score on rhel7-auto4: 0 +clone_color: B:0 allocation score on rhel7-auto1: 0 +clone_color: B:0 allocation score on rhel7-auto2: 0 +clone_color: B:0 allocation score on 
rhel7-auto3: 0 +clone_color: B:0 allocation score on rhel7-auto4: 0 +clone_color: B:1 allocation score on rhel7-auto1: 0 +clone_color: B:1 allocation score on rhel7-auto2: 0 +clone_color: B:1 allocation score on rhel7-auto3: 0 +clone_color: B:1 allocation score on rhel7-auto4: 0 +clone_color: B:2 allocation score on rhel7-auto1: 0 +clone_color: B:2 allocation score on rhel7-auto2: 0 +clone_color: B:2 allocation score on rhel7-auto3: 0 +clone_color: B:2 allocation score on rhel7-auto4: 0 +clone_color: B:3 allocation score on rhel7-auto1: 0 +clone_color: B:3 allocation score on rhel7-auto2: 0 +clone_color: B:3 allocation score on rhel7-auto3: 0 +clone_color: B:3 allocation score on rhel7-auto4: 0 +clone_color: C-clone allocation score on rhel7-auto1: 0 +clone_color: C-clone allocation score on rhel7-auto2: 0 +clone_color: C-clone allocation score on rhel7-auto3: 0 +clone_color: C-clone allocation score on rhel7-auto4: 0 +clone_color: C:0 allocation score on rhel7-auto1: 0 +clone_color: C:0 allocation score on rhel7-auto2: 0 +clone_color: C:0 allocation score on rhel7-auto3: 0 +clone_color: C:0 allocation score on rhel7-auto4: 0 +clone_color: C:1 allocation score on rhel7-auto1: 0 +clone_color: C:1 allocation score on rhel7-auto2: 0 +clone_color: C:1 allocation score on rhel7-auto3: 0 +clone_color: C:1 allocation score on rhel7-auto4: 0 +clone_color: C:2 allocation score on rhel7-auto1: 0 +clone_color: C:2 allocation score on rhel7-auto2: 0 +clone_color: C:2 allocation score on rhel7-auto3: 0 +clone_color: C:2 allocation score on rhel7-auto4: 0 +clone_color: C:3 allocation score on rhel7-auto1: 0 +clone_color: C:3 allocation score on rhel7-auto2: 0 +clone_color: C:3 allocation score on rhel7-auto3: 0 +clone_color: C:3 allocation score on rhel7-auto4: 0 +native_color: A:0 allocation score on rhel7-auto1: -INFINITY +native_color: A:0 allocation score on rhel7-auto2: -INFINITY +native_color: A:0 allocation score on rhel7-auto3: 0 +native_color: A:0 allocation score on rhel7-auto4: -INFINITY +native_color: A:1 allocation score on rhel7-auto1: -INFINITY +native_color: A:1 allocation score on rhel7-auto2: -INFINITY +native_color: A:1 allocation score on rhel7-auto3: -INFINITY +native_color: A:1 allocation score on rhel7-auto4: -INFINITY +native_color: A:2 allocation score on rhel7-auto1: -INFINITY +native_color: A:2 allocation score on rhel7-auto2: -INFINITY +native_color: A:2 allocation score on rhel7-auto3: -INFINITY +native_color: A:2 allocation score on rhel7-auto4: -INFINITY +native_color: A:3 allocation score on rhel7-auto1: -INFINITY +native_color: A:3 allocation score on rhel7-auto2: -INFINITY +native_color: A:3 allocation score on rhel7-auto3: -INFINITY +native_color: A:3 allocation score on rhel7-auto4: -INFINITY +native_color: B:0 allocation score on rhel7-auto1: -INFINITY +native_color: B:0 allocation score on rhel7-auto2: -INFINITY +native_color: B:0 allocation score on rhel7-auto3: 0 +native_color: B:0 allocation score on rhel7-auto4: -INFINITY +native_color: B:1 allocation score on rhel7-auto1: -INFINITY +native_color: B:1 allocation score on rhel7-auto2: -INFINITY +native_color: B:1 allocation score on rhel7-auto3: -INFINITY +native_color: B:1 allocation score on rhel7-auto4: -INFINITY +native_color: B:2 allocation score on rhel7-auto1: -INFINITY +native_color: B:2 allocation score on rhel7-auto2: -INFINITY +native_color: B:2 allocation score on rhel7-auto3: -INFINITY +native_color: B:2 allocation score on rhel7-auto4: -INFINITY +native_color: B:3 allocation score on rhel7-auto1: 
-INFINITY +native_color: B:3 allocation score on rhel7-auto2: -INFINITY +native_color: B:3 allocation score on rhel7-auto3: -INFINITY +native_color: B:3 allocation score on rhel7-auto4: -INFINITY +native_color: C:0 allocation score on rhel7-auto1: 0 +native_color: C:0 allocation score on rhel7-auto2: 0 +native_color: C:0 allocation score on rhel7-auto3: 0 +native_color: C:0 allocation score on rhel7-auto4: -INFINITY +native_color: C:1 allocation score on rhel7-auto1: 0 +native_color: C:1 allocation score on rhel7-auto2: -INFINITY +native_color: C:1 allocation score on rhel7-auto3: 0 +native_color: C:1 allocation score on rhel7-auto4: -INFINITY +native_color: C:2 allocation score on rhel7-auto1: -INFINITY +native_color: C:2 allocation score on rhel7-auto2: -INFINITY +native_color: C:2 allocation score on rhel7-auto3: 0 +native_color: C:2 allocation score on rhel7-auto4: -INFINITY +native_color: C:3 allocation score on rhel7-auto1: -INFINITY +native_color: C:3 allocation score on rhel7-auto2: -INFINITY +native_color: C:3 allocation score on rhel7-auto3: -INFINITY +native_color: C:3 allocation score on rhel7-auto4: -INFINITY +native_color: shooter allocation score on rhel7-auto1: 0 +native_color: shooter allocation score on rhel7-auto2: 0 +native_color: shooter allocation score on rhel7-auto3: 0 +native_color: shooter allocation score on rhel7-auto4: 0 diff --git a/pengine/test10/clone-require-all-no-interleave-1.summary b/pengine/test10/clone-require-all-no-interleave-1.summary new file mode 100644 index 0000000000..dd4bc99405 --- /dev/null +++ b/pengine/test10/clone-require-all-no-interleave-1.summary @@ -0,0 +1,54 @@ + +Current cluster status: +Node rhel7-auto4 (4): standby +Online: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 ] + + shooter (stonith:fence_xvm): Started rhel7-auto1 + Clone Set: A-clone [A] + Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ] + Clone Set: B-clone [B] + Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ] + Clone Set: C-clone [C] + Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ] + +Transition Summary: + * Start A:0 (rhel7-auto3) + * Start B:0 (rhel7-auto3) + * Start C:0 (rhel7-auto2) + * Start C:1 (rhel7-auto1) + * Start C:2 (rhel7-auto3) + +Executing cluster transition: + * Pseudo action: A-clone_start_0 + * Resource action: A start on rhel7-auto3 + * Pseudo action: A-clone_running_0 + * Pseudo action: B-clone_start_0 + * Resource action: A monitor=10000 on rhel7-auto3 + * Resource action: B start on rhel7-auto3 + * Pseudo action: B-clone_running_0 + * Pseudo action: clone-one-or-more:order-B-clone-C-clone-mandatory + * Resource action: B monitor=10000 on rhel7-auto3 + * Pseudo action: C-clone_start_0 + * Resource action: C start on rhel7-auto2 + * Resource action: C start on rhel7-auto1 + * Resource action: C start on rhel7-auto3 + * Pseudo action: C-clone_running_0 + * Resource action: C monitor=10000 on rhel7-auto2 + * Resource action: C monitor=10000 on rhel7-auto1 + * Resource action: C monitor=10000 on rhel7-auto3 + +Revised cluster status: +Node rhel7-auto4 (4): standby +Online: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 ] + + shooter (stonith:fence_xvm): Started rhel7-auto1 + Clone Set: A-clone [A] + Started: [ rhel7-auto3 ] + Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto4 ] + Clone Set: B-clone [B] + Started: [ rhel7-auto3 ] + Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto4 ] + Clone Set: C-clone [C] + Started: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 ] + Stopped: [ rhel7-auto4 ] + diff --git 
a/pengine/test10/clone-require-all-no-interleave-1.xml b/pengine/test10/clone-require-all-no-interleave-1.xml new file mode 100644 index 0000000000..4630f96013 --- /dev/null +++ b/pengine/test10/clone-require-all-no-interleave-1.xml @@ -0,0 +1,177 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/pengine/test10/clone-require-all-no-interleave-2.dot b/pengine/test10/clone-require-all-no-interleave-2.dot new file mode 100644 index 0000000000..1d7f8be497 --- /dev/null +++ b/pengine/test10/clone-require-all-no-interleave-2.dot @@ -0,0 +1,40 @@ + digraph "g" { +"A-clone_running_0" -> "B-clone_start_0" [ style = bold] +"A-clone_running_0" [ style=bold color="green" fontcolor="orange"] +"A-clone_start_0" -> "A-clone_running_0" [ style = bold] +"A-clone_start_0" -> "A_start_0 rhel7-auto4" [ style = bold] +"A-clone_start_0" [ style=bold color="green" fontcolor="orange"] +"A_monitor_10000 rhel7-auto4" [ style=bold color="green" fontcolor="black"] +"A_start_0 rhel7-auto4" -> "A-clone_running_0" [ style = bold] +"A_start_0 rhel7-auto4" -> "A_monitor_10000 rhel7-auto4" [ style = bold] +"A_start_0 rhel7-auto4" [ style=bold color="green" fontcolor="black"] +"B-clone_running_0" [ style=bold color="green" fontcolor="orange"] +"B-clone_start_0" -> "B-clone_running_0" [ style = bold] +"B-clone_start_0" -> "B_start_0 rhel7-auto4" [ style = bold] +"B-clone_start_0" [ style=bold color="green" fontcolor="orange"] +"B_monitor_10000 rhel7-auto4" [ style=bold color="green" fontcolor="black"] +"B_start_0 rhel7-auto4" -> "B-clone_running_0" [ style = bold] +"B_start_0 rhel7-auto4" -> "B_monitor_10000 rhel7-auto4" [ style = bold] +"B_start_0 rhel7-auto4" -> "clone-one-or-more:order-B-clone-C-clone-mandatory" [ style = bold] +"B_start_0 rhel7-auto4" [ style=bold color="green" fontcolor="black"] +"C-clone_running_0" [ style=bold color="green" fontcolor="orange"] +"C-clone_start_0" -> "C-clone_running_0" [ style = bold] +"C-clone_start_0" -> "C:1_start_0 rhel7-auto1" [ style = bold] +"C-clone_start_0" -> "C:2_start_0 rhel7-auto4" [ style = bold] +"C-clone_start_0" -> "C_start_0 rhel7-auto2" [ style = bold] +"C-clone_start_0" [ style=bold color="green" fontcolor="orange"] +"C:1_monitor_10000 rhel7-auto1" [ style=bold color="green" fontcolor="black"] +"C:1_start_0 rhel7-auto1" -> "C-clone_running_0" [ style = bold] +"C:1_start_0 rhel7-auto1" -> "C:1_monitor_10000 rhel7-auto1" [ style = bold] +"C:1_start_0 rhel7-auto1" [ style=bold color="green" fontcolor="black"] +"C:2_monitor_10000 rhel7-auto4" [ style=bold color="green" fontcolor="black"] +"C:2_start_0 rhel7-auto4" -> "C-clone_running_0" [ style = bold] +"C:2_start_0 rhel7-auto4" -> "C:2_monitor_10000 rhel7-auto4" [ style = bold] +"C:2_start_0 rhel7-auto4" [ style=bold color="green" fontcolor="black"] +"C_monitor_10000 rhel7-auto2" [ style=bold color="green" fontcolor="black"] +"C_start_0 rhel7-auto2" -> "C-clone_running_0" [ style = bold] +"C_start_0 rhel7-auto2" -> "C_monitor_10000 rhel7-auto2" [ style = bold] +"C_start_0 rhel7-auto2" [ style=bold color="green" fontcolor="black"] +"clone-one-or-more:order-B-clone-C-clone-mandatory" -> "C-clone_start_0" [ style = bold] +"clone-one-or-more:order-B-clone-C-clone-mandatory" [ style=bold color="green" 
fontcolor="orange"] +} diff --git a/pengine/test10/clone-require-all-no-interleave-2.exp b/pengine/test10/clone-require-all-no-interleave-2.exp new file mode 100644 index 0000000000..35a2df6bb1 --- /dev/null +++ b/pengine/test10/clone-require-all-no-interleave-2.exp @@ -0,0 +1,227 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/pengine/test10/clone-require-all-no-interleave-2.scores b/pengine/test10/clone-require-all-no-interleave-2.scores new file mode 100644 index 0000000000..50e054e88c --- /dev/null +++ b/pengine/test10/clone-require-all-no-interleave-2.scores @@ -0,0 +1,113 @@ +Allocation scores: +clone_color: A-clone allocation score on rhel7-auto1: -INFINITY +clone_color: A-clone allocation score on rhel7-auto2: -INFINITY +clone_color: A-clone allocation score on rhel7-auto3: 0 +clone_color: A-clone allocation score on rhel7-auto4: 0 +clone_color: A:0 allocation score on rhel7-auto1: -INFINITY +clone_color: A:0 allocation score on rhel7-auto2: -INFINITY +clone_color: A:0 allocation score on rhel7-auto3: 0 +clone_color: A:0 allocation score on rhel7-auto4: 0 +clone_color: A:1 allocation score on rhel7-auto1: -INFINITY +clone_color: A:1 allocation score on rhel7-auto2: -INFINITY +clone_color: A:1 allocation score on rhel7-auto3: 0 +clone_color: A:1 allocation score on rhel7-auto4: 0 +clone_color: A:2 allocation score on rhel7-auto1: -INFINITY +clone_color: A:2 allocation score on rhel7-auto2: -INFINITY +clone_color: A:2 allocation score on rhel7-auto3: 0 +clone_color: A:2 allocation score on rhel7-auto4: 0 +clone_color: A:3 allocation score on rhel7-auto1: -INFINITY +clone_color: A:3 allocation score on rhel7-auto2: -INFINITY +clone_color: A:3 allocation score on rhel7-auto3: 0 +clone_color: A:3 allocation score on rhel7-auto4: 0 +clone_color: B-clone allocation score on rhel7-auto1: 0 +clone_color: B-clone allocation score on rhel7-auto2: 0 +clone_color: B-clone allocation score on rhel7-auto3: 0 +clone_color: B-clone allocation score on rhel7-auto4: 0 +clone_color: B:0 allocation score on rhel7-auto1: 0 +clone_color: B:0 allocation score on rhel7-auto2: 0 +clone_color: B:0 allocation score on rhel7-auto3: 0 +clone_color: B:0 allocation score on rhel7-auto4: 0 +clone_color: B:1 allocation score on rhel7-auto1: 0 +clone_color: B:1 allocation score on rhel7-auto2: 0 +clone_color: B:1 allocation score on rhel7-auto3: 0 +clone_color: B:1 allocation score on rhel7-auto4: 0 +clone_color: B:2 allocation score on rhel7-auto1: 0 +clone_color: B:2 allocation score on rhel7-auto2: 0 +clone_color: B:2 allocation score on rhel7-auto3: 0 +clone_color: B:2 allocation score on rhel7-auto4: 0 +clone_color: B:3 allocation score on rhel7-auto1: 0 +clone_color: B:3 allocation score on rhel7-auto2: 0 +clone_color: B:3 allocation score on rhel7-auto3: 0 +clone_color: B:3 allocation score on rhel7-auto4: 0 +clone_color: C-clone allocation score on rhel7-auto1: 0 +clone_color: C-clone allocation score on rhel7-auto2: 0 +clone_color: C-clone allocation score on rhel7-auto3: 0 +clone_color: C-clone allocation score on rhel7-auto4: 0 +clone_color: C:0 allocation score on rhel7-auto1: 0 
+clone_color: C:0 allocation score on rhel7-auto2: 0 +clone_color: C:0 allocation score on rhel7-auto3: 0 +clone_color: C:0 allocation score on rhel7-auto4: 0 +clone_color: C:1 allocation score on rhel7-auto1: 0 +clone_color: C:1 allocation score on rhel7-auto2: 0 +clone_color: C:1 allocation score on rhel7-auto3: 0 +clone_color: C:1 allocation score on rhel7-auto4: 0 +clone_color: C:2 allocation score on rhel7-auto1: 0 +clone_color: C:2 allocation score on rhel7-auto2: 0 +clone_color: C:2 allocation score on rhel7-auto3: 0 +clone_color: C:2 allocation score on rhel7-auto4: 0 +clone_color: C:3 allocation score on rhel7-auto1: 0 +clone_color: C:3 allocation score on rhel7-auto2: 0 +clone_color: C:3 allocation score on rhel7-auto3: 0 +clone_color: C:3 allocation score on rhel7-auto4: 0 +native_color: A:0 allocation score on rhel7-auto1: -INFINITY +native_color: A:0 allocation score on rhel7-auto2: -INFINITY +native_color: A:0 allocation score on rhel7-auto3: -INFINITY +native_color: A:0 allocation score on rhel7-auto4: 0 +native_color: A:1 allocation score on rhel7-auto1: -INFINITY +native_color: A:1 allocation score on rhel7-auto2: -INFINITY +native_color: A:1 allocation score on rhel7-auto3: -INFINITY +native_color: A:1 allocation score on rhel7-auto4: -INFINITY +native_color: A:2 allocation score on rhel7-auto1: -INFINITY +native_color: A:2 allocation score on rhel7-auto2: -INFINITY +native_color: A:2 allocation score on rhel7-auto3: -INFINITY +native_color: A:2 allocation score on rhel7-auto4: -INFINITY +native_color: A:3 allocation score on rhel7-auto1: -INFINITY +native_color: A:3 allocation score on rhel7-auto2: -INFINITY +native_color: A:3 allocation score on rhel7-auto3: -INFINITY +native_color: A:3 allocation score on rhel7-auto4: -INFINITY +native_color: B:0 allocation score on rhel7-auto1: -INFINITY +native_color: B:0 allocation score on rhel7-auto2: -INFINITY +native_color: B:0 allocation score on rhel7-auto3: -INFINITY +native_color: B:0 allocation score on rhel7-auto4: 0 +native_color: B:1 allocation score on rhel7-auto1: -INFINITY +native_color: B:1 allocation score on rhel7-auto2: -INFINITY +native_color: B:1 allocation score on rhel7-auto3: -INFINITY +native_color: B:1 allocation score on rhel7-auto4: -INFINITY +native_color: B:2 allocation score on rhel7-auto1: -INFINITY +native_color: B:2 allocation score on rhel7-auto2: -INFINITY +native_color: B:2 allocation score on rhel7-auto3: -INFINITY +native_color: B:2 allocation score on rhel7-auto4: -INFINITY +native_color: B:3 allocation score on rhel7-auto1: -INFINITY +native_color: B:3 allocation score on rhel7-auto2: -INFINITY +native_color: B:3 allocation score on rhel7-auto3: -INFINITY +native_color: B:3 allocation score on rhel7-auto4: -INFINITY +native_color: C:0 allocation score on rhel7-auto1: 0 +native_color: C:0 allocation score on rhel7-auto2: 0 +native_color: C:0 allocation score on rhel7-auto3: -INFINITY +native_color: C:0 allocation score on rhel7-auto4: 0 +native_color: C:1 allocation score on rhel7-auto1: 0 +native_color: C:1 allocation score on rhel7-auto2: -INFINITY +native_color: C:1 allocation score on rhel7-auto3: -INFINITY +native_color: C:1 allocation score on rhel7-auto4: 0 +native_color: C:2 allocation score on rhel7-auto1: -INFINITY +native_color: C:2 allocation score on rhel7-auto2: -INFINITY +native_color: C:2 allocation score on rhel7-auto3: -INFINITY +native_color: C:2 allocation score on rhel7-auto4: 0 +native_color: C:3 allocation score on rhel7-auto1: -INFINITY +native_color: C:3 allocation 
score on rhel7-auto2: -INFINITY +native_color: C:3 allocation score on rhel7-auto3: -INFINITY +native_color: C:3 allocation score on rhel7-auto4: -INFINITY +native_color: shooter allocation score on rhel7-auto1: 0 +native_color: shooter allocation score on rhel7-auto2: 0 +native_color: shooter allocation score on rhel7-auto3: 0 +native_color: shooter allocation score on rhel7-auto4: 0 diff --git a/pengine/test10/clone-require-all-no-interleave-2.summary b/pengine/test10/clone-require-all-no-interleave-2.summary new file mode 100644 index 0000000000..f16be9b3ef --- /dev/null +++ b/pengine/test10/clone-require-all-no-interleave-2.summary @@ -0,0 +1,54 @@ + +Current cluster status: +Node rhel7-auto3 (3): standby +Online: [ rhel7-auto1 rhel7-auto2 rhel7-auto4 ] + + shooter (stonith:fence_xvm): Started rhel7-auto1 + Clone Set: A-clone [A] + Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ] + Clone Set: B-clone [B] + Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ] + Clone Set: C-clone [C] + Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ] + +Transition Summary: + * Start A:0 (rhel7-auto4) + * Start B:0 (rhel7-auto4) + * Start C:0 (rhel7-auto2) + * Start C:1 (rhel7-auto1) + * Start C:2 (rhel7-auto4) + +Executing cluster transition: + * Pseudo action: A-clone_start_0 + * Resource action: A start on rhel7-auto4 + * Pseudo action: A-clone_running_0 + * Pseudo action: B-clone_start_0 + * Resource action: A monitor=10000 on rhel7-auto4 + * Resource action: B start on rhel7-auto4 + * Pseudo action: B-clone_running_0 + * Pseudo action: clone-one-or-more:order-B-clone-C-clone-mandatory + * Resource action: B monitor=10000 on rhel7-auto4 + * Pseudo action: C-clone_start_0 + * Resource action: C start on rhel7-auto2 + * Resource action: C start on rhel7-auto1 + * Resource action: C start on rhel7-auto4 + * Pseudo action: C-clone_running_0 + * Resource action: C monitor=10000 on rhel7-auto2 + * Resource action: C monitor=10000 on rhel7-auto1 + * Resource action: C monitor=10000 on rhel7-auto4 + +Revised cluster status: +Node rhel7-auto3 (3): standby +Online: [ rhel7-auto1 rhel7-auto2 rhel7-auto4 ] + + shooter (stonith:fence_xvm): Started rhel7-auto1 + Clone Set: A-clone [A] + Started: [ rhel7-auto4 ] + Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 ] + Clone Set: B-clone [B] + Started: [ rhel7-auto4 ] + Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 ] + Clone Set: C-clone [C] + Started: [ rhel7-auto1 rhel7-auto2 rhel7-auto4 ] + Stopped: [ rhel7-auto3 ] + diff --git a/pengine/test10/clone-require-all-no-interleave-2.xml b/pengine/test10/clone-require-all-no-interleave-2.xml new file mode 100644 index 0000000000..214a7c72ff --- /dev/null +++ b/pengine/test10/clone-require-all-no-interleave-2.xml @@ -0,0 +1,179 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/pengine/test10/clone-require-all-no-interleave-3.dot b/pengine/test10/clone-require-all-no-interleave-3.dot new file mode 100644 index 0000000000..58f97a5d4e --- /dev/null +++ b/pengine/test10/clone-require-all-no-interleave-3.dot @@ -0,0 +1,60 @@ + digraph "g" { +"A-clone_running_0" -> "B-clone_start_0" [ style = bold] +"A-clone_running_0" [ style=bold color="green" fontcolor="orange"] 
+"A-clone_start_0" -> "A-clone_running_0" [ style = bold] +"A-clone_start_0" -> "A_start_0 rhel7-auto3" [ style = bold] +"A-clone_start_0" [ style=bold color="green" fontcolor="orange"] +"A-clone_stop_0" -> "A-clone_stopped_0" [ style = bold] +"A-clone_stop_0" -> "A_stop_0 rhel7-auto4" [ style = bold] +"A-clone_stop_0" [ style=bold color="green" fontcolor="orange"] +"A-clone_stopped_0" -> "A-clone_start_0" [ style = bold] +"A-clone_stopped_0" [ style=bold color="green" fontcolor="orange"] +"A_monitor_10000 rhel7-auto3" [ style=bold color="green" fontcolor="black"] +"A_start_0 rhel7-auto3" -> "A-clone_running_0" [ style = bold] +"A_start_0 rhel7-auto3" -> "A_monitor_10000 rhel7-auto3" [ style = bold] +"A_start_0 rhel7-auto3" [ style=bold color="green" fontcolor="black"] +"A_stop_0 rhel7-auto4" -> "A-clone_stopped_0" [ style = bold] +"A_stop_0 rhel7-auto4" -> "A_start_0 rhel7-auto3" [ style = bold] +"A_stop_0 rhel7-auto4" -> "all_stopped" [ style = bold] +"A_stop_0 rhel7-auto4" [ style=bold color="green" fontcolor="black"] +"B-clone_running_0" [ style=bold color="green" fontcolor="orange"] +"B-clone_start_0" -> "B-clone_running_0" [ style = bold] +"B-clone_start_0" -> "B_start_0 rhel7-auto3" [ style = bold] +"B-clone_start_0" [ style=bold color="green" fontcolor="orange"] +"B-clone_stop_0" -> "B-clone_stopped_0" [ style = bold] +"B-clone_stop_0" -> "B_stop_0 rhel7-auto4" [ style = bold] +"B-clone_stop_0" [ style=bold color="green" fontcolor="orange"] +"B-clone_stopped_0" -> "A-clone_stop_0" [ style = bold] +"B-clone_stopped_0" -> "B-clone_start_0" [ style = bold] +"B-clone_stopped_0" [ style=bold color="green" fontcolor="orange"] +"B_monitor_10000 rhel7-auto3" [ style=bold color="green" fontcolor="black"] +"B_start_0 rhel7-auto3" -> "B-clone_running_0" [ style = bold] +"B_start_0 rhel7-auto3" -> "B_monitor_10000 rhel7-auto3" [ style = bold] +"B_start_0 rhel7-auto3" -> "clone-one-or-more:order-B-clone-C-clone-mandatory" [ style = bold] +"B_start_0 rhel7-auto3" [ style=bold color="green" fontcolor="black"] +"B_stop_0 rhel7-auto4" -> "B-clone_stopped_0" [ style = bold] +"B_stop_0 rhel7-auto4" -> "B_start_0 rhel7-auto3" [ style = bold] +"B_stop_0 rhel7-auto4" -> "all_stopped" [ style = bold] +"B_stop_0 rhel7-auto4" [ style=bold color="green" fontcolor="black"] +"C-clone_running_0" [ style=bold color="green" fontcolor="orange"] +"C-clone_start_0" -> "C-clone_running_0" [ style = bold] +"C-clone_start_0" -> "C_start_0 rhel7-auto3" [ style = bold] +"C-clone_start_0" [ style=bold color="green" fontcolor="orange"] +"C-clone_stop_0" -> "C-clone_stopped_0" [ style = bold] +"C-clone_stop_0" -> "C_stop_0 rhel7-auto4" [ style = bold] +"C-clone_stop_0" [ style=bold color="green" fontcolor="orange"] +"C-clone_stopped_0" -> "B-clone_stop_0" [ style = bold] +"C-clone_stopped_0" -> "C-clone_start_0" [ style = bold] +"C-clone_stopped_0" [ style=bold color="green" fontcolor="orange"] +"C_monitor_10000 rhel7-auto3" [ style=bold color="green" fontcolor="black"] +"C_start_0 rhel7-auto3" -> "C-clone_running_0" [ style = bold] +"C_start_0 rhel7-auto3" -> "C_monitor_10000 rhel7-auto3" [ style = bold] +"C_start_0 rhel7-auto3" [ style=bold color="green" fontcolor="black"] +"C_stop_0 rhel7-auto4" -> "C-clone_stopped_0" [ style = bold] +"C_stop_0 rhel7-auto4" -> "C_start_0 rhel7-auto3" [ style = bold] +"C_stop_0 rhel7-auto4" -> "all_stopped" [ style = bold] +"C_stop_0 rhel7-auto4" [ style=bold color="green" fontcolor="black"] +"all_stopped" [ style=bold color="green" fontcolor="orange"] 
+"clone-one-or-more:order-B-clone-C-clone-mandatory" -> "C-clone_start_0" [ style = bold] +"clone-one-or-more:order-B-clone-C-clone-mandatory" [ style=bold color="green" fontcolor="orange"] +} diff --git a/pengine/test10/clone-require-all-no-interleave-3.exp b/pengine/test10/clone-require-all-no-interleave-3.exp new file mode 100644 index 0000000000..8aba35e52e --- /dev/null +++ b/pengine/test10/clone-require-all-no-interleave-3.exp @@ -0,0 +1,322 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/pengine/test10/clone-require-all-no-interleave-3.scores b/pengine/test10/clone-require-all-no-interleave-3.scores new file mode 100644 index 0000000000..70dd2d1754 --- /dev/null +++ b/pengine/test10/clone-require-all-no-interleave-3.scores @@ -0,0 +1,113 @@ +Allocation scores: +clone_color: A-clone allocation score on rhel7-auto1: -INFINITY +clone_color: A-clone allocation score on rhel7-auto2: -INFINITY +clone_color: A-clone allocation score on rhel7-auto3: 0 +clone_color: A-clone allocation score on rhel7-auto4: 0 +clone_color: A:0 allocation score on rhel7-auto1: -INFINITY +clone_color: A:0 allocation score on rhel7-auto2: -INFINITY +clone_color: A:0 allocation score on rhel7-auto3: 0 +clone_color: A:0 allocation score on rhel7-auto4: 1 +clone_color: A:1 allocation score on rhel7-auto1: -INFINITY +clone_color: A:1 allocation score on rhel7-auto2: -INFINITY +clone_color: A:1 allocation score on rhel7-auto3: 0 +clone_color: A:1 allocation score on rhel7-auto4: 0 +clone_color: A:2 allocation score on rhel7-auto1: -INFINITY +clone_color: A:2 allocation score on rhel7-auto2: -INFINITY +clone_color: A:2 allocation score on rhel7-auto3: 0 +clone_color: A:2 allocation score on rhel7-auto4: 0 +clone_color: A:3 allocation score on rhel7-auto1: -INFINITY +clone_color: A:3 allocation score on rhel7-auto2: -INFINITY +clone_color: A:3 allocation score on rhel7-auto3: 0 +clone_color: A:3 allocation score on rhel7-auto4: 0 +clone_color: B-clone allocation score on rhel7-auto1: 0 +clone_color: B-clone allocation score on rhel7-auto2: 0 +clone_color: B-clone allocation score on rhel7-auto3: 0 +clone_color: B-clone allocation score on rhel7-auto4: 0 +clone_color: B:0 allocation score on rhel7-auto1: 0 +clone_color: B:0 allocation score on rhel7-auto2: 0 +clone_color: B:0 allocation score on rhel7-auto3: 0 +clone_color: B:0 allocation score on rhel7-auto4: 1 +clone_color: B:1 allocation score on rhel7-auto1: 0 +clone_color: B:1 allocation score on rhel7-auto2: 0 +clone_color: B:1 allocation score on rhel7-auto3: 0 +clone_color: B:1 allocation score on rhel7-auto4: 0 +clone_color: B:2 allocation score on rhel7-auto1: 0 +clone_color: B:2 allocation score on rhel7-auto2: 0 +clone_color: B:2 allocation score on rhel7-auto3: 0 +clone_color: B:2 allocation score on rhel7-auto4: 0 +clone_color: B:3 allocation score on rhel7-auto1: 0 +clone_color: B:3 allocation score on rhel7-auto2: 0 +clone_color: B:3 allocation score on 
rhel7-auto3: 0 +clone_color: B:3 allocation score on rhel7-auto4: 0 +clone_color: C-clone allocation score on rhel7-auto1: 0 +clone_color: C-clone allocation score on rhel7-auto2: 0 +clone_color: C-clone allocation score on rhel7-auto3: 0 +clone_color: C-clone allocation score on rhel7-auto4: 0 +clone_color: C:0 allocation score on rhel7-auto1: 0 +clone_color: C:0 allocation score on rhel7-auto2: 0 +clone_color: C:0 allocation score on rhel7-auto3: 0 +clone_color: C:0 allocation score on rhel7-auto4: 1 +clone_color: C:1 allocation score on rhel7-auto1: 1 +clone_color: C:1 allocation score on rhel7-auto2: 0 +clone_color: C:1 allocation score on rhel7-auto3: 0 +clone_color: C:1 allocation score on rhel7-auto4: 0 +clone_color: C:2 allocation score on rhel7-auto1: 0 +clone_color: C:2 allocation score on rhel7-auto2: 1 +clone_color: C:2 allocation score on rhel7-auto3: 0 +clone_color: C:2 allocation score on rhel7-auto4: 0 +clone_color: C:3 allocation score on rhel7-auto1: 0 +clone_color: C:3 allocation score on rhel7-auto2: 0 +clone_color: C:3 allocation score on rhel7-auto3: 0 +clone_color: C:3 allocation score on rhel7-auto4: 0 +native_color: A:0 allocation score on rhel7-auto1: -INFINITY +native_color: A:0 allocation score on rhel7-auto2: -INFINITY +native_color: A:0 allocation score on rhel7-auto3: 0 +native_color: A:0 allocation score on rhel7-auto4: -INFINITY +native_color: A:1 allocation score on rhel7-auto1: -INFINITY +native_color: A:1 allocation score on rhel7-auto2: -INFINITY +native_color: A:1 allocation score on rhel7-auto3: -INFINITY +native_color: A:1 allocation score on rhel7-auto4: -INFINITY +native_color: A:2 allocation score on rhel7-auto1: -INFINITY +native_color: A:2 allocation score on rhel7-auto2: -INFINITY +native_color: A:2 allocation score on rhel7-auto3: -INFINITY +native_color: A:2 allocation score on rhel7-auto4: -INFINITY +native_color: A:3 allocation score on rhel7-auto1: -INFINITY +native_color: A:3 allocation score on rhel7-auto2: -INFINITY +native_color: A:3 allocation score on rhel7-auto3: -INFINITY +native_color: A:3 allocation score on rhel7-auto4: -INFINITY +native_color: B:0 allocation score on rhel7-auto1: -INFINITY +native_color: B:0 allocation score on rhel7-auto2: -INFINITY +native_color: B:0 allocation score on rhel7-auto3: 0 +native_color: B:0 allocation score on rhel7-auto4: -INFINITY +native_color: B:1 allocation score on rhel7-auto1: -INFINITY +native_color: B:1 allocation score on rhel7-auto2: -INFINITY +native_color: B:1 allocation score on rhel7-auto3: -INFINITY +native_color: B:1 allocation score on rhel7-auto4: -INFINITY +native_color: B:2 allocation score on rhel7-auto1: -INFINITY +native_color: B:2 allocation score on rhel7-auto2: -INFINITY +native_color: B:2 allocation score on rhel7-auto3: -INFINITY +native_color: B:2 allocation score on rhel7-auto4: -INFINITY +native_color: B:3 allocation score on rhel7-auto1: -INFINITY +native_color: B:3 allocation score on rhel7-auto2: -INFINITY +native_color: B:3 allocation score on rhel7-auto3: -INFINITY +native_color: B:3 allocation score on rhel7-auto4: -INFINITY +native_color: C:0 allocation score on rhel7-auto1: -INFINITY +native_color: C:0 allocation score on rhel7-auto2: -INFINITY +native_color: C:0 allocation score on rhel7-auto3: 0 +native_color: C:0 allocation score on rhel7-auto4: -INFINITY +native_color: C:1 allocation score on rhel7-auto1: 1 +native_color: C:1 allocation score on rhel7-auto2: -INFINITY +native_color: C:1 allocation score on rhel7-auto3: 0 +native_color: C:1 allocation 
score on rhel7-auto4: -INFINITY +native_color: C:2 allocation score on rhel7-auto1: 0 +native_color: C:2 allocation score on rhel7-auto2: 1 +native_color: C:2 allocation score on rhel7-auto3: 0 +native_color: C:2 allocation score on rhel7-auto4: -INFINITY +native_color: C:3 allocation score on rhel7-auto1: -INFINITY +native_color: C:3 allocation score on rhel7-auto2: -INFINITY +native_color: C:3 allocation score on rhel7-auto3: -INFINITY +native_color: C:3 allocation score on rhel7-auto4: -INFINITY +native_color: shooter allocation score on rhel7-auto1: 0 +native_color: shooter allocation score on rhel7-auto2: 0 +native_color: shooter allocation score on rhel7-auto3: 0 +native_color: shooter allocation score on rhel7-auto4: 0
diff --git a/pengine/test10/clone-require-all-no-interleave-3.summary b/pengine/test10/clone-require-all-no-interleave-3.summary new file mode 100644 index 0000000000..43796446c4 --- /dev/null +++ b/pengine/test10/clone-require-all-no-interleave-3.summary @@ -0,0 +1,61 @@ + +Current cluster status: +Node rhel7-auto4 (4): standby +Online: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 ] + + shooter (stonith:fence_xvm): Started rhel7-auto1 + Clone Set: A-clone [A] + Started: [ rhel7-auto4 ] + Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 ] + Clone Set: B-clone [B] + Started: [ rhel7-auto4 ] + Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 ] + Clone Set: C-clone [C] + Started: [ rhel7-auto1 rhel7-auto2 rhel7-auto4 ] + Stopped: [ rhel7-auto3 ] + +Transition Summary: + * Move A:0 (Started rhel7-auto4 -> rhel7-auto3) + * Move B:0 (Started rhel7-auto4 -> rhel7-auto3) + * Move C:0 (Started rhel7-auto4 -> rhel7-auto3) + +Executing cluster transition: + * Pseudo action: C-clone_stop_0 + * Resource action: C stop on rhel7-auto4 + * Pseudo action: C-clone_stopped_0 + * Pseudo action: B-clone_stop_0 + * Resource action: B stop on rhel7-auto4 + * Pseudo action: B-clone_stopped_0 + * Pseudo action: A-clone_stop_0 + * Resource action: A stop on rhel7-auto4 + * Pseudo action: A-clone_stopped_0 + * Pseudo action: A-clone_start_0 + * Pseudo action: all_stopped + * Resource action: A start on rhel7-auto3 + * Pseudo action: A-clone_running_0 + * Pseudo action: B-clone_start_0 + * Resource action: A monitor=10000 on rhel7-auto3 + * Resource action: B start on rhel7-auto3 + * Pseudo action: B-clone_running_0 + * Pseudo action: clone-one-or-more:order-B-clone-C-clone-mandatory + * Resource action: B monitor=10000 on rhel7-auto3 + * Pseudo action: C-clone_start_0 + * Resource action: C start on rhel7-auto3 + * Pseudo action: C-clone_running_0 + * Resource action: C monitor=10000 on rhel7-auto3 + +Revised cluster status: +Node rhel7-auto4 (4): standby +Online: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 ] + + shooter (stonith:fence_xvm): Started rhel7-auto1 + Clone Set: A-clone [A] + Started: [ rhel7-auto3 ] + Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto4 ] + Clone Set: B-clone [B] + Started: [ rhel7-auto3 ] + Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto4 ] + Clone Set: C-clone [C] + Started: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 ] + Stopped: [ rhel7-auto4 ] +
diff --git a/pengine/test10/clone-require-all-no-interleave-3.xml b/pengine/test10/clone-require-all-no-interleave-3.xml new file mode 100644 index 0000000000..8a9bea8c7b --- /dev/null +++ b/pengine/test10/clone-require-all-no-interleave-3.xml @@ -0,0 +1,179 @@ [179 added lines: CIB test input XML, markup not preserved in this extraction]
diff --git a/pengine/test10/whitebox-imply-stop-on-fence.dot b/pengine/test10/whitebox-imply-stop-on-fence.dot new file mode 100644 index 0000000000..66700b891b --- /dev/null +++ b/pengine/test10/whitebox-imply-stop-on-fence.dot @@ -0,0 +1,93 @@ + digraph "g" { +"R-lxc-01_kiff-01_monitor_10000 kiff-02" [ style=bold color="green" fontcolor="black"] +"R-lxc-01_kiff-01_start_0 kiff-02" -> "R-lxc-01_kiff-01_monitor_10000 kiff-02" [ style = bold] +"R-lxc-01_kiff-01_start_0 kiff-02" -> "lxc-01_kiff-01_start_0 kiff-02" [ style = bold] +"R-lxc-01_kiff-01_start_0 kiff-02" [ style=bold color="green" fontcolor="black"] +"R-lxc-01_kiff-01_stop_0 kiff-01" -> "R-lxc-01_kiff-01_start_0 kiff-02" [ style = bold] +"R-lxc-01_kiff-01_stop_0 kiff-01" -> "all_stopped" [ style = bold] +"R-lxc-01_kiff-01_stop_0 kiff-01" -> "shared0-clone_stop_0" [ style = bold] +"R-lxc-01_kiff-01_stop_0 kiff-01" [ style=bold color="green" fontcolor="orange"] +"R-lxc-02_kiff-01_monitor_10000 kiff-02" [ style=bold color="green" fontcolor="black"] +"R-lxc-02_kiff-01_start_0 kiff-02" -> "R-lxc-02_kiff-01_monitor_10000 kiff-02" [ style = bold] +"R-lxc-02_kiff-01_start_0 kiff-02" -> "lxc-02_kiff-01_start_0 kiff-02" [ style = bold] +"R-lxc-02_kiff-01_start_0 kiff-02" [ style=bold color="green" fontcolor="black"] +"R-lxc-02_kiff-01_stop_0 kiff-01" -> "R-lxc-02_kiff-01_start_0 kiff-02" [ style = bold] +"R-lxc-02_kiff-01_stop_0 kiff-01" -> "all_stopped" [ style = bold] +"R-lxc-02_kiff-01_stop_0 kiff-01" -> "shared0-clone_stop_0" [ style = bold] +"R-lxc-02_kiff-01_stop_0 kiff-01" [ style=bold color="green" fontcolor="orange"] +"all_stopped" [ style=bold color="green" fontcolor="orange"] +"clvmd-clone_stop_0" -> "clvmd-clone_stopped_0" [ style = bold] +"clvmd-clone_stop_0" -> "clvmd_stop_0 kiff-01" [ style = bold] +"clvmd-clone_stop_0" [ style=bold color="green" fontcolor="orange"] +"clvmd-clone_stopped_0" -> "dlm-clone_stop_0" [ style = bold] +"clvmd-clone_stopped_0" [ style=bold color="green" fontcolor="orange"] +"clvmd_stop_0 kiff-01" -> "all_stopped" [ style = bold] +"clvmd_stop_0 kiff-01" -> "clvmd-clone_stopped_0" [ style = bold] +"clvmd_stop_0 kiff-01" -> "dlm_stop_0 kiff-01" [ style = bold] +"clvmd_stop_0 kiff-01" [ style=bold color="green" fontcolor="orange"] +"dlm-clone_stop_0" -> "dlm-clone_stopped_0" [ style = bold] +"dlm-clone_stop_0" -> "dlm_stop_0 kiff-01" [ style = bold] +"dlm-clone_stop_0" [ style=bold color="green" fontcolor="orange"] +"dlm-clone_stopped_0" [ style=bold color="green" fontcolor="orange"] +"dlm_stop_0 kiff-01" -> "all_stopped" [ style = bold] +"dlm_stop_0 kiff-01" -> "dlm-clone_stopped_0" [ style = bold] +"dlm_stop_0 kiff-01" [ style=bold color="green" fontcolor="orange"] +"fence-kiff-02_monitor_60000 kiff-02" [ style=bold color="green" fontcolor="black"] +"fence-kiff-02_start_0 kiff-02" -> "fence-kiff-02_monitor_60000 kiff-02" [ style = bold] +"fence-kiff-02_start_0 kiff-02" [ style=bold color="green" fontcolor="black"] +"fence-kiff-02_stop_0 kiff-01" -> "all_stopped" [ style = bold] +"fence-kiff-02_stop_0 kiff-01" -> "fence-kiff-02_start_0 kiff-02" [ style = bold] +"fence-kiff-02_stop_0 kiff-01" [ style=bold color="green" fontcolor="orange"] +"lxc-01_kiff-01_monitor_30000 kiff-02" [ style=bold color="green" fontcolor="black"] +"lxc-01_kiff-01_start_0 kiff-02" -> "lxc-01_kiff-01_monitor_30000 kiff-02" [ style = bold] +"lxc-01_kiff-01_start_0
kiff-02" -> "vm-fs_monitor_20000 lxc-01_kiff-01" [ style = bold] +"lxc-01_kiff-01_start_0 kiff-02" -> "vm-fs_start_0 lxc-01_kiff-01" [ style = bold] +"lxc-01_kiff-01_start_0 kiff-02" [ style=bold color="green" fontcolor="black"] +"lxc-01_kiff-01_stop_0 kiff-01" -> "R-lxc-01_kiff-01_stop_0 kiff-01" [ style = bold] +"lxc-01_kiff-01_stop_0 kiff-01" -> "all_stopped" [ style = bold] +"lxc-01_kiff-01_stop_0 kiff-01" -> "lxc-01_kiff-01_start_0 kiff-02" [ style = bold] +"lxc-01_kiff-01_stop_0 kiff-01" [ style=bold color="green" fontcolor="orange"] +"lxc-02_kiff-01_monitor_30000 kiff-02" [ style=bold color="green" fontcolor="black"] +"lxc-02_kiff-01_start_0 kiff-02" -> "lxc-02_kiff-01_monitor_30000 kiff-02" [ style = bold] +"lxc-02_kiff-01_start_0 kiff-02" [ style=bold color="green" fontcolor="black"] +"lxc-02_kiff-01_stop_0 kiff-01" -> "R-lxc-02_kiff-01_stop_0 kiff-01" [ style = bold] +"lxc-02_kiff-01_stop_0 kiff-01" -> "all_stopped" [ style = bold] +"lxc-02_kiff-01_stop_0 kiff-01" -> "lxc-02_kiff-01_start_0 kiff-02" [ style = bold] +"lxc-02_kiff-01_stop_0 kiff-01" [ style=bold color="green" fontcolor="orange"] +"shared0-clone_stop_0" -> "shared0-clone_stopped_0" [ style = bold] +"shared0-clone_stop_0" -> "shared0_stop_0 kiff-01" [ style = bold] +"shared0-clone_stop_0" [ style=bold color="green" fontcolor="orange"] +"shared0-clone_stopped_0" -> "clvmd-clone_stop_0" [ style = bold] +"shared0-clone_stopped_0" [ style=bold color="green" fontcolor="orange"] +"shared0_stop_0 kiff-01" -> "all_stopped" [ style = bold] +"shared0_stop_0 kiff-01" -> "clvmd_stop_0 kiff-01" [ style = bold] +"shared0_stop_0 kiff-01" -> "shared0-clone_stopped_0" [ style = bold] +"shared0_stop_0 kiff-01" [ style=bold color="green" fontcolor="orange"] +"stonith 'reboot' kiff-01" -> "R-lxc-01_kiff-01_stop_0 kiff-01" [ style = bold] +"stonith 'reboot' kiff-01" -> "R-lxc-02_kiff-01_stop_0 kiff-01" [ style = bold] +"stonith 'reboot' kiff-01" -> "clvmd-clone_stop_0" [ style = bold] +"stonith 'reboot' kiff-01" -> "clvmd_stop_0 kiff-01" [ style = bold] +"stonith 'reboot' kiff-01" -> "dlm-clone_stop_0" [ style = bold] +"stonith 'reboot' kiff-01" -> "dlm_stop_0 kiff-01" [ style = bold] +"stonith 'reboot' kiff-01" -> "fence-kiff-02_stop_0 kiff-01" [ style = bold] +"stonith 'reboot' kiff-01" -> "lxc-01_kiff-01_stop_0 kiff-01" [ style = bold] +"stonith 'reboot' kiff-01" -> "lxc-02_kiff-01_stop_0 kiff-01" [ style = bold] +"stonith 'reboot' kiff-01" -> "shared0-clone_stop_0" [ style = bold] +"stonith 'reboot' kiff-01" -> "shared0_stop_0 kiff-01" [ style = bold] +"stonith 'reboot' kiff-01" -> "stonith_complete" [ style = bold] +"stonith 'reboot' kiff-01" -> "vm-fs_stop_0 lxc-01_kiff-01" [ style = bold] +"stonith 'reboot' kiff-01" [ style=bold color="green" fontcolor="black"] +"stonith_complete" -> "R-lxc-01_kiff-01_start_0 kiff-02" [ style = bold] +"stonith_complete" -> "R-lxc-02_kiff-01_start_0 kiff-02" [ style = bold] +"stonith_complete" -> "all_stopped" [ style = bold] +"stonith_complete" -> "lxc-01_kiff-01_start_0 kiff-02" [ style = bold] +"stonith_complete" -> "lxc-02_kiff-01_start_0 kiff-02" [ style = bold] +"stonith_complete" -> "vm-fs_start_0 lxc-01_kiff-01" [ style = bold] +"stonith_complete" [ style=bold color="green" fontcolor="orange"] +"vm-fs_monitor_20000 lxc-01_kiff-01" [ style=bold color="green" fontcolor="black"] +"vm-fs_start_0 lxc-01_kiff-01" -> "vm-fs_monitor_20000 lxc-01_kiff-01" [ style = bold] +"vm-fs_start_0 lxc-01_kiff-01" [ style=bold color="green" fontcolor="black"] +"vm-fs_stop_0 lxc-01_kiff-01" -> "all_stopped" [ 
style = bold] +"vm-fs_stop_0 lxc-01_kiff-01" -> "vm-fs_start_0 lxc-01_kiff-01" [ style = bold] +"vm-fs_stop_0 lxc-01_kiff-01" [ style=bold color="green" fontcolor="orange"] +} diff --git a/pengine/test10/whitebox-imply-stop-on-fence.exp b/pengine/test10/whitebox-imply-stop-on-fence.exp new file mode 100644 index 0000000000..d13c25f578 --- /dev/null +++ b/pengine/test10/whitebox-imply-stop-on-fence.exp @@ -0,0 +1,466 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/pengine/test10/whitebox-imply-stop-on-fence.scores b/pengine/test10/whitebox-imply-stop-on-fence.scores new file mode 100644 index 0000000000..e50f07774a --- /dev/null +++ b/pengine/test10/whitebox-imply-stop-on-fence.scores @@ -0,0 +1,301 @@ +Allocation scores: +clone_color: clvmd-clone allocation score on kiff-01: 0 +clone_color: clvmd-clone allocation score on kiff-02: 0 +clone_color: clvmd-clone allocation score on lxc-01_kiff-01: 0 +clone_color: clvmd-clone allocation score on lxc-01_kiff-02: 0 +clone_color: clvmd-clone allocation score on lxc-02_kiff-01: 0 +clone_color: clvmd-clone allocation score on lxc-02_kiff-02: 0 +clone_color: clvmd:0 allocation score on kiff-01: 1 +clone_color: clvmd:0 allocation score on kiff-02: 0 +clone_color: clvmd:0 allocation score on lxc-01_kiff-01: 0 +clone_color: clvmd:0 allocation score on lxc-01_kiff-02: 0 +clone_color: clvmd:0 allocation score on lxc-02_kiff-01: 0 +clone_color: clvmd:0 allocation score on lxc-02_kiff-02: 0 +clone_color: clvmd:1 allocation score on kiff-01: 0 +clone_color: clvmd:1 allocation score on kiff-02: 1 +clone_color: clvmd:1 allocation score on lxc-01_kiff-01: 0 +clone_color: clvmd:1 allocation score on lxc-01_kiff-02: 0 +clone_color: clvmd:1 allocation score on lxc-02_kiff-01: 0 +clone_color: clvmd:1 allocation score on lxc-02_kiff-02: 0 +clone_color: clvmd:2 allocation score on kiff-01: 0 +clone_color: clvmd:2 allocation score on kiff-02: 0 +clone_color: clvmd:2 allocation score on lxc-01_kiff-01: 0 +clone_color: clvmd:2 allocation score on lxc-01_kiff-02: 0 +clone_color: clvmd:2 allocation score on lxc-02_kiff-01: 0 +clone_color: clvmd:2 allocation score on lxc-02_kiff-02: 0 +clone_color: clvmd:3 allocation score on kiff-01: 0 +clone_color: clvmd:3 allocation score on kiff-02: 0 +clone_color: clvmd:3 allocation score on lxc-01_kiff-01: 0 +clone_color: clvmd:3 allocation score on lxc-01_kiff-02: 0 +clone_color: clvmd:3 allocation score on lxc-02_kiff-01: 0 +clone_color: clvmd:3 allocation score on lxc-02_kiff-02: 0 +clone_color: clvmd:4 allocation score on kiff-01: 0 +clone_color: clvmd:4 allocation score on kiff-02: 0 +clone_color: clvmd:4 allocation score on lxc-01_kiff-01: 0 
+clone_color: clvmd:4 allocation score on lxc-01_kiff-02: 0 +clone_color: clvmd:4 allocation score on lxc-02_kiff-01: 0 +clone_color: clvmd:4 allocation score on lxc-02_kiff-02: 0 +clone_color: clvmd:5 allocation score on kiff-01: 0 +clone_color: clvmd:5 allocation score on kiff-02: 0 +clone_color: clvmd:5 allocation score on lxc-01_kiff-01: 0 +clone_color: clvmd:5 allocation score on lxc-01_kiff-02: 0 +clone_color: clvmd:5 allocation score on lxc-02_kiff-01: 0 +clone_color: clvmd:5 allocation score on lxc-02_kiff-02: 0 +clone_color: dlm-clone allocation score on kiff-01: 0 +clone_color: dlm-clone allocation score on kiff-02: 0 +clone_color: dlm-clone allocation score on lxc-01_kiff-01: -INFINITY +clone_color: dlm-clone allocation score on lxc-01_kiff-02: -INFINITY +clone_color: dlm-clone allocation score on lxc-02_kiff-01: -INFINITY +clone_color: dlm-clone allocation score on lxc-02_kiff-02: -INFINITY +clone_color: dlm:0 allocation score on kiff-01: 1 +clone_color: dlm:0 allocation score on kiff-02: 0 +clone_color: dlm:0 allocation score on lxc-01_kiff-01: -INFINITY +clone_color: dlm:0 allocation score on lxc-01_kiff-02: -INFINITY +clone_color: dlm:0 allocation score on lxc-02_kiff-01: -INFINITY +clone_color: dlm:0 allocation score on lxc-02_kiff-02: -INFINITY +clone_color: dlm:1 allocation score on kiff-01: 0 +clone_color: dlm:1 allocation score on kiff-02: 1 +clone_color: dlm:1 allocation score on lxc-01_kiff-01: -INFINITY +clone_color: dlm:1 allocation score on lxc-01_kiff-02: -INFINITY +clone_color: dlm:1 allocation score on lxc-02_kiff-01: -INFINITY +clone_color: dlm:1 allocation score on lxc-02_kiff-02: -INFINITY +clone_color: dlm:2 allocation score on kiff-01: 0 +clone_color: dlm:2 allocation score on kiff-02: 0 +clone_color: dlm:2 allocation score on lxc-01_kiff-01: -INFINITY +clone_color: dlm:2 allocation score on lxc-01_kiff-02: -INFINITY +clone_color: dlm:2 allocation score on lxc-02_kiff-01: -INFINITY +clone_color: dlm:2 allocation score on lxc-02_kiff-02: -INFINITY +clone_color: dlm:3 allocation score on kiff-01: 0 +clone_color: dlm:3 allocation score on kiff-02: 0 +clone_color: dlm:3 allocation score on lxc-01_kiff-01: -INFINITY +clone_color: dlm:3 allocation score on lxc-01_kiff-02: -INFINITY +clone_color: dlm:3 allocation score on lxc-02_kiff-01: -INFINITY +clone_color: dlm:3 allocation score on lxc-02_kiff-02: -INFINITY +clone_color: dlm:4 allocation score on kiff-01: 0 +clone_color: dlm:4 allocation score on kiff-02: 0 +clone_color: dlm:4 allocation score on lxc-01_kiff-01: -INFINITY +clone_color: dlm:4 allocation score on lxc-01_kiff-02: -INFINITY +clone_color: dlm:4 allocation score on lxc-02_kiff-01: -INFINITY +clone_color: dlm:4 allocation score on lxc-02_kiff-02: -INFINITY +clone_color: dlm:5 allocation score on kiff-01: 0 +clone_color: dlm:5 allocation score on kiff-02: 0 +clone_color: dlm:5 allocation score on lxc-01_kiff-01: -INFINITY +clone_color: dlm:5 allocation score on lxc-01_kiff-02: -INFINITY +clone_color: dlm:5 allocation score on lxc-02_kiff-01: -INFINITY +clone_color: dlm:5 allocation score on lxc-02_kiff-02: -INFINITY +clone_color: shared0-clone allocation score on kiff-01: 0 +clone_color: shared0-clone allocation score on kiff-02: 0 +clone_color: shared0-clone allocation score on lxc-01_kiff-01: 0 +clone_color: shared0-clone allocation score on lxc-01_kiff-02: 0 +clone_color: shared0-clone allocation score on lxc-02_kiff-01: 0 +clone_color: shared0-clone allocation score on lxc-02_kiff-02: 0 +clone_color: shared0:0 allocation score on kiff-01: 1 
+clone_color: shared0:0 allocation score on kiff-02: 0 +clone_color: shared0:0 allocation score on lxc-01_kiff-01: 0 +clone_color: shared0:0 allocation score on lxc-01_kiff-02: 0 +clone_color: shared0:0 allocation score on lxc-02_kiff-01: 0 +clone_color: shared0:0 allocation score on lxc-02_kiff-02: 0 +clone_color: shared0:1 allocation score on kiff-01: 0 +clone_color: shared0:1 allocation score on kiff-02: 1 +clone_color: shared0:1 allocation score on lxc-01_kiff-01: 0 +clone_color: shared0:1 allocation score on lxc-01_kiff-02: 0 +clone_color: shared0:1 allocation score on lxc-02_kiff-01: 0 +clone_color: shared0:1 allocation score on lxc-02_kiff-02: 0 +clone_color: shared0:2 allocation score on kiff-01: 0 +clone_color: shared0:2 allocation score on kiff-02: 0 +clone_color: shared0:2 allocation score on lxc-01_kiff-01: 0 +clone_color: shared0:2 allocation score on lxc-01_kiff-02: 0 +clone_color: shared0:2 allocation score on lxc-02_kiff-01: 0 +clone_color: shared0:2 allocation score on lxc-02_kiff-02: 0 +clone_color: shared0:3 allocation score on kiff-01: 0 +clone_color: shared0:3 allocation score on kiff-02: 0 +clone_color: shared0:3 allocation score on lxc-01_kiff-01: 0 +clone_color: shared0:3 allocation score on lxc-01_kiff-02: 0 +clone_color: shared0:3 allocation score on lxc-02_kiff-01: 0 +clone_color: shared0:3 allocation score on lxc-02_kiff-02: 0 +clone_color: shared0:4 allocation score on kiff-01: 0 +clone_color: shared0:4 allocation score on kiff-02: 0 +clone_color: shared0:4 allocation score on lxc-01_kiff-01: 0 +clone_color: shared0:4 allocation score on lxc-01_kiff-02: 0 +clone_color: shared0:4 allocation score on lxc-02_kiff-01: 0 +clone_color: shared0:4 allocation score on lxc-02_kiff-02: 0 +clone_color: shared0:5 allocation score on kiff-01: 0 +clone_color: shared0:5 allocation score on kiff-02: 0 +clone_color: shared0:5 allocation score on lxc-01_kiff-01: 0 +clone_color: shared0:5 allocation score on lxc-01_kiff-02: 0 +clone_color: shared0:5 allocation score on lxc-02_kiff-01: 0 +clone_color: shared0:5 allocation score on lxc-02_kiff-02: 0 +native_color: R-lxc-01_kiff-01 allocation score on kiff-01: -INFINITY +native_color: R-lxc-01_kiff-01 allocation score on kiff-02: 0 +native_color: R-lxc-01_kiff-01 allocation score on lxc-01_kiff-01: -INFINITY +native_color: R-lxc-01_kiff-01 allocation score on lxc-01_kiff-02: -INFINITY +native_color: R-lxc-01_kiff-01 allocation score on lxc-02_kiff-01: -INFINITY +native_color: R-lxc-01_kiff-01 allocation score on lxc-02_kiff-02: -INFINITY +native_color: R-lxc-01_kiff-02 allocation score on kiff-01: -INFINITY +native_color: R-lxc-01_kiff-02 allocation score on kiff-02: 100 +native_color: R-lxc-01_kiff-02 allocation score on lxc-01_kiff-01: -INFINITY +native_color: R-lxc-01_kiff-02 allocation score on lxc-01_kiff-02: -INFINITY +native_color: R-lxc-01_kiff-02 allocation score on lxc-02_kiff-01: -INFINITY +native_color: R-lxc-01_kiff-02 allocation score on lxc-02_kiff-02: -INFINITY +native_color: R-lxc-02_kiff-01 allocation score on kiff-01: -INFINITY +native_color: R-lxc-02_kiff-01 allocation score on kiff-02: 0 +native_color: R-lxc-02_kiff-01 allocation score on lxc-01_kiff-01: -INFINITY +native_color: R-lxc-02_kiff-01 allocation score on lxc-01_kiff-02: -INFINITY +native_color: R-lxc-02_kiff-01 allocation score on lxc-02_kiff-01: -INFINITY +native_color: R-lxc-02_kiff-01 allocation score on lxc-02_kiff-02: -INFINITY +native_color: R-lxc-02_kiff-02 allocation score on kiff-01: -INFINITY +native_color: R-lxc-02_kiff-02 allocation score 
on kiff-02: 100 +native_color: R-lxc-02_kiff-02 allocation score on lxc-01_kiff-01: -INFINITY +native_color: R-lxc-02_kiff-02 allocation score on lxc-01_kiff-02: -INFINITY +native_color: R-lxc-02_kiff-02 allocation score on lxc-02_kiff-01: -INFINITY +native_color: R-lxc-02_kiff-02 allocation score on lxc-02_kiff-02: -INFINITY +native_color: clvmd:0 allocation score on kiff-01: -INFINITY +native_color: clvmd:0 allocation score on kiff-02: -INFINITY +native_color: clvmd:0 allocation score on lxc-01_kiff-01: -INFINITY +native_color: clvmd:0 allocation score on lxc-01_kiff-02: -INFINITY +native_color: clvmd:0 allocation score on lxc-02_kiff-01: -INFINITY +native_color: clvmd:0 allocation score on lxc-02_kiff-02: -INFINITY +native_color: clvmd:1 allocation score on kiff-01: -INFINITY +native_color: clvmd:1 allocation score on kiff-02: 1 +native_color: clvmd:1 allocation score on lxc-01_kiff-01: -INFINITY +native_color: clvmd:1 allocation score on lxc-01_kiff-02: -INFINITY +native_color: clvmd:1 allocation score on lxc-02_kiff-01: -INFINITY +native_color: clvmd:1 allocation score on lxc-02_kiff-02: -INFINITY +native_color: clvmd:2 allocation score on kiff-01: -INFINITY +native_color: clvmd:2 allocation score on kiff-02: -INFINITY +native_color: clvmd:2 allocation score on lxc-01_kiff-01: -INFINITY +native_color: clvmd:2 allocation score on lxc-01_kiff-02: -INFINITY +native_color: clvmd:2 allocation score on lxc-02_kiff-01: -INFINITY +native_color: clvmd:2 allocation score on lxc-02_kiff-02: -INFINITY +native_color: clvmd:3 allocation score on kiff-01: -INFINITY +native_color: clvmd:3 allocation score on kiff-02: -INFINITY +native_color: clvmd:3 allocation score on lxc-01_kiff-01: -INFINITY +native_color: clvmd:3 allocation score on lxc-01_kiff-02: -INFINITY +native_color: clvmd:3 allocation score on lxc-02_kiff-01: -INFINITY +native_color: clvmd:3 allocation score on lxc-02_kiff-02: -INFINITY +native_color: clvmd:4 allocation score on kiff-01: -INFINITY +native_color: clvmd:4 allocation score on kiff-02: -INFINITY +native_color: clvmd:4 allocation score on lxc-01_kiff-01: -INFINITY +native_color: clvmd:4 allocation score on lxc-01_kiff-02: -INFINITY +native_color: clvmd:4 allocation score on lxc-02_kiff-01: -INFINITY +native_color: clvmd:4 allocation score on lxc-02_kiff-02: -INFINITY +native_color: clvmd:5 allocation score on kiff-01: -INFINITY +native_color: clvmd:5 allocation score on kiff-02: -INFINITY +native_color: clvmd:5 allocation score on lxc-01_kiff-01: -INFINITY +native_color: clvmd:5 allocation score on lxc-01_kiff-02: -INFINITY +native_color: clvmd:5 allocation score on lxc-02_kiff-01: -INFINITY +native_color: clvmd:5 allocation score on lxc-02_kiff-02: -INFINITY +native_color: dlm:0 allocation score on kiff-01: -INFINITY +native_color: dlm:0 allocation score on kiff-02: -INFINITY +native_color: dlm:0 allocation score on lxc-01_kiff-01: -INFINITY +native_color: dlm:0 allocation score on lxc-01_kiff-02: -INFINITY +native_color: dlm:0 allocation score on lxc-02_kiff-01: -INFINITY +native_color: dlm:0 allocation score on lxc-02_kiff-02: -INFINITY +native_color: dlm:1 allocation score on kiff-01: -INFINITY +native_color: dlm:1 allocation score on kiff-02: 1 +native_color: dlm:1 allocation score on lxc-01_kiff-01: -INFINITY +native_color: dlm:1 allocation score on lxc-01_kiff-02: -INFINITY +native_color: dlm:1 allocation score on lxc-02_kiff-01: -INFINITY +native_color: dlm:1 allocation score on lxc-02_kiff-02: -INFINITY +native_color: dlm:2 allocation score on kiff-01: -INFINITY 
+native_color: dlm:2 allocation score on kiff-02: -INFINITY +native_color: dlm:2 allocation score on lxc-01_kiff-01: -INFINITY +native_color: dlm:2 allocation score on lxc-01_kiff-02: -INFINITY +native_color: dlm:2 allocation score on lxc-02_kiff-01: -INFINITY +native_color: dlm:2 allocation score on lxc-02_kiff-02: -INFINITY +native_color: dlm:3 allocation score on kiff-01: -INFINITY +native_color: dlm:3 allocation score on kiff-02: -INFINITY +native_color: dlm:3 allocation score on lxc-01_kiff-01: -INFINITY +native_color: dlm:3 allocation score on lxc-01_kiff-02: -INFINITY +native_color: dlm:3 allocation score on lxc-02_kiff-01: -INFINITY +native_color: dlm:3 allocation score on lxc-02_kiff-02: -INFINITY +native_color: dlm:4 allocation score on kiff-01: -INFINITY +native_color: dlm:4 allocation score on kiff-02: -INFINITY +native_color: dlm:4 allocation score on lxc-01_kiff-01: -INFINITY +native_color: dlm:4 allocation score on lxc-01_kiff-02: -INFINITY +native_color: dlm:4 allocation score on lxc-02_kiff-01: -INFINITY +native_color: dlm:4 allocation score on lxc-02_kiff-02: -INFINITY +native_color: dlm:5 allocation score on kiff-01: -INFINITY +native_color: dlm:5 allocation score on kiff-02: -INFINITY +native_color: dlm:5 allocation score on lxc-01_kiff-01: -INFINITY +native_color: dlm:5 allocation score on lxc-01_kiff-02: -INFINITY +native_color: dlm:5 allocation score on lxc-02_kiff-01: -INFINITY +native_color: dlm:5 allocation score on lxc-02_kiff-02: -INFINITY +native_color: fence-kiff-01 allocation score on kiff-01: 0 +native_color: fence-kiff-01 allocation score on kiff-02: 0 +native_color: fence-kiff-01 allocation score on lxc-01_kiff-01: -INFINITY +native_color: fence-kiff-01 allocation score on lxc-01_kiff-02: -INFINITY +native_color: fence-kiff-01 allocation score on lxc-02_kiff-01: -INFINITY +native_color: fence-kiff-01 allocation score on lxc-02_kiff-02: -INFINITY +native_color: fence-kiff-02 allocation score on kiff-01: 0 +native_color: fence-kiff-02 allocation score on kiff-02: 0 +native_color: fence-kiff-02 allocation score on lxc-01_kiff-01: -INFINITY +native_color: fence-kiff-02 allocation score on lxc-01_kiff-02: -INFINITY +native_color: fence-kiff-02 allocation score on lxc-02_kiff-01: -INFINITY +native_color: fence-kiff-02 allocation score on lxc-02_kiff-02: -INFINITY +native_color: lxc-01_kiff-01 allocation score on kiff-01: -INFINITY +native_color: lxc-01_kiff-01 allocation score on kiff-02: 0 +native_color: lxc-01_kiff-01 allocation score on lxc-01_kiff-01: -INFINITY +native_color: lxc-01_kiff-01 allocation score on lxc-01_kiff-02: -INFINITY +native_color: lxc-01_kiff-01 allocation score on lxc-02_kiff-01: -INFINITY +native_color: lxc-01_kiff-01 allocation score on lxc-02_kiff-02: -INFINITY +native_color: lxc-01_kiff-02 allocation score on kiff-01: -INFINITY +native_color: lxc-01_kiff-02 allocation score on kiff-02: 0 +native_color: lxc-01_kiff-02 allocation score on lxc-01_kiff-01: -INFINITY +native_color: lxc-01_kiff-02 allocation score on lxc-01_kiff-02: -INFINITY +native_color: lxc-01_kiff-02 allocation score on lxc-02_kiff-01: -INFINITY +native_color: lxc-01_kiff-02 allocation score on lxc-02_kiff-02: -INFINITY +native_color: lxc-02_kiff-01 allocation score on kiff-01: -INFINITY +native_color: lxc-02_kiff-01 allocation score on kiff-02: 0 +native_color: lxc-02_kiff-01 allocation score on lxc-01_kiff-01: -INFINITY +native_color: lxc-02_kiff-01 allocation score on lxc-01_kiff-02: -INFINITY +native_color: lxc-02_kiff-01 allocation score on lxc-02_kiff-01: 
-INFINITY +native_color: lxc-02_kiff-01 allocation score on lxc-02_kiff-02: -INFINITY +native_color: lxc-02_kiff-02 allocation score on kiff-01: -INFINITY +native_color: lxc-02_kiff-02 allocation score on kiff-02: 0 +native_color: lxc-02_kiff-02 allocation score on lxc-01_kiff-01: -INFINITY +native_color: lxc-02_kiff-02 allocation score on lxc-01_kiff-02: -INFINITY +native_color: lxc-02_kiff-02 allocation score on lxc-02_kiff-01: -INFINITY +native_color: lxc-02_kiff-02 allocation score on lxc-02_kiff-02: -INFINITY +native_color: shared0:0 allocation score on kiff-01: -INFINITY +native_color: shared0:0 allocation score on kiff-02: -INFINITY +native_color: shared0:0 allocation score on lxc-01_kiff-01: -INFINITY +native_color: shared0:0 allocation score on lxc-01_kiff-02: -INFINITY +native_color: shared0:0 allocation score on lxc-02_kiff-01: -INFINITY +native_color: shared0:0 allocation score on lxc-02_kiff-02: -INFINITY +native_color: shared0:1 allocation score on kiff-01: -INFINITY +native_color: shared0:1 allocation score on kiff-02: 1 +native_color: shared0:1 allocation score on lxc-01_kiff-01: -INFINITY +native_color: shared0:1 allocation score on lxc-01_kiff-02: -INFINITY +native_color: shared0:1 allocation score on lxc-02_kiff-01: -INFINITY +native_color: shared0:1 allocation score on lxc-02_kiff-02: -INFINITY +native_color: shared0:2 allocation score on kiff-01: -INFINITY +native_color: shared0:2 allocation score on kiff-02: -INFINITY +native_color: shared0:2 allocation score on lxc-01_kiff-01: -INFINITY +native_color: shared0:2 allocation score on lxc-01_kiff-02: -INFINITY +native_color: shared0:2 allocation score on lxc-02_kiff-01: -INFINITY +native_color: shared0:2 allocation score on lxc-02_kiff-02: -INFINITY +native_color: shared0:3 allocation score on kiff-01: -INFINITY +native_color: shared0:3 allocation score on kiff-02: -INFINITY +native_color: shared0:3 allocation score on lxc-01_kiff-01: -INFINITY +native_color: shared0:3 allocation score on lxc-01_kiff-02: -INFINITY +native_color: shared0:3 allocation score on lxc-02_kiff-01: -INFINITY +native_color: shared0:3 allocation score on lxc-02_kiff-02: -INFINITY +native_color: shared0:4 allocation score on kiff-01: -INFINITY +native_color: shared0:4 allocation score on kiff-02: -INFINITY +native_color: shared0:4 allocation score on lxc-01_kiff-01: -INFINITY +native_color: shared0:4 allocation score on lxc-01_kiff-02: -INFINITY +native_color: shared0:4 allocation score on lxc-02_kiff-01: -INFINITY +native_color: shared0:4 allocation score on lxc-02_kiff-02: -INFINITY +native_color: shared0:5 allocation score on kiff-01: -INFINITY +native_color: shared0:5 allocation score on kiff-02: -INFINITY +native_color: shared0:5 allocation score on lxc-01_kiff-01: -INFINITY +native_color: shared0:5 allocation score on lxc-01_kiff-02: -INFINITY +native_color: shared0:5 allocation score on lxc-02_kiff-01: -INFINITY +native_color: shared0:5 allocation score on lxc-02_kiff-02: -INFINITY +native_color: vm-fs allocation score on kiff-01: 0 +native_color: vm-fs allocation score on kiff-02: 0 +native_color: vm-fs allocation score on lxc-01_kiff-01: 0 +native_color: vm-fs allocation score on lxc-01_kiff-02: 0 +native_color: vm-fs allocation score on lxc-02_kiff-01: 0 +native_color: vm-fs allocation score on lxc-02_kiff-02: 0 diff --git a/pengine/test10/whitebox-imply-stop-on-fence.summary b/pengine/test10/whitebox-imply-stop-on-fence.summary new file mode 100644 index 0000000000..79e77de83a --- /dev/null +++ 
b/pengine/test10/whitebox-imply-stop-on-fence.summary @@ -0,0 +1,88 @@ + +Current cluster status: +Node kiff-01 (1): UNCLEAN (offline) +Online: [ kiff-02 ] +Containers: [ lxc-01_kiff-01:R-lxc-01_kiff-01 lxc-01_kiff-02:R-lxc-01_kiff-02 lxc-02_kiff-01:R-lxc-02_kiff-01 lxc-02_kiff-02:R-lxc-02_kiff-02 ] + + fence-kiff-01 (stonith:fence_ipmilan): Started kiff-02 + fence-kiff-02 (stonith:fence_ipmilan): Started kiff-01 + Clone Set: dlm-clone [dlm] + Started: [ kiff-01 kiff-02 ] + Stopped: [ lxc-01_kiff-01 lxc-01_kiff-02 lxc-02_kiff-01 lxc-02_kiff-02 ] + Clone Set: clvmd-clone [clvmd] + Started: [ kiff-01 kiff-02 ] + Stopped: [ lxc-01_kiff-01 lxc-01_kiff-02 lxc-02_kiff-01 lxc-02_kiff-02 ] + Clone Set: shared0-clone [shared0] + Started: [ kiff-01 kiff-02 ] + Stopped: [ lxc-01_kiff-01 lxc-01_kiff-02 lxc-02_kiff-01 lxc-02_kiff-02 ] + R-lxc-01_kiff-01 (ocf::heartbeat:VirtualDomain): Started kiff-01 + R-lxc-02_kiff-01 (ocf::heartbeat:VirtualDomain): Started kiff-01 + R-lxc-01_kiff-02 (ocf::heartbeat:VirtualDomain): Started kiff-02 + R-lxc-02_kiff-02 (ocf::heartbeat:VirtualDomain): Started kiff-02 + vm-fs (ocf::heartbeat:Filesystem): Started lxc-01_kiff-01 + +Transition Summary: + * Move fence-kiff-02 (Started kiff-01 -> kiff-02) + * Stop dlm:0 (kiff-01) + * Stop clvmd:0 (kiff-01) + * Stop shared0:0 (kiff-01) + * Move R-lxc-01_kiff-01 (Started kiff-01 -> kiff-02) + * Move R-lxc-02_kiff-01 (Started kiff-01 -> kiff-02) + * Restart vm-fs (Started lxc-01_kiff-01) + * Move lxc-01_kiff-01 (Started kiff-01 -> kiff-02) + * Move lxc-02_kiff-01 (Started kiff-01 -> kiff-02) + +Executing cluster transition: + * Fencing kiff-01 (reboot) + * Pseudo action: stonith_complete + * Pseudo action: fence-kiff-02_stop_0 + * Pseudo action: vm-fs_stop_0 + * Pseudo action: lxc-01_kiff-01_stop_0 + * Pseudo action: lxc-02_kiff-01_stop_0 + * Resource action: fence-kiff-02 start on kiff-02 + * Pseudo action: R-lxc-01_kiff-01_stop_0 + * Pseudo action: R-lxc-02_kiff-01_stop_0 + * Resource action: fence-kiff-02 monitor=60000 on kiff-02 + * Pseudo action: shared0-clone_stop_0 + * Resource action: R-lxc-01_kiff-01 start on kiff-02 + * Resource action: R-lxc-02_kiff-01 start on kiff-02 + * Resource action: lxc-01_kiff-01 start on kiff-02 + * Resource action: lxc-02_kiff-01 start on kiff-02 + * Pseudo action: shared0_stop_0 + * Pseudo action: shared0-clone_stopped_0 + * Resource action: R-lxc-01_kiff-01 monitor=10000 on kiff-02 + * Resource action: R-lxc-02_kiff-01 monitor=10000 on kiff-02 + * Resource action: vm-fs start on lxc-01_kiff-01 + * Resource action: vm-fs monitor=20000 on lxc-01_kiff-01 + * Resource action: lxc-01_kiff-01 monitor=30000 on kiff-02 + * Resource action: lxc-02_kiff-01 monitor=30000 on kiff-02 + * Pseudo action: clvmd-clone_stop_0 + * Pseudo action: clvmd_stop_0 + * Pseudo action: clvmd-clone_stopped_0 + * Pseudo action: dlm-clone_stop_0 + * Pseudo action: dlm_stop_0 + * Pseudo action: dlm-clone_stopped_0 + * Pseudo action: all_stopped + +Revised cluster status: +Online: [ kiff-02 ] +OFFLINE: [ kiff-01 ] +Containers: [ lxc-01_kiff-01:R-lxc-01_kiff-01 lxc-01_kiff-02:R-lxc-01_kiff-02 lxc-02_kiff-01:R-lxc-02_kiff-01 lxc-02_kiff-02:R-lxc-02_kiff-02 ] + + fence-kiff-01 (stonith:fence_ipmilan): Started kiff-02 + fence-kiff-02 (stonith:fence_ipmilan): Started kiff-02 + Clone Set: dlm-clone [dlm] + Started: [ kiff-02 ] + Stopped: [ kiff-01 lxc-01_kiff-01 lxc-01_kiff-02 lxc-02_kiff-01 lxc-02_kiff-02 ] + Clone Set: clvmd-clone [clvmd] + Started: [ kiff-02 ] + Stopped: [ kiff-01 lxc-01_kiff-01 lxc-01_kiff-02 lxc-02_kiff-01 
lxc-02_kiff-02 ] + Clone Set: shared0-clone [shared0] + Started: [ kiff-02 ] + Stopped: [ kiff-01 lxc-01_kiff-01 lxc-01_kiff-02 lxc-02_kiff-01 lxc-02_kiff-02 ] + R-lxc-01_kiff-01 (ocf::heartbeat:VirtualDomain): Started kiff-02 + R-lxc-02_kiff-01 (ocf::heartbeat:VirtualDomain): Started kiff-02 + R-lxc-01_kiff-02 (ocf::heartbeat:VirtualDomain): Started kiff-02 + R-lxc-02_kiff-02 (ocf::heartbeat:VirtualDomain): Started kiff-02 + vm-fs (ocf::heartbeat:Filesystem): Started lxc-01_kiff-01 +
diff --git a/pengine/test10/whitebox-imply-stop-on-fence.xml b/pengine/test10/whitebox-imply-stop-on-fence.xml new file mode 100644 index 0000000000..609b281377 --- /dev/null +++ b/pengine/test10/whitebox-imply-stop-on-fence.xml @@ -0,0 +1,347 @@ [347 added lines: CIB test input XML, markup not preserved in this extraction]
diff --git a/xml/constraints-next.rng b/xml/constraints-2.3.rng similarity index 91% copy from xml/constraints-next.rng copy to xml/constraints-2.3.rng index 0defe8fc54..d9a4701fb9 100644 --- a/xml/constraints-next.rng +++ b/xml/constraints-2.3.rng @@ -1,264 +1,252 @@ [RELAX NG markup not preserved in this extraction; surviving text content:] group listed stop demote fence freeze always never exclusive start promote demote stop Stopped Started Master Slave Optional Mandatory Serialize
diff --git a/xml/constraints-next.rng b/xml/constraints-next.rng index 0defe8fc54..9d110031fa 100644 --- a/xml/constraints-next.rng +++ b/xml/constraints-next.rng @@ -1,264 +1,267 @@ [RELAX NG markup not preserved in this extraction; surviving text content:] group listed stop demote fence freeze always never exclusive start promote demote stop Stopped Started Master Slave Optional Mandatory Serialize
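For orientation only (not part of the patch): the clone-require-all-* fixtures above exercise ordering-constraint resource sets whose require-all attribute is set to false, so the dependent clone only has to wait for at least one active instance of the prerequisite clone. A minimal sketch of such a constraint, assuming the require-all resource-set attribute that these regression tests target; the constraint and set IDs below are made up, and only the B-clone and C-clone names come from the fixtures:

    <rsc_order id="order-B-then-C" kind="Mandatory">
      <!-- Illustrative only: require-all="false" lets C-clone start as soon as
           any one instance of B-clone is active, rather than all of them. -->
      <resource_set id="order-B-then-C-set-1" sequential="false" require-all="false">
        <resource_ref id="B-clone"/>
      </resource_set>
      <resource_set id="order-B-then-C-set-2">
        <resource_ref id="C-clone"/>
      </resource_set>
    </rsc_order>

With require-all left at its default of true, C-clone would have to wait for every active instance of B-clone before starting.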