diff --git a/.gitignore b/.gitignore index 6fd91974cb..7acbbac977 100644 --- a/.gitignore +++ b/.gitignore @@ -1,322 +1,323 @@ # Common conventions for files that should be ignored *~ *.bz2 *.diff *.orig *.patch *.rej *.sed *.swp *.tar.gz *.tgz \#* .\#* logs # libtool artifacts *.la *.lo .libs libltdl libtool libtool.m4 ltdl.m4 /m4/argz.m4 /m4/ltargz.m4 /m4/ltoptions.m4 /m4/ltsugar.m4 /m4/ltversion.m4 /m4/lt~obsolete.m4 # autotools artifacts .deps .dirstamp Makefile Makefile.in aclocal.m4 autoconf autoheader autom4te.cache/ automake /confdefs.h config.log config.status configure /conftest* # gettext artifacts /ABOUT-NLS /m4/codeset.m4 /m4/fcntl-o.m4 /m4/gettext.m4 /m4/glibc2.m4 /m4/glibc21.m4 /m4/iconv.m4 /m4/intdiv0.m4 /m4/intl.m4 /m4/intldir.m4 /m4/intlmacosx.m4 /m4/intmax.m4 /m4/inttypes-pri.m4 /m4/inttypes_h.m4 /m4/lcmessage.m4 /m4/lib-ld.m4 /m4/lib-link.m4 /m4/lib-prefix.m4 /m4/lock.m4 /m4/longlong.m4 /m4/nls.m4 /m4/po.m4 /m4/printf-posix.m4 /m4/progtest.m4 /m4/size_max.m4 /m4/stdint_h.m4 /m4/threadlib.m4 /m4/uintmax_t.m4 /m4/visibility.m4 /m4/wchar_t.m4 /m4/wint_t.m4 /m4/xsize.m4 /po/*.gmo /po/*.header /po/*.pot /po/*.sin /po/Makefile.in.in /po/Makevars.template /po/POTFILES /po/Rules-quot /po/stamp-po # configure targets /cts/benchmark/clubench /cts/cts-cli /cts/cts-coverage /cts/cts-exec /cts/cts-fencing /cts/cts-regression /cts/cts-scheduler /cts/lab/CTS.py /cts/lab/CTSlab.py /cts/lab/CTSvars.py /cts/lab/OCFIPraTest.py /cts/lab/cluster_test /cts/lab/cts /cts/lab/cts-log-watcher /cts/lxc_autogen.sh /cts/support/LSBDummy /cts/support/cts-support /cts/support/fence_dummy /cts/support/pacemaker-cts-dummyd /cts/support/pacemaker-cts-dummyd@.service /daemons/execd/pacemaker_remote /daemons/execd/pacemaker_remote.service /daemons/fenced/fence_legacy /daemons/fenced/fence_watchdog /daemons/pacemakerd/pacemaker.combined.upstart /daemons/pacemakerd/pacemaker.service /daemons/pacemakerd/pacemaker.upstart /doc/Doxyfile /etc/init.d/pacemaker /etc/logrotate.d/pacemaker /extra/resources/ClusterMon /extra/resources/HealthSMART /extra/resources/SysInfo /extra/resources/controld /extra/resources/ifspeed /extra/resources/o2cb /include/config.h /include/config.h.in /include/crm_config.h /maint/bumplibs /tools/cibsecret /tools/crm_error /tools/crm_failcount /tools/crm_master /tools/crm_mon.service /tools/crm_mon.upstart /tools/crm_report /tools/crm_rule /tools/crm_standby /tools/pcmk_simtimes /tools/report.collector /tools/report.common # Compiled targets and intermediary files *.o *.pc *.pyc /daemons/attrd/pacemaker-attrd /daemons/based/pacemaker-based /daemons/controld/pacemaker-controld /daemons/execd/cts-exec-helper /daemons/execd/pacemaker-execd /daemons/execd/pacemaker-remoted /daemons/fenced/cts-fence-helper /daemons/fenced/pacemaker-fenced /daemons/pacemakerd/pacemakerd /daemons/schedulerd/pacemaker-schedulerd /devel/scratch /lib/gnu/stdalign.h /tools/attrd_updater /tools/cibadmin /tools/crmadmin /tools/crm_attribute /tools/crm_diff /tools/crm_mon /tools/crm_node /tools/crm_resource /tools/crm_shadow /tools/crm_simulate /tools/crm_ticket /tools/crm_verify /tools/iso8601 /tools/stonith_admin # Generated XML schema files /xml/crm_mon.rng /xml/pacemaker*.rng /xml/versions.rng /xml/api/api-result*.rng # Working directories for make dist and make export /pacemaker-[a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9] # Documentation build targets and intermediary files *.7 *.7.xml *.7.html *.8 *.8.xml *.8.html GPATH GRTAGS GTAGS TAGS /daemons/fenced/pacemaker-fenced.xml 
/daemons/schedulerd/pacemaker-schedulerd.xml /doc/.ABI-build /doc/HTML /doc/abi_dumps /doc/abi-check /doc/api/ /doc/compat_reports /doc/crm_fencing.html /doc/sphinx/*/_build /doc/sphinx/*/conf.py /doc/sphinx/build-2.1.txt /doc/sphinx/shared/images/*.png # Test artifacts (from unit tests, regression tests, static analysis, etc.) *.coverity *.gcda *.gcno coverity-* pacemaker_*.info /coverage +/cppcheck.out /cts/scheduler/*.ref /cts/scheduler/*.up /cts/scheduler/*.up.err /cts/scheduler/bug-rh-1097457.log /cts/scheduler/bug-rh-1097457.trs /cts/scheduler/shadow.* /cts/test-suite.log /lib/*/tests/*/*.log /lib/*/tests/*/*_test /lib/*/tests/*/*.trs /xml/test-*/*.up /xml/test-*/*.up.err /xml/assets/*.rng /xml/assets/diffview.js /xml/assets/xmlcatalog /test/_test_file.c # Packaging artifacts *.rpm /pacemaker.spec /rpm/[A-LN-Z]* /rpm/build.counter /rpm/mock # Project maintainer artifacts /maint/gnulib /maint/mocked/based /maint/testcc_helper.cc /maint/testcc_*_h # Formerly built files (helps when jumping back and forth in checkout) /.ABI-build /Doxyfile /HTML /abi_dumps /abi-check /build.counter /compat_reports /compile /cts/.regression.failed.diff /attrd /cib /config.guess /config.sub /coverage.sh /crmd /cts/CTS.py /cts/CTSlab.py /cts/CTSvars.py /cts/HBDummy /cts/LSBDummy /cts/OCFIPraTest.py /cts/cluster_test /cts/cts /cts/cts-log-watcher /cts/cts-support /cts/fence_dummy /cts/pacemaker-cts-dummyd /cts/pacemaker-cts-dummyd@.service /daemons/based/cibmon /daemons/pacemakerd/pacemaker /depcomp /doc/*.build /doc/*/en-US/Ap-*.xml /doc/*/en-US/Ch-*.xml /doc/*/publican.cfg /doc/*/publish /doc/*/tmp/** /doc/Clusters_from_Scratch.txt /doc/Pacemaker_Explained.txt /doc/acls.html /doc/publican-catalog* /doc/shared/en-US/*.xml /doc/shared/en-US/images/pcmk-*.png /doc/shared/en-US/images/Policy-Engine-*.png /extra/logrotate/pacemaker /fencing /include/stamp-* /install-sh /lib/common/md5.c /lib/common/tests/flags/pcmk__clear_flags_as /lib/common/tests/flags/pcmk__set_flags_as /lib/common/tests/flags/pcmk_all_flags_set /lib/common/tests/flags/pcmk_any_flags_set /lib/common/tests/operations/parse_op_key /lib/common/tests/strings/pcmk__btoa /lib/common/tests/strings/pcmk__parse_ll_range /lib/common/tests/strings/pcmk__scan_double /lib/common/tests/strings/pcmk__str_any_of /lib/common/tests/strings/pcmk__strcmp /lib/common/tests/strings/pcmk__char_in_any_str /lib/common/tests/utils/pcmk_str_is_infinity /lib/common/tests/utils/pcmk_str_is_minus_infinity /lib/gnu/libgnu.a /lib/pengine/tests/rules/pe_cron_range_satisfied /lrmd /ltmain.sh /mcp /missing /mock /pacemaker-*.spec /pengine /py-compile /scratch /test-driver /xml/crm.dtd ylwrap diff --git a/GNUmakefile b/GNUmakefile index c4a5e0da09..ffe6457df8 100644 --- a/GNUmakefile +++ b/GNUmakefile @@ -1,194 +1,106 @@ # # Copyright 2008-2022 the Pacemaker project contributors # # The version control history for this file may have further details. # # This source code is licensed under the GNU General Public License version 2 # or later (GPLv2+) WITHOUT ANY WARRANTY. # default: build .PHONY: default # The toplevel "clean" targets are generated from Makefile.am, not this file. # We can't use autotools' CLEANFILES, clean-local, etc. here. Instead, we # define this target, which Makefile.am can use as a dependency of clean-local. EXTRA_CLEAN_TARGETS = ancillary-clean -include Makefile # The main purpose of this GNUmakefile is that its targets can be invoked # without having to call autogen.sh and configure first. That means automake # variables may or may not be defined. 
Here, we use the current working # directory if a relevant variable hasn't been defined. -# -# The idea is to keep generated artifacts in the build tree, in case a VPATH -# build is in use, but in practice it would be difficult to make the targets -# here usable from a different location than the source tree. abs_srcdir ?= $(shell pwd) -abs_builddir ?= $(shell pwd) # Define release-related variables include $(abs_srcdir)/mk/release.mk GLIB_CFLAGS ?= $(pkg-config --cflags glib-2.0) PACKAGE ?= pacemaker .PHONY: init init: test -e configure && test -e libltdl || ./autogen.sh test -e Makefile || ./configure .PHONY: build build: init $(MAKE) $(AM_MAKEFLAGS) core ## RPM-related targets (deprecated; use targets in rpm subdirectory instead) # Pass option depending on whether automake has been run or not USE_FILE = $(shell test -e rpm/Makefile || echo "-f Makefile.am") .PHONY: $(PACKAGE).spec chroot dirty export mock rc release rpm rpmlint srpm $(PACKAGE).spec chroot dirty export mock rc release rpm rpmlint srpm: $(MAKE) $(AM_MAKEFLAGS) -C rpm $(USE_FILE) "$@" .PHONY: mock-% rpm-% spec-% srpm-% mock-% rpm-% spec-% srpm-%: $(MAKE) $(AM_MAKEFLAGS) -C rpm $(USE_FILE) "$@" -## indent-related targets (deprecated; use targets in devel subdir instead) - -.PHONY: indent -indent: - @echo 'Deprecated: Use "make -C devel $@" instead' - $(MAKE) $(AM_MAKEFLAGS) -C devel "$@" - -## Static analysis via coverity - -# Aggressiveness (low, medium, or high) -COVLEVEL ?= low - -# Generated outputs -COVERITY_DIR = $(abs_builddir)/coverity-$(TAG) -COVTAR = $(abs_builddir)/$(PACKAGE)-coverity-$(TAG).tgz -COVEMACS = $(abs_builddir)/$(TAG).coverity -COVHTML = $(COVERITY_DIR)/output/errors +## Targets that moved to devel subdirectory -# Coverity outputs are phony so they get rebuilt every invocation +COVLEVEL ?= low -.PHONY: $(COVERITY_DIR) -$(COVERITY_DIR): init core-clean coverity-clean - $(AM_V_GEN)cov-build --dir "$@" $(MAKE) $(AM_MAKEFLAGS) core - -# Public coverity instance - -.PHONY: $(COVTAR) -$(COVTAR): $(COVERITY_DIR) - $(AM_V_GEN)tar czf "$@" --transform="s@.*$(TAG)@cov-int@" "$<" - -.PHONY: coverity -coverity: $(COVTAR) - @echo "Now go to https://scan.coverity.com/users/sign_in and upload:" - @echo " $(COVTAR)" - @echo "then make core-clean coverity-clean" +.PHONY: clang cppcheck indent +.PHONY: coverity coverity-analyze coverity-clean coverity-corp +clang coverity coverity-analyze coverity-clean coverity-corp cppcheck indent: + @echo 'Deprecated: Use "make -C devel $@" instead' + $(MAKE) $(AM_MAKEFLAGS) \ + CLANG_checkers=$(CLANG_checkers) \ + COVLEVEL=$(COVLEVEL) \ + CPPCHECK_ARGS=$(CPPCHECK_ARGS) \ + -C devel "$@" -# Licensed coverity instance -# -# The prerequisites are a little hacky; rather than actually required, some -# of them are designed so that things execute in the proper order (which is -# not the same as GNU make's order-only prerequisites). - -.PHONY: coverity-analyze -coverity-analyze: $(COVERITY_DIR) - @echo "" - @echo "Analyzing (waiting for coverity license if necessary) ..." - cov-analyze --dir "$<" --wait-for-license --security \ - --aggressiveness-level "$(COVLEVEL)" - -.PHONY: $(COVEMACS) -$(COVEMACS): coverity-analyze - $(AM_V_GEN)cov-format-errors --dir "$(COVERITY_DIR)" --emacs-style > "$@" - -.PHONY: $(COVHTML) -$(COVHTML): $(COVEMACS) - $(AM_V_GEN)cov-format-errors --dir "$(COVERITY_DIR)" --html-output "$@" - -.PHONY: coverity-corp -coverity-corp: $(COVHTML) - $(MAKE) $(AM_MAKEFLAGS) core-clean - @echo "Done. 
See:" - @echo " file://$(COVHTML)/index.html" - @echo "When no longer needed, make coverity-clean" - -# Remove all outputs regardless of tag -.PHONY: coverity-clean -coverity-clean: - -rm -rf "$(abs_builddir)"/coverity-* \ - "$(abs_builddir)"/$(PACKAGE)-coverity-*.tgz \ - "$(abs_builddir)"/*.coverity - - -rel-tags: tags - find . -name TAGS -exec sed -i 's:\(.*\)/\(.*\)/TAGS:\2/TAGS:g' \{\} \; - -CLANG_checkers = - -# Use CPPCHECK_ARGS to pass extra cppcheck options, e.g.: -# --enable={warning,style,performance,portability,information,all} -# --inconclusive --std=posix -CPPCHECK_ARGS ?= -BASE_CPPCHECK_ARGS = -I include --max-configs=30 --library=posix --library=gnu \ - --library=gtk $(GLIB_CFLAGS) -D__GNUC__ --inline-suppr -q -cppcheck-all: - cppcheck $(CPPCHECK_ARGS) $(BASE_CPPCHECK_ARGS) -DBUILD_PUBLIC_LIBPACEMAKER \ - -DDEFAULT_CONCURRENT_FENCING_TRUE replace lib daemons tools - -cppcheck: - cppcheck $(CPPCHECK_ARGS) $(BASE_CPPCHECK_ARGS) replace lib daemons tools - -clang: - OUT=$$(scan-build $(CLANG_checkers:%=-enable-checker %) \ - $(MAKE) $(AM_MAKEFLAGS) CFLAGS="-std=c99 $(CFLAGS)" \ - clean all 2>&1); \ - REPORT=$$(echo "$$OUT" \ - | sed -n -e "s/.*'scan-view \(.*\)'.*/\1/p"); \ - [ -z "$$REPORT" ] && echo "$$OUT" || scan-view "$$REPORT" ## Coverage/profiling .PHONY: coverage coverage: core -find . -name "*.gcda" -exec rm -f \{\} \; -rm -rf coverage lcov --no-external --exclude='*_test.c' -c -i -d . -o pacemaker_base.info $(MAKE) $(AM_MAKEFLAGS) check lcov --no-external --exclude='*_test.c' -c -d . -o pacemaker_test.info lcov -a pacemaker_base.info -a pacemaker_test.info -o pacemaker_total.info genhtml pacemaker_total.info -o coverage -s .PHONY: coverage-cts coverage-cts: core -find . -name "*.gcda" -exec rm -f \{\} \; -rm -rf coverage lcov --no-external -c -i -d tools -o pacemaker_base.info cts/cts-cli lcov --no-external -c -d tools -o pacemaker_test.info lcov -a pacemaker_base.info -a pacemaker_test.info -o pacemaker_total.info genhtml pacemaker_total.info -o coverage -s # This target removes all coverage-related files. It is only to be run when # done with coverage analysis and you are ready to go back to normal development, # starting with re-running ./configure. It is not to be run in between # "make coverage" runs. # # In particular, the *.gcno files are generated when the source is built. # Removing those files will break "make coverage" until the whole source tree # has been built and the *.gcno files generated again. .PHONY: coverage-clean coverage-clean: -rm -f pacemaker_*.info -rm -rf coverage -find . \( -name "*.gcno" -o -name "*.gcda" \) -exec rm -f \{\} \; -ancillary-clean: mock-clean coverity-clean coverage-clean +ancillary-clean: coverage-clean diff --git a/configure.ac b/configure.ac index 1973047f4b..0eb4825b66 100644 --- a/configure.ac +++ b/configure.ac @@ -1,2112 +1,2116 @@ dnl dnl autoconf for Pacemaker dnl dnl Copyright 2009-2022 the Pacemaker project contributors dnl dnl The version control history for this file may have further details. dnl dnl This source code is licensed under the GNU General Public License version 2 dnl or later (GPLv2+) WITHOUT ANY WARRANTY. dnl =============================================== dnl Bootstrap dnl =============================================== AC_PREREQ(2.64) dnl AC_CONFIG_MACRO_DIR is deprecated as of autoconf 2.70 (2020-12-08). dnl Once we can require that version, we can simplify this, and no longer dnl need ACLOCAL_AMFLAGS in Makefile.am. 
m4_ifdef([AC_CONFIG_MACRO_DIRS], [AC_CONFIG_MACRO_DIRS([m4])], [AC_CONFIG_MACRO_DIR([m4])]) AC_DEFUN([AC_DATAROOTDIR_CHECKED]) dnl Suggested structure: dnl information on the package dnl checks for programs dnl checks for libraries dnl checks for header files dnl checks for types dnl checks for structures dnl checks for compiler characteristics dnl checks for library functions dnl checks for system services m4_include([m4/version.m4]) AC_INIT([pacemaker], VERSION_NUMBER, [users@clusterlabs.org], [pacemaker], PCMK_URL) PCMK_FEATURES="" LT_CONFIG_LTDL_DIR([libltdl]) AC_CONFIG_AUX_DIR([libltdl/config]) AC_CANONICAL_HOST dnl Where #defines that autoconf makes (e.g. HAVE_whatever) go dnl dnl Internal header: include/config.h dnl - Contains ALL defines dnl - include/config.h.in is generated automatically by autoheader dnl - NOT to be included in any header files except crm_internal.h dnl (which is also not to be included in any other header files) dnl dnl External header: include/crm_config.h dnl - Contains a subset of defines checked here dnl - Manually edit include/crm_config.h.in to have configure include dnl new defines dnl - Should not include HAVE_* defines dnl - Safe to include anywhere AC_CONFIG_HEADERS([include/config.h include/crm_config.h]) dnl 1.13: minimum automake version required dnl foreign: don't require GNU-standard top-level files dnl tar-ustar: use (older) POSIX variant of generated tar rather than v7 dnl subdir-objects: keep .o's with their .c's (no-op in 2.0+) AM_INIT_AUTOMAKE([1.13 foreign tar-ustar subdir-objects]) dnl Require minimum version of pkg-config PKG_PROG_PKG_CONFIG(0.27) AS_IF([test x"${PKG_CONFIG}" != x""], [], [AC_MSG_FAILURE([Could not find required build tool pkg-config (0.27 or later)])]) PKG_INSTALLDIR PKG_NOARCH_INSTALLDIR dnl Example 2.4. Silent Custom Rule to Generate a File dnl %-bar.pc: %.pc dnl $(AM_V_GEN)$(LN_S) $(notdir $^) $@ CC_IN_CONFIGURE=yes export CC_IN_CONFIGURE LDD=ldd dnl ======================================================================== dnl Compiler characteristics dnl ======================================================================== dnl A particular compiler can be forced by setting the CC environment variable AC_PROG_CC dnl Use at least C99 if possible (automatic for autoconf >= 2.70) m4_version_prereq([2.70], [:], [AC_PROG_CC_STDC]) dnl C++ is not needed for build, just maintainer utilities AC_PROG_CXX dnl We use md5.c from gnulib, which has its own m4 macros. Per its docs: dnl "The macro gl_EARLY must be called as soon as possible after verifying that dnl the C compiler is working. ... The core part of the gnulib checks are done dnl by the macro gl_INIT." In addition, prevent gnulib from introducing OpenSSL dnl as a dependency. gl_EARLY gl_SET_CRYPTO_CHECK_DEFAULT([no]) gl_INIT # --enable-new-dtags: Use RUNPATH instead of RPATH. # It is necessary to have this done before libtool does linker detection. 
# See also: https://github.com/kronosnet/kronosnet/issues/107 AX_CHECK_LINK_FLAG([-Wl,--enable-new-dtags], [AM_LDFLAGS=-Wl,--enable-new-dtags], [AC_MSG_ERROR(["Linker support for --enable-new-dtags is required"])]) AC_SUBST([AM_LDFLAGS]) saved_LDFLAGS="$LDFLAGS" LDFLAGS="$AM_LDFLAGS $LDFLAGS" LT_INIT([dlopen]) LDFLAGS="$saved_LDFLAGS" LTDL_INIT([convenience]) AC_TYPE_SIZE_T AC_CHECK_SIZEOF(char) AC_CHECK_SIZEOF(short) AC_CHECK_SIZEOF(int) AC_CHECK_SIZEOF(long) AC_CHECK_SIZEOF(long long) dnl =============================================== dnl Helpers dnl =============================================== cc_supports_flag() { local CFLAGS="-Werror $@" AC_MSG_CHECKING([whether $CC supports $@]) AC_COMPILE_IFELSE([AC_LANG_PROGRAM([[ ]], [[ ]])], [RC=0; AC_MSG_RESULT([yes])], [RC=1; AC_MSG_RESULT([no])]) return $RC } # Some tests need to use their own CFLAGS cc_temp_flags() { ac_save_CFLAGS="$CFLAGS" CFLAGS="$*" } cc_restore_flags() { CFLAGS=$ac_save_CFLAGS } # expand_path_option $path_variable_name $default expand_path_option() { # The first argument is the variable *name* (not value) ac_path_varname="$1" # Get the original value of the variable ac_path_value=$(eval echo "\${${ac_path_varname}}") # Expand any literal variable expressions in the value so that we don't # end up with something like '${prefix}' in #defines etc. # # Autoconf deliberately leaves values unexpanded to allow overriding # the configure script choices in make commands (for example, # "make exec_prefix=/foo install"). No longer being able to do this seems # like no great loss. eval ac_path_value=$(eval echo "${ac_path_value}") # Use (expanded) default if necessary AS_IF([test x"${ac_path_value}" = x""], [eval ac_path_value=$(eval echo "$2")]) # Require a full path AS_CASE(["$ac_path_value"], [/*], [eval ${ac_path_varname}="$ac_path_value"], [*], [AC_MSG_ERROR([$ac_path_varname value "$ac_path_value" is not a full path])] ) } # yes_no_try $user_response $default DISABLED=0 REQUIRED=1 OPTIONAL=2 yes_no_try() { local value AS_IF([test x"$1" = x""], [value="$2"], [value="$1"]) AS_CASE(["`echo "$value" | tr '[A-Z]' '[a-z]'`"], [0|no|false|disable], [return $DISABLED], [1|yes|true|enable], [return $REQUIRED], [try|check], [return $OPTIONAL] ) AC_MSG_ERROR([Invalid option value "$value"]) } check_systemdsystemunitdir() { AC_MSG_CHECKING([which system unit file directory to use]) PKG_CHECK_VAR([systemdsystemunitdir], [systemd], [systemdsystemunitdir]) AC_MSG_RESULT([${systemdsystemunitdir}]) test x"$systemdsystemunitdir" != x"" return $? } dnl =============================================== dnl Configure Options dnl =============================================== dnl Actual library checks come later, but pkg-config can be used here to grab dnl external values to use as defaults for configure options dnl Per the autoconf docs, --enable-*/--disable-* options should control dnl features inherent to Pacemaker, while --with-*/--without-* options should dnl control the use of external software. However, --enable-*/--disable-* may dnl implicitly require additional external dependencies, and dnl --with-*/--without-* may implicitly enable or disable features, so the dnl line is blurry. dnl dnl We also use --with-* options for custom file, directory, and path dnl locations, since autoconf does not provide an option type for those. dnl --enable-* options: build process AC_ARG_ENABLE([quiet], [AS_HELP_STRING([--enable-quiet], [suppress make output unless there is an error @<:@no@:>@])] ) yes_no_try "$enable_quiet" "no" enable_quiet=$? 
AC_ARG_ENABLE([fatal-warnings], [AS_HELP_STRING([--enable-fatal-warnings], [enable pedantic and fatal warnings for gcc @<:@try@:>@])], ) yes_no_try "$enable_fatal_warnings" "try" enable_fatal_warnings=$? AC_ARG_ENABLE([hardening], [AS_HELP_STRING([--enable-hardening], [harden the resulting executables/libraries @<:@try@:>@])] ) yes_no_try "$enable_hardening" "try" enable_hardening=$? dnl --enable-* options: features AC_ARG_ENABLE([systemd], [AS_HELP_STRING([--enable-systemd], [enable support for managing resources via systemd @<:@try@:>@])] ) yes_no_try "$enable_systemd" "try" enable_systemd=$? AC_ARG_ENABLE([upstart], [AS_HELP_STRING([--enable-upstart], [enable support for managing resources via Upstart (deprecated) @<:@try@:>@])] ) yes_no_try "$enable_upstart" "try" enable_upstart=$? dnl --enable-* options: features inherent to Pacemaker AC_ARG_ENABLE([compat-2.0], [AS_HELP_STRING([--enable-compat-2.0], m4_normalize([ preserve certain output as it was in 2.0; this option will be available only for the lifetime of the 2.1 series @<:@no@:>@]))] ) yes_no_try "$enable_compat_2_0" "no" enable_compat_2_0=$? AS_IF([test $enable_compat_2_0 -ne $DISABLED], [ AC_DEFINE_UNQUOTED([PCMK__COMPAT_2_0], [1], [Keep certain output compatible with 2.0 release series]) PCMK_FEATURES="$PCMK_FEATURES compat-2.0" ] ) # Add an option to create symlinks at the pre-2.0.0 daemon name locations, so # that users and tools can continue to invoke those names directly (e.g., for # meta-data). This option will be removed in a future release. AC_ARG_ENABLE([legacy-links], [AS_HELP_STRING([--enable-legacy-links], [add symlinks for old daemon names (deprecated) @<:@no@:>@])] ) yes_no_try "$enable_legacy_links" "no" enable_legacy_links=$? AM_CONDITIONAL([BUILD_LEGACY_LINKS], [test $enable_legacy_links -ne $DISABLED]) # AM_GNU_GETTEXT calls AM_NLS which defines the nls option, but it defaults # to enabled. We override the definition of AM_NLS to flip the default and mark # it as experimental in the help text. AC_DEFUN([AM_NLS], [AC_MSG_CHECKING([whether NLS is requested]) AC_ARG_ENABLE([nls], [AS_HELP_STRING([--enable-nls], [use Native Language Support (experimental)])], USE_NLS=$enableval, USE_NLS=no) AC_MSG_RESULT([$USE_NLS]) AC_SUBST([USE_NLS])] ) AM_GNU_GETTEXT([external]) AM_GNU_GETTEXT_VERSION([0.18]) AS_IF([test x"$enable_nls" = x"yes"], [PCMK_FEATURES="$PCMK_FEATURES nls"]) dnl --with-* options: external software support, and custom locations dnl This argument is defined via an M4 macro so default can be a variable AC_DEFUN([VERSION_ARG], [AC_ARG_WITH([version], [AS_HELP_STRING([--with-version=VERSION], [override package version @<:@$1@:>@])], [ PACEMAKER_VERSION="$withval" ], [ PACEMAKER_VERSION="$PACKAGE_VERSION" ])] ) VERSION_ARG(VERSION_NUMBER) # Redefine PACKAGE_VERSION and VERSION according to PACEMAKER_VERSION in case # the user used --with-version. Unfortunately, this can only affect the # substitution variables and later uses in this file, not the config.h # constants, so we have to be careful to use only PACEMAKER_VERSION in C code. 
PACKAGE_VERSION=$PACEMAKER_VERSION VERSION=$PACEMAKER_VERSION CRM_DAEMON_USER="" AC_ARG_WITH([daemon-user], [AS_HELP_STRING([--with-daemon-user=USER], [user to run unprivileged Pacemaker daemons as (advanced option: changing this may break other cluster components unless similarly configured) @<:@hacluster@:>@])], [ CRM_DAEMON_USER="$withval" ] ) CRM_DAEMON_GROUP="" AC_ARG_WITH([daemon-group], [AS_HELP_STRING([--with-daemon-group=GROUP], [group to run unprivileged Pacemaker daemons as (advanced option: changing this may break other cluster components unless similarly configured) @<:@haclient@:>@])], [ CRM_DAEMON_GROUP="$withval" ] ) BUG_URL="" AC_ARG_WITH([bug-url], [AS_HELP_STRING([--with-bug-url=DIR], m4_normalize([ address where users should submit bug reports @<:@https://bugs.clusterlabs.org/enter_bug.cgi?product=Pacemaker@:>@]))], [ BUG_URL="$withval" ] ) dnl --with-* options: features AC_ARG_WITH([cibsecrets], [AS_HELP_STRING([--with-cibsecrets], [support separate file for CIB secrets @<:@no@:>@])] ) yes_no_try "$with_cibsecrets" "no" with_cibsecrets=$? AC_ARG_WITH([gnutls], [AS_HELP_STRING([--with-gnutls], [support Pacemaker Remote and remote-tls-port using GnuTLS @<:@try@:>@])] ) yes_no_try "$with_gnutls" "try" with_gnutls=$? PCMK_GNUTLS_PRIORITIES="NORMAL" AC_ARG_WITH([gnutls-priorities], [AS_HELP_STRING([--with-gnutls-priorities], [default GnuTLS cipher priorities @<:@NORMAL@:>@])], [ test x"$withval" = x"no" || PCMK_GNUTLS_PRIORITIES="$withval" ] ) AC_ARG_WITH([concurrent-fencing-default], [AS_HELP_STRING([--with-concurrent-fencing-default], [default value for concurrent-fencing cluster option @<:@false@:>@])], ) AS_CASE([$with_concurrent_fencing_default], [""], [with_concurrent_fencing_default="false"], [false], [], [true], [PCMK_FEATURES="$PCMK_FEATURES default-concurrent-fencing"], [AC_MSG_ERROR([Invalid value "$with_concurrent_fencing_default" for --with-concurrent-fencing-default])] ) AC_DEFINE_UNQUOTED([PCMK__CONCURRENT_FENCING_DEFAULT], ["$with_concurrent_fencing_default"], [Default value for concurrent-fencing cluster option]) AC_ARG_WITH([sbd-sync-default], [AS_HELP_STRING([--with-sbd-sync-default], m4_normalize([ default value used by sbd if SBD_SYNC_RESOURCE_STARTUP environment variable is not set @<:@false@:>@]))], ) AS_CASE([$with_sbd_sync_default], [""], [with_sbd_sync_default=false], [false], [], [true], [PCMK_FEATURES="$PCMK_FEATURES default-sbd-sync"], [AC_MSG_ERROR([Invalid value "$with_sbd_sync_default" for --with-sbd-sync-default])] ) AC_DEFINE_UNQUOTED([PCMK__SBD_SYNC_DEFAULT], [$with_sbd_sync_default], [Default value for SBD_SYNC_RESOURCE_STARTUP environment variable]) AC_ARG_WITH([resource-stickiness-default], [AS_HELP_STRING([--with-resource-stickiness-default], [If positive, value to add to new CIBs as explicit resource default for resource-stickiness @<:@0@:>@])], ) errmsg="Invalid value \"$with_resource_stickiness_default\" for --with-resource-stickiness-default" AS_CASE([$with_resource_stickiness_default], [0|""], [with_resource_stickiness_default="0"], [*[[!0-9]]*], [AC_MSG_ERROR([$errmsg])], [PCMK_FEATURES="$PCMK_FEATURES default-resource-stickiness"] ) AC_DEFINE_UNQUOTED([PCMK__RESOURCE_STICKINESS_DEFAULT], [$with_resource_stickiness_default], [Default value for resource-stickiness resource meta-attribute]) AC_ARG_WITH([corosync], [AS_HELP_STRING([--with-corosync], [support the Corosync messaging and membership layer @<:@try@:>@])] ) yes_no_try "$with_corosync" "try" with_corosync=$? dnl Get default from corosync if possible. 
PKG_CHECK_VAR([PCMK__COROSYNC_CONF], [corosync], [corosysconfdir], [], [PCMK__COROSYNC_CONF="${sysconfdir}/corosync/corosync.conf"]) AC_ARG_WITH([corosync-conf], [AS_HELP_STRING([--with-corosync-conf], m4_normalize([ location of Corosync configuration file @<:@value from Corosync package if available otherwise SYSCONFDIR/corosync/corosync.conf@:>@]))], [ PCMK__COROSYNC_CONF="$withval" ] ) AC_ARG_WITH([nagios], [AS_HELP_STRING([--with-nagios], [support nagios resources])] ) yes_no_try "$with_nagios" "try" with_nagios=$? dnl --with-* options: directory locations AC_ARG_WITH([nagios-plugin-dir], [AS_HELP_STRING([--with-nagios-plugin-dir=DIR], [directory for nagios plugins @<:@LIBEXECDIR/nagios/plugins@:>@])], [ NAGIOS_PLUGIN_DIR="$withval" ] ) AC_ARG_WITH([nagios-metadata-dir], [AS_HELP_STRING([--with-nagios-metadata-dir=DIR], [directory for nagios plugins metadata @<:@DATADIR/nagios/plugins-metadata@:>@])], [ NAGIOS_METADATA_DIR="$withval" ] ) INITDIR="" AC_ARG_WITH([initdir], [AS_HELP_STRING([--with-initdir=DIR], [directory for init (rc) scripts])], [ INITDIR="$withval" ] ) systemdsystemunitdir="${systemdsystemunitdir-}" AC_ARG_WITH([systemdsystemunitdir], [AS_HELP_STRING([--with-systemdsystemunitdir=DIR], [directory for systemd unit files (advanced option: must match what systemd uses)])], [ systemdsystemunitdir="$withval" ] ) CONFIGDIR="" AC_ARG_WITH([configdir], [AS_HELP_STRING([--with-configdir=DIR], [directory for Pacemaker configuration file @<:@SYSCONFDIR/sysconfig@:>@])], [ CONFIGDIR="$withval" ] ) dnl --runstatedir is available as of autoconf 2.70 (2020-12-08). When users dnl have an older version, they can use our --with-runstatedir. pcmk_runstatedir="" AC_ARG_WITH([runstatedir], [AS_HELP_STRING([--with-runstatedir=DIR], [modifiable per-process data @<:@LOCALSTATEDIR/run@:>@ (ignored if --runstatedir is available)])], [ pcmk_runstatedir="$withval" ] ) CRM_LOG_DIR="" AC_ARG_WITH([logdir], [AS_HELP_STRING([--with-logdir=DIR], [directory for Pacemaker log file @<:@LOCALSTATEDIR/log/pacemaker@:>@])], [ CRM_LOG_DIR="$withval" ] ) CRM_BUNDLE_DIR="" AC_ARG_WITH([bundledir], [AS_HELP_STRING([--with-bundledir=DIR], [directory for Pacemaker bundle logs @<:@LOCALSTATEDIR/log/pacemaker/bundles@:>@])], [ CRM_BUNDLE_DIR="$withval" ] ) dnl Get default from resource-agents if possible. Otherwise, the default uses dnl /usr/lib rather than libdir because it's determined by the OCF project and dnl not Pacemaker. Even if a user wants to install Pacemaker to /usr/local or dnl such, the OCF agents will be expected in their usual location. However, we dnl do give the user the option to override it. 
PKG_CHECK_VAR([OCF_ROOT_DIR], [resource-agents], [ocfrootdir], [], [OCF_ROOT_DIR="/usr/lib/ocf"]) AC_ARG_WITH([ocfdir], [AS_HELP_STRING([--with-ocfdir=DIR], m4_normalize([ OCF resource agent root directory (advanced option: changing this may break other cluster components unless similarly configured) @<:@value from resource-agents package if available otherwise /usr/lib/ocf@:>@]))], [ OCF_ROOT_DIR="$withval" ] ) AC_SUBST(OCF_ROOT_DIR) AC_DEFINE_UNQUOTED([OCF_ROOT_DIR], ["$OCF_ROOT_DIR"], [OCF root directory for resource agents and libraries]) PKG_CHECK_VAR([OCF_RA_PATH], [resource-agents], [ocfrapath], [], [OCF_RA_PATH="$OCF_ROOT_DIR/resource.d"]) AC_ARG_WITH([ocfrapath], [AS_HELP_STRING([--with-ocfrapath=DIR], m4_normalize([ OCF resource agent directories (colon-separated) to search @<:@value from resource-agents package if available otherwise OCFDIR/resource.d@:>@]))], [ OCF_RA_PATH="$withval" ] ) AC_SUBST(OCF_RA_PATH) OCF_RA_INSTALL_DIR="$OCF_ROOT_DIR/resource.d" AC_ARG_WITH([ocfrainstalldir], [AS_HELP_STRING([--with-ocfrainstalldir=DIR], m4_normalize([ OCF installation directory for Pacemakers resource agents @<:@OCFDIR/resource.d@:>@]))], [ OCF_RA_INSTALL_DIR="$withval" ] ) AC_SUBST(OCF_RA_INSTALL_DIR) dnl Get default from fence-agents if available PKG_CHECK_VAR([FA_PREFIX], [fence-agents], [prefix], [PCMK__FENCE_BINDIR="${FA_PREFIX}/sbin"], [PCMK__FENCE_BINDIR="$sbindir"]) AC_ARG_WITH([fence-bindir], [AS_HELP_STRING([--with-fence-bindir=DIR], m4_normalize([ directory for executable fence agents @<:@value from fence-agents package if available otherwise SBINDIR@:>@]))], [ PCMK__FENCE_BINDIR="$withval" ] ) AC_SUBST(PCMK__FENCE_BINDIR) dnl --with-* options: non-production testing AC_ARG_WITH([profiling], [AS_HELP_STRING([--with-profiling], [disable optimizations, for effective profiling @<:@no@:>@])] ) yes_no_try "$with_profiling" "no" with_profiling=$? AC_ARG_WITH([coverage], [AS_HELP_STRING([--with-coverage], [disable optimizations, for effective profiling and coverage testing @<:@no@:>@])] ) yes_no_try "$with_coverage" "no" with_coverage=$? AC_ARG_WITH([sanitizers], [AS_HELP_STRING([--with-sanitizers=...,...], [enable SANitizer build, do *NOT* use for production. Only ASAN/UBSAN/TSAN are currently supported])], [ SANITIZERS="$withval" ], [ SANITIZERS="" ]) dnl Environment variable options AC_ARG_VAR([CFLAGS_HARDENED_LIB], [extra C compiler flags for hardened libraries]) AC_ARG_VAR([LDFLAGS_HARDENED_LIB], [extra linker flags for hardened libraries]) AC_ARG_VAR([CFLAGS_HARDENED_EXE], [extra C compiler flags for hardened executables]) AC_ARG_VAR([LDFLAGS_HARDENED_EXE], [extra linker flags for hardened executables]) dnl =============================================== dnl General Processing dnl =============================================== AC_DEFINE_UNQUOTED(PACEMAKER_VERSION, "$VERSION", [Version number of this Pacemaker build]) PACKAGE_SERIES=`echo $VERSION | awk -F. 
'{ print $1"."$2 }'` AC_SUBST(PACKAGE_SERIES) AC_PROG_LN_S AC_PROG_MKDIR_P # Check for fatal warning support AS_IF([test $enable_fatal_warnings -ne $DISABLED && test x"$GCC" = x"yes" && cc_supports_flag -Werror], [WERROR="-Werror"], [ WERROR="" AS_CASE([$enable_fatal_warnings], [$REQUIRED], [AC_MSG_ERROR([Compiler does not support fatal warnings])], [$OPTIONAL], [ AC_MSG_NOTICE([Compiler does not support fatal warnings]) enable_fatal_warnings=$DISABLED ]) ]) AC_MSG_NOTICE([Sanitizing prefix: ${prefix}]) AS_IF([test x"$prefix" = x"NONE"], [ prefix=/usr dnl Fix default variables - "prefix" variable if not specified AS_IF([test x"$localstatedir" = x"\${prefix}/var"], [localstatedir="/var"]) AS_IF([test x"$sysconfdir" = x"\${prefix}/etc"], [sysconfdir="/etc"]) ]) AC_MSG_NOTICE([Sanitizing exec_prefix: ${exec_prefix}]) AS_CASE([$exec_prefix], [prefix|NONE], [exec_prefix=$prefix]) AC_MSG_NOTICE([Sanitizing INITDIR: ${INITDIR}]) AS_CASE([$INITDIR], [prefix], [INITDIR=$prefix], [""], [ AC_MSG_CHECKING([which init (rc) directory to use]) for initdir in /etc/init.d /etc/rc.d/init.d /sbin/init.d \ /usr/local/etc/rc.d /etc/rc.d do AS_IF([test -d $initdir], [ INITDIR=$initdir break ]) done AC_MSG_RESULT([$INITDIR]) ]) AC_SUBST(INITDIR) AC_MSG_NOTICE([Sanitizing libdir: ${libdir}]) AS_CASE([$libdir], [prefix|NONE], [ AC_MSG_CHECKING([which lib directory to use]) for aDir in lib64 lib do trydir="${exec_prefix}/${aDir}" AS_IF([test -d ${trydir}], [ libdir=${trydir} break ]) done AC_MSG_RESULT([$libdir]) ]) dnl Expand values of autoconf-provided directory options expand_path_option prefix expand_path_option exec_prefix expand_path_option bindir expand_path_option sbindir expand_path_option libexecdir expand_path_option datadir expand_path_option sysconfdir expand_path_option sharedstatedir expand_path_option localstatedir expand_path_option libdir expand_path_option includedir expand_path_option oldincludedir expand_path_option infodir expand_path_option mandir dnl Home-grown variables expand_path_option localedir "${datadir}/locale" AC_DEFINE_UNQUOTED([PCMK__LOCALE_DIR],["$localedir"], [Base directory for message catalogs]) AS_IF([test x"${runstatedir}" = x""], [runstatedir="${pcmk_runstatedir}"]) expand_path_option runstatedir "${localstatedir}/run" AC_DEFINE_UNQUOTED([PCMK_RUN_DIR], ["$runstatedir"], [Location for modifiable per-process data]) AC_SUBST(runstatedir) expand_path_option INITDIR AC_DEFINE_UNQUOTED([PCMK__LSB_INIT_DIR], ["$INITDIR"], [Location for LSB init scripts]) expand_path_option docdir "${datadir}/doc/${PACKAGE}-${VERSION}" AC_SUBST(docdir) expand_path_option CONFIGDIR "${sysconfdir}/sysconfig" AC_SUBST(CONFIGDIR) expand_path_option PCMK__COROSYNC_CONF "${sysconfdir}/corosync/corosync.conf" AC_SUBST(PCMK__COROSYNC_CONF) expand_path_option CRM_LOG_DIR "${localstatedir}/log/pacemaker" AC_DEFINE_UNQUOTED(CRM_LOG_DIR,"$CRM_LOG_DIR", Location for Pacemaker log file) AC_SUBST(CRM_LOG_DIR) expand_path_option CRM_BUNDLE_DIR "${localstatedir}/log/pacemaker/bundles" AC_DEFINE_UNQUOTED(CRM_BUNDLE_DIR,"$CRM_BUNDLE_DIR", Location for Pacemaker bundle logs) AC_SUBST(CRM_BUNDLE_DIR) expand_path_option PCMK__FENCE_BINDIR AC_DEFINE_UNQUOTED(PCMK__FENCE_BINDIR,"$PCMK__FENCE_BINDIR", [Location for executable fence agents]) expand_path_option OCF_RA_PATH AC_DEFINE_UNQUOTED([OCF_RA_PATH], ["$OCF_RA_PATH"], [OCF directories to search for resource agents ]) AS_IF([test x"${PCMK_GNUTLS_PRIORITIES}" != x""], [], [AC_MSG_ERROR([--with-gnutls-priorities value must not be empty])]) 
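The expand_path_option calls above exist so that values such as '${prefix}/var' are fully expanded before being written into #defines and substituted files. A minimal shell illustration of the nested eval expansion the helper performs (the values are only examples):

    prefix=/usr
    localstatedir='${prefix}/var'                       # what autoconf leaves unexpanded
    eval localstatedir=$(eval echo "${localstatedir}")  # the expansion expand_path_option applies
    echo "$localstatedir"                               # prints /usr/var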
AC_DEFINE_UNQUOTED([PCMK_GNUTLS_PRIORITIES], ["$PCMK_GNUTLS_PRIORITIES"], [GnuTLS cipher priorities]) AS_IF([test x"${BUG_URL}" = x""], [BUG_URL="https://bugs.clusterlabs.org/enter_bug.cgi?product=Pacemaker"]) AC_SUBST(BUG_URL) for j in prefix exec_prefix bindir sbindir libexecdir datadir sysconfdir \ sharedstatedir localstatedir libdir includedir oldincludedir infodir \ mandir INITDIR docdir CONFIGDIR localedir do dirname=`eval echo '${'${j}'}'` AS_IF([test ! -d "$dirname"], [AC_MSG_WARN([$j directory ($dirname) does not exist (yet)])]) done us_auth= AC_CHECK_HEADER([sys/socket.h], [ AC_CHECK_DECL([SO_PEERCRED], [ # Linux AC_CHECK_TYPE([struct ucred], [ us_auth=peercred_ucred; AC_DEFINE([HAVE_UCRED], [1], [Define if Unix socket auth method is getsockopt(s, SO_PEERCRED, &ucred, ...)]) ], [ # OpenBSD AC_CHECK_TYPE([struct sockpeercred], [ us_auth=localpeercred_sockepeercred; AC_DEFINE([HAVE_SOCKPEERCRED], [1], [Define if Unix socket auth method is getsockopt(s, SO_PEERCRED, &sockpeercred, ...)]) ], [], [[#include ]]) ], [[#define _GNU_SOURCE #include ]]) ], [], [[#include ]]) ]) AS_IF([test -z "${us_auth}"], [ # FreeBSD AC_CHECK_DECL([getpeereid], [ us_auth=getpeereid; AC_DEFINE([HAVE_GETPEEREID], [1], [Define if Unix socket auth method is getpeereid(s, &uid, &gid)]) ], [ # Solaris/OpenIndiana AC_CHECK_DECL([getpeerucred], [ us_auth=getpeerucred; AC_DEFINE([HAVE_GETPEERUCRED], [1], [Define if Unix socket auth method is getpeercred(s, &ucred)]) ], [ AC_MSG_FAILURE([No way to authenticate a Unix socket peer]) ], [[#include ]]) ]) ]) dnl OS-based decision-making is poor autotools practice; feature-based dnl mechanisms are strongly preferred. Keep this section to a bare minimum; dnl regard as a "necessary evil". INIT_EXT="" PROCFS=0 dnl Solaris and some *BSD versions support procfs but not files we need AS_CASE(["$host_os"], [*bsd*], [INIT_EXT=".sh"], [*linux*], [PROCFS=1], [darwin*], [ LIBS="$LIBS -L${prefix}/lib" CFLAGS="$CFLAGS -I${prefix}/include" ]) AC_SUBST(INIT_EXT) +AM_CONDITIONAL([SUPPORT_PROCFS], [test $PROCFS -eq 1]) AC_DEFINE_UNQUOTED([HAVE_LINUX_PROCFS], [$PROCFS], [Define to 1 if procfs is supported]) AS_CASE(["$host_cpu"], [ppc64|powerpc64], [ AS_CASE([$CFLAGS], [*powerpc64*], [], [*], [AS_IF([test x"$GCC" = x"yes"], [CFLAGS="$CFLAGS -m64"]) ]) ]) dnl =============================================== dnl Program Paths dnl =============================================== PATH="$PATH:/sbin:/usr/sbin:/usr/local/sbin:/usr/local/bin" export PATH dnl Pacemaker's executable python scripts will invoke the python specified by dnl configure's PYTHON variable. If not specified, AM_PATH_PYTHON will check a dnl built-in list with (unversioned) "python" having precedence. To configure dnl Pacemaker to use a specific python interpreter version, define PYTHON dnl when calling configure, for example: ./configure PYTHON=/usr/bin/python3.6 dnl Ensure PYTHON is an absolute path AS_IF([test x"${PYTHON}" != x""], [AC_PATH_PROG([PYTHON], [$PYTHON])]) dnl Require a minimum Python version AM_PATH_PYTHON([3.4]) AC_PATH_PROGS([ASCIIDOC_CONV], [asciidoc asciidoctor]) AC_PATH_PROG([HELP2MAN], [help2man]) AC_PATH_PROG([SPHINX], [sphinx-build]) AC_PATH_PROG([INKSCAPE], [inkscape]) AC_PATH_PROG([XSLTPROC], [xsltproc]) AC_PATH_PROG([XMLCATALOG], [xmlcatalog]) dnl Bash is needed for building man pages and running regression tests. dnl BASH is already an environment variable, so use something else. 
AC_PATH_PROG([BASH_PATH], [bash]) AS_IF([test x"${BASH_PATH}" != x""], [], [AC_MSG_FAILURE([Could not find required build tool bash])]) AC_PATH_PROGS(VALGRIND_BIN, valgrind, /usr/bin/valgrind) AC_DEFINE_UNQUOTED(VALGRIND_BIN, "$VALGRIND_BIN", Valgrind command) AM_CONDITIONAL(BUILD_HELP, test x"${HELP2MAN}" != x"") AS_IF([test x"${HELP2MAN}" != x""], [PCMK_FEATURES="$PCMK_FEATURES generated-manpages"]) MANPAGE_XSLT="" AS_IF([test x"${XSLTPROC}" != x""], [ AC_MSG_CHECKING([for DocBook-to-manpage transform]) # first try to figure out correct template using xmlcatalog query, # resort to extensive (semi-deterministic) file search if that fails DOCBOOK_XSL_URI='http://docbook.sourceforge.net/release/xsl/current' DOCBOOK_XSL_PATH='manpages/docbook.xsl' MANPAGE_XSLT=$(${XMLCATALOG} "" ${DOCBOOK_XSL_URI}/${DOCBOOK_XSL_PATH} \ | sed -n 's|^file://||p;q') AS_IF([test x"${MANPAGE_XSLT}" = x""], [ DIRS=$(find "${datadir}" -name $(basename $(dirname ${DOCBOOK_XSL_PATH})) \ -type d 2>/dev/null | LC_ALL=C sort) XSLT=$(basename ${DOCBOOK_XSL_PATH}) for d in ${DIRS} do AS_IF([test -f "${d}/${XSLT}"], [ MANPAGE_XSLT="${d}/${XSLT}" break ]) done ]) ]) AC_MSG_RESULT([$MANPAGE_XSLT]) AC_SUBST(MANPAGE_XSLT) AM_CONDITIONAL(BUILD_XML_HELP, test x"${MANPAGE_XSLT}" != x"") AS_IF([test x"${MANPAGE_XSLT}" != x""], [PCMK_FEATURES="$PCMK_FEATURES agent-manpages"]) AM_CONDITIONAL([IS_ASCIIDOC], [echo "${ASCIIDOC_CONV}" | grep -Eq 'asciidoc$']) AM_CONDITIONAL([BUILD_ASCIIDOC], [test "x${ASCIIDOC_CONV}" != x]) AS_IF([test x"${ASCIIDOC_CONV}" != x""], [PCMK_FEATURES="$PCMK_FEATURES ascii-docs"]) AM_CONDITIONAL([BUILD_SPHINX_DOCS], [test x"${SPHINX}" != x"" && test x"${INKSCAPE}" != x""]) AM_COND_IF([BUILD_SPHINX_DOCS], [PCMK_FEATURES="$PCMK_FEATURES books"]) dnl Pacemaker's shell scripts (and thus man page builders) rely on GNU getopt AC_MSG_CHECKING([for GNU-compatible getopt]) IFS_orig=$IFS IFS=: for PATH_DIR in $PATH do IFS=$IFS_orig GETOPT_PATH="${PATH_DIR}/getopt" AS_IF([test -f "$GETOPT_PATH" && test -x "$GETOPT_PATH"], [ $GETOPT_PATH -T >/dev/null 2>/dev/null AS_IF([test $? -eq 4], [break]) ]) GETOPT_PATH="" done IFS=$IFS_orig AS_IF([test -n "$GETOPT_PATH"], [AC_MSG_RESULT([$GETOPT_PATH])], [ AC_MSG_RESULT([no]) AC_MSG_ERROR([Could not find required build tool GNU-compatible getopt]) ]) AC_SUBST([GETOPT_PATH]) dnl ======================================================================== dnl checks for library functions to replace them dnl dnl NoSuchFunctionName: dnl is a dummy function which no system supplies. It is here to make dnl the system compile semi-correctly on OpenBSD which doesn't know dnl how to create an empty archive dnl dnl scandir: Only on BSD. dnl System-V systems may have it, but hidden and/or deprecated. dnl A replacement function is supplied for it. dnl dnl strerror: returns a string that corresponds to an errno. dnl A replacement function is supplied for it. dnl dnl strnlen: is a gnu function similar to strlen, but safer. dnl We wrote a tolerably-fast replacement function for it. dnl dnl strndup: is a gnu function similar to strdup, but safer. dnl We wrote a tolerably-fast replacement function for it. AC_REPLACE_FUNCS(alphasort NoSuchFunctionName scandir strerror strchrnul strnlen strndup) dnl =============================================== dnl Libraries dnl =============================================== AC_CHECK_LIB(socket, socket) dnl -lsocket AC_CHECK_LIB(c, dlopen) dnl if dlopen is in libc... 
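The GNU getopt probe above depends on the util-linux convention that "getopt -T" exits with status 4; BSD getopt does not. The same test can be run by hand against any getopt in $PATH:

    getopt -T >/dev/null 2>/dev/null
    if [ $? -eq 4 ]; then
        echo "GNU-compatible getopt"            # accepted by the loop above
    else
        echo "not GNU getopt; configure keeps searching PATH"
    fi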
AC_CHECK_LIB(dl, dlopen) dnl -ldl (for Linux) AC_CHECK_LIB(rt, sched_getscheduler) dnl -lrt (for Tru64) AC_CHECK_LIB(gnugetopt, getopt_long) dnl -lgnugetopt ( if available ) AC_CHECK_LIB(pam, pam_start) dnl -lpam (if available) PKG_CHECK_MODULES([UUID], [uuid], [CPPFLAGS="${CPPFLAGS} ${UUID_CFLAGS}" LIBS="${LIBS} ${UUID_LIBS}"]) AC_CHECK_FUNCS([sched_setscheduler]) AS_IF([test x"$ac_cv_func_sched_setscheduler" != x"yes"], [PC_LIBS_RT=""], [PC_LIBS_RT="-lrt"]) AC_SUBST(PC_LIBS_RT) # Require minimum glib version PKG_CHECK_MODULES([GLIB], [glib-2.0 >= 2.42.0], [CPPFLAGS="${CPPFLAGS} ${GLIB_CFLAGS}" LIBS="${LIBS} ${GLIB_LIBS}"]) # Check whether high-resolution sleep function is available AC_CHECK_FUNCS([nanosleep usleep]) # # Where is dlopen? # AS_IF([test x"$ac_cv_lib_c_dlopen" = x"yes"], [LIBADD_DL=""], [test x"$ac_cv_lib_dl_dlopen" = x"yes"], [LIBADD_DL=-ldl], [LIBADD_DL=${lt_cv_dlopen_libs}]) PKG_CHECK_MODULES(LIBXML2, [libxml-2.0], [CPPFLAGS="${CPPFLAGS} ${LIBXML2_CFLAGS}" LIBS="${LIBS} ${LIBXML2_LIBS}"]) REQUIRE_LIB([xslt], [xsltApplyStylesheet]) dnl ======================================================================== dnl Headers dnl ======================================================================== # Some distributions insert #warnings into deprecated headers. If we will # enable fatal warnings for the build, then enable them for the header checks # as well, otherwise the build could fail even though the header check # succeeds. (We should probably be doing this in more places.) cc_temp_flags "$CFLAGS $WERROR" # Optional headers (inclusion of these should be conditional in C code) AC_CHECK_HEADERS([getopt.h]) AC_CHECK_HEADERS([linux/swab.h]) AC_CHECK_HEADERS([stddef.h]) AC_CHECK_HEADERS([sys/signalfd.h]) AC_CHECK_HEADERS([uuid/uuid.h]) AC_CHECK_HEADERS([security/pam_appl.h pam/pam_appl.h]) # Required headers REQUIRE_HEADER([arpa/inet.h]) REQUIRE_HEADER([ctype.h]) REQUIRE_HEADER([dirent.h]) REQUIRE_HEADER([errno.h]) REQUIRE_HEADER([glib.h]) REQUIRE_HEADER([grp.h]) REQUIRE_HEADER([limits.h]) REQUIRE_HEADER([netdb.h]) REQUIRE_HEADER([netinet/in.h]) REQUIRE_HEADER([netinet/ip.h], [ #include #include ]) REQUIRE_HEADER([pwd.h]) REQUIRE_HEADER([signal.h]) REQUIRE_HEADER([stdio.h]) REQUIRE_HEADER([stdlib.h]) REQUIRE_HEADER([string.h]) REQUIRE_HEADER([strings.h]) REQUIRE_HEADER([sys/ioctl.h]) REQUIRE_HEADER([sys/param.h]) REQUIRE_HEADER([sys/reboot.h]) REQUIRE_HEADER([sys/resource.h]) REQUIRE_HEADER([sys/socket.h]) REQUIRE_HEADER([sys/stat.h]) REQUIRE_HEADER([sys/time.h]) REQUIRE_HEADER([sys/types.h]) REQUIRE_HEADER([sys/utsname.h]) REQUIRE_HEADER([sys/wait.h]) REQUIRE_HEADER([time.h]) REQUIRE_HEADER([unistd.h]) REQUIRE_HEADER([libxml/xpath.h]) REQUIRE_HEADER([libxslt/xslt.h]) cc_restore_flags AC_CHECK_FUNCS([uuid_unparse], [], [AC_MSG_FAILURE([Could not find required C function uuid_unparse()])]) AC_CACHE_CHECK([whether __progname and __progname_full are available], - [pf_cv_var_progname], AC_LINK_IFELSE([ - AC_LANG_PROGRAM([[extern char *__progname, *__progname_full;]], - [[__progname = "foo"; __progname_full = "foo bar";]], - [pf_cv_var_progname="yes"], - [pf_cv_var_progname="no"]) - ])) -AS_IF([test x"$pf_cv_var_progname" = x"yes"], [AC_DEFINE(HAVE___PROGNAME,1,[ ])]) + [pf_cv_var_progname], + [AC_LINK_IFELSE( + [AC_LANG_PROGRAM([[extern char *__progname, *__progname_full;]], + [[__progname = "foo"; __progname_full = "foo bar";]])], + [pf_cv_var_progname="yes"], + [pf_cv_var_progname="no"] + )] + ) +AS_IF([test x"$pf_cv_var_progname" = x"yes"], + [AC_DEFINE(HAVE_PROGNAME,1,[Define 
to 1 if processes can change their name])]) dnl ======================================================================== dnl Generic declarations dnl ======================================================================== AC_CHECK_DECLS([CLOCK_MONOTONIC], [PCMK_FEATURES="$PCMK_FEATURES monotonic"], [], [[ #include ]]) dnl ======================================================================== dnl Unit test declarations dnl ======================================================================== AC_CHECK_DECLS([assert_float_equal], [], [], [[ #include #include #include #include ]]) cc_temp_flags "$CFLAGS -Wl,--wrap=uname" WRAPPABLE_UNAME="no" AC_MSG_CHECKING([if uname() can be wrapped]) AC_RUN_IFELSE([AC_LANG_SOURCE([[ #include int __wrap_uname(struct utsname *buf) { return 100; } int main(int argc, char **argv) { struct utsname x; return uname(&x) == 100 ? 0 : 1; } ]])], [ WRAPPABLE_UNAME="yes" ], [ WRAPPABLE_UNAME="no"]) AC_MSG_RESULT([$WRAPPABLE_UNAME]) AM_CONDITIONAL([WRAPPABLE_UNAME], [test x"$WRAPPABLE_UNAME" = x"yes"]) cc_restore_flags dnl ======================================================================== dnl Structures dnl ======================================================================== AC_CHECK_MEMBERS([struct tm.tm_gmtoff],,,[[#include ]]) AC_CHECK_MEMBER([struct dirent.d_type], AC_DEFINE(HAVE_STRUCT_DIRENT_D_TYPE,1,[Define this if struct dirent has d_type]),, [#include ]) dnl ======================================================================== dnl Functions dnl ======================================================================== REQUIRE_FUNC([getopt]) REQUIRE_FUNC([setenv]) REQUIRE_FUNC([unsetenv]) REQUIRE_FUNC([vasprintf]) AC_CACHE_CHECK(whether sscanf supports %m, pf_cv_var_sscanf, AC_RUN_IFELSE([AC_LANG_SOURCE([[ #include const char *s = "some-command-line-arg"; int main(int argc, char **argv) { char *name = NULL; int n = sscanf(s, "%ms", &name); return n == 1 ? 0 : 1; } ]])], pf_cv_var_sscanf="yes", pf_cv_var_sscanf="no", pf_cv_var_sscanf="no")) AS_IF([test x"$pf_cv_var_sscanf" = x"yes"], [AC_DEFINE([HAVE_SSCANF_M], [1], [Define to 1 if sscanf %m modifier is available])]) dnl ======================================================================== dnl bzip2 dnl ======================================================================== REQUIRE_HEADER([bzlib.h]) REQUIRE_LIB([bz2], [BZ2_bzBuffToBuffCompress]) dnl ======================================================================== dnl sighandler_t is missing from Illumos, Solaris11 systems dnl ======================================================================== AC_MSG_CHECKING([for sighandler_t]) AC_COMPILE_IFELSE([AC_LANG_PROGRAM([[#include ]], [[sighandler_t *f;]])], [ AC_MSG_RESULT([yes]) AC_DEFINE([HAVE_SIGHANDLER_T], [1], [Define to 1 if sighandler_t is available]) ], [AC_MSG_RESULT([no])]) dnl ======================================================================== dnl ncurses dnl ======================================================================== dnl dnl A few OSes (e.g. Linux) deliver a default "ncurses" alongside "curses". dnl Many non-Linux deliver "curses"; sites may add "ncurses". dnl dnl However, the source-code recommendation for both is to #include "curses.h" dnl (i.e. "ncurses" still wants the include to be simple, no-'n', "curses.h"). dnl dnl ncurses takes precedence. dnl AC_CHECK_HEADERS([curses.h curses/curses.h ncurses.h ncurses/ncurses.h]) dnl Although n-library is preferred, only look for it if the n-header was found. 
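The (n)curses link flags set just below are taken from pkg-config when available, with a plain -lncurses fallback; the lookup can be previewed by hand, assuming pkg-config is the $PKG_CONFIG configure found:

    CURSESLIBS=$(pkg-config --libs ncurses) || CURSESLIBS='-lncurses'
    echo "$CURSESLIBS"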
CURSESLIBS='' PC_NAME_CURSES="" PC_LIBS_CURSES="" AS_IF([test x"$ac_cv_header_ncurses_h" = x"yes"], [ AC_CHECK_LIB(ncurses, printw, [AC_DEFINE(HAVE_LIBNCURSES,1, have ncurses library)]) CURSESLIBS=`$PKG_CONFIG --libs ncurses` || CURSESLIBS='-lncurses' PC_NAME_CURSES="ncurses" ]) AS_IF([test x"$ac_cv_header_ncurses_ncurses_h" = x"yes"], [ AC_CHECK_LIB(ncurses, printw, [AC_DEFINE(HAVE_LIBNCURSES,1, have ncurses library)]) CURSESLIBS=`$PKG_CONFIG --libs ncurses` || CURSESLIBS='-lncurses' PC_NAME_CURSES="ncurses" ]) dnl Only look for non-n-library if there was no n-library. AS_IF([test x"$CURSESLIBS" = x"" && test x"$ac_cv_header_curses_h" = x"yes"], [ AC_CHECK_LIB(curses, printw, [CURSESLIBS='-lcurses'; AC_DEFINE(HAVE_LIBCURSES,1, have curses library)]) PC_LIBS_CURSES="$CURSESLIBS" ]) dnl Only look for non-n-library if there was no n-library. AS_IF([test x"$CURSESLIBS" = x"" && test x"$ac_cv_header_curses_curses_h" = x"yes"], [ AC_CHECK_LIB(curses, printw, [CURSESLIBS='-lcurses'; AC_DEFINE(HAVE_LIBCURSES,1, have curses library)]) PC_LIBS_CURSES="$CURSESLIBS" ]) AS_IF([test x"$CURSESLIBS" != x""], [PCMK_FEATURES="$PCMK_FEATURES ncurses"]) dnl Check for printw() prototype compatibility AS_IF([test x"$CURSESLIBS" != x"" && cc_supports_flag -Wcast-qual], [ ac_save_LIBS=$LIBS LIBS="$CURSESLIBS" # avoid broken test because of hardened build environment in Fedora 23+ # - https://fedoraproject.org/wiki/Changes/Harden_All_Packages # - https://bugzilla.redhat.com/1297985 AS_IF([cc_supports_flag -fPIC], [cc_temp_flags "-Wcast-qual $WERROR -fPIC"], [cc_temp_flags "-Wcast-qual $WERROR"]) AC_MSG_CHECKING([whether curses library is compatible]) AC_LINK_IFELSE( [AC_LANG_PROGRAM([ #if defined(HAVE_NCURSES_H) # include #elif defined(HAVE_NCURSES_NCURSES_H) # include #elif defined(HAVE_CURSES_H) # include #endif ], [printw((const char *)"Test");] )], [AC_MSG_RESULT([yes])], [ AC_MSG_RESULT([no]) AC_MSG_WARN(m4_normalize([Disabling curses because the printw() function of your (n)curses library is old. 
If you wish to enable curses, update to a newer version (ncurses 5.4 or later is recommended, available from https://invisible-island.net/ncurses/) ])) AC_DEFINE([HAVE_INCOMPATIBLE_PRINTW], [1], [Define to 1 if curses library has incompatible printw()]) ] ) LIBS=$ac_save_LIBS cc_restore_flags ]) AC_SUBST(CURSESLIBS) AC_SUBST(PC_NAME_CURSES) AC_SUBST(PC_LIBS_CURSES) dnl ======================================================================== dnl Profiling and GProf dnl ======================================================================== CFLAGS_ORIG="$CFLAGS" AS_IF([test $with_coverage -ne $DISABLED], [ with_profiling=$REQUIRED PCMK_FEATURES="$PCMK_FEATURES coverage" CFLAGS="$CFLAGS -fprofile-arcs -ftest-coverage" dnl During linking, make sure to specify -lgcov or -coverage ] ) AS_IF([test $with_profiling -ne $DISABLED], [ with_profiling=$REQUIRED PCMK_FEATURES="$PCMK_FEATURES profile" dnl Disable various compiler optimizations CFLAGS="$CFLAGS -fno-omit-frame-pointer -fno-inline -fno-builtin" dnl CFLAGS="$CFLAGS -fno-inline-functions" dnl CFLAGS="$CFLAGS -fno-default-inline" dnl CFLAGS="$CFLAGS -fno-inline-functions-called-once" dnl CFLAGS="$CFLAGS -fno-optimize-sibling-calls" dnl Turn off optimization so tools can get accurate line numbers CFLAGS=`echo $CFLAGS | sed \ -e 's/-O.\ //g' \ -e 's/-Wp,-D_FORTIFY_SOURCE=.\ //g' \ -e 's/-D_FORTIFY_SOURCE=.\ //g'` CFLAGS="$CFLAGS -O0 -g3 -gdwarf-2" AC_MSG_NOTICE([CFLAGS before adding profiling options: $CFLAGS_ORIG]) AC_MSG_NOTICE([CFLAGS after: $CFLAGS]) ] ) AC_DEFINE_UNQUOTED([SUPPORT_PROFILING], [$with_profiling], [Support profiling]) AM_CONDITIONAL([BUILD_PROFILING], [test "$with_profiling" = "$REQUIRED"]) dnl ======================================================================== dnl Cluster infrastructure - LibQB dnl ======================================================================== PKG_CHECK_MODULES(libqb, libqb >= 0.17) CPPFLAGS="$libqb_CFLAGS $CPPFLAGS" LIBS="$libqb_LIBS $LIBS" dnl libqb 2.0.5+ (2022-03) AC_CHECK_FUNCS([qb_ipcc_connect_async]) dnl libqb 2.0.2+ (2020-10) AC_CHECK_FUNCS([qb_ipcc_auth_get]) dnl libqb 2.0.0+ (2020-05) CHECK_ENUM_VALUE([qb/qblog.h],[qb_log_conf],[QB_LOG_CONF_MAX_LINE_LEN]) CHECK_ENUM_VALUE([qb/qblog.h],[qb_log_conf],[QB_LOG_CONF_ELLIPSIS]) dnl Support Linux-HA fence agents if available AS_IF([test x"$cross_compiling" != x"yes"], [CPPFLAGS="$CPPFLAGS -I${prefix}/include/heartbeat"]) AC_CHECK_HEADERS([stonith/stonith.h], [ AC_CHECK_LIB([pils], [PILLoadPlugin]) AC_CHECK_LIB([plumb], [G_main_add_IPC_Channel]) PCMK_FEATURES="$PCMK_FEATURES lha" ]) AM_CONDITIONAL([BUILD_LHA_SUPPORT], [test x"$ac_cv_header_stonith_stonith_h" = x"yes"]) dnl =============================================== dnl Variables needed for substitution dnl =============================================== CRM_SCHEMA_DIRECTORY="${datadir}/pacemaker" AC_DEFINE_UNQUOTED(CRM_SCHEMA_DIRECTORY,"$CRM_SCHEMA_DIRECTORY", Location for the Pacemaker Relax-NG Schema) AC_SUBST(CRM_SCHEMA_DIRECTORY) CRM_CORE_DIR="${localstatedir}/lib/pacemaker/cores" AC_DEFINE_UNQUOTED([CRM_CORE_DIR], ["$CRM_CORE_DIR"], [Directory Pacemaker daemons should change to (without systemd, core files will go here)]) AC_SUBST(CRM_CORE_DIR) AS_IF([test x"${CRM_DAEMON_USER}" = x""], [CRM_DAEMON_USER="hacluster"]) AC_DEFINE_UNQUOTED(CRM_DAEMON_USER,"$CRM_DAEMON_USER", User to run Pacemaker daemons as) AC_SUBST(CRM_DAEMON_USER) AS_IF([test x"${CRM_DAEMON_GROUP}" = x""], [CRM_DAEMON_GROUP="haclient"]) AC_DEFINE_UNQUOTED(CRM_DAEMON_GROUP,"$CRM_DAEMON_GROUP", Group to run Pacemaker 
daemons as) AC_SUBST(CRM_DAEMON_GROUP) CRM_PACEMAKER_DIR=${localstatedir}/lib/pacemaker AC_DEFINE_UNQUOTED(CRM_PACEMAKER_DIR,"$CRM_PACEMAKER_DIR", Location to store directory produced by Pacemaker daemons) AC_SUBST(CRM_PACEMAKER_DIR) CRM_BLACKBOX_DIR=${localstatedir}/lib/pacemaker/blackbox AC_DEFINE_UNQUOTED(CRM_BLACKBOX_DIR,"$CRM_BLACKBOX_DIR", Where to keep blackbox dumps) AC_SUBST(CRM_BLACKBOX_DIR) PE_STATE_DIR="${localstatedir}/lib/pacemaker/pengine" AC_DEFINE_UNQUOTED(PE_STATE_DIR,"$PE_STATE_DIR", Where to keep scheduler outputs) AC_SUBST(PE_STATE_DIR) CRM_CONFIG_DIR="${localstatedir}/lib/pacemaker/cib" AC_DEFINE_UNQUOTED(CRM_CONFIG_DIR,"$CRM_CONFIG_DIR", Where to keep configuration files) AC_SUBST(CRM_CONFIG_DIR) CRM_DAEMON_DIR="${libexecdir}/pacemaker" AC_DEFINE_UNQUOTED(CRM_DAEMON_DIR,"$CRM_DAEMON_DIR", Location for Pacemaker daemons) AC_SUBST(CRM_DAEMON_DIR) CRM_STATE_DIR="${runstatedir}/crm" AC_DEFINE_UNQUOTED([CRM_STATE_DIR], ["$CRM_STATE_DIR"], [Where to keep state files and sockets]) AC_SUBST(CRM_STATE_DIR) CRM_RSCTMP_DIR="${runstatedir}/resource-agents" AC_DEFINE_UNQUOTED(CRM_RSCTMP_DIR,"$CRM_RSCTMP_DIR", Where resource agents should keep state files) AC_SUBST(CRM_RSCTMP_DIR) PACEMAKER_CONFIG_DIR="${sysconfdir}/pacemaker" AC_DEFINE_UNQUOTED(PACEMAKER_CONFIG_DIR,"$PACEMAKER_CONFIG_DIR", Where to keep configuration files like authkey) AC_SUBST(PACEMAKER_CONFIG_DIR) AC_DEFINE_UNQUOTED(SBIN_DIR,"$sbindir",[Location for system binaries]) AC_PATH_PROGS(GIT, git false) AC_MSG_CHECKING([build version]) BUILD_VERSION=$Format:%h$ AS_IF([test $BUILD_VERSION != ":%h$"], [AC_MSG_RESULT([$BUILD_VERSION (archive hash)])], [test -x $GIT && test -d .git], [ BUILD_VERSION=`$GIT log --pretty="format:%h" -n 1` AC_MSG_RESULT([$BUILD_VERSION (git hash)]) ], [ # The current directory name make a reasonable default # Most generated archives will include the hash or tag BASE=`basename $PWD` BUILD_VERSION=`echo $BASE | sed s:.*[[Pp]]acemaker-::` AC_MSG_RESULT([$BUILD_VERSION (directory name)]) ]) AC_DEFINE_UNQUOTED(BUILD_VERSION, "$BUILD_VERSION", Build version) AC_SUBST(BUILD_VERSION) HAVE_dbus=1 PKG_CHECK_MODULES([DBUS], [dbus-1], [CPPFLAGS="${CPPFLAGS} ${DBUS_CFLAGS}"], [HAVE_dbus=0]) AC_DEFINE_UNQUOTED(HAVE_DBUS, $HAVE_dbus, Support dbus) AM_CONDITIONAL(BUILD_DBUS, test $HAVE_dbus = 1) dnl libdbus 1.5.12+ (2012-03) / 1.6.0+ (2012-06) AC_CHECK_TYPES([DBusBasicValue],,,[[#include ]]) AS_IF([test $HAVE_dbus = 0], [PC_NAME_DBUS=""], [PC_NAME_DBUS="dbus-1"]) AC_SUBST(PC_NAME_DBUS) AS_CASE([$enable_systemd], [$REQUIRED], [ AS_IF([test $HAVE_dbus = 0], [AC_MSG_FAILURE([Cannot support systemd resources without DBus])]) AS_IF([test "$ac_cv_have_decl_CLOCK_MONOTONIC" = "no"], [AC_MSG_FAILURE([Cannot support systemd resources without monotonic clock])]) AS_IF([check_systemdsystemunitdir], [], [AC_MSG_FAILURE([Cannot support systemd resources without systemdsystemunitdir])]) ], [$OPTIONAL], [ AS_IF([test $HAVE_dbus = 0 \ || test x"$ac_cv_have_decl_CLOCK_MONOTONIC" = x"no"], [enable_systemd=$DISABLED], [ AC_MSG_CHECKING([for systemd version (using dbus-send)]) ret=$({ dbus-send --system --print-reply \ --dest=org.freedesktop.systemd1 \ /org/freedesktop/systemd1 \ org.freedesktop.DBus.Properties.Get \ string:org.freedesktop.systemd1.Manager \ string:Version 2>/dev/null \ || echo "version unavailable"; } | tail -n1) # sanitize output a bit (interested just in value, not type), # ret is intentionally unenquoted so as to normalize whitespace ret=$(echo ${ret} | cut -d' ' -f2-) AC_MSG_RESULT([${ret}]) AS_IF([test 
x"$ret" != x"unavailable" \ || systemctl --version 2>/dev/null | grep -q systemd], [ AS_IF([check_systemdsystemunitdir], [enable_systemd=$REQUIRED], [enable_systemd=$DISABLED]) ], [enable_systemd=$DISABLED] ) ]) ], ) AC_MSG_CHECKING([whether to enable support for managing resources via systemd]) AS_IF([test $enable_systemd -eq $DISABLED], [AC_MSG_RESULT([no])], [ AC_MSG_RESULT([yes]) PCMK_FEATURES="$PCMK_FEATURES systemd" ] ) AC_SUBST([systemdsystemunitdir]) AC_DEFINE_UNQUOTED([SUPPORT_SYSTEMD], [$enable_systemd], [Support systemd resources]) AM_CONDITIONAL([BUILD_SYSTEMD], [test $enable_systemd = $REQUIRED]) AC_SUBST(SUPPORT_SYSTEMD) AS_CASE([$enable_upstart], [$REQUIRED], [ AS_IF([test $HAVE_dbus = 0], [AC_MSG_FAILURE([Cannot support Upstart resources without DBus])]) ], [$OPTIONAL], [ AS_IF([test $HAVE_dbus = 0], [enable_upstart=$DISABLED], [ AC_MSG_CHECKING([for Upstart version (using dbus-send)]) ret=$({ dbus-send --system --print-reply \ --dest=com.ubuntu.Upstart \ /com/ubuntu/Upstart org.freedesktop.DBus.Properties.Get \ string:com.ubuntu.Upstart0_6 string:version 2>/dev/null \ || echo "version unavailable"; } | tail -n1) # sanitize output a bit (interested just in value, not type), # ret is intentionally unenquoted so as to normalize whitespace ret=$(echo ${ret} | cut -d' ' -f2-) AC_MSG_RESULT([${ret}]) AS_IF([test x"$ret" != x"unavailable" \ || initctl --version 2>/dev/null | grep -q upstart], [enable_upstart=$REQUIRED], [enable_upstart=$DISABLED] ) ]) ], ) AC_MSG_CHECKING([whether to enable support for managing resources via Upstart]) AS_IF([test $enable_upstart -eq $DISABLED], [AC_MSG_RESULT([no])], [ AC_MSG_RESULT([yes]) PCMK_FEATURES="$PCMK_FEATURES upstart" ] ) AC_DEFINE_UNQUOTED([SUPPORT_UPSTART], [$enable_upstart], [Support Upstart resources]) AM_CONDITIONAL([BUILD_UPSTART], [test $enable_upstart -eq $REQUIRED]) AC_SUBST(SUPPORT_UPSTART) AS_CASE([$with_nagios], [$REQUIRED], [ AS_IF([test x"$ac_cv_have_decl_CLOCK_MONOTONIC" = x"no"], [AC_MSG_FAILURE([Cannot support nagios resources without monotonic clock])]) ], [$OPTIONAL], [ AS_IF([test x"$ac_cv_have_decl_CLOCK_MONOTONIC" = x"no"], [with_nagios=$DISABLED], [with_nagios=$REQUIRED]) ] ) AS_IF([test $with_nagios -eq $REQUIRED], [PCMK_FEATURES="$PCMK_FEATURES nagios"]) AC_DEFINE_UNQUOTED([SUPPORT_NAGIOS], [$with_nagios], [Support nagios plugins]) AM_CONDITIONAL([BUILD_NAGIOS], [test $with_nagios -eq $REQUIRED]) AS_IF([test x"$NAGIOS_PLUGIN_DIR" = x""], [NAGIOS_PLUGIN_DIR="${libexecdir}/nagios/plugins"]) AC_DEFINE_UNQUOTED(NAGIOS_PLUGIN_DIR, "$NAGIOS_PLUGIN_DIR", Directory for nagios plugins) AC_SUBST(NAGIOS_PLUGIN_DIR) AS_IF([test x"$NAGIOS_METADATA_DIR" = x""], [NAGIOS_METADATA_DIR="${datadir}/nagios/plugins-metadata"]) AC_DEFINE_UNQUOTED(NAGIOS_METADATA_DIR, "$NAGIOS_METADATA_DIR", Directory for nagios plugins metadata) AC_SUBST(NAGIOS_METADATA_DIR) STACKS="" CLUSTERLIBS="" PC_NAME_CLUSTER="" dnl ======================================================================== dnl Cluster stack - Corosync dnl ======================================================================== COROSYNC_LIBS="" AS_CASE([$with_corosync], [$REQUIRED], [ # These will be fatal if unavailable PKG_CHECK_MODULES([cpg], [libcpg]) PKG_CHECK_MODULES([cfg], [libcfg]) PKG_CHECK_MODULES([cmap], [libcmap]) PKG_CHECK_MODULES([quorum], [libquorum]) PKG_CHECK_MODULES([libcorosync_common], [libcorosync_common]) ] [$OPTIONAL], [ PKG_CHECK_MODULES([cpg], [libcpg], [], [with_corosync=$DISABLED]) PKG_CHECK_MODULES([cfg], [libcfg], [], [with_corosync=$DISABLED]) 
PKG_CHECK_MODULES([cmap], [libcmap], [], [with_corosync=$DISABLED]) PKG_CHECK_MODULES([quorum], [libquorum], [], [with_corosync=$DISABLED]) PKG_CHECK_MODULES([libcorosync_common], [libcorosync_common], [], [with_corosync=$DISABLED]) AS_IF([test $with_corosync -ne $DISABLED], [with_corosync=$REQUIRED]) ] ) AS_IF([test $with_corosync -ne $DISABLED], [ AC_MSG_CHECKING([for Corosync 2 or later]) AC_MSG_RESULT([yes]) CFLAGS="$CFLAGS $libqb_CFLAGS $cpg_CFLAGS $cfg_CFLAGS $cmap_CFLAGS $quorum_CFLAGS $libcorosync_common_CFLAGS" CPPFLAGS="$CPPFLAGS `$PKG_CONFIG --cflags-only-I corosync`" COROSYNC_LIBS="$COROSYNC_LIBS $cpg_LIBS $cfg_LIBS $cmap_LIBS $quorum_LIBS $libcorosync_common_LIBS" CLUSTERLIBS="$CLUSTERLIBS $COROSYNC_LIBS" PC_NAME_CLUSTER="$PC_CLUSTER_NAME libcfg libcmap libcorosync_common libcpg libquorum" STACKS="$STACKS corosync-ge-2" dnl Shutdown tracking added (back) to corosync Jan 2021 saved_LIBS="$LIBS" LIBS="$LIBS $COROSYNC_LIBS" AC_CHECK_FUNCS([corosync_cfg_trackstart]) LIBS="$saved_LIBS" ] ) AC_DEFINE_UNQUOTED([SUPPORT_COROSYNC], [$with_corosync], [Support the Corosync messaging and membership layer]) AM_CONDITIONAL([BUILD_CS_SUPPORT], [test $with_corosync -eq $REQUIRED]) AC_SUBST([SUPPORT_COROSYNC]) dnl dnl Cluster stack - Sanity dnl AS_IF([test x"$STACKS" != x""], [AC_MSG_NOTICE([Supported stacks:${STACKS}])], [AC_MSG_FAILURE([At least one cluster stack must be supported])]) PCMK_FEATURES="${PCMK_FEATURES}${STACKS}" AC_SUBST(CLUSTERLIBS) AC_SUBST(PC_NAME_CLUSTER) dnl ======================================================================== dnl CIB secrets dnl ======================================================================== AS_IF([test $with_cibsecrets -ne $DISABLED], [ with_cibsecrets=$REQUIRED PCMK_FEATURES="$PCMK_FEATURES cibsecrets" LRM_CIBSECRETS_DIR="${localstatedir}/lib/pacemaker/lrm/secrets" AC_DEFINE_UNQUOTED([LRM_CIBSECRETS_DIR], ["$LRM_CIBSECRETS_DIR"], [Location for CIB secrets]) AC_SUBST([LRM_CIBSECRETS_DIR]) ] ) AC_DEFINE_UNQUOTED([SUPPORT_CIBSECRETS], [$with_cibsecrets], [Support CIB secrets]) AM_CONDITIONAL([BUILD_CIBSECRETS], [test $with_cibsecrets -eq $REQUIRED]) dnl ======================================================================== dnl GnuTLS dnl ======================================================================== dnl Require GnuTLS >=2.12.0 (2011-03) for Pacemaker Remote support PC_NAME_GNUTLS="" AS_CASE([$with_gnutls], [$REQUIRED], [ REQUIRE_LIB([gnutls], [gnutls_sec_param_to_pk_bits]) REQUIRE_HEADER([gnutls/gnutls.h]) ], [$OPTIONAL], [ AC_CHECK_LIB([gnutls], [gnutls_sec_param_to_pk_bits], [], [with_gnutls=$DISABLED]) AC_CHECK_HEADERS([gnutls/gnutls.h], [], [with_gnutls=$DISABLED]) ] ) AS_IF([test $with_gnutls -ne $DISABLED], [ PC_NAME_GNUTLS="gnutls" PCMK_FEATURES="$PCMK_FEATURES remote" ] ) AC_SUBST([PC_NAME_GNUTLS]) AM_CONDITIONAL([BUILD_REMOTE], [test $with_gnutls -ne $DISABLED]) # --- ASAN/UBSAN/TSAN (see man gcc) --- # when using SANitizers, we need to pass the -fsanitize.. # to both CFLAGS and LDFLAGS. The CFLAGS/LDFLAGS must be # specified as first in the list or there will be runtime # issues (for example user has to LD_PRELOAD asan for it to work # properly). 
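
The AS_IF block that follows converts the comma-separated SANITIZERS value into those compiler and linker flags. As a quick illustration of why the instrumentation must be present at both compile time and link time (the point the comment above makes about CFLAGS and LDFLAGS), here is a minimal stand-alone C program, not part of the tree, that AddressSanitizer aborts at run time when it is built with -fsanitize=address on both the compile and the link command:

    #include <stdlib.h>

    /* Build (both steps need the flag, or the ASan runtime never gets linked):
     *   cc -fsanitize=address -g -c asan_demo.c
     *   cc -fsanitize=address asan_demo.o -o asan_demo
     */
    int
    main(void)
    {
        char *buf = malloc(8);

        buf[8] = 'x';   /* heap-buffer-overflow: one byte past the allocation */
        free(buf);
        return 0;
    }

If the flag is left off the link step, the program either fails to link or has to be run with the ASan runtime preloaded, which is exactly the LD_PRELOAD situation the comment above warns about.
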
AS_IF([test -n "${SANITIZERS}"], [ SANITIZERS=$(echo $SANITIZERS | sed -e 's/,/ /g') for SANITIZER in $SANITIZERS do AS_CASE([$SANITIZER], [asan|ASAN], [ SANITIZERS_CFLAGS="$SANITIZERS_CFLAGS -fsanitize=address" SANITIZERS_LDFLAGS="$SANITIZERS_LDFLAGS -fsanitize=address -lasan" PCMK_FEATURES="$PCMK_FEATURES asan" REQUIRE_LIB([asan],[main]) ], [ubsan|UBSAN], [ SANITIZERS_CFLAGS="$SANITIZERS_CFLAGS -fsanitize=undefined" SANITIZERS_LDFLAGS="$SANITIZERS_LDFLAGS -fsanitize=undefined -lubsan" PCMK_FEATURES="$PCMK_FEATURES ubsan" REQUIRE_LIB([ubsan],[main]) ], [tsan|TSAN], [ SANITIZERS_CFLAGS="$SANITIZERS_CFLAGS -fsanitize=thread" SANITIZERS_LDFLAGS="$SANITIZERS_LDFLAGS -fsanitize=thread -ltsan" PCMK_FEATURES="$PCMK_FEATURES tsan" REQUIRE_LIB([tsan],[main]) ]) done ]) dnl ======================================================================== dnl Compiler flags dnl ======================================================================== dnl Make sure that CFLAGS is not exported. If the user did dnl not have CFLAGS in their environment then this should have dnl no effect. However if CFLAGS was exported from the user's dnl environment, then the new CFLAGS will also be exported dnl to sub processes. AS_IF([export | fgrep " CFLAGS=" > /dev/null], [ SAVED_CFLAGS="$CFLAGS" unset CFLAGS CFLAGS="$SAVED_CFLAGS" unset SAVED_CFLAGS ]) CC_EXTRAS="" AS_IF([test x"$GCC" != x"yes"], [CFLAGS="$CFLAGS -g"], [ CFLAGS="$CFLAGS -ggdb" dnl When we don't have diagnostic push / pull, we can't explicitly disable dnl checking for nonliteral formats in the places where they occur on purpose dnl thus we disable nonliteral format checking globally as we are aborting dnl on warnings. dnl what makes the things really ugly is that nonliteral format checking is dnl obviously available as an extra switch in very modern gcc but for older dnl gcc this is part of -Wformat=2 dnl so if we have push/pull we can enable -Wformat=2 -Wformat-nonliteral dnl if we don't have push/pull but -Wformat-nonliteral we can enable -Wformat=2 dnl otherwise none of both gcc_diagnostic_push_pull=no cc_temp_flags "$CFLAGS $WERROR" AC_MSG_CHECKING([for gcc diagnostic push / pull]) AC_COMPILE_IFELSE([AC_LANG_PROGRAM([[ #pragma GCC diagnostic push #pragma GCC diagnostic pop ]])], [ AC_MSG_RESULT([yes]) gcc_diagnostic_push_pull=yes ], AC_MSG_RESULT([no])) cc_restore_flags AS_IF([cc_supports_flag "-Wformat-nonliteral"], [gcc_format_nonliteral=yes], [gcc_format_nonliteral=no]) # We had to eliminate -Wnested-externs because of libtool changes # Make sure to order options so that the former stand for prerequisites # of the latter (e.g., -Wformat-nonliteral requires -Wformat). 
EXTRA_FLAGS="-fgnu89-inline" EXTRA_FLAGS="$EXTRA_FLAGS -Wall" EXTRA_FLAGS="$EXTRA_FLAGS -Waggregate-return" EXTRA_FLAGS="$EXTRA_FLAGS -Wbad-function-cast" EXTRA_FLAGS="$EXTRA_FLAGS -Wcast-align" EXTRA_FLAGS="$EXTRA_FLAGS -Wdeclaration-after-statement" EXTRA_FLAGS="$EXTRA_FLAGS -Wendif-labels" EXTRA_FLAGS="$EXTRA_FLAGS -Wfloat-equal" EXTRA_FLAGS="$EXTRA_FLAGS -Wformat-security" EXTRA_FLAGS="$EXTRA_FLAGS -Wimplicit-fallthrough" EXTRA_FLAGS="$EXTRA_FLAGS -Wmissing-prototypes" EXTRA_FLAGS="$EXTRA_FLAGS -Wmissing-declarations" EXTRA_FLAGS="$EXTRA_FLAGS -Wnested-externs" EXTRA_FLAGS="$EXTRA_FLAGS -Wno-long-long" EXTRA_FLAGS="$EXTRA_FLAGS -Wno-strict-aliasing" EXTRA_FLAGS="$EXTRA_FLAGS -Wpointer-arith" EXTRA_FLAGS="$EXTRA_FLAGS -Wstrict-prototypes" EXTRA_FLAGS="$EXTRA_FLAGS -Wwrite-strings" EXTRA_FLAGS="$EXTRA_FLAGS -Wunused-but-set-variable" EXTRA_FLAGS="$EXTRA_FLAGS -Wunsigned-char" AS_IF([test x"$gcc_diagnostic_push_pull" = x"yes"], [ AC_DEFINE([HAVE_FORMAT_NONLITERAL], [], [gcc can complain about nonliterals in format]) EXTRA_FLAGS="$EXTRA_FLAGS -Wformat=2 -Wformat-nonliteral" ], [test x"$gcc_format_nonliteral" = x"yes"], [EXTRA_FLAGS="$EXTRA_FLAGS -Wformat=2"]) # Additional warnings it might be nice to enable one day # -Wshadow # -Wunreachable-code for j in $EXTRA_FLAGS do AS_IF([cc_supports_flag $CC_EXTRAS $j], [CC_EXTRAS="$CC_EXTRAS $j"]) done AC_MSG_NOTICE([Using additional gcc flags: ${CC_EXTRAS}]) ]) dnl dnl Hardening flags dnl dnl The prime control of whether to apply (targeted) hardening build flags and dnl which ones is --{enable,disable}-hardening option passed to ./configure: dnl dnl --enable-hardening=try (default): dnl depending on whether any of CFLAGS_HARDENED_EXE, LDFLAGS_HARDENED_EXE, dnl CFLAGS_HARDENED_LIB or LDFLAGS_HARDENED_LIB environment variables dnl (see below) is set and non-null, all these custom flags (even if not dnl set) are used as are, otherwise the best effort is made to offer dnl reasonably strong hardening in several categories (RELRO, PIE, dnl "bind now", stack protector) according to what the selected toolchain dnl can offer dnl dnl --enable-hardening: dnl same effect as --enable-hardening=try when the environment variables dnl in question are suppressed dnl dnl --disable-hardening: dnl do not apply any targeted hardening measures at all dnl dnl The user-injected environment variables that regulate the hardening in dnl default case are as follows: dnl dnl * CFLAGS_HARDENED_EXE, LDFLAGS_HARDENED_EXE dnl compiler and linker flags (respectively) for daemon programs dnl (pacemakerd, pacemaker-attrd, pacemaker-controld, pacemaker-execd, dnl pacemaker-based, pacemaker-fenced, pacemaker-remoted, dnl pacemaker-schedulerd) dnl dnl * CFLAGS_HARDENED_LIB, LDFLAGS_HARDENED_LIB dnl compiler and linker flags (respectively) for libraries linked dnl with the daemon programs dnl dnl Note that these are purposedly targeted variables (addressing particular dnl targets all over the scattered Makefiles) and have no effect outside of dnl the predestined scope (e.g., CLI utilities). For a global reach, dnl use CFLAGS, LDFLAGS, etc. as usual. 
dnl dnl For guidance on the suitable flags consult, for instance: dnl https://fedoraproject.org/wiki/Changes/Harden_All_Packages#Detailed_Harden_Flags_Description dnl https://owasp.org/index.php/C-Based_Toolchain_Hardening#GCC.2FBinutils dnl AS_IF([test $enable_hardening -eq $OPTIONAL], [ AS_IF([test "$(env | grep -Ec '^(C|LD)FLAGS_HARDENED_(EXE|LIB)=.')" = 0], [enable_hardening=$REQUIRED], [AC_MSG_NOTICE([Hardening: using custom flags from environment])] ) ], [ unset CFLAGS_HARDENED_EXE unset CFLAGS_HARDENED_LIB unset LDFLAGS_HARDENED_EXE unset LDFLAGS_HARDENED_LIB ] ) AS_CASE([$enable_hardening], [$DISABLED], [AC_MSG_NOTICE([Hardening: explicitly disabled])], [$REQUIRED], [ CFLAGS_HARDENED_EXE= CFLAGS_HARDENED_LIB= LDFLAGS_HARDENED_EXE= LDFLAGS_HARDENED_LIB= relro=0 pie=0 bindnow=0 stackprot="none" # daemons incl. libs: partial RELRO flag="-Wl,-z,relro" CC_CHECK_LDFLAGS(["${flag}"], [ LDFLAGS_HARDENED_EXE="${LDFLAGS_HARDENED_EXE} ${flag}" LDFLAGS_HARDENED_LIB="${LDFLAGS_HARDENED_LIB} ${flag}" relro=1 ]) # daemons: PIE for both CFLAGS and LDFLAGS AS_IF([cc_supports_flag -fPIE], [ flag="-pie" CC_CHECK_LDFLAGS(["${flag}"], [ CFLAGS_HARDENED_EXE="${CFLAGS_HARDENED_EXE} -fPIE" LDFLAGS_HARDENED_EXE="${LDFLAGS_HARDENED_EXE} ${flag}" pie=1 ]) ] ) # daemons incl. libs: full RELRO if sensible + as-needed linking # so as to possibly mitigate startup performance # hit caused by excessive linking with unneeded # libraries AS_IF([test "${relro}" = 1 && test "${pie}" = 1], [ flag="-Wl,-z,now" CC_CHECK_LDFLAGS(["${flag}"], [ LDFLAGS_HARDENED_EXE="${LDFLAGS_HARDENED_EXE} ${flag}" LDFLAGS_HARDENED_LIB="${LDFLAGS_HARDENED_LIB} ${flag}" bindnow=1 ]) ] ) AS_IF([test "${bindnow}" = 1], [ flag="-Wl,--as-needed" CC_CHECK_LDFLAGS(["${flag}"], [ LDFLAGS_HARDENED_EXE="${LDFLAGS_HARDENED_EXE} ${flag}" LDFLAGS_HARDENED_LIB="${LDFLAGS_HARDENED_LIB} ${flag}" ]) ]) # universal: prefer strong > all > default stack protector if possible flag= AS_IF([cc_supports_flag -fstack-protector-strong], [ flag="-fstack-protector-strong" stackprot="strong" ], [cc_supports_flag -fstack-protector-all], [ flag="-fstack-protector-all" stackprot="all" ], [cc_supports_flag -fstack-protector], [ flag="-fstack-protector" stackprot="default" ] ) AS_IF([test -n "${flag}"], [CC_EXTRAS="${CC_EXTRAS} ${flag}"]) # universal: enable stack clash protection if possible AS_IF([cc_supports_flag -fstack-clash-protection], [ CC_EXTRAS="${CC_EXTRAS} -fstack-clash-protection" AS_IF([test "${stackprot}" = "none"], [stackprot="clash-only"], [stackprot="${stackprot}+clash"] ) ] ) # Log a summary AS_IF([test "${relro}" = 1 || test "${pie}" = 1 || test x"${stackprot}" != x"none"], [AC_MSG_NOTICE(m4_normalize([Hardening: relro=${relro} pie=${pie} bindnow=${bindnow} stackprot=${stackprot}])) ], [AC_MSG_WARN([Hardening: no suitable features in the toolchain detected])] ) ], ) CFLAGS="$SANITIZERS_CFLAGS $CFLAGS $CC_EXTRAS" LDFLAGS="$SANITIZERS_LDFLAGS $LDFLAGS" CFLAGS_HARDENED_EXE="$SANITIZERS_CFLAGS $CFLAGS_HARDENED_EXE" LDFLAGS_HARDENED_EXE="$SANITIZERS_LDFLAGS $LDFLAGS_HARDENED_EXE" NON_FATAL_CFLAGS="$CFLAGS" AC_SUBST(NON_FATAL_CFLAGS) dnl dnl We reset CFLAGS to include our warnings *after* all function dnl checking goes on, so that our warning flags don't keep the dnl AC_*FUNCS() calls above from working. In particular, -Werror will dnl *always* cause us troubles if we set it before here. 
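
Fatal warnings are handled next, just below. As an aside on the stack-protector probing in the hardening block above: when one of the -fstack-protector* variants is accepted, functions with on-stack buffers get a canary check, so an overflow aborts the process instead of silently corrupting the stack. A small stand-alone illustration (not code from the tree), assuming the build used -fstack-protector-strong or -fstack-protector-all:

    #include <string.h>

    /* 'name' is followed by a compiler-inserted canary; overrunning the
     * buffer changes the canary, and the function epilogue then calls
     * __stack_chk_fail(), typically printing "stack smashing detected"
     * and aborting.
     */
    static void
    copy_name(const char *input)
    {
        char name[16];

        strcpy(name, input);    /* deliberately unchecked, to trip the canary */
    }

    int
    main(void)
    {
        copy_name("this argument is considerably longer than sixteen bytes");
        return 0;
    }
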
dnl dnl AS_IF([test $enable_fatal_warnings -ne $DISABLED], [ AC_MSG_NOTICE([Enabling fatal compiler warnings]) CFLAGS="$CFLAGS $WERROR" ]) AC_SUBST(CFLAGS) dnl This is useful for use in Makefiles that need to remove one specific flag CFLAGS_COPY="$CFLAGS" AC_SUBST(CFLAGS_COPY) AC_SUBST(LIBADD_DL) dnl extra flags for dynamic linking libraries AC_SUBST(LOCALE) dnl Options for cleaning up the compiler output AS_IF([test $enable_quiet -ne $DISABLED], [ AC_MSG_NOTICE([Suppressing make details]) QUIET_LIBTOOL_OPTS="--silent" QUIET_MAKE_OPTS="-s" # POSIX compliant ], [ QUIET_LIBTOOL_OPTS="" QUIET_MAKE_OPTS="" ] ) dnl Put the above variables to use LIBTOOL="${LIBTOOL} --tag=CC \$(QUIET_LIBTOOL_OPTS)" MAKEFLAGS="${MAKEFLAGS} ${QUIET_MAKE_OPTS}" # Make features list available (sorted alphabetically, without leading space) PCMK_FEATURES=`echo "$PCMK_FEATURES" | sed -e 's/^ //' -e 's/ /\n/g' | sort | xargs` AC_DEFINE_UNQUOTED(CRM_FEATURES, "$PCMK_FEATURES", Set of enabled features) AC_SUBST(PCMK_FEATURES) AC_SUBST(CC) AC_SUBST(MAKEFLAGS) AC_SUBST(LIBTOOL) AC_SUBST(QUIET_LIBTOOL_OPTS) dnl Files we output that need to be executable CONFIG_FILES_EXEC([cts/cts-cli], [cts/cts-exec], [cts/cts-fencing], [cts/cts-regression], [cts/cts-scheduler], [cts/lxc_autogen.sh], [cts/benchmark/clubench], [cts/lab/CTSlab.py], [cts/lab/OCFIPraTest.py], [cts/lab/cluster_test], [cts/lab/cts], [cts/lab/cts-log-watcher], [cts/support/LSBDummy], [cts/support/cts-support], [cts/support/fence_dummy], [cts/support/pacemaker-cts-dummyd], [daemons/fenced/fence_legacy], [daemons/fenced/fence_watchdog], [doc/abi-check], [extra/resources/ClusterMon], [extra/resources/HealthSMART], [extra/resources/SysInfo], [extra/resources/controld], [extra/resources/ifspeed], [extra/resources/o2cb], [maint/bumplibs], [tools/crm_failcount], [tools/crm_master], [tools/crm_report], [tools/crm_standby], [tools/cibsecret], [tools/pcmk_simtimes]) dnl Other files we output AC_CONFIG_FILES(Makefile \ cts/Makefile \ cts/benchmark/Makefile \ cts/lab/CTSvars.py \ cts/lab/Makefile \ cts/scheduler/Makefile \ cts/scheduler/dot/Makefile \ cts/scheduler/exp/Makefile \ cts/scheduler/scores/Makefile \ cts/scheduler/stderr/Makefile \ cts/scheduler/summary/Makefile \ cts/scheduler/xml/Makefile \ cts/support/Makefile \ cts/support/pacemaker-cts-dummyd@.service \ daemons/Makefile \ daemons/attrd/Makefile \ daemons/based/Makefile \ daemons/controld/Makefile \ daemons/execd/Makefile \ daemons/execd/pacemaker_remote \ daemons/execd/pacemaker_remote.service \ daemons/fenced/Makefile \ daemons/pacemakerd/Makefile \ daemons/pacemakerd/pacemaker.combined.upstart \ daemons/pacemakerd/pacemaker.service \ daemons/pacemakerd/pacemaker.upstart \ daemons/schedulerd/Makefile \ devel/Makefile \ doc/Doxyfile \ doc/Makefile \ doc/sphinx/Makefile \ etc/Makefile \ etc/init.d/pacemaker \ etc/logrotate.d/pacemaker \ extra/Makefile \ extra/alerts/Makefile \ extra/resources/Makefile \ include/Makefile \ include/crm/Makefile \ include/crm/cib/Makefile \ include/crm/common/Makefile \ include/crm/cluster/Makefile \ include/crm/fencing/Makefile \ include/crm/pengine/Makefile \ include/pcmki/Makefile \ lib/Makefile \ lib/cib/Makefile \ lib/cluster/Makefile \ lib/common/Makefile \ lib/common/tests/Makefile \ lib/common/tests/acl/Makefile \ lib/common/tests/agents/Makefile \ lib/common/tests/cmdline/Makefile \ lib/common/tests/flags/Makefile \ lib/common/tests/health/Makefile \ lib/common/tests/io/Makefile \ lib/common/tests/iso8601/Makefile \ lib/common/tests/lists/Makefile \ 
lib/common/tests/nvpair/Makefile \ lib/common/tests/operations/Makefile \ lib/common/tests/options/Makefile \ lib/common/tests/output/Makefile \ lib/common/tests/procfs/Makefile \ lib/common/tests/results/Makefile \ lib/common/tests/scores/Makefile \ lib/common/tests/strings/Makefile \ lib/common/tests/utils/Makefile \ lib/common/tests/xml/Makefile \ lib/common/tests/xpath/Makefile \ lib/fencing/Makefile \ lib/gnu/Makefile \ lib/libpacemaker.pc \ lib/lrmd/Makefile \ lib/pacemaker/Makefile \ lib/pacemaker.pc \ lib/pacemaker-cib.pc \ lib/pacemaker-cluster.pc \ lib/pacemaker-fencing.pc \ lib/pacemaker-lrmd.pc \ lib/pacemaker-service.pc \ lib/pacemaker-pe_rules.pc \ lib/pacemaker-pe_status.pc \ lib/pengine/Makefile \ lib/pengine/tests/Makefile \ lib/pengine/tests/native/Makefile \ lib/pengine/tests/rules/Makefile \ lib/pengine/tests/status/Makefile \ lib/pengine/tests/unpack/Makefile \ lib/pengine/tests/utils/Makefile \ lib/services/Makefile \ maint/Makefile \ po/Makefile.in \ replace/Makefile \ rpm/Makefile \ tests/Makefile \ tools/Makefile \ tools/crm_mon.service \ tools/crm_mon.upstart \ tools/report.collector \ tools/report.common \ xml/Makefile \ xml/pacemaker-schemas.pc \ ) dnl Now process the entire list of files added by previous dnl calls to AC_CONFIG_FILES() AC_OUTPUT() dnl ***************** dnl Configure summary dnl ***************** AC_MSG_NOTICE([]) AC_MSG_NOTICE([$PACKAGE configuration:]) AC_MSG_NOTICE([ Version = ${VERSION} (Build: $BUILD_VERSION)]) AC_MSG_NOTICE([ Features = ${PCMK_FEATURES}]) AC_MSG_NOTICE([]) AC_MSG_NOTICE([ Prefix = ${prefix}]) AC_MSG_NOTICE([ Executables = ${sbindir}]) AC_MSG_NOTICE([ Man pages = ${mandir}]) AC_MSG_NOTICE([ Libraries = ${libdir}]) AC_MSG_NOTICE([ Header files = ${includedir}]) AC_MSG_NOTICE([ Arch-independent files = ${datadir}]) AC_MSG_NOTICE([ State information = ${localstatedir}]) AC_MSG_NOTICE([ System configuration = ${sysconfdir}]) AC_MSG_NOTICE([ OCF agents = ${OCF_ROOT_DIR}]) AC_MSG_NOTICE([]) AC_MSG_NOTICE([ HA group name = ${CRM_DAEMON_GROUP}]) AC_MSG_NOTICE([ HA user name = ${CRM_DAEMON_USER}]) AC_MSG_NOTICE([]) AC_MSG_NOTICE([ CFLAGS = ${CFLAGS}]) AC_MSG_NOTICE([ CFLAGS_HARDENED_EXE = ${CFLAGS_HARDENED_EXE}]) AC_MSG_NOTICE([ CFLAGS_HARDENED_LIB = ${CFLAGS_HARDENED_LIB}]) AC_MSG_NOTICE([ LDFLAGS_HARDENED_EXE = ${LDFLAGS_HARDENED_EXE}]) AC_MSG_NOTICE([ LDFLAGS_HARDENED_LIB = ${LDFLAGS_HARDENED_LIB}]) AC_MSG_NOTICE([ Libraries = ${LIBS}]) AC_MSG_NOTICE([ Stack Libraries = ${CLUSTERLIBS}]) AC_MSG_NOTICE([ Unix socket auth method = ${us_auth}]) diff --git a/cts/scheduler/xml/migration-ping-pong.xml b/cts/scheduler/xml/migration-ping-pong.xml index b459b92049..e50e500872 100644 --- a/cts/scheduler/xml/migration-ping-pong.xml +++ b/cts/scheduler/xml/migration-ping-pong.xml @@ -1,165 +1,165 @@ - + diff --git a/cts/scheduler/xml/probe-2.xml b/cts/scheduler/xml/probe-2.xml index 8bf4fd0c4d..1d741dd53b 100644 --- a/cts/scheduler/xml/probe-2.xml +++ b/cts/scheduler/xml/probe-2.xml @@ -1,481 +1,480 @@ - diff --git a/daemons/execd/cts-exec-helper.c b/daemons/execd/cts-exec-helper.c index 31cc79c684..cba388fe19 100644 --- a/daemons/execd/cts-exec-helper.c +++ b/daemons/execd/cts-exec-helper.c @@ -1,623 +1,625 @@ /* * Copyright 2012-2022 the Pacemaker project contributors * * The version control history for this file may have further details. * * This source code is licensed under the GNU Lesser General Public License * version 2.1 or later (LGPLv2.1+) WITHOUT ANY WARRANTY. 
*/ #include #include #include #include #include #include #include #include #include #include #include #include #define SUMMARY "cts-exec-helper - inject commands into the Pacemaker executor and watch for events" static int exec_call_id = 0; static gboolean start_test(gpointer user_data); static void try_connect(void); static char *key = NULL; static char *val = NULL; static struct { int verbose; int quiet; guint interval_ms; int timeout; int start_delay; int cancel_call_id; gboolean no_wait; gboolean is_running; gboolean no_connect; int exec_call_opts; const char *api_call; const char *rsc_id; const char *provider; const char *class; const char *type; const char *action; const char *listen; gboolean use_tls; lrmd_key_value_t *params; } options; static gboolean interval_cb(const gchar *option_name, const gchar *optarg, gpointer data, GError **error) { options.interval_ms = crm_parse_interval_spec(optarg); return errno == 0; } static gboolean notify_cb(const gchar *option_name, const gchar *optarg, gpointer data, GError **error) { if (pcmk__str_any_of(option_name, "--notify-orig", "-n", NULL)) { options.exec_call_opts = lrmd_opt_notify_orig_only; } else if (pcmk__str_any_of(option_name, "--notify-changes", "-o", NULL)) { options.exec_call_opts = lrmd_opt_notify_changes_only; } return TRUE; } static gboolean param_key_val_cb(const gchar *option_name, const gchar *optarg, gpointer data, GError **error) { if (pcmk__str_any_of(option_name, "--param-key", "-k", NULL)) { pcmk__str_update(&key, optarg); } else if (pcmk__str_any_of(option_name, "--param-val", "-v", NULL)) { pcmk__str_update(&val, optarg); } if (key != NULL && val != NULL) { options.params = lrmd_key_value_add(options.params, key, val); pcmk__str_update(&key, NULL); pcmk__str_update(&val, NULL); } return TRUE; } static GOptionEntry basic_entries[] = { { "api-call", 'c', 0, G_OPTION_ARG_STRING, &options.api_call, "Directly relates to executor API functions", NULL }, { "is-running", 'R', 0, G_OPTION_ARG_NONE, &options.is_running, "Determine if a resource is registered and running", NULL }, { "listen", 'l', 0, G_OPTION_ARG_STRING, &options.listen, "Listen for a specific event string", NULL }, { "no-wait", 'w', 0, G_OPTION_ARG_NONE, &options.no_wait, "Make api call and do not wait for result", NULL }, { "notify-changes", 'o', G_OPTION_FLAG_NO_ARG, G_OPTION_ARG_CALLBACK, notify_cb, "Only notify client changes to recurring operations", NULL }, { "notify-orig", 'n', G_OPTION_FLAG_NO_ARG, G_OPTION_ARG_CALLBACK, notify_cb, "Only notify this client of the results of an API action", NULL }, { "tls", 'S', 0, G_OPTION_ARG_NONE, &options.use_tls, "Use TLS backend for local connection", NULL }, { NULL } }; static GOptionEntry api_call_entries[] = { { "action", 'a', 0, G_OPTION_ARG_STRING, &options.action, NULL, NULL }, { "cancel-call-id", 'x', 0, G_OPTION_ARG_INT, &options.cancel_call_id, NULL, NULL }, { "class", 'C', 0, G_OPTION_ARG_STRING, &options.class, NULL, NULL }, { "interval", 'i', 0, G_OPTION_ARG_CALLBACK, interval_cb, NULL, NULL }, { "param-key", 'k', 0, G_OPTION_ARG_CALLBACK, param_key_val_cb, NULL, NULL }, { "param-val", 'v', 0, G_OPTION_ARG_CALLBACK, param_key_val_cb, NULL, NULL }, { "provider", 'P', 0, G_OPTION_ARG_STRING, &options.provider, NULL, NULL }, { "rsc-id", 'r', 0, G_OPTION_ARG_STRING, &options.rsc_id, NULL, NULL }, { "start-delay", 's', 0, G_OPTION_ARG_INT, &options.start_delay, NULL, NULL }, { "timeout", 't', 0, G_OPTION_ARG_INT, &options.timeout, NULL, NULL }, { "type", 'T', 0, G_OPTION_ARG_STRING, &options.type, NULL, 
NULL }, { NULL } }; static GMainLoop *mainloop = NULL; static lrmd_t *lrmd_conn = NULL; static char event_buf_v0[1024]; static crm_exit_t test_exit(crm_exit_t exit_code) { lrmd_api_delete(lrmd_conn); return crm_exit(exit_code); } -#define print_result(result) \ - if (!options.quiet) { \ - result; \ - } \ +#define print_result(fmt, args...) \ + if (!options.quiet) { \ + printf(fmt "\n" , ##args); \ + } #define report_event(event) \ snprintf(event_buf_v0, sizeof(event_buf_v0), "NEW_EVENT event_type:%s rsc_id:%s action:%s rc:%s op_status:%s", \ lrmd_event_type2str(event->type), \ event->rsc_id, \ event->op_type ? event->op_type : "none", \ services_ocf_exitcode_str(event->rc), \ pcmk_exec_status_str(event->op_status)); \ crm_info("%s", event_buf_v0); static void test_shutdown(int nsig) { lrmd_api_delete(lrmd_conn); lrmd_conn = NULL; } static void read_events(lrmd_event_data_t * event) { report_event(event); if (options.listen) { if (pcmk__str_eq(options.listen, event_buf_v0, pcmk__str_casei)) { - print_result(printf("LISTEN EVENT SUCCESSFUL\n")); + print_result("LISTEN EVENT SUCCESSFUL"); test_exit(CRM_EX_OK); } } if (exec_call_id && (event->call_id == exec_call_id)) { if (event->op_status == 0 && event->rc == 0) { - print_result(printf("API-CALL SUCCESSFUL for 'exec'\n")); + print_result("API-CALL SUCCESSFUL for 'exec'"); } else { - print_result(printf("API-CALL FAILURE for 'exec', rc:%d lrmd_op_status:%s\n", - event->rc, - pcmk_exec_status_str(event->op_status))); + print_result("API-CALL FAILURE for 'exec', rc:%d lrmd_op_status:%s", + event->rc, pcmk_exec_status_str(event->op_status)); test_exit(CRM_EX_ERROR); } if (!options.listen) { test_exit(CRM_EX_OK); } } } static gboolean timeout_err(gpointer data) { - print_result(printf("LISTEN EVENT FAILURE - timeout occurred, never found.\n")); + print_result("LISTEN EVENT FAILURE - timeout occurred, never found"); test_exit(CRM_EX_TIMEOUT); return FALSE; } static void connection_events(lrmd_event_data_t * event) { int rc = event->connection_rc; if (event->type != lrmd_event_connect) { /* ignore */ return; } if (!rc) { crm_info("Executor client connection established"); start_test(NULL); return; } else { sleep(1); try_connect(); crm_notice("Executor client connection failed"); } } static void try_connect(void) { int tries = 10; static int num_tries = 0; int rc = 0; lrmd_conn->cmds->set_callback(lrmd_conn, connection_events); for (; num_tries < tries; num_tries++) { rc = lrmd_conn->cmds->connect_async(lrmd_conn, crm_system_name, 3000); if (!rc) { return; /* we'll hear back in async callback */ } sleep(1); } - print_result(printf("API CONNECTION FAILURE\n")); + print_result("API CONNECTION FAILURE"); test_exit(CRM_EX_ERROR); } static gboolean start_test(gpointer user_data) { int rc = 0; if (!options.no_connect) { if (!lrmd_conn->cmds->is_connected(lrmd_conn)) { try_connect(); /* async connect -- this function will get called back into */ return 0; } } lrmd_conn->cmds->set_callback(lrmd_conn, read_events); if (options.timeout) { g_timeout_add(options.timeout, timeout_err, NULL); } if (!options.api_call) { return 0; } if (pcmk__str_eq(options.api_call, "exec", pcmk__str_casei)) { rc = lrmd_conn->cmds->exec(lrmd_conn, options.rsc_id, options.action, NULL, options.interval_ms, options.timeout, options.start_delay, options.exec_call_opts, options.params); if (rc > 0) { exec_call_id = rc; - print_result(printf("API-CALL 'exec' action pending, waiting on response\n")); + print_result("API-CALL 'exec' action pending, waiting on response"); } } else if 
(pcmk__str_eq(options.api_call, "register_rsc", pcmk__str_casei)) { rc = lrmd_conn->cmds->register_rsc(lrmd_conn, options.rsc_id, options.class, options.provider, options.type, 0); } else if (pcmk__str_eq(options.api_call, "get_rsc_info", pcmk__str_casei)) { lrmd_rsc_info_t *rsc_info; rsc_info = lrmd_conn->cmds->get_rsc_info(lrmd_conn, options.rsc_id, 0); if (rsc_info) { - print_result(printf("RSC_INFO: id:%s class:%s provider:%s type:%s\n", - rsc_info->id, rsc_info->standard, - rsc_info->provider ? rsc_info->provider : "", - rsc_info->type)); + print_result("RSC_INFO: id:%s class:%s provider:%s type:%s", + rsc_info->id, rsc_info->standard, + (rsc_info->provider? rsc_info->provider : ""), + rsc_info->type); lrmd_free_rsc_info(rsc_info); rc = pcmk_ok; } else { rc = -1; } } else if (pcmk__str_eq(options.api_call, "unregister_rsc", pcmk__str_casei)) { rc = lrmd_conn->cmds->unregister_rsc(lrmd_conn, options.rsc_id, 0); } else if (pcmk__str_eq(options.api_call, "cancel", pcmk__str_casei)) { rc = lrmd_conn->cmds->cancel(lrmd_conn, options.rsc_id, options.action, options.interval_ms); } else if (pcmk__str_eq(options.api_call, "metadata", pcmk__str_casei)) { char *output = NULL; rc = lrmd_conn->cmds->get_metadata(lrmd_conn, options.class, options.provider, options.type, &output, 0); if (rc == pcmk_ok) { - print_result(printf("%s", output)); + print_result("%s", output); free(output); } } else if (pcmk__str_eq(options.api_call, "list_agents", pcmk__str_casei)) { lrmd_list_t *list = NULL; lrmd_list_t *iter = NULL; rc = lrmd_conn->cmds->list_agents(lrmd_conn, &list, options.class, options.provider); if (rc > 0) { - print_result(printf("%d agents found\n", rc)); + print_result("%d agents found", rc); for (iter = list; iter != NULL; iter = iter->next) { - print_result(printf("%s\n", iter->val)); + print_result("%s", iter->val); } lrmd_list_freeall(list); rc = 0; } else { - print_result(printf("API_CALL FAILURE - no agents found\n")); + print_result("API_CALL FAILURE - no agents found"); rc = -1; } } else if (pcmk__str_eq(options.api_call, "list_ocf_providers", pcmk__str_casei)) { lrmd_list_t *list = NULL; lrmd_list_t *iter = NULL; rc = lrmd_conn->cmds->list_ocf_providers(lrmd_conn, options.type, &list); if (rc > 0) { - print_result(printf("%d providers found\n", rc)); + print_result("%d providers found", rc); for (iter = list; iter != NULL; iter = iter->next) { - print_result(printf("%s\n", iter->val)); + print_result("%s", iter->val); } lrmd_list_freeall(list); rc = 0; } else { - print_result(printf("API_CALL FAILURE - no providers found\n")); + print_result("API_CALL FAILURE - no providers found"); rc = -1; } } else if (pcmk__str_eq(options.api_call, "list_standards", pcmk__str_casei)) { lrmd_list_t *list = NULL; lrmd_list_t *iter = NULL; rc = lrmd_conn->cmds->list_standards(lrmd_conn, &list); if (rc > 0) { - print_result(printf("%d standards found\n", rc)); + print_result("%d standards found", rc); for (iter = list; iter != NULL; iter = iter->next) { - print_result(printf("%s\n", iter->val)); + print_result("%s", iter->val); } lrmd_list_freeall(list); rc = 0; } else { - print_result(printf("API_CALL FAILURE - no providers found\n")); + print_result("API_CALL FAILURE - no providers found"); rc = -1; } } else if (pcmk__str_eq(options.api_call, "get_recurring_ops", pcmk__str_casei)) { GList *op_list = NULL; GList *op_item = NULL; rc = lrmd_conn->cmds->get_recurring_ops(lrmd_conn, options.rsc_id, 0, 0, &op_list); for (op_item = op_list; op_item != NULL; op_item = op_item->next) { lrmd_op_info_t *op_info 
= op_item->data; - print_result(printf("RECURRING_OP: %s_%s_%s timeout=%sms\n", - op_info->rsc_id, op_info->action, - op_info->interval_ms_s, op_info->timeout_ms_s)); + print_result("RECURRING_OP: %s_%s_%s timeout=%sms", + op_info->rsc_id, op_info->action, + op_info->interval_ms_s, op_info->timeout_ms_s); lrmd_free_op_info(op_info); } g_list_free(op_list); } else if (options.api_call) { - print_result(printf("API-CALL FAILURE unknown action '%s'\n", options.action)); + print_result("API-CALL FAILURE unknown action '%s'", options.action); test_exit(CRM_EX_ERROR); } if (rc < 0) { - print_result(printf("API-CALL FAILURE for '%s' api_rc:%d\n", options.api_call, rc)); + print_result("API-CALL FAILURE for '%s' api_rc:%d", + options.api_call, rc); test_exit(CRM_EX_ERROR); } if (options.api_call && rc == pcmk_ok) { - print_result(printf("API-CALL SUCCESSFUL for '%s'\n", options.api_call)); + print_result("API-CALL SUCCESSFUL for '%s'", options.api_call); if (!options.listen) { test_exit(CRM_EX_OK); } } if (options.no_wait) { /* just make the call and exit regardless of anything else. */ test_exit(CRM_EX_OK); } return 0; } +/*! + * \internal + * \brief Generate resource parameters from CIB if none explicitly given + * + * \return Standard Pacemaker return code + */ static int generate_params(void) { - int rc = 0; + int rc = pcmk_rc_ok; pe_working_set_t *data_set = NULL; xmlNode *cib_xml_copy = NULL; pe_resource_t *rsc = NULL; GHashTable *params = NULL; GHashTable *meta = NULL; GHashTableIter iter; + char *key = NULL; + char *value = NULL; - if (options.params) { - return 0; + if (options.params != NULL) { + return pcmk_rc_ok; // User specified parameters explicitly } - data_set = pe_new_working_set(); - if (data_set == NULL) { - crm_crit("Could not allocate working set"); - return -ENOMEM; - } - pe__set_working_set_flags(data_set, pe_flag_no_counts|pe_flag_no_compat); - + // Retrieve and update CIB rc = cib__signon_query(NULL, &cib_xml_copy); - if (rc != pcmk_rc_ok) { crm_err("CIB query failed: %s", pcmk_rc_str(rc)); - goto param_gen_bail; + return rc; } - - if (cli_config_update(&cib_xml_copy, NULL, FALSE) == FALSE) { - crm_err("Error updating cib configuration"); - rc = -1; - goto param_gen_bail; + if (!cli_config_update(&cib_xml_copy, NULL, FALSE)) { + crm_err("Could not update CIB"); + return pcmk_rc_cib_corrupt; } + // Calculate cluster status + data_set = pe_new_working_set(); + if (data_set == NULL) { + crm_crit("Could not allocate working set"); + return ENOMEM; + } + pe__set_working_set_flags(data_set, pe_flag_no_counts|pe_flag_no_compat); data_set->input = cib_xml_copy; data_set->now = crm_time_new(NULL); - cluster_status(data_set); - if (options.rsc_id) { - rsc = pe_find_resource_with_flags(data_set->resources, options.rsc_id, - pe_find_renamed|pe_find_any); - } - if (!rsc) { + // Find resource in CIB + rsc = pe_find_resource_with_flags(data_set->resources, options.rsc_id, + pe_find_renamed|pe_find_any); + if (rsc == NULL) { crm_err("Resource does not exist in config"); - rc = -1; - goto param_gen_bail; + pe_free_working_set(data_set); + return EINVAL; } + // Add resource instance parameters to options.params params = pe_rsc_params(rsc, NULL, data_set); - meta = pcmk__strkey_table(free, free); - - get_meta_attributes(meta, rsc, NULL, data_set); - if (params != NULL) { - char *key = NULL; - char *value = NULL; - g_hash_table_iter_init(&iter, params); - while (g_hash_table_iter_next(&iter, (gpointer *) & key, (gpointer *) & value)) { + while (g_hash_table_iter_next(&iter, (gpointer *) 
&key, + (gpointer *) &value)) { options.params = lrmd_key_value_add(options.params, key, value); } } - if (meta) { - char *key = NULL; - char *value = NULL; - - g_hash_table_iter_init(&iter, meta); - while (g_hash_table_iter_next(&iter, (gpointer *) & key, (gpointer *) & value)) { - char *crm_name = crm_meta_name(key); + // Add resource meta-attributes to options.params + meta = pcmk__strkey_table(free, free); + get_meta_attributes(meta, rsc, NULL, data_set); + g_hash_table_iter_init(&iter, meta); + while (g_hash_table_iter_next(&iter, (gpointer *) &key, + (gpointer *) &value)) { + char *crm_name = crm_meta_name(key); - options.params = lrmd_key_value_add(options.params, crm_name, value); - free(crm_name); - } - g_hash_table_destroy(meta); + options.params = lrmd_key_value_add(options.params, crm_name, value); + free(crm_name); } + g_hash_table_destroy(meta); - param_gen_bail: pe_free_working_set(data_set); return rc; } static GOptionContext * build_arg_context(pcmk__common_args_t *args, GOptionGroup **group) { GOptionContext *context = NULL; context = pcmk__build_arg_context(args, NULL, group, NULL); pcmk__add_main_args(context, basic_entries); pcmk__add_arg_group(context, "api-call", "API Call Options:", "Parameters for api-call option", api_call_entries); return context; } int main(int argc, char **argv) { GError *error = NULL; crm_exit_t exit_code = CRM_EX_OK; - - crm_trigger_t *trig; + crm_trigger_t *trig = NULL; pcmk__common_args_t *args = pcmk__new_common_args(SUMMARY); /* Typically we'd pass all the single character options that take an argument * as the second parameter here (and there's a bunch of those in this tool). * However, we control how this program is called so we can just not call it * in a way where the preprocessing ever matters. */ gchar **processed_args = pcmk__cmdline_preproc(argv, NULL); GOptionContext *context = build_arg_context(args, NULL); if (!g_option_context_parse_strv(context, &processed_args, &error)) { exit_code = CRM_EX_USAGE; goto done; } /* We have to use crm_log_init here to set up the logging because there's * different handling for daemons vs. command line programs, and * pcmk__cli_init_logging is set up to only handle the latter. */ crm_log_init(NULL, LOG_INFO, TRUE, (args->verbosity? 
TRUE : FALSE), argc, argv, FALSE); for (int i = 0; i < args->verbosity; i++) { crm_bump_log_level(argc, argv); } if (!options.listen && pcmk__strcase_any_of(options.api_call, "metadata", "list_agents", "list_standards", "list_ocf_providers", NULL)) { options.no_connect = TRUE; } if (options.is_running) { - if (!options.timeout) { - options.timeout = 30000; + int rc = pcmk_rc_ok; + + if (options.rsc_id == NULL) { + exit_code = CRM_EX_USAGE; + g_set_error(&error, PCMK__EXITC_ERROR, exit_code, + "--is-running requires --rsc-id"); + goto done; } + options.interval_ms = 0; - if (!options.rsc_id) { - exit_code = CRM_EX_ERROR; - g_set_error(&error, PCMK__EXITC_ERROR, exit_code, "rsc-id must be given when is-running is used"); - goto done; + if (options.timeout == 0) { + options.timeout = 30000; } - if (generate_params()) { - exit_code = CRM_EX_ERROR; - print_result(printf - ("Failed to retrieve rsc parameters from cib, can not determine if rsc is running.\n")); + rc = generate_params(); + if (rc != pcmk_rc_ok) { + exit_code = pcmk_rc2exitc(rc); + g_set_error(&error, PCMK__EXITC_ERROR, exit_code, + "Can not determine resource status: " + "unable to get parameters from CIB"); goto done; } options.api_call = "exec"; options.action = "monitor"; options.exec_call_opts = lrmd_opt_notify_orig_only; } - /* if we can't perform an api_call or listen for events, - * there is nothing to do */ if (!options.api_call && !options.listen) { + exit_code = CRM_EX_USAGE; g_set_error(&error, PCMK__EXITC_ERROR, exit_code, - "Nothing to be done. Please specify 'api-call' and/or 'listen'"); + "Must specify at least one of --api-call, --listen, " + "or --is-running"); goto done; } if (options.use_tls) { lrmd_conn = lrmd_remote_api_new(NULL, "localhost", 0); } else { lrmd_conn = lrmd_api_new(); } trig = mainloop_add_trigger(G_PRIORITY_HIGH, start_test, NULL); mainloop_set_trigger(trig); mainloop_add_signal(SIGTERM, test_shutdown); crm_info("Starting"); mainloop = g_main_loop_new(NULL, FALSE); g_main_loop_run(mainloop); done: g_strfreev(processed_args); pcmk__free_arg_context(context); free(key); free(val); pcmk__output_and_clear_error(error, NULL); - return test_exit(CRM_EX_OK); + return test_exit(exit_code); } diff --git a/daemons/execd/remoted_pidone.c b/daemons/execd/remoted_pidone.c index 7204d3b5db..4f914ebd53 100644 --- a/daemons/execd/remoted_pidone.c +++ b/daemons/execd/remoted_pidone.c @@ -1,298 +1,298 @@ /* * Copyright 2017-2020 the Pacemaker project contributors * * The version control history for this file may have further details. * * This source code is licensed under the GNU Lesser General Public License * version 2.1 or later (LGPLv2.1+) WITHOUT ANY WARRANTY. */ #include #include #include #include #include #include #include #include #include #include "pacemaker-execd.h" static pid_t main_pid = 0; static void sigdone(void) { exit(CRM_EX_OK); } static void sigreap(void) { pid_t pid = 0; int status; do { /* * Opinions seem to differ as to what to put here: * -1, any child process * 0, any child process whose process group ID is equal to that of the calling process */ pid = waitpid(-1, &status, WNOHANG); if (pid == main_pid) { /* Exit when pacemaker-remote exits and use the same return code */ if (WIFEXITED(status)) { exit(WEXITSTATUS(status)); } exit(CRM_EX_ERROR); } } while (pid > 0); } static struct { int sig; void (*handler)(void); } sigmap[] = { { SIGCHLD, sigreap }, { SIGINT, sigdone }, }; /*! 
* \internal * \brief Check a line of text for a valid environment variable name * * \param[in] line Text to check * \param[out] first First character of valid name if found, NULL otherwise * \param[out] last Last character of valid name if found, NULL otherwise * * \return TRUE if valid name found, FALSE otherwise * \note It's reasonable to impose limitations on environment variable names * beyond what C or setenv() does: We only allow names that contain only * [a-zA-Z0-9_] characters and do not start with a digit. */ static bool find_env_var_name(char *line, char **first, char **last) { // Skip leading whitespace *first = line; while (isspace(**first)) { ++*first; } if (isalpha(**first) || (**first == '_')) { // Valid first character *last = *first; while (isalnum(*(*last + 1)) || (*(*last + 1) == '_')) { ++*last; } return TRUE; } *first = *last = NULL; return FALSE; } static void load_env_vars(const char *filename) { /* We haven't forked or initialized logging yet, so don't leave any file * descriptors open, and don't log -- silently ignore errors. */ FILE *fp = fopen(filename, "r"); if (fp != NULL) { char line[LINE_MAX] = { '\0', }; while (fgets(line, LINE_MAX, fp) != NULL) { char *name = NULL; char *end = NULL; char *value = NULL; char *quote = NULL; // Look for valid name immediately followed by equals sign if (find_env_var_name(line, &name, &end) && (*++end == '=')) { // Null-terminate name, and advance beyond equals sign *end++ = '\0'; // Check whether value is quoted if ((*end == '\'') || (*end == '"')) { quote = end++; } value = end; if (quote) { /* Value is remaining characters up to next non-backslashed * matching quote character. */ while (((*end != *quote) || (*(end - 1) == '\\')) && (*end != '\0')) { end++; } if (*end == *quote) { // Null-terminate value, and advance beyond close quote *end++ = '\0'; } else { // Matching closing quote wasn't found value = NULL; } } else { /* Value is remaining characters up to next non-backslashed * whitespace. */ while ((!isspace(*end) || (*(end - 1) == '\\')) && (*end != '\0')) { ++end; } if (end == (line + LINE_MAX - 1)) { // Line was too long value = NULL; } // Do NOT null-terminate value (yet) } /* We have a valid name and value, and end is now the character * after the closing quote or the first whitespace after the * unquoted value. Make sure the rest of the line is just * whitespace or a comment. */ if (value) { char *value_end = end; while (isspace(*end) && (*end != '\n')) { ++end; } if ((*end == '\n') || (*end == '#')) { if (quote == NULL) { // Now we can null-terminate an unquoted value *value_end = '\0'; } // Don't overwrite (bundle options take precedence) setenv(name, value, 0); } else { value = NULL; } } } if ((value == NULL) && (strchr(line, '\n') == NULL)) { // Eat remainder of line beyond LINE_MAX if (fscanf(fp, "%*[^\n]\n") == EOF) { value = NULL; // Don't care, make compiler happy } } } fclose(fp); } } void remoted_spawn_pidone(int argc, char **argv, char **envp) { sigset_t set; /* This environment variable exists for two purposes: * - For testing, setting it to "full" enables full PID 1 behavior even * when PID is not 1 * - Setting to "vars" enables just the loading of environment variables * from /etc/pacemaker/pcmk-init.env, which could be useful for testing or * containers with a custom PID 1 script that launches pacemaker-remoted. */ const char *pid1 = (getpid() == 1)? 
"full" : getenv("PCMK_remote_pid1"); if (pid1 == NULL) { return; } /* When a container is launched, it may be given specific environment * variables, which for Pacemaker bundles are given in the bundle * configuration. However, that does not allow for host-specific values. * To allow for that, look for a special file containing a shell-like syntax * of name/value pairs, and export those into the environment. */ load_env_vars("/etc/pacemaker/pcmk-init.env"); if (strcmp(pid1, "full")) { return; } /* Containers can be expected to have /var/log, but they may not have * /var/log/pacemaker, so use a different default if no value has been * explicitly configured in the container's environment. */ if (pcmk__env_option(PCMK__ENV_LOGFILE) == NULL) { pcmk__set_env_option(PCMK__ENV_LOGFILE, "/var/log/pcmk-init.log"); } sigfillset(&set); sigprocmask(SIG_BLOCK, &set, 0); main_pid = fork(); switch (main_pid) { case 0: sigprocmask(SIG_UNBLOCK, &set, NULL); setsid(); setpgid(0, 0); // Child remains as pacemaker-remoted return; case -1: perror("fork"); } /* Parent becomes the reaper of zombie processes */ /* Safe to initialize logging now if needed */ -# ifdef HAVE___PROGNAME +# ifdef HAVE_PROGNAME /* Differentiate ourselves in the 'ps' output */ { char *p; int i, maxlen; char *LastArgv = NULL; const char *name = "pcmk-init"; for (i = 0; i < argc; i++) { if (!i || (LastArgv + 1 == argv[i])) LastArgv = argv[i] + strlen(argv[i]); } for (i = 0; envp[i] != NULL; i++) { if ((LastArgv + 1) == envp[i]) { LastArgv = envp[i] + strlen(envp[i]); } } maxlen = (LastArgv - argv[0]) - 2; i = strlen(name); /* We can overwrite individual argv[] arguments */ snprintf(argv[0], maxlen, "%s", name); /* Now zero out everything else */ p = &argv[0][i]; while (p < LastArgv) { *p++ = '\0'; } argv[1] = NULL; } -# endif // HAVE___PROGNAME +# endif // HAVE_PROGNAME while (1) { int sig; size_t i; sigwait(&set, &sig); for (i = 0; i < PCMK__NELEM(sigmap); i++) { if (sigmap[i].sig == sig) { sigmap[i].handler(); break; } } } } diff --git a/devel/Makefile.am b/devel/Makefile.am index 4b04876fd6..b544b8ae44 100644 --- a/devel/Makefile.am +++ b/devel/Makefile.am @@ -1,129 +1,245 @@ # # Copyright 2020-2022 the Pacemaker project contributors # # The version control history for this file may have further details. # # This source code is licensed under the GNU General Public License version 2 # or later (GPLv2+) WITHOUT ANY WARRANTY. # +include $(top_srcdir)/mk/common.mk +include $(top_srcdir)/mk/release.mk + # Coccinelle is a tool that takes special patch-like files (called semantic patches) and # applies them throughout a source tree. This is useful when refactoring, changing APIs, # catching dangerous or incorrect code, and other similar tasks. It's not especially # easy to write a semantic patch but most users should only be concerned about running # the target and inspecting the results. # # Documentation (including examples, which are the most useful): # https://coccinelle.gitlabpages.inria.fr/website/docs/ # # Run the "make cocci" target to just output what would be done, or "make cocci-inplace" # to apply the changes to the source tree. # # COCCI_FILES may be set on the command line, if you want to test just a single file # while it's under development. Otherwise, it is a list of all the files that are ready # to be run. # # ref-passed-variables-inited.cocci seems to be returning some false positives around # GHashTableIters, so it is disabled for the moment. 
COCCI_FILES ?= coccinelle/string-any-of.cocci \ coccinelle/string-empty.cocci \ coccinelle/string-null-matches.cocci \ coccinelle/use-func.cocci dist_noinst_SCRIPTS = coccinelle/test/testrunner.sh EXTRA_DIST = README gdbhelpers $(COCCI_FILES) \ coccinelle/ref-passed-variables-inited.cocci \ coccinelle/rename-fn.cocci \ coccinelle/test/ref-passed-variables-inited.input.c \ coccinelle/test/ref-passed-variables-inited.output # Any file in this list is allowed to use any of the pcmk__ internal functions. # Coccinelle can use any transformation that depends on "internal" to rewrite # code to use the internal functions. MAY_USE_INTERNAL_FILES = $(shell find .. -path "../lib/*.c" -o -path "../lib/*private.h" -o -path "../tools/*.c" -o -path "../daemons/*.c" -o -path '../include/pcmki/*h' -o -name '*internal.h') # And then any file in this list is public API, which may not use internal # functions. Thus, only those transformations that do not depend on "internal" # may be applied. OTHER_FILES = $(shell find ../include -name '*h' -a \! -name '*internal.h' -a \! -path '../include/pcmki/*') cocci: -for cf in $(COCCI_FILES); do \ for f in $(MAY_USE_INTERNAL_FILES); do \ spatch $(_SPATCH_FLAGS) -D internal --very-quiet --local-includes --preprocess --sp-file $$cf $$f; \ done ; \ for f in $(OTHER_FILES); do \ spatch $(_SPATCH_FLAGS) --very-quiet --local-includes --preprocess --sp-file $$cf $$f; \ done ; \ done cocci-inplace: $(MAKE) $(AM_MAKEFLAGS) _SPATCH_FLAGS=--in-place cocci cocci-test: for f in coccinelle/test/*.c; do \ coccinelle/test/testrunner.sh $$f; \ done +# +# Static analysis +# + +## clang + +# See scan-build(1) for possible checkers (leave empty to use default set) +CLANG_checkers ?= + +clang: + OUT=$$(cd $(top_builddir) \ + && scan-build $(CLANG_checkers:%=-enable-checker %) \ + $(MAKE) $(AM_MAKEFLAGS) CFLAGS="-std=c99 $(CFLAGS)" \ + clean all 2>&1); \ + REPORT=$$(echo "$$OUT" \ + | sed -n -e "s/.*'scan-view \(.*\)'.*/\1/p"); \ + [ -z "$$REPORT" ] && echo "$$OUT" || scan-view "$$REPORT" + +## coverity + +# Aggressiveness (low, medium, or high) +COVLEVEL ?= low + +# Generated outputs +COVERITY_DIR = $(abs_top_builddir)/coverity-$(TAG) +COVTAR = $(abs_top_builddir)/$(PACKAGE)-coverity-$(TAG).tgz +COVEMACS = $(abs_top_builddir)/$(TAG).coverity +COVHTML = $(COVERITY_DIR)/output/errors + +# Coverity outputs are phony so they get rebuilt every invocation + +.PHONY: $(COVERITY_DIR) +$(COVERITY_DIR): coverity-clean + $(MAKE) $(AM_MAKEFLAGS) -C $(top_builddir) init core-clean + $(AM_V_GEN)cd $(top_builddir) \ + && cov-build --dir "$@" $(MAKE) $(AM_MAKEFLAGS) core + +# Public coverity instance + +.PHONY: $(COVTAR) +$(COVTAR): $(COVERITY_DIR) + $(AM_V_GEN)tar czf "$@" --transform="s@.*$(TAG)@cov-int@" "$<" + +.PHONY: coverity +coverity: $(COVTAR) + @echo "Now go to https://scan.coverity.com/users/sign_in and upload:" + @echo " $(COVTAR)" + @echo "then make clean at the top level" + +# Licensed coverity instance +# +# The prerequisites are a little hacky; rather than actually required, some +# of them are designed so that things execute in the proper order (which is +# not the same as GNU make's order-only prerequisites). + +.PHONY: coverity-analyze +coverity-analyze: $(COVERITY_DIR) + @echo "" + @echo "Analyzing (waiting for coverity license if necessary) ..." 
+ cd $(top_builddir) && cov-analyze --dir "$<" --wait-for-license \ + --security --aggressiveness-level "$(COVLEVEL)" + +.PHONY: $(COVEMACS) +$(COVEMACS): coverity-analyze + $(AM_V_GEN)cd $(top_builddir) \ + && cov-format-errors --dir "$(COVERITY_DIR)" --emacs-style > "$@" + +.PHONY: $(COVHTML) +$(COVHTML): $(COVEMACS) + $(AM_V_GEN)cd $(top_builddir) \ + && cov-format-errors --dir "$(COVERITY_DIR)" --html-output "$@" + +.PHONY: coverity-corp +coverity-corp: $(COVHTML) + $(MAKE) $(AM_MAKEFLAGS) -C $(top_builddir) core-clean + @echo "Done. See:" + @echo " file://$(COVHTML)/index.html" + @echo "When no longer needed, make coverity-clean" + +# Remove all outputs regardless of tag +.PHONY: coverity-clean +coverity-clean: + -rm -rf "$(abs_builddir)"/coverity-* \ + "$(abs_builddir)"/$(PACKAGE)-coverity-*.tgz \ + "$(abs_builddir)"/*.coverity + + +## cppcheck + +# Use CPPCHECK_ARGS to pass extra cppcheck options, e.g.: +# --enable={warning,style,performance,portability,information,all} +# --inconclusive --std=posix +# -DBUILD_PUBLIC_LIBPACEMAKER -DDEFAULT_CONCURRENT_FENCING_TRUE +CPPCHECK_ARGS ?= + +CPPCHECK_DIRS = replace lib daemons tools +CPPCHECK_OUT = $(abs_top_builddir)/cppcheck.out + +cppcheck: + cppcheck $(CPPCHECK_ARGS) -I $(top_srcdir)/include \ + --output-file=$(CPPCHECK_OUT) \ + --max-configs=30 --inline-suppr -q \ + --library=posix --library=gnu --library=gtk \ + $(GLIB_CFLAGS) -D__GNUC__ \ + $(foreach dir,$(CPPCHECK_DIRS),$(top_srcdir)/$(dir)) + @echo "Done: See $(CPPCHECK_OUT)" + @echo "When no longer needed, make cppcheck-clean" + +.PHONY: cppcheck-clean +cppcheck-clean: + -rm -f "$(CPPCHECK_OUT)" + + # # indent cannot cope with all our exceptions and needs heavy manual editing # # indent target: Limit indent to these directories INDENT_DIRS ?= . # indent target: Extra options to pass to indent INDENT_OPTS ?= INDENT_IGNORE_PATHS = daemons/controld/controld_fsa.h \ lib/gnu/* INDENT_PACEMAKER_STYLE = --blank-lines-after-declarations \ --blank-lines-after-procedures \ --braces-after-func-def-line \ --braces-on-if-line \ --braces-on-struct-decl-line \ --break-before-boolean-operator \ --case-brace-indentation4 \ --case-indentation4 \ --comment-indentation0 \ --continuation-indentation4 \ --continue-at-parentheses \ --cuddle-do-while \ --cuddle-else \ --declaration-comment-column0 \ --declaration-indentation1 \ --else-endif-column0 \ --honour-newlines \ --indent-label0 \ --indent-level4 \ --line-comments-indentation0 \ --line-length80 \ --no-blank-lines-after-commas \ --no-comment-delimiters-on-blank-lines \ --no-space-after-function-call-names \ --no-space-after-parentheses \ --no-tabs \ --preprocessor-indentation2 \ --procnames-start-lines \ --space-after-cast \ --start-left-side-of-comments \ --swallow-optional-blank-lines \ --tab-size8 indent: VERSION_CONTROL=none \ find $(INDENT_DIRS) -type f -name "*.[ch]" \ $(INDENT_IGNORE_PATHS:%= ! -path '%') \ -exec indent $(INDENT_PACEMAKER_STYLE) $(INDENT_OPTS) \{\} \; # # Scratch file for ad-hoc testing # EXTRA_PROGRAMS = scratch nodist_scratch_SOURCES = scratch.c scratch_LDADD = $(top_builddir)/lib/common/libcrmcommon.la -clean-local: +clean-local: coverity-clean cppcheck-clean -rm -f $(EXTRA_PROGRAMS) diff --git a/lib/common/iso8601.c b/lib/common/iso8601.c index f3c364e55c..8a35a89dcd 100644 --- a/lib/common/iso8601.c +++ b/lib/common/iso8601.c @@ -1,1808 +1,1809 @@ /* * Copyright 2005-2022 the Pacemaker project contributors * * The version control history for this file may have further details. 
* * This source code is licensed under the GNU Lesser General Public License * version 2.1 or later (LGPLv2.1+) WITHOUT ANY WARRANTY. */ /* * References: * https://en.wikipedia.org/wiki/ISO_8601 * http://www.staff.science.uu.nl/~gent0113/calendar/isocalendar.htm */ #include #include #include #include #include #include #include /* * Andrew's code was originally written for OSes whose "struct tm" contains: * long tm_gmtoff; :: Seconds east of UTC * const char *tm_zone; :: Timezone abbreviation * Some OSes lack these, instead having: * time_t (or long) timezone; :: "difference between UTC and local standard time" * char *tzname[2] = { "...", "..." }; * I (David Lee) confess to not understanding the details. So my attempted * generalisations for where their use is necessary may be flawed. * * 1. Does "difference between ..." subtract the same or opposite way? * 2. Should it use "altzone" instead of "timezone"? * 3. Should it use tzname[0] or tzname[1]? Interaction with timezone/altzone? */ #if defined(HAVE_STRUCT_TM_TM_GMTOFF) # define GMTOFF(tm) ((tm)->tm_gmtoff) #else /* Note: extern variable; macro argument not actually used. */ # define GMTOFF(tm) (-timezone+daylight) #endif #define HOUR_SECONDS (60 * 60) #define DAY_SECONDS (HOUR_SECONDS * 24) // A date/time or duration struct crm_time_s { int years; // Calendar year (date/time) or number of years (duration) int months; // Number of months (duration only) int days; // Ordinal day of year (date/time) or number of days (duration) int seconds; // Seconds of day (date/time) or number of seconds (duration) int offset; // Seconds offset from UTC (date/time only) bool duration; // True if duration }; static crm_time_t *parse_date(const char *date_str); static crm_time_t * crm_get_utc_time(const crm_time_t *dt) { crm_time_t *utc = NULL; if (dt == NULL) { errno = EINVAL; return NULL; } utc = crm_time_new_undefined(); utc->years = dt->years; utc->days = dt->days; utc->seconds = dt->seconds; utc->offset = 0; if (dt->offset) { crm_time_add_seconds(utc, -dt->offset); } else { /* Durations (which are the only things that can include months, never have a timezone */ utc->months = dt->months; } crm_time_log(LOG_TRACE, "utc-source", dt, crm_time_log_date | crm_time_log_timeofday | crm_time_log_with_timezone); crm_time_log(LOG_TRACE, "utc-target", utc, crm_time_log_date | crm_time_log_timeofday | crm_time_log_with_timezone); return utc; } crm_time_t * crm_time_new(const char *date_time) { time_t tm_now; crm_time_t *dt = NULL; tzset(); if (date_time == NULL) { tm_now = time(NULL); dt = crm_time_new_undefined(); crm_time_set_timet(dt, &tm_now); } else { dt = parse_date(date_time); } return dt; } /*! * \brief Allocate memory for an uninitialized time object * * \return Newly allocated time object * \note The caller is responsible for freeing the return value using * crm_time_free(). */ crm_time_t * crm_time_new_undefined(void) { crm_time_t *result = calloc(1, sizeof(crm_time_t)); CRM_ASSERT(result != NULL); return result; } /*! 
* \brief Check whether a time object has been initialized yet * * \param[in] t Time object to check * * \return TRUE if time object has been initialized, FALSE otherwise */ bool crm_time_is_defined(const crm_time_t *t) { // Any nonzero member indicates something has been done to t return (t != NULL) && (t->years || t->months || t->days || t->seconds || t->offset || t->duration); } void crm_time_free(crm_time_t * dt) { if (dt == NULL) { return; } free(dt); } static int year_days(int year) { int d = 365; if (crm_time_leapyear(year)) { d++; } return d; } /* From http://myweb.ecu.edu/mccartyr/ISOwdALG.txt : * * 5. Find the Jan1Weekday for Y (Monday=1, Sunday=7) * YY = (Y-1) % 100 * C = (Y-1) - YY * G = YY + YY/4 * Jan1Weekday = 1 + (((((C / 100) % 4) x 5) + G) % 7) */ int crm_time_january1_weekday(int year) { int YY = (year - 1) % 100; int C = (year - 1) - YY; int G = YY + YY / 4; int jan1 = 1 + (((((C / 100) % 4) * 5) + G) % 7); crm_trace("YY=%d, C=%d, G=%d", YY, C, G); crm_trace("January 1 %.4d: %d", year, jan1); return jan1; } int crm_time_weeks_in_year(int year) { int weeks = 52; int jan1 = crm_time_january1_weekday(year); /* if jan1 == thursday */ if (jan1 == 4) { weeks++; } else { jan1 = crm_time_january1_weekday(year + 1); /* if dec31 == thursday aka. jan1 of next year is a friday */ if (jan1 == 5) { weeks++; } } return weeks; } // Jan-Dec plus Feb of leap years static int month_days[13] = { 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31, 29 }; /*! * \brief Return number of days in given month of given year * * \param[in] Ordinal month (1-12) * \param[in] Gregorian year * * \return Number of days in given month (0 if given month is invalid) */ int crm_time_days_in_month(int month, int year) { if ((month < 1) || (month > 12)) { return 0; } if ((month == 2) && crm_time_leapyear(year)) { month = 13; } return month_days[month - 1]; } bool crm_time_leapyear(int year) { gboolean is_leap = FALSE; if (year % 4 == 0) { is_leap = TRUE; } if (year % 100 == 0 && year % 400 != 0) { is_leap = FALSE; } return is_leap; } static uint32_t get_ordinal_days(uint32_t y, uint32_t m, uint32_t d) { int lpc; for (lpc = 1; lpc < m; lpc++) { d += crm_time_days_in_month(lpc, y); } return d; } void crm_time_log_alias(int log_level, const char *file, const char *function, int line, const char *prefix, const crm_time_t *date_time, int flags) { char *date_s = crm_time_as_string(date_time, flags); if (log_level == LOG_STDOUT) { printf("%s%s%s\n", (prefix? prefix : ""), (prefix? ": " : ""), date_s); } else { do_crm_log_alias(log_level, file, function, line, "%s%s%s", (prefix? prefix : ""), (prefix? 
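The leap-year test and the January 1 weekday formula above implement the referenced ISO week-date algorithm directly. A standalone sketch (plain C, not project API) that reproduces both, handy for sanity-checking the arithmetic:

    #include <stdbool.h>
    #include <stdio.h>

    // Gregorian leap-year rule, as implemented by crm_time_leapyear() above
    static bool is_leapyear(int year)
    {
        if ((year % 4) != 0) {
            return false;
        }
        return ((year % 100) != 0) || ((year % 400) == 0);
    }

    // Weekday of January 1 (Monday=1 .. Sunday=7), same formula as
    // crm_time_january1_weekday() above
    static int january1_weekday(int year)
    {
        int yy = (year - 1) % 100;
        int c = (year - 1) - yy;
        int g = yy + yy / 4;

        return 1 + (((((c / 100) % 4) * 5) + g) % 7);
    }

    int main(void)
    {
        printf("2000 leap? %d (expect 1)\n", is_leapyear(2000));
        printf("1900 leap? %d (expect 0)\n", is_leapyear(1900));
        printf("Jan 1 2024 weekday: %d (expect 1, Monday)\n", january1_weekday(2024));
        return 0;
    }
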
": " : ""), date_s); } free(date_s); } static void crm_time_get_sec(int sec, uint32_t *h, uint32_t *m, uint32_t *s) { uint32_t hours, minutes, seconds; if (sec < 0) { seconds = 0 - sec; } else { seconds = sec; } hours = seconds / HOUR_SECONDS; seconds -= HOUR_SECONDS * hours; minutes = seconds / 60; seconds -= 60 * minutes; crm_trace("%d == %.2d:%.2d:%.2d", sec, hours, minutes, seconds); *h = hours; *m = minutes; *s = seconds; } int crm_time_get_timeofday(const crm_time_t *dt, uint32_t *h, uint32_t *m, uint32_t *s) { crm_time_get_sec(dt->seconds, h, m, s); return TRUE; } int crm_time_get_timezone(const crm_time_t *dt, uint32_t *h, uint32_t *m) { uint32_t s; crm_time_get_sec(dt->seconds, h, m, &s); return TRUE; } long long crm_time_get_seconds(const crm_time_t *dt) { int lpc; crm_time_t *utc = NULL; long long in_seconds = 0; if (dt == NULL) { return 0; } utc = crm_get_utc_time(dt); if (utc == NULL) { return 0; } for (lpc = 1; lpc < utc->years; lpc++) { long long dmax = year_days(lpc); in_seconds += DAY_SECONDS * dmax; } /* utc->months is an offset that can only be set for a duration. * By definition, the value is variable depending on the date to * which it is applied. * * Force 30-day months so that something vaguely sane happens * for anyone that tries to use a month in this way. */ if (utc->months > 0) { in_seconds += DAY_SECONDS * 30 * (long long) (utc->months); } if (utc->days > 0) { in_seconds += DAY_SECONDS * (long long) (utc->days - 1); } in_seconds += utc->seconds; crm_time_free(utc); return in_seconds; } #define EPOCH_SECONDS 62135596800ULL /* Calculated using crm_time_get_seconds() */ long long crm_time_get_seconds_since_epoch(const crm_time_t *dt) { return (dt == NULL)? 0 : (crm_time_get_seconds(dt) - EPOCH_SECONDS); } int crm_time_get_gregorian(const crm_time_t *dt, uint32_t *y, uint32_t *m, uint32_t *d) { int months = 0; int days = dt->days; if(dt->years != 0) { for (months = 1; months <= 12 && days > 0; months++) { int mdays = crm_time_days_in_month(months, dt->years); if (mdays >= days) { break; } else { days -= mdays; } } } else if (dt->months) { /* This is a duration including months, don't convert the days field */ months = dt->months; } else { /* This is a duration not including months, still don't convert the days field */ } *y = dt->years; *m = months; *d = days; crm_trace("%.4d-%.3d -> %.4d-%.2d-%.2d", dt->years, dt->days, dt->years, months, days); return TRUE; } int crm_time_get_ordinal(const crm_time_t *dt, uint32_t *y, uint32_t *d) { *y = dt->years; *d = dt->days; return TRUE; } int crm_time_get_isoweek(const crm_time_t *dt, uint32_t *y, uint32_t *w, uint32_t *d) { /* * Monday 29 December 2008 is written "2009-W01-1" * Sunday 3 January 2010 is written "2009-W53-7" */ int year_num = 0; int jan1 = crm_time_january1_weekday(dt->years); int h = -1; CRM_CHECK(dt->days > 0, return FALSE); /* 6. Find the Weekday for Y M D */ h = dt->days + jan1 - 1; *d = 1 + ((h - 1) % 7); /* 7. Find if Y M D falls in YearNumber Y-1, WeekNumber 52 or 53 */ if (dt->days <= (8 - jan1) && jan1 > 4) { crm_trace("year--, jan1=%d", jan1); year_num = dt->years - 1; *w = crm_time_weeks_in_year(year_num); } else { year_num = dt->years; } /* 8. Find if Y M D falls in YearNumber Y+1, WeekNumber 1 */ if (year_num == dt->years) { int dmax = year_days(year_num); int correction = 4 - *d; if ((dmax - dt->days) < correction) { crm_trace("year++, jan1=%d, i=%d vs. %d", jan1, dmax - dt->days, correction); year_num = dt->years + 1; *w = 1; } } /* 9. 
Find if Y M D falls in YearNumber Y, WeekNumber 1 through 53 */ if (year_num == dt->years) { int j = dt->days + (7 - *d) + (jan1 - 1); *w = j / 7; if (jan1 > 4) { *w -= 1; } } *y = year_num; crm_trace("Converted %.4d-%.3d to %.4d-W%.2d-%d", dt->years, dt->days, *y, *w, *d); return TRUE; } #define DATE_MAX 128 static void crm_duration_as_string(const crm_time_t *dt, char *result) { size_t offset = 0; if (dt->years) { offset += snprintf(result + offset, DATE_MAX - offset, "%4d year%s ", dt->years, pcmk__plural_s(dt->years)); } if (dt->months) { offset += snprintf(result + offset, DATE_MAX - offset, "%2d month%s ", dt->months, pcmk__plural_s(dt->months)); } if (dt->days) { offset += snprintf(result + offset, DATE_MAX - offset, "%2d day%s ", dt->days, pcmk__plural_s(dt->days)); } if (((offset == 0) || (dt->seconds != 0)) && (dt->seconds > -60) && (dt->seconds < 60)) { offset += snprintf(result + offset, DATE_MAX - offset, "%d second%s", dt->seconds, pcmk__plural_s(dt->seconds)); } else if (dt->seconds) { uint32_t h = 0, m = 0, s = 0; offset += snprintf(result + offset, DATE_MAX - offset, "%d seconds (", dt->seconds); crm_time_get_sec(dt->seconds, &h, &m, &s); if (h) { offset += snprintf(result + offset, DATE_MAX - offset, "%u hour%s%s", h, pcmk__plural_s(h), ((m || s)? " " : "")); } if (m) { offset += snprintf(result + offset, DATE_MAX - offset, "%u minute%s%s", m, pcmk__plural_s(m), (s? " " : "")); } if (s) { offset += snprintf(result + offset, DATE_MAX - offset, "%u second%s", s, pcmk__plural_s(s)); } offset += snprintf(result + offset, DATE_MAX - offset, ")"); } } char * crm_time_as_string(const crm_time_t *date_time, int flags) { const crm_time_t *dt = NULL; crm_time_t *utc = NULL; char result[DATE_MAX] = { '\0', }; char *result_copy = NULL; size_t offset = 0; // Convert to UTC if local timezone was not requested if (date_time && date_time->offset && !pcmk_is_set(flags, crm_time_log_with_timezone)) { crm_trace("UTC conversion"); utc = crm_get_utc_time(date_time); dt = utc; } else { dt = date_time; } if (!crm_time_is_defined(dt)) { strcpy(result, ""); goto done; } // Simple cases: as duration, seconds, or seconds since epoch if (flags & crm_time_log_duration) { crm_duration_as_string(date_time, result); goto done; } if (flags & crm_time_seconds) { snprintf(result, DATE_MAX, "%lld", crm_time_get_seconds(date_time)); goto done; } if (flags & crm_time_epoch) { snprintf(result, DATE_MAX, "%lld", crm_time_get_seconds_since_epoch(date_time)); goto done; } // As readable string if (flags & crm_time_log_date) { if (flags & crm_time_weeks) { // YYYY-WW-D uint32_t y, w, d; if (crm_time_get_isoweek(dt, &y, &w, &d)) { offset += snprintf(result + offset, DATE_MAX - offset, "%u-W%.2u-%u", y, w, d); } } else if (flags & crm_time_ordinal) { // YYYY-DDD uint32_t y, d; if (crm_time_get_ordinal(dt, &y, &d)) { offset += snprintf(result + offset, DATE_MAX - offset, "%u-%.3u", y, d); } } else { // YYYY-MM-DD uint32_t y, m, d; if (crm_time_get_gregorian(dt, &y, &m, &d)) { offset += snprintf(result + offset, DATE_MAX - offset, "%.4u-%.2u-%.2u", y, m, d); } } } if (flags & crm_time_log_timeofday) { uint32_t h = 0, m = 0, s = 0; if (offset > 0) { offset += snprintf(result + offset, DATE_MAX - offset, " "); } if (crm_time_get_timeofday(dt, &h, &m, &s)) { offset += snprintf(result + offset, DATE_MAX - offset, "%.2u:%.2u:%.2u", h, m, s); } if ((flags & crm_time_log_with_timezone) && (dt->offset != 0)) { crm_time_get_sec(dt->offset, &h, &m, &s); offset += snprintf(result + offset, DATE_MAX - offset, " %c%.2u:%.2u", 
((dt->offset < 0)? '-' : '+'), h, m); } else { offset += snprintf(result + offset, DATE_MAX - offset, "Z"); } } done: crm_time_free(utc); result_copy = strdup(result); CRM_ASSERT(result_copy != NULL); return result_copy; } /*! * \internal * \brief Determine number of seconds from an hour:minute:second string * * \param[in] time_str Time specification string * \param[out] result Number of seconds equivalent to time_str * * \return TRUE if specification was valid, FALSE (and set errno) otherwise * \note This may return the number of seconds in a day (which is out of bounds * for a time object) if given 24:00:00. */ static bool crm_time_parse_sec(const char *time_str, int *result) { int rc; uint32_t hour = 0; uint32_t minute = 0; uint32_t second = 0; *result = 0; // Must have at least hour, but minutes and seconds are optional rc = sscanf(time_str, "%d:%d:%d", &hour, &minute, &second); if (rc == 1) { rc = sscanf(time_str, "%2d%2d%2d", &hour, &minute, &second); } if (rc == 0) { crm_err("%s is not a valid ISO 8601 time specification", time_str); errno = EINVAL; return FALSE; } crm_trace("Got valid time: %.2d:%.2d:%.2d", hour, minute, second); if ((hour == 24) && (minute == 0) && (second == 0)) { // Equivalent to 00:00:00 of next day, return number of seconds in day } else if (hour >= 24) { crm_err("%s is not a valid ISO 8601 time specification " "because %d is not a valid hour", time_str, hour); errno = EINVAL; return FALSE; } if (minute >= 60) { crm_err("%s is not a valid ISO 8601 time specification " "because %d is not a valid minute", time_str, minute); errno = EINVAL; return FALSE; } if (second >= 60) { crm_err("%s is not a valid ISO 8601 time specification " "because %d is not a valid second", time_str, second); errno = EINVAL; return FALSE; } *result = (hour * HOUR_SECONDS) + (minute * 60) + second; return TRUE; } static bool crm_time_parse_offset(const char *offset_str, int *offset) { tzset(); if (offset_str == NULL) { // Use local offset #if defined(HAVE_STRUCT_TM_TM_GMTOFF) time_t now = time(NULL); struct tm *now_tm = localtime(&now); #endif int h_offset = GMTOFF(now_tm) / HOUR_SECONDS; int m_offset = (GMTOFF(now_tm) - (HOUR_SECONDS * h_offset)) / 60; if (h_offset < 0 && m_offset < 0) { m_offset = 0 - m_offset; } *offset = (HOUR_SECONDS * h_offset) + (60 * m_offset); return TRUE; } if (offset_str[0] == 'Z') { // @TODO invalid if anything after? *offset = 0; return TRUE; } *offset = 0; if ((offset_str[0] == '+') || (offset_str[0] == '-') || isdigit((int)offset_str[0])) { gboolean negate = FALSE; if (offset_str[0] == '+') { offset_str++; } else if (offset_str[0] == '-') { negate = TRUE; offset_str++; } if (crm_time_parse_sec(offset_str, offset) == FALSE) { return FALSE; } if (negate) { *offset = 0 - *offset; } } // @TODO else invalid? return TRUE; } /*! * \internal * \brief Parse the time portion of an ISO 8601 date/time string * * \param[in] time_str Time portion of specification (after any 'T') * \param[in,out] a_time Time object to parse into * * \return TRUE if valid time was parsed, FALSE (and set errno) otherwise * \note This may add a day to a_time (if the time is 24:00:00). 
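Tying the constructor, parser, and formatter above together, a minimal usage sketch; the header path is an assumption, so adjust it to wherever the crm_time_* declarations live in your tree:

    #include <stdio.h>
    #include <stdlib.h>
    #include <crm/common/iso8601.h>   // assumed location of the crm_time_* declarations

    int main(void)
    {
        crm_time_t *dt = crm_time_new("2022-04-01 12:30:00Z");

        if (dt != NULL) {
            // Date plus time of day; without crm_time_log_with_timezone the
            // output is rendered in UTC with a trailing "Z"
            char *s = crm_time_as_string(dt, crm_time_log_date | crm_time_log_timeofday);

            printf("%s\n", s);   // e.g. 2022-04-01 12:30:00Z
            free(s);             // crm_time_as_string() returns a strdup()'d buffer
            crm_time_free(dt);
        }
        return 0;
    }
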
*/ static bool crm_time_parse(const char *time_str, crm_time_t *a_time) { uint32_t h, m, s; char *offset_s = NULL; tzset(); if (time_str) { if (crm_time_parse_sec(time_str, &(a_time->seconds)) == FALSE) { return FALSE; } offset_s = strstr(time_str, "Z"); if (offset_s == NULL) { offset_s = strstr(time_str, " "); if (offset_s) { while (isspace(offset_s[0])) { offset_s++; } } } } if (crm_time_parse_offset(offset_s, &(a_time->offset)) == FALSE) { return FALSE; } crm_time_get_sec(a_time->offset, &h, &m, &s); crm_trace("Got tz: %c%2.d:%.2d", ((a_time->offset < 0)? '-' : '+'), h, m); if (a_time->seconds == DAY_SECONDS) { // 24:00:00 == 00:00:00 of next day a_time->seconds = 0; crm_time_add_days(a_time, 1); } return TRUE; } /* * \internal * \brief Parse a time object from an ISO 8601 date/time specification * * \param[in] date_str ISO 8601 date/time specification (or "epoch") * * \return New time object on success, NULL (and set errno) otherwise */ static crm_time_t * parse_date(const char *date_str) { const char *time_s = NULL; crm_time_t *dt = NULL; int year = 0; int month = 0; int week = 0; int day = 0; int rc = 0; if (pcmk__str_empty(date_str)) { crm_err("No ISO 8601 date/time specification given"); goto invalid; } if ((date_str[0] == 'T') || (date_str[2] == ':')) { /* Just a time supplied - Infer current date */ dt = crm_time_new(NULL); if (date_str[0] == 'T') { time_s = date_str + 1; } else { time_s = date_str; } goto parse_time; } dt = crm_time_new_undefined(); if (!strncasecmp("epoch", date_str, 5) && ((date_str[5] == '\0') || (date_str[5] == '/') || isspace(date_str[5]))) { dt->days = 1; dt->years = 1970; crm_time_log(LOG_TRACE, "Unpacked", dt, crm_time_log_date | crm_time_log_timeofday); return dt; } /* YYYY-MM-DD */ rc = sscanf(date_str, "%d-%d-%d", &year, &month, &day); if (rc == 1) { /* YYYYMMDD */ rc = sscanf(date_str, "%4d%2d%2d", &year, &month, &day); } if (rc == 3) { if (month > 12) { crm_err("'%s' is not a valid ISO 8601 date/time specification " "because '%d' is not a valid month", date_str, month); goto invalid; } else if (day > crm_time_days_in_month(month, year)) { crm_err("'%s' is not a valid ISO 8601 date/time specification " "because '%d' is not a valid day of the month", date_str, day); goto invalid; } else { dt->years = year; dt->days = get_ordinal_days(year, month, day); crm_trace("Parsed Gregorian date '%.4d-%.3d' from date string '%s'", year, dt->days, date_str); } goto parse_time; } /* YYYY-DDD */ rc = sscanf(date_str, "%d-%d", &year, &day); if (rc == 2) { if (day > year_days(year)) { crm_err("'%s' is not a valid ISO 8601 date/time specification " "because '%d' is not a valid day of the year (max %d)", date_str, day, year_days(year)); goto invalid; } crm_trace("Parsed ordinal year %d and days %d from date string '%s'", year, day, date_str); dt->days = day; dt->years = year; goto parse_time; } /* YYYY-Www-D */ rc = sscanf(date_str, "%d-W%d-%d", &year, &week, &day); if (rc == 3) { if (week > crm_time_weeks_in_year(year)) { crm_err("'%s' is not a valid ISO 8601 date/time specification " "because '%d' is not a valid week of the year (max %d)", date_str, week, crm_time_weeks_in_year(year)); goto invalid; } else if (day < 1 || day > 7) { crm_err("'%s' is not a valid ISO 8601 date/time specification " "because '%d' is not a valid day of the week", date_str, day); goto invalid; } else { /* * See https://en.wikipedia.org/wiki/ISO_week_date * * Monday 29 December 2008 is written "2009-W01-1" * Sunday 3 January 2010 is written "2009-W53-7" * Saturday 27 September 2008 is 
written "2008-W37-6" * * If 1 January is on a Monday, Tuesday, Wednesday or Thursday, it is in week 01. * If 1 January is on a Friday, Saturday or Sunday, it is in week 52 or 53 of the previous year. */ int jan1 = crm_time_january1_weekday(year); crm_trace("Got year %d (Jan 1 = %d), week %d, and day %d from date string '%s'", year, jan1, week, day, date_str); dt->years = year; crm_time_add_days(dt, (week - 1) * 7); if (jan1 <= 4) { crm_time_add_days(dt, 1 - jan1); } else { crm_time_add_days(dt, 8 - jan1); } crm_time_add_days(dt, day); } goto parse_time; } crm_err("'%s' is not a valid ISO 8601 date/time specification", date_str); goto invalid; parse_time: if (time_s == NULL) { time_s = date_str + strspn(date_str, "0123456789-W"); if ((time_s[0] == ' ') || (time_s[0] == 'T')) { ++time_s; } else { time_s = NULL; } } if ((time_s != NULL) && (crm_time_parse(time_s, dt) == FALSE)) { goto invalid; } crm_time_log(LOG_TRACE, "Unpacked", dt, crm_time_log_date | crm_time_log_timeofday); if (crm_time_check(dt) == FALSE) { crm_err("'%s' is not a valid ISO 8601 date/time specification", date_str); goto invalid; } return dt; invalid: crm_time_free(dt); errno = EINVAL; return NULL; } // Parse an ISO 8601 numeric value and return number of characters consumed // @TODO This cannot handle >INT_MAX int values // @TODO Fractions appear to be not working // @TODO Error out on invalid specifications static int parse_int(const char *str, int field_width, int upper_bound, int *result) { int lpc = 0; int offset = 0; int intermediate = 0; gboolean fraction = FALSE; gboolean negate = FALSE; *result = 0; if (*str == '\0') { return 0; } if (str[offset] == 'T') { offset++; } if (str[offset] == '.' || str[offset] == ',') { fraction = TRUE; field_width = -1; offset++; } else if (str[offset] == '-') { negate = TRUE; offset++; } else if (str[offset] == '+' || str[offset] == ':') { offset++; } for (; (fraction || lpc < field_width) && isdigit((int)str[offset]); lpc++) { if (fraction) { intermediate = (str[offset] - '0') / (10 ^ lpc); } else { *result *= 10; intermediate = str[offset] - '0'; } *result += intermediate; offset++; } if (fraction) { *result = (int)(*result * upper_bound); } else if (upper_bound > 0 && *result > upper_bound) { *result = upper_bound; } if (negate) { *result = 0 - *result; } if (lpc > 0) { crm_trace("Found int: %d. Stopped at str[%d]='%c'", *result, lpc, str[lpc]); return offset; } return 0; } /*! * \brief Parse a time duration from an ISO 8601 duration specification * * \param[in] period_s ISO 8601 duration specification (optionally followed by * whitespace, after which the rest of the string will be * ignored) * * \return New time object on success, NULL (and set errno) otherwise * \note It is the caller's responsibility to return the result using * crm_time_free(). 
*/ crm_time_t * crm_time_parse_duration(const char *period_s) { gboolean is_time = FALSE; crm_time_t *diff = NULL; if (pcmk__str_empty(period_s)) { crm_err("No ISO 8601 time duration given"); goto invalid; } if (period_s[0] != 'P') { crm_err("'%s' is not a valid ISO 8601 time duration " "because it does not start with a 'P'", period_s); goto invalid; } if ((period_s[1] == '\0') || isspace(period_s[1])) { crm_err("'%s' is not a valid ISO 8601 time duration " "because nothing follows 'P'", period_s); goto invalid; } diff = crm_time_new_undefined(); diff->duration = TRUE; for (const char *current = period_s + 1; current[0] && (current[0] != '/') && !isspace(current[0]); ++current) { int an_int = 0, rc; if (current[0] == 'T') { /* A 'T' separates year/month/day from hour/minute/seconds. We don't * require it strictly, but just use it to differentiate month from * minutes. */ is_time = TRUE; continue; } // An integer must be next rc = parse_int(current, 10, 0, &an_int); if (rc == 0) { crm_err("'%s' is not a valid ISO 8601 time duration " "because no integer at '%s'", period_s, current); goto invalid; } current += rc; // A time unit must be next (we're not strict about the order) switch (current[0]) { case 'Y': diff->years = an_int; break; case 'M': if (is_time) { /* Minutes */ diff->seconds += an_int * 60; } else { diff->months = an_int; } break; case 'W': diff->days += an_int * 7; break; case 'D': diff->days += an_int; break; case 'H': diff->seconds += an_int * HOUR_SECONDS; break; case 'S': diff->seconds += an_int; break; case '\0': crm_err("'%s' is not a valid ISO 8601 time duration " "because no units after %d", period_s, an_int); goto invalid; default: crm_err("'%s' is not a valid ISO 8601 time duration " "because '%c' is not a valid time unit", period_s, current[0]); goto invalid; } } if (!crm_time_is_defined(diff)) { crm_err("'%s' is not a valid ISO 8601 time duration " "because no amounts and units given", period_s); goto invalid; } return diff; invalid: crm_time_free(diff); errno = EINVAL; return NULL; } /*! * \brief Parse a time period from an ISO 8601 interval specification * * \param[in] period_str ISO 8601 interval specification (start/end, * start/duration, or duration/end) * * \return New time period object on success, NULL (and set errno) otherwise * \note The caller is responsible for freeing the result using * crm_time_free_period(). 
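A usage sketch for the duration parser above, showing how the unit letters map onto the fields of the time object (the header path is an assumption):

    #include <crm/common/iso8601.h>   // assumed location of the crm_time_* declarations

    void duration_example(void)
    {
        // Per the switch above: Y -> years, M -> months (or minutes after 'T'),
        // W -> 7 days each, D -> days, H/M/S -> seconds
        crm_time_t *d = crm_time_parse_duration("P1Y2M3DT4H5M6S");

        if (d != NULL) {
            // years == 1, months == 2, days == 3,
            // seconds == 4*3600 + 5*60 + 6 == 14706
            crm_time_free(d);
        }
    }
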
*/ crm_time_period_t * crm_time_parse_period(const char *period_str) { const char *original = period_str; crm_time_period_t *period = NULL; if (pcmk__str_empty(period_str)) { crm_err("No ISO 8601 time period given"); goto invalid; } tzset(); period = calloc(1, sizeof(crm_time_period_t)); CRM_ASSERT(period != NULL); if (period_str[0] == 'P') { period->diff = crm_time_parse_duration(period_str); if (period->diff == NULL) { goto error; } } else { period->start = parse_date(period_str); if (period->start == NULL) { goto error; } } period_str = strstr(original, "/"); if (period_str) { ++period_str; if (period_str[0] == 'P') { if (period->diff != NULL) { crm_err("'%s' is not a valid ISO 8601 time period " "because it has two durations", original); goto invalid; } period->diff = crm_time_parse_duration(period_str); if (period->diff == NULL) { goto error; } } else { period->end = parse_date(period_str); if (period->end == NULL) { goto error; } } } else if (period->diff != NULL) { // Only duration given, assume start is now period->start = crm_time_new(NULL); } else { // Only start given crm_err("'%s' is not a valid ISO 8601 time period " "because it has no duration or ending time", original); goto invalid; } if (period->start == NULL) { period->start = crm_time_subtract(period->end, period->diff); } else if (period->end == NULL) { period->end = crm_time_add(period->start, period->diff); } if (crm_time_check(period->start) == FALSE) { crm_err("'%s' is not a valid ISO 8601 time period " "because the start is invalid", period_str); goto invalid; } if (crm_time_check(period->end) == FALSE) { crm_err("'%s' is not a valid ISO 8601 time period " "because the end is invalid", period_str); goto invalid; } return period; invalid: errno = EINVAL; error: crm_time_free_period(period); return NULL; } /*! 
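A companion sketch for the period parser above: any of start/end, start/duration, or duration/end is accepted, and a lone duration is anchored at the current time (header path again an assumption):

    #include <crm/common/iso8601.h>   // assumed location of crm_time_period_t and friends

    void period_example(void)
    {
        // start/duration form; the missing endpoint is computed from the other two
        crm_time_period_t *p = crm_time_parse_period("2022-01-01 00:00:00Z/P7D");

        if (p != NULL) {
            // p->start is 2022-01-01, p->diff is a 7-day duration,
            // p->end is derived as start + diff (2022-01-08)
            crm_time_free_period(p);
        }
    }
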
* \brief Free a dynamically allocated time period object * * \param[in,out] period Time period to free */ void crm_time_free_period(crm_time_period_t *period) { if (period) { crm_time_free(period->start); crm_time_free(period->end); crm_time_free(period->diff); free(period); } } void crm_time_set(crm_time_t *target, const crm_time_t *source) { crm_trace("target=%p, source=%p", target, source); CRM_CHECK(target != NULL && source != NULL, return); target->years = source->years; target->days = source->days; target->months = source->months; /* Only for durations */ target->seconds = source->seconds; target->offset = source->offset; crm_time_log(LOG_TRACE, "source", source, crm_time_log_date | crm_time_log_timeofday | crm_time_log_with_timezone); crm_time_log(LOG_TRACE, "target", target, crm_time_log_date | crm_time_log_timeofday | crm_time_log_with_timezone); } static void ha_set_tm_time(crm_time_t *target, const struct tm *source) { int h_offset = 0; int m_offset = 0; /* Ensure target is fully initialized */ target->years = 0; target->months = 0; target->days = 0; target->seconds = 0; target->offset = 0; target->duration = FALSE; if (source->tm_year > 0) { /* years since 1900 */ target->years = 1900 + source->tm_year; } if (source->tm_yday >= 0) { /* days since January 1 [0-365] */ target->days = 1 + source->tm_yday; } if (source->tm_hour >= 0) { target->seconds += HOUR_SECONDS * source->tm_hour; } if (source->tm_min >= 0) { target->seconds += 60 * source->tm_min; } if (source->tm_sec >= 0) { target->seconds += source->tm_sec; } /* tm_gmtoff == offset from UTC in seconds */ h_offset = GMTOFF(source) / HOUR_SECONDS; m_offset = (GMTOFF(source) - (HOUR_SECONDS * h_offset)) / 60; crm_trace("Time offset is %lds (%.2d:%.2d)", GMTOFF(source), h_offset, m_offset); target->offset += HOUR_SECONDS * h_offset; target->offset += 60 * m_offset; } void crm_time_set_timet(crm_time_t *target, const time_t *source) { ha_set_tm_time(target, localtime(source)); } crm_time_t * pcmk_copy_time(const crm_time_t *source) { crm_time_t *target = crm_time_new_undefined(); crm_time_set(target, source); return target; } crm_time_t * crm_time_add(const crm_time_t *dt, const crm_time_t *value) { crm_time_t *utc = NULL; crm_time_t *answer = NULL; if ((dt == NULL) || (value == NULL)) { errno = EINVAL; return NULL; } answer = pcmk_copy_time(dt); utc = crm_get_utc_time(value); if (utc == NULL) { crm_time_free(answer); return NULL; } answer->years += utc->years; crm_time_add_months(answer, utc->months); crm_time_add_days(answer, utc->days); crm_time_add_seconds(answer, utc->seconds); crm_time_free(utc); return answer; } crm_time_t * crm_time_calculate_duration(const crm_time_t *dt, const crm_time_t *value) { crm_time_t *utc = NULL; crm_time_t *answer = NULL; if ((dt == NULL) || (value == NULL)) { errno = EINVAL; return NULL; } utc = crm_get_utc_time(value); if (utc == NULL) { return NULL; } answer = crm_get_utc_time(dt); if (answer == NULL) { crm_time_free(utc); return NULL; } answer->duration = TRUE; answer->years -= utc->years; if(utc->months != 0) { crm_time_add_months(answer, -utc->months); } crm_time_add_days(answer, -utc->days); crm_time_add_seconds(answer, -utc->seconds); crm_time_free(utc); return answer; } crm_time_t * crm_time_subtract(const crm_time_t *dt, const crm_time_t *value) { crm_time_t *utc = NULL; crm_time_t *answer = NULL; if ((dt == NULL) || (value == NULL)) { errno = EINVAL; return NULL; } utc = crm_get_utc_time(value); if (utc == NULL) { return NULL; } answer = pcmk_copy_time(dt); answer->years -= 
utc->years; if(utc->months != 0) { crm_time_add_months(answer, -utc->months); } crm_time_add_days(answer, -utc->days); crm_time_add_seconds(answer, -utc->seconds); + crm_time_free(utc); return answer; } /*! * \brief Check whether a time object represents a sensible date/time * * \param[in] dt Date/time object to check * * \return \c true if years, days, and seconds are sensible, \c false otherwise */ bool crm_time_check(const crm_time_t *dt) { return (dt != NULL) && (dt->days > 0) && (dt->days <= year_days(dt->years)) && (dt->seconds >= 0) && (dt->seconds < DAY_SECONDS); } #define do_cmp_field(l, r, field) \ if(rc == 0) { \ if(l->field > r->field) { \ crm_trace("%s: %d > %d", \ #field, l->field, r->field); \ rc = 1; \ } else if(l->field < r->field) { \ crm_trace("%s: %d < %d", \ #field, l->field, r->field); \ rc = -1; \ } \ } int crm_time_compare(const crm_time_t *a, const crm_time_t *b) { int rc = 0; crm_time_t *t1 = crm_get_utc_time(a); crm_time_t *t2 = crm_get_utc_time(b); if ((t1 == NULL) && (t2 == NULL)) { rc = 0; } else if (t1 == NULL) { rc = -1; } else if (t2 == NULL) { rc = 1; } else { do_cmp_field(t1, t2, years); do_cmp_field(t1, t2, days); do_cmp_field(t1, t2, seconds); } crm_time_free(t1); crm_time_free(t2); return rc; } /*! * \brief Add a given number of seconds to a date/time or duration * * \param[in,out] a_time Date/time or duration to add seconds to * \param[in] extra Number of seconds to add */ void crm_time_add_seconds(crm_time_t *a_time, int extra) { int days = 0; crm_trace("Adding %d seconds to %d (max=%d)", extra, a_time->seconds, DAY_SECONDS); a_time->seconds += extra; days = a_time->seconds / DAY_SECONDS; a_time->seconds %= DAY_SECONDS; // Don't have negative seconds if (a_time->seconds < 0) { a_time->seconds += DAY_SECONDS; --days; } crm_time_add_days(a_time, days); } void crm_time_add_days(crm_time_t * a_time, int extra) { int lower_bound = 1; int ydays = crm_time_leapyear(a_time->years) ? 366 : 365; crm_trace("Adding %d days to %.4d-%.3d", extra, a_time->years, a_time->days); a_time->days += extra; while (a_time->days > ydays) { a_time->years++; a_time->days -= ydays; ydays = crm_time_leapyear(a_time->years) ? 366 : 365; } if(a_time->duration) { lower_bound = 0; } while (a_time->days < lower_bound) { a_time->years--; a_time->days += crm_time_leapyear(a_time->years) ? 
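The added crm_time_free(utc) in crm_time_subtract() above releases the temporary UTC copy obtained from crm_get_utc_time(), which previously leaked on every subtraction; callers were never expected to free it. A usage sketch showing what the caller does own (header path is an assumption):

    #include <crm/common/iso8601.h>   // assumed location of the crm_time_* declarations

    void subtract_example(void)
    {
        crm_time_t *dt = crm_time_new("2022-04-02 00:00:00Z");
        crm_time_t *one_day = crm_time_parse_duration("P1D");
        crm_time_t *result = crm_time_subtract(dt, one_day);   // 2022-04-01

        // The caller owns all three objects; the UTC copy made internally by
        // crm_time_subtract() is now freed by the library itself
        crm_time_free(result);
        crm_time_free(one_day);
        crm_time_free(dt);
    }
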
366 : 365; } } void crm_time_add_months(crm_time_t * a_time, int extra) { int lpc; uint32_t y, m, d, dmax; crm_time_get_gregorian(a_time, &y, &m, &d); crm_trace("Adding %d months to %.4d-%.2d-%.2d", extra, y, m, d); if (extra > 0) { for (lpc = extra; lpc > 0; lpc--) { m++; if (m == 13) { m = 1; y++; } } } else { for (lpc = -extra; lpc > 0; lpc--) { m--; if (m == 0) { m = 12; y--; } } } dmax = crm_time_days_in_month(m, y); if (dmax < d) { /* Preserve day-of-month unless the month doesn't have enough days */ d = dmax; } crm_trace("Calculated %.4d-%.2d-%.2d", y, m, d); a_time->years = y; a_time->days = get_ordinal_days(y, m, d); crm_time_get_gregorian(a_time, &y, &m, &d); crm_trace("Got %.4d-%.2d-%.2d", y, m, d); } void crm_time_add_minutes(crm_time_t * a_time, int extra) { crm_time_add_seconds(a_time, extra * 60); } void crm_time_add_hours(crm_time_t * a_time, int extra) { crm_time_add_seconds(a_time, extra * HOUR_SECONDS); } void crm_time_add_weeks(crm_time_t * a_time, int extra) { crm_time_add_days(a_time, extra * 7); } void crm_time_add_years(crm_time_t * a_time, int extra) { a_time->years += extra; } static void ha_get_tm_time(struct tm *target, const crm_time_t *source) { *target = (struct tm) { .tm_year = source->years - 1900, .tm_mday = source->days, .tm_sec = source->seconds % 60, .tm_min = ( source->seconds / 60 ) % 60, .tm_hour = source->seconds / HOUR_SECONDS, .tm_isdst = -1, /* don't adjust */ #if defined(HAVE_STRUCT_TM_TM_GMTOFF) .tm_gmtoff = source->offset #endif }; mktime(target); } /* The high-resolution variant of time object was added to meet an immediate * need, and is kept internal API. * * @TODO The long-term goal is to come up with a clean, unified design for a * time type (or types) that meets all the various needs, to replace * crm_time_t, pcmk__time_hr_t, and struct timespec (in lrmd_cmd_t). * Using glib's GDateTime is a possibility (if we are willing to require * glib >= 2.26). */ pcmk__time_hr_t * pcmk__time_hr_convert(pcmk__time_hr_t *target, const crm_time_t *dt) { pcmk__time_hr_t *hr_dt = NULL; if (dt) { hr_dt = target?target:calloc(1, sizeof(pcmk__time_hr_t)); CRM_ASSERT(hr_dt != NULL); *hr_dt = (pcmk__time_hr_t) { .years = dt->years, .months = dt->months, .days = dt->days, .seconds = dt->seconds, .offset = dt->offset, .duration = dt->duration }; } return hr_dt; } void pcmk__time_set_hr_dt(crm_time_t *target, const pcmk__time_hr_t *hr_dt) { CRM_ASSERT((hr_dt) && (target)); *target = (crm_time_t) { .years = hr_dt->years, .months = hr_dt->months, .days = hr_dt->days, .seconds = hr_dt->seconds, .offset = hr_dt->offset, .duration = hr_dt->duration }; } /*! 
* \internal * \brief Return the current time as a high-resolution time * * \param[out] epoch If not NULL, this will be set to seconds since epoch * * \return Newly allocated high-resolution time set to the current time */ pcmk__time_hr_t * pcmk__time_hr_now(time_t *epoch) { struct timespec tv; crm_time_t dt; pcmk__time_hr_t *hr; qb_util_timespec_from_epoch_get(&tv); if (epoch != NULL) { *epoch = tv.tv_sec; } crm_time_set_timet(&dt, &(tv.tv_sec)); hr = pcmk__time_hr_convert(NULL, &dt); if (hr != NULL) { hr->useconds = tv.tv_nsec / QB_TIME_NS_IN_USEC; } return hr; } pcmk__time_hr_t * pcmk__time_hr_new(const char *date_time) { pcmk__time_hr_t *hr_dt = NULL; if (date_time == NULL) { hr_dt = pcmk__time_hr_now(NULL); } else { crm_time_t *dt; dt = parse_date(date_time); hr_dt = pcmk__time_hr_convert(NULL, dt); crm_time_free(dt); } return hr_dt; } void pcmk__time_hr_free(pcmk__time_hr_t * hr_dt) { free(hr_dt); } char * pcmk__time_format_hr(const char *format, const pcmk__time_hr_t *hr_dt) { const char *mark_s; int max = 128, scanned_pos = 0, printed_pos = 0, fmt_pos = 0, date_len = 0, nano_digits = 0; char nano_s[10], date_s[max+1], nanofmt_s[5] = "%", *tmp_fmt_s; struct tm tm; crm_time_t dt; if (!format) { return NULL; } pcmk__time_set_hr_dt(&dt, hr_dt); ha_get_tm_time(&tm, &dt); sprintf(nano_s, "%06d000", hr_dt->useconds); while ((format[scanned_pos]) != '\0') { mark_s = strchr(&format[scanned_pos], '%'); if (mark_s) { int fmt_len = 1; fmt_pos = mark_s - format; while ((format[fmt_pos+fmt_len] != '\0') && (format[fmt_pos+fmt_len] >= '0') && (format[fmt_pos+fmt_len] <= '9')) { fmt_len++; } scanned_pos = fmt_pos + fmt_len + 1; if (format[fmt_pos+fmt_len] == 'N') { nano_digits = atoi(&format[fmt_pos+1]); nano_digits = (nano_digits > 6)?6:nano_digits; nano_digits = (nano_digits < 0)?0:nano_digits; sprintf(&nanofmt_s[1], ".%ds", nano_digits); } else { if (format[scanned_pos] != '\0') { continue; } fmt_pos = scanned_pos; /* print till end */ } } else { scanned_pos = strlen(format); fmt_pos = scanned_pos; /* print till end */ } tmp_fmt_s = strndup(&format[printed_pos], fmt_pos - printed_pos); #ifdef HAVE_FORMAT_NONLITERAL #pragma GCC diagnostic push #pragma GCC diagnostic ignored "-Wformat-nonliteral" #endif date_len += strftime(&date_s[date_len], max-date_len, tmp_fmt_s, &tm); #ifdef HAVE_FORMAT_NONLITERAL #pragma GCC diagnostic pop #endif printed_pos = scanned_pos; free(tmp_fmt_s); if (nano_digits) { #ifdef HAVE_FORMAT_NONLITERAL #pragma GCC diagnostic push #pragma GCC diagnostic ignored "-Wformat-nonliteral" #endif date_len += snprintf(&date_s[date_len], max-date_len, nanofmt_s, nano_s); #ifdef HAVE_FORMAT_NONLITERAL #pragma GCC diagnostic pop #endif nano_digits = 0; } } return (date_len == 0)?NULL:strdup(date_s); } /*! * \internal * \brief Return human-friendly string corresponding to a time * * \param[in] when Pointer to epoch time value (or NULL for current time) * * \return Current time as string (as by ctime() but without newline) on * success, NULL otherwise * \note The return value points to a statically allocated string which might be * overwritten by subsequent calls to any of the C library date and time * functions. */ const char * pcmk__epoch2str(const time_t *when) { char *since_epoch = NULL; if (when == NULL) { time_t a_time = time(NULL); if (a_time == (time_t) -1) { return NULL; } else { since_epoch = ctime(&a_time); } } else { since_epoch = ctime(when); } if (since_epoch == NULL) { return NULL; } else { return pcmk__trim(since_epoch); } } /*! 
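A usage sketch for the high-resolution formatter above: everything except the %N extension is handed to strftime(), while %N (with an optional digit count, capped at six) is filled in from the stored microseconds. The include is an assumption, since these pcmk__ symbols are internal API:

    #include <stdio.h>
    #include <stdlib.h>
    // Assumed internal header; the pcmk__time_hr_* declarations live in the
    // crm/common internal headers (e.g. iso8601_internal.h) in current trees
    #include <crm/common/iso8601_internal.h>

    void hr_example(void)
    {
        pcmk__time_hr_t *now = pcmk__time_hr_now(NULL);

        // "%06N" is handled by pcmk__time_format_hr() itself (six fractional
        // digits, i.e. microseconds); the rest goes straight to strftime()
        char *s = pcmk__time_format_hr("%Y-%m-%d %H:%M:%S.%06N", now);

        if (s != NULL) {
            printf("%s\n", s);
            free(s);
        }
        pcmk__time_hr_free(now);
    }
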
* \internal * \brief Given a millisecond interval, return a log-friendly string * * \param[in] interval_ms Interval in milliseconds * * \return Readable version of \p interval_ms * * \note The return value is a pointer to static memory that will be * overwritten by later calls to this function. */ const char * pcmk__readable_interval(guint interval_ms) { #define MS_IN_S (1000) #define MS_IN_M (MS_IN_S * 60) #define MS_IN_H (MS_IN_M * 60) #define MS_IN_D (MS_IN_H * 24) #define MAXSTR sizeof("..d..h..m..s...ms") static char str[MAXSTR] = { '\0', }; int offset = 0; if (interval_ms > MS_IN_D) { offset += snprintf(str + offset, MAXSTR - offset, "%ud", interval_ms / MS_IN_D); interval_ms -= (interval_ms / MS_IN_D) * MS_IN_D; } if (interval_ms > MS_IN_H) { offset += snprintf(str + offset, MAXSTR - offset, "%uh", interval_ms / MS_IN_H); interval_ms -= (interval_ms / MS_IN_H) * MS_IN_H; } if (interval_ms > MS_IN_M) { offset += snprintf(str + offset, MAXSTR - offset, "%um", interval_ms / MS_IN_M); interval_ms -= (interval_ms / MS_IN_M) * MS_IN_M; } // Ns, N.NNNs, or NNNms if (interval_ms > MS_IN_S) { offset += snprintf(str + offset, MAXSTR - offset, "%u", interval_ms / MS_IN_S); interval_ms -= (interval_ms / MS_IN_S) * MS_IN_S; if (interval_ms > 0) { offset += snprintf(str + offset, MAXSTR - offset, ".%03u", interval_ms); } (void) snprintf(str + offset, MAXSTR - offset, "s"); } else if (interval_ms > 0) { (void) snprintf(str + offset, MAXSTR - offset, "%ums", interval_ms); } else if (str[0] == '\0') { strcpy(str, "0s"); } return str; } diff --git a/lib/common/tests/Makefile.am b/lib/common/tests/Makefile.am index 2528403ed4..b147309b40 100644 --- a/lib/common/tests/Makefile.am +++ b/lib/common/tests/Makefile.am @@ -1,29 +1,32 @@ # # Copyright 2020-2022 the Pacemaker project contributors # # The version control history for this file may have further details. # # This source code is licensed under the GNU General Public License version 2 # or later (GPLv2+) WITHOUT ANY WARRANTY. # SUBDIRS = \ acl \ agents \ cmdline \ flags \ health \ io \ iso8601 \ lists \ nvpair \ operations \ options \ output \ - procfs \ results \ scores \ strings \ utils \ xml \ xpath + +if SUPPORT_PROCFS +SUBDIRS += procfs +endif diff --git a/lib/common/tests/procfs/pcmk__procfs_has_pids_false_test.c b/lib/common/tests/procfs/pcmk__procfs_has_pids_false_test.c index 3827b8a453..14a8117cd1 100644 --- a/lib/common/tests/procfs/pcmk__procfs_has_pids_false_test.c +++ b/lib/common/tests/procfs/pcmk__procfs_has_pids_false_test.c @@ -1,49 +1,42 @@ /* * Copyright 2022 the Pacemaker project contributors * * The version control history for this file may have further details. * * This source code is licensed under the GNU Lesser General Public License * version 2.1 or later (LGPLv2.1+) WITHOUT ANY WARRANTY. 
*/ #include #include #include "mock_private.h" #include #include #include -#if HAVE_LINUX_PROCFS static void no_pids(void **state) { char path[PATH_MAX]; snprintf(path, PATH_MAX, "/proc/%u/exe", getpid()); // Set readlink() errno and link contents (for /proc/PID/exe) pcmk__mock_readlink = true; expect_string(__wrap_readlink, path, path); expect_any(__wrap_readlink, buf); expect_value(__wrap_readlink, bufsize, PATH_MAX - 1); will_return(__wrap_readlink, ENOENT); will_return(__wrap_readlink, NULL); assert_false(pcmk__procfs_has_pids()); pcmk__mock_readlink = false; } -#endif // HAVE_LINUX_PROCFS - -PCMK__UNIT_TEST(NULL, NULL, -#if HAVE_LINUX_PROCFS - cmocka_unit_test(no_pids) -#endif // HAVE_LINUX_PROCFS - ) +PCMK__UNIT_TEST(NULL, NULL, cmocka_unit_test(no_pids)) diff --git a/lib/common/tests/procfs/pcmk__procfs_has_pids_true_test.c b/lib/common/tests/procfs/pcmk__procfs_has_pids_true_test.c index 71ef24b908..16c9fd74fa 100644 --- a/lib/common/tests/procfs/pcmk__procfs_has_pids_true_test.c +++ b/lib/common/tests/procfs/pcmk__procfs_has_pids_true_test.c @@ -1,49 +1,41 @@ /* * Copyright 2022 the Pacemaker project contributors * * The version control history for this file may have further details. * * This source code is licensed under the GNU Lesser General Public License * version 2.1 or later (LGPLv2.1+) WITHOUT ANY WARRANTY. */ #include #include #include "mock_private.h" #include #include #include -#if HAVE_LINUX_PROCFS - static void has_pids(void **state) { char path[PATH_MAX]; snprintf(path, PATH_MAX, "/proc/%u/exe", getpid()); // Set readlink() errno and link contents (for /proc/PID/exe) pcmk__mock_readlink = true; expect_string(__wrap_readlink, path, path); expect_any(__wrap_readlink, buf); expect_value(__wrap_readlink, bufsize, PATH_MAX - 1); will_return(__wrap_readlink, 0); will_return(__wrap_readlink, "/ok"); assert_true(pcmk__procfs_has_pids()); pcmk__mock_readlink = false; } -#endif // HAVE_LINUX_PROCFS - -PCMK__UNIT_TEST(NULL, NULL, -#if HAVE_LINUX_PROCFS - cmocka_unit_test(has_pids) -#endif // HAVE_LINUX_PROCFS - ) +PCMK__UNIT_TEST(NULL, NULL, cmocka_unit_test(has_pids)) diff --git a/lib/common/tests/procfs/pcmk__procfs_pid2path_test.c b/lib/common/tests/procfs/pcmk__procfs_pid2path_test.c index 78e071fdcc..97c7eb4210 100644 --- a/lib/common/tests/procfs/pcmk__procfs_pid2path_test.c +++ b/lib/common/tests/procfs/pcmk__procfs_pid2path_test.c @@ -1,90 +1,83 @@ /* * Copyright 2022 the Pacemaker project contributors * * The version control history for this file may have further details. * * This source code is licensed under the GNU Lesser General Public License * version 2.1 or later (LGPLv2.1+) WITHOUT ANY WARRANTY. 
*/ #include #include #include "mock_private.h" #include #include #include -#if HAVE_LINUX_PROCFS - static void no_exe_file(void **state) { char path[PATH_MAX]; // Set readlink() errno and link contents pcmk__mock_readlink = true; expect_string(__wrap_readlink, path, "/proc/1000/exe"); expect_value(__wrap_readlink, buf, path); expect_value(__wrap_readlink, bufsize, sizeof(path) - 1); will_return(__wrap_readlink, ENOENT); will_return(__wrap_readlink, NULL); assert_int_equal(pcmk__procfs_pid2path(1000, path, sizeof(path)), ENOENT); pcmk__mock_readlink = false; } static void contents_too_long(void **state) { char path[10]; // Set readlink() errno and link contents pcmk__mock_readlink = true; expect_string(__wrap_readlink, path, "/proc/1000/exe"); expect_value(__wrap_readlink, buf, path); expect_value(__wrap_readlink, bufsize, sizeof(path) - 1); will_return(__wrap_readlink, 0); will_return(__wrap_readlink, "/more/than/10/characters"); assert_int_equal(pcmk__procfs_pid2path(1000, path, sizeof(path)), ENAMETOOLONG); pcmk__mock_readlink = false; } static void contents_ok(void **state) { char path[PATH_MAX]; // Set readlink() errno and link contents pcmk__mock_readlink = true; expect_string(__wrap_readlink, path, "/proc/1000/exe"); expect_value(__wrap_readlink, buf, path); expect_value(__wrap_readlink, bufsize, sizeof(path) - 1); will_return(__wrap_readlink, 0); will_return(__wrap_readlink, "/ok"); assert_int_equal(pcmk__procfs_pid2path((pid_t) 1000, path, sizeof(path)), pcmk_rc_ok); assert_string_equal(path, "/ok"); pcmk__mock_readlink = false; } -#endif // HAVE_LINUX_PROCFS - PCMK__UNIT_TEST(NULL, NULL, -#if HAVE_LINUX_PROCFS cmocka_unit_test(no_exe_file), cmocka_unit_test(contents_too_long), - cmocka_unit_test(contents_ok) -#endif - ) + cmocka_unit_test(contents_ok)) diff --git a/lib/pacemaker/pcmk_sched_ordering.c b/lib/pacemaker/pcmk_sched_ordering.c index b214914145..527b97569a 100644 --- a/lib/pacemaker/pcmk_sched_ordering.c +++ b/lib/pacemaker/pcmk_sched_ordering.c @@ -1,1466 +1,1466 @@ /* * Copyright 2004-2022 the Pacemaker project contributors * * The version control history for this file may have further details. * * This source code is licensed under the GNU General Public License version 2 * or later (GPLv2+) WITHOUT ANY WARRANTY. 
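The procfs tests above all follow the same shape: flip pcmk__mock_readlink so the linker-wrapped __wrap_readlink stub intercepts readlink(), queue the expected arguments with expect_*(), queue the simulated errno and link contents with will_return(), then assert on the result. A condensed sketch of that pattern; the angle-bracket include names were stripped in this view of the patch, so the headers below are assumptions:

    #include <errno.h>
    #include <limits.h>
    #include <stdio.h>
    #include <unistd.h>

    #include "mock_private.h"                   // pcmk__mock_readlink, __wrap_readlink
    #include <crm/common/unittest_internal.h>   // assumed: cmocka plumbing, PCMK__UNIT_TEST
    #include <crm/common/internal.h>            // assumed: pcmk__procfs_has_pids()

    static void
    readlink_fails(void **state)
    {
        char path[PATH_MAX];

        snprintf(path, PATH_MAX, "/proc/%u/exe", getpid());

        pcmk__mock_readlink = true;                    // divert readlink() to the wrapper
        expect_string(__wrap_readlink, path, path);    // expected arguments...
        expect_any(__wrap_readlink, buf);
        expect_value(__wrap_readlink, bufsize, PATH_MAX - 1);
        will_return(__wrap_readlink, ENOENT);          // ...then the errno to simulate
        will_return(__wrap_readlink, NULL);            // ...and the link contents

        assert_false(pcmk__procfs_has_pids());
        pcmk__mock_readlink = false;
    }

    PCMK__UNIT_TEST(NULL, NULL, cmocka_unit_test(readlink_fails))
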
*/ #include #include // PRIx32 #include #include #include #include #include "libpacemaker_private.h" enum pe_order_kind { pe_order_kind_optional, pe_order_kind_mandatory, pe_order_kind_serialize, }; enum ordering_symmetry { ordering_asymmetric, // the only relation in an asymmetric ordering ordering_symmetric, // the normal relation in a symmetric ordering ordering_symmetric_inverse, // the inverse relation in a symmetric ordering }; #define EXPAND_CONSTRAINT_IDREF(__set, __rsc, __name) do { \ __rsc = pcmk__find_constraint_resource(data_set->resources, __name); \ if (__rsc == NULL) { \ pcmk__config_err("%s: No resource found for %s", __set, __name); \ return pcmk_rc_unpack_error; \ } \ } while (0) static const char * invert_action(const char *action) { if (pcmk__str_eq(action, RSC_START, pcmk__str_casei)) { return RSC_STOP; } else if (pcmk__str_eq(action, RSC_STOP, pcmk__str_casei)) { return RSC_START; } else if (pcmk__str_eq(action, RSC_PROMOTE, pcmk__str_casei)) { return RSC_DEMOTE; } else if (pcmk__str_eq(action, RSC_DEMOTE, pcmk__str_casei)) { return RSC_PROMOTE; } else if (pcmk__str_eq(action, RSC_PROMOTED, pcmk__str_casei)) { return RSC_DEMOTED; } else if (pcmk__str_eq(action, RSC_DEMOTED, pcmk__str_casei)) { return RSC_PROMOTED; } else if (pcmk__str_eq(action, RSC_STARTED, pcmk__str_casei)) { return RSC_STOPPED; } else if (pcmk__str_eq(action, RSC_STOPPED, pcmk__str_casei)) { return RSC_STARTED; } crm_warn("Unknown action '%s' specified in order constraint", action); return NULL; } static enum pe_order_kind get_ordering_type(xmlNode *xml_obj) { enum pe_order_kind kind_e = pe_order_kind_mandatory; const char *kind = crm_element_value(xml_obj, XML_ORDER_ATTR_KIND); if (kind == NULL) { const char *score = crm_element_value(xml_obj, XML_RULE_ATTR_SCORE); kind_e = pe_order_kind_mandatory; if (score) { // @COMPAT deprecated informally since 1.0.7, formally since 2.0.1 int score_i = char2score(score); if (score_i == 0) { kind_e = pe_order_kind_optional; } pe_warn_once(pe_wo_order_score, "Support for 'score' in rsc_order is deprecated " "and will be removed in a future release " "(use 'kind' instead)"); } } else if (pcmk__str_eq(kind, "Mandatory", pcmk__str_casei)) { kind_e = pe_order_kind_mandatory; } else if (pcmk__str_eq(kind, "Optional", pcmk__str_casei)) { kind_e = pe_order_kind_optional; } else if (pcmk__str_eq(kind, "Serialize", pcmk__str_casei)) { kind_e = pe_order_kind_serialize; } else { pcmk__config_err("Resetting '" XML_ORDER_ATTR_KIND "' for constraint " "%s to 'Mandatory' because '%s' is not valid", pcmk__s(ID(xml_obj), "missing ID"), kind); } return kind_e; } /*! 
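invert_action() above is what makes symmetric orderings work: "start A then start B" implies the inverse "stop B then stop A", and an action with no known inverse simply cannot be inverted automatically. A toy, self-contained version of the mapping using plain strings instead of the RSC_* constants:

    #include <stdio.h>
    #include <string.h>

    // Toy stand-in for the inversion table implemented by invert_action() above
    static const char *invert(const char *action)
    {
        static const char *pairs[][2] = {
            { "start", "stop" }, { "stop", "start" },
            { "promote", "demote" }, { "demote", "promote" },
        };

        for (size_t i = 0; i < sizeof(pairs) / sizeof(pairs[0]); i++) {
            if (strcmp(action, pairs[i][0]) == 0) {
                return pairs[i][1];
            }
        }
        return NULL;   // unknown action: the constraint cannot be inverted automatically
    }

    int main(void)
    {
        // A symmetric "start A then start B" ordering implies "stop B then stop A"
        printf("start -> %s\n", invert("start"));
        return 0;
    }
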
* \internal * \brief Get ordering symmetry from XML * * \param[in] xml_obj Ordering XML * \param[in] parent_kind Default ordering kind * \param[in] parent_symmetrical_s Parent element's symmetrical setting, if any * * \retval ordering_symmetric Ordering is symmetric * \retval ordering_asymmetric Ordering is asymmetric */ static enum ordering_symmetry get_ordering_symmetry(xmlNode *xml_obj, enum pe_order_kind parent_kind, const char *parent_symmetrical_s) { int rc = pcmk_rc_ok; bool symmetric = false; enum pe_order_kind kind = parent_kind; // Default to parent's kind // Check ordering XML for explicit kind if ((crm_element_value(xml_obj, XML_ORDER_ATTR_KIND) != NULL) || (crm_element_value(xml_obj, XML_RULE_ATTR_SCORE) != NULL)) { kind = get_ordering_type(xml_obj); } // Check ordering XML (and parent) for explicit symmetrical setting rc = pcmk__xe_get_bool_attr(xml_obj, XML_CONS_ATTR_SYMMETRICAL, &symmetric); if (rc != pcmk_rc_ok && parent_symmetrical_s != NULL) { symmetric = crm_is_true(parent_symmetrical_s); rc = pcmk_rc_ok; } if (rc == pcmk_rc_ok) { if (symmetric) { if (kind == pe_order_kind_serialize) { pcmk__config_warn("Ignoring " XML_CONS_ATTR_SYMMETRICAL " for '%s' because not valid with " XML_ORDER_ATTR_KIND " of 'Serialize'", ID(xml_obj)); } else { return ordering_symmetric; } } return ordering_asymmetric; } // Use default symmetry if (kind == pe_order_kind_serialize) { return ordering_asymmetric; } return ordering_symmetric; } /*! * \internal * \brief Get ordering flags appropriate to ordering kind * * \param[in] kind Ordering kind * \param[in] first Action name for 'first' action * \param[in] symmetry This ordering's symmetry role * * \return Minimal ordering flags appropriate to \p kind */ static uint32_t ordering_flags_for_kind(enum pe_order_kind kind, const char *first, enum ordering_symmetry symmetry) { uint32_t flags = pe_order_none; // so we trace-log all flags set pe__set_order_flags(flags, pe_order_optional); switch (kind) { case pe_order_kind_optional: break; case pe_order_kind_serialize: pe__set_order_flags(flags, pe_order_serialize_only); break; case pe_order_kind_mandatory: switch (symmetry) { case ordering_asymmetric: pe__set_order_flags(flags, pe_order_asymmetrical); break; case ordering_symmetric: pe__set_order_flags(flags, pe_order_implies_then); if (pcmk__strcase_any_of(first, RSC_START, RSC_PROMOTE, NULL)) { pe__set_order_flags(flags, pe_order_runnable_left); } break; case ordering_symmetric_inverse: pe__set_order_flags(flags, pe_order_implies_first); break; } break; } return flags; } /*! * \internal * \brief Find resource corresponding to ID specified in ordering * * \param[in] xml Ordering XML * \param[in] resource_attr XML attribute name for resource ID * \param[in] instance_attr XML attribute name for instance number. * This option is deprecated and will be removed in a * future release. 
* \param[in] data_set Cluster working set * * \return Resource corresponding to \p id, or NULL if none */ static pe_resource_t * get_ordering_resource(xmlNode *xml, const char *resource_attr, const char *instance_attr, pe_working_set_t *data_set) { // @COMPAT: instance_attr and instance_id variables deprecated since 2.1.5 pe_resource_t *rsc = NULL; const char *rsc_id = crm_element_value(xml, resource_attr); const char *instance_id = crm_element_value(xml, instance_attr); if (rsc_id == NULL) { pcmk__config_err("Ignoring constraint '%s' without %s", ID(xml), resource_attr); return NULL; } rsc = pcmk__find_constraint_resource(data_set->resources, rsc_id); if (rsc == NULL) { pcmk__config_err("Ignoring constraint '%s' because resource '%s' " "does not exist", ID(xml), rsc_id); return NULL; } if (instance_id != NULL) { pe_warn_once(pe_wo_order_inst, "Support for " XML_ORDER_ATTR_FIRST_INSTANCE " and " XML_ORDER_ATTR_THEN_INSTANCE " is deprecated and will be " "removed in a future release."); if (!pe_rsc_is_clone(rsc)) { pcmk__config_err("Ignoring constraint '%s' because resource '%s' " "is not a clone but instance '%s' was requested", ID(xml), rsc_id, instance_id); return NULL; } rsc = find_clone_instance(rsc, instance_id, data_set); if (rsc == NULL) { pcmk__config_err("Ignoring constraint '%s' because resource '%s' " "does not have an instance '%s'", "'%s'", ID(xml), rsc_id, instance_id); return NULL; } } return rsc; } /*! * \internal * \brief Determine minimum number of 'first' instances required in ordering * * \param[in] rsc 'First' resource in ordering * \param[in] xml Ordering XML * * \return Minimum 'first' instances required (or 0 if not applicable) */ static int get_minimum_first_instances(pe_resource_t *rsc, xmlNode *xml) { const char *clone_min = NULL; bool require_all = false; if (!pe_rsc_is_clone(rsc)) { return 0; } clone_min = g_hash_table_lookup(rsc->meta, XML_RSC_ATTR_INCARNATION_MIN); if (clone_min != NULL) { int clone_min_int = 0; pcmk__scan_min_int(clone_min, &clone_min_int, 0); return clone_min_int; } /* @COMPAT 1.1.13: * require-all=false is deprecated equivalent of clone-min=1 */ if (pcmk__xe_get_bool_attr(xml, "require-all", &require_all) != ENODATA) { pe_warn_once(pe_wo_require_all, "Support for require-all in ordering constraints " "is deprecated and will be removed in a future release" " (use clone-min clone meta-attribute instead)"); if (!require_all) { return 1; } } return 0; } /*! * \internal * \brief Create orderings for a constraint with clone-min > 0 * * \param[in] id Ordering ID * \param[in] rsc_first 'First' resource in ordering (a clone) * \param[in] action_first 'First' action in ordering * \param[in] rsc_then 'Then' resource in ordering * \param[in] action_then 'Then' action in ordering * \param[in] flags Ordering flags * \param[in] clone_min Minimum required instances of 'first' * \param[in] data_set Cluster working set */ static void clone_min_ordering(const char *id, pe_resource_t *rsc_first, const char *action_first, pe_resource_t *rsc_then, const char *action_then, uint32_t flags, int clone_min, pe_working_set_t *data_set) { // Create a pseudo-action for when the minimum instances are active char *task = crm_strdup_printf(CRM_OP_RELAXED_CLONE ":%s", id); pe_action_t *clone_min_met = get_pseudo_op(task, data_set); free(task); /* Require the pseudo-action to have the required number of actions to be * considered runnable before allowing the pseudo-action to be runnable. 
*/ clone_min_met->required_runnable_before = clone_min; pe__set_action_flags(clone_min_met, pe_action_requires_any); // Order the actions for each clone instance before the pseudo-action for (GList *rIter = rsc_first->children; rIter != NULL; rIter = rIter->next) { pe_resource_t *child = rIter->data; pcmk__new_ordering(child, pcmk__op_key(child->id, action_first, 0), NULL, NULL, NULL, clone_min_met, pe_order_one_or_more|pe_order_implies_then_printed, data_set); } // Order "then" action after the pseudo-action (if runnable) pcmk__new_ordering(NULL, NULL, clone_min_met, rsc_then, pcmk__op_key(rsc_then->id, action_then, 0), NULL, flags|pe_order_runnable_left, data_set); } /*! * \internal * \brief Update ordering flags for restart-type=restart * * \param[in] rsc 'Then' resource in ordering * \param[in] kind Ordering kind * \param[in] flag Ordering flag to set (when applicable) * \param[out] flags Ordering flag set to update * * \compat The restart-type resource meta-attribute is deprecated. Eventually, * it will be removed, and pe_restart_ignore will be the only behavior, * at which time this can just be removed entirely. */ #define handle_restart_type(rsc, kind, flag, flags) do { \ if (((kind) == pe_order_kind_optional) \ && ((rsc)->restart_type == pe_restart_restart)) { \ pe__set_order_flags((flags), (flag)); \ } \ } while (0) /*! * \internal * \brief Create new ordering for inverse of symmetric constraint * * \param[in] id Ordering ID (for logging only) * \param[in] kind Ordering kind * \param[in] rsc_first 'First' resource in ordering (a clone) * \param[in] action_first 'First' action in ordering * \param[in] rsc_then 'Then' resource in ordering * \param[in] action_then 'Then' action in ordering * \param[in] data_set Cluster working set */ static void inverse_ordering(const char *id, enum pe_order_kind kind, pe_resource_t *rsc_first, const char *action_first, pe_resource_t *rsc_then, const char *action_then, pe_working_set_t *data_set) { action_then = invert_action(action_then); action_first = invert_action(action_first); if ((action_then == NULL) || (action_first == NULL)) { pcmk__config_warn("Cannot invert constraint '%s' " "(please specify inverse manually)", id); } else { uint32_t flags = ordering_flags_for_kind(kind, action_first, ordering_symmetric_inverse); handle_restart_type(rsc_then, kind, pe_order_implies_first, flags); pcmk__order_resource_actions(rsc_then, action_then, rsc_first, action_first, flags); } } static void unpack_simple_rsc_order(xmlNode *xml_obj, pe_working_set_t *data_set) { pe_resource_t *rsc_then = NULL; pe_resource_t *rsc_first = NULL; int min_required_before = 0; enum pe_order_kind kind = pe_order_kind_mandatory; uint32_t cons_weight = pe_order_none; enum ordering_symmetry symmetry; const char *action_then = NULL; const char *action_first = NULL; const char *id = NULL; CRM_CHECK(xml_obj != NULL, return); id = crm_element_value(xml_obj, XML_ATTR_ID); if (id == NULL) { pcmk__config_err("Ignoring <%s> constraint without " XML_ATTR_ID, crm_element_name(xml_obj)); return; } rsc_first = get_ordering_resource(xml_obj, XML_ORDER_ATTR_FIRST, XML_ORDER_ATTR_FIRST_INSTANCE, data_set); if (rsc_first == NULL) { return; } rsc_then = get_ordering_resource(xml_obj, XML_ORDER_ATTR_THEN, XML_ORDER_ATTR_THEN_INSTANCE, data_set); if (rsc_then == NULL) { return; } action_first = crm_element_value(xml_obj, XML_ORDER_ATTR_FIRST_ACTION); if (action_first == NULL) { action_first = RSC_START; } action_then = crm_element_value(xml_obj, XML_ORDER_ATTR_THEN_ACTION); if (action_then == 
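clone_min_ordering() above funnels every clone instance's action into a pseudo-action that becomes runnable only once required_runnable_before of its inputs are runnable (pe_order_one_or_more plus pe_action_requires_any), and the dependent action is then ordered after that gate. A toy model of the counting rule, independent of any Pacemaker data structures:

    #include <stdbool.h>
    #include <stdio.h>

    // Toy stand-in for the pseudo-action gate: runnable once at least
    // `required` of its inputs are runnable (cf. required_runnable_before)
    static bool gate_runnable(const bool inputs[], int n_inputs, int required)
    {
        int runnable = 0;

        for (int i = 0; i < n_inputs; i++) {
            if (inputs[i]) {
                runnable++;
            }
        }
        return runnable >= required;
    }

    int main(void)
    {
        bool instance_starts[] = { true, false, true };  // 2 of 3 instances can start

        // With clone-min=2 the dependent action may proceed; with clone-min=3 it waits
        printf("clone-min=2: %s\n", gate_runnable(instance_starts, 3, 2) ? "go" : "wait");
        printf("clone-min=3: %s\n", gate_runnable(instance_starts, 3, 3) ? "go" : "wait");
        return 0;
    }
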
NULL) { action_then = action_first; } kind = get_ordering_type(xml_obj); symmetry = get_ordering_symmetry(xml_obj, kind, NULL); cons_weight = ordering_flags_for_kind(kind, action_first, symmetry); handle_restart_type(rsc_then, kind, pe_order_implies_then, cons_weight); /* If there is a minimum number of instances that must be runnable before * the 'then' action is runnable, we use a pseudo-action for convenience: * minimum number of clone instances have runnable actions -> * pseudo-action is runnable -> dependency is runnable. */ min_required_before = get_minimum_first_instances(rsc_first, xml_obj); if (min_required_before > 0) { clone_min_ordering(id, rsc_first, action_first, rsc_then, action_then, cons_weight, min_required_before, data_set); } else { pcmk__order_resource_actions(rsc_first, action_first, rsc_then, action_then, cons_weight); } if (symmetry == ordering_symmetric) { inverse_ordering(id, kind, rsc_first, action_first, rsc_then, action_then, data_set); } } /*! * \internal * \brief Create a new ordering between two actions * * \param[in] first_rsc Resource for 'first' action (if NULL and * \p first_action is a resource action, that * resource will be used) * \param[in] first_action_task Action key for 'first' action (if NULL and * \p first_action is not NULL, its UUID will be * used) * \param[in] first_action 'first' action (if NULL, \p first_rsc and * \p first_action_task must be set) * * \param[in] then_rsc Resource for 'then' action (if NULL and * \p then_action is a resource action, that * resource will be used) * \param[in] then_action_task Action key for 'then' action (if NULL and * \p then_action is not NULL, its UUID will be * used) * \param[in] then_action 'then' action (if NULL, \p then_rsc and * \p then_action_task must be set) * * \param[in] type Flag set of enum pe_ordering * \param[in] data_set Cluster working set to add ordering to * * \note This function takes ownership of first_action_task and * then_action_task, which do not need to be freed by the caller. */ void pcmk__new_ordering(pe_resource_t *first_rsc, char *first_action_task, pe_action_t *first_action, pe_resource_t *then_rsc, char *then_action_task, pe_action_t *then_action, uint32_t flags, pe_working_set_t *data_set) { pe__ordering_t *order = NULL; // One of action or resource must be specified for each side CRM_CHECK(((first_action != NULL) || (first_rsc != NULL)) && ((then_action != NULL) || (then_rsc != NULL)), free(first_action_task); free(then_action_task); return); if ((first_rsc == NULL) && (first_action != NULL)) { first_rsc = first_action->rsc; } if ((then_rsc == NULL) && (then_action != NULL)) { then_rsc = then_action->rsc; } order = calloc(1, sizeof(pe__ordering_t)); CRM_ASSERT(order != NULL); order->id = data_set->order_id++; order->flags = flags; order->lh_rsc = first_rsc; order->rh_rsc = then_rsc; order->lh_action = first_action; order->rh_action = then_action; order->lh_action_task = first_action_task; order->rh_action_task = then_action_task; if ((order->lh_action_task == NULL) && (first_action != NULL)) { order->lh_action_task = strdup(first_action->uuid); } if ((order->rh_action_task == NULL) && (then_action != NULL)) { order->rh_action_task = strdup(then_action->uuid); } if ((order->lh_rsc == NULL) && (first_action != NULL)) { order->lh_rsc = first_action->rsc; } if ((order->rh_rsc == NULL) && (then_action != NULL)) { order->rh_rsc = then_action->rsc; } pe_rsc_trace(first_rsc, "Created ordering %d for %s then %s", (data_set->order_id - 1), ((first_action_task == NULL)? "?" 
: first_action_task), ((then_action_task == NULL)? "?" : then_action_task)); data_set->ordering_constraints = g_list_prepend(data_set->ordering_constraints, order); pcmk__order_migration_equivalents(order); } /*! * \brief Unpack a set in an ordering constraint * * \param[in] set Set XML to unpack * \param[in] parent_kind rsc_order XML "kind" attribute * \param[in] parent_symmetrical_s rsc_order XML "symmetrical" attribute * \param[in] data_set Cluster working set * * \return Standard Pacemaker return code */ static int unpack_order_set(xmlNode *set, enum pe_order_kind parent_kind, const char *parent_symmetrical_s, pe_working_set_t *data_set) { xmlNode *xml_rsc = NULL; GList *set_iter = NULL; GList *resources = NULL; pe_resource_t *last = NULL; pe_resource_t *resource = NULL; int local_kind = parent_kind; bool sequential = false; uint32_t flags = pe_order_optional; enum ordering_symmetry symmetry; char *key = NULL; const char *id = ID(set); const char *action = crm_element_value(set, "action"); const char *sequential_s = crm_element_value(set, "sequential"); const char *kind_s = crm_element_value(set, XML_ORDER_ATTR_KIND); if (action == NULL) { action = RSC_START; } if (kind_s) { local_kind = get_ordering_type(set); } if (sequential_s == NULL) { sequential_s = "1"; } sequential = crm_is_true(sequential_s); symmetry = get_ordering_symmetry(set, parent_kind, parent_symmetrical_s); flags = ordering_flags_for_kind(local_kind, action, symmetry); for (xml_rsc = first_named_child(set, XML_TAG_RESOURCE_REF); xml_rsc != NULL; xml_rsc = crm_next_same_xml(xml_rsc)) { EXPAND_CONSTRAINT_IDREF(id, resource, ID(xml_rsc)); resources = g_list_append(resources, resource); } if (pcmk__list_of_1(resources)) { crm_trace("Single set: %s", id); goto done; } set_iter = resources; while (set_iter != NULL) { resource = (pe_resource_t *) set_iter->data; set_iter = set_iter->next; key = pcmk__op_key(resource->id, action, 0); if (local_kind == pe_order_kind_serialize) { /* Serialize before everything that comes after */ for (GList *gIter = set_iter; gIter != NULL; gIter = gIter->next) { pe_resource_t *then_rsc = (pe_resource_t *) gIter->data; char *then_key = pcmk__op_key(then_rsc->id, action, 0); pcmk__new_ordering(resource, strdup(key), NULL, then_rsc, then_key, NULL, flags, data_set); } } else if (sequential) { if (last != NULL) { pcmk__order_resource_actions(last, action, resource, action, flags); } last = resource; } free(key); } if (symmetry == ordering_asymmetric) { goto done; } last = NULL; action = invert_action(action); flags = ordering_flags_for_kind(local_kind, action, ordering_symmetric_inverse); set_iter = resources; while (set_iter != NULL) { resource = (pe_resource_t *) set_iter->data; set_iter = set_iter->next; if (sequential) { if (last != NULL) { pcmk__order_resource_actions(resource, action, last, action, flags); } last = resource; } } done: g_list_free(resources); return pcmk_rc_ok; } /*! 
* \brief Order two resource sets relative to each other * * \param[in] id Ordering ID (for logging) * \param[in] set1 First listed set * \param[in] set2 Second listed set * \param[in] kind Ordering kind * \param[in] data_set Cluster working set * \param[in] symmetry Which ordering symmetry applies to this relation * * \return Standard Pacemaker return code */ static int order_rsc_sets(const char *id, xmlNode *set1, xmlNode *set2, enum pe_order_kind kind, pe_working_set_t *data_set, enum ordering_symmetry symmetry) { xmlNode *xml_rsc = NULL; xmlNode *xml_rsc_2 = NULL; pe_resource_t *rsc_1 = NULL; pe_resource_t *rsc_2 = NULL; const char *action_1 = crm_element_value(set1, "action"); const char *action_2 = crm_element_value(set2, "action"); uint32_t flags = pe_order_none; bool require_all = true; - pcmk__xe_get_bool_attr(set1, "require-all", &require_all); + (void) pcmk__xe_get_bool_attr(set1, "require-all", &require_all); if (action_1 == NULL) { action_1 = RSC_START; } if (action_2 == NULL) { action_2 = RSC_START; } if (symmetry == ordering_symmetric_inverse) { action_1 = invert_action(action_1); action_2 = invert_action(action_2); } if (pcmk__str_eq(RSC_STOP, action_1, pcmk__str_casei) || pcmk__str_eq(RSC_DEMOTE, action_1, pcmk__str_casei)) { /* Assuming: A -> ( B || C) -> D * The one-or-more logic only applies during the start/promote phase. * During shutdown neither B nor C can shut down until D is down, so simply * turn require_all back on. */ require_all = true; } // @TODO is action_2 correct here? flags = ordering_flags_for_kind(kind, action_2, symmetry); /* If we have an unordered set1, whether it is sequential or not is * irrelevant in regards to set2. */ if (!require_all) { char *task = crm_strdup_printf(CRM_OP_RELAXED_SET ":%s", ID(set1)); pe_action_t *unordered_action = get_pseudo_op(task, data_set); free(task); pe__set_action_flags(unordered_action, pe_action_requires_any); for (xml_rsc = first_named_child(set1, XML_TAG_RESOURCE_REF); xml_rsc != NULL; xml_rsc = crm_next_same_xml(xml_rsc)) { EXPAND_CONSTRAINT_IDREF(id, rsc_1, ID(xml_rsc)); /* Add an ordering constraint between every element in set1 and the * pseudo action. If any action in set1 is runnable the pseudo * action will be runnable. */ pcmk__new_ordering(rsc_1, pcmk__op_key(rsc_1->id, action_1, 0), NULL, NULL, NULL, unordered_action, pe_order_one_or_more|pe_order_implies_then_printed, data_set); } for (xml_rsc_2 = first_named_child(set2, XML_TAG_RESOURCE_REF); xml_rsc_2 != NULL; xml_rsc_2 = crm_next_same_xml(xml_rsc_2)) { EXPAND_CONSTRAINT_IDREF(id, rsc_2, ID(xml_rsc_2)); /* Add an ordering constraint between the pseudo-action and every * element in set2. If the pseudo-action is runnable, every action * in set2 will be runnable. 
*/ pcmk__new_ordering(NULL, NULL, unordered_action, rsc_2, pcmk__op_key(rsc_2->id, action_2, 0), NULL, flags|pe_order_runnable_left, data_set); } return pcmk_rc_ok; } if (pcmk__xe_attr_is_true(set1, "sequential")) { if (symmetry == ordering_symmetric_inverse) { // Get the first one xml_rsc = first_named_child(set1, XML_TAG_RESOURCE_REF); if (xml_rsc != NULL) { EXPAND_CONSTRAINT_IDREF(id, rsc_1, ID(xml_rsc)); } } else { // Get the last one const char *rid = NULL; for (xml_rsc = first_named_child(set1, XML_TAG_RESOURCE_REF); xml_rsc != NULL; xml_rsc = crm_next_same_xml(xml_rsc)) { rid = ID(xml_rsc); } EXPAND_CONSTRAINT_IDREF(id, rsc_1, rid); } } if (pcmk__xe_attr_is_true(set2, "sequential")) { if (symmetry == ordering_symmetric_inverse) { // Get the last one const char *rid = NULL; for (xml_rsc = first_named_child(set2, XML_TAG_RESOURCE_REF); xml_rsc != NULL; xml_rsc = crm_next_same_xml(xml_rsc)) { rid = ID(xml_rsc); } EXPAND_CONSTRAINT_IDREF(id, rsc_2, rid); } else { // Get the first one xml_rsc = first_named_child(set2, XML_TAG_RESOURCE_REF); if (xml_rsc != NULL) { EXPAND_CONSTRAINT_IDREF(id, rsc_2, ID(xml_rsc)); } } } if ((rsc_1 != NULL) && (rsc_2 != NULL)) { pcmk__order_resource_actions(rsc_1, action_1, rsc_2, action_2, flags); } else if (rsc_1 != NULL) { for (xml_rsc = first_named_child(set2, XML_TAG_RESOURCE_REF); xml_rsc != NULL; xml_rsc = crm_next_same_xml(xml_rsc)) { EXPAND_CONSTRAINT_IDREF(id, rsc_2, ID(xml_rsc)); pcmk__order_resource_actions(rsc_1, action_1, rsc_2, action_2, flags); } } else if (rsc_2 != NULL) { for (xml_rsc = first_named_child(set1, XML_TAG_RESOURCE_REF); xml_rsc != NULL; xml_rsc = crm_next_same_xml(xml_rsc)) { EXPAND_CONSTRAINT_IDREF(id, rsc_1, ID(xml_rsc)); pcmk__order_resource_actions(rsc_1, action_1, rsc_2, action_2, flags); } } else { for (xml_rsc = first_named_child(set1, XML_TAG_RESOURCE_REF); xml_rsc != NULL; xml_rsc = crm_next_same_xml(xml_rsc)) { EXPAND_CONSTRAINT_IDREF(id, rsc_1, ID(xml_rsc)); for (xmlNode *xml_rsc_2 = first_named_child(set2, XML_TAG_RESOURCE_REF); xml_rsc_2 != NULL; xml_rsc_2 = crm_next_same_xml(xml_rsc_2)) { EXPAND_CONSTRAINT_IDREF(id, rsc_2, ID(xml_rsc_2)); pcmk__order_resource_actions(rsc_1, action_1, rsc_2, action_2, flags); } } } return pcmk_rc_ok; } /*! 
* \internal * \brief If an ordering constraint uses resource tags, expand them * * \param[in] xml_obj Ordering constraint XML * \param[out] expanded_xml Equivalent XML with tags expanded * \param[in] data_set Cluster working set * * \return Standard Pacemaker return code (specifically, pcmk_rc_ok on success, * and pcmk_rc_unpack_error on invalid configuration) */ static int unpack_order_tags(xmlNode *xml_obj, xmlNode **expanded_xml, pe_working_set_t *data_set) { const char *id_first = NULL; const char *id_then = NULL; const char *action_first = NULL; const char *action_then = NULL; pe_resource_t *rsc_first = NULL; pe_resource_t *rsc_then = NULL; pe_tag_t *tag_first = NULL; pe_tag_t *tag_then = NULL; xmlNode *rsc_set_first = NULL; xmlNode *rsc_set_then = NULL; bool any_sets = false; // Check whether there are any resource sets with template or tag references *expanded_xml = pcmk__expand_tags_in_sets(xml_obj, data_set); if (*expanded_xml != NULL) { crm_log_xml_trace(*expanded_xml, "Expanded rsc_order"); return pcmk_rc_ok; } id_first = crm_element_value(xml_obj, XML_ORDER_ATTR_FIRST); id_then = crm_element_value(xml_obj, XML_ORDER_ATTR_THEN); if ((id_first == NULL) || (id_then == NULL)) { return pcmk_rc_ok; } if (!pcmk__valid_resource_or_tag(data_set, id_first, &rsc_first, &tag_first)) { pcmk__config_err("Ignoring constraint '%s' because '%s' is not a " "valid resource or tag", ID(xml_obj), id_first); return pcmk_rc_unpack_error; } if (!pcmk__valid_resource_or_tag(data_set, id_then, &rsc_then, &tag_then)) { pcmk__config_err("Ignoring constraint '%s' because '%s' is not a " "valid resource or tag", ID(xml_obj), id_then); return pcmk_rc_unpack_error; } if ((rsc_first != NULL) && (rsc_then != NULL)) { // Neither side references a template or tag return pcmk_rc_ok; } action_first = crm_element_value(xml_obj, XML_ORDER_ATTR_FIRST_ACTION); action_then = crm_element_value(xml_obj, XML_ORDER_ATTR_THEN_ACTION); *expanded_xml = copy_xml(xml_obj); // Convert template/tag reference in "first" into resource_set under constraint if (!pcmk__tag_to_set(*expanded_xml, &rsc_set_first, XML_ORDER_ATTR_FIRST, true, data_set)) { free_xml(*expanded_xml); *expanded_xml = NULL; return pcmk_rc_unpack_error; } if (rsc_set_first != NULL) { if (action_first != NULL) { // Move "first-action" into converted resource_set as "action" crm_xml_add(rsc_set_first, "action", action_first); xml_remove_prop(*expanded_xml, XML_ORDER_ATTR_FIRST_ACTION); } any_sets = true; } // Convert template/tag reference in "then" into resource_set under constraint if (!pcmk__tag_to_set(*expanded_xml, &rsc_set_then, XML_ORDER_ATTR_THEN, true, data_set)) { free_xml(*expanded_xml); *expanded_xml = NULL; return pcmk_rc_unpack_error; } if (rsc_set_then != NULL) { if (action_then != NULL) { // Move "then-action" into converted resource_set as "action" crm_xml_add(rsc_set_then, "action", action_then); xml_remove_prop(*expanded_xml, XML_ORDER_ATTR_THEN_ACTION); } any_sets = true; } if (any_sets) { crm_log_xml_trace(*expanded_xml, "Expanded rsc_order"); } else { free_xml(*expanded_xml); *expanded_xml = NULL; } return pcmk_rc_ok; } /*! 
* \internal * \brief Unpack ordering constraint XML * * \param[in] xml_obj Ordering constraint XML to unpack * \param[in,out] data_set Cluster working set */ void pcmk__unpack_ordering(xmlNode *xml_obj, pe_working_set_t *data_set) { xmlNode *set = NULL; xmlNode *last = NULL; xmlNode *orig_xml = NULL; xmlNode *expanded_xml = NULL; const char *id = crm_element_value(xml_obj, XML_ATTR_ID); const char *invert = crm_element_value(xml_obj, XML_CONS_ATTR_SYMMETRICAL); enum pe_order_kind kind = get_ordering_type(xml_obj); enum ordering_symmetry symmetry = get_ordering_symmetry(xml_obj, kind, NULL); // Expand any resource tags in the constraint XML if (unpack_order_tags(xml_obj, &expanded_xml, data_set) != pcmk_rc_ok) { return; } if (expanded_xml != NULL) { orig_xml = xml_obj; xml_obj = expanded_xml; } // If the constraint has resource sets, unpack them for (set = first_named_child(xml_obj, XML_CONS_TAG_RSC_SET); set != NULL; set = crm_next_same_xml(set)) { set = expand_idref(set, data_set->input); if ((set == NULL) // Configuration error, message already logged || (unpack_order_set(set, kind, invert, data_set) != pcmk_rc_ok)) { if (expanded_xml != NULL) { free_xml(expanded_xml); } return; } if (last != NULL) { if (order_rsc_sets(id, last, set, kind, data_set, symmetry) != pcmk_rc_ok) { if (expanded_xml != NULL) { free_xml(expanded_xml); } return; } if ((symmetry == ordering_symmetric) && (order_rsc_sets(id, set, last, kind, data_set, ordering_symmetric_inverse) != pcmk_rc_ok)) { if (expanded_xml != NULL) { free_xml(expanded_xml); } return; } } last = set; } if (expanded_xml) { free_xml(expanded_xml); xml_obj = orig_xml; } // If the constraint has no resource sets, unpack it as a simple ordering if (last == NULL) { return unpack_simple_rsc_order(xml_obj, data_set); } } static bool ordering_is_invalid(pe_action_t *action, pe_action_wrapper_t *input) { /* Prevent user-defined ordering constraints between resources * running in a guest node and the resource that defines that node. */ if (!pcmk_is_set(input->type, pe_order_preserve) && (input->action->rsc != NULL) && pcmk__rsc_corresponds_to_guest(action->rsc, input->action->node)) { crm_warn("Invalid ordering constraint between %s and %s", input->action->rsc->id, action->rsc->id); return true; } /* If there's an order like * "rscB_stop node2"-> "load_stopped_node2" -> "rscA_migrate_to node1" * * then rscA is being migrated from node1 to node2, while rscB is being * migrated from node2 to node1. If there would be a graph loop, * break the order "load_stopped_node2" -> "rscA_migrate_to node1". */ if ((input->type == pe_order_load) && action->rsc && pcmk__str_eq(action->task, RSC_MIGRATE, pcmk__str_casei) && pcmk__graph_has_loop(action, action, input)) { return true; } return false; } void pcmk__disable_invalid_orderings(pe_working_set_t *data_set) { for (GList *iter = data_set->actions; iter != NULL; iter = iter->next) { pe_action_t *action = (pe_action_t *) iter->data; pe_action_wrapper_t *input = NULL; for (GList *input_iter = action->actions_before; input_iter != NULL; input_iter = input_iter->next) { input = (pe_action_wrapper_t *) input_iter->data; if (ordering_is_invalid(action, input)) { input->type = pe_order_none; } } } } /*! 
* \internal * \brief Order stops on a node before the node's shutdown * * \param[in] node Node being shut down * \param[in] shutdown_op Shutdown action for node */ void pcmk__order_stops_before_shutdown(pe_node_t *node, pe_action_t *shutdown_op) { for (GList *iter = node->details->data_set->actions; iter != NULL; iter = iter->next) { pe_action_t *action = (pe_action_t *) iter->data; // Only stops on the node shutting down are relevant if ((action->rsc == NULL) || (action->node == NULL) || (action->node->details != node->details) || !pcmk__str_eq(action->task, RSC_STOP, pcmk__str_casei)) { continue; } // Resources and nodes in maintenance mode won't be touched if (pcmk_is_set(action->rsc->flags, pe_rsc_maintenance)) { pe_rsc_trace(action->rsc, "Not ordering %s before shutdown of %s because " "resource in maintenance mode", action->uuid, pe__node_name(node)); continue; } else if (node->details->maintenance) { pe_rsc_trace(action->rsc, "Not ordering %s before shutdown of %s because " "node in maintenance mode", action->uuid, pe__node_name(node)); continue; } /* Don't touch a resource that is unmanaged or blocked, to avoid * blocking the shutdown (though if another action depends on this one, * we may still end up blocking) */ if (!pcmk_any_flags_set(action->rsc->flags, pe_rsc_managed|pe_rsc_block)) { pe_rsc_trace(action->rsc, "Not ordering %s before shutdown of %s because " "resource is unmanaged or blocked", action->uuid, pe__node_name(node)); continue; } pe_rsc_trace(action->rsc, "Ordering %s before shutdown of %s", action->uuid, pe__node_name(node)); pe__clear_action_flags(action, pe_action_optional); pcmk__new_ordering(action->rsc, NULL, action, NULL, strdup(CRM_OP_SHUTDOWN), shutdown_op, pe_order_optional|pe_order_runnable_left, node->details->data_set); } } /*! * \brief Find resource actions matching directly or as child * * \param[in] rsc Resource to check * \param[in] original_key Action key to search for (possibly referencing * parent of \rsc) * * \return Newly allocated list of matching actions * \note It is the caller's responsibility to free the result with g_list_free() */ static GList * find_actions_by_task(const pe_resource_t *rsc, const char *original_key) { // Search under given task key directly GList *list = find_actions(rsc->actions, original_key, NULL); if (list == NULL) { // Search again using this resource's ID char *key = NULL; char *task = NULL; guint interval_ms = 0; if (parse_op_key(original_key, NULL, &task, &interval_ms)) { key = pcmk__op_key(rsc->id, task, interval_ms); list = find_actions(rsc->actions, key, NULL); free(key); free(task); } else { crm_err("Invalid operation key (bug?): %s", original_key); } } return list; } /*! 
* \internal * \brief Order relevant resource actions after a given action * * \param[in,out] first_action Action to order after (or NULL if none runnable) * \param[in] rsc Resource whose actions should be ordered * \param[in,out] order Ordering constraint being applied */ static void order_resource_actions_after(pe_action_t *first_action, const pe_resource_t *rsc, pe__ordering_t *order) { GList *then_actions = NULL; uint32_t flags = pe_order_none; CRM_CHECK((rsc != NULL) && (order != NULL), return); flags = order->flags; pe_rsc_trace(rsc, "Applying ordering %d for 'then' resource %s", order->id, rsc->id); if (order->rh_action != NULL) { then_actions = g_list_prepend(NULL, order->rh_action); } else { then_actions = find_actions_by_task(rsc, order->rh_action_task); } if (then_actions == NULL) { pe_rsc_trace(rsc, "Ignoring ordering %d: no %s actions found for %s", order->id, order->rh_action_task, rsc->id); return; } if ((first_action != NULL) && (first_action->rsc == rsc) && pcmk_is_set(first_action->flags, pe_action_dangle)) { pe_rsc_trace(rsc, "Detected dangling migration ordering (%s then %s %s)", first_action->uuid, order->rh_action_task, rsc->id); pe__clear_order_flags(flags, pe_order_implies_then); } if ((first_action == NULL) && !pcmk_is_set(flags, pe_order_implies_then)) { pe_rsc_debug(rsc, "Ignoring ordering %d for %s: No first action found", order->id, rsc->id); g_list_free(then_actions); return; } for (GList *iter = then_actions; iter != NULL; iter = iter->next) { pe_action_t *then_action_iter = (pe_action_t *) iter->data; if (first_action != NULL) { order_actions(first_action, then_action_iter, flags); } else { pe__clear_action_flags(then_action_iter, pe_action_runnable); crm_warn("%s of %s is unrunnable because there is no %s of %s " "to order it after", then_action_iter->task, rsc->id, order->lh_action_task, order->lh_rsc->id); } } g_list_free(then_actions); } static void rsc_order_first(pe_resource_t *first_rsc, pe__ordering_t *order, pe_working_set_t *data_set) { GList *first_actions = NULL; pe_action_t *first_action = order->lh_action; pe_resource_t *then_rsc = order->rh_rsc; CRM_ASSERT(first_rsc != NULL); pe_rsc_trace(first_rsc, "Applying ordering constraint %d (first: %s)", order->id, first_rsc->id); if (first_action != NULL) { first_actions = g_list_prepend(NULL, first_action); } else { first_actions = find_actions_by_task(first_rsc, order->lh_action_task); } if ((first_actions == NULL) && (first_rsc == then_rsc)) { pe_rsc_trace(first_rsc, "Ignoring constraint %d: first (%s for %s) not found", order->id, order->lh_action_task, first_rsc->id); } else if (first_actions == NULL) { char *key = NULL; char *op_type = NULL; guint interval_ms = 0; parse_op_key(order->lh_action_task, NULL, &op_type, &interval_ms); key = pcmk__op_key(first_rsc->id, op_type, interval_ms); if ((first_rsc->fns->state(first_rsc, TRUE) == RSC_ROLE_STOPPED) && pcmk__str_eq(op_type, RSC_STOP, pcmk__str_casei)) { free(key); pe_rsc_trace(first_rsc, "Ignoring constraint %d: first (%s for %s) not found", order->id, order->lh_action_task, first_rsc->id); } else if ((first_rsc->fns->state(first_rsc, TRUE) == RSC_ROLE_UNPROMOTED) && pcmk__str_eq(op_type, RSC_DEMOTE, pcmk__str_casei)) { free(key); pe_rsc_trace(first_rsc, "Ignoring constraint %d: first (%s for %s) not found", order->id, order->lh_action_task, first_rsc->id); } else { pe_rsc_trace(first_rsc, "Creating first (%s for %s) for constraint %d ", order->lh_action_task, first_rsc->id, order->id); first_action = custom_action(first_rsc, key, op_type, NULL, 
TRUE, TRUE, data_set); first_actions = g_list_prepend(NULL, first_action); } free(op_type); } if (then_rsc == NULL) { if (order->rh_action == NULL) { pe_rsc_trace(first_rsc, "Ignoring constraint %d: then not found", order->id); return; } then_rsc = order->rh_action->rsc; } for (GList *gIter = first_actions; gIter != NULL; gIter = gIter->next) { first_action = (pe_action_t *) gIter->data; if (then_rsc == NULL) { order_actions(first_action, order->rh_action, order->flags); } else { order_resource_actions_after(first_action, then_rsc, order); } } g_list_free(first_actions); } void pcmk__apply_orderings(pe_working_set_t *data_set) { crm_trace("Applying ordering constraints"); /* Ordering constraints need to be processed in the order they were created. * rsc_order_first() and order_resource_actions_after() require the relevant * actions to already exist in some cases, but rsc_order_first() will create * the 'first' action in certain cases. Thus calling rsc_order_first() can * change the behavior of later-created orderings. * * Also, g_list_append() should be avoided for performance reasons, so we * prepend orderings when creating them and reverse the list here. * * @TODO This is brittle and should be carefully redesigned so that the * order of creation doesn't matter, and the reverse becomes unneeded. */ data_set->ordering_constraints = g_list_reverse(data_set->ordering_constraints); for (GList *gIter = data_set->ordering_constraints; gIter != NULL; gIter = gIter->next) { pe__ordering_t *order = gIter->data; pe_resource_t *rsc = order->lh_rsc; if (rsc != NULL) { rsc_order_first(rsc, order, data_set); continue; } rsc = order->rh_rsc; if (rsc != NULL) { order_resource_actions_after(order->lh_action, rsc, order); } else { crm_trace("Applying ordering constraint %d (non-resource actions)", order->id); order_actions(order->lh_action, order->rh_action, order->flags); } } g_list_foreach(data_set->actions, (GFunc) pcmk__block_colocated_starts, data_set); crm_trace("Ordering probes"); pcmk__order_probes(data_set); crm_trace("Updating %d actions", g_list_length(data_set->actions)); g_list_foreach(data_set->actions, (GFunc) pcmk__update_action_for_orderings, data_set); pcmk__disable_invalid_orderings(data_set); } /*! * \internal * \brief Order a given action after each action in a given list * * \param[in] after "After" action * \param[in] list List of "before" actions */ void pcmk__order_after_each(pe_action_t *after, GList *list) { const char *after_desc = (after->task == NULL)? after->uuid : after->task; for (GList *iter = list; iter != NULL; iter = iter->next) { pe_action_t *before = (pe_action_t *) iter->data; const char *before_desc = before->task? before->task : before->uuid; crm_debug("Ordering %s on %s before %s on %s", before_desc, pe__node_name(before->node), after_desc, pe__node_name(after->node)); order_actions(before, after, pe_order_optional); } } /*! 
* \internal * \brief Order promotions and demotions for restarts of a clone or bundle * * \param[in] rsc Clone or bundle to order */ void pcmk__promotable_restart_ordering(pe_resource_t *rsc) { // Order start and promote after all instances are stopped pcmk__order_resource_actions(rsc, RSC_STOPPED, rsc, RSC_START, pe_order_optional); pcmk__order_resource_actions(rsc, RSC_STOPPED, rsc, RSC_PROMOTE, pe_order_optional); // Order stop, start, and promote after all instances are demoted pcmk__order_resource_actions(rsc, RSC_DEMOTED, rsc, RSC_STOP, pe_order_optional); pcmk__order_resource_actions(rsc, RSC_DEMOTED, rsc, RSC_START, pe_order_optional); pcmk__order_resource_actions(rsc, RSC_DEMOTED, rsc, RSC_PROMOTE, pe_order_optional); // Order promote after all instances are started pcmk__order_resource_actions(rsc, RSC_STARTED, rsc, RSC_PROMOTE, pe_order_optional); // Order demote after all instances are demoted pcmk__order_resource_actions(rsc, RSC_DEMOTE, rsc, RSC_DEMOTED, pe_order_optional); } diff --git a/xml/README.md b/xml/README.md index 4d74c67d56..e32edc2fc2 100644 --- a/xml/README.md +++ b/xml/README.md @@ -1,134 +1,136 @@ # Schema Reference Pacemaker's XML schema has a version of its own, independent of the version of Pacemaker itself. ## Versioned Schema Evolution A versioned schema offers transparent backward and forward compatibility. - It reflects the timeline of schema-backed features (introduction, changes to the syntax, possibly deprecation) through the versioned stable schema increments, while keeping schema versions used by default by older Pacemaker versions untouched. - Pacemaker internally uses the latest stable schema version, and relies on supplemental transformations to promote cluster configurations based on older, incompatible schema versions into the desired form. ## Mapping Pacemaker Versions to Schema Versions | Pacemaker | Latest Schema | Changed | --------- | ------------- | ---------------------------------------------- +| `2.1.5` | `3.9` | `alerts`, `constraints`, `nodes`, `nvset`, +| | | `options`, `resources`, `rule` | `2.1.3` | `3.8` | `acls` | `2.1.0` | `3.7` | `constraints`, `resources` | `2.0.5` | `3.5` | `api`, `resources`, `rule` | `2.0.4` | `3.3` | `tags` | `2.0.1` | `3.2` | `resources` | `2.0.0` | `3.1` | `constraints`, `resources` | `1.1.18` | `2.10` | `resources`, `alerts` | `1.1.17` | `2.9` | `resources`, `rule` | `1.1.16` | `2.6` | `constraints` | `1.1.15` | `2.5` | `alerts` | `1.1.14` | `2.4` | `fencing` | `1.1.13` | `2.3` | `constraints` | `1.1.12` | `2.0` | `nodes`, `nvset`, `resources`, `tags`, `acls` -| `1.1.8`+ | `1.2` | +| `1.1.8` | `1.2` | ## Schema generation Each logical portion of the schema goes into its own RNG file, named like `${base}-${X}.${Y}.rng`. `${base}` identifies the portion of the schema (e.g. constraints, resources); ${X}.${Y} is the latest schema version that contained changes in this portion of the schema. The complete, overall schema, `pacemaker-${X}.${Y}.rng`, is automatically generated from the other files via the Makefile. # Updating schema files # ## New features ## The current schema version is determined at runtime when crm\_schema\_init() scans the CRM\_SCHEMA\_DIRECTORY. It will have the form `pacemaker-${X}.${Y}` and the highest `${X}.${Y}` wins. ### Simple Additions When the new syntax is a simple addition to the previous one, create a new entry, incrementing `${Y}`. 
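To make the "highest `${X}.${Y}` wins" rule above concrete, here is a minimal, self-contained C sketch. It is not the real `crm_schema_init()` implementation, and the file names in it are made up; it only shows why schema names of the form `pacemaker-${X}.${Y}` have to be compared numerically (a hypothetical `pacemaker-3.10` should outrank `pacemaker-3.9`, even though a plain string comparison would order them the other way).

```c
/* Illustrative only: pick the "latest" schema name from a hard-coded list,
 * standing in for a scan of CRM_SCHEMA_DIRECTORY. Not Pacemaker code.
 */
#include <stdio.h>

/* Parse "pacemaker-X.Y" into major/minor; return 0 on success, -1 otherwise */
static int
parse_schema_name(const char *name, int *major, int *minor)
{
    return (sscanf(name, "pacemaker-%d.%d", major, minor) == 2)? 0 : -1;
}

int
main(void)
{
    /* Hypothetical directory contents */
    const char *names[] = {
        "pacemaker-3.7", "pacemaker-3.9", "pacemaker-3.8", "pacemaker-2.10",
    };
    const char *best = NULL;
    int best_major = -1;
    int best_minor = -1;

    for (size_t i = 0; i < sizeof(names) / sizeof(names[0]); i++) {
        int major = 0;
        int minor = 0;

        if (parse_schema_name(names[i], &major, &minor) != 0) {
            continue;   /* skip names not matching pacemaker-${X}.${Y} */
        }
        if ((major > best_major)
            || ((major == best_major) && (minor > best_minor))) {
            best = names[i];
            best_major = major;
            best_minor = minor;
        }
    }
    printf("Latest schema: %s\n", best);    /* prints "pacemaker-3.9" */
    return 0;
}
```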
### Feature Removal or otherwise Incompatible Changes

When the new syntax is not a simple addition to the previous one, create a new
entry, incrementing `${X}` and setting `${Y} = 0`. An XSLT file is also
required that converts the old syntax to the new one, and it must be named
`upgrade-${Xold}.${Yold}.xsl`. See `xml/upgrade-1.3.xsl` for an example.

Since `xml/upgrade-2.10.xsl`, a more self-descriptive approach is taken: the
metadata describing the replacements and other modifications to perform is
kept separate from the executive parts that actually carry them out. This
separation is leveraged, e.g., by the on-the-fly overview obtained with
`./regression.sh -X test2to3`. This was also the first time that particular
key names of `nvpair`s (i.e. a level below the granularity of the schemas so
far) received attention; consequently, names that are no longer expected
became systemically banned in the after-upgrade schemas, using an `except`
construct in the data type specification pertaining to the affected XML path.

The implied complexity also resulted in a new compound, stepwise
transformation that alleviates some of the procedural burden of the core
upgrade recipe. In particular, the `id-ref` based syntactic simplification
allowed in the CIB format introduces non-negligible internal "noise": the
extra indirection is generally non-bijective, so its interpretation is
context-dependent. To reduce this strain, a symmetric arrangement is
introduced as a pair of _enter_/_leave_ (pre-upgrade/post-upgrade)
transformations, where the latter eventually restores, reversibly, what the
former intentionally simplified (normalized) for the upgrade transformation's
use. This arrangement is optional (the post-upgrade counterpart can even be
omitted on its own) and depends on whether suitable files are found alongside
the upgrade transformation itself: e.g., for `upgrade-2.10.xsl`, such files
are `upgrade-2.10-enter.xsl` and `upgrade-2.10-leave.xsl` (a small
illustrative sketch of this file-name convention appears at the end of this
document). Note that unfolding and refolding `id-ref` shortcuts is just one
practically imposed case of reversibly making the configuration space
tractable for the upgrade itself; the arrangement allows for more
sophistication down the road.

### General Procedure

1. Copy the most recent version of `${base}-*.rng` to `${base}-${X}.${Y}.rng`,
   such that the new file name increments the highest number of any schema
   file, not just the file being edited.
2. Commit the copy, e.g. `"Low: xml: clone ${base} schema in preparation for changes"`.
   This way, the actual change will be obvious in the commit history.
3. Modify `${base}-${X}.${Y}.rng` as required.
4. If required, add an XSLT file, and update `xslt\_SCRIPTS` in
   `xml/Makefile.am`.
5. Commit.
6. Run `make -C xml clean; make -C xml` to rebuild the schemas in the local
   source directory.
7. The CIB validity and upgrade regression tests will break after the schema
   is updated. Run `cts/cts-cli -s` to make the expected outputs reflect the
   changes made so far, and run `git diff` to ensure that these changes look
   sane. Finally, commit the changes.
8. Similarly, once there is a new major version `${X}`, it's advisable to
   refresh the scheduler tests at some point. See the instructions in
   `cts/README.md`.

## Using a New Schema

New features will not be available until the cluster administrator:

1. Updates all the nodes
2. Runs the equivalent of `cibadmin --upgrade --force`

## Random Notes

From the source directory, run `make -C xml diff` to see the changes in the
current schema (compared to the previous ones).
Alternatively, if the intention is to grok the overall historical schema evolution, use `make -C xml fulldiff`.
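Finally, to make the _enter_/_leave_ convention from "Feature Removal or otherwise Incompatible Changes" a bit more tangible, here is a small, self-contained C sketch. It is not actual Pacemaker code: the helper `companion_name()` is invented for this example, and the real schema-handling logic may differ. The point is only the naming pattern `upgrade-${X}.${Y}-enter.xsl` / `upgrade-${X}.${Y}-leave.xsl` and the fact that each optional step is applied only when the corresponding file exists next to the upgrade transformation.

```c
/* Illustrative only: derive the optional pre-/post-upgrade file names from an
 * upgrade transformation's name and check whether they are present.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>     /* access() */

/* Build "<stem>-<suffix>.xsl" from "<stem>.xsl"; return 0 on success */
static int
companion_name(const char *upgrade_xsl, const char *suffix,
               char *buf, size_t len)
{
    const char *dot = strrchr(upgrade_xsl, '.');

    if ((dot == NULL) || (strcmp(dot, ".xsl") != 0)) {
        return -1;  /* not an .xsl file name */
    }
    if (snprintf(buf, len, "%.*s-%s.xsl",
                 (int) (dot - upgrade_xsl), upgrade_xsl, suffix) >= (int) len) {
        return -1;  /* name would have been truncated */
    }
    return 0;
}

int
main(void)
{
    const char *upgrade = "upgrade-2.10.xsl";   /* example transformation */
    char enter[256];
    char leave[256];

    /* Pre-upgrade ("enter") step: applied only if upgrade-2.10-enter.xsl exists */
    if ((companion_name(upgrade, "enter", enter, sizeof(enter)) == 0)
        && (access(enter, R_OK) == 0)) {
        printf("would apply %s before %s\n", enter, upgrade);
    }

    /* Post-upgrade ("leave") step: applied only if upgrade-2.10-leave.xsl exists */
    if ((companion_name(upgrade, "leave", leave, sizeof(leave)) == 0)
        && (access(leave, R_OK) == 0)) {
        printf("would apply %s after %s\n", leave, upgrade);
    }
    return 0;
}
```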