diff --git a/cts/README.md b/cts/README.md
index 3f603c24f4..6436736bc7 100644
--- a/cts/README.md
+++ b/cts/README.md
@@ -1,284 +1,326 @@

# Pacemaker Cluster Test Suite (CTS)

## Purpose

Pacemaker's CTS is primarily for developers and packagers of the Pacemaker
source code, but it can be useful for users who wish to see how their cluster
will react to various situations.

CTS consists of two main parts: a set of regression tests for verifying the
functionality of particular Pacemaker components, and a cluster exerciser for
intensively testing the behavior of an entire working cluster.

The primary regression test front end is `cts-regression` in this directory.
Run it with the --help option to see its usage. The regression tests can be
run on any single cluster node. The cluster should be stopped on that node
while the tests are run.

The rest of this document focuses on the cluster exerciser.

The cluster exerciser runs a randomized series of predefined tests on the
cluster. CTS can be run against a pre-existing cluster configuration, or it
can overwrite the existing configuration with a test configuration.

## Requirements

* Three or more machines (one test exerciser and two or more test cluster
  machines).

* The test cluster machines should be on the same subnet and have journalling
  filesystems (ext3, ext4, xfs, etc.) for all of their filesystems other than
  /boot. You also need a number of free IP addresses on that subnet if you
  intend to test mutual IP address takeover.

* The test exerciser machine doesn't need to be on the same subnet as the
  test cluster machines. Minimal demands are made on the exerciser machine -
  it just has to stay up during the tests.

* It helps a lot in tracking problems if all machines' clocks are closely
  synchronized. NTP does this automatically, but you can do it by hand if you
  want.

* The exerciser needs to be able to ssh over to the cluster nodes as root
  without a password challenge. Configure ssh accordingly (see the Mini-HOWTO
  at the end of this document for more details).

* The exerciser needs to be able to resolve the machine names of the test
  cluster - either by DNS or by /etc/hosts.

* CTS is not guaranteed to run on all platforms that Pacemaker itself does.
  It calls commands such as `service` that may not be provided by all OSes.

## Preparation

Install Pacemaker (including CTS) on all machines. These scripts are
coordinated with particular versions of Pacemaker, so you need the same
version of CTS as the rest of Pacemaker, and you need the same version of
Pacemaker and CTS on both the test exerciser and the test cluster machines.

You can install CTS from source, although many distributions provide packages
that include it (e.g. pacemaker-cts or pacemaker-dev). Typically, packages
will install CTS as /usr/share/pacemaker/tests/cts.

Configure cluster communications (Corosync) on the cluster machines and
verify everything works.

NOTE: Do not run the cluster on the test exerciser machine.

NOTE: Wherever machine names are mentioned in these configuration files, they
must match the machines' `uname -n` name. This may or may not match the
machines' FQDN (fully qualified domain name) - it depends on how you (and
your OS) have named the machines.
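
One way to satisfy the passwordless-ssh requirement listed above is sketched
below. This is only a minimal sketch, assuming OpenSSH, direct root logins
permitted on the cluster nodes, and the `pcmk-*` node names used in the
examples later in this document; adjust to your environment and security
policy:

    # On the exerciser: create a key with no passphrase, if not already present
    ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519

    # Push the public key to every cluster node
    for n in pcmk-1 pcmk-2 pcmk-3; do
        ssh-copy-id root@$n
    done

    # Verify: each of these must succeed without a password prompt
    for n in pcmk-1 pcmk-2 pcmk-3; do
        ssh root@$n uname -n
    done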
## Run CTS

Now assuming you did all this, what you need to do is run CTSlab.py:

    python ./CTSlab.py [options] number-of-tests-to-run

You must specify which nodes are part of the cluster with --nodes, e.g.:

    --nodes "pcmk-1 pcmk-2 pcmk-3"

Most people will want to save the output with --outputfile, e.g.:

    --outputfile ~/cts.log

Unless you want to test your pre-existing cluster configuration, you also
want:

    --clobber-cib
    --populate-resources
    --test-ip-base $IP    # e.g. --test-ip-base 192.168.9.100

and configure some sort of fencing:

    --stonith $TYPE    # e.g. "--stonith xvm" to use fence_xvm,
                       # or "--stonith ssh" to use external/ssh

A complete command line might look like:

    python ./CTSlab.py --nodes "pcmk-1 pcmk-2 pcmk-3" --outputfile ~/cts.log \
        --clobber-cib --populate-resources --test-ip-base 192.168.9.100 \
        --stonith xvm 50

For more options, use the --help option.

NOTE: Perhaps a more convenient way to compose a command line like the above
is to use the cluster_test script that, at least in the source repository,
sits in the same directory as this very file.

To extract the result of a particular test, run:

    crm_report -T $test

## Optional/advanced testing

### Memory testing

Pacemaker and CTS have various options for testing memory management. On the
cluster nodes, the Pacemaker components use various environment variables to
control these options. How these variables are set varies by OS, but usually
they are set in the /etc/sysconfig/pacemaker or /etc/default/pacemaker file.

Valgrind is a program for detecting memory management problems (such as
use-after-free errors). If you have valgrind installed, you can enable it by
setting the following environment variables on all cluster nodes:

    PCMK_valgrind_enabled=pacemaker-attrd,pacemaker-controld,pacemaker-execd,pacemaker-fenced,cib,pacemaker-schedulerd
    VALGRIND_OPTS="--leak-check=full --trace-children=no --num-callers=25 --log-file=/var/lib/pacemaker/valgrind-%p --suppressions=/usr/share/pacemaker/tests/valgrind-pcmk.suppressions --gen-suppressions=all"

and running CTS with these options:

    --valgrind-tests
    --valgrind-procs="pacemaker-attrd pacemaker-controld pacemaker-execd cib pacemaker-schedulerd pacemaker-fenced"

These options should only be set while specifically testing memory
management, because they may slow down the cluster significantly, and they
will disable writes to the CIB. If desired, you can enable valgrind on a
subset of Pacemaker components rather than all of them as listed above.

Valgrind will put a text file for each process in the location specified by
valgrind's --log-file option. For explanations of the messages valgrind
generates, see http://valgrind.org/docs/manual/mc-manual.html

Separately, the MALLOC_PERTURB_ and MALLOC_CHECK_ environment variables can
be set to affect the GNU C library's memory management functions, and GLib's
G_SLICE variable can be set to similar effect.

When using valgrind, G_SLICE should be set to "always-malloc", which helps
valgrind track memory by always using the malloc() and free() routines
directly. When not using valgrind, G_SLICE can be left unset, or set to
"debug-blocks", which enables GLib to catch many memory errors but may impact
performance.

If the MALLOC_PERTURB_ environment variable is set to an 8-bit integer, the C
library will initialize all newly allocated bytes of memory to the integer
value, and will set all newly freed bytes of memory to the bitwise inverse of
the integer value. This helps catch uses of uninitialized or freed memory
blocks that might otherwise go unnoticed. Example:

    MALLOC_PERTURB_=221
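
As a minimal sketch of how the settings above fit together (the variable
names and values are the ones given above; the file path is the usual one on
RPM-based systems, so adjust for your OS), the memory-testing portion of
/etc/sysconfig/pacemaker on each cluster node might read:

    # Run a subset of the Pacemaker daemons under valgrind
    PCMK_valgrind_enabled=pacemaker-controld,pacemaker-schedulerd
    VALGRIND_OPTS="--leak-check=full --num-callers=25 --log-file=/var/lib/pacemaker/valgrind-%p"
    # Let valgrind see every allocation GLib makes
    G_SLICE=always-malloc
    # Poison newly allocated and newly freed memory
    MALLOC_PERTURB_=221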
If the MALLOC_CHECK_ environment variable is set, the C library will check
for certain heap corruption errors. The most useful value in testing is 3,
which will cause the library to print a message to stderr and abort
execution. Example:

    MALLOC_CHECK_=3

Valgrind should be enabled for either all nodes or none, but the C library
variables may be set differently on different nodes.

### Remote node testing

If the pacemaker-remoted daemon is installed on all cluster nodes, CTS will
enable remote node tests.

The remote node tests choose a random node, stop the cluster on it, start
pacemaker-remoted on it, and add an `ocf:pacemaker:remote` resource to turn
it into a remote node. When the test is done, CTS will turn the node back
into a cluster node.

To avoid conflicts, CTS will rename the node, prefixing the original node
name with "remote-". For example, "pcmk-1" will become "remote-pcmk-1".

The name change may require special stonith configuration, if the fence agent
expects the node name to be the same as its hostname. A common approach is to
specify the "remote-" names in pcmk_host_list. If you use pcmk_host_list=all,
CTS will expand that to all cluster nodes and their "remote-" names. You may
additionally need a pcmk_host_map argument to map the "remote-" names to the
hostnames. Example:

    --stonith xvm --stonith-args \
        pcmk_arg_map=domain:uname,pcmk_host_list=all,pcmk_host_map=remote-pcmk-1:pcmk-1;remote-pcmk-2:pcmk-2

### Remote node testing with valgrind

When running the remote node tests, the Pacemaker components on the cluster
nodes can be run under valgrind as described in the "Memory testing" section.
However, pacemaker-remoted cannot be run under valgrind that way, because it
is started by the OS's regular boot system and not by Pacemaker.

Details vary by system, but the goal is to set the VALGRIND_OPTS environment
variable and then start pacemaker-remoted by prefixing it with the path to
valgrind.

The init script and systemd service file provided with pacemaker-remoted will
load the Pacemaker environment variables from the same location used by other
Pacemaker components, so VALGRIND_OPTS will be set correctly if using one of
those.

For an OS using systemd, you can override the ExecStart parameter to run
valgrind. For example:

    mkdir /etc/systemd/system/pacemaker_remote.service.d
    cat >/etc/systemd/system/pacemaker_remote.service.d/valgrind.conf <<EOF
    [Service]
    ExecStart=
    ExecStart=/usr/bin/valgrind /usr/sbin/pacemaker-remoted
    EOF

### Updating scheduler test inputs

An upgrade may leave behind XML formatting that should be turned into a more
canonical form, so some manual editing may be required; alternatively,
passing `--format` or `--c14n` to `xmllint` may be of help (without any other
side effects).

If the overall process gets stuck anywhere, common sense to the rescue. The
initial part of the recipe can be repeated at any time to verify that there
is nothing left to upgrade artificially like this, which is the desired
state. Note that the `regression.sh` script implicitly validates both the
input and the output whenever an upgrade takes place, so there is no need to
revalidate in the happy case.
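
A hedged sketch of what one such upgrade-and-normalize round can look like,
assuming the test inputs are standalone `*.xml` CIB files in the current
directory and that `cibadmin` is pointed at them via the `CIB_file`
environment variable (a common technique, but verify it against your
Pacemaker version before relying on it):

    # Upgrade each test input in place to the latest schema; with CIB_file
    # set, the CIB tools read and write a plain file instead of talking to
    # a live cluster.
    for f in *.xml; do
        CIB_file="$f" cibadmin --upgrade --force
    done

    # Normalize whatever formatting the transformation left behind
    for f in *.xml; do
        xmllint --format "$f" > "$f.tmp" && mv "$f.tmp" "$f"
    done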
diff --git a/xml/Readme.md b/xml/Readme.md
index 6cd1aff512..73aa64f584 100644
--- a/xml/Readme.md
+++ b/xml/Readme.md
@@ -1,110 +1,122 @@

# Schema Reference

Pacemaker's XML schema has a version of its own, independent of the version
of Pacemaker itself.

## Versioned Schema Evolution

A versioned schema offers transparent backward and forward compatibility.

- It reflects the timeline of schema-backed features (introduction, changes
  to the syntax, possibly deprecation) through the versioned stable schema
  increments, while keeping schema versions used by default by older
  Pacemaker versions untouched.

- Pacemaker internally uses the latest stable schema version, and relies on
  supplemental transformations to promote cluster configurations based on
  older, incompatible schema versions into the desired form.

- It allows experimental features with a possibly unstable configuration
  interface to be developed using the special `next` version of the schema.

## Mapping Pacemaker Versions to Schema Versions

| Pacemaker | Latest Schema | Changed
| --------- | ------------- | ----------------------------------------------
| `2.0.0`   | `3.0`         | `constraints`, `resources`
| `1.1.18`  | `2.10`        | `resources`, `alerts`
| `1.1.17`  | `2.9`         | `resources`, `rule`
| `1.1.16`  | `2.6`         | `constraints`
| `1.1.15`  | `2.5`         | `alerts`
| `1.1.14`  | `2.4`         | `fencing`
| `1.1.13`  | `2.3`         | `constraints`
| `1.1.12`  | `2.0`         | `nodes`, `nvset`, `resources`, `tags`, `acls`
| `1.1.8`+  | `1.2`         |

## Schema generation

Each logical portion of the schema goes into its own RNG file, named like
`${base}-${X}.${Y}.rng`. `${base}` identifies the portion of the schema
(e.g. constraints, resources); `${X}.${Y}` is the latest schema version that
contained changes in this portion of the schema.

The complete, overall schema, `pacemaker-${X}.${Y}.rng`, is automatically
generated from the other files via the Makefile.

# Updating schema files #

## Experimental features ##

Experimental features go into `${base}-next.rng` where `${base}` is the
affected portion of the schema. If such a file does not already exist, create
it by copying the most recent `${base}-${X}.${Y}.rng`.

Pacemaker will not use the experimental schema by default; the cluster
administrator must explicitly set the `validate-with` property appropriately
to use it.

## Stable features ##

The current stable version is determined at runtime when crm_schema_init()
scans the CRM_SCHEMA_DIRECTORY. It will have the form `pacemaker-${X}.${Y}`,
and the highest `${X}.${Y}` wins.

### Simple Additions

When the new syntax is a simple addition to the previous one, create a new
entry, incrementing `${Y}`.

### Feature Removal or otherwise Incompatible Changes

When the new syntax is not a simple addition to the previous one, create a
new entry, incrementing `${X}` and setting `${Y} = 0`. An XSLT file is also
required that converts an old syntax to the new one, and it must be named
`upgrade-${Xold}.${Yold}.xsl`. See `xml/upgrade-1.3.xsl` for an example.

Since `xml/upgrade-2.10.xsl`, a rather self-descriptive approach has been
taken: the metadata describing the replacements and other modifications to
perform are kept separate from the actual executive parts, which can be
leveraged, e.g., for the on-the-fly overview obtained with
`./regression.sh -X test2to3`. This was also the first time particular key
names of `nvpair`s, i.e. below the granularity of the schemas so far,
received attention; consequently, names that are no longer expected became
systematically banned in the after-upgrade schemas, using the `<except>`
construct in the data type specification pertaining to the affected XML path.
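
For a quick look at such a transform in action, it can also be applied by
hand; a sketch assuming `xsltproc` is installed, run from the source tree
root against a hypothetical standalone configuration `old-config.xml` (the
regression suite drives the stylesheets properly; this is only for ad-hoc
inspection):

    # Apply the 2.10-to-3.0 upgrade stylesheet outside the cluster
    xsltproc xml/upgrade-2.10.xsl old-config.xml > new-config.xml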
### General Procedure

1. Copy the most recent version of `${base}-*.rng` to
   `${base}-${X}.${Y}.rng`.
1. Commit the copy, e.g. `"Low: xml: clone ${base} schema in preparation for
   changes"`. This way, the actual change will be obvious in the commit
   history.
1. Modify `${base}-${X}.${Y}.rng` as required.
1. If required, add an XSLT file, and update `xslt_SCRIPTS` in
   `xml/Makefile.am`.
1. Commit.
1. Run `make -C xml clean; make -C xml all` to rebuild the schemas in the
   local source directory.
1. The CIB validity regression tests will break after the schema is updated.
   Run `tools/regression.sh` to get the new output,
   `diff tools/regression.validity.{out,exp}` to ensure the changes look
   correct, `cp tools/regression.validity.{out,exp}` to update the expected
   output, then commit the change.
1. Similarly, with a new major version `${X}`, it's advisable to refresh the
   scheduler tests at some point; see the instructions in `cts/README.md`.

## Using a New Schema

New features will not be available until the cluster administrator:

1. Updates all the nodes
1. Runs the equivalent of `cibadmin --upgrade --force`

## Random Notes

From the source directory, run `make -C xml diff` to see the changes in the
current schema (compared to the previous ones) and also the pending changes
in `pacemaker-next`. Alternatively, if the intention is to grok the overall
historical schema evolution, use `make -C xml fulldiff`.
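
Pulling the commands from the procedure above into a single ad-hoc iteration
(a sketch; all paths are the ones named earlier, relative to the source tree
root):

    # Rebuild the schemas after editing the RNG files
    make -C xml clean
    make -C xml all

    # Refresh the CIB validity regression baseline
    tools/regression.sh
    diff tools/regression.validity.{out,exp}    # review the differences
    cp tools/regression.validity.{out,exp}      # accept the new output

    # Inspect the schema changes themselves
    make -C xml diff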