diff --git a/cts/README.md b/cts/README.md
index c4ee1ebcdc..0e6eac148d 100644
--- a/cts/README.md
+++ b/cts/README.md
@@ -1,344 +1,370 @@
 # Pacemaker Cluster Test Suite (CTS)

-## Purpose
+The Cluster Test Suite (CTS) refers to all Pacemaker testing code that can be
+run in an installed environment. (Pacemaker also has unit tests that must be
+run from a source distribution.)

-Pacemaker's CTS is primarily for developers and packagers of the Pacemaker
-source code, but it can be useful for users who wish to see how their cluster
-will react to various situations.
+CTS includes:

-CTS consists of two main parts: a set of regression tests for verifying the
-functionality of particular Pacemaker components, and a cluster exerciser for
-intensively testing the behavior of an entire working cluster.
+* Regression tests: These test specific Pacemaker components individually (no
+  integration tests). The primary front end is cts-regression in this
+  directory. Run it with the --help option to see its usage.

-The primary regression test front end is cts-regression in this directory. Run
-it with the --help option to see its usage. The regression tests can be run on
-any single cluster node. The cluster should be stopped on that node when
-running the tests.
+  cts-regression is a wrapper for individual component regression tests also
+  in this directory (cts-cli, cts-exec, cts-fencing, and cts-scheduler).

-The rest of this document focuses on the cluster exerciser. The cluster
-exerciser runs a randomized series of predefined tests on the cluster. CTS can
-be run against a pre-existing cluster configuration or overwrite the existing
-configuration with a test configuration.
+  The CLI and scheduler regression tests can also be run from a source
+  distribution. The other regression tests can only run in an installed
+  environment, and the cluster should not be running on the node running these
+  tests.
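+
+  As a sketch of a typical invocation (the install path, the -V flag, and the
+  component names here are assumptions, not a guaranteed interface):
+
+      # run only the CLI and scheduler regression tests, verbosely
+      /usr/share/pacemaker/tests/cts-regression -V cli scheduler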
+
+* The CTS lab: This is a cluster exerciser for intensively testing the behavior
+  of an entire working cluster. It is primarily for developers and packagers of
+  the Pacemaker source code, but it can be useful for users who wish to see how
+  their cluster will react to various situations. In an installed deployment,
+  the CTS lab is in the cts subdirectory of this directory; in a source
+  distribution, it is in cts/lab.

-## Helpers
+  The CTS lab runs a randomized series of predefined tests on the cluster. CTS
+  can be run against a pre-existing cluster configuration or overwrite the
+  existing configuration with a test configuration.

-Some of the component regression tests and the cluster exerciser require
-certain helpers to be installed as root. This includes a dummy LSB init script,
-dummy systemd service, etc. The tests will install these as needed and
-uninstall them when done.
+* Helpers: Some of the component regression tests and the CTS lab require
+  certain helpers to be installed as root. These include a dummy LSB init
+  script, dummy systemd service, etc. In a source distribution, the source for
+  these is in cts/support.

-This means that the cluster configuration created by the cluster exerciser
-will generate failures if started manually after the exerciser exits. However,
-the helper installer can be run manually to make the configuration usable, if
-you want to do your own further testing with it:
+  The tests will install these as needed and uninstall them when done. This
+  means that the cluster configuration created by the CTS lab will generate
+  failures if started manually after the lab exits. However, the helper
+  installer can be run manually to make the configuration usable, if you want
+  to do your own further testing with it:

-    /usr/libexec/pacemaker/cts-support install
+      /usr/libexec/pacemaker/cts-support install

-As you might expect, you can also remove the helpers with:
+  As you might expect, you can also remove the helpers with:

-    /usr/libexec/pacemaker/cts-support uninstall
+      /usr/libexec/pacemaker/cts-support uninstall

-## Requirements
+* Cluster benchmark: The benchmark subdirectory of this directory contains some
+  cluster test environment benchmarking code. It is not particularly useful for
+  end users.

-* Three or more machines (one test exerciser and two or more test cluster
-  machines).
+* LXC generator: The lxc\_autogen.sh script can be used to create some guest
+  nodes for testing using LXC containers. It is not particularly useful for end
+  users. In an installed deployment, it is in the cts subdirectory of this
+  directory; in a source distribution, it is in this directory.

-* The test cluster machines should be on the same subnet and have journalling
-  filesystems (ext3, ext4, xfs, etc.) for all of their filesystems other than
+* Valgrind suppressions: When memory-testing Pacemaker code with valgrind,
+  various bugs in non-Pacemaker libraries and such can clutter the results. The
+  valgrind-pcmk.suppressions file in this directory can be used with valgrind's
+  --suppressions option to eliminate many of these, as in the sketch below.
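+
+  For stand-alone use of the suppressions file, a sketch (the daemon path is
+  illustrative; for cluster testing, PCMK\_valgrind\_enabled handles this, as
+  described later in this file):
+
+      valgrind --leak-check=full \
+          --suppressions=/usr/share/pacemaker/tests/valgrind-pcmk.suppressions \
+          /usr/libexec/pacemaker/pacemaker-schedulerd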
+
+
+## Using the CTS lab
+
+### Requirements
+
+* Three or more machines (one test exerciser and at least two cluster nodes).
+
+* The test cluster nodes should be on the same subnet and have journalling
+  filesystems (ext4, xfs, etc.) for all of their filesystems other than
   /boot. You also need a number of free IP addresses on that subnet if you
-  intend to test mutual IP address takeover.
+  intend to test IP address takeover.

 * The test exerciser machine doesn't need to be on the same subnet as the test
-  cluster machines. Minimal demands are made on the exerciser machine - it
-  just has to stay up during the tests.
+  cluster machines. Minimal demands are made on the exerciser; it just has to
+  stay up during the tests.

-* It helps a lot in tracking problems if all machines' clocks are closely
-  synchronized. NTP does this automatically, but you can do it by hand if you
-  want.
+* Tracking problems is easier if all machines' clocks are closely synchronized.
+  NTP does this automatically, but you can do it by hand if you want.

-* The exerciser needs to be able to ssh over to the cluster nodes as root
-  without a password challenge. Configure ssh accordingly (see the Mini-HOWTO
-  at the end of this document for more details).
+* The account on the exerciser used to run the CTS lab (which does not need to
+  be root) must be able to ssh as root to the cluster nodes without a password
+  challenge. See the Mini-HOWTO at the end of this file for details about how
+  to configure ssh for this.
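+
+  A sketch of one way to set this up (key type and node name are
+  illustrative):
+
+      # on the exerciser, as the account that will run the lab
+      ssh-keygen -t ed25519
+      ssh-copy-id root@pcmk-1   # repeat for each cluster node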
"--stonith xvm" to use fence_xvm or "--stonith ssh" to use external/ssh + --stonith xvm -A complete command line might look like: +Putting all the above together, a command line might look like: - python ./CTSlab.py --nodes "pcmk-1 pcmk-2 pcmk-3" --outputfile ~/cts.log \ - --clobber-cib --populate-resources --test-ip-base 192.168.9.100 \ - --stonith xvm 50 + /usr/share/pacemaker/tests/cts/CTSlab.py --nodes "pcmk-1 pcmk-2 pcmk-3" \ + --outputfile ~/cts.log --clobber-cib --populate-resources \ + --test-ip-base 192.168.9.100 --stonith xvm 50 -For more options, use the --help option. +For more options, run with the --help option. -NOTE: Perhaps more convenient way to compile a command line like above - is to use cluster\_test script that, at least in the source repository, - sits in the same directory as this very file. +There are also a couple of wrappers for CTSlab.py that some users may find more +convenient: cts, which is typically installed in the same place as the rest of +the testing code; and cluster\_test, which is in the source directory and +typically not installed. To extract the result of a particular test, run: crm_report -T $test -## Optional/advanced testing +### Optional: Memory testing -### Memory testing +Pacemaker has various options for testing memory management. On cluster nodes, +Pacemaker components use various environment variables to control these +options. How these variables are set varies by OS, but usually they are set in +a file such as /etc/sysconfig/pacemaker or /etc/default/pacemaker. -Pacemaker and CTS have various options for testing memory management. On the -cluster nodes, pacemaker components will use various environment variables to -control these options. How these variables are set varies by OS, but usually -they are set in the /etc/sysconfig/pacemaker or /etc/default/pacemaker file. - -Valgrind is a program for detecting memory management problems (such as -use-after-free errors). If you have valgrind installed, you can enable it by +Valgrind is a program for detecting memory management problems such as +use-after-free errors. If you have valgrind installed, you can enable it by setting the following environment variables on all cluster nodes: PCMK_valgrind_enabled=pacemaker-attrd,pacemaker-based,pacemaker-controld,pacemaker-execd,pacemaker-fenced,pacemaker-schedulerd VALGRIND_OPTS="--leak-check=full --trace-children=no --num-callers=25 --log-file=/var/lib/pacemaker/valgrind-%p --suppressions=/usr/share/pacemaker/tests/valgrind-pcmk.suppressions --gen-suppressions=all" -and running CTS with these options: +If running the CTS lab with valgrind enabled on the cluster nodes, add these +options to CTSlab.py: --valgrind-tests --valgrind-procs "pacemaker-attrd pacemaker-based pacemaker-controld pacemaker-execd pacemaker-schedulerd pacemaker-fenced" These options should only be set while specifically testing memory management, because they may slow down the cluster significantly, and they will disable writes to the CIB. If desired, you can enable valgrind on a subset of pacemaker components rather than all of them as listed above. Valgrind will put a text file for each process in the location specified by -valgrind's --log-file option. For explanations of the messages valgrind -generates, see http://valgrind.org/docs/manual/mc-manual.html +valgrind's --log-file option. See +https://www.valgrind.org/docs/manual/mc-manual.html for explanations of the +messages valgrind generates. 

-## Run CTS
+### Run

-Now assuming you did all this, what you need to do is run CTSlab.py:
+The primary interface to the CTS lab is the CTSlab.py executable:

-    python ./CTSlab.py [options] number-of-tests-to-run
+    /usr/share/pacemaker/tests/cts/CTSlab.py [options] <number-of-tests-to-run>

-You must specify which nodes are part of the cluster with --nodes, e.g.:
+As part of the options, specify the cluster nodes with --nodes, for example:

     --nodes "pcmk-1 pcmk-2 pcmk-3"

-Most people will want to save the output with --outputfile, e.g.:
+Most people will want to save the output to a file, for example:

     --outputfile ~/cts.log

-Unless you want to test your pre-existing cluster configuration, you also want:
+Unless you want to test a pre-existing cluster configuration, you also want
+(*warning*: with these options, any existing configuration will be lost):

     --clobber-cib
     --populate-resources
-    --test-ip-base $IP    # e.g. --test-ip-base 192.168.9.100

-and configure some sort of fencing:
+You can test floating IP addresses (*not* already used by any host), one per
+cluster node, by specifying the first, for example:
+
+    --test-ip-base 192.168.9.100
+
+Configure some sort of fencing, for example to use fence\_xvm:

-    --stonith $TYPE # e.g. "--stonith xvm" to use fence_xvm or "--stonith ssh" to use external/ssh
+    --stonith xvm

-A complete command line might look like:
+Putting all the above together, a command line might look like:

-    python ./CTSlab.py --nodes "pcmk-1 pcmk-2 pcmk-3" --outputfile ~/cts.log \
-        --clobber-cib --populate-resources --test-ip-base 192.168.9.100 \
-        --stonith xvm 50
+    /usr/share/pacemaker/tests/cts/CTSlab.py --nodes "pcmk-1 pcmk-2 pcmk-3" \
+        --outputfile ~/cts.log --clobber-cib --populate-resources \
+        --test-ip-base 192.168.9.100 --stonith xvm 50

-For more options, use the --help option.
+For more options, run with the --help option.

-NOTE: Perhaps more convenient way to compile a command line like above
-      is to use cluster\_test script that, at least in the source repository,
-      sits in the same directory as this very file.
+There are also a couple of wrappers for CTSlab.py that some users may find more
+convenient: cts, which is typically installed in the same place as the rest of
+the testing code; and cluster\_test, which is in the source directory and
+typically not installed.

 To extract the result of a particular test, run:

     crm_report -T $test

-## Optional/advanced testing
+### Optional: Memory testing

-### Memory testing
+Pacemaker has various options for testing memory management. On cluster nodes,
+Pacemaker components use various environment variables to control these
+options. How these variables are set varies by OS, but usually they are set in
+a file such as /etc/sysconfig/pacemaker or /etc/default/pacemaker.

-Pacemaker and CTS have various options for testing memory management. On the
-cluster nodes, pacemaker components will use various environment variables to
-control these options. How these variables are set varies by OS, but usually
-they are set in the /etc/sysconfig/pacemaker or /etc/default/pacemaker file.
-
-Valgrind is a program for detecting memory management problems (such as
-use-after-free errors). If you have valgrind installed, you can enable it by
+Valgrind is a program for detecting memory management problems such as
+use-after-free errors. If you have valgrind installed, you can enable it by
 setting the following environment variables on all cluster nodes:

     PCMK_valgrind_enabled=pacemaker-attrd,pacemaker-based,pacemaker-controld,pacemaker-execd,pacemaker-fenced,pacemaker-schedulerd
     VALGRIND_OPTS="--leak-check=full --trace-children=no --num-callers=25
        --log-file=/var/lib/pacemaker/valgrind-%p
        --suppressions=/usr/share/pacemaker/tests/valgrind-pcmk.suppressions
        --gen-suppressions=all"

-and running CTS with these options:
+If running the CTS lab with valgrind enabled on the cluster nodes, add these
+options to CTSlab.py:

     --valgrind-tests
     --valgrind-procs "pacemaker-attrd pacemaker-based pacemaker-controld pacemaker-execd pacemaker-schedulerd pacemaker-fenced"

 These options should only be set while specifically testing memory
 management, because they may slow down the cluster significantly, and they
 will disable writes to the CIB. If desired, you can enable valgrind on a
 subset of pacemaker components rather than all of them as listed above.

 Valgrind will put a text file for each process in the location specified by
-valgrind's --log-file option. For explanations of the messages valgrind
-generates, see http://valgrind.org/docs/manual/mc-manual.html
+valgrind's --log-file option. See
+https://www.valgrind.org/docs/manual/mc-manual.html for explanations of the
+messages valgrind generates.

-Separately, if you are using the GNU C library, the G\_SLICE, MALLOC\_PERTURB\_,
-and MALLOC\_CHECK\_ environment variables can be set to affect the library's
-memory management functions.
+Separately, if you are using the GNU C library, the G\_SLICE,
+MALLOC\_PERTURB\_, and MALLOC\_CHECK\_ environment variables can be set to
+affect the library's memory management functions.

 When using valgrind, G\_SLICE should be set to "always-malloc", which helps
 valgrind track memory by always using the malloc() and free() routines
 directly. When not using valgrind, G\_SLICE can be left unset, or set to
 "debug-blocks", which enables the C library to catch many memory errors
 but may impact performance.

 If the MALLOC\_PERTURB\_ environment variable is set to an 8-bit integer, the C
 library will initialize all newly allocated bytes of memory to the integer
 value, and will set all newly freed bytes of memory to the bitwise inverse of
 the integer value. This helps catch uses of uninitialized or freed memory
 blocks that might otherwise go unnoticed. Example:

     MALLOC_PERTURB_=221

 If the MALLOC\_CHECK\_ environment variable is set, the C library will check for
 certain heap corruption errors. The most useful value in testing is 3, which
 will cause the library to print a message to stderr and abort execution.
 Example:

     MALLOC_CHECK_=3

-Valgrind should be enabled for either all nodes or none, but the C library
-variables may be set differently on different nodes.
+Valgrind should be enabled for either all nodes or none when used with the CTS
+lab, but the C library variables may be set differently on different nodes.
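+
+For example, a node's /etc/sysconfig/pacemaker (or equivalent location, as
+noted above) might combine the example values from this section like so:
+
+    G_SLICE=always-malloc
+    MALLOC_PERTURB_=221
+    MALLOC_CHECK_=3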

-### Remote node testing
+### Optional: Remote node testing

 If the pacemaker-remoted daemon is installed on all cluster nodes, CTS will
 enable remote node tests.

 The remote node tests choose a random node, stop the cluster on it, start
 pacemaker-remoted on it, and add an ocf:pacemaker:remote resource to turn it
 into a remote node. When the test is done, CTS will turn the node back into
 a cluster node.

 To avoid conflicts, CTS will rename the node, prefixing the original node name
-with "remote-". For example, "pcmk-1" will become "remote-pcmk-1".
+with "remote-". For example, "pcmk-1" will become "remote-pcmk-1". These names
+do not need to be resolvable.

-The name change may require special stonith configuration, if the fence agent
+The name change may require special fencing configuration, if the fence agent
 expects the node name to be the same as its hostname. A common approach is to
-specify the "remote-" names in pcmk\_host\_list. If you use pcmk\_host\_list=all,
-CTS will expand that to all cluster nodes and their "remote-" names.
-You may additionally need a pcmk\_host\_map argument to map the "remote-" names
-to the hostnames. Example:
+specify the "remote-" names in pcmk\_host\_list. If you use
+pcmk\_host\_list=all, CTS will expand that to all cluster nodes and their
+"remote-" names. You may additionally need a pcmk\_host\_map argument to map
+the "remote-" names to the hostnames. Example:

     --stonith xvm --stonith-args \
        pcmk_host_list=all,pcmk_host_map=remote-pcmk-1:pcmk-1;remote-pcmk-2:pcmk-2

-### Remote node testing with valgrind
-When running the remote node tests, the pacemaker components on the cluster
+### Optional: Remote node testing with valgrind
+
+When running the remote node tests, the Pacemaker components on the *cluster*
 nodes can be run under valgrind as described in the "Memory testing" section.
 However, pacemaker-remoted cannot be run under valgrind that way, because it is
-started by the OS's regular boot system and not by pacemaker.
+started by the OS's regular boot system and not by Pacemaker.

 Details vary by system, but the goal is to set the VALGRIND\_OPTS environment
 variable and then start pacemaker-remoted by prefixing it with the path to
 valgrind.

 The init script and systemd service file provided with pacemaker-remoted will
 load the pacemaker environment variables from the same location used by other
-pacemaker components, so VALGRIND\_OPTS will be set correctly if using one of
+Pacemaker components, so VALGRIND\_OPTS will be set correctly if using one of
 those.

 For an OS using systemd, you can override the ExecStart parameter to run
 valgrind. For example:

     mkdir /etc/systemd/system/pacemaker_remote.service.d
     cat >/etc/systemd/system/pacemaker_remote.service.d/valgrind.conf <<EOF
     [Service]
     ExecStart=
     ExecStart=/usr/bin/valgrind /usr/sbin/pacemaker-remoted
     EOF

 Some constructs may be left behind in less-than-canonical form, so manual
 editing is needed; alternatively, passing `--format` or `--c14n` to `xmllint`
 may be of help (without any other side effects). If the overall process gets
 stuck anywhere, apply common sense. The initial part of the above recipe can
 be repeated at any time to verify that there is nothing left to upgrade in
 this artificial way, which is the desired state. Note that the `regression.sh`
 script implicitly validates both the input and the output whenever an upgrade
 takes place, so there is no need to revalidate in the happy case.
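+
+For illustration (the file names here are hypothetical), normalizing an
+upgraded test input for comparison could look like:
+
+    # pretty-print, or produce canonical (C14N) form, on stdout
+    xmllint --format upgraded.xml > upgraded.pretty.xml
+    xmllint --c14n upgraded.xml > upgraded.c14n.xml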