diff --git a/cts/README.md b/cts/README.md
index ef7b46117c..999131dc26 100644
--- a/cts/README.md
+++ b/cts/README.md
@@ -1,315 +1,315 @@
# Pacemaker Cluster Test Suite (CTS)
The Cluster Test Suite (CTS) refers to all Pacemaker testing code that can be
run in an installed environment. (Pacemaker also has unit tests that must be
run from a source distribution.)
CTS includes:
* Regression tests: These test specific Pacemaker components individually (no
integration tests). The primary front end is cts-regression in this
directory. Run it with the --help option to see its usage.
cts-regression is a wrapper for individual component regression tests also
in this directory (cts-cli, cts-exec, cts-fencing, and cts-scheduler).
The CLI and scheduler regression tests can also be run from a source
distribution. The other regression tests can only run in an installed
environment, and the cluster should not be running on the node running these
tests.
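
For example, to run only the CLI and scheduler regression tests from an
installed location (a sketch; the component names here are assumed from the
wrapper names above):

    /usr/share/pacemaker/tests/cts-regression cli scheduler
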
* The CTS lab: This is a cluster exerciser for intensively testing the behavior
of an entire working cluster. It is primarily for developers and packagers of
the Pacemaker source code, but it can be useful for users who wish to see how
their cluster will react to various situations. Most of the lab code is in
the Pacemaker Python module. The front end, cts-lab, is in this directory.
The CTS lab runs a randomized series of predefined tests on the cluster. It
can be run against a pre-existing cluster configuration, or it can overwrite
the existing configuration with a test configuration.
* Helpers: Some of the component regression tests and the CTS lab require
certain helpers to be installed as root. These include a dummy LSB init
script, dummy systemd service, etc. In a source distribution, the source for
these is in cts/support.
The tests will install these as needed and uninstall them when done. This
means that the cluster configuration created by the CTS lab will generate
failures if started manually after the lab exits. However, the helper
installer can be run manually to make the configuration usable, if you want
to do your own further testing with it:

    /usr/libexec/pacemaker/cts-support install

As you might expect, you can also remove the helpers with:

    /usr/libexec/pacemaker/cts-support uninstall

(The actual directory location may vary depending on how Pacemaker was
built.)
* Cluster benchmark: The benchmark subdirectory of this directory contains some
cluster test environment benchmarking code. It is not particularly useful for
end users.
* Valgrind suppressions: When memory-testing Pacemaker code with valgrind,
various bugs in non-Pacemaker libraries and such can clutter the results. The
valgrind-pcmk.suppressions file in this directory can be used with valgrind's
--suppressions option to eliminate many of these.
## Using the CTS lab
### Requirements
* Three or more machines (one test exerciser and at least two cluster nodes).
* The test cluster nodes should be on the same subnet and have journalling
filesystems (ext4, xfs, etc.) for all of their filesystems other than
/boot. You also need a number of free IP addresses on that subnet if you
intend to test IP address takeover.
* The test exerciser machine doesn't need to be on the same subnet as the test
cluster machines. Minimal demands are made on the exerciser; it just has to
stay up during the tests.
* Tracking problems is easier if all machines' clocks are closely synchronized.
NTP does this automatically, but you can do it by hand if you want.
* The account on the exerciser used to run the CTS lab (which does not need to
be root) must be able to ssh as root to the cluster nodes without a password
challenge. See the Mini-HOWTO at the end of this file for details about how
to configure ssh for this.
* The exerciser needs to be able to resolve all cluster node names, whether by
DNS or /etc/hosts (see the example after this list).
* CTS is not guaranteed to run on all platforms that Pacemaker itself does.
It calls commands such as `service` that may not be provided by all OSes.
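
If you rely on /etc/hosts for the name-resolution requirement above, the
exerciser's entries might look like this sketch (addresses and names are
placeholders):

    192.168.9.1 pcmk-1
    192.168.9.2 pcmk-2
    192.168.9.3 pcmk-3
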
### Preparation
* Install Pacemaker, including the testing code, on all machines. The testing
code must be the same version as the rest of Pacemaker, and the Pacemaker
version must be the same on the exerciser and all cluster nodes.
You can install from source, although many distributions package the testing
code (named pacemaker-cts or similar). Typically, everything needed by the
CTS lab is installed in /usr/share/pacemaker/tests/cts.
* Configure the cluster layer (Corosync) on the cluster machines (*not* the
exerciser), and verify it works. Node names used in the cluster configuration
*must* match the hosts' names as returned by `uname -n`; they do not have to
match the machines' fully qualified domain names.
* Optionally, configure the exerciser as a log aggregator, using something like
`rsyslog` log forwarding (an example rule appears after this list). If
aggregation is detected, the exerciser will look for new messages locally
instead of requesting them repeatedly from cluster nodes.
* Currently, `/var/log/messages` on the exerciser is the only supported log
destination. Further, if it's specified explicitly on the command line as
the log file, then the CTS lab will not check for aggregation.
* The CTS lab does not currently detect systemd journal log aggregation.
* Optionally, if the lab nodes use the systemd journal for logs, create
/etc/systemd/journald.conf.d/cts-lab.conf on each with
`RateLimitIntervalSec=0` or `RateLimitBurst=0`, to avoid issues with log
detection.
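
For example, a minimal journald drop-in with the rate-limiting setting above
might be:

    # /etc/systemd/journald.conf.d/cts-lab.conf
    [Journal]
    RateLimitIntervalSec=0

For the optional rsyslog aggregation, a sketch of a forwarding rule for each
cluster node (`@@` forwards over TCP; the exerciser hostname is a placeholder,
and the exerciser must be configured to receive remote logs into
/var/log/messages):

    *.* @@exerciser.example.com:514
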
### Run
The primary interface to the CTS lab is the cts-lab executable:

    /usr/share/pacemaker/tests/cts-lab [options] <number-of-tests-to-run>

(The actual directory location may vary depending on how Pacemaker was built.)
As part of the options, specify the cluster nodes with --nodes, for example:

    --nodes "pcmk-1 pcmk-2 pcmk-3"

Most people will want to save the output to a file, for example:

    --outputfile ~/cts.log

Unless you want to test a pre-existing cluster configuration, you also want
(*warning*: with these options, any existing configuration will be lost):

    --clobber-cib
    --populate-resources

You can test floating IP addresses (*not* already used by any host), one per
cluster node, by specifying the first, for example:

    --test-ip-base 192.168.9.100

Configure some sort of fencing, for example to use fence\_xvm:

    --stonith xvm

Putting all the above together, a command line might look like:

    /usr/share/pacemaker/tests/cts-lab --nodes "pcmk-1 pcmk-2 pcmk-3" \
        --outputfile ~/cts.log --clobber-cib --populate-resources \
        --test-ip-base 192.168.9.100 --stonith xvm 50

For more options, run with the --help option.
There are also a couple of wrappers for cts-lab that some users may find more
convenient: cts, which is typically installed in the same place as the rest of
the testing code; and cluster\_test, which is in the source directory and
typically not installed.
To extract the result of a particular test, run:

    crm_report -T $test

### Optional: Memory testing
Pacemaker has various options for testing memory management. On cluster nodes,
Pacemaker components use various environment variables to control these
options. How these variables are set varies by OS, but usually they are set in
a file such as /etc/sysconfig/pacemaker or /etc/default/pacemaker.
Valgrind is a program for detecting memory management problems such as
use-after-free errors. If you have valgrind installed, you can enable it by
setting the following environment variables on all cluster nodes:

    PCMK_valgrind_enabled=pacemaker-attrd,pacemaker-based,pacemaker-controld,pacemaker-execd,pacemaker-fenced,pacemaker-schedulerd
    VALGRIND_OPTS="--leak-check=full --trace-children=no --num-callers=25
        --log-file=/var/lib/pacemaker/valgrind-%p
        --suppressions=/usr/share/pacemaker/tests/valgrind-pcmk.suppressions
        --gen-suppressions=all"

If running the CTS lab with valgrind enabled on the cluster nodes, add these
options to cts-lab:
- --valgrind-tests --valgrind-procs "pacemaker-attrd pacemaker-based pacemaker-controld pacemaker-execd pacemaker-schedulerd pacemaker-fenced"
+ --valgrind-procs "pacemaker-attrd pacemaker-based pacemaker-controld pacemaker-execd pacemaker-schedulerd pacemaker-fenced"
These options should only be set while specifically testing memory management,
because they may slow down the cluster significantly, and they will disable
writes to the CIB. If desired, you can enable valgrind on a subset of pacemaker
components rather than all of them as listed above.
Valgrind will put a text file for each process in the location specified by
valgrind's --log-file option. See
https://www.valgrind.org/docs/manual/mc-manual.html for explanations of the
messages valgrind generates.
Separately, if you are using the GNU C library, the G\_SLICE,
MALLOC\_PERTURB\_, and MALLOC\_CHECK\_ environment variables can be set to
affect the library's memory management functions.
When using valgrind, G\_SLICE should be set to "always-malloc", which helps
valgrind track memory by always using the malloc() and free() routines
directly. When not using valgrind, G\_SLICE can be left unset, or set to
"debug-blocks", which enables the C library to catch many memory errors
but may impact performance.
If the MALLOC\_PERTURB\_ environment variable is set to an 8-bit integer, the C
library will initialize all newly allocated bytes of memory to the integer
value, and will set all newly freed bytes of memory to the bitwise inverse of
the integer value. This helps catch uses of uninitialized or freed memory
blocks that might otherwise go unnoticed. Example:

    MALLOC_PERTURB_=221

If the MALLOC\_CHECK\_ environment variable is set, the C library will check for
certain heap corruption errors. The most useful value in testing is 3, which
will cause the library to print a message to stderr and abort execution.
Example:

    MALLOC_CHECK_=3

Valgrind should be enabled for either all nodes or none when used with the CTS
lab, but the C library variables may be set differently on different nodes.
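
As a sketch combining the settings above (recall that the file location varies
by OS):

    # /etc/sysconfig/pacemaker (or /etc/default/pacemaker)
    PCMK_valgrind_enabled=pacemaker-attrd,pacemaker-based,pacemaker-controld,pacemaker-execd,pacemaker-fenced,pacemaker-schedulerd
    VALGRIND_OPTS="--leak-check=full --trace-children=no --num-callers=25 --log-file=/var/lib/pacemaker/valgrind-%p --suppressions=/usr/share/pacemaker/tests/valgrind-pcmk.suppressions --gen-suppressions=all"
    G_SLICE=always-malloc
    MALLOC_PERTURB_=221
    MALLOC_CHECK_=3
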
### Optional: Remote node testing
If the pacemaker-remoted daemon is installed on all cluster nodes, the CTS lab
will enable remote node tests.
The remote node tests choose a random node, stop the cluster on it, start
pacemaker-remoted on it, and add an ocf:pacemaker:remote resource to turn it
into a remote node. When the test is done, the lab will turn the node back into
a cluster node.
To avoid conflicts, the lab will rename the node, prefixing the original node
name with "remote-". For example, "pcmk-1" will become "remote-pcmk-1". These
names do not need to be resolvable.
The name change may require special fencing configuration, if the fence agent
expects the node name to be the same as its hostname. A common approach is to
specify the "remote-" names in pcmk\_host\_list. If you use
pcmk\_host\_list=all, the lab will expand that to all cluster nodes and their
"remote-" names. You may additionally need a pcmk\_host\_map argument to map
the "remote-" names to the hostnames. Example:

    --stonith xvm --stonith-args \
        pcmk_host_list=all,pcmk_host_map=remote-pcmk-1:pcmk-1;remote-pcmk-2:pcmk-2

### Optional: Remote node testing with valgrind
When running the remote node tests, the Pacemaker components on the *cluster*
nodes can be run under valgrind as described in the "Memory testing" section.
However, pacemaker-remoted cannot be run under valgrind that way, because it is
started by the OS's regular boot system and not by Pacemaker.
Details vary by system, but the goal is to set the VALGRIND\_OPTS environment
variable and then start pacemaker-remoted by prefixing it with the path to
valgrind.
The init script and systemd service file provided with pacemaker-remoted will
load the pacemaker environment variables from the same location used by other
Pacemaker components, so VALGRIND\_OPTS will be set correctly if using one of
those.
For an OS using systemd, you can override the ExecStart parameter to run
valgrind. For example:

    mkdir /etc/systemd/system/pacemaker_remote.service.d
    cat >/etc/systemd/system/pacemaker_remote.service.d/valgrind.conf <<EOF
    [Service]
    ExecStart=
    ExecStart=/usr/bin/valgrind /usr/sbin/pacemaker-remoted
    EOF
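
After creating the drop-in, tell systemd to reload its configuration so the
override takes effect the next time the service starts:

    systemctl daemon-reload
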
### Mini-HOWTO: Allow passwordless remote SSH connections
The CTS lab runs "ssh -l root" so you don't have to do any of your testing
logged in as root on the exerciser. Here is how to allow such connections
without requiring a password to be entered each time:
* On your test exerciser, create an SSH key if you do not already have one.
Most commonly, SSH keys will be in your ~/.ssh directory, with the
private key file not having an extension, and the public key file
named the same with the extension ".pub" (for example, ~/.ssh/id\_rsa.pub).
If you don't already have a key, you can create one with:

    ssh-keygen -t rsa

* From your test exerciser, authorize your SSH public key for root on all test
machines (both the exerciser and the cluster test machines):

    ssh-copy-id -i ~/.ssh/id_rsa.pub root@$MACHINE

You will probably have to provide your password, and possibly say
"yes" to some questions about accepting the identity of the test machines.
The above assumes you have an RSA SSH key in the specified location;
if you have some other type of key (DSA, ECDSA, etc.), use its file name
in the -i option above.
* To verify, try this command from the exerciser machine for each
of your cluster machines, and for the exerciser machine itself:

    ssh -l root $MACHINE

If this works without prompting for a password, you're in business.
If not, look at the documentation for your version of ssh.
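
Optionally, you can limit where the root key is accepted from by prefixing its
line in each machine's /root/.ssh/authorized_keys with a `from=` restriction.
A sketch, where the address is a placeholder for the exerciser's IP and the
key material matches your actual public key:

    from="192.168.9.10" ssh-rsa AAAA... you@exerciser
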
diff --git a/python/pacemaker/_cts/cmcorosync.py b/python/pacemaker/_cts/cmcorosync.py
index d0ab08d70c..b753c36e9a 100644
--- a/python/pacemaker/_cts/cmcorosync.py
+++ b/python/pacemaker/_cts/cmcorosync.py
@@ -1,75 +1,75 @@
"""Corosync-specific class for Pacemaker's Cluster Test Suite (CTS)."""
__all__ = ["Corosync2"]
__copyright__ = "Copyright 2007-2025 the Pacemaker project contributors"
__license__ = "GNU General Public License version 2 or later (GPLv2+) WITHOUT ANY WARRANTY"
from pacemaker._cts.CTS import Process
from pacemaker._cts.clustermanager import ClusterManager
from pacemaker._cts.patterns import PatternSelector
# Throughout this file, pylint has trouble understanding that EnvFactory
# is a singleton instance that can be treated as a subscriptable object.
# Various warnings are disabled because of this. See also a comment about
# self._rsh in environment.py.
# pylint: disable=unsubscriptable-object
class Corosync2(ClusterManager):
"""A subclass of ClusterManager specialized to handle corosync2 and later based clusters."""
def __init__(self):
"""Create a new Corosync2 instance."""
ClusterManager.__init__(self)
self._fullcomplist = {}
self.templates = PatternSelector(self.name)
@property
def components(self):
"""Return a list of patterns that should be ignored for the cluster's components."""
complist = []
if not self._fullcomplist:
common_ignore = self.templates.get_component("common-ignore")
daemons = [
"pacemaker-based",
"pacemaker-controld",
"pacemaker-attrd",
"pacemaker-execd",
"pacemaker-fenced"
]
for c in daemons:
badnews = self.templates.get_component(f"{c}-ignore") + common_ignore
proc = Process(self, c, pats=self.templates.get_component(c),
badnews_ignore=badnews)
self._fullcomplist[c] = proc
# the scheduler uses dc_pats instead of pats
badnews = self.templates.get_component("pacemaker-schedulerd-ignore") + common_ignore
proc = Process(self, "pacemaker-schedulerd",
dc_pats=self.templates.get_component("pacemaker-schedulerd"),
badnews_ignore=badnews)
self._fullcomplist["pacemaker-schedulerd"] = proc
# add (or replace) extra components
badnews = self.templates.get_component("corosync-ignore") + common_ignore
proc = Process(self, "corosync", pats=self.templates.get_component("corosync"),
badnews_ignore=badnews)
self._fullcomplist["corosync"] = proc
# Processes running under valgrind can't be shot with "killall -9 processname",
# so don't include them in the returned list
vgrind = self.env["valgrind-procs"].split()
for (key, val) in self._fullcomplist.items():
- if self.env["valgrind-tests"] and key in vgrind:
+ if key in vgrind:
self.log(f"Filtering {key} from the component list as it is being profiled by valgrind")
continue
if key == "pacemaker-fenced" and not self.env["DoFencing"]:
continue
complist.append(val)
return complist
diff --git a/python/pacemaker/_cts/environment.py b/python/pacemaker/_cts/environment.py
index b7987e45e5..8689463db8 100644
--- a/python/pacemaker/_cts/environment.py
+++ b/python/pacemaker/_cts/environment.py
@@ -1,642 +1,638 @@
"""Test environment classes for Pacemaker's Cluster Test Suite (CTS)."""
__all__ = ["EnvFactory"]
__copyright__ = "Copyright 2014-2025 the Pacemaker project contributors"
__license__ = "GNU General Public License version 2 or later (GPLv2+) WITHOUT ANY WARRANTY"
import argparse
from contextlib import suppress
import os
import random
import socket
import sys
import time
from pacemaker.buildoptions import BuildOptions
from pacemaker._cts.logging import LogFactory
from pacemaker._cts.remote import RemoteFactory
from pacemaker._cts.watcher import LogKind
class Environment:
"""
A class for managing the CTS environment.
This consists largely of processing and storing command line parameters.
"""
# pylint doesn't understand that self._rsh is callable (it stores the
# singleton instance of RemoteExec, as returned by the getInstance method
# of RemoteFactory).
# @TODO See if type annotations fix this.
# I think we could also fix this by getting rid of the getInstance methods,
# but that's a project for another day. For now, just disable the warning.
# pylint: disable=not-callable
def __init__(self, args):
"""
Create a new Environment instance.
This class can be treated kind of like a dictionary due to the presence
of typical dict functions like __contains__, __getitem__, and __setitem__.
However, it is not a dictionary so do not rely on standard dictionary
behavior.
Arguments:
args -- A list of command line parameters, minus the program name.
If None, sys.argv will be used.
"""
self.data = {}
self._nodes = []
# Set some defaults before processing command line arguments. These are
# either not set by any command line parameter, or they need a default
# that can't be set in add_argument.
self["DeadTime"] = 300
self["StartTime"] = 300
self["StableTime"] = 30
self["tests"] = []
self["IPagent"] = "IPaddr2"
self["DoFencing"] = True
self["ClobberCIB"] = False
self["CIBfilename"] = None
self["CIBResource"] = False
self["log_kind"] = None
self["node-limit"] = 0
self["scenario"] = "random"
self.random_gen = random.Random()
self._logger = LogFactory()
self._rsh = RemoteFactory().getInstance()
self._target = "localhost"
self._seed_random()
self._parse_args(args)
if not self["ListTests"]:
self._validate()
self._discover()
def _seed_random(self, seed=None):
"""
Initialize the random number generator.
Arguments:
seed -- Use this to seed the random number generator, or use the
current time if None.
"""
if not seed:
seed = int(time.time())
self["RandSeed"] = seed
self.random_gen.seed(str(seed))
def dump(self):
"""Print the current environment."""
keys = []
for key in list(self.data.keys()):
keys.append(key)
keys.sort()
for key in keys:
self._logger.debug(f"{f'Environment[{key}]':35}: {str(self[key])}")
def keys(self):
"""Return a list of all environment keys stored in this instance."""
return list(self.data.keys())
def __contains__(self, key):
"""Return True if the given key exists in the environment."""
if key == "nodes":
return True
return key in self.data
def __getitem__(self, key):
"""Return the given environment key, or None if it does not exist."""
if str(key) == "0":
raise ValueError("Bad call to 'foo in X', should reference 'foo in X.keys()' instead")
if key == "nodes":
return self._nodes
if key == "Name":
return self._get_stack_short()
return self.data.get(key)
def __setitem__(self, key, value):
"""Set the given environment key to the given value, overriding any previous value."""
if key == "Stack":
self._set_stack(value)
elif key == "node-limit":
self.data[key] = value
self._filter_nodes()
elif key == "nodes":
self._nodes = []
for node in value:
# I don't think I need the IP address, etc. but this validates
# the node name against /etc/hosts and/or DNS, so it's a
# GoodThing(tm).
try:
n = node.strip()
# @TODO This only handles IPv4, use getaddrinfo() instead
# (here and in _discover())
socket.gethostbyname_ex(n)
self._nodes.append(n)
except socket.herror:
self._logger.log(f"{node} not found in DNS... aborting")
raise
self._filter_nodes()
else:
self.data[key] = value
def random_node(self):
"""Choose a random node from the cluster."""
return self.random_gen.choice(self["nodes"])
def get(self, key, default=None):
"""Return the value for key if key is in the environment, else default."""
if key == "nodes":
return self._nodes
return self.data.get(key, default)
def _set_stack(self, name):
"""Normalize the given cluster stack name."""
if name in ["corosync", "cs", "mcp"]:
self.data["Stack"] = "corosync 2+"
else:
raise ValueError(f"Unknown stack: {name}")
def _get_stack_short(self):
"""Return the short name for the currently set cluster stack."""
if "Stack" not in self.data:
return "unknown"
if self.data["Stack"] == "corosync 2+":
return "crm-corosync"
LogFactory().log(f"Unknown stack: {self['Stack']}")
raise ValueError(f"Unknown stack: {self['Stack']}")
def _detect_systemd(self):
"""Detect whether systemd is in use on the target node."""
if "have_systemd" not in self.data:
(rc, _) = self._rsh(self._target, "systemctl list-units", verbose=0)
self["have_systemd"] = rc == 0
def _detect_syslog(self):
"""Detect the syslog variant in use on the target node (if any)."""
if "syslogd" in self.data:
return
if self["have_systemd"]:
# Systemd
(_, lines) = self._rsh(self._target, r"systemctl list-units | grep syslog.*\.service.*active.*running | sed 's:.service.*::'", verbose=1)
else:
# SYS-V
(_, lines) = self._rsh(self._target, "chkconfig --list | grep syslog.*on | awk '{print $1}' | head -n 1", verbose=1)
with suppress(IndexError):
self["syslogd"] = lines[0].strip()
def disable_service(self, node, service):
"""Disable the given service on the given node."""
if self["have_systemd"]:
# Systemd
(rc, _) = self._rsh(node, f"systemctl disable {service}")
return rc
# SYS-V
(rc, _) = self._rsh(node, f"chkconfig {service} off")
return rc
def enable_service(self, node, service):
"""Enable the given service on the given node."""
if self["have_systemd"]:
# Systemd
(rc, _) = self._rsh(node, f"systemctl enable {service}")
return rc
# SYS-V
(rc, _) = self._rsh(node, f"chkconfig {service} on")
return rc
def service_is_enabled(self, node, service):
"""Return True if the given service is enabled on the given node."""
if self["have_systemd"]:
# Systemd
# With "systemctl is-enabled", we should check if the service is
# explicitly "enabled" instead of the return code. For example it returns
# 0 if the service is "static" or "indirect", but they don't really count
# as "enabled".
(rc, _) = self._rsh(node, f"systemctl is-enabled {service} | grep enabled")
return rc == 0
# SYS-V
(rc, _) = self._rsh(node, f"chkconfig --list | grep -e {service}.*on")
return rc == 0
def _detect_at_boot(self):
"""Detect if the cluster starts at boot."""
if "at-boot" not in self.data:
self["at-boot"] = self.service_is_enabled(self._target, "corosync") \
or self.service_is_enabled(self._target, "pacemaker")
def _detect_ip_offset(self):
"""Detect the offset for IPaddr resources."""
if self["CIBResource"] and "IPBase" not in self.data:
(_, lines) = self._rsh(self._target, "ip addr | grep inet | grep -v -e link -e inet6 -e '/32' -e ' lo' | awk '{print $2}'", verbose=0)
network = lines[0].strip()
(_, lines) = self._rsh(self._target, "nmap -sn -n %s | grep 'scan report' | awk '{print $NF}' | sed 's:(::' | sed 's:)::' | sort -V | tail -n 1" % network, verbose=0)
try:
self["IPBase"] = lines[0].strip()
except (IndexError, TypeError):
self["IPBase"] = None
if not self["IPBase"]:
self["IPBase"] = " fe80::1234:56:7890:1000"
self._logger.log("Could not determine an offset for IPaddr resources. Perhaps nmap is not installed on the nodes.")
self._logger.log(f"""Defaulting to '{self["IPBase"]}', use --test-ip-base to override""")
return
# pylint thinks self["IPBase"] is a list, not a string, which causes it
# to error out because a list doesn't have split().
# pylint: disable=no-member
last_part = self["IPBase"].split('.')[3]
if int(last_part) >= 240:
self._logger.log(f"Could not determine an offset for IPaddr resources. Upper bound is too high: {self['IPBase']} {last_part}")
self["IPBase"] = " fe80::1234:56:7890:1000"
self._logger.log(f"""Defaulting to '{self["IPBase"]}', use --test-ip-base to override""")
def _filter_nodes(self):
"""
Filter the list of cluster nodes.
If --limit-nodes is given, keep that many nodes from the front of the
list of cluster nodes and drop the rest.
"""
if self["node-limit"] > 0:
if len(self["nodes"]) > self["node-limit"]:
self._logger.log(f"Limiting the number of nodes configured={len(self['nodes'])} "
f"(max={self['node-limit']})")
while len(self["nodes"]) > self["node-limit"]:
self["nodes"].pop(len(self["nodes"]) - 1)
def _validate(self):
"""Check that we were given all required command line parameters."""
if not self["nodes"]:
raise ValueError("No nodes specified!")
def _discover(self):
"""Probe cluster nodes to figure out how to log and manage services."""
self._target = random.Random().choice(self["nodes"])
exerciser = socket.gethostname()
# Use the IP where possible to avoid name lookup failures
for ip in socket.gethostbyname_ex(exerciser)[2]:
if ip != "127.0.0.1":
exerciser = ip
break
self["cts-exerciser"] = exerciser
self._detect_systemd()
self._detect_syslog()
self._detect_at_boot()
self._detect_ip_offset()
def _parse_args(self, argv):
"""
Parse and validate command line parameters.
Set the appropriate values in the environment dictionary. If argv is
None, use sys.argv instead.
"""
if not argv:
argv = sys.argv[1:]
parser = argparse.ArgumentParser(epilog=f"{sys.argv[0]} -g virt1 -r --stonith ssh --schema pacemaker-2.0 500")
grp1 = parser.add_argument_group("Common options")
grp1.add_argument("-g", "--dsh-group", "--group",
metavar="GROUP", dest="group",
help="Use the nodes listed in the named DSH group (~/.dsh/group/$name)")
grp1.add_argument("-l", "--limit-nodes",
type=int, default=0,
metavar="MAX",
help="Only use the first MAX cluster nodes supplied with --nodes")
grp1.add_argument("--benchmark",
action="store_true",
help="Add timing information")
grp1.add_argument("--list", "--list-tests",
action="store_true", dest="list_tests",
help="List the valid tests")
grp1.add_argument("--nodes",
metavar="NODES",
help="List of cluster nodes separated by whitespace")
grp1.add_argument("--stack",
default="corosync",
metavar="STACK",
help="Which cluster stack is installed")
grp2 = parser.add_argument_group("Options that CTS will usually auto-detect correctly")
grp2.add_argument("-L", "--logfile",
metavar="PATH",
help="Where to look for logs from cluster nodes (or 'journal' for systemd journal)")
grp2.add_argument("--at-boot", "--cluster-starts-at-boot",
choices=["1", "0", "yes", "no"],
help="Does the cluster software start at boot time?")
grp2.add_argument("--facility", "--syslog-facility",
default="daemon",
metavar="NAME",
help="Which syslog facility to log to")
grp2.add_argument("--ip", "--test-ip-base",
metavar="IP",
help="Offset for generated IP address resources")
grp3 = parser.add_argument_group("Options for release testing")
grp3.add_argument("-r", "--populate-resources",
action="store_true",
help="Generate a sample configuration")
grp3.add_argument("--choose",
metavar="NAME",
help="Run only the named tests, separated by whitespace")
grp3.add_argument("--fencing", "--stonith",
choices=["1", "0", "yes", "no", "lha", "openstack", "rhcs", "rhevm", "scsi", "ssh", "virt", "xvm"],
default="1",
help="What fencing agent to use")
grp3.add_argument("--once",
action="store_true",
help="Run all valid tests once")
grp4 = parser.add_argument_group("Additional (less common) options")
grp4.add_argument("-c", "--clobber-cib",
action="store_true",
help="Erase any existing configuration")
grp4.add_argument("-y", "--yes",
action="store_true", dest="always_continue",
help="Continue to run whenever prompted")
grp4.add_argument("--boot",
action="store_true",
help="")
grp4.add_argument("--cib-filename",
metavar="PATH",
help="Install the given CIB file to the cluster")
grp4.add_argument("--experimental-tests",
action="store_true",
help="Include experimental tests")
grp4.add_argument("--loop-minutes",
type=int, default=60,
help="")
grp4.add_argument("--no-loop-tests",
action="store_true",
help="Don't run looping/time-based tests")
grp4.add_argument("--no-unsafe-tests",
action="store_true",
help="Don't run tests that are unsafe for use with ocfs2/drbd")
grp4.add_argument("--notification-agent",
metavar="PATH",
default="/var/lib/pacemaker/notify.sh",
help="Script to configure for Pacemaker alerts")
grp4.add_argument("--notification-recipient",
metavar="R",
default="/var/lib/pacemaker/notify.log",
help="Recipient to pass to alert script")
grp4.add_argument("--oprofile",
metavar="NODES",
help="List of cluster nodes to run oprofile on")
grp4.add_argument("--outputfile",
metavar="PATH",
help="Location to write logs to")
grp4.add_argument("--qarsh",
action="store_true",
help="Use QARSH to access nodes instead of SSH")
grp4.add_argument("--schema",
metavar="SCHEMA",
default=f"pacemaker-{BuildOptions.CIB_SCHEMA_VERSION}",
help="Create a CIB conforming to the given schema")
grp4.add_argument("--seed",
metavar="SEED",
help="Use the given string as the random number seed")
grp4.add_argument("--set",
action="append",
metavar="ARG",
default=[],
help="Set key=value pairs (can be specified multiple times)")
grp4.add_argument("--stonith-args",
metavar="ARGS",
default="hostlist=all,livedangerously=yes",
help="")
grp4.add_argument("--stonith-type",
metavar="TYPE",
default="external/ssh",
help="")
grp4.add_argument("--trunc",
action="store_true", dest="truncate",
help="Truncate log file before starting")
grp4.add_argument("--valgrind-procs",
metavar="PROCS",
default="pacemaker-attrd pacemaker-based pacemaker-controld pacemaker-execd pacemaker-fenced pacemaker-schedulerd",
help="Run valgrind against the given space-separated list of processes")
- grp4.add_argument("--valgrind-tests",
- action="store_true",
- help="Include tests using valgrind")
grp4.add_argument("--warn-inactive",
action="store_true",
help="Warn if a resource is assigned to an inactive node")
parser.add_argument("iterations",
nargs='?',
type=int, default=1,
help="Number of tests to run")
args = parser.parse_args(args=argv)
# Set values on this object based on what happened with command line
# processing. This has to be done in several blocks.
# These values can always be set. They get a default from the add_argument
# calls, only do one thing, and they do not have any side effects.
self["ClobberCIB"] = args.clobber_cib
self["ListTests"] = args.list_tests
self["Schema"] = args.schema
self["Stack"] = args.stack
self["SyslogFacility"] = args.facility
self["TruncateLog"] = args.truncate
self["at-boot"] = args.at_boot in ["1", "yes"]
self["benchmark"] = args.benchmark
self["continue"] = args.always_continue
self["experimental-tests"] = args.experimental_tests
self["iterations"] = args.iterations
self["loop-minutes"] = args.loop_minutes
self["loop-tests"] = not args.no_loop_tests
self["notification-agent"] = args.notification_agent
self["notification-recipient"] = args.notification_recipient
self["node-limit"] = args.limit_nodes
self["stonith-params"] = args.stonith_args
self["stonith-type"] = args.stonith_type
self["unsafe-tests"] = not args.no_unsafe_tests
self["valgrind-procs"] = args.valgrind_procs
- self["valgrind-tests"] = args.valgrind_tests
self["warn-inactive"] = args.warn_inactive
# Nodes and groups are mutually exclusive, so their defaults cannot be
# set in their add_argument calls. Additionally, groups does more than
# just set a value. Here, set nodes first and then if a group is
# specified, override the previous nodes value.
if args.nodes:
self["nodes"] = args.nodes.split(" ")
else:
self["nodes"] = []
if args.group:
self["OutputFile"] = f"{os.environ['HOME']}/cluster-{args.group}.log"
LogFactory().add_file(self["OutputFile"], "CTS")
dsh_file = f"{os.environ['HOME']}/.dsh/group/{args.group}"
if os.path.isfile(dsh_file):
self["nodes"] = []
with open(dsh_file, "r", encoding="utf-8") as f:
for line in f:
stripped = line.strip()
if not stripped.startswith('#'):
self["nodes"].append(stripped)
else:
print(f"Unknown DSH group: {args.group}")
# Everything else either can't have a default set in an add_argument
# call (likely because we don't want to always have a value set for it)
# or it does something fancier than just set a single value. However,
# order does not matter for these as long as the user doesn't provide
conflicting arguments on the command line. So just do everything
# alphabetically.
if args.boot:
self["scenario"] = "boot"
if args.cib_filename:
self["CIBfilename"] = args.cib_filename
else:
self["CIBfilename"] = None
if args.choose:
self["scenario"] = "sequence"
self["tests"].extend(args.choose.split())
self["iterations"] = len(self["tests"])
if args.fencing:
if args.fencing in ["0", "no"]:
self["DoFencing"] = False
else:
self["DoFencing"] = True
if args.fencing in ["rhcs", "virt", "xvm"]:
self["stonith-type"] = "fence_xvm"
elif args.fencing == "scsi":
self["stonith-type"] = "fence_scsi"
elif args.fencing in ["lha", "ssh"]:
self["stonith-params"] = "hostlist=all,livedangerously=yes"
self["stonith-type"] = "external/ssh"
elif args.fencing == "openstack":
self["stonith-type"] = "fence_openstack"
print("Obtaining OpenStack credentials from the current environment")
region = os.environ['OS_REGION_NAME']
tenant = os.environ['OS_TENANT_NAME']
auth = os.environ['OS_AUTH_URL']
user = os.environ['OS_USERNAME']
password = os.environ['OS_PASSWORD']
self["stonith-params"] = f"region={region},tenant={tenant},auth={auth},user={user},password={password}"
elif args.fencing == "rhevm":
self["stonith-type"] = "fence_rhevm"
print("Obtaining RHEV-M credentials from the current environment")
user = os.environ['RHEVM_USERNAME']
password = os.environ['RHEVM_PASSWORD']
server = os.environ['RHEVM_SERVER']
port = os.environ['RHEVM_PORT']
self["stonith-params"] = f"login={user},passwd={password},ipaddr={server},ipport={port},ssl=1,shell_timeout=10"
if args.ip:
self["CIBResource"] = True
self["ClobberCIB"] = True
self["IPBase"] = args.ip
if args.logfile == "journal":
self["LogAuditDisabled"] = True
self["log_kind"] = LogKind.JOURNAL
elif args.logfile:
self["LogAuditDisabled"] = True
self["LogFileName"] = args.logfile
self["log_kind"] = LogKind.REMOTE_FILE
else:
# We can't set this as the default on the parser.add_argument call
# for this option because then args.logfile will be set, which means
# the above branch will be taken and those other values will also be
# set.
self["LogFileName"] = "/var/log/messages"
if args.once:
self["scenario"] = "all-once"
if args.oprofile:
self["oprofile"] = args.oprofile.split(" ")
else:
self["oprofile"] = []
if args.outputfile:
self["OutputFile"] = args.outputfile
LogFactory().add_file(self["OutputFile"])
if args.populate_resources:
self["CIBResource"] = True
self["ClobberCIB"] = True
if args.qarsh:
self._rsh.enable_qarsh()
for kv in args.set:
(name, value) = kv.split("=")
self[name] = value
print(f"Setting {name} = {value}")
class EnvFactory:
"""A class for constructing a singleton instance of an Environment object."""
instance = None
# pylint: disable=invalid-name
def getInstance(self, args=None):
"""
Return the previously created instance of Environment.
If no instance exists, create a new instance and return that.
"""
if not EnvFactory.instance:
EnvFactory.instance = Environment(args)
return EnvFactory.instance
diff --git a/python/pacemaker/_cts/tests/ctstest.py b/python/pacemaker/_cts/tests/ctstest.py
index 976b34a477..16d0f23e7b 100644
--- a/python/pacemaker/_cts/tests/ctstest.py
+++ b/python/pacemaker/_cts/tests/ctstest.py
@@ -1,246 +1,242 @@
"""Base classes for CTS tests."""
__all__ = ["CTSTest"]
__copyright__ = "Copyright 2000-2025 the Pacemaker project contributors"
__license__ = "GNU General Public License version 2 or later (GPLv2+) WITHOUT ANY WARRANTY"
import re
from pacemaker._cts.environment import EnvFactory
from pacemaker._cts.logging import LogFactory
from pacemaker._cts.patterns import PatternSelector
from pacemaker._cts.remote import RemoteFactory
from pacemaker._cts.timer import Timer
from pacemaker._cts.watcher import LogWatcher
# Disable various pylint warnings that occur in so many places throughout this
# file it's easiest to just take care of them globally. This does introduce the
# possibility that we'll miss some other cause of the same warning, but we'll
# just have to be careful.
class CTSTest:
"""
The base class for all cluster tests.
This implements a basic set of properties and behaviors like setup, tear
down, time keeping, and statistics tracking. It is up to specific tests
to implement their own specialized behavior on top of this class.
"""
def __init__(self, cm):
"""
Create a new CTSTest instance.
Arguments:
cm -- A ClusterManager instance
"""
# pylint: disable=invalid-name
self.audits = []
self.name = None
self.templates = PatternSelector(cm["Name"])
self.stats = {
"auditfail": 0,
"calls": 0,
"failure": 0,
"skipped": 0,
"success": 0
}
self._cm = cm
self._env = EnvFactory().getInstance()
self._rsh = RemoteFactory().getInstance()
self._logger = LogFactory()
self._timers = {}
self.benchmark = True # which tests to benchmark
self.failed = False
self.is_experimental = False
self.is_loop = False
self.is_unsafe = False
- self.is_valgrind = False
self.passed = True
def log(self, args):
"""Log a message."""
self._logger.log(args)
def debug(self, args):
"""Log a debug message."""
self._logger.debug(args)
def get_timer(self, key="test"):
"""Get the start time of the given timer."""
try:
return self._timers[key].start_time
except KeyError:
return 0
def set_timer(self, key="test"):
"""Set the start time of the given timer to now, and return that time."""
if key not in self._timers:
self._timers[key] = Timer(self._logger, self.name, key)
self._timers[key].start()
return self._timers[key].start_time
def log_timer(self, key="test"):
"""Log the elapsed time of the given timer."""
if key not in self._timers:
return
elapsed = self._timers[key].elapsed
self.debug(f"{self.name}:{key} runtime: {elapsed:.2f}")
del self._timers[key]
def incr(self, name):
"""Increment the given stats key."""
if name not in self.stats:
self.stats[name] = 0
self.stats[name] += 1
# Reset the test passed boolean
if name == "calls":
self.passed = True
def failure(self, reason="none"):
"""Increment the failure count, with an optional failure reason."""
self.passed = False
self.incr("failure")
self._logger.log(f"{f'Test {self.name}':<35} FAILED: {reason}")
return False
def success(self):
"""Increment the success count."""
self.incr("success")
return True
def skipped(self):
"""Increment the skipped count."""
self.incr("skipped")
return True
def __call__(self, node):
"""Perform this test."""
raise NotImplementedError
def audit(self):
"""Perform all the relevant audits (see ClusterAudit), returning whether or not they all passed."""
passed = True
for audit in self.audits:
if not audit():
self._logger.log(f"Internal {self.name} Audit {audit.name} FAILED.")
self.incr("auditfail")
passed = False
return passed
def setup(self, node):
"""Set up this test."""
# node is used in subclasses
# pylint: disable=unused-argument
return self.success()
def teardown(self, node):
"""Tear down this test."""
# node is used in subclasses
# pylint: disable=unused-argument
return self.success()
def create_watch(self, patterns, timeout, name=None):
"""
Create a new LogWatcher object.
This object can be used to search log files for matching patterns
during this test's run.
Arguments:
patterns -- A list of regular expressions to match against the log
timeout -- Default number of seconds to watch a log file at a time;
this can be overridden by the timeout= parameter to
self.look on an as-needed basis
name -- A unique name to use when logging about this watch
"""
if not name:
name = self.name
return LogWatcher(self._env["LogFileName"], patterns,
self._env["nodes"], self._env["log_kind"], name,
timeout)
def local_badnews(self, prefix, watch, local_ignore=None):
"""
Search through log files for messages.
Arguments:
prefix -- The string to look for at the beginning of lines,
or "LocalBadNews:" if None.
watch -- The LogWatcher object to use for searching.
local_ignore -- A list of regexes that, if found in a line, will
cause that line to be ignored.
Return the number of matches found.
"""
errcount = 0
if not prefix:
prefix = "LocalBadNews:"
ignorelist = [" CTS: ", prefix]
if local_ignore:
ignorelist += local_ignore
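# Cap reporting at 100 matching lines so a flood of bad news does not
# overwhelm the log (the while/else below logs when the cap is hit)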
while errcount < 100:
match = watch.look(0)
if match:
add_err = True
for ignore in ignorelist:
if add_err and re.search(ignore, match):
add_err = False
if add_err:
self._logger.log(f"{prefix} {match}")
errcount += 1
else:
break
else:
self._logger.log("Too many errors!")
watch.end()
return errcount
def is_applicable(self):
"""
Return True if this test is applicable in the current test configuration.
This method must be implemented by all subclasses.
"""
if self.is_loop and not self._env["loop-tests"]:
return False
if self.is_unsafe and not self._env["unsafe-tests"]:
return False
- if self.is_valgrind and not self._env["valgrind-tests"]:
- return False
-
if self.is_experimental and not self._env["experimental-tests"]:
return False
if self._env["benchmark"] and not self.benchmark:
return False
return True
@property
def errors_to_ignore(self):
"""Return a list of errors which should be ignored."""
return []
