diff --git a/README b/README
index e3d5156..62ba740 100644
--- a/README
+++ b/README
@@ -1,191 +1,195 @@
The Booth Cluster Ticket Manager
================================

Booth manages tickets which authorize cluster sites located in geographically dispersed locations to run resources. It facilitates support of geographically distributed clustering in Pacemaker. Booth is based on the Raft consensus algorithm. Though the implementation is not complete (there is no log) and there are a few additions and modifications, booth guarantees that a ticket is always available at just one site as long as it has exclusive control of the tickets.

The git repository is available at github: github can also track issues or bug reports.

Description of a booth cluster
==============================

A booth cluster is a collection of cooperative servers communicating using the booth protocol. The purpose of the booth cluster is to manage cluster tickets. The booth cluster consists of at least three servers. A booth server can be either a site or an arbitrator. Arbitrators take part in elections and so help resolve ties, but cannot hold tickets.

The basic unit in the booth cluster is a ticket. Every non-granted ticket is in the initial state on all servers. For granted tickets, the server holding the ticket is the leader and the other servers are followers. The leader issues heartbeats and ticket updates to the followers. The followers are required to obey the leader.

Booth startup
-------------

On startup, the booth process first loads tickets, if available, from the CIB. Afterwards, it broadcasts a query to get the tickets' status from other servers. In-memory copies are updated from the replies if they contain newer ticket data. If the server discovers that it is the ticket leader, it tries to establish its authority again by broadcasting a heartbeat. If it succeeds, it continues as the leader for this ticket. The other booth servers become followers.
This procedure is possible only immediately after the booth startup. It also serves as a configuration reload.

Grant and revoke operations
---------------------------

A ticket first has to be granted using the 'booth client grant' command. Obviously, it is not possible to grant a ticket which is currently granted.

Ticket revoke is the operation which is the opposite of grant. An administrative revoke may be started at any server, but the operation itself happens only at the leader. If the leader is unreachable, the ticket cannot be revoked. The user will need to wait until the ticket expires.

A ticket grant may be delayed if not all sites are reachable. The delay is the ticket expiry time extended by acquire-after, if set. This is to ensure that the unreachable site has relinquished the ticket it may have been holding and stopped the corresponding cluster resources. If the user is absolutely sure that the unreachable site does not hold the ticket, the delay may be skipped by using the '-F' option of the 'booth grant' command. If in effect, the grant delay time is shown in the 'booth list' command output.

Ticket management and server operation
--------------------------------------

A granted ticket is managed by the booth servers so that its availability is maximized without breaking the basic guarantee that the ticket is granted to one site only. The server where the ticket is granted is the leader, the other servers are followers. The leader occasionally sends heartbeats, once every half ticket expiry under normal circumstances. If a follower doesn't hear from the leader for longer than the ticket expiry time, it will consider the ticket lost and try to acquire it by starting new elections.

A server starts elections by broadcasting the REQ_VOTE RPC. Other servers reply with the VOTE_FOR RPC, in which they record their vote. Normally, the sender of the first REQ_VOTE gets the vote of the receiver. Whichever server gets a majority of votes wins the elections. On ties, elections are restarted.
To decrease the chance of elections ending in a tie, a server waits for a short random period before sending out the REQ_VOTE packets. Everything else being equal, the server which sends REQ_VOTE first gets elected. Elections are described in more detail in the raft paper at .

Ticket renewal (or update) is a two-step process. Before actually writing the ticket to the CIB, the server holding the ticket first tries to establish that it still has the majority for that ticket. That is done by broadcasting a heartbeat. If the server receives enough acknowledgements, it then stores the ticket to the CIB and broadcasts the UPDATE RPC with the updated ticket expiry time so that the followers can update their local ticket copies. Ticket renewals are configurable and by default set to half the ticket expire time.

-Before ticket renewal, the leader runs an external program if
-such program is set in 'before-acquire-handler'. The external
-program should ensure that the cluster managed service which is
-protected by this ticket can run at this site. If that program
-fails, the leader relinquishes the ticket. It announces its
-intention to step down by broadcasting an unsolicited VOTE_FOR
-with an empty vote. On receiving such RPC other servers start new
-elections to elect a new leader.
+Before ticket renewal, the leader runs one or more external
+programs if any are set in 'before-acquire-handler'. This can
+point either to a file or a directory. In the former case, that
+file is the program, but in the latter there could be a number of
+programs in the specified directory. All files which have the
+executable bit set and whose names don't start with a '.' are
+run sequentially. This program or programs should ensure that the
+cluster managed service which is protected by this ticket can run
+at this site. If any of them fails, the leader relinquishes the
+ticket. It announces its intention to step down by broadcasting
+an unsolicited VOTE_FOR with an empty vote.
+On receiving such RPC
+other servers start new elections to elect a new leader.

Split brain
------------

In a split brain, two possible issues arise: a leader in minority and a follower disconnected from the leader.

Let's take a look at the first one. The leader in minority eventually expires the ticket because it cannot receive a majority of acknowledgements in reply to its heartbeats. The other partition runs elections (at about the same time, as they find the ticket lost after its expiry) and, if it can get the majority, the elections winner becomes a new leader for the ticket. After the split brain is resolved, the old leader will become a follower as soon as it receives a heartbeat from the new leader. Note the timing: the old leader releases the ticket at around the same time as when new elections in the other partition are held. This is because the leader ensures that the ticket expire time is always the same on all servers in the booth cluster.

The second situation, where a follower is disconnected from the leader, is a bit more difficult to handle. After the ticket expiry time, the follower will consider the ticket lost and start new elections. The elections repeatedly get restarted until the split brain is resolved. Then, the rest of the cluster sends rejects in reply to the REQ_VOTE RPC because the ticket is still valid and therefore couldn't have been lost. They know that because the reason for elections is included with every REQ_VOTE. Short intermittent split brains are handled well because the leader keeps resending heartbeats until it gets replies from all servers serving sites.

Authentication
==============

In order to prevent malicious parties from affecting booth operation, the booth server can authenticate both clients (connecting over TCP) and other booth servers (connecting over UDP). The authentication is based on SHA1 HMAC (Keyed-Hashing Message Authentication) and a shared key. The HMAC implementation is provided by the libgcrypt or mhash library.
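Provisioning such a shared key can be sketched as follows. This is only a sketch: the output path and key length are assumptions ('/etc/booth/authkey' is merely a typical location; the documented constraints are a length of 8 to 64 characters and a file readable only by its owner).

```shell
# Sketch: generate a shared authentication key for booth.
# KEYFILE defaults to ./authkey here; a real deployment would
# typically use /etc/booth/authkey.
KEYFILE="${BOOTH_KEYFILE:-./authkey}"
umask 077                                          # no group/other access
# 32 random alphanumeric characters, within the documented 8-64 range
tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 > "$KEYFILE"
chmod 600 "$KEYFILE"                               # readable only by the owner
```

The resulting file then has to be copied to all nodes at each site and to all arbitrators over a secure channel, e.g. with csync2 or ssh, as recommended above.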
Message encryption is not included as the information exchanged between the various booth parties does not seem to justify that.

Every message (packet) contains a hash code computed from the combination of the payload and the secret key. Whoever has the secret key can then verify that the message is authentic.

The shared key is used by both the booth client and the booth server, hence it needs to be copied to all nodes at each site and to all arbitrators. Of course, a secure channel is required for the key transfer. It is recommended to use csync2 or ssh.

Timestamps are included and verified to fend off replay attacks. A certain time skew, 10 minutes by default, is tolerated. Packets either not older than that or with a timestamp more recent than the previous one from the same peer are accepted. The time skew can be configured in the booth configuration file.

# vim: set ft=asciidoc :
diff --git a/docs/boothd.8.txt b/docs/boothd.8.txt
index 04f2142..7f48477 100644
--- a/docs/boothd.8.txt
+++ b/docs/boothd.8.txt
@@ -1,574 +1,580 @@
BOOTHD(8)
=========
:doctype: manpage

NAME
----
boothd - The Booth Cluster Ticket Manager.

SYNOPSIS
--------
*boothd* 'daemon' [-SD] [-c 'config'] [-l 'lockfile']

*booth* 'list' [-s 'site'] [-c 'config']

*booth* 'grant' [-s 'site'] [-c 'config'] [-FCw] 'ticket'

*booth* 'revoke' [-s 'site'] [-c 'config'] [-w] 'ticket'

*booth* 'peers' [-s 'site'] [-c 'config']

*booth* 'status' [-D] [-c 'config']

DESCRIPTION
-----------
Booth manages tickets which authorize one of the cluster sites located in geographically dispersed locations to run certain resources. It is designed to extend Pacemaker to support geographically distributed clustering. It is based on the Raft protocol, see eg. for details.

SHORT EXAMPLES
--------------

---------------------
# boothd daemon -D
# booth list
# booth grant ticket-nfs
# booth revoke ticket-nfs
---------------------

OPTIONS
-------
*-c* 'configfile'::
Configuration to use.
+
Can be a full path to a configuration file, or a short name; in the latter case, the directory '/etc/booth' and the suffix '.conf' are added. By default 'booth' is used, which results in the path '/etc/booth/booth.conf'.
+
The configuration name also determines the name of the PID file - for the defaults, '/var/run/booth/booth.pid'.

*-s*::
Site address or name.
+
The special value 'other' can be used to specify the other site. Obviously, in that case, the booth configuration must have exactly two sites defined.

*-F*::
'immediate grant': Don't wait for unreachable sites to relinquish the ticket. See the 'Booth ticket management' section below for more details.
+
This option may be DANGEROUS. It makes booth grant the ticket even though it cannot ascertain that unreachable sites don't hold the same ticket. It is up to the user to make sure that unreachable sites don't have this ticket as granted.

*-w*::
'wait for the request outcome': The client waits for the final outcome of the grant or revoke request.

*-C*::
'wait for ticket commit to CIB': The client waits for the ticket commit to the CIB (only for grant requests). If one or more sites are unreachable, this takes the ticket expire time (plus, if defined, the 'acquire-after' time).

*-h*, *--help*::
Give a short usage output.

*--version*::
Report version information.

*-S*::
'systemd' mode: don't fork. This is like '-D' but without the debug output.

*-D*::
Debug output/don't daemonize. Increases the debug output level; the booth daemon remains in the foreground.

*-l* 'lockfile'::
Use another lock file. By default, the lock file name is inferred from the configuration file name. Normally not needed.

COMMANDS
--------

Whether the binary is called as 'boothd' or 'booth' doesn't matter; the first argument determines the mode of operation.

*'daemon'*::
Tells 'boothd' to serve a site. The locally configured interfaces are searched for an IP address that is defined in the configuration.
booth then runs in either /arbitrator/ or /site/ mode.

*'client'*::
Booth clients can list the ticket information (see also 'crm_ticket -L'), and revoke or grant tickets to a site.
+
The grant and, under certain circumstances, revoke operations may take a while to return a definite outcome. The client will wait up to the network timeout value (by default 5 seconds) for the result, unless the '-w' option was set, in which case the client waits indefinitely.
+
In this mode the configuration file is searched for an IP address that is locally reachable, ie. matches a configured subnet. This makes it possible to run the client commands on another node in the same cluster, as long as the config file and the service IP are locally reachable.
+
For instance, if the booth service IP is 192.168.55.200, and the local node has 192.168.55.15 configured on one of its network interfaces, it knows which site it belongs to.
+
Use '-s' to direct the client to connect to a different site.

*'status'*::
'boothd' looks for the (locked) PID file and the UDP socket, prints some output to stdout (for use in shell scripts) and returns an OCF-compatible return code. With '-D', a human-readable message is printed to STDERR as well.

*'peers'*::
List the other 'boothd' servers we know about.
+
In addition to the type, name (IP address), and the last time the server was heard from, network statistics are also printed. The statistics are split into two rows, the first one consisting of counters for the sent packets and the second one for the received packets. The first counter is the total number of packets; descriptions of the other counters follow:

'resends'::
Packets which had to be resent because the recipient didn't acknowledge a message. This usually means that either the message or the acknowledgement got lost. The number of resends usually reflects the network reliability.

'error'::
Packets which either couldn't be sent, got truncated, or were badly formed. Should be zero.
'invalid'::
These packets contain either an invalid or non-existing ticket name or refer to a non-existing ticket leader. Should be zero.

'authfail'::
Packets which couldn't be authenticated. Should be zero.

CONFIGURATION FILE
------------------

The configuration file must be identical on all sites and arbitrators.

A minimal file may look like this:

-----------------------
site="192.168.201.100"
site="192.168.202.100"
arbitrator="192.168.203.100"
ticket="ticket-db8"
-----------------------

Comments start with a hash-sign (''#''). Whitespace at the start and end of the line, and around the ''='', is ignored.

The following key/value pairs are defined:

*'port'*::
The UDP/TCP port to use. Default is '9929'.

*'transport'*::
The transport protocol to use for Raft exchanges. Currently only UDP is supported.
+
Clients use TCP to communicate with a daemon; Booth will always bind and listen to both UDP and TCP ports.

*'authfile'*::
File containing the authentication key. The key can be either binary or text. If the latter, both leading and trailing white space, including new lines, is ignored. This key is a shared secret and is used to authenticate both clients and servers. The key must be between 8 and 64 characters long and be readable only by the file owner.

*'maxtimeskew'*::
As protection against replay attacks, packets contain generation timestamps. Such a timestamp is not allowed to be too old. Just how old can be specified with this parameter. The value is in seconds and the default is 600 (10 minutes). If clocks vary more than this default between sites and nodes (which is definitely something you should fix), then set this parameter to a higher value. The time skew test is performed only in concert with authentication.

*'site'*::
Defines a site Raft member with the given IP. Sites can acquire tickets. The site's IP should be managed by the cluster.

*'arbitrator'*::
Defines an arbitrator Raft member with the given IP.
Arbitrators help reach consensus in elections and cannot hold tickets.

Booth needs at least three members for normal operation. An odd number of members provides more redundancy.

*'site-user'*, *'site-group'*, *'arbitrator-user'*, *'arbitrator-group'*::
These define the credentials 'boothd' will be running with.
+
On a (Pacemaker) site the booth process will have to call 'crm_ticket', so the default is to use 'hacluster':'haclient'; for an arbitrator this user and group might not exist, so there we default to 'nobody':'nobody'.

*'ticket'*::
Registers a ticket. Multiple tickets can be handled by a single Booth instance.
+
Use the special ticket name '__defaults__' to modify the defaults. The '__defaults__' stanza must precede all the other ticket specifications.

All times are in seconds.

*'expire'*::
The lease time for a ticket. After that time the ticket can be acquired by another site if the ticket holder is not reachable.
+
The default is '600'.

*'acquire-after'*::
Once a ticket is lost, wait this time in addition before acquiring the ticket.
+
This is to allow for the site that lost the ticket to relinquish the resources, by either stopping them or fencing a node.
+
A typical delay might be 60 seconds, but ultimately it depends on the protected resources and the fencing configuration.
+
The default is '0'.

*'renewal-freq'*::
Set the ticket renewal frequency period.
+
If the network reliability is often reduced over prolonged periods, it is advisable to try to renew more often.
+
-Before every renewal, if defined, the command specified in
-'before-acquire-handler' is run. In that case the 'renewal-freq'
-parameter is effectively also the local cluster monitoring
-interval.
+Before every renewal, if defined, the command or commands
+specified in 'before-acquire-handler' are run. In that case the
+'renewal-freq' parameter is effectively also the local cluster
+monitoring interval.
*'timeout'*::
After that time 'booth' will re-send packets if there was an insufficient number of replies. This should be long enough to allow packets to reach other members.
+
The default is '5'.

*'retries'*::
Defines how many times to retry sending packets before giving up waiting for acks from other members. Default is '10'. Values lower than 3 are illegal.
+
Ticket renewals should allow for this number of retries. Hence, the total retry time must be shorter than the renewal time (either half the expire time or *'renewal-freq'*):

	timeout*(retries+1) < renewal

*'weights'*::
A comma-separated list of integers that define the weight of individual Raft members, in the same order as the 'site' and 'arbitrator' lines.
+
Default is '0' for all; this means that the order in the configuration file defines priority for conflicting requests.

*'before-acquire-handler'*::
-	If set, this command will be called before 'boothd' tries to
-	acquire or renew a ticket. On exit code other than 0,
-	'boothd' relinquishes the ticket.
+	If set, this parameter specifies either a file containing a
+	program to be run or a directory where a number of programs
+	can reside. They are invoked before 'boothd' tries to acquire
+	or renew a ticket. If any of them exits with a code other
+	than 0, 'boothd' relinquishes the ticket.
+
Thus it is possible to check whether the service and its dependencies protected by the ticket are in good shape at this site. For instance, if a service in the dependency-chain has a failcount of 'INFINITY' on all available nodes, the service will be unable to run. In that case, it is of no use to claim the ticket.
+
+One or more arguments may follow the program or directory
+location. Typically, there is at least the name of one of
+the resources which depend on this ticket.
++
See below for details about booth specific environment variables.
The distributed 'service-runnable' script is an example which may be used to test whether a pacemaker resource can be started.

*'attr-prereq'*::
Sites can have GEO attributes managed with the 'geostore(8)' program. Attributes are within the ticket's scope and may be tested by 'boothd' for additional control of ticket failover (automatic) or ticket acquire (manual).
+
Attributes are typically used to convey extra information about resources, for instance database replication status. The attributes are commonly updated by resource agents.
+
Attribute values are referenced in expressions and may be tested for equality with the 'eq' binary operator or inequality with the 'ne' operator. The usage is as follows:

	attr-prereq = grant_type attr_name op attr_value

	grant_type: "auto" | "manual"
	attr_name: attribute name
	op: "eq" | "ne"
	attr_value: attribute value
+
The two grant types are 'auto' for ticket failover and 'manual' for grants using the booth client. Only in case the expression evaluates to true can the ticket be granted.
+
It is not clear whether the 'manual' grant type has any practical use because, obviously, this operation is anyway controlled by a human.
+
Note that there can be no guarantee on whether an attribute value is up to date, i.e. if it actually reflects the current state.

One example of a booth configuration file:

-----------------------
transport = udp
port = 9930

# D-85774
site="192.168.201.100"
# D-90409
site="::ffff:192.168.202.100"
# A-1120
arbitrator="192.168.203.100"

ticket="ticket-db8"
expire = 600
acquire-after = 60
timeout = 10
retries = 5
renewal-freq = 60
before-acquire-handler = /usr/share/booth/service-runnable db8
attr-prereq = auto repl_state eq ACTIVE
-----------------------

BOOTH TICKET MANAGEMENT
-----------------------

The booth cluster guarantees that every ticket is owned by only one site at a time.

Tickets must be initially granted with the 'booth client grant' command. Once granted, the ticket is managed by the booth cluster. Hence, only granted tickets are managed by 'booth'.
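The grant/revoke lifecycle can be sketched with the client commands documented above. This is only a sketch: it assumes a running booth cluster and uses the ticket name from the example configuration.

```shell
# List tickets and their current state at the local site
booth list

# Grant the ticket and wait for the final outcome ('-w');
# with unreachable sites this may be delayed by the expire
# (plus acquire-after) time, unless '-F' is used
booth grant -w ticket-db8

# Later, revoke it again; the site holding the ticket must be reachable
booth revoke -w ticket-db8
```

Note that '-F' skips the safety delay and is up to the administrator to use responsibly, as described below.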
If the ticket gets lost, i.e. the other members of the booth cluster do not hear from the ticket owner for a sufficiently long time, one of the remaining sites will acquire the ticket. This is what is called _ticket failover_. If the remaining members cannot form a majority, then the ticket cannot fail over.

A ticket may be revoked at any time with the 'booth client revoke' command. For revoke to succeed, the site holding the ticket must be reachable.

Once the ticket is administratively revoked, it is not managed by the booth cluster anymore. For the booth cluster to start managing the ticket again, it must be granted to a site again.

The grant operation, in case not all sites are reachable, may get delayed for the ticket expire time (and, if defined, the 'acquire-after' time). The reason is that the other booth members may not know if the ticket is currently granted at the unreachable site. This delay may be disabled with the '-F' option. In that case, it is up to the administrator to make sure that the unreachable site is not holding the ticket.

When the ticket is managed by 'booth', it is dangerous to modify it manually using either the `crm_ticket` command or `crm site ticket`. Neither of these tools is aware of 'booth' and, consequently, 'booth' itself may not be aware of any ticket status changes. A notable exception is setting the ticket to standby, which is typically done before a planned failover.

NOTES
-----

Tickets are not meant to be moved around quickly; the default 'expire' time is 600 seconds (10 minutes).

'booth' works with both IPv4 and IPv6 addresses.

'booth' renews a ticket before it expires, to account for possible transmission delays. The renewal time, unless explicitly set, is set to half the 'expire' time.

HANDLERS
--------

Currently, there's only one external handler defined (see the 'before-acquire-handler' configuration item above).
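The core of such a handler can be sketched as a small shell function. This is a hypothetical placeholder, not the shipped 'service-runnable' script: a real handler would query Pacemaker (e.g. via 'crm_resource') to decide whether the resource can run at this site, whereas this sketch only validates its inputs.

```shell
# Hypothetical core of a before-acquire-handler. boothd exports
# BOOTH_TICKET (among other variables) and passes any configured
# arguments, typically a resource name. A non-zero result makes
# boothd relinquish the ticket.
check_runnable() {
    ticket="${BOOTH_TICKET:-}"     # ticket name exported by boothd
    resource="$1"                  # resource name given as handler argument
    # Placeholder check: a real handler would ask the cluster manager
    # whether $resource can be started at this site.
    [ -n "$ticket" ] && [ -n "$resource" ]
}
```

A handler file would then end with something like `check_runnable "$1" || exit 1`, so that a failed check makes 'boothd' step down as described in the 'before-acquire-handler' item above.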
The following environment variables are exported to the handler:

*'BOOTH_TICKET'*::
The ticket name, as given in the configuration file. (See the 'ticket' item above.)

*'BOOTH_LOCAL'*::
The local site name, as defined in 'site'.

*'BOOTH_CONF_PATH'*::
The path to the active configuration file.

*'BOOTH_CONF_NAME'*::
The configuration name, as used by the '-c' commandline argument.

*'BOOTH_TICKET_EXPIRES'*::
When the ticket expires (in seconds since 1.1.1970), or '0'.

The handler is invoked with the positional arguments specified after it.

FILES
-----

*'/etc/booth/booth.conf'*::
The default configuration file name. See also the '-c' argument.

*'/etc/booth/authkey'*::
There is no default, but this is a typical location for the shared secret (authentication key).

*'/var/run/booth/'*::
Directory that holds PID/lock files. See also the 'status' command.

RAFT IMPLEMENTATION
-------------------

In essence, every ticket corresponds to a separate Raft cluster. A ticket is granted to the Raft _Leader_, which then owns (or keeps) the ticket.

ARBITRATOR MANAGEMENT
---------------------

The booth daemon for an arbitrator, which typically doesn't run the cluster stack, may be started through systemd or with '/etc/init.d/booth-arbitrator', depending on which init system the platform supports.

The SysV init script starts a booth arbitrator for every configuration file found in '/etc/booth'. Platforms running systemd can enable and start every configuration separately using 'systemctl':

-----------
# systemctl enable booth@
# systemctl start booth@
-----------

'systemctl' requires the configuration name, even for the default name 'booth'.

EXIT STATUS
-----------
*0*::
Success. For the 'status' command: Daemon running.

*1* (PCMK_OCF_UNKNOWN_ERROR)::
General error code.

*7* (PCMK_OCF_NOT_RUNNING)::
No daemon process for that configuration active.

BUGS
----
Booth is tested regularly. See the `README-testing` file for more information.
Please report any bugs either at GitHub:

Or, if you prefer bugzilla, at openSUSE bugzilla (component "High Availability"):

https://bugzilla.opensuse.org/enter_bug.cgi?product=openSUSE%20Factory

AUTHOR
------
'boothd' was originally written (mostly) by Jiaju Zhang.

In 2013 and 2014 Philipp Marek took over maintainership. Since April 2014 it has been mainly developed by Dejan Muhamedagic. Many people contributed (see the `AUTHORS` file).

RESOURCES
---------
GitHub:

Documentation:

COPYING
-------
Copyright (C) 2011 Jiaju Zhang
Copyright (C) 2013-2014 Philipp Marek
Copyright (C) 2014 Dejan Muhamedagic

Free use of this software is granted under the terms of the GNU General Public License (GPL).

// vim: set ft=asciidoc :