diff --git a/heartbeat/README.galera b/heartbeat/README.galera
new file mode 100644
index 000000000..56390e60b
--- /dev/null
+++ b/heartbeat/README.galera
@@ -0,0 +1,132 @@
+Notes regarding the Galera resource agent
+---
+
+In the resource agent, bootstrapping a Galera cluster is
+implemented as a series of small steps, by using:
+
+  * Two CIB attributes, `last-committed` and `bootstrap`, to elect a
+    bootstrap node that will restart the cluster.
+
+  * One CIB attribute, `sync-needed`, to identify joining
+    nodes that are in the process of synchronizing their local
+    database via SST.
+
+  * A Master/Slave pacemaker resource, which helps split the boot
+    into steps, up to a point where a galera node is available.
+
+  * The recurring monitor action, to coordinate the switch from one
+    state to another.
+
+How boot works
+====
+
+There are two things to know to understand how the resource agent
+restarts a Galera cluster.
+
+### Bootstrap the cluster with the right node
+
+When synced, the nodes of a Galera cluster share a last seqno,
+which identifies the last transaction considered successful by a
+majority of nodes in the cluster (think quorum).
+
+To restart a cluster, the resource agent must ensure that it
+bootstraps the cluster from a node which is up-to-date, i.e. which
+has the highest seqno of all nodes.
+
+As a result, if the resource agent cannot retrieve the seqno on all
+nodes, it won't be able to safely identify a bootstrap node, and
+will simply refuse to start the galera cluster.
+
+### Synchronizing nodes can be a long operation
+
+Starting a bootstrap node is relatively fast, so it's performed
+during the "promote" operation, which is a one-off, time-bounded
+operation.
+
+Subsequent nodes will need to synchronize via SST, which consists
+of "pushing" an entire Galera DB from one node to another.
+
+There is no perfect time-out, as the time spent synchronizing
+depends on the size of the DB.
Thus, joiner nodes are started during
+the "monitor" operation, which is a recurring operation that can
+better track the progress of the SST.
+
+
+State flow
+====
+
+General idea for starting Galera:
+
+  * Before starting the Galera cluster, each node needs to go into
+    the Slave state so that the agent records its last seqno into
+    the CIB.
+    __ This uses attribute last-committed __
+
+  * Once all nodes are in the Slave state, the agent can safely
+    determine the last seqno and elect a bootstrap node
+    (`detect_first_master()`).
+    __ This uses attribute bootstrap __
+
+  * The agent then sets the score of the elected bootstrap node to
+    Master so that pacemaker promotes it and starts the first Galera
+    server.
+
+  * Once the first Master is running, the agent can start joiner
+    nodes during the "monitor" operation, and start monitoring
+    their SST sync.
+    __ This uses attribute sync-needed __
+
+  * Only when SST is over on a joiner node does the agent promote it
+    to Master. At this point, the entire Galera cluster is up.
+
+
+Attribute usage and liveness
+====
+
+Here is how attributes are created on a per-node basis. If you
+modify the resource agent, make sure those properties still hold.
+
+### last-committed
+
+This attribute is just a temporary hint for the resource agent to
+help elect a bootstrap node. Once the bootstrap attribute is set on
+one of the nodes, we can get rid of last-committed.
+
+  - Used   : during Slave state, to compare seqno
+  - Created: before entering Slave state:
+             . at startup in `galera_start()`
+             . or when a Galera node is stopped in `galera_demote()`
+  - Deleted: just before the node starts in `galera_start_local_node()`;
+             cleaned up during `galera_demote()` and `galera_stop()`
+
+We delete last-committed before starting Galera, to avoid race
+conditions that could arise due to discrepancies between the CIB and
+Galera.
+
+### bootstrap
+
+Attribute set on the node that is elected to bootstrap Galera.
+
+  - Used   : during promotion, in `galera_start_local_node()`
+  - Created: at startup, once all nodes have `last-committed`;
+             or during monitor, if all nodes have failed
+  - Deleted: in `galera_start_local_node()`, just after the bootstrap
+             node has started and is ready;
+             cleaned up during `galera_demote()` and `galera_stop()`
+
+There cannot be more than one bootstrap node at any time, otherwise
+the Galera cluster would stop replicating properly.
+
+### sync-needed
+
+While this attribute is set on a node, the Galera node is in the JOIN
+state, i.e. SST is in progress and the node cannot serve queries.
+
+The resource agent relies on the underlying SST method to monitor
+the progress of the SST. For instance, with `wsrep_sst_rsync`, a
+timeout would be reported by rsync, the Galera node would go into
+Non-primary state, and `galera_monitor()` would fail.
+
+  - Used   : during the recurring slave monitor, in `check_sync_status()`
+  - Created: in `galera_start_local_node()`, just after the joiner
+             node has started and entered the Galera cluster
+  - Deleted: during the recurring slave monitor, in `check_sync_status()`,
+             as soon as the Galera node reports being synced
diff --git a/heartbeat/galera b/heartbeat/galera
index 1a1a4ceac..7be2b00b1 100755
--- a/heartbeat/galera
+++ b/heartbeat/galera
@@ -1,733 +1,800 @@
#!/bin/sh
#
# Copyright (c) 2014 David Vossel
# All Rights Reserved.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of version 2 of the GNU General Public License as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it would be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
#
# Further, this software is distributed without any warranty that it is
# free of the rightful claim of any third person regarding infringement
# or the like.
Any license provided herein, whether implied or # otherwise, applies only to this software file. Patent licenses, if # any, provided herein do not apply to combinations of this program with # other software, or any other product whatsoever. # # You should have received a copy of the GNU General Public License # along with this program; if not, write the Free Software Foundation, # Inc., 59 Temple Place - Suite 330, Boston MA 02111-1307, USA. # ## # README. # # This agent only supports being configured as a multistate Master # resource. # # Slave vs Master role: # # During the 'Slave' role, galera instances are in read-only mode and -# will not attempt to connect to the cluster. This role exists only as +# will not attempt to connect to the cluster. This role exists as # a means to determine which galera instance is the most up-to-date. The # most up-to-date node will be used to bootstrap a galera cluster that # has no current members. # # The galera instances will only begin to be promoted to the Master role # once all the nodes in the 'wsrep_cluster_address' connection address # have entered read-only mode. At that point the node containing the -# database that is most current will be promoted to Master. Once the first -# Master instance bootstraps the galera cluster, the other nodes will be -# promoted to Master as well. +# database that is most current will be promoted to Master. +# +# Once the first Master instance bootstraps the galera cluster, the +# other nodes will join the cluster and start synchronizing via SST. +# They will stay in Slave role as long as the SST is running. Their +# promotion to Master will happen once synchronization is finished. 
#
# Example: Create a galera cluster using nodes rhel7-node1 rhel7-node2 rhel7-node3
#
# pcs resource create db galera enable_creation=true \
#    wsrep_cluster_address="gcomm://rhel7-auto1,rhel7-auto2,rhel7-auto3" meta master-max=3 --master
#
# By setting the 'enable_creation' option, the database will be automatically
# generated at startup. The meta attribute 'master-max=3' means that all 3
# nodes listed in the wsrep_cluster_address list will be allowed to connect
# to the galera cluster and perform replication.
#
# NOTE: If you have more nodes in the pacemaker cluster than you wish
# to have in the galera cluster, make sure to use location constraints to
# prevent pacemaker from attempting to place a galera instance on a node
# that is not in the 'wsrep_cluster_address' list.
#
##

#######################################################################
# Initialization:

: ${OCF_FUNCTIONS_DIR=${OCF_ROOT}/lib/heartbeat}
. ${OCF_FUNCTIONS_DIR}/ocf-shellfuncs
. ${OCF_FUNCTIONS_DIR}/mysql-common.sh

# It is common for some galera instances to store the
# check user that can be used to query status
# in this file
if [ -f "/etc/sysconfig/clustercheck" ]; then
    . /etc/sysconfig/clustercheck
fi

#######################################################################

usage() {
  cat <<UEND
usage: $0 (start|stop|validate-all|meta-data|status|monitor|promote|demote)
UEND
}

meta_data() {
   cat <<END
<?xml version="1.0"?>
<!DOCTYPE resource-agent SYSTEM "ra-api-1.dtd">
<resource-agent name="galera">
<version>1.0</version>

<longdesc lang="en">
Resource script for managing galara database.
</longdesc>
<shortdesc lang="en">Manages a galara instance</shortdesc>
<parameters>

<parameter name="binary" unique="0" required="0">
<longdesc lang="en">
Location of the MySQL server binary
</longdesc>
<shortdesc lang="en">MySQL server binary</shortdesc>
<content type="string" default="${OCF_RESKEY_binary_default}" />
</parameter>

<parameter name="client_binary" unique="0" required="0">
<longdesc lang="en">
Location of the MySQL client binary
</longdesc>
<shortdesc lang="en">MySQL client binary</shortdesc>
<content type="string" default="${OCF_RESKEY_client_binary_default}" />
</parameter>

<parameter name="config" unique="0" required="0">
<longdesc lang="en">
Configuration file
</longdesc>
<shortdesc lang="en">MySQL config</shortdesc>
<content type="string" default="${OCF_RESKEY_config_default}" />
</parameter>

<parameter name="datadir" unique="0" required="0">
<longdesc lang="en">
Directory containing databases
</longdesc>
<shortdesc lang="en">MySQL datadir</shortdesc>
<content type="string" default="${OCF_RESKEY_datadir_default}" />
</parameter>

<parameter name="user" unique="0" required="0">
<longdesc lang="en">
User running MySQL daemon
</longdesc>
<shortdesc lang="en">MySQL user</shortdesc>
<content type="string" default="${OCF_RESKEY_user_default}" />
</parameter>

<parameter name="group" unique="0" required="0">
<longdesc lang="en">
Group running MySQL daemon (for logfile and directory permissions)
</longdesc>
<shortdesc lang="en">MySQL group</shortdesc>
<content type="string" default="${OCF_RESKEY_group_default}"/>
</parameter>

<parameter name="log" unique="0" required="0">
<longdesc lang="en">
The logfile to be used for mysqld.
</longdesc>
<shortdesc lang="en">MySQL log file</shortdesc>
<content type="string" default="${OCF_RESKEY_log_default}"/>
</parameter>

<parameter name="pid" unique="0" required="0">
<longdesc lang="en">
The pidfile to be used for mysqld.
</longdesc>
<shortdesc lang="en">MySQL pid file</shortdesc>
<content type="string" default="${OCF_RESKEY_pid_default}"/>
</parameter>

<parameter name="socket" unique="0" required="0">
<longdesc lang="en">
The socket to be used for mysqld.
</longdesc>
<shortdesc lang="en">MySQL socket</shortdesc>
<content type="string" default="${OCF_RESKEY_socket_default}"/>
</parameter>

<parameter name="enable_creation" unique="0" required="0">
<longdesc lang="en">
If the MySQL database does not exist, it will be created
</longdesc>
<shortdesc lang="en">Create the database if it does not exist</shortdesc>
<content type="boolean" default="${OCF_RESKEY_enable_creation_default}"/>
</parameter>

<parameter name="additional_parameters" unique="0" required="0">
<longdesc lang="en">
Additional parameters which are passed to the mysqld on startup.
(e.g. --skip-external-locking or --skip-grant-tables)
</longdesc>
<shortdesc lang="en">Additional parameters to pass to mysqld</shortdesc>
<content type="string" default="${OCF_RESKEY_additional_parameters_default}"/>
</parameter>

<parameter name="wsrep_cluster_address" unique="0" required="1">
<longdesc lang="en">
The galera cluster address. This takes the form of:
gcomm://node,node,node

Only nodes present in this node list will be allowed to start a galera instance.
It is expected that the galera node names listed in this address match valid
pacemaker node names.
</longdesc>
<shortdesc lang="en">Galera cluster address</shortdesc>
<content type="string" default=""/>
</parameter>

<parameter name="check_user" unique="0" required="0">
<longdesc lang="en">
Cluster check user.
</longdesc>
<shortdesc lang="en">MySQL test user</shortdesc>
<content type="string" default="root" />
</parameter>

<parameter name="check_passwd" unique="0" required="0">
<longdesc lang="en">
Cluster check user password
</longdesc>
<shortdesc lang="en">check password</shortdesc>
<content type="string" default="" />
</parameter>

</parameters>

<actions>
<action name="start" timeout="120" />
<action name="stop" timeout="120" />
<action name="status" timeout="60" />
<action name="monitor" depth="0" timeout="30" interval="20" />
<action name="monitor" role="Master" depth="0" timeout="30" interval="10" />
<action name="monitor" role="Slave" depth="0" timeout="30" interval="30" />
<action name="promote" timeout="300" />
<action name="demote" timeout="120" />
<action name="validate-all" timeout="5" />
<action name="meta-data" timeout="5" />
</actions>
</resource-agent>
END
}

get_option_variable()
{
    local key=$1
    $MYSQL $MYSQL_OPTIONS_CHECK -e "SHOW VARIABLES like '$key';" | tail -1
}

get_status_variable()
{
    local key=$1
    $MYSQL $MYSQL_OPTIONS_CHECK -e "show status like '$key';" | tail -1
}

set_bootstrap_node()
{
    local node=$1

    ${HA_SBIN_DIR}/crm_attribute -N $node -l reboot --name "${INSTANCE_ATTR_NAME}-bootstrap" -v "true"
}

clear_bootstrap_node()
{
    ${HA_SBIN_DIR}/crm_attribute -N $NODENAME -l reboot --name "${INSTANCE_ATTR_NAME}-bootstrap" -D
}

is_bootstrap()
{
    ${HA_SBIN_DIR}/crm_attribute -N $NODENAME -l reboot --name "${INSTANCE_ATTR_NAME}-bootstrap" -Q 2>/dev/null
}

clear_last_commit()
{
    ${HA_SBIN_DIR}/crm_attribute -N $NODENAME -l reboot --name "${INSTANCE_ATTR_NAME}-last-committed" -D
}

set_last_commit()
{
    ${HA_SBIN_DIR}/crm_attribute -N $NODENAME -l reboot --name "${INSTANCE_ATTR_NAME}-last-committed" -v $1
}

get_last_commit()
{
    local node=$1

    if [ -z "$node" ]; then
        ${HA_SBIN_DIR}/crm_attribute -N $NODENAME -l reboot --name "${INSTANCE_ATTR_NAME}-last-committed" -Q 2>/dev/null
    else
        ${HA_SBIN_DIR}/crm_attribute -N $node -l reboot --name "${INSTANCE_ATTR_NAME}-last-committed" -Q 2>/dev/null
    fi
}

wait_for_sync()
{
    local state=$(get_status_variable "wsrep_local_state")

    ocf_log info "Waiting for database to sync with the cluster. "
    while [ "$state" != "4" ]; do
        sleep 1
        state=$(get_status_variable "wsrep_local_state")
    done
    ocf_log info "Database synced."
}

+set_sync_needed()
+{
+    ${HA_SBIN_DIR}/crm_attribute -N $NODENAME -l reboot --name "${INSTANCE_ATTR_NAME}-sync-needed" -v "true"
+}
+
+clear_sync_needed()
+{
+    ${HA_SBIN_DIR}/crm_attribute -N $NODENAME -l reboot --name "${INSTANCE_ATTR_NAME}-sync-needed" -D
+}
+
+check_sync_needed()
+{
+    ${HA_SBIN_DIR}/crm_attribute -N $NODENAME -l reboot --name "${INSTANCE_ATTR_NAME}-sync-needed" -Q 2>/dev/null
+}
+
+check_sync_status()
+{
+    local state=$(get_status_variable "wsrep_local_state")
+    local ready=$(get_status_variable "wsrep_ready")
+
+    if [ -z "$state" -o -z "$ready" ]; then
+        ocf_exit_reason "Unable to retrieve state transfer status, verify check_user '$OCF_RESKEY_check_user' has permissions to view status"
+        return $OCF_ERR_GENERIC
+    fi
+
+    if [ "$state" = "4" -a "$ready" = "ON" ]; then
+        ocf_log info "local node synced with the cluster"
+        # when sync is finished, we are ready to switch to Master
+        clear_sync_needed
+        set_master_score
+        return $OCF_SUCCESS
+    else
+        ocf_log info "local node syncing"
+        return $OCF_SUCCESS
+    fi
+}
+
is_primary()
{
    cluster_status=$(get_status_variable "wsrep_cluster_status")

    if [ "$cluster_status" = "Primary" ]; then
        return 0
    fi

    if [ -z "$cluster_status" ]; then
        ocf_exit_reason "Unable to retrieve wsrep_cluster_status, verify check_user '$OCF_RESKEY_check_user' has permissions to view status"
    else
        ocf_log info "Galera instance wsrep_cluster_status=${cluster_status}"
    fi
    return 1
}

is_readonly()
{
    local res=$(get_option_variable "read_only")

    if ! ocf_is_true "$res"; then
        return 1
    fi

    cluster_status=$(get_status_variable "wsrep_cluster_status")
    if ! [ "$cluster_status" = "Disconnected" ]; then
        return 1
    fi

    return 0
}

master_exists()
{
    if [ "$__OCF_ACTION" = "demote" ]; then
        # We don't want to detect master instances during demote.
        # 1. we could be detecting ourselves as being master, which is no longer the case.
        # 2. we could be detecting other master instances that are in the process of shutting down.
        # by not detecting other master instances in "demote" we are deferring this check
        # to the next recurring monitor operation which will be much more accurate
        return 1
    fi

    # determine if a master instance is already up and is healthy
    crm_mon --as-xml | grep "resource.*id=\"${OCF_RESOURCE_INSTANCE}\".*role=\"Master\".*active=\"true\".*orphaned=\"false\".*failed=\"false\"" > /dev/null 2>&1
    return $?
}

clear_master_score()
{
    local node=$1
    if [ -z "$node" ]; then
        $CRM_MASTER -D
    else
        $CRM_MASTER -D -N $node
    fi
}

set_master_score()
{
    local node=$1

    if [ -z "$node" ]; then
        $CRM_MASTER -v 100
    else
        $CRM_MASTER -N $node -v 100
    fi
}

-promote_everyone()
-{
-
-    for node in $(echo "$OCF_RESKEY_wsrep_cluster_address" | sed 's/gcomm:\/\///g' | tr -d ' ' | tr -s ',' ' '); do
-
-        set_master_score $node
-    done
-}
-
greater_than_equal_long()
{
    # there are values we need to compare in this script
    # that are too large for shell -gt to process
    echo | awk -v n1="$1" -v n2="$2" '{if (n1>=n2) printf ("true"); else printf ("false");}' | grep -q "true"
}

detect_first_master()
{
    local best_commit=0
    local best_node="$NODENAME"
    local last_commit=0
    local missing_nodes=0

    for node in $(echo "$OCF_RESKEY_wsrep_cluster_address" | sed 's/gcomm:\/\///g' | tr -d ' ' | tr -s ',' ' '); do
        last_commit=$(get_last_commit $node)

        if [ -z "$last_commit" ]; then
            ocf_log info "Waiting on node <${node}> to report database status before Master instances can start."
            missing_nodes=1
            continue
        fi

        # this means -1, or that no commit has occurred yet.
        if [ "$last_commit" = "18446744073709551615" ]; then
            last_commit="0"
        fi

        greater_than_equal_long "$last_commit" "$best_commit"
        if [ $?
-eq 0 ]; then
            best_node=$node
            best_commit=$last_commit
        fi

    done

    if [ $missing_nodes -eq 1 ]; then
        return
    fi

    ocf_log info "Promoting $best_node to be our bootstrap node"
    set_master_score $best_node
    set_bootstrap_node $best_node
}

-# For galera, promote is really start
-galera_promote()
+galera_start_local_node()
{
    local rc
    local extra_opts
    local bootstrap
+
+    bootstrap=$(is_bootstrap)

    master_exists
    if [ $? -eq 0 ]; then
        # join without bootstrapping
+        ocf_log info "Node <${NODENAME}> is joining the cluster"
        extra_opts="--wsrep-cluster-address=${OCF_RESKEY_wsrep_cluster_address}"
+    elif ocf_is_true $bootstrap; then
+        ocf_log info "Node <${NODENAME}> is bootstrapping the cluster"
+        extra_opts="--wsrep-cluster-address=gcomm://"
    else
-        bootstrap=$(is_bootstrap)
-
-        if ocf_is_true $bootstrap; then
-            ocf_log info "Node <${NODENAME}> is bootstrapping the cluster"
-            extra_opts="--wsrep-cluster-address=gcomm://"
-        else
-            ocf_exit_reason "Failure, Attempted to promote Master instance of $OCF_RESOURCE_INSTANCE before bootstrap node has been detected."
-            clear_last_commit
-            return $OCF_ERR_GENERIC
-        fi
-    fi
-
-    galera_monitor
-    if [ $? -eq $OCF_RUNNING_MASTER ]; then
-        if ocf_is_true $bootstrap; then
-            promote_everyone
-            clear_bootstrap_node
-            ocf_log info "boostrap node already up, promoting the rest of the galera instances."
-        fi
+        ocf_exit_reason "Failure, Attempted to join cluster of $OCF_RESOURCE_INSTANCE before master node has been detected."
        clear_last_commit
-        return $OCF_SUCCESS
+        return $OCF_ERR_GENERIC
    fi

-    # last commit is no longer relevant once promoted
+    # clear last_commit before we start galera to make sure there
+    # won't be a discrepancy between the CIB and galera if this node
+    # processes a few transactions and fails before we detect it
    clear_last_commit

    mysql_common_prepare_dirs
    mysql_common_start "$extra_opts"
    rc=$?
    if [ $rc != $OCF_SUCCESS ]; then
        return $rc
    fi

-    galera_monitor
+    mysql_common_status info
    rc=$?
-    if [ $rc != $OCF_SUCCESS -a $rc != $OCF_RUNNING_MASTER ]; then
+
+    if [ $rc != $OCF_SUCCESS ]; then
        ocf_exit_reason "Failed initial monitor action"
        return $rc
    fi

    is_readonly
    if [ $? -eq 0 ]; then
        ocf_exit_reason "Failure. Master instance started in read-only mode, check configuration."
        return $OCF_ERR_GENERIC
    fi

    is_primary
    if [ $? -ne 0 ]; then
        ocf_exit_reason "Failure. Master instance started, but is not in Primary mode."
        return $OCF_ERR_GENERIC
    fi

    if ocf_is_true $bootstrap; then
-        promote_everyone
        clear_bootstrap_node
-        ocf_log info "Bootstrap complete, promoting the rest of the galera instances."
    else
-        # if this is not the bootstrap node, make sure this instance
-        # syncs with the rest of the cluster before promotion returns.
-        wait_for_sync
+        set_sync_needed
    fi

    ocf_log info "Galera started"
    return $OCF_SUCCESS
}

+
+galera_promote()
+{
+    local rc
+    local extra_opts
+    local bootstrap
+
+    master_exists
+    if [ $? -ne 0 ]; then
+        # promoting the first master will bootstrap the cluster
+        if is_bootstrap; then
+            galera_start_local_node
+            rc=$?
+            return $rc
+        else
+            ocf_exit_reason "Attempted to start the cluster without being a bootstrap node."
+            return $OCF_ERR_GENERIC
+        fi
+    else
+        # promoting other masters only performs sanity checks
+        # as the joining nodes were started during the "monitor" op
+        if ! check_sync_needed; then
+            return $OCF_SUCCESS
+        else
+            ocf_exit_reason "Attempted to promote local node while sync was still needed."
+            return $OCF_ERR_GENERIC
+        fi
+    fi
+}
+
galera_demote()
{
    mysql_common_stop
    rc=$?
    if [ $rc -ne $OCF_SUCCESS ] && [ $rc -ne $OCF_NOT_RUNNING ]; then
        ocf_exit_reason "Failed to stop Master galera instance during demotion to Slave"
        return $rc
    fi

    # if this node was previously a bootstrap node, that is no longer the case.
    clear_bootstrap_node
    clear_last_commit
+    clear_sync_needed

    # record last commit by "starting" galera.
start is just detection of the last sequence number galera_start } galera_start() { local last_commit echo $OCF_RESKEY_wsrep_cluster_address | grep -q $NODENAME if [ $? -ne 0 ]; then ocf_exit_reason "local node <${NODENAME}> must be a member of the wsrep_cluster_address <${OCF_RESKEY_wsrep_cluster_address}>to start this galera instance" return $OCF_ERR_CONFIGURED fi - galera_monitor - if [ $? -eq $OCF_RUNNING_MASTER ]; then + mysql_common_status info + if [ $? -ne $OCF_NOT_RUNNING ]; then ocf_exit_reason "master galera instance started outside of the cluster's control" return $OCF_ERR_GENERIC fi mysql_common_prepare_dirs ocf_log info "attempting to detect last commit version by reading ${OCF_RESKEY_datadir}/grastate.dat" last_commit="$(cat ${OCF_RESKEY_datadir}/grastate.dat | sed -n 's/^seqno.\s*\(.*\)\s*$/\1/p')" if [ -z "$last_commit" ] || [ "$last_commit" = "-1" ]; then ocf_log info "now attempting to detect last commit version using 'mysqld_safe --wsrep-recover'" local tmp=$(mktemp) ${OCF_RESKEY_binary} --defaults-file=$OCF_RESKEY_config \ --pid-file=$OCF_RESKEY_pid \ --socket=$OCF_RESKEY_socket \ --datadir=$OCF_RESKEY_datadir \ --user=$OCF_RESKEY_user \ --wsrep-recover > $tmp 2>&1 last_commit="$(cat $tmp | sed -n 's/.*WSREP\:\s*[R|r]ecovered\s*position.*\:\(.*\)\s*$/\1/p')" rm -f $tmp if [ "$last_commit" = "-1" ]; then last_commit="0" fi fi if [ -z "$last_commit" ]; then ocf_exit_reason "Unable to detect last known write sequence number" clear_last_commit return $OCF_ERR_GENERIC fi ocf_log info "Last commit version found: $last_commit" set_last_commit $last_commit master_exists if [ $? -eq 0 ]; then - ocf_log info "Master instances are already up, setting master score so this instance will join galera cluster." 
- set_master_score $NODENAME + ocf_log info "Master instances are already up, local node will join in when started" else clear_master_score detect_first_master fi return $OCF_SUCCESS } galera_monitor() { local rc local status_loglevel="err" # Set loglevel to info during probe if ocf_is_probe; then status_loglevel="info" fi mysql_common_status $status_loglevel rc=$? if [ $rc -eq $OCF_NOT_RUNNING ]; then - last_commit=$(get_last_commit $node) - if [ -n "$last_commit" ]; then - # if last commit is set, this instance is considered started in slave mode + last_commit=$(get_last_commit $NODENAME) + if [ -n "$last_commit" ];then rc=$OCF_SUCCESS + + if ocf_is_probe; then + # prevent state change during probe + return $rc + fi + master_exists if [ $? -ne 0 ]; then detect_first_master else - # a master instance exists and is healthy, promote this - # local read only instance - # so it can join the master galera cluster. - set_master_score + # a master instance exists and is healthy. + # start this node and mark it as "pending sync" + ocf_log info "cluster is running. start local node to join in" + galera_start_local_node + rc=$? fi fi return $rc elif [ $rc -ne $OCF_SUCCESS ]; then return $rc fi # if we make it here, mysql is running. Check cluster status now. echo $OCF_RESKEY_wsrep_cluster_address | grep -q $NODENAME if [ $? -ne 0 ]; then ocf_exit_reason "local node <${NODENAME}> is started, but is not a member of the wsrep_cluster_address <${OCF_RESKEY_wsrep_cluster_address}>" return $OCF_ERR_GENERIC fi is_primary if [ $? -eq 0 ]; then + check_sync_needed + if [ $? -eq 0 ]; then + # galera running and sync is needed: slave state + if ocf_is_probe; then + # prevent state change during probe + rc=$OCF_SUCCESS + else + check_sync_status + rc=$? 
+ fi + else + # galera running, no need to sync: master state and everything's clear + rc=$OCF_RUNNING_MASTER - if ocf_is_probe; then - # restore master score during probe - # if we detect this is a master instance - set_master_score + if ocf_is_probe; then + # restore master score during probe + # if we detect this is a master instance + set_master_score + fi fi - rc=$OCF_RUNNING_MASTER else ocf_exit_reason "local node <${NODENAME}> is started, but not in primary mode. Unknown state." rc=$OCF_ERR_GENERIC fi return $rc } galera_stop() { local rc # make sure the process is stopped mysql_common_stop - rc=$1 + rc=$? clear_last_commit clear_master_score clear_bootstrap_node + clear_sync_needed return $rc } galera_validate() { if ! ocf_is_ms; then ocf_exit_reason "Galera must be configured as a multistate Master/Slave resource." return $OCF_ERR_CONFIGURED fi if [ -z "$OCF_RESKEY_wsrep_cluster_address" ]; then ocf_exit_reason "Galera must be configured with a wsrep_cluster_address value." return $OCF_ERR_CONFIGURED fi mysql_common_validate } case "$1" in meta-data) meta_data exit $OCF_SUCCESS;; usage|help) usage exit $OCF_SUCCESS;; esac galera_validate rc=$? 
LSB_STATUS_STOPPED=3 if [ $rc -ne 0 ]; then case "$1" in stop) exit $OCF_SUCCESS;; monitor) exit $OCF_NOT_RUNNING;; status) exit $LSB_STATUS_STOPPED;; *) exit $rc;; esac fi if [ -z "${OCF_RESKEY_check_passwd}" ]; then # This value is automatically sourced from /etc/sysconfig/checkcluster if available OCF_RESKEY_check_passwd=${MYSQL_PASSWORD} fi if [ -z "${OCF_RESKEY_check_user}" ]; then # This value is automatically sourced from /etc/sysconfig/checkcluster if available OCF_RESKEY_check_user=${MYSQL_USERNAME} fi : ${OCF_RESKEY_check_user="root"} MYSQL_OPTIONS_CHECK="-nNE --user=${OCF_RESKEY_check_user}" if [ -n "${OCF_RESKEY_check_passwd}" ]; then MYSQL_OPTIONS_CHECK="$MYSQL_OPTIONS_CHECK --password=${OCF_RESKEY_check_passwd}" fi # This value is automatically sourced from /etc/sysconfig/checkcluster if available if [ -n "${MYSQL_HOST}" ]; then MYSQL_OPTIONS_CHECK="$MYSQL_OPTIONS_CHECK -h ${MYSQL_HOST}" fi # This value is automatically sourced from /etc/sysconfig/checkcluster if available if [ -n "${MYSQL_PORT}" ]; then MYSQL_OPTIONS_CHECK="$MYSQL_OPTIONS_CHECK -P ${MYSQL_PORT}" fi # What kind of method was invoked? case "$1" in start) galera_start;; stop) galera_stop;; status) mysql_common_status err;; monitor) galera_monitor;; promote) galera_promote;; demote) galera_demote;; validate-all) exit $OCF_SUCCESS;; *) usage exit $OCF_ERR_UNIMPLEMENTED;; esac # vi:sw=4:ts=4:et:
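The bootstrap election described in README.galera boils down to keeping the node with the highest `last-committed` seqno, using awk because seqnos can exceed what the shell's `-gt` handles. The sketch below can be run outside pacemaker and is illustrative only: the node names and seqno values are made up, and the CIB lookups done by `get_last_commit()` are replaced by a static list; only `greater_than_equal_long` is taken verbatim from the agent.

```shell
#!/bin/sh
# Standalone sketch of the seqno comparison in detect_first_master().
# Seqnos can exceed the range `[ ... -gt ... ]` handles in some shells
# (e.g. the "unknown" marker 18446744073709551615), hence awk.

greater_than_equal_long()
{
    # there are values we need to compare in this script
    # that are too large for shell -gt to process
    echo | awk -v n1="$1" -v n2="$2" '{if (n1>=n2) printf ("true"); else printf ("false");}' | grep -q "true"
}

# Hypothetical "last-committed" values, standing in for CIB queries.
best_node=""
best_commit=0
for entry in node1:3045 node2:3047 node3:3046; do
    node=${entry%%:*}
    last_commit=${entry##*:}

    # the agent maps the "unknown" marker (-1 as unsigned) to 0
    if [ "$last_commit" = "18446744073709551615" ]; then
        last_commit="0"
    fi

    # keep whichever node reported the highest seqno
    if greater_than_equal_long "$last_commit" "$best_commit"; then
        best_node=$node
        best_commit=$last_commit
    fi
done

echo "bootstrap node: $best_node (seqno $best_commit)"
```

Run as-is, this prints `bootstrap node: node2 (seqno 3047)`: node2 reported the highest seqno and is the only safe node to bootstrap from.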