diff --git a/ChangeLog b/ChangeLog
index 44a7434c44..5d2911b6b9 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,2459 +1,2459 @@
* Thu Jul 06 2017 Ken Gaillot <kgaillot@redhat.com> Pacemaker-1.1.17-1
- Update source tarball to revision: 301bc44
- Changesets: 539
- Diff: 177 files changed, 11525 insertions(+), 5036 deletions(-)
- Features added since Pacemaker-1.1.16
+ New "bundle" resource type for Docker container use cases (experimental)
+ New "PCMK_node_start_state" environment variable to start node in standby
+ New "value-source" rule expression attribute in location constraints to
compare a node attribute against a resource parameter
+ New "stonith-max-attempts" cluster option to specify how many times
fencing can fail for a target before the cluster will no longer
immediately re-attempt it (previously hard-coded at 10)
+ New "cluster-ipc-limit" cluster option to avoid IPC client eviction in
large clusters
+ Failures are now tracked per operation type, as well as per node and
resource (the "fail-count" and "last-failure" node attribute names now end
in "#OPERATION_INTERVAL")
+ attrd: Pacemaker Remote node attributes and regular expressions are now
supported on legacy cluster stacks (heartbeat, CMAN, and corosync plugin)
+ tools: New "crm_resource --validate" option
+ tools: New "stonith_admin --list-targets" option
+ tools: New "crm_attribute --pattern" option to match a regular expression
+ tools: "crm_resource --cleanup" and "crm_failcount" can now take
--operation and --interval options to operate on a single operation type
- Changes since Pacemaker-1.1.16
+ Fix multiple memory issues (leaks, use-after-free) in libraries
+ pengine: unmanaging a guest node resource puts guest in maintenance mode
+ cib: broadcasts of cib changes should always pass ACL checks
+ crmd,libcrmcommon: update throttling when CPUs are hot-plugged
+ crmd: abort transition whenever we lose quorum
+ crmd: avoid attribute write-out on join when atomic attrd is used
+ crmd: check for too many stonith failures only when aborting for that reason
+ crmd: correctly clear failure counts only for a specified node
+ crmd: don't fence old DC if it's shutting down as soon-to-be DC joins
+ crmd: forget stonith failures when forgetting node
+ crmd: all nodes should track stonith failure counts in case they become DC
+ crmd: update cache status for guest node whose host is fenced
+ dbus: prevent lrmd from hanging on dbus calls
+ fencing: detect newly added constraints for stonith devices
+ pengine: order remote actions after connection recovery
(regression introduced in 1.1.15)
+ pengine: quicker recovery from failed demote
+ libcib: determine remote nodes correctly from node status entries
+ libcrmcommon: avoid evicting IPC client if messages spike briefly
+ libcrmcommon: better XML comment handling prevents infinite election loop
+ libcrmcommon: set month correctly in date/time string sent to alert agents
+ libfencing,fencing: intelligently remap "action" wrongly specified in config
+ libservices: ensure completed ops aren't on blocked ops list
+ libservices: properly detect and cancel in-flight systemd/upstart ops
+ libservices: properly watch writable DBus handles
+ libservices: systemd service that is reloading doesn't cause monitor failure
+ pacemaker_remoted: allow graceful shutdown while unmanaged
+ pengine,libpe_status: don't clear same fail-count twice
+ pengine: consider guest node unclean if its host is unclean
+ pengine: do not re-add a node's default score for each location constraint
+ pengine: avoid restarting services when recovering remote connection
+ pengine: better guest node recovery when host fails
+ pengine: guest node fencing doesn't require stonith enabled
+ pengine: allow probes of guest node connection resources
+ pengine: properly handle allow-migrate explicitly set for remote connection
+ pengine: fence failed remote nodes even if no resources can run on them
+ tools: resource agents will now get the correct node name on
Pacemaker Remote nodes when using crm_node and crm_attribute
+ tools: avoid grep crashes in crm_report when looking for system logs
+ tools: crm_resource -C now clears last-failure as well as fail-count
+ tools: implement crm_failcount command-line options correctly
+ tools: properly ignore version with crm_diff --no-version
* Wed Nov 30 2016 Ken Gaillot <kgaillot@redhat.com> Pacemaker-1.1.16-1
- Update source tarball to revision: 76876b3
- Changesets: 382
- Diff: 145 files changed, 7200 insertions(+), 5621 deletions(-)
- Features added since Pacemaker-1.1.15
+ Location constraints may use rsc-pattern, with submatches expanded
+ node-health-base available with node-health-strategy=progressive
+ new Pacemaker Development document for working on pacemaker code base
+ new PCMK_panic_action variable allows crash instead of reboot on panic
+ resources: add resource agent for managing a node attribute
+ systemd: include socket units when listing all systemd agents
- Changes since Pacemaker-1.1.15
+ Important security fix for CVE-2016-7035
+ Logging is now synchronous when blackboxes are enabled
+ All python code except CTS is now compatible with python 2.6+ and 3.2+
+ build: take advantage of compiler features for security and performance
+ build: update SuSE spec modifications for recent spec changes
+ build: avoid watchdog reboot when upgrading pacemaker_remote with sbd
+ build: numerous other improvements in environment detection, etc.
+ cib: fix infinite loop when no schema validates
+ crmd: cl#5185 - record pending operations in CIB before they are performed
+ crmd: don't abort transitions for CIB comment changes
+ crmd: resend shutdown request if DC loses original request
+ documentation: install improved README in doc instead of now-removed AUTHORS
+ documentation: clarify licensing and provide copy of all licenses
+ documentation: document various features and upgrades better
+ fence_legacy: use "list" action when searching cluster-glue agents
+ libcib: don't stop sending alerts after releasing DC role
+ libcrmcommon: properly handle XML comments when comparing v2 patchset diffs
+ libcrmcommon: report errors consistently when waiting for data on connection
+ libpengine: avoid potential use-of-NULL
+ libservices: use DBusError API properly
+ pacemaker_remote: init script stop should always return 0
+ pacemaker_remote: allow remote clients to timeout/reconnect
+ pacemaker_remote: correctly calculate remaining timeout when receiving messages
+ pengine: avoid transition loop for start-then-stop + unfencing
+ pengine: correctly update dependent actions of un-runnable clones
+ pengine: do not fence a node in maintenance mode if it shuts down cleanly
+ pengine: set OCF_RESKEY_CRM_meta_notify_active_* for multistate resources
+ resources: ping - avoid temporary files for fping check, support FreeBSD
+ resources: SysInfo - better support for FreeBSD
+ resources: variable name typo in docker-wrapper
+ systemd: order pacemaker after time-sync target
+ tools: correct attrd_updater help and error messages when using CMAN
+ tools: crm_standby --version/--help should work without cluster running
+ tools: make crm_report sanitize CIB before generating readable version
+ tools: display pending resource state by default when available
+ tools: avoid matching other process with same PID in ClusterMon
* Tue Jun 21 2016 Ken Gaillot <kgaillot@redhat.com> Pacemaker-1.1.15-1
- Update source tarball to revision: 32fa6a5
- Changesets: 533
- Diff: 219 files changed, 6659 insertions(+), 3989 deletions(-)
- Features added since Pacemaker-1.1.14
+ Event-driven alerts allow scripts to be called after significant events
+ build: Some files moved from pacemaker package to pacemaker-cli for cleaner pacemaker-remote dependencies
+ build: ./configure --with-configdir argument for /etc/sysconfig, /etc/default, etc.
+ fencing: Simplify watchdog integration
+ fencing: Support concurrent fencing actions via new pcmk_action_limit option
+ remote: pacemaker_remote may be stopped without disabling resource first
+ remote: Report integration status of Pacemaker Remote nodes in CIB node_state
+ tools: crm_mon now reports why resources are not starting
+ tools: crm_report now obscures passwords in logfiles
+ tools: attrd_updater --update-both/--update-delay options allow changing dampening value
+ tools: allow stonith_admin -H '*' to show history for all nodes
- Changes since Pacemaker-1.1.14
+ Fix multiple memory issues (leaks, use-after-free) in daemons, libraries and tools
+ Make various log messages more user-friendly
+ Improve FreeBSD and Hurd support
+ attrd: Prevent possible segfault on exit
+ cib: Fix regression to restore support for compressed CIB larger than 1MB
+ common: fix regression in 1.1.14 that made have-watchdog always true
+ controld: handle DLM "wait fencing" state better
+ crmd: Fix regression so that fenced unseen nodes do not remain unclean
+ crmd: Take start-delay into account when calculating action timeouts
+ crmd: Avoid timeout on older peers when cancelling a resource operation
+ fencing: Allow fencing by node ID (e.g. by DLM) even if node left cluster
+ lrmd: Fix potential issues when cluster is stopped via systemd shutdown
+ pacemakerd: Properly respawn stonithd if it fails
+ pengine: Fix regression with multiple monitor levels that could ignore failure
+ pengine: Correctly set OCF_RESKEY_CRM_meta_timeout when start-delay is configured
+ pengine: Properly order actions for master/slave resources in anti-colocations
+ pengine: Respect asymmetrical ordering when trying to move resources
+ pengine: Properly order stop actions on guest node relative to host stonith
+ pengine: Correctly block actions dependent on unrunnable clones
+ remote: Allow remote nodes to have node attributes even with legacy attrd
+ remote: Recover from remote node fencing more quickly
+ remote: Place resources on newly rejoined remote nodes more quickly
+ resources: ping agent can now use fping6 for IPv6 hosts
+ resources: SysInfo now resets #health_disk to green when there's sufficient free disk
+ tools: crm_report is now more efficient and handles Pacemaker Remote nodes better
+ tools: Prevent crm_resource segfault when --resource is not supplied with --restart
+ tools: crm_shadow --display option now works
+ tools: crm_resource --restart handles groups, target-roles and moving resources better
* Thu Jan 14 2016 Ken Gaillot <kgaillot@redhat.com> Pacemaker-1.1.14-1
- Update source tarball to revision: f0b585a
- Changesets: 724
- Diff: 179 files changed, 13142 insertions(+), 7695 deletions(-)
- Features added since Pacemaker-1.1.13
+ crm_resource: Indicate common reasons why a resource may not start after a cleanup
+ crm_resource: New --force-promote and --force-demote options for debugging
+ fencing: Support targeting fencing topologies by node name pattern or node attribute
+ fencing: Remap sequential topology reboots to all-off-then-all-on
+ pengine: Allow resources to start and stop as soon as their state is known on all nodes
+ pengine: Include a list of all and available nodes with clone notifications
+ pengine: Addition of the clone resource clone-min metadata option
+ pengine: Support of multiple-active=block for resource groups
+ remote: Resources that create guest nodes can be included in a group resource
+ remote: reconnect_interval option for remote nodes to delay reconnect after fence
- Changes since Pacemaker-1.1.13
+ improve support for building on FreeBSD and Debian
+ fix multiple memory issues (leaks, use-after-free, double free, use-of-NULL) in components and tools
+ cib: Do not terminate due to badly behaving clients
+ cman: handle corosync-invented node names of the form Node{id} for peers not in its node list
+ controld: replace bashism
+ crm_node: Display node state with -l and quorum status with -q, if available
+ crmd: resources would sometimes be restarted when only non-unique parameters changed
+ crmd: fence remote node after connection failure only once
+ crmd: handle resources named the same as cluster nodes
+ crmd: Pre-emptively fail in-flight actions when lrmd connections fail
+ crmd: Record actions in the CIB as failed if we cannot execute them
+ crm_report: Enable password sanitizing by default
+ crm_report: Allow log file discovery to be disabled
+ crm_resource: Allow the resource configuration to be modified for --force-{check,start,..} calls
+ crm_resource: Compensate for -C and -p being called with the child resource for clones
+ crm_resource: Correctly clean up all children for anonymous cloned groups
+ crm_resource: Correctly clean up failcounts for inactive anonymous clones
+ crm_resource: Correctly observe --force when deleting and updating attributes
+ crm_shadow: Fix "crm_shadow --diff"
+ crm_simulate: Prevent segfault on arches with 64bit time_t
+ fencing: ensure "required"/"automatic" only apply to "on" actions
+ fencing: Return a provider for the internal fencing agent "#watchdog" instead of logging an error
+ fencing: ignore stderr output of fence agents (often used for debug messages)
+ fencing: fix issue where deleting a fence device attribute can delete the device
+ libcib: potential user input overflow
+ libcluster: overhaul peer cache management
+ log: make syslog less noisy
+ log: fix various misspellings in log messages
+ lrmd: cancel currently pending STONITH op if stonithd connection is lost
+ lrmd: Finalize all pending and recurring operations when cleaning up a resource
+ pengine: Bug cl#5247 - Imply resources running on a container are stopped when the container is stopped
+ pengine: cl#5235 - Prevent graph loops that can be introduced by "load_stopped -> migrate_to" ordering
+ pengine: Correctly bypass fencing for resources that do not require it
+ pengine: do not timeout remote node recurring monitor op failure until after fencing
+ pengine: Ensure recurring monitor operations are cancelled when clone instances are de-allocated
+ pengine: fixes segfault in pengine when fencing remote node
+ pengine: properly handle blocked clone actions
+ pengine: ensure failed actions that occurred in node shutdown are displayed
+ remote: Correctly display the usage of the ocf:pacemaker:remote resource agent
+ remote: do not fail operations because of a migration
+ remote: enable reloads for select remote connection options
+ resources: allow for top output with or without percent sign in HealthCPU
+ resources: Prevent an error message on stopping "Dummy" resource
+ systemd: Prevent segfault when logging failed operations
+ systemd: Reconnect to System DBus if the connection is closed
+ systemd: set systemd resources' timeout values higher than systemd's own default
+ tools: Do not send command lines to syslog
+ tools: update SNMP MIB
+ upstart: Ensure pending structs are correctly unreferenced
* Wed Jun 24 2015 Andrew Beekhof <andrew@beekhof.net> Pacemaker-1.1.13-1
- Update source tarball to revision: 2a1847e
- Changesets: 750
- Diff: 156 files changed, 11323 insertions(+), 3725 deletions(-)
- Features added since Pacemaker-1.1.12
+ Allow fail-counts to be removed en masse when the new attrd is in operation
+ attrd supports private attributes (not written to CIB)
+ crmd: Ensure a watchdog device is in use if stonith-watchdog-timeout is configured
+ crmd: If configured, trigger the watchdog immediately if we lose quorum and no-quorum-policy=suicide
+ crm_diff: Support generating a difference without versions details if --no-version/-u is supplied
+ crm_resource: Implement an intelligent restart capability
+ Fencing: Advertise the watchdog device for fencing operations
+ Fencing: Allow the cluster to recover resources if the watchdog is in use
+ fencing: cl#5134 - Support random fencing delay to avoid double fencing
+ mcp: Allow orphan children to initiate node panic via SIGQUIT
+ mcp: Turn on sbd integration if pacemakerd finds it running
+ mcp: Two new error codes that result in machine reset or power off
+ Officially support the resource-discovery attribute for location constraints
+ PE: Allow natural ordering of colocation sets
+ PE: Support non-actionable degraded mode for OCF
+ pengine: cl#5207 - Display "UNCLEAN" for resources running on unclean offline nodes
+ remote: pcmk remote client tool for use with container wrapper script
+ Support machine panics for some kinds of errors (via sbd if available)
+ tools: add crm_resource --wait option
+ tools: attrd_updater supports --query and --all options
+ tools: attrd_updater: Allow attributes to be set for other nodes
- Changes since Pacemaker-1.1.12
+ pengine: exclusive discovery implies rsc is only allowed on exclusive subset of nodes
+ acl: Correctly implement the 'reference' acl directive
+ acl: Do not delay evaluation of added nodes in some situations
+ attrd: b22b1fe did uuid test too early
+ attrd: Clean out the node cache when requested by the admin
+ attrd: fixes double free in attrd legacy
+ attrd: properly write attributes for peers once uuid is discovered
+ attrd: refresh should force an immediate write-out of all attributes
+ attrd: Simplify how node deletions happen
+ Bug rhbz#1067544 - Tools: Correctly handle --ban, --move and --locate for master/slave groups
+ Bug rhbz#1181824 - Ensure the DC can be reliably fenced
+ cib: Ability to upgrade cib validation schema in legacy mode
+ cib: Always generate digests for cib diffs in legacy mode
+ cib: assignment where comparison intended
+ cib: Avoid nodeid conflicts we don't care about
+ cib: Correctly add "update-origin", "update-client" and "update-user" attributes for cib
+ cib: Correctly set up signal handlers
+ cib: Correctly track node state
+ cib: Do not update on disk backups if we're just querying them
+ cib: Enable cib legacy mode for plugin-based clusters
+ cib: Ensure file-based backends treat '-o section' consistently with the native backend
+ cib: Ensure upgrade operations from a non-DC get an acknowledgement
+ cib: No need to enforce cib digests for v2 diffs in legacy mode
+ cib: Revert d153b86 to instantly get cib synchronized in legacy mode
+ cib: tls sock cleanup for remote cib connections
+ cli: Ensure subsequent unknown long options are correctly detected
+ cluster: Invoke crm_remove_conflicting_peer() only when the new node's uname is being assigned in the node cache
+ common: Increment current and age for lib common as a result of APIs being added
+ corosync: Bug cl#5232 - Somewhat gracefully handle nodes with invalid UUIDs
+ corosync: Avoid unnecessary repeated CMAP API calls
+ crmd/pengine: handle on-fail=ignore properly
+ crmd: Add "on_node" attribute for *_last_failure_0 lrm resource operations
+ crmd: All peers need to track node shutdown requests
+ crmd: Cached copies of transient attributes cease to be valid once a node leaves the membership
+ crmd: Correctly add the local option that validates against schema for pengine to calculate
+ crmd: Disable debug logging that results in significant overhead
+ crmd: do not remove connection resources during re-probe
+ crmd: don't update fail count twice for same failure
+ crmd: Ensure remote connection resources timeout properly during 'migrate_from' action
+ crmd: Ensure throttle_mode() does something on Linux
+ crmd: Fixes crash when remote connection migration fails
+ crmd: gracefully handle remote node disconnects during op execution
+ crmd: Handle remote connection failures while executing ops on remote connection
+ crmd: include remote nodes when forcing cluster wide resource reprobe
+ crmd: never stop recurring monitor ops for pcmk remote during incomplete migration
+ crmd: Prevent the old version of DC from being fenced when it shuts down for rolling-upgrade
+ crmd: Prevent use-of-NULL during reprobe
+ crmd: properly update job limit for baremetal remote-nodes
+ crmd: Remote-node throttle jobs count towards cluster-node hosting connection rsc
+ crmd: Reset stonith failcount to recover transitioner when the node rejoins
+ crmd: resolves memory leak in crmd.
+ crmd: respect start-failure-is-fatal even for artificially injected events
+ crmd: Wait for all pending operations to complete before poking the policy engine
+ crmd: When container's host is fenced, cancel in-flight operations
+ crm_attribute: Correctly update config options when -o crm_config is specified
+ crm_failcount: Better error reporting when no resource is specified
+ crm_mon: add exit reason to resource failure output
+ crm_mon: Fill CRM_notify_node in traps with node's uname rather than node's id if possible
+ crm_mon: Repair notification delivery when the v2 patch format is in use
+ crm_node: Correctly remove nodes from the CIB by nodeid
+ crm_report: More patterns for finding logs on non-DC nodes
+ crm_resource: Allow resource restart operations to be node specific
+ crm_resource: avoid deletion of lrm cache on node with resource discovery disabled.
+ crm_resource: Calculate how long to wait for a restart based on the resource timeouts
+ crm_resource: Clean up memory in --restart error paths
+ crm_resource: Display the locations of all anonymous clone children when supplying the children's common ID
+ crm_resource: Ensure --restart sets/clears meta attributes
+ crm_resource: Ensure fail-counts are purged when we redetect the state of all resources
+ crm_resource: Implement --timeout for resource restart operations
+ crm_resource: Include group members when calculating the next timeout
+ crm_resource: Memory leak in error paths
+ crm_resource: Prevent use-after-free
+ crm_resource: Repair regression test outputs
+ crm_resource: Use-after-free when restarting a resource
+ dbus: ref count leaks
+ dbus: Ensure both the read and write queues get dispatched
+ dbus: Fail gracefully if malloc fails
+ dbus: handle dispatch queue when multiple replies need to be processed
+ dbus: Notice when dbus connections get disabled
+ dbus: Remove double-free introduced while trying to make coverity shut up
+ ensure if B is colocated with A, B can never run without A
+ fence_legacy: Avoid passing 'port' to cluster-glue agents
+ fencing: Allow nodes to be purged from the member cache
+ fencing: Correctly make args for fencing agents
+ fencing: Correctly wait for self-fencing to occur when the watchdog is in use
+ fencing: Ensure the hostlist parameter is set for watchdog agents
+ fencing: Force 'stonith-ng' as the system name
+ fencing: Gracefully handle invalid metadata from agents
+ fencing: If configured, wait stonith-watchdog-timer seconds for self-fencing to complete
+ fencing: Reject actions for devices that haven't been explicitly registered yet
+ ipc: properly allocate server enforced buffer size on client
+ ipc: use server enforced buffer during ipc client send
+ lrmd, services: interpret LSB status codes properly
+ lrmd: add back support for class heartbeat agents
+ lrmd: cancel pending async connection during disconnect
+ lrmd: enable ipc proxy for docker-wrapper privileged mode
+ lrmd: fix rescheduling of systemd monitor op during start
+ lrmd: Handle systemd reporting 'done' before a resource is actually stopped
+ lrmd: Hint to child processes that using sd_notify is not required
+ lrmd: Log with the correct personality
+ lrmd: Prevent glib assert triggered by timers being removed from mainloop more than once
+ lrmd: report original timeout when systemd operation completes
+ lrmd: store failed operation exit reason in cib
+ mainloop: resolves race condition mainloop poll involving modification of ipc connections
+ make targeted reprobe for remote node work, crm_resource -C -N <remote node>
+ mcp: Allow a configurable delay when debugging shutdown issues
+ mcp: Avoid requiring 'export' for SYS-V sysconfig options
+ Membership: Detect and resolve nodes that change their ID
+ pacemakerd: resolves memory leak of xml structure in pacemakerd
+ pengine: ability to launch resources in isolated containers
+ pengine: add #kind=remote for baremetal remote-nodes
+ pengine: allow baremetal remote-nodes to recover without requiring fencing when cluster-node fails
+ pengine: allow remote-nodes to be placed in maintenance mode
+ pengine: Avoid trailing whitespaces when printing resource state
+ pengine: cl#5130 - Choose nodes capable of running all the colocated utilization resources
+ pengine: cl#5130 - Only check the capacities of the nodes that are allowed to run the resource
+ pengine: Correctly compare feature set to determine how to unpack meta attributes
+ pengine: disable migrations for resources with isolation containers
+ pengine: disable reloading of resources within isolated container wrappers
+ pengine: Do not aggregate children in a pending state into the started/stopped/etc lists
+ pengine: Do not record duplicate copies of the failed actions
+ pengine: Do not reschedule monitors that are no longer needed while resource definitions have changed
+ pengine: Fence baremetal remote when recurring monitor op fails
+ pengine: Fix colocation with unmanaged resources
+ pengine: Fix the behaviors of multi-state resources with asymmetrical ordering
+ pengine: fixes pengine crash with orphaned remote node connection resource
+ pengine: fixes segfault caused by malformed log warning
+ pengine: handle cloned isolated resources in a sane way
+ pengine: handle isolated resource scenario, cloned group of isolated resources
+ pengine: Handle ordering between stateful and migratable resources
+ pengine: imply stop in container node resources when host node is fenced
+ pengine: only fence baremetal remote when the connection fails or cannot be recovered
+ pengine: only kill process group on timeout when on-fail does not equal block.
+ pengine: per-node control over resource discovery
+ pengine: prefer migration target for remote node connections
+ pengine: prevent disabling rsc discovery per node in certain situations
+ pengine: Prevent use-after-free in sort_rsc_process_order()
+ pengine: properly handle ordering during remote connection partial migration
+ pengine: properly recover remote-nodes when cluster-node proxy goes offline
+ pengine: remove unnecessary whitespace from notify environment variables
+ pengine: require-all feature for ordered clones
+ pengine: Resolve memory leaks
+ pengine: resource discovery mode for location constraints
+ pengine: restart master instances on instance attribute changes
+ pengine: Turn off legacy unpacking of resource options into the meta hashtable
+ pengine: Watchdog integration is sufficient for fencing
+ Perform systemd reloads asynchronously
+ ping: Correctly advertise multiplier default
+ Prefer to inherit the watchdog timeout from SBD
+ properly record stop args after reload
+ provide fake meta data for ra class heartbeat
+ remote: report timestamps for remote connection resource operations
+ remote: Treat recv msg timeout as a disconnect
+ service: Prevent potential use-of-NULL in metadata lookups
+ solaris: Allow compilation when dirent.d_type is not available
+ solaris: Correctly replace the linux swab functions
+ solaris: Disable throttling since /proc doesn't exist
+ stonith-ng: Correctly observe the watchdog completion timeout
+ stonith-ng: Correctly track node state
+ stonith-ng: Reset mainloop source IDs after removing them
+ systemd: Correctly handle long running stop actions
+ systemd: Ensure failed monitor operations always return
+ systemd: Ensure we don't call dbus_message_unref() with NULL
+ systemd: fix crash caused when canceling in-flight operation
+ systemd: Kindly ask dbus NOT to kill the process if the dbus connection fails
+ systemd: Perform actions asynchronously
+ systemd: Perform monitor operations without blocking
+ systemd: Tell systemd not to take DBus down from underneath us
+ systemd: Trick systemd into not stopping our services before us during shutdown
+ tools: Improve crm_mon output with certain option combinations
+ upstart: Monitor actions always return 'ok' or 'not running'
+ upstart: Perform more parts of monitor operations without blocking
+ xml: add 'require-all' to xml schema for constraints
+ xml: cl#5231 - Unset the deleted attributes in the resulting diffs
+ xml: Clone the latest constraint schema in preparation for changes
+ xml: Correctly create v1 patchsets when deleting attributes
+ xml: Do not change the ordering of properties when applying v1 cib diffs
+ xml: Do not dump deleted attributes
+ xml: Do not prune leaves from v1 cib diffs that are being created with digests
+ xml: Ensure ACLs are reapplied before calculating what a replace operation changed
+ xml: Fix upgrade-1.3.xsl to correctly transform ACL rules with "attribute"
+ xml: Prevent assert errors in crm_element_value() on applying a patch without version information
+ xml: Prevent potential use-of-NULL
* Tue Jul 22 2014 Andrew Beekhof <andrew@beekhof.net> Pacemaker-1.1.12-1
- Update source tarball to revision: 93a037d
- Changesets: 795
- Diff: 195 files changed, 13772 insertions(+), 6176 deletions(-)
- Features added since Pacemaker-1.1.11
+ Changes to the ACL schema to support nodes and unix groups
+ cib: Check ACLs prior to making the update instead of parsing the diff afterwards
+ cib: Default ACL support to on
+ cib: Enable the more efficient xml patchset format
+ cib: Implement zero-copy status update
+ cib: Send all r/w operations via the cluster connection and have all nodes process them
+ crmd: Set "cluster-name" property to corosync's "cluster_name" by default for corosync-2
+ crm_mon: Display brief output if "-b/--brief" is supplied or 'b' is toggled
+ crm_report: Allow ssh alternatives to be used
+ crm_ticket: Support multiple modifications for a ticket in an atomic operation
+ extra: Add logrotate configuration file for /var/log/pacemaker.log
+ Fencing: Add the ability to call stonith_api_time() from stonith_admin
+ logging: daemons always get a log file, unless explicitly configured to 'none'
+ logging: allows the user to specify a log level that is output to syslog
+ PE: Automatically re-unfence a node if the fencing device definition changes
+ pengine: cl#5174 - Allow resource sets and templates for location constraints
+ pengine: Support cib object tags
+ pengine: Support cluster-specific instance attributes based on rules
+ pengine: Support id-ref in nvpair with optional "name"
+ pengine: Support per-resource maintenance mode
+ pengine: Support site-specific instance attributes based on rules
+ tools: Allow crm_shadow to create older configuration versions
+ tools: Display pending state in crm_mon/crm_resource/crm_simulate if --pending/-j is supplied (cl#5178)
+ xml: Add the ability to have lightweight schema revisions
+ xml: Enable resource sets in location constraints for 1.2 schema
+ xml: Support resources that require unfencing
- Changes since Pacemaker-1.1.11
+ acl: Authenticate pacemaker-remote requests with the node name as the client
+ acl: Read access must be explicitly granted
+ attrd: Ensure attribute dampening is always observed
+ attrd: Remove offline nodes from node cache for "peer-remove" requests
+ Bug cl#5055 - Improved migration support.
+ Bug cl#5184 - Ensure pending probes that ultimately fail are correctly updated
+ Bug cl#5196 - pengine: Check values after expanding templates
+ Bug cl#5212 - Do not promote instances when quorum is lost and no-quorum-policy=freeze
+ Bug cl#5213 - Ensure role colocation with -INFINITY is enforced
+ Bug cl#5213 - Limit the scope of the previous commit to the masters role
+ Bug cl#5219 - pengine: Allow unrelated resources with a common colocation target to remain promoted
+ Bug cl#5222 - cib: Repair rolling update capability
+ Bug cl#5222 - Enable legacy mode whenever a broadcast update is detected
+ Bug rhbz#1036631 - Stop members of cloned groups when dependencies are stopped
+ Bug rhbz#1054307 - cname pattern match should be more restrictive in init script
+ Bug rhbz#1057697 - Use native DBus library for systemd/upstart support to avoid problematic use of threads
+ Bug rhbz#1097457 - Limit the scope of the previous fix and include a helpful comment
+ Bug rhbz#1097457 - Prevent invalid transition when resource are ordered to start after the container they're started in
+ cib: allow setting permanent remote-node attributes
+ cib: Auto-detect which patchset format to use
+ cib: Determine the best value of validate-with if one is not supplied
+ cib: Do not disable cib disk writes if on-disk cib is corrupt
+ cib: Ensure 'cibadmin -R/--replace' commands get replies
+ cib: Erasing the cib is an admin action, bump the admin_epoch instead
+ cib: Fix remote cib based on TLS
+ cib: Ignore patch failures if we already have their contents
+ cib: Validate that everyone still sees the same configuration once all updates have completed
+ cibadmin: Allow privileged clients to perform tasks as unprivileged users
+ cibadmin: Remove dangerous commands that exposed unnecessary implementation internal details
+ cluster: Fix segfault on removing a node
+ cluster: Prevent search of unames from attempting to create node entries for unknown nodes
+ cluster: Remove unknown offline nodes with conflicting unames from node cache
+ controld: Do not consider the dlm up until the address list is present
+ controld: handling startup fencing within the controld agent, not the dlm
+ controld: Return OCF_ERR_INSTALLED instead of OCF_NOT_INSTALLED
+ crmd: Ack pending operations that were cancelled due to rsc deletion
+ crmd: Actions can only be executed if their prerequisites completed successfully
+ crmd: avoid double free caused by nested hash table removal
+ crmd: Avoid spamming the cib by triggering a transition only once per non-status change
+ crmd: Correctly react to successful unfencing operations
+ crmd: Correctly recognise operation cancellations we initiated
+ crmd: Do not erase the status section for unfenced nodes
+ crmd: Do not overwrite existing node state when fencing completes
+ crmd: Do not start timers for already completed operations
+ crmd: Ensure crm_config options are re-read on updates
+ crmd: Fenced nodes that return prior to an election do not need to have their status section reset
+ crmd: make lrm_state hash table not case sensitive
+ crmd: make node_state erase correctly
+ crmd: Only write fence_averride if open() returns a positive file descriptor
+ crmd: Prevent manual fencing confirmations from attempting to create node entries for unknown nodes
+ crmd: Prevent SIGPIPE when notifying CMAN about fencing operations
+ crmd: Remove state of unknown nodes with conflicting unames from CIB
+ crmd: Remove unknown nodes with conflicting unames from CIB
+ crmd: Report unsuccessful unfencing operations
+ crm_diff: Allow the generation of xml patchsets without digests
+ crm_mon: Allow the file created by --as-html to be world readable
+ crm_mon: Ensure resource attributes have been unpacked before displaying connectivity data
+ crm_node: Only remove the named resource from the cib
+ crm_report: Gracefully handle ridiculously large logfiles
+ crm_report: Only gather dlm data if dlm_controld is running
+ crm_resource: Gracefully handle -EACCESS when querying the cib
+ crm_verify: Perform a full set of calculations whenever the status section is present
+ fencing: Advertise support for reboot/on/off in the metadata for legacy agents
+ fencing: Automatically switch from 'list' to 'status' to 'static-list' if those actions are not advertised in the metadata
+ fencing: Cache metadata lookups to avoid repeated blocking during device registration
+ fencing: Correctly record which peer performed the fencing operation
+ fencing: default to 'off' when agent does not advertise 'reboot' in metadata
+ fencing: Do not unregister/register all stonith devices on every resource agent change
+ fencing: Execute all required fencing devices regardless of what topology level they are at
+ fencing: Fence using all required devices
+ fencing: Pass the correct options when looking up the history by node name
+ fencing: Update stonith device list only if stonith is enabled
+ get_cluster_type: failing concurrent tool invocations on heartbeat
+ ignore SIGPIPE when gnutls is in use
+ iso8601: Different logic is needed when logging and calculating durations
+ iso8601: Fix memory leak in duration calculation
+ Logging: Bootstrap daemon logging before processing arguments but configure it afterwards
+ lrmd: Cancel recurring operations before stop action is executed
+ lrmd: Expose logging variables expected by OCF agents
+ lrmd: Handle systemd reporting 'done' before a resource is actually stopped/started
+ lrmd: Merge duplicate recurring monitor operations
+ lrmd: Prevent OCF agents from logging to random files due to "value" of setenv() being NULL
+ lrmd: Provide stderr output from agents if available, otherwise fall back to stdout
+ mainloop: Better handle the killing of processes in the act of exiting
+ mainloop: Canceling in-flight operations should not fail if child process has already exited.
+ mainloop: Fixes use after free in process monitor code
+ mcp: Tell systemd not to respawn us if we exit with rc=100
+ membership: Avoid duplicate peer entries in the peer cache
+ pengine: Allow container nodes to migrate with connection resource
+ pengine: avoid assert by searching for stop action on correct node during LogActions
+ pengine: Block restart of resources if any dependent resource in a group is unmanaged
+ pengine: cl#5186 - Avoid running rsc on two nodes when node is fenced during migration
+ pengine: cl#5187 - Prevent resources in an anti-colocation from even temporarily running on a same node
+ pengine: cl#5200 - Before migrating utilization-using resources to a node, take off the load that will no longer run there if it's not introducing transition loop
+ pengine: Correctly handle origin offsets in the future
+ pengine: Correctly observe requires=nothing
+ pengine: Default sequential to TRUE for resource sets for consistency with colocation sets
+ pengine: Delay unfencing until after we know the state of all resources that require unfencing
+ pengine: Do not initiate fencing for unclean nodes when fencing is disabled
+ pengine: Ensure instance numbers are preserved for cloned templates
+ pengine: Ensure unfencing only happens once, even if the transition is interrupted
+ pengine: Fencing devices default to only requiring quorum in order to start
+ pengine: fixes invalid transition caused by clones with more than 10 instances
+ pengine: Force record pending for migrate_to actions
+ pengine: handles edge case where container order constraints are not honored during migration
+ pengine: Ignore failure-timeout only if the failed operation has on-fail="block"
+ pengine: Mark unrunnable stop actions as "blocked" and show the correct current locations
+ pengine: Memory leaks
+ pengine: properly handle fencing of container remote-nodes when the container is orphaned
+ pengine: properly place resource within a container when container is a remote-node.
+ pengine: Unfencing is based on device probes, there is no need to unfence when normal resources are found active
+ pengine: Use "#cluster-name" in rules for setting cluster-specific instance attributes
+ pengine: Use "#site-name" in rules for setting site-specific instance attributes
+ remote: Allow baremetal remote-node connection resources to migrate
+ remote: clear remote-node status correctly
+ remote: Enable migration support for baremetal connection resources by default
+ remote: Handle request/response ipc proxy correctly
+ services: Correctly reset the nice value for lrmd's children
+ services: Do not allow duplicate recurring op entries
+ services: Do not block synced service executions
+ services: Fixes segfault associated with cancelling in-flight recurring operations.
+ services: Remove cancelled recurring ops from internal lists as early as possible
+ services: Remove file descriptors from mainloop as soon as we have drained them
+ services: Reset the scheduling policy and priority for lrmd's children without relying on SCHED_RESET_ON_FORK
+ services_action_cancel: Interpret return code from mainloop_child_kill() correctly
+ stonith_admin: Ensure pointers passed to sscanf() are properly initialized
+ stonith_api_time_helper now returns when the most recent fencing operation completed
+ systemd: Prevent use-of-NULL when determining if an agent exists
+ systemd: Try to handle dbus actions that complete prior to configuring a callback
+ Tools: Non-daemons shouldn't abort just because xml parsing failed
+ Upstart: Allow compilation with glib versions older than 2.28
+ Upstart: Do not attempt upstart jobs if we cannot connect to dbus
+ Fix an issue where the newest cib might not be acquired when data was old
+ xml: Check all available schemas when doing upgrades
+ xml: Correctly determine the lowest allowed schema version
+ xml: Correctly enforce ACLs after a replace operation
+ xml: Correctly infer attribute changes after a replace operation
+ xml: Create the correct diff when only part of a document is changed
+ xml: Detect attribute ordering changes
+ xml: Detect content that is added and removed in the same update
+ xml: Do not prune meaningful leaves from v1 patchsets
+ xml: Empty patchsets are considered to have applied cleanly
+ xml: Ensure patches always have version details set
+ xml: Find the minimal set of changes when part of a document is replaced
+ xml: If validate-with is missing, we find the most recent schema that accepts it and go from there
+ xml: Introduce a 'move' primitive for v2 patch sets
+ xml: Preserve the attribute order in the patch for subsequent digest validation
+ xml: Resolve memory leak when logging xml blobs
+ xml: Update xml validation to allow '<node type=remote />'
* Thu Feb 13 2014 David Vossel <davidvossel@gmail.com> Pacemaker-1.1.11-1
- Update source tarball to revision: 33f9d09
- Changesets: 462
- Diff: 147 files changed, 6810 insertions(+), 4057 deletions(-)
- Features added since Pacemaker-1.1.10
+ attrd: A truly atomic version of attrd for use where CPG is used for cluster communication
+ cib: Allow values to be added/updated and removed in a single update
+ cib: Support XML comments in diffs
+ Core: Allow blackbox logging to be disabled with SIGUSR2
+ crmd: Do not block on proxied calls from pacemaker_remoted
+ crmd: Enable cluster-wide throttling when the cib heavily exceeds its target load
+ crmd: Make the per-node action limit directly configurable in the CIB
+ crmd: Slow down recovery on nodes with IO load
+ crmd: Track CPU usage on cluster nodes and slow down recovery on nodes with high CPU/IO load
+ crm_mon: add --hide-headers option to hide all headers
+ crm_node: Display partition output in sorted order
+ crm_report: Collect logs directly from journald if available
+ Fencing: On timeout, clean up the agent's entire process group
+ Fencing: Support agents that need the host to be unfenced at startup
+ ipc: Raise the default buffer size to 128k
+ PE: Add a special attribute for distinguishing between real nodes and containers in constraint rules
+ PE: Allow location constraints to take a regex pattern to match against resource IDs
+ pengine: Distinguish between the agent being missing and something the agent needs being missing
+ remote: Properly version the remote connection protocol
- Changes since Pacemaker-1.1.10
+ Bug rhbz#1011618 - Consistently use 'Slave' as the role for unpromoted master/slave resources
+ Bug rhbz#1057697 - Use native DBus library for systemd and upstart support to avoid problematic use of threads
+ attrd: Any variable called 'cluster' makes the daemon crash before reaching main()
+ attrd: Avoid infinite write loop for unknown peers
+ attrd: Drop all attributes for peers that left the cluster
+ attrd: Give remote-nodes ability to set attributes with attrd
+ attrd: Prevent inflation of attribute dampen intervals
+ attrd: Support SI units for attribute dampening
+ Bug cl#5171 - pengine: Don't prevent clones from running due to dependent resources
+ Bug cl#5179 - Corosync: Attempt to retrieve a peer's node name if it is not already known
+ Bug cl#5181 - corosync: Ensure node IDs are written to the CIB as unsigned integers
+ Bug rhbz#902407 - crm_resource: Handle --ban for master/slave resources as advertised
+ cib: Correctly check for archived configuration files
+ cib: Correctly log short-form xml diffs
+ cib: Fix remote cib based on TLS
+ cibadmin: Report errors during sign-off
+ cli: Do not enabled blackbox for cli tools
+ cluster: Fix segfault on removing a node
+ cman: Do not start pacemaker if cman startup fails
+ cman: Start clvmd and friends from the init script if enabled
+ Command-line tools should stop after an assertion failure
+ controld: Use the correct variant of dlm_controld for corosync-2 clusters
+ cpg: Correctly set the group name length
+ cpg: Ensure the CPG group is always null-terminated
+ cpg: Only process one message at a time to allow other priority jobs to be performed
+ crmd: Correctly observe the configured batch-limit
+ crmd: Correctly update expected state when the previous DC shuts down
+ crmd: Correctly update the history cache when recurring ops change their return code
+ crmd: Don't add node_state to cib, if we have not seen or fenced this node yet
+ crmd: don't segfault on shutdown when using heartbeat
+ crmd: Prevent recurring monitors being cancelled due to notify operations
+ crmd: Reliably detect and act on reprobe operations from the policy engine
+ crmd: When a peer expectedly shuts down, record the new join and expected states into the cib
+ crmd: When the DC gracefully shuts down, record the new expected state into the cib
+ crm_attribute: Do not swallow hostname lookup failures
+ crm_mon: Do not display duplicates of failed actions
+ crm_mon: Reduce flickering in interactive mode
+ crm_resource: Observe --master modifier for --move
+ crm_resource: Provide a meaningful error if --master is used for primitives and groups
+ fencing: Allow fencing for node after topology entries are deleted
+ fencing: Apply correct score to the resource of group
+ fencing: Ignore changes to non-fencing resources
+ fencing: Observe pcmk_host_list during automatic unfencing
+ fencing: Put all fencing agent processes into their own process group
+ fencing: Wait until all possible replies are received before continuing with unverified devices
+ ipc: Compress msgs based on client's actual max send size
+ ipc: Have the ipc server enforce a minimum buffer size all clients must use.
+ iso8601: Prevent dates from jumping backwards a day in some timezones
+ lrmd: Correctly calculate metadata for the 'service' class
+ lrmd: Correctly cancel monitor actions for lsb/systemd/service resources on cleaning up
+ mcp: Remove LSB hints that instruct chkconfig to start pacemaker at boot time
+ mcp: Some distros complain when LSB scripts do not include Default-Start/Stop directives
+ pengine: Allow fencing of baremetal remote nodes
+ pengine: cl#5186 - Avoid running rsc on two nodes when node is fenced during migration
+ pengine: Correctly account for the location preferences of things colocated with a group
+ pengine: Correctly handle demotion of grouped masters that are partially demoted
+ pengine: Disable container node probes due to constraint conflicts
+ pengine: Do not allow colocation with blocked clone instances
+ pengine: Do not re-allocate clone instances that are blocked in the Stopped state
+ pengine: Do not restart resources that depend on unmanaged resources
+ pengine: Force record pending for migrate_to actions
+ pengine: Location constraints with role=Started should prevent masters from running at all
+ pengine: Order demote/promote of resources on remote nodes to happen only once the connection is up
+ pengine: Properly handle orphaned multistate resources living on remote-nodes
+ pengine: Properly shutdown orphaned remote connection resources
+ pengine: Recover unexpectedly running container nodes.
+ remote: Add support for ipv6 into pacemaker_remote daemon
+ remote: Handle endian changes between client and server and improve forward compatibility
+ services: Fixes segfault associated with cancelling in-flight recurring operations.
+ services: Reset the scheduling policy and priority for lrmd's children without relying on SCHED_RESET_ON_FORK
* Fri Jul 26 2013 Andrew Beekhof <andrew@beekhof.net> Pacemaker-1.1.10-1
- Update source tarball to revision: ab2e209
- Changesets: 602
- Diff: 143 files changed, 8162 insertions(+), 5159 deletions(-)
- Features added since Pacemaker-1.1.9
+ Core: Convert all exit codes to positive errno values
+ crm_error: Add the ability to list and print error symbols
+ crm_resource: Allow individual resources to be reprobed
+ crm_resource: Allow options to be set recursively
+ crm_resource: Implement --ban for moving resources away from nodes and --clear (replaces --unmove)
+ crm_resource: Support OCF tracing when using --force-(check|start|stop)
+ PE: Allow active nodes in our current membership to be fenced without quorum
+ PE: Suppress meaningless IDs when displaying anonymous clone status
+ Turn off auto-respawning of systemd services when the cluster starts them
+ Bug cl#5128 - pengine: Support maintenance mode for a single node
- Changes since Pacemaker-1.1.9
+ crmd: cib: stonithd: Memory leaks resolved and improved use of glib reference counting
+ attrd: Fixes deleted attributes during dc election
+ Bug cf#5153 - Correctly display clone failcounts in crm_mon
+ Bug cl#5133 - pengine: Correctly observe on-fail=block for failed demote operation
+ Bug cl#5148 - legacy: Correctly remove a node that used to have a different nodeid
+ Bug cl#5151 - Ensure node names are consistently compared without case
+ Bug cl#5152 - crmd: Correctly clean up fenced nodes during membership changes
+ Bug cl#5154 - Do not expire failures when on-fail=block is present
+ Bug cl#5155 - pengine: Block the stop of resources if any depending resource is unmanaged
+ Bug cl#5157 - Allow migration in the absence of some colocation constraints
+ Bug cl#5161 - crmd: Prevent memory leak in operation cache
+ Bug cl#5164 - crmd: Fixes crash when using pacemaker-remote
+ Bug cl#5164 - pengine: Fixes segfault when calculating transition with remote-nodes.
+ Bug cl#5167 - crm_mon: Only print "stopped" node list for incomplete clone sets
+ Bug cl#5168 - Prevent clones from being bounced around the cluster due to location constraints
+ Bug cl#5170 - Correctly support on-fail=block for clones
+ cib: Correctly read back archived configurations if the primary is corrupted
+ cib: The result is not valid when diffs fail to apply cleanly for CLI tools
+ cib: Restore the ability to embed comments in the configuration
+ cluster: Detect and warn about node names with capitals
+ cman: Do not pretend we know the state of nodes we've never seen
+ cman: Do not unconditionally start cman if it is already running
+ cman: Support non-blocking CPG calls
+ Core: Ensure the blackbox is saved on abnormal program termination
+ corosync: Detect the loss of members for which we only know the nodeid
+ corosync: Do not pretend we know the state of nodes we've never seen
+ corosync: Ensure removed peers are erased from all caches
+ corosync: Nodes that can persist in sending CPG messages must be alive after all
+ crmd: Do not get stuck in S_POLICY_ENGINE if a node we couldn't fence returns
+ crmd: Do not update fail-count and last-failure for old failures
+ crmd: Ensure all membership operations can complete while trying to cancel a transition
+ crmd: Ensure operations for cleaned up resources don't block recovery
+ crmd: Ensure we return to a stable state if there have been too many fencing failures
+ crmd: Initiate node shutdown if another node claims to have successfully fenced us
+ crmd: Prevent messages for remote crmd clients from being relayed to wrong daemons
+ crmd: Properly handle recurring monitor operations for remote-node agent
+ crmd: Store last-run and last-rc-change for all operations
+ crm_mon: Ensure stale pid files are updated when a new process is started
+ crm_report: Correctly collect logs when 'uname -n' reports fully qualified names
+ fencing: Fail the operation once all peers have been exhausted
+ fencing: Restore the ability to manually confirm that fencing completed
+ ipc: Allow unprivileged clients to clean up after server failures
+ ipc: Restore the ability for members of the haclient group to connect to the cluster
+ legacy: Support "crm_node --remove" with a node name for corosync plugin (bnc#805278)
+ lrmd: Default to the upstream location for resource agent scratch directory
+ lrmd: Pass errors from lsb metadata generation back to the caller
+ pengine: Correctly handle resources that recover before we operate on them
+ pengine: Delete the old resource state on every node whenever the resource type is changed
+ pengine: Detect constraints with inappropriate actions (ie. promote for a clone)
+ pengine: Ensure per-node resource parameters are used during probes
+ pengine: If fencing is unavailable or disabled, block further recovery for resources that fail to stop
+ pengine: Implement the rest of get_timet_now() and rename to get_effective_time
+ pengine: Re-initiate _active_ recurring monitors that previously failed but have timed out
+ remote: Workaround for inconsistent tls handshake behavior between gnutls versions
+ systemd: Ensure we get shut down correctly by systemd
+ systemd: Reload systemd after adding/removing override files for cluster services
+ xml: Check for and replace non-printing characters with their octal equivalent while exporting xml text
+ xml: Prevent lockups by setting a more reliable buffer allocation strategy
* Fri Mar 08 2013 Andrew Beekhof <andrew@beekhof.net> Pacemaker-1.1.9-1
- Update source tarball to revision: 7e42d77
- Statistics:
Changesets: 731
Diff: 1301 files changed, 92909 insertions(+), 57455 deletions(-)
- Features added in Pacemaker-1.1.9
+ corosync: Allow cman and corosync 2.0 nodes to use a name other than uname()
+ corosync: Use queues to avoid blocking when sending CPG messages
+ ipc: Compress messages that exceed the configured IPC message limit
+ ipc: Use queues to prevent slow clients from blocking the server
+ ipc: Use shared memory by default
+ lrmd: Support nagios remote monitoring
+ lrmd: Pacemaker Remote Daemon for extending pacemaker functionality outside corosync cluster.
+ pengine: Check for master/slave resources that are not OCF agents
+ pengine: Support a 'requires' resource meta-attribute for controlling whether it needs quorum, fencing or nothing
+ pengine: Support for resource container
+ pengine: Support resources that require unfencing before start
- Changes since Pacemaker-1.1.8
+ attrd: Correctly handle deletion of non-existant attributes
+ Bug cl#5135 - Improved detection of the active cluster type
+ Bug rhbz#913093 - Use crm_node instead of uname
+ cib: Avoid use-after-free by correctly support cib_no_children for non-xpath queries
+ cib: Correctly process XML diff's involving element removal
+ cib: Performance improvements for non-DC nodes
+ cib: Prevent error message by correctly handling peer replies
+ cib: Prevent ordering changes when applying xml diffs
+ cib: Remove text nodes from cib replace operations
+ cluster: Detect node name collisions in corosync
+ cluster: Preserve corosync membership state when matching node name/id entries
+ cman: Force fenced to terminate on shutdown
+ cman: Ignore qdisk 'nodes'
+ core: Drop per-user core directories
+ corosync: Avoid errors when closing failed connections
+ corosync: Ensure peer state is preserved when matching names to nodeids
+ corosync: Clean up CMAP connections after querying node name
+ corosync: Correctly detect corosync 2.0 clusters even if we don't have permission to access it
+ crmd: Bug cl#5144 - Do not update the expected status of failed nodes
+ crmd: Correctly determine if cluster disconnection was abnormal
+ crmd: Correctly relay messages for remote clients (bnc#805626, bnc#804704)
+ crmd: Correctly stall the FSA when waiting for additional inputs
+ crmd: Detect and recover when we are evicted from CPG
+ crmd: Differentiate between a node that is up and coming up in peer_update_callback()
+ crmd: Have cib operation timeouts scale with node count
+ crmd: Improved continue/wait logic in do_dc_join_finalize()
+ crmd: Prevent election storms caused by getrusage() values being too close
+ crmd: Prevent timeouts when performing pacemaker level membership negotiation
+ crmd: Prevent use-after-free of fsa_message_queue during exit
+ crmd: Store all current actions when stalling the FSA
+ crm_mon: Do not try to render a blank cib and indicate the previous output is now stale
+ crm_mon: Fixes crm_mon crash when using snmp traps.
+ crm_mon: Look for the correct error codes when applying configuration updates
+ crm_report: Ensure policy engine logs are found
+ crm_report: Fix node list detection
+ crm_resource: Have crm_resource generate a valid transition key when sending resource commands to the crmd
+ date/time: Bug cl#5118 - Correctly convert seconds-since-epoch to the current time
+ fencing: Attempt to provide more information than just 'generic error' for failed actions
+ fencing: Correctly record completed but previously unknown fencing operations
+ fencing: Correctly terminate when all device options have been exhausted
+ fencing: cov#739453 - String not null terminated
+ fencing: Do not merge new fencing requests with stale ones from dead nodes
+ fencing: Do not start fencing until entire device topology is found or query results timeout.
+ fencing: Do not wait for the query timeout if all replies have arrived
+ fencing: Fix passing of parameters from CMAN containing '='
+ fencing: Fix non-comparison when sorting devices by priority
+ fencing: On failure, only try a topology device once from the remote level.
+ fencing: Only try peers for non-topology based operations once
+ fencing: Retry stonith device for duration of action's timeout period.
+ heartbeat: Remove incorrect assert during cluster connect
+ ipc: Bug cl#5110 - Prevent 100% CPU usage when looking for synchronous replies
+ ipc: Use 50k as the default compression threshold
+ legacy: Prevent assertion failure on routing ais messages (bnc#805626)
+ legacy: Re-enable logging from the pacemaker plugin
+ legacy: Relax the 'active' check for plugin based clusters to avoid false negatives
+ legacy: Skip peer process check if the process list is empty in crm_is_corosync_peer_active()
+ mcp: Only define HA_DEBUGLOG to avoid agent calls to ocf_log printing everything twice
+ mcp: Re-attach to existing pacemaker components when mcp fails
+ pengine: Any location constraint for the slave role applies to all roles
+ pengine: Avoid leaking memory when cleaning up failcounts and using containers
+ pengine: Bug cl#5101 - Ensure stop order is preserved for partially active groups
+ pengine: Bug cl#5140 - Allow set members to be stopped when the subsequent set has require-all=false
+ pengine: Bug cl#5143 - Prevent shuffling of anonymous master/slave instances
+ pengine: Bug rhbz#880249 - Ensure orphan masters are demoted before being stopped
+ pengine: Bug rhbz#880249 - Teach the PE how to recover masters into primitives
+ pengine: cl#5025 - Automatically clear failcount for start/monitor failures after resource parameters change
+ pengine: cl#5099 - Probe operation uses the timeout value from the minimum interval monitor by default (bnc#776386)
+ pengine: cl#5111 - When clone/master child rsc has on-fail=stop, ensure all children stop on failure.
+ pengine: cl#5142 - Do not delete orphaned children of an anonymous clone
+ pengine: Correctly unpack active anonymous clones
+ pengine: Ensure previous migrations are closed out before attempting another one
+ pengine: Introducing the whitebox container resources feature
+ pengine: Prevent double-free for cloned primitive from template
+ pengine: Process rsc_ticket dependencies earlier for correctly allocating resources (bnc#802307)
+ pengine: Remove special cases for fencing resources
+ pengine: rhbz#902459 - Remove rsc node status for orphan resources
+ systemd: Gracefully handle unexpected DBus return types
+ Replace the use of the insecure mktemp(3) with mkstemp(3)
* Thu Sep 20 2012 Andrew Beekhof <andrew@beekhof.net> Pacemaker-1.1.8-1
- Update source tarball to revision: 1a5341f
- Statistics:
Changesets: 1019
Diff: 2107 files changed, 117258 insertions(+), 73606 deletions(-)
- All APIs have been cleaned up and reduced to essentials
- Pacemaker now includes a replacement lrmd that supports systemd and upstart agents
- Config and state files (cib.xml, PE inputs and core files) have moved to new locations
- The crm shell has become a separate project and no longer included with Pacemaker
- All daemons/tools now have a unified set of error codes based on errno.h (see crm_error)
- Changes since Pacemaker-1.1.7
+ Core: Bug cl#5032 - Rewrite the iso8601 date handling code
+ Core: Correctly extract the version details from a diff
+ Core: Log blackbox contents, if enabled, when an error occurs
+ Core: Only LOG_NOTICE and higher are sent to syslog
+ Core: Replace use of IPC from clplumbing with IPC from libqb
+ Core: SIGUSR1 now enables blackbox logging, SIGTRAP to write out
+ Core: Support a blackbox for additional logging detail after crashes/errors
+ Promote support for advanced fencing logic to the stable schema
+ Promote support for node starting scores to the stable schema
+ Promote support for service and systemd to the stable schema
+ attrd: Differentiate between updating all our attributes and everybody updating all theirs too
+ attrd: Have single-shot clients wait for an ack before disconnecting
+ cib: cl#5026 - Synced cib updates should not return until the cpg broadcast is complete.
+ corosync: Detect when the first corosync has not yet formed and handle it gracefully
+ corosync: Obtain a full list of configured nodes, including their names, when we connect to the quorum API
+ corosync: Obtain a node name from DNS if one was not already known
+ corosync: Populate the cib nodelist from corosync if available
+ corosync: Use the CFG API and DNS to determine node names if not configured in corosync.conf
+ crmd: Block after 10 failed fencing attempts for a node
+ crmd: cl#5051 - Fixes file leak in PE ipc connection initialization.
+ crmd: cl#5053 - Fixes fail-count not being updated properly.
+ crmd: cl#5057 - Restart sub-systems correctly (bnc#755671)
+ crmd: cl#5068 - Fixes crm_node -R option so it works with corosync 2.0
+ crmd: Correctly re-establish failed attrd connections
+ crmd: Detect when the quorum API isn't configured for corosync 2.0
+ crmd: Do not overwrite any configured node type (eg. quorum node)
+ crmd: Enable use of new lrmd daemon and client library in crmd.
+ crmd: Overhaul the way node state is recorded and updated in the CIB
+ fencing: Bug rhbz#853537 - Prevent use-of-NULL when the cib libraries are not available
+ fencing: cl#5073 - Add 'off' as a valid value for stonith-action option.
+ fencing: cl#5092 - Always timeout stonith operations if timeout period expires.
+ fencing: cl#5093 - Stonith per device timeout option
+ fencing: Clean up if we detect a failed connection
+ fencing: Delegate complex self-fencing requests - we won't be around to see them to completion
+ fencing: Ensure all peers are notified of complex fencing op completion
+ fencing: Fix passing of fence_legacy parameters containing '='
+ fencing: Gracefully handle metadata requests for unknown agents
+ fencing: Return cached dynamic target list for busy devices.
+ fencing: rhbz#801355 - Abort transition on DC when external fencing operation is detected
+ fencing: rhbz#801355 - Merge fence requests for identical operations already in progress.
+ fencing: rhbz#801355 - Report fencing operations external of pacemaker to cib
+ fencing: Specify the action to perform using action= instead of the older option=
+ fencing: Stop building fake metadata for broken agents
+ fencing: Tolerate agents that report empty metadata in the admin tool
+ mcp: Correctly retry the connection to corosync on failure
+ mcp: Do not shut down IPC until the last client exits
+ mcp: Prevent use-after-free when running against corosync 1.x
+ pengine: Bug cl#5059 - Use the correct action's status when calculating required actions for interleaved clones
+ pengine: Bypass online/offline checking resource detection for ping/quorum nodes
+ pengine: cl#5044 - migrate_to no longer requires load_stopped for avoiding possible transition loop
+ pengine: cl#5069 - Honor 'on-fail=ignore' even when operation is disabled.
+ pengine: cl#5070 - Allow influence of promotion score when multistate rsc is left hand of colocation
+ pengine: cl#5072 - Fixes monitor op stopping after rsc promotion.
+ pengine: cl#5072 - Fixes pengine regression test failures
+ pengine: Correctly set the status for nodes not intended to run Pacemaker
+ pengine: Do not append instance numbers to anonymous clones
+ pengine: Fix failcount expiration
+ pengine: Fix memory leaks found by valgrind
+ pengine: Fix use-after-free and use-of-NULL errors detected by coverity
+ pengine: Fixes use of colocation scores other than +/- INFINITY
+ pengine: Improve detection of rejoining nodes
+ pengine: Prevent use-of-NULL when tracing is enabled
+ pengine: Stonith resources are allowed to start even if their probes haven't completed on partially active nodes
+ services: New class called 'service' which expands to the correct (LSB/systemd/upstart) standard
+ services: Support Asynchronous systemd/upstart actions
+ Tools: crm_shadow - Bug cl#5062 - Correctly set argv[0] when forking a shell process
+ Tools: crm_report: Always include system logs (if we can find them)
* Wed Mar 28 2012 Andrew Beekhof <andrew@beekhof.net> Pacemaker-1.1.7-1
- Update source tarball to revision: bc7ff2c
- Statistics:
Changesets: 513
Diff: 1171 files changed, 90472 insertions(+), 19368 deletions(-)
- Changes since Pacemaker-1.1.6.1
+ ais: Prepare for corosync versions using IPC from libqb
+ cib: Correctly shutdown in the presence of peers without relying on timers
+ cib: Don't halt disk writes if the previous digest is missing
+ cib: Determine when there are no peers to respond to our shutdown request and exit
+ cib: Ensure no additional messages are processed after we begin terminating
+ Cluster: Hook up the callbacks to the corosync quorum notifications
+ Core: basename() may modify its input, do not pass in a constant
+ Core: Bug cl#5016 - Prevent failures in recurring ops from being lost
+ Core: Bug rhbz#800054 - Correctly retrieve heartbeat uuids
+ Core: Correctly determine when an XML file should be decompressed
+ Core: Correctly track the length of a string without reading from uninitialized memory (valgrind)
+ Core: Ensure signals are handled eventually in the absence of timer sources or IPC messages
+ Core: Prevent use-of-NULL in crm_update_peer()
+ Core: Strip text nodes from on disk xml files
+ Core: Support libqb for logging
+ corosync: Consistently set the correct uuid with get_node_uuid()
+ Corosync: Correctly disconnect from corosync variants
+ Corosync: Correctly extract the node id from membership updates
+ corosync: Correctly infer lost members from the quorum API
+ Corosync: Default to using the nodeid as the node's uuid (instead of uname)
+ corosync: Ensure we catch nodes that leave the membership, even if the ringid doesn't change
+ corosync: Hook up CPG membership
+ corosync: Relax a development assert and gracefully handle the error condition
+ corosync: Remove deprecated member of the CFG API
+ corosync: Treat CS_ERR_QUEUE_FULL the same as CS_ERR_TRY_AGAIN
+ corosync: Unset the process list when nodes disappear on us
+ crmd: Also purge fencing results when we enter S_NOT_DC
+ crmd: Bug cl#5015 - Remove the failed operation as well as the resulting fail-count and last-failure attributes
+ crmd: Correctly determine when a node can suicide with fencing
+ crmd: Election - perform the age comparison only once
+ crmd: Fast-track shutdown if we couldn't request it via attrd
+ crmd: Leave it up to the PE to decide which ops can/cannot be reloaded
+ crmd: Prevent use-after-free when calling delete_resource due to CRM_OP_REPROBE
+ crmd: Supply format arguments in the correct order
+ fencing: Add missing format parameter
+ fencing: Add the fencing topology section to the 1.1 configuration schema
+ fencing: fence_legacy - Drop spurious host argument from status query
+ fencing: fence_legacy - Ensure port is available as an environment variable when calling monitor
+ fencing: fence_pcmk - don't block if nothing is specified on stdin
+ fencing: Fix log format error
+ fencing: Fix segfault caused by passing garbage to dlsym()
+ fencing: Fix use-of-NULL in process_remote_stonith_query()
+ fencing: Fix use-of-NULL when listing installed devices
+ fencing: Implement support for advanced fencing topologies: eg. kdump || (network && disk) || power
+ fencing: More gracefully handle failed 'list' operations for devices that only support a single connection
+ fencing: Prevent duplicate free when listing devices
+ fencing: Prevent uninitialized pointers being passed to free
+ fencing: Prevent use-after-free, we may need the query result for subsequent operations
+ fencing: Provide enough data to construct an entry in the node's fencing history
+ fencing: Standardize on /one/ method for clients to request members be fenced
+ fencing: Suppress errors when listing all registered devices
+ mcp: corosync_cfg_state_track was removed from the corosync API, luckily we didn't use it for anything
+ mcp: Do not specify a WorkingDirectory in the systemd unit file - startup fails if it's not available
+ mcp: Set the HA_quorum_type env variable consistently with our corosync plugin
+ mcp: Shut down if one of our child processes can/should not be respawned
+ pengine: Bug cl#5000 - Ensure ordering is preserved when depending on partial sets
+ pengine: Bug cl#5028 - Unmanaged services should block shutdown unless in maintenance mode
+ pengine: Bug cl#5038 - Prevent restart of anonymous clones when clone-max decreases
+ pengine: Bug cl#5007 - Fixes use of colocation constraints with multi-state resources
+ pengine: Bug cl#5014 - Prevent asymmetrical order constraints from causing resource stops
+ pengine: Bug cl#5000 - Implements ability to create rsc_order constraint sets such that A can start after B or C has started.
+ pengine: Correctly migrate a resource that has just migrated
+ pengine: Correct return from error path
+ pengine: Detect reloads of previously migrated resources
+ pengine: Ensure post-migration stop actions occur before node shutdown
+ pengine: Log as loudly as possible when we cannot shut down a cluster node
+ pengine: Reload of a resource no longer causes a restart of dependent resources
+ pengine: Support limiting the number of concurrent live migrations
+ pengine: Support referencing templates in constraints
+ pengine: Support of referencing resource templates in resource sets
+ pengine: Support to make tickets standby for relinquishing tickets gracefully
+ stonith: A "start" operation of a stonith resource does a "monitor" on the device beyond registering it
+ stonith: Bug rhbz#745526 - Ensure stonith_admin actually gets called by fence_pcmk
+ Stonith: Ensure all nodes receive and deliver notifications of the manual override
+ stonith: Fix the stonith timeout issue (cl#5009, bnc#727498)
+ Stonith: Implement a manual override for when nodes are known to be safely off
+ Tools: Bug cl#5003 - Prevent use-after-free in crm_simulate
+ Tools: crm_mon - Support to display tickets (based on Yuusuke Iida's work)
+ Tools: crm_simulate - Support to grant/revoke/standby/activate tickets from the new ticket state section
+ Tools: Implement crm_node functionality for native corosync
+ Fix a number of potential problems reported by coverity
* Wed Aug 31 2011 Andrew Beekhof <andrew@beekhof.net> 1.1.6-1
- Update source tarball to revision: 676e5f25aa46 tip
- Statistics:
Changesets: 376
Diff: 1761 files changed, 36259 insertions(+), 140578 deletions(-)
- Changes since Pacemaker-1.1.5
+ ais: check for retryable errors when dispatching AIS messages
+ ais: Correctly disconnect from Corosync and Cman based clusters
+ ais: Followup to previous patch - Ensure we drain the corosync queue of messages when Glib tells us there is input
+ ais: Handle IPC error before checking for NULL data (bnc#702907)
+ cib: Check the validation version before adding the originator details of a CIB change
+ cib: Remove disconnected remote connections from mainloop
+ cman: Correctly override existing fenced operations
+ cman: Dequeue all cman-emitted events, not only the first one, which left the others in the event queue.
+ cman: Don't call fenced_join and fenced_leave when notifying cman of a fencing event.
+ cman: We need to run the crmd as root for CMAN so that we can ACK fencing operations
+ Core: Cancelled and pending operations do not count as failed
+ Core: Ensure there is sufficient space for EOS when building short-form option strings
+ Core: Fix variable expansion in pkg-config files
+ Core: Partial revert of accidental commit in previous patch
+ Core: Use dlopen to load heartbeat libraries on-demand
+ crmd: Bug lf#2509 - Watch for config option changes from the CIB even if we're not the DC
+ crmd: Bug lf#2528 - Introduce a slight delay when creating a transition to allow attrd time to perform its updates
+ crmd: Bug lf#2559 - Fail actions that were scheduled for a failed/fenced node
+ crmd: Bug lf#2584 - Allow nodes to fence themselves if they're the last one standing
+ crmd: Bug lf#2632 - Correctly handle nodes that return faster than stonith
+ crmd: Cancel timers for actions that were pending on dead nodes
+ crmd: Catch fence operations that claim to succeed but did not really
+ crmd: Do not wait for actions that were pending on dead nodes
+ crmd: Ensure we do not attempt to perform action on failed nodes
+ crmd: Prevent use-of-NULL by g_hash_table_iter_next()
+ crmd: Recurring actions shouldn't cause the last non-recurring action to be forgotten
+ crmd: Store only the last and last failed operation in the CIB
+ mcp: dirname() modifies the input path - pass in a copy of the logfile path
+ mcp: Enable stack detection logic instead of forcing 'corosync'
+ mcp: Fix spelling mistake in systemd service script that prevents shutdown
+ mcp: Shut down if corosync becomes unavailable
+ mcp: systemd control file is now functional
+ pengine: Before migrating a utilization-using resource to a node, take off the load which will no longer run there (lf#2599, bnc#695440)
+ pengine: Before migrating a utilization-using resource to a node, take off the load which will no longer run there (regression tests) (lf#2599, bnc#695440)
+ pengine: Bug lf#2574 - Prevent shuffling by choosing the correct clone instance to stop
+ pengine: Bug lf#2575 - Use uname for migration variables, id is a UUID on heartbeat
+ pengine: Bug lf#2581 - Avoid group restart when clone (re)starts on an unrelated node
+ pengine: Bug lf#2613, lf#2619 - Group migration after failures and non-default utilization policies
+ pengine: Bug suse#707150 - Prevent services being active if dependencies on clones are not satisfied
+ pengine: Correctly recognise which recurring operations are currently active
+ pengine: Demote from Master does not clear previous errors
+ pengine: Ensure restarts due to definition changes cause the start action to be re-issued not probes
+ pengine: Ensure role is preserved for unmanaged resources
+ pengine: Ensure unmanaged resources have the correct role set so the correct monitor operation is chosen
+ pengine: Fix memory leak for re-allocated resources reported by valgrind
+ pengine: Implement cluster ticket and deadman
+ pengine: Implement resource template
+ pengine: Correctly determine the state of multi-state resources with a partial operation history
+ pengine: Only allocate master/slave resources once
+ pengine: Partial revert of 'Minor code cleanup CS: cf6bca32376c On: 2011-08-15'
+ pengine: Resolve memory leak reported by valgrind
+ pengine: Restore the ability to save inputs to disk
+ Shell: implement -w,--wait option to wait for the transition to finish
+ Shell: repair template list command
+ Shell: set of commands to examine logs, reports, etc
+ Stonith: Consolidate pcmk_host_map into run_stonith_agent so that it is applied consistently
+ Stonith: Deprecate pcmk_arg_map for the saner pcmk_host_argument
+ Stonith: Fix use-of-NULL by g_hash_table_lookup
+ Stonith: Improved pcmk_host_map parsing
+ Stonith: Prevent use-of-NULL by g_hash_table_lookup
+ Stonith: Prevent use-of-NULL when no Linux-HA stonith agents are present
+ stonith: Add missing entries to stonith_error2string()
+ Stonith: Correctly finish sending agent options if the initial write is interrupted
+ stonith: Correctly handle synchronous calls
+ stonith: Coverity - Correctly construct result list for the query API call
+ stonith: Coverity - Remove badly constructed memory allocation from the query API call
+ stonith: Ensure completed operations are recorded as such in the history
+ Stonith: Ensure device parameters are passed to the daemon during registration
+ stonith: Fix use-of-NULL in stonith_api_device_list()
+ stonith: stonith_admin - Prevent use of uninitialized pointer by --history command
+ Tools: Bug lf#2528 - Make progress when attrd_updater is called repeatedly within the dampen interval but with the same value
+ Tools: crm_report - Correctly extract data from the local node
+ Tools: crm_report - Remove newlines when detecting the node list
+ Tools: crm_report - Repair the ability to extract data from the local machine
+ Tools: crm_report - Report on all detected backtraces
* Fri Feb 11 2011 Andrew Beekhof <andrew@beekhof.net> 1.1.5-1
- Update source tarball to revision: baad6636a053
- Statistics:
Changesets: 184
Diff: 605 files changed, 46103 insertions(+), 26417 deletions(-)
- Changes since Pacemaker-1.1.4
+ Add the ability to delegate sub-sections of the cluster to non-root users via ACLs
Needs to be enabled at compile time, not enabled by default.
+ ais: Bug lf#2550 - Report failed processes immediately
+ Core: Prevent recently introduced use-after-free in replace_xml_child()
+ Core: Reinstate the logic that skips past non-XML_ELEMENT_NODE children
+ Core: Remove extra calls to xmlCleanupParser resulting in use-after-free
+ Core: Repair reference to child-of-child after removal of xml_child_iter_filter from get_message_xml()
+ crmd: Bug lf#2545 - Ensure notify variables are accurate for stop operations
+ crmd: Cancel recurring operations while we're still connected to the lrmd
+ crmd: Reschedule the PE_START action if it's not already running when we try to use it
+ crmd: Update failcount for failed promote and demote operations
+ pengine: Bug lf#2445 - Avoid relying on stickiness for stable clone placement
+ pengine: Bug lf#2445 - Do not override configured clone stickiness values
+ pengine: Bug lf#2493 - Don't imply colocation requirements when applying ordering constraints with clones
+ pengine: Bug lf#2495 - Prevent segfault by validating the contents of ordering sets
+ pengine: Bug lf#2508 - Correctly reconstruct the status of anonymous cloned groups
+ pengine: Bug lf#2518 - Avoid spamming the logs with errors for orphan resources
+ pengine: Bug lf#2544 - Prevent unstable clone placement by factoring in the current node's score before all others
+ pengine: Bug lf#2554 - target-role alone is not sufficient to promote resources
+ pengine: Correct target_rc for probes of inactive resources (fix regression introduced by cs:ac3f03006e95)
+ pengine: Ensure that fencing has completed for stop actions on stonith-dependent resources (lf#2551)
+ pengine: Only update the node's promotion score if the resource is active there
+ pengine: Only use the promotion score from the current clone instance
+ pengine: Prevent use-of-NULL resulting from variable shadowing spotted by Coverity
+ pengine: Prevent use-of-NULL when there is status for an undefined node
+ pengine: Prevent use-after-free resulting from unintended recursion when choosing a node to promote master/slave resources
+ Shell: don't create empty optional sections (bnc#665131)
+ Stonith: Teach stonith_admin to automagically obtain the current node attributes for the target from the CIB
+ tools: Bug lf#2527 - Prevent use-of-NULL in crm_simulate
+ Tools: Prevent crm_resource commands from being lost due to the use of cib_scope_local
* Wed Oct 20 2010 Andrew Beekhof <andrew@beekhof.net> 1.1.4-1
- Update source tarball to revision: 75406c3eb2c1 tip
- Statistics:
Changesets: 169
Diff: 772 files changed, 56172 insertions(+), 39309 deletions(-)
- Changes since Pacemaker-1.1.3
+ Italian translation of Clusters from Scratch
+ Significant performance enhancements to the Policy Engine and CIB
+ cib: Bug lf#2506 - Don't remove clients when notifications fail; they might just be too big
+ cib: Drop invalid/failed connections from the client hashtable
+ cib: Ensure all diffs sent to peers have sufficient ordering information
+ cib: Ensure non-change diffs can preserve the ordering on the other side
+ cib: Fix the feature set check
+ cib: Include version information on our synthesised diffs when nothing changed
+ cib: Optimize the way we detect group/set ordering changes - 15% speedup
+ cib: Prevent false detection of config updates with the new diff format
+ cib: Reduce unnecessary copying when comparing xml objects
+ cib: Repair the processing of updates sent from peer nodes
+ cib: Revert part of a recent commit that purged still valid connections
+ cib: The feature set version check is only valid if the current value is non-NULL
+ Core: Actually removing diff markers is necessary
+ Core: Bug lf#2506 - Drop the compression limit because Heartbeat's IPC code sucks
+ Core: Cache Relax-NG schemas - profiling indicates many cycles are wasted needlessly re-parsing them
+ Core: Correctly compare against crm_log_level in the logging macros
+ Core: Correctly extract the version details from a diff
+ Core: Correctly hook up the RNG schema cache
+ Core: Correctly use lazy_xml_sort() for v2 digests
+ Core: Don't compress large payload elements unless we're approaching message limits
+ Core: Don't insert empty ID tags when applying diffs
+ Core: Enable the improved v2 digests
+ Core: Ensure ordering is preserved when applying diffs
+ Core: Fix the CRM_CHECK macro
+ Core: Modify the v2 digest algorithm so that some fields are sorted
+ Core: Prevent use-after-free when creating a CIB update for a timed out action
+ Core: Prevent use-of-NULL when cleaning up RelaxNG data structures
+ Core: Provide significant performance improvements by implementing versioned diffs and digests
+ crmd: All pending operations should be recorded, even recurring ones with high start delays
+ crmd: Don't abort transitions when probes are completed on a node
+ crmd: Don't hide stop events that time out - allowing faster recovery in the presence of overloaded hosts
+ crmd: Ensure the CIB is always writable on the DC by removing a timing hole
+ crmd: Include the correct transition details for timed out operations
+ crmd: Prevent use of NULL by making copies of the operation's hash table
+ crmd: There's no need to check the cib version from the 'added' part of diff updates
+ crmd: Use the supplied timeout for stop actions
+ mcp: Ensure valgrind is able to log its output somewhere
+ mcp: Use 99/01 for the start/stop sequence to avoid problems with services (such as libvirtd) started by init - Patch from Vladislav Bogdanov
+ pengine: Ensure fencing of the DC precedes the STONITH_DONE operation
+ pengine: Fix memory leak introduced as part of the conversion to GHashTables
+ pengine: Fix memory leak when processing completed migration actions
+ pengine: Fix typo leading to use-of-NULL in the new ordering code
+ pengine: Free memory in recently introduced helper function
+ pengine: lf#2478 - Implement improved handling and recovery of atomic resource migrations
+ pengine: Obtain massive speedup by prepending to the list of ordering constraints (which can grow quite large)
+ pengine: Optimize the logic for deciding which non-grouped anonymous clone instances to probe for
+ pengine: Prevent clones from being stopped because resources colocated with them cannot be active
+ pengine: Try to ensure atomic migration ops occur within a single transition
+ pengine: Use hashtables instead of linked lists for performance sensitive datastructures
+ pengine: Use the original digest algorithm for parameter lists
+ stonith: cleanup children on timeout in fence_legacy
+ Stonith: Fix two memory leaks
+ Tools: crm_shadow - Avoid replacing the entire configuration (including status)
* Tue Sep 21 2010 Andrew Beekhof <andrew@beekhof.net> 1.1.3-1
- Update source tarball to revision: e3bb31c56244 tip
- Statistics:
Changesets: 352
Diff: 481 files changed, 14130 insertions(+), 11156 deletions(-)
- Changes since Pacemaker-1.1.2.1
+ ais: Bug lf#2401 - Improved processing when the peer crmd processes join/leave
+ ais: Correct the logic for connecting to plugin based clusters
+ ais: Do not supply a process list in mcp-mode
+ ais: Drop support for whitetank in the 1.1 release series
+ ais: Get an initial dump of the node membership when connecting to quorum-based clusters
+ ais: Guard against saturated cpg connections
+ ais: Handle CS_ERR_TRY_AGAIN in more cases
+ ais: Move the code for finding uid before the fork so that the child does no logging
+ ais: Never allow quorum plugins to affect connection to the pacemaker plugin
+ ais: Sign everyone up for peer process updates, not just the crmd
+ ais: The cluster type needs to be set before initializing classic openais connections
+ cib: Also free query result for xpath operations that return more than one hit
+ cib: Attempt to resolve memory corruption when forking a child to write the cib to disk
+ cib: Correctly free memory when writing out the cib to disk
+ cib: Fix the application of unversioned diffs
+ cib: Remove old developmental error logging
+ cib: Restructure the 'valid peer' check for deciding which instructions to ignore
+ cman: Correctly process membership/quorum changes from the pcmk plugin. Allow other message types through untouched
+ cman: Filter directed messages not intended for us
+ cman: Grab the initial membership when we connect
+ cman: Keep the list of peer processes up-to-date
+ cman: Make sure our common hooks are called after a cman membership update
+ cman: Make sure we can compile without cman present
+ cman: Populate sender details for cpg messages
+ cman: Update the ringid for cman based clusters
+ Core: Correctly unpack HA_Messages containing multiple entries with the same name
+ Core: crm_count_member() should only track nodes that have the full stack up
+ Core: New developmental logging system inspired by the kernel and a PoC from Lars Ellenberg
+ crmd: All nodes should see status updates, not just the DC
+ crmd: Allow non-DC nodes to clear failcounts too
+ crmd: Base DC election on process relative uptime
+ crmd: Bug lf#2439 - cancel_op() can also return HA_RSCBUSY
+ crmd: Bug lf#2439 - Handle asynchronous notification of resource deletion events
+ crmd: Bug lf#2458 - Ensure stop actions always have the relevant resource attributes
+ crmd: Disable age as a criterion for cman based clusters, it's not reliable enough
+ crmd: Ensure we activate the DC timer if we detect an alternate DC
+ crmd: Factor the nanosecond component of process uptime in elections
+ crmd: Fix assertion failure when performing async resource failures
+ crmd: Fix handling of async resource deletion results
+ crmd: Include the action for crm graph operations
+ crmd: Make sure the membership cache is accurate after a successful fencing operation
+ crmd: Make sure we always poke the FSA after a transition to clear any TE_HALT actions
+ crmd: Offer crm-level membership once the peer starts the crmd process
+ crmd: Only need to request quorum update for plugin based clusters
+ crmd: Prevent assertion failure for stop actions resulting from cs: 3c0bc17c6daf
+ crmd: Prevent everyone from losing DC elections by correctly initializing all relevant variables
+ crmd: Prevent segmentation fault
+ crmd: several fixes for async resource delete (thanks to beekhof)
+ crmd: Use the correct define/size for lrm resource IDs
+ Introduce two new cluster types 'cman' and 'corosync', replaces 'quorum_provider' concept
+ mcp: Add missing headers when built without heartbeat support
+ mcp: Correctly initialize the string containing the list of active daemons
+ mcp: Fix macro expansion in init script
+ mcp: Fix the expansion of the pid file in the init script
+ mcp: Handle CS_ERR_TRY_AGAIN when connecting to libcfg
+ mcp: Make sure we can compile the mcp without cman present
+ mcp: New master control process for (re)spawning pacemaker daemons
+ mcp: Read config early so we can re-initialize logging asap if daemonizing
+ mcp: Rename the mcp binary to pacemakerd and create a 'pacemaker' init script
+ mcp: Resend our process list after every CPG change
+ mcp: Tell chkconfig we need to shut down early on
+ pengine: Avoid creating invalid ordering constraints for probes that are not needed
+ pengine: Bug lf#1959 - Failed unmanaged resources should not prevent other services from shutting down
+ pengine: Bug lf#2422 - Ordering dependencies on partially active groups not observed properly
+ pengine: Bug lf#2424 - Use notify operation definition if it exists in the configuration
+ pengine: Bug lf#2433 - No services should be stopped until probes finish
+ pengine: Bug lf#2453 - Enforce clone ordering in the absence of colocation constraints
+ pengine: Bug lf#2476 - Repair on-fail=block for groups and primitive resources
+ pengine: Correctly detect when there is a real failcount that expired and needs to be cleared
+ pengine: Correctly handle pseudo action creation
+ pengine: Correctly order clone startup after group/clone start
+ pengine: Correct use-after-free introduced in the prior patch
+ pengine: Do not demote resources because something that requires it can not run
+ pengine: Fix colocation for interleaved clones
+ pengine: Fix colocation with partially active groups
+ pengine: Fix potential use-after-free defect from coverity
+ pengine: Fix previous merge
+ pengine: Fix use-after-free in order_actions() reported by valgrind
+ pengine: Make the current data set a global variable so it does not need to be passed around everywhere
+ pengine: Prevent endless loop when looking for operation definitions in the configuration
+ pengine: Prevent segfault by ensuring the arguments to do_calculations() are initialized
+ pengine: Rewrite the ordering constraint logic for simplicity, clarity and maintainability
+ pengine: Wait until stonith is available, do not fall back to shutdown for nodes requesting termination
+ Resolve coverity RESOURCE_LEAK defects
+ Shell: Complete the transition to using crm_attribute instead of crm_failcount and crm_standby
+ stonith: Advertise stonith-ng options in the metadata
+ stonith: Bug lf#2461 - Prevent segfault by not looking up operations if the hashtable has not been initialized yet
+ stonith: Bug lf#2473 - Add the timeout at the top level where the daemon is looking for it
+ Stonith: Bug lf#2473 - Ensure stonith operations complete within the timeout and are terminated if they run too long
+ stonith: Bug lf#2473 - Ensure timeouts are included for fencing operations
+ stonith: Bug lf#2473 - Gracefully handle remote operations that arrive late (after we have done notifications)
+ stonith: Correctly parse pcmk_host_list parameters that appear on a single line
+ stonith: Map poweron/poweroff back to on/off expected by the stonith tool from cluster-glue
+ stonith: pass the configuration to the stonith program via environment variables (bnc#620781)
+ Stonith: Use the timeout specified by the user
+ Support starting plugin-based Pacemaker clusters with the MCP as well
+ Tools: Bug lf#2456 - Fix assertion failure in crm_resource
+ tools: crm_node - Repair the ability to connect to openais based clusters
+ tools: crm_node - Use the correct short option for --cman
+ tools: crm_report - corosync.conf won't necessarily contain the text 'pacemaker' anymore
+ Tools: crm_simulate - Fix use-after-free when terminating
+ tools: crm_simulate - Resolve coverity USE_AFTER_FREE defect
+ Tools: Drop the 'pingd' daemon and resource agent in favor of ocf:pacemaker:ping
+ Tools: Fix recently introduced use-of-NULL
+ Tools: Fix use-after-free defects from coverity
* Wed May 12 2010 Andrew Beekhof <andrew@beekhof.net> 1.1.2-1
- Update source tarball to revision: c25c972a25cc tip
- Statistics:
Changesets: 339
Diff: 708 files changed, 37918 insertions(+), 10584 deletions(-)
- Changes since Pacemaker-1.1.1
+ ais: Do not count votes from offline nodes and calculate current votes before sending quorum data
+ ais: Ensure the list of active processes sent to clients is always up-to-date
+ ais: Look for the correct conf variable for turning on file logging
+ ais: Need to find a better and thread-safe way to set core_uses_pid. Disable for now.
+ ais: Use the threadsafe version of getpwnam
+ Core: Bump the feature set due to the new failcount expiry feature
+ Core: fix memory leaks exposed by valgrind
+ Core: Bug lf#2414 - Prevent use-after-free reported by valgrind when doing xpath based deletions
+ crmd: Bug lf#2414 - Prevent use-after-free of the PE connection after it dies
+ crmd: Bug lf#2414 - Prevent use-after-free of the stonith-ng connection
+ crmd: Bug lf#2401 - Improved detection of partially active peers
+ crmd: Bug lf#2379 - Ensure the cluster terminates when the PE is not available
+ crmd: Do not allow the target_rc to be misused by resource agents
+ crmd: Do not ignore action timeouts based on FSA state
+ crmd: Ensure we don't get stuck in S_PENDING if we lose an election to someone that never talks to us again
+ crmd: Fix memory leaks exposed by valgrind
+ crmd: Remove race condition that could lead to multiple instances of a clone being active on a machine
+ crmd: Send erase_status_tag() calls to the local CIB when the DC is fenced, since there is no DC to accept them
+ crmd: Use global fencing notifications to prevent secondary fencing operations of the DC
+ pengine: Bug lf#2317 - Avoid needless restart of primitive depending on a clone
+ pengine: Bug lf#2361 - Ensure clones observe mandatory ordering constraints if the LHS is unrunnable
+ pengine: Bug lf#2383 - Combine failcounts for all instances of an anonymous clone on a host
+ pengine: Bug lf#2384 - Fix intra-set colocation and ordering
+ pengine: Bug lf#2403 - Enforce mandatory promotion (colocation) constraints
+ pengine: Bug lf#2412 - Correctly find clone instances by their prefix
+ pengine: Do not be so quick to pull the trigger on nodes that are coming up
+ pengine: Fix memory leaks exposed by valgrind
+ pengine: Rewrite native_merge_weights() to avoid use-after-free
+ Shell: Bug bnc#590035 - always reload status if working with the cluster
+ Shell: Bug bnc#592762 - Default to using the status section from the live CIB
+ Shell: Bug lf#2315 - edit multiple meta_attributes sets in resource management
+ Shell: Bug lf#2221 - enable comments
+ Shell: Bug bnc#580492 - implement new cibstatus interface and commands
+ Shell: Bug bnc#585471 - new cibstatus import command
+ Shell: check timeouts also against the default-action-timeout property
+ Shell: new configure filter command
+ Tools: crm_mon - fix memory leaks exposed by valgrind
* Tue Feb 16 2010 Andrew Beekhof <andrew@beekhof.net> - 1.1.1-1
- First public release of Pacemaker 1.1
- Package reference documentation in a doc subpackage
- Move cts into a subpackage so that it can be easily consumed by others
- Update source tarball to revision: 17d9cd4ee29f
+ New stonith daemon that supports global notifications
+ Service placement influenced by the physical resources
+ A new tool for simulating failures and the cluster’s reaction to them
+ Ability to serialize an otherwise unrelated set of resource actions (eg. Xen migrations)
* Mon Jan 18 2010 Andrew Beekhof <andrew@beekhof.net> - 1.0.7-1
- Update source tarball to revision: 2eed906f43e9 (stable-1.0) tip
- Statistics:
Changesets: 193
Diff: 220 files changed, 15933 insertions(+), 8782 deletions(-)
- Changes since 1.0.5-4
+ pengine: Bug 2213 - Ensure groups process location constraints so that clone-node-max works for cloned groups
+ pengine: Bug lf#2153 - non-clones should not restart when clones stop/start on other nodes
+ pengine: Bug lf#2209 - Clone ordering should be able to prevent startup of dependent clones
+ pengine: Bug lf#2216 - Correctly identify the state of anonymous clones when deciding when to probe
+ pengine: Bug lf#2225 - Operations that require fencing should wait for 'stonith_complete' not 'all_stopped'.
+ pengine: Bug lf#2225 - Prevent clone peers from stopping while another instance is (potentially) being fenced
+ pengine: Correctly anti-colocate with a group
+ pengine: Correctly unpack ordering constraints for resource sets to avoid graph loops
+ Tools: crm: load help from crm_cli.txt
+ Tools: crm: resource sets (bnc#550923)
+ Tools: crm: support for comments (LF 2221)
+ Tools: crm: support for description attribute in resources/operations (bnc#548690)
+ Tools: hb2openais: add EVMS2 CSM processing (and other changes) (bnc#548093)
+ Tools: hb2openais: do not allow empty rules, clones, or groups (LF 2215)
+ Tools: hb2openais: refuse to convert pure EVMS volumes
+ cib: Ensure the loop for login message terminates
+ cib: Finally fix reliability of receiving large messages over remote plaintext connections
+ cib: Fix remote notifications
+ cib: For remote connections, default to CRM_DAEMON_USER since that's the only one that the cib can validate the password for using PAM
+ cib: Remote plaintext - Retry sending parts of the message that did not fit the first time
+ crmd: Ensure batch-limit is correctly enforced
+ crmd: Ensure we have the latest status after a transition abort
+ (bnc#547579,547582): Tools: crm: status section editing support
+ shell: Add allow-migrate as allowed meta-attribute (bnc#539968)
+ Medium: Build: Do not automatically add -L/lib, it could cause 64-bit arches to break
+ Medium: pengine: Bug lf#2206 - rsc_order constraints always use score at the top level
+ Medium: pengine: Only complain about target-role=master for non m/s resources
+ Medium: pengine: Prevent non-multistate resources from being promoted through target-role
+ Medium: pengine: Provide a default action for resource-set ordering
+ Medium: pengine: Silently fix requires=fencing for stonith resources so that it can be set in op_defaults
+ Medium: Tools: Bug lf#2286 - Allow the shell to accept template parameters on the command line
+ Medium: Tools: Bug lf#2307 - Provide a way to determine the nodeid of past cluster members
+ Medium: Tools: crm: add update method to template apply (LF 2289)
+ Medium: Tools: crm: direct RA interface for ocf class resource agents (LF 2270)
+ Medium: Tools: crm: direct RA interface for stonith class resource agents (LF 2270)
+ Medium: Tools: crm: do not add score which does not exist
+ Medium: Tools: crm: do not consider warnings as errors (LF 2274)
+ Medium: Tools: crm: do not remove sets which contain id-ref attribute (LF 2304)
+ Medium: Tools: crm: drop empty attributes elements
+ Medium: Tools: crm: exclude locations when testing for pathological constraints (LF 2300)
+ Medium: Tools: crm: fix exit code on single shot commands
+ Medium: Tools: crm: fix node delete (LF 2305)
+ Medium: Tools: crm: implement -F (--force) option
+ Medium: Tools: crm: rename status to cibstatus (LF 2236)
+ Medium: Tools: crm: revisit configure commit
+ Medium: Tools: crm: stay in crm if user specified level only (LF 2286)
+ Medium: Tools: crm: verify changes on exit from the configure level
+ Medium: ais: Some clients such as gfs_controld want a cluster name, allow one to be specified in corosync.conf
+ Medium: cib: Clean up logic for receiving remote messages
+ Medium: cib: Create valid notification control messages
+ Medium: cib: Indicate where the remote connection came from
+ Medium: cib: Send password prompt to stderr so that stdout can be redirected
+ Medium: cts: Fix rsh handling when stdout is not required
+ Medium: doc: Fill in the section on removing a node from an AIS-based cluster
+ Medium: doc: Update the docs to reflect the 0.6/1.0 rolling upgrade problem
+ Medium: doc: Use Publican for docbook based documentation
+ Medium: fencing: stonithd: add metadata for stonithd instance attributes (and support in the shell)
+ Medium: fencing: stonithd: ignore case when comparing host names (LF 2292)
+ Medium: tools: Make crm_mon functional with remote connections
+ Medium: xml: Add stopped as a supported role for operations
+ Medium: xml: Bug bnc#552713 - Treat node unames as text fields not IDs
+ Medium: xml: Bug lf#2215 - Create an always-true expression for empty rules when upgrading from 0.6
* Thu Oct 29 2009 Andrew Beekhof <andrew@beekhof.net> - 1.0.5-4
- Include the fixes from CoroSync integration testing
- Move the resource templates - they are not documentation
- Ensure documentation is placed in a standard location
- Exclude documentation that is included elsewhere in the package
- Update the tarball from upstream to version ee19d8e83c2a
+ cib: Correctly clean up when both plaintext and tls remote ports are requested
+ pengine: Bug bnc#515172 - Provide better defaults for lt(e) and gt(e) comparisons
+ pengine: Bug lf#2197 - Allow master instance placement to be influenced by colocation constraints
+ pengine: Make sure promote/demote pseudo actions are created correctly
+ pengine: Prevent target-role from promoting more than master-max instances
+ ais: Bug lf#2199 - Prevent expected-quorum-votes from being populated with garbage
+ ais: Prevent deadlock - don't try to release IPC message if the connection failed
+ cib: For validation errors, send back the full CIB so the client can display the errors
+ cib: Prevent use-after-free for remote plaintext connections
+ crmd: Bug lf#2201 - Prevent use-of-NULL when running heartbeat
* Wed Oct 13 2009 Andrew Beekhof <andrew@beekhof.net> - 1.0.5-3
- Update the tarball from upstream to version 38cd629e5c3c
+ Core: Bug lf#2169 - Allow dtd/schema validation to be disabled
+ pengine: Bug lf#2106 - Not all anonymous clone children are restarted after configuration change
+ pengine: Bug lf#2170 - stop-all-resources option had no effect
+ pengine: Bug lf#2171 - Prevent groups from starting if they depend on a complex resource which can not
+ pengine: Disable resource management if stonith-enabled=true and no stonith resources are defined
+ pengine: do not include master score if it would prevent allocation
+ ais: Avoid excessive load by checking for dead children every 1s (instead of 100ms)
+ ais: Bug rh#525589 - Prevent shutdown deadlocks when running on CoroSync
+ ais: Gracefully handle changes to the AIS nodeid
+ crmd: Bug bnc#527530 - Wait for the transition to complete before leaving S_TRANSITION_ENGINE
+ crmd: Prevent use-after-free with LOG_DEBUG_3
+ Medium: xml: Mask the "symmetrical" attribute on rsc_colocation constraints (bnc#540672)
+ Medium (bnc#520707): Tools: crm: new templates ocfs2 and clvm
+ Medium: Build: Invert the disable ais/heartbeat logic so that --without (ais|heartbeat) is available to rpmbuild
+ Medium: pengine: Bug lf#2178 - Indicate unmanaged clones
+ Medium: pengine: Bug lf#2180 - Include node information for all failed ops
+ Medium: pengine: Bug lf#2189 - Incorrect error message when unpacking simple ordering constraint
+ Medium: pengine: Correctly log resources that would like to start but can not
+ Medium: pengine: Stop ptest from logging to syslog
+ Medium: ais: Include version details in plugin name
+ Medium: crmd: Requery the resource metadata after every start operation
* Fri Aug 21 2009 Tomas Mraz <tmraz@redhat.com> - 1.0.5-2.1
- rebuilt with new openssl
* Wed Aug 19 2009 Andrew Beekhof <andrew@beekhof.net> - 1.0.5-2
- Add versioned perl dependency as specified by
https://fedoraproject.org/wiki/Packaging/Perl#Packages_that_link_to_libperl
- No longer remove RPATH data, it prevents us finding libperl.so and no other
libraries were being hardcoded
- Compile in support for heartbeat
- Conditionally add heartbeat-devel and corosynclib-devel to the -devel requirements
depending on which stacks are supported
* Mon Aug 17 2009 Andrew Beekhof <andrew@beekhof.net> - 1.0.5-1
- Add dependency on resource-agents
- Use the version of the configure macro that supplies --prefix, --libdir, etc
- Update the tarball from upstream to version 462f1569a437 (Pacemaker 1.0.5 final)
+ Tools: crm_resource - Advertise --move instead of --migrate
+ Medium: Extra: New node connectivity RA that uses system ping and attrd_updater
+ Medium: crmd: Note that dc-deadtime can be used to mask the brokenness of some switches
* Tue Aug 11 2009 Ville Skyttä <ville.skytta@iki.fi> - 1.0.5-0.7.c9120a53a6ae.hg
- Use bzipped upstream tarball.
* Wed Jul 29 2009 Andrew Beekhof <andrew@beekhof.net> - 1.0.5-0.6.c9120a53a6ae.hg
- Add back missing build auto* dependencies
- Minor cleanups to the install directive
* Tue Jul 28 2009 Andrew Beekhof <andrew@beekhof.net> - 1.0.5-0.5.c9120a53a6ae.hg
- Add a leading zero to the revision when alphatag is used
* Tue Jul 28 2009 Andrew Beekhof <andrew@beekhof.net> - 1.0.5-0.4.c9120a53a6ae.hg
- Incorporate the feedback from the cluster-glue review
- Realistically, the version is a 1.0.5 pre-release
- Use the global directive instead of define for variables
- Use the haclient/hacluster group/user instead of daemon
- Use the _configure macro
- Fix install dependencies
* Fri Jul 24 2009 Andrew Beekhof <andrew@beekhof.net> - 1.0.4-3
- Initial Fedora checkin
- Include an AUTHORS and license file in each package
- Change the library package name to pacemaker-libs to be more
Fedora compliant
- Remove execute permissions from xml related files
- Reference the new cluster-glue devel package name
- Update the tarball from upstream to version c9120a53a6ae
+ pengine: Only prevent migration if the clone dependency is stopping/starting on the target node
+ pengine: Bug 2160 - Don't shuffle clones due to colocation
+ pengine: New implementation of the resource migration (not stop/start) logic
+ Medium: Tools: crm_resource - Prevent use-of-NULL by requiring a resource name for the -A and -a options
+ Medium: pengine: Prevent use-of-NULL in find_first_action()
* Tue Jul 14 2009 Andrew Beekhof <andrew@beekhof.net> - 1.0.4-2
- Reference authors from the project AUTHORS file instead of listing in description
- Change Source0 to reference the Mercurial repo
- Cleaned up the summaries and descriptions
- Incorporate the results of Fedora package self-review
* Thu Jun 04 2009 Andrew Beekhof <abeekhof@suse.de> - 1.0.4-1
- Update source tarball to revision: 1d87d3e0fc7f (stable-1.0)
- Statistics:
Changesets: 209
Diff: 266 files changed, 12010 insertions(+), 8276 deletions(-)
- Changes since Pacemaker-1.0.3
+ (bnc#488291): ais: do not rely on byte endianness on ptr cast
+ (bnc#507255): Tools: crm: delete rsc/op_defaults (these meta_attributes are killing me)
+ (bnc#507255): Tools: crm: import properly rsc/op_defaults
+ (LF 2114): Tools: crm: add support for operation instance attributes
+ ais: Bug lf#2126 - Message replies cannot be routed to transient clients
+ ais: Fix compilation for the latest Corosync API (v1719)
+ attrd: Do not perform all updates as complete refreshes
+ cib: Fix huge memory leak affecting heartbeat-based clusters
+ Core: Allow xpath queries to match attributes
+ Core: Generate the help text directly from a tool options struct
+ Core: Handle differences in 0.6 messaging format
+ crmd: Bug lf#2120 - All transient node attribute updates need to go via attrd
+ crmd: Correctly calculate how long an FSA action took to avoid spamming the logs with errors
+ crmd: Fix another large memory leak affecting Heartbeat based clusters
+ lha: Restore compatibility with older versions
+ pengine: Bug bnc#495687 - Filesystem is not notified of successful STONITH under some conditions
+ pengine: Make running a cluster with STONITH enabled but no STONITH resources an error and provide details on resolutions
+ pengine: Prevent use-of-NULL when using resource ordering sets
+ pengine: Provide inter-notification ordering guarantees
+ pengine: Rewrite the notification code to be understandable and extendable
+ Tools: attrd - Prevent race condition resulting in the cluster forgetting the node wishes to shut down
+ Tools: crm: regression tests
+ Tools: crm_mon - Fix smtp notifications
+ Tools: crm_resource - Repair the ability to query meta attributes
+ Low Build: Bug lf#2105 - Debian package should contain pacemaker doc and crm templates
+ Medium (bnc#507255): Tools: crm: handle empty rsc/op_defaults properly
+ Medium (bnc#507255): Tools: crm: use the right obj_type when creating objects from xml nodes
+ Medium (LF 2107): Tools: crm: revisit exit codes in configure
+ Medium: cib: Do not bother validating updates that only affect the status section
+ Medium: Core: Include supported stacks in version information
+ Medium: crmd: Record in the CIB, the cluster infrastructure being used
+ Medium: cts: Do not combine crm_standby arguments - the wrapper can not process them
+ Medium: cts: Fix the CIBAudit class
+ Medium: Extra: Refresh showscores script from Dominik
+ Medium: pengine: Build a statically linked version of ptest
+ Medium: pengine: Correctly log the actions for resources that are being recovered
- + Medium: pengine: Correctly log the occurance of promotion events
+ + Medium: pengine: Correctly log the occurrence of promotion events
+ Medium: pengine: Implement node health based on a patch from Mark Hamzy
+ Medium: Tools: Add examples to help text outputs
+ Medium: Tools: crm: catch syntax errors for configure load
+ Medium: Tools: crm: implement erasing nodes in configure erase
+ Medium: Tools: crm: work with parents only when managing xml objects
+ Medium: Tools: crm_mon - Add option to run custom notification program on resource operations (Patch by Dominik Klein)
+ Medium: Tools: crm_resource - Allow --cleanup to function on complex resources and cluster-wide
+ Medium: Tools: haresource2cib.py - Patch from horms to fix conversion error
+ Medium: Tools: Include stack information in crm_mon output
+ Medium: Tools: Two new options (--stack,--constraints) to crm_resource for querying how a resource is configured
* Wed Apr 08 2009 Andrew Beekhof <abeekhof@suse.de> - 1.0.3-1
- Update source tarball to revision: b133b3f19797 (stable-1.0) tip
- Statistics:
Changesets: 383
Diff: 329 files changed, 15471 insertions(+), 15119 deletions(-)
- Changes since Pacemaker-1.0.2
+ Added tag SLE11-HAE-GMC for changeset 9196be9830c2
+ ais plugin: Fix quorum calculation (bnc#487003)
+ ais: Another memory fix leak in error path
+ ais: Bug bnc#482847, bnc#482905 - Force a clean exit of OpenAIS once Pacemaker has finished unloading
+ ais: Bug bnc#486858 - Fix update_member() to prevent spamming clients with membership events containing no changes
+ ais: Centralize all quorum calculations in the ais plugin and allow expected votes to be configured in the cib
+ ais: Correctly handle a return value of zero from openais_dispatch_recv()
+ ais: Disable logging to a file
+ ais: Fix memory leak in error path
+ ais: IPC messages are only in scope until a response is sent
+ All signal handlers used with CL_SIGNAL() need to be as minimal as possible
+ cib: Bug bnc#482885 - Simplify CIB disk-writes to prevent data loss. Required a change to the backup filename format
+ cib: crmd: Revert part of 9782ab035003. Complex shutdown routines need G_main_add_SignalHandler to avoid race conditions
+ crm: Avoid infinite loop during crm configure edit (bnc#480327)
+ crmd: Avoid a race condition by waiting for the attrd update to trigger a transition automatically
+ crmd: Bug bnc#480977 - Prevent extra, partial, shutdown when a node restarts too quickly
+ crmd: Bug bnc#480977 - Prevent extra, partial, shutdown when a node restarts too quickly (verified)
+ crmd: Bug bnc#489063 - Ensure the DC is always unset after we 'lose' an election
+ crmd: Bug BSC#479543 - Correctly find the migration source for timed out migrate_from actions
+ crmd: Call crm_peer_init() before we start the FSA - prevents a race condition when used with Heartbeat
+ crmd: Erasing the status section should not be forced to the local node
+ crmd: Fix memory leak in cib notification processing code
+ crmd: Fix memory leak in transition graph processing
+ crmd: Fix memory leaks found by valgrind
+ crmd: More memory leaks fixes found by valgrind
+ fencing: stonithd: is_heartbeat_cluster is a no-no if there is no heartbeat support
+ pengine: Bug bnc#466788 - Exclude nodes that can not run resources
+ pengine: Bug bnc#466788 - Make colocation based on node attributes work
+ pengine: Bug BNC#478687 - Do not crash when clone-max is 0
+ pengine: Bug bnc#488721 - Fix id-ref expansion for clones, the doc-root for clone children is not the cib root
+ pengine: Bug bnc#490418 - Correctly determine node state for nodes wishing to be terminated
+ pengine: Bug LF#2087 - Correctly parse the state of anonymous clones that have multiple instances on a given node
+ pengine: Bug lf#2089 - Meta attributes are not inherited by clone children
+ pengine: Bug lf#2091 - Correctly restart modified resources that were found active by a probe
+ pengine: Bug lf#2094 - Fix probe ordering for cloned groups
+ pengine: Bug LF:2075 - Fix large pingd memory leaks
+ pengine: Correctly attach orphaned clone children to their parent
+ pengine: Correctly handle terminate node attributes that are set to the output from time()
+ pengine: Ensure orphaned clone members are hooked up to the parent when clone-max=0
+ pengine: Fix memory leak in LogActions
+ pengine: Fix the determination of whether a group is active
+ pengine: Look up the correct promotion preference for anonymous masters
+ pengine: Simplify handling of start failures by changing the default migration-threshold to INFINITY
+ pengine: The ordered option for clones no longer causes extra start/stop operations
+ RA: Bug bnc#490641 - Shut down dlm_controld with -TERM instead of -KILL
+ RA: pingd: Set default ping interval to 1 instead of 0 seconds
+ Resources: pingd - Correctly tell the ping daemon to shut down
+ Tools: Bug bnc#483365 - Ensure the command from cluster_test includes a value for --log-facility
+ Tools: cli: fix and improve delete command
+ Tools: crm: add and implement templates
+ Tools: crm: add support for command aliases and some common commands (i.e. cd,exit)
+ Tools: crm: create top configuration nodes if they are missing
+ Tools: crm: fix parsing attributes for rules (broken by the previous changeset)
+ Tools: crm: new ra set of commands
+ Tools: crm: resource agents information management
+ Tools: crm: rsc/op_defaults
+ Tools: crm: support for no value attribute in nvpairs
+ Tools: crm: the new configure monitor command
+ Tools: crm: the new configure node command
+ Tools: crm_mon - Prevent use-of-NULL when summarizing an orphan
+ Tools: hb2openais: create clvmd clone for respawn evmsd in ha.cf
+ Tools: hb2openais: fix a serious recursion bug in xml node processing
+ Tools: hb2openais: fix ocfs2 processing
+ Tools: pingd - prevent double free of getaddrinfo() output in error path
+ Tools: The default re-ping interval for pingd should be 1s not 1ms
+ Medium (bnc#479049): Tools: crm: add validation of resource type for the configure primitive command
+ Medium (bnc#479050): Tools: crm: add help for RA parameters in tab completion
+ Medium (bnc#479050): Tools: crm: add tab completion for primitive params/meta/op
+ Medium (bnc#479050): Tools: crm: reimplement cluster properties completion
+ Medium (bnc#486968): Tools: crm: listnodes function requires no parameters (do not mix completion with other stuff)
+ Medium: ais: Remove the ugly hack for dampening AIS membership changes
+ Medium: cib: Fix memory leaks by using mainloop_add_signal
+ Medium: cib: Move more logging to the debug level (was info)
+ Medium: cib: Overhaul the processing of synchronous replies
+ Medium: Core: Add library functions for instructing the cluster to terminate nodes
+ Medium: crmd: Add new expected-quorum-votes option
+ Medium: crmd: Allow up to 5 retries when an attrd update fails
+ Medium: crmd: Automatically detect and use new values for crm_config options
+ Medium: crmd: Bug bnc#490426 - Escalated shutdowns stall when there are pending resource operations
+ Medium: crmd: Clean up and optimize the DC election algorithm
+ Medium: crmd: Fix memory leak in shutdown
+ Medium: crmd: Fix memory leaks spotted by Valgrind
+ Medium: crmd: Ignore join messages from hosts other than our DC
+ Medium: crmd: Limit the scope of resource updates to the status section
+ Medium: crmd: Prevent the crmd from being respawned if it's told to shut down when it did not ask to be
+ Medium: crmd: Re-check the election status after membership events
+ Medium: crmd: Send resource updates via the local CIB during elections
+ Medium: pengine: Bug bnc#491441 - crm_mon does not display operations returning 'uninstalled' correctly
+ Medium: pengine: Bug lf#2101 - For location constraints, role=Slave is equivalent to role=Started
+ Medium: pengine: Clean up the API - removed ->children() and renamed ->find_child() to find_rsc()
+ Medium: pengine: Compress the display of healthy anonymous clones
+ Medium: pengine: Correctly log the actions for resources that are being recovered
+ Medium: pengine: Determine a promotion score for complex resources
+ Medium: pengine: Ensure clones always have a value for globally-unique
+ Medium: pengine: Prevent orphan clones from being allocated
+ Medium: RA: controld: Return proper exit code for stop op.
+ Medium: Tools: Bug bnc#482558 - Fix logging test in cluster_test
+ Medium: Tools: Bug bnc#482828 - Fix quoting in cluster_test logging setup
+ Medium: Tools: Bug bnc#482840 - Include directory path to CTSlab.py
+ Medium: Tools: crm: add more user input checks
+ Medium: Tools: crm: do not check resource status if we are working with a shadow
+ Medium: Tools: crm: fix id-refs and allow reference to top objects (i.e. primitive)
+ Medium: Tools: crm: ignore comments in the CIB
+ Medium: Tools: crm: multiple column output would not work with small lists
+ Medium: Tools: crm: refuse to delete running resources
+ Medium: Tools: crm: rudimentary if-else for templates
+ Medium: Tools: crm: Start/stop clones via target-role.
+ Medium: Tools: crm_mon - Compress the node status for healthy and offline nodes
+ Medium: Tools: crm_shadow - Return 0/cib_ok when --create-empty succeeds
+ Medium: Tools: crm_shadow - Support -e, the short form of --create-empty
+ Medium: Tools: Make attrd quieter
+ Medium: Tools: pingd - Avoid using various clplumbing functions as they seem to leak
+ Medium: Tools: Reduce pingd logging
* Mon Feb 16 2009 Andrew Beekhof <abeekhof@suse.de> - 1.0.2-1
- Update source tarball to revision: d232d19daeb9 (stable-1.0) tip
- Statistics:
Changesets: 441
Diff: 639 files changed, 20871 insertions(+), 21594 deletions(-)
- Changes since Pacemaker-1.0.1
+ (bnc#450815): Tools: crm cli: do not generate id for the operations tag
+ ais: Add support for the new AIS IPC layer
+ ais: Always set header.error to the correct default: SA_AIS_OK
+ ais: Bug BNC#456243 - Ensure the membership cache always contains an entry for the local node
+ ais: Bug BNC:456208 - Prevent deadlocks by not logging in the child process before exec()
+ ais: By default, disable support for the WIP openais IPC patch
+ ais: Detect and handle situations where ais and the crm disagree on the node name
+ ais: Ensure crm_peer_seq is updated after a membership update
+ ais: Make sure all IPC header fields are set to sane defaults
+ ais: Repair and streamline service load now that whitetank startup functions correctly
+ build: create and install doc files
+ cib: Allow clients without mainloop to connect to the cib
+ cib: CID:18 - Fix use-of-NULL in cib_perform_op
+ cib: CID:18 - Repair errors introduced in b5a18704477b - Fix use-of-NULL in cib_perform_op
+ cib: Ensure diffs contain the correct values of admin_epoch
+ cib: Fix four moderately sized memory leaks detected by Valgrind
+ Core: CID:10 - Prevent indexing into an array of schemas with a negative value
+ Core: CID:13 - Fix memory leak in log_data_element
+ Core: CID:15 - Fix memory leak in crm_get_peer
+ Core: CID:6 - Fix use-of-NULL in copy_ha_msg_input
+ Core: Fix crash in the membership code preventing node shutdown
+ Core: Fix more memory leaks found by valgrind
+ Core: Prevent unterminated strings after decompression
+ crmd: Bug BNC:467995 - Delay marking STONITH operations complete until STONITH tells us so
+ crmd: Bug LF:1962 - Do not NACK peers because they are not (yet) in our membership. Just ignore them.
+ crmd: Bug LF:2010 - Ensure fencing cib updates create the node_state entry if needed to prevent re-fencing during cluster startup
+ crmd: Correctly handle reconnections to attrd
+ crmd: Ensure updates for lost migrate operations indicate which node it tried to migrate to
+ crmd: If there are no nodes to finalize, start an election.
+ crmd: If there are no nodes to welcome, start an election.
+ crmd: Prevent node attribute loss by detecting attrd disconnections immediately
+ crmd: Prevent node re-probe loops by ensuring mandatory actions always complete
+ pengine: Bug 2005 - Fix startup ordering of cloned stonith groups
+ pengine: Bug 2006 - Correctly reprobe cloned groups
+ pengine: Bug BNC:465484 - Fix the no-quorum-policy=suicide option
+ pengine: Bug LF:1996 - Correctly process disabled monitor operations
+ pengine: CID:19 - Fix use-of-NULL in determine_online_status
+ pengine: Clones now default to globally-unique=false
+ pengine: Correctly calculate the number of available nodes for the clone to use
+ pengine: Only shoot online nodes with no-quorum-policy=suicide
+ pengine: Prevent on-fail settings being ignored after a resource is successfully stopped
+ pengine: Prevent use-of-NULL for failed migrate actions in process_rsc_state()
+ pengine: Remove an optimization for the terminate node attribute that caused the cluster to block indefinitely
+ pengine: Repair the ability to colocate based on node attributes other than uname
+ pengine: Start the correct monitor operation for unmanaged masters
+ stonith: CID:3 - Fix another case of exceptionally poor error handling by the original stonith developers
+ stonith: CID:5 - Checking for NULL and then dereferencing it anyway is an interesting approach to error handling
+ stonithd: Sending IPC to the cluster is a privileged operation
+ stonithd: wrong checks for shmid (0 is a valid id)
+ Tools: attrd - Correctly determine when an attribute has stopped changing and should be committed to the CIB
+ Tools: Bug 2003 - pingd does not correctly detect failures when the interface is down
+ Tools: Bug 2003 - pingd does not correctly handle node-down events on multi-NIC systems
+ Tools: Bug 2021 - pingd does not detect sequence wrapping correctly, incorrectly reports nodes offline
+ Tools: Bug BNC:468066 - Do not use the result of uname() when it's no longer in scope
+ Tools: Bug BNC:473265 - crm_resource -L dumps core
+ Tools: Bug LF:2001 - Transient node attributes should be set via attrd
+ Tools: Bug LF:2036 - crm_resource cannot set/get parameters for cloned resources
+ Tools: Bug LF:2046 - Node attribute updates are lost because attrd can take too long to start
+ Tools: Cause the correct clone instance to be failed with crm_resource -F
+ Tools: cluster_test - Allow the user to select a stack and fix CTS invocation
+ Tools: crm cli: allow rename only if the resource is stopped
+ Tools: crm cli: catch system errors on file operations
+ Tools: crm cli: completion for ids in configure
+ Tools: crm cli: drop '-rsc' from attributes for order constraint
+ Tools: crm cli: exit with an appropriate exit code
+ Tools: crm cli: fix wrong order of action and resource in order constraint
+ Tools: crm cli: fix wrong exit code
+ Tools: crm cli: improve handling of cib attributes
+ Tools: crm cli: new command: configure rename
+ Tools: crm cli: new command: configure upgrade
+ Tools: crm cli: new command: node delete
+ Tools: crm cli: prevent key errors on missing cib attributes
+ Tools: crm cli: print long help for help topics
+ Tools: crm cli: return on syntax error when parsing score
+ Tools: crm cli: rsc_location can be without nvpairs
+ Tools: crm cli: short node preference location constraint
+ Tools: crm cli: sometimes, on errors, level would change on single shot use
+ Tools: crm cli: syntax: drop a bunch of commas (remains of help tables conversion)
+ Tools: crm cli: verify user input for sanity
+ Tools: crm: find expressions within rules (do not always skip xml nodes due to used id)
+ Tools: crm_master should not define a set id now that attrd is used. Defining one can break lookups
+ Tools: crm_mon Use the OID assigned to the project by IANA for SNMP traps
+ Medium (bnc#445622): Tools: crm cli: improve the node show command and drop node status
+ Medium (LF 2009): stonithd: improve timeouts for remote fencing
+ Medium: ais: Allow dead peers to be removed from membership calculations
+ Medium: ais: Pass node deletion events on to clients
+ Medium: ais: Sanitize ipc usage
+ Medium: ais: Supply the node uname in addition to the id
+ Medium: Build: Clean up configure to ensure NON_FATAL_CFLAGS is consistent with CFLAGS (ie. includes -g)
+ Medium: Build: Install cluster_test
+ Medium: Build: Use more restrictive CFLAGS and fix the resulting errors
+ Medium: cib: CID:20 - Fix potential use-after-free in cib_native_signon
+ Medium: Core: Bug BNC:474727 - Set a maximum time to wait for IPC messages
+ Medium: Core: CID:12 - Fix memory leak in decode_transition_magic error path
+ Medium: Core: CID:14 - Fix memory leak in calculate_xml_digest error path
+ Medium: Core: CID:16 - Fix memory leak in date_to_string error path
+ Medium: Core: Try to track down the cause of XML parsing errors
+ Medium: crmd: Bug BNC:472473 - Do not wait excessive amounts of time for lost actions
+ Medium: crmd: Bug BNC:472473 - Reduce the transition timeout to action_timeout+network_delay
+ Medium: crmd: Do not fast-track the processing of LRM refreshes when there are pending actions.
+ Medium: crmd: do_dc_join_filter_offer - Check the 'join' message is for the current instance before deciding to NACK peers
+ Medium: crmd: Find option values without having to do a config upgrade
+ Medium: crmd: Implement shutdown using a transient node attribute
+ Medium: crmd: Update the crmd options to use dashes instead of underscores
+ Medium: cts: Add 'cluster reattach' to the suite of automated regression tests
+ Medium: cts: cluster_test - Make some usability enhancements
+ Medium: CTS: cluster_test - suggest a valid port number
+ Medium: CTS: Fix python import order
+ Medium: cts: Implement an automated SplitBrain test
+ Medium: CTS: Remove references to deleted classes
+ Medium: Extra: Resources - Use HA_VARRUN instead of HA_RSCTMP for state files as Heartbeat removes HA_RSCTMP at startup
+ Medium: HB: Bug 1933 - Fake crmd_client_status_callback() calls because HB does not provide them for already running processes
+ Medium: pengine: CID:17 - Fix memory leak in find_actions_by_task error path
+ Medium: pengine: CID:7,8 - Prevent hypothetical use-of-NULL in LogActions
+ Medium: pengine: Defer logging the actions performed on a resource until we have processed ordering constraints
+ Medium: pengine: Remove the symmetrical attribute of colocation constraints
+ Medium: Resources: pingd - fix the meta defaults
+ Medium: Resources: Stateful - Add missing meta defaults
+ Medium: stonithd: exit if the pid file cannot be locked
+ Medium: Tools: Allow attrd clients to specify the ID the attribute should be created with
+ Medium: Tools: attrd - Allow attribute updates to be performed from a host's peer
+ Medium: Tools: Bug LF:1994 - Clean up crm_verify return codes
+ Medium: Tools: Change the pingd defaults to ping hosts once every second (instead of 5 times every 10 seconds)
+ Medium: Tools: cibmon - Detect resource operations with a view to providing email/snmp/cim notification
+ Medium: Tools: crm cli: add back symmetrical for order constraints
+ Medium: Tools: crm cli: generate role in location when converting from xml
+ Medium: Tools: crm cli: handle shlex exceptions
+ Medium: Tools: crm cli: keep order of help topics
+ Medium: Tools: crm cli: refine completion for ids in configure
+ Medium: Tools: crm cli: replace inf with INFINITY
+ Medium: Tools: crm cli: streamline cib load and parsing
+ Medium: Tools: crm cli: supply provider only for ocf class primitives
+ Medium: Tools: crm_mon - Add support for sending mail notifications of resource events
+ Medium: Tools: crm_mon - Include the DC version in status summary
+ Medium: Tools: crm_mon - Sanitize startup and option processing
+ Medium: Tools: crm_mon - switch to event-driven updates and add support for sending snmp traps
+ Medium: Tools: crm_shadow - Replace the --locate option with the saner --edit
+ Medium: Tools: hb2openais: do not remove Evmsd resources, but replace them with clvmd
+ Medium: Tools: hb2openais: replace crmadmin with crm_mon
+ Medium: Tools: hb2openais: replace the lsb class with ocf for o2cb
+ Medium: Tools: hb2openais: reuse code
+ Medium: Tools: LF:2029 - Display an error if crm_resource is used to reset the operation history of non-primitive resources
+ Medium: Tools: Make pingd resilient to attrd failures
+ Medium: Tools: pingd - fix the command line switches
+ Medium: Tools: Rename ccm_tool to crm_node
* Tue Nov 18 2008 Andrew Beekhof <abeekhof@suse.de> - 1.0.1-1
- Update source tarball to revision: 6fc5ce8302ab (stable-1.0) tip
- Statistics:
Changesets: 170
Diff: 816 files changed, 7633 insertions(+), 6286 deletions(-)
- Changes since Pacemaker-1.0.0
+ ais: Allow the crmd to get callbacks whenever a node state changes
+ ais: Create an option for starting the mgmtd daemon automatically
+ ais: Ensure HA_RSCTMP exists for use by resource agents
+ ais: Hook up the openais.conf config logging options
+ ais: Zero out the PID of disconnecting clients
+ cib: Ensure global updates cause a disk write when appropriate
+ Core: Add an extra sanity check to getXpathResults() to prevent segfaults
+ Core: Do not redefine __FUNCTION__ unnecessarily
+ Core: Repair the ability to have comments in the configuration
+ crmd: Bug:1975 - crmd should wait indefinitely for stonith operations to complete
+ crmd: Ensure PE processing does not occur for all error cases in do_pe_invoke_callback
+ crmd: Requests to the CIB should cause any prior PE calculations to be ignored
+ heartbeat: Wait for membership 'up' events before removing stale node status data
+ pengine: Bug LF:1988 - Ensure recurring operations always have the correct target-rc set
+ pengine: Bug LF:1988 - For unmanaged resources we need to skip the usual can_run_resources() checks
+ pengine: Ensure the terminate node attribute is handled correctly
+ pengine: Fix optional colocation
+ pengine: Improve the detection of 'new' nodes joining the cluster
+ pengine: Prevent assert failures in master_color() by ensuring unmanaged masters are always reallocated to their current location
+ Tools: crm cli: parser: return False on syntax error and None for comments
+ Tools: crm cli: unify template and edit commands
+ Tools: crm_shadow - Show more line number information after validation failures
+ Tools: hb2openais: add option to upgrade the CIB to v3.0
+ Tools: hb2openais: add U option to getopts and update usage
+ Tools: hb2openais: backup improved and multiple fixes
+ Tools: hb2openais: fix class/provider reversal
+ Tools: hb2openais: fix testing
+ Tools: hb2openais: move the CIB update to the end
+ Tools: hb2openais: update logging and set logfile appropriately
+ Tools: LF:1969 - Attrd never sets any properties in the cib
+ Tools: Make attrd functional on OpenAIS
+ Medium: ais: Hook up the options for specifying the expected number of nodes and total quorum votes
+ Medium: ais: Look for pacemaker options inside the service block with 'name: pacemaker' instead of creating an additional configuration block
+ Medium: ais: Provide better feedback when nodes change nodeids (in openais.conf)
+ Medium: cib: Always store cib contents on disk with num_updates=0
+ Medium: cib: Ensure remote access ports are cleaned up on shutdown
+ Medium: crmd: Detect deleted resource operations automatically
+ Medium: crmd: Erase a node's resource operations and transient attributes after a successful STONITH
+ Medium: crmd: Find a more appropriate place to update quorum and refresh attrd attributes
+ Medium: crmd: Fix the handling of unexpected PE exits to ensure the current CIB is stored
+ Medium: crmd: Fix the recording of pending operations in the CIB
+ Medium: crmd: Initiate an attrd refresh _after_ the status section has been fully repopulated
+ Medium: crmd: Only the DC should update quorum in an openais cluster
+ Medium: Ensure meta attributes are used consistently
+ Medium: pengine: Allow group and clone level resource attributes
+ Medium: pengine: Bug N:437719 - Ensure scores from colocated resources count when allocating groups
+ Medium: pengine: Prevent lsb scripts from being used in globally unique clones
+ Medium: pengine: Make a best-effort guess at a migration threshold for people with 0.6 configs
+ Medium: Resources: controld - ensure we are part of a clone with globally_unique=false
+ Medium: Tools: attrd - Automatically refresh all attributes after a CIB replace operation
+ Medium: Tools: Bug LF:1985 - crm_mon - Correctly process failed cib queries to allow reconnection after cluster restarts
+ Medium: Tools: Bug LF:1987 - crm_verify incorrectly warns of configuration upgrades for the most recent version
+ Medium: Tools: crm (bnc#441028): check for key error in attributes management
+ Medium: Tools: crm_mon - display the meaning of the operation rc code instead of the status
+ Medium: Tools: crm_mon - Fix the display of timing data
+ Medium: Tools: crm_verify - check that we are being asked to validate a complete config
+ Medium: xml: Relax the restriction on the contents of rsc_location.node
* Thu Oct 16 2008 Andrew Beekhof <abeekhof@suse.de> - 1.0.0-1
- Update source tarball to revision: 388654dfef8f tip
- Statistics:
Changesets: 261
Diff: 3021 files changed, 244985 insertions(+), 111596 deletions(-)
- Changes since f805e1b30103
+ add the crm cli program
+ ais: Move the service id definition to a common location and make sure it is always used
+ build: rename hb2openais.sh to .in and replace paths with vars
+ cib: Implement --create for crm_shadow
+ cib: Remove dead files
+ Core: Allow the expected number of quorum votes to be configurable
+ Core: cl_malloc and friends were removed from Heartbeat
+ Core: Only call xmlCleanupParser() if we parsed anything. Doing so unconditionally seems to cause a segfault
+ hb2openais.sh: improve pingd handling; several bugs fixed
+ hb2openais: fix clone creation; replace EVMS strings
+ new hb2openais.sh conversion script
+ pengine: Bug LF:1950 - Ensure the current values for all notification variables are always set (even if empty)
+ pengine: Bug LF:1955 - Ensure unmanaged masters are unconditionally repromoted to ensure they are monitored correctly.
+ pengine: Bug LF:1955 - Fix another case of filtering causing unmanaged master failures
+ pengine: Bug LF:1955 - Unmanaged mode prevents master resources from being allocated correctly
+ pengine: Bug N:420538 - Anti-colocation caused a positive node preference
+ pengine: Correctly handle unmanaged resources to prevent them from being started elsewhere
+ pengine: crm_resource - Fix the --migrate command
+ pengine: Make stonith-enabled default to true and warn if no STONITH resources are found
+ pengine: Make sure orphaned clone children are created correctly
+ pengine: Monitors for unmanaged resources do not need to wait for start/promote/demote actions to complete
+ stonithd (LF 1951): fix remote stonith operations
+ stonithd: fix handling of timeouts
+ stonithd: fix logic for stonith resource priorities
+ stonithd: implement the fence-timeout instance attribute
+ stonithd: initialize value before reading fence-timeout
+ stonithd: set timeouts for fencing ops to the timeout of the start op
+ stonithd: stonith rsc priorities (new feature)
+ Tools: Add hb2openais - a tool for upgrading a Heartbeat cluster to use OpenAIS instead
+ Tools: crm_verify - clean up the upgrade logic to prevent crash on invalid configurations
+ Tools: Make pingd functional on Linux
+ Update version numbers for 1.0 candidates
+ Medium: ais: Add support for a synchronous call to retrieve the node's nodeid
+ Medium: ais: Use the agreed service number
+ Medium: Build: Reliably detect heartbeat libraries during configure
+ Medium: Build: Supply prototypes for libreplace functions when needed
+ Medium: Build: Teach configure how to find corosync
+ Medium: Core: Provide better feedback if Pacemaker is started by a stack it does not support
+ Medium: crmd: Avoid calling GHashTable functions with NULL
+ Medium: crmd: Delay raising I_ERROR when the PE exits until we have had a chance to save the current CIB
+ Medium: crmd: Hook up the stonith-timeout option to stonithd
+ Medium: crmd: Prevent potential use-of-NULL in global_timer_callback
+ Medium: crmd: Rationalize the logging of graph aborts
+ Medium: pengine: Add a stonith_timeout option and remove new options that are better set in rsc_defaults
+ Medium: pengine: Allow external entities to ask for a node to be shot by creating a terminate=true transient node attribute
+ Medium: pengine: Bug LF:1950 - Notifications do not contain all documented resource state fields
+ Medium: pengine: Bug N:417585 - Do not restart group children whose individual score drops below zero
+ Medium: pengine: Detect clients that disconnect before receiving their reply
+ Medium: pengine: Implement a true maintenance mode
+ Medium: pengine: Implement on-fail=standby for NTT. Derived from a patch by Satomi TANIGUCHI
+ Medium: pengine: Print the correct message when stonith is disabled
+ Medium: pengine: ptest - check the input is valid before proceeding
+ Medium: pengine: Revert group stickiness to the 'old way'
+ Medium: pengine: Use the correct attribute for action 'requires' (was prereq)
+ Medium: stonithd: Fix compilation without full heartbeat install
+ Medium: stonithd: exit with better code on empty host list
+ Medium: tools: Add a new regression test for CLI tools
+ Medium: tools: crm_resource - return with non-zero when a resource migration command is invalid
+ Medium: tools: crm_shadow - Allow the admin to start with an empty CIB (and no cluster connection)
+ Medium: xml: pacemaker-0.7 is now an alias for the 1.0 schema
* Mon Sep 22 2008 Andrew Beekhof <abeekhof@suse.de> - 0.7.3-1
- Update source tarball to revision: 33e677ab7764+ tip
- Statistics:
Changesets: 133
Diff: 89 files changed, 7492 insertions(+), 1125 deletions(-)
- Changes since f805e1b30103
+ Tools: add the crm cli program
+ Core: cl_malloc and friends were removed from Heartbeat
+ Core: Only call xmlCleanupParser() if we parsed anything. Doing so unconditionally seems to cause a segfault
+ new hb2openais.sh conversion script
+ pengine: Bug LF:1950 - Ensure the current values for all notification variables are always set (even if empty)
+ pengine: Bug LF:1955 - Ensure unmanaged masters are unconditionally repromoted to ensure they are monitored correctly.
+ pengine: Bug LF:1955 - Fix another case of filtering causing unmanaged master failures
+ pengine: Bug LF:1955 - Unmanaged mode prevents master resources from being allocated correctly
+ pengine: Bug N:420538 - Anti-colocation caused a positive node preference
+ pengine: Correctly handle unmanaged resources to prevent them from being started elsewhere
+ pengine: crm_resource - Fix the --migrate command
+ pengine: Make stonith-enabled default to true and warn if no STONITH resources are found
+ pengine: Make sure orphaned clone children are created correctly
+ pengine: Monitors for unmanaged resources do not need to wait for start/promote/demote actions to complete
+ stonithd (LF 1951): fix remote stonith operations
+ Tools: crm_verify - clean up the upgrade logic to prevent crash on invalid configurations
+ Medium: ais: Add support for a synchronous call to retrieve the node's nodeid
+ Medium: ais: Use the agreed service number
+ Medium: pengine: Allow external entities to ask for a node to be shot by creating a terminate=true transient node attribute
+ Medium: pengine: Bug LF:1950 - Notifications do not contain all documented resource state fields
+ Medium: pengine: Bug N:417585 - Do not restart group children whose individual score drops below zero
+ Medium: pengine: Implement a true maintenance mode
+ Medium: pengine: Print the correct message when stonith is disabled
+ Medium: stonithd: exit with better code on empty host list
+ Medium: xml: pacemaker-0.7 is now an alias for the 1.0 schema
* Wed Aug 20 2008 Andrew Beekhof <abeekhof@suse.de> - 0.7.1-1
- Update source tarball to revision: f805e1b30103+ tip
- Statistics:
Changesets: 184
Diff: 513 files changed, 43408 insertions(+), 43783 deletions(-)
- Changes since 0.7.0-19
+ Fix compilation when GNUTLS isn't found
+ admin: Fix use-after-free in crm_mon
+ Build: Remove testing code that prevented heartbeat-only builds
+ cib: Use single quotes so that the xpath queries for nvpairs will succeed
+ crmd: Always connect to stonithd when the TE starts and ensure we notice if it dies
+ crmd: Correctly handle a dead PE process
+ crmd: Make sure async-failures cause the failcount to be incremented
+ pengine: Bug LF:1941 - Handle failed clone instance probes when clone-max < #nodes
+ pengine: Parse resource ordering sets correctly
+ pengine: Prevent use-of-NULL - order->rsc_rh will not always be non-NULL
+ pengine: Unpack colocation sets correctly
+ Tools: crm_mon - Prevent use-of-NULL for orphaned resources
+ Medium: ais: Add support for a synchronous call to retrieve the nodes nodeid
+ Medium: ais: Allow transient clients to receive membership updates
+ Medium: ais: Avoid double-free in error path
+ Medium: ais: Include nodes in the membership even if their hostname has not yet been determined
+ Medium: ais: Spawn the PE from the ais plugin instead of the crmd
+ Medium: cib: By default, new configurations use the latest schema
+ Medium: cib: Clean up the CIB if it was already disconnected
+ Medium: cib: Only increment num_updates if something actually changed
+ Medium: cib: Prevent use-after-free in client after abnormal termination of the CIB
+ Medium: Core: Fix memory leak in xpath searches
+ Medium: Core: Get more details regarding parser errors
+ Medium: Core: Repair expand_plus_plus - do not call char2score on unexpanded values
+ Medium: Core: Switch to the libxml2 parser - it's significantly faster
+ Medium: Core: Use a libxml2 library function for xml -> text conversion
+ Medium: crmd: Asynchronous failure actions have no parameters
+ Medium: crmd: Avoid calling glib functions with NULL
+ Medium: crmd: Do not allow an election to promote a node from S_STARTING
+ Medium: crmd: Do not vote if we have not completed the local startup
+ Medium: crmd: Fix te_update_diff() now that get_object_root() functions differently
+ Medium: crmd: Fix the lrmd xpath expressions to not contain quotes
+ Medium: crmd: If we get a join offer during an election, restart the election
+ Medium: crmd: No further processing is needed when using the LRMs API call for failing resources
+ Medium: crmd: Only update have-quorum if the value changed
+ Medium: crmd: Repair the input validation logic in do_te_invoke
+ Medium: cts: CIBs can no longer contain comments
+ Medium: cts: Enable a bunch of tests that were incorrectly disabled
+ Medium: cts: The libxml2 parser won't allow v1 resources to use integers as parameter names
+ Medium: Do not use the cluster UID and GID directly. Look them up based on the configured value of HA_CCMUSER
+ Medium: Fix compilation when heartbeat is not supported
+ Medium: pengine: Allow groups to be involved in optional ordering constraints
+ Medium: pengine: Allow sets of operations to be reused by multiple resources
+ Medium: pengine: Bug LF:1941 - Mark extra clone instances as orphans and do not show inactive ones
+ Medium: pengine: Determine the correct migration-threshold during resource expansion
+ Medium: pengine: Implement no-quorum-policy=suicide (FATE #303619)
+ Medium: pengine: Clean up resources after stopping old copies of the PE
+ Medium: pengine: Teach the PE how to stop old copies of itself
+ Medium: Tools: Backport hb_report updates
+ Medium: Tools: cib_shadow - On create, spawn a new shell with CIB_shadow and PS1 set accordingly
+ Medium: Tools: Rename cib_shadow to crm_shadow
* Fri Jul 18 2008 Andrew Beekhof <abeekhof@suse.de> - 0.7.0-19
- Update source tarball to revision: 007c3a1c50f5 (unstable) tip
- Statistics:
Changesets: 108
Diff: 216 files changed, 4632 insertions(+), 4173 deletions(-)
- Changes added since unstable-0.7
+ admin: Fix use-after-free in crm_mon
+ ais: Change the tag for the ais plugin to "pacemaker" (used in openais.conf)
+ ais: Log terminated processes as an error
+ cib: Performance - Reorganize things to avoid calculating the XML diff twice
+ pengine: Bug LF:1941 - Handle failed clone instance probes when clone-max < #nodes
+ pengine: Fix memory leak in action2xml
+ pengine: Make OCF_ERR_ARGS a node-level error rather than a cluster-level one
+ pengine: Properly handle clones that are not installed on all nodes
+ Medium: admin: cibadmin - Show any validation errors if the upgrade failed
+ Medium: admin: cib_shadow - Implement --locate to display the underlying filename
+ Medium: admin: cib_shadow - Implement a --diff option
+ Medium: admin: cib_shadow - Implement a --switch option
+ Medium: admin: crm_resource - create more compact constraints that do not use lifetime (which is deprecated)
+ Medium: ais: Approximate born_on for OpenAIS based clusters
+ Medium: cib: Remove do_id_check, it is a poor substitute for ID validation by a schema
+ Medium: cib: Skip construction of pre-notify messages if no-one wants one
+ Medium: Core: Attempt to streamline some key functions to increase performance
+ Medium: Core: Clean up XML parser after validation
+ Medium: crmd: Detect and optimize the CRM's behavior when processing diffs of an LRM refresh
+ Medium: Fix memory leaks when resetting the name of an XML object
+ Medium: pengine: Prefer the current location if it is one of a group of nodes with the same (highest) score
* Wed Jun 25 2008 Andrew Beekhof <abeekhof@suse.de> - 0.7.0-1
- Update source tarball to revision: bde0c7db74fb tip
- Statistics:
Changesets: 439
Diff: 676 files changed, 41310 insertions(+), 52071 deletions(-)
- Changes added since stable-0.6
+ A new tool for setting up and invoking CTS
+ Admin: All tools now use --node (-N) for specifying node unames
+ Admin: All tools now use --xml-file (-x) and --xml-text (-X) for specifying where to find XML blobs
+ cib: Cleanup the API - remove redundant input fields
+ cib: Implement CIB_shadow - a facility for making and testing changes before uploading them to the cluster
+ cib: Make registering per-op callbacks an API call and renamed (for clarity) the API call for requesting notifications
+ Core: Add a facility for automatically upgrading old configurations
+ Core: Adopt libxml2 as the XML processing library - all external clients need to be recompiled
+ Core: Allow sending TLS messages larger than the MTU
+ Core: Fix parsing of time-only ISO dates
+ Core: Smarter handling of XML values containing quotes
+ Core: XML memory corruption - catch, and handle, cases where we are overwriting an attribute value with itself
+ Core: The xml ID type does not allow UUIDs that start with a number
+ Core: Implement XPath based versions of query/delete/replace/modify
+ Core: Remove some HA2.0.(3,4) compatibility code
+ crmd: Overhaul the detection of nodes that are starting vs. failed
+ pengine: Bug LF:1459 - Allow failures to expire
+ pengine: Have the PE do non-persistent configuration upgrades before performing calculations
+ pengine: Replace failure-stickiness with a simple 'migration-threshold'
+ tengine: Simplify the design by folding the tengine process into the crmd
+ Medium: Admin: Bug LF:1438 - Allow the list of all/active resource operations to be queried by crm_resource
+ Medium: Admin: Bug LF:1708 - crm_resource should print a warning if an attribute is already set as a meta attribute
+ Medium: Admin: Bug LF:1883 - crm_mon should display fail-count and operation history
+ Medium: Admin: Bug LF:1883 - crm_mon should display operation timing data
+ Medium: Admin: Bug N:371785 - crm_resource -C does not also clean up fail-count attributes
+ Medium: Admin: crm_mon - include timing data for failed actions
+ Medium: ais: Read options from the environment since objdb is not completely usable yet
+ Medium: cib: Add sections for op_defaults and rsc_defaults
+ Medium: cib: Better matching notification callbacks (for detecting duplicates and removal)
+ Medium: cib: Bug LF:1348 - Allow rules and attribute sets to be referenced for use in other objects
+ Medium: cib: BUG LF:1918 - By default, all cib calls now timeout after 30s
+ Medium: cib: Detect updates that decrease the version tuple
+ Medium: cib: Implement a client-side operation timeout - Requires LHA update
+ Medium: cib: Implement callbacks and async notifications for remote connections
+ Medium: cib: Make cib->cmds->update() an alias for modify at the API level (also implemented in cibadmin)
+ Medium: cib: Mark the CIB as disconnected if the IPC connection is terminated
+ Medium: cib: New call option 'cib_can_create' which can be passed to modify actions - allows the object to be created if it does not exist yet
+ Medium: cib: Reimplement get|set|delete attributes using XPath
+ Medium: cib: Remove some useless parts of the API
+ Medium: cib: Remove the 'attributes' scaffolding from the new format
+ Medium: cib: Implement the ability for clients to connect to remote servers
+ Medium: Core: Add support for validating xml against RelaxNG schemas
+ Medium: Core: Allow more than one item to be modified/deleted in XPath based operations
+ Medium: Core: Fix the sort_pairs function for creating sorted xml objects
+ Medium: Core: iso8601 - Implement subtract_duration and fix subtract_time
- + Medium: Core: Reduce the amount of xml copying occuring
+ + Medium: Core: Reduce the amount of xml copying
+ Medium: Core: Support value='value+=N' XML updates (in addition to value='value++')
+ Medium: crmd: Add support for lrm_ops->fail_rsc if its available
+ Medium: crmd: HB - watch link status for node leaving events
+ Medium: crmd: Bug LF:1924 - Improved handling of lrmd disconnects and shutdowns
+ Medium: crmd: Do not wait for actions with a start_delay over 5 minutes. Confirm them immediately
+ Medium: pengine: Bug LF:1328 - Do not fence nodes in clusters without managed resources
+ Medium: pengine: Bug LF:1461 - Give transient node attributes (in <status/>) preference over persistent ones (in <nodes/>)
+ Medium: pengine: Bug LF:1884, Bug LF:1885 - Implement N:M ordering and colocation constraints
+ Medium: pengine: Bug LF:1886 - Create a resource and operation 'defaults' config section
+ Medium: pengine: Bug LF:1892 - Allow recurring actions to be triggered at known times
+ Medium: pengine: Bug LF:1926 - Probes should complete before stop actions are invoked
+ Medium: pengine: Fix standby when it is set as a transient attribute
+ Medium: pengine: Implement a global 'stop-all-resources' option
+ Medium: pengine: Implement cibpipe, a tool for performing/simulating config changes "offline"
+ Medium: pengine: We do not allow colocation with specific clone instances
+ Medium: Tools: pingd - Implement a stack-independent version of pingd
+ Medium: xml: Ship an xslt for upgrading from 0.6 to 0.7
* Thu Jun 19 2008 Andrew Beekhof <abeekhof@suse.de> - 0.6.5-1
- Update source tarball to revision: b9fe723d1ac5 tip
- Statistics:
Changesets: 48
Diff: 37 files changed, 1204 insertions(+), 234 deletions(-)
- Changes since Pacemaker-0.6.4
+ Admin: Repair the ability to delete failcounts
+ ais: Audit IPC handling between the AIS plugin and CRM processes
+ ais: Have the plugin create needed /var/lib directories
+ ais: Make sure the sync and async connections are assigned correctly (not swapped)
+ cib: Correctly detect configuration changes - num_updates does not count
+ pengine: Apply stickiness values to the whole group, not the individual resources
+ pengine: Bug N:385265 - Ensure groups are migrated instead of remaining partially active on the current node
+ pengine: Bug N:396293 - Enforce mandatory group restarts due to ordering constraints
+ pengine: Correctly recover master instances found active on more than one node
+ pengine: Fix memory leaks reported by Valgrind
+ Medium: Admin: crm_mon - Misc improvements from Satomi Taniguchi
+ Medium: Bug LF:1900 - Resource stickiness should not allow placement in asynchronous clusters
+ Medium: crmd: Ensure joins are completed promptly when a node taking part dies
+ Medium: pengine: Avoid clone instance shuffling in more cases
+ Medium: pengine: Bug LF:1906 - Remove an optimization in native_merge_weights() causing group scores to behave erratically
+ Medium: pengine: Make use of target_rc data to correctly process resource operations
+ Medium: pengine: Prevent a possible use of NULL in sort_clone_instance()
+ Medium: tengine: Include target rc in the transition key - used to correctly determine operation failure
* Thu May 22 2008 Andrew Beekhof <abeekhof@suse.de> - 0.6.4-1
- Update source tarball to revision: 226d8e356924 tip
- Statistics:
Changesets: 55
Diff: 199 files changed, 7103 insertions(+), 12378 deletions(-)
- Changes since Pacemaker-0.6.3
+ crmd: Bug LF:1881 LF:1882 - Overhaul the logic for operation cancellation and deletion
+ crmd: Bug LF:1894 - Make sure cancelled recurring operations are cleaned out from the CIB
+ pengine: Bug N:387749 - Colocation with clones causes unnecessary clone instance shuffling
+ pengine: Ensure 'master' monitor actions are cancelled _before_ we demote the resource
+ pengine: Fix assert failure leading to core dump - make sure variable is properly initialized
+ pengine: Make sure 'slave' monitoring happens after the resource has been demoted
+ pengine: Prevent failure stickiness underflows (where too many failures become a _positive_ preference)
+ Medium: Admin: crm_mon - Only complain if the output file could not be opened
+ Medium: Common: filter_action_parameters - enable legacy handling only for older versions
+ Medium: pengine: Bug N:385265 - The failure stickiness of group children is ignored until it reaches -INFINITY
+ Medium: pengine: Implement master and clone colocation by excluding nodes rather than setting one node's score to INFINITY (similar to cs: 756afc42dc51)
+ Medium: tengine: Bug LF:1875 - Correctly find actions to cancel when their node leaves the cluster
* Wed Apr 23 2008 Andrew Beekhof <abeekhof@suse.de> - 0.6.3-1
- Update source tarball to revision: fd8904c9bc67 tip
- Statistics:
Changesets: 117
Diff: 354 files changed, 19094 insertions(+), 11338 deletions(-)
- Changes since Pacemaker-0.6.2
+ Admin: Bug LF:1848 - crm_resource - Pass set name and id to delete_resource_attr() in the correct order
+ Build: SNMP has been moved to the management/pygui project
+ crmd: Bug LF1837 - Unmanaged resources prevent crmd from shutting down
+ crmd: Prevent use-after-free in lrm interface code (Patch based on work by Keisuke MORI)
+ pengine: Allow the cluster to make progress by not retrying failed demote actions
+ pengine: Anti-colocation with slave should not prevent master colocation
+ pengine: Bug LF 1768 - Wait more often for STONITH ops to complete before starting resources
+ pengine: Bug LF1836 - Allow is-managed-default=false to be overridden by individual resources
+ pengine: Bug LF185 - Prevent pointless master/slave instance shuffling by ignoring the master-pref of stopped instances
+ pengine: Bug N-191176 - Implement interleaved ordering for clone-to-clone scenarios
+ pengine: Bug N-347004 - Ensure clone notifications are always sent when an instance is stopped/started
+ pengine: Bug N-347004 - Ensure notification ordering is correct for interleaved clones
+ pengine: Bug PM-11 - Directly link probe_complete to starting clone instances
+ pengine: Bug PM1 - Fix setting failcounts when applied to complex resources
+ pengine: Bug PM12, LF1648 - Extensive revision of group ordering
+ pengine: Bug PM7 - Ensure masters are always demoted before they are stopped
+ pengine: Create probes after allocation to allow smarter handling of anonymous clones
+ pengine: Do not prioritize clone instances that must be moved
+ pengine: Fix error in previous commit that allowed more than the required number of masters to be promoted
+ pengine: Group start ordering fixes
+ pengine: Implement promote/demote ordering for cloned groups
+ tengine: Repair failcount updates
+ tengine: Use the correct offset when updating failcount
+ Medium: Admin: Add a summary output that can be easily parsed by CTS for audit purposes
+ Medium: Build: Make configure fail if bz2 or libxml2 are not present
+ Medium: Build: Re-instate a better default for LCRSODIR
+ Medium: CIB: Bug LF-1861 - Filter irrelevant error status from synchronous CIB clients
+ Medium: Core: Bug 1849 - Invalid conversion of ordinal leap year to gregorian date
+ Medium: Core: Drop compatibility code for 2.0.4 and 2.0.5 clusters
+ Medium: crmd: Bug LF-1860 - Automatically cancel recurring ops before demote and promote operations (not only stops)
+ Medium: crmd: Save the current CIB contents if we detect the PE crashed
+ Medium: pengine: Bug LF:1866 - Fix version check when applying compatibility handling for failed start operations
+ Medium: pengine: Bug LF:1866 - Restore the ability to have start failures not be fatal
+ Medium: pengine: Bug PM1 - Failcount applies to all instances of non-unique clone
+ Medium: pengine: Correctly set the state of partially active master/slave groups
+ Medium: pengine: Do not claim to be stopping an already stopped orphan
+ Medium: pengine: Ensure implies_left ordering constraints are always effective
+ Medium: pengine: Indicate each resource's 'promotion' score
+ Medium: pengine: Prevent a possible use-of-NULL
+ Medium: pengine: Reprocess the current action if it changed (so that any prior dependencies are updated)
+ Medium: tengine: Bug LF-1859 - Wait for fail-count updates to complete before terminating the transition
+ Medium: tengine: Bug LF:1859 - Do not abort graphs due to our own failcount updates
+ Medium: tengine: Bug LF:1859 - Prevent the TE from interrupting itself
* Thu Feb 14 2008 Andrew Beekhof <abeekhof@suse.de> - 0.6.2-1
- Update source tarball to revision: 28b1a8c1868b tip
- Statistics:
Changesets: 11
Diff: 7 files changed, 58 insertions(+), 18 deletions(-)
- Changes since Pacemaker-0.6.1
+ haresources2cib.py: set default-action-timeout to the default (20s)
+ haresources2cib.py: update ra parameters lists
+ Medium: SNMP: Allow the snmp subagent to be built (patch from MATSUDA, Daiki)
+ Medium: Tools: Make sure the autoconf variables in haresources2cib are expanded
* Tue Feb 12 2008 Andrew Beekhof <abeekhof@suse.de> - 0.6.1-1
- Update source tarball to revision: e7152d1be933 tip
- Statistics:
Changesets: 25
Diff: 37 files changed, 1323 insertions(+), 227 deletions(-)
- Changes since Pacemaker-0.6.0
+ CIB: Ensure changes to top-level attributes (like admin_epoch) cause a disk write
+ CIB: Ensure the archived file hits the disk before returning
+ CIB: Repair the ability to do 'atomic increment' updates (value="value++")
+ crmd: Bug #7 - Connecting to the crmd immediately after startup causes use-of-NULL
+ Medium: CIB: Mask cib_diff_resync results from the caller - they do not need to know
+ Medium: crmd: Delay starting the IPC server until we are fully functional
+ Medium: CTS: Fix the startup patterns
+ Medium: pengine: Bug 1820 - Allow the first resource in a group to be migrated
+ Medium: pengine: Bug 1820 - Check the colocation dependencies of resources to be migrated
* Mon Jan 14 2008 Andrew Beekhof <abeekhof@suse.de> - 0.6.0-1
- This is the first release of the Pacemaker Cluster Resource Manager formerly part of Heartbeat.
- For those looking for the GUI, mgmtd, CIM or TSA components, they are now found in
the new pacemaker-pygui project. Build dependencies prevent them from being
included in Heartbeat (since the built-in CRM is no longer supported) and,
being non-core components, are not included with Pacemaker.
- Update source tarball to revision: c94b92d550cf
- Statistics:
Changesets: 347
Diff: 2272 files changed, 132508 insertions(+), 305991 deletions(-)
- Test hardware:
+ 6-node vmware cluster (sles10-sp1/256MB/vmware stonith) on a single host (opensuse10.3/2GB/2.66GHz Quad Core2)
+ 7-node EMC Centera cluster (sles10/512MB/2GHz Xeon/ssh stonith)
- Notes: Heartbeat Stack
+ All testing was performed with STONITH enabled
+ The CRM was enabled using the "crm respawn" directive
- Notes: OpenAIS Stack
+ This release contains a preview of support for the OpenAIS cluster stack
+ The current release of the OpenAIS project is missing two important
patches that we require. OpenAIS packages containing these patches are
available for most major distributions at:
http://download.opensuse.org/repositories/server:/ha-clustering
+ The OpenAIS stack is not currently recommended for use in clusters that
have shared data as STONITH support is not yet implemented
+ pingd is not yet available for use with the OpenAIS stack
+ 3 significant OpenAIS issues were found during testing of 4 and 6 node
clusters. We are actively working together with the OpenAIS project to
get these resolved.
- Pending bugs encountered during testing:
+ OpenAIS #1736 - Openais membership took 20s to stabilize
+ Heartbeat #1750 - ipc_bufpool_update: magic number in head does not match
+ OpenAIS #1793 - Assertion failure in memb_state_gather_enter()
+ OpenAIS #1796 - Cluster message corruption
- Changes since Heartbeat-2.1.2-24
+ Add OpenAIS support
+ Admin: crm_uuid - Look in the right place for Heartbeat UUID files
+ admin: Exit and indicate a problem if the crmd exits while crmadmin is performing a query
+ cib: Fix CIB_OP_UPDATE calls that modify the whole CIB
+ cib: Fix compilation when supporting the heartbeat stack
+ cib: Fix memory leaks caused by the switch to get_message_xml()
+ cib: HA_VALGRIND_ENABLED needs to be set _and_ set to 1|yes|true
+ cib: Use get_message_xml() in preference to cl_get_struct()
+ cib: Use the return value from call to write() in cib_send_plaintext()
+ Core: ccm nodes can legitimately have a node id of 0
+ Core: Fix peer-process tracking for the Heartbeat stack
+ Core: Heartbeat does not send status notifications for nodes that were already part of the cluster. Fake them instead
+ CRM: Add children to HA_Messages such that the field name matches F_XML_TAGNAME
+ crm: Adopt a more flexible approach to enabling Valgrind
+ crm: Fix compilation when bzip2 is not installed
+ CRM: Future-proof get_message_xml()
+ crmd: Filter election responses based on time not FSA state
+ crmd: Handle all possible peer states in crmd_ha_status_callback()
+ crmd: Make sure the current date/time is set - prevents use-of-NULL when evaluating rules
+ crmd: Relax an assertion regarding ccm membership instances
+ crmd: Use (node->processes&crm_proc_ais) to accurately update the CIB after replace operations
+ crmd: Heartbeat: Accurately record peer client status
+ pengine: Bug 1777 - Allow colocation with a resource in the Stopped state
+ pengine: Bug 1822 - Prevent use-of-NULL in PromoteRsc()
+ pengine: Implement three recovery policies based on op_status and op_rc
+ pengine: Parse fail-count correctly (it may be set to INFINITY)
+ pengine: Prevent graph-loop when stonith agents need to be moved around before a STONITH op
+ pengine: Prevent graph-loops when two operations have the same name+interval
+ tengine: Cancel active timers when destroying graphs
+ tengine: Ensure failcount is set correctly for failed stops/starts
+ tengine: Update failcount for operations that time out
+ Medium: admin: Prevent hang in crm_mon -1 when there is no cib connection - Patch from Junko IKEDA
+ Medium: cib: Require --force|-f when performing potentially dangerous commands with cibadmin
+ Medium: cib: Tweak the shutdown code
+ Medium: Common: Only count peer processes of active nodes
+ Medium: Core: Create generic cluster sign-in method
+ Medium: core: Fix compilation when Heartbeat support is disabled
+ Medium: Core: General cleanup for supporting two stacks
+ Medium: Core: iso6601 - Support parsing of time-only strings
+ Medium: core: Isolate more code that is only needed when SUPPORT_HEARTBEAT is enabled
+ Medium: crm: Improved logging of errors in the XML parser
+ Medium: crmd: Fix potential use-of-NULL in string comparison
+ Medium: crmd: Reimplement synchronizing of CIB queries and updates when invoking the PE
+ Medium: crm_mon: Indicate when a node is both in standby mode and offline
+ Medium: pengine: Bug 1822 - Do not try to promote groups if not all of the group is active
+ Medium: pengine: on_fail=nothing is an alias for 'ignore' not 'restart'
+ Medium: pengine: Prevent a potential use-of-NULL in cron_range_satisfied()
+ snmp subagent: fix a problem on displaying an unmanaged group
+ snmp subagent: use the syslog setting
+ snmp: v2 support (thanks to Keisuke MORI)
+ snmp_subagent - made it not complain about some things if shutting down
diff --git a/crmd/te_actions.c b/crmd/te_actions.c
index 66dd16ebcc..fde44dbeae 100644
--- a/crmd/te_actions.c
+++ b/crmd/te_actions.c
@@ -1,764 +1,764 @@
/*
* Copyright (C) 2004 Andrew Beekhof <andrew@beekhof.net>
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public
* License as published by the Free Software Foundation; either
* version 2 of the License, or (at your option) any later version.
*
* This software is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*
* You should have received a copy of the GNU General Public
* License along with this library; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include <crm_internal.h>
#include <sys/param.h>
#include <crm/crm.h>
#include <crm/cib.h>
#include <crm/msg_xml.h>
#include <crm/common/xml.h>
#include <tengine.h>
#include <crmd_fsa.h>
#include <crmd_lrm.h>
#include <crmd_messages.h>
#include <crm/cluster.h>
#include <throttle.h>
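/* Transition engine state: te_uuid uniquely identifies this transition engine
 * instance (it is embedded in the transition keys of actions we initiate), and
 * te_targets tracks per-node in-flight job counts (see te_update_job_count()). */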
char *te_uuid = NULL;
GHashTable *te_targets = NULL;
void send_rsc_command(crm_action_t * action);
static void te_update_job_count(crm_action_t * action, int offset);
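/* Arm a timeout timer for an in-flight action: if no result arrives within the
 * action's timeout plus the graph-wide network delay, action_timer_callback()
 * fires with reason timeout_action. */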
static void
te_start_action_timer(crm_graph_t * graph, crm_action_t * action)
{
action->timer = calloc(1, sizeof(crm_action_timer_t));
action->timer->timeout = action->timeout;
action->timer->reason = timeout_action;
action->timer->action = action;
action->timer->source_id = g_timeout_add(action->timer->timeout + graph->network_delay,
action_timer_callback, (void *)action->timer);
CRM_ASSERT(action->timer->source_id != 0);
}
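/* Execute a pseudo-action (one with no real operation behind it). The special
 * CRM_OP_MAINTENANCE_NODES pseudo-action is relayed to every other cluster node
 * and handed to the remote-RA code locally; all pseudo-actions are then
 * confirmed immediately and the graph is re-evaluated. */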
static gboolean
te_pseudo_action(crm_graph_t * graph, crm_action_t * pseudo)
{
const char *task = crm_element_value(pseudo->xml, XML_LRM_ATTR_TASK);
/* send to peers as well? */
if (safe_str_eq(task, CRM_OP_MAINTENANCE_NODES)) {
GHashTableIter iter;
crm_node_t *node = NULL;
g_hash_table_iter_init(&iter, crm_peer_cache);
while (g_hash_table_iter_next(&iter, NULL, (gpointer *) &node)) {
xmlNode *cmd = NULL;
if (safe_str_eq(fsa_our_uname, node->uname)) {
continue;
}
cmd = create_request(task, pseudo->xml, node->uname,
CRM_SYSTEM_CRMD, CRM_SYSTEM_TENGINE, NULL);
send_cluster_message(node, crm_msg_crmd, cmd, FALSE);
free_xml(cmd);
}
remote_ra_process_maintenance_nodes(pseudo->xml);
} else {
/* Check action for Pacemaker Remote node side effects */
remote_ra_process_pseudo(pseudo->xml);
}
crm_debug("Pseudo-action %d (%s) fired and confirmed", pseudo->id,
crm_element_value(pseudo->xml, XML_LRM_ATTR_TASK_KEY));
te_action_confirmed(pseudo);
update_graph(graph, pseudo);
trigger_graph();
return TRUE;
}
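/* Record a completed fencing operation in the CIB: mark the peer as down,
 * build a node_state update (adding cluster-membership state only if the node
 * has never been seen), note the fencing time for Pacemaker Remote nodes, and
 * erase the target's resource history and transient attributes. */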
void
send_stonith_update(crm_action_t * action, const char *target, const char *uuid)
{
int rc = pcmk_ok;
crm_node_t *peer = NULL;
/* We (usually) rely on the membership layer to do node_update_cluster,
* and the peer status callback to do node_update_peer, because the node
* might have already rejoined before we get the stonith result here.
*/
int flags = node_update_join | node_update_expected;
/* zero out the node-status & remove all LRM status info */
xmlNode *node_state = NULL;
CRM_CHECK(target != NULL, return);
CRM_CHECK(uuid != NULL, return);
/* Make sure the membership and join caches are accurate */
peer = crm_get_peer_full(0, target, CRM_GET_PEER_ANY);
CRM_CHECK(peer != NULL, return);
if (peer->state == NULL) {
/* Usually, we rely on the membership layer to update the cluster state
* in the CIB. However, if the node has never been seen, do it here, so
* the node is not considered unclean.
*/
flags |= node_update_cluster;
}
if (peer->uuid == NULL) {
crm_info("Recording uuid '%s' for node '%s'", uuid, target);
peer->uuid = strdup(uuid);
}
crmd_peer_down(peer, TRUE);
/* Generate a node state update for the CIB */
node_state = create_node_state_update(peer, flags, NULL, __FUNCTION__);
/* we have to mark whether or not remote nodes have already been fenced */
if (peer->flags & crm_remote_node) {
time_t now = time(NULL);
char *now_s = crm_itoa(now);
crm_xml_add(node_state, XML_NODE_IS_FENCED, now_s);
free(now_s);
}
/* Force our known ID */
crm_xml_add(node_state, XML_ATTR_UUID, uuid);
rc = fsa_cib_conn->cmds->update(fsa_cib_conn, XML_CIB_TAG_STATUS, node_state,
cib_quorum_override | cib_scope_local | cib_can_create);
/* Delay processing the trigger until the update completes */
crm_debug("Sending fencing update %d for %s", rc, target);
fsa_register_cib_callback(rc, FALSE, strdup(target), cib_fencing_updated);
/* Make sure it sticks */
/* fsa_cib_conn->cmds->bump_epoch(fsa_cib_conn, cib_quorum_override|cib_scope_local); */
erase_status_tag(peer->uname, XML_CIB_TAG_LRM, cib_scope_local);
erase_status_tag(peer->uname, XML_TAG_TRANSIENT_NODEATTRS, cib_scope_local);
free_xml(node_state);
return;
}
static gboolean
te_fence_node(crm_graph_t * graph, crm_action_t * action)
{
int rc = 0;
const char *id = NULL;
const char *uuid = NULL;
const char *target = NULL;
const char *type = NULL;
gboolean invalid_action = FALSE;
enum stonith_call_options options = st_opt_none;
id = ID(action->xml);
target = crm_element_value(action->xml, XML_LRM_ATTR_TARGET);
uuid = crm_element_value(action->xml, XML_LRM_ATTR_TARGET_UUID);
type = crm_meta_value(action->params, "stonith_action");
CRM_CHECK(id != NULL, invalid_action = TRUE);
CRM_CHECK(uuid != NULL, invalid_action = TRUE);
CRM_CHECK(type != NULL, invalid_action = TRUE);
CRM_CHECK(target != NULL, invalid_action = TRUE);
if (invalid_action) {
crm_log_xml_warn(action->xml, "BadAction");
return FALSE;
}
crm_notice("Requesting fencing (%s) of node %s "
CRM_XS " action=%s timeout=%d",
type, target, id, transition_graph->stonith_timeout);
/* Passing NULL means block until we can connect... */
te_connect_stonith(NULL);
if (crmd_join_phase_count(crm_join_confirmed) == 1) {
options |= st_opt_allow_suicide;
}
rc = stonith_api->cmds->fence(stonith_api, options, target, type,
transition_graph->stonith_timeout / 1000, 0);
stonith_api->cmds->register_callback(stonith_api, rc, transition_graph->stonith_timeout / 1000,
st_opt_timeout_updates,
generate_transition_key(transition_graph->id, action->id,
0, te_uuid),
"tengine_stonith_callback", tengine_stonith_callback);
return TRUE;
}
static int
get_target_rc(crm_action_t * action)
{
const char *target_rc_s = crm_meta_value(action->params, XML_ATTR_TE_TARGET_RC);
if (target_rc_s != NULL) {
return crm_parse_int(target_rc_s, "0");
}
return 0;
}
static gboolean
te_crm_command(crm_graph_t * graph, crm_action_t * action)
{
char *counter = NULL;
xmlNode *cmd = NULL;
gboolean is_local = FALSE;
const char *id = NULL;
const char *task = NULL;
const char *value = NULL;
const char *on_node = NULL;
const char *router_node = NULL;
gboolean rc = TRUE;
gboolean no_wait = FALSE;
id = ID(action->xml);
task = crm_element_value(action->xml, XML_LRM_ATTR_TASK);
on_node = crm_element_value(action->xml, XML_LRM_ATTR_TARGET);
router_node = crm_element_value(action->xml, XML_LRM_ATTR_ROUTER_NODE);
if (!router_node) {
router_node = on_node;
}
CRM_CHECK(on_node != NULL && strlen(on_node) != 0,
crm_err("Corrupted command (id=%s) %s: no node", crm_str(id), crm_str(task));
return FALSE);
crm_info("Executing crm-event (%s): %s on %s%s%s",
crm_str(id), crm_str(task), on_node,
is_local ? " (local)" : "", no_wait ? " - no waiting" : "");
if (safe_str_eq(router_node, fsa_our_uname)) {
is_local = TRUE;
}
value = crm_meta_value(action->params, XML_ATTR_TE_NOWAIT);
if (crm_is_true(value)) {
no_wait = TRUE;
}
if (is_local && safe_str_eq(task, CRM_OP_SHUTDOWN)) {
/* defer until everything else completes */
crm_info("crm-event (%s) is a local shutdown", crm_str(id));
graph->completion_action = tg_shutdown;
graph->abort_reason = "local shutdown";
te_action_confirmed(action);
update_graph(graph, action);
trigger_graph();
return TRUE;
} else if (safe_str_eq(task, CRM_OP_SHUTDOWN)) {
crm_node_t *peer = crm_get_peer(0, router_node);
crm_update_peer_expected(__FUNCTION__, peer, CRMD_JOINSTATE_DOWN);
}
cmd = create_request(task, action->xml, router_node, CRM_SYSTEM_CRMD, CRM_SYSTEM_TENGINE, NULL);
counter =
generate_transition_key(transition_graph->id, action->id, get_target_rc(action), te_uuid);
crm_xml_add(cmd, XML_ATTR_TRANSITION_KEY, counter);
rc = send_cluster_message(crm_get_peer(0, router_node), crm_msg_crmd, cmd, TRUE);
free(counter);
free_xml(cmd);
if (rc == FALSE) {
crm_err("Action %d failed: send", action->id);
return FALSE;
} else if (no_wait) {
te_action_confirmed(action);
update_graph(graph, action);
trigger_graph();
} else {
if (action->timeout <= 0) {
crm_err("Action %d: %s on %s had an invalid timeout (%dms). Using %dms instead",
action->id, task, on_node, action->timeout, graph->network_delay);
action->timeout = graph->network_delay;
}
te_start_action_timer(graph, action);
}
return TRUE;
}
gboolean
cib_action_update(crm_action_t * action, int status, int op_rc)
{
lrmd_event_data_t *op = NULL;
xmlNode *state = NULL;
xmlNode *rsc = NULL;
xmlNode *xml_op = NULL;
xmlNode *action_rsc = NULL;
int rc = pcmk_ok;
const char *rsc_id = NULL;
const char *task = crm_element_value(action->xml, XML_LRM_ATTR_TASK);
const char *target = crm_element_value(action->xml, XML_LRM_ATTR_TARGET);
const char *task_uuid = crm_element_value(action->xml, XML_LRM_ATTR_TASK_KEY);
const char *target_uuid = crm_element_value(action->xml, XML_LRM_ATTR_TARGET_UUID);
int call_options = cib_quorum_override | cib_scope_local;
int target_rc = get_target_rc(action);
if (status == PCMK_LRM_OP_PENDING) {
crm_debug("%s %d: Recording pending operation %s on %s",
crm_element_name(action->xml), action->id, task_uuid, target);
} else {
crm_warn("%s %d: %s on %s timed out",
crm_element_name(action->xml), action->id, task_uuid, target);
}
action_rsc = find_xml_node(action->xml, XML_CIB_TAG_RESOURCE, TRUE);
if (action_rsc == NULL) {
return FALSE;
}
rsc_id = ID(action_rsc);
CRM_CHECK(rsc_id != NULL, crm_log_xml_err(action->xml, "Bad:action");
return FALSE);
/*
update the CIB
<node_state id="hadev">
<lrm>
<lrm_resources>
<lrm_resource id="rsc2" last_op="start" op_code="0" target="hadev"/>
*/
state = create_xml_node(NULL, XML_CIB_TAG_STATE);
crm_xml_add(state, XML_ATTR_UUID, target_uuid);
crm_xml_add(state, XML_ATTR_UNAME, target);
rsc = create_xml_node(state, XML_CIB_TAG_LRM);
crm_xml_add(rsc, XML_ATTR_ID, target_uuid);
rsc = create_xml_node(rsc, XML_LRM_TAG_RESOURCES);
rsc = create_xml_node(rsc, XML_LRM_TAG_RESOURCE);
crm_xml_add(rsc, XML_ATTR_ID, rsc_id);
crm_copy_xml_element(action_rsc, rsc, XML_ATTR_TYPE);
crm_copy_xml_element(action_rsc, rsc, XML_AGENT_ATTR_CLASS);
crm_copy_xml_element(action_rsc, rsc, XML_AGENT_ATTR_PROVIDER);
op = convert_graph_action(NULL, action, status, op_rc);
op->call_id = -1;
op->user_data = generate_transition_key(transition_graph->id, action->id, target_rc, te_uuid);
xml_op = create_operation_update(rsc, op, CRM_FEATURE_SET, target_rc, target, __FUNCTION__, LOG_INFO);
lrmd_free_event(op);
crm_trace("Updating CIB with \"%s\" (%s): %s %s on %s",
status < 0 ? "new action" : XML_ATTR_TIMEOUT,
crm_element_name(action->xml), crm_str(task), rsc_id, target);
crm_log_xml_trace(xml_op, "Op");
rc = fsa_cib_conn->cmds->update(fsa_cib_conn, XML_CIB_TAG_STATUS, state, call_options);
crm_trace("Updating CIB with %s action %d: %s on %s (call_id=%d)",
services_lrm_status_str(status), action->id, task_uuid, target, rc);
fsa_register_cib_callback(rc, FALSE, NULL, cib_action_updated);
free_xml(state);
action->sent_update = TRUE;
if (rc < pcmk_ok) {
return FALSE;
}
return TRUE;
}
static gboolean
te_rsc_command(crm_graph_t * graph, crm_action_t * action)
{
/* never overwrite stop actions in the CIB with
* anything other than completed results
*
* Writing pending stops makes it look like the
* resource is running again
*/
xmlNode *cmd = NULL;
xmlNode *rsc_op = NULL;
gboolean rc = TRUE;
gboolean no_wait = FALSE;
gboolean is_local = FALSE;
char *counter = NULL;
const char *task = NULL;
const char *value = NULL;
const char *on_node = NULL;
const char *router_node = NULL;
const char *task_uuid = NULL;
CRM_ASSERT(action != NULL);
CRM_ASSERT(action->xml != NULL);
action->executed = FALSE;
on_node = crm_element_value(action->xml, XML_LRM_ATTR_TARGET);
CRM_CHECK(on_node != NULL && strlen(on_node) != 0,
crm_err("Corrupted command(id=%s) %s: no node", ID(action->xml), crm_str(task));
return FALSE);
rsc_op = action->xml;
task = crm_element_value(rsc_op, XML_LRM_ATTR_TASK);
task_uuid = crm_element_value(action->xml, XML_LRM_ATTR_TASK_KEY);
router_node = crm_element_value(rsc_op, XML_LRM_ATTR_ROUTER_NODE);
if (!router_node) {
router_node = on_node;
}
counter =
generate_transition_key(transition_graph->id, action->id, get_target_rc(action), te_uuid);
crm_xml_add(rsc_op, XML_ATTR_TRANSITION_KEY, counter);
if (safe_str_eq(router_node, fsa_our_uname)) {
is_local = TRUE;
}
value = crm_meta_value(action->params, XML_ATTR_TE_NOWAIT);
if (crm_is_true(value)) {
no_wait = TRUE;
}
crm_notice("Initiating %s operation %s%s on %s%s "CRM_XS" action %d",
task, task_uuid, (is_local? " locally" : ""), on_node,
(no_wait? " without waiting" : ""), action->id);
cmd = create_request(CRM_OP_INVOKE_LRM, rsc_op, router_node,
CRM_SYSTEM_LRMD, CRM_SYSTEM_TENGINE, NULL);
if (is_local) {
/* shortcut local resource commands */
ha_msg_input_t data = {
.msg = cmd,
.xml = rsc_op,
};
fsa_data_t msg = {
.id = 0,
.data = &data,
.data_type = fsa_dt_ha_msg,
.fsa_input = I_NULL,
.fsa_cause = C_FSA_INTERNAL,
.actions = A_LRM_INVOKE,
.origin = __FUNCTION__,
};
do_lrm_invoke(A_LRM_INVOKE, C_FSA_INTERNAL, fsa_state, I_NULL, &msg);
} else {
rc = send_cluster_message(crm_get_peer(0, router_node), crm_msg_lrmd, cmd, TRUE);
}
free(counter);
free_xml(cmd);
action->executed = TRUE;
if (rc == FALSE) {
crm_err("Action %d failed: send", action->id);
return FALSE;
} else if (no_wait) {
crm_info("Action %d confirmed - no wait", action->id);
action->confirmed = TRUE; /* Just mark confirmed.
* Don't bump the job count only to immediately decrement it
*/
update_graph(transition_graph, action);
trigger_graph();
} else if (action->confirmed == TRUE) {
crm_debug("Action %d: %s %s on %s(timeout %dms) was already confirmed.",
action->id, task, task_uuid, on_node, action->timeout);
} else {
if (action->timeout <= 0) {
crm_err("Action %d: %s %s on %s had an invalid timeout (%dms). Using %dms instead",
action->id, task, task_uuid, on_node, action->timeout, graph->network_delay);
action->timeout = graph->network_delay;
}
te_update_job_count(action, 1);
te_start_action_timer(graph, action);
}
return TRUE;
}
struct te_peer_s
{
char *name;
int jobs;
int migrate_jobs;
};
static void te_peer_free(gpointer p)
{
struct te_peer_s *peer = p;
free(peer->name);
free(peer);
}
void te_reset_job_counts(void)
{
GHashTableIter iter;
struct te_peer_s *peer = NULL;
if(te_targets == NULL) {
te_targets = g_hash_table_new_full(crm_str_hash, g_str_equal, NULL, te_peer_free);
}
g_hash_table_iter_init(&iter, te_targets);
while (g_hash_table_iter_next(&iter, NULL, (gpointer *) & peer)) {
peer->jobs = 0;
peer->migrate_jobs = 0;
}
}
static void
te_update_job_count_on(const char *target, int offset, bool migrate)
{
struct te_peer_s *r = NULL;
if(target == NULL || te_targets == NULL) {
return;
}
r = g_hash_table_lookup(te_targets, target);
if(r == NULL) {
r = calloc(1, sizeof(struct te_peer_s));
r->name = strdup(target);
g_hash_table_insert(te_targets, r->name, r);
}
r->jobs += offset;
if(migrate) {
r->migrate_jobs += offset;
}
crm_trace("jobs[%s] = %d", target, r->jobs);
}
static void
te_update_job_count(crm_action_t * action, int offset)
{
const char *task = crm_element_value(action->xml, XML_LRM_ATTR_TASK);
const char *target = crm_element_value(action->xml, XML_LRM_ATTR_TARGET);
if (action->type != action_type_rsc || target == NULL) {
/* No limit on these */
return;
}
/* if we have a router node, this means the action is performing
- * on a remote node. For now, we count all action occuring on a
+ * on a remote node. For now, we count all actions occurring on a
* remote node against the job list on the cluster node hosting
* the connection resources */
target = crm_element_value(action->xml, XML_LRM_ATTR_ROUTER_NODE);
if ((target == NULL) &&
(safe_str_eq(task, CRMD_ACTION_MIGRATE) || safe_str_eq(task, CRMD_ACTION_MIGRATED))) {
const char *t1 = crm_meta_value(action->params, XML_LRM_ATTR_MIGRATE_SOURCE);
const char *t2 = crm_meta_value(action->params, XML_LRM_ATTR_MIGRATE_TARGET);
te_update_job_count_on(t1, offset, TRUE);
te_update_job_count_on(t2, offset, TRUE);
return;
} else if (target == NULL) {
target = crm_element_value(action->xml, XML_LRM_ATTR_TARGET);
}
te_update_job_count_on(target, offset, FALSE);
}
static gboolean
te_should_perform_action_on(crm_graph_t * graph, crm_action_t * action, const char *target)
{
int limit = 0;
struct te_peer_s *r = NULL;
const char *task = crm_element_value(action->xml, XML_LRM_ATTR_TASK);
const char *id = crm_element_value(action->xml, XML_LRM_ATTR_TASK_KEY);
if(target == NULL) {
/* No limit on these */
return TRUE;
} else if(te_targets == NULL) {
return FALSE;
}
r = g_hash_table_lookup(te_targets, target);
limit = throttle_get_job_limit(target);
if(r == NULL) {
r = calloc(1, sizeof(struct te_peer_s));
r->name = strdup(target);
g_hash_table_insert(te_targets, r->name, r);
}
if(limit <= r->jobs) {
crm_trace("Peer %s is over their job limit of %d (%d): deferring %s",
target, limit, r->jobs, id);
return FALSE;
} else if(graph->migration_limit > 0 && r->migrate_jobs >= graph->migration_limit) {
if (safe_str_eq(task, CRMD_ACTION_MIGRATE) || safe_str_eq(task, CRMD_ACTION_MIGRATED)) {
crm_trace("Peer %s is over their migration job limit of %d (%d): deferring %s",
target, graph->migration_limit, r->migrate_jobs, id);
return FALSE;
}
}
crm_trace("Peer %s has not hit their limit yet. current jobs = %d limit= %d limit", target, r->jobs, limit);
return TRUE;
}
static gboolean
te_should_perform_action(crm_graph_t * graph, crm_action_t * action)
{
const char *target = NULL;
const char *task = crm_element_value(action->xml, XML_LRM_ATTR_TASK);
if (action->type != action_type_rsc) {
/* No limit on these */
return TRUE;
}
/* if we have a router node, this means the action is performing
- * on a remote node. For now, we count all action occuring on a
+ * on a remote node. For now, we count all actions occurring on a
* remote node against the job list on the cluster node hosting
* the connection resources */
target = crm_element_value(action->xml, XML_LRM_ATTR_ROUTER_NODE);
if ((target == NULL) &&
(safe_str_eq(task, CRMD_ACTION_MIGRATE) || safe_str_eq(task, CRMD_ACTION_MIGRATED))) {
target = crm_meta_value(action->params, XML_LRM_ATTR_MIGRATE_SOURCE);
if(te_should_perform_action_on(graph, action, target) == FALSE) {
return FALSE;
}
target = crm_meta_value(action->params, XML_LRM_ATTR_MIGRATE_TARGET);
} else if (target == NULL) {
target = crm_element_value(action->xml, XML_LRM_ATTR_TARGET);
}
return te_should_perform_action_on(graph, action, target);
}
void
te_action_confirmed(crm_action_t * action)
{
const char *target = crm_element_value(action->xml, XML_LRM_ATTR_TARGET);
if (action->confirmed == FALSE && action->type == action_type_rsc && target != NULL) {
te_update_job_count(action, -1);
}
action->confirmed = TRUE;
}
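/* A minimal sketch of how the per-peer job throttling above fits together,
 * assuming a hypothetical peer "node1" whose throttle_get_job_limit() is 2:
 * te_rsc_command() counts each unconfirmed action via te_update_job_count(),
 * te_should_perform_action_on() defers further actions once the limit is
 * reached, and te_action_confirmed() releases the slot again.
 */
#if 0
te_reset_job_counts(); /* transition start: every peer's counters are zero */
te_update_job_count_on("node1", 1, FALSE); /* first action dispatched */
te_update_job_count_on("node1", 1, FALSE); /* second action dispatched */
/* a third action is deferred here, because limit (2) <= jobs (2) */
te_update_job_count_on("node1", -1, FALSE); /* one action confirmed: room again */
#endif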
crm_graph_functions_t te_graph_fns = {
te_pseudo_action,
te_rsc_command,
te_crm_command,
te_fence_node,
te_should_perform_action,
};
void
notify_crmd(crm_graph_t * graph)
{
const char *type = "unknown";
enum crmd_fsa_input event = I_NULL;
crm_debug("Processing transition completion in state %s", fsa_state2string(fsa_state));
CRM_CHECK(graph->complete, graph->complete = TRUE);
switch (graph->completion_action) {
case tg_stop:
type = "stop";
if (fsa_state == S_TRANSITION_ENGINE) {
event = I_TE_SUCCESS;
}
break;
case tg_done:
type = "done";
if (fsa_state == S_TRANSITION_ENGINE) {
event = I_TE_SUCCESS;
}
break;
case tg_restart:
type = "restart";
if (fsa_state == S_TRANSITION_ENGINE) {
if (transition_timer->period_ms > 0) {
crm_timer_stop(transition_timer);
crm_timer_start(transition_timer);
} else {
event = I_PE_CALC;
}
} else if (fsa_state == S_POLICY_ENGINE) {
register_fsa_action(A_PE_INVOKE);
}
break;
case tg_shutdown:
type = "shutdown";
if (is_set(fsa_input_register, R_SHUTDOWN)) {
event = I_STOP;
} else {
crm_err("We didn't ask to be shut down, yet our PE is telling us to.");
event = I_TERMINATE;
}
}
crm_debug("Transition %d status: %s - %s", graph->id, type, crm_str(graph->abort_reason));
graph->abort_reason = NULL;
graph->completion_action = tg_done;
clear_bit(fsa_input_register, R_IN_TRANSITION);
if (event != I_NULL) {
register_fsa_input(C_FSA_INTERNAL, event, NULL);
} else if (fsa_source) {
mainloop_set_trigger(fsa_source);
}
}
diff --git a/extra/cluster-helper b/extra/cluster-helper
index 3b36f70b25..1b77a7ee62 100755
--- a/extra/cluster-helper
+++ b/extra/cluster-helper
@@ -1,193 +1,193 @@
#!/bin/bash
hosts=
group=$cluster_name
user=root
pdsh=`which pdsh 2>/dev/null`
ssh=`which qarsh 2>/dev/null`
scp=`which qacp 2>/dev/null`
command=list
format=oneline
replace="{}"
if [ x$ssh = "x" ]; then
ssh=ssh
scp=scp
fi
function helptext() {
echo "cluster-helper - A tool for running commands on multiple hosts"
echo ""
echo "Attempt to use pdsh, qarsh, or ssh (in that order) to execute commands"
echo "on mutiple hosts"
echo ""
echo "DSH groups can be configured and specified with -g instead of listing"
echo "the individual hosts every time"
echo ""
echo "Usage: cluster-helper [options] [command]"
echo ""
echo "Options:"
echo "--ssh Force the use of ssh instead of qarsh even if it available"
echo "-g, --group Specify the group to operate on/with"
echo "-w, --host Specify a host to operate on/with. May be specified multiple times"
echo "-f, --format Specifiy the output format When listing hosts or group contents"
echo " Allowed values: [oneline], long, short, pdsh, bullet"
echo ""
echo ""
echo "Commands:"
echo "--list format List the contents of a group in the specified format"
echo "--add name Add supplied (-w) hosts to the named group"
echo "--create name Create the named group with the supplied (-w) hosts"
echo "--run, -- Treat all subsequent arguments as a command to perform on"
echo " the specified command on the hosts or group"
- echo "--xargs Run the supplied command having replaced any occurances"
+ echo "--xargs Run the supplied command having replaced any occurrences"
echo " of {} with the node name"
echo ""
echo "--copy file(s) host:file Pass subsequent arguments to scp or qacp"
- echo " Any occurances of {} are replaced with the node name"
+ echo " Any occurrences of {} are replaced with the node name"
echo "--key Install an ssh key"
echo ""
exit $1
}
while true ; do
case "$1" in
--help|-h|-\?) helptext 0;;
-x) set -x; shift;;
--ssh) ssh="ssh"; scp="scp"; pdsh=""; shift;;
-g|--group) group="$2"; shift; shift;;
-w|--host) for h in $2; do
hosts="$hosts $h";
done
shift; shift;;
-f|--format) format=$2; shift; shift;;
-I) replace=$2; shift; shift;;
--list|list) format=$2; command=list; shift; shift;;
--add|add) command=group-add; shift;;
--create|create) group="$2", command=group-create; shift; shift;;
--run|run) command=run; shift;;
--copy|copy) command=copy; shift; break ;;
--key|key) command=key; shift; break ;;
--xargs) command=xargs; shift; break ;;
--) command=run; shift; break ;;
"") break;;
*) helptext 1;;
esac
done
if [ x"$group" = x -a x"$hosts" = x ]; then
group=$CTS_GROUP
fi
function expand() {
fmt=$1
if [ x$group != x -a -f ~/.dsh/group/$group ]; then
hosts=`cat ~/.dsh/group/$group`
elif [ x$group != x ]; then
echo "Unknown group: $group" >&2
exit 1
fi
if [ "x$hosts" != x -a $fmt = oneline ]; then
echo $hosts
elif [ "x$hosts" != x -a $fmt = short ]; then
( for h in $hosts; do
echo $h | sed 's:\..*::'
done ) | tr '\n' ' '
echo ""
elif [ "x$hosts" != x -a $fmt = pdsh ]; then
( for h in $hosts; do
echo "-w $h"
done ) | tr '\n' ' '
echo ""
elif [ "x$hosts" != x -a $fmt = long ]; then
for h in $hosts; do
echo $h
done
elif [ "x$hosts" != x -a $fmt = bullet ]; then
for h in $hosts; do
echo " * $h"
done
elif [ "x$hosts" != x ]; then
echo "Unknown format: $fmt" >&2
fi
}
if [ $command = list ]; then
expand $format
elif [ $command = key ]; then
hosts=`expand oneline`
for h in $hosts; do
ssh-copy-id root@$h
done
elif [ $command = group-create ]; then
f=`mktemp`
mkdir -p ~/.dsh/group
if [ -f ~/.dsh/group/$group ]; then
echo "Overwriting existing group $group"
fi
for h in $hosts; do
echo $h >> $f
done
echo "Creating group $group in ~/.dsh/group"
sort -u $f > ~/.dsh/group/$group
rm -f $f
elif [ $command = group-add ]; then
if [ x$group = x ]; then
echo "Please specify a group to append to"
exit 1
fi
f=`mktemp`
mkdir -p ~/.dsh/group
if [ -f ~/.dsh/group/$group ]; then
cat ~/.dsh/group/$group > $f
fi
for h in $hosts; do
echo $h >> $f
done
echo "Appending hosts to group $group in ~/.dsh/group"
sort -u $f > ~/.dsh/group/$group
rm -f $f
elif [ $command = run ]; then
if [ x$pdsh != x ]; then
hosts=`expand pdsh`
$pdsh -l $user $hosts -- $*
else
hosts=`expand oneline`
for n in $hosts; do
$ssh -l $user $n -- $* < /dev/null
done
if [ x"$hosts" = x ]; then
echo "No hosts specified"
fi
fi
elif [ $command = copy ]; then
hosts=`expand oneline`
for n in $hosts; do
$scp `echo $* | sed 's@'$replace'@'$n'@'`
done
elif [ $command = xargs ]; then
hosts=`expand oneline`
for n in $hosts; do
eval `echo $* | sed 's@'$replace'@'$n'@'`
done
fi
diff --git a/include/crm/common/logging.h b/include/crm/common/logging.h
index b0d3078cbd..8ccca37a42 100644
--- a/include/crm/common/logging.h
+++ b/include/crm/common/logging.h
@@ -1,276 +1,276 @@
/*
* Copyright (C) 2004 Andrew Beekhof <andrew@beekhof.net>
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2 of the License, or (at your option) any later version.
*
* This software is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*/
/**
* \file
* \brief Wrappers for and extensions to libqb logging
* \ingroup core
*/
#ifndef CRM_LOGGING__H
# define CRM_LOGGING__H
# include <qb/qblog.h>
# ifndef LOG_TRACE
# define LOG_TRACE LOG_DEBUG+1
# endif
# define LOG_DEBUG_2 LOG_TRACE
# define LOG_DEBUG_3 LOG_TRACE
# define LOG_DEBUG_4 LOG_TRACE
# define LOG_DEBUG_5 LOG_TRACE
# define LOG_DEBUG_6 LOG_TRACE
/* "Extended information" logging support */
#ifdef QB_XS
# define CRM_XS QB_XS
# define crm_extended_logging(t, e) qb_log_ctl((t), QB_LOG_CONF_EXTENDED, (e))
#else
# define CRM_XS "|"
/* A caller might want to check the return value, so we can't define this as a
* no-op, and we can't simply define it to be 0 because gcc will then complain
* when the value isn't checked.
*/
static inline int
crm_extended_logging(int t, int e)
{
return 0;
}
#endif
extern unsigned int crm_log_level;
extern gboolean crm_config_error;
extern gboolean crm_config_warning;
extern unsigned int crm_trace_nonlog;
enum xml_log_options
{
xml_log_option_filtered = 0x0001,
xml_log_option_formatted = 0x0002,
xml_log_option_text = 0x0004, /* add this option to dump text into xml */
xml_log_option_diff_plus = 0x0010,
xml_log_option_diff_minus = 0x0020,
xml_log_option_diff_short = 0x0040,
xml_log_option_diff_all = 0x0100,
xml_log_option_dirty_add = 0x1000,
xml_log_option_open = 0x2000,
xml_log_option_children = 0x4000,
xml_log_option_close = 0x8000,
};
void crm_enable_blackbox(int nsig);
void crm_disable_blackbox(int nsig);
void crm_write_blackbox(int nsig, struct qb_log_callsite *callsite);
void crm_update_callsites(void);
void crm_log_deinit(void);
gboolean crm_log_cli_init(const char *entity);
void crm_log_preinit(const char *entity, int argc, char **argv);
gboolean crm_log_init(const char *entity, uint8_t level, gboolean daemon,
gboolean to_stderr, int argc, char **argv, gboolean quiet);
void crm_log_args(int argc, char **argv);
void crm_log_output_fn(const char *file, const char *function, int line, int level,
const char *prefix, const char *output);
# define crm_log_output(level, prefix, output) crm_log_output_fn(__FILE__, __FUNCTION__, __LINE__, level, prefix, output)
gboolean crm_add_logfile(const char *filename);
void crm_bump_log_level(int argc, char **argv);
void crm_enable_stderr(int enable);
gboolean crm_is_callsite_active(struct qb_log_callsite *cs, uint8_t level, uint32_t tags);
void log_data_element(int log_level, const char *file, const char *function, int line,
const char *prefix, xmlNode * data, int depth, gboolean formatted);
char *crm_strdup_printf (char const *format, ...) __attribute__ ((__format__ (__printf__, 1, 2)));
/* returns the old value */
unsigned int set_crm_log_level(unsigned int level);
unsigned int get_crm_log_level(void);
/*
* Throughout the macros below, note the leading, pre-comma, space in the
- * various ' , ##args' occurences to aid portability across versions of 'gcc'.
+ * various ' , ##args' occurrences to aid portability across versions of 'gcc'.
* http://gcc.gnu.org/onlinedocs/cpp/Variadic-Macros.html#Variadic-Macros
*/
#if defined(__clang__)
# define CRM_TRACE_INIT_DATA(name)
# else
# define CRM_TRACE_INIT_DATA(name) QB_LOG_INIT_DATA(name)
#endif
/*!
* \brief Log a message
*
* \param[in] level Severity at which to log the message
* \param[in] fmt printf-style format string for message
* \param[in] args Any arguments needed by format string
*/
# define do_crm_log(level, fmt, args...) \
qb_log_from_external_source(__func__, __FILE__, fmt, level, __LINE__, 0 , ##args)
/*!
* \brief Log a message that is likely to be filtered out
*
* \param[in] level Severity at which to log the message
* \param[in] fmt printf-style format string for message
* \param[in] args Any arguments needed by format string
*/
# define do_crm_log_unlikely(level, fmt, args...) do { \
static struct qb_log_callsite *trace_cs = NULL; \
if(trace_cs == NULL) { \
trace_cs = qb_log_callsite_get(__func__, __FILE__, fmt, level, __LINE__, 0); \
} \
if (crm_is_callsite_active(trace_cs, level, 0)) { \
qb_log_from_external_source( \
__func__, __FILE__, fmt, level, __LINE__, 0 , ##args); \
} \
} while(0)
# define CRM_LOG_ASSERT(expr) do { \
if(__unlikely((expr) == FALSE)) { \
static struct qb_log_callsite *core_cs = NULL; \
if(core_cs == NULL) { \
core_cs = qb_log_callsite_get(__func__, __FILE__, "log-assert", LOG_TRACE, __LINE__, 0); \
} \
crm_abort(__FILE__, __FUNCTION__, __LINE__, #expr, \
core_cs?core_cs->targets:FALSE, TRUE); \
} \
} while(0)
/* 'failure_action' MUST NOT be 'continue' as it will apply to the
* macro's do-while loop
*/
# define CRM_CHECK(expr, failure_action) do { \
if(__unlikely((expr) == FALSE)) { \
static struct qb_log_callsite *core_cs = NULL; \
if(core_cs == NULL) { \
core_cs = qb_log_callsite_get(__func__, __FILE__, "check-assert", LOG_TRACE, __LINE__, 0); \
} \
crm_abort(__FILE__, __FUNCTION__, __LINE__, #expr, \
core_cs?core_cs->targets:FALSE, TRUE); \
failure_action; \
} \
} while(0)
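/* A minimal usage sketch, with a hypothetical caller for illustration: the
 * failure_action is typically 'return' or 'goto' (as in the callers of this
 * macro elsewhere in the tree). A 'continue' would only terminate the macro's
 * internal do-while(0) loop and fall straight through, rather than continuing
 * any loop in the caller.
 */
#if 0
static gboolean
example_caller(xmlNode *xml)
{
CRM_CHECK(xml != NULL, return FALSE); /* logs the failed check, then leaves the function */
return TRUE;
}
#endif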
# define do_crm_log_xml(level, text, xml) do { \
static struct qb_log_callsite *xml_cs = NULL; \
if(xml_cs == NULL) { \
xml_cs = qb_log_callsite_get(__func__, __FILE__, "xml-blob", level, __LINE__, 0); \
} \
if (crm_is_callsite_active(xml_cs, level, 0)) { \
log_data_element(level, __FILE__, __FUNCTION__, __LINE__, text, xml, 1, xml_log_option_formatted); \
} \
} while(0)
/*!
* \brief Log a message as if it came from a different code location
*
* \param[in] level Severity at which to log the message
* \param[in] file Source file name to use instead of __FILE__
* \param[in] function Source function name to use instead of __func__
* \param[in] line Source line number to use instead of __LINE__
* \param[in] fmt printf-style format string for message
* \param[in] args Any arguments needed by format string
*/
# define do_crm_log_alias(level, file, function, line, fmt, args...) do { \
if(level > 0) { \
qb_log_from_external_source(function, file, fmt, level, line, 0 , ##args); \
} else { \
printf(fmt "\n" , ##args); \
} \
} while(0)
/*!
* \brief Log a message using constant severity
*
* \param[in] level Severity at which to log the message
* \param[in] fmt printf-style format string for message
* \param[in] args Any arguments needed by format string
*
* \note level and fmt /MUST/ be constants else compilation may fail
*/
# define do_crm_log_always(level, fmt, args...) qb_log(level, fmt , ##args)
/*!
* \brief Log a system error message
*
* \param[in] level Severity at which to log the message
* \param[in] fmt printf-style format string for message
* \param[in] args Any arguments needed by format string
*
* \note Because crm_perror() adds the system error message and error number
* onto the end of fmt, that information will become extended information
* if CRM_XS is used inside fmt and will not show up in syslog.
*/
# define crm_perror(level, fmt, args...) do { \
const char *err = strerror(errno); \
/* cast to int makes coverity happy when level == 0 */ \
if (level <= (int)crm_log_level) { \
fprintf(stderr, fmt ": %s (%d)\n" , ##args, err, errno); \
} \
do_crm_log(level, fmt ": %s (%d)" , ##args, err, errno); \
} while(0)
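/* A minimal usage sketch of crm_perror(), with a hypothetical failed stat()
 * call: the strerror() text and errno value are appended automatically.
 */
#if 0
struct stat st;
if (stat("/var/log/pacemaker.log", &st) < 0) {
crm_perror(LOG_WARNING, "Cannot stat %s", "/var/log/pacemaker.log");
/* logs e.g. "Cannot stat /var/log/pacemaker.log: No such file or directory (2)" */
}
#endif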
# define crm_log_tag(level, tag, fmt, args...) do { \
static struct qb_log_callsite *trace_tag_cs = NULL; \
int converted_tag = g_quark_try_string(tag); \
if(trace_tag_cs == NULL) { \
trace_tag_cs = qb_log_callsite_get(__func__, __FILE__, fmt, level, __LINE__, converted_tag); \
} \
if (crm_is_callsite_active(trace_tag_cs, level, converted_tag)) { \
qb_log_from_external_source(__func__, __FILE__, fmt, level, \
__LINE__, converted_tag , ##args); \
} \
} while(0)
# define crm_crit(fmt, args...) qb_logt(LOG_CRIT, 0, fmt , ##args)
# define crm_err(fmt, args...) qb_logt(LOG_ERR, 0, fmt , ##args)
# define crm_warn(fmt, args...) qb_logt(LOG_WARNING, 0, fmt , ##args)
# define crm_notice(fmt, args...) qb_logt(LOG_NOTICE, 0, fmt , ##args)
# define crm_info(fmt, args...) qb_logt(LOG_INFO, 0, fmt , ##args)
# define crm_debug(fmt, args...) do_crm_log_unlikely(LOG_DEBUG, fmt , ##args)
# define crm_trace(fmt, args...) do_crm_log_unlikely(LOG_TRACE, fmt , ##args)
# define crm_log_xml_crit(xml, text) do_crm_log_xml(LOG_CRIT, text, xml)
# define crm_log_xml_err(xml, text) do_crm_log_xml(LOG_ERR, text, xml)
# define crm_log_xml_warn(xml, text) do_crm_log_xml(LOG_WARNING, text, xml)
# define crm_log_xml_notice(xml, text) do_crm_log_xml(LOG_NOTICE, text, xml)
# define crm_log_xml_info(xml, text) do_crm_log_xml(LOG_INFO, text, xml)
# define crm_log_xml_debug(xml, text) do_crm_log_xml(LOG_DEBUG, text, xml)
# define crm_log_xml_trace(xml, text) do_crm_log_xml(LOG_TRACE, text, xml)
# define crm_log_xml_explicit(xml, text) do { \
static struct qb_log_callsite *digest_cs = NULL; \
digest_cs = qb_log_callsite_get( \
__func__, __FILE__, text, LOG_TRACE, __LINE__, \
crm_trace_nonlog); \
if (digest_cs && digest_cs->targets) { \
do_crm_log_xml(LOG_TRACE, text, xml); \
} \
} while(0)
# define crm_str(x) (const char*)(x?x:"<null>")
#endif
diff --git a/lib/common/logging.c b/lib/common/logging.c
index b9a1c805b1..6e8d67b5fd 100644
--- a/lib/common/logging.c
+++ b/lib/common/logging.c
@@ -1,1266 +1,1267 @@
/*
* Copyright (C) 2004 Andrew Beekhof <andrew@beekhof.net>
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include <crm_internal.h>
#include <sys/param.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <sys/stat.h>
#include <sys/utsname.h>
#include <stdio.h>
#include <unistd.h>
#include <string.h>
#include <stdlib.h>
#include <limits.h>
#include <ctype.h>
#include <pwd.h>
#include <grp.h>
#include <time.h>
#include <libgen.h>
#include <signal.h>
#include <bzlib.h>
#include <qb/qbdefs.h>
#include <crm/crm.h>
#include <crm/common/mainloop.h>
unsigned int crm_log_priority = LOG_NOTICE;
unsigned int crm_log_level = LOG_INFO;
static gboolean crm_tracing_enabled(void);
unsigned int crm_trace_nonlog = 0;
bool crm_is_daemon = 0;
#ifdef HAVE_G_LOG_SET_DEFAULT_HANDLER
GLogFunc glib_log_default;
static void
crm_glib_handler(const gchar * log_domain, GLogLevelFlags flags, const gchar * message,
gpointer user_data)
{
int log_level = LOG_WARNING;
GLogLevelFlags msg_level = (flags & G_LOG_LEVEL_MASK);
static struct qb_log_callsite *glib_cs = NULL;
if (glib_cs == NULL) {
glib_cs = qb_log_callsite_get(__FUNCTION__, __FILE__, "glib-handler", LOG_DEBUG, __LINE__, crm_trace_nonlog);
}
switch (msg_level) {
case G_LOG_LEVEL_CRITICAL:
log_level = LOG_CRIT;
if (crm_is_callsite_active(glib_cs, LOG_DEBUG, 0) == FALSE) {
/* log and record how we got here */
crm_abort(__FILE__, __FUNCTION__, __LINE__, message, TRUE, TRUE);
}
break;
case G_LOG_LEVEL_ERROR:
log_level = LOG_ERR;
break;
case G_LOG_LEVEL_MESSAGE:
log_level = LOG_NOTICE;
break;
case G_LOG_LEVEL_INFO:
log_level = LOG_INFO;
break;
case G_LOG_LEVEL_DEBUG:
log_level = LOG_DEBUG;
break;
case G_LOG_LEVEL_WARNING:
case G_LOG_FLAG_RECURSION:
case G_LOG_FLAG_FATAL:
case G_LOG_LEVEL_MASK:
log_level = LOG_WARNING;
break;
}
do_crm_log(log_level, "%s: %s", log_domain, message);
}
#endif
#ifndef NAME_MAX
# define NAME_MAX 256
#endif
static void
crm_trigger_blackbox(int nsig)
{
if(nsig == SIGTRAP) {
/* Turn it on if it wasn't already */
crm_enable_blackbox(nsig);
}
crm_write_blackbox(nsig, NULL);
}
const char *
daemon_option(const char *option)
{
char env_name[NAME_MAX];
const char *value = NULL;
snprintf(env_name, NAME_MAX, "PCMK_%s", option);
value = getenv(env_name);
if (value != NULL) {
crm_trace("Found %s = %s", env_name, value);
return value;
}
snprintf(env_name, NAME_MAX, "HA_%s", option);
value = getenv(env_name);
if (value != NULL) {
crm_trace("Found %s = %s", env_name, value);
return value;
}
crm_trace("Nothing found for %s", option);
return NULL;
}
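/* A minimal usage sketch, assuming an environment that sets
 * PCMK_logfile=/var/log/pacemaker.log and leaves HA_logfile unset:
 */
#if 0
const char *logfile = daemon_option("logfile"); /* finds PCMK_logfile first */
/* logfile is now "/var/log/pacemaker.log"; HA_logfile would only be
 * consulted as a fallback if PCMK_logfile were not set */
#endif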
void
set_daemon_option(const char *option, const char *value)
{
char env_name[NAME_MAX];
snprintf(env_name, NAME_MAX, "PCMK_%s", option);
if (value) {
crm_trace("Setting %s to %s", env_name, value);
setenv(env_name, value, 1);
} else {
crm_trace("Unsetting %s", env_name);
unsetenv(env_name);
}
snprintf(env_name, NAME_MAX, "HA_%s", option);
if (value) {
crm_trace("Setting %s to %s", env_name, value);
setenv(env_name, value, 1);
} else {
crm_trace("Unsetting %s", env_name);
unsetenv(env_name);
}
}
gboolean
daemon_option_enabled(const char *daemon, const char *option)
{
const char *value = daemon_option(option);
if (value != NULL && crm_is_true(value)) {
return TRUE;
} else if (value != NULL && strstr(value, daemon)) {
return TRUE;
}
return FALSE;
}
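/* A minimal usage sketch: the option counts as enabled when its value is a
 * boolean true, or when the value contains this daemon's name. So, for
 * example, PCMK_debug=crmd turns debug logging on for the crmd only, while
 * PCMK_debug=yes turns it on for every daemon.
 */
#if 0
if (daemon_option_enabled(crm_system_name, "debug")) {
crm_log_level = LOG_DEBUG;
}
#endif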
void
crm_log_deinit(void)
{
#ifdef HAVE_G_LOG_SET_DEFAULT_HANDLER
g_log_set_default_handler(glib_log_default, NULL);
#endif
}
#define FMT_MAX 256
static void
set_format_string(int method, const char *daemon)
{
int offset = 0;
char fmt[FMT_MAX];
if (method > QB_LOG_STDERR) {
/* When logging to a file */
struct utsname res;
if (uname(&res) == 0) {
offset +=
snprintf(fmt + offset, FMT_MAX - offset, "%%t [%d] %s %10s: ", getpid(),
res.nodename, daemon);
} else {
offset += snprintf(fmt + offset, FMT_MAX - offset, "%%t [%d] %10s: ", getpid(), daemon);
}
}
if (method == QB_LOG_SYSLOG) {
offset += snprintf(fmt + offset, FMT_MAX - offset, "%%g %%-7p: %%b");
crm_extended_logging(method, QB_FALSE);
} else if (crm_tracing_enabled()) {
offset += snprintf(fmt + offset, FMT_MAX - offset, "(%%-12f:%%5l %%g) %%-7p: %%n:\t%%b");
} else {
offset += snprintf(fmt + offset, FMT_MAX - offset, "%%g %%-7p: %%n:\t%%b");
}
CRM_LOG_ASSERT(offset > 0);
qb_log_format_set(method, fmt);
}
gboolean
crm_add_logfile(const char *filename)
{
bool is_default = false;
static int default_fd = -1;
static gboolean have_logfile = FALSE;
/* @COMPAT This should be CRM_LOG_DIR "/pacemaker.log". We aren't changing
* it yet because it will be a significant user-visible change to publicize.
*/
const char *default_logfile = "/var/log/pacemaker.log";
struct stat parent;
int fd = 0, rc = 0;
FILE *logfile = NULL;
char *parent_dir = NULL;
char *filename_cp;
if (filename == NULL && have_logfile == FALSE) {
filename = default_logfile;
}
if (filename == NULL) {
return FALSE; /* Nothing to do */
} else if(safe_str_eq(filename, "none")) {
return FALSE; /* Nothing to do */
} else if(safe_str_eq(filename, "/dev/null")) {
return FALSE; /* Nothing to do */
} else if(safe_str_eq(filename, default_logfile)) {
is_default = TRUE;
}
if(is_default && default_fd >= 0) {
return TRUE; /* Nothing to do */
}
/* Check the parent directory */
filename_cp = strdup(filename);
parent_dir = dirname(filename_cp);
rc = stat(parent_dir, &parent);
if (rc != 0) {
crm_err("Directory '%s' does not exist: logging to '%s' is disabled", parent_dir, filename);
free(filename_cp);
return FALSE;
}
free(filename_cp);
errno = 0;
logfile = fopen(filename, "a");
if(logfile == NULL) {
crm_err("%s (%d): Logging to '%s' as uid=%u, gid=%u is disabled",
pcmk_strerror(errno), errno, filename, geteuid(), getegid());
return FALSE;
}
/* Check/Set permissions if we're root */
if (geteuid() == 0) {
struct stat st;
uid_t pcmk_uid = 0;
gid_t pcmk_gid = 0;
gboolean fix = FALSE;
int logfd = fileno(logfile);
rc = fstat(logfd, &st);
if (rc < 0) {
crm_perror(LOG_WARNING, "Cannot stat %s", filename);
fclose(logfile);
return FALSE;
}
if(crm_user_lookup(CRM_DAEMON_USER, &pcmk_uid, &pcmk_gid) == 0) {
if (st.st_gid != pcmk_gid) {
/* Wrong group */
fix = TRUE;
} else if ((st.st_mode & S_IRWXG) != (S_IRGRP | S_IWGRP)) {
/* Not read/writable by the correct group */
fix = TRUE;
}
}
if (fix) {
rc = fchown(logfd, pcmk_uid, pcmk_gid);
if (rc < 0) {
crm_warn("Cannot change the ownership of %s to user %s and gid %d",
filename, CRM_DAEMON_USER, pcmk_gid);
}
rc = fchmod(logfd, S_IRUSR | S_IWUSR | S_IRGRP | S_IWGRP);
if (rc < 0) {
crm_warn("Cannot change the mode of %s to rw-rw----", filename);
}
fprintf(logfile, "Set r/w permissions for uid=%d, gid=%d on %s\n",
pcmk_uid, pcmk_gid, filename);
if (fflush(logfile) < 0 || fsync(logfd) < 0) {
crm_err("Couldn't write out logfile: %s", filename);
}
}
}
/* Close and reopen with libqb */
fclose(logfile);
fd = qb_log_file_open(filename);
if (fd < 0) {
crm_perror(LOG_WARNING, "Couldn't send additional logging to %s", filename);
return FALSE;
}
if(is_default) {
default_fd = fd;
} else if(default_fd >= 0) {
crm_notice("Switching to %s", filename);
qb_log_ctl(default_fd, QB_LOG_CONF_ENABLED, QB_FALSE);
}
crm_notice("Additional logging available in %s", filename);
qb_log_ctl(fd, QB_LOG_CONF_ENABLED, QB_TRUE);
/* qb_log_ctl(fd, QB_LOG_CONF_FILE_SYNC, 1); Turn on synchronous writes */
/* Enable callsites */
crm_update_callsites();
have_logfile = TRUE;
return TRUE;
}
static int blackbox_trigger = 0;
static char *blackbox_file_prefix = NULL;
static void
blackbox_logger(int32_t t, struct qb_log_callsite *cs, time_t timestamp, const char *msg)
{
if(cs && cs->priority < LOG_ERR) {
crm_write_blackbox(SIGTRAP, cs); /* Bypass the over-dumping logic */
} else {
crm_write_blackbox(0, cs);
}
}
static void
crm_control_blackbox(int nsig, bool enable)
{
int lpc = 0;
if (blackbox_file_prefix == NULL) {
pid_t pid = getpid();
blackbox_file_prefix = malloc(NAME_MAX);
snprintf(blackbox_file_prefix, NAME_MAX, "%s/%s-%d", CRM_BLACKBOX_DIR, crm_system_name, pid);
}
if (enable && qb_log_ctl(QB_LOG_BLACKBOX, QB_LOG_CONF_STATE_GET, 0) != QB_LOG_STATE_ENABLED) {
qb_log_ctl(QB_LOG_BLACKBOX, QB_LOG_CONF_SIZE, 5 * 1024 * 1024); /* Any size change drops existing entries */
qb_log_ctl(QB_LOG_BLACKBOX, QB_LOG_CONF_ENABLED, QB_TRUE); /* Setting the size seems to disable it */
/* Enable synchronous logging */
for (lpc = QB_LOG_BLACKBOX; lpc < QB_LOG_TARGET_MAX; lpc++) {
qb_log_ctl(lpc, QB_LOG_CONF_FILE_SYNC, QB_TRUE);
}
crm_notice("Initiated blackbox recorder: %s", blackbox_file_prefix);
/* Save to disk on abnormal termination */
crm_signal(SIGSEGV, crm_trigger_blackbox);
crm_signal(SIGABRT, crm_trigger_blackbox);
crm_signal(SIGILL, crm_trigger_blackbox);
crm_signal(SIGBUS, crm_trigger_blackbox);
+ crm_signal(SIGFPE, crm_trigger_blackbox);
crm_update_callsites();
blackbox_trigger = qb_log_custom_open(blackbox_logger, NULL, NULL, NULL);
qb_log_ctl(blackbox_trigger, QB_LOG_CONF_ENABLED, QB_TRUE);
crm_trace("Trigger: %d is %d %d", blackbox_trigger,
qb_log_ctl(blackbox_trigger, QB_LOG_CONF_STATE_GET, 0), QB_LOG_STATE_ENABLED);
crm_update_callsites();
} else if (!enable && qb_log_ctl(QB_LOG_BLACKBOX, QB_LOG_CONF_STATE_GET, 0) == QB_LOG_STATE_ENABLED) {
qb_log_ctl(QB_LOG_BLACKBOX, QB_LOG_CONF_ENABLED, QB_FALSE);
/* Disable synchronous logging again when the blackbox is disabled */
for (lpc = QB_LOG_BLACKBOX; lpc < QB_LOG_TARGET_MAX; lpc++) {
qb_log_ctl(lpc, QB_LOG_CONF_FILE_SYNC, QB_FALSE);
}
}
}
void
crm_enable_blackbox(int nsig)
{
crm_control_blackbox(nsig, TRUE);
}
void
crm_disable_blackbox(int nsig)
{
crm_control_blackbox(nsig, FALSE);
}
void
crm_write_blackbox(int nsig, struct qb_log_callsite *cs)
{
static int counter = 1;
static time_t last = 0;
char buffer[NAME_MAX];
time_t now = time(NULL);
if (blackbox_file_prefix == NULL) {
return;
}
switch (nsig) {
case 0:
case SIGTRAP:
/* The graceful case - such as assertion failure or user request */
if (nsig == 0 && now == last) {
/* Prevent over-dumping */
return;
}
snprintf(buffer, NAME_MAX, "%s.%d", blackbox_file_prefix, counter++);
if (nsig == SIGTRAP) {
crm_notice("Blackbox dump requested, please see %s for contents", buffer);
} else if (cs) {
syslog(LOG_NOTICE,
"Problem detected at %s:%d (%s), please see %s for additional details",
cs->function, cs->lineno, cs->filename, buffer);
} else {
crm_notice("Problem detected, please see %s for additional details", buffer);
}
last = now;
qb_log_blackbox_write_to_file(buffer);
/* Flush the existing contents
* A size change would also work
*/
qb_log_ctl(QB_LOG_BLACKBOX, QB_LOG_CONF_ENABLED, QB_FALSE);
qb_log_ctl(QB_LOG_BLACKBOX, QB_LOG_CONF_ENABLED, QB_TRUE);
break;
default:
/* Do as little as possible, just try to get what we have out
* We logged the filename when the blackbox was enabled
*/
crm_signal(nsig, SIG_DFL);
qb_log_blackbox_write_to_file(blackbox_file_prefix);
qb_log_ctl(QB_LOG_BLACKBOX, QB_LOG_CONF_ENABLED, QB_FALSE);
raise(nsig);
break;
}
}
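/* A minimal sketch of the blackbox lifecycle that crm_log_init() wires up to
 * signals below (SIGUSR1 enables, SIGUSR2 disables, SIGTRAP dumps):
 */
#if 0
crm_enable_blackbox(0); /* start recording trace data in the in-memory ring */
crm_write_blackbox(SIGTRAP, NULL); /* dump the ring to a file, as a SIGTRAP would */
crm_disable_blackbox(0); /* stop recording, as SIGUSR2 would */
#endif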
gboolean
crm_log_cli_init(const char *entity)
{
return crm_log_init(entity, LOG_ERR, FALSE, FALSE, 0, NULL, TRUE);
}
static const char *
crm_quark_to_string(uint32_t tag)
{
const char *text = g_quark_to_string(tag);
if (text) {
return text;
}
return "";
}
static void
crm_log_filter_source(int source, const char *trace_files, const char *trace_fns,
const char *trace_fmts, const char *trace_tags, const char *trace_blackbox,
struct qb_log_callsite *cs)
{
if (qb_log_ctl(source, QB_LOG_CONF_STATE_GET, 0) != QB_LOG_STATE_ENABLED) {
return;
} else if (cs->tags != crm_trace_nonlog && source == QB_LOG_BLACKBOX) {
/* Blackbox gets everything if enabled */
qb_bit_set(cs->targets, source);
} else if (source == blackbox_trigger && blackbox_trigger > 0) {
/* Should this log message result in the blackbox being dumped */
if (cs->priority <= LOG_ERR) {
qb_bit_set(cs->targets, source);
} else if (trace_blackbox) {
char *key = crm_strdup_printf("%s:%d", cs->function, cs->lineno);
if (strstr(trace_blackbox, key) != NULL) {
qb_bit_set(cs->targets, source);
}
free(key);
}
} else if (source == QB_LOG_SYSLOG) { /* No tracing to syslog */
if (cs->priority <= crm_log_priority && cs->priority <= crm_log_level) {
qb_bit_set(cs->targets, source);
}
/* Log file tracing options... */
} else if (cs->priority <= crm_log_level) {
qb_bit_set(cs->targets, source);
} else if (trace_files && strstr(trace_files, cs->filename) != NULL) {
qb_bit_set(cs->targets, source);
} else if (trace_fns && strstr(trace_fns, cs->function) != NULL) {
qb_bit_set(cs->targets, source);
} else if (trace_fmts && strstr(trace_fmts, cs->format) != NULL) {
qb_bit_set(cs->targets, source);
} else if (trace_tags
&& cs->tags != 0
&& cs->tags != crm_trace_nonlog && g_quark_to_string(cs->tags) != NULL) {
qb_bit_set(cs->targets, source);
}
}
static void
crm_log_filter(struct qb_log_callsite *cs)
{
int lpc = 0;
static int need_init = 1;
static const char *trace_fns = NULL;
static const char *trace_tags = NULL;
static const char *trace_fmts = NULL;
static const char *trace_files = NULL;
static const char *trace_blackbox = NULL;
if (need_init) {
need_init = 0;
trace_fns = getenv("PCMK_trace_functions");
trace_fmts = getenv("PCMK_trace_formats");
trace_tags = getenv("PCMK_trace_tags");
trace_files = getenv("PCMK_trace_files");
trace_blackbox = getenv("PCMK_trace_blackbox");
if (trace_tags != NULL) {
uint32_t tag;
char token[500];
const char *offset = NULL;
const char *next = trace_tags;
do {
offset = next;
next = strchrnul(offset, ',');
snprintf(token, 499, "%.*s", (int)(next - offset), offset);
tag = g_quark_from_string(token);
crm_info("Created GQuark %u from token '%s' in '%s'", tag, token, trace_tags);
if (next[0] != 0) {
next++;
}
} while (next != NULL && next[0] != 0);
}
}
cs->targets = 0; /* Reset then find targets to enable */
for (lpc = QB_LOG_SYSLOG; lpc < QB_LOG_TARGET_MAX; lpc++) {
crm_log_filter_source(lpc, trace_files, trace_fns, trace_fmts, trace_tags, trace_blackbox,
cs);
}
}
gboolean
crm_is_callsite_active(struct qb_log_callsite *cs, uint8_t level, uint32_t tags)
{
gboolean refilter = FALSE;
if (cs == NULL) {
return FALSE;
}
if (cs->priority != level) {
cs->priority = level;
refilter = TRUE;
}
if (cs->tags != tags) {
cs->tags = tags;
refilter = TRUE;
}
if (refilter) {
crm_log_filter(cs);
}
if (cs->targets == 0) {
return FALSE;
}
return TRUE;
}
void
crm_update_callsites(void)
{
static gboolean log = TRUE;
if (log) {
log = FALSE;
crm_debug
("Enabling callsites based on priority=%d, files=%s, functions=%s, formats=%s, tags=%s",
crm_log_level, getenv("PCMK_trace_files"), getenv("PCMK_trace_functions"),
getenv("PCMK_trace_formats"), getenv("PCMK_trace_tags"));
}
qb_log_filter_fn_set(crm_log_filter);
}
static gboolean
crm_tracing_enabled(void)
{
if (crm_log_level >= LOG_TRACE) {
return TRUE;
} else if (getenv("PCMK_trace_files") || getenv("PCMK_trace_functions")
|| getenv("PCMK_trace_formats") || getenv("PCMK_trace_tags")) {
return TRUE;
}
return FALSE;
}
static int
crm_priority2int(const char *name)
{
struct syslog_names {
const char *name;
int priority;
};
static struct syslog_names p_names[] = {
{"emerg", LOG_EMERG},
{"alert", LOG_ALERT},
{"crit", LOG_CRIT},
{"error", LOG_ERR},
{"warning", LOG_WARNING},
{"notice", LOG_NOTICE},
{"info", LOG_INFO},
{"debug", LOG_DEBUG},
{NULL, -1}
};
int lpc;
for (lpc = 0; name != NULL && p_names[lpc].name != NULL; lpc++) {
if (crm_str_eq(p_names[lpc].name, name, TRUE)) {
return p_names[lpc].priority;
}
}
return crm_log_priority;
}
static void
crm_identity(const char *entity, int argc, char **argv)
{
if(crm_system_name != NULL) {
/* Nothing to do */
} else if (entity) {
free(crm_system_name);
crm_system_name = strdup(entity);
} else if (argc > 0 && argv != NULL) {
char *mutable = strdup(argv[0]);
char *modified = basename(mutable);
if (strstr(modified, "lt-") == modified) {
modified += 3;
}
free(crm_system_name);
crm_system_name = strdup(modified);
free(mutable);
} else if (crm_system_name == NULL) {
crm_system_name = strdup("Unknown");
}
setenv("PCMK_service", crm_system_name, 1);
}
void
crm_log_preinit(const char *entity, int argc, char **argv)
{
/* Configure libqb logging with nothing turned on */
int lpc = 0;
int32_t qb_facility = 0;
static bool have_logging = FALSE;
if(have_logging == FALSE) {
have_logging = TRUE;
crm_xml_init(); /* Sets buffer allocation strategy */
if (crm_trace_nonlog == 0) {
crm_trace_nonlog = g_quark_from_static_string("Pacemaker non-logging tracepoint");
}
umask(S_IWGRP | S_IWOTH | S_IROTH);
/* Redirect messages from glib functions to our handler */
#ifdef HAVE_G_LOG_SET_DEFAULT_HANDLER
glib_log_default = g_log_set_default_handler(crm_glib_handler, NULL);
#endif
/* and for good measure... - this enum is a bit field (!) */
g_log_set_always_fatal((GLogLevelFlags) 0); /*value out of range */
/* Who do we log as */
crm_identity(entity, argc, argv);
qb_facility = qb_log_facility2int("local0");
qb_log_init(crm_system_name, qb_facility, LOG_ERR);
crm_log_level = LOG_CRIT;
/* Nuke any syslog activity until it's asked for */
qb_log_ctl(QB_LOG_SYSLOG, QB_LOG_CONF_ENABLED, QB_FALSE);
/* Set format strings and disable threading
* Pacemaker and threads do not mix well (due to the amount of forking)
*/
qb_log_tags_stringify_fn_set(crm_quark_to_string);
for (lpc = QB_LOG_SYSLOG; lpc < QB_LOG_TARGET_MAX; lpc++) {
qb_log_ctl(lpc, QB_LOG_CONF_THREADED, QB_FALSE);
set_format_string(lpc, crm_system_name);
}
}
}
gboolean
crm_log_init(const char *entity, uint8_t level, gboolean daemon, gboolean to_stderr,
int argc, char **argv, gboolean quiet)
{
const char *syslog_priority = NULL;
const char *logfile = daemon_option("logfile");
const char *facility = daemon_option("logfacility");
const char *f_copy = facility;
crm_is_daemon = daemon;
crm_log_preinit(entity, argc, argv);
if(level > crm_log_level) {
crm_log_level = level;
}
/* Should we log to syslog */
if (facility == NULL) {
if(crm_is_daemon) {
facility = "daemon";
} else {
facility = "none";
}
set_daemon_option("logfacility", facility);
}
if (safe_str_eq(facility, "none")) {
quiet = TRUE;
} else {
qb_log_ctl(QB_LOG_SYSLOG, QB_LOG_CONF_FACILITY, qb_log_facility2int(facility));
}
if (daemon_option_enabled(crm_system_name, "debug")) {
/* Override the default setting */
crm_log_level = LOG_DEBUG;
}
/* What lower threshold do we have for sending to syslog */
syslog_priority = daemon_option("logpriority");
if(syslog_priority) {
int priority = crm_priority2int(syslog_priority);
crm_log_priority = priority;
qb_log_filter_ctl(QB_LOG_SYSLOG, QB_LOG_FILTER_ADD, QB_LOG_FILTER_FILE, "*", priority);
} else {
qb_log_filter_ctl(QB_LOG_SYSLOG, QB_LOG_FILTER_ADD, QB_LOG_FILTER_FILE, "*", LOG_NOTICE);
}
if (!quiet) {
/* Nuke any syslog activity */
qb_log_ctl(QB_LOG_SYSLOG, QB_LOG_CONF_ENABLED, QB_TRUE);
}
/* Should we log to stderr */
if (daemon_option_enabled(crm_system_name, "stderr")) {
/* Override the default setting */
to_stderr = TRUE;
}
crm_enable_stderr(to_stderr);
/* Should we log to a file */
if (safe_str_eq("none", logfile)) {
/* No soup^Hlogs for you! */
} else if(crm_is_daemon) {
/* The daemons always get a log file, unless explicitly configured with 'none' */
crm_add_logfile(logfile);
} else if(logfile) {
crm_add_logfile(logfile);
}
if (crm_is_daemon && daemon_option_enabled(crm_system_name, "blackbox")) {
crm_enable_blackbox(0);
}
/* Summary */
crm_trace("Quiet: %d, facility %s", quiet, f_copy);
daemon_option("logfile");
daemon_option("logfacility");
crm_update_callsites();
/* Ok, now we can start logging... */
if (quiet == FALSE && crm_is_daemon == FALSE) {
crm_log_args(argc, argv);
}
if (crm_is_daemon) {
const char *user = getenv("USER");
if (user != NULL && safe_str_neq(user, "root") && safe_str_neq(user, CRM_DAEMON_USER)) {
crm_trace("Not switching to corefile directory for %s", user);
crm_is_daemon = FALSE;
}
}
if (crm_is_daemon) {
int user = getuid();
const char *base = CRM_CORE_DIR;
struct passwd *pwent = getpwuid(user);
if (pwent == NULL) {
crm_perror(LOG_ERR, "Cannot get name for uid: %d", user);
} else if (safe_str_neq(pwent->pw_name, "root")
&& safe_str_neq(pwent->pw_name, CRM_DAEMON_USER)) {
crm_trace("Don't change active directory for regular user: %s", pwent->pw_name);
} else if (chdir(base) < 0) {
crm_perror(LOG_INFO, "Cannot change active directory to %s", base);
} else {
crm_info("Changed active directory to %s", base);
#if 0
{
char path[512];
snprintf(path, 512, "%s-%d", crm_system_name, getpid());
mkdir(path, 0750);
chdir(path);
crm_info("Changed active directory to %s/%s/%s", base, pwent->pw_name, path);
}
#endif
}
/* Original meanings from signal(7)
*
* Signal Value Action Comment
* SIGTRAP 5 Core Trace/breakpoint trap
* SIGUSR1 30,10,16 Term User-defined signal 1
* SIGUSR2 31,12,17 Term User-defined signal 2
*
* Our usage is as similar as possible
*/
mainloop_add_signal(SIGUSR1, crm_enable_blackbox);
mainloop_add_signal(SIGUSR2, crm_disable_blackbox);
mainloop_add_signal(SIGTRAP, crm_trigger_blackbox);
}
return TRUE;
}
/* returns the old value */
unsigned int
set_crm_log_level(unsigned int level)
{
unsigned int old = crm_log_level;
crm_log_level = level;
crm_update_callsites();
crm_trace("New log level: %d", level);
return old;
}
void
crm_enable_stderr(int enable)
{
if (enable && qb_log_ctl(QB_LOG_STDERR, QB_LOG_CONF_STATE_GET, 0) != QB_LOG_STATE_ENABLED) {
qb_log_ctl(QB_LOG_STDERR, QB_LOG_CONF_ENABLED, QB_TRUE);
crm_update_callsites();
} else if (enable == FALSE) {
qb_log_ctl(QB_LOG_STDERR, QB_LOG_CONF_ENABLED, QB_FALSE);
}
}
void
crm_bump_log_level(int argc, char **argv)
{
static int args = TRUE;
int level = crm_log_level;
if (args && argc > 1) {
crm_log_args(argc, argv);
}
if (qb_log_ctl(QB_LOG_STDERR, QB_LOG_CONF_STATE_GET, 0) == QB_LOG_STATE_ENABLED) {
set_crm_log_level(level + 1);
}
/* Enable after potentially logging the argstring, not before */
crm_enable_stderr(TRUE);
}
unsigned int
get_crm_log_level(void)
{
return crm_log_level;
}
#define ARGS_FMT "Invoked: %s"
void
crm_log_args(int argc, char **argv)
{
int lpc = 0;
int len = 0;
int existing_len = 0;
int line = __LINE__;
static int logged = 0;
char *arg_string = NULL;
if (argc == 0 || argv == NULL || logged) {
return;
}
logged = 1;
for (; lpc < argc; lpc++) {
if (argv[lpc] == NULL) {
break;
}
len = 2 + strlen(argv[lpc]); /* +1 space, +1 EOS */
arg_string = realloc_safe(arg_string, len + existing_len);
existing_len += sprintf(arg_string + existing_len, "%s ", argv[lpc]);
}
qb_log_from_external_source(__func__, __FILE__, ARGS_FMT, LOG_NOTICE, line, 0, arg_string);
free(arg_string);
}
const char *
pcmk_errorname(int rc)
{
int error = ABS(rc);
switch (error) {
case E2BIG: return "E2BIG";
case EACCES: return "EACCES";
case EADDRINUSE: return "EADDRINUSE";
case EADDRNOTAVAIL: return "EADDRNOTAVAIL";
case EAFNOSUPPORT: return "EAFNOSUPPORT";
case EAGAIN: return "EAGAIN";
case EALREADY: return "EALREADY";
case EBADF: return "EBADF";
case EBADMSG: return "EBADMSG";
case EBUSY: return "EBUSY";
case ECANCELED: return "ECANCELED";
case ECHILD: return "ECHILD";
case ECOMM: return "ECOMM";
case ECONNABORTED: return "ECONNABORTED";
case ECONNREFUSED: return "ECONNREFUSED";
case ECONNRESET: return "ECONNRESET";
/* case EDEADLK: return "EDEADLK"; */
case EDESTADDRREQ: return "EDESTADDRREQ";
case EDOM: return "EDOM";
case EDQUOT: return "EDQUOT";
case EEXIST: return "EEXIST";
case EFAULT: return "EFAULT";
case EFBIG: return "EFBIG";
case EHOSTDOWN: return "EHOSTDOWN";
case EHOSTUNREACH: return "EHOSTUNREACH";
case EIDRM: return "EIDRM";
case EILSEQ: return "EILSEQ";
case EINPROGRESS: return "EINPROGRESS";
case EINTR: return "EINTR";
case EINVAL: return "EINVAL";
case EIO: return "EIO";
case EISCONN: return "EISCONN";
case EISDIR: return "EISDIR";
case ELIBACC: return "ELIBACC";
case ELOOP: return "ELOOP";
case EMFILE: return "EMFILE";
case EMLINK: return "EMLINK";
case EMSGSIZE: return "EMSGSIZE";
case EMULTIHOP: return "EMULTIHOP";
case ENAMETOOLONG: return "ENAMETOOLONG";
case ENETDOWN: return "ENETDOWN";
case ENETRESET: return "ENETRESET";
case ENETUNREACH: return "ENETUNREACH";
case ENFILE: return "ENFILE";
case ENOBUFS: return "ENOBUFS";
case ENODATA: return "ENODATA";
case ENODEV: return "ENODEV";
case ENOENT: return "ENOENT";
case ENOEXEC: return "ENOEXEC";
case ENOKEY: return "ENOKEY";
case ENOLCK: return "ENOLCK";
case ENOLINK: return "ENOLINK";
case ENOMEM: return "ENOMEM";
case ENOMSG: return "ENOMSG";
case ENOPROTOOPT: return "ENOPROTOOPT";
case ENOSPC: return "ENOSPC";
case ENOSR: return "ENOSR";
case ENOSTR: return "ENOSTR";
case ENOSYS: return "ENOSYS";
case ENOTBLK: return "ENOTBLK";
case ENOTCONN: return "ENOTCONN";
case ENOTDIR: return "ENOTDIR";
case ENOTEMPTY: return "ENOTEMPTY";
case ENOTSOCK: return "ENOTSOCK";
/* case ENOTSUP: return "ENOTSUP"; */
case ENOTTY: return "ENOTTY";
case ENOTUNIQ: return "ENOTUNIQ";
case ENXIO: return "ENXIO";
case EOPNOTSUPP: return "EOPNOTSUPP";
case EOVERFLOW: return "EOVERFLOW";
case EPERM: return "EPERM";
case EPFNOSUPPORT: return "EPFNOSUPPORT";
case EPIPE: return "EPIPE";
case EPROTO: return "EPROTO";
case EPROTONOSUPPORT: return "EPROTONOSUPPORT";
case EPROTOTYPE: return "EPROTOTYPE";
case ERANGE: return "ERANGE";
case EREMOTE: return "EREMOTE";
case EREMOTEIO: return "EREMOTEIO";
case EROFS: return "EROFS";
case ESHUTDOWN: return "ESHUTDOWN";
case ESPIPE: return "ESPIPE";
case ESOCKTNOSUPPORT: return "ESOCKTNOSUPPORT";
case ESRCH: return "ESRCH";
case ESTALE: return "ESTALE";
case ETIME: return "ETIME";
case ETIMEDOUT: return "ETIMEDOUT";
case ETXTBSY: return "ETXTBSY";
case EUNATCH: return "EUNATCH";
case EUSERS: return "EUSERS";
/* case EWOULDBLOCK: return "EWOULDBLOCK"; */
case EXDEV: return "EXDEV";
#ifdef EBADE
/* Not available on OSX */
case EBADE: return "EBADE";
case EBADFD: return "EBADFD";
case EBADSLT: return "EBADSLT";
case EDEADLOCK: return "EDEADLOCK";
case EBADR: return "EBADR";
case EBADRQC: return "EBADRQC";
case ECHRNG: return "ECHRNG";
#ifdef EISNAM /* Not available on Illumos/Solaris */
case EISNAM: return "EISNAM";
case EKEYEXPIRED: return "EKEYEXPIRED";
case EKEYREJECTED: return "EKEYREJECTED";
case EKEYREVOKED: return "EKEYREVOKED";
#endif
case EL2HLT: return "EL2HLT";
case EL2NSYNC: return "EL2NSYNC";
case EL3HLT: return "EL3HLT";
case EL3RST: return "EL3RST";
case ELIBBAD: return "ELIBBAD";
case ELIBMAX: return "ELIBMAX";
case ELIBSCN: return "ELIBSCN";
case ELIBEXEC: return "ELIBEXEC";
#ifdef ENOMEDIUM /* Not available on Illumos/Solaris */
case ENOMEDIUM: return "ENOMEDIUM";
case EMEDIUMTYPE: return "EMEDIUMTYPE";
#endif
case ENONET: return "ENONET";
case ENOPKG: return "ENOPKG";
case EREMCHG: return "EREMCHG";
case ERESTART: return "ERESTART";
case ESTRPIPE: return "ESTRPIPE";
#ifdef EUCLEAN /* Not available on Illumos/Solaris */
case EUCLEAN: return "EUCLEAN";
#endif
case EXFULL: return "EXFULL";
#endif
case pcmk_err_generic: return "pcmk_err_generic";
case pcmk_err_no_quorum: return "pcmk_err_no_quorum";
case pcmk_err_schema_validation: return "pcmk_err_schema_validation";
case pcmk_err_transform_failed: return "pcmk_err_transform_failed";
case pcmk_err_old_data: return "pcmk_err_old_data";
case pcmk_err_diff_failed: return "pcmk_err_diff_failed";
case pcmk_err_diff_resync: return "pcmk_err_diff_resync";
case pcmk_err_cib_modified: return "pcmk_err_cib_modified";
case pcmk_err_cib_backup: return "pcmk_err_cib_backup";
case pcmk_err_cib_save: return "pcmk_err_cib_save";
case pcmk_err_cib_corrupt: return "pcmk_err_cib_corrupt";
}
return "Unknown";
}
const char *
pcmk_strerror(int rc)
{
int error = abs(rc);
if (error == 0) {
return "OK";
} else if (error < PCMK_ERROR_OFFSET) {
return strerror(error);
}
switch (error) {
case pcmk_err_generic:
return "Generic Pacemaker error";
case pcmk_err_no_quorum:
return "Operation requires quorum";
case pcmk_err_schema_validation:
return "Update does not conform to the configured schema";
case pcmk_err_transform_failed:
return "Schema transform failed";
case pcmk_err_old_data:
return "Update was older than existing configuration";
case pcmk_err_diff_failed:
return "Application of an update diff failed";
case pcmk_err_diff_resync:
return "Application of an update diff failed, requesting a full refresh";
case pcmk_err_cib_modified:
return "The on-disk configuration was manually modified";
case pcmk_err_cib_backup:
return "Could not archive the previous configuration";
case pcmk_err_cib_save:
return "Could not save the new configuration to disk";
case pcmk_err_cib_corrupt:
return "Could not parse on-disk configuration";
case pcmk_err_schema_unchanged:
return "Schema is already the latest available";
/* The following cases will only be hit on systems for which they are non-standard */
/* coverity[dead_error_condition] False positive on non-Linux */
case ENOTUNIQ:
return "Name not unique on network";
/* coverity[dead_error_condition] False positive on non-Linux */
case ECOMM:
return "Communication error on send";
/* coverity[dead_error_condition] False positive on non-Linux */
case ELIBACC:
return "Can not access a needed shared library";
/* coverity[dead_error_condition] False positive on non-Linux */
case EREMOTEIO:
return "Remote I/O error";
/* coverity[dead_error_condition] False positive on non-Linux */
case EUNATCH:
return "Protocol driver not attached";
/* coverity[dead_error_condition] False positive on non-Linux */
case ENOKEY:
return "Required key not available";
}
crm_err("Unknown error code: %d", rc);
return "Unknown error";
}
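/* Illustrative note (not part of the library): pcmk_strerror() takes the
* absolute value, so callers can pass negative return codes directly.
* For example, given the constants above:
*
*     pcmk_strerror(0)                    -> "OK"
*     pcmk_strerror(-ENOTCONN)            -> system strerror() text
*     pcmk_strerror(-pcmk_err_no_quorum)  -> "Operation requires quorum"
*/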
const char *
bz2_strerror(int rc)
{
/* http://www.bzip.org/1.0.3/html/err-handling.html */
switch (rc) {
case BZ_OK:
case BZ_RUN_OK:
case BZ_FLUSH_OK:
case BZ_FINISH_OK:
case BZ_STREAM_END:
return "Ok";
case BZ_CONFIG_ERROR:
return "libbz2 has been improperly compiled on your platform";
case BZ_SEQUENCE_ERROR:
return "library functions called in the wrong order";
case BZ_PARAM_ERROR:
return "parameter is out of range or otherwise incorrect";
case BZ_MEM_ERROR:
return "memory allocation failed";
case BZ_DATA_ERROR:
return "data integrity error is detected during decompression";
case BZ_DATA_ERROR_MAGIC:
return "the compressed stream does not start with the correct magic bytes";
case BZ_IO_ERROR:
return "error reading or writing in the compressed file";
case BZ_UNEXPECTED_EOF:
return "compressed file finishes before the logical end of stream is detected";
case BZ_OUTBUFF_FULL:
return "output data will not fit into the buffer provided";
}
return "Unknown error";
}
void
crm_log_output_fn(const char *file, const char *function, int line, int level, const char *prefix,
const char *output)
{
const char *next = NULL;
const char *offset = NULL;
if (output == NULL) {
level = LOG_DEBUG;
output = "-- empty --";
}
next = output;
do {
offset = next;
next = strchrnul(offset, '\n');
do_crm_log_alias(level, file, function, line, "%s [ %.*s ]", prefix,
(int)(next - offset), offset);
if (next[0] != 0) {
next++;
}
} while (next != NULL && next[0] != 0);
}
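/* Worked example (illustrative, not part of the library): with
* prefix = "stdout" and output = "line one\nline two", the loop above
* emits two log entries at the requested level:
*
*     stdout [ line one ]
*     stdout [ line two ]
*/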
char *
crm_strdup_printf (char const *format, ...)
{
va_list ap;
int len = 0;
char *string = NULL;
va_start(ap, format);
len = vasprintf (&string, format, ap);
CRM_ASSERT(len > 0);
va_end(ap);
return string;
}
diff --git a/lib/lrmd/lrmd_client.c b/lib/lrmd/lrmd_client.c
index 1c785e29c4..ef6932f893 100644
--- a/lib/lrmd/lrmd_client.c
+++ b/lib/lrmd/lrmd_client.c
@@ -1,2261 +1,2261 @@
/*
* Copyright (c) 2012 David Vossel <davidvossel@gmail.com>
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
*/
#include <crm_internal.h>
#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>
#include <stdarg.h>
#include <string.h>
#include <ctype.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <glib.h>
#include <dirent.h>
#include <crm/crm.h>
#include <crm/lrmd.h>
#include <crm/services.h>
#include <crm/common/mainloop.h>
#include <crm/common/ipcs.h>
#include <crm/msg_xml.h>
#include <crm/stonith-ng.h>
#ifdef HAVE_GNUTLS_GNUTLS_H
# undef KEYFILE
# include <gnutls/gnutls.h>
#endif
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/ip.h>
#include <arpa/inet.h>
#include <netdb.h>
#define MAX_TLS_RECV_WAIT 10000
CRM_TRACE_INIT_DATA(lrmd);
static int lrmd_api_disconnect(lrmd_t * lrmd);
static int lrmd_api_is_connected(lrmd_t * lrmd);
/* IPC proxy functions */
int lrmd_internal_proxy_send(lrmd_t * lrmd, xmlNode *msg);
static void lrmd_internal_proxy_dispatch(lrmd_t *lrmd, xmlNode *msg);
void lrmd_internal_set_proxy_callback(lrmd_t * lrmd, void *userdata, void (*callback)(lrmd_t *lrmd, void *userdata, xmlNode *msg));
#ifdef HAVE_GNUTLS_GNUTLS_H
# define LRMD_CLIENT_HANDSHAKE_TIMEOUT 5000 /* 5 seconds */
gnutls_psk_client_credentials_t psk_cred_s;
int lrmd_tls_set_key(gnutls_datum_t * key);
static void lrmd_tls_disconnect(lrmd_t * lrmd);
static int global_remote_msg_id = 0;
int lrmd_tls_send_msg(crm_remote_t * session, xmlNode * msg, uint32_t id, const char *msg_type);
static void lrmd_tls_connection_destroy(gpointer userdata);
#endif
typedef struct lrmd_private_s {
enum client_type type;
char *token;
mainloop_io_t *source;
/* IPC parameters */
crm_ipc_t *ipc;
crm_remote_t *remote;
/* Extra TLS parameters */
char *remote_nodename;
#ifdef HAVE_GNUTLS_GNUTLS_H
char *server;
int port;
gnutls_psk_client_credentials_t psk_cred_c;
- /* while the async connection is occuring, this is the id
+ /* while the async connection is occurring, this is the id
* of the connection timeout timer. */
int async_timer;
int sock;
/* Because TLS requires a network round trip for every request/reply,
* there are times when we just want to send a request from the client
* and not wait around for (or even care about) the reply. */
int expected_late_replies;
GList *pending_notify;
crm_trigger_t *process_notify;
#endif
lrmd_event_callback callback;
/* Internal IPC proxy msg passing for remote guests */
void (*proxy_callback)(lrmd_t *lrmd, void *userdata, xmlNode *msg);
void *proxy_callback_userdata;
char *peer_version;
} lrmd_private_t;
static lrmd_list_t *
lrmd_list_add(lrmd_list_t * head, const char *value)
{
lrmd_list_t *p, *end;
p = calloc(1, sizeof(lrmd_list_t));
p->val = strdup(value);
end = head;
while (end && end->next) {
end = end->next;
}
if (end) {
end->next = p;
} else {
head = p;
}
return head;
}
void
lrmd_list_freeall(lrmd_list_t * head)
{
lrmd_list_t *p;
while (head) {
char *val = (char *)head->val;
p = head->next;
free(val);
free(head);
head = p;
}
}
lrmd_key_value_t *
lrmd_key_value_add(lrmd_key_value_t * head, const char *key, const char *value)
{
lrmd_key_value_t *p, *end;
p = calloc(1, sizeof(lrmd_key_value_t));
p->key = strdup(key);
p->value = strdup(value);
end = head;
while (end && end->next) {
end = end->next;
}
if (end) {
end->next = p;
} else {
head = p;
}
return head;
}
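/* Usage sketch (illustrative, not part of the library): the list helpers
* above return the (possibly new) head, so callers chain them like this:
*
*     lrmd_key_value_t *params = NULL;
*
*     params = lrmd_key_value_add(params, "ip", "192.168.122.100");
*     params = lrmd_key_value_add(params, "cidr_netmask", "24");
*     // pass params to cmds->exec(), which frees the list when done
*/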
void
lrmd_key_value_freeall(lrmd_key_value_t * head)
{
lrmd_key_value_t *p;
while (head) {
p = head->next;
free(head->key);
free(head->value);
free(head);
head = p;
}
}
lrmd_event_data_t *
lrmd_copy_event(lrmd_event_data_t * event)
{
lrmd_event_data_t *copy = NULL;
copy = calloc(1, sizeof(lrmd_event_data_t));
/* memcpy() copies all the scalar values; the string members are
* duplicated below so the copy doesn't share (and later dangle)
* pointers with the original event. */
memcpy(copy, event, sizeof(lrmd_event_data_t));
copy->rsc_id = event->rsc_id ? strdup(event->rsc_id) : NULL;
copy->op_type = event->op_type ? strdup(event->op_type) : NULL;
copy->user_data = event->user_data ? strdup(event->user_data) : NULL;
copy->output = event->output ? strdup(event->output) : NULL;
copy->exit_reason = event->exit_reason ? strdup(event->exit_reason) : NULL;
copy->remote_nodename = event->remote_nodename ? strdup(event->remote_nodename) : NULL;
copy->params = crm_str_table_dup(event->params);
return copy;
}
void
lrmd_free_event(lrmd_event_data_t * event)
{
if (!event) {
return;
}
/* These members are declared const, so cast away const when freeing them */
free((char *)event->rsc_id);
free((char *)event->op_type);
free((char *)event->user_data);
free((char *)event->output);
free((char *)event->exit_reason);
free((char *)event->remote_nodename);
if (event->params) {
g_hash_table_destroy(event->params);
}
free(event);
}
static int
lrmd_dispatch_internal(lrmd_t * lrmd, xmlNode * msg)
{
const char *type;
const char *proxy_session = crm_element_value(msg, F_LRMD_IPC_SESSION);
lrmd_private_t *native = lrmd->private;
lrmd_event_data_t event = { 0, };
if (proxy_session != NULL) {
/* this is proxy business */
lrmd_internal_proxy_dispatch(lrmd, msg);
return 1;
} else if (!native->callback) {
/* no callback set */
crm_trace("notify event received but client has not set callback");
return 1;
}
event.remote_nodename = native->remote_nodename;
type = crm_element_value(msg, F_LRMD_OPERATION);
crm_element_value_int(msg, F_LRMD_CALLID, &event.call_id);
event.rsc_id = crm_element_value(msg, F_LRMD_RSC_ID);
if (crm_str_eq(type, LRMD_OP_RSC_REG, TRUE)) {
event.type = lrmd_event_register;
} else if (crm_str_eq(type, LRMD_OP_RSC_UNREG, TRUE)) {
event.type = lrmd_event_unregister;
} else if (crm_str_eq(type, LRMD_OP_RSC_EXEC, TRUE)) {
crm_element_value_int(msg, F_LRMD_TIMEOUT, &event.timeout);
crm_element_value_int(msg, F_LRMD_RSC_INTERVAL, &event.interval);
crm_element_value_int(msg, F_LRMD_RSC_START_DELAY, &event.start_delay);
crm_element_value_int(msg, F_LRMD_EXEC_RC, (int *)&event.rc);
crm_element_value_int(msg, F_LRMD_OP_STATUS, &event.op_status);
crm_element_value_int(msg, F_LRMD_RSC_DELETED, &event.rsc_deleted);
crm_element_value_int(msg, F_LRMD_RSC_RUN_TIME, (int *)&event.t_run);
crm_element_value_int(msg, F_LRMD_RSC_RCCHANGE_TIME, (int *)&event.t_rcchange);
crm_element_value_int(msg, F_LRMD_RSC_EXEC_TIME, (int *)&event.exec_time);
crm_element_value_int(msg, F_LRMD_RSC_QUEUE_TIME, (int *)&event.queue_time);
event.op_type = crm_element_value(msg, F_LRMD_RSC_ACTION);
event.user_data = crm_element_value(msg, F_LRMD_RSC_USERDATA_STR);
event.output = crm_element_value(msg, F_LRMD_RSC_OUTPUT);
event.exit_reason = crm_element_value(msg, F_LRMD_RSC_EXIT_REASON);
event.type = lrmd_event_exec_complete;
event.params = xml2list(msg);
} else if (crm_str_eq(type, LRMD_OP_NEW_CLIENT, TRUE)) {
event.type = lrmd_event_new_client;
} else if (crm_str_eq(type, LRMD_OP_POKE, TRUE)) {
event.type = lrmd_event_poke;
} else {
return 1;
}
crm_trace("op %s notify event received", type);
native->callback(&event);
if (event.params) {
g_hash_table_destroy(event.params);
}
return 1;
}
static int
lrmd_ipc_dispatch(const char *buffer, ssize_t length, gpointer userdata)
{
lrmd_t *lrmd = userdata;
lrmd_private_t *native = lrmd->private;
xmlNode *msg;
int rc;
if (!native->callback) {
/* no callback set */
return 1;
}
msg = string2xml(buffer);
rc = lrmd_dispatch_internal(lrmd, msg);
free_xml(msg);
return rc;
}
#ifdef HAVE_GNUTLS_GNUTLS_H
static void
lrmd_free_xml(gpointer userdata)
{
free_xml((xmlNode *) userdata);
}
static int
lrmd_tls_connected(lrmd_t * lrmd)
{
lrmd_private_t *native = lrmd->private;
if (native->remote->tls_session) {
return TRUE;
}
return FALSE;
}
static int
lrmd_tls_dispatch(gpointer userdata)
{
lrmd_t *lrmd = userdata;
lrmd_private_t *native = lrmd->private;
xmlNode *xml = NULL;
int rc = 0;
int disconnected = 0;
if (lrmd_tls_connected(lrmd) == FALSE) {
crm_trace("tls dispatch triggered after disconnect");
return 0;
}
crm_trace("tls_dispatch triggered");
/* First check if there are any pending notifies to process that came
* while we were waiting for replies earlier. */
if (native->pending_notify) {
GList *iter = NULL;
crm_trace("Processing pending notifies");
for (iter = native->pending_notify; iter; iter = iter->next) {
lrmd_dispatch_internal(lrmd, iter->data);
}
g_list_free_full(native->pending_notify, lrmd_free_xml);
native->pending_notify = NULL;
}
/* Next read the current buffer and see if there are any messages to handle. */
rc = crm_remote_ready(native->remote, 0);
if (rc == 0) {
/* nothing to read, see if any full messages are already in buffer. */
xml = crm_remote_parse_buffer(native->remote);
} else if (rc < 0) {
disconnected = 1;
} else {
crm_remote_recv(native->remote, -1, &disconnected);
xml = crm_remote_parse_buffer(native->remote);
}
while (xml) {
const char *msg_type = crm_element_value(xml, F_LRMD_REMOTE_MSG_TYPE);
if (safe_str_eq(msg_type, "notify")) {
lrmd_dispatch_internal(lrmd, xml);
} else if (safe_str_eq(msg_type, "reply")) {
if (native->expected_late_replies > 0) {
native->expected_late_replies--;
} else {
int reply_id = 0;
crm_element_value_int(xml, F_LRMD_CALLID, &reply_id);
/* if this happens, we want to know about it */
crm_err("Got outdated reply %d", reply_id);
}
}
free_xml(xml);
xml = crm_remote_parse_buffer(native->remote);
}
if (disconnected) {
crm_info("Server disconnected while reading remote server msg.");
lrmd_tls_disconnect(lrmd);
return 0;
}
return 1;
}
#endif
/* Not used with mainloop */
int
lrmd_poll(lrmd_t * lrmd, int timeout)
{
lrmd_private_t *native = lrmd->private;
switch (native->type) {
case CRM_CLIENT_IPC:
return crm_ipc_ready(native->ipc);
#ifdef HAVE_GNUTLS_GNUTLS_H
case CRM_CLIENT_TLS:
if (native->pending_notify) {
return 1;
}
return crm_remote_ready(native->remote, 0);
#endif
default:
crm_err("Unsupported connection type: %d", native->type);
}
return 0;
}
/* Not used with mainloop */
bool
lrmd_dispatch(lrmd_t * lrmd)
{
lrmd_private_t *private = NULL;
CRM_ASSERT(lrmd != NULL);
private = lrmd->private;
switch (private->type) {
case CRM_CLIENT_IPC:
while (crm_ipc_ready(private->ipc)) {
if (crm_ipc_read(private->ipc) > 0) {
const char *msg = crm_ipc_buffer(private->ipc);
lrmd_ipc_dispatch(msg, strlen(msg), lrmd);
}
}
break;
#ifdef HAVE_GNUTLS_GNUTLS_H
case CRM_CLIENT_TLS:
lrmd_tls_dispatch(lrmd);
break;
#endif
default:
crm_err("Unsupported connection type: %d", private->type);
}
if (lrmd_api_is_connected(lrmd) == FALSE) {
crm_err("Connection closed");
return FALSE;
}
return TRUE;
}
static xmlNode *
lrmd_create_op(const char *token, const char *op, xmlNode *data, int timeout,
enum lrmd_call_options options)
{
xmlNode *op_msg = create_xml_node(NULL, "lrmd_command");
CRM_CHECK(op_msg != NULL, return NULL);
CRM_CHECK(token != NULL, return NULL);
crm_xml_add(op_msg, F_XML_TAGNAME, "lrmd_command");
crm_xml_add(op_msg, F_TYPE, T_LRMD);
crm_xml_add(op_msg, F_LRMD_CALLBACK_TOKEN, token);
crm_xml_add(op_msg, F_LRMD_OPERATION, op);
crm_xml_add_int(op_msg, F_LRMD_TIMEOUT, timeout);
crm_xml_add_int(op_msg, F_LRMD_CALLOPTS, options);
if (data != NULL) {
add_message_xml(op_msg, F_LRMD_CALLDATA, data);
}
crm_trace("Created lrmd %s command with call options %.8lx (%d)",
op, (long)options, options);
return op_msg;
}
static void
lrmd_ipc_connection_destroy(gpointer userdata)
{
lrmd_t *lrmd = userdata;
lrmd_private_t *native = lrmd->private;
crm_info("IPC connection destroyed");
/* Prevent these from being cleaned up in lrmd_api_disconnect() */
native->ipc = NULL;
native->source = NULL;
if (native->callback) {
lrmd_event_data_t event = { 0, };
event.type = lrmd_event_disconnect;
event.remote_nodename = native->remote_nodename;
native->callback(&event);
}
}
#ifdef HAVE_GNUTLS_GNUTLS_H
static void
lrmd_tls_connection_destroy(gpointer userdata)
{
lrmd_t *lrmd = userdata;
lrmd_private_t *native = lrmd->private;
crm_info("TLS connection destroyed");
if (native->remote->tls_session) {
gnutls_bye(*native->remote->tls_session, GNUTLS_SHUT_RDWR);
gnutls_deinit(*native->remote->tls_session);
gnutls_free(native->remote->tls_session);
}
if (native->psk_cred_c) {
gnutls_psk_free_client_credentials(native->psk_cred_c);
}
if (native->sock) {
close(native->sock);
}
if (native->process_notify) {
mainloop_destroy_trigger(native->process_notify);
native->process_notify = NULL;
}
if (native->pending_notify) {
g_list_free_full(native->pending_notify, lrmd_free_xml);
native->pending_notify = NULL;
}
free(native->remote->buffer);
native->remote->buffer = NULL;
native->source = 0;
native->sock = 0;
native->psk_cred_c = NULL;
native->remote->tls_session = NULL;
native->sock = 0;
if (native->callback) {
lrmd_event_data_t event = { 0, };
event.remote_nodename = native->remote_nodename;
event.type = lrmd_event_disconnect;
native->callback(&event);
}
return;
}
int
lrmd_tls_send_msg(crm_remote_t * session, xmlNode * msg, uint32_t id, const char *msg_type)
{
int rc = -1;
crm_xml_add_int(msg, F_LRMD_REMOTE_MSG_ID, id);
crm_xml_add(msg, F_LRMD_REMOTE_MSG_TYPE, msg_type);
rc = crm_remote_send(session, msg);
if (rc < 0) {
crm_err("Failed to send remote lrmd tls msg, rc = %d", rc);
return rc;
}
return rc;
}
static xmlNode *
lrmd_tls_recv_reply(lrmd_t * lrmd, int total_timeout, int expected_reply_id, int *disconnected)
{
lrmd_private_t *native = lrmd->private;
xmlNode *xml = NULL;
time_t start = time(NULL);
const char *msg_type = NULL;
int reply_id = 0;
int remaining_timeout = 0;
/* A timeout of 0 makes no sense here: we have to wait some period of time
* for the response to come back. If the timeout is unset (0 or negative)
* or larger than the maximum, cap it at 10 seconds (MAX_TLS_RECV_WAIT). */
if (total_timeout <= 0 || total_timeout > MAX_TLS_RECV_WAIT) {
total_timeout = MAX_TLS_RECV_WAIT;
}
while (!xml) {
xml = crm_remote_parse_buffer(native->remote);
if (!xml) {
/* read some more off the tls buffer if we still have time left. */
if (remaining_timeout) {
remaining_timeout = total_timeout - ((time(NULL) - start) * 1000);
} else {
remaining_timeout = total_timeout;
}
if (remaining_timeout <= 0) {
crm_err("Never received the expected reply during the timeout period, disconnecting.");
*disconnected = TRUE;
return NULL;
}
crm_remote_recv(native->remote, remaining_timeout, disconnected);
xml = crm_remote_parse_buffer(native->remote);
if (!xml) {
crm_err("Unable to receive expected reply, disconnecting.");
*disconnected = TRUE;
return NULL;
} else if (*disconnected) {
return NULL;
}
}
CRM_ASSERT(xml != NULL);
crm_element_value_int(xml, F_LRMD_REMOTE_MSG_ID, &reply_id);
msg_type = crm_element_value(xml, F_LRMD_REMOTE_MSG_TYPE);
if (!msg_type) {
crm_err("Empty msg type received while waiting for reply");
free_xml(xml);
xml = NULL;
} else if (safe_str_eq(msg_type, "notify")) {
/* got a notify while waiting for reply, trigger the notify to be processed later */
crm_info("queueing notify");
native->pending_notify = g_list_append(native->pending_notify, xml);
if (native->process_notify) {
crm_info("notify trigger set.");
mainloop_set_trigger(native->process_notify);
}
xml = NULL;
} else if (safe_str_neq(msg_type, "reply")) {
/* msg isn't a reply, make some noise */
crm_err("Expected a reply, got %s", msg_type);
free_xml(xml);
xml = NULL;
} else if (reply_id != expected_reply_id) {
if (native->expected_late_replies > 0) {
native->expected_late_replies--;
} else {
crm_err("Got outdated reply, expected id %d got id %d", expected_reply_id, reply_id);
}
free_xml(xml);
xml = NULL;
}
}
if (native->remote->buffer && native->process_notify) {
mainloop_set_trigger(native->process_notify);
}
return xml;
}
static int
lrmd_tls_send(lrmd_t * lrmd, xmlNode * msg)
{
int rc = 0;
lrmd_private_t *native = lrmd->private;
global_remote_msg_id++;
if (global_remote_msg_id <= 0) {
global_remote_msg_id = 1;
}
rc = lrmd_tls_send_msg(native->remote, msg, global_remote_msg_id, "request");
if (rc <= 0) {
crm_err("Remote lrmd send failed, disconnecting");
lrmd_tls_disconnect(lrmd);
return -ENOTCONN;
}
return pcmk_ok;
}
static int
lrmd_tls_send_recv(lrmd_t * lrmd, xmlNode * msg, int timeout, xmlNode ** reply)
{
int rc = 0;
int disconnected = 0;
xmlNode *xml = NULL;
if (lrmd_tls_connected(lrmd) == FALSE) {
return -1;
}
rc = lrmd_tls_send(lrmd, msg);
if (rc < 0) {
return rc;
}
xml = lrmd_tls_recv_reply(lrmd, timeout, global_remote_msg_id, &disconnected);
if (disconnected) {
crm_err("Remote lrmd server disconnected while waiting for reply with id %d. ",
global_remote_msg_id);
lrmd_tls_disconnect(lrmd);
rc = -ENOTCONN;
} else if (!xml) {
crm_err("Remote lrmd never received reply for request id %d. timeout: %dms ",
global_remote_msg_id, timeout);
rc = -ECOMM;
}
if (reply) {
*reply = xml;
} else {
free_xml(xml);
}
return rc;
}
#endif
static int
lrmd_send_xml(lrmd_t * lrmd, xmlNode * msg, int timeout, xmlNode ** reply)
{
int rc = -1;
lrmd_private_t *native = lrmd->private;
switch (native->type) {
case CRM_CLIENT_IPC:
rc = crm_ipc_send(native->ipc, msg, crm_ipc_client_response, timeout, reply);
break;
#ifdef HAVE_GNUTLS_GNUTLS_H
case CRM_CLIENT_TLS:
rc = lrmd_tls_send_recv(lrmd, msg, timeout, reply);
break;
#endif
default:
crm_err("Unsupported connection type: %d", native->type);
}
return rc;
}
static int
lrmd_send_xml_no_reply(lrmd_t * lrmd, xmlNode * msg)
{
int rc = -1;
lrmd_private_t *native = lrmd->private;
switch (native->type) {
case CRM_CLIENT_IPC:
rc = crm_ipc_send(native->ipc, msg, crm_ipc_flags_none, 0, NULL);
break;
#ifdef HAVE_GNUTLS_GNUTLS_H
case CRM_CLIENT_TLS:
rc = lrmd_tls_send(lrmd, msg);
if (rc == pcmk_ok) {
/* we don't want to wait around for the reply, but
* since the request/reply protocol needs to behave the same
* as libqb, a reply will eventually come later anyway. */
native->expected_late_replies++;
}
break;
#endif
default:
crm_err("Unsupported connection type: %d", native->type);
}
return rc;
}
static int
lrmd_api_is_connected(lrmd_t * lrmd)
{
lrmd_private_t *native = lrmd->private;
switch (native->type) {
case CRM_CLIENT_IPC:
return crm_ipc_connected(native->ipc);
break;
#ifdef HAVE_GNUTLS_GNUTLS_H
case CRM_CLIENT_TLS:
return lrmd_tls_connected(lrmd);
break;
#endif
default:
crm_err("Unsupported connection type: %d", native->type);
}
return 0;
}
/*!
* \internal
* \brief Send a prepared API command to the lrmd server
*
* \param[in] lrmd Existing connection to the lrmd server
* \param[in] op Name of API command to send
* \param[in] data Command data XML to add to the sent command
* \param[out] output_data If expecting a reply, it will be stored here
* \param[in] timeout Timeout in milliseconds (if 0, defaults to 1000);
* will be added to the command XML
* \param[in] call_options Call options to pass to server when sending
* \param[in] expect_reply If TRUE, wait for a reply from the server;
* must be TRUE for IPC (as opposed to TLS) clients
*
* \return pcmk_ok on success, -errno on error
*/
static int
lrmd_send_command(lrmd_t *lrmd, const char *op, xmlNode *data,
xmlNode **output_data, int timeout,
enum lrmd_call_options options, gboolean expect_reply)
{
int rc = pcmk_ok;
lrmd_private_t *native = lrmd->private;
xmlNode *op_msg = NULL;
xmlNode *op_reply = NULL;
if (!lrmd_api_is_connected(lrmd)) {
return -ENOTCONN;
}
if (op == NULL) {
crm_err("No operation specified");
return -EINVAL;
}
CRM_CHECK(native->token != NULL, ;);
crm_trace("sending %s op to lrmd", op);
op_msg = lrmd_create_op(native->token, op, data, timeout, options);
if (op_msg == NULL) {
return -EINVAL;
}
if (expect_reply) {
rc = lrmd_send_xml(lrmd, op_msg, timeout, &op_reply);
} else {
rc = lrmd_send_xml_no_reply(lrmd, op_msg);
goto done;
}
if (rc < 0) {
crm_perror(LOG_ERR, "Couldn't perform %s operation (timeout=%d): %d", op, timeout, rc);
rc = -ECOMM;
goto done;
} else if(op_reply == NULL) {
rc = -ENOMSG;
goto done;
}
rc = pcmk_ok;
crm_trace("%s op reply received", op);
if (crm_element_value_int(op_reply, F_LRMD_RC, &rc) != 0) {
rc = -ENOMSG;
goto done;
}
crm_log_xml_trace(op_reply, "Reply");
if (output_data) {
*output_data = op_reply;
op_reply = NULL; /* Prevent subsequent free */
}
done:
if (lrmd_api_is_connected(lrmd) == FALSE) {
crm_err("LRMD disconnected");
}
free_xml(op_msg);
free_xml(op_reply);
return rc;
}
static int
lrmd_api_poke_connection(lrmd_t * lrmd)
{
int rc;
lrmd_private_t *native = lrmd->private;
xmlNode *data = create_xml_node(NULL, F_LRMD_RSC);
crm_xml_add(data, F_LRMD_ORIGIN, __FUNCTION__);
rc = lrmd_send_command(lrmd, LRMD_OP_POKE, data, NULL, 0, 0, native->type == CRM_CLIENT_IPC ? TRUE : FALSE);
free_xml(data);
return rc < 0 ? rc : pcmk_ok;
}
int
remote_proxy_check(lrmd_t * lrmd, GHashTable *hash)
{
int rc;
const char *value;
lrmd_private_t *native = lrmd->private;
xmlNode *data = create_xml_node(NULL, F_LRMD_OPERATION);
crm_xml_add(data, F_LRMD_ORIGIN, __FUNCTION__);
value = g_hash_table_lookup(hash, "stonith-watchdog-timeout");
crm_xml_add(data, F_LRMD_WATCHDOG, value);
rc = lrmd_send_command(lrmd, LRMD_OP_CHECK, data, NULL, 0, 0, native->type == CRM_CLIENT_IPC ? TRUE : FALSE);
free_xml(data);
return rc < 0 ? rc : pcmk_ok;
}
static int
lrmd_handshake(lrmd_t * lrmd, const char *name)
{
int rc = pcmk_ok;
lrmd_private_t *native = lrmd->private;
xmlNode *reply = NULL;
xmlNode *hello = create_xml_node(NULL, "lrmd_command");
crm_xml_add(hello, F_TYPE, T_LRMD);
crm_xml_add(hello, F_LRMD_OPERATION, CRM_OP_REGISTER);
crm_xml_add(hello, F_LRMD_CLIENTNAME, name);
crm_xml_add(hello, F_LRMD_PROTOCOL_VERSION, LRMD_PROTOCOL_VERSION);
/* advertise that we are a proxy provider */
if (native->proxy_callback) {
crm_xml_add(hello, F_LRMD_IS_IPC_PROVIDER, "true");
}
rc = lrmd_send_xml(lrmd, hello, -1, &reply);
if (rc < 0) {
crm_perror(LOG_DEBUG, "Couldn't complete registration with the lrmd API: %d", rc);
rc = -ECOMM;
} else if (reply == NULL) {
crm_err("Did not receive registration reply");
rc = -EPROTO;
} else {
const char *version = crm_element_value(reply, F_LRMD_PROTOCOL_VERSION);
const char *msg_type = crm_element_value(reply, F_LRMD_OPERATION);
const char *tmp_ticket = crm_element_value(reply, F_LRMD_CLIENTID);
crm_element_value_int(reply, F_LRMD_RC, &rc);
if (rc == -EPROTO) {
crm_err("LRMD protocol mismatch client version %s, server version %s",
LRMD_PROTOCOL_VERSION, version);
crm_log_xml_err(reply, "Protocol Error");
} else if (safe_str_neq(msg_type, CRM_OP_REGISTER)) {
crm_err("Invalid registration message: %s", msg_type);
crm_log_xml_err(reply, "Bad reply");
rc = -EPROTO;
} else if (tmp_ticket == NULL) {
crm_err("No registration token provided");
crm_log_xml_err(reply, "Bad reply");
rc = -EPROTO;
} else {
crm_trace("Obtained registration token: %s", tmp_ticket);
native->token = strdup(tmp_ticket);
native->peer_version = strdup(version?version:"1.0"); /* Included since 1.1 */
rc = pcmk_ok;
}
}
free_xml(reply);
free_xml(hello);
if (rc != pcmk_ok) {
lrmd_api_disconnect(lrmd);
}
return rc;
}
static int
lrmd_ipc_connect(lrmd_t * lrmd, int *fd)
{
int rc = pcmk_ok;
lrmd_private_t *native = lrmd->private;
static struct ipc_client_callbacks lrmd_callbacks = {
.dispatch = lrmd_ipc_dispatch,
.destroy = lrmd_ipc_connection_destroy
};
crm_info("Connecting to lrmd");
if (fd) {
/* No mainloop */
native->ipc = crm_ipc_new(CRM_SYSTEM_LRMD, 0);
if (native->ipc && crm_ipc_connect(native->ipc)) {
*fd = crm_ipc_get_fd(native->ipc);
} else if (native->ipc) {
crm_perror(LOG_ERR, "Connection to local resource manager failed");
rc = -ENOTCONN;
}
} else {
native->source = mainloop_add_ipc_client(CRM_SYSTEM_LRMD, G_PRIORITY_HIGH, 0, lrmd, &lrmd_callbacks);
native->ipc = mainloop_get_ipc_client(native->source);
}
if (native->ipc == NULL) {
crm_debug("Could not connect to the LRMD API");
rc = -ENOTCONN;
}
return rc;
}
#ifdef HAVE_GNUTLS_GNUTLS_H
static int
set_key(gnutls_datum_t * key, const char *location)
{
FILE *stream;
int read_len = 256;
int cur_len = 0;
int buf_len = read_len;
static char *key_cache = NULL;
static size_t key_cache_len = 0;
static time_t key_cache_updated;
if (location == NULL) {
return -1;
}
if (key_cache) {
time_t now = time(NULL);
if ((now - key_cache_updated) < 60) {
key->data = gnutls_malloc(key_cache_len + 1);
key->size = key_cache_len;
memcpy(key->data, key_cache, key_cache_len);
crm_debug("using cached LRMD key");
return 0;
} else {
key_cache_len = 0;
key_cache_updated = 0;
free(key_cache);
key_cache = NULL;
crm_debug("clearing lrmd key cache");
}
}
stream = fopen(location, "r");
if (!stream) {
return -1;
}
key->data = gnutls_malloc(read_len);
while (!feof(stream)) {
int next;
if (cur_len == buf_len) {
buf_len = cur_len + read_len;
key->data = gnutls_realloc(key->data, buf_len);
}
next = fgetc(stream);
if (next == EOF && feof(stream)) {
break;
}
key->data[cur_len] = next;
cur_len++;
}
fclose(stream);
key->size = cur_len;
if (!cur_len) {
gnutls_free(key->data);
key->data = 0;
return -1;
}
if (!key_cache) {
key_cache = calloc(1, key->size + 1);
memcpy(key_cache, key->data, key->size);
key_cache_len = key->size;
key_cache_updated = time(NULL);
}
return 0;
}
int
lrmd_tls_set_key(gnutls_datum_t * key)
{
int rc = 0;
const char *specific_location = getenv("PCMK_authkey_location");
if (set_key(key, specific_location) == 0) {
crm_debug("Using custom authkey location %s", specific_location);
return 0;
} else if (specific_location) {
crm_err("No valid lrmd remote key found at %s, trying default location", specific_location);
}
if (set_key(key, DEFAULT_REMOTE_KEY_LOCATION) != 0) {
rc = set_key(key, ALT_REMOTE_KEY_LOCATION);
}
if (rc) {
crm_err("No valid lrmd remote key found at %s", DEFAULT_REMOTE_KEY_LOCATION);
return -1;
}
return rc;
}
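/* Illustrative note (not part of the library): the key location can be
* overridden per host via the environment, for example in a sysconfig or
* unit file (the path below is hypothetical):
*
*     PCMK_authkey_location=/srv/cluster/pacemaker_authkey
*
* Otherwise DEFAULT_REMOTE_KEY_LOCATION is tried first, then
* ALT_REMOTE_KEY_LOCATION.
*/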
static void
lrmd_gnutls_global_init(void)
{
static int gnutls_init = 0;
if (!gnutls_init) {
crm_gnutls_global_init();
}
gnutls_init = 1;
}
#endif
static void
report_async_connection_result(lrmd_t * lrmd, int rc)
{
lrmd_private_t *native = lrmd->private;
if (native->callback) {
lrmd_event_data_t event = { 0, };
event.type = lrmd_event_connect;
event.remote_nodename = native->remote_nodename;
event.connection_rc = rc;
native->callback(&event);
}
}
#ifdef HAVE_GNUTLS_GNUTLS_H
static void
lrmd_tcp_connect_cb(void *userdata, int sock)
{
lrmd_t *lrmd = userdata;
lrmd_private_t *native = lrmd->private;
char name[256] = { 0, };
static struct mainloop_fd_callbacks lrmd_tls_callbacks = {
.dispatch = lrmd_tls_dispatch,
.destroy = lrmd_tls_connection_destroy,
};
int rc = sock;
gnutls_datum_t psk_key = { NULL, 0 };
native->async_timer = 0;
if (rc < 0) {
lrmd_tls_connection_destroy(lrmd);
crm_info("remote lrmd connect to %s at port %d failed", native->server, native->port);
report_async_connection_result(lrmd, rc);
return;
}
/* TODO: Now that the TCP connection has succeeded, continue with the TLS setup.
* This should also be made asynchronous soon, to avoid any blocking code in the client. */
native->sock = sock;
if (lrmd_tls_set_key(&psk_key) != 0) {
lrmd_tls_connection_destroy(lrmd);
return;
}
gnutls_psk_allocate_client_credentials(&native->psk_cred_c);
gnutls_psk_set_client_credentials(native->psk_cred_c, DEFAULT_REMOTE_USERNAME, &psk_key, GNUTLS_PSK_KEY_RAW);
gnutls_free(psk_key.data);
native->remote->tls_session = create_psk_tls_session(sock, GNUTLS_CLIENT, native->psk_cred_c);
if (crm_initiate_client_tls_handshake(native->remote, LRMD_CLIENT_HANDSHAKE_TIMEOUT) != 0) {
crm_warn("Client tls handshake failed for server %s:%d. Disconnecting", native->server,
native->port);
gnutls_deinit(*native->remote->tls_session);
gnutls_free(native->remote->tls_session);
native->remote->tls_session = NULL;
lrmd_tls_connection_destroy(lrmd);
report_async_connection_result(lrmd, -1);
return;
}
crm_info("Remote lrmd client TLS connection established with server %s:%d", native->server,
native->port);
snprintf(name, sizeof(name), "remote-lrmd-%s:%d", native->server, native->port);
native->process_notify = mainloop_add_trigger(G_PRIORITY_HIGH, lrmd_tls_dispatch, lrmd);
native->source =
mainloop_add_fd(name, G_PRIORITY_HIGH, native->sock, lrmd, &lrmd_tls_callbacks);
rc = lrmd_handshake(lrmd, name);
report_async_connection_result(lrmd, rc);
return;
}
static int
lrmd_tls_connect_async(lrmd_t * lrmd, int timeout /*ms */ )
{
int rc = -1;
int sock = 0;
int timer_id = 0;
lrmd_private_t *native = lrmd->private;
lrmd_gnutls_global_init();
sock = crm_remote_tcp_connect_async(native->server, native->port, timeout, &timer_id, lrmd,
lrmd_tcp_connect_cb);
if (sock != -1) {
native->sock = sock;
rc = 0;
native->async_timer = timer_id;
}
return rc;
}
static int
lrmd_tls_connect(lrmd_t * lrmd, int *fd)
{
static struct mainloop_fd_callbacks lrmd_tls_callbacks = {
.dispatch = lrmd_tls_dispatch,
.destroy = lrmd_tls_connection_destroy,
};
lrmd_private_t *native = lrmd->private;
int sock;
gnutls_datum_t psk_key = { NULL, 0 };
lrmd_gnutls_global_init();
sock = crm_remote_tcp_connect(native->server, native->port);
if (sock < 0) {
crm_warn("Could not establish remote lrmd connection to %s", native->server);
lrmd_tls_connection_destroy(lrmd);
return -ENOTCONN;
}
native->sock = sock;
if (lrmd_tls_set_key(&psk_key) != 0) {
lrmd_tls_connection_destroy(lrmd);
return -1;
}
gnutls_psk_allocate_client_credentials(&native->psk_cred_c);
gnutls_psk_set_client_credentials(native->psk_cred_c, DEFAULT_REMOTE_USERNAME, &psk_key, GNUTLS_PSK_KEY_RAW);
gnutls_free(psk_key.data);
native->remote->tls_session = create_psk_tls_session(sock, GNUTLS_CLIENT, native->psk_cred_c);
if (crm_initiate_client_tls_handshake(native->remote, LRMD_CLIENT_HANDSHAKE_TIMEOUT) != 0) {
crm_err("Session creation for %s:%d failed", native->server, native->port);
gnutls_deinit(*native->remote->tls_session);
gnutls_free(native->remote->tls_session);
native->remote->tls_session = NULL;
lrmd_tls_connection_destroy(lrmd);
return -1;
}
crm_info("Remote lrmd client TLS connection established with server %s:%d", native->server,
native->port);
if (fd) {
*fd = sock;
} else {
char name[256] = { 0, };
snprintf(name, sizeof(name), "remote-lrmd-%s:%d", native->server, native->port);
native->process_notify = mainloop_add_trigger(G_PRIORITY_HIGH, lrmd_tls_dispatch, lrmd);
native->source =
mainloop_add_fd(name, G_PRIORITY_HIGH, native->sock, lrmd, &lrmd_tls_callbacks);
}
return pcmk_ok;
}
#endif
static int
lrmd_api_connect(lrmd_t * lrmd, const char *name, int *fd)
{
int rc = -ENOTCONN;
lrmd_private_t *native = lrmd->private;
switch (native->type) {
case CRM_CLIENT_IPC:
rc = lrmd_ipc_connect(lrmd, fd);
break;
#ifdef HAVE_GNUTLS_GNUTLS_H
case CRM_CLIENT_TLS:
rc = lrmd_tls_connect(lrmd, fd);
break;
#endif
default:
crm_err("Unsupported connection type: %d", native->type);
}
if (rc == pcmk_ok) {
rc = lrmd_handshake(lrmd, name);
}
return rc;
}
static int
lrmd_api_connect_async(lrmd_t * lrmd, const char *name, int timeout)
{
int rc = 0;
lrmd_private_t *native = lrmd->private;
if (!native->callback) {
crm_err("Async connect not possible, no lrmd client callback set.");
return -1;
}
switch (native->type) {
case CRM_CLIENT_IPC:
/* Fake an asynchronous connection for IPC; connecting is fast
* enough that we would gain very little from a real async path. */
rc = lrmd_api_connect(lrmd, name, NULL);
if (!rc) {
report_async_connection_result(lrmd, rc);
}
break;
#ifdef HAVE_GNUTLS_GNUTLS_H
case CRM_CLIENT_TLS:
rc = lrmd_tls_connect_async(lrmd, timeout);
if (rc) {
/* connection failed, report rc now */
report_async_connection_result(lrmd, rc);
}
break;
#endif
default:
crm_err("Unsupported connection type: %d", native->type);
}
return rc;
}
static void
lrmd_ipc_disconnect(lrmd_t * lrmd)
{
lrmd_private_t *native = lrmd->private;
if (native->source != NULL) {
/* Attached to mainloop */
mainloop_del_ipc_client(native->source);
native->source = NULL;
native->ipc = NULL;
} else if (native->ipc) {
/* Not attached to mainloop */
crm_ipc_t *ipc = native->ipc;
native->ipc = NULL;
crm_ipc_close(ipc);
crm_ipc_destroy(ipc);
}
}
#ifdef HAVE_GNUTLS_GNUTLS_H
static void
lrmd_tls_disconnect(lrmd_t * lrmd)
{
lrmd_private_t *native = lrmd->private;
if (native->remote->tls_session) {
gnutls_bye(*native->remote->tls_session, GNUTLS_SHUT_RDWR);
gnutls_deinit(*native->remote->tls_session);
gnutls_free(native->remote->tls_session);
native->remote->tls_session = 0;
}
if (native->async_timer) {
g_source_remove(native->async_timer);
native->async_timer = 0;
}
if (native->source != NULL) {
/* Attached to mainloop */
mainloop_del_ipc_client(native->source);
native->source = NULL;
} else if (native->sock) {
close(native->sock);
native->sock = 0;
}
if (native->pending_notify) {
g_list_free_full(native->pending_notify, lrmd_free_xml);
native->pending_notify = NULL;
}
}
#endif
static int
lrmd_api_disconnect(lrmd_t * lrmd)
{
lrmd_private_t *native = lrmd->private;
crm_info("Disconnecting from %d lrmd service", native->type);
switch (native->type) {
case CRM_CLIENT_IPC:
lrmd_ipc_disconnect(lrmd);
break;
#ifdef HAVE_GNUTLS_GNUTLS_H
case CRM_CLIENT_TLS:
lrmd_tls_disconnect(lrmd);
break;
#endif
default:
crm_err("Unsupported connection type: %d", native->type);
}
free(native->token);
native->token = NULL;
free(native->peer_version);
native->peer_version = NULL;
return 0;
}
static int
lrmd_api_register_rsc(lrmd_t * lrmd,
const char *rsc_id,
const char *class,
const char *provider, const char *type, enum lrmd_call_options options)
{
int rc = pcmk_ok;
xmlNode *data = NULL;
if (!class || !type || !rsc_id) {
return -EINVAL;
}
if (safe_str_eq(class, PCMK_RESOURCE_CLASS_OCF) && !provider) {
return -EINVAL;
}
data = create_xml_node(NULL, F_LRMD_RSC);
crm_xml_add(data, F_LRMD_ORIGIN, __FUNCTION__);
crm_xml_add(data, F_LRMD_RSC_ID, rsc_id);
crm_xml_add(data, F_LRMD_CLASS, class);
crm_xml_add(data, F_LRMD_PROVIDER, provider);
crm_xml_add(data, F_LRMD_TYPE, type);
rc = lrmd_send_command(lrmd, LRMD_OP_RSC_REG, data, NULL, 0, options, TRUE);
free_xml(data);
return rc;
}
static int
lrmd_api_unregister_rsc(lrmd_t * lrmd, const char *rsc_id, enum lrmd_call_options options)
{
int rc = pcmk_ok;
xmlNode *data = create_xml_node(NULL, F_LRMD_RSC);
crm_xml_add(data, F_LRMD_ORIGIN, __FUNCTION__);
crm_xml_add(data, F_LRMD_RSC_ID, rsc_id);
rc = lrmd_send_command(lrmd, LRMD_OP_RSC_UNREG, data, NULL, 0, options, TRUE);
free_xml(data);
return rc;
}
lrmd_rsc_info_t *
lrmd_copy_rsc_info(lrmd_rsc_info_t * rsc_info)
{
lrmd_rsc_info_t *copy = NULL;
copy = calloc(1, sizeof(lrmd_rsc_info_t));
copy->id = strdup(rsc_info->id);
copy->type = strdup(rsc_info->type);
copy->class = strdup(rsc_info->class);
if (rsc_info->provider) {
copy->provider = strdup(rsc_info->provider);
}
return copy;
}
void
lrmd_free_rsc_info(lrmd_rsc_info_t * rsc_info)
{
if (!rsc_info) {
return;
}
free(rsc_info->id);
free(rsc_info->type);
free(rsc_info->class);
free(rsc_info->provider);
free(rsc_info);
}
static lrmd_rsc_info_t *
lrmd_api_get_rsc_info(lrmd_t * lrmd, const char *rsc_id, enum lrmd_call_options options)
{
lrmd_rsc_info_t *rsc_info = NULL;
xmlNode *data = create_xml_node(NULL, F_LRMD_RSC);
xmlNode *output = NULL;
const char *class = NULL;
const char *provider = NULL;
const char *type = NULL;
crm_xml_add(data, F_LRMD_ORIGIN, __FUNCTION__);
crm_xml_add(data, F_LRMD_RSC_ID, rsc_id);
lrmd_send_command(lrmd, LRMD_OP_RSC_INFO, data, &output, 0, options, TRUE);
free_xml(data);
if (!output) {
return NULL;
}
class = crm_element_value(output, F_LRMD_CLASS);
provider = crm_element_value(output, F_LRMD_PROVIDER);
type = crm_element_value(output, F_LRMD_TYPE);
if (!class || !type) {
free_xml(output);
return NULL;
} else if (safe_str_eq(class, PCMK_RESOURCE_CLASS_OCF) && !provider) {
free_xml(output);
return NULL;
}
rsc_info = calloc(1, sizeof(lrmd_rsc_info_t));
rsc_info->id = strdup(rsc_id);
rsc_info->class = strdup(class);
if (provider) {
rsc_info->provider = strdup(provider);
}
rsc_info->type = strdup(type);
free_xml(output);
return rsc_info;
}
static void
lrmd_api_set_callback(lrmd_t * lrmd, lrmd_event_callback callback)
{
lrmd_private_t *native = lrmd->private;
native->callback = callback;
}
void
lrmd_internal_set_proxy_callback(lrmd_t * lrmd, void *userdata, void (*callback)(lrmd_t *lrmd, void *userdata, xmlNode *msg))
{
lrmd_private_t *native = lrmd->private;
native->proxy_callback = callback;
native->proxy_callback_userdata = userdata;
}
void
lrmd_internal_proxy_dispatch(lrmd_t *lrmd, xmlNode *msg)
{
lrmd_private_t *native = lrmd->private;
if (native->proxy_callback) {
crm_log_xml_trace(msg, "PROXY_INBOUND");
native->proxy_callback(lrmd, native->proxy_callback_userdata, msg);
}
}
int
lrmd_internal_proxy_send(lrmd_t * lrmd, xmlNode *msg)
{
if (lrmd == NULL) {
return -ENOTCONN;
}
crm_xml_add(msg, F_LRMD_OPERATION, CRM_OP_IPC_FWD);
crm_log_xml_trace(msg, "PROXY_OUTBOUND");
return lrmd_send_xml_no_reply(lrmd, msg);
}
static int
stonith_get_metadata(const char *provider, const char *type, char **output)
{
int rc = pcmk_ok;
stonith_t *stonith_api = stonith_api_new();
if(stonith_api) {
stonith_api->cmds->metadata(stonith_api, st_opt_sync_call, type, provider, output, 0);
stonith_api->cmds->free(stonith_api);
}
if (*output == NULL) {
rc = -EIO;
}
return rc;
}
#define lsb_metadata_template \
"<?xml version='1.0'?>\n" \
"<!DOCTYPE resource-agent SYSTEM 'ra-api-1.dtd'>\n" \
"<resource-agent name='%s' version='0.1'>\n" \
" <version>1.0</version>\n" \
" <longdesc lang='en'>\n" \
" %s\n" \
" </longdesc>\n" \
" <shortdesc lang='en'>%s</shortdesc>\n" \
" <parameters>\n" \
" </parameters>\n" \
" <actions>\n" \
" <action name='meta-data' timeout='5' />\n" \
" <action name='start' timeout='15' />\n" \
" <action name='stop' timeout='15' />\n" \
" <action name='status' timeout='15' />\n" \
" <action name='restart' timeout='15' />\n" \
" <action name='force-reload' timeout='15' />\n" \
" <action name='monitor' timeout='15' interval='15' />\n" \
" </actions>\n" \
" <special tag='LSB'>\n" \
" <Provides>%s</Provides>\n" \
" <Required-Start>%s</Required-Start>\n" \
" <Required-Stop>%s</Required-Stop>\n" \
" <Should-Start>%s</Should-Start>\n" \
" <Should-Stop>%s</Should-Stop>\n" \
" <Default-Start>%s</Default-Start>\n" \
" <Default-Stop>%s</Default-Stop>\n" \
" </special>\n" \
"</resource-agent>\n"
#define LSB_INITSCRIPT_INFOBEGIN_TAG "### BEGIN INIT INFO"
#define LSB_INITSCRIPT_INFOEND_TAG "### END INIT INFO"
#define PROVIDES "# Provides:"
#define REQ_START "# Required-Start:"
#define REQ_STOP "# Required-Stop:"
#define SHLD_START "# Should-Start:"
#define SHLD_STOP "# Should-Stop:"
#define DFLT_START "# Default-Start:"
#define DFLT_STOP "# Default-Stop:"
#define SHORT_DSCR "# Short-Description:"
#define DESCRIPTION "# Description:"
#define lsb_meta_helper_free_value(m) \
do { \
if ((m) != NULL) { \
xmlFree(m); \
(m) = NULL; \
} \
} while(0)
/*!
* \internal
* \brief Grab an LSB header value
*
* \param[in] line Line read from LSB init script
* \param[in,out] value If not set, will be set to XML-safe copy of value
* \param[in] prefix Set value if line starts with this pattern
*
* \return TRUE if value was set, FALSE otherwise
*/
static inline gboolean
lsb_meta_helper_get_value(const char *line, char **value, const char *prefix)
{
if (!*value && !strncasecmp(line, prefix, strlen(prefix))) {
*value = (char *)xmlEncodeEntitiesReentrant(NULL, BAD_CAST line+strlen(prefix));
return TRUE;
}
return FALSE;
}
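/* Worked example (illustrative, not part of the library): given an init
* script header line "# Provides: my-daemon" and prefix PROVIDES, the
* helper above sets *value to an XML-safe copy of the remainder of the
* line (everything after "# Provides:") and returns TRUE; on any other
* line, or if *value is already set, it returns FALSE.
*/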
static int
lsb_get_metadata(const char *type, char **output)
{
char ra_pathname[PATH_MAX] = { 0, };
FILE *fp;
char buffer[1024];
char *provides = NULL;
char *req_start = NULL;
char *req_stop = NULL;
char *shld_start = NULL;
char *shld_stop = NULL;
char *dflt_start = NULL;
char *dflt_stop = NULL;
char *s_dscrpt = NULL;
char *xml_l_dscrpt = NULL;
int offset = 0;
int max = 2048;
char description[max];
if(type[0] == '/') {
snprintf(ra_pathname, sizeof(ra_pathname), "%s", type);
} else {
snprintf(ra_pathname, sizeof(ra_pathname), "%s/%s", LSB_ROOT_DIR, type);
}
crm_trace("Looking into %s", ra_pathname);
if (!(fp = fopen(ra_pathname, "r"))) {
return -errno;
}
/* Enter into the lsb-compliant comment block */
while (fgets(buffer, sizeof(buffer), fp)) {
/* Assume each of the following eight header values occupies only one line */
if (lsb_meta_helper_get_value(buffer, &provides, PROVIDES)) {
continue;
}
if (lsb_meta_helper_get_value(buffer, &req_start, REQ_START)) {
continue;
}
if (lsb_meta_helper_get_value(buffer, &req_stop, REQ_STOP)) {
continue;
}
if (lsb_meta_helper_get_value(buffer, &shld_start, SHLD_START)) {
continue;
}
if (lsb_meta_helper_get_value(buffer, &shld_stop, SHLD_STOP)) {
continue;
}
if (lsb_meta_helper_get_value(buffer, &dflt_start, DFLT_START)) {
continue;
}
if (lsb_meta_helper_get_value(buffer, &dflt_stop, DFLT_STOP)) {
continue;
}
if (lsb_meta_helper_get_value(buffer, &s_dscrpt, SHORT_DSCR)) {
continue;
}
/* Long description may cross multiple lines */
if (offset == 0 && (0 == strncasecmp(buffer, DESCRIPTION, strlen(DESCRIPTION)))) {
/* A continuation line is indicated by more than one space, or a
* tab character, between the '#' and the keyword.
*
* (Taken from the LSB init script specification.)
*/
while (fgets(buffer, sizeof(buffer), fp)) {
if (!strncmp(buffer, "#  ", 3) || !strncmp(buffer, "#\t", 2)) {
buffer[0] = ' ';
offset += snprintf(description+offset, max-offset, "%s", buffer);
} else {
/* Not a continuation line: push it back for the outer loop to
* process (fp was opened read-only, so fputs() cannot work here). */
fseek(fp, -(long) strlen(buffer), SEEK_CUR);
break; /* Long description ends */
}
}
continue;
}
if (xml_l_dscrpt == NULL && offset > 0) {
xml_l_dscrpt = (char *)xmlEncodeEntitiesReentrant(NULL, BAD_CAST(description));
}
if (!strncasecmp(buffer, LSB_INITSCRIPT_INFOEND_TAG, strlen(LSB_INITSCRIPT_INFOEND_TAG))) {
/* Reached the end of the LSB comment block */
break;
}
if (buffer[0] != '#') {
break; /* No longer in the comment block */
}
}
fclose(fp);
*output = crm_strdup_printf(lsb_metadata_template, type,
(xml_l_dscrpt == NULL) ? type : xml_l_dscrpt,
(s_dscrpt == NULL) ? type : s_dscrpt, (provides == NULL) ? "" : provides,
(req_start == NULL) ? "" : req_start, (req_stop == NULL) ? "" : req_stop,
(shld_start == NULL) ? "" : shld_start, (shld_stop == NULL) ? "" : shld_stop,
(dflt_start == NULL) ? "" : dflt_start, (dflt_stop == NULL) ? "" : dflt_stop);
lsb_meta_helper_free_value(xml_l_dscrpt);
lsb_meta_helper_free_value(s_dscrpt);
lsb_meta_helper_free_value(provides);
lsb_meta_helper_free_value(req_start);
lsb_meta_helper_free_value(req_stop);
lsb_meta_helper_free_value(shld_start);
lsb_meta_helper_free_value(shld_stop);
lsb_meta_helper_free_value(dflt_start);
lsb_meta_helper_free_value(dflt_stop);
crm_trace("Created fake metadata: %llu",
(unsigned long long) strlen(*output));
return pcmk_ok;
}
#if SUPPORT_NAGIOS
static int
nagios_get_metadata(const char *type, char **output)
{
int rc = pcmk_ok;
FILE *file_strm = NULL;
int start = 0, length = 0, read_len = 0;
char *metadata_file = NULL;
int len = 36;
len += strlen(NAGIOS_METADATA_DIR);
len += strlen(type);
metadata_file = calloc(1, len);
CRM_CHECK(metadata_file != NULL, return -ENOMEM);
sprintf(metadata_file, "%s/%s.xml", NAGIOS_METADATA_DIR, type);
file_strm = fopen(metadata_file, "r");
if (file_strm == NULL) {
crm_err("Metadata file %s does not exist", metadata_file);
free(metadata_file);
return -EIO;
}
/* see how big the file is */
start = ftell(file_strm);
fseek(file_strm, 0L, SEEK_END);
length = ftell(file_strm);
fseek(file_strm, start, SEEK_SET); /* return to the original position */
CRM_ASSERT(length >= 0);
CRM_ASSERT(start == ftell(file_strm));
if (length <= 0) {
crm_info("%s was not valid", metadata_file);
free(*output);
*output = NULL;
rc = -EIO;
} else {
crm_trace("Reading %d bytes from file", length);
*output = calloc(1, (length + 1));
read_len = fread(*output, 1, length, file_strm);
if (read_len != length) {
crm_err("Calculated and read bytes differ: %d vs. %d", length, read_len);
free(*output);
*output = NULL;
rc = -EIO;
}
}
fclose(file_strm);
free(metadata_file);
return rc;
}
#endif
#if SUPPORT_HEARTBEAT
/* strictly speaking, support for class=heartbeat style scripts
* does not require "heartbeat support" to be enabled.
* But since those scripts are part of the "heartbeat" package usually,
* and are very unlikely to be present in any other deployment,
* I leave it inside this ifdef.
*
* Yes, I know, these are legacy and should die,
* or at least be rewritten to be a proper OCF style agent.
* But they exist, and custom scripts following these rules do, too.
*
* Taken from the old "glue" lrmd, see
* http://hg.linux-ha.org/glue/file/0a7add1d9996/lib/plugins/lrm/raexechb.c#l49
* http://hg.linux-ha.org/glue/file/0a7add1d9996/lib/plugins/lrm/raexechb.c#l393
*/
static const char hb_metadata_template[] =
"<?xml version='1.0'?>\n"
"<!DOCTYPE resource-agent SYSTEM 'ra-api-1.dtd'>\n"
"<resource-agent name='%s' version='0.1'>\n"
"<version>1.0</version>\n"
"<longdesc lang='en'>\n"
"%s"
"</longdesc>\n"
"<shortdesc lang='en'>%s</shortdesc>\n"
"<parameters>\n"
"<parameter name='1' unique='1' required='0'>\n"
"<longdesc lang='en'>\n"
"This argument will be passed as the first argument to the "
"heartbeat resource agent (assuming it supports one)\n"
"</longdesc>\n"
"<shortdesc lang='en'>argv[1]</shortdesc>\n"
"<content type='string' default=' ' />\n"
"</parameter>\n"
"<parameter name='2' unique='1' required='0'>\n"
"<longdesc lang='en'>\n"
"This argument will be passed as the second argument to the "
"heartbeat resource agent (assuming it supports one)\n"
"</longdesc>\n"
"<shortdesc lang='en'>argv[2]</shortdesc>\n"
"<content type='string' default=' ' />\n"
"</parameter>\n"
"<parameter name='3' unique='1' required='0'>\n"
"<longdesc lang='en'>\n"
"This argument will be passed as the third argument to the "
"heartbeat resource agent (assuming it supports one)\n"
"</longdesc>\n"
"<shortdesc lang='en'>argv[3]</shortdesc>\n"
"<content type='string' default=' ' />\n"
"</parameter>\n"
"<parameter name='4' unique='1' required='0'>\n"
"<longdesc lang='en'>\n"
"This argument will be passed as the fourth argument to the "
"heartbeat resource agent (assuming it supports one)\n"
"</longdesc>\n"
"<shortdesc lang='en'>argv[4]</shortdesc>\n"
"<content type='string' default=' ' />\n"
"</parameter>\n"
"<parameter name='5' unique='1' required='0'>\n"
"<longdesc lang='en'>\n"
"This argument will be passed as the fifth argument to the "
"heartbeat resource agent (assuming it supports one)\n"
"</longdesc>\n"
"<shortdesc lang='en'>argv[5]</shortdesc>\n"
"<content type='string' default=' ' />\n"
"</parameter>\n"
"</parameters>\n"
"<actions>\n"
"<action name='start' timeout='15' />\n"
"<action name='stop' timeout='15' />\n"
"<action name='status' timeout='15' />\n"
"<action name='monitor' timeout='15' interval='15' start-delay='15' />\n"
"<action name='meta-data' timeout='5' />\n"
"</actions>\n"
"<special tag='heartbeat'>\n"
"</special>\n"
"</resource-agent>\n";
static int
heartbeat_get_metadata(const char *type, char **output)
{
*output = crm_strdup_printf(hb_metadata_template, type, type, type);
crm_trace("Created fake metadata: %llu",
(unsigned long long) strlen(*output));
return pcmk_ok;
}
#endif
static int
generic_get_metadata(const char *standard, const char *provider, const char *type, char **output)
{
svc_action_t *action;
action = resources_action_create(type, standard, provider, type,
"meta-data", 0, 30000, NULL, 0);
if (action == NULL) {
crm_err("Unable to retrieve meta-data for %s:%s:%s", standard, provider, type);
services_action_free(action);
return -EINVAL;
}
if (!(services_action_sync(action))) {
crm_err("Failed to retrieve meta-data for %s:%s:%s", standard, provider, type);
services_action_free(action);
return -EIO;
}
if (!action->stdout_data) {
crm_err("Failed to receive meta-data for %s:%s:%s", standard, provider, type);
services_action_free(action);
return -EIO;
}
*output = strdup(action->stdout_data);
services_action_free(action);
return pcmk_ok;
}
static int
lrmd_api_get_metadata(lrmd_t * lrmd,
const char *class,
const char *provider,
const char *type, char **output, enum lrmd_call_options options)
{
if (!class || !type) {
return -EINVAL;
}
if (safe_str_eq(class, PCMK_RESOURCE_CLASS_SERVICE)) {
class = resources_find_service_class(type);
}
if (safe_str_eq(class, PCMK_RESOURCE_CLASS_STONITH)) {
return stonith_get_metadata(provider, type, output);
} else if (safe_str_eq(class, PCMK_RESOURCE_CLASS_LSB)) {
return lsb_get_metadata(type, output);
#if SUPPORT_NAGIOS
} else if (safe_str_eq(class, PCMK_RESOURCE_CLASS_NAGIOS)) {
return nagios_get_metadata(type, output);
#endif
#if SUPPORT_HEARTBEAT
} else if (safe_str_eq(class, PCMK_RESOURCE_CLASS_HB)) {
return heartbeat_get_metadata(type, output);
#endif
}
return generic_get_metadata(class, provider, type, output);
}
static int
lrmd_api_exec(lrmd_t * lrmd, const char *rsc_id, const char *action, const char *userdata, int interval, /* ms */
int timeout, /* ms */
int start_delay, /* ms */
enum lrmd_call_options options, lrmd_key_value_t * params)
{
int rc = pcmk_ok;
xmlNode *data = create_xml_node(NULL, F_LRMD_RSC);
xmlNode *args = create_xml_node(data, XML_TAG_ATTRS);
lrmd_key_value_t *tmp = NULL;
crm_xml_add(data, F_LRMD_ORIGIN, __FUNCTION__);
crm_xml_add(data, F_LRMD_RSC_ID, rsc_id);
crm_xml_add(data, F_LRMD_RSC_ACTION, action);
crm_xml_add(data, F_LRMD_RSC_USERDATA_STR, userdata);
crm_xml_add_int(data, F_LRMD_RSC_INTERVAL, interval);
crm_xml_add_int(data, F_LRMD_TIMEOUT, timeout);
crm_xml_add_int(data, F_LRMD_RSC_START_DELAY, start_delay);
for (tmp = params; tmp; tmp = tmp->next) {
hash2smartfield((gpointer) tmp->key, (gpointer) tmp->value, args);
}
rc = lrmd_send_command(lrmd, LRMD_OP_RSC_EXEC, data, NULL, timeout, options, TRUE);
free_xml(data);
lrmd_key_value_freeall(params);
return rc;
}
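/* Usage sketch (illustrative, not part of the library), assuming an
* already-connected lrmd_t *conn and an already-registered resource
* "my-ip": start a recurring 10-second monitor with a 20-second timeout
* (all times are in milliseconds):
*
*     lrmd_key_value_t *params = NULL;
*
*     params = lrmd_key_value_add(params, "ip", "192.168.122.100");
*     conn->cmds->exec(conn, "my-ip", "monitor", NULL,
*                      10000, 20000, 0, 0, params);
*     // exec() consumes params via lrmd_key_value_freeall()
*/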
/* timeout is in ms */
static int
lrmd_api_exec_alert(lrmd_t *lrmd, const char *alert_id, const char *alert_path,
int timeout, lrmd_key_value_t *params)
{
int rc = pcmk_ok;
xmlNode *data = create_xml_node(NULL, F_LRMD_ALERT);
xmlNode *args = create_xml_node(data, XML_TAG_ATTRS);
lrmd_key_value_t *tmp = NULL;
crm_xml_add(data, F_LRMD_ORIGIN, __FUNCTION__);
crm_xml_add(data, F_LRMD_ALERT_ID, alert_id);
crm_xml_add(data, F_LRMD_ALERT_PATH, alert_path);
crm_xml_add_int(data, F_LRMD_TIMEOUT, timeout);
for (tmp = params; tmp; tmp = tmp->next) {
hash2smartfield((gpointer) tmp->key, (gpointer) tmp->value, args);
}
rc = lrmd_send_command(lrmd, LRMD_OP_ALERT_EXEC, data, NULL, timeout,
lrmd_opt_notify_orig_only, TRUE);
free_xml(data);
lrmd_key_value_freeall(params);
return rc;
}
static int
lrmd_api_cancel(lrmd_t * lrmd, const char *rsc_id, const char *action, int interval)
{
int rc = pcmk_ok;
xmlNode *data = create_xml_node(NULL, F_LRMD_RSC);
crm_xml_add(data, F_LRMD_ORIGIN, __FUNCTION__);
crm_xml_add(data, F_LRMD_RSC_ACTION, action);
crm_xml_add(data, F_LRMD_RSC_ID, rsc_id);
crm_xml_add_int(data, F_LRMD_RSC_INTERVAL, interval);
rc = lrmd_send_command(lrmd, LRMD_OP_RSC_CANCEL, data, NULL, 0, 0, TRUE);
free_xml(data);
return rc;
}
static int
list_stonith_agents(lrmd_list_t ** resources)
{
int rc = 0;
stonith_t *stonith_api = stonith_api_new();
stonith_key_value_t *stonith_resources = NULL;
stonith_key_value_t *dIter = NULL;
if(stonith_api) {
stonith_api->cmds->list_agents(stonith_api, st_opt_sync_call, NULL, &stonith_resources, 0);
stonith_api->cmds->free(stonith_api);
}
for (dIter = stonith_resources; dIter; dIter = dIter->next) {
rc++;
if (resources) {
*resources = lrmd_list_add(*resources, dIter->value);
}
}
stonith_key_value_freeall(stonith_resources, 1, 0);
return rc;
}
static int
lrmd_api_list_agents(lrmd_t * lrmd, lrmd_list_t ** resources, const char *class,
const char *provider)
{
int rc = 0;
if (safe_str_eq(class, PCMK_RESOURCE_CLASS_STONITH)) {
rc += list_stonith_agents(resources);
} else {
GListPtr gIter = NULL;
GList *agents = resources_list_agents(class, provider);
for (gIter = agents; gIter != NULL; gIter = gIter->next) {
*resources = lrmd_list_add(*resources, (const char *)gIter->data);
rc++;
}
g_list_free_full(agents, free);
if (!class) {
rc += list_stonith_agents(resources);
}
}
if (rc == 0) {
crm_notice("No agents found for class %s", class);
rc = -EPROTONOSUPPORT;
}
return rc;
}
static int
does_provider_have_agent(const char *agent, const char *provider, const char *class)
{
int found = 0;
GList *agents = NULL;
GListPtr gIter2 = NULL;
agents = resources_list_agents(class, provider);
for (gIter2 = agents; gIter2 != NULL; gIter2 = gIter2->next) {
if (safe_str_eq(agent, gIter2->data)) {
found = 1;
}
}
g_list_free_full(agents, free);
return found;
}
static int
lrmd_api_list_ocf_providers(lrmd_t * lrmd, const char *agent, lrmd_list_t ** providers)
{
int rc = pcmk_ok;
char *provider = NULL;
GList *ocf_providers = NULL;
GListPtr gIter = NULL;
ocf_providers = resources_list_providers(PCMK_RESOURCE_CLASS_OCF);
for (gIter = ocf_providers; gIter != NULL; gIter = gIter->next) {
provider = gIter->data;
if (!agent || does_provider_have_agent(agent, provider,
PCMK_RESOURCE_CLASS_OCF)) {
*providers = lrmd_list_add(*providers, (const char *)gIter->data);
rc++;
}
}
g_list_free_full(ocf_providers, free);
return rc;
}
static int
lrmd_api_list_standards(lrmd_t * lrmd, lrmd_list_t ** supported)
{
int rc = 0;
GList *standards = NULL;
GListPtr gIter = NULL;
standards = resources_list_standards();
for (gIter = standards; gIter != NULL; gIter = gIter->next) {
*supported = lrmd_list_add(*supported, (const char *)gIter->data);
rc++;
}
if (list_stonith_agents(NULL) > 0) {
*supported = lrmd_list_add(*supported, PCMK_RESOURCE_CLASS_STONITH);
rc++;
}
g_list_free_full(standards, free);
return rc;
}
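/*!
 * \brief Allocate a new local (IPC) lrmd API connection object
 *
 * The returned object's cmds table is populated with the implementations
 * above; callers are expected to free it with lrmd_api_delete().
 */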
lrmd_t *
lrmd_api_new(void)
{
lrmd_t *new_lrmd = NULL;
lrmd_private_t *pvt = NULL;
new_lrmd = calloc(1, sizeof(lrmd_t));
pvt = calloc(1, sizeof(lrmd_private_t));
pvt->remote = calloc(1, sizeof(crm_remote_t));
new_lrmd->cmds = calloc(1, sizeof(lrmd_api_operations_t));
pvt->type = CRM_CLIENT_IPC;
new_lrmd->private = pvt;
new_lrmd->cmds->connect = lrmd_api_connect;
new_lrmd->cmds->connect_async = lrmd_api_connect_async;
new_lrmd->cmds->is_connected = lrmd_api_is_connected;
new_lrmd->cmds->poke_connection = lrmd_api_poke_connection;
new_lrmd->cmds->disconnect = lrmd_api_disconnect;
new_lrmd->cmds->register_rsc = lrmd_api_register_rsc;
new_lrmd->cmds->unregister_rsc = lrmd_api_unregister_rsc;
new_lrmd->cmds->get_rsc_info = lrmd_api_get_rsc_info;
new_lrmd->cmds->set_callback = lrmd_api_set_callback;
new_lrmd->cmds->get_metadata = lrmd_api_get_metadata;
new_lrmd->cmds->exec = lrmd_api_exec;
new_lrmd->cmds->cancel = lrmd_api_cancel;
new_lrmd->cmds->list_agents = lrmd_api_list_agents;
new_lrmd->cmds->list_ocf_providers = lrmd_api_list_ocf_providers;
new_lrmd->cmds->list_standards = lrmd_api_list_standards;
new_lrmd->cmds->exec_alert = lrmd_api_exec_alert;
return new_lrmd;
}
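/*!
 * \brief Allocate a new TLS-based lrmd API connection for a remote node
 *
 * At least one of \p nodename and \p server must be given (the other
 * defaults to it). A port of 0 falls back to the PCMK_remote_port
 * environment variable, or DEFAULT_REMOTE_PORT if that is unset.
 *
 * \return New connection object, or NULL if arguments are invalid or this
 *         build lacks GnuTLS support
 */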
lrmd_t *
lrmd_remote_api_new(const char *nodename, const char *server, int port)
{
#ifdef HAVE_GNUTLS_GNUTLS_H
lrmd_t *new_lrmd = lrmd_api_new();
lrmd_private_t *native = new_lrmd->private;
if (!nodename && !server) {
lrmd_api_delete(new_lrmd);
return NULL;
}
native->type = CRM_CLIENT_TLS;
native->remote_nodename = nodename ? strdup(nodename) : strdup(server);
native->server = server ? strdup(server) : strdup(nodename);
native->port = port;
if (native->port == 0) {
const char *remote_port_str = getenv("PCMK_remote_port");
native->port = remote_port_str ? atoi(remote_port_str) : DEFAULT_REMOTE_PORT;
}
return new_lrmd;
#else
crm_err("GNUTLS is not enabled for this build, remote LRMD client can not be created");
return NULL;
#endif
}
void
lrmd_api_delete(lrmd_t * lrmd)
{
if (!lrmd) {
return;
}
lrmd->cmds->disconnect(lrmd); /* no-op if already disconnected */
free(lrmd->cmds);
if (lrmd->private) {
lrmd_private_t *native = lrmd->private;
#ifdef HAVE_GNUTLS_GNUTLS_H
free(native->server);
#endif
free(native->remote_nodename);
free(native->remote);
free(native->token);
free(native->peer_version);
}
free(lrmd->private);
free(lrmd);
}
diff --git a/lib/pengine/unpack.c b/lib/pengine/unpack.c
index 629c5578d9..effc8b7e8d 100644
--- a/lib/pengine/unpack.c
+++ b/lib/pengine/unpack.c
@@ -1,3510 +1,3510 @@
/*
* Copyright (C) 2004 Andrew Beekhof <andrew@beekhof.net>
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include <crm_internal.h>
#include <glib.h>
#include <crm/crm.h>
#include <crm/services.h>
#include <crm/msg_xml.h>
#include <crm/common/xml.h>
#include <crm/common/util.h>
#include <crm/pengine/rules.h>
#include <crm/pengine/internal.h>
#include <unpack.h>
CRM_TRACE_INIT_DATA(pe_status);
#define set_config_flag(data_set, option, flag) do { \
const char *tmp = pe_pref(data_set->config_hash, option); \
if(tmp) { \
if(crm_is_true(tmp)) { \
set_bit(data_set->flags, flag); \
} else { \
clear_bit(data_set->flags, flag); \
} \
} \
} while(0)
gboolean unpack_rsc_op(resource_t * rsc, node_t * node, xmlNode * xml_op, xmlNode ** last_failure,
enum action_fail_response *failed, pe_working_set_t * data_set);
static gboolean determine_remote_online_status(pe_working_set_t * data_set, node_t * this_node);
static gboolean
is_dangling_container_remote_node(node_t *node)
{
/* we are looking for a remote-node that was supposed to be mapped to a
* container resource, but all traces of that container have disappeared
* from both the config and the status section. */
if (is_remote_node(node) &&
node->details->remote_rsc &&
node->details->remote_rsc->container == NULL &&
is_set(node->details->remote_rsc->flags, pe_rsc_orphan_container_filler)) {
return TRUE;
}
return FALSE;
}
/*!
* \brief Schedule a fence action for a node
*
* \param[in,out] data_set Current working set of cluster
* \param[in,out] node Node to fence
* \param[in] reason Text description of why fencing is needed
*/
void
pe_fence_node(pe_working_set_t * data_set, node_t * node, const char *reason)
{
CRM_CHECK(node, return);
/* A guest node is fenced by marking its container as failed */
if (is_container_remote_node(node)) {
resource_t *rsc = node->details->remote_rsc->container;
if (is_set(rsc->flags, pe_rsc_failed) == FALSE) {
if (!is_set(rsc->flags, pe_rsc_managed)) {
crm_notice("Not fencing guest node %s "
"(otherwise would because %s): "
"its guest resource %s is unmanaged",
node->details->uname, reason, rsc->id);
} else {
crm_warn("Guest node %s will be fenced "
"(by recovering its guest resource %s): %s",
node->details->uname, rsc->id, reason);
/* We don't mark the node as unclean because that would prevent the
* node from running resources. We want to allow it to run resources
* in this transition if the recovery succeeds.
*/
node->details->remote_requires_reset = TRUE;
set_bit(rsc->flags, pe_rsc_failed);
}
}
} else if (is_dangling_container_remote_node(node)) {
crm_info("Cleaning up dangling connection for guest node %s: "
"fencing was already done because %s, "
"and guest resource no longer exists",
node->details->uname, reason);
set_bit(node->details->remote_rsc->flags, pe_rsc_failed);
} else if (is_baremetal_remote_node(node)) {
resource_t *rsc = node->details->remote_rsc;
if (rsc && (!is_set(rsc->flags, pe_rsc_managed))) {
crm_notice("Not fencing remote node %s "
"(otherwise would because %s): connection is unmanaged",
node->details->uname, reason);
} else if(node->details->remote_requires_reset == FALSE) {
node->details->remote_requires_reset = TRUE;
crm_warn("Remote node %s %s: %s",
node->details->uname,
pe_can_fence(data_set, node)? "will be fenced" : "is unclean",
reason);
}
node->details->unclean = TRUE;
} else if (node->details->unclean) {
crm_trace("Cluster node %s %s because %s",
node->details->uname,
pe_can_fence(data_set, node)? "would also be fenced" : "also is unclean",
reason);
} else {
crm_warn("Cluster node %s %s: %s",
node->details->uname,
pe_can_fence(data_set, node)? "will be fenced" : "is unclean",
reason);
node->details->unclean = TRUE;
}
}
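/*!
 * \brief Unpack the CIB configuration section into the working set
 *
 * Cluster properties (fencing options, no-quorum-policy, stickiness,
 * node health scores, placement strategy, etc.) are read into
 * data_set->config_hash and reflected in data_set->flags and fields.
 */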
gboolean
unpack_config(xmlNode * config, pe_working_set_t * data_set)
{
const char *value = NULL;
GHashTable *config_hash = crm_str_table_new();
xmlXPathObjectPtr xpathObj = NULL;
if(is_not_set(data_set->flags, pe_flag_enable_unfencing)) {
xpathObj = xpath_search(data_set->input, "//nvpair[@name='provides' and @value='unfencing']");
if(xpathObj && numXpathResults(xpathObj) > 0) {
set_bit(data_set->flags, pe_flag_enable_unfencing);
}
freeXpathObject(xpathObj);
}
if(is_not_set(data_set->flags, pe_flag_enable_unfencing)) {
xpathObj = xpath_search(data_set->input, "//nvpair[@name='requires' and @value='unfencing']");
if(xpathObj && numXpathResults(xpathObj) > 0) {
set_bit(data_set->flags, pe_flag_enable_unfencing);
}
freeXpathObject(xpathObj);
}
#ifdef REDHAT_COMPAT_6
if(is_not_set(data_set->flags, pe_flag_enable_unfencing)) {
xpathObj = xpath_search(data_set->input, "//primitive[@type='fence_scsi']");
if(xpathObj && numXpathResults(xpathObj) > 0) {
set_bit(data_set->flags, pe_flag_enable_unfencing);
}
freeXpathObject(xpathObj);
}
#endif
data_set->config_hash = config_hash;
unpack_instance_attributes(data_set->input, config, XML_CIB_TAG_PROPSET, NULL, config_hash,
CIB_OPTIONS_FIRST, FALSE, data_set->now);
verify_pe_options(data_set->config_hash);
set_config_flag(data_set, "enable-startup-probes", pe_flag_startup_probes);
if(is_not_set(data_set->flags, pe_flag_startup_probes)) {
crm_info("Startup probes: disabled (dangerous)");
}
value = pe_pref(data_set->config_hash, XML_ATTR_HAVE_WATCHDOG);
if (value && crm_is_true(value)) {
crm_notice("Watchdog will be used via SBD if fencing is required");
set_bit(data_set->flags, pe_flag_have_stonith_resource);
}
value = pe_pref(data_set->config_hash, "stonith-timeout");
data_set->stonith_timeout = crm_get_msec(value);
crm_debug("STONITH timeout: %d", data_set->stonith_timeout);
set_config_flag(data_set, "stonith-enabled", pe_flag_stonith_enabled);
crm_debug("STONITH of failed nodes is %s",
is_set(data_set->flags, pe_flag_stonith_enabled) ? "enabled" : "disabled");
data_set->stonith_action = pe_pref(data_set->config_hash, "stonith-action");
crm_trace("STONITH will %s nodes", data_set->stonith_action);
set_config_flag(data_set, "concurrent-fencing", pe_flag_concurrent_fencing);
crm_debug("Concurrent fencing is %s",
is_set(data_set->flags, pe_flag_concurrent_fencing) ? "enabled" : "disabled");
set_config_flag(data_set, "stop-all-resources", pe_flag_stop_everything);
crm_debug("Stop all active resources: %s",
is_set(data_set->flags, pe_flag_stop_everything) ? "true" : "false");
set_config_flag(data_set, "symmetric-cluster", pe_flag_symmetric_cluster);
if (is_set(data_set->flags, pe_flag_symmetric_cluster)) {
crm_debug("Cluster is symmetric" " - resources can run anywhere by default");
}
value = pe_pref(data_set->config_hash, "default-resource-stickiness");
data_set->default_resource_stickiness = char2score(value);
crm_debug("Default stickiness: %d", data_set->default_resource_stickiness);
value = pe_pref(data_set->config_hash, "no-quorum-policy");
if (safe_str_eq(value, "ignore")) {
data_set->no_quorum_policy = no_quorum_ignore;
} else if (safe_str_eq(value, "freeze")) {
data_set->no_quorum_policy = no_quorum_freeze;
} else if (safe_str_eq(value, "suicide")) {
gboolean do_panic = FALSE;
crm_element_value_int(data_set->input, XML_ATTR_QUORUM_PANIC, &do_panic);
if (is_set(data_set->flags, pe_flag_stonith_enabled) == FALSE) {
crm_config_err
("Setting no-quorum-policy=suicide makes no sense if stonith-enabled=false");
}
if (do_panic && is_set(data_set->flags, pe_flag_stonith_enabled)) {
data_set->no_quorum_policy = no_quorum_suicide;
} else if (is_set(data_set->flags, pe_flag_have_quorum) == FALSE && do_panic == FALSE) {
crm_notice("Resetting no-quorum-policy to 'stop': The cluster has never had quorum");
data_set->no_quorum_policy = no_quorum_stop;
}
} else {
data_set->no_quorum_policy = no_quorum_stop;
}
switch (data_set->no_quorum_policy) {
case no_quorum_freeze:
crm_debug("On loss of CCM Quorum: Freeze resources");
break;
case no_quorum_stop:
crm_debug("On loss of CCM Quorum: Stop ALL resources");
break;
case no_quorum_suicide:
crm_notice("On loss of CCM Quorum: Fence all remaining nodes");
break;
case no_quorum_ignore:
crm_notice("On loss of CCM Quorum: Ignore");
break;
}
set_config_flag(data_set, "stop-orphan-resources", pe_flag_stop_rsc_orphans);
crm_trace("Orphan resources are %s",
is_set(data_set->flags, pe_flag_stop_rsc_orphans) ? "stopped" : "ignored");
set_config_flag(data_set, "stop-orphan-actions", pe_flag_stop_action_orphans);
crm_trace("Orphan resource actions are %s",
is_set(data_set->flags, pe_flag_stop_action_orphans) ? "stopped" : "ignored");
set_config_flag(data_set, "remove-after-stop", pe_flag_remove_after_stop);
crm_trace("Stopped resources are removed from the status section: %s",
is_set(data_set->flags, pe_flag_remove_after_stop) ? "true" : "false");
set_config_flag(data_set, "maintenance-mode", pe_flag_maintenance_mode);
crm_trace("Maintenance mode: %s",
is_set(data_set->flags, pe_flag_maintenance_mode) ? "true" : "false");
if (is_set(data_set->flags, pe_flag_maintenance_mode)) {
clear_bit(data_set->flags, pe_flag_is_managed_default);
} else {
set_config_flag(data_set, "is-managed-default", pe_flag_is_managed_default);
}
crm_trace("By default resources are %smanaged",
is_set(data_set->flags, pe_flag_is_managed_default) ? "" : "not ");
set_config_flag(data_set, "start-failure-is-fatal", pe_flag_start_failure_fatal);
crm_trace("Start failures are %s",
is_set(data_set->flags,
pe_flag_start_failure_fatal) ? "always fatal" : "handled by failcount");
node_score_red = char2score(pe_pref(data_set->config_hash, "node-health-red"));
node_score_green = char2score(pe_pref(data_set->config_hash, "node-health-green"));
node_score_yellow = char2score(pe_pref(data_set->config_hash, "node-health-yellow"));
crm_debug("Node scores: 'red' = %s, 'yellow' = %s, 'green' = %s",
pe_pref(data_set->config_hash, "node-health-red"),
pe_pref(data_set->config_hash, "node-health-yellow"),
pe_pref(data_set->config_hash, "node-health-green"));
data_set->placement_strategy = pe_pref(data_set->config_hash, "placement-strategy");
crm_trace("Placement strategy: %s", data_set->placement_strategy);
return TRUE;
}
static void
destroy_digest_cache(gpointer ptr)
{
op_digest_cache_t *data = ptr;
free_xml(data->params_all);
free_xml(data->params_secure);
free_xml(data->params_restart);
free(data->digest_all_calc);
free(data->digest_restart_calc);
free(data->digest_secure_calc);
free(data);
}
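/*!
 * \brief Create a node object and add it to the working set
 *
 * The node type is derived from \p type ("remote" vs. cluster member), its
 * "#kind" attribute is set accordingly, and attribute, utilization, and
 * digest-cache tables are initialized. A warning is logged if a node with
 * the same uname already exists.
 *
 * \return Newly allocated node, or NULL on allocation failure
 */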
node_t *
pe_create_node(const char *id, const char *uname, const char *type,
const char *score, pe_working_set_t * data_set)
{
node_t *new_node = NULL;
if (pe_find_node(data_set->nodes, uname) != NULL) {
crm_config_warn("Detected multiple node entries with uname=%s"
" - this is rarely intended", uname);
}
new_node = calloc(1, sizeof(node_t));
if (new_node == NULL) {
return NULL;
}
new_node->weight = char2score(score);
new_node->fixed = FALSE;
new_node->details = calloc(1, sizeof(struct node_shared_s));
if (new_node->details == NULL) {
free(new_node);
return NULL;
}
crm_trace("Creating node for entry %s/%s", uname, id);
new_node->details->id = id;
new_node->details->uname = uname;
new_node->details->online = FALSE;
new_node->details->shutdown = FALSE;
new_node->details->rsc_discovery_enabled = TRUE;
new_node->details->running_rsc = NULL;
new_node->details->type = node_ping;
if (safe_str_eq(type, "remote")) {
new_node->details->type = node_remote;
set_bit(data_set->flags, pe_flag_have_remote_nodes);
} else if (type == NULL || safe_str_eq(type, "member")
|| safe_str_eq(type, NORMALNODE)) {
new_node->details->type = node_member;
}
new_node->details->attrs = crm_str_table_new();
if (is_remote_node(new_node)) {
g_hash_table_insert(new_node->details->attrs, strdup("#kind"), strdup("remote"));
} else {
g_hash_table_insert(new_node->details->attrs, strdup("#kind"), strdup("cluster"));
}
new_node->details->utilization = crm_str_table_new();
new_node->details->digest_cache =
g_hash_table_new_full(crm_str_hash, g_str_equal, g_hash_destroy_str,
destroy_digest_cache);
data_set->nodes = g_list_insert_sorted(data_set->nodes, new_node, sort_node_uname);
return new_node;
}
bool
remote_id_conflict(const char *remote_name, pe_working_set_t *data)
{
bool match = FALSE;
#if 1
if (pe_find_resource(data->resources, remote_name)) {
match = TRUE;
}
#else
if (data->name_check == NULL) {
data->name_check = g_hash_table_new(crm_str_hash, g_str_equal);
for (xml_rsc = __xml_first_child(parent); xml_rsc != NULL; xml_rsc = __xml_next_element(xml_rsc)) {
const char *id = ID(xml_rsc);
/* avoid heap allocation here because the hash table's lifetime lets us reference the XML strings directly */
g_hash_table_insert(data->name_check, (char *) id, (char *) id);
}
}
if (g_hash_table_lookup(data->name_check, remote_name)) {
match = TRUE;
}
#endif
if (match) {
crm_err("Invalid remote-node name, a resource called '%s' already exists.", remote_name);
return TRUE;
}
return match;
}
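/*!
 * \brief Expand a resource's remote-node meta-attributes into an implicit
 *        ocf:pacemaker:remote primitive
 *
 * If \p xml_obj has a remote-node meta-attribute (and the name does not
 * conflict with an existing resource ID), a new remote connection primitive
 * is added under \p parent, with the original resource as its container,
 * a 30s monitor, a start operation using the connection timeout, and
 * addr/port instance attributes taken from remote-addr/remote-port.
 *
 * \return Name of the new remote node, or NULL if none was created
 */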
static const char *
expand_remote_rsc_meta(xmlNode *xml_obj, xmlNode *parent, pe_working_set_t *data)
{
xmlNode *xml_rsc = NULL;
xmlNode *xml_tmp = NULL;
xmlNode *attr_set = NULL;
xmlNode *attr = NULL;
const char *container_id = ID(xml_obj);
const char *remote_name = NULL;
const char *remote_server = NULL;
const char *remote_port = NULL;
const char *connect_timeout = "60s";
const char *remote_allow_migrate=NULL;
const char *container_managed = NULL;
for (attr_set = __xml_first_child(xml_obj); attr_set != NULL; attr_set = __xml_next_element(attr_set)) {
if (safe_str_neq((const char *)attr_set->name, XML_TAG_META_SETS)) {
continue;
}
for (attr = __xml_first_child(attr_set); attr != NULL; attr = __xml_next_element(attr)) {
const char *value = crm_element_value(attr, XML_NVPAIR_ATTR_VALUE);
const char *name = crm_element_value(attr, XML_NVPAIR_ATTR_NAME);
if (safe_str_eq(name, XML_RSC_ATTR_REMOTE_NODE)) {
remote_name = value;
} else if (safe_str_eq(name, "remote-addr")) {
remote_server = value;
} else if (safe_str_eq(name, "remote-port")) {
remote_port = value;
} else if (safe_str_eq(name, "remote-connect-timeout")) {
connect_timeout = value;
} else if (safe_str_eq(name, "remote-allow-migrate")) {
remote_allow_migrate=value;
} else if (safe_str_eq(name, XML_RSC_ATTR_MANAGED)) {
container_managed = value;
}
}
}
if (remote_name == NULL) {
return NULL;
}
if (remote_id_conflict(remote_name, data)) {
return NULL;
}
xml_rsc = create_xml_node(parent, XML_CIB_TAG_RESOURCE);
crm_xml_add(xml_rsc, XML_ATTR_ID, remote_name);
crm_xml_add(xml_rsc, XML_AGENT_ATTR_CLASS, PCMK_RESOURCE_CLASS_OCF);
crm_xml_add(xml_rsc, XML_AGENT_ATTR_PROVIDER, "pacemaker");
crm_xml_add(xml_rsc, XML_ATTR_TYPE, "remote");
xml_tmp = create_xml_node(xml_rsc, XML_TAG_META_SETS);
crm_xml_set_id(xml_tmp, "%s_%s", remote_name, XML_TAG_META_SETS);
attr = create_xml_node(xml_tmp, XML_CIB_TAG_NVPAIR);
crm_xml_set_id(attr, "%s_%s", remote_name, "meta-attributes-container");
crm_xml_add(attr, XML_NVPAIR_ATTR_NAME, XML_RSC_ATTR_CONTAINER);
crm_xml_add(attr, XML_NVPAIR_ATTR_VALUE, container_id);
attr = create_xml_node(xml_tmp, XML_CIB_TAG_NVPAIR);
crm_xml_set_id(attr, "%s_%s", remote_name, "meta-attributes-internal");
crm_xml_add(attr, XML_NVPAIR_ATTR_NAME, XML_RSC_ATTR_INTERNAL_RSC);
crm_xml_add(attr, XML_NVPAIR_ATTR_VALUE, "true");
if (remote_allow_migrate) {
attr = create_xml_node(xml_tmp, XML_CIB_TAG_NVPAIR);
crm_xml_set_id(attr, "%s_%s", remote_name, "meta-attributes-migrate");
crm_xml_add(attr, XML_NVPAIR_ATTR_NAME, XML_OP_ATTR_ALLOW_MIGRATE);
crm_xml_add(attr, XML_NVPAIR_ATTR_VALUE, remote_allow_migrate);
}
if (container_managed) {
attr = create_xml_node(xml_tmp, XML_CIB_TAG_NVPAIR);
crm_xml_set_id(attr, "%s_%s", remote_name, "meta-attributes-managed");
crm_xml_add(attr, XML_NVPAIR_ATTR_NAME, XML_RSC_ATTR_MANAGED);
crm_xml_add(attr, XML_NVPAIR_ATTR_VALUE, container_managed);
}
xml_tmp = create_xml_node(xml_rsc, "operations");
attr = create_xml_node(xml_tmp, XML_ATTR_OP);
crm_xml_set_id(attr, "%s_%s", remote_name, "monitor-interval-30s");
crm_xml_add(attr, XML_ATTR_TIMEOUT, "30s");
crm_xml_add(attr, XML_LRM_ATTR_INTERVAL, "30s");
crm_xml_add(attr, XML_NVPAIR_ATTR_NAME, "monitor");
if (connect_timeout) {
attr = create_xml_node(xml_tmp, XML_ATTR_OP);
crm_xml_set_id(attr, "%s_%s", remote_name, "start-interval-0");
crm_xml_add(attr, XML_ATTR_TIMEOUT, connect_timeout);
crm_xml_add(attr, XML_LRM_ATTR_INTERVAL, "0");
crm_xml_add(attr, XML_NVPAIR_ATTR_NAME, "start");
}
if (remote_port || remote_server) {
xml_tmp = create_xml_node(xml_rsc, XML_TAG_ATTR_SETS);
crm_xml_set_id(xml_tmp, "%s_%s", remote_name, XML_TAG_ATTR_SETS);
if (remote_server) {
attr = create_xml_node(xml_tmp, XML_CIB_TAG_NVPAIR);
crm_xml_set_id(attr, "%s_%s", remote_name, "instance-attributes-addr");
crm_xml_add(attr, XML_NVPAIR_ATTR_NAME, "addr");
crm_xml_add(attr, XML_NVPAIR_ATTR_VALUE, remote_server);
}
if (remote_port) {
attr = create_xml_node(xml_tmp, XML_CIB_TAG_NVPAIR);
crm_xml_set_id(attr, "%s_%s", remote_name, "instance-attributes-port");
crm_xml_add(attr, XML_NVPAIR_ATTR_NAME, "port");
crm_xml_add(attr, XML_NVPAIR_ATTR_VALUE, remote_port);
}
}
return remote_name;
}
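/*!
 * \brief Decide whether a not-yet-seen node should be considered unclean
 *
 * If startup-fencing is disabled (warned about once per run) or fencing is
 * disabled, unseen nodes are assumed clean; otherwise they remain unclean
 * until a status entry is seen. Remote node entries without a connection
 * resource are skipped. The node is always marked as unseen.
 */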
static void
handle_startup_fencing(pe_working_set_t *data_set, node_t *new_node)
{
static const char *blind_faith = NULL;
static gboolean unseen_are_unclean = TRUE;
static gboolean need_warning = TRUE;
if ((new_node->details->type == node_remote) && (new_node->details->remote_rsc == NULL)) {
/* Ignore fencing for remote nodes that don't have a connection resource
* associated with them. This happens when remote node entries get left
* in the nodes section after the connection resource is removed.
*/
return;
}
blind_faith = pe_pref(data_set->config_hash, "startup-fencing");
if (crm_is_true(blind_faith) == FALSE) {
unseen_are_unclean = FALSE;
if (need_warning) {
crm_warn("Blind faith: not fencing unseen nodes");
/* Warn once per run, not per node and transition */
need_warning = FALSE;
}
}
if (is_set(data_set->flags, pe_flag_stonith_enabled) == FALSE
|| unseen_are_unclean == FALSE) {
/* blind faith... */
new_node->details->unclean = FALSE;
} else {
/* all nodes are unclean until we've seen their
* status entry
*/
new_node->details->unclean = TRUE;
}
/* We need to be able to determine if a node's status section
* exists or not separate from whether the node is unclean. */
new_node->details->unseen = TRUE;
}
gboolean
unpack_nodes(xmlNode * xml_nodes, pe_working_set_t * data_set)
{
xmlNode *xml_obj = NULL;
node_t *new_node = NULL;
const char *id = NULL;
const char *uname = NULL;
const char *type = NULL;
const char *score = NULL;
for (xml_obj = __xml_first_child(xml_nodes); xml_obj != NULL; xml_obj = __xml_next_element(xml_obj)) {
if (crm_str_eq((const char *)xml_obj->name, XML_CIB_TAG_NODE, TRUE)) {
new_node = NULL;
id = crm_element_value(xml_obj, XML_ATTR_ID);
uname = crm_element_value(xml_obj, XML_ATTR_UNAME);
type = crm_element_value(xml_obj, XML_ATTR_TYPE);
score = crm_element_value(xml_obj, XML_RULE_ATTR_SCORE);
crm_trace("Processing node %s/%s", uname, id);
if (id == NULL) {
crm_config_err("Must specify id tag in <node>");
continue;
}
new_node = pe_create_node(id, uname, type, score, data_set);
if (new_node == NULL) {
return FALSE;
}
/* if(data_set->have_quorum == FALSE */
/* && data_set->no_quorum_policy == no_quorum_stop) { */
/* /\* start shutting resources down *\/ */
/* new_node->weight = -INFINITY; */
/* } */
handle_startup_fencing(data_set, new_node);
add_node_attrs(xml_obj, new_node, FALSE, data_set);
unpack_instance_attributes(data_set->input, xml_obj, XML_TAG_UTILIZATION, NULL,
new_node->details->utilization, NULL, FALSE, data_set->now);
crm_trace("Done with node %s", crm_element_value(xml_obj, XML_ATTR_UNAME));
}
}
if (data_set->localhost && pe_find_node(data_set->nodes, data_set->localhost) == NULL) {
crm_info("Creating a fake local node");
pe_create_node(data_set->localhost, data_set->localhost, NULL, NULL,
data_set);
}
return TRUE;
}
static void
setup_container(resource_t * rsc, pe_working_set_t * data_set)
{
const char *container_id = NULL;
if (rsc->children) {
GListPtr gIter = rsc->children;
for (; gIter != NULL; gIter = gIter->next) {
resource_t *child_rsc = (resource_t *) gIter->data;
setup_container(child_rsc, data_set);
}
return;
}
container_id = g_hash_table_lookup(rsc->meta, XML_RSC_ATTR_CONTAINER);
if (container_id && safe_str_neq(container_id, rsc->id)) {
resource_t *container = pe_find_resource(data_set->resources, container_id);
if (container) {
rsc->container = container;
container->fillers = g_list_append(container->fillers, rsc);
pe_rsc_trace(rsc, "Resource %s's container is %s", rsc->id, container_id);
} else {
pe_err("Resource %s: Unknown resource container (%s)", rsc->id, container_id);
}
}
}
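/*!
 * \brief Create node entries for Pacemaker Remote nodes found in the
 *        resource configuration
 *
 * Baremetal remote nodes (ocf:pacemaker:remote primitives) and guest nodes
 * (primitives, or members of groups, carrying remote-node meta-attributes)
 * are turned into node objects before the resources themselves are unpacked.
 */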
gboolean
unpack_remote_nodes(xmlNode * xml_resources, pe_working_set_t * data_set)
{
xmlNode *xml_obj = NULL;
/* generate remote nodes from resource config before unpacking resources */
for (xml_obj = __xml_first_child(xml_resources); xml_obj != NULL; xml_obj = __xml_next_element(xml_obj)) {
const char *new_node_id = NULL;
/* first check if this is a bare metal remote node. Bare metal remote nodes
* are defined as a resource primitive only. */
if (xml_contains_remote_node(xml_obj)) {
new_node_id = ID(xml_obj);
/* The "pe_find_node" check is here to make sure we don't iterate over
* an expanded node that has already been added to the node list. */
if (new_node_id && pe_find_node(data_set->nodes, new_node_id) == NULL) {
crm_trace("Found baremetal remote node %s in container resource %s", new_node_id, ID(xml_obj));
pe_create_node(new_node_id, new_node_id, "remote", NULL,
data_set);
}
continue;
}
/* Now check for guest remote nodes.
* guest remote nodes are defined within a resource primitive.
* Example1: a vm resource might be configured as a remote node.
* Example2: a vm resource might be configured within a group to be a remote node.
* Note: right now we only support guest remote nodes as a standalone primitive
* or as a primitive within a group. Cloned primitives cannot be guest remote
* nodes at this time. */
if (crm_str_eq((const char *)xml_obj->name, XML_CIB_TAG_RESOURCE, TRUE)) {
/* expands a metadata defined remote resource into the xml config
* as an actual rsc primitive to be unpacked later. */
new_node_id = expand_remote_rsc_meta(xml_obj, xml_resources, data_set);
if (new_node_id && pe_find_node(data_set->nodes, new_node_id) == NULL) {
crm_trace("Found guest remote node %s in container resource %s", new_node_id, ID(xml_obj));
pe_create_node(new_node_id, new_node_id, "remote", NULL,
data_set);
}
continue;
} else if (crm_str_eq((const char *)xml_obj->name, XML_CIB_TAG_GROUP, TRUE)) {
xmlNode *xml_obj2 = NULL;
/* search through a group to see if any of the primitive contain a remote node. */
for (xml_obj2 = __xml_first_child(xml_obj); xml_obj2 != NULL; xml_obj2 = __xml_next_element(xml_obj2)) {
new_node_id = expand_remote_rsc_meta(xml_obj2, xml_resources, data_set);
if (new_node_id && pe_find_node(data_set->nodes, new_node_id) == NULL) {
crm_trace("Found guest remote node %s in container resource %s which is in group %s", new_node_id, ID(xml_obj2), ID(xml_obj));
pe_create_node(new_node_id, new_node_id, "remote", NULL,
data_set);
}
}
}
}
return TRUE;
}
/* Call this after all the nodes and resources have been
* unpacked, but before the status section is read.
*
* A remote node's online status is reflected by the state
* of the remote node's connection resource. We need to link
* the remote node to this connection resource so we can have
* easy access to the connection resource during the PE calculations.
*/
static void
link_rsc2remotenode(pe_working_set_t *data_set, resource_t *new_rsc)
{
node_t *remote_node = NULL;
if (new_rsc->is_remote_node == FALSE) {
return;
}
if (is_set(data_set->flags, pe_flag_quick_location)) {
/* remote_nodes and remote_resources are not linked in quick location calculations */
return;
}
print_resource(LOG_DEBUG_3, "Linking remote-node connection resource, ", new_rsc, FALSE);
remote_node = pe_find_node(data_set->nodes, new_rsc->id);
CRM_CHECK(remote_node != NULL, return;);
remote_node->details->remote_rsc = new_rsc;
/* If this is a baremetal remote-node (no container resource
* associated with it) then we need to handle startup fencing the same way
* as cluster nodes. */
if (new_rsc->container == NULL) {
handle_startup_fencing(data_set, remote_node);
} else {
/* At this point we know whether the remote node is a guest (container) or
* baremetal remote node, so update the #kind attribute if a container is involved */
g_hash_table_replace(remote_node->details->attrs, strdup("#kind"), strdup("container"));
}
}
static void
destroy_tag(gpointer data)
{
tag_t *tag = data;
if (tag) {
free(tag->id);
g_list_free_full(tag->refs, free);
free(tag);
}
}
gboolean
unpack_resources(xmlNode * xml_resources, pe_working_set_t * data_set)
{
xmlNode *xml_obj = NULL;
GListPtr gIter = NULL;
data_set->template_rsc_sets =
g_hash_table_new_full(crm_str_hash, g_str_equal, g_hash_destroy_str,
destroy_tag);
for (xml_obj = __xml_first_child(xml_resources); xml_obj != NULL; xml_obj = __xml_next_element(xml_obj)) {
resource_t *new_rsc = NULL;
if (crm_str_eq((const char *)xml_obj->name, XML_CIB_TAG_RSC_TEMPLATE, TRUE)) {
const char *template_id = ID(xml_obj);
if (template_id && g_hash_table_lookup_extended(data_set->template_rsc_sets,
template_id, NULL, NULL) == FALSE) {
/* Record the template's ID for the knowledge of its existence anyway. */
g_hash_table_insert(data_set->template_rsc_sets, strdup(template_id), NULL);
}
continue;
}
crm_trace("Beginning unpack... <%s id=%s... >", crm_element_name(xml_obj), ID(xml_obj));
if (common_unpack(xml_obj, &new_rsc, NULL, data_set)) {
data_set->resources = g_list_append(data_set->resources, new_rsc);
print_resource(LOG_DEBUG_3, "Added ", new_rsc, FALSE);
} else {
crm_config_err("Failed unpacking %s %s",
crm_element_name(xml_obj), crm_element_value(xml_obj, XML_ATTR_ID));
if (new_rsc != NULL && new_rsc->fns != NULL) {
new_rsc->fns->free(new_rsc);
}
}
}
for (gIter = data_set->resources; gIter != NULL; gIter = gIter->next) {
resource_t *rsc = (resource_t *) gIter->data;
setup_container(rsc, data_set);
link_rsc2remotenode(data_set, rsc);
}
data_set->resources = g_list_sort(data_set->resources, sort_rsc_priority);
if (is_set(data_set->flags, pe_flag_quick_location)) {
/* Ignore */
} else if (is_set(data_set->flags, pe_flag_stonith_enabled)
&& is_set(data_set->flags, pe_flag_have_stonith_resource) == FALSE) {
crm_config_err("Resource start-up disabled since no STONITH resources have been defined");
crm_config_err("Either configure some or disable STONITH with the stonith-enabled option");
crm_config_err("NOTE: Clusters with shared data need STONITH to ensure data integrity");
}
return TRUE;
}
gboolean
unpack_tags(xmlNode * xml_tags, pe_working_set_t * data_set)
{
xmlNode *xml_tag = NULL;
data_set->tags =
g_hash_table_new_full(crm_str_hash, g_str_equal, g_hash_destroy_str, destroy_tag);
for (xml_tag = __xml_first_child(xml_tags); xml_tag != NULL; xml_tag = __xml_next_element(xml_tag)) {
xmlNode *xml_obj_ref = NULL;
const char *tag_id = ID(xml_tag);
if (crm_str_eq((const char *)xml_tag->name, XML_CIB_TAG_TAG, TRUE) == FALSE) {
continue;
}
if (tag_id == NULL) {
crm_config_err("Failed unpacking %s: %s should be specified",
crm_element_name(xml_tag), XML_ATTR_ID);
continue;
}
for (xml_obj_ref = __xml_first_child(xml_tag); xml_obj_ref != NULL; xml_obj_ref = __xml_next_element(xml_obj_ref)) {
const char *obj_ref = ID(xml_obj_ref);
if (crm_str_eq((const char *)xml_obj_ref->name, XML_CIB_TAG_OBJ_REF, TRUE) == FALSE) {
continue;
}
if (obj_ref == NULL) {
crm_config_err("Failed unpacking %s for tag %s: %s should be specified",
crm_element_name(xml_obj_ref), tag_id, XML_ATTR_ID);
continue;
}
if (add_tag_ref(data_set->tags, tag_id, obj_ref) == FALSE) {
return FALSE;
}
}
}
return TRUE;
}
/* The ticket state section:
* "/cib/status/tickets/ticket_state" */
static gboolean
unpack_ticket_state(xmlNode * xml_ticket, pe_working_set_t * data_set)
{
const char *ticket_id = NULL;
const char *granted = NULL;
const char *last_granted = NULL;
const char *standby = NULL;
xmlAttrPtr xIter = NULL;
ticket_t *ticket = NULL;
ticket_id = ID(xml_ticket);
if (ticket_id == NULL || strlen(ticket_id) == 0) {
return FALSE;
}
crm_trace("Processing ticket state for %s", ticket_id);
ticket = g_hash_table_lookup(data_set->tickets, ticket_id);
if (ticket == NULL) {
ticket = ticket_new(ticket_id, data_set);
if (ticket == NULL) {
return FALSE;
}
}
for (xIter = xml_ticket->properties; xIter; xIter = xIter->next) {
const char *prop_name = (const char *)xIter->name;
const char *prop_value = crm_element_value(xml_ticket, prop_name);
if (crm_str_eq(prop_name, XML_ATTR_ID, TRUE)) {
continue;
}
g_hash_table_replace(ticket->state, strdup(prop_name), strdup(prop_value));
}
granted = g_hash_table_lookup(ticket->state, "granted");
if (granted && crm_is_true(granted)) {
ticket->granted = TRUE;
crm_info("We have ticket '%s'", ticket->id);
} else {
ticket->granted = FALSE;
crm_info("We do not have ticket '%s'", ticket->id);
}
last_granted = g_hash_table_lookup(ticket->state, "last-granted");
if (last_granted) {
ticket->last_granted = crm_parse_int(last_granted, 0);
}
standby = g_hash_table_lookup(ticket->state, "standby");
if (standby && crm_is_true(standby)) {
ticket->standby = TRUE;
if (ticket->granted) {
crm_info("Granted ticket '%s' is in standby-mode", ticket->id);
}
} else {
ticket->standby = FALSE;
}
crm_trace("Done with ticket state for %s", ticket_id);
return TRUE;
}
static gboolean
unpack_tickets_state(xmlNode * xml_tickets, pe_working_set_t * data_set)
{
xmlNode *xml_obj = NULL;
for (xml_obj = __xml_first_child(xml_tickets); xml_obj != NULL; xml_obj = __xml_next_element(xml_obj)) {
if (crm_str_eq((const char *)xml_obj->name, XML_CIB_TAG_TICKET_STATE, TRUE) == FALSE) {
continue;
}
unpack_ticket_state(xml_obj, data_set);
}
return TRUE;
}
/* Compatibility with the deprecated ticket state section:
* "/cib/status/tickets/instance_attributes" */
static void
get_ticket_state_legacy(gpointer key, gpointer value, gpointer user_data)
{
const char *long_key = key;
char *state_key = NULL;
const char *granted_prefix = "granted-ticket-";
const char *last_granted_prefix = "last-granted-";
static int granted_prefix_strlen = 0;
static int last_granted_prefix_strlen = 0;
const char *ticket_id = NULL;
const char *is_granted = NULL;
const char *last_granted = NULL;
const char *sep = NULL;
ticket_t *ticket = NULL;
pe_working_set_t *data_set = user_data;
if (granted_prefix_strlen == 0) {
granted_prefix_strlen = strlen(granted_prefix);
}
if (last_granted_prefix_strlen == 0) {
last_granted_prefix_strlen = strlen(last_granted_prefix);
}
if (strstr(long_key, granted_prefix) == long_key) {
ticket_id = long_key + granted_prefix_strlen;
if (strlen(ticket_id)) {
state_key = strdup("granted");
is_granted = value;
}
} else if (strstr(long_key, last_granted_prefix) == long_key) {
ticket_id = long_key + last_granted_prefix_strlen;
if (strlen(ticket_id)) {
state_key = strdup("last-granted");
last_granted = value;
}
} else if ((sep = strrchr(long_key, '-'))) {
ticket_id = sep + 1;
state_key = strndup(long_key, strlen(long_key) - strlen(sep));
}
if (ticket_id == NULL || strlen(ticket_id) == 0) {
free(state_key);
return;
}
if (state_key == NULL || strlen(state_key) == 0) {
free(state_key);
return;
}
ticket = g_hash_table_lookup(data_set->tickets, ticket_id);
if (ticket == NULL) {
ticket = ticket_new(ticket_id, data_set);
if (ticket == NULL) {
free(state_key);
return;
}
}
g_hash_table_replace(ticket->state, state_key, strdup(value));
if (is_granted) {
if (crm_is_true(is_granted)) {
ticket->granted = TRUE;
crm_info("We have ticket '%s'", ticket->id);
} else {
ticket->granted = FALSE;
crm_info("We do not have ticket '%s'", ticket->id);
}
} else if (last_granted) {
ticket->last_granted = crm_parse_int(last_granted, 0);
}
}
static void
unpack_handle_remote_attrs(node_t *this_node, xmlNode *state, pe_working_set_t * data_set)
{
const char *resource_discovery_enabled = NULL;
xmlNode *attrs = NULL;
resource_t *rsc = NULL;
const char *shutdown = NULL;
if (crm_str_eq((const char *)state->name, XML_CIB_TAG_STATE, TRUE) == FALSE) {
return;
}
if ((this_node == NULL) || (is_remote_node(this_node) == FALSE)) {
return;
}
crm_trace("Processing remote node id=%s, uname=%s", this_node->details->id, this_node->details->uname);
this_node->details->remote_maintenance =
crm_atoi(crm_element_value(state, XML_NODE_IS_MAINTENANCE), "0");
rsc = this_node->details->remote_rsc;
if (this_node->details->remote_requires_reset == FALSE) {
this_node->details->unclean = FALSE;
this_node->details->unseen = FALSE;
}
attrs = find_xml_node(state, XML_TAG_TRANSIENT_NODEATTRS, FALSE);
add_node_attrs(attrs, this_node, TRUE, data_set);
shutdown = g_hash_table_lookup(this_node->details->attrs, XML_CIB_ATTR_SHUTDOWN);
if (shutdown != NULL && safe_str_neq("0", shutdown)) {
crm_info("Node %s is shutting down", this_node->details->uname);
this_node->details->shutdown = TRUE;
if (rsc) {
rsc->next_role = RSC_ROLE_STOPPED;
}
}
if (crm_is_true(g_hash_table_lookup(this_node->details->attrs, "standby"))) {
crm_info("Node %s is in standby-mode", this_node->details->uname);
this_node->details->standby = TRUE;
}
if (crm_is_true(g_hash_table_lookup(this_node->details->attrs, "maintenance")) ||
(rsc && !is_set(rsc->flags, pe_rsc_managed))) {
crm_info("Node %s is in maintenance-mode", this_node->details->uname);
this_node->details->maintenance = TRUE;
}
resource_discovery_enabled = g_hash_table_lookup(this_node->details->attrs, XML_NODE_ATTR_RSC_DISCOVERY);
if (resource_discovery_enabled && !crm_is_true(resource_discovery_enabled)) {
if (is_baremetal_remote_node(this_node) && is_not_set(data_set->flags, pe_flag_stonith_enabled)) {
crm_warn("ignoring %s attribute on baremetal remote node %s, disabling resource discovery requires stonith to be enabled.",
XML_NODE_ATTR_RSC_DISCOVERY, this_node->details->uname);
} else {
/* If we get here, this is either a baremetal node with fencing enabled,
* or a guest (container) node, where fencing does not matter:
* container nodes are 'fenced' by recovering the container resource,
* regardless of whether fencing is enabled. */
crm_info("Node %s has resource discovery disabled", this_node->details->uname);
this_node->details->rsc_discovery_enabled = FALSE;
}
}
}
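/*!
 * \brief Unpack LRM resource history for nodes that can be processed now
 *
 * Remote nodes are processed once their connection resource is known to be
 * started (or when \p fence is true); cluster nodes are processed when
 * fencing is enabled, when online, or when \p fence is true. Because guest
 * nodes depend on the state of their hosts, callers loop over this until
 * nothing new is processed.
 *
 * \return true if any node's history was newly unpacked in this pass
 */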
static bool
unpack_node_loop(xmlNode * status, bool fence, pe_working_set_t * data_set)
{
bool changed = false;
xmlNode *lrm_rsc = NULL;
for (xmlNode *state = __xml_first_child(status); state != NULL; state = __xml_next_element(state)) {
const char *id = NULL;
const char *uname = NULL;
node_t *this_node = NULL;
bool process = FALSE;
if (crm_str_eq((const char *)state->name, XML_CIB_TAG_STATE, TRUE) == FALSE) {
continue;
}
id = crm_element_value(state, XML_ATTR_ID);
uname = crm_element_value(state, XML_ATTR_UNAME);
this_node = pe_find_node_any(data_set->nodes, id, uname);
if (this_node == NULL) {
crm_info("Node %s is unknown", id);
continue;
} else if (this_node->details->unpacked) {
crm_info("Node %s is already processed", id);
continue;
} else if (is_remote_node(this_node) == FALSE && is_set(data_set->flags, pe_flag_stonith_enabled)) {
// A redundant test, but preserves the order for regression tests
process = TRUE;
} else if (is_remote_node(this_node)) {
resource_t *rsc = this_node->details->remote_rsc;
if (fence || (rsc && rsc->role == RSC_ROLE_STARTED)) {
determine_remote_online_status(data_set, this_node);
unpack_handle_remote_attrs(this_node, state, data_set);
process = TRUE;
}
} else if (this_node->details->online) {
process = TRUE;
} else if (fence) {
process = TRUE;
}
if(process) {
crm_trace("Processing lrm resource entries on %shealthy%s node: %s",
fence?"un":"", is_remote_node(this_node)?" remote":"",
this_node->details->uname);
changed = TRUE;
this_node->details->unpacked = TRUE;
lrm_rsc = find_xml_node(state, XML_CIB_TAG_LRM, FALSE);
lrm_rsc = find_xml_node(lrm_rsc, XML_LRM_TAG_RESOURCES, FALSE);
unpack_lrm_resources(this_node, lrm_rsc, data_set);
}
}
return changed;
}
/* remove nodes that are down, stopping */
/* create +ve rsc_to_node constraints between resources and the nodes they are running on */
/* anything else? */
gboolean
unpack_status(xmlNode * status, pe_working_set_t * data_set)
{
const char *id = NULL;
const char *uname = NULL;
xmlNode *state = NULL;
node_t *this_node = NULL;
crm_trace("Beginning unpack");
if (data_set->tickets == NULL) {
data_set->tickets =
g_hash_table_new_full(crm_str_hash, g_str_equal, g_hash_destroy_str, destroy_ticket);
}
for (state = __xml_first_child(status); state != NULL; state = __xml_next_element(state)) {
if (crm_str_eq((const char *)state->name, XML_CIB_TAG_TICKETS, TRUE)) {
xmlNode *xml_tickets = state;
GHashTable *state_hash = NULL;
/* Compatibility with the deprecated ticket state section:
* Unpack the attributes in the deprecated "/cib/status/tickets/instance_attributes" if it exists. */
state_hash = crm_str_table_new();
unpack_instance_attributes(data_set->input, xml_tickets, XML_TAG_ATTR_SETS, NULL,
state_hash, NULL, TRUE, data_set->now);
g_hash_table_foreach(state_hash, get_ticket_state_legacy, data_set);
if (state_hash) {
g_hash_table_destroy(state_hash);
}
/* Unpack the new "/cib/status/tickets/ticket_state"s */
unpack_tickets_state(xml_tickets, data_set);
}
if (crm_str_eq((const char *)state->name, XML_CIB_TAG_STATE, TRUE)) {
xmlNode *attrs = NULL;
const char *resource_discovery_enabled = NULL;
id = crm_element_value(state, XML_ATTR_ID);
uname = crm_element_value(state, XML_ATTR_UNAME);
this_node = pe_find_node_any(data_set->nodes, id, uname);
if (uname == NULL) {
/* error */
continue;
} else if (this_node == NULL) {
crm_config_warn("Node %s in status section no longer exists", uname);
continue;
} else if (is_remote_node(this_node)) {
/* online state for remote nodes is determined by the
* rsc state after all the unpacking is done. we do however
* need to mark whether or not the node has been fenced as this plays
* a role during unpacking cluster node resource state */
this_node->details->remote_was_fenced =
crm_atoi(crm_element_value(state, XML_NODE_IS_FENCED), "0");
continue;
}
crm_trace("Processing node id=%s, uname=%s", id, uname);
/* Mark the node as provisionally clean
* - at least we have seen it in the current cluster's lifetime
*/
this_node->details->unclean = FALSE;
this_node->details->unseen = FALSE;
attrs = find_xml_node(state, XML_TAG_TRANSIENT_NODEATTRS, FALSE);
add_node_attrs(attrs, this_node, TRUE, data_set);
if (crm_is_true(g_hash_table_lookup(this_node->details->attrs, "standby"))) {
crm_info("Node %s is in standby-mode", this_node->details->uname);
this_node->details->standby = TRUE;
}
if (crm_is_true(g_hash_table_lookup(this_node->details->attrs, "maintenance"))) {
crm_info("Node %s is in maintenance-mode", this_node->details->uname);
this_node->details->maintenance = TRUE;
}
resource_discovery_enabled = g_hash_table_lookup(this_node->details->attrs, XML_NODE_ATTR_RSC_DISCOVERY);
if (resource_discovery_enabled && !crm_is_true(resource_discovery_enabled)) {
crm_warn("ignoring %s attribute on node %s, disabling resource discovery is not allowed on cluster nodes",
XML_NODE_ATTR_RSC_DISCOVERY, this_node->details->uname);
}
crm_trace("determining node state");
determine_online_status(state, this_node, data_set);
if (this_node->details->online && data_set->no_quorum_policy == no_quorum_suicide) {
/* Everything else should flow from this automatically
* At least until the PE becomes able to migrate off healthy resources
*/
pe_fence_node(data_set, this_node, "cluster does not have quorum");
}
}
}
while(unpack_node_loop(status, FALSE, data_set)) {
crm_trace("Start another loop");
}
// Now catch any nodes we didn't see
unpack_node_loop(status, is_set(data_set->flags, pe_flag_stonith_enabled), data_set);
for (GListPtr gIter = data_set->nodes; gIter != NULL; gIter = gIter->next) {
node_t *this_node = gIter->data;
if (this_node == NULL) {
continue;
} else if(is_remote_node(this_node) == FALSE) {
continue;
} else if(this_node->details->unpacked) {
continue;
}
determine_remote_online_status(data_set, this_node);
}
return TRUE;
}
static gboolean
determine_online_status_no_fencing(pe_working_set_t * data_set, xmlNode * node_state,
node_t * this_node)
{
gboolean online = FALSE;
const char *join = crm_element_value(node_state, XML_NODE_JOIN_STATE);
const char *is_peer = crm_element_value(node_state, XML_NODE_IS_PEER);
const char *in_cluster = crm_element_value(node_state, XML_NODE_IN_CLUSTER);
const char *exp_state = crm_element_value(node_state, XML_NODE_EXPECTED);
if (!crm_is_true(in_cluster)) {
crm_trace("Node is down: in_cluster=%s", crm_str(in_cluster));
} else if (safe_str_eq(is_peer, ONLINESTATUS)) {
if (safe_str_eq(join, CRMD_JOINSTATE_MEMBER)) {
online = TRUE;
} else {
crm_debug("Node is not ready to run resources: %s", join);
}
} else if (this_node->details->expected_up == FALSE) {
crm_trace("CRMd is down: in_cluster=%s", crm_str(in_cluster));
crm_trace("\tis_peer=%s, join=%s, expected=%s",
crm_str(is_peer), crm_str(join), crm_str(exp_state));
} else {
/* mark it unclean */
pe_fence_node(data_set, this_node, "peer is unexpectedly down");
crm_info("\tin_cluster=%s, is_peer=%s, join=%s, expected=%s",
crm_str(in_cluster), crm_str(is_peer), crm_str(join), crm_str(exp_state));
}
return online;
}
static gboolean
determine_online_status_fencing(pe_working_set_t * data_set, xmlNode * node_state,
node_t * this_node)
{
gboolean online = FALSE;
gboolean do_terminate = FALSE;
const char *join = crm_element_value(node_state, XML_NODE_JOIN_STATE);
const char *is_peer = crm_element_value(node_state, XML_NODE_IS_PEER);
const char *in_cluster = crm_element_value(node_state, XML_NODE_IN_CLUSTER);
const char *exp_state = crm_element_value(node_state, XML_NODE_EXPECTED);
const char *terminate = g_hash_table_lookup(this_node->details->attrs, "terminate");
/*
- XML_NODE_IN_CLUSTER ::= true|false
- XML_NODE_IS_PEER ::= true|false|online|offline
- XML_NODE_JOIN_STATE ::= member|down|pending|banned
- XML_NODE_EXPECTED ::= member|down
*/
if (crm_is_true(terminate)) {
do_terminate = TRUE;
} else if (terminate != NULL && strlen(terminate) > 0) {
/* could be a time() value */
char t = terminate[0];
if (t != '0' && isdigit(t)) {
do_terminate = TRUE;
}
}
crm_trace("%s: in_cluster=%s, is_peer=%s, join=%s, expected=%s, term=%d",
this_node->details->uname, crm_str(in_cluster), crm_str(is_peer),
crm_str(join), crm_str(exp_state), do_terminate);
online = crm_is_true(in_cluster);
if (safe_str_eq(is_peer, ONLINESTATUS)) {
is_peer = XML_BOOLEAN_YES;
}
if (exp_state == NULL) {
exp_state = CRMD_JOINSTATE_DOWN;
}
if (this_node->details->shutdown) {
crm_debug("%s is shutting down", this_node->details->uname);
/* Slightly different criteria since we can't shut down a dead peer */
online = crm_is_true(is_peer);
} else if (in_cluster == NULL) {
pe_fence_node(data_set, this_node, "peer has not been seen by the cluster");
} else if (safe_str_eq(join, CRMD_JOINSTATE_NACK)) {
pe_fence_node(data_set, this_node, "peer failed the pacemaker membership criteria");
} else if (do_terminate == FALSE && safe_str_eq(exp_state, CRMD_JOINSTATE_DOWN)) {
if (crm_is_true(in_cluster) || crm_is_true(is_peer)) {
crm_info("- Node %s is not ready to run resources", this_node->details->uname);
this_node->details->standby = TRUE;
this_node->details->pending = TRUE;
} else {
crm_trace("%s is down or still coming up", this_node->details->uname);
}
} else if (do_terminate && safe_str_eq(join, CRMD_JOINSTATE_DOWN)
&& crm_is_true(in_cluster) == FALSE && crm_is_true(is_peer) == FALSE) {
crm_info("Node %s was just shot", this_node->details->uname);
online = FALSE;
} else if (crm_is_true(in_cluster) == FALSE) {
pe_fence_node(data_set, this_node, "peer is no longer part of the cluster");
} else if (crm_is_true(is_peer) == FALSE) {
pe_fence_node(data_set, this_node, "peer process is no longer available");
/* Everything is running at this point, now check join state */
} else if (do_terminate) {
pe_fence_node(data_set, this_node, "termination was requested");
} else if (safe_str_eq(join, CRMD_JOINSTATE_MEMBER)) {
crm_info("Node %s is active", this_node->details->uname);
} else if (safe_str_eq(join, CRMD_JOINSTATE_PENDING)
|| safe_str_eq(join, CRMD_JOINSTATE_DOWN)) {
crm_info("Node %s is not ready to run resources", this_node->details->uname);
this_node->details->standby = TRUE;
this_node->details->pending = TRUE;
} else {
pe_fence_node(data_set, this_node, "peer was in an unknown state");
crm_warn("%s: in-cluster=%s, is-peer=%s, join=%s, expected=%s, term=%d, shutdown=%d",
this_node->details->uname, crm_str(in_cluster), crm_str(is_peer),
crm_str(join), crm_str(exp_state), do_terminate, this_node->details->shutdown);
}
return online;
}
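/*!
 * \brief Derive a remote or guest node's status from its connection resource
 *
 * The node is online while its connection resource is started, shutting down
 * when the connection is transitioning to stopped, and offline (and possibly
 * needing reset) when the connection or its container has failed or stopped,
 * or the container's host is offline and unclean.
 *
 * \return TRUE if the node is considered online, FALSE otherwise
 */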
static gboolean
determine_remote_online_status(pe_working_set_t * data_set, node_t * this_node)
{
resource_t *rsc = this_node->details->remote_rsc;
resource_t *container = NULL;
pe_node_t *host = NULL;
/* If there is a node state entry for a (former) Pacemaker Remote node
* but no resource creating that node, the node's connection resource will
* be NULL. Consider it an offline remote node in that case.
*/
if (rsc == NULL) {
this_node->details->online = FALSE;
goto remote_online_done;
}
container = rsc->container;
if (container && (g_list_length(rsc->running_on) == 1)) {
host = rsc->running_on->data;
}
/* If the resource is currently started, mark it online. */
if (rsc->role == RSC_ROLE_STARTED) {
crm_trace("%s node %s presumed ONLINE because connection resource is started",
(container? "Guest" : "Remote"), this_node->details->id);
this_node->details->online = TRUE;
}
/* consider this node shutting down if transitioning start->stop */
if (rsc->role == RSC_ROLE_STARTED && rsc->next_role == RSC_ROLE_STOPPED) {
crm_trace("%s node %s shutting down because connection resource is stopping",
(container? "Guest" : "Remote"), this_node->details->id);
this_node->details->shutdown = TRUE;
}
/* Now check all the failure conditions. */
if(container && is_set(container->flags, pe_rsc_failed)) {
crm_trace("Guest node %s UNCLEAN because guest resource failed",
this_node->details->id);
this_node->details->online = FALSE;
this_node->details->remote_requires_reset = TRUE;
} else if(is_set(rsc->flags, pe_rsc_failed)) {
crm_trace("%s node %s OFFLINE because connection resource failed",
(container? "Guest" : "Remote"), this_node->details->id);
this_node->details->online = FALSE;
} else if (rsc->role == RSC_ROLE_STOPPED
|| (container && container->role == RSC_ROLE_STOPPED)) {
crm_trace("%s node %s OFFLINE because its resource is stopped",
(container? "Guest" : "Remote"), this_node->details->id);
this_node->details->online = FALSE;
this_node->details->remote_requires_reset = FALSE;
} else if (host && (host->details->online == FALSE)
&& host->details->unclean) {
crm_trace("Guest node %s UNCLEAN because host is unclean",
this_node->details->id);
this_node->details->online = FALSE;
this_node->details->remote_requires_reset = TRUE;
}
remote_online_done:
crm_trace("Remote node %s online=%s",
this_node->details->id, this_node->details->online ? "TRUE" : "FALSE");
return this_node->details->online;
}
gboolean
determine_online_status(xmlNode * node_state, node_t * this_node, pe_working_set_t * data_set)
{
gboolean online = FALSE;
const char *shutdown = NULL;
const char *exp_state = crm_element_value(node_state, XML_NODE_EXPECTED);
if (this_node == NULL) {
crm_config_err("No node to check");
return online;
}
this_node->details->shutdown = FALSE;
this_node->details->expected_up = FALSE;
shutdown = g_hash_table_lookup(this_node->details->attrs, XML_CIB_ATTR_SHUTDOWN);
if (shutdown != NULL && safe_str_neq("0", shutdown)) {
this_node->details->shutdown = TRUE;
} else if (safe_str_eq(exp_state, CRMD_JOINSTATE_MEMBER)) {
this_node->details->expected_up = TRUE;
}
if (this_node->details->type == node_ping) {
this_node->details->unclean = FALSE;
online = FALSE; /* As far as resource management is concerned,
* the node is safely offline.
* Anyone caught abusing this logic will be shot
*/
} else if (is_set(data_set->flags, pe_flag_stonith_enabled) == FALSE) {
online = determine_online_status_no_fencing(data_set, node_state, this_node);
} else {
online = determine_online_status_fencing(data_set, node_state, this_node);
}
if (online) {
this_node->details->online = TRUE;
} else {
/* remove node from contention */
this_node->fixed = TRUE;
this_node->weight = -INFINITY;
}
if (online && this_node->details->shutdown) {
/* don't run resources here */
this_node->fixed = TRUE;
this_node->weight = -INFINITY;
}
if (this_node->details->type == node_ping) {
crm_info("Node %s is not a pacemaker node", this_node->details->uname);
} else if (this_node->details->unclean) {
pe_proc_warn("Node %s is unclean", this_node->details->uname);
} else if (this_node->details->online) {
crm_info("Node %s is %s", this_node->details->uname,
this_node->details->shutdown ? "shutting down" :
this_node->details->pending ? "pending" :
this_node->details->standby ? "standby" :
this_node->details->maintenance ? "maintenance" : "online");
} else {
crm_trace("Node %s is offline", this_node->details->uname);
}
return online;
}
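/*!
 * \brief Strip any clone instance suffix from a resource ID
 *
 * For example, "myrsc:1" becomes "myrsc", while "myrsc" is returned
 * unchanged (as a copy).
 *
 * \return Newly allocated string that the caller must free
 */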
char *
clone_strip(const char *last_rsc_id)
{
int lpc = 0;
char *zero = NULL;
CRM_CHECK(last_rsc_id != NULL, return NULL);
lpc = strlen(last_rsc_id);
while (--lpc > 0) {
switch (last_rsc_id[lpc]) {
case 0:
crm_err("Empty string: %s", last_rsc_id);
return NULL;
break;
case '0':
case '1':
case '2':
case '3':
case '4':
case '5':
case '6':
case '7':
case '8':
case '9':
break;
case ':':
zero = calloc(1, lpc + 1);
memcpy(zero, last_rsc_id, lpc);
zero[lpc] = 0;
return zero;
default:
goto done;
}
}
done:
zero = strdup(last_rsc_id);
return zero;
}
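/*!
 * \brief Map a resource ID to its ":0" clone instance name
 *
 * For example, "myrsc:3" becomes "myrsc:0", and "myrsc" becomes "myrsc:0".
 *
 * \return Newly allocated string that the caller must free
 */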
char *
clone_zero(const char *last_rsc_id)
{
int lpc = 0;
char *zero = NULL;
CRM_CHECK(last_rsc_id != NULL, return NULL);
if (last_rsc_id != NULL) {
lpc = strlen(last_rsc_id);
}
while (--lpc > 0) {
switch (last_rsc_id[lpc]) {
case 0:
return NULL;
break;
case '0':
case '1':
case '2':
case '3':
case '4':
case '5':
case '6':
case '7':
case '8':
case '9':
break;
case ':':
zero = calloc(1, lpc + 3);
memcpy(zero, last_rsc_id, lpc);
zero[lpc] = ':';
zero[lpc + 1] = '0';
zero[lpc + 2] = 0;
return zero;
default:
goto done;
}
}
done:
lpc = strlen(last_rsc_id);
zero = calloc(1, lpc + 3);
memcpy(zero, last_rsc_id, lpc);
zero[lpc] = ':';
zero[lpc + 1] = '0';
zero[lpc + 2] = 0;
crm_trace("%s -> %s", last_rsc_id, zero);
return zero;
}
static resource_t *
create_fake_resource(const char *rsc_id, xmlNode * rsc_entry, pe_working_set_t * data_set)
{
resource_t *rsc = NULL;
xmlNode *xml_rsc = create_xml_node(NULL, XML_CIB_TAG_RESOURCE);
copy_in_properties(xml_rsc, rsc_entry);
crm_xml_add(xml_rsc, XML_ATTR_ID, rsc_id);
crm_log_xml_debug(xml_rsc, "Orphan resource");
if (!common_unpack(xml_rsc, &rsc, NULL, data_set)) {
return NULL;
}
if (xml_contains_remote_node(xml_rsc)) {
node_t *node;
crm_debug("Detected orphaned remote node %s", rsc_id);
node = pe_find_node(data_set->nodes, rsc_id);
if (node == NULL) {
node = pe_create_node(rsc_id, rsc_id, "remote", NULL, data_set);
}
link_rsc2remotenode(data_set, rsc);
if (node) {
crm_trace("Setting node %s as shutting down due to orphaned connection resource", rsc_id);
node->details->shutdown = TRUE;
}
}
if (crm_element_value(rsc_entry, XML_RSC_ATTR_CONTAINER)) {
/* This orphaned rsc needs to be mapped to a container. */
crm_trace("Detected orphaned container filler %s", rsc_id);
set_bit(rsc->flags, pe_rsc_orphan_container_filler);
}
set_bit(rsc->flags, pe_rsc_orphan);
data_set->resources = g_list_append(data_set->resources, rsc);
return rsc;
}
extern resource_t *create_child_clone(resource_t * rsc, int sub_id, pe_working_set_t * data_set);
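/*!
 * \brief Find the anonymous clone instance to associate with a history entry
 *
 * Preference is given to an instance of \p parent already active on
 * \p node; failing that, an inactive (and unblocked) instance is reused;
 * failing that, a new orphan instance is created.
 *
 * \return Matching clone instance (guaranteed non-NULL)
 */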
static resource_t *
find_anonymous_clone(pe_working_set_t * data_set, node_t * node, resource_t * parent,
const char *rsc_id)
{
GListPtr rIter = NULL;
resource_t *rsc = NULL;
gboolean skip_inactive = FALSE;
CRM_ASSERT(parent != NULL);
CRM_ASSERT(pe_rsc_is_clone(parent));
CRM_ASSERT(is_not_set(parent->flags, pe_rsc_unique));
/* Find an instance active (or partially active for grouped clones) on the specified node */
pe_rsc_trace(parent, "Looking for %s on %s in %s", rsc_id, node->details->uname, parent->id);
for (rIter = parent->children; rsc == NULL && rIter; rIter = rIter->next) {
GListPtr nIter = NULL;
GListPtr locations = NULL;
resource_t *child = rIter->data;
child->fns->location(child, &locations, TRUE);
if (locations == NULL) {
pe_rsc_trace(child, "Resource %s, skip inactive", child->id);
continue;
}
for (nIter = locations; nIter && rsc == NULL; nIter = nIter->next) {
node_t *childnode = nIter->data;
if (childnode->details == node->details) {
/* ->find_rsc() because we might be a cloned group */
rsc = parent->fns->find_rsc(child, rsc_id, NULL, pe_find_clone);
if(rsc) {
pe_rsc_trace(rsc, "Resource %s, active", rsc->id);
}
}
/* Keep this block: it means we'll do the right thing if
* anyone toggles the unique flag to 'off'
*/
if (rsc && rsc->running_on) {
crm_notice("/Anonymous/ clone %s is already running on %s",
parent->id, node->details->uname);
skip_inactive = TRUE;
rsc = NULL;
}
}
g_list_free(locations);
}
/* Find an inactive instance */
if (skip_inactive == FALSE) {
pe_rsc_trace(parent, "Looking for %s anywhere", rsc_id);
for (rIter = parent->children; rsc == NULL && rIter; rIter = rIter->next) {
GListPtr locations = NULL;
resource_t *child = rIter->data;
if (is_set(child->flags, pe_rsc_block)) {
pe_rsc_trace(child, "Skip: blocked in stopped state");
continue;
}
child->fns->location(child, &locations, TRUE);
if (locations == NULL) {
/* ->find_rsc() because we might be a cloned group */
rsc = parent->fns->find_rsc(child, rsc_id, NULL, pe_find_clone);
pe_rsc_trace(parent, "Resource %s, empty slot", rsc->id);
}
g_list_free(locations);
}
}
if (rsc == NULL) {
/* Create an extra orphan */
resource_t *top = create_child_clone(parent, -1, data_set);
/* ->find_rsc() because we might be a cloned group */
rsc = top->fns->find_rsc(top, rsc_id, NULL, pe_find_clone);
CRM_ASSERT(rsc != NULL);
pe_rsc_debug(parent, "Created orphan %s for %s: %s on %s", top->id, parent->id, rsc_id,
node->details->uname);
}
if (safe_str_neq(rsc_id, rsc->id)) {
pe_rsc_debug(rsc, "Internally renamed %s on %s to %s%s",
rsc_id, node->details->uname, rsc->id,
is_set(rsc->flags, pe_rsc_orphan) ? " (ORPHAN)" : "");
}
return rsc;
}
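/* Map the resource ID from an lrm_resource history entry to the resource
* object it refers to, handling anonymous clone instances and resources
* running inside containers. Returns NULL if the entry is obsolete or no
* matching resource can be determined.
*/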
static resource_t *
unpack_find_resource(pe_working_set_t * data_set, node_t * node, const char *rsc_id,
xmlNode * rsc_entry)
{
resource_t *rsc = NULL;
resource_t *parent = NULL;
crm_trace("looking for %s", rsc_id);
rsc = pe_find_resource(data_set->resources, rsc_id);
/* no match */
if (rsc == NULL) {
/* Even when clone-max=0, we still create a single :0 orphan to match against */
char *tmp = clone_zero(rsc_id);
resource_t *clone0 = pe_find_resource(data_set->resources, tmp);
if (clone0 && is_not_set(clone0->flags, pe_rsc_unique)) {
rsc = clone0;
} else {
crm_trace("%s is not known as %s either", rsc_id, tmp);
}
parent = uber_parent(clone0);
free(tmp);
crm_trace("%s not found: %s", rsc_id, parent ? parent->id : "orphan");
} else if (rsc->variant > pe_native) {
crm_trace("%s is no longer a primitive resource, the lrm_resource entry is obsolete",
rsc_id);
return NULL;
} else {
parent = uber_parent(rsc);
}
if(parent && parent->parent) {
rsc = find_container_child(rsc_id, rsc, node);
} else if (pe_rsc_is_clone(parent)) {
if (is_not_set(parent->flags, pe_rsc_unique)) {
char *base = clone_strip(rsc_id);
rsc = find_anonymous_clone(data_set, node, parent, base);
CRM_ASSERT(rsc != NULL);
free(base);
}
if (rsc && safe_str_neq(rsc_id, rsc->id)) {
free(rsc->clone_name);
rsc->clone_name = strdup(rsc_id);
}
}
return rsc;
}
static resource_t *
process_orphan_resource(xmlNode * rsc_entry, node_t * node, pe_working_set_t * data_set)
{
resource_t *rsc = NULL;
const char *rsc_id = crm_element_value(rsc_entry, XML_ATTR_ID);
crm_debug("Detected orphan resource %s on %s", rsc_id, node->details->uname);
rsc = create_fake_resource(rsc_id, rsc_entry, data_set);
if (is_set(data_set->flags, pe_flag_stop_rsc_orphans) == FALSE) {
clear_bit(rsc->flags, pe_rsc_managed);
} else {
print_resource(LOG_DEBUG_3, "Added orphan", rsc, FALSE);
CRM_CHECK(rsc != NULL, return NULL);
resource_location(rsc, NULL, -INFINITY, "__orphan_dont_run__", data_set);
}
return rsc;
}
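/* Update cluster state based on a resource's known status on a node: record
* where the resource is known to have been active, decide whether the node
* must be fenced, and apply the configured on-fail behavior.
*/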
static void
process_rsc_state(resource_t * rsc, node_t * node,
enum action_fail_response on_fail,
xmlNode * migrate_op, pe_working_set_t * data_set)
{
node_t *tmpnode = NULL;
char *reason = NULL;
CRM_ASSERT(rsc);
pe_rsc_trace(rsc, "Resource %s is %s on %s: on_fail=%s",
rsc->id, role2text(rsc->role), node->details->uname, fail2text(on_fail));
/* process current state */
if (rsc->role != RSC_ROLE_UNKNOWN) {
resource_t *iter = rsc;
while (iter) {
if (g_hash_table_lookup(iter->known_on, node->details->id) == NULL) {
node_t *n = node_copy(node);
pe_rsc_trace(rsc, "%s (aka. %s) known on %s", rsc->id, rsc->clone_name,
n->details->uname);
g_hash_table_insert(iter->known_on, (gpointer) n->details->id, n);
}
if (is_set(iter->flags, pe_rsc_unique)) {
break;
}
iter = iter->parent;
}
}
/* If a managed resource is believed to be running, but node is down ... */
if (rsc->role > RSC_ROLE_STOPPED
&& node->details->online == FALSE
&& node->details->maintenance == FALSE
&& is_set(rsc->flags, pe_rsc_managed)) {
gboolean should_fence = FALSE;
/* If this is a guest node, fence it (regardless of whether fencing is
* enabled, because guest node fencing is done by recovery of the
* container resource rather than by stonithd). Mark the resource
* we're processing as failed. When the guest comes back up, its
* operation history in the CIB will be cleared, freeing the affected
* resource to run again once we are sure we know its state.
*/
if (is_container_remote_node(node)) {
set_bit(rsc->flags, pe_rsc_failed);
should_fence = TRUE;
} else if (is_set(data_set->flags, pe_flag_stonith_enabled)) {
if (is_baremetal_remote_node(node) && node->details->remote_rsc
&& is_not_set(node->details->remote_rsc->flags, pe_rsc_failed)) {
/* setting unseen = true means that fencing of the remote node will
* only occur if the connection resource is not going to start somewhere.
* This allows connection resources on a failed cluster-node to move to
* another node without requiring the baremetal remote nodes to be fenced
* as well. */
node->details->unseen = TRUE;
reason = crm_strdup_printf("%s is active there (fencing will be"
" revoked if remote connection can "
"be re-established elsewhere)",
rsc->id);
}
should_fence = TRUE;
}
if (should_fence) {
if (reason == NULL) {
reason = crm_strdup_printf("%s is thought to be active there", rsc->id);
}
pe_fence_node(data_set, node, reason);
}
free(reason);
}
if (node->details->unclean) {
/* No extra processing needed
* Also allows resources to be started again after a node is shot
*/
on_fail = action_fail_ignore;
}
switch (on_fail) {
case action_fail_ignore:
/* nothing to do */
break;
case action_fail_fence:
/* treat it as if it is still running
* but also mark the node as unclean
*/
reason = crm_strdup_printf("%s failed there", rsc->id);
pe_fence_node(data_set, node, reason);
free(reason);
break;
case action_fail_standby:
node->details->standby = TRUE;
node->details->standby_onfail = TRUE;
break;
case action_fail_block:
/* is_managed == FALSE will prevent any
* actions being sent for the resource
*/
clear_bit(rsc->flags, pe_rsc_managed);
set_bit(rsc->flags, pe_rsc_block);
break;
case action_fail_migrate:
/* make sure it comes up somewhere else
* or not at all
*/
resource_location(rsc, node, -INFINITY, "__action_migration_auto__", data_set);
break;
case action_fail_stop:
rsc->next_role = RSC_ROLE_STOPPED;
break;
case action_fail_recover:
if (rsc->role != RSC_ROLE_STOPPED && rsc->role != RSC_ROLE_UNKNOWN) {
set_bit(rsc->flags, pe_rsc_failed);
stop_action(rsc, node, FALSE);
}
break;
case action_fail_restart_container:
set_bit(rsc->flags, pe_rsc_failed);
if (rsc->container) {
stop_action(rsc->container, node, FALSE);
} else if (rsc->role != RSC_ROLE_STOPPED && rsc->role != RSC_ROLE_UNKNOWN) {
stop_action(rsc, node, FALSE);
}
break;
case action_fail_reset_remote:
set_bit(rsc->flags, pe_rsc_failed);
if (is_set(data_set->flags, pe_flag_stonith_enabled)) {
tmpnode = NULL;
if (rsc->is_remote_node) {
tmpnode = pe_find_node(data_set->nodes, rsc->id);
}
if (tmpnode &&
is_baremetal_remote_node(tmpnode) &&
tmpnode->details->remote_was_fenced == 0) {
/* The connection resource for the baremetal remote node failed in a
* way that should result in fencing the remote node. */
pe_fence_node(data_set, tmpnode,
"remote connection is unrecoverable");
}
}
- /* require the stop action regardless if fencing is occuring or not. */
+ /* require the stop action regardless if fencing is occurring or not. */
if (rsc->role > RSC_ROLE_STOPPED) {
stop_action(rsc, node, FALSE);
}
/* if reconnect delay is in use, prevent the connection from exiting the
* "STOPPED" role until the failure is cleared by the delay timeout. */
if (rsc->remote_reconnect_interval) {
rsc->next_role = RSC_ROLE_STOPPED;
}
break;
}
/* Ensure a remote-node connection failure forces an unclean remote node
* to be fenced. By setting unseen = FALSE, the remote-node failure will
* result in a fencing operation regardless of whether we're going to
* attempt to reconnect to the remote node in this transition. */
if (is_set(rsc->flags, pe_rsc_failed) && rsc->is_remote_node) {
tmpnode = pe_find_node(data_set->nodes, rsc->id);
if (tmpnode && tmpnode->details->unclean) {
tmpnode->details->unseen = FALSE;
}
}
if (rsc->role != RSC_ROLE_STOPPED && rsc->role != RSC_ROLE_UNKNOWN) {
if (is_set(rsc->flags, pe_rsc_orphan)) {
if (is_set(rsc->flags, pe_rsc_managed)) {
crm_config_warn("Detected active orphan %s running on %s",
rsc->id, node->details->uname);
} else {
crm_config_warn("Cluster configured not to stop active orphans."
" %s must be stopped manually on %s",
rsc->id, node->details->uname);
}
}
native_add_running(rsc, node, data_set);
if (on_fail != action_fail_ignore) {
set_bit(rsc->flags, pe_rsc_failed);
}
} else if (rsc->clone_name && strchr(rsc->clone_name, ':') != NULL) {
/* Only do this for older status sections that included instance numbers.
* Otherwise, stopped instances will appear as orphans.
*/
pe_rsc_trace(rsc, "Resetting clone_name %s for %s (stopped)", rsc->clone_name, rsc->id);
free(rsc->clone_name);
rsc->clone_name = NULL;
} else {
char *key = stop_key(rsc);
GListPtr possible_matches = find_actions(rsc->actions, key, node);
GListPtr gIter = possible_matches;
for (; gIter != NULL; gIter = gIter->next) {
action_t *stop = (action_t *) gIter->data;
stop->flags |= pe_action_optional;
}
g_list_free(possible_matches);
free(key);
}
}
/* create active recurring operations as optional */
static void
process_recurring(node_t * node, resource_t * rsc,
int start_index, int stop_index,
GListPtr sorted_op_list, pe_working_set_t * data_set)
{
int counter = -1;
const char *task = NULL;
const char *status = NULL;
GListPtr gIter = sorted_op_list;
CRM_ASSERT(rsc);
pe_rsc_trace(rsc, "%s: Start index %d, stop index = %d", rsc->id, start_index, stop_index);
for (; gIter != NULL; gIter = gIter->next) {
xmlNode *rsc_op = (xmlNode *) gIter->data;
int interval = 0;
char *key = NULL;
const char *id = ID(rsc_op);
const char *interval_s = NULL;
counter++;
if (node->details->online == FALSE) {
pe_rsc_trace(rsc, "Skipping %s/%s: node is offline", rsc->id, node->details->uname);
break;
/* Need to check if there's a monitor for role="Stopped" */
} else if (start_index < stop_index && counter <= stop_index) {
pe_rsc_trace(rsc, "Skipping %s/%s: resource is not active", id, node->details->uname);
continue;
} else if (counter < start_index) {
pe_rsc_trace(rsc, "Skipping %s/%s: old %d", id, node->details->uname, counter);
continue;
}
interval_s = crm_element_value(rsc_op, XML_LRM_ATTR_INTERVAL);
interval = crm_parse_int(interval_s, "0");
if (interval == 0) {
pe_rsc_trace(rsc, "Skipping %s/%s: non-recurring", id, node->details->uname);
continue;
}
status = crm_element_value(rsc_op, XML_LRM_ATTR_OPSTATUS);
if (safe_str_eq(status, "-1")) {
pe_rsc_trace(rsc, "Skipping %s/%s: status", id, node->details->uname);
continue;
}
task = crm_element_value(rsc_op, XML_LRM_ATTR_TASK);
/* create the action */
key = generate_op_key(rsc->id, task, interval);
pe_rsc_trace(rsc, "Creating %s/%s", key, node->details->uname);
custom_action(rsc, key, task, node, TRUE, TRUE, data_set);
}
}
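/* Given a resource's operation history sorted by call ID, determine the
* indices of the most recent successful stop and the most recent start (or
* an operation implying the resource is active), so callers can tell which
* entries reflect the resource's current activity.
*/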
void
calculate_active_ops(GListPtr sorted_op_list, int *start_index, int *stop_index)
{
int counter = -1;
int implied_monitor_start = -1;
int implied_master_start = -1;
const char *task = NULL;
const char *status = NULL;
GListPtr gIter = sorted_op_list;
*stop_index = -1;
*start_index = -1;
for (; gIter != NULL; gIter = gIter->next) {
xmlNode *rsc_op = (xmlNode *) gIter->data;
counter++;
task = crm_element_value(rsc_op, XML_LRM_ATTR_TASK);
status = crm_element_value(rsc_op, XML_LRM_ATTR_OPSTATUS);
if (safe_str_eq(task, CRMD_ACTION_STOP)
&& safe_str_eq(status, "0")) {
*stop_index = counter;
} else if (safe_str_eq(task, CRMD_ACTION_START) || safe_str_eq(task, CRMD_ACTION_MIGRATED)) {
*start_index = counter;
} else if ((implied_monitor_start <= *stop_index) && safe_str_eq(task, CRMD_ACTION_STATUS)) {
const char *rc = crm_element_value(rsc_op, XML_LRM_ATTR_RC);
if (safe_str_eq(rc, "0") || safe_str_eq(rc, "8")) {
implied_monitor_start = counter;
}
} else if (safe_str_eq(task, CRMD_ACTION_PROMOTE) || safe_str_eq(task, CRMD_ACTION_DEMOTE)) {
implied_master_start = counter;
}
}
if (*start_index == -1) {
if (implied_master_start != -1) {
*start_index = implied_master_start;
} else if (implied_monitor_start != -1) {
*start_index = implied_monitor_start;
}
}
}
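/* Unpack a single lrm_resource history entry for a node: find (or create)
* the resource, replay its operations in call-ID order to derive its role
* and failure handling, and reconcile the result with any requested
* target role.
*/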
static resource_t *
unpack_lrm_rsc_state(node_t * node, xmlNode * rsc_entry, pe_working_set_t * data_set)
{
GListPtr gIter = NULL;
int stop_index = -1;
int start_index = -1;
enum rsc_role_e req_role = RSC_ROLE_UNKNOWN;
const char *task = NULL;
const char *rsc_id = crm_element_value(rsc_entry, XML_ATTR_ID);
resource_t *rsc = NULL;
GListPtr op_list = NULL;
GListPtr sorted_op_list = NULL;
xmlNode *migrate_op = NULL;
xmlNode *rsc_op = NULL;
xmlNode *last_failure = NULL;
enum action_fail_response on_fail = FALSE;
enum rsc_role_e saved_role = RSC_ROLE_UNKNOWN;
crm_trace("[%s] Processing %s on %s",
crm_element_name(rsc_entry), rsc_id, node->details->uname);
/* extract operations */
op_list = NULL;
sorted_op_list = NULL;
for (rsc_op = __xml_first_child(rsc_entry); rsc_op != NULL; rsc_op = __xml_next_element(rsc_op)) {
if (crm_str_eq((const char *)rsc_op->name, XML_LRM_TAG_RSC_OP, TRUE)) {
op_list = g_list_prepend(op_list, rsc_op);
}
}
if (op_list == NULL) {
/* if there are no operations, there is nothing to do */
return NULL;
}
/* find the resource */
rsc = unpack_find_resource(data_set, node, rsc_id, rsc_entry);
if (rsc == NULL) {
rsc = process_orphan_resource(rsc_entry, node, data_set);
}
CRM_ASSERT(rsc != NULL);
/* process operations */
saved_role = rsc->role;
on_fail = action_fail_ignore;
rsc->role = RSC_ROLE_UNKNOWN;
sorted_op_list = g_list_sort(op_list, sort_op_by_callid);
for (gIter = sorted_op_list; gIter != NULL; gIter = gIter->next) {
xmlNode *rsc_op = (xmlNode *) gIter->data;
task = crm_element_value(rsc_op, XML_LRM_ATTR_TASK);
if (safe_str_eq(task, CRMD_ACTION_MIGRATED)) {
migrate_op = rsc_op;
}
unpack_rsc_op(rsc, node, rsc_op, &last_failure, &on_fail, data_set);
}
/* create active recurring operations as optional */
calculate_active_ops(sorted_op_list, &start_index, &stop_index);
process_recurring(node, rsc, start_index, stop_index, sorted_op_list, data_set);
/* no need to free the contents */
g_list_free(sorted_op_list);
process_rsc_state(rsc, node, on_fail, migrate_op, data_set);
if (get_target_role(rsc, &req_role)) {
if (rsc->next_role == RSC_ROLE_UNKNOWN || req_role < rsc->next_role) {
pe_rsc_debug(rsc, "%s: Overwriting calculated next role %s"
" with requested next role %s",
rsc->id, role2text(rsc->next_role), role2text(req_role));
rsc->next_role = req_role;
} else if (req_role > rsc->next_role) {
pe_rsc_info(rsc, "%s: Not overwriting calculated next role %s"
" with requested next role %s",
rsc->id, role2text(rsc->next_role), role2text(req_role));
}
}
if (saved_role > rsc->role) {
rsc->role = saved_role;
}
return rsc;
}
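/* After all resources on a node have been unpacked, map any orphaned
* resources that declared a container to their container resource.
*/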
static void
handle_orphaned_container_fillers(xmlNode * lrm_rsc_list, pe_working_set_t * data_set)
{
xmlNode *rsc_entry = NULL;
for (rsc_entry = __xml_first_child(lrm_rsc_list); rsc_entry != NULL;
rsc_entry = __xml_next_element(rsc_entry)) {
resource_t *rsc;
resource_t *container;
const char *rsc_id;
const char *container_id;
if (safe_str_neq((const char *)rsc_entry->name, XML_LRM_TAG_RESOURCE)) {
continue;
}
container_id = crm_element_value(rsc_entry, XML_RSC_ATTR_CONTAINER);
rsc_id = crm_element_value(rsc_entry, XML_ATTR_ID);
if (container_id == NULL || rsc_id == NULL) {
continue;
}
container = pe_find_resource(data_set->resources, container_id);
if (container == NULL) {
continue;
}
rsc = pe_find_resource(data_set->resources, rsc_id);
if (rsc == NULL ||
is_set(rsc->flags, pe_rsc_orphan_container_filler) == FALSE ||
rsc->container != NULL) {
continue;
}
pe_rsc_trace(rsc, "Mapped orphaned rsc %s's container to %s", rsc->id, container_id);
rsc->container = container;
container->fillers = g_list_append(container->fillers, rsc);
}
}
gboolean
unpack_lrm_resources(node_t * node, xmlNode * lrm_rsc_list, pe_working_set_t * data_set)
{
xmlNode *rsc_entry = NULL;
gboolean found_orphaned_container_filler = FALSE;
GListPtr unexpected_containers = NULL;
GListPtr gIter = NULL;
resource_t *remote = NULL;
CRM_CHECK(node != NULL, return FALSE);
crm_trace("Unpacking resources on %s", node->details->uname);
for (rsc_entry = __xml_first_child(lrm_rsc_list); rsc_entry != NULL;
rsc_entry = __xml_next_element(rsc_entry)) {
if (crm_str_eq((const char *)rsc_entry->name, XML_LRM_TAG_RESOURCE, TRUE)) {
resource_t *rsc;
rsc = unpack_lrm_rsc_state(node, rsc_entry, data_set);
if (!rsc) {
continue;
}
if (is_set(rsc->flags, pe_rsc_orphan_container_filler)) {
found_orphaned_container_filler = TRUE;
}
if (is_set(rsc->flags, pe_rsc_unexpectedly_running)) {
remote = rsc_contains_remote_node(data_set, rsc);
if (remote) {
unexpected_containers = g_list_append(unexpected_containers, remote);
}
}
}
}
/* If a container resource is unexpectedly up... and the remote-node
* connection resource for that container is not up, the entire container
* must be recovered. */
for (gIter = unexpected_containers; gIter != NULL; gIter = gIter->next) {
remote = (resource_t *) gIter->data;
if (remote->role != RSC_ROLE_STARTED) {
crm_warn("Recovering container resource %s. Resource is unexpectedly running and involves a remote-node.", remote->container->id);
set_bit(remote->container->flags, pe_rsc_failed);
}
}
/* now that all the resource state has been unpacked for this node
* we have to go back and map any orphaned container fillers to their
* container resource */
if (found_orphaned_container_filler) {
handle_orphaned_container_fillers(lrm_rsc_list, data_set);
}
g_list_free(unexpected_containers);
return TRUE;
}
static void
set_active(resource_t * rsc)
{
resource_t *top = uber_parent(rsc);
if (top && top->variant == pe_master) {
rsc->role = RSC_ROLE_SLAVE;
} else {
rsc->role = RSC_ROLE_STARTED;
}
}
static void
set_node_score(gpointer key, gpointer value, gpointer user_data)
{
node_t *node = value;
int *score = user_data;
node->weight = *score;
}
#define STATUS_PATH_MAX 1024
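/* Build an XPath query for a specific operation history entry on a node
* (optionally matching a migration source or target) and return the first
* match from the CIB status section.
*/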
static xmlNode *
find_lrm_op(const char *resource, const char *op, const char *node, const char *source,
pe_working_set_t * data_set)
{
int offset = 0;
char xpath[STATUS_PATH_MAX];
offset += snprintf(xpath + offset, STATUS_PATH_MAX - offset, "//node_state[@uname='%s']", node);
offset +=
snprintf(xpath + offset, STATUS_PATH_MAX - offset, "//" XML_LRM_TAG_RESOURCE "[@id='%s']",
resource);
/* Need to check against transition_magic too? */
if (source && safe_str_eq(op, CRMD_ACTION_MIGRATE)) {
offset +=
snprintf(xpath + offset, STATUS_PATH_MAX - offset,
"/" XML_LRM_TAG_RSC_OP "[@operation='%s' and @migrate_target='%s']", op,
source);
} else if (source && safe_str_eq(op, CRMD_ACTION_MIGRATED)) {
offset +=
snprintf(xpath + offset, STATUS_PATH_MAX - offset,
"/" XML_LRM_TAG_RSC_OP "[@operation='%s' and @migrate_source='%s']", op,
source);
} else {
offset +=
snprintf(xpath + offset, STATUS_PATH_MAX - offset,
"/" XML_LRM_TAG_RSC_OP "[@operation='%s']", op);
}
CRM_LOG_ASSERT(offset > 0);
return get_xpath_object(xpath, data_set->input, LOG_DEBUG);
}
static void
unpack_rsc_migration(resource_t *rsc, node_t *node, xmlNode *xml_op, pe_working_set_t * data_set)
{
/*
* The normal sequence is (now): migrate_to(Src) -> migrate_from(Tgt) -> stop(Src)
*
* So if a migrate_to is followed by a stop, then we don't need to care what
* happened on the target node
*
* Without the stop, we need to look for a successful migrate_from.
* This would also imply we're no longer running on the source
*
* Without the stop, and without a migrate_from op we make sure the resource
* gets stopped on both source and target (assuming the target is up)
*
*/
int stop_id = 0;
int task_id = 0;
xmlNode *stop_op =
find_lrm_op(rsc->id, CRMD_ACTION_STOP, node->details->id, NULL, data_set);
if (stop_op) {
crm_element_value_int(stop_op, XML_LRM_ATTR_CALLID, &stop_id);
}
crm_element_value_int(xml_op, XML_LRM_ATTR_CALLID, &task_id);
if (stop_op == NULL || stop_id < task_id) {
int from_rc = 0, from_status = 0;
const char *migrate_source =
crm_element_value(xml_op, XML_LRM_ATTR_MIGRATE_SOURCE);
const char *migrate_target =
crm_element_value(xml_op, XML_LRM_ATTR_MIGRATE_TARGET);
node_t *target = pe_find_node(data_set->nodes, migrate_target);
node_t *source = pe_find_node(data_set->nodes, migrate_source);
xmlNode *migrate_from =
find_lrm_op(rsc->id, CRMD_ACTION_MIGRATED, migrate_target, migrate_source,
data_set);
rsc->role = RSC_ROLE_STARTED; /* can be master? */
if (migrate_from) {
crm_element_value_int(migrate_from, XML_LRM_ATTR_RC, &from_rc);
crm_element_value_int(migrate_from, XML_LRM_ATTR_OPSTATUS, &from_status);
pe_rsc_trace(rsc, "%s op on %s exited with status=%d, rc=%d",
ID(migrate_from), migrate_target, from_status, from_rc);
}
if (migrate_from && from_rc == PCMK_OCF_OK
&& from_status == PCMK_LRM_OP_DONE) {
pe_rsc_trace(rsc, "Detected dangling migration op: %s on %s", ID(xml_op),
migrate_source);
/* all good
* just need to arrange for the stop action to get sent
* but _without_ affecting the target somehow
*/
rsc->role = RSC_ROLE_STOPPED;
rsc->dangling_migrations = g_list_prepend(rsc->dangling_migrations, node);
} else if (migrate_from) { /* Failed */
if (target && target->details->online) {
pe_rsc_trace(rsc, "Marking active on %s %p %d", migrate_target, target,
target->details->online);
native_add_running(rsc, target, data_set);
}
} else { /* Pending or complete but erased */
if (target && target->details->online) {
pe_rsc_trace(rsc, "Marking active on %s %p %d", migrate_target, target,
target->details->online);
native_add_running(rsc, target, data_set);
if (source && source->details->online) {
/* If we make it here we have a partial migration. The migrate_to
* has completed but the migrate_from on the target has not. Hold on
* to the target and source on the resource. Later on if we detect that
* the resource is still going to run on that target, we may continue
* the migration */
rsc->partial_migration_target = target;
rsc->partial_migration_source = source;
}
} else {
/* Consider it failed here - forces a restart, prevents migration */
set_bit(rsc->flags, pe_rsc_failed);
clear_bit(rsc->flags, pe_rsc_allow_migrate);
}
}
}
}
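/* Reconstruct where a resource may still be active after a failed
* migrate_to or migrate_from, based on the relative order of the migration
* and stop operations recorded in the history.
*/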
static void
unpack_rsc_migration_failure(resource_t *rsc, node_t *node, xmlNode *xml_op, pe_working_set_t * data_set)
{
const char *task = crm_element_value(xml_op, XML_LRM_ATTR_TASK);
CRM_ASSERT(rsc);
if (safe_str_eq(task, CRMD_ACTION_MIGRATED)) {
int stop_id = 0;
int migrate_id = 0;
const char *migrate_source = crm_element_value(xml_op, XML_LRM_ATTR_MIGRATE_SOURCE);
const char *migrate_target = crm_element_value(xml_op, XML_LRM_ATTR_MIGRATE_TARGET);
xmlNode *stop_op =
find_lrm_op(rsc->id, CRMD_ACTION_STOP, migrate_source, NULL, data_set);
xmlNode *migrate_op =
find_lrm_op(rsc->id, CRMD_ACTION_MIGRATE, migrate_source, migrate_target,
data_set);
if (stop_op) {
crm_element_value_int(stop_op, XML_LRM_ATTR_CALLID, &stop_id);
}
if (migrate_op) {
crm_element_value_int(migrate_op, XML_LRM_ATTR_CALLID, &migrate_id);
}
/* Get our state right */
rsc->role = RSC_ROLE_STARTED; /* can be master? */
if (stop_op == NULL || stop_id < migrate_id) {
node_t *source = pe_find_node(data_set->nodes, migrate_source);
if (source && source->details->online) {
native_add_running(rsc, source, data_set);
}
}
} else if (safe_str_eq(task, CRMD_ACTION_MIGRATE)) {
int stop_id = 0;
int migrate_id = 0;
const char *migrate_source = crm_element_value(xml_op, XML_LRM_ATTR_MIGRATE_SOURCE);
const char *migrate_target = crm_element_value(xml_op, XML_LRM_ATTR_MIGRATE_TARGET);
xmlNode *stop_op =
find_lrm_op(rsc->id, CRMD_ACTION_STOP, migrate_target, NULL, data_set);
xmlNode *migrate_op =
find_lrm_op(rsc->id, CRMD_ACTION_MIGRATED, migrate_target, migrate_source,
data_set);
if (stop_op) {
crm_element_value_int(stop_op, XML_LRM_ATTR_CALLID, &stop_id);
}
if (migrate_op) {
crm_element_value_int(migrate_op, XML_LRM_ATTR_CALLID, &migrate_id);
}
/* Get our state right */
rsc->role = RSC_ROLE_STARTED; /* can be master? */
if (stop_op == NULL || stop_id < migrate_id) {
node_t *target = pe_find_node(data_set->nodes, migrate_target);
pe_rsc_trace(rsc, "Stop: %p %d, Migrated: %p %d", stop_op, stop_id, migrate_op,
migrate_id);
if (target && target->details->online) {
native_add_running(rsc, target, data_set);
}
} else if (migrate_op == NULL) {
/* Make sure it gets cleaned up; the stop may pre-date the migrate_from */
rsc->dangling_migrations = g_list_prepend(rsc->dangling_migrations, node);
}
}
}
static void
record_failed_op(xmlNode *op, node_t* node, pe_working_set_t * data_set)
{
xmlNode *xIter = NULL;
const char *op_key = crm_element_value(op, XML_LRM_ATTR_TASK_KEY);
if (node->details->online == FALSE) {
return;
}
for (xIter = data_set->failed->children; xIter; xIter = xIter->next) {
const char *key = crm_element_value(xIter, XML_LRM_ATTR_TASK_KEY);
const char *uname = crm_element_value(xIter, XML_ATTR_UNAME);
if(safe_str_eq(op_key, key) && safe_str_eq(uname, node->details->uname)) {
crm_trace("Skipping duplicate entry %s on %s", op_key, node->details->uname);
return;
}
}
crm_trace("Adding entry %s on %s", op_key, node->details->uname);
crm_xml_add(op, XML_ATTR_UNAME, node->details->uname);
add_node_copy(data_set->failed, op);
}
static const char *get_op_key(xmlNode *xml_op)
{
const char *key = crm_element_value(xml_op, XML_LRM_ATTR_TASK_KEY);
if(key == NULL) {
key = ID(xml_op);
}
return key;
}
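/* Process a failed operation: record it, update the effective on-fail
* policy, adjust the resource's role accordingly, and if the failure policy
* demands it, ban the resource (or its clone parent) from being started
* again.
*/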
static void
unpack_rsc_op_failure(resource_t * rsc, node_t * node, int rc, xmlNode * xml_op, xmlNode ** last_failure,
enum action_fail_response * on_fail, pe_working_set_t * data_set)
{
int interval = 0;
bool is_probe = FALSE;
action_t *action = NULL;
const char *key = get_op_key(xml_op);
const char *task = crm_element_value(xml_op, XML_LRM_ATTR_TASK);
const char *op_version = crm_element_value(xml_op, XML_ATTR_CRM_VERSION);
CRM_ASSERT(rsc);
*last_failure = xml_op;
crm_element_value_int(xml_op, XML_LRM_ATTR_INTERVAL, &interval);
if(interval == 0 && safe_str_eq(task, CRMD_ACTION_STATUS)) {
is_probe = TRUE;
pe_rsc_trace(rsc, "is a probe: %s", key);
}
if (rc != PCMK_OCF_NOT_INSTALLED || is_set(data_set->flags, pe_flag_symmetric_cluster)) {
crm_warn("Processing failed op %s for %s on %s: %s (%d)",
task, rsc->id, node->details->uname, services_ocf_exitcode_str(rc),
rc);
record_failed_op(xml_op, node, data_set);
} else {
crm_trace("Processing failed op %s for %s on %s: %s (%d)",
task, rsc->id, node->details->uname, services_ocf_exitcode_str(rc),
rc);
}
action = custom_action(rsc, strdup(key), task, NULL, TRUE, FALSE, data_set);
if ((action->on_fail <= action_fail_fence && *on_fail < action->on_fail) ||
(action->on_fail == action_fail_reset_remote && *on_fail <= action_fail_recover) ||
(action->on_fail == action_fail_restart_container && *on_fail <= action_fail_recover) ||
(*on_fail == action_fail_restart_container && action->on_fail >= action_fail_migrate)) {
pe_rsc_trace(rsc, "on-fail %s -> %s for %s (%s)", fail2text(*on_fail),
fail2text(action->on_fail), action->uuid, key);
*on_fail = action->on_fail;
}
if (safe_str_eq(task, CRMD_ACTION_STOP)) {
resource_location(rsc, node, -INFINITY, "__stop_fail__", data_set);
} else if (safe_str_eq(task, CRMD_ACTION_MIGRATE) || safe_str_eq(task, CRMD_ACTION_MIGRATED)) {
unpack_rsc_migration_failure(rsc, node, xml_op, data_set);
} else if (safe_str_eq(task, CRMD_ACTION_PROMOTE)) {
rsc->role = RSC_ROLE_MASTER;
} else if (safe_str_eq(task, CRMD_ACTION_DEMOTE)) {
/*
* Staying in role=master ends up putting the PE/TE into a loop.
* Setting role=slave is not dangerous because no master will be
* promoted until the failed resource has been fully stopped.
*/
if (action->on_fail == action_fail_block) {
rsc->role = RSC_ROLE_MASTER;
rsc->next_role = RSC_ROLE_STOPPED;
} else if(rc == PCMK_OCF_NOT_RUNNING) {
rsc->role = RSC_ROLE_STOPPED;
} else {
crm_warn("Forcing %s to stop after a failed demote action", rsc->id);
rsc->role = RSC_ROLE_SLAVE;
rsc->next_role = RSC_ROLE_STOPPED;
}
} else if (compare_version("2.0", op_version) > 0 && safe_str_eq(task, CRMD_ACTION_START)) {
crm_warn("Compatibility handling for failed op %s on %s", key, node->details->uname);
resource_location(rsc, node, -INFINITY, "__legacy_start__", data_set);
}
if(is_probe && rc == PCMK_OCF_NOT_INSTALLED) {
/* leave stopped */
pe_rsc_trace(rsc, "Leaving %s stopped", rsc->id);
rsc->role = RSC_ROLE_STOPPED;
} else if (rsc->role < RSC_ROLE_STARTED) {
pe_rsc_trace(rsc, "Setting %s active", rsc->id);
set_active(rsc);
}
pe_rsc_trace(rsc, "Resource %s: role=%s, unclean=%s, on_fail=%s, fail_role=%s",
rsc->id, role2text(rsc->role),
node->details->unclean ? "true" : "false",
fail2text(action->on_fail), role2text(action->fail_role));
if (action->fail_role != RSC_ROLE_STARTED && rsc->next_role < action->fail_role) {
rsc->next_role = action->fail_role;
}
if (action->fail_role == RSC_ROLE_STOPPED) {
int score = -INFINITY;
resource_t *fail_rsc = rsc;
if (fail_rsc->parent) {
resource_t *parent = uber_parent(fail_rsc);
if (pe_rsc_is_clone(parent)
&& is_not_set(parent->flags, pe_rsc_unique)) {
/* for clone and master resources, if a child fails on an operation
* with on-fail = stop, all the resources fail. Do this by preventing
* the parent from coming up again. */
fail_rsc = parent;
}
}
crm_warn("Making sure %s doesn't come up again", fail_rsc->id);
/* make sure it doesn't come up again */
g_hash_table_destroy(fail_rsc->allowed_nodes);
fail_rsc->allowed_nodes = node_hash_from_list(data_set->nodes);
g_hash_table_foreach(fail_rsc->allowed_nodes, set_node_score, &score);
}
pe_free_action(action);
}
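/* Map an operation's OCF return code to an execution status (PCMK_LRM_OP_*),
* taking probes, the expected return code, and legacy behavior into account,
* and update the resource's role where the return code implies one.
*/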
static int
determine_op_status(
resource_t *rsc, int rc, int target_rc, node_t * node, xmlNode * xml_op, enum action_fail_response * on_fail, pe_working_set_t * data_set)
{
int interval = 0;
int result = PCMK_LRM_OP_DONE;
const char *key = get_op_key(xml_op);
const char *task = crm_element_value(xml_op, XML_LRM_ATTR_TASK);
bool is_probe = FALSE;
CRM_ASSERT(rsc);
crm_element_value_int(xml_op, XML_LRM_ATTR_INTERVAL, &interval);
if (interval == 0 && safe_str_eq(task, CRMD_ACTION_STATUS)) {
is_probe = TRUE;
}
if (target_rc >= 0 && target_rc != rc) {
result = PCMK_LRM_OP_ERROR;
pe_rsc_debug(rsc, "%s on %s returned '%s' (%d) instead of the expected value: '%s' (%d)",
key, node->details->uname,
services_ocf_exitcode_str(rc), rc,
services_ocf_exitcode_str(target_rc), target_rc);
}
/* we could clean this up significantly except for old LRMs and CRMs that
* didn't include target_rc and liked to remap status
*/
switch (rc) {
case PCMK_OCF_OK:
if (is_probe && target_rc == 7) {
result = PCMK_LRM_OP_DONE;
set_bit(rsc->flags, pe_rsc_unexpectedly_running);
pe_rsc_info(rsc, "Operation %s found resource %s active on %s",
task, rsc->id, node->details->uname);
/* legacy code for pre-0.6.5 operations */
} else if (target_rc < 0 && interval > 0 && rsc->role == RSC_ROLE_MASTER) {
/* catch status ops that return 0 instead of 8 while they
* are supposed to be in master mode
*/
result = PCMK_LRM_OP_ERROR;
}
break;
case PCMK_OCF_NOT_RUNNING:
if (is_probe || target_rc == rc || is_not_set(rsc->flags, pe_rsc_managed)) {
result = PCMK_LRM_OP_DONE;
rsc->role = RSC_ROLE_STOPPED;
/* clear any previous failure actions */
*on_fail = action_fail_ignore;
rsc->next_role = RSC_ROLE_UNKNOWN;
} else if (safe_str_neq(task, CRMD_ACTION_STOP)) {
result = PCMK_LRM_OP_ERROR;
}
break;
case PCMK_OCF_RUNNING_MASTER:
if (is_probe) {
result = PCMK_LRM_OP_DONE;
pe_rsc_info(rsc, "Operation %s found resource %s active in master mode on %s",
task, rsc->id, node->details->uname);
} else if (target_rc == rc) {
/* nothing to do */
} else if (target_rc >= 0) {
result = PCMK_LRM_OP_ERROR;
/* legacy code for pre-0.6.5 operations */
} else if (safe_str_neq(task, CRMD_ACTION_STATUS)
|| rsc->role != RSC_ROLE_MASTER) {
result = PCMK_LRM_OP_ERROR;
if (rsc->role != RSC_ROLE_MASTER) {
crm_err("%s reported %s in master mode on %s",
key, rsc->id, node->details->uname);
}
}
rsc->role = RSC_ROLE_MASTER;
break;
case PCMK_OCF_DEGRADED_MASTER:
case PCMK_OCF_FAILED_MASTER:
rsc->role = RSC_ROLE_MASTER;
result = PCMK_LRM_OP_ERROR;
break;
case PCMK_OCF_NOT_CONFIGURED:
result = PCMK_LRM_OP_ERROR_FATAL;
break;
case PCMK_OCF_NOT_INSTALLED:
case PCMK_OCF_INVALID_PARAM:
case PCMK_OCF_INSUFFICIENT_PRIV:
case PCMK_OCF_UNIMPLEMENT_FEATURE:
if (rc == PCMK_OCF_UNIMPLEMENT_FEATURE && interval > 0) {
result = PCMK_LRM_OP_NOTSUPPORTED;
break;
} else if (pe_can_fence(data_set, node) == FALSE
&& safe_str_eq(task, CRMD_ACTION_STOP)) {
/* If a stop fails and we can't fence, there's nothing else we can do */
pe_proc_err("No further recovery can be attempted for %s: %s action failed with '%s' (%d)",
rsc->id, task, services_ocf_exitcode_str(rc), rc);
clear_bit(rsc->flags, pe_rsc_managed);
set_bit(rsc->flags, pe_rsc_block);
}
result = PCMK_LRM_OP_ERROR_HARD;
break;
default:
if (result == PCMK_LRM_OP_DONE) {
crm_info("Treating %s (rc=%d) on %s as an ERROR",
key, rc, node->details->uname);
result = PCMK_LRM_OP_ERROR;
}
}
return result;
}
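/* Check whether a failed operation has expired per failure-timeout (and so
* should no longer influence placement), scheduling a fail-count clearing
* action when a reason to clear the failure is found. Probes that returned
* an informative result are never treated as expired.
*/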
static bool check_operation_expiry(resource_t *rsc, node_t *node, int rc, xmlNode *xml_op, pe_working_set_t * data_set)
{
bool expired = FALSE;
time_t last_failure = 0;
int interval = 0;
int failure_timeout = rsc->failure_timeout;
const char *key = get_op_key(xml_op);
const char *task = crm_element_value(xml_op, XML_LRM_ATTR_TASK);
const char *clear_reason = NULL;
/* clearing recurring monitor operation failures automatically
* needs to be carefully considered */
if (safe_str_eq(crm_element_value(xml_op, XML_LRM_ATTR_TASK), "monitor") &&
safe_str_neq(crm_element_value(xml_op, XML_LRM_ATTR_INTERVAL), "0")) {
/* TODO: in the future, we should consider not clearing recurring monitor
* op failures unless the last action for a resource was a "stop" action.
* Otherwise it is possible that clearing the monitor failure will leave
* the resource in a nondeterministic state.
*
* For now we handle this potentially nondeterministic condition for remote
* node connection resources by not clearing a recurring monitor op failure
* until after the node has been fenced. */
if (is_set(data_set->flags, pe_flag_stonith_enabled) &&
(rsc->remote_reconnect_interval)) {
node_t *remote_node = pe_find_node(data_set->nodes, rsc->id);
if (remote_node && remote_node->details->remote_was_fenced == 0) {
if (strstr(ID(xml_op), "last_failure")) {
crm_info("Waiting to clear monitor failure for remote node %s until fencing has occurred", rsc->id);
}
/* disabling failure timeout for this operation because we believe
* fencing of the remote node should occur first. */
failure_timeout = 0;
}
}
}
if (failure_timeout > 0) {
int last_run = 0;
if (crm_element_value_int(xml_op, XML_RSC_OP_LAST_CHANGE, &last_run) == 0) {
time_t now = get_effective_time(data_set);
if (now > (last_run + failure_timeout)) {
expired = TRUE;
}
}
}
if (expired) {
if (failure_timeout > 0) {
int fc = get_failcount_full(node, rsc, &last_failure, FALSE, xml_op, data_set);
if(fc) {
if (get_failcount_full(node, rsc, &last_failure, TRUE, xml_op, data_set) == 0) {
clear_reason = "it expired";
} else {
expired = FALSE;
}
} else if (rsc->remote_reconnect_interval && strstr(ID(xml_op), "last_failure")) {
/* always clear last failure when reconnect interval is set */
clear_reason = "reconnect interval is set";
}
}
} else if (strstr(ID(xml_op), "last_failure") &&
((strcmp(task, "start") == 0) || (strcmp(task, "monitor") == 0))) {
op_digest_cache_t *digest_data = NULL;
digest_data = rsc_action_digest_cmp(rsc, xml_op, node, data_set);
if (digest_data->rc == RSC_DIGEST_UNKNOWN) {
crm_trace("rsc op %s/%s on node %s does not have a op digest to compare against", rsc->id,
key, node->details->id);
} else if (digest_data->rc != RSC_DIGEST_MATCH) {
clear_reason = "resource parameters have changed";
}
}
if (clear_reason != NULL) {
char *key = generate_op_key(rsc->id, CRM_OP_CLEAR_FAILCOUNT, 0);
action_t *clear_op = custom_action(rsc, key, CRM_OP_CLEAR_FAILCOUNT,
node, FALSE, TRUE, data_set);
add_hash_param(clear_op->meta, XML_ATTR_TE_NOWAIT, XML_BOOLEAN_TRUE);
crm_notice("Clearing failure of %s on %s because %s " CRM_XS " %s",
rsc->id, node->details->uname, clear_reason, clear_op->uuid);
}
crm_element_value_int(xml_op, XML_LRM_ATTR_INTERVAL, &interval);
if(expired && interval == 0 && safe_str_eq(task, CRMD_ACTION_STATUS)) {
switch(rc) {
case PCMK_OCF_OK:
case PCMK_OCF_NOT_RUNNING:
case PCMK_OCF_RUNNING_MASTER:
case PCMK_OCF_DEGRADED:
case PCMK_OCF_DEGRADED_MASTER:
/* Don't expire probes that return these values */
expired = FALSE;
break;
}
}
return expired;
}
int get_target_rc(xmlNode *xml_op)
{
int dummy = 0;
int target_rc = 0;
char *dummy_string = NULL;
const char *key = crm_element_value(xml_op, XML_ATTR_TRANSITION_KEY);
if (key == NULL) {
return -1;
}
decode_transition_key(key, &dummy_string, &dummy, &dummy, &target_rc);
free(dummy_string);
return target_rc;
}
static enum action_fail_response
get_action_on_fail(resource_t *rsc, const char *key, const char *task, pe_working_set_t * data_set)
{
int result = action_fail_recover;
action_t *action = custom_action(rsc, strdup(key), task, NULL, TRUE, FALSE, data_set);
result = action->on_fail;
pe_free_action(action);
return result;
}
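/* Update a resource's role based on a successful operation, and clear any
* pending on-fail handling that the result renders moot.
*/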
static void
update_resource_state(resource_t * rsc, node_t * node, xmlNode * xml_op, const char * task, int rc,
xmlNode * last_failure, enum action_fail_response * on_fail, pe_working_set_t * data_set)
{
gboolean clear_past_failure = FALSE;
CRM_ASSERT(rsc);
CRM_ASSERT(xml_op);
if (rc == PCMK_OCF_NOT_RUNNING) {
clear_past_failure = TRUE;
} else if (rc == PCMK_OCF_NOT_INSTALLED) {
rsc->role = RSC_ROLE_STOPPED;
} else if (safe_str_eq(task, CRMD_ACTION_STATUS)) {
if (last_failure) {
const char *op_key = get_op_key(xml_op);
const char *last_failure_key = get_op_key(last_failure);
if (safe_str_eq(op_key, last_failure_key)) {
clear_past_failure = TRUE;
}
}
if (rsc->role < RSC_ROLE_STARTED) {
set_active(rsc);
}
} else if (safe_str_eq(task, CRMD_ACTION_START)) {
rsc->role = RSC_ROLE_STARTED;
clear_past_failure = TRUE;
} else if (safe_str_eq(task, CRMD_ACTION_STOP)) {
rsc->role = RSC_ROLE_STOPPED;
clear_past_failure = TRUE;
} else if (safe_str_eq(task, CRMD_ACTION_PROMOTE)) {
rsc->role = RSC_ROLE_MASTER;
clear_past_failure = TRUE;
} else if (safe_str_eq(task, CRMD_ACTION_DEMOTE)) {
/* Demote from Master does not clear an error */
rsc->role = RSC_ROLE_SLAVE;
} else if (safe_str_eq(task, CRMD_ACTION_MIGRATED)) {
rsc->role = RSC_ROLE_STARTED;
clear_past_failure = TRUE;
} else if (safe_str_eq(task, CRMD_ACTION_MIGRATE)) {
unpack_rsc_migration(rsc, node, xml_op, data_set);
} else if (rsc->role < RSC_ROLE_STARTED) {
pe_rsc_trace(rsc, "%s active on %s", rsc->id, node->details->uname);
set_active(rsc);
}
/* clear any previous failure actions */
if (clear_past_failure) {
switch (*on_fail) {
case action_fail_stop:
case action_fail_fence:
case action_fail_migrate:
case action_fail_standby:
pe_rsc_trace(rsc, "%s.%s is not cleared by a completed stop",
rsc->id, fail2text(*on_fail));
break;
case action_fail_block:
case action_fail_ignore:
case action_fail_recover:
case action_fail_restart_container:
*on_fail = action_fail_ignore;
rsc->next_role = RSC_ROLE_UNKNOWN;
break;
case action_fail_reset_remote:
if (rsc->remote_reconnect_interval == 0) {
/* When reconnect delay is not in use, the connection is allowed
* to start again after the remote node is fenced and completely
* stopped. Otherwise, with reconnect delay, we wait for the failure
* to be cleared entirely before a reconnect can be attempted. */
*on_fail = action_fail_ignore;
rsc->next_role = RSC_ROLE_UNKNOWN;
}
break;
}
}
}
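/* Unpack a single operation history entry: remap degraded and expired
* results, determine the execution status, and dispatch to the appropriate
* success, pending, or failure handling.
*/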
gboolean
unpack_rsc_op(resource_t * rsc, node_t * node, xmlNode * xml_op, xmlNode ** last_failure,
enum action_fail_response * on_fail, pe_working_set_t * data_set)
{
int task_id = 0;
const char *key = NULL;
const char *task = NULL;
const char *task_key = NULL;
int rc = 0;
int status = PCMK_LRM_OP_PENDING-1;
int target_rc = get_target_rc(xml_op);
int interval = 0;
gboolean expired = FALSE;
resource_t *parent = rsc;
enum action_fail_response failure_strategy = action_fail_recover;
CRM_CHECK(rsc != NULL, return FALSE);
CRM_CHECK(node != NULL, return FALSE);
CRM_CHECK(xml_op != NULL, return FALSE);
task_key = get_op_key(xml_op);
task = crm_element_value(xml_op, XML_LRM_ATTR_TASK);
key = crm_element_value(xml_op, XML_ATTR_TRANSITION_KEY);
crm_element_value_int(xml_op, XML_LRM_ATTR_RC, &rc);
crm_element_value_int(xml_op, XML_LRM_ATTR_CALLID, &task_id);
crm_element_value_int(xml_op, XML_LRM_ATTR_OPSTATUS, &status);
crm_element_value_int(xml_op, XML_LRM_ATTR_INTERVAL, &interval);
CRM_CHECK(task != NULL, return FALSE);
CRM_CHECK(status <= PCMK_LRM_OP_NOT_INSTALLED, return FALSE);
CRM_CHECK(status >= PCMK_LRM_OP_PENDING, return FALSE);
if (safe_str_eq(task, CRMD_ACTION_NOTIFY) ||
safe_str_eq(task, CRMD_ACTION_METADATA)) {
/* safe to ignore these */
return TRUE;
}
if (is_not_set(rsc->flags, pe_rsc_unique)) {
parent = uber_parent(rsc);
}
pe_rsc_trace(rsc, "Unpacking task %s/%s (call_id=%d, status=%d, rc=%d) on %s (role=%s)",
task_key, task, task_id, status, rc, node->details->uname, role2text(rsc->role));
if (node->details->unclean) {
pe_rsc_trace(rsc, "Node %s (where %s is running) is unclean."
" Further action depends on the value of the stop's on-fail attribute",
node->details->uname, rsc->id);
}
if (status == PCMK_LRM_OP_ERROR) {
/* Older versions set this if rc != 0 but it's up to us to decide */
status = PCMK_LRM_OP_DONE;
}
if(status != PCMK_LRM_OP_NOT_INSTALLED) {
expired = check_operation_expiry(rsc, node, rc, xml_op, data_set);
}
/* Degraded results are informational only, re-map them to their error-free equivalents */
if (rc == PCMK_OCF_DEGRADED && safe_str_eq(task, CRMD_ACTION_STATUS)) {
rc = PCMK_OCF_OK;
/* Add them to the failed list to highlight them for the user */
if ((node->details->shutdown == FALSE) || (node->details->online == TRUE)) {
crm_trace("Remapping %d to %d", PCMK_OCF_DEGRADED, PCMK_OCF_OK);
record_failed_op(xml_op, node, data_set);
}
} else if (rc == PCMK_OCF_DEGRADED_MASTER && safe_str_eq(task, CRMD_ACTION_STATUS)) {
rc = PCMK_OCF_RUNNING_MASTER;
/* Add them to the failed list to highlight them for the user */
if ((node->details->shutdown == FALSE) || (node->details->online == TRUE)) {
crm_trace("Remapping %d to %d", PCMK_OCF_DEGRADED_MASTER, PCMK_OCF_RUNNING_MASTER);
record_failed_op(xml_op, node, data_set);
}
}
if (expired && target_rc != rc) {
const char *magic = crm_element_value(xml_op, XML_ATTR_TRANSITION_MAGIC);
pe_rsc_debug(rsc, "Expired operation '%s' on %s returned '%s' (%d) instead of the expected value: '%s' (%d)",
key, node->details->uname,
services_ocf_exitcode_str(rc), rc,
services_ocf_exitcode_str(target_rc), target_rc);
if(interval == 0) {
crm_notice("Ignoring expired calculated failure %s (rc=%d, magic=%s) on %s",
task_key, rc, magic, node->details->uname);
goto done;
} else if(node->details->online && node->details->unclean == FALSE) {
crm_notice("Re-initiated expired calculated failure %s (rc=%d, magic=%s) on %s",
task_key, rc, magic, node->details->uname);
/* This is SO horrible, but we don't have access to CancelXmlOp() yet */
crm_xml_add(xml_op, XML_LRM_ATTR_RESTART_DIGEST, "calculated-failure-timeout");
goto done;
}
}
if(status == PCMK_LRM_OP_DONE || status == PCMK_LRM_OP_ERROR) {
status = determine_op_status(rsc, rc, target_rc, node, xml_op, on_fail, data_set);
}
pe_rsc_trace(rsc, "Handling status: %d", status);
switch (status) {
case PCMK_LRM_OP_CANCELLED:
/* do nothing?? */
pe_err("Don't know what to do for cancelled ops yet");
break;
case PCMK_LRM_OP_PENDING:
if (safe_str_eq(task, CRMD_ACTION_START)) {
set_bit(rsc->flags, pe_rsc_start_pending);
set_active(rsc);
} else if (safe_str_eq(task, CRMD_ACTION_PROMOTE)) {
rsc->role = RSC_ROLE_MASTER;
} else if (safe_str_eq(task, CRMD_ACTION_MIGRATE) && node->details->unclean) {
/* If a pending migrate_to action is out on an unclean node,
* we have to force the stop action on the target. */
const char *migrate_target = crm_element_value(xml_op, XML_LRM_ATTR_MIGRATE_TARGET);
node_t *target = pe_find_node(data_set->nodes, migrate_target);
if (target) {
stop_action(rsc, target, FALSE);
}
}
if (rsc->pending_task == NULL) {
if (safe_str_eq(task, CRMD_ACTION_STATUS) && interval == 0) {
/* Pending probes are not printed, even if pending
* operations are requested. If someone ever requests that
* behavior, uncomment this and the corresponding part of
* native.c:native_pending_task().
*/
/*rsc->pending_task = strdup("probe");*/
} else {
rsc->pending_task = strdup(task);
}
}
break;
case PCMK_LRM_OP_DONE:
pe_rsc_trace(rsc, "%s/%s completed on %s", rsc->id, task, node->details->uname);
update_resource_state(rsc, node, xml_op, task, rc, *last_failure, on_fail, data_set);
break;
case PCMK_LRM_OP_NOT_INSTALLED:
failure_strategy = get_action_on_fail(rsc, task_key, task, data_set);
if (failure_strategy == action_fail_ignore) {
crm_warn("Cannot ignore failed %s (status=%d, rc=%d) on %s: "
"Resource agent doesn't exist",
task_key, status, rc, node->details->uname);
/* Also for printing it as "FAILED" by marking it as pe_rsc_failed later */
*on_fail = action_fail_migrate;
}
resource_location(parent, node, -INFINITY, "hard-error", data_set);
unpack_rsc_op_failure(rsc, node, rc, xml_op, last_failure, on_fail, data_set);
break;
case PCMK_LRM_OP_ERROR:
case PCMK_LRM_OP_ERROR_HARD:
case PCMK_LRM_OP_ERROR_FATAL:
case PCMK_LRM_OP_TIMEOUT:
case PCMK_LRM_OP_NOTSUPPORTED:
failure_strategy = get_action_on_fail(rsc, task_key, task, data_set);
if ((failure_strategy == action_fail_ignore)
|| (failure_strategy == action_fail_restart_container
&& safe_str_eq(task, CRMD_ACTION_STOP))) {
crm_warn("Pretending the failure of %s (rc=%d) on %s succeeded",
task_key, rc, node->details->uname);
update_resource_state(rsc, node, xml_op, task, target_rc, *last_failure, on_fail, data_set);
crm_xml_add(xml_op, XML_ATTR_UNAME, node->details->uname);
set_bit(rsc->flags, pe_rsc_failure_ignored);
record_failed_op(xml_op, node, data_set);
if (failure_strategy == action_fail_restart_container && *on_fail <= action_fail_recover) {
*on_fail = failure_strategy;
}
} else {
unpack_rsc_op_failure(rsc, node, rc, xml_op, last_failure, on_fail, data_set);
if(status == PCMK_LRM_OP_ERROR_HARD) {
do_crm_log(rc != PCMK_OCF_NOT_INSTALLED?LOG_ERR:LOG_NOTICE,
"Preventing %s from re-starting on %s: operation %s failed '%s' (%d)",
parent->id, node->details->uname,
task, services_ocf_exitcode_str(rc), rc);
resource_location(parent, node, -INFINITY, "hard-error", data_set);
} else if(status == PCMK_LRM_OP_ERROR_FATAL) {
crm_err("Preventing %s from re-starting anywhere: operation %s failed '%s' (%d)",
parent->id, task, services_ocf_exitcode_str(rc), rc);
resource_location(parent, NULL, -INFINITY, "fatal-error", data_set);
}
}
break;
}
done:
pe_rsc_trace(rsc, "Resource %s after %s: role=%s, next=%s", rsc->id, task, role2text(rsc->role), role2text(rsc->next_role));
return TRUE;
}
gboolean
add_node_attrs(xmlNode * xml_obj, node_t * node, gboolean overwrite, pe_working_set_t * data_set)
{
const char *cluster_name = NULL;
g_hash_table_insert(node->details->attrs,
strdup("#uname"), strdup(node->details->uname));
g_hash_table_insert(node->details->attrs, strdup("#" XML_ATTR_ID), strdup(node->details->id));
if (safe_str_eq(node->details->id, data_set->dc_uuid)) {
data_set->dc_node = node;
node->details->is_dc = TRUE;
g_hash_table_insert(node->details->attrs,
strdup("#" XML_ATTR_DC), strdup(XML_BOOLEAN_TRUE));
} else {
g_hash_table_insert(node->details->attrs,
strdup("#" XML_ATTR_DC), strdup(XML_BOOLEAN_FALSE));
}
cluster_name = g_hash_table_lookup(data_set->config_hash, "cluster-name");
if (cluster_name) {
g_hash_table_insert(node->details->attrs, strdup("#cluster-name"), strdup(cluster_name));
}
unpack_instance_attributes(data_set->input, xml_obj, XML_TAG_ATTR_SETS, NULL,
node->details->attrs, NULL, overwrite, data_set->now);
if (g_hash_table_lookup(node->details->attrs, "#site-name") == NULL) {
const char *site_name = g_hash_table_lookup(node->details->attrs, "site-name");
if (site_name) {
/* Prefix '#' to the key */
g_hash_table_insert(node->details->attrs, strdup("#site-name"), strdup(site_name));
} else if (cluster_name) {
/* Default to cluster-name if unset */
g_hash_table_insert(node->details->attrs, strdup("#site-name"), strdup(cluster_name));
}
}
return TRUE;
}
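/* Collect the operation history entries for one resource on one node,
* sorted by call ID; when active_filter is set, only entries relevant to
* the resource's current activity are kept.
*/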
static GListPtr
extract_operations(const char *node, const char *rsc, xmlNode * rsc_entry, gboolean active_filter)
{
int counter = -1;
int stop_index = -1;
int start_index = -1;
xmlNode *rsc_op = NULL;
GListPtr gIter = NULL;
GListPtr op_list = NULL;
GListPtr sorted_op_list = NULL;
/* extract operations */
op_list = NULL;
sorted_op_list = NULL;
for (rsc_op = __xml_first_child(rsc_entry); rsc_op != NULL; rsc_op = __xml_next_element(rsc_op)) {
if (crm_str_eq((const char *)rsc_op->name, XML_LRM_TAG_RSC_OP, TRUE)) {
crm_xml_add(rsc_op, "resource", rsc);
crm_xml_add(rsc_op, XML_ATTR_UNAME, node);
op_list = g_list_prepend(op_list, rsc_op);
}
}
if (op_list == NULL) {
/* if there are no operations, there is nothing to do */
return NULL;
}
sorted_op_list = g_list_sort(op_list, sort_op_by_callid);
/* create active recurring operations as optional */
if (active_filter == FALSE) {
return sorted_op_list;
}
op_list = NULL;
calculate_active_ops(sorted_op_list, &start_index, &stop_index);
for (gIter = sorted_op_list; gIter != NULL; gIter = gIter->next) {
xmlNode *rsc_op = (xmlNode *) gIter->data;
counter++;
if (start_index < stop_index) {
crm_trace("Skipping %s: not active", ID(rsc_entry));
break;
} else if (counter < start_index) {
crm_trace("Skipping %s: old", ID(rsc_op));
continue;
}
op_list = g_list_append(op_list, rsc_op);
}
g_list_free(sorted_op_list);
return op_list;
}
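/* Walk the CIB status section and return the operation history entries for
* the given resource and/or node (NULL acts as a wildcard), honoring
* active_filter as in extract_operations().
*/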
GListPtr
find_operations(const char *rsc, const char *node, gboolean active_filter,
pe_working_set_t * data_set)
{
GListPtr output = NULL;
GListPtr intermediate = NULL;
xmlNode *tmp = NULL;
xmlNode *status = find_xml_node(data_set->input, XML_CIB_TAG_STATUS, TRUE);
node_t *this_node = NULL;
xmlNode *node_state = NULL;
for (node_state = __xml_first_child(status); node_state != NULL;
node_state = __xml_next_element(node_state)) {
if (crm_str_eq((const char *)node_state->name, XML_CIB_TAG_STATE, TRUE)) {
const char *uname = crm_element_value(node_state, XML_ATTR_UNAME);
if (node != NULL && safe_str_neq(uname, node)) {
continue;
}
this_node = pe_find_node(data_set->nodes, uname);
if(this_node == NULL) {
CRM_LOG_ASSERT(this_node != NULL);
continue;
} else if (is_remote_node(this_node)) {
determine_remote_online_status(data_set, this_node);
} else {
determine_online_status(node_state, this_node, data_set);
}
if (this_node->details->online || is_set(data_set->flags, pe_flag_stonith_enabled)) {
/* offline nodes run no resources...
* unless stonith is enabled in which case we need to
* make sure rsc start events happen after the stonith
*/
xmlNode *lrm_rsc = NULL;
tmp = find_xml_node(node_state, XML_CIB_TAG_LRM, FALSE);
tmp = find_xml_node(tmp, XML_LRM_TAG_RESOURCES, FALSE);
for (lrm_rsc = __xml_first_child(tmp); lrm_rsc != NULL;
lrm_rsc = __xml_next_element(lrm_rsc)) {
if (crm_str_eq((const char *)lrm_rsc->name, XML_LRM_TAG_RESOURCE, TRUE)) {
const char *rsc_id = crm_element_value(lrm_rsc, XML_ATTR_ID);
if (rsc != NULL && safe_str_neq(rsc_id, rsc)) {
continue;
}
intermediate = extract_operations(uname, rsc_id, lrm_rsc, active_filter);
output = g_list_concat(output, intermediate);
}
}
}
}
}
return output;
}
diff --git a/pengine/native.c b/pengine/native.c
index dd5ff184d1..c994a453ff 100644
--- a/pengine/native.c
+++ b/pengine/native.c
@@ -1,3304 +1,3304 @@
/*
* Copyright (C) 2004 Andrew Beekhof <andrew@beekhof.net>
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public
* License as published by the Free Software Foundation; either
* version 2 of the License, or (at your option) any later version.
*
* This software is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*
* You should have received a copy of the GNU General Public
* License along with this library; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include <crm_internal.h>
#include <pengine.h>
#include <crm/pengine/rules.h>
#include <crm/msg_xml.h>
#include <allocate.h>
#include <notif.h>
#include <utils.h>
#include <crm/services.h>
/* #define DELETE_THEN_REFRESH 1 // The crmd will remove the resource from the CIB itself, making this redundant */
#define INFINITY_HACK (INFINITY * -100)
#define VARIANT_NATIVE 1
#include <lib/pengine/variant.h>
gboolean update_action(action_t * then);
void native_rsc_colocation_rh_must(resource_t * rsc_lh, gboolean update_lh,
resource_t * rsc_rh, gboolean update_rh);
void native_rsc_colocation_rh_mustnot(resource_t * rsc_lh, gboolean update_lh,
resource_t * rsc_rh, gboolean update_rh);
void Recurring(resource_t * rsc, action_t * start, node_t * node, pe_working_set_t * data_set);
void RecurringOp(resource_t * rsc, action_t * start, node_t * node,
xmlNode * operation, pe_working_set_t * data_set);
void Recurring_Stopped(resource_t * rsc, action_t * start, node_t * node,
pe_working_set_t * data_set);
void RecurringOp_Stopped(resource_t * rsc, action_t * start, node_t * node,
xmlNode * operation, pe_working_set_t * data_set);
void ReloadRsc(resource_t * rsc, node_t *node, pe_working_set_t * data_set);
gboolean DeleteRsc(resource_t * rsc, node_t * node, gboolean optional, pe_working_set_t * data_set);
gboolean StopRsc(resource_t * rsc, node_t * next, gboolean optional, pe_working_set_t * data_set);
gboolean StartRsc(resource_t * rsc, node_t * next, gboolean optional, pe_working_set_t * data_set);
gboolean DemoteRsc(resource_t * rsc, node_t * next, gboolean optional, pe_working_set_t * data_set);
gboolean PromoteRsc(resource_t * rsc, node_t * next, gboolean optional,
pe_working_set_t * data_set);
gboolean RoleError(resource_t * rsc, node_t * next, gboolean optional, pe_working_set_t * data_set);
gboolean NullOp(resource_t * rsc, node_t * next, gboolean optional, pe_working_set_t * data_set);
/* *INDENT-OFF* */
enum rsc_role_e rsc_state_matrix[RSC_ROLE_MAX][RSC_ROLE_MAX] = {
/* Current State */
/* Next State: Unknown Stopped Started Slave Master */
/* Unknown */ { RSC_ROLE_UNKNOWN, RSC_ROLE_STOPPED, RSC_ROLE_STOPPED, RSC_ROLE_STOPPED, RSC_ROLE_STOPPED, },
/* Stopped */ { RSC_ROLE_STOPPED, RSC_ROLE_STOPPED, RSC_ROLE_STARTED, RSC_ROLE_SLAVE, RSC_ROLE_SLAVE, },
/* Started */ { RSC_ROLE_STOPPED, RSC_ROLE_STOPPED, RSC_ROLE_STARTED, RSC_ROLE_SLAVE, RSC_ROLE_MASTER, },
/* Slave */ { RSC_ROLE_STOPPED, RSC_ROLE_STOPPED, RSC_ROLE_STOPPED, RSC_ROLE_SLAVE, RSC_ROLE_MASTER, },
/* Master */ { RSC_ROLE_STOPPED, RSC_ROLE_SLAVE, RSC_ROLE_SLAVE, RSC_ROLE_SLAVE, RSC_ROLE_MASTER, },
};
gboolean (*rsc_action_matrix[RSC_ROLE_MAX][RSC_ROLE_MAX])(resource_t*,node_t*,gboolean,pe_working_set_t*) = {
/* Current State */
/* Next State: Unknown Stopped Started Slave Master */
/* Unknown */ { RoleError, StopRsc, RoleError, RoleError, RoleError, },
/* Stopped */ { RoleError, NullOp, StartRsc, StartRsc, RoleError, },
/* Started */ { RoleError, StopRsc, NullOp, NullOp, PromoteRsc, },
/* Slave */ { RoleError, StopRsc, StopRsc, NullOp, PromoteRsc, },
/* Master */ { RoleError, DemoteRsc, DemoteRsc, DemoteRsc, NullOp, },
};
/* *INDENT-ON* */
static action_t * get_first_named_action(resource_t * rsc, const char *action, gboolean only_valid, node_t * current);
static gboolean
native_choose_node(resource_t * rsc, node_t * prefer, pe_working_set_t * data_set)
{
/*
1. Sort by weight
2. color.chosen_node = the node (of those with the highest weight)
with the fewest resources
3. remove color.chosen_node from all other colors
*/
GListPtr nodes = NULL;
node_t *chosen = NULL;
int lpc = 0;
int multiple = 0;
int length = 0;
gboolean result = FALSE;
process_utilization(rsc, &prefer, data_set);
length = g_hash_table_size(rsc->allowed_nodes);
if (is_not_set(rsc->flags, pe_rsc_provisional)) {
return rsc->allocated_to ? TRUE : FALSE;
}
if(rsc->allowed_nodes) {
nodes = g_hash_table_get_values(rsc->allowed_nodes);
nodes = g_list_sort_with_data(nodes, sort_node_weight, g_list_nth_data(rsc->running_on, 0));
}
if (prefer) {
node_t *best = g_list_nth_data(nodes, 0);
chosen = g_hash_table_lookup(rsc->allowed_nodes, prefer->details->id);
if (chosen && chosen->weight >= 0
&& chosen->weight >= best->weight /* Possible alternative: (chosen->weight >= INFINITY || best->weight < INFINITY) */
&& can_run_resources(chosen)) {
pe_rsc_trace(rsc,
"Using preferred node %s for %s instead of choosing from %d candidates",
chosen->details->uname, rsc->id, length);
} else if (chosen && chosen->weight < 0) {
pe_rsc_trace(rsc, "Preferred node %s for %s was unavailable", chosen->details->uname,
rsc->id);
chosen = NULL;
} else if (chosen && can_run_resources(chosen)) {
pe_rsc_trace(rsc, "Preferred node %s for %s was unsuitable", chosen->details->uname,
rsc->id);
chosen = NULL;
} else {
pe_rsc_trace(rsc, "Preferred node %s for %s was unknown", prefer->details->uname,
rsc->id);
}
}
if (chosen == NULL && rsc->allowed_nodes) {
chosen = g_list_nth_data(nodes, 0);
pe_rsc_trace(rsc, "Chose node %s for %s from %d candidates",
chosen ? chosen->details->uname : "<none>", rsc->id, length);
if (chosen && chosen->weight > 0 && can_run_resources(chosen)) {
node_t *running = g_list_nth_data(rsc->running_on, 0);
if (running && can_run_resources(running) == FALSE) {
pe_rsc_trace(rsc, "Current node for %s (%s) can't run resources",
rsc->id, running->details->uname);
running = NULL;
}
for (lpc = 1; lpc < length && running; lpc++) {
node_t *tmp = g_list_nth_data(nodes, lpc);
if (tmp->weight == chosen->weight) {
multiple++;
if (tmp->details == running->details) {
/* prefer the existing node if scores are equal */
chosen = tmp;
}
}
}
}
}
if (multiple > 1) {
int log_level = LOG_INFO;
static char score[33];
score2char_stack(chosen->weight, score, sizeof(score));
if (chosen->weight >= INFINITY) {
log_level = LOG_WARNING;
}
do_crm_log(log_level, "%d nodes with equal score (%s) for"
" running %s resources. Chose %s.",
multiple, score, rsc->id, chosen->details->uname);
}
result = native_assign_node(rsc, nodes, chosen, FALSE);
g_list_free(nodes);
return result;
}
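/* Return the highest node weight in 'list' among nodes whose value for
 * 'attr' matches 'value'; nodes that cannot run resources count as -INFINITY.
 */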
static int
node_list_attr_score(GHashTable * list, const char *attr, const char *value)
{
GHashTableIter iter;
node_t *node = NULL;
int best_score = -INFINITY;
const char *best_node = NULL;
if (attr == NULL) {
attr = "#" XML_ATTR_UNAME;
}
g_hash_table_iter_init(&iter, list);
while (g_hash_table_iter_next(&iter, NULL, (void **)&node)) {
int weight = node->weight;
if (can_run_resources(node) == FALSE) {
weight = -INFINITY;
}
if (weight > best_score || best_node == NULL) {
const char *tmp = g_hash_table_lookup(node->details->attrs, attr);
if (safe_str_eq(value, tmp)) {
best_score = weight;
best_node = node->details->uname;
}
}
}
if (safe_str_neq(attr, "#" XML_ATTR_UNAME)) {
crm_info("Best score for %s=%s was %s with %d",
attr, value, best_node ? best_node : "<none>", best_score);
}
return best_score;
}
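/* For each node in list1, merge in 'factor' times the best score that list2
 * offers for the node's value of 'attr' (used to propagate colocation
 * preferences), subject to the filtering rules in the branches below.
 */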
static void
node_hash_update(GHashTable * list1, GHashTable * list2, const char *attr, float factor,
gboolean only_positive)
{
int score = 0;
int new_score = 0;
GHashTableIter iter;
node_t *node = NULL;
if (attr == NULL) {
attr = "#" XML_ATTR_UNAME;
}
g_hash_table_iter_init(&iter, list1);
while (g_hash_table_iter_next(&iter, NULL, (void **)&node)) {
CRM_LOG_ASSERT(node != NULL);
if(node == NULL) { continue; };
score = node_list_attr_score(list2, attr, g_hash_table_lookup(node->details->attrs, attr));
new_score = merge_weights(factor * score, node->weight);
if (factor < 0 && score < 0) {
/* A negative preference for a node with a negative score
 * should not become a positive preference.
 *
 * TODO: Decide if we want to filter only if weight == -INFINITY
 */
crm_trace("%s: Filtering %d + %f*%d (factor * score)",
node->details->uname, node->weight, factor, score);
} else if (node->weight == INFINITY_HACK) {
crm_trace("%s: Filtering %d + %f*%d (node < 0)",
node->details->uname, node->weight, factor, score);
} else if (only_positive && new_score < 0 && node->weight > 0) {
node->weight = INFINITY_HACK;
crm_trace("%s: Filtering %d + %f*%d (score > 0)",
node->details->uname, node->weight, factor, score);
} else if (only_positive && new_score < 0 && node->weight == 0) {
crm_trace("%s: Filtering %d + %f*%d (score == 0)",
node->details->uname, node->weight, factor, score);
} else {
crm_trace("%s: %d + %f*%d", node->details->uname, node->weight, factor, score);
node->weight = new_score;
}
}
}
GHashTable *
node_hash_dup(GHashTable * hash)
{
/* Hack! */
GListPtr list = g_hash_table_get_values(hash);
GHashTable *result = node_hash_from_list(list);
g_list_free(list);
return result;
}
GHashTable *
native_merge_weights(resource_t * rsc, const char *rhs, GHashTable * nodes, const char *attr,
float factor, enum pe_weights flags)
{
return rsc_merge_weights(rsc, rhs, nodes, attr, factor, flags);
}
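/* Merge this resource's allowed-node weights (and, recursively, those of
 * colocated resources) into 'nodes', scaled by 'factor'. pe_weights_init
 * starts from the resource's own allowed nodes, pe_weights_rollback discards
 * the merge if it would leave no runnable node, and the pe_rsc_merging flag
 * guards against colocation dependency loops.
 */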
GHashTable *
rsc_merge_weights(resource_t * rsc, const char *rhs, GHashTable * nodes, const char *attr,
float factor, enum pe_weights flags)
{
GHashTable *work = NULL;
int multiplier = 1;
if (factor < 0) {
multiplier = -1;
}
if (is_set(rsc->flags, pe_rsc_merging)) {
pe_rsc_info(rsc, "%s: Breaking dependency loop at %s", rhs, rsc->id);
return nodes;
}
set_bit(rsc->flags, pe_rsc_merging);
if (is_set(flags, pe_weights_init)) {
if (rsc->variant == pe_group && rsc->children) {
GListPtr last = rsc->children;
while (last->next != NULL) {
last = last->next;
}
pe_rsc_trace(rsc, "Merging %s as a group %p %p", rsc->id, rsc->children, last);
work = rsc_merge_weights(last->data, rhs, NULL, attr, factor, flags);
} else {
work = node_hash_dup(rsc->allowed_nodes);
}
clear_bit(flags, pe_weights_init);
} else if (rsc->variant == pe_group && rsc->children) {
GListPtr iter = rsc->children;
pe_rsc_trace(rsc, "%s: Combining scores from %d children of %s", rhs, g_list_length(iter), rsc->id);
work = node_hash_dup(nodes);
for(iter = rsc->children; iter->next != NULL; iter = iter->next) {
work = rsc_merge_weights(iter->data, rhs, work, attr, factor, flags);
}
} else {
pe_rsc_trace(rsc, "%s: Combining scores from %s", rhs, rsc->id);
work = node_hash_dup(nodes);
node_hash_update(work, rsc->allowed_nodes, attr, factor,
is_set(flags, pe_weights_positive));
}
if (is_set(flags, pe_weights_rollback) && can_run_any(work) == FALSE) {
pe_rsc_info(rsc, "%s: Rolling back scores from %s", rhs, rsc->id);
g_hash_table_destroy(work);
clear_bit(rsc->flags, pe_rsc_merging);
return nodes;
}
if (can_run_any(work)) {
GListPtr gIter = NULL;
if (is_set(flags, pe_weights_forward)) {
gIter = rsc->rsc_cons;
crm_trace("Checking %d additional colocation constraints", g_list_length(gIter));
} else if(rsc->variant == pe_group && rsc->children) {
GListPtr last = rsc->children;
while (last->next != NULL) {
last = last->next;
}
gIter = ((resource_t*)last->data)->rsc_cons_lhs;
crm_trace("Checking %d additional optional group colocation constraints from %s",
g_list_length(gIter), ((resource_t*)last->data)->id);
} else {
gIter = rsc->rsc_cons_lhs;
crm_trace("Checking %d additional optional colocation constraints %s", g_list_length(gIter), rsc->id);
}
for (; gIter != NULL; gIter = gIter->next) {
resource_t *other = NULL;
rsc_colocation_t *constraint = (rsc_colocation_t *) gIter->data;
if (is_set(flags, pe_weights_forward)) {
other = constraint->rsc_rh;
} else {
other = constraint->rsc_lh;
}
pe_rsc_trace(rsc, "Applying %s (%s)", constraint->id, other->id);
work = rsc_merge_weights(other, rhs, work, constraint->node_attribute,
multiplier * (float)constraint->score / INFINITY, flags|pe_weights_rollback);
dump_node_scores(LOG_TRACE, NULL, rhs, work);
}
}
if (is_set(flags, pe_weights_positive)) {
node_t *node = NULL;
GHashTableIter iter;
g_hash_table_iter_init(&iter, work);
while (g_hash_table_iter_next(&iter, NULL, (void **)&node)) {
if (node->weight == INFINITY_HACK) {
node->weight = 1;
}
}
}
if (nodes) {
g_hash_table_destroy(nodes);
}
clear_bit(rsc->flags, pe_rsc_merging);
return work;
}
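/* Allocate a primitive resource to a node: apply colocation constraints to
 * the allowed-node weights, handle the unmanaged, stop-everything and
 * target-role=Stopped cases, then choose a node via native_choose_node().
 * For remote connection resources, also mark the corresponding remote node
 * online or shutting down.
 */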
node_t *
native_color(resource_t * rsc, node_t * prefer, pe_working_set_t * data_set)
{
GListPtr gIter = NULL;
int alloc_details = scores_log_level + 1;
if (rsc->parent && is_not_set(rsc->parent->flags, pe_rsc_allocating)) {
/* never allocate children on their own */
pe_rsc_debug(rsc, "Escalating allocation of %s to its parent: %s", rsc->id,
rsc->parent->id);
rsc->parent->cmds->allocate(rsc->parent, prefer, data_set);
}
if (is_not_set(rsc->flags, pe_rsc_provisional)) {
return rsc->allocated_to;
}
if (is_set(rsc->flags, pe_rsc_allocating)) {
pe_rsc_debug(rsc, "Dependency loop detected involving %s", rsc->id);
return NULL;
}
set_bit(rsc->flags, pe_rsc_allocating);
print_resource(alloc_details, "Allocating: ", rsc, FALSE);
dump_node_scores(alloc_details, rsc, "Pre-alloc", rsc->allowed_nodes);
for (gIter = rsc->rsc_cons; gIter != NULL; gIter = gIter->next) {
rsc_colocation_t *constraint = (rsc_colocation_t *) gIter->data;
GHashTable *archive = NULL;
resource_t *rsc_rh = constraint->rsc_rh;
pe_rsc_trace(rsc, "%s: Pre-Processing %s (%s, %d, %s)",
rsc->id, constraint->id, rsc_rh->id,
constraint->score, role2text(constraint->role_lh));
if (constraint->role_lh >= RSC_ROLE_MASTER
|| (constraint->score < 0 && constraint->score > -INFINITY)) {
archive = node_hash_dup(rsc->allowed_nodes);
}
rsc_rh->cmds->allocate(rsc_rh, NULL, data_set);
rsc->cmds->rsc_colocation_lh(rsc, rsc_rh, constraint);
if (archive && can_run_any(rsc->allowed_nodes) == FALSE) {
pe_rsc_info(rsc, "%s: Rolling back scores from %s", rsc->id, rsc_rh->id);
g_hash_table_destroy(rsc->allowed_nodes);
rsc->allowed_nodes = archive;
archive = NULL;
}
if (archive) {
g_hash_table_destroy(archive);
}
}
dump_node_scores(alloc_details, rsc, "Post-coloc", rsc->allowed_nodes);
for (gIter = rsc->rsc_cons_lhs; gIter != NULL; gIter = gIter->next) {
rsc_colocation_t *constraint = (rsc_colocation_t *) gIter->data;
rsc->allowed_nodes =
constraint->rsc_lh->cmds->merge_weights(constraint->rsc_lh, rsc->id, rsc->allowed_nodes,
constraint->node_attribute,
(float)constraint->score / INFINITY,
pe_weights_rollback);
}
print_resource(LOG_DEBUG_2, "Allocating: ", rsc, FALSE);
if (rsc->next_role == RSC_ROLE_STOPPED) {
pe_rsc_trace(rsc, "Making sure %s doesn't get allocated", rsc->id);
/* make sure it doesn't come up again */
resource_location(rsc, NULL, -INFINITY, XML_RSC_ATTR_TARGET_ROLE, data_set);
} else if(rsc->next_role > rsc->role
&& is_set(data_set->flags, pe_flag_have_quorum) == FALSE
&& data_set->no_quorum_policy == no_quorum_freeze) {
crm_notice("Resource %s cannot be elevated from %s to %s: no-quorum-policy=freeze",
rsc->id, role2text(rsc->role), role2text(rsc->next_role));
rsc->next_role = rsc->role;
}
dump_node_scores(show_scores ? 0 : scores_log_level, rsc, __FUNCTION__,
rsc->allowed_nodes);
if (is_set(data_set->flags, pe_flag_stonith_enabled)
&& is_set(data_set->flags, pe_flag_have_stonith_resource) == FALSE) {
clear_bit(rsc->flags, pe_rsc_managed);
}
if (is_not_set(rsc->flags, pe_rsc_managed)) {
const char *reason = NULL;
node_t *assign_to = NULL;
rsc->next_role = rsc->role;
if (rsc->running_on == NULL) {
reason = "inactive";
} else if (rsc->role == RSC_ROLE_MASTER) {
assign_to = rsc->running_on->data;
reason = "master";
} else if (is_set(rsc->flags, pe_rsc_failed)) {
assign_to = rsc->running_on->data;
reason = "failed";
} else {
assign_to = rsc->running_on->data;
reason = "active";
}
pe_rsc_info(rsc, "Unmanaged resource %s allocated to %s: %s", rsc->id,
assign_to ? assign_to->details->uname : "'nowhere'", reason);
native_assign_node(rsc, NULL, assign_to, TRUE);
} else if (is_set(data_set->flags, pe_flag_stop_everything)) {
pe_rsc_debug(rsc, "Forcing %s to stop", rsc->id);
native_assign_node(rsc, NULL, NULL, TRUE);
} else if (is_set(rsc->flags, pe_rsc_provisional)
&& native_choose_node(rsc, prefer, data_set)) {
pe_rsc_trace(rsc, "Allocated resource %s to %s", rsc->id,
rsc->allocated_to->details->uname);
} else if (rsc->allocated_to == NULL) {
if (is_not_set(rsc->flags, pe_rsc_orphan)) {
pe_rsc_info(rsc, "Resource %s cannot run anywhere", rsc->id);
} else if (rsc->running_on != NULL) {
pe_rsc_info(rsc, "Stopping orphan resource %s", rsc->id);
}
} else {
pe_rsc_debug(rsc, "Pre-Allocated resource %s to %s", rsc->id,
rsc->allocated_to->details->uname);
}
clear_bit(rsc->flags, pe_rsc_allocating);
print_resource(LOG_DEBUG_3, "Allocated ", rsc, TRUE);
if (rsc->is_remote_node) {
node_t *remote_node = pe_find_node(data_set->nodes, rsc->id);
CRM_ASSERT(remote_node != NULL);
if (rsc->allocated_to && rsc->next_role != RSC_ROLE_STOPPED) {
crm_trace("Setting remote node %s to ONLINE", remote_node->details->id);
remote_node->details->online = TRUE;
/* We shouldn't consider an unseen remote node unclean if we are going
 * to try to connect to it. Otherwise we get an unnecessary fence. */
if (remote_node->details->unseen == TRUE) {
remote_node->details->unclean = FALSE;
}
} else {
crm_trace("Setting remote node %s to SHUTDOWN. next role = %s, allocated=%s",
remote_node->details->id, role2text(rsc->next_role), rsc->allocated_to ? "true" : "false");
remote_node->details->shutdown = TRUE;
}
}
return rsc->allocated_to;
}
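/* Check whether the resource defines more than one operation with the same
 * name and interval, which is a configuration error.
 */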
static gboolean
is_op_dup(resource_t * rsc, const char *name, const char *interval)
{
gboolean dup = FALSE;
const char *id = NULL;
const char *value = NULL;
xmlNode *operation = NULL;
CRM_ASSERT(rsc);
for (operation = __xml_first_child(rsc->ops_xml); operation != NULL;
operation = __xml_next_element(operation)) {
if (crm_str_eq((const char *)operation->name, "op", TRUE)) {
value = crm_element_value(operation, "name");
if (safe_str_neq(value, name)) {
continue;
}
value = crm_element_value(operation, XML_LRM_ATTR_INTERVAL);
if (value == NULL) {
value = "0";
}
if (safe_str_neq(value, interval)) {
continue;
}
if (id == NULL) {
id = ID(operation);
} else {
crm_config_err("Operation %s is a duplicate of %s", ID(operation), id);
crm_config_err
("Do not use the same (name, interval) combination more than once per resource");
dup = TRUE;
}
}
}
return dup;
}
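/* Create the recurring action defined by 'operation' (for roles other than
 * Stopped) on the node the resource will be active on, or schedule a cancel
 * if the operation's role no longer matches the resource's next role, and
 * order it relative to start/reload/promote/demote as appropriate.
 */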
void
RecurringOp(resource_t * rsc, action_t * start, node_t * node,
xmlNode * operation, pe_working_set_t * data_set)
{
char *key = NULL;
const char *name = NULL;
const char *value = NULL;
const char *interval = NULL;
const char *node_uname = NULL;
unsigned long long interval_ms = 0;
action_t *mon = NULL;
gboolean is_optional = TRUE;
GListPtr possible_matches = NULL;
/* Only process for the operations without role="Stopped" */
value = crm_element_value(operation, "role");
if (value && text2role(value) == RSC_ROLE_STOPPED) {
return;
}
CRM_ASSERT(rsc);
pe_rsc_trace(rsc, "Creating recurring action %s for %s in role %s on %s",
ID(operation), rsc->id, role2text(rsc->next_role),
node ? node->details->uname : "n/a");
if (node != NULL) {
node_uname = node->details->uname;
}
interval = crm_element_value(operation, XML_LRM_ATTR_INTERVAL);
interval_ms = crm_get_interval(interval);
if (interval_ms == 0) {
return;
}
name = crm_element_value(operation, "name");
if (is_op_dup(rsc, name, interval)) {
return;
}
if (safe_str_eq(name, RSC_STOP)
|| safe_str_eq(name, RSC_START)
|| safe_str_eq(name, RSC_DEMOTE)
|| safe_str_eq(name, RSC_PROMOTE)
) {
crm_config_err("Invalid recurring action %s with name: '%s'", ID(operation), name);
return;
}
key = generate_op_key(rsc->id, name, interval_ms);
if (find_rsc_op_entry(rsc, key) == NULL) {
/* disabled */
free(key);
return;
}
if (start != NULL) {
pe_rsc_trace(rsc, "Marking %s %s due to %s",
key, is_set(start->flags, pe_action_optional) ? "optional" : "mandatory",
start->uuid);
is_optional = (rsc->cmds->action_flags(start, NULL) & pe_action_optional);
} else {
pe_rsc_trace(rsc, "Marking %s optional", key);
is_optional = TRUE;
}
/* start a monitor for an already active resource */
possible_matches = find_actions_exact(rsc->actions, key, node);
if (possible_matches == NULL) {
is_optional = FALSE;
pe_rsc_trace(rsc, "Marking %s mandatory: not active", key);
} else {
GListPtr gIter = NULL;
for (gIter = possible_matches; gIter != NULL; gIter = gIter->next) {
action_t *op = (action_t *) gIter->data;
if (is_set(op->flags, pe_action_reschedule)) {
is_optional = FALSE;
break;
}
}
g_list_free(possible_matches);
}
if ((rsc->next_role == RSC_ROLE_MASTER && value == NULL)
|| (value != NULL && text2role(value) != rsc->next_role)) {
int log_level = LOG_DEBUG_2;
const char *result = "Ignoring";
if (is_optional) {
char *local_key = strdup(key);
log_level = LOG_INFO;
result = "Cancelling";
/* it's running : cancel it */
mon = custom_action(rsc, local_key, RSC_CANCEL, node, FALSE, TRUE, data_set);
free(mon->task);
free(mon->cancel_task);
mon->task = strdup(RSC_CANCEL);
mon->cancel_task = strdup(name);
add_hash_param(mon->meta, XML_LRM_ATTR_INTERVAL, interval);
add_hash_param(mon->meta, XML_LRM_ATTR_TASK, name);
local_key = NULL;
switch (rsc->role) {
case RSC_ROLE_SLAVE:
case RSC_ROLE_STARTED:
if (rsc->next_role == RSC_ROLE_MASTER) {
local_key = promote_key(rsc);
} else if (rsc->next_role == RSC_ROLE_STOPPED) {
local_key = stop_key(rsc);
}
break;
case RSC_ROLE_MASTER:
local_key = demote_key(rsc);
break;
default:
break;
}
if (local_key) {
custom_action_order(rsc, NULL, mon, rsc, local_key, NULL,
pe_order_runnable_left, data_set);
}
mon = NULL;
}
do_crm_log(log_level, "%s action %s (%s vs. %s)",
result, key, value ? value : role2text(RSC_ROLE_SLAVE),
role2text(rsc->next_role));
free(key);
return;
}
mon = custom_action(rsc, key, name, node, is_optional, TRUE, data_set);
key = mon->uuid;
if (is_optional) {
pe_rsc_trace(rsc, "%s\t %s (optional)", crm_str(node_uname), mon->uuid);
}
if (start == NULL || is_set(start->flags, pe_action_runnable) == FALSE) {
pe_rsc_debug(rsc, "%s\t %s (cancelled : start un-runnable)", crm_str(node_uname),
mon->uuid);
update_action_flags(mon, pe_action_runnable | pe_action_clear, __FUNCTION__, __LINE__);
} else if (node == NULL || node->details->online == FALSE || node->details->unclean) {
pe_rsc_debug(rsc, "%s\t %s (cancelled : no node available)", crm_str(node_uname),
mon->uuid);
update_action_flags(mon, pe_action_runnable | pe_action_clear, __FUNCTION__, __LINE__);
} else if (is_set(mon->flags, pe_action_optional) == FALSE) {
pe_rsc_info(rsc, " Start recurring %s (%llus) for %s on %s", mon->task, interval_ms / 1000,
rsc->id, crm_str(node_uname));
}
if (rsc->next_role == RSC_ROLE_MASTER) {
char *running_master = crm_itoa(PCMK_OCF_RUNNING_MASTER);
add_hash_param(mon->meta, XML_ATTR_TE_TARGET_RC, running_master);
free(running_master);
}
if (node == NULL || is_set(rsc->flags, pe_rsc_managed)) {
custom_action_order(rsc, start_key(rsc), NULL,
NULL, strdup(key), mon,
pe_order_implies_then | pe_order_runnable_left, data_set);
custom_action_order(rsc, reload_key(rsc), NULL,
NULL, strdup(key), mon,
pe_order_implies_then | pe_order_runnable_left, data_set);
if (rsc->next_role == RSC_ROLE_MASTER) {
custom_action_order(rsc, promote_key(rsc), NULL,
rsc, NULL, mon,
pe_order_optional | pe_order_runnable_left, data_set);
} else if (rsc->role == RSC_ROLE_MASTER) {
custom_action_order(rsc, demote_key(rsc), NULL,
rsc, NULL, mon,
pe_order_optional | pe_order_runnable_left, data_set);
}
}
}
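/* Schedule all recurring operations without role="Stopped" for the resource,
 * unless the resource or node is in maintenance mode.
 */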
void
Recurring(resource_t * rsc, action_t * start, node_t * node, pe_working_set_t * data_set)
{
if (is_not_set(rsc->flags, pe_rsc_maintenance) &&
(node == NULL || node->details->maintenance == FALSE)) {
xmlNode *operation = NULL;
for (operation = __xml_first_child(rsc->ops_xml); operation != NULL;
operation = __xml_next_element(operation)) {
if (crm_str_eq((const char *)operation->name, "op", TRUE)) {
RecurringOp(rsc, start, node, operation, data_set);
}
}
}
}
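/* Create the recurring action defined by 'operation' with role="Stopped":
 * cancel any copy running on the node where the resource will be active, and
 * create monitors (expecting the "not running" result) on all other nodes.
 */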
void
RecurringOp_Stopped(resource_t * rsc, action_t * start, node_t * node,
xmlNode * operation, pe_working_set_t * data_set)
{
char *key = NULL;
const char *name = NULL;
const char *role = NULL;
const char *interval = NULL;
const char *node_uname = NULL;
unsigned long long interval_ms = 0;
GListPtr possible_matches = NULL;
GListPtr gIter = NULL;
/* TODO: Support of non-unique clone */
if (is_set(rsc->flags, pe_rsc_unique) == FALSE) {
return;
}
/* Only process for the operations with role="Stopped" */
role = crm_element_value(operation, "role");
if (role == NULL || text2role(role) != RSC_ROLE_STOPPED) {
return;
}
pe_rsc_trace(rsc,
"Creating recurring actions %s for %s in role %s on nodes where it'll not be running",
ID(operation), rsc->id, role2text(rsc->next_role));
if (node != NULL) {
node_uname = node->details->uname;
}
interval = crm_element_value(operation, XML_LRM_ATTR_INTERVAL);
interval_ms = crm_get_interval(interval);
if (interval_ms == 0) {
return;
}
name = crm_element_value(operation, "name");
if (is_op_dup(rsc, name, interval)) {
return;
}
if (safe_str_eq(name, RSC_STOP)
|| safe_str_eq(name, RSC_START)
|| safe_str_eq(name, RSC_DEMOTE)
|| safe_str_eq(name, RSC_PROMOTE)
) {
crm_config_err("Invalid recurring action %s with name: '%s'", ID(operation), name);
return;
}
key = generate_op_key(rsc->id, name, interval_ms);
if (find_rsc_op_entry(rsc, key) == NULL) {
/* disabled */
free(key);
return;
}
/* if the monitor exists on the node where the resource will be running, cancel it */
if (node != NULL) {
possible_matches = find_actions_exact(rsc->actions, key, node);
if (possible_matches) {
action_t *cancel_op = NULL;
char *local_key = strdup(key);
g_list_free(possible_matches);
cancel_op = custom_action(rsc, local_key, RSC_CANCEL, node, FALSE, TRUE, data_set);
free(cancel_op->task);
free(cancel_op->cancel_task);
cancel_op->task = strdup(RSC_CANCEL);
cancel_op->cancel_task = strdup(name);
add_hash_param(cancel_op->meta, XML_LRM_ATTR_INTERVAL, interval);
add_hash_param(cancel_op->meta, XML_LRM_ATTR_TASK, name);
local_key = NULL;
if (rsc->next_role == RSC_ROLE_STARTED || rsc->next_role == RSC_ROLE_SLAVE) {
/* rsc->role == RSC_ROLE_STOPPED: cancel the monitor before start */
/* rsc->role == RSC_ROLE_STARTED: for a migration, cancel the monitor on the target node before start */
custom_action_order(rsc, NULL, cancel_op, rsc, start_key(rsc), NULL,
pe_order_runnable_left, data_set);
}
pe_rsc_info(rsc, "Cancel action %s (%s vs. %s) on %s",
key, role, role2text(rsc->next_role), crm_str(node_uname));
}
}
for (gIter = data_set->nodes; gIter != NULL; gIter = gIter->next) {
node_t *stop_node = (node_t *) gIter->data;
const char *stop_node_uname = stop_node->details->uname;
gboolean is_optional = TRUE;
gboolean probe_is_optional = TRUE;
gboolean stop_is_optional = TRUE;
action_t *stopped_mon = NULL;
char *rc_inactive = NULL;
GListPtr probe_complete_ops = NULL;
GListPtr stop_ops = NULL;
GListPtr local_gIter = NULL;
char *stop_op_key = NULL;
if (node_uname && safe_str_eq(stop_node_uname, node_uname)) {
continue;
}
pe_rsc_trace(rsc, "Creating recurring action %s for %s on %s",
ID(operation), rsc->id, crm_str(stop_node_uname));
/* start a monitor for an already stopped resource */
possible_matches = find_actions_exact(rsc->actions, key, stop_node);
if (possible_matches == NULL) {
pe_rsc_trace(rsc, "Marking %s mandatory on %s: not active", key,
crm_str(stop_node_uname));
is_optional = FALSE;
} else {
pe_rsc_trace(rsc, "Marking %s optional on %s: already active", key,
crm_str(stop_node_uname));
is_optional = TRUE;
g_list_free(possible_matches);
}
stopped_mon = custom_action(rsc, strdup(key), name, stop_node, is_optional, TRUE, data_set);
rc_inactive = crm_itoa(PCMK_OCF_NOT_RUNNING);
add_hash_param(stopped_mon->meta, XML_ATTR_TE_TARGET_RC, rc_inactive);
free(rc_inactive);
if (is_set(rsc->flags, pe_rsc_managed)) {
char *probe_key = generate_op_key(rsc->id, CRMD_ACTION_STATUS, 0);
GListPtr probes = find_actions(rsc->actions, probe_key, stop_node);
GListPtr pIter = NULL;
for (pIter = probes; pIter != NULL; pIter = pIter->next) {
action_t *probe = (action_t *) pIter->data;
order_actions(probe, stopped_mon, pe_order_runnable_left);
crm_trace("%s then %s on %s", probe->uuid, stopped_mon->uuid, stop_node->details->uname);
}
g_list_free(probes);
free(probe_key);
}
if (probe_complete_ops) {
g_list_free(probe_complete_ops);
}
stop_op_key = stop_key(rsc);
stop_ops = find_actions_exact(rsc->actions, stop_op_key, stop_node);
for (local_gIter = stop_ops; local_gIter != NULL; local_gIter = local_gIter->next) {
action_t *stop = (action_t *) local_gIter->data;
if (is_set(stop->flags, pe_action_optional) == FALSE) {
stop_is_optional = FALSE;
}
if (is_set(stop->flags, pe_action_runnable) == FALSE) {
crm_debug("%s\t %s (cancelled : stop un-runnable)",
crm_str(stop_node_uname), stopped_mon->uuid);
update_action_flags(stopped_mon, pe_action_runnable | pe_action_clear, __FUNCTION__, __LINE__);
}
if (is_set(rsc->flags, pe_rsc_managed)) {
custom_action_order(rsc, strdup(stop_op_key), stop,
NULL, strdup(key), stopped_mon,
pe_order_implies_then | pe_order_runnable_left, data_set);
}
}
if (stop_ops) {
g_list_free(stop_ops);
}
free(stop_op_key);
if (is_optional == FALSE && probe_is_optional && stop_is_optional
&& is_set(rsc->flags, pe_rsc_managed) == FALSE) {
pe_rsc_trace(rsc, "Marking %s optional on %s due to unmanaged",
key, crm_str(stop_node_uname));
update_action_flags(stopped_mon, pe_action_optional, __FUNCTION__, __LINE__);
}
if (is_set(stopped_mon->flags, pe_action_optional)) {
pe_rsc_trace(rsc, "%s\t %s (optional)", crm_str(stop_node_uname), stopped_mon->uuid);
}
if (stop_node->details->online == FALSE || stop_node->details->unclean) {
pe_rsc_debug(rsc, "%s\t %s (cancelled : no node available)",
crm_str(stop_node_uname), stopped_mon->uuid);
update_action_flags(stopped_mon, pe_action_runnable | pe_action_clear, __FUNCTION__, __LINE__);
}
if (is_set(stopped_mon->flags, pe_action_runnable)
&& is_set(stopped_mon->flags, pe_action_optional) == FALSE) {
crm_notice(" Start recurring %s (%llus) for %s on %s", stopped_mon->task,
interval_ms / 1000, rsc->id, crm_str(stop_node_uname));
}
}
free(key);
}
void
Recurring_Stopped(resource_t * rsc, action_t * start, node_t * node, pe_working_set_t * data_set)
{
if (is_not_set(rsc->flags, pe_rsc_maintenance) &&
(node == NULL || node->details->maintenance == FALSE)) {
xmlNode *operation = NULL;
for (operation = __xml_first_child(rsc->ops_xml); operation != NULL;
operation = __xml_next_element(operation)) {
if (crm_str_eq((const char *)operation->name, "op", TRUE)) {
RecurringOp_Stopped(rsc, start, node, operation, data_set);
}
}
}
}
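/* Create migrate_to/migrate_from actions for a live migration from 'current'
 * to 'chosen' (only migrate_from for a partial migration), turn the start
 * into a pseudo-action, and order probes, migration actions, stop and start
 * accordingly.
 */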
static void
handle_migration_actions(resource_t * rsc, node_t *current, node_t *chosen, pe_working_set_t * data_set)
{
action_t *migrate_to = NULL;
action_t *migrate_from = NULL;
action_t *start = NULL;
action_t *stop = NULL;
gboolean partial = rsc->partial_migration_target ? TRUE : FALSE;
pe_rsc_trace(rsc, "Processing migration actions %s moving from %s to %s . partial migration = %s",
rsc->id, current->details->id, chosen->details->id, partial ? "TRUE" : "FALSE");
start = start_action(rsc, chosen, TRUE);
stop = stop_action(rsc, current, TRUE);
if (partial == FALSE) {
migrate_to = custom_action(rsc, generate_op_key(rsc->id, RSC_MIGRATE, 0), RSC_MIGRATE, current, TRUE, TRUE, data_set);
}
migrate_from = custom_action(rsc, generate_op_key(rsc->id, RSC_MIGRATED, 0), RSC_MIGRATED, chosen, TRUE, TRUE, data_set);
if ((migrate_to && migrate_from) || (migrate_from && partial)) {
set_bit(start->flags, pe_action_migrate_runnable);
set_bit(stop->flags, pe_action_migrate_runnable);
update_action_flags(start, pe_action_pseudo, __FUNCTION__, __LINE__); /* easier than trying to delete it from the graph */
/* order probes before migrations */
if (partial) {
set_bit(migrate_from->flags, pe_action_migrate_runnable);
migrate_from->needs = start->needs;
custom_action_order(rsc, generate_op_key(rsc->id, RSC_STATUS, 0), NULL,
rsc, generate_op_key(rsc->id, RSC_MIGRATED, 0), NULL, pe_order_optional, data_set);
} else {
set_bit(migrate_from->flags, pe_action_migrate_runnable);
set_bit(migrate_to->flags, pe_action_migrate_runnable);
migrate_to->needs = start->needs;
custom_action_order(rsc, generate_op_key(rsc->id, RSC_STATUS, 0), NULL,
rsc, generate_op_key(rsc->id, RSC_MIGRATE, 0), NULL, pe_order_optional, data_set);
custom_action_order(rsc, generate_op_key(rsc->id, RSC_MIGRATE, 0), NULL,
rsc, generate_op_key(rsc->id, RSC_MIGRATED, 0), NULL, pe_order_optional | pe_order_implies_first_migratable, data_set);
}
custom_action_order(rsc, generate_op_key(rsc->id, RSC_MIGRATED, 0), NULL,
rsc, generate_op_key(rsc->id, RSC_STOP, 0), NULL, pe_order_optional | pe_order_implies_first_migratable, data_set);
custom_action_order(rsc, generate_op_key(rsc->id, RSC_MIGRATED, 0), NULL,
rsc, generate_op_key(rsc->id, RSC_START, 0), NULL, pe_order_optional | pe_order_implies_first_migratable | pe_order_pseudo_left, data_set);
}
if (migrate_to) {
add_hash_param(migrate_to->meta, XML_LRM_ATTR_MIGRATE_SOURCE, current->details->uname);
add_hash_param(migrate_to->meta, XML_LRM_ATTR_MIGRATE_TARGET, chosen->details->uname);
/* Pacemaker Remote connections don't require the pending state to be
 * recorded in the CIB. We can optimize CIB writes by setting PENDING only
 * for resources that are not Pacemaker Remote connections. */
if (rsc->is_remote_node == FALSE) {
/* migrate_to takes place on the source node, but can have an effect on
 * the target node depending on how the agent is written. Because of this,
 * we have to keep a record that the migrate_to occurred, in case the
 * source node loses membership while the migrate_to action is still
 * in flight. */
add_hash_param(migrate_to->meta, XML_OP_ATTR_PENDING, "true");
}
}
if (migrate_from) {
add_hash_param(migrate_from->meta, XML_LRM_ATTR_MIGRATE_SOURCE, current->details->uname);
add_hash_param(migrate_from->meta, XML_LRM_ATTR_MIGRATE_TARGET, chosen->details->uname);
}
}
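/* Create all actions needed to take a primitive resource from its current
 * role to its next role: fix up an unknown next role, handle multiple-active
 * recovery and partial migrations, walk the role matrices to schedule
 * stop/start/demote/promote, then add recurring operations and, if allowed,
 * live migration actions.
 */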
void
native_create_actions(resource_t * rsc, pe_working_set_t * data_set)
{
action_t *start = NULL;
node_t *chosen = NULL;
node_t *current = NULL;
gboolean need_stop = FALSE;
gboolean is_moving = FALSE;
gboolean allow_migrate = is_set(rsc->flags, pe_rsc_allow_migrate) ? TRUE : FALSE;
GListPtr gIter = NULL;
int num_active_nodes = 0;
enum rsc_role_e role = RSC_ROLE_UNKNOWN;
enum rsc_role_e next_role = RSC_ROLE_UNKNOWN;
CRM_ASSERT(rsc);
chosen = rsc->allocated_to;
if (chosen != NULL && rsc->next_role == RSC_ROLE_UNKNOWN) {
rsc->next_role = RSC_ROLE_STARTED;
pe_rsc_trace(rsc, "Fixed next_role: unknown -> %s", role2text(rsc->next_role));
} else if (rsc->next_role == RSC_ROLE_UNKNOWN) {
rsc->next_role = RSC_ROLE_STOPPED;
pe_rsc_trace(rsc, "Fixed next_role: unknown -> %s", role2text(rsc->next_role));
}
pe_rsc_trace(rsc, "Processing state transition for %s %p: %s->%s", rsc->id, rsc,
role2text(rsc->role), role2text(rsc->next_role));
if (rsc->running_on) {
current = rsc->running_on->data;
}
for (gIter = rsc->running_on; gIter != NULL; gIter = gIter->next) {
node_t *n = (node_t *) gIter->data;
if (rsc->partial_migration_source &&
(n->details == rsc->partial_migration_source->details)) {
current = rsc->partial_migration_source;
}
num_active_nodes++;
}
for (gIter = rsc->dangling_migrations; gIter != NULL; gIter = gIter->next) {
node_t *current = (node_t *) gIter->data;
action_t *stop = stop_action(rsc, current, FALSE);
set_bit(stop->flags, pe_action_dangle);
pe_rsc_trace(rsc, "Forcing a cleanup of %s on %s", rsc->id, current->details->uname);
if (is_set(data_set->flags, pe_flag_remove_after_stop)) {
DeleteRsc(rsc, current, FALSE, data_set);
}
}
if (num_active_nodes > 1) {
if (num_active_nodes == 2
&& chosen
&& rsc->partial_migration_target
&& rsc->partial_migration_source
&& (current->details == rsc->partial_migration_source->details)
&& (chosen->details == rsc->partial_migration_target->details)) {
/* Here the chosen node is still the migration target from a partial
* migration. Attempt to continue the migration instead of recovering
* by stopping the resource everywhere and starting it on a single node. */
pe_rsc_trace(rsc,
"Will attempt to continue with a partial migration to target %s from %s",
rsc->partial_migration_target->details->id,
rsc->partial_migration_source->details->id);
} else {
const char *type = crm_element_value(rsc->xml, XML_ATTR_TYPE);
const char *class = crm_element_value(rsc->xml, XML_AGENT_ATTR_CLASS);
if(rsc->partial_migration_target && rsc->partial_migration_source) {
crm_notice("Resource %s can no longer migrate to %s. Stopping on %s too", rsc->id,
rsc->partial_migration_target->details->uname,
rsc->partial_migration_source->details->uname);
} else {
pe_proc_err("Resource %s (%s::%s) is active on %d nodes %s",
rsc->id, class, type, num_active_nodes, recovery2text(rsc->recovery_type));
crm_warn("See %s for more information.",
"http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active");
}
if (rsc->recovery_type == recovery_stop_start) {
need_stop = TRUE;
}
/* If a partial migration is in progress but the migration target was not
 * chosen as the destination, clear all partial migration data. */
rsc->partial_migration_source = rsc->partial_migration_target = NULL;
allow_migrate = FALSE;
}
}
if (is_set(rsc->flags, pe_rsc_start_pending)) {
start = start_action(rsc, chosen, TRUE);
set_bit(start->flags, pe_action_print_always);
}
if (current && chosen && current->details != chosen->details) {
pe_rsc_trace(rsc, "Moving %s", rsc->id);
is_moving = TRUE;
need_stop = TRUE;
} else if (is_set(rsc->flags, pe_rsc_failed)) {
pe_rsc_trace(rsc, "Recovering %s", rsc->id);
need_stop = TRUE;
} else if (is_set(rsc->flags, pe_rsc_block)) {
pe_rsc_trace(rsc, "Block %s", rsc->id);
need_stop = TRUE;
} else if (rsc->role > RSC_ROLE_STARTED && current != NULL && chosen != NULL) {
/* Recovery of a promoted resource */
start = start_action(rsc, chosen, TRUE);
if (is_set(start->flags, pe_action_optional) == FALSE) {
pe_rsc_trace(rsc, "Forced start %s", rsc->id);
need_stop = TRUE;
}
}
pe_rsc_trace(rsc, "Creating actions for %s: %s->%s", rsc->id,
role2text(rsc->role), role2text(rsc->next_role));
/* Create any additional actions required when bringing resource down and
* back up to same level.
*/
role = rsc->role;
while (role != RSC_ROLE_STOPPED) {
next_role = rsc_state_matrix[role][RSC_ROLE_STOPPED];
pe_rsc_trace(rsc, "Down: Executing: %s->%s (%s)%s", role2text(role), role2text(next_role),
rsc->id, need_stop ? " required" : "");
if (rsc_action_matrix[role][next_role] (rsc, current, !need_stop, data_set) == FALSE) {
break;
}
role = next_role;
}
while (rsc->role <= rsc->next_role && role != rsc->role && is_not_set(rsc->flags, pe_rsc_block)) {
next_role = rsc_state_matrix[role][rsc->role];
pe_rsc_trace(rsc, "Up: Executing: %s->%s (%s)%s", role2text(role), role2text(next_role),
rsc->id, need_stop ? " required" : "");
if (rsc_action_matrix[role][next_role] (rsc, chosen, !need_stop, data_set) == FALSE) {
break;
}
role = next_role;
}
role = rsc->role;
/* Required steps from this role to the next */
while (role != rsc->next_role) {
next_role = rsc_state_matrix[role][rsc->next_role];
pe_rsc_trace(rsc, "Role: Executing: %s->%s = (%s on %s)", role2text(role), role2text(next_role), rsc->id, chosen?chosen->details->uname:"NA");
if (rsc_action_matrix[role][next_role] (rsc, chosen, FALSE, data_set) == FALSE) {
break;
}
role = next_role;
}
if(is_set(rsc->flags, pe_rsc_block)) {
pe_rsc_trace(rsc, "No monitor additional ops for blocked resource");
} else if (rsc->next_role != RSC_ROLE_STOPPED || is_set(rsc->flags, pe_rsc_managed) == FALSE) {
pe_rsc_trace(rsc, "Monitor ops for active resource");
start = start_action(rsc, chosen, TRUE);
Recurring(rsc, start, chosen, data_set);
Recurring_Stopped(rsc, start, chosen, data_set);
} else {
pe_rsc_trace(rsc, "Monitor ops for in-active resource");
Recurring_Stopped(rsc, NULL, NULL, data_set);
}
/* If we are stuck in a partial migration and the migration target no longer
 * matches the chosen node, a full stop/start is required. */
if (rsc->partial_migration_target && (chosen == NULL || rsc->partial_migration_target->details != chosen->details)) {
pe_rsc_trace(rsc, "Not allowing partial migration to continue. %s", rsc->id);
allow_migrate = FALSE;
} else if (is_moving == FALSE ||
is_not_set(rsc->flags, pe_rsc_managed) ||
is_set(rsc->flags, pe_rsc_failed) ||
is_set(rsc->flags, pe_rsc_start_pending) ||
(current->details->unclean == TRUE) ||
rsc->next_role < RSC_ROLE_STARTED) {
allow_migrate = FALSE;
}
if (allow_migrate) {
handle_migration_actions(rsc, current, chosen, data_set);
}
}
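/* Ban the resource from all Pacemaker Remote nodes by setting their weight
 * to -INFINITY.
 */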
static void
rsc_avoids_remote_nodes(resource_t *rsc)
{
GHashTableIter iter;
node_t *node = NULL;
g_hash_table_iter_init(&iter, rsc->allowed_nodes);
while (g_hash_table_iter_next(&iter, NULL, (void **)&node)) {
if (node->details->remote_rsc) {
node->weight = -INFINITY;
}
}
}
void
native_internal_constraints(resource_t * rsc, pe_working_set_t * data_set)
{
/* This function is on the critical path and worth optimizing as much as possible */
resource_t *top = uber_parent(rsc);
int type = pe_order_optional | pe_order_implies_then | pe_order_restart;
gboolean is_stonith = is_set(rsc->flags, pe_rsc_fence_device);
custom_action_order(rsc, generate_op_key(rsc->id, RSC_STOP, 0), NULL,
rsc, generate_op_key(rsc->id, RSC_START, 0), NULL, type, data_set);
if (top->variant == pe_master || rsc->role > RSC_ROLE_SLAVE) {
custom_action_order(rsc, generate_op_key(rsc->id, RSC_DEMOTE, 0), NULL,
rsc, generate_op_key(rsc->id, RSC_STOP, 0), NULL,
pe_order_implies_first_master, data_set);
custom_action_order(rsc, generate_op_key(rsc->id, RSC_START, 0), NULL,
rsc, generate_op_key(rsc->id, RSC_PROMOTE, 0), NULL,
pe_order_runnable_left, data_set);
}
if (is_stonith == FALSE
&& is_set(data_set->flags, pe_flag_enable_unfencing)
&& is_set(rsc->flags, pe_rsc_needs_unfencing)) {
/* Check if the node needs to be unfenced first */
node_t *node = NULL;
GHashTableIter iter;
g_hash_table_iter_init(&iter, rsc->allowed_nodes);
while (g_hash_table_iter_next(&iter, NULL, (void **)&node)) {
action_t *unfence = pe_fence_op(node, "on", TRUE, data_set);
crm_debug("Ordering any stops of %s before %s, and any starts after",
rsc->id, unfence->uuid);
/*
* It would be more efficient to order clone resources once,
* rather than order each instance, but ordering the instance
* allows us to avoid unnecessary dependencies that might conflict
* with user constraints.
*
* @TODO: This constraint can still produce a transition loop if the
* resource has a stop scheduled on the node being unfenced, and
* there is a user ordering constraint to start some other resource
* (which will be ordered after the unfence) before stopping this
* resource. An example is "start some slow-starting cloned service
* before stopping an associated virtual IP that may be moving to
* it":
* stop this -> unfencing -> start that -> stop this
*/
custom_action_order(rsc, stop_key(rsc), NULL,
NULL, strdup(unfence->uuid), unfence,
pe_order_optional|pe_order_same_node, data_set);
custom_action_order(NULL, strdup(unfence->uuid), unfence,
rsc, start_key(rsc), NULL,
pe_order_implies_then_on_node|pe_order_same_node,
data_set);
}
}
if (is_not_set(rsc->flags, pe_rsc_managed)) {
pe_rsc_trace(rsc, "Skipping fencing constraints for unmanaged resource: %s", rsc->id);
return;
}
{
action_t *all_stopped = get_pseudo_op(ALL_STOPPED, data_set);
custom_action_order(rsc, stop_key(rsc), NULL,
NULL, strdup(all_stopped->task), all_stopped,
pe_order_implies_then | pe_order_runnable_left, data_set);
}
if (g_hash_table_size(rsc->utilization) > 0
&& safe_str_neq(data_set->placement_strategy, "default")) {
GHashTableIter iter;
node_t *next = NULL;
GListPtr gIter = NULL;
pe_rsc_trace(rsc, "Creating utilization constraints for %s - strategy: %s",
rsc->id, data_set->placement_strategy);
for (gIter = rsc->running_on; gIter != NULL; gIter = gIter->next) {
node_t *current = (node_t *) gIter->data;
char *load_stopped_task = crm_concat(LOAD_STOPPED, current->details->uname, '_');
action_t *load_stopped = get_pseudo_op(load_stopped_task, data_set);
if (load_stopped->node == NULL) {
load_stopped->node = node_copy(current);
update_action_flags(load_stopped, pe_action_optional | pe_action_clear, __FUNCTION__, __LINE__);
}
custom_action_order(rsc, stop_key(rsc), NULL,
NULL, load_stopped_task, load_stopped, pe_order_load, data_set);
}
g_hash_table_iter_init(&iter, rsc->allowed_nodes);
while (g_hash_table_iter_next(&iter, NULL, (void **)&next)) {
char *load_stopped_task = crm_concat(LOAD_STOPPED, next->details->uname, '_');
action_t *load_stopped = get_pseudo_op(load_stopped_task, data_set);
if (load_stopped->node == NULL) {
load_stopped->node = node_copy(next);
update_action_flags(load_stopped, pe_action_optional | pe_action_clear, __FUNCTION__, __LINE__);
}
custom_action_order(NULL, strdup(load_stopped_task), load_stopped,
rsc, start_key(rsc), NULL, pe_order_load, data_set);
custom_action_order(NULL, strdup(load_stopped_task), load_stopped,
rsc, generate_op_key(rsc->id, RSC_MIGRATE, 0), NULL,
pe_order_load, data_set);
free(load_stopped_task);
}
}
if (rsc->container) {
resource_t *remote_rsc = NULL;
/* A user can specify that a resource must start on a Pacemaker Remote
* node by explicitly configuring it with the container=NODENAME
* meta-attribute. This is of questionable merit, since location
* constraints can accomplish the same thing. But we support it, so here
* we check whether a resource (that is not itself a remote connection)
* has container set to a remote node or guest node resource.
*/
if (rsc->container->is_remote_node) {
remote_rsc = rsc->container;
} else if (rsc->is_remote_node == FALSE) {
remote_rsc = rsc_contains_remote_node(data_set, rsc->container);
}
if (remote_rsc) {
/* The container represents a Pacemaker Remote node, so force the
* resource on the Pacemaker Remote node instead of colocating the
* resource with the container resource.
*/
GHashTableIter iter;
node_t *node = NULL;
g_hash_table_iter_init(&iter, rsc->allowed_nodes);
while (g_hash_table_iter_next(&iter, NULL, (void **)&node)) {
if (node->details->remote_rsc != remote_rsc) {
node->weight = -INFINITY;
}
}
} else {
/* This resource is either a filler for a container that does NOT
* represent a Pacemaker Remote node, or a Pacemaker Remote
* connection resource for a guest node or bundle.
*/
int score;
crm_trace("Order and colocate %s relative to its container %s",
rsc->id, rsc->container->id);
custom_action_order(rsc->container, generate_op_key(rsc->container->id, RSC_START, 0), NULL,
rsc, generate_op_key(rsc->id, RSC_START, 0), NULL,
pe_order_implies_then | pe_order_runnable_left, data_set);
custom_action_order(rsc, generate_op_key(rsc->id, RSC_STOP, 0), NULL,
rsc->container, generate_op_key(rsc->container->id, RSC_STOP, 0), NULL,
pe_order_implies_first, data_set);
if (is_set(rsc->flags, pe_rsc_allow_remote_remotes)) {
score = 10000; /* Highly preferred but not essential */
} else {
score = INFINITY; /* Force them to run on the same host */
}
rsc_colocation_new("resource-with-container", NULL, score, rsc,
rsc->container, NULL, NULL, data_set);
}
}
if (rsc->is_remote_node || is_stonith) {
/* Don't allow remote nodes to run stonith devices
 * or remote connection resources. */
rsc_avoids_remote_nodes(rsc);
}
/* If this is a guest node's implicit remote connection, do not allow the
* guest resource to live on a Pacemaker Remote node, to avoid nesting
* remotes. However, allow bundles to run on remote nodes.
*/
if (rsc->is_remote_node && rsc->container
&& is_not_set(rsc->flags, pe_rsc_allow_remote_remotes)) {
rsc_avoids_remote_nodes(rsc->container);
}
}
void
native_rsc_colocation_lh(resource_t * rsc_lh, resource_t * rsc_rh, rsc_colocation_t * constraint)
{
if (rsc_lh == NULL) {
pe_err("rsc_lh was NULL for %s", constraint->id);
return;
} else if (constraint->rsc_rh == NULL) {
pe_err("rsc_rh was NULL for %s", constraint->id);
return;
}
pe_rsc_trace(rsc_lh, "Processing colocation constraint between %s and %s", rsc_lh->id,
rsc_rh->id);
rsc_rh->cmds->rsc_colocation_rh(rsc_lh, rsc_rh, constraint);
}
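/* Decide how a colocation constraint should influence rsc_lh: not at all
 * (influence_nothing), by adjusting allowed-node weights
 * (influence_rsc_location), or by adjusting promotion priority
 * (influence_rsc_priority).
 */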
enum filter_colocation_res
filter_colocation_constraint(resource_t * rsc_lh, resource_t * rsc_rh,
rsc_colocation_t * constraint, gboolean preview)
{
if (constraint->score == 0) {
return influence_nothing;
}
/* rh side must be allocated before we can process constraint */
if (preview == FALSE && is_set(rsc_rh->flags, pe_rsc_provisional)) {
return influence_nothing;
}
if ((constraint->role_lh >= RSC_ROLE_SLAVE) &&
rsc_lh->parent &&
rsc_lh->parent->variant == pe_master && is_not_set(rsc_lh->flags, pe_rsc_provisional)) {
/* LH and RH resources have already been allocated, place the correct
* priority on the LH rsc for the given multistate resource role */
return influence_rsc_priority;
}
if (preview == FALSE && is_not_set(rsc_lh->flags, pe_rsc_provisional)) {
/* error check */
struct node_shared_s *details_lh;
struct node_shared_s *details_rh;
if ((constraint->score > -INFINITY) && (constraint->score < INFINITY)) {
return influence_nothing;
}
details_rh = rsc_rh->allocated_to ? rsc_rh->allocated_to->details : NULL;
details_lh = rsc_lh->allocated_to ? rsc_lh->allocated_to->details : NULL;
if (constraint->score == INFINITY && details_lh != details_rh) {
crm_err("%s and %s are both allocated"
" but to different nodes: %s vs. %s",
rsc_lh->id, rsc_rh->id,
details_lh ? details_lh->uname : "n/a", details_rh ? details_rh->uname : "n/a");
} else if (constraint->score == -INFINITY && details_lh == details_rh) {
crm_err("%s and %s are both allocated"
" but to the SAME node: %s",
rsc_lh->id, rsc_rh->id, details_rh ? details_rh->uname : "n/a");
}
return influence_nothing;
}
if (constraint->score > 0
&& constraint->role_lh != RSC_ROLE_UNKNOWN && constraint->role_lh != rsc_lh->next_role) {
crm_trace("LH: Skipping constraint: \"%s\" state filter; next role is %s",
role2text(constraint->role_lh), role2text(rsc_lh->next_role));
return influence_nothing;
}
if (constraint->score > 0
&& constraint->role_rh != RSC_ROLE_UNKNOWN && constraint->role_rh != rsc_rh->next_role) {
crm_trace("RH: Skipping constraint: \"%s\" state filter", role2text(constraint->role_rh));
return influence_nothing;
}
if (constraint->score < 0
&& constraint->role_lh != RSC_ROLE_UNKNOWN && constraint->role_lh == rsc_lh->next_role) {
crm_trace("LH: Skipping -ve constraint: \"%s\" state filter",
role2text(constraint->role_lh));
return influence_nothing;
}
if (constraint->score < 0
&& constraint->role_rh != RSC_ROLE_UNKNOWN && constraint->role_rh == rsc_rh->next_role) {
crm_trace("RH: Skipping -ve constraint: \"%s\" state filter",
role2text(constraint->role_rh));
return influence_nothing;
}
return influence_rsc_location;
}
static void
influence_priority(resource_t * rsc_lh, resource_t * rsc_rh, rsc_colocation_t * constraint)
{
const char *rh_value = NULL;
const char *lh_value = NULL;
const char *attribute = "#id";
int score_multiplier = 1;
if (constraint->node_attribute != NULL) {
attribute = constraint->node_attribute;
}
if (!rsc_rh->allocated_to || !rsc_lh->allocated_to) {
return;
}
lh_value = g_hash_table_lookup(rsc_lh->allocated_to->details->attrs, attribute);
rh_value = g_hash_table_lookup(rsc_rh->allocated_to->details->attrs, attribute);
if (!safe_str_eq(lh_value, rh_value)) {
if(constraint->score == INFINITY && constraint->role_lh == RSC_ROLE_MASTER) {
rsc_lh->priority = -INFINITY;
}
return;
}
if (constraint->role_rh && (constraint->role_rh != rsc_rh->next_role)) {
return;
}
if (constraint->role_lh == RSC_ROLE_SLAVE) {
score_multiplier = -1;
}
rsc_lh->priority = merge_weights(score_multiplier * constraint->score, rsc_lh->priority);
}
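/* Apply a location-influencing colocation: raise or lower rsc_lh's
 * allowed-node weights depending on whether each node shares the constraint's
 * node attribute value with the node rsc_rh is allocated to, rolling back if
 * the result would leave no runnable node (unless the score is +/-INFINITY).
 */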
static void
colocation_match(resource_t * rsc_lh, resource_t * rsc_rh, rsc_colocation_t * constraint)
{
const char *tmp = NULL;
const char *value = NULL;
const char *attribute = "#id";
GHashTable *work = NULL;
gboolean do_check = FALSE;
GHashTableIter iter;
node_t *node = NULL;
if (constraint->node_attribute != NULL) {
attribute = constraint->node_attribute;
}
if (rsc_rh->allocated_to) {
value = g_hash_table_lookup(rsc_rh->allocated_to->details->attrs, attribute);
do_check = TRUE;
} else if (constraint->score < 0) {
/* nothing to do:
* anti-colocation with something that is not running
*/
return;
}
work = node_hash_dup(rsc_lh->allowed_nodes);
g_hash_table_iter_init(&iter, work);
while (g_hash_table_iter_next(&iter, NULL, (void **)&node)) {
tmp = g_hash_table_lookup(node->details->attrs, attribute);
if (do_check && safe_str_eq(tmp, value)) {
if (constraint->score < INFINITY) {
pe_rsc_trace(rsc_lh, "%s: %s.%s += %d", constraint->id, rsc_lh->id,
node->details->uname, constraint->score);
node->weight = merge_weights(constraint->score, node->weight);
}
} else if (do_check == FALSE || constraint->score >= INFINITY) {
pe_rsc_trace(rsc_lh, "%s: %s.%s -= %d (%s)", constraint->id, rsc_lh->id,
node->details->uname, constraint->score,
do_check ? "failed" : "unallocated");
node->weight = merge_weights(-constraint->score, node->weight);
}
}
if (can_run_any(work)
|| constraint->score <= -INFINITY || constraint->score >= INFINITY) {
g_hash_table_destroy(rsc_lh->allowed_nodes);
rsc_lh->allowed_nodes = work;
work = NULL;
} else {
static char score[33];
score2char_stack(constraint->score, score, sizeof(score));
pe_rsc_info(rsc_lh, "%s: Rolling back scores from %s (%d, %s)",
rsc_lh->id, rsc_rh->id, do_check, score);
}
if (work) {
g_hash_table_destroy(work);
}
}
void
native_rsc_colocation_rh(resource_t * rsc_lh, resource_t * rsc_rh, rsc_colocation_t * constraint)
{
enum filter_colocation_res filter_results;
CRM_ASSERT(rsc_lh);
CRM_ASSERT(rsc_rh);
filter_results = filter_colocation_constraint(rsc_lh, rsc_rh, constraint, FALSE);
pe_rsc_trace(rsc_lh, "%sColocating %s with %s (%s, weight=%d, filter=%d)",
constraint->score >= 0 ? "" : "Anti-",
rsc_lh->id, rsc_rh->id, constraint->id, constraint->score, filter_results);
switch (filter_results) {
case influence_rsc_priority:
influence_priority(rsc_lh, rsc_rh, constraint);
break;
case influence_rsc_location:
pe_rsc_trace(rsc_lh, "%sColocating %s with %s (%s, weight=%d)",
constraint->score >= 0 ? "" : "Anti-",
rsc_lh->id, rsc_rh->id, constraint->id, constraint->score);
colocation_match(rsc_lh, rsc_rh, constraint);
break;
case influence_nothing:
default:
return;
}
}
static gboolean
filter_rsc_ticket(resource_t * rsc_lh, rsc_ticket_t * rsc_ticket)
{
if (rsc_ticket->role_lh != RSC_ROLE_UNKNOWN && rsc_ticket->role_lh != rsc_lh->role) {
pe_rsc_trace(rsc_lh, "LH: Skipping constraint: \"%s\" state filter",
role2text(rsc_ticket->role_lh));
return FALSE;
}
return TRUE;
}
void
rsc_ticket_constraint(resource_t * rsc_lh, rsc_ticket_t * rsc_ticket, pe_working_set_t * data_set)
{
if (rsc_ticket == NULL) {
pe_err("rsc_ticket was NULL");
return;
}
if (rsc_lh == NULL) {
pe_err("rsc_lh was NULL for %s", rsc_ticket->id);
return;
}
if (rsc_ticket->ticket->granted && rsc_ticket->ticket->standby == FALSE) {
return;
}
if (rsc_lh->children) {
GListPtr gIter = rsc_lh->children;
pe_rsc_trace(rsc_lh, "Processing ticket dependencies from %s", rsc_lh->id);
for (; gIter != NULL; gIter = gIter->next) {
resource_t *child_rsc = (resource_t *) gIter->data;
rsc_ticket_constraint(child_rsc, rsc_ticket, data_set);
}
return;
}
pe_rsc_trace(rsc_lh, "%s: Processing ticket dependency on %s (%s, %s)",
rsc_lh->id, rsc_ticket->ticket->id, rsc_ticket->id,
role2text(rsc_ticket->role_lh));
if (rsc_ticket->ticket->granted == FALSE && g_list_length(rsc_lh->running_on) > 0) {
GListPtr gIter = NULL;
switch (rsc_ticket->loss_policy) {
case loss_ticket_stop:
resource_location(rsc_lh, NULL, -INFINITY, "__loss_of_ticket__", data_set);
break;
case loss_ticket_demote:
/*Promotion score will be set to -INFINITY in master_promotion_order() */
if (rsc_ticket->role_lh != RSC_ROLE_MASTER) {
resource_location(rsc_lh, NULL, -INFINITY, "__loss_of_ticket__", data_set);
}
break;
case loss_ticket_fence:
if (filter_rsc_ticket(rsc_lh, rsc_ticket) == FALSE) {
return;
}
resource_location(rsc_lh, NULL, -INFINITY, "__loss_of_ticket__", data_set);
for (gIter = rsc_lh->running_on; gIter != NULL; gIter = gIter->next) {
node_t *node = (node_t *) gIter->data;
pe_fence_node(data_set, node, "deadman ticket was lost");
}
break;
case loss_ticket_freeze:
if (filter_rsc_ticket(rsc_lh, rsc_ticket) == FALSE) {
return;
}
if (g_list_length(rsc_lh->running_on) > 0) {
clear_bit(rsc_lh->flags, pe_rsc_managed);
set_bit(rsc_lh->flags, pe_rsc_block);
}
break;
}
} else if (rsc_ticket->ticket->granted == FALSE) {
if (rsc_ticket->role_lh != RSC_ROLE_MASTER || rsc_ticket->loss_policy == loss_ticket_stop) {
resource_location(rsc_lh, NULL, -INFINITY, "__no_ticket__", data_set);
}
} else if (rsc_ticket->ticket->standby) {
if (rsc_ticket->role_lh != RSC_ROLE_MASTER || rsc_ticket->loss_policy == loss_ticket_stop) {
resource_location(rsc_lh, NULL, -INFINITY, "__ticket_standby__", data_set);
}
}
}
enum pe_action_flags
native_action_flags(action_t * action, node_t * node)
{
return action->flags;
}
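/* Propagate ordering-constraint effects between a 'first' and 'then' action
 * pair, clearing optional/runnable/migratable flags as required by the
 * ordering type, and report which of the two actions changed so the caller
 * can continue updating the graph.
 */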
enum pe_graph_flags
native_update_actions(action_t * first, action_t * then, node_t * node, enum pe_action_flags flags,
enum pe_action_flags filter, enum pe_ordering type)
{
/* flags == get_action_flags(first, then_node) called from update_action() */
enum pe_graph_flags changed = pe_graph_none;
enum pe_action_flags then_flags = then->flags;
enum pe_action_flags first_flags = first->flags;
crm_trace( "Testing %s on %s (0x%.6x) with %s 0x%.6x",
first->uuid, first->node ? first->node->details->uname : "[none]",
first->flags, then->uuid, then->flags);
if (type & pe_order_asymmetrical) {
resource_t *then_rsc = then->rsc;
enum rsc_role_e then_rsc_role = then_rsc ? then_rsc->fns->state(then_rsc, TRUE) : 0;
if (!then_rsc) {
/* ignore */
} else if ((then_rsc_role == RSC_ROLE_STOPPED) && safe_str_eq(then->task, RSC_STOP)) {
/* ignore... if 'then' is supposed to be stopped after 'first', but
* then is already stopped, there is nothing to be done when non-symmetrical. */
} else if ((then_rsc_role >= RSC_ROLE_STARTED)
&& safe_str_eq(then->task, RSC_START)
&& then->node
&& then_rsc->running_on
&& g_list_length(then_rsc->running_on) == 1
&& then->node->details == ((node_t *) then_rsc->running_on->data)->details) {
/* ignore... if 'then' is supposed to be started after 'first', but
* then is already started, there is nothing to be done when non-symmetrical. */
} else if (!(first->flags & pe_action_runnable)) {
/* prevent 'then' action from happening if 'first' is not runnable and
* 'then' has not yet occurred. */
pe_clear_action_bit(then, pe_action_runnable);
pe_clear_action_bit(then, pe_action_optional);
pe_rsc_trace(then->rsc, "Unset optional and runnable on %s", then->uuid);
} else {
/* ignore... then is allowed to start/stop if it wants to. */
}
}
if (type & pe_order_implies_first) {
if ((filter & pe_action_optional) && (flags & pe_action_optional) == 0) {
pe_rsc_trace(first->rsc, "Unset optional on %s because of %s", first->uuid, then->uuid);
pe_clear_action_bit(first, pe_action_optional);
}
if (is_set(flags, pe_action_migrate_runnable) &&
is_set(then->flags, pe_action_migrate_runnable) == FALSE &&
is_set(then->flags, pe_action_optional) == FALSE) {
pe_rsc_trace(first->rsc, "Unset migrate runnable on %s because of %s",
first->uuid, then->uuid);
pe_clear_action_bit(first, pe_action_migrate_runnable);
}
}
if (type & pe_order_implies_first_master) {
if ((filter & pe_action_optional) &&
((then->flags & pe_action_optional) == FALSE) &&
then->rsc && (then->rsc->role == RSC_ROLE_MASTER)) {
pe_clear_action_bit(first, pe_action_optional);
if (is_set(first->flags, pe_action_migrate_runnable) &&
is_set(then->flags, pe_action_migrate_runnable) == FALSE) {
pe_rsc_trace(first->rsc, "Unset migrate runnable on %s because of %s", first->uuid, then->uuid);
pe_clear_action_bit(first, pe_action_migrate_runnable);
}
pe_rsc_trace(then->rsc, "Unset optional on %s because of %s", first->uuid, then->uuid);
}
}
if ((type & pe_order_implies_first_migratable)
&& is_set(filter, pe_action_optional)) {
if (((then->flags & pe_action_migrate_runnable) == FALSE) ||
((then->flags & pe_action_runnable) == FALSE)) {
pe_rsc_trace(then->rsc, "Unset runnable on %s because %s is neither runnable nor migratable", first->uuid, then->uuid);
pe_clear_action_bit(first, pe_action_runnable);
}
if ((then->flags & pe_action_optional) == 0) {
pe_rsc_trace(then->rsc, "Unset optional on %s because %s is not optional", first->uuid, then->uuid);
pe_clear_action_bit(first, pe_action_optional);
}
}
if ((type & pe_order_pseudo_left)
&& is_set(filter, pe_action_optional)) {
if ((first->flags & pe_action_runnable) == FALSE) {
pe_clear_action_bit(then, pe_action_migrate_runnable);
pe_clear_action_bit(then, pe_action_pseudo);
pe_rsc_trace(then->rsc, "Unset pseudo on %s because %s is not runnable", then->uuid, first->uuid);
}
}
if (is_set(type, pe_order_runnable_left)
&& is_set(filter, pe_action_runnable)
&& is_set(then->flags, pe_action_runnable)
&& is_set(flags, pe_action_runnable) == FALSE) {
pe_rsc_trace(then->rsc, "Unset runnable on %s because of %s", then->uuid, first->uuid);
pe_clear_action_bit(then, pe_action_runnable);
pe_clear_action_bit(then, pe_action_migrate_runnable);
}
if (is_set(type, pe_order_implies_then)
&& is_set(filter, pe_action_optional)
&& is_set(then->flags, pe_action_optional)
&& is_set(flags, pe_action_optional) == FALSE) {
/* in this case, treat migrate_runnable as if first is optional */
if (is_set(first->flags, pe_action_migrate_runnable) == FALSE) {
pe_rsc_trace(then->rsc, "Unset optional on %s because of %s", then->uuid, first->uuid);
pe_clear_action_bit(then, pe_action_optional);
}
}
if (is_set(type, pe_order_restart)) {
const char *reason = NULL;
CRM_ASSERT(first->rsc && first->rsc->variant == pe_native);
CRM_ASSERT(then->rsc && then->rsc->variant == pe_native);
if ((filter & pe_action_runnable)
&& (then->flags & pe_action_runnable) == 0
&& (then->rsc->flags & pe_rsc_managed)) {
reason = "shutdown";
}
if ((filter & pe_action_optional) && (then->flags & pe_action_optional) == 0) {
reason = "recover";
}
if (reason && is_set(first->flags, pe_action_optional)) {
if (is_set(first->flags, pe_action_runnable)
|| is_not_set(then->flags, pe_action_optional)) {
pe_rsc_trace(first->rsc, "Handling %s: %s -> %s", reason, first->uuid, then->uuid);
pe_clear_action_bit(first, pe_action_optional);
}
}
if (reason && is_not_set(first->flags, pe_action_optional)
&& is_not_set(first->flags, pe_action_runnable)) {
pe_rsc_trace(then->rsc, "Handling %s: %s -> %s", reason, first->uuid, then->uuid);
pe_clear_action_bit(then, pe_action_runnable);
}
if (reason &&
is_not_set(first->flags, pe_action_optional) &&
is_set(first->flags, pe_action_migrate_runnable) &&
is_not_set(then->flags, pe_action_migrate_runnable)) {
pe_clear_action_bit(first, pe_action_migrate_runnable);
}
}
if (then_flags != then->flags) {
changed |= pe_graph_updated_then;
pe_rsc_trace(then->rsc,
"Then: Flags for %s on %s are now 0x%.6x (was 0x%.6x) because of %s 0x%.6x",
then->uuid, then->node ? then->node->details->uname : "[none]", then->flags,
then_flags, first->uuid, first->flags);
if(then->rsc && then->rsc->parent) {
/* "X_stop then X_start" doesn't get handled for cloned groups unless we do this */
update_action(then);
}
}
if (first_flags != first->flags) {
changed |= pe_graph_updated_first;
pe_rsc_trace(first->rsc,
"First: Flags for %s on %s are now 0x%.6x (was 0x%.6x) because of %s 0x%.6x",
first->uuid, first->node ? first->node->details->uname : "[none]",
first->flags, first_flags, then->uuid, then->flags);
}
return changed;
}
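/* Apply a location constraint to a primitive resource: skip it if its role
 * filter or lifetime makes it inactive, otherwise merge the constraint's node
 * scores into rsc->allowed_nodes (copying in nodes not already listed) and
 * raise each node's resource-discovery mode when the constraint asks for a
 * stricter one (exclusive > never > always).
 */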
void
native_rsc_location(resource_t * rsc, rsc_to_node_t * constraint)
{
GListPtr gIter = NULL;
GHashTableIter iter;
node_t *node = NULL;
if (constraint == NULL) {
pe_err("Constraint is NULL");
return;
} else if (rsc == NULL) {
pe_err("LHS of rsc_to_node (%s) is NULL", constraint->id);
return;
}
pe_rsc_trace(rsc, "Applying %s (%s) to %s", constraint->id,
role2text(constraint->role_filter), rsc->id);
/* take "lifetime" into account */
if (constraint->role_filter > RSC_ROLE_UNKNOWN && constraint->role_filter != rsc->next_role) {
pe_rsc_debug(rsc, "Constraint (%s) is not active (role : %s vs. %s)",
constraint->id, role2text(constraint->role_filter), role2text(rsc->next_role));
return;
} else if (is_active(constraint) == FALSE) {
pe_rsc_trace(rsc, "Constraint (%s) is not active", constraint->id);
return;
}
if (constraint->node_list_rh == NULL) {
pe_rsc_trace(rsc, "RHS of constraint %s is NULL", constraint->id);
return;
}
for (gIter = constraint->node_list_rh; gIter != NULL; gIter = gIter->next) {
node_t *node = (node_t *) gIter->data;
node_t *other_node = NULL;
other_node = (node_t *) pe_hash_table_lookup(rsc->allowed_nodes, node->details->id);
if (other_node != NULL) {
pe_rsc_trace(rsc, "%s + %s: %d + %d",
node->details->uname,
other_node->details->uname, node->weight, other_node->weight);
other_node->weight = merge_weights(other_node->weight, node->weight);
} else {
other_node = node_copy(node);
g_hash_table_insert(rsc->allowed_nodes, (gpointer) other_node->details->id, other_node);
}
if (other_node->rsc_discover_mode < constraint->discover_mode) {
if (constraint->discover_mode == discover_exclusive) {
rsc->exclusive_discover = TRUE;
}
/* exclusive > never > always... always is default */
other_node->rsc_discover_mode = constraint->discover_mode;
}
}
g_hash_table_iter_init(&iter, rsc->allowed_nodes);
while (g_hash_table_iter_next(&iter, NULL, (void **)&node)) {
pe_rsc_trace(rsc, "%s + %s : %d", rsc->id, node->details->uname, node->weight);
}
}
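/* Add every action scheduled for this resource to the transition graph, then
 * recurse into any children so their actions are added as well.
 */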
void
native_expand(resource_t * rsc, pe_working_set_t * data_set)
{
GListPtr gIter = NULL;
CRM_ASSERT(rsc);
pe_rsc_trace(rsc, "Processing actions from %s", rsc->id);
for (gIter = rsc->actions; gIter != NULL; gIter = gIter->next) {
action_t *action = (action_t *) gIter->data;
crm_trace("processing action %d for rsc=%s", action->id, rsc->id);
graph_element_from_action(action, data_set);
}
for (gIter = rsc->children; gIter != NULL; gIter = gIter->next) {
resource_t *child_rsc = (resource_t *) gIter->data;
child_rsc->cmds->expand(child_rsc, data_set);
}
}
#define log_change(fmt, args...) do { \
if(terminal) { \
printf(" * "fmt"\n", ##args); \
} else { \
crm_notice(fmt, ##args); \
} \
} while(0)
#define STOP_SANITY_ASSERT(lineno) do { \
if(current && current->details->unclean) { \
/* It will be a pseudo op */ \
} else if(stop == NULL) { \
crm_err("%s:%d: No stop action exists for %s", __FUNCTION__, lineno, rsc->id); \
CRM_ASSERT(stop != NULL); \
} else if(is_set(stop->flags, pe_action_optional)) { \
crm_err("%s:%d: Action %s is still optional", __FUNCTION__, lineno, stop->uuid); \
CRM_ASSERT(is_not_set(stop->flags, pe_action_optional)); \
} \
} while(0)
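/* Summarize the planned transition for a resource in one line: Leave, Start,
 * Stop, Move, Migrate, Recover, Restart, Reload, Promote or Demote. Planned
 * changes go to stdout when 'terminal' is TRUE and to the cluster log
 * otherwise (via log_change above). Bundles are delegated to
 * container_LogActions() and child resources are handled recursively.
 */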
void
LogActions(resource_t * rsc, pe_working_set_t * data_set, gboolean terminal)
{
node_t *next = NULL;
node_t *current = NULL;
action_t *stop = NULL;
action_t *start = NULL;
action_t *demote = NULL;
action_t *promote = NULL;
char *key = NULL;
gboolean moving = FALSE;
GListPtr possible_matches = NULL;
if(rsc->variant == pe_container) {
container_LogActions(rsc, data_set, terminal);
return;
}
if (rsc->children) {
GListPtr gIter = NULL;
for (gIter = rsc->children; gIter != NULL; gIter = gIter->next) {
resource_t *child_rsc = (resource_t *) gIter->data;
LogActions(child_rsc, data_set, terminal);
}
return;
}
next = rsc->allocated_to;
if (rsc->running_on) {
if (g_list_length(rsc->running_on) > 1 && rsc->partial_migration_source) {
current = rsc->partial_migration_source;
} else {
current = rsc->running_on->data;
}
if (rsc->role == RSC_ROLE_STOPPED) {
/*
* This can occur when resources are being recovered
* We fiddle with the current role in native_create_actions()
*/
rsc->role = RSC_ROLE_STARTED;
}
}
if (current == NULL && is_set(rsc->flags, pe_rsc_orphan)) {
/* Don't log stopped orphans */
return;
}
if (is_not_set(rsc->flags, pe_rsc_managed)
|| (current == NULL && next == NULL)) {
pe_rsc_info(rsc, "Leave %s\t(%s%s)",
rsc->id, role2text(rsc->role), is_not_set(rsc->flags,
pe_rsc_managed) ? " unmanaged" : "");
return;
}
if (current != NULL && next != NULL && safe_str_neq(current->details->id, next->details->id)) {
moving = TRUE;
}
key = start_key(rsc);
possible_matches = find_actions(rsc->actions, key, next);
free(key);
if (possible_matches) {
start = possible_matches->data;
g_list_free(possible_matches);
}
key = stop_key(rsc);
if(start == NULL || is_set(start->flags, pe_action_runnable) == FALSE) {
possible_matches = find_actions(rsc->actions, key, NULL);
} else {
possible_matches = find_actions(rsc->actions, key, current);
}
free(key);
if (possible_matches) {
stop = possible_matches->data;
g_list_free(possible_matches);
}
key = promote_key(rsc);
possible_matches = find_actions(rsc->actions, key, next);
free(key);
if (possible_matches) {
promote = possible_matches->data;
g_list_free(possible_matches);
}
key = demote_key(rsc);
possible_matches = find_actions(rsc->actions, key, next);
free(key);
if (possible_matches) {
demote = possible_matches->data;
g_list_free(possible_matches);
}
if (rsc->role == rsc->next_role) {
action_t *migrate_to = NULL;
key = generate_op_key(rsc->id, RSC_MIGRATED, 0);
possible_matches = find_actions(rsc->actions, key, next);
free(key);
if (possible_matches) {
migrate_to = possible_matches->data;
}
CRM_CHECK(next != NULL,);
if (next == NULL) {
} else if (migrate_to && is_set(migrate_to->flags, pe_action_runnable) && current) {
log_change("Migrate %s\t(%s %s -> %s)",
rsc->id, role2text(rsc->role), current->details->uname,
next->details->uname);
} else if (is_set(rsc->flags, pe_rsc_reload)) {
log_change("Reload %s\t(%s %s)", rsc->id, role2text(rsc->role), next->details->uname);
} else if (start == NULL || is_set(start->flags, pe_action_optional)) {
pe_rsc_info(rsc, "Leave %s\t(%s %s)", rsc->id, role2text(rsc->role),
next->details->uname);
} else if (start && is_set(start->flags, pe_action_runnable) == FALSE) {
log_change("Stop %s\t(%s %s%s)", rsc->id, role2text(rsc->role), current?current->details->uname:"N/A",
stop && is_not_set(stop->flags, pe_action_runnable) ? " - blocked" : "");
STOP_SANITY_ASSERT(__LINE__);
} else if (moving && current) {
log_change("%s %s\t(%s %s -> %s)",
is_set(rsc->flags, pe_rsc_failed) ? "Recover" : "Move ",
rsc->id, role2text(rsc->role),
current->details->uname, next->details->uname);
} else if (is_set(rsc->flags, pe_rsc_failed)) {
log_change("Recover %s\t(%s %s)", rsc->id, role2text(rsc->role), next->details->uname);
STOP_SANITY_ASSERT(__LINE__);
} else {
log_change("Restart %s\t(%s %s)", rsc->id, role2text(rsc->role), next->details->uname);
/* STOP_SANITY_ASSERT(__LINE__); False positive for migrate-fail-7 */
}
g_list_free(possible_matches);
return;
}
if (rsc->role > RSC_ROLE_SLAVE && rsc->role > rsc->next_role) {
CRM_CHECK(current != NULL,);
if (current != NULL) {
gboolean allowed = FALSE;
if (demote != NULL && (demote->flags & pe_action_runnable)) {
allowed = TRUE;
}
log_change("Demote %s\t(%s -> %s %s%s)",
rsc->id,
role2text(rsc->role),
role2text(rsc->next_role),
current->details->uname, allowed ? "" : " - blocked");
if (stop != NULL && is_not_set(stop->flags, pe_action_optional)
&& rsc->next_role > RSC_ROLE_STOPPED && moving == FALSE) {
if (is_set(rsc->flags, pe_rsc_failed)) {
log_change("Recover %s\t(%s %s)",
rsc->id, role2text(rsc->role), next->details->uname);
STOP_SANITY_ASSERT(__LINE__);
} else if (is_set(rsc->flags, pe_rsc_reload)) {
log_change("Reload %s\t(%s %s)", rsc->id, role2text(rsc->role),
next->details->uname);
} else {
log_change("Restart %s\t(%s %s)",
rsc->id, role2text(rsc->next_role), next->details->uname);
STOP_SANITY_ASSERT(__LINE__);
}
}
}
} else if (rsc->next_role == RSC_ROLE_STOPPED) {
GListPtr gIter = NULL;
CRM_CHECK(current != NULL,);
key = stop_key(rsc);
for (gIter = rsc->running_on; gIter != NULL; gIter = gIter->next) {
node_t *node = (node_t *) gIter->data;
action_t *stop_op = NULL;
gboolean allowed = FALSE;
possible_matches = find_actions(rsc->actions, key, node);
if (possible_matches) {
stop_op = possible_matches->data;
g_list_free(possible_matches);
}
if (stop_op && (stop_op->flags & pe_action_runnable)) {
STOP_SANITY_ASSERT(__LINE__);
allowed = TRUE;
}
log_change("Stop %s\t(%s%s)", rsc->id, node->details->uname,
allowed ? "" : " - blocked");
}
free(key);
}
if (moving) {
log_change("Move %s\t(%s %s -> %s)",
rsc->id, role2text(rsc->next_role), current->details->uname,
next->details->uname);
STOP_SANITY_ASSERT(__LINE__);
}
if (rsc->role == RSC_ROLE_STOPPED) {
gboolean allowed = FALSE;
if (start && (start->flags & pe_action_runnable)) {
allowed = TRUE;
}
CRM_CHECK(next != NULL,);
if (next != NULL) {
log_change("Start %s\t(%s%s)", rsc->id, next->details->uname,
allowed ? "" : " - blocked");
}
if (allowed == FALSE) {
return;
}
}
if (rsc->next_role > RSC_ROLE_SLAVE && rsc->role < rsc->next_role) {
gboolean allowed = FALSE;
CRM_LOG_ASSERT(next);
if (stop != NULL && is_not_set(stop->flags, pe_action_optional)
&& rsc->role > RSC_ROLE_STOPPED) {
if (is_set(rsc->flags, pe_rsc_failed)) {
log_change("Recover %s\t(%s %s)",
rsc->id, role2text(rsc->role), next?next->details->uname:NULL);
STOP_SANITY_ASSERT(__LINE__);
} else if (is_set(rsc->flags, pe_rsc_reload)) {
log_change("Reload %s\t(%s %s)", rsc->id, role2text(rsc->role),
next?next->details->uname:NULL);
STOP_SANITY_ASSERT(__LINE__);
} else {
log_change("Restart %s\t(%s %s)",
rsc->id, role2text(rsc->role), next?next->details->uname:NULL);
STOP_SANITY_ASSERT(__LINE__);
}
}
if (promote && (promote->flags & pe_action_runnable)) {
allowed = TRUE;
}
log_change("Promote %s\t(%s -> %s %s%s)",
rsc->id,
role2text(rsc->role),
role2text(rsc->next_role),
next?next->details->uname:NULL,
allowed ? "" : " - blocked");
}
}
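/* StopRsc(), StartRsc(), PromoteRsc(), DemoteRsc(), RoleError() and NullOp()
 * below each create the actions for one step of a resource's role change;
 * they appear to be dispatched per (current role, next role) pair elsewhere
 * in this file. RoleError() flags an impossible transition and NullOp() is a
 * no-op placeholder.
 */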
gboolean
StopRsc(resource_t * rsc, node_t * next, gboolean optional, pe_working_set_t * data_set)
{
GListPtr gIter = NULL;
CRM_ASSERT(rsc);
pe_rsc_trace(rsc, "%s", rsc->id);
for (gIter = rsc->running_on; gIter != NULL; gIter = gIter->next) {
node_t *current = (node_t *) gIter->data;
action_t *stop;
if (rsc->partial_migration_target) {
if (rsc->partial_migration_target->details == current->details) {
pe_rsc_trace(rsc, "Filtered %s -> %s %s", current->details->uname,
next->details->uname, rsc->id);
continue;
} else {
pe_rsc_trace(rsc, "Forced on %s %s", current->details->uname, rsc->id);
optional = FALSE;
}
}
pe_rsc_trace(rsc, "%s on %s", rsc->id, current->details->uname);
stop = stop_action(rsc, current, optional);
if (is_not_set(rsc->flags, pe_rsc_managed)) {
update_action_flags(stop, pe_action_runnable | pe_action_clear, __FUNCTION__, __LINE__);
}
if (is_set(data_set->flags, pe_flag_remove_after_stop)) {
DeleteRsc(rsc, current, optional, data_set);
}
}
return TRUE;
}
gboolean
StartRsc(resource_t * rsc, node_t * next, gboolean optional, pe_working_set_t * data_set)
{
action_t *start = NULL;
CRM_ASSERT(rsc);
pe_rsc_trace(rsc, "%s on %s %d", rsc->id, next ? next->details->uname : "N/A", optional);
start = start_action(rsc, next, TRUE);
if (is_set(start->flags, pe_action_runnable) && optional == FALSE) {
update_action_flags(start, pe_action_optional | pe_action_clear, __FUNCTION__, __LINE__);
}
return TRUE;
}
gboolean
PromoteRsc(resource_t * rsc, node_t * next, gboolean optional, pe_working_set_t * data_set)
{
char *key = NULL;
GListPtr gIter = NULL;
gboolean runnable = TRUE;
GListPtr action_list = NULL;
CRM_ASSERT(rsc);
CRM_CHECK(next != NULL, return FALSE);
pe_rsc_trace(rsc, "%s on %s", rsc->id, next->details->uname);
key = start_key(rsc);
action_list = find_actions_exact(rsc->actions, key, next);
free(key);
for (gIter = action_list; gIter != NULL; gIter = gIter->next) {
action_t *start = (action_t *) gIter->data;
if (is_set(start->flags, pe_action_runnable) == FALSE) {
runnable = FALSE;
}
}
g_list_free(action_list);
if (runnable) {
promote_action(rsc, next, optional);
return TRUE;
}
pe_rsc_debug(rsc, "%s\tPromote %s (canceled)", next->details->uname, rsc->id);
key = promote_key(rsc);
action_list = find_actions_exact(rsc->actions, key, next);
free(key);
for (gIter = action_list; gIter != NULL; gIter = gIter->next) {
action_t *promote = (action_t *) gIter->data;
update_action_flags(promote, pe_action_runnable | pe_action_clear, __FUNCTION__, __LINE__);
}
g_list_free(action_list);
return TRUE;
}
gboolean
DemoteRsc(resource_t * rsc, node_t * next, gboolean optional, pe_working_set_t * data_set)
{
GListPtr gIter = NULL;
CRM_ASSERT(rsc);
pe_rsc_trace(rsc, "%s", rsc->id);
/* CRM_CHECK(rsc->next_role == RSC_ROLE_SLAVE, return FALSE); */
for (gIter = rsc->running_on; gIter != NULL; gIter = gIter->next) {
node_t *current = (node_t *) gIter->data;
pe_rsc_trace(rsc, "%s on %s", rsc->id, current->details->uname);
demote_action(rsc, current, optional);
}
return TRUE;
}
gboolean
RoleError(resource_t * rsc, node_t * next, gboolean optional, pe_working_set_t * data_set)
{
CRM_ASSERT(rsc);
crm_err("%s on %s", rsc->id, next ? next->details->uname : "N/A");
CRM_CHECK(FALSE, return FALSE);
return FALSE;
}
gboolean
NullOp(resource_t * rsc, node_t * next, gboolean optional, pe_working_set_t * data_set)
{
CRM_ASSERT(rsc);
pe_rsc_trace(rsc, "%s", rsc->id);
return FALSE;
}
gboolean
DeleteRsc(resource_t * rsc, node_t * node, gboolean optional, pe_working_set_t * data_set)
{
if (is_set(rsc->flags, pe_rsc_failed)) {
pe_rsc_trace(rsc, "Resource %s not deleted from %s: failed", rsc->id, node->details->uname);
return FALSE;
} else if (node == NULL) {
pe_rsc_trace(rsc, "Resource %s not deleted: NULL node", rsc->id);
return FALSE;
} else if (node->details->unclean || node->details->online == FALSE) {
pe_rsc_trace(rsc, "Resource %s not deleted from %s: unrunnable", rsc->id,
node->details->uname);
return FALSE;
}
crm_notice("Removing %s from %s", rsc->id, node->details->uname);
delete_action(rsc, node, optional);
new_rsc_order(rsc, RSC_STOP, rsc, RSC_DELETE,
optional ? pe_order_implies_then : pe_order_optional, data_set);
new_rsc_order(rsc, RSC_DELETE, rsc, RSC_START,
optional ? pe_order_implies_then : pe_order_optional, data_set);
return TRUE;
}
#include <../lib/pengine/unpack.h>
#define set_char(x) last_rsc_id[lpc] = x; complete = TRUE;
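/* A usage sketch for increment_clone() below: it bumps the numeric instance
 * suffix of a clone ID, e.g. "myclone:0" -> "myclone:1" and "myclone:9" ->
 * "myclone:10", reallocating (and freeing the old string) when an extra
 * digit is needed. The caller owns the returned string.
 */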
static char *
increment_clone(char *last_rsc_id)
{
int lpc = 0;
int len = 0;
char *tmp = NULL;
gboolean complete = FALSE;
CRM_CHECK(last_rsc_id != NULL, return NULL);
if (last_rsc_id != NULL) {
len = strlen(last_rsc_id);
}
lpc = len - 1;
while (complete == FALSE && lpc > 0) {
switch (last_rsc_id[lpc]) {
case 0:
lpc--;
break;
case '0':
set_char('1');
break;
case '1':
set_char('2');
break;
case '2':
set_char('3');
break;
case '3':
set_char('4');
break;
case '4':
set_char('5');
break;
case '5':
set_char('6');
break;
case '6':
set_char('7');
break;
case '7':
set_char('8');
break;
case '8':
set_char('9');
break;
case '9':
last_rsc_id[lpc] = '0';
lpc--;
break;
case ':':
tmp = last_rsc_id;
last_rsc_id = calloc(1, len + 2);
memcpy(last_rsc_id, tmp, len);
last_rsc_id[++lpc] = '1';
last_rsc_id[len] = '0';
last_rsc_id[len + 1] = 0;
complete = TRUE;
free(tmp);
break;
default:
crm_err("Unexpected char: %c (%d)", last_rsc_id[lpc], lpc);
return NULL;
break;
}
}
return last_rsc_id;
}
static node_t *
probe_grouped_clone(resource_t * rsc, node_t * node, pe_working_set_t * data_set)
{
node_t *running = NULL;
resource_t *top = uber_parent(rsc);
if (running == NULL && is_set(top->flags, pe_rsc_unique) == FALSE) {
/* Annoyingly we also need to check any other clone instances
* Clumsy, but it will work.
*
* An alternative would be to update known_on for every peer
* during process_rsc_state()
*
* This code desperately needs optimization
* ptest -x with 100 nodes, 100 clones and clone-max=10:
* No probes O(25s)
* Detection without clone loop O(3m)
* Detection with clone loop O(8m)
ptest[32211]: 2010/02/18_14:27:55 CRIT: stage5: Probing for unknown resources
ptest[32211]: 2010/02/18_14:33:39 CRIT: stage5: Done
ptest[32211]: 2010/02/18_14:35:05 CRIT: stage7: Updating action states
ptest[32211]: 2010/02/18_14:35:05 CRIT: stage7: Done
*/
char *clone_id = clone_zero(rsc->id);
resource_t *peer = pe_find_resource(top->children, clone_id);
while (peer && running == NULL) {
running = pe_hash_table_lookup(peer->known_on, node->details->id);
if (running != NULL) {
/* we already know the status of the resource on this node */
pe_rsc_trace(rsc, "Skipping active clone: %s", rsc->id);
free(clone_id);
return running;
}
clone_id = increment_clone(clone_id);
peer = pe_find_resource(data_set->resources, clone_id);
}
free(clone_id);
}
return running;
}
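/* Schedule a one-shot monitor ("probe") of rsc on node so the cluster can
 * learn whether the resource is already active there. The probe is skipped
 * when startup probes are disabled, when the state is already known, for
 * orphans, for resources inside a container, and on remote/guest nodes that
 * cannot legitimately run the resource. The expected result is derived from
 * the resource's current role, and the probe is ordered before the
 * resource's start (and any reload/stop) actions. Returns TRUE if a probe
 * was created.
 */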
gboolean
native_create_probe(resource_t * rsc, node_t * node, action_t * complete,
gboolean force, pe_working_set_t * data_set)
{
enum pe_ordering flags = pe_order_optional;
char *key = NULL;
action_t *probe = NULL;
node_t *running = NULL;
node_t *allowed = NULL;
resource_t *top = uber_parent(rsc);
static const char *rc_master = NULL;
static const char *rc_inactive = NULL;
if (rc_inactive == NULL) {
rc_inactive = crm_itoa(PCMK_OCF_NOT_RUNNING);
rc_master = crm_itoa(PCMK_OCF_RUNNING_MASTER);
}
CRM_CHECK(node != NULL, return FALSE);
if (force == FALSE && is_not_set(data_set->flags, pe_flag_startup_probes)) {
pe_rsc_trace(rsc, "Skipping active resource detection for %s", rsc->id);
return FALSE;
} else if (force == FALSE && is_container_remote_node(node)) {
pe_rsc_trace(rsc, "Skipping active resource detection for %s on container %s",
rsc->id, node->details->id);
return FALSE;
}
if (is_remote_node(node)) {
const char *class = crm_element_value(rsc->xml, XML_AGENT_ATTR_CLASS);
if (safe_str_eq(class, PCMK_RESOURCE_CLASS_STONITH)) {
pe_rsc_trace(rsc, "Skipping probe for %s on node %s, remote-nodes do not run stonith agents.", rsc->id, node->details->id);
return FALSE;
} else if (rsc_contains_remote_node(data_set, rsc)) {
pe_rsc_trace(rsc, "Skipping probe for %s on node %s, remote-nodes can not run resources that contain connection resources.", rsc->id, node->details->id);
return FALSE;
} else if (rsc->is_remote_node) {
pe_rsc_trace(rsc, "Skipping probe for %s on node %s, remote-nodes can not run connection resources", rsc->id, node->details->id);
return FALSE;
}
}
if (rsc->children) {
GListPtr gIter = NULL;
gboolean any_created = FALSE;
for (gIter = rsc->children; gIter != NULL; gIter = gIter->next) {
resource_t *child_rsc = (resource_t *) gIter->data;
any_created = child_rsc->cmds->create_probe(child_rsc, node, complete, force, data_set)
|| any_created;
}
return any_created;
} else if ((rsc->container) && (!rsc->is_remote_node)) {
pe_rsc_trace(rsc, "Skipping %s: it is within container %s", rsc->id, rsc->container->id);
return FALSE;
}
if (is_set(rsc->flags, pe_rsc_orphan)) {
pe_rsc_trace(rsc, "Skipping orphan: %s", rsc->id);
return FALSE;
}
running = g_hash_table_lookup(rsc->known_on, node->details->id);
if (running == NULL && is_set(rsc->flags, pe_rsc_unique) == FALSE) {
/* Anonymous clones */
if (rsc->parent == top) {
running = g_hash_table_lookup(rsc->parent->known_on, node->details->id);
} else {
/* Grouped anonymous clones need extra special handling */
running = probe_grouped_clone(rsc, node, data_set);
}
}
if (force == FALSE && running != NULL) {
/* we already know the status of the resource on this node */
pe_rsc_trace(rsc, "Skipping known: %s on %s", rsc->id, node->details->uname);
return FALSE;
}
allowed = g_hash_table_lookup(rsc->allowed_nodes, node->details->id);
if (rsc->exclusive_discover || top->exclusive_discover) {
if (allowed == NULL) {
/* exclusive discover is enabled and this node is not in the allowed list. */
return FALSE;
} else if (allowed->rsc_discover_mode != discover_exclusive) {
/* exclusive discover is enabled and this node is not marked
* as a node this resource should be discovered on */
return FALSE;
}
}
if (allowed && allowed->rsc_discover_mode == discover_never) {
/* this resource is marked as not needing to be discovered on this node */
return FALSE;
}
key = generate_op_key(rsc->id, RSC_STATUS, 0);
probe = custom_action(rsc, key, RSC_STATUS, node, FALSE, TRUE, data_set);
update_action_flags(probe, pe_action_optional | pe_action_clear, __FUNCTION__, __LINE__);
/* If enabled, require unfencing before probing any fence devices
* but ensure it happens after any resources that require
* unfencing have been probed.
*
* Doing it the other way (requiring unfencing after probing
* resources that need it) would result in the node being
* unfenced, and all its resources being stopped, whenever a new
* resource is added. Which would be highly suboptimal.
*
* So essentially, at the point the fencing device(s) have been
* probed, we know the state of all resources that require
* unfencing and that unfencing occurred.
*/
if(is_set(rsc->flags, pe_rsc_fence_device) && is_set(data_set->flags, pe_flag_enable_unfencing)) {
trigger_unfencing(NULL, node, "node discovery", probe, data_set);
probe->priority = INFINITY; /* Ensure this runs if unfencing succeeds */
} else if(is_set(rsc->flags, pe_rsc_needs_unfencing)) {
action_t *unfence = pe_fence_op(node, "on", TRUE, data_set);
order_actions(probe, unfence, pe_order_optional);
}
/*
* We need to know if it's running_on (not just known_on) this node
* to correctly determine the target rc.
*/
running = pe_find_node_id(rsc->running_on, node->details->id);
if (running == NULL) {
add_hash_param(probe->meta, XML_ATTR_TE_TARGET_RC, rc_inactive);
} else if (rsc->role == RSC_ROLE_MASTER) {
add_hash_param(probe->meta, XML_ATTR_TE_TARGET_RC, rc_master);
}
crm_debug("Probing %s on %s (%s) %d %p", rsc->id, node->details->uname, role2text(rsc->role),
is_set(probe->flags, pe_action_runnable), rsc->running_on);
if(is_set(rsc->flags, pe_rsc_fence_device) && is_set(data_set->flags, pe_flag_enable_unfencing)) {
top = rsc;
} else if (pe_rsc_is_clone(top) == FALSE) {
top = rsc;
} else {
crm_trace("Probing %s on %s (%s) as %s", rsc->id, node->details->uname, role2text(rsc->role), top->id);
}
if(is_not_set(probe->flags, pe_action_runnable) && rsc->running_on == NULL) {
- /* Prevent the start from occuring if rsc isn't active, but
+ /* Prevent the start from occurring if rsc isn't active, but
* don't cause it to stop if it was active already
*/
flags |= pe_order_runnable_left;
}
custom_action_order(rsc, NULL, probe,
top, generate_op_key(top->id, RSC_START, 0), NULL,
flags, data_set);
/* Before any reloads, if they exist */
custom_action_order(rsc, NULL, probe,
top, reload_key(rsc), NULL,
pe_order_optional, data_set);
if (node->details->shutdown == FALSE) {
custom_action_order(rsc, NULL, probe,
rsc, generate_op_key(rsc->id, RSC_STOP, 0), NULL,
pe_order_optional, data_set);
}
if(is_set(rsc->flags, pe_rsc_fence_device) && is_set(data_set->flags, pe_flag_enable_unfencing)) {
/* Normally rsc.start depends on probe complete which depends
* on rsc.probe. But this can't be the case in this scenario as
* it would create graph loops.
*
* So instead we explicitly order 'rsc.probe then rsc.start'
*/
} else {
order_actions(probe, complete, pe_order_implies_then);
}
return TRUE;
}
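/* When a node is about to be fenced, order this resource's actions relative
 * to the fencing: anything that needs fencing waits for the STONITH_DONE
 * pseudo-op, and a start of a resource whose state on the fencing target is
 * unknown waits for ALL_STOPPED, so nothing is started until the node has
 * been shot.
 */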
static void
native_start_constraints(resource_t * rsc, action_t * stonith_op, pe_working_set_t * data_set)
{
node_t *target;
GListPtr gIter = NULL;
action_t *all_stopped = get_pseudo_op(ALL_STOPPED, data_set);
action_t *stonith_done = get_pseudo_op(STONITH_DONE, data_set);
CRM_CHECK(stonith_op && stonith_op->node, return);
target = stonith_op->node;
for (gIter = rsc->actions; gIter != NULL; gIter = gIter->next) {
action_t *action = (action_t *) gIter->data;
if(action->needs == rsc_req_nothing) {
/* Anything other than start or promote requires nothing */
} else if (action->needs == rsc_req_stonith) {
order_actions(stonith_done, action, pe_order_optional);
} else if (safe_str_eq(action->task, RSC_START)
&& NULL == pe_hash_table_lookup(rsc->known_on, target->details->id)) {
/* if known == NULL, then we don't know if
* the resource is active on the node
* we're about to shoot
*
* in this case, regardless of action->needs,
* the only safe option is to wait until
* the node is shot before doing anything
* with the resource
*
* it's analogous to waiting for all the probes
* for rscX to complete before starting rscX
*
* the most likely explanation is that the
* DC died and took its status with it
*/
pe_rsc_debug(rsc, "Ordering %s after %s recovery", action->uuid,
target->details->uname);
order_actions(all_stopped, action, pe_order_optional | pe_order_runnable_left);
}
}
}
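/* Stops and demotes of this resource on a node being fenced can never
 * complete, so convert them into pseudo-actions implied by (and ordered
 * after) the fencing operation. For clones with notifications enabled, an
 * extra notification is created to break the ordering loop described below.
 */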
static void
native_stop_constraints(resource_t * rsc, action_t * stonith_op, pe_working_set_t * data_set)
{
char *key = NULL;
GListPtr gIter = NULL;
GListPtr action_list = NULL;
action_t *start = NULL;
resource_t *top = uber_parent(rsc);
node_t *target;
CRM_CHECK(stonith_op && stonith_op->node, return);
target = stonith_op->node;
/* Check whether the resource has a pending start action */
start = find_first_action(rsc->actions, NULL, CRMD_ACTION_START, NULL);
/* Get a list of stop actions potentially implied by the fencing */
key = stop_key(rsc);
action_list = find_actions(rsc->actions, key, target);
free(key);
for (gIter = action_list; gIter != NULL; gIter = gIter->next) {
action_t *action = (action_t *) gIter->data;
if (is_set(rsc->flags, pe_rsc_failed)) {
crm_notice("Stop of failed resource %s is implicit after %s is fenced",
rsc->id, target->details->uname);
} else {
crm_info("%s is implicit after %s is fenced",
action->uuid, target->details->uname);
}
/* The stop would never complete and is now implied by the fencing,
* so convert it into a pseudo-action.
*/
update_action_flags(action, pe_action_pseudo, __FUNCTION__, __LINE__);
update_action_flags(action, pe_action_runnable, __FUNCTION__, __LINE__);
update_action_flags(action, pe_action_implied_by_stonith, __FUNCTION__, __LINE__);
if(start == NULL || start->needs > rsc_req_quorum) {
enum pe_ordering flags = pe_order_optional;
action_t *parent_stop = find_first_action(top->actions, NULL, RSC_STOP, NULL);
if (target->details->remote_rsc) {
/* User constraints must not order a resource in a guest node
* relative to the guest node container resource. This flag
* marks constraints as generated by the cluster and thus
* immune to that check.
*/
flags |= pe_order_preserve;
}
order_actions(stonith_op, action, flags);
order_actions(stonith_op, parent_stop, flags);
}
if (is_set(rsc->flags, pe_rsc_notify)) {
/* Create a second notification that will be delivered
* immediately after the node is fenced
*
* Basic problem:
* - C is a clone active on the node to be shot and stopping on another
* - R is a resource that depends on C
*
* + C.stop depends on R.stop
* + C.stopped depends on STONITH
* + C.notify depends on C.stopped
* + C.healthy depends on C.notify
* + R.stop depends on C.healthy
*
* The extra notification here changes
* + C.healthy depends on C.notify
* into:
* + C.healthy depends on C.notify'
* + C.notify' depends on STONITH'
* thus breaking the loop
*/
create_secondary_notification(action, rsc, stonith_op, data_set);
}
/* From Bug #1601, successful fencing must be an input to a failed resource's stop action.
However given group(rA, rB) running on nodeX and B.stop has failed,
A := stop healthy resource (rA.stop)
B := stop failed resource (pseudo operation B.stop)
C := stonith nodeX
A requires B, B requires C, C requires A
This loop would prevent the cluster from making progress.
This block creates the "C requires A" dependency and therefore must (at least
for now) be disabled.
Instead, run the block above and treat all resources on nodeX as B would be
(marked as a pseudo op depending on the STONITH).
TODO: Break the "A requires B" dependency in update_action() and re-enable this block
} else if(is_stonith == FALSE) {
crm_info("Moving healthy resource %s"
" off %s before fencing",
rsc->id, node->details->uname);
* stop healthy resources before the
* stonith op
*
custom_action_order(
rsc, stop_key(rsc), NULL,
NULL,strdup(CRM_OP_FENCE),stonith_op,
pe_order_optional, data_set);
*/
}
g_list_free(action_list);
/* Get a list of demote actions potentially implied by the fencing */
key = demote_key(rsc);
action_list = find_actions(rsc->actions, key, target);
free(key);
for (gIter = action_list; gIter != NULL; gIter = gIter->next) {
action_t *action = (action_t *) gIter->data;
if (action->node->details->online == FALSE || action->node->details->unclean == TRUE
|| is_set(rsc->flags, pe_rsc_failed)) {
if (is_set(rsc->flags, pe_rsc_failed)) {
pe_rsc_info(rsc,
"Demote of failed resource %s is implicit after %s is fenced",
rsc->id, target->details->uname);
} else {
pe_rsc_info(rsc, "%s is implicit after %s is fenced",
action->uuid, target->details->uname);
}
/* The demote would never complete and is now implied by the
* fencing, so convert it into a pseudo-action.
*/
update_action_flags(action, pe_action_pseudo, __FUNCTION__, __LINE__);
update_action_flags(action, pe_action_runnable, __FUNCTION__, __LINE__);
if (start == NULL || start->needs > rsc_req_quorum) {
order_actions(stonith_op, action, pe_order_preserve|pe_order_optional);
}
}
}
g_list_free(action_list);
}
void
rsc_stonith_ordering(resource_t * rsc, action_t * stonith_op, pe_working_set_t * data_set)
{
if (rsc->children) {
GListPtr gIter = NULL;
for (gIter = rsc->children; gIter != NULL; gIter = gIter->next) {
resource_t *child_rsc = (resource_t *) gIter->data;
rsc_stonith_ordering(child_rsc, stonith_op, data_set);
}
} else if (is_not_set(rsc->flags, pe_rsc_managed)) {
pe_rsc_trace(rsc, "Skipping fencing constraints for unmanaged resource: %s", rsc->id);
} else {
native_start_constraints(rsc, stonith_op, data_set);
native_stop_constraints(rsc, stonith_op, data_set);
}
}
enum stack_activity {
stack_stable = 0,
stack_starting = 1,
stack_stopping = 2,
stack_middle = 4,
};
static action_t *
get_first_named_action(resource_t * rsc, const char *action, gboolean only_valid, node_t * current)
{
action_t *a = NULL;
GListPtr action_list = NULL;
char *key = generate_op_key(rsc->id, action, 0);
action_list = find_actions(rsc->actions, key, current);
if (action_list == NULL || action_list->data == NULL) {
crm_trace("%s: no %s action", rsc->id, action);
free(key);
return NULL;
}
a = action_list->data;
g_list_free(action_list);
if (only_valid && is_set(a->flags, pe_action_pseudo)) {
crm_trace("%s: pseudo", key);
a = NULL;
} else if (only_valid && is_not_set(a->flags, pe_action_runnable)) {
crm_trace("%s: not runnable", key);
a = NULL;
}
free(key);
return a;
}
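/* Schedule an agent "reload" of the resource on the given node instead of a
 * full restart, ordering it before any valid stop or demote already
 * scheduled there. Unmanaged and inactive resources are left alone, while
 * failed or start-pending resources get a full stop instead of a reload.
 */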
void
ReloadRsc(resource_t * rsc, node_t *node, pe_working_set_t * data_set)
{
GListPtr gIter = NULL;
action_t *other = NULL;
action_t *reload = NULL;
if (rsc->children) {
for (gIter = rsc->children; gIter != NULL; gIter = gIter->next) {
resource_t *child_rsc = (resource_t *) gIter->data;
ReloadRsc(child_rsc, node, data_set);
}
return;
} else if (rsc->variant > pe_native) {
/* Complex resource with no children */
return;
} else if (is_not_set(rsc->flags, pe_rsc_managed)) {
pe_rsc_trace(rsc, "%s: unmanaged", rsc->id);
return;
} else if (is_set(rsc->flags, pe_rsc_failed) || is_set(rsc->flags, pe_rsc_start_pending)) {
pe_rsc_trace(rsc, "%s: general resource state: flags=0x%.16llx", rsc->id, rsc->flags);
stop_action(rsc, node, FALSE); /* Force a full restart, overkill? */
return;
} else if (node == NULL) {
pe_rsc_trace(rsc, "%s: not active", rsc->id);
return;
}
pe_rsc_trace(rsc, "Processing %s", rsc->id);
set_bit(rsc->flags, pe_rsc_reload);
reload = custom_action(
rsc, reload_key(rsc), CRMD_ACTION_RELOAD, node, FALSE, TRUE, data_set);
/* stop = stop_action(rsc, node, optional); */
other = get_first_named_action(rsc, RSC_STOP, TRUE, node);
if (other != NULL) {
order_actions(reload, other, pe_order_optional);
}
other = get_first_named_action(rsc, RSC_DEMOTE, TRUE, node);
if (other != NULL) {
order_actions(reload, other, pe_order_optional);
}
}
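/* Add this resource's relevant meta-attributes to the given XML as
 * CRM_meta_* entries: the clone instance number, the remote-node name, the
 * id of any containing container resource, and the isolation wrapper and
 * isolation instance when isolation is in use.
 */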
void
native_append_meta(resource_t * rsc, xmlNode * xml)
{
char *value = g_hash_table_lookup(rsc->meta, XML_RSC_ATTR_INCARNATION);
resource_t *iso_parent, *last_parent, *parent;
if (value) {
char *name = NULL;
name = crm_meta_name(XML_RSC_ATTR_INCARNATION);
crm_xml_add(xml, name, value);
free(name);
}
value = g_hash_table_lookup(rsc->meta, XML_RSC_ATTR_REMOTE_NODE);
if (value) {
char *name = NULL;
name = crm_meta_name(XML_RSC_ATTR_REMOTE_NODE);
crm_xml_add(xml, name, value);
free(name);
}
for (parent = rsc; parent != NULL; parent = parent->parent) {
if (parent->container) {
crm_xml_add(xml, CRM_META"_"XML_RSC_ATTR_CONTAINER, parent->container->id);
}
}
last_parent = iso_parent = rsc;
while (iso_parent != NULL) {
char *name = NULL;
char *iso = NULL;
if (iso_parent->isolation_wrapper == NULL) {
last_parent = iso_parent;
iso_parent = iso_parent->parent;
continue;
}
/* name of wrapper script this resource is routed through. */
name = crm_meta_name(XML_RSC_ATTR_ISOLATION_WRAPPER);
crm_xml_add(xml, name, iso_parent->isolation_wrapper);
free(name);
/* instance name for isolated environment */
name = crm_meta_name(XML_RSC_ATTR_ISOLATION_INSTANCE);
if (pe_rsc_is_clone(iso_parent)) {
/* if isolation is set at the clone/master level, we have to
* give this resource the unique isolation instance associated
* with the clone child (last_parent)*/
/* Example: cloned group. group is container
* clone myclone - iso_parent
* group mygroup - last_parent (this is the iso environment)
* rsc myrsc1 - rsc
* rsc myrsc2
* The group is what is isolated in example1. We have to make
* sure myrsc1 and myrsc2 launch in the same isolated environment.
*
* Example: cloned primitives. rsc primitive is container
* clone myclone iso_parent
* rsc myrsc1 - last_parent == rsc (this is the iso environment)
* The individual cloned primitive instances are isolated
*/
value = g_hash_table_lookup(last_parent->meta, XML_RSC_ATTR_INCARNATION);
CRM_ASSERT(value != NULL);
iso = crm_concat(crm_element_value(last_parent->xml, XML_ATTR_ID), value, '_');
crm_xml_add(xml, name, iso);
free(iso);
} else {
/*
* Example: cloned group of containers
* clone myclone
* group mygroup
* rsc myrsc1 - iso_parent (this is the iso environment)
* rsc myrsc2
*
* Example: group of containers
* group mygroup
* rsc myrsc1 - iso_parent (this is the iso environment)
* rsc myrsc2
*
* Example: group is container
* group mygroup - iso_parent ( this is iso environment)
* rsc myrsc1
* rsc myrsc2
*
* Example: single primitive
* rsc myrsc1 - iso_parent (this is the iso environment)
*/
value = g_hash_table_lookup(iso_parent->meta, XML_RSC_ATTR_INCARNATION);
if (value) {
crm_xml_add(xml, name, iso_parent->id);
iso = crm_concat(crm_element_value(iso_parent->xml, XML_ATTR_ID), value, '_');
crm_xml_add(xml, name, iso);
free(iso);
} else {
crm_xml_add(xml, name, iso_parent->id);
}
}
free(name);
break;
}
}
