diff --git a/cts/scheduler/summary/a-demote-then-b-migrate.summary b/cts/scheduler/summary/a-demote-then-b-migrate.summary
index 1f5c90b8ac..32c136e777 100644
--- a/cts/scheduler/summary/a-demote-then-b-migrate.summary
+++ b/cts/scheduler/summary/a-demote-then-b-migrate.summary
@@ -1,57 +1,57 @@
 Current cluster status:
   * Node List:
     * Online: [ node1 node2 ]
 
   * Full List of Resources:
     * Clone Set: ms1 [rsc1] (promotable):
       * Promoted: [ node1 ]
       * Unpromoted: [ node2 ]
     * rsc2 (ocf:pacemaker:Dummy): Started node1
 
 Transition Summary:
   * Demote rsc1:0 ( Promoted -> Unpromoted node1 )
   * Promote rsc1:1 ( Unpromoted -> Promoted node2 )
-  * Migrate rsc2 ( node1 -> node2 )
+  * Migrate rsc2 ( node1 -> node2 )
 
 Executing Cluster Transition:
   * Resource action: rsc1:1 cancel=5000 on node1
   * Resource action: rsc1:0 cancel=10000 on node2
   * Pseudo action: ms1_pre_notify_demote_0
   * Resource action: rsc1:1 notify on node1
   * Resource action: rsc1:0 notify on node2
   * Pseudo action: ms1_confirmed-pre_notify_demote_0
   * Pseudo action: ms1_demote_0
   * Resource action: rsc1:1 demote on node1
   * Pseudo action: ms1_demoted_0
   * Pseudo action: ms1_post_notify_demoted_0
   * Resource action: rsc1:1 notify on node1
   * Resource action: rsc1:0 notify on node2
   * Pseudo action: ms1_confirmed-post_notify_demoted_0
   * Pseudo action: ms1_pre_notify_promote_0
   * Resource action: rsc2 migrate_to on node1
   * Resource action: rsc1:1 notify on node1
   * Resource action: rsc1:0 notify on node2
   * Pseudo action: ms1_confirmed-pre_notify_promote_0
   * Resource action: rsc2 migrate_from on node2
   * Resource action: rsc2 stop on node1
   * Pseudo action: rsc2_start_0
   * Pseudo action: ms1_promote_0
   * Resource action: rsc2 monitor=5000 on node2
   * Resource action: rsc1:0 promote on node2
   * Pseudo action: ms1_promoted_0
   * Pseudo action: ms1_post_notify_promoted_0
   * Resource action: rsc1:1 notify on node1
   * Resource action: rsc1:0 notify on node2
   * Pseudo action: ms1_confirmed-post_notify_promoted_0
   * Resource action: rsc1:1 monitor=10000 on node1
   * Resource action: rsc1:0 monitor=5000 on node2
 
 Revised Cluster Status:
   * Node List:
     * Online: [ node1 node2 ]
 
   * Full List of Resources:
     * Clone Set: ms1 [rsc1] (promotable):
       * Promoted: [ node2 ]
       * Unpromoted: [ node1 ]
     * rsc2 (ocf:pacemaker:Dummy): Started node2
diff --git a/cts/scheduler/summary/a-promote-then-b-migrate.summary b/cts/scheduler/summary/a-promote-then-b-migrate.summary
index 17486c56f2..6489a4ff8e 100644
--- a/cts/scheduler/summary/a-promote-then-b-migrate.summary
+++ b/cts/scheduler/summary/a-promote-then-b-migrate.summary
@@ -1,42 +1,42 @@
 Current cluster status:
   * Node List:
     * Online: [ node1 node2 ]
 
   * Full List of Resources:
     * Clone Set: ms1 [rsc1] (promotable):
       * Promoted: [ node1 ]
       * Unpromoted: [ node2 ]
     * rsc2 (ocf:pacemaker:Dummy): Started node1
 
 Transition Summary:
   * Promote rsc1:1 ( Unpromoted -> Promoted node2 )
-  * Migrate rsc2 ( node1 -> node2 )
+  * Migrate rsc2 ( node1 -> node2 )
 
 Executing Cluster Transition:
   * Resource action: rsc1:1 cancel=10000 on node2
   * Pseudo action: ms1_pre_notify_promote_0
   * Resource action: rsc1:0 notify on node1
   * Resource action: rsc1:1 notify on node2
   * Pseudo action: ms1_confirmed-pre_notify_promote_0
   * Pseudo action: ms1_promote_0
   * Resource action: rsc1:1 promote on node2
   * Pseudo action: ms1_promoted_0
   * Pseudo action: ms1_post_notify_promoted_0
   * Resource action: rsc1:0 notify on node1
   * Resource action: rsc1:1 notify on node2
   * Pseudo action: ms1_confirmed-post_notify_promoted_0
   * Resource action: rsc2 migrate_to on node1
   * Resource action: rsc1:1 monitor=5000 on node2
   * Resource action: rsc2 migrate_from on node2
   * Resource action: rsc2 stop on node1
   * Pseudo action: rsc2_start_0
   * Resource action: rsc2 monitor=5000 on node2
 
 Revised Cluster Status:
   * Node List:
     * Online: [ node1 node2 ]
 
   * Full List of Resources:
     * Clone Set: ms1 [rsc1] (promotable):
       * Promoted: [ node1 node2 ]
     * rsc2 (ocf:pacemaker:Dummy): Started node2
diff --git a/cts/scheduler/summary/anti-colocation-promoted.summary b/cts/scheduler/summary/anti-colocation-promoted.summary
index c1b88cab47..2348f76f32 100644
--- a/cts/scheduler/summary/anti-colocation-promoted.summary
+++ b/cts/scheduler/summary/anti-colocation-promoted.summary
@@ -1,38 +1,38 @@
 Using the original execution date of: 2016-04-29 09:06:59Z
 Current cluster status:
   * Node List:
     * Online: [ sle12sp2-1 sle12sp2-2 ]
 
   * Full List of Resources:
     * st_sbd (stonith:external/sbd): Started sle12sp2-2
     * dummy1 (ocf:pacemaker:Dummy): Started sle12sp2-2
     * Clone Set: ms1 [state1] (promotable):
       * Promoted: [ sle12sp2-1 ]
       * Unpromoted: [ sle12sp2-2 ]
 
 Transition Summary:
-  * Move dummy1 ( sle12sp2-2 -> sle12sp2-1 )
+  * Move dummy1 ( sle12sp2-2 -> sle12sp2-1 )
   * Promote state1:0 ( Unpromoted -> Promoted sle12sp2-2 )
   * Demote state1:1 ( Promoted -> Unpromoted sle12sp2-1 )
 
 Executing Cluster Transition:
   * Resource action: dummy1 stop on sle12sp2-2
   * Pseudo action: ms1_demote_0
   * Resource action: state1 demote on sle12sp2-1
   * Pseudo action: ms1_demoted_0
   * Pseudo action: ms1_promote_0
   * Resource action: dummy1 start on sle12sp2-1
   * Resource action: state1 promote on sle12sp2-2
   * Pseudo action: ms1_promoted_0
 Using the original execution date of: 2016-04-29 09:06:59Z
 
 Revised Cluster Status:
   * Node List:
     * Online: [ sle12sp2-1 sle12sp2-2 ]
 
   * Full List of Resources:
     * st_sbd (stonith:external/sbd): Started sle12sp2-2
     * dummy1 (ocf:pacemaker:Dummy): Started sle12sp2-1
     * Clone Set: ms1 [state1] (promotable):
       * Promoted: [ sle12sp2-2 ]
       * Unpromoted: [ sle12sp2-1 ]
diff --git a/cts/scheduler/summary/anti-colocation-unpromoted.summary b/cts/scheduler/summary/anti-colocation-unpromoted.summary
index 42aa106b10..a7087bc819 100644
--- a/cts/scheduler/summary/anti-colocation-unpromoted.summary
+++ b/cts/scheduler/summary/anti-colocation-unpromoted.summary
@@ -1,36 +1,36 @@
 Current cluster status:
   * Node List:
     * Online: [ sle12sp2-1 sle12sp2-2 ]
 
   * Full List of Resources:
     * st_sbd (stonith:external/sbd): Started sle12sp2-1
     * Clone Set: ms1 [state1] (promotable):
       * Promoted: [ sle12sp2-1 ]
       * Unpromoted: [ sle12sp2-2 ]
     * dummy1 (ocf:pacemaker:Dummy): Started sle12sp2-1
 
 Transition Summary:
   * Demote state1:0 ( Promoted -> Unpromoted sle12sp2-1 )
   * Promote state1:1 ( Unpromoted -> Promoted sle12sp2-2 )
-  * Move dummy1 ( sle12sp2-1 -> sle12sp2-2 )
+  * Move dummy1 ( sle12sp2-1 -> sle12sp2-2 )
 
 Executing Cluster Transition:
   * Resource action: dummy1 stop on sle12sp2-1
   * Pseudo action: ms1_demote_0
   * Resource action: state1 demote on sle12sp2-1
   * Pseudo action: ms1_demoted_0
   * Pseudo action: ms1_promote_0
   * Resource action: state1 promote on sle12sp2-2
   * Pseudo action: ms1_promoted_0
   * Resource action: dummy1 start on sle12sp2-2
 
 Revised Cluster Status:
   * Node List:
     * Online: [ sle12sp2-1 sle12sp2-2 ]
 
   * Full List of Resources:
     * st_sbd (stonith:external/sbd): Started sle12sp2-1
     * Clone Set: ms1 [state1] (promotable):
       * Promoted: [ sle12sp2-2 ]
       * Unpromoted: [ sle12sp2-1 ]
     * dummy1 (ocf:pacemaker:Dummy): Started sle12sp2-2
diff --git a/cts/scheduler/summary/bug-1572-1.summary b/cts/scheduler/summary/bug-1572-1.summary
index c572db21d5..16870b2286 100644
--- a/cts/scheduler/summary/bug-1572-1.summary
+++ b/cts/scheduler/summary/bug-1572-1.summary
@@ -1,85 +1,85 @@
 Current cluster status:
   * Node List:
     * Online: [ arc-dknightlx arc-tkincaidlx.wsicorp.com ]
 
   * Full List of Resources:
     * Clone Set: ms_drbd_7788 [rsc_drbd_7788] (promotable):
       * Promoted: [ arc-tkincaidlx.wsicorp.com ]
       * Unpromoted: [ arc-dknightlx ]
     * Resource Group: grp_pgsql_mirror:
       * fs_mirror (ocf:heartbeat:Filesystem): Started arc-tkincaidlx.wsicorp.com
       * pgsql_5555 (ocf:heartbeat:pgsql): Started arc-tkincaidlx.wsicorp.com
       * IPaddr_147_81_84_133 (ocf:heartbeat:IPaddr): Started arc-tkincaidlx.wsicorp.com
 
 Transition Summary:
-  * Stop rsc_drbd_7788:0 ( Unpromoted arc-dknightlx ) due to node availability
+  * Stop rsc_drbd_7788:0 ( Unpromoted arc-dknightlx ) due to node availability
   * Restart rsc_drbd_7788:1 ( Promoted arc-tkincaidlx.wsicorp.com ) due to resource definition change
-  * Restart fs_mirror ( arc-tkincaidlx.wsicorp.com ) due to required ms_drbd_7788 notified
-  * Restart pgsql_5555 ( arc-tkincaidlx.wsicorp.com ) due to required fs_mirror start
-  * Restart IPaddr_147_81_84_133 ( arc-tkincaidlx.wsicorp.com ) due to required pgsql_5555 start
+  * Restart fs_mirror ( arc-tkincaidlx.wsicorp.com ) due to required ms_drbd_7788 notified
+  * Restart pgsql_5555 ( arc-tkincaidlx.wsicorp.com ) due to required fs_mirror start
+  * Restart IPaddr_147_81_84_133 ( arc-tkincaidlx.wsicorp.com ) due to required pgsql_5555 start
 
 Executing Cluster Transition:
   * Pseudo action: ms_drbd_7788_pre_notify_demote_0
   * Pseudo action: grp_pgsql_mirror_stop_0
   * Resource action: IPaddr_147_81_84_133 stop on arc-tkincaidlx.wsicorp.com
   * Resource action: rsc_drbd_7788:0 notify on arc-dknightlx
   * Resource action: rsc_drbd_7788:1 notify on arc-tkincaidlx.wsicorp.com
   * Pseudo action: ms_drbd_7788_confirmed-pre_notify_demote_0
   * Resource action: pgsql_5555 stop on arc-tkincaidlx.wsicorp.com
   * Resource action: fs_mirror stop on arc-tkincaidlx.wsicorp.com
   * Pseudo action: grp_pgsql_mirror_stopped_0
   * Pseudo action: ms_drbd_7788_demote_0
   * Resource action: rsc_drbd_7788:1 demote on arc-tkincaidlx.wsicorp.com
   * Pseudo action: ms_drbd_7788_demoted_0
   * Pseudo action: ms_drbd_7788_post_notify_demoted_0
   * Resource action: rsc_drbd_7788:0 notify on arc-dknightlx
   * Resource action: rsc_drbd_7788:1 notify on arc-tkincaidlx.wsicorp.com
   * Pseudo action: ms_drbd_7788_confirmed-post_notify_demoted_0
   * Pseudo action: ms_drbd_7788_pre_notify_stop_0
   * Resource action: rsc_drbd_7788:0 notify on arc-dknightlx
   * Resource action: rsc_drbd_7788:1 notify on arc-tkincaidlx.wsicorp.com
   * Pseudo action: ms_drbd_7788_confirmed-pre_notify_stop_0
   * Pseudo action: ms_drbd_7788_stop_0
   * Resource action: rsc_drbd_7788:0 stop on arc-dknightlx
   * Resource action: rsc_drbd_7788:1 stop on arc-tkincaidlx.wsicorp.com
   * Pseudo action: ms_drbd_7788_stopped_0
   * Cluster action: do_shutdown on arc-dknightlx
   * Pseudo action: ms_drbd_7788_post_notify_stopped_0
   * Pseudo action: ms_drbd_7788_confirmed-post_notify_stopped_0
   * Pseudo action: ms_drbd_7788_pre_notify_start_0
   * Pseudo action: ms_drbd_7788_confirmed-pre_notify_start_0
   * Pseudo action: ms_drbd_7788_start_0
   * Resource action: rsc_drbd_7788:1 start on arc-tkincaidlx.wsicorp.com
   * Pseudo action: ms_drbd_7788_running_0
   * Pseudo action: ms_drbd_7788_post_notify_running_0
   * Resource action: rsc_drbd_7788:1 notify on arc-tkincaidlx.wsicorp.com
   * Pseudo action: ms_drbd_7788_confirmed-post_notify_running_0
   * Pseudo action: ms_drbd_7788_pre_notify_promote_0
   * Resource action: rsc_drbd_7788:1 notify on arc-tkincaidlx.wsicorp.com
   * Pseudo action: ms_drbd_7788_confirmed-pre_notify_promote_0
   * Pseudo action: ms_drbd_7788_promote_0
   * Resource action: rsc_drbd_7788:1 promote on arc-tkincaidlx.wsicorp.com
   * Pseudo action: ms_drbd_7788_promoted_0
   * Pseudo action: ms_drbd_7788_post_notify_promoted_0
   * Resource action: rsc_drbd_7788:1 notify on arc-tkincaidlx.wsicorp.com
   * Pseudo action: ms_drbd_7788_confirmed-post_notify_promoted_0
   * Pseudo action: grp_pgsql_mirror_start_0
   * Resource action: fs_mirror start on arc-tkincaidlx.wsicorp.com
   * Resource action: pgsql_5555 start on arc-tkincaidlx.wsicorp.com
   * Resource action: pgsql_5555 monitor=30000 on arc-tkincaidlx.wsicorp.com
   * Resource action: IPaddr_147_81_84_133 start on arc-tkincaidlx.wsicorp.com
   * Resource action: IPaddr_147_81_84_133 monitor=25000 on arc-tkincaidlx.wsicorp.com
   * Pseudo action: grp_pgsql_mirror_running_0
 
 Revised Cluster Status:
   * Node List:
     * Online: [ arc-dknightlx arc-tkincaidlx.wsicorp.com ]
 
   * Full List of Resources:
     * Clone Set: ms_drbd_7788 [rsc_drbd_7788] (promotable):
       * Promoted: [ arc-tkincaidlx.wsicorp.com ]
       * Stopped: [ arc-dknightlx ]
     * Resource Group: grp_pgsql_mirror:
       * fs_mirror (ocf:heartbeat:Filesystem): Started arc-tkincaidlx.wsicorp.com
       * pgsql_5555 (ocf:heartbeat:pgsql): Started arc-tkincaidlx.wsicorp.com
       * IPaddr_147_81_84_133 (ocf:heartbeat:IPaddr): Started arc-tkincaidlx.wsicorp.com
diff --git a/cts/scheduler/summary/bug-1572-2.summary b/cts/scheduler/summary/bug-1572-2.summary
index 012ca78dd6..c161239be2 100644
--- a/cts/scheduler/summary/bug-1572-2.summary
+++ b/cts/scheduler/summary/bug-1572-2.summary
@@ -1,61 +1,61 @@
 Current cluster status:
   * Node List:
     * Online: [ arc-dknightlx arc-tkincaidlx.wsicorp.com ]
 
   * Full List of Resources:
     * Clone Set: ms_drbd_7788 [rsc_drbd_7788] (promotable):
       * Promoted: [ arc-tkincaidlx.wsicorp.com ]
       * Unpromoted: [ arc-dknightlx ]
     * Resource Group: grp_pgsql_mirror:
       * fs_mirror (ocf:heartbeat:Filesystem): Started arc-tkincaidlx.wsicorp.com
       * pgsql_5555 (ocf:heartbeat:pgsql): Started arc-tkincaidlx.wsicorp.com
       * IPaddr_147_81_84_133 (ocf:heartbeat:IPaddr): Started arc-tkincaidlx.wsicorp.com
 
 Transition Summary:
-  * Stop rsc_drbd_7788:0 ( Unpromoted arc-dknightlx ) due to node availability
+  * Stop rsc_drbd_7788:0 ( Unpromoted arc-dknightlx ) due to node availability
   * Demote rsc_drbd_7788:1 ( Promoted -> Unpromoted arc-tkincaidlx.wsicorp.com )
-  * Stop fs_mirror ( arc-tkincaidlx.wsicorp.com ) due to node availability
-  * Stop pgsql_5555 ( arc-tkincaidlx.wsicorp.com ) due to node availability
-  * Stop IPaddr_147_81_84_133 ( arc-tkincaidlx.wsicorp.com ) due to node availability
+  * Stop fs_mirror ( arc-tkincaidlx.wsicorp.com ) due to node availability
+  * Stop pgsql_5555 ( arc-tkincaidlx.wsicorp.com ) due to node availability
+  * Stop IPaddr_147_81_84_133 ( arc-tkincaidlx.wsicorp.com ) due to node availability
 
 Executing Cluster Transition:
   * Pseudo action: ms_drbd_7788_pre_notify_demote_0
   * Pseudo action: grp_pgsql_mirror_stop_0
   * Resource action: IPaddr_147_81_84_133 stop on arc-tkincaidlx.wsicorp.com
   * Resource action: rsc_drbd_7788:0 notify on arc-dknightlx
   * Resource action: rsc_drbd_7788:1 notify on arc-tkincaidlx.wsicorp.com
   * Pseudo action: ms_drbd_7788_confirmed-pre_notify_demote_0
   * Resource action: pgsql_5555 stop on arc-tkincaidlx.wsicorp.com
   * Resource action: fs_mirror stop on arc-tkincaidlx.wsicorp.com
   * Pseudo action: grp_pgsql_mirror_stopped_0
   * Pseudo action: ms_drbd_7788_demote_0
   * Resource action: rsc_drbd_7788:1 demote on arc-tkincaidlx.wsicorp.com
   * Pseudo action: ms_drbd_7788_demoted_0
   * Pseudo action: ms_drbd_7788_post_notify_demoted_0
   * Resource action: rsc_drbd_7788:0 notify on arc-dknightlx
   * Resource action: rsc_drbd_7788:1 notify on arc-tkincaidlx.wsicorp.com
   * Pseudo action: ms_drbd_7788_confirmed-post_notify_demoted_0
   * Pseudo action: ms_drbd_7788_pre_notify_stop_0
   * Resource action: rsc_drbd_7788:0 notify on arc-dknightlx
   * Resource action: rsc_drbd_7788:1 notify on arc-tkincaidlx.wsicorp.com
   * Pseudo action: ms_drbd_7788_confirmed-pre_notify_stop_0
   * Pseudo action: ms_drbd_7788_stop_0
   * Resource action: rsc_drbd_7788:0 stop on arc-dknightlx
   * Pseudo action: ms_drbd_7788_stopped_0
   * Cluster action: do_shutdown on arc-dknightlx
   * Pseudo action: ms_drbd_7788_post_notify_stopped_0
   * Resource action: rsc_drbd_7788:1 notify on arc-tkincaidlx.wsicorp.com
   * Pseudo action: ms_drbd_7788_confirmed-post_notify_stopped_0
 
 Revised Cluster Status:
   * Node List:
     * Online: [ arc-dknightlx arc-tkincaidlx.wsicorp.com ]
 
   * Full List of Resources:
     * Clone Set: ms_drbd_7788 [rsc_drbd_7788] (promotable):
       * Unpromoted: [ arc-tkincaidlx.wsicorp.com ]
       * Stopped: [ arc-dknightlx ]
     * Resource Group: grp_pgsql_mirror:
       * fs_mirror (ocf:heartbeat:Filesystem): Stopped
       * pgsql_5555 (ocf:heartbeat:pgsql): Stopped
       * IPaddr_147_81_84_133 (ocf:heartbeat:IPaddr): Stopped
diff --git a/cts/scheduler/summary/bug-1685.summary b/cts/scheduler/summary/bug-1685.summary
index 044b018f19..2ed29bc0e1 100644
--- a/cts/scheduler/summary/bug-1685.summary
+++ b/cts/scheduler/summary/bug-1685.summary
@@ -1,38 +1,38 @@
 Current cluster status:
   * Node List:
     * Online: [ redun1 redun2 ]
 
   * Full List of Resources:
     * Clone Set: shared_storage [prim_shared_storage] (promotable):
       * Unpromoted: [ redun1 redun2 ]
     * shared_filesystem (ocf:heartbeat:Filesystem): Stopped
 
 Transition Summary:
   * Promote prim_shared_storage:0 ( Unpromoted -> Promoted redun2 )
-  * Start shared_filesystem ( redun2 )
+  * Start shared_filesystem ( redun2 )
 
 Executing Cluster Transition:
   * Pseudo action: shared_storage_pre_notify_promote_0
   * Resource action: prim_shared_storage:0 notify on redun2
   * Resource action: prim_shared_storage:1 notify on redun1
   * Pseudo action: shared_storage_confirmed-pre_notify_promote_0
   * Pseudo action: shared_storage_promote_0
   * Resource action: prim_shared_storage:0 promote on redun2
   * Pseudo action: shared_storage_promoted_0
   * Pseudo action: shared_storage_post_notify_promoted_0
   * Resource action: prim_shared_storage:0 notify on redun2
   * Resource action: prim_shared_storage:1 notify on redun1
   * Pseudo action: shared_storage_confirmed-post_notify_promoted_0
   * Resource action: shared_filesystem start on redun2
   * Resource action: prim_shared_storage:1 monitor=120000 on redun1
   * Resource action: shared_filesystem monitor=120000 on redun2
 
 Revised Cluster Status:
   * Node List:
     * Online: [ redun1 redun2 ]
 
   * Full List of Resources:
     * Clone Set: shared_storage [prim_shared_storage] (promotable):
       * Promoted: [ redun2 ]
       * Unpromoted: [ redun1 ]
     * shared_filesystem (ocf:heartbeat:Filesystem): Started redun2
diff --git a/cts/scheduler/summary/bug-5059.summary b/cts/scheduler/summary/bug-5059.summary
index c555d1dfb5..b3661e0ad6 100644
--- a/cts/scheduler/summary/bug-5059.summary
+++ b/cts/scheduler/summary/bug-5059.summary
@@ -1,77 +1,77 @@
 Current cluster status:
   * Node List:
     * Node gluster03.h: standby
     * Online: [ gluster01.h gluster02.h ]
     * OFFLINE: [ gluster04.h ]
 
   * Full List of Resources:
     * Clone Set: ms_stateful [g_stateful] (promotable):
       * Resource Group: g_stateful:0:
         * p_stateful1 (ocf:pacemaker:Stateful): Unpromoted gluster01.h
         * p_stateful2 (ocf:pacemaker:Stateful): Stopped
       * Resource Group: g_stateful:1:
         * p_stateful1 (ocf:pacemaker:Stateful): Unpromoted gluster02.h
         * p_stateful2 (ocf:pacemaker:Stateful): Stopped
       * Stopped: [ gluster03.h gluster04.h ]
     * Clone Set: c_dummy [p_dummy1]:
       * Started: [ gluster01.h gluster02.h ]
 
 Transition Summary:
-  * Promote p_stateful1:0 ( Unpromoted -> Promoted gluster01.h )
-  * Promote p_stateful2:0 ( Stopped -> Promoted gluster01.h )
-  * Start p_stateful2:1 ( gluster02.h )
+  * Promote p_stateful1:0 ( Unpromoted -> Promoted gluster01.h )
+  * Promote p_stateful2:0 ( Stopped -> Promoted gluster01.h )
+  * Start p_stateful2:1 ( gluster02.h )
 
 Executing Cluster Transition:
   * Pseudo action: ms_stateful_pre_notify_start_0
   * Resource action: iptest delete on gluster02.h
   * Resource action: ipsrc2 delete on gluster02.h
   * Resource action: p_stateful1:0 notify on gluster01.h
   * Resource action: p_stateful1:1 notify on gluster02.h
   * Pseudo action: ms_stateful_confirmed-pre_notify_start_0
   * Pseudo action: ms_stateful_start_0
   * Pseudo action: g_stateful:0_start_0
   * Resource action: p_stateful2:0 start on gluster01.h
   * Pseudo action: g_stateful:1_start_0
   * Resource action: p_stateful2:1 start on gluster02.h
   * Pseudo action: g_stateful:0_running_0
   * Pseudo action: g_stateful:1_running_0
   * Pseudo action: ms_stateful_running_0
   * Pseudo action: ms_stateful_post_notify_running_0
   * Resource action: p_stateful1:0 notify on gluster01.h
   * Resource action: p_stateful2:0 notify on gluster01.h
   * Resource action: p_stateful1:1 notify on gluster02.h
   * Resource action: p_stateful2:1 notify on gluster02.h
   * Pseudo action: ms_stateful_confirmed-post_notify_running_0
   * Pseudo action: ms_stateful_pre_notify_promote_0
   * Resource action: p_stateful1:0 notify on gluster01.h
   * Resource action: p_stateful2:0 notify on gluster01.h
   * Resource action: p_stateful1:1 notify on gluster02.h
   * Resource action: p_stateful2:1 notify on gluster02.h
   * Pseudo action: ms_stateful_confirmed-pre_notify_promote_0
   * Pseudo action: ms_stateful_promote_0
   * Pseudo action: g_stateful:0_promote_0
   * Resource action: p_stateful1:0 promote on gluster01.h
   * Resource action: p_stateful2:0 promote on gluster01.h
   * Pseudo action: g_stateful:0_promoted_0
   * Pseudo action: ms_stateful_promoted_0
   * Pseudo action: ms_stateful_post_notify_promoted_0
   * Resource action: p_stateful1:0 notify on gluster01.h
   * Resource action: p_stateful2:0 notify on gluster01.h
   * Resource action: p_stateful1:1 notify on gluster02.h
   * Resource action: p_stateful2:1 notify on gluster02.h
   * Pseudo action: ms_stateful_confirmed-post_notify_promoted_0
   * Resource action: p_stateful1:1 monitor=10000 on gluster02.h
   * Resource action: p_stateful2:1 monitor=10000 on gluster02.h
 
 Revised Cluster Status:
   * Node List:
     * Node gluster03.h: standby
     * Online: [ gluster01.h gluster02.h ]
     * OFFLINE: [ gluster04.h ]
 
   * Full List of Resources:
     * Clone Set: ms_stateful [g_stateful] (promotable):
       * Promoted: [ gluster01.h ]
       * Unpromoted: [ gluster02.h ]
     * Clone Set: c_dummy [p_dummy1]:
       * Started: [ gluster01.h gluster02.h ]
diff --git a/cts/scheduler/summary/bug-cl-5212.summary b/cts/scheduler/summary/bug-cl-5212.summary
index e7a6e26833..7cbe97558b 100644
--- a/cts/scheduler/summary/bug-cl-5212.summary
+++ b/cts/scheduler/summary/bug-cl-5212.summary
@@ -1,69 +1,69 @@
 Current cluster status:
   * Node List:
     * Node srv01: UNCLEAN (offline)
     * Node srv02: UNCLEAN (offline)
     * Online: [ srv03 ]
 
   * Full List of Resources:
     * Resource Group: grpStonith1:
       * prmStonith1-1 (stonith:external/ssh): Started srv02 (UNCLEAN)
     * Resource Group: grpStonith2:
       * prmStonith2-1 (stonith:external/ssh): Started srv01 (UNCLEAN)
     * Resource Group: grpStonith3:
       * prmStonith3-1 (stonith:external/ssh): Started srv01 (UNCLEAN)
     * Clone Set: msPostgresql [pgsql] (promotable):
       * pgsql (ocf:pacemaker:Stateful): Unpromoted srv02 (UNCLEAN)
       * pgsql (ocf:pacemaker:Stateful): Promoted srv01 (UNCLEAN)
       * Unpromoted: [ srv03 ]
     * Clone Set: clnPingd [prmPingd]:
       * prmPingd (ocf:pacemaker:ping): Started srv02 (UNCLEAN)
       * prmPingd (ocf:pacemaker:ping): Started srv01 (UNCLEAN)
       * Started: [ srv03 ]
 
 Transition Summary:
-  * Stop prmStonith1-1 ( srv02 ) blocked
-  * Stop prmStonith2-1 ( srv01 ) blocked
-  * Stop prmStonith3-1 ( srv01 ) due to node availability (blocked)
-  * Stop pgsql:0 ( Unpromoted srv02 ) due to node availability (blocked)
-  * Stop pgsql:1 ( Promoted srv01 ) due to node availability (blocked)
-  * Stop prmPingd:0 ( srv02 ) due to node availability (blocked)
-  * Stop prmPingd:1 ( srv01 ) due to node availability (blocked)
+  * Stop prmStonith1-1 ( srv02 ) blocked
+  * Stop prmStonith2-1 ( srv01 ) blocked
+  * Stop prmStonith3-1 ( srv01 ) due to node availability (blocked)
+  * Stop pgsql:0 ( Unpromoted srv02 ) due to node availability (blocked)
+  * Stop pgsql:1 ( Promoted srv01 ) due to node availability (blocked)
+  * Stop prmPingd:0 ( srv02 ) due to node availability (blocked)
+  * Stop prmPingd:1 ( srv01 ) due to node availability (blocked)
 
 Executing Cluster Transition:
   * Pseudo action: grpStonith1_stop_0
   * Pseudo action: grpStonith1_start_0
   * Pseudo action: grpStonith2_stop_0
   * Pseudo action: grpStonith2_start_0
   * Pseudo action: grpStonith3_stop_0
   * Pseudo action: msPostgresql_pre_notify_stop_0
   * Pseudo action: clnPingd_stop_0
   * Resource action: pgsql notify on srv03
   * Pseudo action: msPostgresql_confirmed-pre_notify_stop_0
   * Pseudo action: msPostgresql_stop_0
   * Pseudo action: clnPingd_stopped_0
   * Pseudo action: msPostgresql_stopped_0
   * Pseudo action: msPostgresql_post_notify_stopped_0
   * Resource action: pgsql notify on srv03
   * Pseudo action: msPostgresql_confirmed-post_notify_stopped_0
 
 Revised Cluster Status:
   * Node List:
     * Node srv01: UNCLEAN (offline)
     * Node srv02: UNCLEAN (offline)
     * Online: [ srv03 ]
 
   * Full List of Resources:
     * Resource Group: grpStonith1:
       * prmStonith1-1 (stonith:external/ssh): Started srv02 (UNCLEAN)
     * Resource Group: grpStonith2:
       * prmStonith2-1 (stonith:external/ssh): Started srv01 (UNCLEAN)
     * Resource Group: grpStonith3:
       * prmStonith3-1 (stonith:external/ssh): Started srv01 (UNCLEAN)
     * Clone Set: msPostgresql [pgsql] (promotable):
       * pgsql (ocf:pacemaker:Stateful): Unpromoted srv02 (UNCLEAN)
       * pgsql (ocf:pacemaker:Stateful): Promoted srv01 (UNCLEAN)
       * Unpromoted: [ srv03 ]
     * Clone Set: clnPingd [prmPingd]:
       * prmPingd (ocf:pacemaker:ping): Started srv02 (UNCLEAN)
       * prmPingd (ocf:pacemaker:ping): Started srv01 (UNCLEAN)
       * Started: [ srv03 ]
diff --git a/cts/scheduler/summary/bug-cl-5247.summary b/cts/scheduler/summary/bug-cl-5247.summary
index 67ad0c3ded..056e526490 100644
--- a/cts/scheduler/summary/bug-cl-5247.summary
+++ b/cts/scheduler/summary/bug-cl-5247.summary
@@ -1,87 +1,87 @@
 Using the original execution date of: 2015-08-12 02:53:40Z
 Current cluster status:
   * Node List:
     * Online: [ bl460g8n3 bl460g8n4 ]
     * GuestOnline: [ pgsr01@bl460g8n3 ]
 
   * Full List of Resources:
     * prmDB1 (ocf:heartbeat:VirtualDomain): Started bl460g8n3
     * prmDB2 (ocf:heartbeat:VirtualDomain): FAILED bl460g8n4
     * Resource Group: grpStonith1:
       * prmStonith1-2 (stonith:external/ipmi): Started bl460g8n4
     * Resource Group: grpStonith2:
       * prmStonith2-2 (stonith:external/ipmi): Started bl460g8n3
     * Resource Group: master-group:
       * vip-master (ocf:heartbeat:Dummy): FAILED pgsr02
       * vip-rep (ocf:heartbeat:Dummy): FAILED pgsr02
     * Clone Set: msPostgresql [pgsql] (promotable):
       * Promoted: [ pgsr01 ]
       * Stopped: [ bl460g8n3 bl460g8n4 ]
 
 Transition Summary:
   * Fence (off) pgsr02 (resource: prmDB2) 'guest is unclean'
   * Stop prmDB2 ( bl460g8n4 ) due to node availability
   * Recover vip-master ( pgsr02 -> pgsr01 )
   * Recover vip-rep ( pgsr02 -> pgsr01 )
-  * Stop pgsql:0 ( Promoted pgsr02 ) due to node availability
+  * Stop pgsql:0 ( Promoted pgsr02 ) due to node availability
   * Stop pgsr02 ( bl460g8n4 ) due to node availability
 
 Executing Cluster Transition:
   * Resource action: vip-master monitor on pgsr01
   * Resource action: vip-rep monitor on pgsr01
   * Pseudo action: msPostgresql_pre_notify_demote_0
   * Resource action: pgsr01 monitor on bl460g8n4
   * Resource action: pgsr02 stop on bl460g8n4
   * Resource action: pgsr02 monitor on bl460g8n3
   * Resource action: prmDB2 stop on bl460g8n4
   * Resource action: pgsql notify on pgsr01
   * Pseudo action: msPostgresql_confirmed-pre_notify_demote_0
   * Pseudo action: msPostgresql_demote_0
   * Pseudo action: stonith-pgsr02-off on pgsr02
   * Pseudo action: pgsql_post_notify_stop_0
   * Pseudo action: pgsql_demote_0
   * Pseudo action: msPostgresql_demoted_0
   * Pseudo action: msPostgresql_post_notify_demoted_0
   * Resource action: pgsql notify on pgsr01
   * Pseudo action: msPostgresql_confirmed-post_notify_demoted_0
   * Pseudo action: msPostgresql_pre_notify_stop_0
   * Pseudo action: master-group_stop_0
   * Pseudo action: vip-rep_stop_0
   * Resource action: pgsql notify on pgsr01
   * Pseudo action: msPostgresql_confirmed-pre_notify_stop_0
   * Pseudo action: msPostgresql_stop_0
   * Pseudo action: vip-master_stop_0
   * Pseudo action: pgsql_stop_0
   * Pseudo action: msPostgresql_stopped_0
   * Pseudo action: master-group_stopped_0
   * Pseudo action: master-group_start_0
   * Resource action: vip-master start on pgsr01
   * Resource action: vip-rep start on pgsr01
   * Pseudo action: msPostgresql_post_notify_stopped_0
   * Pseudo action: master-group_running_0
   * Resource action: vip-master monitor=10000 on pgsr01
   * Resource action: vip-rep monitor=10000 on pgsr01
   * Resource action: pgsql notify on pgsr01
   * Pseudo action: msPostgresql_confirmed-post_notify_stopped_0
   * Pseudo action: pgsql_notified_0
   * Resource action: pgsql monitor=9000 on pgsr01
 Using the original execution date of: 2015-08-12 02:53:40Z
 
 Revised Cluster Status:
   * Node List:
     * Online: [ bl460g8n3 bl460g8n4 ]
     * GuestOnline: [ pgsr01@bl460g8n3 ]
 
   * Full List of Resources:
     * prmDB1 (ocf:heartbeat:VirtualDomain): Started bl460g8n3
     * prmDB2 (ocf:heartbeat:VirtualDomain): FAILED
     * Resource Group: grpStonith1:
       * prmStonith1-2 (stonith:external/ipmi): Started bl460g8n4
     * Resource Group: grpStonith2:
       * prmStonith2-2 (stonith:external/ipmi): Started bl460g8n3
     * Resource Group: master-group:
       * vip-master (ocf:heartbeat:Dummy): FAILED [ pgsr01 pgsr02 ]
       * vip-rep (ocf:heartbeat:Dummy): FAILED [ pgsr01 pgsr02 ]
     * Clone Set: msPostgresql [pgsql] (promotable):
       * Promoted: [ pgsr01 ]
       * Stopped: [ bl460g8n3 bl460g8n4 ]
diff --git a/cts/scheduler/summary/bug-lf-2153.summary b/cts/scheduler/summary/bug-lf-2153.summary
index 8b4d223eed..631e73ac9b 100644
--- a/cts/scheduler/summary/bug-lf-2153.summary
+++ b/cts/scheduler/summary/bug-lf-2153.summary
@@ -1,59 +1,59 @@
 Current cluster status:
   * Node List:
     * Node bob: standby (with active resources)
     * Online: [ alice ]
 
   * Full List of Resources:
     * Clone Set: ms_drbd_iscsivg01 [res_drbd_iscsivg01] (promotable):
       * Promoted: [ alice ]
       * Unpromoted: [ bob ]
     * Clone Set: cl_tgtd [res_tgtd]:
       * Started: [ alice bob ]
     * Resource Group: rg_iscsivg01:
       * res_portblock_iscsivg01_block (ocf:heartbeat:portblock): Started alice
       * res_lvm_iscsivg01 (ocf:heartbeat:LVM): Started alice
       * res_target_iscsivg01 (ocf:heartbeat:iSCSITarget): Started alice
       * res_lu_iscsivg01_lun1 (ocf:heartbeat:iSCSILogicalUnit): Started alice
       * res_lu_iscsivg01_lun2 (ocf:heartbeat:iSCSILogicalUnit): Started alice
       * res_ip_alicebob01 (ocf:heartbeat:IPaddr2): Started alice
       * res_portblock_iscsivg01_unblock (ocf:heartbeat:portblock): Started alice
 
 Transition Summary:
   * Stop res_drbd_iscsivg01:0 ( Unpromoted bob ) due to node availability
-  * Stop res_tgtd:0 ( bob ) due to node availability
+  * Stop res_tgtd:0 ( bob ) due to node availability
 
 Executing Cluster Transition:
   * Pseudo action: ms_drbd_iscsivg01_pre_notify_stop_0
   * Pseudo action: cl_tgtd_stop_0
   * Resource action: res_drbd_iscsivg01:0 notify on bob
   * Resource action: res_drbd_iscsivg01:1 notify on alice
   * Pseudo action: ms_drbd_iscsivg01_confirmed-pre_notify_stop_0
   * Pseudo action: ms_drbd_iscsivg01_stop_0
   * Resource action: res_tgtd:0 stop on bob
   * Pseudo action: cl_tgtd_stopped_0
   * Resource action: res_drbd_iscsivg01:0 stop on bob
   * Pseudo action: ms_drbd_iscsivg01_stopped_0
   * Pseudo action: ms_drbd_iscsivg01_post_notify_stopped_0
   * Resource action: res_drbd_iscsivg01:1 notify on alice
   * Pseudo action: ms_drbd_iscsivg01_confirmed-post_notify_stopped_0
 
 Revised Cluster Status:
   * Node List:
     * Node bob: standby
     * Online: [ alice ]
 
   * Full List of Resources:
     * Clone Set: ms_drbd_iscsivg01 [res_drbd_iscsivg01] (promotable):
       * Promoted: [ alice ]
       * Stopped: [ bob ]
     * Clone Set: cl_tgtd [res_tgtd]:
       * Started: [ alice ]
       * Stopped: [ bob ]
     * Resource Group: rg_iscsivg01:
       * res_portblock_iscsivg01_block (ocf:heartbeat:portblock): Started alice
       * res_lvm_iscsivg01 (ocf:heartbeat:LVM): Started alice
       * res_target_iscsivg01 (ocf:heartbeat:iSCSITarget): Started alice
       * res_lu_iscsivg01_lun1 (ocf:heartbeat:iSCSILogicalUnit): Started alice
       * res_lu_iscsivg01_lun2 (ocf:heartbeat:iSCSILogicalUnit): Started alice
       * res_ip_alicebob01 (ocf:heartbeat:IPaddr2): Started alice
       * res_portblock_iscsivg01_unblock (ocf:heartbeat:portblock): Started alice
diff --git a/cts/scheduler/summary/bug-lf-2606.summary b/cts/scheduler/summary/bug-lf-2606.summary
index 004788e80b..e0b7ebf0e6 100644
--- a/cts/scheduler/summary/bug-lf-2606.summary
+++ b/cts/scheduler/summary/bug-lf-2606.summary
@@ -1,46 +1,46 @@
 1 of 5 resource instances DISABLED and 0 BLOCKED from further action due to failure
 
 Current cluster status:
   * Node List:
     * Node node2: UNCLEAN (online)
     * Online: [ node1 ]
 
   * Full List of Resources:
     * rsc_stonith (stonith:null): Started node1
     * rsc1 (ocf:pacemaker:Dummy): FAILED node2 (disabled)
     * rsc2 (ocf:pacemaker:Dummy): Started node2
     * Clone Set: ms3 [rsc3] (promotable):
       * Promoted: [ node2 ]
       * Unpromoted: [ node1 ]
 
 Transition Summary:
   * Fence (reboot) node2 'rsc1 failed there'
   * Stop rsc1 ( node2 ) due to node availability
   * Move rsc2 ( node2 -> node1 )
-  * Stop rsc3:1 ( Promoted node2 ) due to node availability
+  * Stop rsc3:1 ( Promoted node2 ) due to node availability
 
 Executing Cluster Transition:
   * Pseudo action: ms3_demote_0
   * Fencing node2 (reboot)
   * Pseudo action: rsc1_stop_0
   * Pseudo action: rsc2_stop_0
   * Pseudo action: rsc3:1_demote_0
   * Pseudo action: ms3_demoted_0
   * Pseudo action: ms3_stop_0
   * Resource action: rsc2 start on node1
   * Pseudo action: rsc3:1_stop_0
   * Pseudo action: ms3_stopped_0
   * Resource action: rsc2 monitor=10000 on node1
 
 Revised Cluster Status:
   * Node List:
     * Online: [ node1 ]
     * OFFLINE: [ node2 ]
 
   * Full List of Resources:
     * rsc_stonith (stonith:null): Started node1
     * rsc1 (ocf:pacemaker:Dummy): Stopped (disabled)
     * rsc2 (ocf:pacemaker:Dummy): Started node1
     * Clone Set: ms3 [rsc3] (promotable):
       * Unpromoted: [ node1 ]
       * Stopped: [ node2 ]
diff --git a/cts/scheduler/summary/bug-pm-11.summary b/cts/scheduler/summary/bug-pm-11.summary
index 7a9fc5c1b0..c3f8f5b3af 100644
--- a/cts/scheduler/summary/bug-pm-11.summary
+++ b/cts/scheduler/summary/bug-pm-11.summary
@@ -1,48 +1,48 @@
 Current cluster status:
   * Node List:
     * Online: [ node-a node-b ]
 
   * Full List of Resources:
     * Clone Set: ms-sf [group] (promotable, unique):
       * Resource Group: group:0:
         * stateful-1:0 (ocf:heartbeat:Stateful): Unpromoted node-b
         * stateful-2:0 (ocf:heartbeat:Stateful): Stopped
       * Resource Group: group:1:
         * stateful-1:1 (ocf:heartbeat:Stateful): Promoted node-a
         * stateful-2:1 (ocf:heartbeat:Stateful): Stopped
 
 Transition Summary:
-  * Start stateful-2:0 ( node-b )
+  * Start stateful-2:0 ( node-b )
   * Promote stateful-2:1 ( Stopped -> Promoted node-a )
 
 Executing Cluster Transition:
   * Resource action: stateful-2:0 monitor on node-b
   * Resource action: stateful-2:0 monitor on node-a
   * Resource action: stateful-2:1 monitor on node-b
   * Resource action: stateful-2:1 monitor on node-a
   * Pseudo action: ms-sf_start_0
   * Pseudo action: group:0_start_0
   * Resource action: stateful-2:0 start on node-b
   * Pseudo action: group:1_start_0
   * Resource action: stateful-2:1 start on node-a
   * Pseudo action: group:0_running_0
   * Pseudo action: group:1_running_0
   * Pseudo action: ms-sf_running_0
   * Pseudo action: ms-sf_promote_0
   * Pseudo action: group:1_promote_0
   * Resource action: stateful-2:1 promote on node-a
   * Pseudo action: group:1_promoted_0
   * Pseudo action: ms-sf_promoted_0
 
 Revised Cluster Status:
   * Node List:
     * Online: [ node-a node-b ]
 
   * Full List of Resources:
     * Clone Set: ms-sf [group] (promotable, unique):
       * Resource Group: group:0:
         * stateful-1:0 (ocf:heartbeat:Stateful): Unpromoted node-b
         * stateful-2:0 (ocf:heartbeat:Stateful): Unpromoted node-b
       * Resource Group: group:1:
         * stateful-1:1 (ocf:heartbeat:Stateful): Promoted node-a
         * stateful-2:1 (ocf:heartbeat:Stateful): Promoted node-a
diff --git a/cts/scheduler/summary/bug-pm-12.summary b/cts/scheduler/summary/bug-pm-12.summary
index 2b473e8b91..8defffe8d6 100644
--- a/cts/scheduler/summary/bug-pm-12.summary
+++ b/cts/scheduler/summary/bug-pm-12.summary
@@ -1,57 +1,57 @@
 Current cluster status:
   * Node List:
     * Online: [ node-a node-b ]
 
   * Full List of Resources:
     * Clone Set: ms-sf [group] (promotable, unique):
       * Resource Group: group:0:
         * stateful-1:0 (ocf:heartbeat:Stateful): Unpromoted node-b
         * stateful-2:0 (ocf:heartbeat:Stateful): Unpromoted node-b
       * Resource Group: group:1:
         * stateful-1:1 (ocf:heartbeat:Stateful): Promoted node-a
         * stateful-2:1 (ocf:heartbeat:Stateful): Promoted node-a
 
 Transition Summary:
-  * Restart stateful-2:0 ( Unpromoted node-b ) due to resource definition change
-  * Restart stateful-2:1 ( Promoted node-a ) due to resource definition change
+  * Restart stateful-2:0 ( Unpromoted node-b ) due to resource definition change
+  * Restart stateful-2:1 ( Promoted node-a ) due to resource definition change
 
 Executing Cluster Transition:
   * Pseudo action: ms-sf_demote_0
   * Pseudo action: group:1_demote_0
   * Resource action: stateful-2:1 demote on node-a
   * Pseudo action: group:1_demoted_0
   * Pseudo action: ms-sf_demoted_0
   * Pseudo action: ms-sf_stop_0
   * Pseudo action: group:0_stop_0
   * Resource action: stateful-2:0 stop on node-b
   * Pseudo action: group:1_stop_0
   * Resource action: stateful-2:1 stop on node-a
   * Pseudo action: group:0_stopped_0
   * Pseudo action: group:1_stopped_0
   * Pseudo action: ms-sf_stopped_0
   * Pseudo action: ms-sf_start_0
   * Pseudo action: group:0_start_0
   * Resource action: stateful-2:0 start on node-b
   * Pseudo action: group:1_start_0
   * Resource action: stateful-2:1 start on node-a
   * Pseudo action: group:0_running_0
   * Pseudo action: group:1_running_0
   * Pseudo action: ms-sf_running_0
   * Pseudo action: ms-sf_promote_0
   * Pseudo action: group:1_promote_0
   * Resource action: stateful-2:1 promote on node-a
   * Pseudo action: group:1_promoted_0
   * Pseudo action: ms-sf_promoted_0
 
 Revised Cluster Status:
   * Node List:
     * Online: [ node-a node-b ]
 
   * Full List of Resources:
     * Clone Set: ms-sf [group] (promotable, unique):
       * Resource Group: group:0:
         * stateful-1:0 (ocf:heartbeat:Stateful): Unpromoted node-b
         * stateful-2:0 (ocf:heartbeat:Stateful): Unpromoted node-b
       * Resource Group: group:1:
         * stateful-1:1 (ocf:heartbeat:Stateful): Promoted node-a
         * stateful-2:1 (ocf:heartbeat:Stateful): Promoted node-a
diff --git a/cts/scheduler/summary/bundle-order-fencing.summary b/cts/scheduler/summary/bundle-order-fencing.summary
index 8cb40718db..ae0c42d2ef 100644
--- a/cts/scheduler/summary/bundle-order-fencing.summary
+++ b/cts/scheduler/summary/bundle-order-fencing.summary
@@ -1,220 +1,220 @@
 Using the original execution date of: 2017-09-12 10:51:59Z
 Current cluster status:
   * Node List:
     * Node controller-0: UNCLEAN (offline)
     * Online: [ controller-1 controller-2 ]
     * GuestOnline: [ galera-bundle-1@controller-1 galera-bundle-2@controller-2 rabbitmq-bundle-1@controller-1 rabbitmq-bundle-2@controller-2 redis-bundle-1@controller-1 redis-bundle-2@controller-2 ]
 
   * Full List of Resources:
     * Container bundle set: rabbitmq-bundle [192.168.24.1:8787/rhosp12/openstack-rabbitmq-docker:pcmklatest]:
       * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): FAILED controller-0 (UNCLEAN)
       * rabbitmq-bundle-1 (ocf:heartbeat:rabbitmq-cluster): Started controller-1
       * rabbitmq-bundle-2 (ocf:heartbeat:rabbitmq-cluster): Started controller-2
     * Container bundle set: galera-bundle [192.168.24.1:8787/rhosp12/openstack-mariadb-docker:pcmklatest]:
       * galera-bundle-0
(ocf:heartbeat:galera): FAILED Promoted controller-0 (UNCLEAN) * galera-bundle-1 (ocf:heartbeat:galera): Promoted controller-1 * galera-bundle-2 (ocf:heartbeat:galera): Promoted controller-2 * Container bundle set: redis-bundle [192.168.24.1:8787/rhosp12/openstack-redis-docker:pcmklatest]: * redis-bundle-0 (ocf:heartbeat:redis): FAILED Promoted controller-0 (UNCLEAN) * redis-bundle-1 (ocf:heartbeat:redis): Unpromoted controller-1 * redis-bundle-2 (ocf:heartbeat:redis): Unpromoted controller-2 * ip-192.168.24.7 (ocf:heartbeat:IPaddr2): Started controller-0 (UNCLEAN) * ip-10.0.0.109 (ocf:heartbeat:IPaddr2): Started controller-0 (UNCLEAN) * ip-172.17.1.14 (ocf:heartbeat:IPaddr2): Started controller-2 * ip-172.17.1.19 (ocf:heartbeat:IPaddr2): Started controller-2 * ip-172.17.3.19 (ocf:heartbeat:IPaddr2): Started controller-2 * ip-172.17.4.11 (ocf:heartbeat:IPaddr2): Started controller-0 (UNCLEAN) * Container bundle set: haproxy-bundle [192.168.24.1:8787/rhosp12/openstack-haproxy-docker:pcmklatest]: * haproxy-bundle-docker-0 (ocf:heartbeat:docker): Started controller-0 (UNCLEAN) * haproxy-bundle-docker-1 (ocf:heartbeat:docker): Started controller-2 * haproxy-bundle-docker-2 (ocf:heartbeat:docker): Started controller-1 * openstack-cinder-volume (systemd:openstack-cinder-volume): Started controller-2 * stonith-fence_ipmilan-525400efba5c (stonith:fence_ipmilan): Started controller-2 * stonith-fence_ipmilan-5254003e8e97 (stonith:fence_ipmilan): Started controller-0 (UNCLEAN) * stonith-fence_ipmilan-5254000dcb3f (stonith:fence_ipmilan): Started controller-0 (UNCLEAN) Transition Summary: * Fence (off) redis-bundle-0 (resource: redis-bundle-docker-0) 'guest is unclean' * Fence (off) rabbitmq-bundle-0 (resource: rabbitmq-bundle-docker-0) 'guest is unclean' * Fence (off) galera-bundle-0 (resource: galera-bundle-docker-0) 'guest is unclean' * Fence (reboot) controller-0 'peer is no longer part of the cluster' - * Stop rabbitmq-bundle-docker-0 ( controller-0 ) due to node 
availability - * Stop rabbitmq-bundle-0 ( controller-0 ) due to unrunnable rabbitmq-bundle-docker-0 start - * Stop rabbitmq:0 ( rabbitmq-bundle-0 ) due to unrunnable rabbitmq-bundle-docker-0 start - * Stop galera-bundle-docker-0 ( controller-0 ) due to node availability - * Stop galera-bundle-0 ( controller-0 ) due to unrunnable galera-bundle-docker-0 start - * Stop galera:0 ( Promoted galera-bundle-0 ) due to unrunnable galera-bundle-docker-0 start - * Stop redis-bundle-docker-0 ( controller-0 ) due to node availability - * Stop redis-bundle-0 ( controller-0 ) due to unrunnable redis-bundle-docker-0 start - * Stop redis:0 ( Promoted redis-bundle-0 ) due to unrunnable redis-bundle-docker-0 start + * Stop rabbitmq-bundle-docker-0 ( controller-0 ) due to node availability + * Stop rabbitmq-bundle-0 ( controller-0 ) due to unrunnable rabbitmq-bundle-docker-0 start + * Stop rabbitmq:0 ( rabbitmq-bundle-0 ) due to unrunnable rabbitmq-bundle-docker-0 start + * Stop galera-bundle-docker-0 ( controller-0 ) due to node availability + * Stop galera-bundle-0 ( controller-0 ) due to unrunnable galera-bundle-docker-0 start + * Stop galera:0 ( Promoted galera-bundle-0 ) due to unrunnable galera-bundle-docker-0 start + * Stop redis-bundle-docker-0 ( controller-0 ) due to node availability + * Stop redis-bundle-0 ( controller-0 ) due to unrunnable redis-bundle-docker-0 start + * Stop redis:0 ( Promoted redis-bundle-0 ) due to unrunnable redis-bundle-docker-0 start * Promote redis:1 ( Unpromoted -> Promoted redis-bundle-1 ) - * Move ip-192.168.24.7 ( controller-0 -> controller-2 ) - * Move ip-10.0.0.109 ( controller-0 -> controller-1 ) - * Move ip-172.17.4.11 ( controller-0 -> controller-1 ) - * Stop haproxy-bundle-docker-0 ( controller-0 ) due to node availability - * Move stonith-fence_ipmilan-5254003e8e97 ( controller-0 -> controller-1 ) - * Move stonith-fence_ipmilan-5254000dcb3f ( controller-0 -> controller-2 ) + * Move ip-192.168.24.7 ( controller-0 -> controller-2 ) + * Move 
ip-10.0.0.109 ( controller-0 -> controller-1 ) + * Move ip-172.17.4.11 ( controller-0 -> controller-1 ) + * Stop haproxy-bundle-docker-0 ( controller-0 ) due to node availability + * Move stonith-fence_ipmilan-5254003e8e97 ( controller-0 -> controller-1 ) + * Move stonith-fence_ipmilan-5254000dcb3f ( controller-0 -> controller-2 ) Executing Cluster Transition: * Pseudo action: rabbitmq-bundle-clone_pre_notify_stop_0 * Pseudo action: rabbitmq-bundle-0_stop_0 * Resource action: rabbitmq-bundle-0 monitor on controller-2 * Resource action: rabbitmq-bundle-0 monitor on controller-1 * Resource action: rabbitmq-bundle-1 monitor on controller-2 * Resource action: rabbitmq-bundle-2 monitor on controller-1 * Pseudo action: galera-bundle-0_stop_0 * Resource action: galera-bundle-0 monitor on controller-2 * Resource action: galera-bundle-0 monitor on controller-1 * Resource action: galera-bundle-1 monitor on controller-2 * Resource action: galera-bundle-2 monitor on controller-1 * Resource action: redis cancel=45000 on redis-bundle-1 * Resource action: redis cancel=60000 on redis-bundle-1 * Pseudo action: redis-bundle-master_pre_notify_demote_0 * Pseudo action: redis-bundle-0_stop_0 * Resource action: redis-bundle-0 monitor on controller-2 * Resource action: redis-bundle-0 monitor on controller-1 * Resource action: redis-bundle-1 monitor on controller-2 * Resource action: redis-bundle-2 monitor on controller-1 * Pseudo action: stonith-fence_ipmilan-5254003e8e97_stop_0 * Pseudo action: stonith-fence_ipmilan-5254000dcb3f_stop_0 * Pseudo action: haproxy-bundle_stop_0 * Pseudo action: redis-bundle_demote_0 * Pseudo action: galera-bundle_demote_0 * Pseudo action: rabbitmq-bundle_stop_0 * Pseudo action: rabbitmq-bundle_start_0 * Fencing controller-0 (reboot) * Resource action: rabbitmq notify on rabbitmq-bundle-1 * Resource action: rabbitmq notify on rabbitmq-bundle-2 * Pseudo action: rabbitmq-bundle-clone_confirmed-pre_notify_stop_0 * Pseudo action: rabbitmq-bundle-docker-0_stop_0 
* Pseudo action: galera-bundle-master_demote_0 * Resource action: redis notify on redis-bundle-1 * Resource action: redis notify on redis-bundle-2 * Pseudo action: redis-bundle-master_confirmed-pre_notify_demote_0 * Pseudo action: redis-bundle-master_demote_0 * Pseudo action: haproxy-bundle-docker-0_stop_0 * Resource action: stonith-fence_ipmilan-5254003e8e97 start on controller-1 * Resource action: stonith-fence_ipmilan-5254000dcb3f start on controller-2 * Pseudo action: stonith-redis-bundle-0-off on redis-bundle-0 * Pseudo action: stonith-rabbitmq-bundle-0-off on rabbitmq-bundle-0 * Pseudo action: stonith-galera-bundle-0-off on galera-bundle-0 * Pseudo action: haproxy-bundle_stopped_0 * Pseudo action: rabbitmq_post_notify_stop_0 * Pseudo action: rabbitmq-bundle-clone_stop_0 * Pseudo action: galera_demote_0 * Pseudo action: galera-bundle-master_demoted_0 * Pseudo action: redis_post_notify_stop_0 * Pseudo action: redis_demote_0 * Pseudo action: redis-bundle-master_demoted_0 * Pseudo action: ip-192.168.24.7_stop_0 * Pseudo action: ip-10.0.0.109_stop_0 * Pseudo action: ip-172.17.4.11_stop_0 * Resource action: stonith-fence_ipmilan-5254003e8e97 monitor=60000 on controller-1 * Resource action: stonith-fence_ipmilan-5254000dcb3f monitor=60000 on controller-2 * Pseudo action: galera-bundle_demoted_0 * Pseudo action: galera-bundle_stop_0 * Pseudo action: rabbitmq_stop_0 * Pseudo action: rabbitmq-bundle-clone_stopped_0 * Pseudo action: galera-bundle-master_stop_0 * Pseudo action: galera-bundle-docker-0_stop_0 * Pseudo action: redis-bundle-master_post_notify_demoted_0 * Resource action: ip-192.168.24.7 start on controller-2 * Resource action: ip-10.0.0.109 start on controller-1 * Resource action: ip-172.17.4.11 start on controller-1 * Pseudo action: rabbitmq-bundle-clone_post_notify_stopped_0 * Pseudo action: galera_stop_0 * Pseudo action: galera-bundle-master_stopped_0 * Pseudo action: galera-bundle-master_start_0 * Resource action: redis notify on redis-bundle-1 * 
Resource action: redis notify on redis-bundle-2 * Pseudo action: redis-bundle-master_confirmed-post_notify_demoted_0 * Pseudo action: redis-bundle-master_pre_notify_stop_0 * Resource action: ip-192.168.24.7 monitor=10000 on controller-2 * Resource action: ip-10.0.0.109 monitor=10000 on controller-1 * Resource action: ip-172.17.4.11 monitor=10000 on controller-1 * Pseudo action: redis-bundle_demoted_0 * Pseudo action: redis-bundle_stop_0 * Pseudo action: galera-bundle_stopped_0 * Resource action: rabbitmq notify on rabbitmq-bundle-1 * Resource action: rabbitmq notify on rabbitmq-bundle-2 * Pseudo action: rabbitmq-bundle-clone_confirmed-post_notify_stopped_0 * Pseudo action: rabbitmq-bundle-clone_pre_notify_start_0 * Pseudo action: galera-bundle-master_running_0 * Resource action: redis notify on redis-bundle-1 * Resource action: redis notify on redis-bundle-2 * Pseudo action: redis-bundle-master_confirmed-pre_notify_stop_0 * Pseudo action: redis-bundle-master_stop_0 * Pseudo action: redis-bundle-docker-0_stop_0 * Pseudo action: galera-bundle_running_0 * Pseudo action: rabbitmq-bundle_stopped_0 * Pseudo action: rabbitmq_notified_0 * Pseudo action: rabbitmq-bundle-clone_confirmed-pre_notify_start_0 * Pseudo action: rabbitmq-bundle-clone_start_0 * Pseudo action: redis_stop_0 * Pseudo action: redis-bundle-master_stopped_0 * Pseudo action: rabbitmq-bundle-clone_running_0 * Pseudo action: redis-bundle-master_post_notify_stopped_0 * Pseudo action: rabbitmq-bundle-clone_post_notify_running_0 * Resource action: redis notify on redis-bundle-1 * Resource action: redis notify on redis-bundle-2 * Pseudo action: redis-bundle-master_confirmed-post_notify_stopped_0 * Pseudo action: redis-bundle-master_pre_notify_start_0 * Pseudo action: redis-bundle_stopped_0 * Pseudo action: rabbitmq-bundle-clone_confirmed-post_notify_running_0 * Pseudo action: redis_notified_0 * Pseudo action: redis-bundle-master_confirmed-pre_notify_start_0 * Pseudo action: redis-bundle-master_start_0 * Pseudo 
action: rabbitmq-bundle_running_0 * Pseudo action: redis-bundle-master_running_0 * Pseudo action: redis-bundle-master_post_notify_running_0 * Pseudo action: redis-bundle-master_confirmed-post_notify_running_0 * Pseudo action: redis-bundle_running_0 * Pseudo action: redis-bundle-master_pre_notify_promote_0 * Pseudo action: redis-bundle_promote_0 * Resource action: redis notify on redis-bundle-1 * Resource action: redis notify on redis-bundle-2 * Pseudo action: redis-bundle-master_confirmed-pre_notify_promote_0 * Pseudo action: redis-bundle-master_promote_0 * Resource action: redis promote on redis-bundle-1 * Pseudo action: redis-bundle-master_promoted_0 * Pseudo action: redis-bundle-master_post_notify_promoted_0 * Resource action: redis notify on redis-bundle-1 * Resource action: redis notify on redis-bundle-2 * Pseudo action: redis-bundle-master_confirmed-post_notify_promoted_0 * Pseudo action: redis-bundle_promoted_0 * Resource action: redis monitor=20000 on redis-bundle-1 Using the original execution date of: 2017-09-12 10:51:59Z Revised Cluster Status: * Node List: * Online: [ controller-1 controller-2 ] * OFFLINE: [ controller-0 ] * GuestOnline: [ galera-bundle-1@controller-1 galera-bundle-2@controller-2 rabbitmq-bundle-1@controller-1 rabbitmq-bundle-2@controller-2 redis-bundle-1@controller-1 redis-bundle-2@controller-2 ] * Full List of Resources: * Container bundle set: rabbitmq-bundle [192.168.24.1:8787/rhosp12/openstack-rabbitmq-docker:pcmklatest]: * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): FAILED * rabbitmq-bundle-1 (ocf:heartbeat:rabbitmq-cluster): Started controller-1 * rabbitmq-bundle-2 (ocf:heartbeat:rabbitmq-cluster): Started controller-2 * Container bundle set: galera-bundle [192.168.24.1:8787/rhosp12/openstack-mariadb-docker:pcmklatest]: * galera-bundle-0 (ocf:heartbeat:galera): FAILED Promoted * galera-bundle-1 (ocf:heartbeat:galera): Promoted controller-1 * galera-bundle-2 (ocf:heartbeat:galera): Promoted controller-2 * Container bundle 
set: redis-bundle [192.168.24.1:8787/rhosp12/openstack-redis-docker:pcmklatest]: * redis-bundle-0 (ocf:heartbeat:redis): FAILED Promoted * redis-bundle-1 (ocf:heartbeat:redis): Promoted controller-1 * redis-bundle-2 (ocf:heartbeat:redis): Unpromoted controller-2 * ip-192.168.24.7 (ocf:heartbeat:IPaddr2): Started controller-2 * ip-10.0.0.109 (ocf:heartbeat:IPaddr2): Started controller-1 * ip-172.17.1.14 (ocf:heartbeat:IPaddr2): Started controller-2 * ip-172.17.1.19 (ocf:heartbeat:IPaddr2): Started controller-2 * ip-172.17.3.19 (ocf:heartbeat:IPaddr2): Started controller-2 * ip-172.17.4.11 (ocf:heartbeat:IPaddr2): Started controller-1 * Container bundle set: haproxy-bundle [192.168.24.1:8787/rhosp12/openstack-haproxy-docker:pcmklatest]: * haproxy-bundle-docker-0 (ocf:heartbeat:docker): Stopped * haproxy-bundle-docker-1 (ocf:heartbeat:docker): Started controller-2 * haproxy-bundle-docker-2 (ocf:heartbeat:docker): Started controller-1 * openstack-cinder-volume (systemd:openstack-cinder-volume): Started controller-2 * stonith-fence_ipmilan-525400efba5c (stonith:fence_ipmilan): Started controller-2 * stonith-fence_ipmilan-5254003e8e97 (stonith:fence_ipmilan): Started controller-1 * stonith-fence_ipmilan-5254000dcb3f (stonith:fence_ipmilan): Started controller-2 diff --git a/cts/scheduler/summary/bundle-order-partial-start-2.summary b/cts/scheduler/summary/bundle-order-partial-start-2.summary index 7575a2511e..9ca81d6ebd 100644 --- a/cts/scheduler/summary/bundle-order-partial-start-2.summary +++ b/cts/scheduler/summary/bundle-order-partial-start-2.summary @@ -1,100 +1,100 @@ Current cluster status: * Node List: * Online: [ undercloud ] * GuestOnline: [ galera-bundle-0@undercloud rabbitmq-bundle-0@undercloud redis-bundle-0@undercloud ] * Full List of Resources: * Container bundle: rabbitmq-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-rabbitmq:latest]: * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Stopped undercloud * Container bundle: galera-bundle 
[192.168.24.1:8787/tripleoupstream/centos-binary-mariadb:latest]: * galera-bundle-0 (ocf:heartbeat:galera): Stopped undercloud * Container bundle: redis-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-redis:latest]: * redis-bundle-0 (ocf:heartbeat:redis): Unpromoted undercloud * ip-192.168.122.254 (ocf:heartbeat:IPaddr2): Started undercloud * ip-192.168.122.250 (ocf:heartbeat:IPaddr2): Started undercloud * ip-192.168.122.249 (ocf:heartbeat:IPaddr2): Started undercloud * ip-192.168.122.253 (ocf:heartbeat:IPaddr2): Started undercloud * ip-192.168.122.247 (ocf:heartbeat:IPaddr2): Started undercloud * ip-192.168.122.248 (ocf:heartbeat:IPaddr2): Started undercloud * Container bundle: haproxy-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-haproxy:latest]: * haproxy-bundle-docker-0 (ocf:heartbeat:docker): Stopped * Container bundle: openstack-cinder-volume [192.168.24.1:8787/tripleoupstream/centos-binary-cinder-volume:latest]: * openstack-cinder-volume-docker-0 (ocf:heartbeat:docker): Started undercloud Transition Summary: - * Start rabbitmq:0 ( rabbitmq-bundle-0 ) - * Restart galera-bundle-docker-0 ( undercloud ) due to required haproxy-bundle running - * Restart galera-bundle-0 ( undercloud ) due to required galera-bundle-docker-0 start - * Start galera:0 ( galera-bundle-0 ) + * Start rabbitmq:0 ( rabbitmq-bundle-0 ) + * Restart galera-bundle-docker-0 ( undercloud ) due to required haproxy-bundle running + * Restart galera-bundle-0 ( undercloud ) due to required galera-bundle-docker-0 start + * Start galera:0 ( galera-bundle-0 ) * Promote redis:0 ( Unpromoted -> Promoted redis-bundle-0 ) - * Start haproxy-bundle-docker-0 ( undercloud ) + * Start haproxy-bundle-docker-0 ( undercloud ) Executing Cluster Transition: * Resource action: rabbitmq:0 monitor on rabbitmq-bundle-0 * Pseudo action: rabbitmq-bundle-clone_pre_notify_start_0 * Resource action: galera-bundle-0 stop on undercloud * Pseudo action: redis-bundle-master_pre_notify_promote_0 * Resource 
action: haproxy-bundle-docker-0 monitor on undercloud * Pseudo action: haproxy-bundle_start_0 * Pseudo action: redis-bundle_promote_0 * Pseudo action: galera-bundle_stop_0 * Pseudo action: rabbitmq-bundle_start_0 * Pseudo action: rabbitmq-bundle-clone_confirmed-pre_notify_start_0 * Pseudo action: rabbitmq-bundle-clone_start_0 * Resource action: galera-bundle-docker-0 stop on undercloud * Resource action: redis notify on redis-bundle-0 * Pseudo action: redis-bundle-master_confirmed-pre_notify_promote_0 * Pseudo action: redis-bundle-master_promote_0 * Resource action: haproxy-bundle-docker-0 start on undercloud * Pseudo action: haproxy-bundle_running_0 * Pseudo action: galera-bundle_stopped_0 * Resource action: rabbitmq:0 start on rabbitmq-bundle-0 * Pseudo action: rabbitmq-bundle-clone_running_0 * Resource action: redis promote on redis-bundle-0 * Pseudo action: redis-bundle-master_promoted_0 * Resource action: haproxy-bundle-docker-0 monitor=60000 on undercloud * Pseudo action: rabbitmq-bundle-clone_post_notify_running_0 * Pseudo action: redis-bundle-master_post_notify_promoted_0 * Resource action: rabbitmq:0 notify on rabbitmq-bundle-0 * Pseudo action: rabbitmq-bundle-clone_confirmed-post_notify_running_0 * Resource action: redis notify on redis-bundle-0 * Pseudo action: redis-bundle-master_confirmed-post_notify_promoted_0 * Pseudo action: redis-bundle_promoted_0 * Pseudo action: rabbitmq-bundle_running_0 * Resource action: rabbitmq:0 monitor=10000 on rabbitmq-bundle-0 * Resource action: redis monitor=20000 on redis-bundle-0 * Pseudo action: galera-bundle_start_0 * Resource action: galera-bundle-docker-0 start on undercloud * Resource action: galera-bundle-docker-0 monitor=60000 on undercloud * Resource action: galera-bundle-0 start on undercloud * Resource action: galera-bundle-0 monitor=30000 on undercloud * Resource action: galera:0 monitor on galera-bundle-0 * Pseudo action: galera-bundle-master_start_0 * Resource action: galera:0 start on galera-bundle-0 * 
Pseudo action: galera-bundle-master_running_0 * Pseudo action: galera-bundle_running_0 * Resource action: galera:0 monitor=30000 on galera-bundle-0 * Resource action: galera:0 monitor=20000 on galera-bundle-0 Revised Cluster Status: * Node List: * Online: [ undercloud ] * GuestOnline: [ galera-bundle-0@undercloud rabbitmq-bundle-0@undercloud redis-bundle-0@undercloud ] * Full List of Resources: * Container bundle: rabbitmq-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-rabbitmq:latest]: * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Started undercloud * Container bundle: galera-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-mariadb:latest]: * galera-bundle-0 (ocf:heartbeat:galera): Unpromoted undercloud * Container bundle: redis-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-redis:latest]: * redis-bundle-0 (ocf:heartbeat:redis): Promoted undercloud * ip-192.168.122.254 (ocf:heartbeat:IPaddr2): Started undercloud * ip-192.168.122.250 (ocf:heartbeat:IPaddr2): Started undercloud * ip-192.168.122.249 (ocf:heartbeat:IPaddr2): Started undercloud * ip-192.168.122.253 (ocf:heartbeat:IPaddr2): Started undercloud * ip-192.168.122.247 (ocf:heartbeat:IPaddr2): Started undercloud * ip-192.168.122.248 (ocf:heartbeat:IPaddr2): Started undercloud * Container bundle: haproxy-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-haproxy:latest]: * haproxy-bundle-docker-0 (ocf:heartbeat:docker): Started undercloud * Container bundle: openstack-cinder-volume [192.168.24.1:8787/tripleoupstream/centos-binary-cinder-volume:latest]: * openstack-cinder-volume-docker-0 (ocf:heartbeat:docker): Started undercloud diff --git a/cts/scheduler/summary/bundle-order-partial-start.summary b/cts/scheduler/summary/bundle-order-partial-start.summary index 3c45f4f974..7951a3fcf2 100644 --- a/cts/scheduler/summary/bundle-order-partial-start.summary +++ b/cts/scheduler/summary/bundle-order-partial-start.summary @@ -1,97 +1,97 @@ Current cluster status: * Node List: * 
Online: [ undercloud ] * GuestOnline: [ rabbitmq-bundle-0@undercloud redis-bundle-0@undercloud ] * Full List of Resources: * Container bundle: rabbitmq-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-rabbitmq:latest]: * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Stopped undercloud * Container bundle: galera-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-mariadb:latest]: * galera-bundle-0 (ocf:heartbeat:galera): Stopped * Container bundle: redis-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-redis:latest]: * redis-bundle-0 (ocf:heartbeat:redis): Unpromoted undercloud * ip-192.168.122.254 (ocf:heartbeat:IPaddr2): Started undercloud * ip-192.168.122.250 (ocf:heartbeat:IPaddr2): Started undercloud * ip-192.168.122.249 (ocf:heartbeat:IPaddr2): Started undercloud * ip-192.168.122.253 (ocf:heartbeat:IPaddr2): Started undercloud * ip-192.168.122.247 (ocf:heartbeat:IPaddr2): Started undercloud * ip-192.168.122.248 (ocf:heartbeat:IPaddr2): Started undercloud * Container bundle: haproxy-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-haproxy:latest]: * haproxy-bundle-docker-0 (ocf:heartbeat:docker): Stopped * Container bundle: openstack-cinder-volume [192.168.24.1:8787/tripleoupstream/centos-binary-cinder-volume:latest]: * openstack-cinder-volume-docker-0 (ocf:heartbeat:docker): Started undercloud Transition Summary: - * Start rabbitmq:0 ( rabbitmq-bundle-0 ) - * Start galera-bundle-docker-0 ( undercloud ) - * Start galera-bundle-0 ( undercloud ) - * Start galera:0 ( galera-bundle-0 ) + * Start rabbitmq:0 ( rabbitmq-bundle-0 ) + * Start galera-bundle-docker-0 ( undercloud ) + * Start galera-bundle-0 ( undercloud ) + * Start galera:0 ( galera-bundle-0 ) * Promote redis:0 ( Unpromoted -> Promoted redis-bundle-0 ) - * Start haproxy-bundle-docker-0 ( undercloud ) + * Start haproxy-bundle-docker-0 ( undercloud ) Executing Cluster Transition: * Resource action: rabbitmq:0 monitor on rabbitmq-bundle-0 * Pseudo action: 
rabbitmq-bundle-clone_pre_notify_start_0 * Resource action: galera-bundle-docker-0 monitor on undercloud * Pseudo action: redis-bundle-master_pre_notify_promote_0 * Resource action: haproxy-bundle-docker-0 monitor on undercloud * Pseudo action: haproxy-bundle_start_0 * Pseudo action: redis-bundle_promote_0 * Pseudo action: rabbitmq-bundle_start_0 * Pseudo action: rabbitmq-bundle-clone_confirmed-pre_notify_start_0 * Pseudo action: rabbitmq-bundle-clone_start_0 * Resource action: redis notify on redis-bundle-0 * Pseudo action: redis-bundle-master_confirmed-pre_notify_promote_0 * Pseudo action: redis-bundle-master_promote_0 * Resource action: haproxy-bundle-docker-0 start on undercloud * Pseudo action: haproxy-bundle_running_0 * Resource action: rabbitmq:0 start on rabbitmq-bundle-0 * Pseudo action: rabbitmq-bundle-clone_running_0 * Resource action: redis promote on redis-bundle-0 * Pseudo action: redis-bundle-master_promoted_0 * Resource action: haproxy-bundle-docker-0 monitor=60000 on undercloud * Pseudo action: rabbitmq-bundle-clone_post_notify_running_0 * Pseudo action: redis-bundle-master_post_notify_promoted_0 * Resource action: rabbitmq:0 notify on rabbitmq-bundle-0 * Pseudo action: rabbitmq-bundle-clone_confirmed-post_notify_running_0 * Resource action: redis notify on redis-bundle-0 * Pseudo action: redis-bundle-master_confirmed-post_notify_promoted_0 * Pseudo action: redis-bundle_promoted_0 * Pseudo action: rabbitmq-bundle_running_0 * Resource action: rabbitmq:0 monitor=10000 on rabbitmq-bundle-0 * Resource action: redis monitor=20000 on redis-bundle-0 * Pseudo action: galera-bundle_start_0 * Pseudo action: galera-bundle-master_start_0 * Resource action: galera-bundle-docker-0 start on undercloud * Resource action: galera-bundle-0 monitor on undercloud * Resource action: galera-bundle-docker-0 monitor=60000 on undercloud * Resource action: galera-bundle-0 start on undercloud * Resource action: galera:0 start on galera-bundle-0 * Pseudo action: 
galera-bundle-master_running_0 * Resource action: galera-bundle-0 monitor=30000 on undercloud * Pseudo action: galera-bundle_running_0 * Resource action: galera:0 monitor=30000 on galera-bundle-0 * Resource action: galera:0 monitor=20000 on galera-bundle-0 Revised Cluster Status: * Node List: * Online: [ undercloud ] * GuestOnline: [ galera-bundle-0@undercloud rabbitmq-bundle-0@undercloud redis-bundle-0@undercloud ] * Full List of Resources: * Container bundle: rabbitmq-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-rabbitmq:latest]: * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Started undercloud * Container bundle: galera-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-mariadb:latest]: * galera-bundle-0 (ocf:heartbeat:galera): Unpromoted undercloud * Container bundle: redis-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-redis:latest]: * redis-bundle-0 (ocf:heartbeat:redis): Promoted undercloud * ip-192.168.122.254 (ocf:heartbeat:IPaddr2): Started undercloud * ip-192.168.122.250 (ocf:heartbeat:IPaddr2): Started undercloud * ip-192.168.122.249 (ocf:heartbeat:IPaddr2): Started undercloud * ip-192.168.122.253 (ocf:heartbeat:IPaddr2): Started undercloud * ip-192.168.122.247 (ocf:heartbeat:IPaddr2): Started undercloud * ip-192.168.122.248 (ocf:heartbeat:IPaddr2): Started undercloud * Container bundle: haproxy-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-haproxy:latest]: * haproxy-bundle-docker-0 (ocf:heartbeat:docker): Started undercloud * Container bundle: openstack-cinder-volume [192.168.24.1:8787/tripleoupstream/centos-binary-cinder-volume:latest]: * openstack-cinder-volume-docker-0 (ocf:heartbeat:docker): Started undercloud diff --git a/cts/scheduler/summary/bundle-order-partial-stop.summary b/cts/scheduler/summary/bundle-order-partial-stop.summary index 0954c59992..4313a6ce00 100644 --- a/cts/scheduler/summary/bundle-order-partial-stop.summary +++ b/cts/scheduler/summary/bundle-order-partial-stop.summary @@ -1,127 
+1,127 @@
 Current cluster status:
   * Node List:
     * Online: [ undercloud ]
     * GuestOnline: [ galera-bundle-0@undercloud rabbitmq-bundle-0@undercloud redis-bundle-0@undercloud ]

   * Full List of Resources:
     * Container bundle: rabbitmq-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-rabbitmq:latest]:
       * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Started undercloud
     * Container bundle: galera-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-mariadb:latest]:
       * galera-bundle-0 (ocf:heartbeat:galera): Promoted undercloud
     * Container bundle: redis-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-redis:latest]:
       * redis-bundle-0 (ocf:heartbeat:redis): Promoted undercloud
     * ip-192.168.122.254 (ocf:heartbeat:IPaddr2): Started undercloud
     * ip-192.168.122.250 (ocf:heartbeat:IPaddr2): Started undercloud
     * ip-192.168.122.249 (ocf:heartbeat:IPaddr2): Started undercloud
     * ip-192.168.122.253 (ocf:heartbeat:IPaddr2): Started undercloud
     * ip-192.168.122.247 (ocf:heartbeat:IPaddr2): Started undercloud
     * ip-192.168.122.248 (ocf:heartbeat:IPaddr2): Started undercloud
     * Container bundle: haproxy-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-haproxy:latest]:
       * haproxy-bundle-docker-0 (ocf:heartbeat:docker): Started undercloud
     * Container bundle: openstack-cinder-volume [192.168.24.1:8787/tripleoupstream/centos-binary-cinder-volume:latest]:
       * openstack-cinder-volume-docker-0 (ocf:heartbeat:docker): Started undercloud

 Transition Summary:
-  * Stop rabbitmq-bundle-docker-0 ( undercloud ) due to node availability
-  * Stop rabbitmq-bundle-0 ( undercloud ) due to node availability
-  * Stop rabbitmq:0 ( rabbitmq-bundle-0 ) due to unrunnable rabbitmq-bundle-0 start
-  * Stop galera-bundle-docker-0 ( undercloud ) due to node availability
-  * Stop galera-bundle-0 ( undercloud ) due to node availability
+  * Stop rabbitmq-bundle-docker-0 ( undercloud ) due to node availability
+  * Stop rabbitmq-bundle-0 ( undercloud ) due to node availability
+  * Stop rabbitmq:0 ( rabbitmq-bundle-0 ) due to unrunnable rabbitmq-bundle-0 start
+  * Stop galera-bundle-docker-0 ( undercloud ) due to node availability
+  * Stop galera-bundle-0 ( undercloud ) due to node availability
   * Stop galera:0 ( Promoted galera-bundle-0 ) due to unrunnable galera-bundle-0 start
-  * Stop redis-bundle-docker-0 ( undercloud ) due to node availability
-  * Stop redis-bundle-0 ( undercloud ) due to node availability
+  * Stop redis-bundle-docker-0 ( undercloud ) due to node availability
+  * Stop redis-bundle-0 ( undercloud ) due to node availability
   * Stop redis:0 ( Promoted redis-bundle-0 ) due to unrunnable redis-bundle-0 start
-  * Stop ip-192.168.122.254 ( undercloud ) due to node availability
-  * Stop ip-192.168.122.250 ( undercloud ) due to node availability
-  * Stop ip-192.168.122.249 ( undercloud ) due to node availability
-  * Stop ip-192.168.122.253 ( undercloud ) due to node availability
-  * Stop ip-192.168.122.247 ( undercloud ) due to node availability
-  * Stop ip-192.168.122.248 ( undercloud ) due to node availability
-  * Stop haproxy-bundle-docker-0 ( undercloud ) due to node availability
-  * Stop openstack-cinder-volume-docker-0 ( undercloud ) due to node availability
+  * Stop ip-192.168.122.254 ( undercloud ) due to node availability
+  * Stop ip-192.168.122.250 ( undercloud ) due to node availability
+  * Stop ip-192.168.122.249 ( undercloud ) due to node availability
+  * Stop ip-192.168.122.253 ( undercloud ) due to node availability
+  * Stop ip-192.168.122.247 ( undercloud ) due to node availability
+  * Stop ip-192.168.122.248 ( undercloud ) due to node availability
+  * Stop haproxy-bundle-docker-0 ( undercloud ) due to node availability
+  * Stop openstack-cinder-volume-docker-0 ( undercloud ) due to node availability

 Executing Cluster Transition:
   * Pseudo action: rabbitmq-bundle-clone_pre_notify_stop_0
   * Resource action: galera cancel=10000 on galera-bundle-0
   * Resource action: redis cancel=20000 on redis-bundle-0
   * Pseudo action: redis-bundle-master_pre_notify_demote_0
   * Pseudo action: openstack-cinder-volume_stop_0
   * Pseudo action: redis-bundle_demote_0
   * Pseudo action: galera-bundle_demote_0
   * Pseudo action: rabbitmq-bundle_stop_0
   * Resource action: rabbitmq notify on rabbitmq-bundle-0
   * Pseudo action: rabbitmq-bundle-clone_confirmed-pre_notify_stop_0
   * Pseudo action: rabbitmq-bundle-clone_stop_0
   * Pseudo action: galera-bundle-master_demote_0
   * Resource action: redis notify on redis-bundle-0
   * Pseudo action: redis-bundle-master_confirmed-pre_notify_demote_0
   * Pseudo action: redis-bundle-master_demote_0
   * Resource action: openstack-cinder-volume-docker-0 stop on undercloud
   * Pseudo action: openstack-cinder-volume_stopped_0
   * Resource action: rabbitmq stop on rabbitmq-bundle-0
   * Pseudo action: rabbitmq-bundle-clone_stopped_0
   * Resource action: rabbitmq-bundle-0 stop on undercloud
   * Resource action: galera demote on galera-bundle-0
   * Pseudo action: galera-bundle-master_demoted_0
   * Resource action: redis demote on redis-bundle-0
   * Pseudo action: redis-bundle-master_demoted_0
   * Pseudo action: galera-bundle_demoted_0
   * Pseudo action: galera-bundle_stop_0
   * Pseudo action: rabbitmq-bundle-clone_post_notify_stopped_0
   * Resource action: rabbitmq-bundle-docker-0 stop on undercloud
   * Pseudo action: galera-bundle-master_stop_0
   * Pseudo action: redis-bundle-master_post_notify_demoted_0
   * Pseudo action: rabbitmq-bundle-clone_confirmed-post_notify_stopped_0
   * Resource action: galera stop on galera-bundle-0
   * Pseudo action: galera-bundle-master_stopped_0
   * Resource action: galera-bundle-0 stop on undercloud
   * Resource action: redis notify on redis-bundle-0
   * Pseudo action: redis-bundle-master_confirmed-post_notify_demoted_0
   * Pseudo action: redis-bundle-master_pre_notify_stop_0
   * Pseudo action: redis-bundle_demoted_0
   * Pseudo action: rabbitmq-bundle_stopped_0
   * Resource action: galera-bundle-docker-0 stop on undercloud
   * Resource action: redis notify on redis-bundle-0
   * Pseudo action: redis-bundle-master_confirmed-pre_notify_stop_0
   * Pseudo action: galera-bundle_stopped_0
   * Pseudo action: redis-bundle_stop_0
   * Pseudo action: redis-bundle-master_stop_0
   * Resource action: redis stop on redis-bundle-0
   * Pseudo action: redis-bundle-master_stopped_0
   * Resource action: redis-bundle-0 stop on undercloud
   * Pseudo action: redis-bundle-master_post_notify_stopped_0
   * Resource action: redis-bundle-docker-0 stop on undercloud
   * Pseudo action: redis-bundle-master_confirmed-post_notify_stopped_0
   * Pseudo action: redis-bundle_stopped_0
   * Pseudo action: haproxy-bundle_stop_0
   * Resource action: haproxy-bundle-docker-0 stop on undercloud
   * Pseudo action: haproxy-bundle_stopped_0
   * Resource action: ip-192.168.122.254 stop on undercloud
   * Resource action: ip-192.168.122.250 stop on undercloud
   * Resource action: ip-192.168.122.249 stop on undercloud
   * Resource action: ip-192.168.122.253 stop on undercloud
   * Resource action: ip-192.168.122.247 stop on undercloud
   * Resource action: ip-192.168.122.248 stop on undercloud
   * Cluster action: do_shutdown on undercloud

 Revised Cluster Status:
   * Node List:
     * Online: [ undercloud ]

   * Full List of Resources:
     * Container bundle: rabbitmq-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-rabbitmq:latest]:
       * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Stopped
     * Container bundle: galera-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-mariadb:latest]:
       * galera-bundle-0 (ocf:heartbeat:galera): Stopped
     * Container bundle: redis-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-redis:latest]:
       * redis-bundle-0 (ocf:heartbeat:redis): Stopped
     * ip-192.168.122.254 (ocf:heartbeat:IPaddr2): Stopped
     * ip-192.168.122.250 (ocf:heartbeat:IPaddr2): Stopped
     * ip-192.168.122.249 (ocf:heartbeat:IPaddr2): Stopped
     * ip-192.168.122.253 (ocf:heartbeat:IPaddr2): Stopped
     * ip-192.168.122.247 (ocf:heartbeat:IPaddr2): Stopped
     * ip-192.168.122.248 (ocf:heartbeat:IPaddr2): Stopped
     * Container bundle: haproxy-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-haproxy:latest]:
       * haproxy-bundle-docker-0 (ocf:heartbeat:docker): Stopped
     * Container bundle: openstack-cinder-volume [192.168.24.1:8787/tripleoupstream/centos-binary-cinder-volume:latest]:
       * openstack-cinder-volume-docker-0 (ocf:heartbeat:docker): Stopped
diff --git a/cts/scheduler/summary/bundle-order-startup-clone-2.summary b/cts/scheduler/summary/bundle-order-startup-clone-2.summary
index cb63d78fd1..8fc4cc1f88 100644
--- a/cts/scheduler/summary/bundle-order-startup-clone-2.summary
+++ b/cts/scheduler/summary/bundle-order-startup-clone-2.summary
@@ -1,213 +1,213 @@
 Current cluster status:
   * Node List:
     * Online: [ metal-1 metal-2 metal-3 ]
     * RemoteOFFLINE: [ rabbitmq-bundle-0 ]

   * Full List of Resources:
     * Clone Set: storage-clone [storage]:
       * Stopped: [ metal-1 metal-2 metal-3 rabbitmq-bundle-0 ]
     * Container bundle set: galera-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-mariadb:latest]:
       * galera-bundle-0 (ocf:heartbeat:galera): Stopped
       * galera-bundle-1 (ocf:heartbeat:galera): Stopped
       * galera-bundle-2 (ocf:heartbeat:galera): Stopped
     * Container bundle set: haproxy-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-haproxy:latest]:
       * haproxy-bundle-docker-0 (ocf:heartbeat:docker): Stopped
       * haproxy-bundle-docker-1 (ocf:heartbeat:docker): Stopped
       * haproxy-bundle-docker-2 (ocf:heartbeat:docker): Stopped
     * Container bundle set: redis-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-redis:latest]:
       * redis-bundle-0 (ocf:heartbeat:redis): Stopped
       * redis-bundle-1 (ocf:heartbeat:redis): Stopped
       * redis-bundle-2 (ocf:heartbeat:redis): Stopped

 Transition Summary:
-  * Start storage:0 ( metal-1 )
-  * Start storage:1 ( metal-2 )
-  * Start storage:2 ( metal-3 )
-  * Start galera-bundle-docker-0 ( metal-1 )
-  * Start galera-bundle-0 ( metal-1 )
-  * Start galera:0 ( galera-bundle-0 )
-  * Start galera-bundle-docker-1 ( metal-2 )
-  * Start galera-bundle-1 ( metal-2 )
-  * Start galera:1 ( galera-bundle-1 )
-  * Start galera-bundle-docker-2 ( metal-3 )
-  * Start galera-bundle-2 ( metal-3 )
-  * Start galera:2 ( galera-bundle-2 )
-  * Start haproxy-bundle-docker-0 ( metal-1 )
-  * Start haproxy-bundle-docker-1 ( metal-2 )
-  * Start haproxy-bundle-docker-2 ( metal-3 )
-  * Start redis-bundle-docker-0 ( metal-1 )
-  * Start redis-bundle-0 ( metal-1 )
+  * Start storage:0 ( metal-1 )
+  * Start storage:1 ( metal-2 )
+  * Start storage:2 ( metal-3 )
+  * Start galera-bundle-docker-0 ( metal-1 )
+  * Start galera-bundle-0 ( metal-1 )
+  * Start galera:0 ( galera-bundle-0 )
+  * Start galera-bundle-docker-1 ( metal-2 )
+  * Start galera-bundle-1 ( metal-2 )
+  * Start galera:1 ( galera-bundle-1 )
+  * Start galera-bundle-docker-2 ( metal-3 )
+  * Start galera-bundle-2 ( metal-3 )
+  * Start galera:2 ( galera-bundle-2 )
+  * Start haproxy-bundle-docker-0 ( metal-1 )
+  * Start haproxy-bundle-docker-1 ( metal-2 )
+  * Start haproxy-bundle-docker-2 ( metal-3 )
+  * Start redis-bundle-docker-0 ( metal-1 )
+  * Start redis-bundle-0 ( metal-1 )
   * Promote redis:0 ( Stopped -> Promoted redis-bundle-0 )
-  * Start redis-bundle-docker-1 ( metal-2 )
-  * Start redis-bundle-1 ( metal-2 )
+  * Start redis-bundle-docker-1 ( metal-2 )
+  * Start redis-bundle-1 ( metal-2 )
   * Promote redis:1 ( Stopped -> Promoted redis-bundle-1 )
-  * Start redis-bundle-docker-2 ( metal-3 )
-  * Start redis-bundle-2 ( metal-3 )
+  * Start redis-bundle-docker-2 ( metal-3 )
+  * Start redis-bundle-2 ( metal-3 )
   * Promote redis:2 ( Stopped -> Promoted redis-bundle-2 )

 Executing Cluster Transition:
   * Resource action: storage:0 monitor on metal-1
   * Resource action: storage:1 monitor on metal-2
   * Resource action: storage:2 monitor on metal-3
   * Pseudo action: storage-clone_pre_notify_start_0
   * Resource action: galera-bundle-docker-0 monitor on metal-3
   * Resource action: galera-bundle-docker-0 monitor on metal-2
   * Resource action: galera-bundle-docker-0 monitor on metal-1
   * Resource action: galera-bundle-docker-1 monitor on metal-3
   * Resource action: galera-bundle-docker-1 monitor on metal-2
   * Resource action: galera-bundle-docker-1 monitor on metal-1
   * Resource action: galera-bundle-docker-2 monitor on metal-3
   * Resource action: galera-bundle-docker-2 monitor on metal-2
   * Resource action: galera-bundle-docker-2 monitor on metal-1
   * Resource action: haproxy-bundle-docker-0 monitor on metal-3
   * Resource action: haproxy-bundle-docker-0 monitor on metal-2
   * Resource action: haproxy-bundle-docker-0 monitor on metal-1
   * Resource action: haproxy-bundle-docker-1 monitor on metal-3
   * Resource action: haproxy-bundle-docker-1 monitor on metal-2
   * Resource action: haproxy-bundle-docker-1 monitor on metal-1
   * Resource action: haproxy-bundle-docker-2 monitor on metal-3
   * Resource action: haproxy-bundle-docker-2 monitor on metal-2
   * Resource action: haproxy-bundle-docker-2 monitor on metal-1
   * Pseudo action: redis-bundle-master_pre_notify_start_0
   * Resource action: redis-bundle-docker-0 monitor on metal-3
   * Resource action: redis-bundle-docker-0 monitor on metal-2
   * Resource action: redis-bundle-docker-0 monitor on metal-1
   * Resource action: redis-bundle-docker-1 monitor on metal-3
   * Resource action: redis-bundle-docker-1 monitor on metal-2
   * Resource action: redis-bundle-docker-1 monitor on metal-1
   * Resource action: redis-bundle-docker-2 monitor on metal-3
   * Resource action: redis-bundle-docker-2 monitor on metal-2
   * Resource action: redis-bundle-docker-2 monitor on metal-1
   * Pseudo action: redis-bundle_start_0
   * Pseudo action: haproxy-bundle_start_0
   * Pseudo action: storage-clone_confirmed-pre_notify_start_0
   * Resource action: haproxy-bundle-docker-0 start on metal-1
   * Resource action: haproxy-bundle-docker-1 start on metal-2
   * Resource action: haproxy-bundle-docker-2 start on metal-3
   * Pseudo action: redis-bundle-master_confirmed-pre_notify_start_0
   * Pseudo action: redis-bundle-master_start_0
   * Resource action: redis-bundle-docker-0 start on metal-1
   * Resource action: redis-bundle-0 monitor on metal-3
   * Resource action: redis-bundle-0 monitor on metal-2
   * Resource action: redis-bundle-0 monitor on metal-1
   * Resource action: redis-bundle-docker-1 start on metal-2
   * Resource action: redis-bundle-1 monitor on metal-3
   * Resource action: redis-bundle-1 monitor on metal-2
   * Resource action: redis-bundle-1 monitor on metal-1
   * Resource action: redis-bundle-docker-2 start on metal-3
   * Resource action: redis-bundle-2 monitor on metal-3
   * Resource action: redis-bundle-2 monitor on metal-2
   * Resource action: redis-bundle-2 monitor on metal-1
   * Pseudo action: haproxy-bundle_running_0
   * Resource action: haproxy-bundle-docker-0 monitor=60000 on metal-1
   * Resource action: haproxy-bundle-docker-1 monitor=60000 on metal-2
   * Resource action: haproxy-bundle-docker-2 monitor=60000 on metal-3
   * Resource action: redis-bundle-docker-0 monitor=60000 on metal-1
   * Resource action: redis-bundle-0 start on metal-1
   * Resource action: redis-bundle-docker-1 monitor=60000 on metal-2
   * Resource action: redis-bundle-1 start on metal-2
   * Resource action: redis-bundle-docker-2 monitor=60000 on metal-3
   * Resource action: redis-bundle-2 start on metal-3
   * Resource action: redis:0 start on redis-bundle-0
   * Resource action: redis:1 start on redis-bundle-1
   * Resource action: redis:2 start on redis-bundle-2
   * Pseudo action: redis-bundle-master_running_0
   * Resource action: redis-bundle-0 monitor=30000 on metal-1
   * Resource action: redis-bundle-1 monitor=30000 on metal-2
   * Resource action: redis-bundle-2 monitor=30000 on metal-3
   * Pseudo action: redis-bundle-master_post_notify_running_0
   * Resource action: redis:0 notify on redis-bundle-0
   * Resource action: redis:1 notify on redis-bundle-1
   * Resource action: redis:2 notify on redis-bundle-2
   * Pseudo action: redis-bundle-master_confirmed-post_notify_running_0
   * Pseudo action: redis-bundle_running_0
   * Pseudo action: redis-bundle-master_pre_notify_promote_0
   * Pseudo action: redis-bundle_promote_0
   * Resource action: redis:0 notify on redis-bundle-0
   * Resource action: redis:1 notify on redis-bundle-1
   * Resource action: redis:2 notify on redis-bundle-2
   * Pseudo action: redis-bundle-master_confirmed-pre_notify_promote_0
   * Pseudo action: redis-bundle-master_promote_0
   * Resource action: redis:0 promote on redis-bundle-0
   * Resource action: redis:1 promote on redis-bundle-1
   * Resource action: redis:2 promote on redis-bundle-2
   * Pseudo action: redis-bundle-master_promoted_0
   * Pseudo action: redis-bundle-master_post_notify_promoted_0
   * Resource action: redis:0 notify on redis-bundle-0
   * Resource action: redis:1 notify on redis-bundle-1
   * Resource action: redis:2 notify on redis-bundle-2
   * Pseudo action: redis-bundle-master_confirmed-post_notify_promoted_0
   * Pseudo action: redis-bundle_promoted_0
   * Pseudo action: storage-clone_start_0
   * Resource action: redis:0 monitor=20000 on redis-bundle-0
   * Resource action: redis:1 monitor=20000 on redis-bundle-1
   * Resource action: redis:2 monitor=20000 on redis-bundle-2
   * Resource action: storage:0 start on metal-1
   * Resource action: storage:1 start on metal-2
   * Resource action: storage:2 start on metal-3
   * Pseudo action: storage-clone_running_0
   * Pseudo action: storage-clone_post_notify_running_0
   * Resource action: storage:0 notify on metal-1
   * Resource action: storage:1 notify on metal-2
   * Resource action: storage:2 notify on metal-3
   * Pseudo action: storage-clone_confirmed-post_notify_running_0
   * Pseudo action: galera-bundle_start_0
   * Resource action: storage:0 monitor=30000 on metal-1
   * Resource action: storage:1 monitor=30000 on metal-2
   * Resource action: storage:2 monitor=30000 on metal-3
   * Pseudo action: galera-bundle-master_start_0
   * Resource action: galera-bundle-docker-0 start on metal-1
   * Resource action: galera-bundle-0 monitor on metal-3
   * Resource action: galera-bundle-0 monitor on metal-2
   * Resource action: galera-bundle-0 monitor on metal-1
   * Resource action: galera-bundle-docker-1 start on metal-2
   * Resource action: galera-bundle-1 monitor on metal-3
   * Resource action: galera-bundle-1 monitor on metal-2
   * Resource action: galera-bundle-1 monitor on metal-1
   * Resource action: galera-bundle-docker-2 start on metal-3
   * Resource action: galera-bundle-2 monitor on metal-3
   * Resource action: galera-bundle-2 monitor on metal-2
   * Resource action: galera-bundle-2 monitor on metal-1
   * Resource action: galera-bundle-docker-0 monitor=60000 on metal-1
   * Resource action: galera-bundle-0 start on metal-1
   * Resource action: galera-bundle-docker-1 monitor=60000 on metal-2
   * Resource action: galera-bundle-1 start on metal-2
   * Resource action: galera-bundle-docker-2 monitor=60000 on metal-3
   * Resource action: galera-bundle-2 start on metal-3
   * Resource action: galera:0 start on galera-bundle-0
   * Resource action: galera:1 start on galera-bundle-1
   * Resource action: galera:2 start on galera-bundle-2
   * Pseudo action: galera-bundle-master_running_0
   * Resource action: galera-bundle-0 monitor=30000 on metal-1
   * Resource action: galera-bundle-1 monitor=30000 on metal-2
   * Resource action: galera-bundle-2 monitor=30000 on metal-3
   * Pseudo action: galera-bundle_running_0
   * Resource action: galera:0 monitor=30000 on galera-bundle-0
   * Resource action: galera:0 monitor=20000 on galera-bundle-0
   * Resource action: galera:1 monitor=30000 on galera-bundle-1
   * Resource action: galera:1 monitor=20000 on galera-bundle-1
   * Resource action: galera:2 monitor=30000 on galera-bundle-2
   * Resource action: galera:2 monitor=20000 on galera-bundle-2

 Revised Cluster Status:
   * Node List:
     * Online: [ metal-1 metal-2 metal-3 ]
     * RemoteOFFLINE: [ rabbitmq-bundle-0 ]
     * GuestOnline: [ galera-bundle-0@metal-1 galera-bundle-1@metal-2 galera-bundle-2@metal-3 redis-bundle-0@metal-1 redis-bundle-1@metal-2 redis-bundle-2@metal-3 ]

   * Full List of Resources:
     * Clone Set: storage-clone [storage]:
       * Started: [ metal-1 metal-2 metal-3 ]
       * Stopped: [ rabbitmq-bundle-0 ]
     * Container bundle set: galera-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-mariadb:latest]:
       * galera-bundle-0 (ocf:heartbeat:galera): Unpromoted metal-1
       * galera-bundle-1 (ocf:heartbeat:galera): Unpromoted metal-2
       * galera-bundle-2 (ocf:heartbeat:galera): Unpromoted metal-3
     * Container bundle set: haproxy-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-haproxy:latest]:
       * haproxy-bundle-docker-0 (ocf:heartbeat:docker): Started metal-1
       * haproxy-bundle-docker-1 (ocf:heartbeat:docker): Started metal-2
       * haproxy-bundle-docker-2 (ocf:heartbeat:docker): Started metal-3
     * Container bundle set: redis-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-redis:latest]:
       * redis-bundle-0 (ocf:heartbeat:redis): Promoted metal-1
       * redis-bundle-1 (ocf:heartbeat:redis): Promoted metal-2
       * redis-bundle-2 (ocf:heartbeat:redis): Promoted metal-3
diff --git a/cts/scheduler/summary/bundle-order-stop-clone.summary b/cts/scheduler/summary/bundle-order-stop-clone.summary
index db3b9344b2..b278a00d52 100644
--- a/cts/scheduler/summary/bundle-order-stop-clone.summary
+++ b/cts/scheduler/summary/bundle-order-stop-clone.summary
@@ -1,88 +1,88 @@
 Current cluster status:
   * Node List:
     * Online: [ metal-1 metal-2 metal-3 ]
     * RemoteOFFLINE: [ rabbitmq-bundle-0 ]
     * GuestOnline: [ galera-bundle-0@metal-1 galera-bundle-1@metal-2 galera-bundle-2@metal-3 redis-bundle-0@metal-1 redis-bundle-1@metal-2 redis-bundle-2@metal-3 ]

   * Full List of Resources:
     * Clone Set: storage-clone [storage]:
       * Started: [ metal-1 metal-2 metal-3 ]
       * Stopped: [ rabbitmq-bundle-0 ]
     * Container bundle set: galera-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-mariadb:latest]:
       * galera-bundle-0 (ocf:heartbeat:galera): Unpromoted metal-1
       * galera-bundle-1 (ocf:heartbeat:galera): Unpromoted metal-2
       * galera-bundle-2 (ocf:heartbeat:galera): Unpromoted metal-3
     * Container bundle set: haproxy-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-haproxy:latest]:
       * haproxy-bundle-docker-0 (ocf:heartbeat:docker): Started metal-1
       * haproxy-bundle-docker-1 (ocf:heartbeat:docker): Started metal-2
       * haproxy-bundle-docker-2 (ocf:heartbeat:docker): Started metal-3
     * Container bundle set: redis-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-redis:latest]:
       * redis-bundle-0 (ocf:heartbeat:redis): Promoted metal-1
       * redis-bundle-1 (ocf:heartbeat:redis): Promoted metal-2
       * redis-bundle-2 (ocf:heartbeat:redis): Promoted metal-3

 Transition Summary:
-  * Stop storage:0 ( metal-1 ) due to node availability
-  * Stop galera-bundle-docker-0 ( metal-1 ) due to node availability
-  * Stop galera-bundle-0 ( metal-1 ) due to unrunnable galera-bundle-docker-0 start
+  * Stop storage:0 ( metal-1 ) due to node availability
+  * Stop galera-bundle-docker-0 ( metal-1 ) due to node availability
+  * Stop galera-bundle-0 ( metal-1 ) due to unrunnable galera-bundle-docker-0 start
   * Stop galera:0 ( Unpromoted galera-bundle-0 ) due to unrunnable galera-bundle-docker-0 start

 Executing Cluster Transition:
   * Pseudo action: storage-clone_pre_notify_stop_0
   * Resource action: galera-bundle-0 monitor on metal-3
   * Resource action: galera-bundle-0 monitor on metal-2
   * Resource action: galera-bundle-1 monitor on metal-3
   * Resource action: galera-bundle-1 monitor on metal-1
   * Resource action: galera-bundle-2 monitor on metal-2
   * Resource action: galera-bundle-2 monitor on metal-1
   * Resource action: redis-bundle-0 monitor on metal-3
   * Resource action: redis-bundle-0 monitor on metal-2
   * Resource action: redis-bundle-1 monitor on metal-3
   * Resource action: redis-bundle-1 monitor on metal-1
   * Resource action: redis-bundle-2 monitor on metal-2
   * Resource action: redis-bundle-2 monitor on metal-1
   * Pseudo action: galera-bundle_stop_0
   * Resource action: storage:0 notify on metal-1
   * Resource action: storage:1 notify on metal-2
   * Resource action: storage:2 notify on metal-3
   * Pseudo action: storage-clone_confirmed-pre_notify_stop_0
   * Pseudo action: galera-bundle-master_stop_0
   * Resource action: galera:0 stop on galera-bundle-0
   * Pseudo action: galera-bundle-master_stopped_0
   * Resource action: galera-bundle-0 stop on metal-1
   * Resource action: galera-bundle-docker-0 stop on metal-1
   * Pseudo action: galera-bundle_stopped_0
   * Pseudo action: galera-bundle_start_0
   * Pseudo action: storage-clone_stop_0
   * Pseudo action: galera-bundle-master_start_0
   * Resource action: storage:0 stop on metal-1
   * Pseudo action: storage-clone_stopped_0
   * Pseudo action: galera-bundle-master_running_0
   * Pseudo action: galera-bundle_running_0
   * Pseudo action: storage-clone_post_notify_stopped_0
   * Resource action: storage:1 notify on metal-2
   * Resource action: storage:2 notify on metal-3
   * Pseudo action: storage-clone_confirmed-post_notify_stopped_0

 Revised Cluster Status:
   * Node List:
     * Online: [ metal-1 metal-2 metal-3 ]
     * RemoteOFFLINE: [ rabbitmq-bundle-0 ]
     * GuestOnline: [ galera-bundle-1@metal-2 galera-bundle-2@metal-3 redis-bundle-0@metal-1 redis-bundle-1@metal-2 redis-bundle-2@metal-3 ]

   * Full List of Resources:
     * Clone Set: storage-clone [storage]:
       * Started: [ metal-2 metal-3 ]
       * Stopped: [ metal-1 rabbitmq-bundle-0 ]
     * Container bundle set: galera-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-mariadb:latest]:
       * galera-bundle-0 (ocf:heartbeat:galera): Stopped
       * galera-bundle-1 (ocf:heartbeat:galera): Unpromoted metal-2
       * galera-bundle-2 (ocf:heartbeat:galera): Unpromoted metal-3
     * Container bundle set: haproxy-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-haproxy:latest]:
       * haproxy-bundle-docker-0 (ocf:heartbeat:docker): Started metal-1
       * haproxy-bundle-docker-1 (ocf:heartbeat:docker): Started metal-2
       * haproxy-bundle-docker-2 (ocf:heartbeat:docker): Started metal-3
     * Container bundle set: redis-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-redis:latest]:
       * redis-bundle-0 (ocf:heartbeat:redis): Promoted metal-1
       * redis-bundle-1 (ocf:heartbeat:redis): Promoted metal-2
       * redis-bundle-2 (ocf:heartbeat:redis): Promoted metal-3
diff --git a/cts/scheduler/summary/bundle-order-stop-on-remote.summary b/cts/scheduler/summary/bundle-order-stop-on-remote.summary
index 8cd17eef61..fa4ef5798a 100644
--- a/cts/scheduler/summary/bundle-order-stop-on-remote.summary
+++ b/cts/scheduler/summary/bundle-order-stop-on-remote.summary
@@ -1,224 +1,224 @@
 Current cluster status:
   * Node List:
     * RemoteNode database-0: UNCLEAN (offline)
     * RemoteNode database-2: UNCLEAN (offline)
     * Online: [ controller-0 controller-1 controller-2 ]
     * RemoteOnline: [ database-1 messaging-0 messaging-1 messaging-2 ]
     * GuestOnline: [ galera-bundle-1@controller-2 rabbitmq-bundle-0@controller-2 rabbitmq-bundle-1@controller-2 rabbitmq-bundle-2@controller-2 redis-bundle-0@controller-0 redis-bundle-2@controller-2 ]

   * Full List of Resources:
     * database-0 (ocf:pacemaker:remote): Stopped
     * database-1 (ocf:pacemaker:remote): Started controller-2
     * database-2 (ocf:pacemaker:remote): Stopped
     * messaging-0 (ocf:pacemaker:remote): Started controller-2
     * messaging-1 (ocf:pacemaker:remote): Started controller-2
     * messaging-2 (ocf:pacemaker:remote): Started controller-2
     * Container bundle set: rabbitmq-bundle [192.168.24.1:8787/rhosp12/openstack-rabbitmq-docker:pcmklatest]:
       * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Started messaging-0
       * rabbitmq-bundle-1 (ocf:heartbeat:rabbitmq-cluster): Started messaging-1
       * rabbitmq-bundle-2 (ocf:heartbeat:rabbitmq-cluster): Started messaging-2
     * Container bundle set: galera-bundle [192.168.24.1:8787/rhosp12/openstack-mariadb-docker:pcmklatest]:
       * galera-bundle-0 (ocf:heartbeat:galera): FAILED Promoted database-0 (UNCLEAN)
       * galera-bundle-1 (ocf:heartbeat:galera): Promoted database-1
       * galera-bundle-2 (ocf:heartbeat:galera): FAILED Promoted database-2 (UNCLEAN)
     * Container bundle set: redis-bundle [192.168.24.1:8787/rhosp12/openstack-redis-docker:pcmklatest]:
       * redis-bundle-0 (ocf:heartbeat:redis): Unpromoted controller-0
       * redis-bundle-1 (ocf:heartbeat:redis): Stopped
       * redis-bundle-2 (ocf:heartbeat:redis): Unpromoted controller-2
     * ip-192.168.24.11 (ocf:heartbeat:IPaddr2): Stopped
     * ip-10.0.0.104 (ocf:heartbeat:IPaddr2): Stopped
     * ip-172.17.1.19 (ocf:heartbeat:IPaddr2): Started controller-2
     * ip-172.17.1.11 (ocf:heartbeat:IPaddr2): Stopped
     * ip-172.17.3.13 (ocf:heartbeat:IPaddr2): Stopped
     * ip-172.17.4.19 (ocf:heartbeat:IPaddr2): Started controller-2
     * Container bundle set: haproxy-bundle [192.168.24.1:8787/rhosp12/openstack-haproxy-docker:pcmklatest]:
       * haproxy-bundle-docker-0 (ocf:heartbeat:docker): Started controller-0
       * haproxy-bundle-docker-1 (ocf:heartbeat:docker): Stopped
       * haproxy-bundle-docker-2 (ocf:heartbeat:docker): Started controller-2
     * openstack-cinder-volume (systemd:openstack-cinder-volume): Stopped
     * stonith-fence_ipmilan-525400244e09 (stonith:fence_ipmilan): Started controller-2
     * stonith-fence_ipmilan-525400cdec10 (stonith:fence_ipmilan): Started controller-2
     * stonith-fence_ipmilan-525400c709f7 (stonith:fence_ipmilan): Stopped
     * stonith-fence_ipmilan-525400a7f9e0 (stonith:fence_ipmilan): Started controller-0
     * stonith-fence_ipmilan-525400a25787 (stonith:fence_ipmilan): Started controller-2
     * stonith-fence_ipmilan-5254005ea387 (stonith:fence_ipmilan): Stopped
     * stonith-fence_ipmilan-525400542c06 (stonith:fence_ipmilan): Stopped
     * stonith-fence_ipmilan-525400aac413 (stonith:fence_ipmilan): Started controller-2
     * stonith-fence_ipmilan-525400498d34 (stonith:fence_ipmilan): Stopped

 Transition Summary:
   * Fence (reboot) galera-bundle-2 (resource: galera-bundle-docker-2) 'guest is unclean'
   * Fence (reboot) galera-bundle-0 (resource: galera-bundle-docker-0) 'guest is unclean'
-  * Start database-0 ( controller-0 )
-  * Start database-2 ( controller-1 )
-  * Recover galera-bundle-docker-0 ( database-0 )
-  * Start galera-bundle-0 ( controller-0 )
-  * Recover galera:0 ( Promoted galera-bundle-0 )
-  * Recover galera-bundle-docker-2 ( database-2 )
-  * Start galera-bundle-2 ( controller-1 )
-  * Recover galera:2 ( Promoted galera-bundle-2 )
+  * Start database-0 ( controller-0 )
+  * Start database-2 ( controller-1 )
+  * Recover galera-bundle-docker-0 ( database-0 )
+  * Start galera-bundle-0 ( controller-0 )
+  * Recover galera:0 ( Promoted galera-bundle-0 )
+  * Recover galera-bundle-docker-2 ( database-2 )
+  * Start galera-bundle-2 ( controller-1 )
+  * Recover galera:2 ( Promoted galera-bundle-2 )
   * Promote redis:0 ( Unpromoted -> Promoted redis-bundle-0 )
-  * Start redis-bundle-docker-1 ( controller-1 )
-  * Start redis-bundle-1 ( controller-1 )
-  * Start redis:1 ( redis-bundle-1 )
-  * Start ip-192.168.24.11 ( controller-0 )
-  * Start ip-10.0.0.104 ( controller-1 )
-  * Start ip-172.17.1.11 ( controller-0 )
-  * Start ip-172.17.3.13 ( controller-1 )
-  * Start haproxy-bundle-docker-1 ( controller-1 )
-  * Start openstack-cinder-volume ( controller-0 )
-  * Start stonith-fence_ipmilan-525400c709f7 ( controller-1 )
-  * Start stonith-fence_ipmilan-5254005ea387 ( controller-1 )
-  * Start stonith-fence_ipmilan-525400542c06 ( controller-0 )
-  * Start stonith-fence_ipmilan-525400498d34 ( controller-1 )
+  * Start redis-bundle-docker-1 ( controller-1 )
+  * Start redis-bundle-1 ( controller-1 )
+  * Start redis:1 ( redis-bundle-1 )
+  * Start ip-192.168.24.11 ( controller-0 )
+  * Start ip-10.0.0.104 ( controller-1 )
+  * Start ip-172.17.1.11 ( controller-0 )
+  * Start ip-172.17.3.13 ( controller-1 )
+  * Start haproxy-bundle-docker-1 ( controller-1 )
+  * Start openstack-cinder-volume ( controller-0 )
+  * Start stonith-fence_ipmilan-525400c709f7 ( controller-1 )
+  * Start stonith-fence_ipmilan-5254005ea387 ( controller-1 )
+  * Start stonith-fence_ipmilan-525400542c06 ( controller-0 )
+  * Start stonith-fence_ipmilan-525400498d34 ( controller-1 )

 Executing Cluster Transition:
   * Resource action: database-0 start on controller-0
   * Resource action: database-2 start on controller-1
   * Pseudo action: redis-bundle-master_pre_notify_start_0
   * Resource action: stonith-fence_ipmilan-525400c709f7 start on controller-1
   * Resource action: stonith-fence_ipmilan-5254005ea387 start on controller-1
   * Resource action: stonith-fence_ipmilan-525400542c06 start on controller-0
   * Resource action: stonith-fence_ipmilan-525400498d34 start on controller-1
   * Pseudo action: redis-bundle_start_0
   * Pseudo action: galera-bundle_demote_0
   * Resource action: database-0 monitor=20000 on controller-0
   * Resource action: database-2 monitor=20000 on controller-1
   * Pseudo action: galera-bundle-master_demote_0
   * Resource action: redis notify on redis-bundle-0
   * Resource action: redis notify on redis-bundle-2
   * Pseudo action: redis-bundle-master_confirmed-pre_notify_start_0
   * Pseudo action: redis-bundle-master_start_0
   * Resource action: stonith-fence_ipmilan-525400c709f7 monitor=60000 on controller-1
   * Resource action: stonith-fence_ipmilan-5254005ea387 monitor=60000 on controller-1
   * Resource action: stonith-fence_ipmilan-525400542c06 monitor=60000 on controller-0
   * Resource action: stonith-fence_ipmilan-525400498d34 monitor=60000 on controller-1
   * Pseudo action: galera_demote_0
   * Pseudo action: galera_demote_0
   * Pseudo action: galera-bundle-master_demoted_0
   * Pseudo action: galera-bundle_demoted_0
   * Pseudo action: galera-bundle_stop_0
   * Resource action: galera-bundle-docker-0 stop on database-0
   * Resource action: galera-bundle-docker-2 stop on database-2
   * Pseudo action: stonith-galera-bundle-2-reboot on galera-bundle-2
   * Pseudo action: stonith-galera-bundle-0-reboot on galera-bundle-0
   * Pseudo action: galera-bundle-master_stop_0
   * Resource action: redis-bundle-docker-1 start on controller-1
   * Resource action: redis-bundle-1 monitor on controller-1
   * Resource action: ip-192.168.24.11 start on controller-0
   * Resource action: ip-10.0.0.104 start on controller-1
   * Resource action: ip-172.17.1.11 start on controller-0
   * Resource action: ip-172.17.3.13 start on controller-1
   * Resource action: openstack-cinder-volume start on controller-0
   * Pseudo action: haproxy-bundle_start_0
   * Pseudo action: galera_stop_0
   * Resource action: redis-bundle-docker-1 monitor=60000 on controller-1
   * Resource action: redis-bundle-1 start on controller-1
   * Resource action: ip-192.168.24.11 monitor=10000 on controller-0
   * Resource action: ip-10.0.0.104 monitor=10000 on controller-1
   * Resource action: ip-172.17.1.11 monitor=10000 on controller-0
   * Resource action: ip-172.17.3.13 monitor=10000 on controller-1
   * Resource action: haproxy-bundle-docker-1 start on controller-1
   * Resource action: openstack-cinder-volume monitor=60000 on controller-0
   * Pseudo action: haproxy-bundle_running_0
   * Pseudo action: galera_stop_0
   * Pseudo action: galera-bundle-master_stopped_0
   * Resource action: redis start on redis-bundle-1
   * Pseudo action: redis-bundle-master_running_0
   * Resource action: redis-bundle-1 monitor=30000 on controller-1
   * Resource action: haproxy-bundle-docker-1 monitor=60000 on controller-1
   * Pseudo action: galera-bundle_stopped_0
   * Pseudo action: galera-bundle_start_0
   * Pseudo action: galera-bundle-master_start_0
   * Resource action: galera-bundle-docker-0 start on database-0
   * Resource action: galera-bundle-0 monitor on controller-1
   * Resource action: galera-bundle-docker-2 start on database-2
   * Resource action: galera-bundle-2 monitor on controller-1
   * Pseudo action: redis-bundle-master_post_notify_running_0
   * Resource action: galera-bundle-docker-0 monitor=60000 on database-0
   * Resource action: galera-bundle-0 start on controller-0
   * Resource action: galera-bundle-docker-2 monitor=60000 on database-2
   * Resource action: galera-bundle-2 start on controller-1
   * Resource action: redis notify on redis-bundle-0
   * Resource action: redis notify on redis-bundle-1
   * Resource action: redis notify on redis-bundle-2
   * Pseudo action: redis-bundle-master_confirmed-post_notify_running_0
   * Pseudo action: redis-bundle_running_0
   * Resource action: galera start on galera-bundle-0
   * Resource action: galera start on galera-bundle-2
   * Pseudo action: galera-bundle-master_running_0
   * Resource action: galera-bundle-0 monitor=30000 on controller-0
   * Resource action: galera-bundle-2 monitor=30000 on controller-1
   * Pseudo action: redis-bundle-master_pre_notify_promote_0
   * Pseudo action: redis-bundle_promote_0
   * Pseudo action: galera-bundle_running_0
   * Resource action: redis notify on redis-bundle-0
   * Resource action: redis notify on redis-bundle-1
   * Resource action: redis notify on redis-bundle-2
   * Pseudo action: redis-bundle-master_confirmed-pre_notify_promote_0
   * Pseudo action: redis-bundle-master_promote_0
   * Pseudo action: galera-bundle_promote_0
   * Pseudo action: galera-bundle-master_promote_0
   * Resource action: redis promote on redis-bundle-0
   * Pseudo action: redis-bundle-master_promoted_0
   * Resource action: galera promote on galera-bundle-0
   * Resource action: galera promote on galera-bundle-2
   * Pseudo action: galera-bundle-master_promoted_0
   * Pseudo action: redis-bundle-master_post_notify_promoted_0
   * Pseudo action: galera-bundle_promoted_0
   * Resource action: galera monitor=10000 on galera-bundle-0
   * Resource action: galera monitor=10000 on galera-bundle-2
   * Resource action: redis notify on redis-bundle-0
   * Resource action: redis notify on redis-bundle-1
   * Resource action: redis notify on redis-bundle-2
   * Pseudo action: redis-bundle-master_confirmed-post_notify_promoted_0
   * Pseudo action: redis-bundle_promoted_0
   * Resource action: redis monitor=20000 on redis-bundle-0
   * Resource action: redis monitor=60000 on redis-bundle-1
   * Resource action: redis monitor=45000 on redis-bundle-1

 Revised Cluster Status:
   * Node List:
     * Online: [ controller-0 controller-1 controller-2 ]
     * RemoteOnline: [ database-0 database-1 database-2 messaging-0 messaging-1 messaging-2 ]
     * GuestOnline: [ galera-bundle-0@controller-0 galera-bundle-1@controller-2 galera-bundle-2@controller-1 rabbitmq-bundle-0@controller-2 rabbitmq-bundle-1@controller-2 rabbitmq-bundle-2@controller-2 redis-bundle-0@controller-0 redis-bundle-1@controller-1 redis-bundle-2@controller-2 ]

   * Full List of Resources:
     * database-0 (ocf:pacemaker:remote): Started controller-0
     * database-1 (ocf:pacemaker:remote): Started controller-2
     * database-2 (ocf:pacemaker:remote): Started controller-1
     * messaging-0 (ocf:pacemaker:remote): Started controller-2
     * messaging-1 (ocf:pacemaker:remote): Started controller-2
     * messaging-2 (ocf:pacemaker:remote): Started controller-2
     * Container bundle set: rabbitmq-bundle [192.168.24.1:8787/rhosp12/openstack-rabbitmq-docker:pcmklatest]:
       * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Started messaging-0
       * rabbitmq-bundle-1 (ocf:heartbeat:rabbitmq-cluster): Started messaging-1
       * rabbitmq-bundle-2 (ocf:heartbeat:rabbitmq-cluster): Started messaging-2
     * Container bundle set: galera-bundle [192.168.24.1:8787/rhosp12/openstack-mariadb-docker:pcmklatest]:
       * galera-bundle-0 (ocf:heartbeat:galera): Promoted database-0
       * galera-bundle-1 (ocf:heartbeat:galera): Promoted database-1
       * galera-bundle-2 (ocf:heartbeat:galera): Promoted database-2
     * Container bundle set: redis-bundle [192.168.24.1:8787/rhosp12/openstack-redis-docker:pcmklatest]:
       * redis-bundle-0 (ocf:heartbeat:redis): Promoted controller-0
       * redis-bundle-1 (ocf:heartbeat:redis): Unpromoted controller-1
       * redis-bundle-2 (ocf:heartbeat:redis): Unpromoted controller-2
     * ip-192.168.24.11 (ocf:heartbeat:IPaddr2): Started controller-0
     * ip-10.0.0.104 (ocf:heartbeat:IPaddr2): Started controller-1
     * ip-172.17.1.19 (ocf:heartbeat:IPaddr2): Started controller-2
     * ip-172.17.1.11 (ocf:heartbeat:IPaddr2): Started controller-0
     * ip-172.17.3.13 (ocf:heartbeat:IPaddr2): Started controller-1
     * ip-172.17.4.19 (ocf:heartbeat:IPaddr2): Started controller-2
     * Container bundle set: haproxy-bundle [192.168.24.1:8787/rhosp12/openstack-haproxy-docker:pcmklatest]:
       * haproxy-bundle-docker-0 (ocf:heartbeat:docker): Started controller-0
       * haproxy-bundle-docker-1 (ocf:heartbeat:docker): Started controller-1
       * haproxy-bundle-docker-2 (ocf:heartbeat:docker): Started controller-2
     * openstack-cinder-volume (systemd:openstack-cinder-volume): Started controller-0
     * stonith-fence_ipmilan-525400244e09 (stonith:fence_ipmilan): Started controller-2
     * stonith-fence_ipmilan-525400cdec10 (stonith:fence_ipmilan): Started controller-2
     * stonith-fence_ipmilan-525400c709f7 (stonith:fence_ipmilan): Started controller-1
     * stonith-fence_ipmilan-525400a7f9e0 (stonith:fence_ipmilan): Started controller-0
     * stonith-fence_ipmilan-525400a25787 (stonith:fence_ipmilan): Started controller-2
     * stonith-fence_ipmilan-5254005ea387 (stonith:fence_ipmilan): Started controller-1
     * stonith-fence_ipmilan-525400542c06 (stonith:fence_ipmilan): Started controller-0
     * stonith-fence_ipmilan-525400aac413 (stonith:fence_ipmilan): Started controller-2
     * stonith-fence_ipmilan-525400498d34 (stonith:fence_ipmilan): Started controller-1
diff --git a/cts/scheduler/summary/bundle-order-stop.summary b/cts/scheduler/summary/bundle-order-stop.summary
index 0954c59992..4313a6ce00 100644
--- a/cts/scheduler/summary/bundle-order-stop.summary
+++ b/cts/scheduler/summary/bundle-order-stop.summary
@@ -1,127 +1,127 @@
 Current cluster status:
   * Node List:
     * Online: [ undercloud ]
     * GuestOnline: [ galera-bundle-0@undercloud rabbitmq-bundle-0@undercloud redis-bundle-0@undercloud ]

   * Full List of Resources:
     * Container bundle: rabbitmq-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-rabbitmq:latest]:
       * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Started undercloud
     * Container bundle: galera-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-mariadb:latest]:
       * galera-bundle-0 (ocf:heartbeat:galera): Promoted undercloud
     * Container bundle: redis-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-redis:latest]:
       * redis-bundle-0 (ocf:heartbeat:redis): Promoted undercloud
     * ip-192.168.122.254 (ocf:heartbeat:IPaddr2): Started undercloud
     * ip-192.168.122.250 (ocf:heartbeat:IPaddr2): Started undercloud
     * ip-192.168.122.249 (ocf:heartbeat:IPaddr2): Started undercloud
     * ip-192.168.122.253 (ocf:heartbeat:IPaddr2): Started undercloud
     * ip-192.168.122.247 (ocf:heartbeat:IPaddr2): Started undercloud
     * ip-192.168.122.248 (ocf:heartbeat:IPaddr2): Started undercloud
     * Container bundle:
haproxy-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-haproxy:latest]: * haproxy-bundle-docker-0 (ocf:heartbeat:docker): Started undercloud * Container bundle: openstack-cinder-volume [192.168.24.1:8787/tripleoupstream/centos-binary-cinder-volume:latest]: * openstack-cinder-volume-docker-0 (ocf:heartbeat:docker): Started undercloud Transition Summary: - * Stop rabbitmq-bundle-docker-0 ( undercloud ) due to node availability - * Stop rabbitmq-bundle-0 ( undercloud ) due to node availability - * Stop rabbitmq:0 ( rabbitmq-bundle-0 ) due to unrunnable rabbitmq-bundle-0 start - * Stop galera-bundle-docker-0 ( undercloud ) due to node availability - * Stop galera-bundle-0 ( undercloud ) due to node availability + * Stop rabbitmq-bundle-docker-0 ( undercloud ) due to node availability + * Stop rabbitmq-bundle-0 ( undercloud ) due to node availability + * Stop rabbitmq:0 ( rabbitmq-bundle-0 ) due to unrunnable rabbitmq-bundle-0 start + * Stop galera-bundle-docker-0 ( undercloud ) due to node availability + * Stop galera-bundle-0 ( undercloud ) due to node availability * Stop galera:0 ( Promoted galera-bundle-0 ) due to unrunnable galera-bundle-0 start - * Stop redis-bundle-docker-0 ( undercloud ) due to node availability - * Stop redis-bundle-0 ( undercloud ) due to node availability + * Stop redis-bundle-docker-0 ( undercloud ) due to node availability + * Stop redis-bundle-0 ( undercloud ) due to node availability * Stop redis:0 ( Promoted redis-bundle-0 ) due to unrunnable redis-bundle-0 start - * Stop ip-192.168.122.254 ( undercloud ) due to node availability - * Stop ip-192.168.122.250 ( undercloud ) due to node availability - * Stop ip-192.168.122.249 ( undercloud ) due to node availability - * Stop ip-192.168.122.253 ( undercloud ) due to node availability - * Stop ip-192.168.122.247 ( undercloud ) due to node availability - * Stop ip-192.168.122.248 ( undercloud ) due to node availability - * Stop haproxy-bundle-docker-0 ( undercloud ) due to node 
availability - * Stop openstack-cinder-volume-docker-0 ( undercloud ) due to node availability + * Stop ip-192.168.122.254 ( undercloud ) due to node availability + * Stop ip-192.168.122.250 ( undercloud ) due to node availability + * Stop ip-192.168.122.249 ( undercloud ) due to node availability + * Stop ip-192.168.122.253 ( undercloud ) due to node availability + * Stop ip-192.168.122.247 ( undercloud ) due to node availability + * Stop ip-192.168.122.248 ( undercloud ) due to node availability + * Stop haproxy-bundle-docker-0 ( undercloud ) due to node availability + * Stop openstack-cinder-volume-docker-0 ( undercloud ) due to node availability Executing Cluster Transition: * Pseudo action: rabbitmq-bundle-clone_pre_notify_stop_0 * Resource action: galera cancel=10000 on galera-bundle-0 * Resource action: redis cancel=20000 on redis-bundle-0 * Pseudo action: redis-bundle-master_pre_notify_demote_0 * Pseudo action: openstack-cinder-volume_stop_0 * Pseudo action: redis-bundle_demote_0 * Pseudo action: galera-bundle_demote_0 * Pseudo action: rabbitmq-bundle_stop_0 * Resource action: rabbitmq notify on rabbitmq-bundle-0 * Pseudo action: rabbitmq-bundle-clone_confirmed-pre_notify_stop_0 * Pseudo action: rabbitmq-bundle-clone_stop_0 * Pseudo action: galera-bundle-master_demote_0 * Resource action: redis notify on redis-bundle-0 * Pseudo action: redis-bundle-master_confirmed-pre_notify_demote_0 * Pseudo action: redis-bundle-master_demote_0 * Resource action: openstack-cinder-volume-docker-0 stop on undercloud * Pseudo action: openstack-cinder-volume_stopped_0 * Resource action: rabbitmq stop on rabbitmq-bundle-0 * Pseudo action: rabbitmq-bundle-clone_stopped_0 * Resource action: rabbitmq-bundle-0 stop on undercloud * Resource action: galera demote on galera-bundle-0 * Pseudo action: galera-bundle-master_demoted_0 * Resource action: redis demote on redis-bundle-0 * Pseudo action: redis-bundle-master_demoted_0 * Pseudo action: galera-bundle_demoted_0 * Pseudo action: 
galera-bundle_stop_0 * Pseudo action: rabbitmq-bundle-clone_post_notify_stopped_0 * Resource action: rabbitmq-bundle-docker-0 stop on undercloud * Pseudo action: galera-bundle-master_stop_0 * Pseudo action: redis-bundle-master_post_notify_demoted_0 * Pseudo action: rabbitmq-bundle-clone_confirmed-post_notify_stopped_0 * Resource action: galera stop on galera-bundle-0 * Pseudo action: galera-bundle-master_stopped_0 * Resource action: galera-bundle-0 stop on undercloud * Resource action: redis notify on redis-bundle-0 * Pseudo action: redis-bundle-master_confirmed-post_notify_demoted_0 * Pseudo action: redis-bundle-master_pre_notify_stop_0 * Pseudo action: redis-bundle_demoted_0 * Pseudo action: rabbitmq-bundle_stopped_0 * Resource action: galera-bundle-docker-0 stop on undercloud * Resource action: redis notify on redis-bundle-0 * Pseudo action: redis-bundle-master_confirmed-pre_notify_stop_0 * Pseudo action: galera-bundle_stopped_0 * Pseudo action: redis-bundle_stop_0 * Pseudo action: redis-bundle-master_stop_0 * Resource action: redis stop on redis-bundle-0 * Pseudo action: redis-bundle-master_stopped_0 * Resource action: redis-bundle-0 stop on undercloud * Pseudo action: redis-bundle-master_post_notify_stopped_0 * Resource action: redis-bundle-docker-0 stop on undercloud * Pseudo action: redis-bundle-master_confirmed-post_notify_stopped_0 * Pseudo action: redis-bundle_stopped_0 * Pseudo action: haproxy-bundle_stop_0 * Resource action: haproxy-bundle-docker-0 stop on undercloud * Pseudo action: haproxy-bundle_stopped_0 * Resource action: ip-192.168.122.254 stop on undercloud * Resource action: ip-192.168.122.250 stop on undercloud * Resource action: ip-192.168.122.249 stop on undercloud * Resource action: ip-192.168.122.253 stop on undercloud * Resource action: ip-192.168.122.247 stop on undercloud * Resource action: ip-192.168.122.248 stop on undercloud * Cluster action: do_shutdown on undercloud Revised Cluster Status: * Node List: * Online: [ undercloud ] * 
Full List of Resources: * Container bundle: rabbitmq-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-rabbitmq:latest]: * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Stopped * Container bundle: galera-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-mariadb:latest]: * galera-bundle-0 (ocf:heartbeat:galera): Stopped * Container bundle: redis-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-redis:latest]: * redis-bundle-0 (ocf:heartbeat:redis): Stopped * ip-192.168.122.254 (ocf:heartbeat:IPaddr2): Stopped * ip-192.168.122.250 (ocf:heartbeat:IPaddr2): Stopped * ip-192.168.122.249 (ocf:heartbeat:IPaddr2): Stopped * ip-192.168.122.253 (ocf:heartbeat:IPaddr2): Stopped * ip-192.168.122.247 (ocf:heartbeat:IPaddr2): Stopped * ip-192.168.122.248 (ocf:heartbeat:IPaddr2): Stopped * Container bundle: haproxy-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-haproxy:latest]: * haproxy-bundle-docker-0 (ocf:heartbeat:docker): Stopped * Container bundle: openstack-cinder-volume [192.168.24.1:8787/tripleoupstream/centos-binary-cinder-volume:latest]: * openstack-cinder-volume-docker-0 (ocf:heartbeat:docker): Stopped diff --git a/cts/scheduler/summary/cancel-behind-moving-remote.summary b/cts/scheduler/summary/cancel-behind-moving-remote.summary index 3c16b75ea0..00524c893d 100644 --- a/cts/scheduler/summary/cancel-behind-moving-remote.summary +++ b/cts/scheduler/summary/cancel-behind-moving-remote.summary @@ -1,211 +1,211 @@ Using the original execution date of: 2021-02-15 01:40:51Z Current cluster status: * Node List: * Online: [ controller-0 controller-1 controller-2 database-0 database-1 database-2 messaging-0 messaging-2 ] * OFFLINE: [ messaging-1 ] * RemoteOnline: [ compute-0 compute-1 ] * GuestOnline: [ galera-bundle-0@database-0 galera-bundle-1@database-1 galera-bundle-2@database-2 ovn-dbs-bundle-1@controller-2 ovn-dbs-bundle-2@controller-1 rabbitmq-bundle-0@messaging-0 rabbitmq-bundle-2@messaging-2 redis-bundle-0@controller-2 
redis-bundle-1@controller-0 redis-bundle-2@controller-1 ] * Full List of Resources: * compute-0 (ocf:pacemaker:remote): Started controller-1 * compute-1 (ocf:pacemaker:remote): Started controller-2 * Container bundle set: galera-bundle [cluster.common.tag/rhosp16-openstack-mariadb:pcmklatest]: * galera-bundle-0 (ocf:heartbeat:galera): Promoted database-0 * galera-bundle-1 (ocf:heartbeat:galera): Promoted database-1 * galera-bundle-2 (ocf:heartbeat:galera): Promoted database-2 * Container bundle set: rabbitmq-bundle [cluster.common.tag/rhosp16-openstack-rabbitmq:pcmklatest]: * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Started messaging-0 * rabbitmq-bundle-1 (ocf:heartbeat:rabbitmq-cluster): Stopped * rabbitmq-bundle-2 (ocf:heartbeat:rabbitmq-cluster): Started messaging-2 * Container bundle set: redis-bundle [cluster.common.tag/rhosp16-openstack-redis:pcmklatest]: * redis-bundle-0 (ocf:heartbeat:redis): Promoted controller-2 * redis-bundle-1 (ocf:heartbeat:redis): Unpromoted controller-0 * redis-bundle-2 (ocf:heartbeat:redis): Unpromoted controller-1 * ip-192.168.24.150 (ocf:heartbeat:IPaddr2): Started controller-1 * ip-10.0.0.150 (ocf:heartbeat:IPaddr2): Started controller-2 * ip-172.17.1.151 (ocf:heartbeat:IPaddr2): Started controller-1 * ip-172.17.1.150 (ocf:heartbeat:IPaddr2): Started controller-1 * ip-172.17.3.150 (ocf:heartbeat:IPaddr2): Started controller-1 * ip-172.17.4.150 (ocf:heartbeat:IPaddr2): Started controller-2 * Container bundle set: haproxy-bundle [cluster.common.tag/rhosp16-openstack-haproxy:pcmklatest]: * haproxy-bundle-podman-0 (ocf:heartbeat:podman): Started controller-2 * haproxy-bundle-podman-1 (ocf:heartbeat:podman): Started controller-0 * haproxy-bundle-podman-2 (ocf:heartbeat:podman): Started controller-1 * Container bundle set: ovn-dbs-bundle [cluster.common.tag/rhosp16-openstack-ovn-northd:pcmklatest]: * ovn-dbs-bundle-0 (ocf:ovn:ovndb-servers): Stopped * ovn-dbs-bundle-1 (ocf:ovn:ovndb-servers): Unpromoted controller-2 * 
ovn-dbs-bundle-2 (ocf:ovn:ovndb-servers): Unpromoted controller-1 * ip-172.17.1.87 (ocf:heartbeat:IPaddr2): Stopped * stonith-fence_compute-fence-nova (stonith:fence_compute): Started database-1 * Clone Set: compute-unfence-trigger-clone [compute-unfence-trigger]: * Started: [ compute-0 compute-1 ] * Stopped: [ controller-0 controller-1 controller-2 database-0 database-1 database-2 messaging-0 messaging-1 messaging-2 ] * nova-evacuate (ocf:openstack:NovaEvacuate): Started database-2 * stonith-fence_ipmilan-525400aa1373 (stonith:fence_ipmilan): Started messaging-0 * stonith-fence_ipmilan-525400dc23e0 (stonith:fence_ipmilan): Started messaging-2 * stonith-fence_ipmilan-52540040bb56 (stonith:fence_ipmilan): Started messaging-2 * stonith-fence_ipmilan-525400addd38 (stonith:fence_ipmilan): Started messaging-0 * stonith-fence_ipmilan-52540078fb07 (stonith:fence_ipmilan): Started database-0 * stonith-fence_ipmilan-525400ea59b0 (stonith:fence_ipmilan): Started database-1 * stonith-fence_ipmilan-525400066e50 (stonith:fence_ipmilan): Started database-2 * stonith-fence_ipmilan-525400e1534e (stonith:fence_ipmilan): Started database-1 * stonith-fence_ipmilan-52540060dbba (stonith:fence_ipmilan): Started database-2 * stonith-fence_ipmilan-525400e018b6 (stonith:fence_ipmilan): Started database-0 * stonith-fence_ipmilan-525400c87cdb (stonith:fence_ipmilan): Started messaging-0 * Container bundle: openstack-cinder-volume [cluster.common.tag/rhosp16-openstack-cinder-volume:pcmklatest]: * openstack-cinder-volume-podman-0 (ocf:heartbeat:podman): Started controller-2 Transition Summary: - * Start rabbitmq-bundle-1 ( controller-0 ) due to unrunnable rabbitmq-bundle-podman-1 start (blocked) - * Start rabbitmq:1 ( rabbitmq-bundle-1 ) due to unrunnable rabbitmq-bundle-podman-1 start (blocked) - * Start ovn-dbs-bundle-podman-0 ( controller-2 ) - * Start ovn-dbs-bundle-0 ( controller-2 ) - * Start ovndb_servers:0 ( ovn-dbs-bundle-0 ) - * Move ovn-dbs-bundle-podman-1 ( controller-2 -> 
controller-0 ) - * Move ovn-dbs-bundle-1 ( controller-2 -> controller-0 ) + * Start rabbitmq-bundle-1 ( controller-0 ) due to unrunnable rabbitmq-bundle-podman-1 start (blocked) + * Start rabbitmq:1 ( rabbitmq-bundle-1 ) due to unrunnable rabbitmq-bundle-podman-1 start (blocked) + * Start ovn-dbs-bundle-podman-0 ( controller-2 ) + * Start ovn-dbs-bundle-0 ( controller-2 ) + * Start ovndb_servers:0 ( ovn-dbs-bundle-0 ) + * Move ovn-dbs-bundle-podman-1 ( controller-2 -> controller-0 ) + * Move ovn-dbs-bundle-1 ( controller-2 -> controller-0 ) * Restart ovndb_servers:1 ( Unpromoted -> Promoted ovn-dbs-bundle-1 ) due to required ovn-dbs-bundle-podman-1 start - * Start ip-172.17.1.87 ( controller-0 ) - * Move stonith-fence_ipmilan-52540040bb56 ( messaging-2 -> database-0 ) - * Move stonith-fence_ipmilan-525400e1534e ( database-1 -> messaging-2 ) + * Start ip-172.17.1.87 ( controller-0 ) + * Move stonith-fence_ipmilan-52540040bb56 ( messaging-2 -> database-0 ) + * Move stonith-fence_ipmilan-525400e1534e ( database-1 -> messaging-2 ) Executing Cluster Transition: * Pseudo action: rabbitmq-bundle-clone_pre_notify_start_0 * Resource action: ovndb_servers cancel=30000 on ovn-dbs-bundle-1 * Pseudo action: ovn-dbs-bundle-master_pre_notify_stop_0 * Cluster action: clear_failcount for ovn-dbs-bundle-0 on controller-0 * Cluster action: clear_failcount for ovn-dbs-bundle-1 on controller-2 * Cluster action: clear_failcount for stonith-fence_compute-fence-nova on messaging-0 * Cluster action: clear_failcount for nova-evacuate on messaging-0 * Cluster action: clear_failcount for stonith-fence_ipmilan-525400aa1373 on database-0 * Cluster action: clear_failcount for stonith-fence_ipmilan-525400dc23e0 on database-2 * Resource action: stonith-fence_ipmilan-52540040bb56 stop on messaging-2 * Cluster action: clear_failcount for stonith-fence_ipmilan-52540078fb07 on messaging-2 * Cluster action: clear_failcount for stonith-fence_ipmilan-525400ea59b0 on database-0 * Cluster action: 
clear_failcount for stonith-fence_ipmilan-525400066e50 on messaging-2 * Resource action: stonith-fence_ipmilan-525400e1534e stop on database-1 * Cluster action: clear_failcount for stonith-fence_ipmilan-525400e1534e on database-2 * Cluster action: clear_failcount for stonith-fence_ipmilan-52540060dbba on messaging-0 * Cluster action: clear_failcount for stonith-fence_ipmilan-525400e018b6 on database-0 * Cluster action: clear_failcount for stonith-fence_ipmilan-525400c87cdb on database-2 * Pseudo action: ovn-dbs-bundle_stop_0 * Pseudo action: rabbitmq-bundle_start_0 * Pseudo action: rabbitmq-bundle-clone_confirmed-pre_notify_start_0 * Pseudo action: rabbitmq-bundle-clone_start_0 * Resource action: ovndb_servers notify on ovn-dbs-bundle-1 * Resource action: ovndb_servers notify on ovn-dbs-bundle-2 * Pseudo action: ovn-dbs-bundle-master_confirmed-pre_notify_stop_0 * Pseudo action: ovn-dbs-bundle-master_stop_0 * Resource action: stonith-fence_ipmilan-52540040bb56 start on database-0 * Resource action: stonith-fence_ipmilan-525400e1534e start on messaging-2 * Pseudo action: rabbitmq-bundle-clone_running_0 * Resource action: ovndb_servers stop on ovn-dbs-bundle-1 * Pseudo action: ovn-dbs-bundle-master_stopped_0 * Resource action: ovn-dbs-bundle-1 stop on controller-2 * Resource action: stonith-fence_ipmilan-52540040bb56 monitor=60000 on database-0 * Resource action: stonith-fence_ipmilan-525400e1534e monitor=60000 on messaging-2 * Pseudo action: rabbitmq-bundle-clone_post_notify_running_0 * Pseudo action: ovn-dbs-bundle-master_post_notify_stopped_0 * Resource action: ovn-dbs-bundle-podman-1 stop on controller-2 * Pseudo action: rabbitmq-bundle-clone_confirmed-post_notify_running_0 * Resource action: ovndb_servers notify on ovn-dbs-bundle-2 * Pseudo action: ovn-dbs-bundle-master_confirmed-post_notify_stopped_0 * Pseudo action: ovn-dbs-bundle-master_pre_notify_start_0 * Pseudo action: ovn-dbs-bundle_stopped_0 * Pseudo action: ovn-dbs-bundle_start_0 * Pseudo action: 
rabbitmq-bundle_running_0 * Resource action: ovndb_servers notify on ovn-dbs-bundle-2 * Pseudo action: ovn-dbs-bundle-master_confirmed-pre_notify_start_0 * Pseudo action: ovn-dbs-bundle-master_start_0 * Resource action: ovn-dbs-bundle-podman-0 start on controller-2 * Resource action: ovn-dbs-bundle-0 start on controller-2 * Resource action: ovn-dbs-bundle-podman-1 start on controller-0 * Resource action: ovn-dbs-bundle-1 start on controller-0 * Resource action: ovndb_servers start on ovn-dbs-bundle-0 * Resource action: ovndb_servers start on ovn-dbs-bundle-1 * Pseudo action: ovn-dbs-bundle-master_running_0 * Resource action: ovn-dbs-bundle-podman-0 monitor=60000 on controller-2 * Resource action: ovn-dbs-bundle-0 monitor=30000 on controller-2 * Resource action: ovn-dbs-bundle-podman-1 monitor=60000 on controller-0 * Resource action: ovn-dbs-bundle-1 monitor=30000 on controller-0 * Pseudo action: ovn-dbs-bundle-master_post_notify_running_0 * Resource action: ovndb_servers notify on ovn-dbs-bundle-0 * Resource action: ovndb_servers notify on ovn-dbs-bundle-1 * Resource action: ovndb_servers notify on ovn-dbs-bundle-2 * Pseudo action: ovn-dbs-bundle-master_confirmed-post_notify_running_0 * Pseudo action: ovn-dbs-bundle_running_0 * Pseudo action: ovn-dbs-bundle-master_pre_notify_promote_0 * Pseudo action: ovn-dbs-bundle_promote_0 * Resource action: ovndb_servers notify on ovn-dbs-bundle-0 * Resource action: ovndb_servers notify on ovn-dbs-bundle-1 * Resource action: ovndb_servers notify on ovn-dbs-bundle-2 * Pseudo action: ovn-dbs-bundle-master_confirmed-pre_notify_promote_0 * Pseudo action: ovn-dbs-bundle-master_promote_0 * Resource action: ovndb_servers promote on ovn-dbs-bundle-1 * Pseudo action: ovn-dbs-bundle-master_promoted_0 * Pseudo action: ovn-dbs-bundle-master_post_notify_promoted_0 * Resource action: ovndb_servers notify on ovn-dbs-bundle-0 * Resource action: ovndb_servers notify on ovn-dbs-bundle-1 * Resource action: ovndb_servers notify on ovn-dbs-bundle-2 
  * Pseudo action: ovn-dbs-bundle-master_confirmed-post_notify_promoted_0
  * Pseudo action: ovn-dbs-bundle_promoted_0
  * Resource action: ovndb_servers monitor=30000 on ovn-dbs-bundle-0
  * Resource action: ovndb_servers monitor=10000 on ovn-dbs-bundle-1
  * Resource action: ip-172.17.1.87 start on controller-0
  * Resource action: ip-172.17.1.87 monitor=10000 on controller-0
Using the original execution date of: 2021-02-15 01:40:51Z

Revised Cluster Status:
  * Node List:
    * Online: [ controller-0 controller-1 controller-2 database-0 database-1 database-2 messaging-0 messaging-2 ]
    * OFFLINE: [ messaging-1 ]
    * RemoteOnline: [ compute-0 compute-1 ]
    * GuestOnline: [ galera-bundle-0@database-0 galera-bundle-1@database-1 galera-bundle-2@database-2 ovn-dbs-bundle-0@controller-2 ovn-dbs-bundle-1@controller-0 ovn-dbs-bundle-2@controller-1 rabbitmq-bundle-0@messaging-0 rabbitmq-bundle-2@messaging-2 redis-bundle-0@controller-2 redis-bundle-1@controller-0 redis-bundle-2@controller-1 ]

  * Full List of Resources:
    * compute-0 (ocf:pacemaker:remote): Started controller-1
    * compute-1 (ocf:pacemaker:remote): Started controller-2
    * Container bundle set: galera-bundle [cluster.common.tag/rhosp16-openstack-mariadb:pcmklatest]:
      * galera-bundle-0 (ocf:heartbeat:galera): Promoted database-0
      * galera-bundle-1 (ocf:heartbeat:galera): Promoted database-1
      * galera-bundle-2 (ocf:heartbeat:galera): Promoted database-2
    * Container bundle set: rabbitmq-bundle [cluster.common.tag/rhosp16-openstack-rabbitmq:pcmklatest]:
      * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Started messaging-0
      * rabbitmq-bundle-1 (ocf:heartbeat:rabbitmq-cluster): Stopped
      * rabbitmq-bundle-2 (ocf:heartbeat:rabbitmq-cluster): Started messaging-2
    * Container bundle set: redis-bundle [cluster.common.tag/rhosp16-openstack-redis:pcmklatest]:
      * redis-bundle-0 (ocf:heartbeat:redis): Promoted controller-2
      * redis-bundle-1 (ocf:heartbeat:redis): Unpromoted controller-0
      * redis-bundle-2 (ocf:heartbeat:redis): Unpromoted controller-1
    * ip-192.168.24.150 (ocf:heartbeat:IPaddr2): Started controller-1
    * ip-10.0.0.150 (ocf:heartbeat:IPaddr2): Started controller-2
    * ip-172.17.1.151 (ocf:heartbeat:IPaddr2): Started controller-1
    * ip-172.17.1.150 (ocf:heartbeat:IPaddr2): Started controller-1
    * ip-172.17.3.150 (ocf:heartbeat:IPaddr2): Started controller-1
    * ip-172.17.4.150 (ocf:heartbeat:IPaddr2): Started controller-2
    * Container bundle set: haproxy-bundle [cluster.common.tag/rhosp16-openstack-haproxy:pcmklatest]:
      * haproxy-bundle-podman-0 (ocf:heartbeat:podman): Started controller-2
      * haproxy-bundle-podman-1 (ocf:heartbeat:podman): Started controller-0
      * haproxy-bundle-podman-2 (ocf:heartbeat:podman): Started controller-1
    * Container bundle set: ovn-dbs-bundle [cluster.common.tag/rhosp16-openstack-ovn-northd:pcmklatest]:
      * ovn-dbs-bundle-0 (ocf:ovn:ovndb-servers): Unpromoted controller-2
      * ovn-dbs-bundle-1 (ocf:ovn:ovndb-servers): Promoted controller-0
      * ovn-dbs-bundle-2 (ocf:ovn:ovndb-servers): Unpromoted controller-1
    * ip-172.17.1.87 (ocf:heartbeat:IPaddr2): Started controller-0
    * stonith-fence_compute-fence-nova (stonith:fence_compute): Started database-1
    * Clone Set: compute-unfence-trigger-clone [compute-unfence-trigger]:
      * Started: [ compute-0 compute-1 ]
      * Stopped: [ controller-0 controller-1 controller-2 database-0 database-1 database-2 messaging-0 messaging-1 messaging-2 ]
    * nova-evacuate (ocf:openstack:NovaEvacuate): Started database-2
    * stonith-fence_ipmilan-525400aa1373 (stonith:fence_ipmilan): Started messaging-0
    * stonith-fence_ipmilan-525400dc23e0 (stonith:fence_ipmilan): Started messaging-2
    * stonith-fence_ipmilan-52540040bb56 (stonith:fence_ipmilan): Started database-0
    * stonith-fence_ipmilan-525400addd38 (stonith:fence_ipmilan): Started messaging-0
    * stonith-fence_ipmilan-52540078fb07 (stonith:fence_ipmilan): Started database-0
    * stonith-fence_ipmilan-525400ea59b0 (stonith:fence_ipmilan): Started database-1
    * stonith-fence_ipmilan-525400066e50 (stonith:fence_ipmilan): Started database-2
    * stonith-fence_ipmilan-525400e1534e (stonith:fence_ipmilan): Started messaging-2
    * stonith-fence_ipmilan-52540060dbba (stonith:fence_ipmilan): Started database-2
    * stonith-fence_ipmilan-525400e018b6 (stonith:fence_ipmilan): Started database-0
    * stonith-fence_ipmilan-525400c87cdb (stonith:fence_ipmilan): Started messaging-0
    * Container bundle: openstack-cinder-volume [cluster.common.tag/rhosp16-openstack-cinder-volume:pcmklatest]:
      * openstack-cinder-volume-podman-0 (ocf:heartbeat:podman): Started controller-2
diff --git a/cts/scheduler/summary/clone-no-shuffle.summary b/cts/scheduler/summary/clone-no-shuffle.summary
index 9dbee84c2f..e9b61b6f5f 100644
--- a/cts/scheduler/summary/clone-no-shuffle.summary
+++ b/cts/scheduler/summary/clone-no-shuffle.summary
@@ -1,61 +1,61 @@
Current cluster status:
  * Node List:
    * Online: [ dktest1sles10 dktest2sles10 ]

  * Full List of Resources:
    * stonith-1 (stonith:dummy): Stopped
    * Clone Set: ms-drbd1 [drbd1] (promotable):
      * Promoted: [ dktest2sles10 ]
      * Stopped: [ dktest1sles10 ]
    * testip (ocf:heartbeat:IPaddr2): Started dktest2sles10

Transition Summary:
- * Start stonith-1 ( dktest1sles10 )
+ * Start stonith-1 ( dktest1sles10 )
  * Stop drbd1:0 ( Promoted dktest2sles10 ) due to node availability
- * Start drbd1:1 ( dktest1sles10 )
- * Stop testip ( dktest2sles10 ) due to node availability
+ * Start drbd1:1 ( dktest1sles10 )
+ * Stop testip ( dktest2sles10 ) due to node availability

Executing Cluster Transition:
  * Resource action: stonith-1 monitor on dktest2sles10
  * Resource action: stonith-1 monitor on dktest1sles10
  * Resource action: drbd1:1 monitor on dktest1sles10
  * Pseudo action: ms-drbd1_pre_notify_demote_0
  * Resource action: testip stop on dktest2sles10
  * Resource action: testip monitor on dktest1sles10
  * Resource action: stonith-1 start on dktest1sles10
  * Resource action: drbd1:0 notify on dktest2sles10
  * Pseudo action: ms-drbd1_confirmed-pre_notify_demote_0
  * Pseudo action: ms-drbd1_demote_0
  * Resource action: drbd1:0 demote on dktest2sles10
  * Pseudo action: ms-drbd1_demoted_0
  * Pseudo action: ms-drbd1_post_notify_demoted_0
  * Resource action: drbd1:0 notify on dktest2sles10
  * Pseudo action: ms-drbd1_confirmed-post_notify_demoted_0
  * Pseudo action: ms-drbd1_pre_notify_stop_0
  * Resource action: drbd1:0 notify on dktest2sles10
  * Pseudo action: ms-drbd1_confirmed-pre_notify_stop_0
  * Pseudo action: ms-drbd1_stop_0
  * Resource action: drbd1:0 stop on dktest2sles10
  * Pseudo action: ms-drbd1_stopped_0
  * Pseudo action: ms-drbd1_post_notify_stopped_0
  * Pseudo action: ms-drbd1_confirmed-post_notify_stopped_0
  * Pseudo action: ms-drbd1_pre_notify_start_0
  * Pseudo action: ms-drbd1_confirmed-pre_notify_start_0
  * Pseudo action: ms-drbd1_start_0
  * Resource action: drbd1:1 start on dktest1sles10
  * Pseudo action: ms-drbd1_running_0
  * Pseudo action: ms-drbd1_post_notify_running_0
  * Resource action: drbd1:1 notify on dktest1sles10
  * Pseudo action: ms-drbd1_confirmed-post_notify_running_0
  * Resource action: drbd1:1 monitor=11000 on dktest1sles10

Revised Cluster Status:
  * Node List:
    * Online: [ dktest1sles10 dktest2sles10 ]

  * Full List of Resources:
    * stonith-1 (stonith:dummy): Started dktest1sles10
    * Clone Set: ms-drbd1 [drbd1] (promotable):
      * Unpromoted: [ dktest1sles10 ]
      * Stopped: [ dktest2sles10 ]
    * testip (ocf:heartbeat:IPaddr2): Stopped
diff --git a/cts/scheduler/summary/colo_unpromoted_w_native.summary b/cts/scheduler/summary/colo_unpromoted_w_native.summary
index 477d1c6866..42df383b82 100644
--- a/cts/scheduler/summary/colo_unpromoted_w_native.summary
+++ b/cts/scheduler/summary/colo_unpromoted_w_native.summary
@@ -1,53 +1,53 @@
Current cluster status:
  * Node List:
    * Online: [ node1 node2 ]

  * Full List of Resources:
    * A (ocf:pacemaker:Dummy): Started node1
    * Clone Set: MS_RSC [MS_RSC_NATIVE] (promotable):
      * Promoted: [ node2 ]
      * Unpromoted: [ node1 ]

Transition Summary:
- * Move A ( node1 -> node2 )
+ * Move A ( node1 -> node2 )
  * Demote MS_RSC_NATIVE:0 ( Promoted -> Unpromoted node2 )
  * Promote MS_RSC_NATIVE:1 ( Unpromoted -> Promoted node1 )

Executing Cluster Transition:
  * Resource action: A stop on node1
  * Resource action: MS_RSC_NATIVE:1 cancel=15000 on node1
  * Pseudo action: MS_RSC_pre_notify_demote_0
  * Resource action: A start on node2
  * Resource action: MS_RSC_NATIVE:0 notify on node2
  * Resource action: MS_RSC_NATIVE:1 notify on node1
  * Pseudo action: MS_RSC_confirmed-pre_notify_demote_0
  * Pseudo action: MS_RSC_demote_0
  * Resource action: A monitor=10000 on node2
  * Resource action: MS_RSC_NATIVE:0 demote on node2
  * Pseudo action: MS_RSC_demoted_0
  * Pseudo action: MS_RSC_post_notify_demoted_0
  * Resource action: MS_RSC_NATIVE:0 notify on node2
  * Resource action: MS_RSC_NATIVE:1 notify on node1
  * Pseudo action: MS_RSC_confirmed-post_notify_demoted_0
  * Pseudo action: MS_RSC_pre_notify_promote_0
  * Resource action: MS_RSC_NATIVE:0 notify on node2
  * Resource action: MS_RSC_NATIVE:1 notify on node1
  * Pseudo action: MS_RSC_confirmed-pre_notify_promote_0
  * Pseudo action: MS_RSC_promote_0
  * Resource action: MS_RSC_NATIVE:1 promote on node1
  * Pseudo action: MS_RSC_promoted_0
  * Pseudo action: MS_RSC_post_notify_promoted_0
  * Resource action: MS_RSC_NATIVE:0 notify on node2
  * Resource action: MS_RSC_NATIVE:1 notify on node1
  * Pseudo action: MS_RSC_confirmed-post_notify_promoted_0
  * Resource action: MS_RSC_NATIVE:0 monitor=15000 on node2

Revised Cluster Status:
  * Node List:
    * Online: [ node1 node2 ]

  * Full List of Resources:
    * A (ocf:pacemaker:Dummy): Started node2
    * Clone Set: MS_RSC [MS_RSC_NATIVE] (promotable):
      * Promoted: [ node1 ]
      * Unpromoted: [ node2 ]
diff --git a/cts/scheduler/summary/colocation-influence.summary b/cts/scheduler/summary/colocation-influence.summary
index 7fa4fcf0c2..3ea8b3f545 100644
--- a/cts/scheduler/summary/colocation-influence.summary
+++ b/cts/scheduler/summary/colocation-influence.summary
@@ -1,170 +1,170 @@
Current cluster status:
  * Node List:
    * Online: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]
    * GuestOnline: [ bundle10-0@rhel7-2 bundle10-1@rhel7-3 bundle11-0@rhel7-1 ]

  * Full List of Resources:
    * Fencing (stonith:fence_xvm): Started rhel7-1
    * rsc1a (ocf:pacemaker:Dummy): Started rhel7-2
    * rsc1b (ocf:pacemaker:Dummy): Started rhel7-2
    * rsc2a (ocf:pacemaker:Dummy): Started rhel7-4
    * rsc2b (ocf:pacemaker:Dummy): Started rhel7-4
    * rsc3a (ocf:pacemaker:Dummy): Stopped
    * rsc3b (ocf:pacemaker:Dummy): Stopped
    * rsc4a (ocf:pacemaker:Dummy): Started rhel7-3
    * rsc4b (ocf:pacemaker:Dummy): Started rhel7-3
    * rsc5a (ocf:pacemaker:Dummy): Started rhel7-1
    * Resource Group: group5a:
      * rsc5a1 (ocf:pacemaker:Dummy): Started rhel7-1
      * rsc5a2 (ocf:pacemaker:Dummy): Started rhel7-1
    * Resource Group: group6a:
      * rsc6a1 (ocf:pacemaker:Dummy): Started rhel7-2
      * rsc6a2 (ocf:pacemaker:Dummy): Started rhel7-2
    * rsc6a (ocf:pacemaker:Dummy): Started rhel7-2
    * Resource Group: group7a:
      * rsc7a1 (ocf:pacemaker:Dummy): Started rhel7-3
      * rsc7a2 (ocf:pacemaker:Dummy): Started rhel7-3
    * Clone Set: rsc8a-clone [rsc8a]:
      * Started: [ rhel7-1 rhel7-3 rhel7-4 ]
    * Clone Set: rsc8b-clone [rsc8b]:
      * Started: [ rhel7-1 rhel7-3 rhel7-4 ]
    * rsc9a (ocf:pacemaker:Dummy): Started rhel7-4
    * rsc9b (ocf:pacemaker:Dummy): Started rhel7-4
    * rsc9c (ocf:pacemaker:Dummy): Started rhel7-4
    * rsc10a (ocf:pacemaker:Dummy): Started rhel7-2
    * rsc11a (ocf:pacemaker:Dummy): Started rhel7-1
    * rsc12a (ocf:pacemaker:Dummy): Started rhel7-1
    * rsc12b (ocf:pacemaker:Dummy): Started rhel7-1
    * rsc12c (ocf:pacemaker:Dummy): Started rhel7-1
    * Container bundle set: bundle10 [pcmktest:http]:
      * bundle10-0 (192.168.122.131) (ocf:heartbeat:apache): Started rhel7-2
      * bundle10-1 (192.168.122.132) (ocf:heartbeat:apache): Started rhel7-3
    * Container bundle set: bundle11 [pcmktest:http]:
      * bundle11-0 (192.168.122.134) (ocf:pacemaker:Dummy): Started rhel7-1
      * bundle11-1 (192.168.122.135) (ocf:pacemaker:Dummy): Stopped
    * rsc13a (ocf:pacemaker:Dummy): Started rhel7-3
    * Clone Set: rsc13b-clone [rsc13b] (promotable):
      * Promoted: [ rhel7-3 ]
      * Unpromoted: [ rhel7-1 rhel7-2 rhel7-4 ]
      * Stopped: [ rhel7-5 ]
    * rsc14b (ocf:pacemaker:Dummy): Started rhel7-4
    * Clone Set: rsc14a-clone [rsc14a] (promotable):
      * Promoted: [ rhel7-4 ]
      * Unpromoted: [ rhel7-1 rhel7-2 rhel7-3 ]
      * Stopped: [ rhel7-5 ]

Transition Summary:
  * Move rsc1a ( rhel7-2 -> rhel7-3 )
  * Move rsc1b ( rhel7-2 -> rhel7-3 )
  * Stop rsc2a ( rhel7-4 ) due to node availability
  * Start rsc3a ( rhel7-2 )
  * Start rsc3b ( rhel7-2 )
  * Stop rsc4a ( rhel7-3 ) due to node availability
  * Stop rsc5a ( rhel7-1 ) due to node availability
  * Stop rsc6a1 ( rhel7-2 ) due to node availability
  * Stop rsc6a2 ( rhel7-2 ) due to node availability
  * Stop rsc7a2 ( rhel7-3 ) due to node availability
  * Stop rsc8a:1 ( rhel7-4 ) due to node availability
  * Stop rsc9c ( rhel7-4 ) due to node availability
  * Move rsc10a ( rhel7-2 -> rhel7-3 )
  * Stop rsc12b ( rhel7-1 ) due to node availability
  * Start bundle11-1 ( rhel7-5 ) due to unrunnable bundle11-docker-1 start (blocked)
  * Start bundle11a:1 ( bundle11-1 ) due to unrunnable bundle11-docker-1 start (blocked)
  * Stop rsc13a ( rhel7-3 ) due to node availability
- * Stop rsc14a:1 ( Promoted rhel7-4 ) due to node availability
+ * Stop rsc14a:1 ( Promoted rhel7-4 ) due to node availability

Executing Cluster Transition:
  * Resource action: rsc1a stop on rhel7-2
  * Resource action: rsc1b stop on rhel7-2
  * Resource action: rsc2a stop on rhel7-4
  * Resource action: rsc3a start on rhel7-2
  * Resource action: rsc3b start on rhel7-2
  * Resource action: rsc4a stop on rhel7-3
  * Resource action: rsc5a stop on rhel7-1
  * Pseudo action: group6a_stop_0
  * Resource action: rsc6a2 stop on rhel7-2
  * Pseudo action: group7a_stop_0
  * Resource action: rsc7a2 stop on rhel7-3
  * Pseudo action: rsc8a-clone_stop_0
  * Resource action: rsc9c stop on rhel7-4
  * Resource action: rsc10a stop on rhel7-2
  * Resource action: rsc12b stop on rhel7-1
  * Resource action: rsc13a stop on rhel7-3
  * Pseudo action: rsc14a-clone_demote_0
  * Pseudo action: bundle11_start_0
  * Resource action: rsc1a start on rhel7-3
  * Resource action: rsc1b start on rhel7-3
  * Resource action: rsc3a monitor=10000 on rhel7-2
  * Resource action: rsc3b monitor=10000 on rhel7-2
  * Resource action: rsc6a1 stop on rhel7-2
  * Pseudo action: group7a_stopped_0
  * Resource action: rsc8a stop on rhel7-4
  * Pseudo action: rsc8a-clone_stopped_0
  * Resource action: rsc10a start on rhel7-3
  * Pseudo action: bundle11-clone_start_0
  * Resource action: rsc14a demote on rhel7-4
  * Pseudo action: rsc14a-clone_demoted_0
  * Pseudo action: rsc14a-clone_stop_0
  * Resource action: rsc1a monitor=10000 on rhel7-3
  * Resource action: rsc1b monitor=10000 on rhel7-3
  * Pseudo action: group6a_stopped_0
  * Resource action: rsc10a monitor=10000 on rhel7-3
  * Pseudo action: bundle11-clone_running_0
  * Resource action: rsc14a stop on rhel7-4
  * Pseudo action: rsc14a-clone_stopped_0
  * Pseudo action: bundle11_running_0

Revised Cluster Status:
  * Node List:
    * Online: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]
    * GuestOnline: [ bundle10-0@rhel7-2 bundle10-1@rhel7-3 bundle11-0@rhel7-1 ]

  * Full List of Resources:
    * Fencing (stonith:fence_xvm): Started rhel7-1
    * rsc1a (ocf:pacemaker:Dummy): Started rhel7-3
    * rsc1b (ocf:pacemaker:Dummy): Started rhel7-3
    * rsc2a (ocf:pacemaker:Dummy): Stopped
    * rsc2b (ocf:pacemaker:Dummy): Started rhel7-4
    * rsc3a (ocf:pacemaker:Dummy): Started rhel7-2
    * rsc3b (ocf:pacemaker:Dummy): Started rhel7-2
    * rsc4a (ocf:pacemaker:Dummy): Stopped
    * rsc4b (ocf:pacemaker:Dummy): Started rhel7-3
    * rsc5a (ocf:pacemaker:Dummy): Stopped
    * Resource Group: group5a:
      * rsc5a1 (ocf:pacemaker:Dummy): Started rhel7-1
      * rsc5a2 (ocf:pacemaker:Dummy): Started rhel7-1
    * Resource Group: group6a:
      * rsc6a1 (ocf:pacemaker:Dummy): Stopped
      * rsc6a2 (ocf:pacemaker:Dummy): Stopped
    * rsc6a (ocf:pacemaker:Dummy): Started rhel7-2
    * Resource Group: group7a:
      * rsc7a1 (ocf:pacemaker:Dummy): Started rhel7-3
      * rsc7a2 (ocf:pacemaker:Dummy): Stopped
    * Clone Set: rsc8a-clone [rsc8a]:
      * Started: [ rhel7-1 rhel7-3 ]
      * Stopped: [ rhel7-2 rhel7-4 rhel7-5 ]
    * Clone
Set: rsc8b-clone [rsc8b]: * Started: [ rhel7-1 rhel7-3 rhel7-4 ] * rsc9a (ocf:pacemaker:Dummy): Started rhel7-4 * rsc9b (ocf:pacemaker:Dummy): Started rhel7-4 * rsc9c (ocf:pacemaker:Dummy): Stopped * rsc10a (ocf:pacemaker:Dummy): Started rhel7-3 * rsc11a (ocf:pacemaker:Dummy): Started rhel7-1 * rsc12a (ocf:pacemaker:Dummy): Started rhel7-1 * rsc12b (ocf:pacemaker:Dummy): Stopped * rsc12c (ocf:pacemaker:Dummy): Started rhel7-1 * Container bundle set: bundle10 [pcmktest:http]: * bundle10-0 (192.168.122.131) (ocf:heartbeat:apache): Started rhel7-2 * bundle10-1 (192.168.122.132) (ocf:heartbeat:apache): Started rhel7-3 * Container bundle set: bundle11 [pcmktest:http]: * bundle11-0 (192.168.122.134) (ocf:pacemaker:Dummy): Started rhel7-1 * bundle11-1 (192.168.122.135) (ocf:pacemaker:Dummy): Stopped * rsc13a (ocf:pacemaker:Dummy): Stopped * Clone Set: rsc13b-clone [rsc13b] (promotable): * Promoted: [ rhel7-3 ] * Unpromoted: [ rhel7-1 rhel7-2 rhel7-4 ] * Stopped: [ rhel7-5 ] * rsc14b (ocf:pacemaker:Dummy): Started rhel7-4 * Clone Set: rsc14a-clone [rsc14a] (promotable): * Unpromoted: [ rhel7-1 rhel7-2 rhel7-3 ] * Stopped: [ rhel7-4 rhel7-5 ] diff --git a/cts/scheduler/summary/colocation_constraint_stops_unpromoted.summary b/cts/scheduler/summary/colocation_constraint_stops_unpromoted.summary index 4b16656a2c..00d33a0307 100644 --- a/cts/scheduler/summary/colocation_constraint_stops_unpromoted.summary +++ b/cts/scheduler/summary/colocation_constraint_stops_unpromoted.summary @@ -1,36 +1,36 @@ 1 of 2 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: * Node List: * Online: [ fc16-builder ] * OFFLINE: [ fc16-builder2 ] * Full List of Resources: * Clone Set: MASTER_RSC_A [NATIVE_RSC_A] (promotable): * Unpromoted: [ fc16-builder ] * NATIVE_RSC_B (ocf:pacemaker:Dummy): Started fc16-builder (disabled) Transition Summary: * Stop NATIVE_RSC_A:0 ( Unpromoted fc16-builder ) due to node availability - * Stop NATIVE_RSC_B ( fc16-builder 
) due to node availability + * Stop NATIVE_RSC_B ( fc16-builder ) due to node availability Executing Cluster Transition: * Pseudo action: MASTER_RSC_A_pre_notify_stop_0 * Resource action: NATIVE_RSC_B stop on fc16-builder * Resource action: NATIVE_RSC_A:0 notify on fc16-builder * Pseudo action: MASTER_RSC_A_confirmed-pre_notify_stop_0 * Pseudo action: MASTER_RSC_A_stop_0 * Resource action: NATIVE_RSC_A:0 stop on fc16-builder * Pseudo action: MASTER_RSC_A_stopped_0 * Pseudo action: MASTER_RSC_A_post_notify_stopped_0 * Pseudo action: MASTER_RSC_A_confirmed-post_notify_stopped_0 Revised Cluster Status: * Node List: * Online: [ fc16-builder ] * OFFLINE: [ fc16-builder2 ] * Full List of Resources: * Clone Set: MASTER_RSC_A [NATIVE_RSC_A] (promotable): * Stopped: [ fc16-builder fc16-builder2 ] * NATIVE_RSC_B (ocf:pacemaker:Dummy): Stopped (disabled) diff --git a/cts/scheduler/summary/dc-fence-ordering.summary b/cts/scheduler/summary/dc-fence-ordering.summary index 305ebd5c19..0261cad597 100644 --- a/cts/scheduler/summary/dc-fence-ordering.summary +++ b/cts/scheduler/summary/dc-fence-ordering.summary @@ -1,82 +1,82 @@ Using the original execution date of: 2018-11-28 18:37:16Z Current cluster status: * Node List: * Node rhel7-1: UNCLEAN (online) * Online: [ rhel7-2 rhel7-4 rhel7-5 ] * OFFLINE: [ rhel7-3 ] * Full List of Resources: * Fencing (stonith:fence_xvm): Stopped * FencingPass (stonith:fence_dummy): Stopped * FencingFail (stonith:fence_dummy): Stopped * rsc_rhel7-1 (ocf:heartbeat:IPaddr2): Stopped * rsc_rhel7-2 (ocf:heartbeat:IPaddr2): Stopped * rsc_rhel7-3 (ocf:heartbeat:IPaddr2): Stopped * rsc_rhel7-4 (ocf:heartbeat:IPaddr2): Stopped * rsc_rhel7-5 (ocf:heartbeat:IPaddr2): Stopped * migrator (ocf:pacemaker:Dummy): Stopped * Clone Set: Connectivity [ping-1]: * Stopped: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ] * Clone Set: promotable-1 [stateful-1] (promotable): * Promoted: [ rhel7-1 ] * Unpromoted: [ rhel7-2 rhel7-4 rhel7-5 ] * Stopped: [ rhel7-3 ] * Resource 
Group: group-1: * r192.168.122.207 (ocf:heartbeat:IPaddr2): Started rhel7-1 * petulant (service:pacemaker-cts-dummyd@10): FAILED rhel7-1 * r192.168.122.208 (ocf:heartbeat:IPaddr2): Stopped * lsb-dummy (lsb:LSBDummy): Stopped Transition Summary: * Fence (reboot) rhel7-1 'petulant failed there' - * Stop stateful-1:0 ( Unpromoted rhel7-5 ) due to node availability - * Stop stateful-1:1 ( Promoted rhel7-1 ) due to node availability - * Stop stateful-1:2 ( Unpromoted rhel7-2 ) due to node availability - * Stop stateful-1:3 ( Unpromoted rhel7-4 ) due to node availability - * Stop r192.168.122.207 ( rhel7-1 ) due to node availability - * Stop petulant ( rhel7-1 ) due to node availability + * Stop stateful-1:0 ( Unpromoted rhel7-5 ) due to node availability + * Stop stateful-1:1 ( Promoted rhel7-1 ) due to node availability + * Stop stateful-1:2 ( Unpromoted rhel7-2 ) due to node availability + * Stop stateful-1:3 ( Unpromoted rhel7-4 ) due to node availability + * Stop r192.168.122.207 ( rhel7-1 ) due to node availability + * Stop petulant ( rhel7-1 ) due to node availability Executing Cluster Transition: * Fencing rhel7-1 (reboot) * Pseudo action: group-1_stop_0 * Pseudo action: petulant_stop_0 * Pseudo action: r192.168.122.207_stop_0 * Pseudo action: group-1_stopped_0 * Pseudo action: promotable-1_demote_0 * Pseudo action: stateful-1_demote_0 * Pseudo action: promotable-1_demoted_0 * Pseudo action: promotable-1_stop_0 * Resource action: stateful-1 stop on rhel7-5 * Pseudo action: stateful-1_stop_0 * Resource action: stateful-1 stop on rhel7-2 * Resource action: stateful-1 stop on rhel7-4 * Pseudo action: promotable-1_stopped_0 * Cluster action: do_shutdown on rhel7-5 * Cluster action: do_shutdown on rhel7-4 * Cluster action: do_shutdown on rhel7-2 Using the original execution date of: 2018-11-28 18:37:16Z Revised Cluster Status: * Node List: * Online: [ rhel7-2 rhel7-4 rhel7-5 ] * OFFLINE: [ rhel7-1 rhel7-3 ] * Full List of Resources: * Fencing (stonith:fence_xvm): 
Stopped * FencingPass (stonith:fence_dummy): Stopped * FencingFail (stonith:fence_dummy): Stopped * rsc_rhel7-1 (ocf:heartbeat:IPaddr2): Stopped * rsc_rhel7-2 (ocf:heartbeat:IPaddr2): Stopped * rsc_rhel7-3 (ocf:heartbeat:IPaddr2): Stopped * rsc_rhel7-4 (ocf:heartbeat:IPaddr2): Stopped * rsc_rhel7-5 (ocf:heartbeat:IPaddr2): Stopped * migrator (ocf:pacemaker:Dummy): Stopped * Clone Set: Connectivity [ping-1]: * Stopped: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ] * Clone Set: promotable-1 [stateful-1] (promotable): * Stopped: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ] * Resource Group: group-1: * r192.168.122.207 (ocf:heartbeat:IPaddr2): Stopped * petulant (service:pacemaker-cts-dummyd@10): Stopped * r192.168.122.208 (ocf:heartbeat:IPaddr2): Stopped * lsb-dummy (lsb:LSBDummy): Stopped diff --git a/cts/scheduler/summary/group-dependents.summary b/cts/scheduler/summary/group-dependents.summary index ae880fbb4c..3365255547 100644 --- a/cts/scheduler/summary/group-dependents.summary +++ b/cts/scheduler/summary/group-dependents.summary @@ -1,196 +1,196 @@ Current cluster status: * Node List: * Online: [ asttest1 asttest2 ] * Full List of Resources: * Resource Group: voip: * mysqld (lsb:mysql): Started asttest1 * dahdi (lsb:dahdi): Started asttest1 * fonulator (lsb:fonulator): Stopped * asterisk (lsb:asterisk-11.0.1): Stopped * iax2_mon (lsb:iax2_mon): Stopped * httpd (lsb:apache2): Stopped * tftp (lsb:tftp-srce): Stopped * Resource Group: ip_voip_routes: * ip_voip_route_test1 (ocf:heartbeat:Route): Started asttest1 * ip_voip_route_test2 (ocf:heartbeat:Route): Started asttest1 * Resource Group: ip_voip_addresses_p: * ip_voip_vlan850 (ocf:heartbeat:IPaddr2): Started asttest1 * ip_voip_vlan998 (ocf:heartbeat:IPaddr2): Started asttest1 * ip_voip_vlan851 (ocf:heartbeat:IPaddr2): Started asttest1 * ip_voip_vlan852 (ocf:heartbeat:IPaddr2): Started asttest1 * ip_voip_vlan853 (ocf:heartbeat:IPaddr2): Started asttest1 * ip_voip_vlan854 (ocf:heartbeat:IPaddr2): Started asttest1 * 
ip_voip_vlan855 (ocf:heartbeat:IPaddr2): Started asttest1 * ip_voip_vlan856 (ocf:heartbeat:IPaddr2): Started asttest1 * Clone Set: cl_route [ip_voip_route_default]: * Started: [ asttest1 asttest2 ] * fs_drbd (ocf:heartbeat:Filesystem): Started asttest1 * Clone Set: ms_drbd [drbd] (promotable): * Promoted: [ asttest1 ] * Unpromoted: [ asttest2 ] Transition Summary: - * Migrate mysqld ( asttest1 -> asttest2 ) - * Migrate dahdi ( asttest1 -> asttest2 ) - * Start fonulator ( asttest2 ) - * Start asterisk ( asttest2 ) - * Start iax2_mon ( asttest2 ) - * Start httpd ( asttest2 ) - * Start tftp ( asttest2 ) - * Migrate ip_voip_route_test1 ( asttest1 -> asttest2 ) - * Migrate ip_voip_route_test2 ( asttest1 -> asttest2 ) - * Migrate ip_voip_vlan850 ( asttest1 -> asttest2 ) - * Migrate ip_voip_vlan998 ( asttest1 -> asttest2 ) - * Migrate ip_voip_vlan851 ( asttest1 -> asttest2 ) - * Migrate ip_voip_vlan852 ( asttest1 -> asttest2 ) - * Migrate ip_voip_vlan853 ( asttest1 -> asttest2 ) - * Migrate ip_voip_vlan854 ( asttest1 -> asttest2 ) - * Migrate ip_voip_vlan855 ( asttest1 -> asttest2 ) - * Migrate ip_voip_vlan856 ( asttest1 -> asttest2 ) - * Move fs_drbd ( asttest1 -> asttest2 ) + * Migrate mysqld ( asttest1 -> asttest2 ) + * Migrate dahdi ( asttest1 -> asttest2 ) + * Start fonulator ( asttest2 ) + * Start asterisk ( asttest2 ) + * Start iax2_mon ( asttest2 ) + * Start httpd ( asttest2 ) + * Start tftp ( asttest2 ) + * Migrate ip_voip_route_test1 ( asttest1 -> asttest2 ) + * Migrate ip_voip_route_test2 ( asttest1 -> asttest2 ) + * Migrate ip_voip_vlan850 ( asttest1 -> asttest2 ) + * Migrate ip_voip_vlan998 ( asttest1 -> asttest2 ) + * Migrate ip_voip_vlan851 ( asttest1 -> asttest2 ) + * Migrate ip_voip_vlan852 ( asttest1 -> asttest2 ) + * Migrate ip_voip_vlan853 ( asttest1 -> asttest2 ) + * Migrate ip_voip_vlan854 ( asttest1 -> asttest2 ) + * Migrate ip_voip_vlan855 ( asttest1 -> asttest2 ) + * Migrate ip_voip_vlan856 ( asttest1 -> asttest2 ) + * Move fs_drbd ( asttest1 -> 
asttest2 ) * Demote drbd:0 ( Promoted -> Unpromoted asttest1 ) * Promote drbd:1 ( Unpromoted -> Promoted asttest2 ) Executing Cluster Transition: * Pseudo action: voip_stop_0 * Resource action: mysqld migrate_to on asttest1 * Resource action: ip_voip_route_test1 migrate_to on asttest1 * Resource action: ip_voip_route_test2 migrate_to on asttest1 * Resource action: ip_voip_vlan850 migrate_to on asttest1 * Resource action: ip_voip_vlan998 migrate_to on asttest1 * Resource action: ip_voip_vlan851 migrate_to on asttest1 * Resource action: ip_voip_vlan852 migrate_to on asttest1 * Resource action: ip_voip_vlan853 migrate_to on asttest1 * Resource action: ip_voip_vlan854 migrate_to on asttest1 * Resource action: ip_voip_vlan855 migrate_to on asttest1 * Resource action: ip_voip_vlan856 migrate_to on asttest1 * Resource action: drbd:1 cancel=31000 on asttest2 * Pseudo action: ms_drbd_pre_notify_demote_0 * Resource action: mysqld migrate_from on asttest2 * Resource action: dahdi migrate_to on asttest1 * Resource action: ip_voip_route_test1 migrate_from on asttest2 * Resource action: ip_voip_route_test2 migrate_from on asttest2 * Resource action: ip_voip_vlan850 migrate_from on asttest2 * Resource action: ip_voip_vlan998 migrate_from on asttest2 * Resource action: ip_voip_vlan851 migrate_from on asttest2 * Resource action: ip_voip_vlan852 migrate_from on asttest2 * Resource action: ip_voip_vlan853 migrate_from on asttest2 * Resource action: ip_voip_vlan854 migrate_from on asttest2 * Resource action: ip_voip_vlan855 migrate_from on asttest2 * Resource action: ip_voip_vlan856 migrate_from on asttest2 * Resource action: drbd:0 notify on asttest1 * Resource action: drbd:1 notify on asttest2 * Pseudo action: ms_drbd_confirmed-pre_notify_demote_0 * Resource action: dahdi migrate_from on asttest2 * Resource action: dahdi stop on asttest1 * Resource action: mysqld stop on asttest1 * Pseudo action: voip_stopped_0 * Pseudo action: ip_voip_routes_stop_0 * Resource action: 
ip_voip_route_test1 stop on asttest1 * Resource action: ip_voip_route_test2 stop on asttest1 * Pseudo action: ip_voip_routes_stopped_0 * Pseudo action: ip_voip_addresses_p_stop_0 * Resource action: ip_voip_vlan850 stop on asttest1 * Resource action: ip_voip_vlan998 stop on asttest1 * Resource action: ip_voip_vlan851 stop on asttest1 * Resource action: ip_voip_vlan852 stop on asttest1 * Resource action: ip_voip_vlan853 stop on asttest1 * Resource action: ip_voip_vlan854 stop on asttest1 * Resource action: ip_voip_vlan855 stop on asttest1 * Resource action: ip_voip_vlan856 stop on asttest1 * Pseudo action: ip_voip_addresses_p_stopped_0 * Resource action: fs_drbd stop on asttest1 * Pseudo action: ms_drbd_demote_0 * Resource action: drbd:0 demote on asttest1 * Pseudo action: ms_drbd_demoted_0 * Pseudo action: ms_drbd_post_notify_demoted_0 * Resource action: drbd:0 notify on asttest1 * Resource action: drbd:1 notify on asttest2 * Pseudo action: ms_drbd_confirmed-post_notify_demoted_0 * Pseudo action: ms_drbd_pre_notify_promote_0 * Resource action: drbd:0 notify on asttest1 * Resource action: drbd:1 notify on asttest2 * Pseudo action: ms_drbd_confirmed-pre_notify_promote_0 * Pseudo action: ms_drbd_promote_0 * Resource action: drbd:1 promote on asttest2 * Pseudo action: ms_drbd_promoted_0 * Pseudo action: ms_drbd_post_notify_promoted_0 * Resource action: drbd:0 notify on asttest1 * Resource action: drbd:1 notify on asttest2 * Pseudo action: ms_drbd_confirmed-post_notify_promoted_0 * Resource action: fs_drbd start on asttest2 * Resource action: drbd:0 monitor=31000 on asttest1 * Pseudo action: ip_voip_addresses_p_start_0 * Pseudo action: ip_voip_vlan850_start_0 * Pseudo action: ip_voip_vlan998_start_0 * Pseudo action: ip_voip_vlan851_start_0 * Pseudo action: ip_voip_vlan852_start_0 * Pseudo action: ip_voip_vlan853_start_0 * Pseudo action: ip_voip_vlan854_start_0 * Pseudo action: ip_voip_vlan855_start_0 * Pseudo action: ip_voip_vlan856_start_0 * Resource action: fs_drbd 
monitor=1000 on asttest2 * Pseudo action: ip_voip_addresses_p_running_0 * Resource action: ip_voip_vlan850 monitor=1000 on asttest2 * Resource action: ip_voip_vlan998 monitor=1000 on asttest2 * Resource action: ip_voip_vlan851 monitor=1000 on asttest2 * Resource action: ip_voip_vlan852 monitor=1000 on asttest2 * Resource action: ip_voip_vlan853 monitor=1000 on asttest2 * Resource action: ip_voip_vlan854 monitor=1000 on asttest2 * Resource action: ip_voip_vlan855 monitor=1000 on asttest2 * Resource action: ip_voip_vlan856 monitor=1000 on asttest2 * Pseudo action: ip_voip_routes_start_0 * Pseudo action: ip_voip_route_test1_start_0 * Pseudo action: ip_voip_route_test2_start_0 * Pseudo action: ip_voip_routes_running_0 * Resource action: ip_voip_route_test1 monitor=1000 on asttest2 * Resource action: ip_voip_route_test2 monitor=1000 on asttest2 * Pseudo action: voip_start_0 * Pseudo action: mysqld_start_0 * Pseudo action: dahdi_start_0 * Resource action: fonulator start on asttest2 * Resource action: asterisk start on asttest2 * Resource action: iax2_mon start on asttest2 * Resource action: httpd start on asttest2 * Resource action: tftp start on asttest2 * Pseudo action: voip_running_0 * Resource action: mysqld monitor=1000 on asttest2 * Resource action: dahdi monitor=1000 on asttest2 * Resource action: fonulator monitor=1000 on asttest2 * Resource action: asterisk monitor=1000 on asttest2 * Resource action: iax2_mon monitor=60000 on asttest2 * Resource action: httpd monitor=1000 on asttest2 * Resource action: tftp monitor=60000 on asttest2 Revised Cluster Status: * Node List: * Online: [ asttest1 asttest2 ] * Full List of Resources: * Resource Group: voip: * mysqld (lsb:mysql): Started asttest2 * dahdi (lsb:dahdi): Started asttest2 * fonulator (lsb:fonulator): Started asttest2 * asterisk (lsb:asterisk-11.0.1): Started asttest2 * iax2_mon (lsb:iax2_mon): Started asttest2 * httpd (lsb:apache2): Started asttest2 * tftp (lsb:tftp-srce): Started asttest2 * Resource Group: 
ip_voip_routes: * ip_voip_route_test1 (ocf:heartbeat:Route): Started asttest2 * ip_voip_route_test2 (ocf:heartbeat:Route): Started asttest2 * Resource Group: ip_voip_addresses_p: * ip_voip_vlan850 (ocf:heartbeat:IPaddr2): Started asttest2 * ip_voip_vlan998 (ocf:heartbeat:IPaddr2): Started asttest2 * ip_voip_vlan851 (ocf:heartbeat:IPaddr2): Started asttest2 * ip_voip_vlan852 (ocf:heartbeat:IPaddr2): Started asttest2 * ip_voip_vlan853 (ocf:heartbeat:IPaddr2): Started asttest2 * ip_voip_vlan854 (ocf:heartbeat:IPaddr2): Started asttest2 * ip_voip_vlan855 (ocf:heartbeat:IPaddr2): Started asttest2 * ip_voip_vlan856 (ocf:heartbeat:IPaddr2): Started asttest2 * Clone Set: cl_route [ip_voip_route_default]: * Started: [ asttest1 asttest2 ] * fs_drbd (ocf:heartbeat:Filesystem): Started asttest2 * Clone Set: ms_drbd [drbd] (promotable): * Promoted: [ asttest2 ] * Unpromoted: [ asttest1 ] diff --git a/cts/scheduler/summary/guest-host-not-fenceable.summary b/cts/scheduler/summary/guest-host-not-fenceable.summary index 69b456a22f..e17d21f0f2 100644 --- a/cts/scheduler/summary/guest-host-not-fenceable.summary +++ b/cts/scheduler/summary/guest-host-not-fenceable.summary @@ -1,91 +1,91 @@ Using the original execution date of: 2019-08-26 04:52:42Z Current cluster status: * Node List: * Node node2: UNCLEAN (offline) * Node node3: UNCLEAN (offline) * Online: [ node1 ] * GuestOnline: [ galera-bundle-0@node1 rabbitmq-bundle-0@node1 ] * Full List of Resources: * Container bundle set: rabbitmq-bundle [192.168.122.139:8787/rhosp13/openstack-rabbitmq:pcmklatest]: * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Started node1 * rabbitmq-bundle-1 (ocf:heartbeat:rabbitmq-cluster): FAILED node2 (UNCLEAN) * rabbitmq-bundle-2 (ocf:heartbeat:rabbitmq-cluster): FAILED node3 (UNCLEAN) * Container bundle set: galera-bundle [192.168.122.139:8787/rhosp13/openstack-mariadb:pcmklatest]: * galera-bundle-0 (ocf:heartbeat:galera): FAILED Promoted node1 * galera-bundle-1 (ocf:heartbeat:galera): FAILED 
Promoted node2 (UNCLEAN) * galera-bundle-2 (ocf:heartbeat:galera): FAILED Promoted node3 (UNCLEAN) * stonith-fence_ipmilan-node1 (stonith:fence_ipmilan): Started node2 (UNCLEAN) * stonith-fence_ipmilan-node3 (stonith:fence_ipmilan): Started node2 (UNCLEAN) * stonith-fence_ipmilan-node2 (stonith:fence_ipmilan): Started node3 (UNCLEAN) Transition Summary: - * Stop rabbitmq-bundle-docker-0 ( node1 ) due to no quorum - * Stop rabbitmq-bundle-0 ( node1 ) due to no quorum - * Stop rabbitmq:0 ( rabbitmq-bundle-0 ) due to no quorum - * Stop rabbitmq-bundle-docker-1 ( node2 ) due to node availability (blocked) - * Stop rabbitmq-bundle-1 ( node2 ) due to no quorum (blocked) - * Stop rabbitmq:1 ( rabbitmq-bundle-1 ) due to no quorum (blocked) - * Stop rabbitmq-bundle-docker-2 ( node3 ) due to node availability (blocked) - * Stop rabbitmq-bundle-2 ( node3 ) due to no quorum (blocked) - * Stop rabbitmq:2 ( rabbitmq-bundle-2 ) due to no quorum (blocked) - * Stop galera-bundle-docker-0 ( node1 ) due to no quorum - * Stop galera-bundle-0 ( node1 ) due to no quorum + * Stop rabbitmq-bundle-docker-0 ( node1 ) due to no quorum + * Stop rabbitmq-bundle-0 ( node1 ) due to no quorum + * Stop rabbitmq:0 ( rabbitmq-bundle-0 ) due to no quorum + * Stop rabbitmq-bundle-docker-1 ( node2 ) due to node availability (blocked) + * Stop rabbitmq-bundle-1 ( node2 ) due to no quorum (blocked) + * Stop rabbitmq:1 ( rabbitmq-bundle-1 ) due to no quorum (blocked) + * Stop rabbitmq-bundle-docker-2 ( node3 ) due to node availability (blocked) + * Stop rabbitmq-bundle-2 ( node3 ) due to no quorum (blocked) + * Stop rabbitmq:2 ( rabbitmq-bundle-2 ) due to no quorum (blocked) + * Stop galera-bundle-docker-0 ( node1 ) due to no quorum + * Stop galera-bundle-0 ( node1 ) due to no quorum * Stop galera:0 ( Promoted galera-bundle-0 ) due to no quorum - * Stop galera-bundle-docker-1 ( node2 ) due to node availability (blocked) - * Stop galera-bundle-1 ( node2 ) due to no quorum (blocked) + * Stop 
galera-bundle-docker-1 ( node2 ) due to node availability (blocked) + * Stop galera-bundle-1 ( node2 ) due to no quorum (blocked) * Stop galera:1 ( Promoted galera-bundle-1 ) due to no quorum (blocked) - * Stop galera-bundle-docker-2 ( node3 ) due to node availability (blocked) - * Stop galera-bundle-2 ( node3 ) due to no quorum (blocked) + * Stop galera-bundle-docker-2 ( node3 ) due to node availability (blocked) + * Stop galera-bundle-2 ( node3 ) due to no quorum (blocked) * Stop galera:2 ( Promoted galera-bundle-2 ) due to no quorum (blocked) - * Stop stonith-fence_ipmilan-node1 ( node2 ) due to node availability (blocked) - * Stop stonith-fence_ipmilan-node3 ( node2 ) due to no quorum (blocked) - * Stop stonith-fence_ipmilan-node2 ( node3 ) due to no quorum (blocked) + * Stop stonith-fence_ipmilan-node1 ( node2 ) due to node availability (blocked) + * Stop stonith-fence_ipmilan-node3 ( node2 ) due to no quorum (blocked) + * Stop stonith-fence_ipmilan-node2 ( node3 ) due to no quorum (blocked) Executing Cluster Transition: * Pseudo action: rabbitmq-bundle-clone_pre_notify_stop_0 * Pseudo action: galera-bundle_demote_0 * Pseudo action: rabbitmq-bundle_stop_0 * Resource action: rabbitmq notify on rabbitmq-bundle-0 * Pseudo action: rabbitmq-bundle-clone_confirmed-pre_notify_stop_0 * Pseudo action: rabbitmq-bundle-clone_stop_0 * Pseudo action: galera-bundle-master_demote_0 * Resource action: rabbitmq stop on rabbitmq-bundle-0 * Pseudo action: rabbitmq-bundle-clone_stopped_0 * Resource action: rabbitmq-bundle-0 stop on node1 * Resource action: rabbitmq-bundle-0 cancel=60000 on node1 * Resource action: galera demote on galera-bundle-0 * Pseudo action: galera-bundle-master_demoted_0 * Pseudo action: galera-bundle_demoted_0 * Pseudo action: galera-bundle_stop_0 * Pseudo action: rabbitmq-bundle-clone_post_notify_stopped_0 * Resource action: rabbitmq-bundle-docker-0 stop on node1 * Pseudo action: galera-bundle-master_stop_0 * Pseudo action: 
rabbitmq-bundle-clone_confirmed-post_notify_stopped_0 * Resource action: galera stop on galera-bundle-0 * Pseudo action: galera-bundle-master_stopped_0 * Resource action: galera-bundle-0 stop on node1 * Resource action: galera-bundle-0 cancel=60000 on node1 * Pseudo action: rabbitmq-bundle_stopped_0 * Resource action: galera-bundle-docker-0 stop on node1 * Pseudo action: galera-bundle_stopped_0 Using the original execution date of: 2019-08-26 04:52:42Z Revised Cluster Status: * Node List: * Node node2: UNCLEAN (offline) * Node node3: UNCLEAN (offline) * Online: [ node1 ] * Full List of Resources: * Container bundle set: rabbitmq-bundle [192.168.122.139:8787/rhosp13/openstack-rabbitmq:pcmklatest]: * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Stopped * rabbitmq-bundle-1 (ocf:heartbeat:rabbitmq-cluster): FAILED node2 (UNCLEAN) * rabbitmq-bundle-2 (ocf:heartbeat:rabbitmq-cluster): FAILED node3 (UNCLEAN) * Container bundle set: galera-bundle [192.168.122.139:8787/rhosp13/openstack-mariadb:pcmklatest]: * galera-bundle-0 (ocf:heartbeat:galera): Stopped * galera-bundle-1 (ocf:heartbeat:galera): FAILED Promoted node2 (UNCLEAN) * galera-bundle-2 (ocf:heartbeat:galera): FAILED Promoted node3 (UNCLEAN) * stonith-fence_ipmilan-node1 (stonith:fence_ipmilan): Started node2 (UNCLEAN) * stonith-fence_ipmilan-node3 (stonith:fence_ipmilan): Started node2 (UNCLEAN) * stonith-fence_ipmilan-node2 (stonith:fence_ipmilan): Started node3 (UNCLEAN) diff --git a/cts/scheduler/summary/guest-node-cleanup.summary b/cts/scheduler/summary/guest-node-cleanup.summary index 4a7ac74a18..4298619820 100644 --- a/cts/scheduler/summary/guest-node-cleanup.summary +++ b/cts/scheduler/summary/guest-node-cleanup.summary @@ -1,55 +1,55 @@ Using the original execution date of: 2018-10-15 16:02:04Z Current cluster status: * Node List: * Online: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ] * GuestOnline: [ lxc2@rhel7-1 ] * Full List of Resources: * Fencing (stonith:fence_xvm): Started rhel7-2 * 
FencingPass (stonith:fence_dummy): Started rhel7-3 * container1 (ocf:heartbeat:VirtualDomain): FAILED * container2 (ocf:heartbeat:VirtualDomain): Started rhel7-1 * Clone Set: lxc-ms-master [lxc-ms] (promotable): * Unpromoted: [ lxc2 ] * Stopped: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ] Transition Summary: * Fence (reboot) lxc1 (resource: container1) 'guest is unclean' - * Start container1 ( rhel7-1 ) + * Start container1 ( rhel7-1 ) * Recover lxc-ms:1 ( Promoted lxc1 ) - * Restart lxc1 ( rhel7-1 ) due to required container1 start + * Restart lxc1 ( rhel7-1 ) due to required container1 start Executing Cluster Transition: * Resource action: container1 monitor on rhel7-1 * Pseudo action: lxc-ms-master_demote_0 * Resource action: lxc1 stop on rhel7-1 * Pseudo action: stonith-lxc1-reboot on lxc1 * Resource action: container1 start on rhel7-1 * Pseudo action: lxc-ms_demote_0 * Pseudo action: lxc-ms-master_demoted_0 * Pseudo action: lxc-ms-master_stop_0 * Resource action: lxc1 start on rhel7-1 * Resource action: lxc1 monitor=30000 on rhel7-1 * Pseudo action: lxc-ms_stop_0 * Pseudo action: lxc-ms-master_stopped_0 * Pseudo action: lxc-ms-master_start_0 * Resource action: lxc-ms start on lxc1 * Pseudo action: lxc-ms-master_running_0 * Pseudo action: lxc-ms-master_promote_0 * Resource action: lxc-ms promote on lxc1 * Pseudo action: lxc-ms-master_promoted_0 Using the original execution date of: 2018-10-15 16:02:04Z Revised Cluster Status: * Node List: * Online: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ] * GuestOnline: [ lxc1@rhel7-1 lxc2@rhel7-1 ] * Full List of Resources: * Fencing (stonith:fence_xvm): Started rhel7-2 * FencingPass (stonith:fence_dummy): Started rhel7-3 * container1 (ocf:heartbeat:VirtualDomain): Started rhel7-1 * container2 (ocf:heartbeat:VirtualDomain): Started rhel7-1 * Clone Set: lxc-ms-master [lxc-ms] (promotable): * Promoted: [ lxc1 ] * Unpromoted: [ lxc2 ] diff --git a/cts/scheduler/summary/guest-node-host-dies.summary 
b/cts/scheduler/summary/guest-node-host-dies.summary index f4509b9029..b0286b2846 100644 --- a/cts/scheduler/summary/guest-node-host-dies.summary +++ b/cts/scheduler/summary/guest-node-host-dies.summary @@ -1,82 +1,82 @@ Current cluster status: * Node List: * Node rhel7-1: UNCLEAN (offline) * Online: [ rhel7-2 rhel7-3 rhel7-4 rhel7-5 ] * Full List of Resources: * Fencing (stonith:fence_xvm): Started rhel7-4 * rsc_rhel7-1 (ocf:heartbeat:IPaddr2): Started rhel7-1 (UNCLEAN) * container1 (ocf:heartbeat:VirtualDomain): FAILED rhel7-1 (UNCLEAN) * container2 (ocf:heartbeat:VirtualDomain): FAILED rhel7-1 (UNCLEAN) * Clone Set: lxc-ms-master [lxc-ms] (promotable): * Stopped: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ] Transition Summary: * Fence (reboot) lxc2 (resource: container2) 'guest is unclean' * Fence (reboot) lxc1 (resource: container1) 'guest is unclean' * Fence (reboot) rhel7-1 'rsc_rhel7-1 is thought to be active there' * Restart Fencing ( rhel7-4 ) due to resource definition change * Move rsc_rhel7-1 ( rhel7-1 -> rhel7-5 ) * Recover container1 ( rhel7-1 -> rhel7-2 ) * Recover container2 ( rhel7-1 -> rhel7-3 ) - * Recover lxc-ms:0 ( Promoted lxc1 ) - * Recover lxc-ms:1 ( Unpromoted lxc2 ) + * Recover lxc-ms:0 ( Promoted lxc1 ) + * Recover lxc-ms:1 ( Unpromoted lxc2 ) * Move lxc1 ( rhel7-1 -> rhel7-2 ) * Move lxc2 ( rhel7-1 -> rhel7-3 ) Executing Cluster Transition: * Resource action: Fencing stop on rhel7-4 * Pseudo action: lxc-ms-master_demote_0 * Pseudo action: lxc1_stop_0 * Resource action: lxc1 monitor on rhel7-5 * Resource action: lxc1 monitor on rhel7-4 * Resource action: lxc1 monitor on rhel7-3 * Pseudo action: lxc2_stop_0 * Resource action: lxc2 monitor on rhel7-5 * Resource action: lxc2 monitor on rhel7-4 * Resource action: lxc2 monitor on rhel7-2 * Fencing rhel7-1 (reboot) * Pseudo action: rsc_rhel7-1_stop_0 * Pseudo action: container1_stop_0 * Pseudo action: container2_stop_0 * Pseudo action: stonith-lxc2-reboot on lxc2 * Pseudo action: 
stonith-lxc1-reboot on lxc1 * Resource action: Fencing start on rhel7-4 * Resource action: Fencing monitor=120000 on rhel7-4 * Resource action: rsc_rhel7-1 start on rhel7-5 * Resource action: container1 start on rhel7-2 * Resource action: container2 start on rhel7-3 * Pseudo action: lxc-ms_demote_0 * Pseudo action: lxc-ms-master_demoted_0 * Pseudo action: lxc-ms-master_stop_0 * Resource action: lxc1 start on rhel7-2 * Resource action: lxc2 start on rhel7-3 * Resource action: rsc_rhel7-1 monitor=5000 on rhel7-5 * Pseudo action: lxc-ms_stop_0 * Pseudo action: lxc-ms_stop_0 * Pseudo action: lxc-ms-master_stopped_0 * Pseudo action: lxc-ms-master_start_0 * Resource action: lxc1 monitor=30000 on rhel7-2 * Resource action: lxc2 monitor=30000 on rhel7-3 * Resource action: lxc-ms start on lxc1 * Resource action: lxc-ms start on lxc2 * Pseudo action: lxc-ms-master_running_0 * Resource action: lxc-ms monitor=10000 on lxc2 * Pseudo action: lxc-ms-master_promote_0 * Resource action: lxc-ms promote on lxc1 * Pseudo action: lxc-ms-master_promoted_0 Revised Cluster Status: * Node List: * Online: [ rhel7-2 rhel7-3 rhel7-4 rhel7-5 ] * OFFLINE: [ rhel7-1 ] * GuestOnline: [ lxc1@rhel7-2 lxc2@rhel7-3 ] * Full List of Resources: * Fencing (stonith:fence_xvm): Started rhel7-4 * rsc_rhel7-1 (ocf:heartbeat:IPaddr2): Started rhel7-5 * container1 (ocf:heartbeat:VirtualDomain): Started rhel7-2 * container2 (ocf:heartbeat:VirtualDomain): Started rhel7-3 * Clone Set: lxc-ms-master [lxc-ms] (promotable): * Promoted: [ lxc1 ] * Unpromoted: [ lxc2 ] diff --git a/cts/scheduler/summary/inc11.summary b/cts/scheduler/summary/inc11.summary index 256a10e8f7..1149123210 100644 --- a/cts/scheduler/summary/inc11.summary +++ b/cts/scheduler/summary/inc11.summary @@ -1,43 +1,43 @@ Current cluster status: * Node List: * Online: [ node0 node1 node2 ] * Full List of Resources: * simple-rsc (ocf:heartbeat:apache): Stopped * Clone Set: rsc1 [child_rsc1] (promotable, unique): * child_rsc1:0 (ocf:heartbeat:apache): 
Stopped * child_rsc1:1 (ocf:heartbeat:apache): Stopped Transition Summary: - * Start simple-rsc ( node2 ) - * Start child_rsc1:0 ( node1 ) + * Start simple-rsc ( node2 ) + * Start child_rsc1:0 ( node1 ) * Promote child_rsc1:1 ( Stopped -> Promoted node2 ) Executing Cluster Transition: * Resource action: simple-rsc monitor on node2 * Resource action: simple-rsc monitor on node1 * Resource action: simple-rsc monitor on node0 * Resource action: child_rsc1:0 monitor on node2 * Resource action: child_rsc1:0 monitor on node1 * Resource action: child_rsc1:0 monitor on node0 * Resource action: child_rsc1:1 monitor on node2 * Resource action: child_rsc1:1 monitor on node1 * Resource action: child_rsc1:1 monitor on node0 * Pseudo action: rsc1_start_0 * Resource action: simple-rsc start on node2 * Resource action: child_rsc1:0 start on node1 * Resource action: child_rsc1:1 start on node2 * Pseudo action: rsc1_running_0 * Pseudo action: rsc1_promote_0 * Resource action: child_rsc1:1 promote on node2 * Pseudo action: rsc1_promoted_0 Revised Cluster Status: * Node List: * Online: [ node0 node1 node2 ] * Full List of Resources: * simple-rsc (ocf:heartbeat:apache): Started node2 * Clone Set: rsc1 [child_rsc1] (promotable, unique): * child_rsc1:0 (ocf:heartbeat:apache): Unpromoted node1 * child_rsc1:1 (ocf:heartbeat:apache): Promoted node2 diff --git a/cts/scheduler/summary/inc12.summary b/cts/scheduler/summary/inc12.summary index 2c93e2678c..36ffffad8f 100644 --- a/cts/scheduler/summary/inc12.summary +++ b/cts/scheduler/summary/inc12.summary @@ -1,132 +1,132 @@ Current cluster status: * Node List: * Online: [ c001n02 c001n03 c001n04 c001n05 c001n06 c001n07 ] * Full List of Resources: * DcIPaddr (ocf:heartbeat:IPaddr): Stopped * Resource Group: group-1: * ocf_192.168.100.181 (ocf:heartbeat:IPaddr): Started c001n02 * heartbeat_192.168.100.182 (ocf:heartbeat:IPaddr): Started c001n02 * ocf_192.168.100.183 (ocf:heartbeat:IPaddr): Started c001n02 * lsb_dummy 
(lsb:/usr/lib/heartbeat/cts/LSBDummy): Started c001n04 * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n05 * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02 * rsc_c001n04 (ocf:heartbeat:IPaddr): Started c001n04 * rsc_c001n05 (ocf:heartbeat:IPaddr): Started c001n05 * rsc_c001n06 (ocf:heartbeat:IPaddr): Started c001n06 * rsc_c001n07 (ocf:heartbeat:IPaddr): Started c001n07 * Clone Set: DoFencing [child_DoFencing]: * Started: [ c001n02 c001n04 c001n05 c001n06 c001n07 ] * Stopped: [ c001n03 ] * Clone Set: master_rsc_1 [ocf_msdummy] (promotable, unique): * ocf_msdummy:0 (ocf:heartbeat:Stateful): Stopped * ocf_msdummy:1 (ocf:heartbeat:Stateful): Stopped * ocf_msdummy:2 (ocf:heartbeat:Stateful): Unpromoted c001n04 * ocf_msdummy:3 (ocf:heartbeat:Stateful): Unpromoted c001n04 * ocf_msdummy:4 (ocf:heartbeat:Stateful): Unpromoted c001n05 * ocf_msdummy:5 (ocf:heartbeat:Stateful): Unpromoted c001n05 * ocf_msdummy:6 (ocf:heartbeat:Stateful): Unpromoted c001n06 * ocf_msdummy:7 (ocf:heartbeat:Stateful): Unpromoted c001n06 * ocf_msdummy:8 (ocf:heartbeat:Stateful): Unpromoted c001n07 * ocf_msdummy:9 (ocf:heartbeat:Stateful): Unpromoted c001n07 * ocf_msdummy:10 (ocf:heartbeat:Stateful): Unpromoted c001n02 * ocf_msdummy:11 (ocf:heartbeat:Stateful): Unpromoted c001n02 Transition Summary: - * Stop ocf_192.168.100.181 ( c001n02 ) due to node availability - * Stop heartbeat_192.168.100.182 ( c001n02 ) due to node availability - * Stop ocf_192.168.100.183 ( c001n02 ) due to node availability - * Stop lsb_dummy ( c001n04 ) due to node availability - * Stop rsc_c001n03 ( c001n05 ) due to node availability - * Stop rsc_c001n02 ( c001n02 ) due to node availability - * Stop rsc_c001n04 ( c001n04 ) due to node availability - * Stop rsc_c001n05 ( c001n05 ) due to node availability - * Stop rsc_c001n06 ( c001n06 ) due to node availability - * Stop rsc_c001n07 ( c001n07 ) due to node availability - * Stop child_DoFencing:0 ( c001n02 ) due to node availability - * Stop child_DoFencing:1 ( 
c001n04 ) due to node availability - * Stop child_DoFencing:2 ( c001n05 ) due to node availability - * Stop child_DoFencing:3 ( c001n06 ) due to node availability - * Stop child_DoFencing:4 ( c001n07 ) due to node availability + * Stop ocf_192.168.100.181 ( c001n02 ) due to node availability + * Stop heartbeat_192.168.100.182 ( c001n02 ) due to node availability + * Stop ocf_192.168.100.183 ( c001n02 ) due to node availability + * Stop lsb_dummy ( c001n04 ) due to node availability + * Stop rsc_c001n03 ( c001n05 ) due to node availability + * Stop rsc_c001n02 ( c001n02 ) due to node availability + * Stop rsc_c001n04 ( c001n04 ) due to node availability + * Stop rsc_c001n05 ( c001n05 ) due to node availability + * Stop rsc_c001n06 ( c001n06 ) due to node availability + * Stop rsc_c001n07 ( c001n07 ) due to node availability + * Stop child_DoFencing:0 ( c001n02 ) due to node availability + * Stop child_DoFencing:1 ( c001n04 ) due to node availability + * Stop child_DoFencing:2 ( c001n05 ) due to node availability + * Stop child_DoFencing:3 ( c001n06 ) due to node availability + * Stop child_DoFencing:4 ( c001n07 ) due to node availability * Stop ocf_msdummy:2 ( Unpromoted c001n04 ) due to node availability * Stop ocf_msdummy:3 ( Unpromoted c001n04 ) due to node availability * Stop ocf_msdummy:4 ( Unpromoted c001n05 ) due to node availability * Stop ocf_msdummy:5 ( Unpromoted c001n05 ) due to node availability * Stop ocf_msdummy:6 ( Unpromoted c001n06 ) due to node availability * Stop ocf_msdummy:7 ( Unpromoted c001n06 ) due to node availability * Stop ocf_msdummy:8 ( Unpromoted c001n07 ) due to node availability * Stop ocf_msdummy:9 ( Unpromoted c001n07 ) due to node availability * Stop ocf_msdummy:10 ( Unpromoted c001n02 ) due to node availability * Stop ocf_msdummy:11 ( Unpromoted c001n02 ) due to node availability Executing Cluster Transition: * Pseudo action: group-1_stop_0 * Resource action: ocf_192.168.100.183 stop on c001n02 * Resource action: lsb_dummy stop 
on c001n04 * Resource action: rsc_c001n03 stop on c001n05 * Resource action: rsc_c001n02 stop on c001n02 * Resource action: rsc_c001n04 stop on c001n04 * Resource action: rsc_c001n05 stop on c001n05 * Resource action: rsc_c001n06 stop on c001n06 * Resource action: rsc_c001n07 stop on c001n07 * Pseudo action: DoFencing_stop_0 * Pseudo action: master_rsc_1_stop_0 * Resource action: heartbeat_192.168.100.182 stop on c001n02 * Resource action: child_DoFencing:1 stop on c001n02 * Resource action: child_DoFencing:2 stop on c001n04 * Resource action: child_DoFencing:3 stop on c001n05 * Resource action: child_DoFencing:4 stop on c001n06 * Resource action: child_DoFencing:5 stop on c001n07 * Pseudo action: DoFencing_stopped_0 * Resource action: ocf_msdummy:2 stop on c001n04 * Resource action: ocf_msdummy:3 stop on c001n04 * Resource action: ocf_msdummy:4 stop on c001n05 * Resource action: ocf_msdummy:5 stop on c001n05 * Resource action: ocf_msdummy:6 stop on c001n06 * Resource action: ocf_msdummy:7 stop on c001n06 * Resource action: ocf_msdummy:8 stop on c001n07 * Resource action: ocf_msdummy:9 stop on c001n07 * Resource action: ocf_msdummy:10 stop on c001n02 * Resource action: ocf_msdummy:11 stop on c001n02 * Pseudo action: master_rsc_1_stopped_0 * Cluster action: do_shutdown on c001n07 * Cluster action: do_shutdown on c001n06 * Cluster action: do_shutdown on c001n05 * Cluster action: do_shutdown on c001n04 * Resource action: ocf_192.168.100.181 stop on c001n02 * Cluster action: do_shutdown on c001n02 * Pseudo action: group-1_stopped_0 * Cluster action: do_shutdown on c001n03 Revised Cluster Status: * Node List: * Online: [ c001n02 c001n03 c001n04 c001n05 c001n06 c001n07 ] * Full List of Resources: * DcIPaddr (ocf:heartbeat:IPaddr): Stopped * Resource Group: group-1: * ocf_192.168.100.181 (ocf:heartbeat:IPaddr): Stopped * heartbeat_192.168.100.182 (ocf:heartbeat:IPaddr): Stopped * ocf_192.168.100.183 (ocf:heartbeat:IPaddr): Stopped * lsb_dummy 
(lsb:/usr/lib/heartbeat/cts/LSBDummy): Stopped * rsc_c001n03 (ocf:heartbeat:IPaddr): Stopped * rsc_c001n02 (ocf:heartbeat:IPaddr): Stopped * rsc_c001n04 (ocf:heartbeat:IPaddr): Stopped * rsc_c001n05 (ocf:heartbeat:IPaddr): Stopped * rsc_c001n06 (ocf:heartbeat:IPaddr): Stopped * rsc_c001n07 (ocf:heartbeat:IPaddr): Stopped * Clone Set: DoFencing [child_DoFencing]: * Stopped: [ c001n02 c001n03 c001n04 c001n05 c001n06 c001n07 ] * Clone Set: master_rsc_1 [ocf_msdummy] (promotable, unique): * ocf_msdummy:0 (ocf:heartbeat:Stateful): Stopped * ocf_msdummy:1 (ocf:heartbeat:Stateful): Stopped * ocf_msdummy:2 (ocf:heartbeat:Stateful): Stopped * ocf_msdummy:3 (ocf:heartbeat:Stateful): Stopped * ocf_msdummy:4 (ocf:heartbeat:Stateful): Stopped * ocf_msdummy:5 (ocf:heartbeat:Stateful): Stopped * ocf_msdummy:6 (ocf:heartbeat:Stateful): Stopped * ocf_msdummy:7 (ocf:heartbeat:Stateful): Stopped * ocf_msdummy:8 (ocf:heartbeat:Stateful): Stopped * ocf_msdummy:9 (ocf:heartbeat:Stateful): Stopped * ocf_msdummy:10 (ocf:heartbeat:Stateful): Stopped * ocf_msdummy:11 (ocf:heartbeat:Stateful): Stopped diff --git a/cts/scheduler/summary/migrate-fencing.summary b/cts/scheduler/summary/migrate-fencing.summary index 955bb0f434..ebc65bd6a8 100644 --- a/cts/scheduler/summary/migrate-fencing.summary +++ b/cts/scheduler/summary/migrate-fencing.summary @@ -1,108 +1,108 @@ Current cluster status: * Node List: * Node pcmk-4: UNCLEAN (online) * Online: [ pcmk-1 pcmk-2 pcmk-3 ] * Full List of Resources: * Clone Set: Fencing [FencingChild]: * Started: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ] * Resource Group: group-1: * r192.168.101.181 (ocf:heartbeat:IPaddr): Started pcmk-4 * r192.168.101.182 (ocf:heartbeat:IPaddr): Started pcmk-4 * r192.168.101.183 (ocf:heartbeat:IPaddr): Started pcmk-4 * rsc_pcmk-1 (ocf:heartbeat:IPaddr): Started pcmk-1 * rsc_pcmk-2 (ocf:heartbeat:IPaddr): Started pcmk-2 * rsc_pcmk-3 (ocf:heartbeat:IPaddr): Started pcmk-3 * rsc_pcmk-4 (ocf:heartbeat:IPaddr): Started pcmk-4 * lsb-dummy 
(lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Started pcmk-4 * migrator (ocf:pacemaker:Dummy): Started pcmk-1 * Clone Set: Connectivity [ping-1]: * Started: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ] * Clone Set: master-1 [stateful-1] (promotable): * Promoted: [ pcmk-4 ] * Unpromoted: [ pcmk-1 pcmk-2 pcmk-3 ] Transition Summary: * Fence (reboot) pcmk-4 'termination was requested' - * Stop FencingChild:0 ( pcmk-4 ) due to node availability - * Move r192.168.101.181 ( pcmk-4 -> pcmk-1 ) - * Move r192.168.101.182 ( pcmk-4 -> pcmk-1 ) - * Move r192.168.101.183 ( pcmk-4 -> pcmk-1 ) - * Move rsc_pcmk-4 ( pcmk-4 -> pcmk-2 ) - * Move lsb-dummy ( pcmk-4 -> pcmk-1 ) - * Migrate migrator ( pcmk-1 -> pcmk-3 ) - * Stop ping-1:0 ( pcmk-4 ) due to node availability - * Stop stateful-1:0 ( Promoted pcmk-4 ) due to node availability + * Stop FencingChild:0 ( pcmk-4 ) due to node availability + * Move r192.168.101.181 ( pcmk-4 -> pcmk-1 ) + * Move r192.168.101.182 ( pcmk-4 -> pcmk-1 ) + * Move r192.168.101.183 ( pcmk-4 -> pcmk-1 ) + * Move rsc_pcmk-4 ( pcmk-4 -> pcmk-2 ) + * Move lsb-dummy ( pcmk-4 -> pcmk-1 ) + * Migrate migrator ( pcmk-1 -> pcmk-3 ) + * Stop ping-1:0 ( pcmk-4 ) due to node availability + * Stop stateful-1:0 ( Promoted pcmk-4 ) due to node availability * Promote stateful-1:1 ( Unpromoted -> Promoted pcmk-1 ) Executing Cluster Transition: * Pseudo action: Fencing_stop_0 * Resource action: stateful-1:3 monitor=15000 on pcmk-3 * Resource action: stateful-1:2 monitor=15000 on pcmk-2 * Fencing pcmk-4 (reboot) * Pseudo action: FencingChild:0_stop_0 * Pseudo action: Fencing_stopped_0 * Pseudo action: rsc_pcmk-4_stop_0 * Pseudo action: lsb-dummy_stop_0 * Resource action: migrator migrate_to on pcmk-1 * Pseudo action: Connectivity_stop_0 * Pseudo action: group-1_stop_0 * Pseudo action: r192.168.101.183_stop_0 * Resource action: rsc_pcmk-4 start on pcmk-2 * Resource action: migrator migrate_from on pcmk-3 * Resource action: migrator stop on pcmk-1 * Pseudo action: ping-1:0_stop_0 * 
Pseudo action: Connectivity_stopped_0 * Pseudo action: r192.168.101.182_stop_0 * Resource action: rsc_pcmk-4 monitor=5000 on pcmk-2 * Pseudo action: migrator_start_0 * Pseudo action: r192.168.101.181_stop_0 * Resource action: migrator monitor=10000 on pcmk-3 * Pseudo action: group-1_stopped_0 * Pseudo action: master-1_demote_0 * Pseudo action: stateful-1:0_demote_0 * Pseudo action: master-1_demoted_0 * Pseudo action: master-1_stop_0 * Pseudo action: stateful-1:0_stop_0 * Pseudo action: master-1_stopped_0 * Pseudo action: master-1_promote_0 * Resource action: stateful-1:1 promote on pcmk-1 * Pseudo action: master-1_promoted_0 * Pseudo action: group-1_start_0 * Resource action: r192.168.101.181 start on pcmk-1 * Resource action: r192.168.101.182 start on pcmk-1 * Resource action: r192.168.101.183 start on pcmk-1 * Resource action: stateful-1:1 monitor=16000 on pcmk-1 * Pseudo action: group-1_running_0 * Resource action: r192.168.101.181 monitor=5000 on pcmk-1 * Resource action: r192.168.101.182 monitor=5000 on pcmk-1 * Resource action: r192.168.101.183 monitor=5000 on pcmk-1 * Resource action: lsb-dummy start on pcmk-1 * Resource action: lsb-dummy monitor=5000 on pcmk-1 Revised Cluster Status: * Node List: * Online: [ pcmk-1 pcmk-2 pcmk-3 ] * OFFLINE: [ pcmk-4 ] * Full List of Resources: * Clone Set: Fencing [FencingChild]: * Started: [ pcmk-1 pcmk-2 pcmk-3 ] * Stopped: [ pcmk-4 ] * Resource Group: group-1: * r192.168.101.181 (ocf:heartbeat:IPaddr): Started pcmk-1 * r192.168.101.182 (ocf:heartbeat:IPaddr): Started pcmk-1 * r192.168.101.183 (ocf:heartbeat:IPaddr): Started pcmk-1 * rsc_pcmk-1 (ocf:heartbeat:IPaddr): Started pcmk-1 * rsc_pcmk-2 (ocf:heartbeat:IPaddr): Started pcmk-2 * rsc_pcmk-3 (ocf:heartbeat:IPaddr): Started pcmk-3 * rsc_pcmk-4 (ocf:heartbeat:IPaddr): Started pcmk-2 * lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Started pcmk-1 * migrator (ocf:pacemaker:Dummy): Started pcmk-3 * Clone Set: Connectivity [ping-1]: * Started: [ pcmk-1 pcmk-2 
pcmk-3 ] * Stopped: [ pcmk-4 ] * Clone Set: master-1 [stateful-1] (promotable): * Promoted: [ pcmk-1 ] * Unpromoted: [ pcmk-2 pcmk-3 ] * Stopped: [ pcmk-4 ] diff --git a/cts/scheduler/summary/migrate-shutdown.summary b/cts/scheduler/summary/migrate-shutdown.summary index 1da9db21e8..985b554c22 100644 --- a/cts/scheduler/summary/migrate-shutdown.summary +++ b/cts/scheduler/summary/migrate-shutdown.summary @@ -1,92 +1,92 @@ Current cluster status: * Node List: * Online: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ] * Full List of Resources: * Fencing (stonith:fence_xvm): Started pcmk-1 * Resource Group: group-1: * r192.168.122.105 (ocf:heartbeat:IPaddr): Started pcmk-2 * r192.168.122.106 (ocf:heartbeat:IPaddr): Started pcmk-2 * r192.168.122.107 (ocf:heartbeat:IPaddr): Started pcmk-2 * rsc_pcmk-1 (ocf:heartbeat:IPaddr): Started pcmk-1 * rsc_pcmk-2 (ocf:heartbeat:IPaddr): Started pcmk-2 * rsc_pcmk-3 (ocf:heartbeat:IPaddr): Stopped * rsc_pcmk-4 (ocf:heartbeat:IPaddr): Started pcmk-4 * lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Started pcmk-2 * migrator (ocf:pacemaker:Dummy): Started pcmk-1 * Clone Set: Connectivity [ping-1]: * Started: [ pcmk-1 pcmk-2 pcmk-4 ] * Stopped: [ pcmk-3 ] * Clone Set: master-1 [stateful-1] (promotable): * Promoted: [ pcmk-2 ] * Unpromoted: [ pcmk-1 pcmk-4 ] * Stopped: [ pcmk-3 ] Transition Summary: - * Stop Fencing ( pcmk-1 ) due to node availability - * Stop r192.168.122.105 ( pcmk-2 ) due to node availability - * Stop r192.168.122.106 ( pcmk-2 ) due to node availability - * Stop r192.168.122.107 ( pcmk-2 ) due to node availability - * Stop rsc_pcmk-1 ( pcmk-1 ) due to node availability - * Stop rsc_pcmk-2 ( pcmk-2 ) due to node availability - * Stop rsc_pcmk-4 ( pcmk-4 ) due to node availability - * Stop lsb-dummy ( pcmk-2 ) due to node availability - * Stop migrator ( pcmk-1 ) due to node availability - * Stop ping-1:0 ( pcmk-1 ) due to node availability - * Stop ping-1:1 ( pcmk-2 ) due to node availability - * Stop ping-1:2 ( pcmk-4 ) due 
to node availability - * Stop stateful-1:0 ( Unpromoted pcmk-1 ) due to node availability - * Stop stateful-1:1 ( Promoted pcmk-2 ) due to node availability - * Stop stateful-1:2 ( Unpromoted pcmk-4 ) due to node availability + * Stop Fencing ( pcmk-1 ) due to node availability + * Stop r192.168.122.105 ( pcmk-2 ) due to node availability + * Stop r192.168.122.106 ( pcmk-2 ) due to node availability + * Stop r192.168.122.107 ( pcmk-2 ) due to node availability + * Stop rsc_pcmk-1 ( pcmk-1 ) due to node availability + * Stop rsc_pcmk-2 ( pcmk-2 ) due to node availability + * Stop rsc_pcmk-4 ( pcmk-4 ) due to node availability + * Stop lsb-dummy ( pcmk-2 ) due to node availability + * Stop migrator ( pcmk-1 ) due to node availability + * Stop ping-1:0 ( pcmk-1 ) due to node availability + * Stop ping-1:1 ( pcmk-2 ) due to node availability + * Stop ping-1:2 ( pcmk-4 ) due to node availability + * Stop stateful-1:0 ( Unpromoted pcmk-1 ) due to node availability + * Stop stateful-1:1 ( Promoted pcmk-2 ) due to node availability + * Stop stateful-1:2 ( Unpromoted pcmk-4 ) due to node availability Executing Cluster Transition: * Resource action: Fencing stop on pcmk-1 * Resource action: rsc_pcmk-1 stop on pcmk-1 * Resource action: rsc_pcmk-2 stop on pcmk-2 * Resource action: rsc_pcmk-4 stop on pcmk-4 * Resource action: lsb-dummy stop on pcmk-2 * Resource action: migrator stop on pcmk-1 * Resource action: migrator stop on pcmk-3 * Pseudo action: Connectivity_stop_0 * Cluster action: do_shutdown on pcmk-3 * Pseudo action: group-1_stop_0 * Resource action: r192.168.122.107 stop on pcmk-2 * Resource action: ping-1:0 stop on pcmk-1 * Resource action: ping-1:1 stop on pcmk-2 * Resource action: ping-1:3 stop on pcmk-4 * Pseudo action: Connectivity_stopped_0 * Resource action: r192.168.122.106 stop on pcmk-2 * Resource action: r192.168.122.105 stop on pcmk-2 * Pseudo action: group-1_stopped_0 * Pseudo action: master-1_demote_0 * Resource action: stateful-1:0 demote on pcmk-2 * 
Pseudo action: master-1_demoted_0 * Pseudo action: master-1_stop_0 * Resource action: stateful-1:2 stop on pcmk-1 * Resource action: stateful-1:0 stop on pcmk-2 * Resource action: stateful-1:3 stop on pcmk-4 * Pseudo action: master-1_stopped_0 * Cluster action: do_shutdown on pcmk-4 * Cluster action: do_shutdown on pcmk-2 * Cluster action: do_shutdown on pcmk-1 Revised Cluster Status: * Node List: * Online: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ] * Full List of Resources: * Fencing (stonith:fence_xvm): Stopped * Resource Group: group-1: * r192.168.122.105 (ocf:heartbeat:IPaddr): Stopped * r192.168.122.106 (ocf:heartbeat:IPaddr): Stopped * r192.168.122.107 (ocf:heartbeat:IPaddr): Stopped * rsc_pcmk-1 (ocf:heartbeat:IPaddr): Stopped * rsc_pcmk-2 (ocf:heartbeat:IPaddr): Stopped * rsc_pcmk-3 (ocf:heartbeat:IPaddr): Stopped * rsc_pcmk-4 (ocf:heartbeat:IPaddr): Stopped * lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Stopped * migrator (ocf:pacemaker:Dummy): Stopped * Clone Set: Connectivity [ping-1]: * Stopped: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ] * Clone Set: master-1 [stateful-1] (promotable): * Stopped: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ] diff --git a/cts/scheduler/summary/nested-remote-recovery.summary b/cts/scheduler/summary/nested-remote-recovery.summary index 3114f2790c..0274d2d876 100644 --- a/cts/scheduler/summary/nested-remote-recovery.summary +++ b/cts/scheduler/summary/nested-remote-recovery.summary @@ -1,131 +1,131 @@ Using the original execution date of: 2018-09-11 21:23:25Z Current cluster status: * Node List: * Online: [ controller-0 controller-1 controller-2 ] * RemoteOnline: [ database-0 database-1 database-2 messaging-0 messaging-1 messaging-2 ] * GuestOnline: [ galera-bundle-1@controller-1 galera-bundle-2@controller-2 rabbitmq-bundle-0@controller-2 rabbitmq-bundle-1@controller-1 rabbitmq-bundle-2@controller-1 redis-bundle-0@controller-0 redis-bundle-1@controller-1 redis-bundle-2@controller-2 ] * Full List of Resources: * database-0 (ocf:pacemaker:remote): 
Started controller-0 * database-1 (ocf:pacemaker:remote): Started controller-1 * database-2 (ocf:pacemaker:remote): Started controller-2 * messaging-0 (ocf:pacemaker:remote): Started controller-2 * messaging-1 (ocf:pacemaker:remote): Started controller-1 * messaging-2 (ocf:pacemaker:remote): Started controller-1 * Container bundle set: galera-bundle [192.168.24.1:8787/rhosp13/openstack-mariadb:pcmklatest]: * galera-bundle-0 (ocf:heartbeat:galera): FAILED Promoted database-0 * galera-bundle-1 (ocf:heartbeat:galera): Promoted database-1 * galera-bundle-2 (ocf:heartbeat:galera): Promoted database-2 * Container bundle set: rabbitmq-bundle [192.168.24.1:8787/rhosp13/openstack-rabbitmq:pcmklatest]: * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Started messaging-0 * rabbitmq-bundle-1 (ocf:heartbeat:rabbitmq-cluster): Started messaging-1 * rabbitmq-bundle-2 (ocf:heartbeat:rabbitmq-cluster): Started messaging-2 * Container bundle set: redis-bundle [192.168.24.1:8787/rhosp13/openstack-redis:pcmklatest]: * redis-bundle-0 (ocf:heartbeat:redis): Unpromoted controller-0 * redis-bundle-1 (ocf:heartbeat:redis): Promoted controller-1 * redis-bundle-2 (ocf:heartbeat:redis): Unpromoted controller-2 * ip-192.168.24.12 (ocf:heartbeat:IPaddr2): Started controller-1 * ip-10.0.0.109 (ocf:heartbeat:IPaddr2): Started controller-2 * ip-172.17.1.18 (ocf:heartbeat:IPaddr2): Started controller-1 * ip-172.17.1.12 (ocf:heartbeat:IPaddr2): Started controller-2 * ip-172.17.3.18 (ocf:heartbeat:IPaddr2): Started controller-1 * ip-172.17.4.14 (ocf:heartbeat:IPaddr2): Started controller-1 * Container bundle set: haproxy-bundle [192.168.24.1:8787/rhosp13/openstack-haproxy:pcmklatest]: * haproxy-bundle-docker-0 (ocf:heartbeat:docker): Started controller-0 * haproxy-bundle-docker-1 (ocf:heartbeat:docker): Started controller-1 * haproxy-bundle-docker-2 (ocf:heartbeat:docker): Started controller-2 * Container bundle: openstack-cinder-volume 
[192.168.24.1:8787/rhosp13/openstack-cinder-volume:pcmklatest]: * openstack-cinder-volume-docker-0 (ocf:heartbeat:docker): Started controller-0 * stonith-fence_ipmilan-5254005f9a33 (stonith:fence_ipmilan): Started controller-2 * stonith-fence_ipmilan-52540098c9ff (stonith:fence_ipmilan): Started controller-1 * stonith-fence_ipmilan-5254000203a2 (stonith:fence_ipmilan): Started controller-2 * stonith-fence_ipmilan-5254003296a5 (stonith:fence_ipmilan): Started controller-1 * stonith-fence_ipmilan-52540066e27e (stonith:fence_ipmilan): Started controller-1 * stonith-fence_ipmilan-52540065418e (stonith:fence_ipmilan): Started controller-2 * stonith-fence_ipmilan-525400aab9d9 (stonith:fence_ipmilan): Started controller-2 * stonith-fence_ipmilan-525400a16c0d (stonith:fence_ipmilan): Started controller-1 * stonith-fence_ipmilan-5254002f6d57 (stonith:fence_ipmilan): Started controller-1 Transition Summary: * Fence (reboot) galera-bundle-0 (resource: galera-bundle-docker-0) 'guest is unclean' - * Recover galera-bundle-docker-0 ( database-0 ) - * Recover galera-bundle-0 ( controller-0 ) + * Recover galera-bundle-docker-0 ( database-0 ) + * Recover galera-bundle-0 ( controller-0 ) * Recover galera:0 ( Promoted galera-bundle-0 ) Executing Cluster Transition: * Resource action: galera-bundle-0 stop on controller-0 * Pseudo action: galera-bundle_demote_0 * Pseudo action: galera-bundle-master_demote_0 * Pseudo action: galera_demote_0 * Pseudo action: galera-bundle-master_demoted_0 * Pseudo action: galera-bundle_demoted_0 * Pseudo action: galera-bundle_stop_0 * Resource action: galera-bundle-docker-0 stop on database-0 * Pseudo action: stonith-galera-bundle-0-reboot on galera-bundle-0 * Pseudo action: galera-bundle-master_stop_0 * Pseudo action: galera_stop_0 * Pseudo action: galera-bundle-master_stopped_0 * Pseudo action: galera-bundle_stopped_0 * Pseudo action: galera-bundle_start_0 * Pseudo action: galera-bundle-master_start_0 * Resource action: galera-bundle-docker-0 start on 
database-0 * Resource action: galera-bundle-docker-0 monitor=60000 on database-0 * Resource action: galera-bundle-0 start on controller-0 * Resource action: galera-bundle-0 monitor=30000 on controller-0 * Resource action: galera start on galera-bundle-0 * Pseudo action: galera-bundle-master_running_0 * Pseudo action: galera-bundle_running_0 * Pseudo action: galera-bundle_promote_0 * Pseudo action: galera-bundle-master_promote_0 * Resource action: galera promote on galera-bundle-0 * Pseudo action: galera-bundle-master_promoted_0 * Pseudo action: galera-bundle_promoted_0 * Resource action: galera monitor=10000 on galera-bundle-0 Using the original execution date of: 2018-09-11 21:23:25Z Revised Cluster Status: * Node List: * Online: [ controller-0 controller-1 controller-2 ] * RemoteOnline: [ database-0 database-1 database-2 messaging-0 messaging-1 messaging-2 ] * GuestOnline: [ galera-bundle-0@controller-0 galera-bundle-1@controller-1 galera-bundle-2@controller-2 rabbitmq-bundle-0@controller-2 rabbitmq-bundle-1@controller-1 rabbitmq-bundle-2@controller-1 redis-bundle-0@controller-0 redis-bundle-1@controller-1 redis-bundle-2@controller-2 ] * Full List of Resources: * database-0 (ocf:pacemaker:remote): Started controller-0 * database-1 (ocf:pacemaker:remote): Started controller-1 * database-2 (ocf:pacemaker:remote): Started controller-2 * messaging-0 (ocf:pacemaker:remote): Started controller-2 * messaging-1 (ocf:pacemaker:remote): Started controller-1 * messaging-2 (ocf:pacemaker:remote): Started controller-1 * Container bundle set: galera-bundle [192.168.24.1:8787/rhosp13/openstack-mariadb:pcmklatest]: * galera-bundle-0 (ocf:heartbeat:galera): Promoted database-0 * galera-bundle-1 (ocf:heartbeat:galera): Promoted database-1 * galera-bundle-2 (ocf:heartbeat:galera): Promoted database-2 * Container bundle set: rabbitmq-bundle [192.168.24.1:8787/rhosp13/openstack-rabbitmq:pcmklatest]: * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Started messaging-0 * 
rabbitmq-bundle-1 (ocf:heartbeat:rabbitmq-cluster): Started messaging-1 * rabbitmq-bundle-2 (ocf:heartbeat:rabbitmq-cluster): Started messaging-2 * Container bundle set: redis-bundle [192.168.24.1:8787/rhosp13/openstack-redis:pcmklatest]: * redis-bundle-0 (ocf:heartbeat:redis): Unpromoted controller-0 * redis-bundle-1 (ocf:heartbeat:redis): Promoted controller-1 * redis-bundle-2 (ocf:heartbeat:redis): Unpromoted controller-2 * ip-192.168.24.12 (ocf:heartbeat:IPaddr2): Started controller-1 * ip-10.0.0.109 (ocf:heartbeat:IPaddr2): Started controller-2 * ip-172.17.1.18 (ocf:heartbeat:IPaddr2): Started controller-1 * ip-172.17.1.12 (ocf:heartbeat:IPaddr2): Started controller-2 * ip-172.17.3.18 (ocf:heartbeat:IPaddr2): Started controller-1 * ip-172.17.4.14 (ocf:heartbeat:IPaddr2): Started controller-1 * Container bundle set: haproxy-bundle [192.168.24.1:8787/rhosp13/openstack-haproxy:pcmklatest]: * haproxy-bundle-docker-0 (ocf:heartbeat:docker): Started controller-0 * haproxy-bundle-docker-1 (ocf:heartbeat:docker): Started controller-1 * haproxy-bundle-docker-2 (ocf:heartbeat:docker): Started controller-2 * Container bundle: openstack-cinder-volume [192.168.24.1:8787/rhosp13/openstack-cinder-volume:pcmklatest]: * openstack-cinder-volume-docker-0 (ocf:heartbeat:docker): Started controller-0 * stonith-fence_ipmilan-5254005f9a33 (stonith:fence_ipmilan): Started controller-2 * stonith-fence_ipmilan-52540098c9ff (stonith:fence_ipmilan): Started controller-1 * stonith-fence_ipmilan-5254000203a2 (stonith:fence_ipmilan): Started controller-2 * stonith-fence_ipmilan-5254003296a5 (stonith:fence_ipmilan): Started controller-1 * stonith-fence_ipmilan-52540066e27e (stonith:fence_ipmilan): Started controller-1 * stonith-fence_ipmilan-52540065418e (stonith:fence_ipmilan): Started controller-2 * stonith-fence_ipmilan-525400aab9d9 (stonith:fence_ipmilan): Started controller-2 * stonith-fence_ipmilan-525400a16c0d (stonith:fence_ipmilan): Started controller-1 * 
stonith-fence_ipmilan-5254002f6d57 (stonith:fence_ipmilan): Started controller-1 diff --git a/cts/scheduler/summary/no-promote-on-unrunnable-guest.summary b/cts/scheduler/summary/no-promote-on-unrunnable-guest.summary index 8eb68a4cb9..493b50c856 100644 --- a/cts/scheduler/summary/no-promote-on-unrunnable-guest.summary +++ b/cts/scheduler/summary/no-promote-on-unrunnable-guest.summary @@ -1,103 +1,103 @@ Using the original execution date of: 2020-05-14 10:49:31Z Current cluster status: * Node List: * Online: [ controller-0 controller-1 controller-2 ] * GuestOnline: [ galera-bundle-0@controller-0 galera-bundle-1@controller-1 galera-bundle-2@controller-2 ovn-dbs-bundle-0@controller-0 ovn-dbs-bundle-1@controller-1 ovn-dbs-bundle-2@controller-2 rabbitmq-bundle-0@controller-0 rabbitmq-bundle-1@controller-1 rabbitmq-bundle-2@controller-2 redis-bundle-0@controller-0 redis-bundle-1@controller-1 redis-bundle-2@controller-2 ] * Full List of Resources: * Container bundle set: galera-bundle [cluster.common.tag/rhosp16-openstack-mariadb:pcmklatest]: * galera-bundle-0 (ocf:heartbeat:galera): Promoted controller-0 * galera-bundle-1 (ocf:heartbeat:galera): Promoted controller-1 * galera-bundle-2 (ocf:heartbeat:galera): Promoted controller-2 * Container bundle set: rabbitmq-bundle [cluster.common.tag/rhosp16-openstack-rabbitmq:pcmklatest]: * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Started controller-0 * rabbitmq-bundle-1 (ocf:heartbeat:rabbitmq-cluster): Started controller-1 * rabbitmq-bundle-2 (ocf:heartbeat:rabbitmq-cluster): Started controller-2 * Container bundle set: redis-bundle [cluster.common.tag/rhosp16-openstack-redis:pcmklatest]: * redis-bundle-0 (ocf:heartbeat:redis): Promoted controller-0 * redis-bundle-1 (ocf:heartbeat:redis): Unpromoted controller-1 * redis-bundle-2 (ocf:heartbeat:redis): Unpromoted controller-2 * Container bundle set: ovn-dbs-bundle [cluster.common.tag/rhosp16-openstack-ovn-northd:pcmklatest]: * ovn-dbs-bundle-0 (ocf:ovn:ovndb-servers): 
Unpromoted controller-0 * ovn-dbs-bundle-1 (ocf:ovn:ovndb-servers): Unpromoted controller-1 * ovn-dbs-bundle-2 (ocf:ovn:ovndb-servers): Unpromoted controller-2 * stonith-fence_ipmilan-5254005e097a (stonith:fence_ipmilan): Started controller-0 * stonith-fence_ipmilan-525400afe30e (stonith:fence_ipmilan): Started controller-2 * stonith-fence_ipmilan-525400985679 (stonith:fence_ipmilan): Started controller-1 * Container bundle: openstack-cinder-volume [cluster.common.tag/rhosp16-openstack-cinder-volume:pcmklatest]: * openstack-cinder-volume-podman-0 (ocf:heartbeat:podman): Started controller-0 Transition Summary: - * Stop ovn-dbs-bundle-podman-0 ( controller-0 ) due to node availability - * Stop ovn-dbs-bundle-0 ( controller-0 ) due to unrunnable ovn-dbs-bundle-podman-0 start - * Stop ovndb_servers:0 ( Unpromoted ovn-dbs-bundle-0 ) due to unrunnable ovn-dbs-bundle-podman-0 start + * Stop ovn-dbs-bundle-podman-0 ( controller-0 ) due to node availability + * Stop ovn-dbs-bundle-0 ( controller-0 ) due to unrunnable ovn-dbs-bundle-podman-0 start + * Stop ovndb_servers:0 ( Unpromoted ovn-dbs-bundle-0 ) due to unrunnable ovn-dbs-bundle-podman-0 start * Promote ovndb_servers:1 ( Unpromoted -> Promoted ovn-dbs-bundle-1 ) Executing Cluster Transition: * Resource action: ovndb_servers cancel=30000 on ovn-dbs-bundle-1 * Pseudo action: ovn-dbs-bundle-master_pre_notify_stop_0 * Pseudo action: ovn-dbs-bundle_stop_0 * Resource action: ovndb_servers notify on ovn-dbs-bundle-0 * Resource action: ovndb_servers notify on ovn-dbs-bundle-1 * Resource action: ovndb_servers notify on ovn-dbs-bundle-2 * Pseudo action: ovn-dbs-bundle-master_confirmed-pre_notify_stop_0 * Pseudo action: ovn-dbs-bundle-master_stop_0 * Resource action: ovndb_servers stop on ovn-dbs-bundle-0 * Pseudo action: ovn-dbs-bundle-master_stopped_0 * Resource action: ovn-dbs-bundle-0 stop on controller-0 * Pseudo action: ovn-dbs-bundle-master_post_notify_stopped_0 * Resource action: ovn-dbs-bundle-podman-0 stop on 
controller-0
   * Resource action: ovndb_servers notify on ovn-dbs-bundle-1
   * Resource action: ovndb_servers notify on ovn-dbs-bundle-2
   * Pseudo action: ovn-dbs-bundle-master_confirmed-post_notify_stopped_0
   * Pseudo action: ovn-dbs-bundle-master_pre_notify_start_0
   * Pseudo action: ovn-dbs-bundle_stopped_0
   * Pseudo action: ovn-dbs-bundle-master_confirmed-pre_notify_start_0
   * Pseudo action: ovn-dbs-bundle-master_start_0
   * Pseudo action: ovn-dbs-bundle-master_running_0
   * Pseudo action: ovn-dbs-bundle-master_post_notify_running_0
   * Pseudo action: ovn-dbs-bundle-master_confirmed-post_notify_running_0
   * Pseudo action: ovn-dbs-bundle_running_0
   * Pseudo action: ovn-dbs-bundle-master_pre_notify_promote_0
   * Pseudo action: ovn-dbs-bundle_promote_0
   * Resource action: ovndb_servers notify on ovn-dbs-bundle-1
   * Resource action: ovndb_servers notify on ovn-dbs-bundle-2
   * Pseudo action: ovn-dbs-bundle-master_confirmed-pre_notify_promote_0
   * Pseudo action: ovn-dbs-bundle-master_promote_0
   * Resource action: ovndb_servers promote on ovn-dbs-bundle-1
   * Pseudo action: ovn-dbs-bundle-master_promoted_0
   * Pseudo action: ovn-dbs-bundle-master_post_notify_promoted_0
   * Resource action: ovndb_servers notify on ovn-dbs-bundle-1
   * Resource action: ovndb_servers notify on ovn-dbs-bundle-2
   * Pseudo action: ovn-dbs-bundle-master_confirmed-post_notify_promoted_0
   * Pseudo action: ovn-dbs-bundle_promoted_0
   * Resource action: ovndb_servers monitor=10000 on ovn-dbs-bundle-1
 Using the original execution date of: 2020-05-14 10:49:31Z

 Revised Cluster Status:
   * Node List:
     * Online: [ controller-0 controller-1 controller-2 ]
     * GuestOnline: [ galera-bundle-0@controller-0 galera-bundle-1@controller-1 galera-bundle-2@controller-2 ovn-dbs-bundle-1@controller-1 ovn-dbs-bundle-2@controller-2 rabbitmq-bundle-0@controller-0 rabbitmq-bundle-1@controller-1 rabbitmq-bundle-2@controller-2 redis-bundle-0@controller-0 redis-bundle-1@controller-1 redis-bundle-2@controller-2 ]

   * Full List of Resources:
     * Container bundle set: galera-bundle [cluster.common.tag/rhosp16-openstack-mariadb:pcmklatest]:
       * galera-bundle-0 (ocf:heartbeat:galera): Promoted controller-0
       * galera-bundle-1 (ocf:heartbeat:galera): Promoted controller-1
       * galera-bundle-2 (ocf:heartbeat:galera): Promoted controller-2
     * Container bundle set: rabbitmq-bundle [cluster.common.tag/rhosp16-openstack-rabbitmq:pcmklatest]:
       * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Started controller-0
       * rabbitmq-bundle-1 (ocf:heartbeat:rabbitmq-cluster): Started controller-1
       * rabbitmq-bundle-2 (ocf:heartbeat:rabbitmq-cluster): Started controller-2
     * Container bundle set: redis-bundle [cluster.common.tag/rhosp16-openstack-redis:pcmklatest]:
       * redis-bundle-0 (ocf:heartbeat:redis): Promoted controller-0
       * redis-bundle-1 (ocf:heartbeat:redis): Unpromoted controller-1
       * redis-bundle-2 (ocf:heartbeat:redis): Unpromoted controller-2
     * Container bundle set: ovn-dbs-bundle [cluster.common.tag/rhosp16-openstack-ovn-northd:pcmklatest]:
       * ovn-dbs-bundle-0 (ocf:ovn:ovndb-servers): Stopped
       * ovn-dbs-bundle-1 (ocf:ovn:ovndb-servers): Promoted controller-1
       * ovn-dbs-bundle-2 (ocf:ovn:ovndb-servers): Unpromoted controller-2
     * stonith-fence_ipmilan-5254005e097a (stonith:fence_ipmilan): Started controller-0
     * stonith-fence_ipmilan-525400afe30e (stonith:fence_ipmilan): Started controller-2
     * stonith-fence_ipmilan-525400985679 (stonith:fence_ipmilan): Started controller-1
     * Container bundle: openstack-cinder-volume [cluster.common.tag/rhosp16-openstack-cinder-volume:pcmklatest]:
       * openstack-cinder-volume-podman-0 (ocf:heartbeat:podman): Started controller-0
diff --git a/cts/scheduler/summary/no_quorum_demote.summary b/cts/scheduler/summary/no_quorum_demote.summary
index d2cde3eb11..7de1658048 100644
--- a/cts/scheduler/summary/no_quorum_demote.summary
+++ b/cts/scheduler/summary/no_quorum_demote.summary
@@ -1,40 +1,40 @@
 Using the original execution date of: 2020-06-17 17:26:35Z
 Current cluster status:
   * Node List:
     * Online: [ rhel7-1 rhel7-2 ]
     * OFFLINE: [ rhel7-3 rhel7-4 rhel7-5 ]

   * Full List of Resources:
     * Fencing (stonith:fence_xvm): Started rhel7-1
     * Clone Set: rsc1-clone [rsc1] (promotable):
       * Promoted: [ rhel7-1 ]
       * Unpromoted: [ rhel7-2 ]
       * Stopped: [ rhel7-3 rhel7-4 rhel7-5 ]
     * rsc2 (ocf:pacemaker:Dummy): Started rhel7-2

 Transition Summary:
-  * Stop Fencing ( rhel7-1 ) due to no quorum
+  * Stop Fencing ( rhel7-1 ) due to no quorum
   * Demote rsc1:0 ( Promoted -> Unpromoted rhel7-1 )
-  * Stop rsc2 ( rhel7-2 ) due to no quorum
+  * Stop rsc2 ( rhel7-2 ) due to no quorum

 Executing Cluster Transition:
   * Resource action: Fencing stop on rhel7-1
   * Resource action: rsc1 cancel=10000 on rhel7-1
   * Pseudo action: rsc1-clone_demote_0
   * Resource action: rsc2 stop on rhel7-2
   * Resource action: rsc1 demote on rhel7-1
   * Pseudo action: rsc1-clone_demoted_0
   * Resource action: rsc1 monitor=11000 on rhel7-1
 Using the original execution date of: 2020-06-17 17:26:35Z

 Revised Cluster Status:
   * Node List:
     * Online: [ rhel7-1 rhel7-2 ]
     * OFFLINE: [ rhel7-3 rhel7-4 rhel7-5 ]

   * Full List of Resources:
     * Fencing (stonith:fence_xvm): Stopped
     * Clone Set: rsc1-clone [rsc1] (promotable):
       * Unpromoted: [ rhel7-1 rhel7-2 ]
       * Stopped: [ rhel7-3 rhel7-4 rhel7-5 ]
     * rsc2 (ocf:pacemaker:Dummy): Stopped
diff --git a/cts/scheduler/summary/notify-behind-stopping-remote.summary b/cts/scheduler/summary/notify-behind-stopping-remote.summary
index cfc7f60544..f5d9162029 100644
--- a/cts/scheduler/summary/notify-behind-stopping-remote.summary
+++ b/cts/scheduler/summary/notify-behind-stopping-remote.summary
@@ -1,64 +1,64 @@
 Using the original execution date of: 2018-11-22 20:36:07Z
 Current cluster status:
   * Node List:
     * Online: [ ra1 ra2 ra3 ]
     * GuestOnline: [ redis-bundle-0@ra1 redis-bundle-1@ra2 redis-bundle-2@ra3 ]

   * Full List of Resources:
     * Container bundle set: redis-bundle [docker.io/tripleoqueens/centos-binary-redis:current-tripleo-rdo]:
       * redis-bundle-0 (ocf:heartbeat:redis): Unpromoted ra1
       * redis-bundle-1 (ocf:heartbeat:redis): Stopped ra2
       * redis-bundle-2 (ocf:heartbeat:redis): Unpromoted ra3

 Transition Summary:
   * Promote redis:0 ( Unpromoted -> Promoted redis-bundle-0 )
-  * Stop redis-bundle-docker-1 ( ra2 ) due to node availability
-  * Stop redis-bundle-1 ( ra2 ) due to unrunnable redis-bundle-docker-1 start
-  * Start redis:1 ( redis-bundle-1 ) due to unrunnable redis-bundle-docker-1 start (blocked)
+  * Stop redis-bundle-docker-1 ( ra2 ) due to node availability
+  * Stop redis-bundle-1 ( ra2 ) due to unrunnable redis-bundle-docker-1 start
+  * Start redis:1 ( redis-bundle-1 ) due to unrunnable redis-bundle-docker-1 start (blocked)

 Executing Cluster Transition:
   * Resource action: redis cancel=45000 on redis-bundle-0
   * Resource action: redis cancel=60000 on redis-bundle-0
   * Pseudo action: redis-bundle-master_pre_notify_start_0
   * Resource action: redis-bundle-0 monitor=30000 on ra1
   * Resource action: redis-bundle-0 cancel=60000 on ra1
   * Resource action: redis-bundle-1 stop on ra2
   * Resource action: redis-bundle-1 cancel=60000 on ra2
   * Resource action: redis-bundle-2 monitor=30000 on ra3
   * Resource action: redis-bundle-2 cancel=60000 on ra3
   * Pseudo action: redis-bundle_stop_0
   * Pseudo action: redis-bundle-master_confirmed-pre_notify_start_0
   * Resource action: redis-bundle-docker-1 stop on ra2
   * Pseudo action: redis-bundle_stopped_0
   * Pseudo action: redis-bundle_start_0
   * Pseudo action: redis-bundle-master_start_0
   * Pseudo action: redis-bundle-master_running_0
   * Pseudo action: redis-bundle-master_post_notify_running_0
   * Pseudo action: redis-bundle-master_confirmed-post_notify_running_0
   * Pseudo action: redis-bundle_running_0
   * Pseudo action: redis-bundle-master_pre_notify_promote_0
   * Pseudo action: redis-bundle_promote_0
   * Resource action: redis notify on redis-bundle-0
   * Resource action: redis notify on redis-bundle-2
   * Pseudo action: redis-bundle-master_confirmed-pre_notify_promote_0
   * Pseudo action: redis-bundle-master_promote_0
   * Resource action: redis promote on redis-bundle-0
   * Pseudo action: redis-bundle-master_promoted_0
   * Pseudo action: redis-bundle-master_post_notify_promoted_0
   * Resource action: redis notify on redis-bundle-0
   * Resource action: redis notify on redis-bundle-2
   * Pseudo action: redis-bundle-master_confirmed-post_notify_promoted_0
   * Pseudo action: redis-bundle_promoted_0
   * Resource action: redis monitor=20000 on redis-bundle-0
 Using the original execution date of: 2018-11-22 20:36:07Z

 Revised Cluster Status:
   * Node List:
     * Online: [ ra1 ra2 ra3 ]
     * GuestOnline: [ redis-bundle-0@ra1 redis-bundle-2@ra3 ]

   * Full List of Resources:
     * Container bundle set: redis-bundle [docker.io/tripleoqueens/centos-binary-redis:current-tripleo-rdo]:
       * redis-bundle-0 (ocf:heartbeat:redis): Promoted ra1
       * redis-bundle-1 (ocf:heartbeat:redis): Stopped
       * redis-bundle-2 (ocf:heartbeat:redis): Unpromoted ra3
diff --git a/cts/scheduler/summary/novell-239082.summary b/cts/scheduler/summary/novell-239082.summary
index 01af7656e9..051c0220e0 100644
--- a/cts/scheduler/summary/novell-239082.summary
+++ b/cts/scheduler/summary/novell-239082.summary
@@ -1,59 +1,59 @@
 Current cluster status:
   * Node List:
     * Online: [ xen-1 xen-2 ]

   * Full List of Resources:
     * fs_1 (ocf:heartbeat:Filesystem): Started xen-1
     * Clone Set: ms-drbd0 [drbd0] (promotable):
       * Promoted: [ xen-1 ]
       * Unpromoted: [ xen-2 ]

 Transition Summary:
-  * Move fs_1 ( xen-1 -> xen-2 )
+  * Move fs_1 ( xen-1 -> xen-2 )
   * Promote drbd0:0 ( Unpromoted -> Promoted xen-2 )
-  * Stop drbd0:1 ( Promoted xen-1 ) due to node availability
+  * Stop drbd0:1 ( Promoted xen-1 ) due to node availability

 Executing Cluster Transition:
   * Resource action: fs_1 stop on xen-1
   * Pseudo action: ms-drbd0_pre_notify_demote_0
   * Resource action: drbd0:0 notify on xen-2
   * Resource action: drbd0:1 notify on xen-1
   * Pseudo action: ms-drbd0_confirmed-pre_notify_demote_0
   * Pseudo action: ms-drbd0_demote_0
   * Resource action: drbd0:1 demote on xen-1
   * Pseudo action: ms-drbd0_demoted_0
   * Pseudo action: ms-drbd0_post_notify_demoted_0
   * Resource action: drbd0:0 notify on xen-2
   * Resource action: drbd0:1 notify on xen-1
   * Pseudo action: ms-drbd0_confirmed-post_notify_demoted_0
   * Pseudo action: ms-drbd0_pre_notify_stop_0
   * Resource action: drbd0:0 notify on xen-2
   * Resource action: drbd0:1 notify on xen-1
   * Pseudo action: ms-drbd0_confirmed-pre_notify_stop_0
   * Pseudo action: ms-drbd0_stop_0
   * Resource action: drbd0:1 stop on xen-1
   * Pseudo action: ms-drbd0_stopped_0
   * Cluster action: do_shutdown on xen-1
   * Pseudo action: ms-drbd0_post_notify_stopped_0
   * Resource action: drbd0:0 notify on xen-2
   * Pseudo action: ms-drbd0_confirmed-post_notify_stopped_0
   * Pseudo action: ms-drbd0_pre_notify_promote_0
   * Resource action: drbd0:0 notify on xen-2
   * Pseudo action: ms-drbd0_confirmed-pre_notify_promote_0
   * Pseudo action: ms-drbd0_promote_0
   * Resource action: drbd0:0 promote on xen-2
   * Pseudo action: ms-drbd0_promoted_0
   * Pseudo action: ms-drbd0_post_notify_promoted_0
   * Resource action: drbd0:0 notify on xen-2
   * Pseudo action: ms-drbd0_confirmed-post_notify_promoted_0
   * Resource action: fs_1 start on xen-2

 Revised Cluster Status:
   * Node List:
     * Online: [ xen-1 xen-2 ]

   * Full List of Resources:
     * fs_1 (ocf:heartbeat:Filesystem): Started xen-2
     * Clone Set: ms-drbd0 [drbd0] (promotable):
       * Promoted: [ xen-2 ]
       * Stopped: [ xen-1 ]
diff --git a/cts/scheduler/summary/on_fail_demote4.summary b/cts/scheduler/summary/on_fail_demote4.summary
index b7b1388e58..57eea35753 100644
--- a/cts/scheduler/summary/on_fail_demote4.summary
+++ b/cts/scheduler/summary/on_fail_demote4.summary
@@ -1,189 +1,189 @@
 Using the original execution date of: 2020-06-16 19:23:21Z
 Current cluster status:
   * Node List:
     * RemoteNode remote-rhel7-2: UNCLEAN (offline)
     * Node rhel7-4: UNCLEAN (offline)
     * Online: [ rhel7-1 rhel7-3 rhel7-5 ]
     * GuestOnline: [ lxc1@rhel7-3 stateful-bundle-1@rhel7-1 ]

   * Full List of Resources:
     * Fencing (stonith:fence_xvm): Started rhel7-4 (UNCLEAN)
     * Clone Set: rsc1-clone [rsc1] (promotable):
       * rsc1 (ocf:pacemaker:Stateful): Promoted rhel7-4 (UNCLEAN)
       * rsc1 (ocf:pacemaker:Stateful): Unpromoted remote-rhel7-2 (UNCLEAN)
       * Unpromoted: [ lxc1 rhel7-1 rhel7-3 rhel7-5 ]
     * Clone Set: rsc2-master [rsc2] (promotable):
       * rsc2 (ocf:pacemaker:Stateful): Unpromoted rhel7-4 (UNCLEAN)
       * rsc2 (ocf:pacemaker:Stateful): Promoted remote-rhel7-2 (UNCLEAN)
       * Unpromoted: [ lxc1 rhel7-1 rhel7-3 rhel7-5 ]
     * remote-rhel7-2 (ocf:pacemaker:remote): FAILED rhel7-1
     * container1 (ocf:heartbeat:VirtualDomain): Started rhel7-3
     * container2 (ocf:heartbeat:VirtualDomain): FAILED rhel7-3
     * Clone Set: lxc-ms-master [lxc-ms] (promotable):
       * Unpromoted: [ lxc1 ]
       * Stopped: [ remote-rhel7-2 rhel7-1 rhel7-3 rhel7-4 rhel7-5 ]
     * Container bundle set: stateful-bundle [pcmktest:http]:
       * stateful-bundle-0 (192.168.122.131) (ocf:pacemaker:Stateful): FAILED Promoted rhel7-5
       * stateful-bundle-1 (192.168.122.132) (ocf:pacemaker:Stateful): Unpromoted rhel7-1
       * stateful-bundle-2 (192.168.122.133) (ocf:pacemaker:Stateful): FAILED rhel7-4 (UNCLEAN)

 Transition Summary:
   * Fence (reboot) stateful-bundle-2 (resource: stateful-bundle-docker-2) 'guest is unclean'
   * Fence (reboot) stateful-bundle-0 (resource: stateful-bundle-docker-0) 'guest is unclean'
   * Fence (reboot) lxc2 (resource: container2) 'guest is unclean'
   * Fence (reboot) remote-rhel7-2 'remote connection is unrecoverable'
   * Fence (reboot) rhel7-4 'peer is no longer part of the cluster'
-  * Move Fencing ( rhel7-4 -> rhel7-5 )
-  * Stop rsc1:0 ( Promoted rhel7-4 ) due to node availability
-  * Promote rsc1:1 ( Unpromoted -> Promoted rhel7-3 )
-  * Stop rsc1:4 ( Unpromoted remote-rhel7-2 ) due to node availability
-  * Recover rsc1:5 ( Unpromoted lxc2 )
-  * Stop rsc2:0 ( Unpromoted rhel7-4 ) due to node availability
-  * Promote rsc2:1 ( Unpromoted -> Promoted rhel7-3 )
-  * Stop rsc2:4 ( Promoted remote-rhel7-2 ) due to node availability
-  * Recover rsc2:5 ( Unpromoted lxc2 )
-  * Recover remote-rhel7-2 ( rhel7-1 )
-  * Recover container2 ( rhel7-3 )
-  * Recover lxc-ms:0 ( Promoted lxc2 )
-  * Recover stateful-bundle-docker-0 ( rhel7-5 )
-  * Restart stateful-bundle-0 ( rhel7-5 ) due to required stateful-bundle-docker-0 start
-  * Recover bundled:0 ( Promoted stateful-bundle-0 )
-  * Move stateful-bundle-ip-192.168.122.133 ( rhel7-4 -> rhel7-3 )
-  * Recover stateful-bundle-docker-2 ( rhel7-4 -> rhel7-3 )
-  * Move stateful-bundle-2 ( rhel7-4 -> rhel7-3 )
-  * Recover bundled:2 ( Unpromoted stateful-bundle-2 )
-  * Restart lxc2 ( rhel7-3 ) due to required container2 start
+  * Move Fencing ( rhel7-4 -> rhel7-5 )
+  * Stop rsc1:0 ( Promoted rhel7-4 ) due to node availability
+  * Promote rsc1:1 ( Unpromoted -> Promoted rhel7-3 )
+  * Stop rsc1:4 ( Unpromoted remote-rhel7-2 ) due to node availability
+  * Recover rsc1:5 ( Unpromoted lxc2 )
+  * Stop rsc2:0 ( Unpromoted rhel7-4 ) due to node availability
+  * Promote rsc2:1 ( Unpromoted -> Promoted rhel7-3 )
+  * Stop rsc2:4 ( Promoted remote-rhel7-2 ) due to node availability
+  * Recover rsc2:5 ( Unpromoted lxc2 )
+  * Recover remote-rhel7-2 ( rhel7-1 )
+  * Recover container2 ( rhel7-3 )
+  * Recover lxc-ms:0 ( Promoted lxc2 )
+  * Recover stateful-bundle-docker-0 ( rhel7-5 )
+  * Restart stateful-bundle-0 ( rhel7-5 ) due to required stateful-bundle-docker-0 start
+  * Recover bundled:0 ( Promoted stateful-bundle-0 )
+  * Move stateful-bundle-ip-192.168.122.133 ( rhel7-4 -> rhel7-3 )
+  * Recover stateful-bundle-docker-2 ( rhel7-4 -> rhel7-3 )
+  * Move stateful-bundle-2 ( rhel7-4 -> rhel7-3 )
+  * Recover bundled:2 ( Unpromoted stateful-bundle-2 )
+  * Restart lxc2 ( rhel7-3 ) due to required container2 start

 Executing Cluster Transition:
   * Pseudo action: Fencing_stop_0
   * Resource action: rsc1 cancel=11000 on rhel7-3
   * Pseudo action: rsc1-clone_demote_0
   * Resource action: rsc2 cancel=11000 on rhel7-3
   * Pseudo action: rsc2-master_demote_0
   * Pseudo action: lxc-ms-master_demote_0
   * Resource action: stateful-bundle-0 stop on rhel7-5
   * Pseudo action: stateful-bundle-2_stop_0
   * Resource action: lxc2 stop on rhel7-3
   * Pseudo action: stateful-bundle_demote_0
   * Fencing remote-rhel7-2 (reboot)
   * Fencing rhel7-4 (reboot)
   * Pseudo action: rsc1_demote_0
   * Pseudo action: rsc1-clone_demoted_0
   * Pseudo action: rsc2_demote_0
   * Pseudo action: rsc2-master_demoted_0
   * Resource action: container2 stop on rhel7-3
   * Pseudo action: stateful-bundle-master_demote_0
   * Pseudo action: stonith-stateful-bundle-2-reboot on stateful-bundle-2
   * Pseudo action: stonith-lxc2-reboot on lxc2
   * Resource action: Fencing start on rhel7-5
   * Pseudo action: rsc1-clone_stop_0
   * Pseudo action: rsc2-master_stop_0
   * Pseudo action: lxc-ms_demote_0
   * Pseudo action: lxc-ms-master_demoted_0
   * Pseudo action: lxc-ms-master_stop_0
   * Pseudo action: bundled_demote_0
   * Pseudo action: stateful-bundle-master_demoted_0
   * Pseudo action: stateful-bundle_demoted_0
   * Pseudo action: stateful-bundle_stop_0
   * Resource action: Fencing monitor=120000 on rhel7-5
   * Pseudo action: rsc1_stop_0
   * Pseudo action: rsc1_stop_0
   * Pseudo action: rsc1_stop_0
   * Pseudo action: rsc1-clone_stopped_0
   * Pseudo action: rsc1-clone_start_0
   * Pseudo action: rsc2_stop_0
   * Pseudo action: rsc2_stop_0
   * Pseudo action: rsc2_stop_0
   * Pseudo action: rsc2-master_stopped_0
   * Pseudo action: rsc2-master_start_0
   * Resource action: remote-rhel7-2 stop on rhel7-1
   * Pseudo action: lxc-ms_stop_0
   * Pseudo action: lxc-ms-master_stopped_0
   * Pseudo action: lxc-ms-master_start_0
   * Resource action: stateful-bundle-docker-0 stop on rhel7-5
   * Pseudo action: stateful-bundle-docker-2_stop_0
   * Pseudo action: stonith-stateful-bundle-0-reboot on stateful-bundle-0
   * Resource action: remote-rhel7-2 start on rhel7-1
   * Resource action: remote-rhel7-2 monitor=60000 on rhel7-1
   * Resource action: container2 start on rhel7-3
   * Resource action: container2 monitor=20000 on rhel7-3
   * Pseudo action: stateful-bundle-master_stop_0
   * Pseudo action: stateful-bundle-ip-192.168.122.133_stop_0
   * Resource action: lxc2 start on rhel7-3
   * Resource action: lxc2 monitor=30000 on rhel7-3
   * Resource action: rsc1 start on lxc2
   * Pseudo action: rsc1-clone_running_0
   * Resource action: rsc2 start on lxc2
   * Pseudo action: rsc2-master_running_0
   * Resource action: lxc-ms start on lxc2
   * Pseudo action: lxc-ms-master_running_0
   * Pseudo action: bundled_stop_0
   * Resource action: stateful-bundle-ip-192.168.122.133 start on rhel7-3
   * Resource action: rsc1 monitor=11000 on lxc2
   * Pseudo action: rsc1-clone_promote_0
   * Resource action: rsc2 monitor=11000 on lxc2
   * Pseudo action: rsc2-master_promote_0
   * Pseudo action: lxc-ms-master_promote_0
   * Pseudo action: bundled_stop_0
   * Pseudo action: stateful-bundle-master_stopped_0
   * Resource action: stateful-bundle-ip-192.168.122.133 monitor=60000 on rhel7-3
   * Pseudo action: stateful-bundle_stopped_0
   * Pseudo action: stateful-bundle_start_0
   * Resource action: rsc1 promote on rhel7-3
   * Pseudo action: rsc1-clone_promoted_0
   * Resource action: rsc2 promote on rhel7-3
   * Pseudo action: rsc2-master_promoted_0
   * Resource action: lxc-ms promote on lxc2
   * Pseudo action: lxc-ms-master_promoted_0
   * Pseudo action: stateful-bundle-master_start_0
   * Resource action: stateful-bundle-docker-0 start on rhel7-5
   * Resource action: stateful-bundle-docker-0 monitor=60000 on rhel7-5
   * Resource action: stateful-bundle-0 start on rhel7-5
   * Resource action: stateful-bundle-0 monitor=30000 on rhel7-5
   * Resource action: stateful-bundle-docker-2 start on rhel7-3
   * Resource action: stateful-bundle-2 start on rhel7-3
   * Resource action: rsc1 monitor=10000 on rhel7-3
   * Resource action: rsc2 monitor=10000 on rhel7-3
   * Resource action: lxc-ms monitor=10000 on lxc2
   * Resource action: bundled start on stateful-bundle-0
   * Resource action: bundled start on stateful-bundle-2
   * Pseudo action: stateful-bundle-master_running_0
   * Resource action: stateful-bundle-docker-2 monitor=60000 on rhel7-3
   * Resource action: stateful-bundle-2 monitor=30000 on rhel7-3
   * Pseudo action: stateful-bundle_running_0
   * Resource action: bundled monitor=11000 on stateful-bundle-2
   * Pseudo action: stateful-bundle_promote_0
   * Pseudo action: stateful-bundle-master_promote_0
   * Resource action: bundled promote on stateful-bundle-0
   * Pseudo action: stateful-bundle-master_promoted_0
   * Pseudo action: stateful-bundle_promoted_0
   * Resource action: bundled monitor=10000 on stateful-bundle-0
 Using the original execution date of: 2020-06-16 19:23:21Z

 Revised Cluster Status:
   * Node List:
     * Online: [ rhel7-1 rhel7-3 rhel7-5 ]
     * OFFLINE: [ rhel7-4 ]
     * RemoteOnline: [ remote-rhel7-2 ]
     * GuestOnline: [ lxc1@rhel7-3 lxc2@rhel7-3 stateful-bundle-0@rhel7-5 stateful-bundle-1@rhel7-1 stateful-bundle-2@rhel7-3 ]

   * Full List of Resources:
     * Fencing (stonith:fence_xvm): Started rhel7-5
     * Clone Set: rsc1-clone [rsc1] (promotable):
       * Promoted: [ rhel7-3 ]
       * Unpromoted: [ lxc1 lxc2 rhel7-1 rhel7-5 ]
       * Stopped: [ remote-rhel7-2 rhel7-4 ]
     * Clone Set: rsc2-master [rsc2] (promotable):
       * Promoted: [ rhel7-3 ]
       * Unpromoted: [ lxc1 lxc2 rhel7-1 rhel7-5 ]
       * Stopped: [ remote-rhel7-2 rhel7-4 ]
     * remote-rhel7-2 (ocf:pacemaker:remote): Started rhel7-1
     * container1 (ocf:heartbeat:VirtualDomain): Started rhel7-3
     * container2 (ocf:heartbeat:VirtualDomain): Started rhel7-3
     * Clone Set: lxc-ms-master [lxc-ms] (promotable):
       * Promoted: [ lxc2 ]
       * Unpromoted: [ lxc1 ]
     * Container bundle set: stateful-bundle [pcmktest:http]:
       * stateful-bundle-0 (192.168.122.131) (ocf:pacemaker:Stateful): Promoted rhel7-5
       * stateful-bundle-1 (192.168.122.132) (ocf:pacemaker:Stateful): Unpromoted rhel7-1
       * stateful-bundle-2 (192.168.122.133) (ocf:pacemaker:Stateful): Unpromoted rhel7-3
diff --git a/cts/scheduler/summary/order_constraint_stops_promoted.summary b/cts/scheduler/summary/order_constraint_stops_promoted.summary
index d0a3fc2f54..8535e36c7e 100644
--- a/cts/scheduler/summary/order_constraint_stops_promoted.summary
+++ b/cts/scheduler/summary/order_constraint_stops_promoted.summary
@@ -1,44 +1,44 @@
 1 of 2 resource instances DISABLED and 0 BLOCKED from further action due to failure

 Current cluster status:
   * Node List:
     * Online: [ fc16-builder fc16-builder2 ]

   * Full List of Resources:
     * Clone Set: MASTER_RSC_A [NATIVE_RSC_A] (promotable):
       * Promoted: [ fc16-builder ]
     * NATIVE_RSC_B (ocf:pacemaker:Dummy): Started fc16-builder2 (disabled)

 Transition Summary:
   * Stop NATIVE_RSC_A:0 ( Promoted fc16-builder ) due to required NATIVE_RSC_B start
-  * Stop NATIVE_RSC_B ( fc16-builder2 ) due to node availability
+  * Stop NATIVE_RSC_B ( fc16-builder2 ) due to node availability

 Executing Cluster Transition:
   * Pseudo action: MASTER_RSC_A_pre_notify_demote_0
   * Resource action: NATIVE_RSC_A:0 notify on fc16-builder
   * Pseudo action: MASTER_RSC_A_confirmed-pre_notify_demote_0
   * Pseudo action: MASTER_RSC_A_demote_0
   * Resource action: NATIVE_RSC_A:0 demote on fc16-builder
   * Pseudo action: MASTER_RSC_A_demoted_0
   * Pseudo action: MASTER_RSC_A_post_notify_demoted_0
   * Resource action: NATIVE_RSC_A:0 notify on fc16-builder
   * Pseudo action: MASTER_RSC_A_confirmed-post_notify_demoted_0
   * Pseudo action: MASTER_RSC_A_pre_notify_stop_0
   * Resource action: NATIVE_RSC_A:0 notify on fc16-builder
   * Pseudo action: MASTER_RSC_A_confirmed-pre_notify_stop_0
   * Pseudo action: MASTER_RSC_A_stop_0
   * Resource action: NATIVE_RSC_A:0 stop on fc16-builder
   * Resource action: NATIVE_RSC_A:0 delete on fc16-builder2
   * Pseudo action: MASTER_RSC_A_stopped_0
   * Pseudo action: MASTER_RSC_A_post_notify_stopped_0
   * Pseudo action: MASTER_RSC_A_confirmed-post_notify_stopped_0
   * Resource action: NATIVE_RSC_B stop on fc16-builder2

 Revised Cluster Status:
   * Node List:
     * Online: [ fc16-builder fc16-builder2 ]

   * Full List of Resources:
     * Clone Set: MASTER_RSC_A [NATIVE_RSC_A] (promotable):
       * Stopped: [ fc16-builder fc16-builder2 ]
     * NATIVE_RSC_B (ocf:pacemaker:Dummy): Stopped (disabled)
diff --git a/cts/scheduler/summary/order_constraint_stops_unpromoted.summary b/cts/scheduler/summary/order_constraint_stops_unpromoted.summary
index 000500512d..23efd1fe67 100644
--- a/cts/scheduler/summary/order_constraint_stops_unpromoted.summary
+++ b/cts/scheduler/summary/order_constraint_stops_unpromoted.summary
@@ -1,36 +1,36 @@
 1 of 2 resource instances DISABLED and 0 BLOCKED from further action due to failure

 Current cluster status:
   * Node List:
     * Online: [ fc16-builder ]
     * OFFLINE: [ fc16-builder2 ]

   * Full List of Resources:
     * Clone Set: MASTER_RSC_A [NATIVE_RSC_A] (promotable):
       * Unpromoted: [ fc16-builder ]
     * NATIVE_RSC_B (ocf:pacemaker:Dummy): Started fc16-builder (disabled)

 Transition Summary:
   * Stop NATIVE_RSC_A:0 ( Unpromoted fc16-builder ) due to required NATIVE_RSC_B start
-  * Stop NATIVE_RSC_B ( fc16-builder ) due to node availability
+  * Stop NATIVE_RSC_B ( fc16-builder ) due to node availability

 Executing Cluster Transition:
   * Pseudo action: MASTER_RSC_A_pre_notify_stop_0
   * Resource action: NATIVE_RSC_A:0 notify on fc16-builder
   * Pseudo action: MASTER_RSC_A_confirmed-pre_notify_stop_0
   * Pseudo action: MASTER_RSC_A_stop_0
   * Resource action: NATIVE_RSC_A:0 stop on fc16-builder
   * Pseudo action: MASTER_RSC_A_stopped_0
   * Pseudo action: MASTER_RSC_A_post_notify_stopped_0
   * Pseudo action: MASTER_RSC_A_confirmed-post_notify_stopped_0
   * Resource action: NATIVE_RSC_B stop on fc16-builder

 Revised Cluster Status:
   * Node List:
     * Online: [ fc16-builder ]
     * OFFLINE: [ fc16-builder2 ]

   * Full List of Resources:
     * Clone Set: MASTER_RSC_A [NATIVE_RSC_A] (promotable):
       * Stopped: [ fc16-builder fc16-builder2 ]
     * NATIVE_RSC_B (ocf:pacemaker:Dummy): Stopped (disabled)
diff --git a/cts/scheduler/summary/probe-2.summary b/cts/scheduler/summary/probe-2.summary
index 3523891d30..f73d561246 100644
--- a/cts/scheduler/summary/probe-2.summary
+++ b/cts/scheduler/summary/probe-2.summary
@@ -1,163 +1,163 @@
 Current cluster status:
   * Node List:
     * Node wc02: standby (with active resources)
     * Online: [ wc01 ]

   * Full List of Resources:
     * Resource Group: group_www_data:
       * fs_www_data (ocf:heartbeat:Filesystem): Started wc01
       * nfs-kernel-server (lsb:nfs-kernel-server): Started wc01
       * intip_nfs (ocf:heartbeat:IPaddr2): Started wc01
     * Clone Set: ms_drbd_mysql [drbd_mysql] (promotable):
       * Promoted: [ wc02 ]
       * Unpromoted: [ wc01 ]
     * Resource Group: group_mysql:
       * fs_mysql (ocf:heartbeat:Filesystem): Started wc02
       * intip_sql (ocf:heartbeat:IPaddr2): Started wc02
       * mysql-server (ocf:heartbeat:mysql): Started wc02
     * Clone Set: ms_drbd_www [drbd_www] (promotable):
       * Promoted: [ wc01 ]
       * Unpromoted: [ wc02 ]
     * Clone Set: clone_nfs-common [group_nfs-common]:
       * Started: [ wc01 wc02 ]
     * Clone Set: clone_mysql-proxy [group_mysql-proxy]:
       * Started: [ wc01 wc02 ]
     * Clone Set: clone_webservice [group_webservice]:
       * Started: [ wc01 wc02 ]
     * Resource Group: group_ftpd:
       * extip_ftp (ocf:heartbeat:IPaddr2): Started wc01
       * pure-ftpd (ocf:heartbeat:Pure-FTPd): Started wc01
     * Clone Set: DoFencing [stonith_rackpdu] (unique):
       * stonith_rackpdu:0 (stonith:external/rackpdu): Started wc01
       * stonith_rackpdu:1 (stonith:external/rackpdu): Started wc02

 Transition Summary:
   * Promote drbd_mysql:0 ( Unpromoted -> Promoted wc01 )
-  * Stop drbd_mysql:1 ( Promoted wc02 ) due to node availability
-  * Move fs_mysql ( wc02 -> wc01 )
-  * Move intip_sql ( wc02 -> wc01 )
-  * Move mysql-server ( wc02 -> wc01 )
-  * Stop drbd_www:1 ( Unpromoted wc02 ) due to node availability
-  * Stop nfs-common:1 ( wc02 ) due to node availability
-  * Stop mysql-proxy:1 ( wc02 ) due to node availability
-  * Stop fs_www:1 ( wc02 ) due to node availability
-  * Stop apache2:1 ( wc02 ) due to node availability
-  * Restart stonith_rackpdu:0 ( wc01 )
-  * Stop stonith_rackpdu:1 ( wc02 ) due to node availability
+  * Stop drbd_mysql:1 ( Promoted wc02 ) due to node availability
+  * Move fs_mysql ( wc02 -> wc01 )
+  * Move intip_sql ( wc02 -> wc01 )
+  * Move mysql-server ( wc02 -> wc01 )
+  * Stop drbd_www:1 ( Unpromoted wc02 ) due to node availability
+  * Stop nfs-common:1 ( wc02 ) due to node availability
+  * Stop mysql-proxy:1 ( wc02 ) due to node availability
+  * Stop fs_www:1 ( wc02 ) due to node availability
+  * Stop apache2:1 ( wc02 ) due to node availability
+  * Restart stonith_rackpdu:0 ( wc01 )
+  * Stop stonith_rackpdu:1 ( wc02 ) due to node availability

 Executing Cluster Transition:
   * Resource action: drbd_mysql:0 cancel=10000 on wc01
   * Pseudo action: ms_drbd_mysql_pre_notify_demote_0
   * Pseudo action: group_mysql_stop_0
   * Resource action: mysql-server stop on wc02
   * Pseudo action: ms_drbd_www_pre_notify_stop_0
   * Pseudo action: clone_mysql-proxy_stop_0
   * Pseudo action: clone_webservice_stop_0
   * Pseudo action: DoFencing_stop_0
   * Resource action: drbd_mysql:0 notify on wc01
   * Resource action: drbd_mysql:1 notify on wc02
   * Pseudo action: ms_drbd_mysql_confirmed-pre_notify_demote_0
   * Resource action: intip_sql stop on wc02
   * Resource action: drbd_www:0 notify on wc01
   * Resource action: drbd_www:1 notify on wc02
   * Pseudo action: ms_drbd_www_confirmed-pre_notify_stop_0
   * Pseudo action: ms_drbd_www_stop_0
   * Pseudo action: group_mysql-proxy:1_stop_0
   * Resource action: mysql-proxy:1 stop on wc02
   * Pseudo action: group_webservice:1_stop_0
   * Resource action: apache2:1 stop on wc02
   * Resource action: stonith_rackpdu:0 stop on wc01
   * Resource action: stonith_rackpdu:1 stop on wc02
   * Pseudo action: DoFencing_stopped_0
   * Pseudo action: DoFencing_start_0
   * Resource action: fs_mysql stop on wc02
   * Resource action: drbd_www:1 stop on wc02
   * Pseudo action: ms_drbd_www_stopped_0
   * Pseudo action: group_mysql-proxy:1_stopped_0
   * Pseudo action: clone_mysql-proxy_stopped_0
   * Resource action: fs_www:1 stop on wc02
   * Resource action: stonith_rackpdu:0 start on wc01
   * Pseudo action: DoFencing_running_0
   * Pseudo action: group_mysql_stopped_0
   * Pseudo action: ms_drbd_www_post_notify_stopped_0
   * Pseudo action: group_webservice:1_stopped_0
   * Pseudo action: clone_webservice_stopped_0
   * Resource action: stonith_rackpdu:0 monitor=5000 on wc01
   * Pseudo action: ms_drbd_mysql_demote_0
   * Resource action: drbd_www:0 notify on wc01
   * Pseudo action: ms_drbd_www_confirmed-post_notify_stopped_0
   * Pseudo action: clone_nfs-common_stop_0
   * Resource action: drbd_mysql:1 demote on wc02
   * Pseudo action: ms_drbd_mysql_demoted_0
   * Pseudo action: group_nfs-common:1_stop_0
   * Resource action: nfs-common:1 stop on wc02
   * Pseudo action: ms_drbd_mysql_post_notify_demoted_0
   * Pseudo action: group_nfs-common:1_stopped_0
   * Pseudo action: clone_nfs-common_stopped_0
   * Resource action: drbd_mysql:0 notify on wc01
   * Resource action: drbd_mysql:1 notify on wc02
   * Pseudo action: ms_drbd_mysql_confirmed-post_notify_demoted_0
   * Pseudo action: ms_drbd_mysql_pre_notify_stop_0
   * Resource action: drbd_mysql:0 notify on wc01
   * Resource action: drbd_mysql:1 notify on wc02
   * Pseudo action: ms_drbd_mysql_confirmed-pre_notify_stop_0
   * Pseudo action: ms_drbd_mysql_stop_0
   * Resource action: drbd_mysql:1 stop on wc02
   * Pseudo action: ms_drbd_mysql_stopped_0
   * Pseudo action: ms_drbd_mysql_post_notify_stopped_0
   * Resource action: drbd_mysql:0 notify on wc01
   * Pseudo action: ms_drbd_mysql_confirmed-post_notify_stopped_0
   * Pseudo action: ms_drbd_mysql_pre_notify_promote_0
   * Resource action: drbd_mysql:0 notify on wc01
   * Pseudo action: ms_drbd_mysql_confirmed-pre_notify_promote_0
   * Pseudo action: ms_drbd_mysql_promote_0
   * Resource action: drbd_mysql:0 promote on wc01
   * Pseudo action: ms_drbd_mysql_promoted_0
   * Pseudo action: ms_drbd_mysql_post_notify_promoted_0
   * Resource action: drbd_mysql:0 notify on wc01
   * Pseudo action: ms_drbd_mysql_confirmed-post_notify_promoted_0
   * Pseudo action: group_mysql_start_0
   * Resource action: fs_mysql start on wc01
   * Resource action: intip_sql start on wc01
   * Resource action: mysql-server start on wc01
   * Resource action: drbd_mysql:0 monitor=5000 on wc01
   * Pseudo action: group_mysql_running_0
   * Resource action: fs_mysql monitor=30000 on wc01
   * Resource action: intip_sql monitor=30000 on wc01
   * Resource action: mysql-server monitor=30000 on wc01

 Revised Cluster Status:
   * Node List:
     * Node wc02: standby
     * Online: [ wc01 ]

   * Full List of Resources:
     * Resource Group: group_www_data:
       * fs_www_data (ocf:heartbeat:Filesystem): Started wc01
       * nfs-kernel-server (lsb:nfs-kernel-server): Started wc01
       * intip_nfs (ocf:heartbeat:IPaddr2): Started wc01
     * Clone Set: ms_drbd_mysql [drbd_mysql] (promotable):
       * Promoted: [ wc01 ]
       * Stopped: [ wc02 ]
     * Resource Group: group_mysql:
       * fs_mysql (ocf:heartbeat:Filesystem): Started wc01
       * intip_sql (ocf:heartbeat:IPaddr2): Started wc01
       * mysql-server (ocf:heartbeat:mysql): Started wc01
     * Clone Set: ms_drbd_www [drbd_www] (promotable):
       * Promoted: [ wc01 ]
       * Stopped: [ wc02 ]
     * Clone Set: clone_nfs-common [group_nfs-common]:
       * Started: [ wc01 ]
       * Stopped: [ wc02 ]
     * Clone Set: clone_mysql-proxy [group_mysql-proxy]:
       * Started: [ wc01 ]
       * Stopped: [ wc02 ]
     * Clone Set: clone_webservice [group_webservice]:
       * Started: [ wc01 ]
       * Stopped: [ wc02 ]
     * Resource Group: group_ftpd:
       * extip_ftp (ocf:heartbeat:IPaddr2): Started wc01
       * pure-ftpd (ocf:heartbeat:Pure-FTPd): Started wc01
     * Clone Set: DoFencing [stonith_rackpdu] (unique):
       * stonith_rackpdu:0 (stonith:external/rackpdu): Started wc01
       * stonith_rackpdu:1 (stonith:external/rackpdu): Stopped
diff --git a/cts/scheduler/summary/promoted-1.summary b/cts/scheduler/summary/promoted-1.summary
index 08100f3e36..839de37f1b 100644
--- a/cts/scheduler/summary/promoted-1.summary
+++ b/cts/scheduler/summary/promoted-1.summary
@@ -1,50 +1,50 @@
 Current cluster status:
   * Node List:
     * Online: [ node1 node2 ]

   * Full List of Resources:
     * Clone Set: rsc1 [child_rsc1] (promotable, unique):
       * child_rsc1:0 (ocf:heartbeat:apache): Stopped
       * child_rsc1:1 (ocf:heartbeat:apache): Stopped
       * child_rsc1:2 (ocf:heartbeat:apache): Stopped
       * child_rsc1:3 (ocf:heartbeat:apache): Stopped
       * child_rsc1:4 (ocf:heartbeat:apache): Stopped

 Transition Summary:
-  * Start child_rsc1:0 ( node1 )
+  * Start child_rsc1:0 ( node1 )
   * Promote child_rsc1:1 ( Stopped -> Promoted node2 )
-  * Start child_rsc1:2 ( node1 )
-  * Start child_rsc1:3 ( node2 )
+  * Start child_rsc1:2 ( node1 )
+  * Start child_rsc1:3 ( node2 )

 Executing Cluster Transition:
   * Resource action: child_rsc1:0 monitor on node2
   * Resource action: child_rsc1:0 monitor on node1
   * Resource action: child_rsc1:1 monitor on node2
   * Resource action: child_rsc1:1 monitor on node1
   * Resource action: child_rsc1:2 monitor on node2
   * Resource action: child_rsc1:2 monitor on node1
   * Resource action: child_rsc1:3 monitor on node2
   * Resource action: child_rsc1:3 monitor on node1
   * Resource action: child_rsc1:4 monitor on node2
   * Resource action: child_rsc1:4 monitor on node1
   * Pseudo action: rsc1_start_0
   * Resource action: child_rsc1:0 start on node1
   * Resource action: child_rsc1:1 start on node2
   * Resource action: child_rsc1:2 start on node1
   * Resource action: child_rsc1:3 start on node2
   * Pseudo action: rsc1_running_0
   * Pseudo action: rsc1_promote_0
   * Resource action: child_rsc1:1 promote on node2
   * Pseudo action: rsc1_promoted_0

 Revised Cluster Status:
   * Node List:
     * Online: [ node1 node2 ]

   * Full List of Resources:
     * Clone Set: rsc1 [child_rsc1] (promotable, unique):
       * child_rsc1:0 (ocf:heartbeat:apache): Unpromoted node1
       * child_rsc1:1 (ocf:heartbeat:apache): Promoted node2
       * child_rsc1:2 (ocf:heartbeat:apache): Unpromoted node1
       * child_rsc1:3 (ocf:heartbeat:apache): Unpromoted node2
       * child_rsc1:4 (ocf:heartbeat:apache): Stopped
diff --git a/cts/scheduler/summary/promoted-10.summary b/cts/scheduler/summary/promoted-10.summary
index c35c61c793..7efbce92b6 100644
--- a/cts/scheduler/summary/promoted-10.summary
+++ b/cts/scheduler/summary/promoted-10.summary
@@ -1,75 +1,75 @@
 Current cluster status:
   * Node List:
     * Online: [ node1 node2 ]

   * Full List of Resources:
     * Clone Set: rsc1 [child_rsc1] (promotable, unique):
       * child_rsc1:0 (ocf:heartbeat:apache): Stopped
       * child_rsc1:1 (ocf:heartbeat:apache): Stopped
       * child_rsc1:2 (ocf:heartbeat:apache): Stopped
       * child_rsc1:3 (ocf:heartbeat:apache): Stopped
       * child_rsc1:4 (ocf:heartbeat:apache): Stopped

 Transition Summary:
   * Promote child_rsc1:0 ( Stopped -> Promoted node1 )
-  * Start child_rsc1:1 ( node2 )
-  * Start child_rsc1:2 ( node1 )
+  * Start child_rsc1:1 ( node2 )
+  * Start child_rsc1:2 ( node1 )
   * Promote child_rsc1:3 ( Stopped -> Promoted node2 )

 Executing Cluster Transition:
   * Resource action: child_rsc1:0 monitor on node2
   * Resource action: child_rsc1:0 monitor on node1
   * Resource action: child_rsc1:1 monitor on node2
   * Resource action: child_rsc1:1 monitor on node1
   * Resource action: child_rsc1:2 monitor on node2
   * Resource action: child_rsc1:2 monitor on node1
   * Resource action: child_rsc1:3 monitor on node2
   * Resource action: child_rsc1:3 monitor on node1
   * Resource action: child_rsc1:4 monitor on node2
   * Resource action: child_rsc1:4 monitor on node1
   * Pseudo action: rsc1_pre_notify_start_0
   * Pseudo action: rsc1_confirmed-pre_notify_start_0
   * Pseudo action: rsc1_start_0
   * Resource action: child_rsc1:0 start on node1
   * Resource action: child_rsc1:1 start on node2
   * Resource action: child_rsc1:2 start on node1
   * Resource action: child_rsc1:3 start on node2
   * Pseudo action: rsc1_running_0
   * Pseudo action: rsc1_post_notify_running_0
   * Resource action: child_rsc1:0 notify on node1
   * Resource action: child_rsc1:1 notify on node2
   * Resource action: child_rsc1:2 notify on node1
   * Resource action: child_rsc1:3 notify on node2
   * Pseudo action: rsc1_confirmed-post_notify_running_0
   * Pseudo action: rsc1_pre_notify_promote_0
   * Resource action: child_rsc1:0 notify on node1
   * Resource action: child_rsc1:1 notify on node2
   * Resource action: child_rsc1:2 notify on node1
   * Resource action: child_rsc1:3 notify on node2
   * Pseudo action: rsc1_confirmed-pre_notify_promote_0
   * Pseudo action: rsc1_promote_0
   * Resource action: child_rsc1:0 promote on node1
   * Resource action: child_rsc1:3 promote on node2
   * Pseudo action: rsc1_promoted_0
   * Pseudo action: rsc1_post_notify_promoted_0
   * Resource action: child_rsc1:0 notify on node1
   * Resource action: child_rsc1:1 notify on node2
   * Resource action: child_rsc1:2 notify on node1
   * Resource action: child_rsc1:3 notify on node2
   * Pseudo action:
rsc1_confirmed-post_notify_promoted_0 * Resource action: child_rsc1:0 monitor=11000 on node1 * Resource action: child_rsc1:1 monitor=1000 on node2 * Resource action: child_rsc1:2 monitor=1000 on node1 * Resource action: child_rsc1:3 monitor=11000 on node2 Revised Cluster Status: * Node List: * Online: [ node1 node2 ] * Full List of Resources: * Clone Set: rsc1 [child_rsc1] (promotable, unique): * child_rsc1:0 (ocf:heartbeat:apache): Promoted node1 * child_rsc1:1 (ocf:heartbeat:apache): Unpromoted node2 * child_rsc1:2 (ocf:heartbeat:apache): Unpromoted node1 * child_rsc1:3 (ocf:heartbeat:apache): Promoted node2 * child_rsc1:4 (ocf:heartbeat:apache): Stopped diff --git a/cts/scheduler/summary/promoted-11.summary b/cts/scheduler/summary/promoted-11.summary index 47732fb9da..6999bb1af0 100644 --- a/cts/scheduler/summary/promoted-11.summary +++ b/cts/scheduler/summary/promoted-11.summary @@ -1,40 +1,40 @@ Current cluster status: * Node List: * Online: [ node1 node2 ] * Full List of Resources: * simple-rsc (ocf:heartbeat:apache): Stopped * Clone Set: rsc1 [child_rsc1] (promotable, unique): * child_rsc1:0 (ocf:heartbeat:apache): Stopped * child_rsc1:1 (ocf:heartbeat:apache): Stopped Transition Summary: - * Start simple-rsc ( node2 ) - * Start child_rsc1:0 ( node1 ) + * Start simple-rsc ( node2 ) + * Start child_rsc1:0 ( node1 ) * Promote child_rsc1:1 ( Stopped -> Promoted node2 ) Executing Cluster Transition: * Resource action: simple-rsc monitor on node2 * Resource action: simple-rsc monitor on node1 * Resource action: child_rsc1:0 monitor on node2 * Resource action: child_rsc1:0 monitor on node1 * Resource action: child_rsc1:1 monitor on node2 * Resource action: child_rsc1:1 monitor on node1 * Pseudo action: rsc1_start_0 * Resource action: simple-rsc start on node2 * Resource action: child_rsc1:0 start on node1 * Resource action: child_rsc1:1 start on node2 * Pseudo action: rsc1_running_0 * Pseudo action: rsc1_promote_0 * Resource action: child_rsc1:1 promote on node2 * 
Pseudo action: rsc1_promoted_0 Revised Cluster Status: * Node List: * Online: [ node1 node2 ] * Full List of Resources: * simple-rsc (ocf:heartbeat:apache): Started node2 * Clone Set: rsc1 [child_rsc1] (promotable, unique): * child_rsc1:0 (ocf:heartbeat:apache): Unpromoted node1 * child_rsc1:1 (ocf:heartbeat:apache): Promoted node2 diff --git a/cts/scheduler/summary/promoted-13.summary b/cts/scheduler/summary/promoted-13.summary index 67a95cad79..5f977c8edb 100644 --- a/cts/scheduler/summary/promoted-13.summary +++ b/cts/scheduler/summary/promoted-13.summary @@ -1,62 +1,62 @@ Current cluster status: * Node List: * Online: [ frigg odin ] * Full List of Resources: * Clone Set: ms_drbd [drbd0] (promotable): * Promoted: [ frigg ] * Unpromoted: [ odin ] * Resource Group: group: * IPaddr0 (ocf:heartbeat:IPaddr): Stopped * MailTo (ocf:heartbeat:MailTo): Stopped Transition Summary: * Promote drbd0:0 ( Unpromoted -> Promoted odin ) * Demote drbd0:1 ( Promoted -> Unpromoted frigg ) - * Start IPaddr0 ( odin ) - * Start MailTo ( odin ) + * Start IPaddr0 ( odin ) + * Start MailTo ( odin ) Executing Cluster Transition: * Resource action: drbd0:1 cancel=12000 on odin * Resource action: drbd0:0 cancel=10000 on frigg * Pseudo action: ms_drbd_pre_notify_demote_0 * Resource action: drbd0:1 notify on odin * Resource action: drbd0:0 notify on frigg * Pseudo action: ms_drbd_confirmed-pre_notify_demote_0 * Pseudo action: ms_drbd_demote_0 * Resource action: drbd0:0 demote on frigg * Pseudo action: ms_drbd_demoted_0 * Pseudo action: ms_drbd_post_notify_demoted_0 * Resource action: drbd0:1 notify on odin * Resource action: drbd0:0 notify on frigg * Pseudo action: ms_drbd_confirmed-post_notify_demoted_0 * Pseudo action: ms_drbd_pre_notify_promote_0 * Resource action: drbd0:1 notify on odin * Resource action: drbd0:0 notify on frigg * Pseudo action: ms_drbd_confirmed-pre_notify_promote_0 * Pseudo action: ms_drbd_promote_0 * Resource action: drbd0:1 promote on odin * Pseudo action: 
ms_drbd_promoted_0 * Pseudo action: ms_drbd_post_notify_promoted_0 * Resource action: drbd0:1 notify on odin * Resource action: drbd0:0 notify on frigg * Pseudo action: ms_drbd_confirmed-post_notify_promoted_0 * Pseudo action: group_start_0 * Resource action: IPaddr0 start on odin * Resource action: MailTo start on odin * Resource action: drbd0:1 monitor=10000 on odin * Resource action: drbd0:0 monitor=12000 on frigg * Pseudo action: group_running_0 * Resource action: IPaddr0 monitor=5000 on odin Revised Cluster Status: * Node List: * Online: [ frigg odin ] * Full List of Resources: * Clone Set: ms_drbd [drbd0] (promotable): * Promoted: [ odin ] * Unpromoted: [ frigg ] * Resource Group: group: * IPaddr0 (ocf:heartbeat:IPaddr): Started odin * MailTo (ocf:heartbeat:MailTo): Started odin diff --git a/cts/scheduler/summary/promoted-2.summary b/cts/scheduler/summary/promoted-2.summary index 9adf43ef1d..58e3e2ec82 100644 --- a/cts/scheduler/summary/promoted-2.summary +++ b/cts/scheduler/summary/promoted-2.summary @@ -1,71 +1,71 @@ Current cluster status: * Node List: * Online: [ node1 node2 ] * Full List of Resources: * Clone Set: rsc1 [child_rsc1] (promotable, unique): * child_rsc1:0 (ocf:heartbeat:apache): Stopped * child_rsc1:1 (ocf:heartbeat:apache): Stopped * child_rsc1:2 (ocf:heartbeat:apache): Stopped * child_rsc1:3 (ocf:heartbeat:apache): Stopped * child_rsc1:4 (ocf:heartbeat:apache): Stopped Transition Summary: * Promote child_rsc1:0 ( Stopped -> Promoted node1 ) - * Start child_rsc1:1 ( node2 ) - * Start child_rsc1:2 ( node1 ) + * Start child_rsc1:1 ( node2 ) + * Start child_rsc1:2 ( node1 ) * Promote child_rsc1:3 ( Stopped -> Promoted node2 ) Executing Cluster Transition: * Resource action: child_rsc1:0 monitor on node2 * Resource action: child_rsc1:0 monitor on node1 * Resource action: child_rsc1:1 monitor on node2 * Resource action: child_rsc1:1 monitor on node1 * Resource action: child_rsc1:2 monitor on node2 * Resource action: child_rsc1:2 monitor on node1 
* Resource action: child_rsc1:3 monitor on node2 * Resource action: child_rsc1:3 monitor on node1 * Resource action: child_rsc1:4 monitor on node2 * Resource action: child_rsc1:4 monitor on node1 * Pseudo action: rsc1_pre_notify_start_0 * Pseudo action: rsc1_confirmed-pre_notify_start_0 * Pseudo action: rsc1_start_0 * Resource action: child_rsc1:0 start on node1 * Resource action: child_rsc1:1 start on node2 * Resource action: child_rsc1:2 start on node1 * Resource action: child_rsc1:3 start on node2 * Pseudo action: rsc1_running_0 * Pseudo action: rsc1_post_notify_running_0 * Resource action: child_rsc1:0 notify on node1 * Resource action: child_rsc1:1 notify on node2 * Resource action: child_rsc1:2 notify on node1 * Resource action: child_rsc1:3 notify on node2 * Pseudo action: rsc1_confirmed-post_notify_running_0 * Pseudo action: rsc1_pre_notify_promote_0 * Resource action: child_rsc1:0 notify on node1 * Resource action: child_rsc1:1 notify on node2 * Resource action: child_rsc1:2 notify on node1 * Resource action: child_rsc1:3 notify on node2 * Pseudo action: rsc1_confirmed-pre_notify_promote_0 * Pseudo action: rsc1_promote_0 * Resource action: child_rsc1:0 promote on node1 * Resource action: child_rsc1:3 promote on node2 * Pseudo action: rsc1_promoted_0 * Pseudo action: rsc1_post_notify_promoted_0 * Resource action: child_rsc1:0 notify on node1 * Resource action: child_rsc1:1 notify on node2 * Resource action: child_rsc1:2 notify on node1 * Resource action: child_rsc1:3 notify on node2 * Pseudo action: rsc1_confirmed-post_notify_promoted_0 Revised Cluster Status: * Node List: * Online: [ node1 node2 ] * Full List of Resources: * Clone Set: rsc1 [child_rsc1] (promotable, unique): * child_rsc1:0 (ocf:heartbeat:apache): Promoted node1 * child_rsc1:1 (ocf:heartbeat:apache): Unpromoted node2 * child_rsc1:2 (ocf:heartbeat:apache): Unpromoted node1 * child_rsc1:3 (ocf:heartbeat:apache): Promoted node2 * child_rsc1:4 (ocf:heartbeat:apache): Stopped diff --git 
a/cts/scheduler/summary/promoted-3.summary b/cts/scheduler/summary/promoted-3.summary index 08100f3e36..839de37f1b 100644 --- a/cts/scheduler/summary/promoted-3.summary +++ b/cts/scheduler/summary/promoted-3.summary @@ -1,50 +1,50 @@ Current cluster status: * Node List: * Online: [ node1 node2 ] * Full List of Resources: * Clone Set: rsc1 [child_rsc1] (promotable, unique): * child_rsc1:0 (ocf:heartbeat:apache): Stopped * child_rsc1:1 (ocf:heartbeat:apache): Stopped * child_rsc1:2 (ocf:heartbeat:apache): Stopped * child_rsc1:3 (ocf:heartbeat:apache): Stopped * child_rsc1:4 (ocf:heartbeat:apache): Stopped Transition Summary: - * Start child_rsc1:0 ( node1 ) + * Start child_rsc1:0 ( node1 ) * Promote child_rsc1:1 ( Stopped -> Promoted node2 ) - * Start child_rsc1:2 ( node1 ) - * Start child_rsc1:3 ( node2 ) + * Start child_rsc1:2 ( node1 ) + * Start child_rsc1:3 ( node2 ) Executing Cluster Transition: * Resource action: child_rsc1:0 monitor on node2 * Resource action: child_rsc1:0 monitor on node1 * Resource action: child_rsc1:1 monitor on node2 * Resource action: child_rsc1:1 monitor on node1 * Resource action: child_rsc1:2 monitor on node2 * Resource action: child_rsc1:2 monitor on node1 * Resource action: child_rsc1:3 monitor on node2 * Resource action: child_rsc1:3 monitor on node1 * Resource action: child_rsc1:4 monitor on node2 * Resource action: child_rsc1:4 monitor on node1 * Pseudo action: rsc1_start_0 * Resource action: child_rsc1:0 start on node1 * Resource action: child_rsc1:1 start on node2 * Resource action: child_rsc1:2 start on node1 * Resource action: child_rsc1:3 start on node2 * Pseudo action: rsc1_running_0 * Pseudo action: rsc1_promote_0 * Resource action: child_rsc1:1 promote on node2 * Pseudo action: rsc1_promoted_0 Revised Cluster Status: * Node List: * Online: [ node1 node2 ] * Full List of Resources: * Clone Set: rsc1 [child_rsc1] (promotable, unique): * child_rsc1:0 (ocf:heartbeat:apache): Unpromoted node1 * child_rsc1:1 
(ocf:heartbeat:apache): Promoted node2 * child_rsc1:2 (ocf:heartbeat:apache): Unpromoted node1 * child_rsc1:3 (ocf:heartbeat:apache): Unpromoted node2 * child_rsc1:4 (ocf:heartbeat:apache): Stopped diff --git a/cts/scheduler/summary/promoted-7.summary b/cts/scheduler/summary/promoted-7.summary index e43682c9d4..a1ddea5d99 100644 --- a/cts/scheduler/summary/promoted-7.summary +++ b/cts/scheduler/summary/promoted-7.summary @@ -1,121 +1,121 @@ Current cluster status: * Node List: * Node c001n01: UNCLEAN (offline) * Online: [ c001n02 c001n03 c001n08 ] * Full List of Resources: * DcIPaddr (ocf:heartbeat:IPaddr): Started c001n01 (UNCLEAN) * Resource Group: group-1: * ocf_192.168.100.181 (ocf:heartbeat:IPaddr): Started c001n03 * heartbeat_192.168.100.182 (ocf:heartbeat:IPaddr): Started c001n03 * ocf_192.168.100.183 (ocf:heartbeat:IPaddr): Started c001n03 * lsb_dummy (lsb:/usr/lib/heartbeat/cts/LSBDummy): Started c001n02 * rsc_c001n01 (ocf:heartbeat:IPaddr): Started c001n01 (UNCLEAN) * rsc_c001n08 (ocf:heartbeat:IPaddr): Started c001n08 * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02 * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n03 * Clone Set: DoFencing [child_DoFencing] (unique): * child_DoFencing:0 (stonith:ssh): Started c001n01 (UNCLEAN) * child_DoFencing:1 (stonith:ssh): Started c001n03 * child_DoFencing:2 (stonith:ssh): Started c001n02 * child_DoFencing:3 (stonith:ssh): Started c001n08 * Clone Set: master_rsc_1 [ocf_msdummy] (promotable, unique): * ocf_msdummy:0 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Promoted c001n01 (UNCLEAN) * ocf_msdummy:1 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n03 * ocf_msdummy:2 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n02 * ocf_msdummy:3 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n08 * ocf_msdummy:4 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n01 (UNCLEAN) * ocf_msdummy:5 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): 
Unpromoted c001n03 * ocf_msdummy:6 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n02 * ocf_msdummy:7 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n08 Transition Summary: * Fence (reboot) c001n01 'peer is no longer part of the cluster' * Move DcIPaddr ( c001n01 -> c001n03 ) * Move ocf_192.168.100.181 ( c001n03 -> c001n02 ) * Move heartbeat_192.168.100.182 ( c001n03 -> c001n02 ) * Move ocf_192.168.100.183 ( c001n03 -> c001n02 ) * Move lsb_dummy ( c001n02 -> c001n08 ) * Move rsc_c001n01 ( c001n01 -> c001n03 ) * Stop child_DoFencing:0 ( c001n01 ) due to node availability - * Stop ocf_msdummy:0 ( Promoted c001n01 ) due to node availability - * Stop ocf_msdummy:4 ( Unpromoted c001n01 ) due to node availability + * Stop ocf_msdummy:0 ( Promoted c001n01 ) due to node availability + * Stop ocf_msdummy:4 ( Unpromoted c001n01 ) due to node availability Executing Cluster Transition: * Pseudo action: group-1_stop_0 * Resource action: ocf_192.168.100.183 stop on c001n03 * Resource action: lsb_dummy stop on c001n02 * Resource action: child_DoFencing:2 monitor on c001n08 * Resource action: child_DoFencing:2 monitor on c001n03 * Resource action: child_DoFencing:3 monitor on c001n03 * Resource action: child_DoFencing:3 monitor on c001n02 * Pseudo action: DoFencing_stop_0 * Resource action: ocf_msdummy:4 monitor on c001n08 * Resource action: ocf_msdummy:4 monitor on c001n03 * Resource action: ocf_msdummy:4 monitor on c001n02 * Resource action: ocf_msdummy:5 monitor on c001n08 * Resource action: ocf_msdummy:5 monitor on c001n02 * Resource action: ocf_msdummy:6 monitor on c001n08 * Resource action: ocf_msdummy:6 monitor on c001n03 * Resource action: ocf_msdummy:7 monitor on c001n03 * Resource action: ocf_msdummy:7 monitor on c001n02 * Pseudo action: master_rsc_1_demote_0 * Fencing c001n01 (reboot) * Pseudo action: DcIPaddr_stop_0 * Resource action: heartbeat_192.168.100.182 stop on c001n03 * Resource action: lsb_dummy start on c001n08 * Pseudo 
action: rsc_c001n01_stop_0 * Pseudo action: child_DoFencing:0_stop_0 * Pseudo action: DoFencing_stopped_0 * Pseudo action: ocf_msdummy:0_demote_0 * Pseudo action: master_rsc_1_demoted_0 * Pseudo action: master_rsc_1_stop_0 * Resource action: DcIPaddr start on c001n03 * Resource action: ocf_192.168.100.181 stop on c001n03 * Resource action: lsb_dummy monitor=5000 on c001n08 * Resource action: rsc_c001n01 start on c001n03 * Pseudo action: ocf_msdummy:0_stop_0 * Pseudo action: ocf_msdummy:4_stop_0 * Pseudo action: master_rsc_1_stopped_0 * Resource action: DcIPaddr monitor=5000 on c001n03 * Pseudo action: group-1_stopped_0 * Pseudo action: group-1_start_0 * Resource action: ocf_192.168.100.181 start on c001n02 * Resource action: heartbeat_192.168.100.182 start on c001n02 * Resource action: ocf_192.168.100.183 start on c001n02 * Resource action: rsc_c001n01 monitor=5000 on c001n03 * Pseudo action: group-1_running_0 * Resource action: ocf_192.168.100.181 monitor=5000 on c001n02 * Resource action: heartbeat_192.168.100.182 monitor=5000 on c001n02 * Resource action: ocf_192.168.100.183 monitor=5000 on c001n02 Revised Cluster Status: * Node List: * Online: [ c001n02 c001n03 c001n08 ] * OFFLINE: [ c001n01 ] * Full List of Resources: * DcIPaddr (ocf:heartbeat:IPaddr): Started c001n03 * Resource Group: group-1: * ocf_192.168.100.181 (ocf:heartbeat:IPaddr): Started c001n02 * heartbeat_192.168.100.182 (ocf:heartbeat:IPaddr): Started c001n02 * ocf_192.168.100.183 (ocf:heartbeat:IPaddr): Started c001n02 * lsb_dummy (lsb:/usr/lib/heartbeat/cts/LSBDummy): Started c001n08 * rsc_c001n01 (ocf:heartbeat:IPaddr): Started c001n03 * rsc_c001n08 (ocf:heartbeat:IPaddr): Started c001n08 * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02 * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n03 * Clone Set: DoFencing [child_DoFencing] (unique): * child_DoFencing:0 (stonith:ssh): Stopped * child_DoFencing:1 (stonith:ssh): Started c001n03 * child_DoFencing:2 (stonith:ssh): Started c001n02 * 
child_DoFencing:3 (stonith:ssh): Started c001n08 * Clone Set: master_rsc_1 [ocf_msdummy] (promotable, unique): * ocf_msdummy:0 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Stopped * ocf_msdummy:1 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n03 * ocf_msdummy:2 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n02 * ocf_msdummy:3 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n08 * ocf_msdummy:4 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Stopped * ocf_msdummy:5 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n03 * ocf_msdummy:6 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n02 * ocf_msdummy:7 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n08 diff --git a/cts/scheduler/summary/promoted-8.summary b/cts/scheduler/summary/promoted-8.summary index 571eba6945..ed646ed589 100644 --- a/cts/scheduler/summary/promoted-8.summary +++ b/cts/scheduler/summary/promoted-8.summary @@ -1,124 +1,124 @@ Current cluster status: * Node List: * Node c001n01: UNCLEAN (offline) * Online: [ c001n02 c001n03 c001n08 ] * Full List of Resources: * DcIPaddr (ocf:heartbeat:IPaddr): Started c001n01 (UNCLEAN) * Resource Group: group-1: * ocf_192.168.100.181 (ocf:heartbeat:IPaddr): Started c001n03 * heartbeat_192.168.100.182 (ocf:heartbeat:IPaddr): Started c001n03 * ocf_192.168.100.183 (ocf:heartbeat:IPaddr): Started c001n03 * lsb_dummy (lsb:/usr/lib/heartbeat/cts/LSBDummy): Started c001n02 * rsc_c001n01 (ocf:heartbeat:IPaddr): Started c001n01 (UNCLEAN) * rsc_c001n08 (ocf:heartbeat:IPaddr): Started c001n08 * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02 * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n03 * Clone Set: DoFencing [child_DoFencing] (unique): * child_DoFencing:0 (stonith:ssh): Started c001n01 (UNCLEAN) * child_DoFencing:1 (stonith:ssh): Started c001n03 * child_DoFencing:2 (stonith:ssh): Started c001n02 * child_DoFencing:3 (stonith:ssh): Started c001n08 * 
Clone Set: master_rsc_1 [ocf_msdummy] (promotable, unique): * ocf_msdummy:0 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Promoted c001n01 (UNCLEAN) * ocf_msdummy:1 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n03 * ocf_msdummy:2 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n02 * ocf_msdummy:3 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n08 * ocf_msdummy:4 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Stopped * ocf_msdummy:5 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Stopped * ocf_msdummy:6 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n02 * ocf_msdummy:7 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n08 Transition Summary: * Fence (reboot) c001n01 'peer is no longer part of the cluster' - * Move DcIPaddr ( c001n01 -> c001n03 ) - * Move ocf_192.168.100.181 ( c001n03 -> c001n02 ) - * Move heartbeat_192.168.100.182 ( c001n03 -> c001n02 ) - * Move ocf_192.168.100.183 ( c001n03 -> c001n02 ) - * Move lsb_dummy ( c001n02 -> c001n08 ) - * Move rsc_c001n01 ( c001n01 -> c001n03 ) - * Stop child_DoFencing:0 ( c001n01 ) due to node availability + * Move DcIPaddr ( c001n01 -> c001n03 ) + * Move ocf_192.168.100.181 ( c001n03 -> c001n02 ) + * Move heartbeat_192.168.100.182 ( c001n03 -> c001n02 ) + * Move ocf_192.168.100.183 ( c001n03 -> c001n02 ) + * Move lsb_dummy ( c001n02 -> c001n08 ) + * Move rsc_c001n01 ( c001n01 -> c001n03 ) + * Stop child_DoFencing:0 ( c001n01 ) due to node availability * Move ocf_msdummy:0 ( Promoted c001n01 -> Unpromoted c001n03 ) Executing Cluster Transition: * Pseudo action: group-1_stop_0 * Resource action: ocf_192.168.100.183 stop on c001n03 * Resource action: lsb_dummy stop on c001n02 * Resource action: child_DoFencing:2 monitor on c001n08 * Resource action: child_DoFencing:2 monitor on c001n03 * Resource action: child_DoFencing:3 monitor on c001n03 * Resource action: child_DoFencing:3 monitor on c001n02 * Pseudo action: 
DoFencing_stop_0 * Resource action: ocf_msdummy:4 monitor on c001n08 * Resource action: ocf_msdummy:4 monitor on c001n03 * Resource action: ocf_msdummy:4 monitor on c001n02 * Resource action: ocf_msdummy:5 monitor on c001n08 * Resource action: ocf_msdummy:5 monitor on c001n03 * Resource action: ocf_msdummy:5 monitor on c001n02 * Resource action: ocf_msdummy:6 monitor on c001n08 * Resource action: ocf_msdummy:6 monitor on c001n03 * Resource action: ocf_msdummy:7 monitor on c001n03 * Resource action: ocf_msdummy:7 monitor on c001n02 * Pseudo action: master_rsc_1_demote_0 * Fencing c001n01 (reboot) * Pseudo action: DcIPaddr_stop_0 * Resource action: heartbeat_192.168.100.182 stop on c001n03 * Resource action: lsb_dummy start on c001n08 * Pseudo action: rsc_c001n01_stop_0 * Pseudo action: child_DoFencing:0_stop_0 * Pseudo action: DoFencing_stopped_0 * Pseudo action: ocf_msdummy:0_demote_0 * Pseudo action: master_rsc_1_demoted_0 * Pseudo action: master_rsc_1_stop_0 * Resource action: DcIPaddr start on c001n03 * Resource action: ocf_192.168.100.181 stop on c001n03 * Resource action: lsb_dummy monitor=5000 on c001n08 * Resource action: rsc_c001n01 start on c001n03 * Pseudo action: ocf_msdummy:0_stop_0 * Pseudo action: master_rsc_1_stopped_0 * Pseudo action: master_rsc_1_start_0 * Resource action: DcIPaddr monitor=5000 on c001n03 * Pseudo action: group-1_stopped_0 * Pseudo action: group-1_start_0 * Resource action: ocf_192.168.100.181 start on c001n02 * Resource action: heartbeat_192.168.100.182 start on c001n02 * Resource action: ocf_192.168.100.183 start on c001n02 * Resource action: rsc_c001n01 monitor=5000 on c001n03 * Resource action: ocf_msdummy:0 start on c001n03 * Pseudo action: master_rsc_1_running_0 * Pseudo action: group-1_running_0 * Resource action: ocf_192.168.100.181 monitor=5000 on c001n02 * Resource action: heartbeat_192.168.100.182 monitor=5000 on c001n02 * Resource action: ocf_192.168.100.183 monitor=5000 on c001n02 * Resource action: ocf_msdummy:0 
monitor=5000 on c001n03 Revised Cluster Status: * Node List: * Online: [ c001n02 c001n03 c001n08 ] * OFFLINE: [ c001n01 ] * Full List of Resources: * DcIPaddr (ocf:heartbeat:IPaddr): Started c001n03 * Resource Group: group-1: * ocf_192.168.100.181 (ocf:heartbeat:IPaddr): Started c001n02 * heartbeat_192.168.100.182 (ocf:heartbeat:IPaddr): Started c001n02 * ocf_192.168.100.183 (ocf:heartbeat:IPaddr): Started c001n02 * lsb_dummy (lsb:/usr/lib/heartbeat/cts/LSBDummy): Started c001n08 * rsc_c001n01 (ocf:heartbeat:IPaddr): Started c001n03 * rsc_c001n08 (ocf:heartbeat:IPaddr): Started c001n08 * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02 * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n03 * Clone Set: DoFencing [child_DoFencing] (unique): * child_DoFencing:0 (stonith:ssh): Stopped * child_DoFencing:1 (stonith:ssh): Started c001n03 * child_DoFencing:2 (stonith:ssh): Started c001n02 * child_DoFencing:3 (stonith:ssh): Started c001n08 * Clone Set: master_rsc_1 [ocf_msdummy] (promotable, unique): * ocf_msdummy:0 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n03 * ocf_msdummy:1 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n03 * ocf_msdummy:2 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n02 * ocf_msdummy:3 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n08 * ocf_msdummy:4 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Stopped * ocf_msdummy:5 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Stopped * ocf_msdummy:6 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n02 * ocf_msdummy:7 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n08 diff --git a/cts/scheduler/summary/promoted-9.summary b/cts/scheduler/summary/promoted-9.summary index 7dfdbbda99..69dab46a2c 100644 --- a/cts/scheduler/summary/promoted-9.summary +++ b/cts/scheduler/summary/promoted-9.summary @@ -1,100 +1,100 @@ Current cluster status: * Node List: * Node sgi2: UNCLEAN (offline) * Node 
test02: UNCLEAN (offline) * Online: [ ibm1 va1 ] * Full List of Resources: * DcIPaddr (ocf:heartbeat:IPaddr): Stopped * Resource Group: group-1: * ocf_127.0.0.11 (ocf:heartbeat:IPaddr): Stopped * heartbeat_127.0.0.12 (ocf:heartbeat:IPaddr): Stopped * ocf_127.0.0.13 (ocf:heartbeat:IPaddr): Stopped * lsb_dummy (lsb:/usr/lib64/heartbeat/cts/LSBDummy): Stopped * rsc_sgi2 (ocf:heartbeat:IPaddr): Stopped * rsc_ibm1 (ocf:heartbeat:IPaddr): Stopped * rsc_va1 (ocf:heartbeat:IPaddr): Stopped * rsc_test02 (ocf:heartbeat:IPaddr): Stopped * Clone Set: DoFencing [child_DoFencing] (unique): * child_DoFencing:0 (stonith:ssh): Started va1 * child_DoFencing:1 (stonith:ssh): Started ibm1 * child_DoFencing:2 (stonith:ssh): Stopped * child_DoFencing:3 (stonith:ssh): Stopped * Clone Set: master_rsc_1 [ocf_msdummy] (promotable, unique): * ocf_msdummy:0 (ocf:heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped * ocf_msdummy:1 (ocf:heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped * ocf_msdummy:2 (ocf:heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped * ocf_msdummy:3 (ocf:heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped * ocf_msdummy:4 (ocf:heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped * ocf_msdummy:5 (ocf:heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped * ocf_msdummy:6 (ocf:heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped * ocf_msdummy:7 (ocf:heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped Transition Summary: - * Start DcIPaddr ( va1 ) due to no quorum (blocked) - * Start ocf_127.0.0.11 ( va1 ) due to no quorum (blocked) - * Start heartbeat_127.0.0.12 ( va1 ) due to no quorum (blocked) - * Start ocf_127.0.0.13 ( va1 ) due to no quorum (blocked) - * Start lsb_dummy ( va1 ) due to no quorum (blocked) - * Start rsc_sgi2 ( va1 ) due to no quorum (blocked) - * Start rsc_ibm1 ( va1 ) due to no quorum (blocked) - * Start rsc_va1 ( va1 ) due to no quorum (blocked) - * Start rsc_test02 ( va1 ) due to no quorum (blocked) - * Stop 
child_DoFencing:1 ( ibm1 ) due to node availability + * Start DcIPaddr ( va1 ) due to no quorum (blocked) + * Start ocf_127.0.0.11 ( va1 ) due to no quorum (blocked) + * Start heartbeat_127.0.0.12 ( va1 ) due to no quorum (blocked) + * Start ocf_127.0.0.13 ( va1 ) due to no quorum (blocked) + * Start lsb_dummy ( va1 ) due to no quorum (blocked) + * Start rsc_sgi2 ( va1 ) due to no quorum (blocked) + * Start rsc_ibm1 ( va1 ) due to no quorum (blocked) + * Start rsc_va1 ( va1 ) due to no quorum (blocked) + * Start rsc_test02 ( va1 ) due to no quorum (blocked) + * Stop child_DoFencing:1 ( ibm1 ) due to node availability * Promote ocf_msdummy:0 ( Stopped -> Promoted va1 ) blocked - * Start ocf_msdummy:1 ( va1 ) due to no quorum (blocked) + * Start ocf_msdummy:1 ( va1 ) due to no quorum (blocked) Executing Cluster Transition: * Resource action: child_DoFencing:1 monitor on va1 * Resource action: child_DoFencing:2 monitor on va1 * Resource action: child_DoFencing:2 monitor on ibm1 * Resource action: child_DoFencing:3 monitor on va1 * Resource action: child_DoFencing:3 monitor on ibm1 * Pseudo action: DoFencing_stop_0 * Resource action: ocf_msdummy:2 monitor on va1 * Resource action: ocf_msdummy:2 monitor on ibm1 * Resource action: ocf_msdummy:3 monitor on va1 * Resource action: ocf_msdummy:3 monitor on ibm1 * Resource action: ocf_msdummy:4 monitor on va1 * Resource action: ocf_msdummy:4 monitor on ibm1 * Resource action: ocf_msdummy:5 monitor on va1 * Resource action: ocf_msdummy:5 monitor on ibm1 * Resource action: ocf_msdummy:6 monitor on va1 * Resource action: ocf_msdummy:6 monitor on ibm1 * Resource action: ocf_msdummy:7 monitor on va1 * Resource action: ocf_msdummy:7 monitor on ibm1 * Resource action: child_DoFencing:1 stop on ibm1 * Pseudo action: DoFencing_stopped_0 * Cluster action: do_shutdown on ibm1 Revised Cluster Status: * Node List: * Node sgi2: UNCLEAN (offline) * Node test02: UNCLEAN (offline) * Online: [ ibm1 va1 ] * Full List of Resources: * DcIPaddr 
(ocf:heartbeat:IPaddr): Stopped * Resource Group: group-1: * ocf_127.0.0.11 (ocf:heartbeat:IPaddr): Stopped * heartbeat_127.0.0.12 (ocf:heartbeat:IPaddr): Stopped * ocf_127.0.0.13 (ocf:heartbeat:IPaddr): Stopped * lsb_dummy (lsb:/usr/lib64/heartbeat/cts/LSBDummy): Stopped * rsc_sgi2 (ocf:heartbeat:IPaddr): Stopped * rsc_ibm1 (ocf:heartbeat:IPaddr): Stopped * rsc_va1 (ocf:heartbeat:IPaddr): Stopped * rsc_test02 (ocf:heartbeat:IPaddr): Stopped * Clone Set: DoFencing [child_DoFencing] (unique): * child_DoFencing:0 (stonith:ssh): Started va1 * child_DoFencing:1 (stonith:ssh): Stopped * child_DoFencing:2 (stonith:ssh): Stopped * child_DoFencing:3 (stonith:ssh): Stopped * Clone Set: master_rsc_1 [ocf_msdummy] (promotable, unique): * ocf_msdummy:0 (ocf:heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped * ocf_msdummy:1 (ocf:heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped * ocf_msdummy:2 (ocf:heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped * ocf_msdummy:3 (ocf:heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped * ocf_msdummy:4 (ocf:heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped * ocf_msdummy:5 (ocf:heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped * ocf_msdummy:6 (ocf:heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped * ocf_msdummy:7 (ocf:heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped diff --git a/cts/scheduler/summary/promoted-asymmetrical-order.summary b/cts/scheduler/summary/promoted-asymmetrical-order.summary index 1e49b3084b..591ff18a04 100644 --- a/cts/scheduler/summary/promoted-asymmetrical-order.summary +++ b/cts/scheduler/summary/promoted-asymmetrical-order.summary @@ -1,37 +1,37 @@ 2 of 4 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: * Node List: * Online: [ node1 node2 ] * Full List of Resources: * Clone Set: ms1 [rsc1] (promotable, disabled): * Promoted: [ node1 ] * Unpromoted: [ node2 ] * Clone Set: ms2 [rsc2] (promotable): * Promoted: [ node2 ] * 
Unpromoted: [ node1 ] Transition Summary: - * Stop rsc1:0 ( Promoted node1 ) due to node availability - * Stop rsc1:1 ( Unpromoted node2 ) due to node availability + * Stop rsc1:0 ( Promoted node1 ) due to node availability + * Stop rsc1:1 ( Unpromoted node2 ) due to node availability Executing Cluster Transition: * Pseudo action: ms1_demote_0 * Resource action: rsc1:0 demote on node1 * Pseudo action: ms1_demoted_0 * Pseudo action: ms1_stop_0 * Resource action: rsc1:0 stop on node1 * Resource action: rsc1:1 stop on node2 * Pseudo action: ms1_stopped_0 Revised Cluster Status: * Node List: * Online: [ node1 node2 ] * Full List of Resources: * Clone Set: ms1 [rsc1] (promotable, disabled): * Stopped (disabled): [ node1 node2 ] * Clone Set: ms2 [rsc2] (promotable): * Promoted: [ node2 ] * Unpromoted: [ node1 ] diff --git a/cts/scheduler/summary/promoted-demote-2.summary b/cts/scheduler/summary/promoted-demote-2.summary index 115da9aaaf..e371d3f1c1 100644 --- a/cts/scheduler/summary/promoted-demote-2.summary +++ b/cts/scheduler/summary/promoted-demote-2.summary @@ -1,75 +1,75 @@ Current cluster status: * Node List: * Online: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ] * Full List of Resources: * Fencing (stonith:fence_xvm): Started pcmk-1 * Resource Group: group-1: * r192.168.122.105 (ocf:heartbeat:IPaddr): Stopped * r192.168.122.106 (ocf:heartbeat:IPaddr): Stopped * r192.168.122.107 (ocf:heartbeat:IPaddr): Stopped * rsc_pcmk-1 (ocf:heartbeat:IPaddr): Started pcmk-1 * rsc_pcmk-2 (ocf:heartbeat:IPaddr): Started pcmk-2 * rsc_pcmk-3 (ocf:heartbeat:IPaddr): Started pcmk-3 * rsc_pcmk-4 (ocf:heartbeat:IPaddr): Started pcmk-4 * lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Stopped * migrator (ocf:pacemaker:Dummy): Started pcmk-4 * Clone Set: Connectivity [ping-1]: * Started: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ] * Clone Set: master-1 [stateful-1] (promotable): * stateful-1 (ocf:pacemaker:Stateful): FAILED pcmk-1 * Unpromoted: [ pcmk-2 pcmk-3 pcmk-4 ] Transition Summary: - * Start 
r192.168.122.105 ( pcmk-2 ) - * Start r192.168.122.106 ( pcmk-2 ) - * Start r192.168.122.107 ( pcmk-2 ) - * Start lsb-dummy ( pcmk-2 ) - * Recover stateful-1:0 ( Unpromoted pcmk-1 ) + * Start r192.168.122.105 ( pcmk-2 ) + * Start r192.168.122.106 ( pcmk-2 ) + * Start r192.168.122.107 ( pcmk-2 ) + * Start lsb-dummy ( pcmk-2 ) + * Recover stateful-1:0 ( Unpromoted pcmk-1 ) * Promote stateful-1:1 ( Unpromoted -> Promoted pcmk-2 ) Executing Cluster Transition: * Resource action: stateful-1:0 cancel=15000 on pcmk-2 * Pseudo action: master-1_stop_0 * Resource action: stateful-1:1 stop on pcmk-1 * Pseudo action: master-1_stopped_0 * Pseudo action: master-1_start_0 * Resource action: stateful-1:1 start on pcmk-1 * Pseudo action: master-1_running_0 * Resource action: stateful-1:1 monitor=15000 on pcmk-1 * Pseudo action: master-1_promote_0 * Resource action: stateful-1:0 promote on pcmk-2 * Pseudo action: master-1_promoted_0 * Pseudo action: group-1_start_0 * Resource action: r192.168.122.105 start on pcmk-2 * Resource action: r192.168.122.106 start on pcmk-2 * Resource action: r192.168.122.107 start on pcmk-2 * Resource action: stateful-1:0 monitor=16000 on pcmk-2 * Pseudo action: group-1_running_0 * Resource action: r192.168.122.105 monitor=5000 on pcmk-2 * Resource action: r192.168.122.106 monitor=5000 on pcmk-2 * Resource action: r192.168.122.107 monitor=5000 on pcmk-2 * Resource action: lsb-dummy start on pcmk-2 * Resource action: lsb-dummy monitor=5000 on pcmk-2 Revised Cluster Status: * Node List: * Online: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ] * Full List of Resources: * Fencing (stonith:fence_xvm): Started pcmk-1 * Resource Group: group-1: * r192.168.122.105 (ocf:heartbeat:IPaddr): Started pcmk-2 * r192.168.122.106 (ocf:heartbeat:IPaddr): Started pcmk-2 * r192.168.122.107 (ocf:heartbeat:IPaddr): Started pcmk-2 * rsc_pcmk-1 (ocf:heartbeat:IPaddr): Started pcmk-1 * rsc_pcmk-2 (ocf:heartbeat:IPaddr): Started pcmk-2 * rsc_pcmk-3 (ocf:heartbeat:IPaddr): Started pcmk-3 * 
rsc_pcmk-4 (ocf:heartbeat:IPaddr): Started pcmk-4 * lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Started pcmk-2 * migrator (ocf:pacemaker:Dummy): Started pcmk-4 * Clone Set: Connectivity [ping-1]: * Started: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ] * Clone Set: master-1 [stateful-1] (promotable): * Promoted: [ pcmk-2 ] * Unpromoted: [ pcmk-1 pcmk-3 pcmk-4 ] diff --git a/cts/scheduler/summary/promoted-demote.summary b/cts/scheduler/summary/promoted-demote.summary index a597ec76c0..3ba4985afd 100644 --- a/cts/scheduler/summary/promoted-demote.summary +++ b/cts/scheduler/summary/promoted-demote.summary @@ -1,70 +1,70 @@ Current cluster status: * Node List: * Online: [ cxa1 cxb1 ] * Full List of Resources: * cyrus_address (ocf:heartbeat:IPaddr2): Started cxa1 * cyrus_master (ocf:heartbeat:cyrus-imap): Stopped * cyrus_syslogd (ocf:heartbeat:syslogd): Stopped * cyrus_filesys (ocf:heartbeat:Filesystem): Stopped * cyrus_volgroup (ocf:heartbeat:VolGroup): Stopped * Clone Set: cyrus_drbd [cyrus_drbd_node] (promotable): * Promoted: [ cxa1 ] * Unpromoted: [ cxb1 ] * named_address (ocf:heartbeat:IPaddr2): Started cxa1 * named_filesys (ocf:heartbeat:Filesystem): Stopped * named_volgroup (ocf:heartbeat:VolGroup): Stopped * named_daemon (ocf:heartbeat:recursor): Stopped * named_syslogd (ocf:heartbeat:syslogd): Stopped * Clone Set: named_drbd [named_drbd_node] (promotable): * Unpromoted: [ cxa1 cxb1 ] * Clone Set: pingd_clone [pingd_node]: * Started: [ cxa1 cxb1 ] * Clone Set: fence_clone [fence_node]: * Started: [ cxa1 cxb1 ] Transition Summary: - * Move named_address ( cxa1 -> cxb1 ) + * Move named_address ( cxa1 -> cxb1 ) * Promote named_drbd_node:1 ( Unpromoted -> Promoted cxb1 ) Executing Cluster Transition: * Resource action: named_address stop on cxa1 * Pseudo action: named_drbd_pre_notify_promote_0 * Resource action: named_address start on cxb1 * Resource action: named_drbd_node:1 notify on cxa1 * Resource action: named_drbd_node:0 notify on cxb1 * Pseudo action: 
named_drbd_confirmed-pre_notify_promote_0 * Pseudo action: named_drbd_promote_0 * Resource action: named_drbd_node:0 promote on cxb1 * Pseudo action: named_drbd_promoted_0 * Pseudo action: named_drbd_post_notify_promoted_0 * Resource action: named_drbd_node:1 notify on cxa1 * Resource action: named_drbd_node:0 notify on cxb1 * Pseudo action: named_drbd_confirmed-post_notify_promoted_0 * Resource action: named_drbd_node:0 monitor=10000 on cxb1 Revised Cluster Status: * Node List: * Online: [ cxa1 cxb1 ] * Full List of Resources: * cyrus_address (ocf:heartbeat:IPaddr2): Started cxa1 * cyrus_master (ocf:heartbeat:cyrus-imap): Stopped * cyrus_syslogd (ocf:heartbeat:syslogd): Stopped * cyrus_filesys (ocf:heartbeat:Filesystem): Stopped * cyrus_volgroup (ocf:heartbeat:VolGroup): Stopped * Clone Set: cyrus_drbd [cyrus_drbd_node] (promotable): * Promoted: [ cxa1 ] * Unpromoted: [ cxb1 ] * named_address (ocf:heartbeat:IPaddr2): Started cxb1 * named_filesys (ocf:heartbeat:Filesystem): Stopped * named_volgroup (ocf:heartbeat:VolGroup): Stopped * named_daemon (ocf:heartbeat:recursor): Stopped * named_syslogd (ocf:heartbeat:syslogd): Stopped * Clone Set: named_drbd [named_drbd_node] (promotable): * Promoted: [ cxb1 ] * Unpromoted: [ cxa1 ] * Clone Set: pingd_clone [pingd_node]: * Started: [ cxa1 cxb1 ] * Clone Set: fence_clone [fence_node]: * Started: [ cxa1 cxb1 ] diff --git a/cts/scheduler/summary/promoted-dependent-ban.summary b/cts/scheduler/summary/promoted-dependent-ban.summary index 985326a797..2b24139acc 100644 --- a/cts/scheduler/summary/promoted-dependent-ban.summary +++ b/cts/scheduler/summary/promoted-dependent-ban.summary @@ -1,38 +1,38 @@ Current cluster status: * Node List: * Online: [ c6 c7 c8 ] * Full List of Resources: * Clone Set: ms_drbd-dtest1 [p_drbd-dtest1] (promotable): * Unpromoted: [ c6 c7 ] * p_dtest1 (ocf:heartbeat:Dummy): Stopped Transition Summary: * Promote p_drbd-dtest1:0 ( Unpromoted -> Promoted c7 ) - * Start p_dtest1 ( c7 ) + * Start p_dtest1 ( 
c7 ) Executing Cluster Transition: * Pseudo action: ms_drbd-dtest1_pre_notify_promote_0 * Resource action: p_drbd-dtest1 notify on c7 * Resource action: p_drbd-dtest1 notify on c6 * Pseudo action: ms_drbd-dtest1_confirmed-pre_notify_promote_0 * Pseudo action: ms_drbd-dtest1_promote_0 * Resource action: p_drbd-dtest1 promote on c7 * Pseudo action: ms_drbd-dtest1_promoted_0 * Pseudo action: ms_drbd-dtest1_post_notify_promoted_0 * Resource action: p_drbd-dtest1 notify on c7 * Resource action: p_drbd-dtest1 notify on c6 * Pseudo action: ms_drbd-dtest1_confirmed-post_notify_promoted_0 * Resource action: p_dtest1 start on c7 * Resource action: p_drbd-dtest1 monitor=10000 on c7 * Resource action: p_drbd-dtest1 monitor=20000 on c6 Revised Cluster Status: * Node List: * Online: [ c6 c7 c8 ] * Full List of Resources: * Clone Set: ms_drbd-dtest1 [p_drbd-dtest1] (promotable): * Promoted: [ c7 ] * Unpromoted: [ c6 ] * p_dtest1 (ocf:heartbeat:Dummy): Started c7 diff --git a/cts/scheduler/summary/promoted-failed-demote-2.summary b/cts/scheduler/summary/promoted-failed-demote-2.summary index 453b5b7c9b..3f317fabea 100644 --- a/cts/scheduler/summary/promoted-failed-demote-2.summary +++ b/cts/scheduler/summary/promoted-failed-demote-2.summary @@ -1,47 +1,47 @@ Current cluster status: * Node List: * Online: [ dl380g5a dl380g5b ] * Full List of Resources: * Clone Set: ms-sf [group] (promotable, unique): * Resource Group: group:0: * stateful-1:0 (ocf:heartbeat:Stateful): FAILED dl380g5b * stateful-2:0 (ocf:heartbeat:Stateful): Stopped * Resource Group: group:1: * stateful-1:1 (ocf:heartbeat:Stateful): Unpromoted dl380g5a * stateful-2:1 (ocf:heartbeat:Stateful): Unpromoted dl380g5a Transition Summary: - * Stop stateful-1:0 ( Unpromoted dl380g5b ) due to node availability + * Stop stateful-1:0 ( Unpromoted dl380g5b ) due to node availability * Promote stateful-1:1 ( Unpromoted -> Promoted dl380g5a ) * Promote stateful-2:1 ( Unpromoted -> Promoted dl380g5a ) Executing Cluster Transition: 
* Resource action: stateful-1:1 cancel=20000 on dl380g5a * Resource action: stateful-2:1 cancel=20000 on dl380g5a * Pseudo action: ms-sf_stop_0 * Pseudo action: group:0_stop_0 * Resource action: stateful-1:0 stop on dl380g5b * Pseudo action: group:0_stopped_0 * Pseudo action: ms-sf_stopped_0 * Pseudo action: ms-sf_promote_0 * Pseudo action: group:1_promote_0 * Resource action: stateful-1:1 promote on dl380g5a * Resource action: stateful-2:1 promote on dl380g5a * Pseudo action: group:1_promoted_0 * Resource action: stateful-1:1 monitor=10000 on dl380g5a * Resource action: stateful-2:1 monitor=10000 on dl380g5a * Pseudo action: ms-sf_promoted_0 Revised Cluster Status: * Node List: * Online: [ dl380g5a dl380g5b ] * Full List of Resources: * Clone Set: ms-sf [group] (promotable, unique): * Resource Group: group:0: * stateful-1:0 (ocf:heartbeat:Stateful): Stopped * stateful-2:0 (ocf:heartbeat:Stateful): Stopped * Resource Group: group:1: * stateful-1:1 (ocf:heartbeat:Stateful): Promoted dl380g5a * stateful-2:1 (ocf:heartbeat:Stateful): Promoted dl380g5a diff --git a/cts/scheduler/summary/promoted-failed-demote.summary b/cts/scheduler/summary/promoted-failed-demote.summary index 732fba89c7..70b3e1b2cf 100644 --- a/cts/scheduler/summary/promoted-failed-demote.summary +++ b/cts/scheduler/summary/promoted-failed-demote.summary @@ -1,64 +1,64 @@ Current cluster status: * Node List: * Online: [ dl380g5a dl380g5b ] * Full List of Resources: * Clone Set: ms-sf [group] (promotable, unique): * Resource Group: group:0: * stateful-1:0 (ocf:heartbeat:Stateful): FAILED dl380g5b * stateful-2:0 (ocf:heartbeat:Stateful): Stopped * Resource Group: group:1: * stateful-1:1 (ocf:heartbeat:Stateful): Unpromoted dl380g5a * stateful-2:1 (ocf:heartbeat:Stateful): Unpromoted dl380g5a Transition Summary: - * Stop stateful-1:0 ( Unpromoted dl380g5b ) due to node availability + * Stop stateful-1:0 ( Unpromoted dl380g5b ) due to node availability * Promote stateful-1:1 ( Unpromoted -> Promoted 
dl380g5a ) * Promote stateful-2:1 ( Unpromoted -> Promoted dl380g5a ) Executing Cluster Transition: * Resource action: stateful-1:1 cancel=20000 on dl380g5a * Resource action: stateful-2:1 cancel=20000 on dl380g5a * Pseudo action: ms-sf_pre_notify_stop_0 * Resource action: stateful-1:0 notify on dl380g5b * Resource action: stateful-1:1 notify on dl380g5a * Resource action: stateful-2:1 notify on dl380g5a * Pseudo action: ms-sf_confirmed-pre_notify_stop_0 * Pseudo action: ms-sf_stop_0 * Pseudo action: group:0_stop_0 * Resource action: stateful-1:0 stop on dl380g5b * Pseudo action: group:0_stopped_0 * Pseudo action: ms-sf_stopped_0 * Pseudo action: ms-sf_post_notify_stopped_0 * Resource action: stateful-1:1 notify on dl380g5a * Resource action: stateful-2:1 notify on dl380g5a * Pseudo action: ms-sf_confirmed-post_notify_stopped_0 * Pseudo action: ms-sf_pre_notify_promote_0 * Resource action: stateful-1:1 notify on dl380g5a * Resource action: stateful-2:1 notify on dl380g5a * Pseudo action: ms-sf_confirmed-pre_notify_promote_0 * Pseudo action: ms-sf_promote_0 * Pseudo action: group:1_promote_0 * Resource action: stateful-1:1 promote on dl380g5a * Resource action: stateful-2:1 promote on dl380g5a * Pseudo action: group:1_promoted_0 * Pseudo action: ms-sf_promoted_0 * Pseudo action: ms-sf_post_notify_promoted_0 * Resource action: stateful-1:1 notify on dl380g5a * Resource action: stateful-2:1 notify on dl380g5a * Pseudo action: ms-sf_confirmed-post_notify_promoted_0 * Resource action: stateful-1:1 monitor=10000 on dl380g5a * Resource action: stateful-2:1 monitor=10000 on dl380g5a Revised Cluster Status: * Node List: * Online: [ dl380g5a dl380g5b ] * Full List of Resources: * Clone Set: ms-sf [group] (promotable, unique): * Resource Group: group:0: * stateful-1:0 (ocf:heartbeat:Stateful): Stopped * stateful-2:0 (ocf:heartbeat:Stateful): Stopped * Resource Group: group:1: * stateful-1:1 (ocf:heartbeat:Stateful): Promoted dl380g5a * stateful-2:1 (ocf:heartbeat:Stateful): 
Promoted dl380g5a diff --git a/cts/scheduler/summary/promoted-move.summary b/cts/scheduler/summary/promoted-move.summary index 2fb2206605..4782edb551 100644 --- a/cts/scheduler/summary/promoted-move.summary +++ b/cts/scheduler/summary/promoted-move.summary @@ -1,72 +1,72 @@ Current cluster status: * Node List: * Online: [ bl460g1n13 bl460g1n14 ] * Full List of Resources: * Resource Group: grpDRBD: * dummy01 (ocf:pacemaker:Dummy): FAILED bl460g1n13 * dummy02 (ocf:pacemaker:Dummy): Started bl460g1n13 * dummy03 (ocf:pacemaker:Dummy): Stopped * Clone Set: msDRBD [prmDRBD] (promotable): * Promoted: [ bl460g1n13 ] * Unpromoted: [ bl460g1n14 ] Transition Summary: - * Recover dummy01 ( bl460g1n13 -> bl460g1n14 ) - * Move dummy02 ( bl460g1n13 -> bl460g1n14 ) - * Start dummy03 ( bl460g1n14 ) + * Recover dummy01 ( bl460g1n13 -> bl460g1n14 ) + * Move dummy02 ( bl460g1n13 -> bl460g1n14 ) + * Start dummy03 ( bl460g1n14 ) * Demote prmDRBD:0 ( Promoted -> Unpromoted bl460g1n13 ) * Promote prmDRBD:1 ( Unpromoted -> Promoted bl460g1n14 ) Executing Cluster Transition: * Pseudo action: grpDRBD_stop_0 * Resource action: dummy02 stop on bl460g1n13 * Resource action: prmDRBD:0 cancel=10000 on bl460g1n13 * Resource action: prmDRBD:1 cancel=20000 on bl460g1n14 * Pseudo action: msDRBD_pre_notify_demote_0 * Resource action: dummy01 stop on bl460g1n13 * Resource action: prmDRBD:0 notify on bl460g1n13 * Resource action: prmDRBD:1 notify on bl460g1n14 * Pseudo action: msDRBD_confirmed-pre_notify_demote_0 * Pseudo action: grpDRBD_stopped_0 * Pseudo action: msDRBD_demote_0 * Resource action: prmDRBD:0 demote on bl460g1n13 * Pseudo action: msDRBD_demoted_0 * Pseudo action: msDRBD_post_notify_demoted_0 * Resource action: prmDRBD:0 notify on bl460g1n13 * Resource action: prmDRBD:1 notify on bl460g1n14 * Pseudo action: msDRBD_confirmed-post_notify_demoted_0 * Pseudo action: msDRBD_pre_notify_promote_0 * Resource action: prmDRBD:0 notify on bl460g1n13 * Resource action: prmDRBD:1 notify on bl460g1n14 
* Pseudo action: msDRBD_confirmed-pre_notify_promote_0 * Pseudo action: msDRBD_promote_0 * Resource action: prmDRBD:1 promote on bl460g1n14 * Pseudo action: msDRBD_promoted_0 * Pseudo action: msDRBD_post_notify_promoted_0 * Resource action: prmDRBD:0 notify on bl460g1n13 * Resource action: prmDRBD:1 notify on bl460g1n14 * Pseudo action: msDRBD_confirmed-post_notify_promoted_0 * Pseudo action: grpDRBD_start_0 * Resource action: dummy01 start on bl460g1n14 * Resource action: dummy02 start on bl460g1n14 * Resource action: dummy03 start on bl460g1n14 * Resource action: prmDRBD:0 monitor=20000 on bl460g1n13 * Resource action: prmDRBD:1 monitor=10000 on bl460g1n14 * Pseudo action: grpDRBD_running_0 * Resource action: dummy01 monitor=10000 on bl460g1n14 * Resource action: dummy02 monitor=10000 on bl460g1n14 * Resource action: dummy03 monitor=10000 on bl460g1n14 Revised Cluster Status: * Node List: * Online: [ bl460g1n13 bl460g1n14 ] * Full List of Resources: * Resource Group: grpDRBD: * dummy01 (ocf:pacemaker:Dummy): Started bl460g1n14 * dummy02 (ocf:pacemaker:Dummy): Started bl460g1n14 * dummy03 (ocf:pacemaker:Dummy): Started bl460g1n14 * Clone Set: msDRBD [prmDRBD] (promotable): * Promoted: [ bl460g1n14 ] * Unpromoted: [ bl460g1n13 ] diff --git a/cts/scheduler/summary/promoted-partially-demoted-group.summary b/cts/scheduler/summary/promoted-partially-demoted-group.summary index e5b35480d7..b85c805711 100644 --- a/cts/scheduler/summary/promoted-partially-demoted-group.summary +++ b/cts/scheduler/summary/promoted-partially-demoted-group.summary @@ -1,118 +1,118 @@ Current cluster status: * Node List: * Online: [ sd01-0 sd01-1 ] * Full List of Resources: * stonith-xvm-sd01-0 (stonith:fence_xvm): Started sd01-1 * stonith-xvm-sd01-1 (stonith:fence_xvm): Started sd01-0 * Resource Group: cdev-pool-0-iscsi-export: * cdev-pool-0-iscsi-target (ocf:vds-ok:iSCSITarget): Started sd01-1 * cdev-pool-0-iscsi-lun-1 (ocf:vds-ok:iSCSILogicalUnit): Started sd01-1 * Clone Set: 
ms-cdev-pool-0-drbd [cdev-pool-0-drbd] (promotable): * Promoted: [ sd01-1 ] * Unpromoted: [ sd01-0 ] * Clone Set: cl-ietd [ietd]: * Started: [ sd01-0 sd01-1 ] * Clone Set: cl-vlan1-net [vlan1-net]: * Started: [ sd01-0 sd01-1 ] * Resource Group: cdev-pool-0-iscsi-vips: * vip-164 (ocf:heartbeat:IPaddr2): Started sd01-1 * vip-165 (ocf:heartbeat:IPaddr2): Started sd01-1 * Clone Set: ms-cdev-pool-0-iscsi-vips-fw [cdev-pool-0-iscsi-vips-fw] (promotable): * Promoted: [ sd01-1 ] * Unpromoted: [ sd01-0 ] Transition Summary: - * Move vip-164 ( sd01-1 -> sd01-0 ) - * Move vip-165 ( sd01-1 -> sd01-0 ) - * Move cdev-pool-0-iscsi-target ( sd01-1 -> sd01-0 ) - * Move cdev-pool-0-iscsi-lun-1 ( sd01-1 -> sd01-0 ) + * Move vip-164 ( sd01-1 -> sd01-0 ) + * Move vip-165 ( sd01-1 -> sd01-0 ) + * Move cdev-pool-0-iscsi-target ( sd01-1 -> sd01-0 ) + * Move cdev-pool-0-iscsi-lun-1 ( sd01-1 -> sd01-0 ) * Demote vip-164-fw:0 ( Promoted -> Unpromoted sd01-1 ) * Promote vip-164-fw:1 ( Unpromoted -> Promoted sd01-0 ) * Promote vip-165-fw:1 ( Unpromoted -> Promoted sd01-0 ) * Demote cdev-pool-0-drbd:0 ( Promoted -> Unpromoted sd01-1 ) * Promote cdev-pool-0-drbd:1 ( Unpromoted -> Promoted sd01-0 ) Executing Cluster Transition: * Resource action: vip-165-fw monitor=10000 on sd01-1 * Pseudo action: ms-cdev-pool-0-iscsi-vips-fw_demote_0 * Pseudo action: ms-cdev-pool-0-drbd_pre_notify_demote_0 * Pseudo action: cdev-pool-0-iscsi-vips-fw:0_demote_0 * Resource action: vip-164-fw demote on sd01-1 * Resource action: cdev-pool-0-drbd notify on sd01-1 * Resource action: cdev-pool-0-drbd notify on sd01-0 * Pseudo action: ms-cdev-pool-0-drbd_confirmed-pre_notify_demote_0 * Pseudo action: cdev-pool-0-iscsi-vips-fw:0_demoted_0 * Resource action: vip-164-fw monitor=10000 on sd01-1 * Pseudo action: ms-cdev-pool-0-iscsi-vips-fw_demoted_0 * Pseudo action: cdev-pool-0-iscsi-vips_stop_0 * Resource action: vip-165 stop on sd01-1 * Resource action: vip-164 stop on sd01-1 * Pseudo action: 
cdev-pool-0-iscsi-vips_stopped_0 * Pseudo action: cdev-pool-0-iscsi-export_stop_0 * Resource action: cdev-pool-0-iscsi-lun-1 stop on sd01-1 * Resource action: cdev-pool-0-iscsi-target stop on sd01-1 * Pseudo action: cdev-pool-0-iscsi-export_stopped_0 * Pseudo action: ms-cdev-pool-0-drbd_demote_0 * Resource action: cdev-pool-0-drbd demote on sd01-1 * Pseudo action: ms-cdev-pool-0-drbd_demoted_0 * Pseudo action: ms-cdev-pool-0-drbd_post_notify_demoted_0 * Resource action: cdev-pool-0-drbd notify on sd01-1 * Resource action: cdev-pool-0-drbd notify on sd01-0 * Pseudo action: ms-cdev-pool-0-drbd_confirmed-post_notify_demoted_0 * Pseudo action: ms-cdev-pool-0-drbd_pre_notify_promote_0 * Resource action: cdev-pool-0-drbd notify on sd01-1 * Resource action: cdev-pool-0-drbd notify on sd01-0 * Pseudo action: ms-cdev-pool-0-drbd_confirmed-pre_notify_promote_0 * Pseudo action: ms-cdev-pool-0-drbd_promote_0 * Resource action: cdev-pool-0-drbd promote on sd01-0 * Pseudo action: ms-cdev-pool-0-drbd_promoted_0 * Pseudo action: ms-cdev-pool-0-drbd_post_notify_promoted_0 * Resource action: cdev-pool-0-drbd notify on sd01-1 * Resource action: cdev-pool-0-drbd notify on sd01-0 * Pseudo action: ms-cdev-pool-0-drbd_confirmed-post_notify_promoted_0 * Pseudo action: cdev-pool-0-iscsi-export_start_0 * Resource action: cdev-pool-0-iscsi-target start on sd01-0 * Resource action: cdev-pool-0-iscsi-lun-1 start on sd01-0 * Resource action: cdev-pool-0-drbd monitor=20000 on sd01-1 * Resource action: cdev-pool-0-drbd monitor=10000 on sd01-0 * Pseudo action: cdev-pool-0-iscsi-export_running_0 * Resource action: cdev-pool-0-iscsi-target monitor=10000 on sd01-0 * Resource action: cdev-pool-0-iscsi-lun-1 monitor=10000 on sd01-0 * Pseudo action: cdev-pool-0-iscsi-vips_start_0 * Resource action: vip-164 start on sd01-0 * Resource action: vip-165 start on sd01-0 * Pseudo action: cdev-pool-0-iscsi-vips_running_0 * Resource action: vip-164 monitor=30000 on sd01-0 * Resource action: vip-165 monitor=30000 
on sd01-0 * Pseudo action: ms-cdev-pool-0-iscsi-vips-fw_promote_0 * Pseudo action: cdev-pool-0-iscsi-vips-fw:0_promote_0 * Pseudo action: cdev-pool-0-iscsi-vips-fw:1_promote_0 * Resource action: vip-164-fw promote on sd01-0 * Resource action: vip-165-fw promote on sd01-0 * Pseudo action: cdev-pool-0-iscsi-vips-fw:1_promoted_0 * Pseudo action: ms-cdev-pool-0-iscsi-vips-fw_promoted_0 Revised Cluster Status: * Node List: * Online: [ sd01-0 sd01-1 ] * Full List of Resources: * stonith-xvm-sd01-0 (stonith:fence_xvm): Started sd01-1 * stonith-xvm-sd01-1 (stonith:fence_xvm): Started sd01-0 * Resource Group: cdev-pool-0-iscsi-export: * cdev-pool-0-iscsi-target (ocf:vds-ok:iSCSITarget): Started sd01-0 * cdev-pool-0-iscsi-lun-1 (ocf:vds-ok:iSCSILogicalUnit): Started sd01-0 * Clone Set: ms-cdev-pool-0-drbd [cdev-pool-0-drbd] (promotable): * Promoted: [ sd01-0 ] * Unpromoted: [ sd01-1 ] * Clone Set: cl-ietd [ietd]: * Started: [ sd01-0 sd01-1 ] * Clone Set: cl-vlan1-net [vlan1-net]: * Started: [ sd01-0 sd01-1 ] * Resource Group: cdev-pool-0-iscsi-vips: * vip-164 (ocf:heartbeat:IPaddr2): Started sd01-0 * vip-165 (ocf:heartbeat:IPaddr2): Started sd01-0 * Clone Set: ms-cdev-pool-0-iscsi-vips-fw [cdev-pool-0-iscsi-vips-fw] (promotable): * Promoted: [ sd01-0 ] * Unpromoted: [ sd01-1 ] diff --git a/cts/scheduler/summary/promoted-probed-score.summary b/cts/scheduler/summary/promoted-probed-score.summary index acf3171fe9..3c9326cc45 100644 --- a/cts/scheduler/summary/promoted-probed-score.summary +++ b/cts/scheduler/summary/promoted-probed-score.summary @@ -1,329 +1,329 @@ 1 of 60 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: * Node List: * Online: [ hypatia-corosync.nevis.columbia.edu orestes-corosync.nevis.columbia.edu ] * Full List of Resources: * Clone Set: AdminClone [AdminDrbd] (promotable): * Stopped: [ hypatia-corosync.nevis.columbia.edu orestes-corosync.nevis.columbia.edu ] * CronAmbientTemperature (ocf:heartbeat:symlink): 
Stopped * StonithHypatia (stonith:fence_nut): Stopped * StonithOrestes (stonith:fence_nut): Stopped * Resource Group: DhcpGroup: * SymlinkDhcpdConf (ocf:heartbeat:symlink): Stopped * SymlinkSysconfigDhcpd (ocf:heartbeat:symlink): Stopped * SymlinkDhcpdLeases (ocf:heartbeat:symlink): Stopped * Dhcpd (lsb:dhcpd): Stopped (disabled) * DhcpIP (ocf:heartbeat:IPaddr2): Stopped * Clone Set: CupsClone [CupsGroup]: * Stopped: [ hypatia-corosync.nevis.columbia.edu orestes-corosync.nevis.columbia.edu ] * Clone Set: IPClone [IPGroup] (unique): * Resource Group: IPGroup:0: * ClusterIP:0 (ocf:heartbeat:IPaddr2): Stopped * ClusterIPLocal:0 (ocf:heartbeat:IPaddr2): Stopped * ClusterIPSandbox:0 (ocf:heartbeat:IPaddr2): Stopped * Resource Group: IPGroup:1: * ClusterIP:1 (ocf:heartbeat:IPaddr2): Stopped * ClusterIPLocal:1 (ocf:heartbeat:IPaddr2): Stopped * ClusterIPSandbox:1 (ocf:heartbeat:IPaddr2): Stopped * Clone Set: LibvirtdClone [LibvirtdGroup]: * Stopped: [ hypatia-corosync.nevis.columbia.edu orestes-corosync.nevis.columbia.edu ] * Clone Set: TftpClone [TftpGroup]: * Stopped: [ hypatia-corosync.nevis.columbia.edu orestes-corosync.nevis.columbia.edu ] * Clone Set: ExportsClone [ExportsGroup]: * Stopped: [ hypatia-corosync.nevis.columbia.edu orestes-corosync.nevis.columbia.edu ] * Clone Set: FilesystemClone [FilesystemGroup]: * Stopped: [ hypatia-corosync.nevis.columbia.edu orestes-corosync.nevis.columbia.edu ] * KVM-guest (ocf:heartbeat:VirtualDomain): Stopped * Proxy (ocf:heartbeat:VirtualDomain): Stopped Transition Summary: * Promote AdminDrbd:0 ( Stopped -> Promoted hypatia-corosync.nevis.columbia.edu ) * Promote AdminDrbd:1 ( Stopped -> Promoted orestes-corosync.nevis.columbia.edu ) - * Start CronAmbientTemperature ( hypatia-corosync.nevis.columbia.edu ) - * Start StonithHypatia ( orestes-corosync.nevis.columbia.edu ) - * Start StonithOrestes ( hypatia-corosync.nevis.columbia.edu ) - * Start SymlinkDhcpdConf ( orestes-corosync.nevis.columbia.edu ) - * Start 
SymlinkSysconfigDhcpd ( orestes-corosync.nevis.columbia.edu ) - * Start SymlinkDhcpdLeases ( orestes-corosync.nevis.columbia.edu ) - * Start SymlinkUsrShareCups:0 ( hypatia-corosync.nevis.columbia.edu ) - * Start SymlinkCupsdConf:0 ( hypatia-corosync.nevis.columbia.edu ) - * Start Cups:0 ( hypatia-corosync.nevis.columbia.edu ) - * Start SymlinkUsrShareCups:1 ( orestes-corosync.nevis.columbia.edu ) - * Start SymlinkCupsdConf:1 ( orestes-corosync.nevis.columbia.edu ) - * Start Cups:1 ( orestes-corosync.nevis.columbia.edu ) - * Start ClusterIP:0 ( hypatia-corosync.nevis.columbia.edu ) - * Start ClusterIPLocal:0 ( hypatia-corosync.nevis.columbia.edu ) - * Start ClusterIPSandbox:0 ( hypatia-corosync.nevis.columbia.edu ) - * Start ClusterIP:1 ( orestes-corosync.nevis.columbia.edu ) - * Start ClusterIPLocal:1 ( orestes-corosync.nevis.columbia.edu ) - * Start ClusterIPSandbox:1 ( orestes-corosync.nevis.columbia.edu ) - * Start SymlinkEtcLibvirt:0 ( hypatia-corosync.nevis.columbia.edu ) - * Start Libvirtd:0 ( hypatia-corosync.nevis.columbia.edu ) - * Start SymlinkEtcLibvirt:1 ( orestes-corosync.nevis.columbia.edu ) - * Start Libvirtd:1 ( orestes-corosync.nevis.columbia.edu ) - * Start SymlinkTftp:0 ( hypatia-corosync.nevis.columbia.edu ) - * Start Xinetd:0 ( hypatia-corosync.nevis.columbia.edu ) - * Start SymlinkTftp:1 ( orestes-corosync.nevis.columbia.edu ) - * Start Xinetd:1 ( orestes-corosync.nevis.columbia.edu ) - * Start ExportMail:0 ( hypatia-corosync.nevis.columbia.edu ) - * Start ExportMailInbox:0 ( hypatia-corosync.nevis.columbia.edu ) - * Start ExportMailFolders:0 ( hypatia-corosync.nevis.columbia.edu ) - * Start ExportMailForward:0 ( hypatia-corosync.nevis.columbia.edu ) - * Start ExportMailProcmailrc:0 ( hypatia-corosync.nevis.columbia.edu ) - * Start ExportUsrNevis:0 ( hypatia-corosync.nevis.columbia.edu ) - * Start ExportUsrNevisOffsite:0 ( hypatia-corosync.nevis.columbia.edu ) - * Start ExportWWW:0 ( hypatia-corosync.nevis.columbia.edu ) - * Start 
ExportMail:1 ( orestes-corosync.nevis.columbia.edu ) - * Start ExportMailInbox:1 ( orestes-corosync.nevis.columbia.edu ) - * Start ExportMailFolders:1 ( orestes-corosync.nevis.columbia.edu ) - * Start ExportMailForward:1 ( orestes-corosync.nevis.columbia.edu ) - * Start ExportMailProcmailrc:1 ( orestes-corosync.nevis.columbia.edu ) - * Start ExportUsrNevis:1 ( orestes-corosync.nevis.columbia.edu ) - * Start ExportUsrNevisOffsite:1 ( orestes-corosync.nevis.columbia.edu ) - * Start ExportWWW:1 ( orestes-corosync.nevis.columbia.edu ) - * Start AdminLvm:0 ( hypatia-corosync.nevis.columbia.edu ) - * Start FSUsrNevis:0 ( hypatia-corosync.nevis.columbia.edu ) - * Start FSVarNevis:0 ( hypatia-corosync.nevis.columbia.edu ) - * Start FSVirtualMachines:0 ( hypatia-corosync.nevis.columbia.edu ) - * Start FSMail:0 ( hypatia-corosync.nevis.columbia.edu ) - * Start FSWork:0 ( hypatia-corosync.nevis.columbia.edu ) - * Start AdminLvm:1 ( orestes-corosync.nevis.columbia.edu ) - * Start FSUsrNevis:1 ( orestes-corosync.nevis.columbia.edu ) - * Start FSVarNevis:1 ( orestes-corosync.nevis.columbia.edu ) - * Start FSVirtualMachines:1 ( orestes-corosync.nevis.columbia.edu ) - * Start FSMail:1 ( orestes-corosync.nevis.columbia.edu ) - * Start FSWork:1 ( orestes-corosync.nevis.columbia.edu ) - * Start KVM-guest ( hypatia-corosync.nevis.columbia.edu ) - * Start Proxy ( orestes-corosync.nevis.columbia.edu ) + * Start CronAmbientTemperature ( hypatia-corosync.nevis.columbia.edu ) + * Start StonithHypatia ( orestes-corosync.nevis.columbia.edu ) + * Start StonithOrestes ( hypatia-corosync.nevis.columbia.edu ) + * Start SymlinkDhcpdConf ( orestes-corosync.nevis.columbia.edu ) + * Start SymlinkSysconfigDhcpd ( orestes-corosync.nevis.columbia.edu ) + * Start SymlinkDhcpdLeases ( orestes-corosync.nevis.columbia.edu ) + * Start SymlinkUsrShareCups:0 ( hypatia-corosync.nevis.columbia.edu ) + * Start SymlinkCupsdConf:0 ( hypatia-corosync.nevis.columbia.edu ) + * Start Cups:0 ( 
hypatia-corosync.nevis.columbia.edu ) + * Start SymlinkUsrShareCups:1 ( orestes-corosync.nevis.columbia.edu ) + * Start SymlinkCupsdConf:1 ( orestes-corosync.nevis.columbia.edu ) + * Start Cups:1 ( orestes-corosync.nevis.columbia.edu ) + * Start ClusterIP:0 ( hypatia-corosync.nevis.columbia.edu ) + * Start ClusterIPLocal:0 ( hypatia-corosync.nevis.columbia.edu ) + * Start ClusterIPSandbox:0 ( hypatia-corosync.nevis.columbia.edu ) + * Start ClusterIP:1 ( orestes-corosync.nevis.columbia.edu ) + * Start ClusterIPLocal:1 ( orestes-corosync.nevis.columbia.edu ) + * Start ClusterIPSandbox:1 ( orestes-corosync.nevis.columbia.edu ) + * Start SymlinkEtcLibvirt:0 ( hypatia-corosync.nevis.columbia.edu ) + * Start Libvirtd:0 ( hypatia-corosync.nevis.columbia.edu ) + * Start SymlinkEtcLibvirt:1 ( orestes-corosync.nevis.columbia.edu ) + * Start Libvirtd:1 ( orestes-corosync.nevis.columbia.edu ) + * Start SymlinkTftp:0 ( hypatia-corosync.nevis.columbia.edu ) + * Start Xinetd:0 ( hypatia-corosync.nevis.columbia.edu ) + * Start SymlinkTftp:1 ( orestes-corosync.nevis.columbia.edu ) + * Start Xinetd:1 ( orestes-corosync.nevis.columbia.edu ) + * Start ExportMail:0 ( hypatia-corosync.nevis.columbia.edu ) + * Start ExportMailInbox:0 ( hypatia-corosync.nevis.columbia.edu ) + * Start ExportMailFolders:0 ( hypatia-corosync.nevis.columbia.edu ) + * Start ExportMailForward:0 ( hypatia-corosync.nevis.columbia.edu ) + * Start ExportMailProcmailrc:0 ( hypatia-corosync.nevis.columbia.edu ) + * Start ExportUsrNevis:0 ( hypatia-corosync.nevis.columbia.edu ) + * Start ExportUsrNevisOffsite:0 ( hypatia-corosync.nevis.columbia.edu ) + * Start ExportWWW:0 ( hypatia-corosync.nevis.columbia.edu ) + * Start ExportMail:1 ( orestes-corosync.nevis.columbia.edu ) + * Start ExportMailInbox:1 ( orestes-corosync.nevis.columbia.edu ) + * Start ExportMailFolders:1 ( orestes-corosync.nevis.columbia.edu ) + * Start ExportMailForward:1 ( orestes-corosync.nevis.columbia.edu ) + * Start ExportMailProcmailrc:1 ( 
orestes-corosync.nevis.columbia.edu )
+  * Start ExportUsrNevis:1 ( orestes-corosync.nevis.columbia.edu )
+  * Start ExportUsrNevisOffsite:1 ( orestes-corosync.nevis.columbia.edu )
+  * Start ExportWWW:1 ( orestes-corosync.nevis.columbia.edu )
+  * Start AdminLvm:0 ( hypatia-corosync.nevis.columbia.edu )
+  * Start FSUsrNevis:0 ( hypatia-corosync.nevis.columbia.edu )
+  * Start FSVarNevis:0 ( hypatia-corosync.nevis.columbia.edu )
+  * Start FSVirtualMachines:0 ( hypatia-corosync.nevis.columbia.edu )
+  * Start FSMail:0 ( hypatia-corosync.nevis.columbia.edu )
+  * Start FSWork:0 ( hypatia-corosync.nevis.columbia.edu )
+  * Start AdminLvm:1 ( orestes-corosync.nevis.columbia.edu )
+  * Start FSUsrNevis:1 ( orestes-corosync.nevis.columbia.edu )
+  * Start FSVarNevis:1 ( orestes-corosync.nevis.columbia.edu )
+  * Start FSVirtualMachines:1 ( orestes-corosync.nevis.columbia.edu )
+  * Start FSMail:1 ( orestes-corosync.nevis.columbia.edu )
+  * Start FSWork:1 ( orestes-corosync.nevis.columbia.edu )
+  * Start KVM-guest ( hypatia-corosync.nevis.columbia.edu )
+  * Start Proxy ( orestes-corosync.nevis.columbia.edu )
 
 Executing Cluster Transition:
   * Pseudo action: AdminClone_pre_notify_start_0
   * Resource action: StonithHypatia start on orestes-corosync.nevis.columbia.edu
   * Resource action: StonithOrestes start on hypatia-corosync.nevis.columbia.edu
   * Resource action: SymlinkEtcLibvirt:0 monitor on hypatia-corosync.nevis.columbia.edu
   * Resource action: Libvirtd:0 monitor on orestes-corosync.nevis.columbia.edu
   * Resource action: Libvirtd:0 monitor on hypatia-corosync.nevis.columbia.edu
   * Resource action: SymlinkTftp:0 monitor on hypatia-corosync.nevis.columbia.edu
   * Resource action: Xinetd:0 monitor on hypatia-corosync.nevis.columbia.edu
   * Resource action: SymlinkTftp:1 monitor on orestes-corosync.nevis.columbia.edu
   * Resource action: Xinetd:1 monitor on orestes-corosync.nevis.columbia.edu
   * Resource action: ExportMail:0 monitor on hypatia-corosync.nevis.columbia.edu
   * Resource action: ExportMailInbox:0 monitor on hypatia-corosync.nevis.columbia.edu
   * Resource action: ExportMailFolders:0 monitor on hypatia-corosync.nevis.columbia.edu
   * Resource action: ExportMailForward:0 monitor on hypatia-corosync.nevis.columbia.edu
   * Resource action: ExportMailProcmailrc:0 monitor on hypatia-corosync.nevis.columbia.edu
   * Resource action: ExportUsrNevis:0 monitor on hypatia-corosync.nevis.columbia.edu
   * Resource action: ExportUsrNevisOffsite:0 monitor on hypatia-corosync.nevis.columbia.edu
   * Resource action: ExportWWW:0 monitor on hypatia-corosync.nevis.columbia.edu
   * Resource action: ExportMail:1 monitor on orestes-corosync.nevis.columbia.edu
   * Resource action: ExportMailInbox:1 monitor on orestes-corosync.nevis.columbia.edu
   * Resource action: ExportMailFolders:1 monitor on orestes-corosync.nevis.columbia.edu
   * Resource action: ExportMailForward:1 monitor on orestes-corosync.nevis.columbia.edu
   * Resource action: ExportMailProcmailrc:1 monitor on orestes-corosync.nevis.columbia.edu
   * Resource action: ExportUsrNevis:1 monitor on orestes-corosync.nevis.columbia.edu
   * Resource action: ExportUsrNevisOffsite:1 monitor on orestes-corosync.nevis.columbia.edu
   * Resource action: ExportWWW:1 monitor on orestes-corosync.nevis.columbia.edu
   * Resource action: AdminLvm:0 monitor on hypatia-corosync.nevis.columbia.edu
   * Resource action: FSUsrNevis:0 monitor on hypatia-corosync.nevis.columbia.edu
   * Resource action: FSVarNevis:0 monitor on hypatia-corosync.nevis.columbia.edu
   * Resource action: FSVirtualMachines:0 monitor on hypatia-corosync.nevis.columbia.edu
   * Resource action: FSMail:0 monitor on hypatia-corosync.nevis.columbia.edu
   * Resource action: FSWork:0 monitor on hypatia-corosync.nevis.columbia.edu
   * Resource action: AdminLvm:1 monitor on orestes-corosync.nevis.columbia.edu
   * Resource action: FSUsrNevis:1 monitor on orestes-corosync.nevis.columbia.edu
   * Resource action: FSVarNevis:1 monitor on orestes-corosync.nevis.columbia.edu
   * Resource action: FSVirtualMachines:1 monitor on orestes-corosync.nevis.columbia.edu
   * Resource action: FSMail:1 monitor on orestes-corosync.nevis.columbia.edu
   * Resource action: FSWork:1 monitor on orestes-corosync.nevis.columbia.edu
   * Resource action: KVM-guest monitor on orestes-corosync.nevis.columbia.edu
   * Resource action: KVM-guest monitor on hypatia-corosync.nevis.columbia.edu
   * Resource action: Proxy monitor on orestes-corosync.nevis.columbia.edu
   * Resource action: Proxy monitor on hypatia-corosync.nevis.columbia.edu
   * Pseudo action: AdminClone_confirmed-pre_notify_start_0
   * Pseudo action: AdminClone_start_0
   * Resource action: AdminDrbd:0 start on hypatia-corosync.nevis.columbia.edu
   * Resource action: AdminDrbd:1 start on orestes-corosync.nevis.columbia.edu
   * Pseudo action: AdminClone_running_0
   * Pseudo action: AdminClone_post_notify_running_0
   * Resource action: AdminDrbd:0 notify on hypatia-corosync.nevis.columbia.edu
   * Resource action: AdminDrbd:1 notify on orestes-corosync.nevis.columbia.edu
   * Pseudo action: AdminClone_confirmed-post_notify_running_0
   * Pseudo action: AdminClone_pre_notify_promote_0
   * Resource action: AdminDrbd:0 notify on hypatia-corosync.nevis.columbia.edu
   * Resource action: AdminDrbd:1 notify on orestes-corosync.nevis.columbia.edu
   * Pseudo action: AdminClone_confirmed-pre_notify_promote_0
   * Pseudo action: AdminClone_promote_0
   * Resource action: AdminDrbd:0 promote on hypatia-corosync.nevis.columbia.edu
   * Resource action: AdminDrbd:1 promote on orestes-corosync.nevis.columbia.edu
   * Pseudo action: AdminClone_promoted_0
   * Pseudo action: AdminClone_post_notify_promoted_0
   * Resource action: AdminDrbd:0 notify on hypatia-corosync.nevis.columbia.edu
   * Resource action: AdminDrbd:1 notify on orestes-corosync.nevis.columbia.edu
   * Pseudo action: AdminClone_confirmed-post_notify_promoted_0
   * Pseudo action: FilesystemClone_start_0
   * Resource action: AdminDrbd:0 monitor=59000 on hypatia-corosync.nevis.columbia.edu
   * Resource action: AdminDrbd:1 monitor=59000 on orestes-corosync.nevis.columbia.edu
   * Pseudo action: FilesystemGroup:0_start_0
   * Resource action: AdminLvm:0 start on hypatia-corosync.nevis.columbia.edu
   * Resource action: FSUsrNevis:0 start on hypatia-corosync.nevis.columbia.edu
   * Resource action: FSVarNevis:0 start on hypatia-corosync.nevis.columbia.edu
   * Resource action: FSVirtualMachines:0 start on hypatia-corosync.nevis.columbia.edu
   * Resource action: FSMail:0 start on hypatia-corosync.nevis.columbia.edu
   * Resource action: FSWork:0 start on hypatia-corosync.nevis.columbia.edu
   * Pseudo action: FilesystemGroup:1_start_0
   * Resource action: AdminLvm:1 start on orestes-corosync.nevis.columbia.edu
   * Resource action: FSUsrNevis:1 start on orestes-corosync.nevis.columbia.edu
   * Resource action: FSVarNevis:1 start on orestes-corosync.nevis.columbia.edu
   * Resource action: FSVirtualMachines:1 start on orestes-corosync.nevis.columbia.edu
   * Resource action: FSMail:1 start on orestes-corosync.nevis.columbia.edu
   * Resource action: FSWork:1 start on orestes-corosync.nevis.columbia.edu
   * Pseudo action: FilesystemGroup:0_running_0
   * Resource action: AdminLvm:0 monitor=30000 on hypatia-corosync.nevis.columbia.edu
   * Resource action: FSUsrNevis:0 monitor=20000 on hypatia-corosync.nevis.columbia.edu
   * Resource action: FSVarNevis:0 monitor=20000 on hypatia-corosync.nevis.columbia.edu
   * Resource action: FSVirtualMachines:0 monitor=20000 on hypatia-corosync.nevis.columbia.edu
   * Resource action: FSMail:0 monitor=20000 on hypatia-corosync.nevis.columbia.edu
   * Resource action: FSWork:0 monitor=20000 on hypatia-corosync.nevis.columbia.edu
   * Pseudo action: FilesystemGroup:1_running_0
   * Resource action: AdminLvm:1 monitor=30000 on orestes-corosync.nevis.columbia.edu
   * Resource action: FSUsrNevis:1 monitor=20000 on orestes-corosync.nevis.columbia.edu
   * Resource action: FSVarNevis:1 monitor=20000 on orestes-corosync.nevis.columbia.edu
   * Resource action: FSVirtualMachines:1 monitor=20000 on orestes-corosync.nevis.columbia.edu
   * Resource action: FSMail:1 monitor=20000 on orestes-corosync.nevis.columbia.edu
   * Resource action: FSWork:1 monitor=20000 on orestes-corosync.nevis.columbia.edu
   * Pseudo action: FilesystemClone_running_0
   * Resource action: CronAmbientTemperature start on hypatia-corosync.nevis.columbia.edu
   * Pseudo action: DhcpGroup_start_0
   * Resource action: SymlinkDhcpdConf start on orestes-corosync.nevis.columbia.edu
   * Resource action: SymlinkSysconfigDhcpd start on orestes-corosync.nevis.columbia.edu
   * Resource action: SymlinkDhcpdLeases start on orestes-corosync.nevis.columbia.edu
   * Pseudo action: CupsClone_start_0
   * Pseudo action: IPClone_start_0
   * Pseudo action: LibvirtdClone_start_0
   * Pseudo action: TftpClone_start_0
   * Pseudo action: ExportsClone_start_0
   * Resource action: CronAmbientTemperature monitor=60000 on hypatia-corosync.nevis.columbia.edu
   * Resource action: SymlinkDhcpdConf monitor=60000 on orestes-corosync.nevis.columbia.edu
   * Resource action: SymlinkSysconfigDhcpd monitor=60000 on orestes-corosync.nevis.columbia.edu
   * Resource action: SymlinkDhcpdLeases monitor=60000 on orestes-corosync.nevis.columbia.edu
   * Pseudo action: CupsGroup:0_start_0
   * Resource action: SymlinkUsrShareCups:0 start on hypatia-corosync.nevis.columbia.edu
   * Resource action: SymlinkCupsdConf:0 start on hypatia-corosync.nevis.columbia.edu
   * Resource action: Cups:0 start on hypatia-corosync.nevis.columbia.edu
   * Pseudo action: CupsGroup:1_start_0
   * Resource action: SymlinkUsrShareCups:1 start on orestes-corosync.nevis.columbia.edu
   * Resource action: SymlinkCupsdConf:1 start on orestes-corosync.nevis.columbia.edu
   * Resource action: Cups:1 start on orestes-corosync.nevis.columbia.edu
   * Pseudo action: IPGroup:0_start_0
   * Resource action: ClusterIP:0 start on hypatia-corosync.nevis.columbia.edu
   * Resource action: ClusterIPLocal:0 start on hypatia-corosync.nevis.columbia.edu
   * Resource action: ClusterIPSandbox:0 start on hypatia-corosync.nevis.columbia.edu
   * Pseudo action: IPGroup:1_start_0
   * Resource action: ClusterIP:1 start on orestes-corosync.nevis.columbia.edu
   * Resource action: ClusterIPLocal:1 start on orestes-corosync.nevis.columbia.edu
   * Resource action: ClusterIPSandbox:1 start on orestes-corosync.nevis.columbia.edu
   * Pseudo action: LibvirtdGroup:0_start_0
   * Resource action: SymlinkEtcLibvirt:0 start on hypatia-corosync.nevis.columbia.edu
   * Resource action: Libvirtd:0 start on hypatia-corosync.nevis.columbia.edu
   * Pseudo action: LibvirtdGroup:1_start_0
   * Resource action: SymlinkEtcLibvirt:1 start on orestes-corosync.nevis.columbia.edu
   * Resource action: Libvirtd:1 start on orestes-corosync.nevis.columbia.edu
   * Pseudo action: TftpGroup:0_start_0
   * Resource action: SymlinkTftp:0 start on hypatia-corosync.nevis.columbia.edu
   * Resource action: Xinetd:0 start on hypatia-corosync.nevis.columbia.edu
   * Pseudo action: TftpGroup:1_start_0
   * Resource action: SymlinkTftp:1 start on orestes-corosync.nevis.columbia.edu
   * Resource action: Xinetd:1 start on orestes-corosync.nevis.columbia.edu
   * Pseudo action: ExportsGroup:0_start_0
   * Resource action: ExportMail:0 start on hypatia-corosync.nevis.columbia.edu
   * Resource action: ExportMailInbox:0 start on hypatia-corosync.nevis.columbia.edu
   * Resource action: ExportMailFolders:0 start on hypatia-corosync.nevis.columbia.edu
   * Resource action: ExportMailForward:0 start on hypatia-corosync.nevis.columbia.edu
   * Resource action: ExportMailProcmailrc:0 start on hypatia-corosync.nevis.columbia.edu
   * Resource action: ExportUsrNevis:0 start on hypatia-corosync.nevis.columbia.edu
   * Resource action: ExportUsrNevisOffsite:0 start on hypatia-corosync.nevis.columbia.edu
   * Resource action: ExportWWW:0 start on hypatia-corosync.nevis.columbia.edu
   * Pseudo action: ExportsGroup:1_start_0
   * Resource action: ExportMail:1 start on orestes-corosync.nevis.columbia.edu
   * Resource action: ExportMailInbox:1 start on orestes-corosync.nevis.columbia.edu
   * Resource action: ExportMailFolders:1 start on orestes-corosync.nevis.columbia.edu
   * Resource action: ExportMailForward:1 start on orestes-corosync.nevis.columbia.edu
   * Resource action: ExportMailProcmailrc:1 start on orestes-corosync.nevis.columbia.edu
   * Resource action: ExportUsrNevis:1 start on orestes-corosync.nevis.columbia.edu
   * Resource action: ExportUsrNevisOffsite:1 start on orestes-corosync.nevis.columbia.edu
   * Resource action: ExportWWW:1 start on orestes-corosync.nevis.columbia.edu
   * Pseudo action: CupsGroup:0_running_0
   * Resource action: SymlinkUsrShareCups:0 monitor=60000 on hypatia-corosync.nevis.columbia.edu
   * Resource action: SymlinkCupsdConf:0 monitor=60000 on hypatia-corosync.nevis.columbia.edu
   * Resource action: Cups:0 monitor=30000 on hypatia-corosync.nevis.columbia.edu
   * Pseudo action: CupsGroup:1_running_0
   * Resource action: SymlinkUsrShareCups:1 monitor=60000 on orestes-corosync.nevis.columbia.edu
   * Resource action: SymlinkCupsdConf:1 monitor=60000 on orestes-corosync.nevis.columbia.edu
   * Resource action: Cups:1 monitor=30000 on orestes-corosync.nevis.columbia.edu
   * Pseudo action: CupsClone_running_0
   * Pseudo action: IPGroup:0_running_0
   * Resource action: ClusterIP:0 monitor=30000 on hypatia-corosync.nevis.columbia.edu
   * Resource action: ClusterIPLocal:0 monitor=31000 on hypatia-corosync.nevis.columbia.edu
   * Resource action: ClusterIPSandbox:0 monitor=32000 on hypatia-corosync.nevis.columbia.edu
   * Pseudo action: IPGroup:1_running_0
   * Resource action: ClusterIP:1 monitor=30000 on orestes-corosync.nevis.columbia.edu
   * Resource action: ClusterIPLocal:1 monitor=31000 on orestes-corosync.nevis.columbia.edu
   * Resource action: ClusterIPSandbox:1 monitor=32000 on orestes-corosync.nevis.columbia.edu
   * Pseudo action: IPClone_running_0
   * Pseudo action: LibvirtdGroup:0_running_0
   * Resource action: SymlinkEtcLibvirt:0 monitor=60000 on hypatia-corosync.nevis.columbia.edu
   * Resource action: Libvirtd:0 monitor=30000 on hypatia-corosync.nevis.columbia.edu
   * Pseudo action: LibvirtdGroup:1_running_0
   * Resource action: SymlinkEtcLibvirt:1 monitor=60000 on orestes-corosync.nevis.columbia.edu
   * Resource action: Libvirtd:1 monitor=30000 on orestes-corosync.nevis.columbia.edu
   * Pseudo action: LibvirtdClone_running_0
   * Pseudo action: TftpGroup:0_running_0
   * Resource action: SymlinkTftp:0 monitor=60000 on hypatia-corosync.nevis.columbia.edu
   * Pseudo action: TftpGroup:1_running_0
   * Resource action: SymlinkTftp:1 monitor=60000 on orestes-corosync.nevis.columbia.edu
   * Pseudo action: TftpClone_running_0
   * Pseudo action: ExportsGroup:0_running_0
   * Pseudo action: ExportsGroup:1_running_0
   * Pseudo action: ExportsClone_running_0
   * Resource action: KVM-guest start on hypatia-corosync.nevis.columbia.edu
   * Resource action: Proxy start on orestes-corosync.nevis.columbia.edu
 
 Revised Cluster Status:
   * Node List:
     * Online: [ hypatia-corosync.nevis.columbia.edu orestes-corosync.nevis.columbia.edu ]
 
   * Full List of Resources:
     * Clone Set: AdminClone [AdminDrbd] (promotable):
       * Promoted: [ hypatia-corosync.nevis.columbia.edu orestes-corosync.nevis.columbia.edu ]
     * CronAmbientTemperature (ocf:heartbeat:symlink): Started hypatia-corosync.nevis.columbia.edu
     * StonithHypatia (stonith:fence_nut): Started orestes-corosync.nevis.columbia.edu
     * StonithOrestes (stonith:fence_nut): Started hypatia-corosync.nevis.columbia.edu
     * Resource Group: DhcpGroup:
       * SymlinkDhcpdConf (ocf:heartbeat:symlink): Started orestes-corosync.nevis.columbia.edu
       * SymlinkSysconfigDhcpd (ocf:heartbeat:symlink): Started orestes-corosync.nevis.columbia.edu
       * SymlinkDhcpdLeases (ocf:heartbeat:symlink): Started orestes-corosync.nevis.columbia.edu
       * Dhcpd (lsb:dhcpd): Stopped (disabled)
       * DhcpIP (ocf:heartbeat:IPaddr2): Stopped
     * Clone Set: CupsClone [CupsGroup]:
       * Started: [ hypatia-corosync.nevis.columbia.edu orestes-corosync.nevis.columbia.edu ]
     * Clone Set: IPClone [IPGroup] (unique):
       * Resource Group: IPGroup:0:
         * ClusterIP:0 (ocf:heartbeat:IPaddr2): Started hypatia-corosync.nevis.columbia.edu
         * ClusterIPLocal:0 (ocf:heartbeat:IPaddr2): Started hypatia-corosync.nevis.columbia.edu
         * ClusterIPSandbox:0 (ocf:heartbeat:IPaddr2): Started hypatia-corosync.nevis.columbia.edu
       * Resource Group: IPGroup:1:
         * ClusterIP:1 (ocf:heartbeat:IPaddr2): Started orestes-corosync.nevis.columbia.edu
         * ClusterIPLocal:1 (ocf:heartbeat:IPaddr2): Started orestes-corosync.nevis.columbia.edu
         * ClusterIPSandbox:1 (ocf:heartbeat:IPaddr2): Started orestes-corosync.nevis.columbia.edu
     * Clone Set: LibvirtdClone [LibvirtdGroup]:
       * Started: [ hypatia-corosync.nevis.columbia.edu orestes-corosync.nevis.columbia.edu ]
     * Clone Set: TftpClone [TftpGroup]:
       * Started: [ hypatia-corosync.nevis.columbia.edu orestes-corosync.nevis.columbia.edu ]
     * Clone Set: ExportsClone [ExportsGroup]:
       * Started: [ hypatia-corosync.nevis.columbia.edu orestes-corosync.nevis.columbia.edu ]
     * Clone Set: FilesystemClone [FilesystemGroup]:
       * Started: [ hypatia-corosync.nevis.columbia.edu orestes-corosync.nevis.columbia.edu ]
     * KVM-guest (ocf:heartbeat:VirtualDomain): Started hypatia-corosync.nevis.columbia.edu
     * Proxy (ocf:heartbeat:VirtualDomain): Started orestes-corosync.nevis.columbia.edu
diff --git a/cts/scheduler/summary/promoted-pseudo.summary b/cts/scheduler/summary/promoted-pseudo.summary
index b28ab7168d..92302e773d 100644
--- a/cts/scheduler/summary/promoted-pseudo.summary
+++ b/cts/scheduler/summary/promoted-pseudo.summary
@@ -1,60 +1,60 @@
 Current cluster status:
   * Node List:
     * Node raki.linbit: standby
     * Online: [ sambuca.linbit ]
 
   * Full List of Resources:
     * ip_float_right (ocf:heartbeat:IPaddr2): Stopped
     * Clone Set: ms_drbd_float [drbd_float] (promotable):
       * Unpromoted: [ sambuca.linbit ]
     * Resource Group: nfsexport:
       * ip_nfs (ocf:heartbeat:IPaddr2): Stopped
       * fs_float (ocf:heartbeat:Filesystem): Stopped
 
 Transition Summary:
-  * Start ip_float_right ( sambuca.linbit )
+  * Start ip_float_right ( sambuca.linbit )
   * Restart drbd_float:0 ( Unpromoted -> Promoted sambuca.linbit ) due to required ip_float_right start
-  * Start ip_nfs ( sambuca.linbit )
+  * Start ip_nfs ( sambuca.linbit )
 
 Executing Cluster Transition:
   * Resource action: ip_float_right start on sambuca.linbit
   * Pseudo action: ms_drbd_float_pre_notify_stop_0
   * Resource action: drbd_float:0 notify on sambuca.linbit
   * Pseudo action: ms_drbd_float_confirmed-pre_notify_stop_0
   * Pseudo action: ms_drbd_float_stop_0
   * Resource action: drbd_float:0 stop on sambuca.linbit
   * Pseudo action: ms_drbd_float_stopped_0
   * Pseudo action: ms_drbd_float_post_notify_stopped_0
   * Pseudo action: ms_drbd_float_confirmed-post_notify_stopped_0
   * Pseudo action: ms_drbd_float_pre_notify_start_0
   * Pseudo action: ms_drbd_float_confirmed-pre_notify_start_0
   * Pseudo action: ms_drbd_float_start_0
   * Resource action: drbd_float:0 start on sambuca.linbit
   * Pseudo action: ms_drbd_float_running_0
   * Pseudo action: ms_drbd_float_post_notify_running_0
   * Resource action: drbd_float:0 notify on sambuca.linbit
   * Pseudo action: ms_drbd_float_confirmed-post_notify_running_0
   * Pseudo action: ms_drbd_float_pre_notify_promote_0
   * Resource action: drbd_float:0 notify on sambuca.linbit
   * Pseudo action: ms_drbd_float_confirmed-pre_notify_promote_0
   * Pseudo action: ms_drbd_float_promote_0
   * Resource action: drbd_float:0 promote on sambuca.linbit
   * Pseudo action: ms_drbd_float_promoted_0
   * Pseudo action: ms_drbd_float_post_notify_promoted_0
   * Resource action: drbd_float:0 notify on sambuca.linbit
   * Pseudo action: ms_drbd_float_confirmed-post_notify_promoted_0
   * Pseudo action: nfsexport_start_0
   * Resource action: ip_nfs start on sambuca.linbit
 
 Revised Cluster Status:
   * Node List:
     * Node raki.linbit: standby
     * Online: [ sambuca.linbit ]
 
   * Full List of Resources:
     * ip_float_right (ocf:heartbeat:IPaddr2): Started sambuca.linbit
     * Clone Set: ms_drbd_float [drbd_float] (promotable):
       * Promoted: [ sambuca.linbit ]
     * Resource Group: nfsexport:
       * ip_nfs (ocf:heartbeat:IPaddr2): Started sambuca.linbit
       * fs_float (ocf:heartbeat:Filesystem): Stopped
diff --git a/cts/scheduler/summary/promoted-score-startup.summary b/cts/scheduler/summary/promoted-score-startup.summary
index 9f527815d8..f9d36405d1 100644
--- a/cts/scheduler/summary/promoted-score-startup.summary
+++ b/cts/scheduler/summary/promoted-score-startup.summary
@@ -1,54 +1,54 @@
 Current cluster status:
   * Node List:
     * Online: [ srv1 srv2 ]
 
   * Full List of Resources:
     * Clone Set: pgsql-ha [pgsqld] (promotable):
       * Stopped: [ srv1 srv2 ]
     * pgsql-master-ip (ocf:heartbeat:IPaddr2): Stopped
 
 Transition Summary:
   * Promote pgsqld:0 ( Stopped -> Promoted srv1 )
-  * Start pgsqld:1 ( srv2 )
-  * Start pgsql-master-ip ( srv1 )
+  * Start pgsqld:1 ( srv2 )
+  * Start pgsql-master-ip ( srv1 )
 
 Executing Cluster Transition:
   * Resource action: pgsqld:0 monitor on srv1
   * Resource action: pgsqld:1 monitor on srv2
   * Pseudo action: pgsql-ha_pre_notify_start_0
   * Resource action: pgsql-master-ip monitor on srv2
   * Resource action: pgsql-master-ip monitor on srv1
   * Pseudo action: pgsql-ha_confirmed-pre_notify_start_0
   * Pseudo action: pgsql-ha_start_0
   * Resource action: pgsqld:0 start on srv1
   * Resource action: pgsqld:1 start on srv2
   * Pseudo action: pgsql-ha_running_0
   * Pseudo action: pgsql-ha_post_notify_running_0
   * Resource action: pgsqld:0 notify on srv1
   * Resource action: pgsqld:1 notify on srv2
   * Pseudo action: pgsql-ha_confirmed-post_notify_running_0
   * Pseudo action: pgsql-ha_pre_notify_promote_0
   * Resource action: pgsqld:0 notify on srv1
   * Resource action: pgsqld:1 notify on srv2
   * Pseudo action: pgsql-ha_confirmed-pre_notify_promote_0
   * Pseudo action: pgsql-ha_promote_0
   * Resource action: pgsqld:0 promote on srv1
   * Pseudo action: pgsql-ha_promoted_0
   * Pseudo action: pgsql-ha_post_notify_promoted_0
   * Resource action: pgsqld:0 notify on srv1
   * Resource action: pgsqld:1 notify on srv2
   * Pseudo action: pgsql-ha_confirmed-post_notify_promoted_0
   * Resource action: pgsql-master-ip start on srv1
   * Resource action: pgsqld:0 monitor=15000 on srv1
   * Resource action: pgsqld:1 monitor=16000 on srv2
   * Resource action: pgsql-master-ip monitor=10000 on srv1
 
 Revised Cluster Status:
   * Node List:
     * Online: [ srv1 srv2 ]
 
   * Full List of Resources:
     * Clone Set: pgsql-ha [pgsqld] (promotable):
       * Promoted: [ srv1 ]
       * Unpromoted: [ srv2 ]
     * pgsql-master-ip (ocf:heartbeat:IPaddr2): Started srv1
diff --git a/cts/scheduler/summary/remote-connection-unrecoverable.summary b/cts/scheduler/summary/remote-connection-unrecoverable.summary
index 727dad2b29..ad8f353b6a 100644
--- a/cts/scheduler/summary/remote-connection-unrecoverable.summary
+++ b/cts/scheduler/summary/remote-connection-unrecoverable.summary
@@ -1,54 +1,54 @@
 Current cluster status:
   * Node List:
     * Node node1: UNCLEAN (offline)
     * Online: [ node2 ]
     * RemoteOnline: [ remote1 ]
 
   * Full List of Resources:
     * remote1 (ocf:pacemaker:remote): Started node1 (UNCLEAN)
     * killer (stonith:fence_xvm): Started node2
     * rsc1 (ocf:pacemaker:Dummy): Started remote1
     * Clone Set: rsc2-master [rsc2] (promotable):
       * rsc2 (ocf:pacemaker:Stateful): Promoted node1 (UNCLEAN)
       * Promoted: [ node2 ]
       * Stopped: [ remote1 ]
 
 Transition Summary:
   * Fence (reboot) remote1 'resources are active but connection is unrecoverable'
   * Fence (reboot) node1 'peer is no longer part of the cluster'
   * Stop remote1 ( node1 ) due to node availability
   * Restart killer ( node2 ) due to resource definition change
   * Move rsc1 ( remote1 -> node2 )
-  * Stop rsc2:0 ( Promoted node1 ) due to node availability
+  * Stop rsc2:0 ( Promoted node1 ) due to node availability
 
 Executing Cluster Transition:
   * Pseudo action: remote1_stop_0
   * Resource action: killer stop on node2
   * Resource action: rsc1 monitor on node2
   * Fencing node1 (reboot)
   * Fencing remote1 (reboot)
   * Resource action: killer start on node2
   * Resource action: killer monitor=60000 on node2
   * Pseudo action: rsc1_stop_0
   * Pseudo action: rsc2-master_demote_0
   * Resource action: rsc1 start on node2
   * Pseudo action: rsc2_demote_0
   * Pseudo action: rsc2-master_demoted_0
   * Pseudo action: rsc2-master_stop_0
   * Resource action: rsc1 monitor=10000 on node2
   * Pseudo action: rsc2_stop_0
   * Pseudo action: rsc2-master_stopped_0
 
 Revised Cluster Status:
   * Node List:
     * Online: [ node2 ]
     * OFFLINE: [ node1 ]
     * RemoteOFFLINE: [ remote1 ]
 
   * Full List of Resources:
     * remote1 (ocf:pacemaker:remote): Stopped
     * killer (stonith:fence_xvm): Started node2
     * rsc1 (ocf:pacemaker:Dummy): Started node2
     * Clone Set: rsc2-master [rsc2] (promotable):
       * Promoted: [ node2 ]
       * Stopped: [ node1 remote1 ]
diff --git a/cts/scheduler/summary/remote-recover-all.summary b/cts/scheduler/summary/remote-recover-all.summary
index 5a7d3ce3fa..5052ad7cfc 100644
--- a/cts/scheduler/summary/remote-recover-all.summary
+++ b/cts/scheduler/summary/remote-recover-all.summary
@@ -1,146 +1,146 @@
 Using the original execution date of: 2017-05-03 13:33:24Z
 Current cluster status:
   * Node List:
     * Node controller-1: UNCLEAN (offline)
     * Online: [ controller-0 controller-2 ]
     * RemoteOnline: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
 
   * Full List of Resources:
     * messaging-0 (ocf:pacemaker:remote): Started controller-0
     * messaging-1 (ocf:pacemaker:remote): Started controller-1 (UNCLEAN)
     * messaging-2 (ocf:pacemaker:remote): Started controller-0
     * galera-0 (ocf:pacemaker:remote): Started controller-1 (UNCLEAN)
     * galera-1 (ocf:pacemaker:remote): Started controller-0
     * galera-2 (ocf:pacemaker:remote): Started controller-1 (UNCLEAN)
     * Clone Set: rabbitmq-clone [rabbitmq]:
       * Started: [ messaging-0 messaging-1 messaging-2 ]
       * Stopped: [ controller-0 controller-1 controller-2 galera-0 galera-1 galera-2 ]
     * Clone Set: galera-master [galera] (promotable):
       * Promoted: [ galera-0 galera-1 galera-2 ]
       * Stopped: [ controller-0 controller-1 controller-2 messaging-0 messaging-1 messaging-2 ]
     * Clone Set: redis-master [redis] (promotable):
       * redis (ocf:heartbeat:redis): Unpromoted controller-1 (UNCLEAN)
       * Promoted: [ controller-0 ]
       * Unpromoted: [ controller-2 ]
       * Stopped: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
     * ip-192.168.24.6 (ocf:heartbeat:IPaddr2): Started controller-0
     * ip-10.0.0.102 (ocf:heartbeat:IPaddr2): Started controller-0
     * ip-172.17.1.14 (ocf:heartbeat:IPaddr2): Started controller-1 (UNCLEAN)
     * ip-172.17.1.17 (ocf:heartbeat:IPaddr2): Started controller-1 (UNCLEAN)
     * ip-172.17.3.15 (ocf:heartbeat:IPaddr2): Started controller-0
     * ip-172.17.4.11 (ocf:heartbeat:IPaddr2): Started controller-1 (UNCLEAN)
     * Clone Set: haproxy-clone [haproxy]:
       * haproxy (systemd:haproxy): Started controller-1 (UNCLEAN)
       * Started: [ controller-0 controller-2 ]
       * Stopped: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
     * openstack-cinder-volume (systemd:openstack-cinder-volume): Started controller-0
     * stonith-fence_ipmilan-525400bbf613 (stonith:fence_ipmilan): Started controller-0
     * stonith-fence_ipmilan-525400b4f6bd (stonith:fence_ipmilan): Started controller-0
     * stonith-fence_ipmilan-5254005bdbb5 (stonith:fence_ipmilan): Started controller-1 (UNCLEAN)
 
 Transition Summary:
   * Fence (reboot) messaging-1 'resources are active but connection is unrecoverable'
   * Fence (reboot) galera-2 'resources are active but connection is unrecoverable'
   * Fence (reboot) controller-1 'peer is no longer part of the cluster'
   * Stop messaging-1 ( controller-1 ) due to node availability
   * Move galera-0 ( controller-1 -> controller-2 )
   * Stop galera-2 ( controller-1 ) due to node availability
   * Stop rabbitmq:2 ( messaging-1 ) due to node availability
-  * Stop galera:1 ( Promoted galera-2 ) due to node availability
-  * Stop redis:0 ( Unpromoted controller-1 ) due to node availability
+  * Stop galera:1 ( Promoted galera-2 ) due to node availability
+  * Stop redis:0 ( Unpromoted controller-1 ) due to node availability
   * Move ip-172.17.1.14 ( controller-1 -> controller-2 )
   * Move ip-172.17.1.17 ( controller-1 -> controller-2 )
   * Move ip-172.17.4.11 ( controller-1 -> controller-2 )
   * Stop haproxy:0 ( controller-1 ) due to node availability
   * Move stonith-fence_ipmilan-5254005bdbb5 ( controller-1 -> controller-2 )
 
 Executing Cluster Transition:
   * Pseudo action: messaging-1_stop_0
   * Pseudo action: galera-0_stop_0
   * Pseudo action: galera-2_stop_0
   * Pseudo action: galera-master_demote_0
   * Pseudo action: redis-master_pre_notify_stop_0
   * Pseudo action: stonith-fence_ipmilan-5254005bdbb5_stop_0
   * Fencing controller-1 (reboot)
   * Pseudo action: redis_post_notify_stop_0
   * Resource action: redis notify on controller-0
   * Resource action: redis notify on controller-2
   * Pseudo action: redis-master_confirmed-pre_notify_stop_0
   * Pseudo action: redis-master_stop_0
   * Pseudo action: haproxy-clone_stop_0
   * Fencing galera-2 (reboot)
   * Pseudo action: galera_demote_0
   * Pseudo action: galera-master_demoted_0
   * Pseudo action: galera-master_stop_0
   * Pseudo action: redis_stop_0
   * Pseudo action: redis-master_stopped_0
   * Pseudo action: haproxy_stop_0
   * Pseudo action: haproxy-clone_stopped_0
   * Fencing messaging-1 (reboot)
   * Resource action: galera-0 start on controller-2
   * Pseudo action: rabbitmq_post_notify_stop_0
   * Pseudo action: rabbitmq-clone_stop_0
   * Pseudo action: galera_stop_0
   * Resource action: galera monitor=10000 on galera-0
   * Pseudo action: galera-master_stopped_0
   * Pseudo action: redis-master_post_notify_stopped_0
   * Pseudo action: ip-172.17.1.14_stop_0
   * Pseudo action: ip-172.17.1.17_stop_0
   * Pseudo action: ip-172.17.4.11_stop_0
   * Resource action: stonith-fence_ipmilan-5254005bdbb5 start on controller-2
   * Resource action: galera-0 monitor=20000 on controller-2
   * Resource action: rabbitmq notify on messaging-2
   * Resource action: rabbitmq notify on messaging-0
   * Pseudo action: rabbitmq_notified_0
   * Pseudo action: rabbitmq_stop_0
   * Pseudo action: rabbitmq-clone_stopped_0
   * Resource action: redis notify on controller-0
   * Resource action: redis notify on controller-2
   * Pseudo action: redis-master_confirmed-post_notify_stopped_0
   * Resource action: ip-172.17.1.14 start on controller-2
   * Resource action: ip-172.17.1.17 start on controller-2
   * Resource action: ip-172.17.4.11 start on controller-2
   * Resource action: stonith-fence_ipmilan-5254005bdbb5 monitor=60000 on controller-2
   * Pseudo action: redis_notified_0
   * Resource action: ip-172.17.1.14 monitor=10000 on controller-2
   * Resource action: ip-172.17.1.17 monitor=10000 on controller-2
   * Resource action: ip-172.17.4.11 monitor=10000 on controller-2
 Using the original execution date of: 2017-05-03 13:33:24Z
 
 Revised Cluster Status:
   * Node List:
     * Online: [ controller-0 controller-2 ]
     * OFFLINE: [ controller-1 ]
     * RemoteOnline: [ galera-0 galera-1 messaging-0 messaging-2 ]
     * RemoteOFFLINE: [ galera-2 messaging-1 ]
 
   * Full List of Resources:
     * messaging-0 (ocf:pacemaker:remote): Started controller-0
     * messaging-1 (ocf:pacemaker:remote): Stopped
     * messaging-2 (ocf:pacemaker:remote): Started controller-0
     * galera-0 (ocf:pacemaker:remote): Started controller-2
     * galera-1 (ocf:pacemaker:remote): Started controller-0
     * galera-2 (ocf:pacemaker:remote): Stopped
     * Clone Set: rabbitmq-clone [rabbitmq]:
       * Started: [ messaging-0 messaging-2 ]
       * Stopped: [ controller-0 controller-1 controller-2 galera-0 galera-1 galera-2 messaging-1 ]
     * Clone Set: galera-master [galera] (promotable):
       * Promoted: [ galera-0 galera-1 ]
       * Stopped: [ controller-0 controller-1 controller-2 galera-2 messaging-0 messaging-1 messaging-2 ]
     * Clone Set: redis-master [redis] (promotable):
       * Promoted: [ controller-0 ]
       * Unpromoted: [ controller-2 ]
       * Stopped: [ controller-1 galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
     * ip-192.168.24.6 (ocf:heartbeat:IPaddr2): Started controller-0
     * ip-10.0.0.102 (ocf:heartbeat:IPaddr2): Started controller-0
     * ip-172.17.1.14 (ocf:heartbeat:IPaddr2): Started controller-2
     * ip-172.17.1.17 (ocf:heartbeat:IPaddr2): Started controller-2
     * ip-172.17.3.15 (ocf:heartbeat:IPaddr2): Started controller-0
     * ip-172.17.4.11 (ocf:heartbeat:IPaddr2): Started controller-2
     * Clone Set: haproxy-clone [haproxy]:
       * Started: [ controller-0 controller-2 ]
       * Stopped: [ controller-1 galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
     * openstack-cinder-volume (systemd:openstack-cinder-volume): Started controller-0
     * stonith-fence_ipmilan-525400bbf613 (stonith:fence_ipmilan): Started controller-0
     * stonith-fence_ipmilan-525400b4f6bd (stonith:fence_ipmilan): Started controller-0
     * stonith-fence_ipmilan-5254005bdbb5 (stonith:fence_ipmilan): Started controller-2
diff --git a/cts/scheduler/summary/remote-recover-connection.summary b/cts/scheduler/summary/remote-recover-connection.summary
index a9723bc5e1..fd6900dd96 100644
--- a/cts/scheduler/summary/remote-recover-connection.summary
+++ b/cts/scheduler/summary/remote-recover-connection.summary
@@ -1,132 +1,132 @@
 Using the original execution date of: 2017-05-03 13:33:24Z
 Current cluster status:
   * Node List:
     * Node controller-1: UNCLEAN (offline)
     * Online: [ controller-0 controller-2 ]
     * RemoteOnline: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
 
   * Full List of Resources:
     * messaging-0 (ocf:pacemaker:remote): Started controller-0
     * messaging-1 (ocf:pacemaker:remote): Started controller-1 (UNCLEAN)
     * messaging-2 (ocf:pacemaker:remote): Started controller-0
     * galera-0 (ocf:pacemaker:remote): Started controller-1 (UNCLEAN)
     * galera-1 (ocf:pacemaker:remote): Started controller-0
     * galera-2 (ocf:pacemaker:remote): Started controller-1 (UNCLEAN)
     * Clone Set: rabbitmq-clone [rabbitmq]:
       * Started: [ messaging-0 messaging-1 messaging-2 ]
       * Stopped: [ controller-0 controller-1 controller-2 galera-0 galera-1 galera-2 ]
     * Clone Set: galera-master [galera] (promotable):
       * Promoted: [ galera-0 galera-1 galera-2 ]
       * Stopped: [ controller-0 controller-1 controller-2 messaging-0 messaging-1 messaging-2 ]
     * Clone Set: redis-master [redis] (promotable):
       * redis (ocf:heartbeat:redis): Unpromoted controller-1 (UNCLEAN)
       * Promoted: [ controller-0 ]
       * Unpromoted: [ controller-2 ]
       * Stopped: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
     * ip-192.168.24.6 (ocf:heartbeat:IPaddr2): Started controller-0
     * ip-10.0.0.102 (ocf:heartbeat:IPaddr2): Started controller-0
     * ip-172.17.1.14 (ocf:heartbeat:IPaddr2): Started controller-1 (UNCLEAN)
     * ip-172.17.1.17 (ocf:heartbeat:IPaddr2): Started controller-1 (UNCLEAN)
     * ip-172.17.3.15 (ocf:heartbeat:IPaddr2): Started controller-0
     * ip-172.17.4.11 (ocf:heartbeat:IPaddr2): Started controller-1 (UNCLEAN)
     * Clone Set: haproxy-clone [haproxy]:
       * haproxy (systemd:haproxy): Started controller-1 (UNCLEAN)
       * Started: [ controller-0 controller-2 ]
       * Stopped: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
     * openstack-cinder-volume (systemd:openstack-cinder-volume): Started controller-0
     * stonith-fence_ipmilan-525400bbf613 (stonith:fence_ipmilan): Started controller-0
     * stonith-fence_ipmilan-525400b4f6bd (stonith:fence_ipmilan): Started controller-0
     * stonith-fence_ipmilan-5254005bdbb5 (stonith:fence_ipmilan): Started controller-1 (UNCLEAN)
 
 Transition Summary:
   * Fence (reboot) controller-1 'peer is no longer part of the cluster'
   * Move messaging-1 ( controller-1 -> controller-2 )
   * Move galera-0 ( controller-1 -> controller-2 )
   * Move galera-2 ( controller-1 -> controller-2 )
-  * Stop redis:0 ( Unpromoted controller-1 ) due to node availability
+  * Stop redis:0 ( Unpromoted controller-1 ) due to node availability
   * Move ip-172.17.1.14 ( controller-1 -> controller-2 )
   * Move ip-172.17.1.17 ( controller-1 -> controller-2 )
   * Move ip-172.17.4.11 ( controller-1 -> controller-2 )
   * Stop haproxy:0 ( controller-1 ) due to node availability
   * Move stonith-fence_ipmilan-5254005bdbb5 ( controller-1 -> controller-2 )
 
 Executing Cluster Transition:
   * Pseudo action: messaging-1_stop_0
   * Pseudo action: galera-0_stop_0
   * Pseudo action: galera-2_stop_0
   * Pseudo action: redis-master_pre_notify_stop_0
   * Pseudo action: stonith-fence_ipmilan-5254005bdbb5_stop_0
   * Fencing controller-1 (reboot)
   * Resource action: messaging-1 start on controller-2
   * Resource action: galera-0 start on controller-2
   * Resource action: galera-2 start on controller-2
   * Resource action: rabbitmq monitor=10000 on messaging-1
   * Resource action: galera monitor=10000 on galera-2
   * Resource action: galera monitor=10000 on galera-0
   * Pseudo action: redis_post_notify_stop_0
   * Resource action: redis notify on controller-0
   * Resource action: redis notify on controller-2
   * Pseudo action: redis-master_confirmed-pre_notify_stop_0
   * Pseudo action: redis-master_stop_0
   * Pseudo action: haproxy-clone_stop_0
   * Resource action: stonith-fence_ipmilan-5254005bdbb5 start on controller-2
   * Resource action: messaging-1 monitor=20000 on controller-2
   * Resource action: galera-0 monitor=20000 on controller-2
   * Resource action: galera-2 monitor=20000 on controller-2
   * Pseudo action: redis_stop_0
   * Pseudo action: redis-master_stopped_0
   * Pseudo action: haproxy_stop_0
   * Pseudo action: haproxy-clone_stopped_0
   * Resource action: stonith-fence_ipmilan-5254005bdbb5 monitor=60000 on controller-2
   * Pseudo action: redis-master_post_notify_stopped_0
   * Pseudo action: ip-172.17.1.14_stop_0
   * Pseudo action: ip-172.17.1.17_stop_0
   * Pseudo action: ip-172.17.4.11_stop_0
   * Resource action: redis notify on controller-0
   * Resource action: redis notify on controller-2
   * Pseudo action: redis-master_confirmed-post_notify_stopped_0
   * Resource action: ip-172.17.1.14 start on controller-2
   * Resource action: ip-172.17.1.17 start on controller-2
   * Resource action: ip-172.17.4.11 start on controller-2
   * Pseudo action: redis_notified_0
   * Resource action: ip-172.17.1.14 monitor=10000 on controller-2
   * Resource action: ip-172.17.1.17 monitor=10000 on controller-2
   * Resource action: ip-172.17.4.11 monitor=10000 on controller-2
 Using the original execution date of: 2017-05-03 13:33:24Z
 
 Revised Cluster Status:
   * Node List:
     * Online: [ controller-0 controller-2 ]
     * OFFLINE: [ controller-1 ]
     * RemoteOnline: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
 
   * Full List of Resources:
     * messaging-0 (ocf:pacemaker:remote): Started controller-0
     * messaging-1 (ocf:pacemaker:remote): Started controller-2
     * messaging-2 (ocf:pacemaker:remote): Started controller-0
     * galera-0 (ocf:pacemaker:remote): Started controller-2
     * galera-1 (ocf:pacemaker:remote): Started controller-0
     * galera-2 (ocf:pacemaker:remote): Started controller-2
     * Clone Set: rabbitmq-clone [rabbitmq]:
       * Started: [ messaging-0 messaging-1 messaging-2 ]
       * Stopped: [ controller-0 controller-1 controller-2 galera-0 galera-1 galera-2 ]
     * Clone Set: galera-master [galera] (promotable):
       * Promoted: [ galera-0 galera-1 galera-2 ]
       * Stopped: [ controller-0 controller-1 controller-2 messaging-0 messaging-1 messaging-2 ]
     * Clone Set: redis-master [redis] (promotable):
       * Promoted: [ controller-0 ]
       * Unpromoted: [ controller-2 ]
       * Stopped: [ controller-1 galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
     * ip-192.168.24.6 (ocf:heartbeat:IPaddr2): Started controller-0
     * ip-10.0.0.102 (ocf:heartbeat:IPaddr2): Started controller-0
     * ip-172.17.1.14 (ocf:heartbeat:IPaddr2): Started controller-2
     * ip-172.17.1.17 (ocf:heartbeat:IPaddr2): Started controller-2
     * ip-172.17.3.15 (ocf:heartbeat:IPaddr2): Started controller-0
     * ip-172.17.4.11 (ocf:heartbeat:IPaddr2): Started controller-2
     * Clone Set: haproxy-clone [haproxy]:
       * Started: [ controller-0 controller-2 ]
       * Stopped: [ controller-1 galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
     * openstack-cinder-volume (systemd:openstack-cinder-volume): Started controller-0
     * stonith-fence_ipmilan-525400bbf613 (stonith:fence_ipmilan): Started controller-0
     * stonith-fence_ipmilan-525400b4f6bd (stonith:fence_ipmilan): Started controller-0
     * stonith-fence_ipmilan-5254005bdbb5 (stonith:fence_ipmilan): Started controller-2
diff --git a/cts/scheduler/summary/remote-recover-no-resources.summary b/cts/scheduler/summary/remote-recover-no-resources.summary
index eb94266763..0e2be90a2a 100644
--- a/cts/scheduler/summary/remote-recover-no-resources.summary
+++ b/cts/scheduler/summary/remote-recover-no-resources.summary
@@ -1,137 +1,137 @@
 Using the original execution date of: 2017-05-03 13:33:24Z
 Current cluster status:
   * Node List:
     * Node controller-1: UNCLEAN (offline)
     * Online: [ controller-0 controller-2 ]
     * RemoteOnline: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
 
   * Full List of Resources:
     * messaging-0 (ocf:pacemaker:remote): Started controller-0
     * messaging-1 (ocf:pacemaker:remote): Started controller-1 (UNCLEAN)
     * messaging-2 (ocf:pacemaker:remote): Started controller-0
     * galera-0 (ocf:pacemaker:remote): Started controller-1 (UNCLEAN)
     * galera-1 (ocf:pacemaker:remote): Started controller-0
     * galera-2 (ocf:pacemaker:remote): Started controller-1 (UNCLEAN)
     * Clone Set: rabbitmq-clone [rabbitmq]:
       * Started: [ messaging-0 messaging-1 messaging-2 ]
       * Stopped: [ controller-0 controller-1 controller-2 galera-0 galera-1 galera-2 ]
     * Clone Set: galera-master [galera] (promotable):
       * Promoted: [ galera-0 galera-1 ]
       * Stopped: [ controller-0 controller-1 controller-2 galera-2 messaging-0 messaging-1 messaging-2 ]
     * Clone Set: redis-master [redis] (promotable):
       * redis (ocf:heartbeat:redis): Unpromoted controller-1 (UNCLEAN)
       * Promoted: [ controller-0 ]
       * Unpromoted: [ controller-2 ]
       * Stopped: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
     * ip-192.168.24.6 (ocf:heartbeat:IPaddr2): Started controller-0
     * ip-10.0.0.102 (ocf:heartbeat:IPaddr2): Started controller-0
     * ip-172.17.1.14 (ocf:heartbeat:IPaddr2): Started controller-1 (UNCLEAN)
     * ip-172.17.1.17 (ocf:heartbeat:IPaddr2): Started controller-1 (UNCLEAN)
     * ip-172.17.3.15 (ocf:heartbeat:IPaddr2): Started controller-0
     * ip-172.17.4.11 (ocf:heartbeat:IPaddr2): Started controller-1 (UNCLEAN)
     * Clone Set: haproxy-clone [haproxy]:
       * haproxy (systemd:haproxy): Started controller-1 (UNCLEAN)
       * Started: [ controller-0 controller-2 ]
       * Stopped: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
     * openstack-cinder-volume (systemd:openstack-cinder-volume): Started controller-0
     * stonith-fence_ipmilan-525400bbf613
(stonith:fence_ipmilan): Started controller-0 * stonith-fence_ipmilan-525400b4f6bd (stonith:fence_ipmilan): Started controller-0 * stonith-fence_ipmilan-5254005bdbb5 (stonith:fence_ipmilan): Started controller-1 (UNCLEAN) Transition Summary: * Fence (reboot) messaging-1 'resources are active but connection is unrecoverable' * Fence (reboot) controller-1 'peer is no longer part of the cluster' * Stop messaging-1 ( controller-1 ) due to node availability * Move galera-0 ( controller-1 -> controller-2 ) * Stop galera-2 ( controller-1 ) due to node availability * Stop rabbitmq:2 ( messaging-1 ) due to node availability - * Stop redis:0 ( Unpromoted controller-1 ) due to node availability + * Stop redis:0 ( Unpromoted controller-1 ) due to node availability * Move ip-172.17.1.14 ( controller-1 -> controller-2 ) * Move ip-172.17.1.17 ( controller-1 -> controller-2 ) * Move ip-172.17.4.11 ( controller-1 -> controller-2 ) * Stop haproxy:0 ( controller-1 ) due to node availability * Move stonith-fence_ipmilan-5254005bdbb5 ( controller-1 -> controller-2 ) Executing Cluster Transition: * Pseudo action: messaging-1_stop_0 * Pseudo action: galera-0_stop_0 * Pseudo action: galera-2_stop_0 * Pseudo action: redis-master_pre_notify_stop_0 * Pseudo action: stonith-fence_ipmilan-5254005bdbb5_stop_0 * Fencing controller-1 (reboot) * Pseudo action: redis_post_notify_stop_0 * Resource action: redis notify on controller-0 * Resource action: redis notify on controller-2 * Pseudo action: redis-master_confirmed-pre_notify_stop_0 * Pseudo action: redis-master_stop_0 * Pseudo action: haproxy-clone_stop_0 * Fencing messaging-1 (reboot) * Resource action: galera-0 start on controller-2 * Pseudo action: rabbitmq_post_notify_stop_0 * Pseudo action: rabbitmq-clone_stop_0 * Resource action: galera monitor=10000 on galera-0 * Pseudo action: redis_stop_0 * Pseudo action: redis-master_stopped_0 * Pseudo action: haproxy_stop_0 * Pseudo action: haproxy-clone_stopped_0 * Resource action: 
stonith-fence_ipmilan-5254005bdbb5 start on controller-2 * Resource action: galera-0 monitor=20000 on controller-2 * Resource action: rabbitmq notify on messaging-2 * Resource action: rabbitmq notify on messaging-0 * Pseudo action: rabbitmq_notified_0 * Pseudo action: rabbitmq_stop_0 * Pseudo action: rabbitmq-clone_stopped_0 * Pseudo action: redis-master_post_notify_stopped_0 * Pseudo action: ip-172.17.1.14_stop_0 * Pseudo action: ip-172.17.1.17_stop_0 * Pseudo action: ip-172.17.4.11_stop_0 * Resource action: stonith-fence_ipmilan-5254005bdbb5 monitor=60000 on controller-2 * Resource action: redis notify on controller-0 * Resource action: redis notify on controller-2 * Pseudo action: redis-master_confirmed-post_notify_stopped_0 * Resource action: ip-172.17.1.14 start on controller-2 * Resource action: ip-172.17.1.17 start on controller-2 * Resource action: ip-172.17.4.11 start on controller-2 * Pseudo action: redis_notified_0 * Resource action: ip-172.17.1.14 monitor=10000 on controller-2 * Resource action: ip-172.17.1.17 monitor=10000 on controller-2 * Resource action: ip-172.17.4.11 monitor=10000 on controller-2 Using the original execution date of: 2017-05-03 13:33:24Z Revised Cluster Status: * Node List: * Online: [ controller-0 controller-2 ] * OFFLINE: [ controller-1 ] * RemoteOnline: [ galera-0 galera-1 messaging-0 messaging-2 ] * RemoteOFFLINE: [ galera-2 messaging-1 ] * Full List of Resources: * messaging-0 (ocf:pacemaker:remote): Started controller-0 * messaging-1 (ocf:pacemaker:remote): Stopped * messaging-2 (ocf:pacemaker:remote): Started controller-0 * galera-0 (ocf:pacemaker:remote): Started controller-2 * galera-1 (ocf:pacemaker:remote): Started controller-0 * galera-2 (ocf:pacemaker:remote): Stopped * Clone Set: rabbitmq-clone [rabbitmq]: * Started: [ messaging-0 messaging-2 ] * Stopped: [ controller-0 controller-1 controller-2 galera-0 galera-1 galera-2 messaging-1 ] * Clone Set: galera-master [galera] (promotable): * Promoted: [ galera-0 galera-1 
] * Stopped: [ controller-0 controller-1 controller-2 galera-2 messaging-0 messaging-1 messaging-2 ] * Clone Set: redis-master [redis] (promotable): * Promoted: [ controller-0 ] * Unpromoted: [ controller-2 ] * Stopped: [ controller-1 galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ] * ip-192.168.24.6 (ocf:heartbeat:IPaddr2): Started controller-0 * ip-10.0.0.102 (ocf:heartbeat:IPaddr2): Started controller-0 * ip-172.17.1.14 (ocf:heartbeat:IPaddr2): Started controller-2 * ip-172.17.1.17 (ocf:heartbeat:IPaddr2): Started controller-2 * ip-172.17.3.15 (ocf:heartbeat:IPaddr2): Started controller-0 * ip-172.17.4.11 (ocf:heartbeat:IPaddr2): Started controller-2 * Clone Set: haproxy-clone [haproxy]: * Started: [ controller-0 controller-2 ] * Stopped: [ controller-1 galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ] * openstack-cinder-volume (systemd:openstack-cinder-volume): Started controller-0 * stonith-fence_ipmilan-525400bbf613 (stonith:fence_ipmilan): Started controller-0 * stonith-fence_ipmilan-525400b4f6bd (stonith:fence_ipmilan): Started controller-0 * stonith-fence_ipmilan-5254005bdbb5 (stonith:fence_ipmilan): Started controller-2 diff --git a/cts/scheduler/summary/remote-recover-unknown.summary b/cts/scheduler/summary/remote-recover-unknown.summary index e04e969988..59e1085c12 100644 --- a/cts/scheduler/summary/remote-recover-unknown.summary +++ b/cts/scheduler/summary/remote-recover-unknown.summary @@ -1,139 +1,139 @@ Using the original execution date of: 2017-05-03 13:33:24Z Current cluster status: * Node List: * Node controller-1: UNCLEAN (offline) * Online: [ controller-0 controller-2 ] * RemoteOnline: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ] * Full List of Resources: * messaging-0 (ocf:pacemaker:remote): Started controller-0 * messaging-1 (ocf:pacemaker:remote): Started controller-1 (UNCLEAN) * messaging-2 (ocf:pacemaker:remote): Started controller-0 * galera-0 (ocf:pacemaker:remote): Started 
controller-1 (UNCLEAN) * galera-1 (ocf:pacemaker:remote): Started controller-0 * galera-2 (ocf:pacemaker:remote): Started controller-1 (UNCLEAN) * Clone Set: rabbitmq-clone [rabbitmq]: * Started: [ messaging-0 messaging-1 messaging-2 ] * Stopped: [ controller-0 controller-1 controller-2 galera-0 galera-1 galera-2 ] * Clone Set: galera-master [galera] (promotable): * Promoted: [ galera-0 galera-1 ] * Stopped: [ controller-0 controller-1 controller-2 galera-2 messaging-0 messaging-1 messaging-2 ] * Clone Set: redis-master [redis] (promotable): * redis (ocf:heartbeat:redis): Unpromoted controller-1 (UNCLEAN) * Promoted: [ controller-0 ] * Unpromoted: [ controller-2 ] * Stopped: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ] * ip-192.168.24.6 (ocf:heartbeat:IPaddr2): Started controller-0 * ip-10.0.0.102 (ocf:heartbeat:IPaddr2): Started controller-0 * ip-172.17.1.14 (ocf:heartbeat:IPaddr2): Started controller-1 (UNCLEAN) * ip-172.17.1.17 (ocf:heartbeat:IPaddr2): Started controller-1 (UNCLEAN) * ip-172.17.3.15 (ocf:heartbeat:IPaddr2): Started controller-0 * ip-172.17.4.11 (ocf:heartbeat:IPaddr2): Started controller-1 (UNCLEAN) * Clone Set: haproxy-clone [haproxy]: * haproxy (systemd:haproxy): Started controller-1 (UNCLEAN) * Started: [ controller-0 controller-2 ] * Stopped: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ] * openstack-cinder-volume (systemd:openstack-cinder-volume): Started controller-0 * stonith-fence_ipmilan-525400bbf613 (stonith:fence_ipmilan): Started controller-0 * stonith-fence_ipmilan-525400b4f6bd (stonith:fence_ipmilan): Started controller-0 * stonith-fence_ipmilan-5254005bdbb5 (stonith:fence_ipmilan): Started controller-1 (UNCLEAN) Transition Summary: * Fence (reboot) galera-2 'resources are in unknown state and connection is unrecoverable' * Fence (reboot) messaging-1 'resources are active but connection is unrecoverable' * Fence (reboot) controller-1 'peer is no longer part of the cluster' * Stop 
messaging-1 ( controller-1 ) due to node availability * Move galera-0 ( controller-1 -> controller-2 ) * Stop galera-2 ( controller-1 ) due to node availability * Stop rabbitmq:2 ( messaging-1 ) due to node availability - * Stop redis:0 ( Unpromoted controller-1 ) due to node availability + * Stop redis:0 ( Unpromoted controller-1 ) due to node availability * Move ip-172.17.1.14 ( controller-1 -> controller-2 ) * Move ip-172.17.1.17 ( controller-1 -> controller-2 ) * Move ip-172.17.4.11 ( controller-1 -> controller-2 ) * Stop haproxy:0 ( controller-1 ) due to node availability * Move stonith-fence_ipmilan-5254005bdbb5 ( controller-1 -> controller-2 ) Executing Cluster Transition: * Pseudo action: messaging-1_stop_0 * Pseudo action: galera-0_stop_0 * Pseudo action: galera-2_stop_0 * Pseudo action: redis-master_pre_notify_stop_0 * Pseudo action: stonith-fence_ipmilan-5254005bdbb5_stop_0 * Fencing controller-1 (reboot) * Pseudo action: redis_post_notify_stop_0 * Resource action: redis notify on controller-0 * Resource action: redis notify on controller-2 * Pseudo action: redis-master_confirmed-pre_notify_stop_0 * Pseudo action: redis-master_stop_0 * Pseudo action: haproxy-clone_stop_0 * Fencing galera-2 (reboot) * Fencing messaging-1 (reboot) * Resource action: galera-0 start on controller-2 * Pseudo action: rabbitmq_post_notify_stop_0 * Pseudo action: rabbitmq-clone_stop_0 * Resource action: galera monitor=10000 on galera-0 * Pseudo action: redis_stop_0 * Pseudo action: redis-master_stopped_0 * Pseudo action: haproxy_stop_0 * Pseudo action: haproxy-clone_stopped_0 * Resource action: stonith-fence_ipmilan-5254005bdbb5 start on controller-2 * Resource action: galera-0 monitor=20000 on controller-2 * Resource action: rabbitmq notify on messaging-2 * Resource action: rabbitmq notify on messaging-0 * Pseudo action: rabbitmq_notified_0 * Pseudo action: rabbitmq_stop_0 * Pseudo action: rabbitmq-clone_stopped_0 * Pseudo action: redis-master_post_notify_stopped_0 * Pseudo 
action: ip-172.17.1.14_stop_0 * Pseudo action: ip-172.17.1.17_stop_0 * Pseudo action: ip-172.17.4.11_stop_0 * Resource action: stonith-fence_ipmilan-5254005bdbb5 monitor=60000 on controller-2 * Resource action: redis notify on controller-0 * Resource action: redis notify on controller-2 * Pseudo action: redis-master_confirmed-post_notify_stopped_0 * Resource action: ip-172.17.1.14 start on controller-2 * Resource action: ip-172.17.1.17 start on controller-2 * Resource action: ip-172.17.4.11 start on controller-2 * Pseudo action: redis_notified_0 * Resource action: ip-172.17.1.14 monitor=10000 on controller-2 * Resource action: ip-172.17.1.17 monitor=10000 on controller-2 * Resource action: ip-172.17.4.11 monitor=10000 on controller-2 Using the original execution date of: 2017-05-03 13:33:24Z Revised Cluster Status: * Node List: * Online: [ controller-0 controller-2 ] * OFFLINE: [ controller-1 ] * RemoteOnline: [ galera-0 galera-1 messaging-0 messaging-2 ] * RemoteOFFLINE: [ galera-2 messaging-1 ] * Full List of Resources: * messaging-0 (ocf:pacemaker:remote): Started controller-0 * messaging-1 (ocf:pacemaker:remote): Stopped * messaging-2 (ocf:pacemaker:remote): Started controller-0 * galera-0 (ocf:pacemaker:remote): Started controller-2 * galera-1 (ocf:pacemaker:remote): Started controller-0 * galera-2 (ocf:pacemaker:remote): Stopped * Clone Set: rabbitmq-clone [rabbitmq]: * Started: [ messaging-0 messaging-2 ] * Stopped: [ controller-0 controller-1 controller-2 galera-0 galera-1 galera-2 messaging-1 ] * Clone Set: galera-master [galera] (promotable): * Promoted: [ galera-0 galera-1 ] * Stopped: [ controller-0 controller-1 controller-2 galera-2 messaging-0 messaging-1 messaging-2 ] * Clone Set: redis-master [redis] (promotable): * Promoted: [ controller-0 ] * Unpromoted: [ controller-2 ] * Stopped: [ controller-1 galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ] * ip-192.168.24.6 (ocf:heartbeat:IPaddr2): Started controller-0 * ip-10.0.0.102 
(ocf:heartbeat:IPaddr2): Started controller-0 * ip-172.17.1.14 (ocf:heartbeat:IPaddr2): Started controller-2 * ip-172.17.1.17 (ocf:heartbeat:IPaddr2): Started controller-2 * ip-172.17.3.15 (ocf:heartbeat:IPaddr2): Started controller-0 * ip-172.17.4.11 (ocf:heartbeat:IPaddr2): Started controller-2 * Clone Set: haproxy-clone [haproxy]: * Started: [ controller-0 controller-2 ] * Stopped: [ controller-1 galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ] * openstack-cinder-volume (systemd:openstack-cinder-volume): Started controller-0 * stonith-fence_ipmilan-525400bbf613 (stonith:fence_ipmilan): Started controller-0 * stonith-fence_ipmilan-525400b4f6bd (stonith:fence_ipmilan): Started controller-0 * stonith-fence_ipmilan-5254005bdbb5 (stonith:fence_ipmilan): Started controller-2 diff --git a/cts/scheduler/summary/remote-recovery.summary b/cts/scheduler/summary/remote-recovery.summary index a9723bc5e1..fd6900dd96 100644 --- a/cts/scheduler/summary/remote-recovery.summary +++ b/cts/scheduler/summary/remote-recovery.summary @@ -1,132 +1,132 @@ Using the original execution date of: 2017-05-03 13:33:24Z Current cluster status: * Node List: * Node controller-1: UNCLEAN (offline) * Online: [ controller-0 controller-2 ] * RemoteOnline: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ] * Full List of Resources: * messaging-0 (ocf:pacemaker:remote): Started controller-0 * messaging-1 (ocf:pacemaker:remote): Started controller-1 (UNCLEAN) * messaging-2 (ocf:pacemaker:remote): Started controller-0 * galera-0 (ocf:pacemaker:remote): Started controller-1 (UNCLEAN) * galera-1 (ocf:pacemaker:remote): Started controller-0 * galera-2 (ocf:pacemaker:remote): Started controller-1 (UNCLEAN) * Clone Set: rabbitmq-clone [rabbitmq]: * Started: [ messaging-0 messaging-1 messaging-2 ] * Stopped: [ controller-0 controller-1 controller-2 galera-0 galera-1 galera-2 ] * Clone Set: galera-master [galera] (promotable): * Promoted: [ galera-0 galera-1 galera-2 ] * 
Stopped: [ controller-0 controller-1 controller-2 messaging-0 messaging-1 messaging-2 ] * Clone Set: redis-master [redis] (promotable): * redis (ocf:heartbeat:redis): Unpromoted controller-1 (UNCLEAN) * Promoted: [ controller-0 ] * Unpromoted: [ controller-2 ] * Stopped: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ] * ip-192.168.24.6 (ocf:heartbeat:IPaddr2): Started controller-0 * ip-10.0.0.102 (ocf:heartbeat:IPaddr2): Started controller-0 * ip-172.17.1.14 (ocf:heartbeat:IPaddr2): Started controller-1 (UNCLEAN) * ip-172.17.1.17 (ocf:heartbeat:IPaddr2): Started controller-1 (UNCLEAN) * ip-172.17.3.15 (ocf:heartbeat:IPaddr2): Started controller-0 * ip-172.17.4.11 (ocf:heartbeat:IPaddr2): Started controller-1 (UNCLEAN) * Clone Set: haproxy-clone [haproxy]: * haproxy (systemd:haproxy): Started controller-1 (UNCLEAN) * Started: [ controller-0 controller-2 ] * Stopped: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ] * openstack-cinder-volume (systemd:openstack-cinder-volume): Started controller-0 * stonith-fence_ipmilan-525400bbf613 (stonith:fence_ipmilan): Started controller-0 * stonith-fence_ipmilan-525400b4f6bd (stonith:fence_ipmilan): Started controller-0 * stonith-fence_ipmilan-5254005bdbb5 (stonith:fence_ipmilan): Started controller-1 (UNCLEAN) Transition Summary: * Fence (reboot) controller-1 'peer is no longer part of the cluster' * Move messaging-1 ( controller-1 -> controller-2 ) * Move galera-0 ( controller-1 -> controller-2 ) * Move galera-2 ( controller-1 -> controller-2 ) - * Stop redis:0 ( Unpromoted controller-1 ) due to node availability + * Stop redis:0 ( Unpromoted controller-1 ) due to node availability * Move ip-172.17.1.14 ( controller-1 -> controller-2 ) * Move ip-172.17.1.17 ( controller-1 -> controller-2 ) * Move ip-172.17.4.11 ( controller-1 -> controller-2 ) * Stop haproxy:0 ( controller-1 ) due to node availability * Move stonith-fence_ipmilan-5254005bdbb5 ( controller-1 -> controller-2 ) Executing 
Cluster Transition: * Pseudo action: messaging-1_stop_0 * Pseudo action: galera-0_stop_0 * Pseudo action: galera-2_stop_0 * Pseudo action: redis-master_pre_notify_stop_0 * Pseudo action: stonith-fence_ipmilan-5254005bdbb5_stop_0 * Fencing controller-1 (reboot) * Resource action: messaging-1 start on controller-2 * Resource action: galera-0 start on controller-2 * Resource action: galera-2 start on controller-2 * Resource action: rabbitmq monitor=10000 on messaging-1 * Resource action: galera monitor=10000 on galera-2 * Resource action: galera monitor=10000 on galera-0 * Pseudo action: redis_post_notify_stop_0 * Resource action: redis notify on controller-0 * Resource action: redis notify on controller-2 * Pseudo action: redis-master_confirmed-pre_notify_stop_0 * Pseudo action: redis-master_stop_0 * Pseudo action: haproxy-clone_stop_0 * Resource action: stonith-fence_ipmilan-5254005bdbb5 start on controller-2 * Resource action: messaging-1 monitor=20000 on controller-2 * Resource action: galera-0 monitor=20000 on controller-2 * Resource action: galera-2 monitor=20000 on controller-2 * Pseudo action: redis_stop_0 * Pseudo action: redis-master_stopped_0 * Pseudo action: haproxy_stop_0 * Pseudo action: haproxy-clone_stopped_0 * Resource action: stonith-fence_ipmilan-5254005bdbb5 monitor=60000 on controller-2 * Pseudo action: redis-master_post_notify_stopped_0 * Pseudo action: ip-172.17.1.14_stop_0 * Pseudo action: ip-172.17.1.17_stop_0 * Pseudo action: ip-172.17.4.11_stop_0 * Resource action: redis notify on controller-0 * Resource action: redis notify on controller-2 * Pseudo action: redis-master_confirmed-post_notify_stopped_0 * Resource action: ip-172.17.1.14 start on controller-2 * Resource action: ip-172.17.1.17 start on controller-2 * Resource action: ip-172.17.4.11 start on controller-2 * Pseudo action: redis_notified_0 * Resource action: ip-172.17.1.14 monitor=10000 on controller-2 * Resource action: ip-172.17.1.17 monitor=10000 on controller-2 * Resource 
action: ip-172.17.4.11 monitor=10000 on controller-2 Using the original execution date of: 2017-05-03 13:33:24Z Revised Cluster Status: * Node List: * Online: [ controller-0 controller-2 ] * OFFLINE: [ controller-1 ] * RemoteOnline: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ] * Full List of Resources: * messaging-0 (ocf:pacemaker:remote): Started controller-0 * messaging-1 (ocf:pacemaker:remote): Started controller-2 * messaging-2 (ocf:pacemaker:remote): Started controller-0 * galera-0 (ocf:pacemaker:remote): Started controller-2 * galera-1 (ocf:pacemaker:remote): Started controller-0 * galera-2 (ocf:pacemaker:remote): Started controller-2 * Clone Set: rabbitmq-clone [rabbitmq]: * Started: [ messaging-0 messaging-1 messaging-2 ] * Stopped: [ controller-0 controller-1 controller-2 galera-0 galera-1 galera-2 ] * Clone Set: galera-master [galera] (promotable): * Promoted: [ galera-0 galera-1 galera-2 ] * Stopped: [ controller-0 controller-1 controller-2 messaging-0 messaging-1 messaging-2 ] * Clone Set: redis-master [redis] (promotable): * Promoted: [ controller-0 ] * Unpromoted: [ controller-2 ] * Stopped: [ controller-1 galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ] * ip-192.168.24.6 (ocf:heartbeat:IPaddr2): Started controller-0 * ip-10.0.0.102 (ocf:heartbeat:IPaddr2): Started controller-0 * ip-172.17.1.14 (ocf:heartbeat:IPaddr2): Started controller-2 * ip-172.17.1.17 (ocf:heartbeat:IPaddr2): Started controller-2 * ip-172.17.3.15 (ocf:heartbeat:IPaddr2): Started controller-0 * ip-172.17.4.11 (ocf:heartbeat:IPaddr2): Started controller-2 * Clone Set: haproxy-clone [haproxy]: * Started: [ controller-0 controller-2 ] * Stopped: [ controller-1 galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ] * openstack-cinder-volume (systemd:openstack-cinder-volume): Started controller-0 * stonith-fence_ipmilan-525400bbf613 (stonith:fence_ipmilan): Started controller-0 * stonith-fence_ipmilan-525400b4f6bd (stonith:fence_ipmilan): 
Started controller-0 * stonith-fence_ipmilan-5254005bdbb5 (stonith:fence_ipmilan): Started controller-2 diff --git a/cts/scheduler/summary/rsc-sets-promoted.summary b/cts/scheduler/summary/rsc-sets-promoted.summary index 3db15881a0..af78ecbaa3 100644 --- a/cts/scheduler/summary/rsc-sets-promoted.summary +++ b/cts/scheduler/summary/rsc-sets-promoted.summary @@ -1,49 +1,49 @@ Current cluster status: * Node List: * Node node1: standby (with active resources) * Online: [ node2 ] * Full List of Resources: * Clone Set: ms-rsc [rsc] (promotable): * Promoted: [ node1 ] * Unpromoted: [ node2 ] * rsc1 (ocf:pacemaker:Dummy): Started node1 * rsc2 (ocf:pacemaker:Dummy): Started node1 * rsc3 (ocf:pacemaker:Dummy): Started node1 Transition Summary: - * Stop rsc:0 ( Promoted node1 ) due to node availability + * Stop rsc:0 ( Promoted node1 ) due to node availability * Promote rsc:1 ( Unpromoted -> Promoted node2 ) - * Move rsc1 ( node1 -> node2 ) - * Move rsc2 ( node1 -> node2 ) - * Move rsc3 ( node1 -> node2 ) + * Move rsc1 ( node1 -> node2 ) + * Move rsc2 ( node1 -> node2 ) + * Move rsc3 ( node1 -> node2 ) Executing Cluster Transition: * Resource action: rsc1 stop on node1 * Resource action: rsc2 stop on node1 * Resource action: rsc3 stop on node1 * Pseudo action: ms-rsc_demote_0 * Resource action: rsc:0 demote on node1 * Pseudo action: ms-rsc_demoted_0 * Pseudo action: ms-rsc_stop_0 * Resource action: rsc:0 stop on node1 * Pseudo action: ms-rsc_stopped_0 * Pseudo action: ms-rsc_promote_0 * Resource action: rsc:1 promote on node2 * Pseudo action: ms-rsc_promoted_0 * Resource action: rsc1 start on node2 * Resource action: rsc2 start on node2 * Resource action: rsc3 start on node2 Revised Cluster Status: * Node List: * Node node1: standby * Online: [ node2 ] * Full List of Resources: * Clone Set: ms-rsc [rsc] (promotable): * Promoted: [ node2 ] * Stopped: [ node1 ] * rsc1 (ocf:pacemaker:Dummy): Started node2 * rsc2 (ocf:pacemaker:Dummy): Started node2 * rsc3 (ocf:pacemaker:Dummy): 
Started node2 diff --git a/cts/scheduler/summary/stonith-1.summary b/cts/scheduler/summary/stonith-1.summary index 29b979cacc..dfb4be43ee 100644 --- a/cts/scheduler/summary/stonith-1.summary +++ b/cts/scheduler/summary/stonith-1.summary @@ -1,113 +1,113 @@ Current cluster status: * Node List: * Node sles-3: UNCLEAN (offline) * Online: [ sles-1 sles-2 sles-4 ] * Full List of Resources: * Resource Group: group-1: * r192.168.100.181 (ocf:heartbeat:IPaddr): Started sles-1 * r192.168.100.182 (ocf:heartbeat:IPaddr): Started sles-1 * r192.168.100.183 (ocf:heartbeat:IPaddr): Stopped * lsb_dummy (lsb:/usr/lib64/heartbeat/cts/LSBDummy): Started sles-2 * migrator (ocf:heartbeat:Dummy): Started sles-3 (UNCLEAN) * rsc_sles-1 (ocf:heartbeat:IPaddr): Started sles-1 * rsc_sles-2 (ocf:heartbeat:IPaddr): Started sles-2 * rsc_sles-3 (ocf:heartbeat:IPaddr): Started sles-3 (UNCLEAN) * rsc_sles-4 (ocf:heartbeat:IPaddr): Started sles-4 * Clone Set: DoFencing [child_DoFencing]: * child_DoFencing (stonith:external/vmware): Started sles-3 (UNCLEAN) * Started: [ sles-1 sles-2 ] * Stopped: [ sles-4 ] * Clone Set: master_rsc_1 [ocf_msdummy] (promotable, unique): * ocf_msdummy:0 (ocf:heartbeat:Stateful): Stopped * ocf_msdummy:1 (ocf:heartbeat:Stateful): Stopped * ocf_msdummy:2 (ocf:heartbeat:Stateful): Unpromoted sles-3 (UNCLEAN) * ocf_msdummy:3 (ocf:heartbeat:Stateful): Stopped * ocf_msdummy:4 (ocf:heartbeat:Stateful): Stopped * ocf_msdummy:5 (ocf:heartbeat:Stateful): Unpromoted sles-3 (UNCLEAN) * ocf_msdummy:6 (ocf:heartbeat:Stateful): Stopped * ocf_msdummy:7 (ocf:heartbeat:Stateful): Stopped Transition Summary: * Fence (reboot) sles-3 'peer is no longer part of the cluster' - * Start r192.168.100.183 ( sles-1 ) - * Move migrator ( sles-3 -> sles-4 ) - * Move rsc_sles-3 ( sles-3 -> sles-4 ) - * Move child_DoFencing:2 ( sles-3 -> sles-4 ) - * Start ocf_msdummy:0 ( sles-4 ) - * Start ocf_msdummy:1 ( sles-1 ) + * Start r192.168.100.183 ( sles-1 ) + * Move migrator ( sles-3 -> sles-4 ) + * Move 
rsc_sles-3 ( sles-3 -> sles-4 ) + * Move child_DoFencing:2 ( sles-3 -> sles-4 ) + * Start ocf_msdummy:0 ( sles-4 ) + * Start ocf_msdummy:1 ( sles-1 ) * Move ocf_msdummy:2 ( sles-3 -> sles-2 Unpromoted ) - * Start ocf_msdummy:3 ( sles-4 ) - * Start ocf_msdummy:4 ( sles-1 ) + * Start ocf_msdummy:3 ( sles-4 ) + * Start ocf_msdummy:4 ( sles-1 ) * Move ocf_msdummy:5 ( sles-3 -> sles-2 Unpromoted ) Executing Cluster Transition: * Pseudo action: group-1_start_0 * Resource action: r192.168.100.182 monitor=5000 on sles-1 * Resource action: lsb_dummy monitor=5000 on sles-2 * Resource action: rsc_sles-2 monitor=5000 on sles-2 * Resource action: rsc_sles-4 monitor=5000 on sles-4 * Pseudo action: DoFencing_stop_0 * Fencing sles-3 (reboot) * Resource action: r192.168.100.183 start on sles-1 * Pseudo action: migrator_stop_0 * Pseudo action: rsc_sles-3_stop_0 * Pseudo action: child_DoFencing:2_stop_0 * Pseudo action: DoFencing_stopped_0 * Pseudo action: DoFencing_start_0 * Pseudo action: master_rsc_1_stop_0 * Pseudo action: group-1_running_0 * Resource action: r192.168.100.183 monitor=5000 on sles-1 * Resource action: migrator start on sles-4 * Resource action: rsc_sles-3 start on sles-4 * Resource action: child_DoFencing:2 start on sles-4 * Pseudo action: DoFencing_running_0 * Pseudo action: ocf_msdummy:2_stop_0 * Pseudo action: ocf_msdummy:5_stop_0 * Pseudo action: master_rsc_1_stopped_0 * Pseudo action: master_rsc_1_start_0 * Resource action: migrator monitor=10000 on sles-4 * Resource action: rsc_sles-3 monitor=5000 on sles-4 * Resource action: child_DoFencing:2 monitor=60000 on sles-4 * Resource action: ocf_msdummy:0 start on sles-4 * Resource action: ocf_msdummy:1 start on sles-1 * Resource action: ocf_msdummy:2 start on sles-2 * Resource action: ocf_msdummy:3 start on sles-4 * Resource action: ocf_msdummy:4 start on sles-1 * Resource action: ocf_msdummy:5 start on sles-2 * Pseudo action: master_rsc_1_running_0 * Resource action: ocf_msdummy:0 monitor=5000 on sles-4 * 
   * Resource action: ocf_msdummy:1 monitor=5000 on sles-1
   * Resource action: ocf_msdummy:2 monitor=5000 on sles-2
   * Resource action: ocf_msdummy:3 monitor=5000 on sles-4
   * Resource action: ocf_msdummy:4 monitor=5000 on sles-1
   * Resource action: ocf_msdummy:5 monitor=5000 on sles-2
 
 Revised Cluster Status:
   * Node List:
     * Online: [ sles-1 sles-2 sles-4 ]
     * OFFLINE: [ sles-3 ]
 
   * Full List of Resources:
     * Resource Group: group-1:
       * r192.168.100.181 (ocf:heartbeat:IPaddr): Started sles-1
       * r192.168.100.182 (ocf:heartbeat:IPaddr): Started sles-1
       * r192.168.100.183 (ocf:heartbeat:IPaddr): Started sles-1
     * lsb_dummy (lsb:/usr/lib64/heartbeat/cts/LSBDummy): Started sles-2
     * migrator (ocf:heartbeat:Dummy): Started sles-4
     * rsc_sles-1 (ocf:heartbeat:IPaddr): Started sles-1
     * rsc_sles-2 (ocf:heartbeat:IPaddr): Started sles-2
     * rsc_sles-3 (ocf:heartbeat:IPaddr): Started sles-4
     * rsc_sles-4 (ocf:heartbeat:IPaddr): Started sles-4
     * Clone Set: DoFencing [child_DoFencing]:
       * Started: [ sles-1 sles-2 sles-4 ]
       * Stopped: [ sles-3 ]
     * Clone Set: master_rsc_1 [ocf_msdummy] (promotable, unique):
       * ocf_msdummy:0 (ocf:heartbeat:Stateful): Unpromoted sles-4
       * ocf_msdummy:1 (ocf:heartbeat:Stateful): Unpromoted sles-1
       * ocf_msdummy:2 (ocf:heartbeat:Stateful): Unpromoted sles-2
       * ocf_msdummy:3 (ocf:heartbeat:Stateful): Unpromoted sles-4
       * ocf_msdummy:4 (ocf:heartbeat:Stateful): Unpromoted sles-1
       * ocf_msdummy:5 (ocf:heartbeat:Stateful): Unpromoted sles-2
       * ocf_msdummy:6 (ocf:heartbeat:Stateful): Stopped
       * ocf_msdummy:7 (ocf:heartbeat:Stateful): Stopped
diff --git a/cts/scheduler/summary/ticket-promoted-14.summary b/cts/scheduler/summary/ticket-promoted-14.summary
index 80ff84346b..ee8912b2e9 100644
--- a/cts/scheduler/summary/ticket-promoted-14.summary
+++ b/cts/scheduler/summary/ticket-promoted-14.summary
@@ -1,31 +1,31 @@
 Current cluster status:
   * Node List:
     * Online: [ node1 node2 ]
 
   * Full List of Resources:
     * rsc_stonith (stonith:null): Started node1
     * Clone Set: ms1 [rsc1] (promotable):
       * Promoted: [ node1 ]
       * Unpromoted: [ node2 ]
 
 Transition Summary:
-  * Stop rsc1:0 ( Promoted node1 ) due to node availability
-  * Stop rsc1:1 ( Unpromoted node2 ) due to node availability
+  * Stop rsc1:0 ( Promoted node1 ) due to node availability
+  * Stop rsc1:1 ( Unpromoted node2 ) due to node availability
 
 Executing Cluster Transition:
   * Pseudo action: ms1_demote_0
   * Resource action: rsc1:1 demote on node1
   * Pseudo action: ms1_demoted_0
   * Pseudo action: ms1_stop_0
   * Resource action: rsc1:1 stop on node1
   * Resource action: rsc1:0 stop on node2
   * Pseudo action: ms1_stopped_0
 
 Revised Cluster Status:
   * Node List:
     * Online: [ node1 node2 ]
 
   * Full List of Resources:
     * rsc_stonith (stonith:null): Started node1
     * Clone Set: ms1 [rsc1] (promotable):
       * Stopped: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-promoted-15.summary b/cts/scheduler/summary/ticket-promoted-15.summary
index 80ff84346b..ee8912b2e9 100644
--- a/cts/scheduler/summary/ticket-promoted-15.summary
+++ b/cts/scheduler/summary/ticket-promoted-15.summary
@@ -1,31 +1,31 @@
 Current cluster status:
   * Node List:
     * Online: [ node1 node2 ]
 
   * Full List of Resources:
     * rsc_stonith (stonith:null): Started node1
     * Clone Set: ms1 [rsc1] (promotable):
       * Promoted: [ node1 ]
       * Unpromoted: [ node2 ]
 
 Transition Summary:
-  * Stop rsc1:0 ( Promoted node1 ) due to node availability
-  * Stop rsc1:1 ( Unpromoted node2 ) due to node availability
+  * Stop rsc1:0 ( Promoted node1 ) due to node availability
+  * Stop rsc1:1 ( Unpromoted node2 ) due to node availability
 
 Executing Cluster Transition:
   * Pseudo action: ms1_demote_0
   * Resource action: rsc1:1 demote on node1
   * Pseudo action: ms1_demoted_0
   * Pseudo action: ms1_stop_0
   * Resource action: rsc1:1 stop on node1
   * Resource action: rsc1:0 stop on node2
   * Pseudo action: ms1_stopped_0
 
 Revised Cluster Status:
   * Node List:
     * Online: [ node1 node2 ]
 
   * Full List of Resources:
     * rsc_stonith (stonith:null): Started node1
     * Clone Set: ms1 [rsc1] (promotable):
       * Stopped: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-promoted-2.summary b/cts/scheduler/summary/ticket-promoted-2.summary
index 4da760a8ac..dc67f96156 100644
--- a/cts/scheduler/summary/ticket-promoted-2.summary
+++ b/cts/scheduler/summary/ticket-promoted-2.summary
@@ -1,31 +1,31 @@
 Current cluster status:
   * Node List:
     * Online: [ node1 node2 ]
 
   * Full List of Resources:
     * rsc_stonith (stonith:null): Started node1
     * Clone Set: ms1 [rsc1] (promotable):
       * Stopped: [ node1 node2 ]
 
 Transition Summary:
-  * Start rsc1:0 ( node2 )
+  * Start rsc1:0 ( node2 )
   * Promote rsc1:1 ( Stopped -> Promoted node1 )
 
 Executing Cluster Transition:
   * Pseudo action: ms1_start_0
   * Resource action: rsc1:0 start on node2
   * Resource action: rsc1:1 start on node1
   * Pseudo action: ms1_running_0
   * Pseudo action: ms1_promote_0
   * Resource action: rsc1:1 promote on node1
   * Pseudo action: ms1_promoted_0
 
 Revised Cluster Status:
   * Node List:
     * Online: [ node1 node2 ]
 
   * Full List of Resources:
     * rsc_stonith (stonith:null): Started node1
     * Clone Set: ms1 [rsc1] (promotable):
       * Promoted: [ node1 ]
       * Unpromoted: [ node2 ]
diff --git a/cts/scheduler/summary/ticket-promoted-21.summary b/cts/scheduler/summary/ticket-promoted-21.summary
index 788573facb..f116a2eea0 100644
--- a/cts/scheduler/summary/ticket-promoted-21.summary
+++ b/cts/scheduler/summary/ticket-promoted-21.summary
@@ -1,36 +1,36 @@
 Current cluster status:
   * Node List:
     * Online: [ node1 node2 ]
 
   * Full List of Resources:
     * rsc_stonith (stonith:null): Started node1
     * Clone Set: ms1 [rsc1] (promotable):
       * Promoted: [ node1 ]
       * Unpromoted: [ node2 ]
 
 Transition Summary:
   * Fence (reboot) node1 'deadman ticket was lost'
   * Move rsc_stonith ( node1 -> node2 )
-  * Stop rsc1:0 ( Promoted node1 ) due to node availability
+  * Stop rsc1:0 ( Promoted node1 ) due to node availability
 
 Executing Cluster Transition:
   * Pseudo action: rsc_stonith_stop_0
   * Pseudo action: ms1_demote_0
   * Fencing node1 (reboot)
   * Resource action: rsc_stonith start on node2
   * Pseudo action: rsc1:1_demote_0
   * Pseudo action: ms1_demoted_0
   * Pseudo action: ms1_stop_0
   * Pseudo action: rsc1:1_stop_0
   * Pseudo action: ms1_stopped_0
 
 Revised Cluster Status:
   * Node List:
     * Online: [ node2 ]
     * OFFLINE: [ node1 ]
 
   * Full List of Resources:
     * rsc_stonith (stonith:null): Started node2
     * Clone Set: ms1 [rsc1] (promotable):
       * Unpromoted: [ node2 ]
       * Stopped: [ node1 ]
diff --git a/cts/scheduler/summary/ticket-promoted-3.summary b/cts/scheduler/summary/ticket-promoted-3.summary
index 80ff84346b..ee8912b2e9 100644
--- a/cts/scheduler/summary/ticket-promoted-3.summary
+++ b/cts/scheduler/summary/ticket-promoted-3.summary
@@ -1,31 +1,31 @@
 Current cluster status:
   * Node List:
     * Online: [ node1 node2 ]
 
   * Full List of Resources:
     * rsc_stonith (stonith:null): Started node1
     * Clone Set: ms1 [rsc1] (promotable):
       * Promoted: [ node1 ]
       * Unpromoted: [ node2 ]
 
 Transition Summary:
-  * Stop rsc1:0 ( Promoted node1 ) due to node availability
-  * Stop rsc1:1 ( Unpromoted node2 ) due to node availability
+  * Stop rsc1:0 ( Promoted node1 ) due to node availability
+  * Stop rsc1:1 ( Unpromoted node2 ) due to node availability
 
 Executing Cluster Transition:
   * Pseudo action: ms1_demote_0
   * Resource action: rsc1:1 demote on node1
   * Pseudo action: ms1_demoted_0
   * Pseudo action: ms1_stop_0
   * Resource action: rsc1:1 stop on node1
   * Resource action: rsc1:0 stop on node2
   * Pseudo action: ms1_stopped_0
 
 Revised Cluster Status:
   * Node List:
     * Online: [ node1 node2 ]
 
   * Full List of Resources:
     * rsc_stonith (stonith:null): Started node1
     * Clone Set: ms1 [rsc1] (promotable):
       * Stopped: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-promoted-9.summary b/cts/scheduler/summary/ticket-promoted-9.summary
index 788573facb..f116a2eea0 100644
--- a/cts/scheduler/summary/ticket-promoted-9.summary
+++ b/cts/scheduler/summary/ticket-promoted-9.summary
@@ -1,36 +1,36 @@
 Current cluster status:
   * Node List:
     * Online: [ node1 node2 ]
 
   * Full List of Resources:
     * rsc_stonith (stonith:null): Started node1
     * Clone Set: ms1 [rsc1] (promotable):
       * Promoted: [ node1 ]
       * Unpromoted: [ node2 ]
 
 Transition Summary:
   * Fence (reboot) node1 'deadman ticket was lost'
   * Move rsc_stonith ( node1 -> node2 )
-  * Stop rsc1:0 ( Promoted node1 ) due to node availability
+  * Stop rsc1:0 ( Promoted node1 ) due to node availability
 
 Executing Cluster Transition:
   * Pseudo action: rsc_stonith_stop_0
   * Pseudo action: ms1_demote_0
   * Fencing node1 (reboot)
   * Resource action: rsc_stonith start on node2
   * Pseudo action: rsc1:1_demote_0
   * Pseudo action: ms1_demoted_0
   * Pseudo action: ms1_stop_0
   * Pseudo action: rsc1:1_stop_0
   * Pseudo action: ms1_stopped_0
 
 Revised Cluster Status:
   * Node List:
     * Online: [ node2 ]
     * OFFLINE: [ node1 ]
 
   * Full List of Resources:
     * rsc_stonith (stonith:null): Started node2
     * Clone Set: ms1 [rsc1] (promotable):
       * Unpromoted: [ node2 ]
       * Stopped: [ node1 ]
diff --git a/cts/scheduler/summary/ticket-rsc-sets-10.summary b/cts/scheduler/summary/ticket-rsc-sets-10.summary
index acf79003f8..3bc9d648ac 100644
--- a/cts/scheduler/summary/ticket-rsc-sets-10.summary
+++ b/cts/scheduler/summary/ticket-rsc-sets-10.summary
@@ -1,52 +1,52 @@
 Current cluster status:
   * Node List:
     * Online: [ node1 node2 ]
 
   * Full List of Resources:
     * rsc_stonith (stonith:null): Started node1
     * rsc1 (ocf:pacemaker:Dummy): Started node2
     * Resource Group: group2:
       * rsc2 (ocf:pacemaker:Dummy): Started node1
       * rsc3 (ocf:pacemaker:Dummy): Started node1
     * Clone Set: clone4 [rsc4]:
       * Started: [ node1 node2 ]
     * Clone Set: ms5 [rsc5] (promotable):
       * Promoted: [ node1 ]
       * Unpromoted: [ node2 ]
 
 Transition Summary:
-  * Stop rsc1 ( node2 ) due to node availability
-  * Stop rsc2 ( node1 ) due to node availability
-  * Stop rsc3 ( node1 ) due to node availability
-  * Stop rsc4:0 ( node1 ) due to node availability
-  * Stop rsc4:1 ( node2 ) due to node availability
+  * Stop rsc1 ( node2 ) due to node availability
+  * Stop rsc2 ( node1 ) due to node availability
+  * Stop rsc3 ( node1 ) due to node availability
+  * Stop rsc4:0 ( node1 ) due to node availability
+  * Stop rsc4:1 ( node2 ) due to node availability
   * Demote rsc5:0 ( Promoted -> Unpromoted node1 )
 
 Executing Cluster Transition:
   * Resource action: rsc1 stop on node2
   * Pseudo action: group2_stop_0
   * Resource action: rsc3 stop on node1
   * Pseudo action: clone4_stop_0
   * Pseudo action: ms5_demote_0
   * Resource action: rsc2 stop on node1
   * Resource action: rsc4:1 stop on node1
   * Resource action: rsc4:0 stop on node2
   * Pseudo action: clone4_stopped_0
   * Resource action: rsc5:1 demote on node1
   * Pseudo action: ms5_demoted_0
   * Pseudo action: group2_stopped_0
 
 Revised Cluster Status:
   * Node List:
     * Online: [ node1 node2 ]
 
   * Full List of Resources:
     * rsc_stonith (stonith:null): Started node1
     * rsc1 (ocf:pacemaker:Dummy): Stopped
     * Resource Group: group2:
       * rsc2 (ocf:pacemaker:Dummy): Stopped
       * rsc3 (ocf:pacemaker:Dummy): Stopped
     * Clone Set: clone4 [rsc4]:
       * Stopped: [ node1 node2 ]
     * Clone Set: ms5 [rsc5] (promotable):
       * Unpromoted: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-rsc-sets-13.summary b/cts/scheduler/summary/ticket-rsc-sets-13.summary
index acf79003f8..3bc9d648ac 100644
--- a/cts/scheduler/summary/ticket-rsc-sets-13.summary
+++ b/cts/scheduler/summary/ticket-rsc-sets-13.summary
@@ -1,52 +1,52 @@
 Current cluster status:
   * Node List:
     * Online: [ node1 node2 ]
 
   * Full List of Resources:
     * rsc_stonith (stonith:null): Started node1
     * rsc1 (ocf:pacemaker:Dummy): Started node2
     * Resource Group: group2:
       * rsc2 (ocf:pacemaker:Dummy): Started node1
       * rsc3 (ocf:pacemaker:Dummy): Started node1
     * Clone Set: clone4 [rsc4]:
       * Started: [ node1 node2 ]
     * Clone Set: ms5 [rsc5] (promotable):
       * Promoted: [ node1 ]
       * Unpromoted: [ node2 ]
 
 Transition Summary:
-  * Stop rsc1 ( node2 ) due to node availability
-  * Stop rsc2 ( node1 ) due to node availability
-  * Stop rsc3 ( node1 ) due to node availability
-  * Stop rsc4:0 ( node1 ) due to node availability
-  * Stop rsc4:1 ( node2 ) due to node availability
+  * Stop rsc1 ( node2 ) due to node availability
+  * Stop rsc2 ( node1 ) due to node availability
+  * Stop rsc3 ( node1 ) due to node availability
+  * Stop rsc4:0 ( node1 ) due to node availability
+  * Stop rsc4:1 ( node2 ) due to node availability
   * Demote rsc5:0 ( Promoted -> Unpromoted node1 )
 
 Executing Cluster Transition:
   * Resource action: rsc1 stop on node2
   * Pseudo action: group2_stop_0
   * Resource action: rsc3 stop on node1
   * Pseudo action: clone4_stop_0
   * Pseudo action: ms5_demote_0
   * Resource action: rsc2 stop on node1
   * Resource action: rsc4:1 stop on node1
   * Resource action: rsc4:0 stop on node2
   * Pseudo action: clone4_stopped_0
   * Resource action: rsc5:1 demote on node1
   * Pseudo action: ms5_demoted_0
   * Pseudo action: group2_stopped_0
 
 Revised Cluster Status:
   * Node List:
     * Online: [ node1 node2 ]
 
   * Full List of Resources:
     * rsc_stonith (stonith:null): Started node1
     * rsc1 (ocf:pacemaker:Dummy): Stopped
     * Resource Group: group2:
       * rsc2 (ocf:pacemaker:Dummy): Stopped
       * rsc3 (ocf:pacemaker:Dummy): Stopped
     * Clone Set: clone4 [rsc4]:
       * Stopped: [ node1 node2 ]
     * Clone Set: ms5 [rsc5] (promotable):
       * Unpromoted: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-rsc-sets-14.summary b/cts/scheduler/summary/ticket-rsc-sets-14.summary
index acf79003f8..3bc9d648ac 100644
--- a/cts/scheduler/summary/ticket-rsc-sets-14.summary
+++ b/cts/scheduler/summary/ticket-rsc-sets-14.summary
@@ -1,52 +1,52 @@
 Current cluster status:
   * Node List:
     * Online: [ node1 node2 ]
 
   * Full List of Resources:
     * rsc_stonith (stonith:null): Started node1
     * rsc1 (ocf:pacemaker:Dummy): Started node2
     * Resource Group: group2:
       * rsc2 (ocf:pacemaker:Dummy): Started node1
       * rsc3 (ocf:pacemaker:Dummy): Started node1
     * Clone Set: clone4 [rsc4]:
       * Started: [ node1 node2 ]
     * Clone Set: ms5 [rsc5] (promotable):
       * Promoted: [ node1 ]
       * Unpromoted: [ node2 ]
 
 Transition Summary:
-  * Stop rsc1 ( node2 ) due to node availability
-  * Stop rsc2 ( node1 ) due to node availability
-  * Stop rsc3 ( node1 ) due to node availability
-  * Stop rsc4:0 ( node1 ) due to node availability
-  * Stop rsc4:1 ( node2 ) due to node availability
+  * Stop rsc1 ( node2 ) due to node availability
+  * Stop rsc2 ( node1 ) due to node availability
+  * Stop rsc3 ( node1 ) due to node availability
+  * Stop rsc4:0 ( node1 ) due to node availability
+  * Stop rsc4:1 ( node2 ) due to node availability
   * Demote rsc5:0 ( Promoted -> Unpromoted node1 )
 
 Executing Cluster Transition:
   * Resource action: rsc1 stop on node2
   * Pseudo action: group2_stop_0
   * Resource action: rsc3 stop on node1
   * Pseudo action: clone4_stop_0
   * Pseudo action: ms5_demote_0
   * Resource action: rsc2 stop on node1
   * Resource action: rsc4:1 stop on node1
   * Resource action: rsc4:0 stop on node2
   * Pseudo action: clone4_stopped_0
   * Resource action: rsc5:1 demote on node1
   * Pseudo action: ms5_demoted_0
   * Pseudo action: group2_stopped_0
 
 Revised Cluster Status:
   * Node List:
     * Online: [ node1 node2 ]
 
   * Full List of Resources:
     * rsc_stonith (stonith:null): Started node1
     * rsc1 (ocf:pacemaker:Dummy): Stopped
     * Resource Group: group2:
       * rsc2 (ocf:pacemaker:Dummy): Stopped
       * rsc3 (ocf:pacemaker:Dummy): Stopped
     * Clone Set: clone4 [rsc4]:
       * Stopped: [ node1 node2 ]
     * Clone Set: ms5 [rsc5] (promotable):
       * Unpromoted: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-rsc-sets-2.summary b/cts/scheduler/summary/ticket-rsc-sets-2.summary
index 673d205880..fccf3cad1b 100644
--- a/cts/scheduler/summary/ticket-rsc-sets-2.summary
+++ b/cts/scheduler/summary/ticket-rsc-sets-2.summary
@@ -1,57 +1,57 @@
 Current cluster status:
   * Node List:
     * Online: [ node1 node2 ]
 
   * Full List of Resources:
     * rsc_stonith (stonith:null): Started node1
     * rsc1 (ocf:pacemaker:Dummy): Stopped
     * Resource Group: group2:
       * rsc2 (ocf:pacemaker:Dummy): Stopped
       * rsc3 (ocf:pacemaker:Dummy): Stopped
     * Clone Set: clone4 [rsc4]:
       * Stopped: [ node1 node2 ]
     * Clone Set: ms5 [rsc5] (promotable):
       * Unpromoted: [ node1 node2 ]
 
 Transition Summary:
-  * Start rsc1 ( node2 )
-  * Start rsc2 ( node1 )
-  * Start rsc3 ( node1 )
-  * Start rsc4:0 ( node2 )
-  * Start rsc4:1 ( node1 )
+  * Start rsc1 ( node2 )
+  * Start rsc2 ( node1 )
+  * Start rsc3 ( node1 )
+  * Start rsc4:0 ( node2 )
+  * Start rsc4:1 ( node1 )
   * Promote rsc5:0 ( Unpromoted -> Promoted node1 )
 
 Executing Cluster Transition:
   * Resource action: rsc1 start on node2
   * Pseudo action: group2_start_0
   * Resource action: rsc2 start on node1
   * Resource action: rsc3 start on node1
   * Pseudo action: clone4_start_0
   * Pseudo action: ms5_promote_0
   * Resource action: rsc1 monitor=10000 on node2
   * Pseudo action: group2_running_0
   * Resource action: rsc2 monitor=5000 on node1
   * Resource action: rsc3 monitor=5000 on node1
   * Resource action: rsc4:0 start on node2
   * Resource action: rsc4:1 start on node1
   * Pseudo action: clone4_running_0
   * Resource action: rsc5:1 promote on node1
   * Pseudo action: ms5_promoted_0
   * Resource action: rsc4:0 monitor=5000 on node2
   * Resource action: rsc4:1 monitor=5000 on node1
 
 Revised Cluster Status:
   * Node List:
     * Online: [ node1 node2 ]
 
   * Full List of Resources:
     * rsc_stonith (stonith:null): Started node1
     * rsc1 (ocf:pacemaker:Dummy): Started node2
     * Resource Group: group2:
       * rsc2 (ocf:pacemaker:Dummy): Started node1
       * rsc3 (ocf:pacemaker:Dummy): Started node1
     * Clone Set: clone4 [rsc4]:
       * Started: [ node1 node2 ]
     * Clone Set: ms5 [rsc5] (promotable):
       * Promoted: [ node1 ]
       * Unpromoted: [ node2 ]
diff --git a/cts/scheduler/summary/ticket-rsc-sets-3.summary b/cts/scheduler/summary/ticket-rsc-sets-3.summary
index acf79003f8..3bc9d648ac 100644
--- a/cts/scheduler/summary/ticket-rsc-sets-3.summary
+++ b/cts/scheduler/summary/ticket-rsc-sets-3.summary
@@ -1,52 +1,52 @@
 Current cluster status:
   * Node List:
     * Online: [ node1 node2 ]
 
   * Full List of Resources:
     * rsc_stonith (stonith:null): Started node1
     * rsc1 (ocf:pacemaker:Dummy): Started node2
     * Resource Group: group2:
       * rsc2 (ocf:pacemaker:Dummy): Started node1
       * rsc3 (ocf:pacemaker:Dummy): Started node1
     * Clone Set: clone4 [rsc4]:
       * Started: [ node1 node2 ]
     * Clone Set: ms5 [rsc5] (promotable):
       * Promoted: [ node1 ]
       * Unpromoted: [ node2 ]
 
 Transition Summary:
-  * Stop rsc1 ( node2 ) due to node availability
-  * Stop rsc2 ( node1 ) due to node availability
-  * Stop rsc3 ( node1 ) due to node availability
-  * Stop rsc4:0 ( node1 ) due to node availability
-  * Stop rsc4:1 ( node2 ) due to node availability
+  * Stop rsc1 ( node2 ) due to node availability
+  * Stop rsc2 ( node1 ) due to node availability
+  * Stop rsc3 ( node1 ) due to node availability
+  * Stop rsc4:0 ( node1 ) due to node availability
+  * Stop rsc4:1 ( node2 ) due to node availability
   * Demote rsc5:0 ( Promoted -> Unpromoted node1 )
 
 Executing Cluster Transition:
   * Resource action: rsc1 stop on node2
   * Pseudo action: group2_stop_0
   * Resource action: rsc3 stop on node1
   * Pseudo action: clone4_stop_0
   * Pseudo action: ms5_demote_0
   * Resource action: rsc2 stop on node1
   * Resource action: rsc4:1 stop on node1
   * Resource action: rsc4:0 stop on node2
   * Pseudo action: clone4_stopped_0
   * Resource action: rsc5:1 demote on node1
   * Pseudo action: ms5_demoted_0
   * Pseudo action: group2_stopped_0
 
 Revised Cluster Status:
   * Node List:
     * Online: [ node1 node2 ]
 
   * Full List of Resources:
     * rsc_stonith (stonith:null): Started node1
     * rsc1 (ocf:pacemaker:Dummy): Stopped
     * Resource Group: group2:
       * rsc2 (ocf:pacemaker:Dummy): Stopped
       * rsc3 (ocf:pacemaker:Dummy): Stopped
     * Clone Set: clone4 [rsc4]:
       * Stopped: [ node1 node2 ]
     * Clone Set: ms5 [rsc5] (promotable):
       * Unpromoted: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-rsc-sets-6.summary b/cts/scheduler/summary/ticket-rsc-sets-6.summary
index 651c55dccb..7336f70db3 100644
--- a/cts/scheduler/summary/ticket-rsc-sets-6.summary
+++ b/cts/scheduler/summary/ticket-rsc-sets-6.summary
@@ -1,46 +1,46 @@
 Current cluster status:
   * Node List:
     * Online: [ node1 node2 ]
 
   * Full List of Resources:
     * rsc_stonith (stonith:null): Started node1
     * rsc1 (ocf:pacemaker:Dummy): Started node2
     * Resource Group: group2:
       * rsc2 (ocf:pacemaker:Dummy): Started node1
       * rsc3 (ocf:pacemaker:Dummy): Started node1
     * Clone Set: clone4 [rsc4]:
       * Stopped: [ node1 node2 ]
     * Clone Set: ms5 [rsc5] (promotable):
       * Unpromoted: [ node1 node2 ]
 
 Transition Summary:
-  * Start rsc4:0 ( node2 )
-  * Start rsc4:1 ( node1 )
+  * Start rsc4:0 ( node2 )
+  * Start rsc4:1 ( node1 )
   * Promote rsc5:0 ( Unpromoted -> Promoted node1 )
 
 Executing Cluster Transition:
   * Pseudo action: clone4_start_0
   * Pseudo action: ms5_promote_0
   * Resource action: rsc4:0 start on node2
   * Resource action: rsc4:1 start on node1
   * Pseudo action: clone4_running_0
   * Resource action: rsc5:1 promote on node1
   * Pseudo action: ms5_promoted_0
   * Resource action: rsc4:0 monitor=5000 on node2
   * Resource action: rsc4:1 monitor=5000 on node1
 
 Revised Cluster Status:
   * Node List:
     * Online: [ node1 node2 ]
 
   * Full List of Resources:
     * rsc_stonith (stonith:null): Started node1
     * rsc1 (ocf:pacemaker:Dummy): Started node2
     * Resource Group: group2:
       * rsc2 (ocf:pacemaker:Dummy): Started node1
       * rsc3 (ocf:pacemaker:Dummy): Started node1
     * Clone Set: clone4 [rsc4]:
       * Started: [ node1 node2 ]
     * Clone Set: ms5 [rsc5] (promotable):
       * Promoted: [ node1 ]
       * Unpromoted: [ node2 ]
diff --git a/cts/scheduler/summary/ticket-rsc-sets-7.summary b/cts/scheduler/summary/ticket-rsc-sets-7.summary
index acf79003f8..3bc9d648ac 100644
--- a/cts/scheduler/summary/ticket-rsc-sets-7.summary
+++ b/cts/scheduler/summary/ticket-rsc-sets-7.summary
@@ -1,52 +1,52 @@
 Current cluster status:
   * Node List:
     * Online: [ node1 node2 ]
 
   * Full List of Resources:
     * rsc_stonith (stonith:null): Started node1
     * rsc1 (ocf:pacemaker:Dummy): Started node2
     * Resource Group: group2:
       * rsc2 (ocf:pacemaker:Dummy): Started node1
       * rsc3 (ocf:pacemaker:Dummy): Started node1
     * Clone Set: clone4 [rsc4]:
       * Started: [ node1 node2 ]
     * Clone Set: ms5 [rsc5] (promotable):
       * Promoted: [ node1 ]
       * Unpromoted: [ node2 ]
 
 Transition Summary:
-  * Stop rsc1 ( node2 ) due to node availability
-  * Stop rsc2 ( node1 ) due to node availability
-  * Stop rsc3 ( node1 ) due to node availability
-  * Stop rsc4:0 ( node1 ) due to node availability
-  * Stop rsc4:1 ( node2 ) due to node availability
+  * Stop rsc1 ( node2 ) due to node availability
+  * Stop rsc2 ( node1 ) due to node availability
+  * Stop rsc3 ( node1 ) due to node availability
+  * Stop rsc4:0 ( node1 ) due to node availability
+  * Stop rsc4:1 ( node2 ) due to node availability
   * Demote rsc5:0 ( Promoted -> Unpromoted node1 )
 
 Executing Cluster Transition:
   * Resource action: rsc1 stop on node2
   * Pseudo action: group2_stop_0
   * Resource action: rsc3 stop on node1
   * Pseudo action: clone4_stop_0
   * Pseudo action: ms5_demote_0
   * Resource action: rsc2 stop on node1
   * Resource action: rsc4:1 stop on node1
   * Resource action: rsc4:0 stop on node2
   * Pseudo action: clone4_stopped_0
   * Resource action: rsc5:1 demote on node1
   * Pseudo action: ms5_demoted_0
   * Pseudo action: group2_stopped_0
 
 Revised Cluster Status:
   * Node List:
     * Online: [ node1 node2 ]
 
   * Full List of Resources:
     * rsc_stonith (stonith:null): Started node1
     * rsc1 (ocf:pacemaker:Dummy): Stopped
     * Resource Group: group2:
       * rsc2 (ocf:pacemaker:Dummy): Stopped
       * rsc3 (ocf:pacemaker:Dummy): Stopped
     * Clone Set: clone4 [rsc4]:
       * Stopped: [ node1 node2 ]
     * Clone Set: ms5 [rsc5] (promotable):
       * Unpromoted: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-rsc-sets-9.summary b/cts/scheduler/summary/ticket-rsc-sets-9.summary
index acf79003f8..3bc9d648ac 100644
--- a/cts/scheduler/summary/ticket-rsc-sets-9.summary
+++ b/cts/scheduler/summary/ticket-rsc-sets-9.summary
@@ -1,52 +1,52 @@
 Current cluster status:
   * Node List:
     * Online: [ node1 node2 ]
 
   * Full List of Resources:
     * rsc_stonith (stonith:null): Started node1
     * rsc1 (ocf:pacemaker:Dummy): Started node2
     * Resource Group: group2:
       * rsc2 (ocf:pacemaker:Dummy): Started node1
       * rsc3 (ocf:pacemaker:Dummy): Started node1
     * Clone Set: clone4 [rsc4]:
       * Started: [ node1 node2 ]
     * Clone Set: ms5 [rsc5] (promotable):
       * Promoted: [ node1 ]
       * Unpromoted: [ node2 ]
 
 Transition Summary:
-  * Stop rsc1 ( node2 ) due to node availability
-  * Stop rsc2 ( node1 ) due to node availability
-  * Stop rsc3 ( node1 ) due to node availability
-  * Stop rsc4:0 ( node1 ) due to node availability
-  * Stop rsc4:1 ( node2 ) due to node availability
+  * Stop rsc1 ( node2 ) due to node availability
+  * Stop rsc2 ( node1 ) due to node availability
+  * Stop rsc3 ( node1 ) due to node availability
+  * Stop rsc4:0 ( node1 ) due to node availability
+  * Stop rsc4:1 ( node2 ) due to node availability
   * Demote rsc5:0 ( Promoted -> Unpromoted node1 )
 
 Executing Cluster Transition:
   * Resource action: rsc1 stop on node2
   * Pseudo action: group2_stop_0
   * Resource action: rsc3 stop on node1
   * Pseudo action: clone4_stop_0
   * Pseudo action: ms5_demote_0
   * Resource action: rsc2 stop on node1
   * Resource action: rsc4:1 stop on node1
   * Resource action: rsc4:0 stop on node2
   * Pseudo action: clone4_stopped_0
   * Resource action: rsc5:1 demote on node1
   * Pseudo action: ms5_demoted_0
   * Pseudo action: group2_stopped_0
 
 Revised Cluster Status:
   * Node List:
     * Online: [ node1 node2 ]
 
   * Full List of Resources:
     * rsc_stonith (stonith:null): Started node1
     * rsc1 (ocf:pacemaker:Dummy): Stopped
     * Resource Group: group2:
       * rsc2 (ocf:pacemaker:Dummy): Stopped
       * rsc3 (ocf:pacemaker:Dummy): Stopped
     * Clone Set: clone4 [rsc4]:
       * Stopped: [ node1 node2 ]
     * Clone Set: ms5 [rsc5] (promotable):
       * Unpromoted: [ node1 node2 ]
diff --git a/cts/scheduler/summary/whitebox-ms-ordering-move.summary b/cts/scheduler/summary/whitebox-ms-ordering-move.summary
index c9b13e032d..6a5fb6eaeb 100644
--- a/cts/scheduler/summary/whitebox-ms-ordering-move.summary
+++ b/cts/scheduler/summary/whitebox-ms-ordering-move.summary
@@ -1,107 +1,107 @@
 Current cluster status:
   * Node List:
     * Online: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]
     * GuestOnline: [ lxc1@rhel7-1 lxc2@rhel7-1 ]
 
   * Full List of Resources:
     * Fencing (stonith:fence_xvm): Started rhel7-3
     * FencingPass (stonith:fence_dummy): Started rhel7-4
     * FencingFail (stonith:fence_dummy): Started rhel7-5
     * rsc_rhel7-1 (ocf:heartbeat:IPaddr2): Started rhel7-1
     * rsc_rhel7-2 (ocf:heartbeat:IPaddr2): Started rhel7-2
     * rsc_rhel7-3 (ocf:heartbeat:IPaddr2): Started rhel7-3
     * rsc_rhel7-4 (ocf:heartbeat:IPaddr2): Started rhel7-4
     * rsc_rhel7-5 (ocf:heartbeat:IPaddr2): Started rhel7-5
     * migrator (ocf:pacemaker:Dummy): Started rhel7-4
     * Clone Set: Connectivity [ping-1]:
       * Started: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]
       * Stopped: [ lxc1 lxc2 ]
     * Clone Set: master-1 [stateful-1] (promotable):
       * Promoted: [ rhel7-3 ]
       * Unpromoted: [ rhel7-1 rhel7-2 rhel7-4 rhel7-5 ]
     * Resource Group: group-1:
       * r192.168.122.207 (ocf:heartbeat:IPaddr2): Started rhel7-3
       * petulant (service:DummySD): Started rhel7-3
       * r192.168.122.208 (ocf:heartbeat:IPaddr2): Started rhel7-3
     * lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Started rhel7-3
     * container1 (ocf:heartbeat:VirtualDomain): Started rhel7-1
     * container2 (ocf:heartbeat:VirtualDomain): Started rhel7-1
     * Clone Set: lxc-ms-master [lxc-ms] (promotable):
       * Promoted: [ lxc1 ]
       * Unpromoted: [ lxc2 ]
 
 Transition Summary:
   * Move container1 ( rhel7-1 -> rhel7-2 )
-  * Restart lxc-ms:0 ( Promoted lxc1 ) due to required container1 start
+  * Restart lxc-ms:0 ( Promoted lxc1 ) due to required container1 start
   * Move lxc1 ( rhel7-1 -> rhel7-2 )
 
 Executing Cluster Transition:
   * Resource action: rsc_rhel7-1 monitor on lxc2
   * Resource action: rsc_rhel7-2 monitor on lxc2
   * Resource action: rsc_rhel7-3 monitor on lxc2
   * Resource action: rsc_rhel7-4 monitor on lxc2
   * Resource action: rsc_rhel7-5 monitor on lxc2
   * Resource action: migrator monitor on lxc2
   * Resource action: ping-1 monitor on lxc2
   * Resource action: stateful-1 monitor on lxc2
   * Resource action: r192.168.122.207 monitor on lxc2
   * Resource action: petulant monitor on lxc2
   * Resource action: r192.168.122.208 monitor on lxc2
   * Resource action: lsb-dummy monitor on lxc2
   * Pseudo action: lxc-ms-master_demote_0
   * Resource action: lxc1 monitor on rhel7-5
   * Resource action: lxc1 monitor on rhel7-4
   * Resource action: lxc1 monitor on rhel7-3
   * Resource action: lxc1 monitor on rhel7-2
   * Resource action: lxc2 monitor on rhel7-5
   * Resource action: lxc2 monitor on rhel7-4
   * Resource action: lxc2 monitor on rhel7-3
   * Resource action: lxc2 monitor on rhel7-2
   * Resource action: lxc-ms demote on lxc1
   * Pseudo action: lxc-ms-master_demoted_0
   * Pseudo action: lxc-ms-master_stop_0
   * Resource action: lxc-ms stop on lxc1
   * Pseudo action: lxc-ms-master_stopped_0
   * Pseudo action: lxc-ms-master_start_0
   * Resource action: lxc1 stop on rhel7-1
   * Resource action: container1 stop on rhel7-1
   * Resource action: container1 start on rhel7-2
   * Resource action: lxc1 start on rhel7-2
   * Resource action: lxc-ms start on lxc1
   * Pseudo action: lxc-ms-master_running_0
   * Resource action: lxc1 monitor=30000 on rhel7-2
   * Pseudo action: lxc-ms-master_promote_0
   * Resource action: lxc-ms promote on lxc1
   * Pseudo action: lxc-ms-master_promoted_0
 
 Revised Cluster Status:
   * Node List:
     * Online: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]
     * GuestOnline: [ lxc1@rhel7-2 lxc2@rhel7-1 ]
 
   * Full List of Resources:
     * Fencing (stonith:fence_xvm): Started rhel7-3
     * FencingPass (stonith:fence_dummy): Started rhel7-4
     * FencingFail (stonith:fence_dummy): Started rhel7-5
     * rsc_rhel7-1 (ocf:heartbeat:IPaddr2): Started rhel7-1
     * rsc_rhel7-2 (ocf:heartbeat:IPaddr2): Started rhel7-2
     * rsc_rhel7-3 (ocf:heartbeat:IPaddr2): Started rhel7-3
     * rsc_rhel7-4 (ocf:heartbeat:IPaddr2): Started rhel7-4
     * rsc_rhel7-5 (ocf:heartbeat:IPaddr2): Started rhel7-5
     * migrator (ocf:pacemaker:Dummy): Started rhel7-4
     * Clone Set: Connectivity [ping-1]:
       * Started: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]
       * Stopped: [ lxc1 lxc2 ]
     * Clone Set: master-1 [stateful-1] (promotable):
       * Promoted: [ rhel7-3 ]
       * Unpromoted: [ rhel7-1 rhel7-2 rhel7-4 rhel7-5 ]
     * Resource Group: group-1:
       * r192.168.122.207 (ocf:heartbeat:IPaddr2): Started rhel7-3
       * petulant (service:DummySD): Started rhel7-3
       * r192.168.122.208 (ocf:heartbeat:IPaddr2): Started rhel7-3
     * lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Started rhel7-3
     * container1 (ocf:heartbeat:VirtualDomain): Started rhel7-2
     * container2 (ocf:heartbeat:VirtualDomain): Started rhel7-1
     * Clone Set: lxc-ms-master [lxc-ms] (promotable):
       * Promoted: [ lxc1 ]
       * Unpromoted: [ lxc2 ]
diff --git a/cts/scheduler/summary/whitebox-ms-ordering.summary b/cts/scheduler/summary/whitebox-ms-ordering.summary
index 4d23221fa6..066763f31d 100644
--- a/cts/scheduler/summary/whitebox-ms-ordering.summary
+++ b/cts/scheduler/summary/whitebox-ms-ordering.summary
@@ -1,73 +1,73 @@
 Current cluster status:
   * Node List:
     * Online: [ 18node1 18node2 18node3 ]
 
   * Full List of Resources:
     * shooter (stonith:fence_xvm): Started 18node2
     * container1 (ocf:heartbeat:VirtualDomain): FAILED
     * container2 (ocf:heartbeat:VirtualDomain): FAILED
     * Clone Set: lxc-ms-master [lxc-ms] (promotable):
       * Stopped: [ 18node1 18node2 18node3 ]
 
 Transition Summary:
   * Fence (reboot) lxc2 (resource: container2) 'guest is unclean'
   * Fence (reboot) lxc1 (resource: container1) 'guest is unclean'
-  * Start container1 ( 18node1 )
-  * Start container2 ( 18node1 )
-  * Recover lxc-ms:0 ( Promoted lxc1 )
-  * Recover lxc-ms:1 ( Unpromoted lxc2 )
-  * Start lxc1 ( 18node1 )
-  * Start lxc2 ( 18node1 )
+  * Start container1 ( 18node1 )
+  * Start container2 ( 18node1 )
+  * Recover lxc-ms:0 ( Promoted lxc1 )
+  * Recover lxc-ms:1 ( Unpromoted lxc2 )
+  * Start lxc1 ( 18node1 )
+  * Start lxc2 ( 18node1 )
 
 Executing Cluster Transition:
   * Resource action: container1 monitor on 18node3
   * Resource action: container1 monitor on 18node2
   * Resource action: container1 monitor on 18node1
   * Resource action: container2 monitor on 18node3
   * Resource action: container2 monitor on 18node2
   * Resource action: container2 monitor on 18node1
   * Resource action: lxc-ms monitor on 18node3
   * Resource action: lxc-ms monitor on 18node2
   * Resource action: lxc-ms monitor on 18node1
   * Pseudo action: lxc-ms-master_demote_0
   * Resource action: lxc1 monitor on 18node3
   * Resource action: lxc1 monitor on 18node2
   * Resource action: lxc1 monitor on 18node1
   * Resource action: lxc2 monitor on 18node3
   * Resource action: lxc2 monitor on 18node2
   * Resource action: lxc2 monitor on 18node1
   * Pseudo action: stonith-lxc2-reboot on lxc2
   * Pseudo action: stonith-lxc1-reboot on lxc1
   * Resource action: container1 start on 18node1
   * Resource action: container2 start on 18node1
   * Pseudo action: lxc-ms_demote_0
   * Pseudo action: lxc-ms-master_demoted_0
   * Pseudo action: lxc-ms-master_stop_0
   * Resource action: lxc1 start on 18node1
   * Resource action: lxc2 start on 18node1
   * Pseudo action: lxc-ms_stop_0
   * Pseudo action: lxc-ms_stop_0
   * Pseudo action: lxc-ms-master_stopped_0
   * Pseudo action: lxc-ms-master_start_0
   * Resource action: lxc1 monitor=30000 on 18node1
   * Resource action: lxc2 monitor=30000 on 18node1
   * Resource action: lxc-ms start on lxc1
   * Resource action: lxc-ms start on lxc2
   * Pseudo action: lxc-ms-master_running_0
   * Resource action: lxc-ms monitor=10000 on lxc2
   * Pseudo action: lxc-ms-master_promote_0
   * Resource action: lxc-ms promote on lxc1
   * Pseudo action: lxc-ms-master_promoted_0
 
 Revised Cluster Status:
   * Node List:
     * Online: [ 18node1 18node2 18node3 ]
     * GuestOnline: [ lxc1@18node1 lxc2@18node1 ]
 
   * Full List of Resources:
     * shooter (stonith:fence_xvm): Started 18node2
     * container1 (ocf:heartbeat:VirtualDomain): Started 18node1
     * container2 (ocf:heartbeat:VirtualDomain): Started 18node1
     * Clone Set: lxc-ms-master [lxc-ms] (promotable):
       * Promoted: [ lxc1 ]
       * Unpromoted: [ lxc2 ]
diff --git a/cts/scheduler/summary/whitebox-orphan-ms.summary b/cts/scheduler/summary/whitebox-orphan-ms.summary
index 7e1b45b272..0d0007dcc6 100644
--- a/cts/scheduler/summary/whitebox-orphan-ms.summary
+++ b/cts/scheduler/summary/whitebox-orphan-ms.summary
@@ -1,87 +1,87 @@
 Current cluster status:
   * Node List:
     * Online: [ 18node1 18node2 18node3 ]
     * GuestOnline: [ lxc1@18node1 lxc2@18node1 ]
 
   * Full List of Resources:
     * Fencing (stonith:fence_xvm): Started 18node2
     * FencingPass (stonith:fence_dummy): Started 18node3
     * FencingFail (stonith:fence_dummy): Started 18node3
     * rsc_18node1 (ocf:heartbeat:IPaddr2): Started 18node1
     * rsc_18node2 (ocf:heartbeat:IPaddr2): Started 18node2
     * rsc_18node3 (ocf:heartbeat:IPaddr2): Started 18node3
     * migrator (ocf:pacemaker:Dummy): Started 18node1
     * Clone Set: Connectivity [ping-1]:
       * Started: [ 18node1 18node2 18node3 ]
     * Clone Set: master-1 [stateful-1] (promotable):
       * Promoted: [ 18node1 ]
       * Unpromoted: [ 18node2 18node3 ]
     * Resource Group: group-1:
       * r192.168.122.87 (ocf:heartbeat:IPaddr2): Started 18node1
       * r192.168.122.88 (ocf:heartbeat:IPaddr2): Started 18node1
       * r192.168.122.89 (ocf:heartbeat:IPaddr2): Started 18node1
     * lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Started 18node1
     * container2 (ocf:heartbeat:VirtualDomain): ORPHANED Started 18node1
     * lxc1 (ocf:pacemaker:remote): ORPHANED Started 18node1
     * lxc-ms (ocf:pacemaker:Stateful): ORPHANED Promoted [ lxc1 lxc2 ]
     * lxc2 (ocf:pacemaker:remote): ORPHANED Started 18node1
     * container1 (ocf:heartbeat:VirtualDomain): ORPHANED Started 18node1
 
 Transition Summary:
   * Move FencingFail ( 18node3 -> 18node1 )
   * Stop container2 ( 18node1 ) due to node availability
   * Stop lxc1 ( 18node1 ) due to node availability
-  * Stop lxc-ms ( Promoted lxc1 ) due to node availability
-  * Stop lxc-ms ( Promoted lxc2 ) due to node availability
+  * Stop lxc-ms ( Promoted lxc1 ) due to node availability
+  * Stop lxc-ms ( Promoted lxc2 ) due to node availability
   * Stop lxc2 ( 18node1 ) due to node availability
   * Stop container1 ( 18node1 ) due to node availability
 
 Executing Cluster Transition:
   * Resource action: FencingFail stop on 18node3
   * Resource action: lxc-ms demote on lxc2
   * Resource action: lxc-ms demote on lxc1
   * Resource action: FencingFail start on 18node1
   * Resource action: lxc-ms stop on lxc2
   * Resource action: lxc-ms stop on lxc1
   * Resource action: lxc-ms delete on 18node3
   * Resource action: lxc-ms delete on 18node2
   * Resource action: lxc-ms delete on 18node1
   * Resource action: lxc2 stop on 18node1
   * Resource action: lxc2 delete on 18node3
   * Resource action: lxc2 delete on 18node2
   * Resource action: lxc2 delete on 18node1
   * Resource action: container2 stop on 18node1
   * Resource action: container2 delete on 18node3
   * Resource action: container2 delete on 18node2
   * Resource action: container2 delete on 18node1
   * Resource action: lxc1 stop on 18node1
   * Resource action: lxc1 delete on 18node3
   * Resource action: lxc1 delete on 18node2
   * Resource action: lxc1 delete on 18node1
   * Resource action: container1 stop on 18node1
   * Resource action: container1 delete on 18node3
   * Resource action: container1 delete on 18node2
   * Resource action: container1 delete on 18node1
 
 Revised Cluster Status:
   * Node List:
     * Online: [ 18node1 18node2 18node3 ]
 
   * Full List of Resources:
     * Fencing (stonith:fence_xvm): Started 18node2
     * FencingPass (stonith:fence_dummy): Started 18node3
     * FencingFail (stonith:fence_dummy): Started 18node1
     * rsc_18node1 (ocf:heartbeat:IPaddr2): Started 18node1
     * rsc_18node2 (ocf:heartbeat:IPaddr2): Started 18node2
     * rsc_18node3 (ocf:heartbeat:IPaddr2): Started 18node3
     * migrator (ocf:pacemaker:Dummy): Started 18node1
     * Clone Set: Connectivity [ping-1]:
       * Started: [ 18node1 18node2 18node3 ]
     * Clone Set: master-1 [stateful-1] (promotable):
       * Promoted: [ 18node1 ]
       * Unpromoted: [ 18node2 18node3 ]
     * Resource Group: group-1:
       * r192.168.122.87 (ocf:heartbeat:IPaddr2): Started 18node1
       * r192.168.122.88 (ocf:heartbeat:IPaddr2): Started 18node1
       * r192.168.122.89 (ocf:heartbeat:IPaddr2): Started 18node1
     * lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Started 18node1