diff --git a/cts/scheduler/a-demote-then-b-migrate.summary b/cts/scheduler/a-demote-then-b-migrate.summary
index 9e461e8dfc..a12399f99a 100644
--- a/cts/scheduler/a-demote-then-b-migrate.summary
+++ b/cts/scheduler/a-demote-then-b-migrate.summary
@@ -1,56 +1,56 @@
 Current cluster status:
 Online: [ node1 node2 ]

- Master/Slave Set: ms1 [rsc1]
+ Clone Set: ms1 [rsc1] (promotable)
      Masters: [ node1 ]
      Slaves: [ node2 ]
  rsc2 (ocf::pacemaker:Dummy): Started node1

 Transition Summary:
  * Demote rsc1:0 ( Master -> Slave node1 )
  * Promote rsc1:1 (Slave -> Master node2)
  * Migrate rsc2 ( node1 -> node2 )

 Executing cluster transition:
  * Resource action: rsc1:1 cancel=5000 on node1
  * Resource action: rsc1:0 cancel=10000 on node2
  * Pseudo action: ms1_pre_notify_demote_0
  * Resource action: rsc1:1 notify on node1
  * Resource action: rsc1:0 notify on node2
  * Pseudo action: ms1_confirmed-pre_notify_demote_0
  * Pseudo action: ms1_demote_0
  * Resource action: rsc1:1 demote on node1
  * Pseudo action: ms1_demoted_0
  * Pseudo action: ms1_post_notify_demoted_0
  * Resource action: rsc1:1 notify on node1
  * Resource action: rsc1:0 notify on node2
  * Pseudo action: ms1_confirmed-post_notify_demoted_0
  * Pseudo action: ms1_pre_notify_promote_0
  * Resource action: rsc2 migrate_to on node1
  * Resource action: rsc1:1 notify on node1
  * Resource action: rsc1:0 notify on node2
  * Pseudo action: ms1_confirmed-pre_notify_promote_0
  * Resource action: rsc2 migrate_from on node2
  * Resource action: rsc2 stop on node1
  * Pseudo action: all_stopped
  * Pseudo action: rsc2_start_0
  * Pseudo action: ms1_promote_0
  * Resource action: rsc2 monitor=5000 on node2
  * Resource action: rsc1:0 promote on node2
  * Pseudo action: ms1_promoted_0
  * Pseudo action: ms1_post_notify_promoted_0
  * Resource action: rsc1:1 notify on node1
  * Resource action: rsc1:0 notify on node2
  * Pseudo action: ms1_confirmed-post_notify_promoted_0
  * Resource action: rsc1:1 monitor=10000 on node1
  * Resource action: rsc1:0 monitor=5000 on node2

 Revised cluster status:
 Online: [ node1 node2 ]

- Master/Slave Set: ms1 [rsc1]
+ Clone Set: ms1 [rsc1] (promotable)
      Masters: [ node2 ]
      Slaves: [ node1 ]
  rsc2 (ocf::pacemaker:Dummy): Started node2
diff --git a/cts/scheduler/a-promote-then-b-migrate.summary b/cts/scheduler/a-promote-then-b-migrate.summary
index 166b7b0b09..5457fe6520 100644
--- a/cts/scheduler/a-promote-then-b-migrate.summary
+++ b/cts/scheduler/a-promote-then-b-migrate.summary
@@ -1,41 +1,41 @@
 Current cluster status:
 Online: [ node1 node2 ]

- Master/Slave Set: ms1 [rsc1]
+ Clone Set: ms1 [rsc1] (promotable)
      Masters: [ node1 ]
      Slaves: [ node2 ]
  rsc2 (ocf::pacemaker:Dummy): Started node1

 Transition Summary:
  * Promote rsc1:1 (Slave -> Master node2)
  * Migrate rsc2 ( node1 -> node2 )

 Executing cluster transition:
  * Resource action: rsc1:1 cancel=10000 on node2
  * Pseudo action: ms1_pre_notify_promote_0
  * Resource action: rsc1:0 notify on node1
  * Resource action: rsc1:1 notify on node2
  * Pseudo action: ms1_confirmed-pre_notify_promote_0
  * Pseudo action: ms1_promote_0
  * Resource action: rsc1:1 promote on node2
  * Pseudo action: ms1_promoted_0
  * Pseudo action: ms1_post_notify_promoted_0
  * Resource action: rsc1:0 notify on node1
  * Resource action: rsc1:1 notify on node2
  * Pseudo action: ms1_confirmed-post_notify_promoted_0
  * Resource action: rsc2 migrate_to on node1
  * Resource action: rsc1:1 monitor=5000 on node2
  * Resource action: rsc2 migrate_from on node2
  * Resource action: rsc2 stop on node1
  * Pseudo action: all_stopped
  * Pseudo action: rsc2_start_0
  * Resource action: rsc2 monitor=5000 on node2

 Revised cluster status:
 Online: [ node1 node2 ]

- Master/Slave Set: ms1 [rsc1]
+ Clone Set: ms1 [rsc1] (promotable)
      Masters: [ node1 node2 ]
  rsc2 (ocf::pacemaker:Dummy): Started node2
diff --git a/cts/scheduler/anon-instance-pending.summary b/cts/scheduler/anon-instance-pending.summary
index 6ee4e7df69..6a35e8eeda 100644
--- a/cts/scheduler/anon-instance-pending.summary
+++ b/cts/scheduler/anon-instance-pending.summary
@@ -1,223 +1,223 @@
 Current cluster status:
 Online: [ node1 node2 node3 node4 node5 node6 node7 node8 node9 node10 node11 ]

  Fencing (stonith:fence_imaginary): Started node1
- Master/Slave Set: clone1 [clone1rsc]
+ Clone Set: clone1 [clone1rsc] (promotable)
      clone1rsc (ocf::pacemaker:Stateful): Starting node4
      Masters: [ node3 ]
      Slaves: [ node1 node2 ]
      Stopped: [ node5 node6 node7 node8 node9 node10 node11 ]
  Clone Set: clone2 [clone2rsc]
      clone2rsc (ocf::pacemaker:Dummy): Starting node4
      Started: [ node2 ]
      Stopped: [ node1 node3 node5 node6 node7 node8 node9 node10 node11 ]
  Clone Set: clone3 [clone3rsc]
      Started: [ node3 ]
      Stopped: [ node1 node2 node4 node5 node6 node7 node8 node9 node10 node11 ]
  Clone Set: clone4 [clone4rsc]
      clone4rsc (ocf::pacemaker:Dummy): Stopping node8
      clone4rsc (ocf::pacemaker:Dummy): ORPHANED Started node9
      Started: [ node1 node5 node6 node7 ]
      Stopped: [ node2 node3 node4 node10 node11 ]
  Clone Set: clone5 [clone5group]
      Resource Group: clone5group:2
          clone5rsc1 (ocf::pacemaker:Dummy): Started node3
          clone5rsc2 (ocf::pacemaker:Dummy): Starting node3
          clone5rsc3 (ocf::pacemaker:Dummy): Stopped
      Started: [ node1 node2 ]
      Stopped: [ node4 node5 node6 node7 node8 node9 node10 node11 ]

 Transition Summary:
  * Start clone1rsc:4 ( node9 )
  * Start clone1rsc:5 ( node10 )
  * Start clone1rsc:6 ( node11 )
  * Start clone1rsc:7 ( node5 )
  * Start clone1rsc:8 ( node6 )
  * Start clone1rsc:9 ( node7 )
  * Start clone1rsc:10 ( node8 )
  * Start clone2rsc:2 ( node10 )
  * Start clone2rsc:3 ( node11 )
  * Start clone2rsc:4 ( node3 )
  * Start clone3rsc:1 ( node5 )
  * Start clone3rsc:2 ( node6 )
  * Start clone3rsc:3 ( node7 )
  * Start clone3rsc:4 ( node8 )
  * Start clone3rsc:5 ( node9 )
  * Start clone3rsc:6 ( node1 )
  * Start clone3rsc:7 ( node10 )
  * Start clone3rsc:8 ( node11 )
  * Start clone3rsc:9 ( node2 )
  * Start clone3rsc:10 ( node4 )
  * Stop clone4rsc:5 ( node9 ) due to node availability
  * Start clone5rsc3:2 ( node3 )
  * Start clone5rsc1:3 ( node9 )
  * Start clone5rsc2:3 ( node9 )
  * Start clone5rsc3:3 ( node9 )
  * Start clone5rsc1:4 ( node10 )
  * Start clone5rsc2:4 ( node10 )
  * Start clone5rsc3:4 ( node10 )
  * Start clone5rsc1:5 ( node11 )
  * Start clone5rsc2:5 ( node11 )
  * Start clone5rsc3:5 ( node11 )
  * Start clone5rsc1:6 ( node4 )
  * Start clone5rsc2:6 ( node4 )
  * Start clone5rsc3:6 ( node4 )
  * Start clone5rsc1:7 ( node5 )
  * Start clone5rsc2:7 ( node5 )
  * Start clone5rsc3:7 ( node5 )
  * Start clone5rsc1:8 ( node6 )
  * Start clone5rsc2:8 ( node6 )
  * Start clone5rsc3:8 ( node6 )
  * Start clone5rsc1:9 ( node7 )
  * Start clone5rsc2:9 ( node7 )
  * Start clone5rsc3:9 ( node7 )
  * Start clone5rsc1:10 ( node8 )
  * Start clone5rsc2:10 ( node8 )
  * Start clone5rsc3:10 ( node8 )

 Executing cluster transition:
  * Pseudo action: clone1_start_0
  * Pseudo action: clone2_start_0
  * Resource action: clone3rsc monitor on node2
  * Pseudo action: clone3_start_0
  * Pseudo action: clone4_stop_0
  * Pseudo action: clone5_start_0
  * Resource action: clone1rsc start on node4
  * Resource action: clone1rsc start on node9
  * Resource action: clone1rsc start on node10
  * Resource action: clone1rsc start on node11
  * Resource action: clone1rsc start on node5
  * Resource action: clone1rsc start on node6
  * Resource action: clone1rsc start on node7
  * Resource action: clone1rsc start on node8
  * Pseudo action: clone1_running_0
  * Resource action: clone2rsc start on node4
  * Resource action: clone2rsc start on node10
  * Resource action: clone2rsc start on node11
  * Resource action: clone2rsc start on node3
  * Pseudo action: clone2_running_0
  * Resource action: clone3rsc start on node5
  * Resource action: clone3rsc start on node6
  * Resource action: clone3rsc start on node7
  * Resource action: clone3rsc start on node8
  * Resource action: clone3rsc start on node9
  * Resource action: clone3rsc start on node1
  * Resource action: clone3rsc start on node10
  * Resource action: clone3rsc start on node11
  * Resource action: clone3rsc start on node2
  * Resource action: clone3rsc start on node4
  * Pseudo action: clone3_running_0
  * Resource action: clone4rsc stop on node9
  * Pseudo action: clone4_stopped_0
  * Pseudo action: clone5group:2_start_0
  * Resource action: clone5rsc2 start on node3
  * Resource action: clone5rsc3 start on node3
  * Pseudo action: clone5group:3_start_0
  * Resource action: clone5rsc1 start on node9
  * Resource action: clone5rsc2 start on node9
  * Resource action: clone5rsc3 start on node9
  * Pseudo action: clone5group:4_start_0
  * Resource action: clone5rsc1 start on node10
  * Resource action: clone5rsc2 start on node10
  * Resource action: clone5rsc3 start on node10
  * Pseudo action: clone5group:5_start_0
  * Resource action: clone5rsc1 start on node11
  * Resource action: clone5rsc2 start on node11
  * Resource action: clone5rsc3 start on node11
  * Pseudo action: clone5group:6_start_0
  * Resource action: clone5rsc1 start on node4
  * Resource action: clone5rsc2 start on node4
  * Resource action: clone5rsc3 start on node4
  * Pseudo action: clone5group:7_start_0
  * Resource action: clone5rsc1 start on node5
  * Resource action: clone5rsc2 start on node5
  * Resource action: clone5rsc3 start on node5
  * Pseudo action: clone5group:8_start_0
  * Resource action: clone5rsc1 start on node6
  * Resource action: clone5rsc2 start on node6
  * Resource action: clone5rsc3 start on node6
  * Pseudo action: clone5group:9_start_0
  * Resource action: clone5rsc1 start on node7
  * Resource action: clone5rsc2 start on node7
  * Resource action: clone5rsc3 start on node7
  * Pseudo action: clone5group:10_start_0
  * Resource action: clone5rsc1 start on node8
  * Resource action: clone5rsc2 start on node8
  * Resource action: clone5rsc3 start on node8
  * Pseudo action: all_stopped
  * Resource action: clone1rsc monitor=10000 on node4
  * Resource action: clone1rsc monitor=10000 on node9
  * Resource action: clone1rsc monitor=10000 on node10
  * Resource action: clone1rsc monitor=10000 on node11
  * Resource action: clone1rsc monitor=10000 on node5
  * Resource action: clone1rsc monitor=10000 on node6
  * Resource action: clone1rsc monitor=10000 on node7
  * Resource action: clone1rsc monitor=10000 on node8
  * Resource action: clone2rsc monitor=10000 on node4
  * Resource action: clone2rsc monitor=10000 on node10
  * Resource action: clone2rsc monitor=10000 on node11
  * Resource action: clone2rsc monitor=10000 on node3
  * Resource action: clone3rsc monitor=10000 on node5
  * Resource action: clone3rsc monitor=10000 on node6
  * Resource action: clone3rsc monitor=10000 on node7
  * Resource action: clone3rsc monitor=10000 on node8
  * Resource action: clone3rsc monitor=10000 on node9
  * Resource action: clone3rsc monitor=10000 on node1
  * Resource action: clone3rsc monitor=10000 on node10
  * Resource action: clone3rsc monitor=10000 on node11
  * Resource action: clone3rsc monitor=10000 on node2
  * Resource action: clone3rsc monitor=10000 on node4
  * Pseudo action: clone5group:2_running_0
  * Resource action: clone5rsc2 monitor=10000 on node3
  * Resource action: clone5rsc3 monitor=10000 on node3
  * Pseudo action: clone5group:3_running_0
  * Resource action: clone5rsc1 monitor=10000 on node9
  * Resource action: clone5rsc2 monitor=10000 on node9
  * Resource action: clone5rsc3 monitor=10000 on node9
  * Pseudo action: clone5group:4_running_0
  * Resource action: clone5rsc1 monitor=10000 on node10
  * Resource action: clone5rsc2 monitor=10000 on node10
  * Resource action: clone5rsc3 monitor=10000 on node10
  * Pseudo action: clone5group:5_running_0
  * Resource action: clone5rsc1 monitor=10000 on node11
  * Resource action: clone5rsc2 monitor=10000 on node11
  * Resource action: clone5rsc3 monitor=10000 on node11
  * Pseudo action: clone5group:6_running_0
  * Resource action: clone5rsc1 monitor=10000 on node4
  * Resource action: clone5rsc2 monitor=10000 on node4
  * Resource action: clone5rsc3 monitor=10000 on node4
  * Pseudo action: clone5group:7_running_0
  * Resource action: clone5rsc1 monitor=10000 on node5
  * Resource action: clone5rsc2 monitor=10000 on node5
  * Resource action: clone5rsc3 monitor=10000 on node5
  * Pseudo action: clone5group:8_running_0
  * Resource action: clone5rsc1 monitor=10000 on node6
  * Resource action: clone5rsc2 monitor=10000 on node6
  * Resource action: clone5rsc3 monitor=10000 on node6
  * Pseudo action: clone5group:9_running_0
  * Resource action: clone5rsc1 monitor=10000 on node7
  * Resource action: clone5rsc2 monitor=10000 on node7
  * Resource action: clone5rsc3 monitor=10000 on node7
  * Pseudo action: clone5group:10_running_0
  * Resource action: clone5rsc1 monitor=10000 on node8
  * Resource action: clone5rsc2 monitor=10000 on node8
  * Resource action: clone5rsc3 monitor=10000 on node8
  * Pseudo action: clone5_running_0

 Revised cluster status:
 Online: [ node1 node2 node3 node4 node5 node6 node7 node8 node9 node10 node11 ]

  Fencing (stonith:fence_imaginary): Started node1
- Master/Slave Set: clone1 [clone1rsc]
+ Clone Set: clone1 [clone1rsc] (promotable)
      Masters: [ node3 ]
      Slaves: [ node1 node2 node4 node5 node6 node7 node8 node9 node10 node11 ]
  Clone Set: clone2 [clone2rsc]
      Started: [ node2 node3 node4 node10 node11 ]
  Clone Set: clone3 [clone3rsc]
      Started: [ node1 node2 node3 node4 node5 node6 node7 node8 node9 node10 node11 ]
  Clone Set: clone4 [clone4rsc]
      Started: [ node1 node5 node6 node7 node8 ]
  Clone Set: clone5 [clone5group]
      Started: [ node1 node2 node3 node4 node5 node6 node7 node8 node9 node10 node11 ]
diff --git a/cts/scheduler/anti-colocation-master.summary b/cts/scheduler/anti-colocation-master.summary
index df4c4ed991..1e593bcff2 100644
--- a/cts/scheduler/anti-colocation-master.summary
+++ b/cts/scheduler/anti-colocation-master.summary
@@ -1,37 +1,37 @@
 Using the original execution date of: 2016-04-29 09:06:59Z

 Current cluster status:
 Online: [ sle12sp2-1 sle12sp2-2 ]

  st_sbd (stonith:external/sbd): Started sle12sp2-2
  dummy1 (ocf::pacemaker:Dummy): Started sle12sp2-2
- Master/Slave Set: ms1 [state1]
+ Clone Set: ms1 [state1] (promotable)
      Masters: [ sle12sp2-1 ]
      Slaves: [ sle12sp2-2 ]

 Transition Summary:
  * Move dummy1 ( sle12sp2-2 -> sle12sp2-1 )
  * Promote state1:0 (Slave -> Master sle12sp2-2)
  * Demote state1:1 ( Master -> Slave sle12sp2-1 )

 Executing cluster transition:
  * Resource action: dummy1 stop on sle12sp2-2
  * Pseudo action: ms1_demote_0
  * Pseudo action: all_stopped
  * Resource action: state1 demote on sle12sp2-1
  * Pseudo action: ms1_demoted_0
  * Pseudo action: ms1_promote_0
  * Resource action: dummy1 start on sle12sp2-1
  * Resource action: state1 promote on sle12sp2-2
  * Pseudo action: ms1_promoted_0
 Using the original execution date of: 2016-04-29 09:06:59Z

 Revised cluster status:
 Online: [ sle12sp2-1 sle12sp2-2 ]

  st_sbd (stonith:external/sbd): Started sle12sp2-2
  dummy1 (ocf::pacemaker:Dummy): Started sle12sp2-1
- Master/Slave Set: ms1 [state1]
+ Clone Set: ms1 [state1] (promotable)
      Masters: [ sle12sp2-2 ]
      Slaves: [ sle12sp2-1 ]
diff --git a/cts/scheduler/anti-colocation-slave.summary b/cts/scheduler/anti-colocation-slave.summary
index 0d77064db7..c9681f4437 100644
--- a/cts/scheduler/anti-colocation-slave.summary
+++ b/cts/scheduler/anti-colocation-slave.summary
@@ -1,35 +1,35 @@
 Current cluster status:
 Online: [ sle12sp2-1 sle12sp2-2 ]

  st_sbd (stonith:external/sbd): Started sle12sp2-1
- Master/Slave Set: ms1 [state1]
+ Clone Set: ms1 [state1] (promotable)
      Masters: [ sle12sp2-1 ]
      Slaves: [ sle12sp2-2 ]
  dummy1 (ocf::pacemaker:Dummy): Started sle12sp2-1

 Transition Summary:
  * Demote state1:0 ( Master -> Slave sle12sp2-1 )
  * Promote state1:1 (Slave -> Master sle12sp2-2)
  * Move dummy1 ( sle12sp2-1 -> sle12sp2-2 )

 Executing cluster transition:
  * Resource action: dummy1 stop on sle12sp2-1
  * Pseudo action: all_stopped
  * Pseudo action: ms1_demote_0
  * Resource action: state1 demote on sle12sp2-1
  * Pseudo action: ms1_demoted_0
  * Pseudo action: ms1_promote_0
  * Resource action: state1 promote on sle12sp2-2
  * Pseudo action: ms1_promoted_0
  * Resource action: dummy1 start on sle12sp2-2

 Revised cluster status:
 Online: [ sle12sp2-1 sle12sp2-2 ]

  st_sbd (stonith:external/sbd): Started sle12sp2-1
- Master/Slave Set: ms1 [state1]
+ Clone Set: ms1 [state1] (promotable)
      Masters: [ sle12sp2-2 ]
      Slaves: [ sle12sp2-1 ]
  dummy1 (ocf::pacemaker:Dummy): Started sle12sp2-2
diff --git a/cts/scheduler/asymmetric.summary b/cts/scheduler/asymmetric.summary
index 7c51fd2679..6a3df9fa03 100644
--- a/cts/scheduler/asymmetric.summary
+++ b/cts/scheduler/asymmetric.summary
@@ -1,27 +1,27 @@
 Current cluster status:
 Online: [ puma1 puma3 ]

- Master/Slave Set: ms_drbd_poolA [ebe3fb6e-7778-426e-be58-190ab1ff3dd3]
+ Clone Set: ms_drbd_poolA [ebe3fb6e-7778-426e-be58-190ab1ff3dd3] (promotable)
      Masters: [ puma3 ]
      Slaves: [ puma1 ]
  vpool_ip_poolA (ocf::heartbeat:IPaddr2): Stopped
  drbd_target_poolA (ocf::vpools:iscsi_target): Stopped

 Transition Summary:

 Executing cluster transition:
  * Resource action: ebe3fb6e-7778-426e-be58-190ab1ff3dd3:1 monitor=19000 on puma1
  * Resource action: ebe3fb6e-7778-426e-be58-190ab1ff3dd3:0 monitor=20000 on puma3
  * Resource action: drbd_target_poolA monitor on puma3
  * Resource action: drbd_target_poolA monitor on puma1

 Revised cluster status:
 Online: [ puma1 puma3 ]

- Master/Slave Set: ms_drbd_poolA [ebe3fb6e-7778-426e-be58-190ab1ff3dd3]
+ Clone Set: ms_drbd_poolA [ebe3fb6e-7778-426e-be58-190ab1ff3dd3] (promotable)
      Masters: [ puma3 ]
      Slaves: [ puma1 ]
  vpool_ip_poolA (ocf::heartbeat:IPaddr2): Stopped
  drbd_target_poolA (ocf::vpools:iscsi_target): Stopped
diff --git a/cts/scheduler/bug-1572-1.summary b/cts/scheduler/bug-1572-1.summary
index 7ca83a9b48..96a5e5ddaa 100644
--- a/cts/scheduler/bug-1572-1.summary
+++ b/cts/scheduler/bug-1572-1.summary
@@ -1,85 +1,85 @@
 Current cluster status:
 Online: [ arc-dknightlx arc-tkincaidlx.wsicorp.com ]

- Master/Slave Set: ms_drbd_7788 [rsc_drbd_7788]
+ Clone Set: ms_drbd_7788 [rsc_drbd_7788] (promotable)
      Masters: [ arc-tkincaidlx.wsicorp.com ]
      Slaves: [ arc-dknightlx ]
  Resource Group: grp_pgsql_mirror
      fs_mirror (ocf::heartbeat:Filesystem): Started arc-tkincaidlx.wsicorp.com
      pgsql_5555 (ocf::heartbeat:pgsql): Started arc-tkincaidlx.wsicorp.com
      IPaddr_147_81_84_133 (ocf::heartbeat:IPaddr): Started arc-tkincaidlx.wsicorp.com

 Transition Summary:
  * Shutdown arc-dknightlx
  * Stop rsc_drbd_7788:0 ( Slave arc-dknightlx ) due to node availability
  * Restart rsc_drbd_7788:1 ( Master arc-tkincaidlx.wsicorp.com ) due to resource definition change
  * Restart fs_mirror ( arc-tkincaidlx.wsicorp.com ) due to required ms_drbd_7788 notified
  * Restart pgsql_5555 ( arc-tkincaidlx.wsicorp.com ) due to required fs_mirror start
  * Restart IPaddr_147_81_84_133 ( arc-tkincaidlx.wsicorp.com ) due to required pgsql_5555 start

 Executing cluster transition:
  * Pseudo action: ms_drbd_7788_pre_notify_demote_0
  * Pseudo action: grp_pgsql_mirror_stop_0
  * Resource action: IPaddr_147_81_84_133 stop on arc-tkincaidlx.wsicorp.com
  * Resource action: rsc_drbd_7788:0 notify on arc-dknightlx
  * Resource action: rsc_drbd_7788:1 notify on arc-tkincaidlx.wsicorp.com
  * Pseudo action: ms_drbd_7788_confirmed-pre_notify_demote_0
  * Resource action: pgsql_5555 stop on arc-tkincaidlx.wsicorp.com
  * Resource action: fs_mirror stop on arc-tkincaidlx.wsicorp.com
  * Pseudo action: grp_pgsql_mirror_stopped_0
  * Pseudo action: ms_drbd_7788_demote_0
  * Resource action: rsc_drbd_7788:1 demote on arc-tkincaidlx.wsicorp.com
  * Pseudo action: ms_drbd_7788_demoted_0
  * Pseudo action: ms_drbd_7788_post_notify_demoted_0
  * Resource action: rsc_drbd_7788:0 notify on arc-dknightlx
  * Resource action: rsc_drbd_7788:1 notify on arc-tkincaidlx.wsicorp.com
  * Pseudo action: ms_drbd_7788_confirmed-post_notify_demoted_0
  * Pseudo action: ms_drbd_7788_pre_notify_stop_0
  * Resource action: rsc_drbd_7788:0 notify on arc-dknightlx
  * Resource action: rsc_drbd_7788:1 notify on arc-tkincaidlx.wsicorp.com
  * Pseudo action: ms_drbd_7788_confirmed-pre_notify_stop_0
  * Pseudo action: ms_drbd_7788_stop_0
  * Resource action: rsc_drbd_7788:0 stop on arc-dknightlx
  * Resource action: rsc_drbd_7788:1 stop on arc-tkincaidlx.wsicorp.com
  * Pseudo action: ms_drbd_7788_stopped_0
  * Cluster action: do_shutdown on arc-dknightlx
  * Pseudo action: ms_drbd_7788_post_notify_stopped_0
  * Pseudo action: ms_drbd_7788_confirmed-post_notify_stopped_0
  * Pseudo action: ms_drbd_7788_pre_notify_start_0
  * Pseudo action: all_stopped
  * Pseudo action: ms_drbd_7788_confirmed-pre_notify_start_0
  * Pseudo action: ms_drbd_7788_start_0
  * Resource action: rsc_drbd_7788:1 start on arc-tkincaidlx.wsicorp.com
  * Pseudo action: ms_drbd_7788_running_0
  * Pseudo action: ms_drbd_7788_post_notify_running_0
  * Resource action: rsc_drbd_7788:1 notify on arc-tkincaidlx.wsicorp.com
  * Pseudo action: ms_drbd_7788_confirmed-post_notify_running_0
  * Pseudo action: ms_drbd_7788_pre_notify_promote_0
  * Resource action: rsc_drbd_7788:1 notify on arc-tkincaidlx.wsicorp.com
  * Pseudo action: ms_drbd_7788_confirmed-pre_notify_promote_0
  * Pseudo action: ms_drbd_7788_promote_0
  * Resource action: rsc_drbd_7788:1 promote on arc-tkincaidlx.wsicorp.com
  * Pseudo action: ms_drbd_7788_promoted_0
  * Pseudo action: ms_drbd_7788_post_notify_promoted_0
  * Resource action: rsc_drbd_7788:1 notify on arc-tkincaidlx.wsicorp.com
  * Pseudo action: ms_drbd_7788_confirmed-post_notify_promoted_0
  * Pseudo action: grp_pgsql_mirror_start_0
  * Resource action: fs_mirror start on arc-tkincaidlx.wsicorp.com
  * Resource action: pgsql_5555 start on arc-tkincaidlx.wsicorp.com
  * Resource action: pgsql_5555 monitor=30000 on arc-tkincaidlx.wsicorp.com
  * Resource action: IPaddr_147_81_84_133 start on arc-tkincaidlx.wsicorp.com
  * Resource action: IPaddr_147_81_84_133 monitor=25000 on arc-tkincaidlx.wsicorp.com
  * Pseudo action: grp_pgsql_mirror_running_0

 Revised cluster status:
 Online: [ arc-dknightlx arc-tkincaidlx.wsicorp.com ]

- Master/Slave Set: ms_drbd_7788 [rsc_drbd_7788]
+ Clone Set: ms_drbd_7788 [rsc_drbd_7788] (promotable)
      Masters: [ arc-tkincaidlx.wsicorp.com ]
      Stopped: [ arc-dknightlx ]
  Resource Group: grp_pgsql_mirror
      fs_mirror (ocf::heartbeat:Filesystem): Started arc-tkincaidlx.wsicorp.com
      pgsql_5555 (ocf::heartbeat:pgsql): Started arc-tkincaidlx.wsicorp.com
      IPaddr_147_81_84_133 (ocf::heartbeat:IPaddr): Started arc-tkincaidlx.wsicorp.com
diff --git a/cts/scheduler/bug-1572-2.summary b/cts/scheduler/bug-1572-2.summary
index 9d2b8854d3..f4f118a680 100644
--- a/cts/scheduler/bug-1572-2.summary
+++ b/cts/scheduler/bug-1572-2.summary
@@ -1,61 +1,61 @@
 Current cluster status:
 Online: [ arc-dknightlx arc-tkincaidlx.wsicorp.com ]

- Master/Slave Set: ms_drbd_7788 [rsc_drbd_7788]
+ Clone Set: ms_drbd_7788 [rsc_drbd_7788] (promotable)
      Masters: [ arc-tkincaidlx.wsicorp.com ]
      Slaves: [ arc-dknightlx ]
  Resource Group: grp_pgsql_mirror
      fs_mirror (ocf::heartbeat:Filesystem): Started arc-tkincaidlx.wsicorp.com
      pgsql_5555 (ocf::heartbeat:pgsql): Started arc-tkincaidlx.wsicorp.com
      IPaddr_147_81_84_133 (ocf::heartbeat:IPaddr): Started arc-tkincaidlx.wsicorp.com

 Transition Summary:
  * Shutdown arc-dknightlx
  * Stop rsc_drbd_7788:0 ( Slave arc-dknightlx ) due to node availability
  * Demote rsc_drbd_7788:1 (Master -> Slave arc-tkincaidlx.wsicorp.com)
  * Stop fs_mirror (arc-tkincaidlx.wsicorp.com) due to node availability
  * Stop pgsql_5555 (arc-tkincaidlx.wsicorp.com) due to node availability
  * Stop IPaddr_147_81_84_133 (arc-tkincaidlx.wsicorp.com) due to node availability

 Executing cluster transition:
  * Pseudo action: ms_drbd_7788_pre_notify_demote_0
  * Pseudo action: grp_pgsql_mirror_stop_0
  * Resource action: IPaddr_147_81_84_133 stop on arc-tkincaidlx.wsicorp.com
  * Resource action: rsc_drbd_7788:0 notify on arc-dknightlx
  * Resource action: rsc_drbd_7788:1 notify on arc-tkincaidlx.wsicorp.com
  * Pseudo action: ms_drbd_7788_confirmed-pre_notify_demote_0
  * Resource action: pgsql_5555 stop on arc-tkincaidlx.wsicorp.com
  * Resource action: fs_mirror stop on arc-tkincaidlx.wsicorp.com
  * Pseudo action: grp_pgsql_mirror_stopped_0
  * Pseudo action: ms_drbd_7788_demote_0
  * Resource action: rsc_drbd_7788:1 demote on arc-tkincaidlx.wsicorp.com
  * Pseudo action: ms_drbd_7788_demoted_0
  * Pseudo action: ms_drbd_7788_post_notify_demoted_0
  * Resource action: rsc_drbd_7788:0 notify on arc-dknightlx
  * Resource action: rsc_drbd_7788:1 notify on arc-tkincaidlx.wsicorp.com
  * Pseudo action: ms_drbd_7788_confirmed-post_notify_demoted_0
  * Pseudo action: ms_drbd_7788_pre_notify_stop_0
  * Resource action: rsc_drbd_7788:0 notify on arc-dknightlx
  * Resource action: rsc_drbd_7788:1 notify on arc-tkincaidlx.wsicorp.com
  * Pseudo action: ms_drbd_7788_confirmed-pre_notify_stop_0
  * Pseudo action: ms_drbd_7788_stop_0
  * Resource action: rsc_drbd_7788:0 stop on arc-dknightlx
  * Pseudo action: ms_drbd_7788_stopped_0
  * Cluster action: do_shutdown on arc-dknightlx
  * Pseudo action: ms_drbd_7788_post_notify_stopped_0
  * Resource action: rsc_drbd_7788:1 notify on arc-tkincaidlx.wsicorp.com
  * Pseudo action: ms_drbd_7788_confirmed-post_notify_stopped_0
  * Pseudo action: all_stopped

 Revised cluster status:
 Online: [ arc-dknightlx arc-tkincaidlx.wsicorp.com ]

- Master/Slave Set: ms_drbd_7788 [rsc_drbd_7788]
+ Clone Set: ms_drbd_7788 [rsc_drbd_7788] (promotable)
      Slaves: [ arc-tkincaidlx.wsicorp.com ]
      Stopped: [ arc-dknightlx ]
  Resource Group: grp_pgsql_mirror
      fs_mirror (ocf::heartbeat:Filesystem): Stopped
      pgsql_5555 (ocf::heartbeat:pgsql): Stopped
      IPaddr_147_81_84_133 (ocf::heartbeat:IPaddr): Stopped
diff --git a/cts/scheduler/bug-1685.summary b/cts/scheduler/bug-1685.summary
index 22a636c21b..b839b0330c 100644
--- a/cts/scheduler/bug-1685.summary
+++ b/cts/scheduler/bug-1685.summary
@@ -1,36 +1,36 @@
 Current cluster status:
 Online: [ redun1 redun2 ]

- Master/Slave Set: shared_storage [prim_shared_storage]
+ Clone Set: shared_storage [prim_shared_storage] (promotable)
      Slaves: [ redun1 redun2 ]
  shared_filesystem (ocf::heartbeat:Filesystem): Stopped

 Transition Summary:
  * Promote prim_shared_storage:0 (Slave -> Master redun2)
  * Start shared_filesystem (redun2)

 Executing cluster transition:
  * Pseudo action: shared_storage_pre_notify_promote_0
  * Resource action: prim_shared_storage:0 notify on redun2
  * Resource action: prim_shared_storage:1 notify on redun1
  * Pseudo action: shared_storage_confirmed-pre_notify_promote_0
  * Pseudo action: shared_storage_promote_0
  * Resource action: prim_shared_storage:0 promote on redun2
  * Pseudo action: shared_storage_promoted_0
  * Pseudo action: shared_storage_post_notify_promoted_0
  * Resource action: prim_shared_storage:0 notify on redun2
  * Resource action: prim_shared_storage:1 notify on redun1
  * Pseudo action: shared_storage_confirmed-post_notify_promoted_0
  * Resource action: shared_filesystem start on redun2
  * Resource action: prim_shared_storage:1 monitor=120000 on redun1
  * Resource action: shared_filesystem monitor=120000 on redun2

 Revised cluster status:
 Online: [ redun1 redun2 ]

- Master/Slave Set: shared_storage [prim_shared_storage]
+ Clone Set: shared_storage [prim_shared_storage] (promotable)
      Masters: [ redun2 ]
      Slaves: [ redun1 ]
  shared_filesystem (ocf::heartbeat:Filesystem): Started redun2
diff --git a/cts/scheduler/bug-1765.summary b/cts/scheduler/bug-1765.summary
index 593bac392c..069aef717d 100644
--- a/cts/scheduler/bug-1765.summary
+++ b/cts/scheduler/bug-1765.summary
@@ -1,36 +1,36 @@
 Current cluster status:
 Online: [ sles236 sles238 ]

- Master/Slave Set: ms-drbd0 [drbd0]
+ Clone Set: ms-drbd0 [drbd0] (promotable)
      Masters: [ sles236 ]
      Stopped: [ sles238 ]
- Master/Slave Set: ms-drbd1 [drbd1]
+ Clone Set: ms-drbd1 [drbd1] (promotable)
      Masters: [ sles236 ]
      Slaves: [ sles238 ]

 Transition Summary:
  * Start drbd0:1 (sles238)

 Executing cluster transition:
  * Pseudo action: ms-drbd0_pre_notify_start_0
  * Resource action: drbd0:0 notify on sles236
  * Pseudo action: ms-drbd0_confirmed-pre_notify_start_0
  * Pseudo action: ms-drbd0_start_0
  * Resource action: drbd0:1 start on sles238
  * Pseudo action: ms-drbd0_running_0
  * Pseudo action: ms-drbd0_post_notify_running_0
  * Resource action: drbd0:0 notify on sles236
  * Resource action: drbd0:1 notify on sles238
  * Pseudo action: ms-drbd0_confirmed-post_notify_running_0

 Revised cluster status:
 Online: [ sles236 sles238 ]

- Master/Slave Set: ms-drbd0 [drbd0]
+ Clone Set: ms-drbd0 [drbd0] (promotable)
      Masters: [ sles236 ]
      Slaves: [ sles238 ]
- Master/Slave Set: ms-drbd1 [drbd1]
+ Clone Set: ms-drbd1 [drbd1] (promotable)
      Masters: [ sles236 ]
      Slaves: [ sles238 ]
diff --git a/cts/scheduler/bug-1822.summary b/cts/scheduler/bug-1822.summary
index 5bf91b9858..66a692d03f 100644
--- a/cts/scheduler/bug-1822.summary
+++ b/cts/scheduler/bug-1822.summary
@@ -1,44 +1,44 @@
 Current cluster status:
 Online: [ process1a process2b ]

- Master/Slave Set: ms-sf [ms-sf_group] (unique)
+ Clone Set: ms-sf [ms-sf_group] (promotable) (unique)
      Resource Group: ms-sf_group:0
          master_slave_Stateful:0 (ocf::heartbeat:Dummy-statful): Slave process2b
          master_slave_procdctl:0 (ocf::heartbeat:procdctl): Stopped
      Resource Group: ms-sf_group:1
          master_slave_Stateful:1 (ocf::heartbeat:Dummy-statful): Master process1a
          master_slave_procdctl:1 (ocf::heartbeat:procdctl): Master process1a

 Transition Summary:
  * Shutdown process1a
  * Stop master_slave_Stateful:1 ( Master process1a ) due to node availability
  * Stop master_slave_procdctl:1 ( Master process1a ) due to node availability

 Executing cluster transition:
  * Pseudo action: ms-sf_demote_0
  * Pseudo action: ms-sf_group:1_demote_0
  * Resource action: master_slave_Stateful:1 demote on process1a
  * Resource action: master_slave_procdctl:1 demote on process1a
  * Pseudo action: ms-sf_group:1_demoted_0
  * Pseudo action: ms-sf_demoted_0
  * Pseudo action: ms-sf_stop_0
  * Pseudo action: ms-sf_group:1_stop_0
  * Resource action: master_slave_Stateful:1 stop on process1a
  * Resource action: master_slave_procdctl:1 stop on process1a
  * Cluster action: do_shutdown on process1a
  * Pseudo action: all_stopped
  * Pseudo action: ms-sf_group:1_stopped_0
  * Pseudo action: ms-sf_stopped_0

 Revised cluster status:
 Online: [ process1a process2b ]

- Master/Slave Set: ms-sf [ms-sf_group] (unique)
+ Clone Set: ms-sf [ms-sf_group] (promotable) (unique)
      Resource Group: ms-sf_group:0
          master_slave_Stateful:0 (ocf::heartbeat:Dummy-statful): Slave process2b
          master_slave_procdctl:0 (ocf::heartbeat:procdctl): Stopped
      Resource Group: ms-sf_group:1
          master_slave_Stateful:1 (ocf::heartbeat:Dummy-statful): Stopped
          master_slave_procdctl:1 (ocf::heartbeat:procdctl): Stopped
diff --git a/cts/scheduler/bug-5007-masterslave_colocation.summary b/cts/scheduler/bug-5007-masterslave_colocation.summary
index adbc1f1430..14ff6e4c9f 100644
--- a/cts/scheduler/bug-5007-masterslave_colocation.summary
+++ b/cts/scheduler/bug-5007-masterslave_colocation.summary
@@ -1,30 +1,30 @@
 Current cluster status:
 Online: [ fc16-builder fc16-builder2 ]

- Master/Slave Set: MS_DUMMY [DUMMY]
+ Clone Set: MS_DUMMY [DUMMY] (promotable)
      Masters: [ fc16-builder ]
      Slaves: [ fc16-builder2 ]
  SLAVE_IP (ocf::pacemaker:Dummy): Started fc16-builder
  MASTER_IP (ocf::pacemaker:Dummy): Started fc16-builder2

 Transition Summary:
  * Move SLAVE_IP ( fc16-builder -> fc16-builder2 )
  * Move MASTER_IP ( fc16-builder2 -> fc16-builder )

 Executing cluster transition:
  * Resource action: SLAVE_IP stop on fc16-builder
  * Resource action: MASTER_IP stop on fc16-builder2
  * Pseudo action: all_stopped
  * Resource action: SLAVE_IP start on fc16-builder2
  * Resource action: MASTER_IP start on fc16-builder

 Revised cluster status:
 Online: [ fc16-builder fc16-builder2 ]

- Master/Slave Set: MS_DUMMY [DUMMY]
+ Clone Set: MS_DUMMY [DUMMY] (promotable)
      Masters: [ fc16-builder ]
      Slaves: [ fc16-builder2 ]
  SLAVE_IP (ocf::pacemaker:Dummy): Started fc16-builder2
  MASTER_IP (ocf::pacemaker:Dummy): Started fc16-builder
diff --git a/cts/scheduler/bug-5059.summary b/cts/scheduler/bug-5059.summary
index 3122cf9d56..f3a3d2f275 100644
--- a/cts/scheduler/bug-5059.summary
+++ b/cts/scheduler/bug-5059.summary
@@ -1,75 +1,75 @@
 Current cluster status:
 Node gluster03.h: standby
 Online: [ gluster01.h gluster02.h ]
 OFFLINE: [ gluster04.h ]

- Master/Slave Set: ms_stateful [g_stateful]
+ Clone Set: ms_stateful [g_stateful] (promotable)
      Resource Group: g_stateful:0
          p_stateful1 (ocf::pacemaker:Stateful): Slave gluster01.h
          p_stateful2 (ocf::pacemaker:Stateful): Stopped
      Resource Group: g_stateful:1
          p_stateful1 (ocf::pacemaker:Stateful): Slave gluster02.h
          p_stateful2 (ocf::pacemaker:Stateful): Stopped
      Stopped: [ gluster03.h gluster04.h ]
  Clone Set: c_dummy [p_dummy1]
      Started: [ gluster01.h gluster02.h ]

 Transition Summary:
  * Promote p_stateful1:0 (Slave -> Master gluster01.h)
  * Promote p_stateful2:0 (Stopped -> Master gluster01.h)
  * Start p_stateful2:1 (gluster02.h)

 Executing cluster transition:
  * Pseudo action: ms_stateful_pre_notify_start_0
  * Resource action: iptest delete on gluster02.h
  * Resource action: ipsrc2 delete on gluster02.h
  * Resource action: p_stateful1:0 notify on gluster01.h
  * Resource action: p_stateful1:1 notify on gluster02.h
  * Pseudo action: ms_stateful_confirmed-pre_notify_start_0
  * Pseudo action: ms_stateful_start_0
  * Pseudo action: g_stateful:0_start_0
  * Resource action: p_stateful2:0 start on gluster01.h
  * Pseudo action: g_stateful:1_start_0
  * Resource action: p_stateful2:1 start on gluster02.h
  * Pseudo action: g_stateful:0_running_0
  * Pseudo action: g_stateful:1_running_0
  * Pseudo action: ms_stateful_running_0
  * Pseudo action: ms_stateful_post_notify_running_0
  * Resource action: p_stateful1:0 notify on gluster01.h
  * Resource action: p_stateful2:0 notify on gluster01.h
  * Resource action: p_stateful1:1 notify on gluster02.h
  * Resource action: p_stateful2:1 notify on gluster02.h
  * Pseudo action: ms_stateful_confirmed-post_notify_running_0
  * Pseudo action: ms_stateful_pre_notify_promote_0
  * Resource action: p_stateful1:0 notify on gluster01.h
  * Resource action: p_stateful2:0 notify on gluster01.h
  * Resource action: p_stateful1:1 notify on gluster02.h
  * Resource action: p_stateful2:1 notify on gluster02.h
  * Pseudo action: ms_stateful_confirmed-pre_notify_promote_0
  * Pseudo action: ms_stateful_promote_0
  * Pseudo action: g_stateful:0_promote_0
  * Resource action: p_stateful1:0 promote on gluster01.h
  * Resource action: p_stateful2:0 promote on gluster01.h
  * Pseudo action: g_stateful:0_promoted_0
  * Pseudo action: ms_stateful_promoted_0
  * Pseudo action: ms_stateful_post_notify_promoted_0
  * Resource action: p_stateful1:0 notify on gluster01.h
  * Resource action: p_stateful2:0 notify on gluster01.h
  * Resource action: p_stateful1:1 notify on gluster02.h
  * Resource action: p_stateful2:1 notify on gluster02.h
  * Pseudo action: ms_stateful_confirmed-post_notify_promoted_0
  * Resource action: p_stateful1:1 monitor=10000 on gluster02.h
  * Resource action: p_stateful2:1 monitor=10000 on gluster02.h

 Revised cluster status:
 Node gluster03.h: standby
 Online: [ gluster01.h gluster02.h ]
 OFFLINE: [ gluster04.h ]

- Master/Slave Set: ms_stateful [g_stateful]
+ Clone Set: ms_stateful [g_stateful] (promotable)
      Masters: [ gluster01.h ]
      Slaves: [ gluster02.h ]
  Clone Set: c_dummy [p_dummy1]
      Started: [ gluster01.h gluster02.h ]
diff --git a/cts/scheduler/bug-5140-require-all-false.summary b/cts/scheduler/bug-5140-require-all-false.summary
index cf5193c685..79874b79a0 100644
--- a/cts/scheduler/bug-5140-require-all-false.summary
+++ b/cts/scheduler/bug-5140-require-all-false.summary
@@ -1,81 +1,81 @@
 4 of 35 resources DISABLED and 0 BLOCKED from being started due to failures

 Current cluster status:
 Node hex-1: standby
 Node hex-2: standby
 Node hex-3: OFFLINE (standby)

  fencing (stonith:external/sbd): Stopped
  Clone Set: baseclone [basegrp]
      Resource Group: basegrp:0
          dlm (ocf::pacemaker:controld): Started hex-2
          clvmd (ocf::lvm2:clvmd): Started hex-2
          o2cb (ocf::ocfs2:o2cb): Started hex-2
          vg1 (ocf::heartbeat:LVM): Stopped
          fs-ocfs-1 (ocf::heartbeat:Filesystem): Stopped
      Stopped: [ hex-1 hex-3 ]
  fs-xfs-1 (ocf::heartbeat:Filesystem): Stopped
  Clone Set: fs2 [fs-ocfs-2]
      Stopped: [ hex-1 hex-2 hex-3 ]
- Master/Slave Set: ms-r0 [drbd-r0]
+ Clone Set: ms-r0 [drbd-r0] (promotable)
      Stopped (disabled): [ hex-1 hex-2 hex-3 ]
- Master/Slave Set: ms-r1 [drbd-r1]
+ Clone Set: ms-r1 [drbd-r1] (promotable)
      Stopped (disabled): [ hex-1 hex-2 hex-3 ]
  Resource Group: md0-group
      md0 (ocf::heartbeat:Raid1): Stopped
      vg-md0 (ocf::heartbeat:LVM): Stopped
      fs-md0 (ocf::heartbeat:Filesystem): Stopped
      dummy1 (ocf::heartbeat:Delay): Stopped
      dummy3 (ocf::heartbeat:Delay): Stopped
      dummy4 (ocf::heartbeat:Delay): Stopped
      dummy5 (ocf::heartbeat:Delay): Stopped
      dummy6 (ocf::heartbeat:Delay): Stopped
  Resource Group: r0-group
      fs-r0 (ocf::heartbeat:Filesystem): Stopped
      dummy2 (ocf::heartbeat:Delay): Stopped
  cluster-md0 (ocf::heartbeat:Raid1): Stopped

 Transition Summary:
  * Stop dlm:0 (hex-2) due to node availability
  * Stop clvmd:0 (hex-2) due to node availability
  * Stop o2cb:0 (hex-2) due to node availability

 Executing cluster transition:
  * Pseudo action: baseclone_stop_0
  * Pseudo action: basegrp:0_stop_0
  * Resource action: o2cb stop on hex-2
  * Resource action: clvmd stop on hex-2
  * Resource action: dlm stop on hex-2
  * Pseudo action: all_stopped
  * Pseudo action: basegrp:0_stopped_0
  * Pseudo action: baseclone_stopped_0

 Revised cluster status:
 Node hex-1: standby
 Node hex-2: standby
 Node hex-3: OFFLINE (standby)

  fencing (stonith:external/sbd): Stopped
  Clone Set: baseclone [basegrp]
      Stopped: [ hex-1 hex-2 hex-3 ]
  fs-xfs-1 (ocf::heartbeat:Filesystem): Stopped
  Clone Set: fs2 [fs-ocfs-2]
      Stopped: [ hex-1 hex-2 hex-3 ]
- Master/Slave Set: ms-r0 [drbd-r0]
+ Clone Set: ms-r0 [drbd-r0] (promotable)
      Stopped (disabled): [ hex-1 hex-2 hex-3 ]
- Master/Slave Set: ms-r1 [drbd-r1]
+ Clone Set: ms-r1 [drbd-r1] (promotable)
      Stopped (disabled): [ hex-1 hex-2 hex-3 ]
  Resource Group: md0-group
      md0 (ocf::heartbeat:Raid1): Stopped
      vg-md0 (ocf::heartbeat:LVM): Stopped
      fs-md0 (ocf::heartbeat:Filesystem): Stopped
      dummy1 (ocf::heartbeat:Delay): Stopped
      dummy3 (ocf::heartbeat:Delay): Stopped
      dummy4 (ocf::heartbeat:Delay): Stopped
      dummy5 (ocf::heartbeat:Delay): Stopped
      dummy6 (ocf::heartbeat:Delay): Stopped
  Resource Group: r0-group
      fs-r0 (ocf::heartbeat:Filesystem): Stopped
      dummy2 (ocf::heartbeat:Delay): Stopped
  cluster-md0 (ocf::heartbeat:Raid1): Stopped
diff --git a/cts/scheduler/bug-5143-ms-shuffle.summary b/cts/scheduler/bug-5143-ms-shuffle.summary
index 4aa3fd3735..eb21a003e8 100644
--- a/cts/scheduler/bug-5143-ms-shuffle.summary
+++ b/cts/scheduler/bug-5143-ms-shuffle.summary
@@ -1,75 +1,75 @@
 2 of 34 resources DISABLED and 0 BLOCKED from being started due to failures

 Current cluster status:
 Online: [ hex-1 hex-2 hex-3 ]

  fencing (stonith:external/sbd): Started hex-1
  Clone Set: baseclone [basegrp]
      Started: [ hex-1 hex-2 hex-3 ]
  fs-xfs-1 (ocf::heartbeat:Filesystem): Started hex-2
  Clone Set: fs2 [fs-ocfs-2]
      Started: [ hex-1 hex-2 hex-3 ]
- Master/Slave Set: ms-r0 [drbd-r0]
+ Clone Set: ms-r0 [drbd-r0] (promotable)
      Masters: [ hex-1 ]
      Slaves: [ hex-2 ]
- Master/Slave Set: ms-r1 [drbd-r1]
+ Clone Set: ms-r1 [drbd-r1] (promotable)
      Slaves: [ hex-2 hex-3 ]
  Resource Group: md0-group
      md0 (ocf::heartbeat:Raid1): Started hex-3
      vg-md0 (ocf::heartbeat:LVM): Started hex-3
      fs-md0 (ocf::heartbeat:Filesystem): Started hex-3
      dummy1 (ocf::heartbeat:Delay): Started hex-3
      dummy3 (ocf::heartbeat:Delay): Started hex-1
      dummy4 (ocf::heartbeat:Delay): Started hex-2
      dummy5 (ocf::heartbeat:Delay): Started hex-1
      dummy6 (ocf::heartbeat:Delay): Started hex-2
  Resource Group: r0-group
      fs-r0 (ocf::heartbeat:Filesystem): Stopped ( disabled )
      dummy2 (ocf::heartbeat:Delay): Stopped

 Transition Summary:
  * Promote drbd-r1:1 (Slave -> Master hex-3)

 Executing cluster transition:
  * Pseudo action: ms-r1_pre_notify_promote_0
  * Resource action: drbd-r1 notify on hex-2
  * Resource action: drbd-r1 notify on hex-3
  * Pseudo action: ms-r1_confirmed-pre_notify_promote_0
  * Pseudo action: ms-r1_promote_0
  * Resource action: drbd-r1 promote on hex-3
  * Pseudo action: ms-r1_promoted_0
  * Pseudo action: ms-r1_post_notify_promoted_0
  * Resource action: drbd-r1 notify on hex-2
  * Resource action: drbd-r1 notify on hex-3
  * Pseudo action: ms-r1_confirmed-post_notify_promoted_0
  * Resource action: drbd-r1 monitor=29000 on hex-2
  * Resource action: drbd-r1 monitor=31000 on hex-3

 Revised cluster status:
 Online: [ hex-1 hex-2 hex-3 ]

  fencing (stonith:external/sbd): Started hex-1
  Clone Set: baseclone [basegrp]
      Started: [ hex-1 hex-2 hex-3 ]
  fs-xfs-1 (ocf::heartbeat:Filesystem): Started hex-2
  Clone Set: fs2 [fs-ocfs-2]
      Started: [ hex-1 hex-2 hex-3 ]
- Master/Slave Set: ms-r0 [drbd-r0]
+ Clone Set: ms-r0 [drbd-r0] (promotable)
      Masters: [ hex-1 ]
      Slaves: [ hex-2 ]
- Master/Slave Set: ms-r1 [drbd-r1]
+ Clone Set: ms-r1 [drbd-r1] (promotable)
      Masters: [ hex-3 ]
      Slaves: [ hex-2 ]
  Resource Group: md0-group
      md0 (ocf::heartbeat:Raid1): Started hex-3
      vg-md0 (ocf::heartbeat:LVM): Started hex-3
      fs-md0 (ocf::heartbeat:Filesystem): Started hex-3
      dummy1 (ocf::heartbeat:Delay): Started hex-3
      dummy3 (ocf::heartbeat:Delay): Started hex-1
      dummy4 (ocf::heartbeat:Delay): Started hex-2
      dummy5 (ocf::heartbeat:Delay): Started hex-1
      dummy6 (ocf::heartbeat:Delay): Started hex-2
  Resource Group: r0-group
      fs-r0 (ocf::heartbeat:Filesystem): Stopped ( disabled )
      dummy2 (ocf::heartbeat:Delay): Stopped
diff --git a/cts/scheduler/bug-cl-5168.summary b/cts/scheduler/bug-cl-5168.summary
index 7b8ff6f055..e5034a9720 100644
--- a/cts/scheduler/bug-cl-5168.summary
+++ b/cts/scheduler/bug-cl-5168.summary
@@ -1,74 +1,74 @@
 Current cluster status:
 Online: [ hex-1 hex-2 hex-3 ]

  fencing (stonith:external/sbd): Started hex-1
  Clone Set: baseclone [basegrp]
      Started: [ hex-1 hex-2 hex-3 ]
  fs-xfs-1 (ocf::heartbeat:Filesystem): Started hex-2
  Clone Set: fs2 [fs-ocfs-2]
      Started: [ hex-1 hex-2 hex-3 ]
- Master/Slave Set: ms-r0 [drbd-r0]
+ Clone Set: ms-r0 [drbd-r0] (promotable)
      Masters: [ hex-1 ]
      Slaves: [ hex-2 ]
  Resource Group: md0-group
      md0 (ocf::heartbeat:Raid1): Started hex-3
      vg-md0 (ocf::heartbeat:LVM): Started hex-3
      fs-md0 (ocf::heartbeat:Filesystem): Started hex-3
      dummy1 (ocf::heartbeat:Delay): Started hex-3
      dummy3 (ocf::heartbeat:Delay): Started hex-1
      dummy4 (ocf::heartbeat:Delay): Started hex-2
      dummy5 (ocf::heartbeat:Delay): Started hex-1
      dummy6 (ocf::heartbeat:Delay): Started hex-2
  Resource Group: r0-group
      fs-r0 (ocf::heartbeat:Filesystem): Started hex-1
      dummy2 (ocf::heartbeat:Delay): Started hex-1
- Master/Slave Set: ms-r1 [drbd-r1]
+ Clone Set: ms-r1 [drbd-r1] (promotable)
      Slaves: [ hex-2 hex-3 ]

 Transition Summary:
  * Promote drbd-r1:1 (Slave -> Master hex-3)

 Executing cluster transition:
  * Pseudo action: ms-r1_pre_notify_promote_0
  * Resource action: drbd-r1 notify on hex-2
  * Resource action: drbd-r1 notify on hex-3
  * Pseudo action: ms-r1_confirmed-pre_notify_promote_0
  * Pseudo action: ms-r1_promote_0
  * Resource action: drbd-r1 promote on hex-3
  * Pseudo action: ms-r1_promoted_0
  * Pseudo action: ms-r1_post_notify_promoted_0
  * Resource action: drbd-r1 notify on hex-2
  * Resource action: drbd-r1 notify on hex-3
  * Pseudo action: ms-r1_confirmed-post_notify_promoted_0
  * Resource action: drbd-r1 monitor=29000 on hex-2
  * Resource action: drbd-r1 monitor=31000 on hex-3

 Revised cluster status:
 Online: [ hex-1 hex-2 hex-3 ]

  fencing (stonith:external/sbd): Started hex-1
  Clone Set: baseclone [basegrp]
      Started: [ hex-1 hex-2 hex-3 ]
  fs-xfs-1 (ocf::heartbeat:Filesystem): Started hex-2
  Clone Set: fs2 [fs-ocfs-2]
      Started: [ hex-1 hex-2 hex-3 ]
- Master/Slave Set: ms-r0 [drbd-r0]
+ Clone Set: ms-r0 [drbd-r0] (promotable)
      Masters: [ hex-1 ]
      Slaves: [ hex-2 ]
  Resource Group: md0-group
      md0 (ocf::heartbeat:Raid1): Started hex-3
      vg-md0 (ocf::heartbeat:LVM): Started hex-3
      fs-md0 (ocf::heartbeat:Filesystem): Started hex-3
      dummy1 (ocf::heartbeat:Delay): Started hex-3
      dummy3 (ocf::heartbeat:Delay): Started hex-1
      dummy4 (ocf::heartbeat:Delay): Started hex-2
      dummy5 (ocf::heartbeat:Delay): Started hex-1
      dummy6 (ocf::heartbeat:Delay): Started hex-2
  Resource Group: r0-group
      fs-r0 (ocf::heartbeat:Filesystem): Started hex-1
      dummy2 (ocf::heartbeat:Delay): Started hex-1
- Master/Slave Set: ms-r1 [drbd-r1]
+ Clone Set: ms-r1 [drbd-r1] (promotable)
      Masters: [ hex-3 ]
      Slaves: [ hex-2 ]
diff --git a/cts/scheduler/bug-cl-5212.summary b/cts/scheduler/bug-cl-5212.summary
index 1800f06e51..40f1dc9abd 100644
--- a/cts/scheduler/bug-cl-5212.summary
+++ b/cts/scheduler/bug-cl-5212.summary
@@ -1,67 +1,67 @@
 Current cluster status:
 Node srv01 (3232238280): UNCLEAN (offline)
 Node srv02 (3232238290): UNCLEAN (offline)
 Online: [ srv03 ]

  Resource Group: grpStonith1
      prmStonith1-1 (stonith:external/ssh): Started srv02 (UNCLEAN)
  Resource Group: grpStonith2
      prmStonith2-1 (stonith:external/ssh): Started srv01 (UNCLEAN)
  Resource Group: grpStonith3
      prmStonith3-1 (stonith:external/ssh): Started srv01 (UNCLEAN)
- Master/Slave Set: msPostgresql [pgsql]
+ Clone Set: msPostgresql [pgsql] (promotable)
      pgsql (ocf::pacemaker:Stateful): Slave srv02 ( UNCLEAN )
      pgsql (ocf::pacemaker:Stateful): Master srv01 (UNCLEAN)
      Slaves: [ srv03 ]
  Clone Set: clnPingd [prmPingd]
      prmPingd (ocf::pacemaker:ping): Started srv02 (UNCLEAN)
      prmPingd (ocf::pacemaker:ping): Started srv01 (UNCLEAN)
      Started: [ srv03 ]

 Transition Summary:
  * Stop prmStonith1-1 ( srv02 ) blocked
  * Stop prmStonith2-1 ( srv01 ) blocked
  * Stop prmStonith3-1 ( srv01 ) due to node availability (blocked)
  * Stop pgsql:0 ( Slave srv02 ) due to node availability (blocked)
  * Stop pgsql:1 ( Master srv01 ) due to node availability (blocked)
  * Stop prmPingd:0 ( srv02 ) due to node availability (blocked)
  * Stop prmPingd:1 ( srv01 ) due to node availability (blocked)

 Executing cluster transition:
  * Pseudo action: grpStonith1_stop_0
  * Pseudo action: grpStonith1_start_0
  * Pseudo action: grpStonith2_stop_0
  * Pseudo action: grpStonith2_start_0
  * Pseudo action: grpStonith3_stop_0
  * Pseudo action: msPostgresql_pre_notify_stop_0
  * Pseudo action: clnPingd_stop_0
  * Resource action: pgsql notify on srv03
  * Pseudo action: msPostgresql_confirmed-pre_notify_stop_0
  * Pseudo action: msPostgresql_stop_0
  * Pseudo action: clnPingd_stopped_0
  * Pseudo action: msPostgresql_stopped_0
  * Pseudo action: msPostgresql_post_notify_stopped_0
  * Resource action: pgsql notify on srv03
  * Pseudo action: msPostgresql_confirmed-post_notify_stopped_0

 Revised cluster status:
 Node srv01 (3232238280): UNCLEAN (offline)
 Node srv02 (3232238290): UNCLEAN (offline)
 Online: [ srv03 ]

  Resource Group: grpStonith1
      prmStonith1-1 (stonith:external/ssh): Started srv02 (UNCLEAN)
  Resource Group: grpStonith2
      prmStonith2-1 (stonith:external/ssh): Started srv01 (UNCLEAN)
  Resource Group: grpStonith3
      prmStonith3-1 (stonith:external/ssh): Started srv01 (UNCLEAN)
- Master/Slave Set: msPostgresql [pgsql]
+ Clone Set: msPostgresql [pgsql] (promotable)
      pgsql (ocf::pacemaker:Stateful): Slave srv02 ( UNCLEAN )
      pgsql (ocf::pacemaker:Stateful): Master srv01 (UNCLEAN)
      Slaves: [ srv03 ]
  Clone Set: clnPingd [prmPingd]
      prmPingd (ocf::pacemaker:ping): Started srv02 (UNCLEAN)
      prmPingd (ocf::pacemaker:ping): Started srv01 (UNCLEAN)
      Started: [ srv03 ]
diff --git a/cts/scheduler/bug-cl-5213.summary b/cts/scheduler/bug-cl-5213.summary
index 54eda2b08c..24d4c98a06 100644
--- a/cts/scheduler/bug-cl-5213.summary
+++ b/cts/scheduler/bug-cl-5213.summary
@@ -1,20 +1,20 @@
 Current cluster status:
 Online: [ srv01 srv02 ]

  A-master (ocf::heartbeat:Dummy): Started srv02
- Master/Slave Set: msPostgresql [pgsql]
+ Clone Set: msPostgresql [pgsql] (promotable)
      Slaves: [ srv01 srv02 ]

 Transition Summary:

 Executing cluster transition:
  * Resource action: pgsql monitor=10000 on srv01

 Revised cluster status:
 Online: [ srv01 srv02 ]

  A-master (ocf::heartbeat:Dummy): Started srv02
- Master/Slave Set: msPostgresql [pgsql]
+ Clone Set: msPostgresql [pgsql] (promotable)
      Slaves: [ srv01 srv02 ]
diff --git a/cts/scheduler/bug-cl-5219.summary b/cts/scheduler/bug-cl-5219.summary
index c9ee54a352..81a3a97644 100644
--- a/cts/scheduler/bug-cl-5219.summary
+++ b/cts/scheduler/bug-cl-5219.summary
@@ -1,41 +1,41 @@
 1 of 9 resources DISABLED and 0 BLOCKED from being started due to failures

 Current cluster status:
 Online: [ ha1.test.anchor.net.au ha2.test.anchor.net.au ]

  child1-service (ocf::pacemaker:Dummy): Started ha2.test.anchor.net.au ( disabled )
  child2-service (ocf::pacemaker:Dummy): Started ha2.test.anchor.net.au
  parent-service (ocf::pacemaker:Dummy): Started ha2.test.anchor.net.au
- Master/Slave Set: child1 [stateful-child1]
+ Clone Set: child1 [stateful-child1] (promotable)
      Masters: [ ha2.test.anchor.net.au ]
      Slaves: [ ha1.test.anchor.net.au ]
- Master/Slave Set: child2 [stateful-child2]
+ Clone Set: child2 [stateful-child2] (promotable)
      Masters: [ ha2.test.anchor.net.au ]
      Slaves: [ ha1.test.anchor.net.au ]
- Master/Slave Set: parent [stateful-parent]
+ Clone Set: parent [stateful-parent] (promotable)
      Masters: [ ha2.test.anchor.net.au ]
      Slaves: [ ha1.test.anchor.net.au ]

 Transition Summary:
  * Stop child1-service ( ha2.test.anchor.net.au ) due to node availability

 Executing cluster transition:
  * Resource action: child1-service stop on ha2.test.anchor.net.au
  * Pseudo action: all_stopped

 Revised cluster status:
 Online: [ ha1.test.anchor.net.au ha2.test.anchor.net.au ]

  child1-service (ocf::pacemaker:Dummy): Stopped ( disabled )
  child2-service (ocf::pacemaker:Dummy): Started ha2.test.anchor.net.au
  parent-service (ocf::pacemaker:Dummy): Started ha2.test.anchor.net.au
- Master/Slave Set: child1 [stateful-child1]
+ Clone Set: child1 [stateful-child1] (promotable)
      Masters: [ ha2.test.anchor.net.au ]
      Slaves: [ ha1.test.anchor.net.au ]
- Master/Slave Set: child2 [stateful-child2]
+ Clone Set: child2 [stateful-child2] (promotable)
      Masters: [ ha2.test.anchor.net.au ]
      Slaves: [ ha1.test.anchor.net.au ]
- Master/Slave Set: parent [stateful-parent]
+ Clone Set: parent [stateful-parent] (promotable)
      Masters: [ ha2.test.anchor.net.au ]
      Slaves: [ ha1.test.anchor.net.au ]
diff --git a/cts/scheduler/bug-cl-5247.summary b/cts/scheduler/bug-cl-5247.summary
index dbb612c4ec..8183d3617c 100644
--- a/cts/scheduler/bug-cl-5247.summary
+++ b/cts/scheduler/bug-cl-5247.summary
@@ -1,103 +1,103 @@
 Using the original execution date of: 2015-08-12 02:53:40Z

 Current cluster status:
 Online: [ bl460g8n3 bl460g8n4 ]
 Containers: [ pgsr01:prmDB1 ]

  prmDB1 (ocf::heartbeat:VirtualDomain): Started bl460g8n3
  prmDB2 (ocf::heartbeat:VirtualDomain): FAILED bl460g8n4
  Resource Group: grpStonith1
      prmStonith1-2 (stonith:external/ipmi): Started bl460g8n4
  Resource Group: grpStonith2
      prmStonith2-2 (stonith:external/ipmi): Started bl460g8n3
  Resource Group: master-group
      vip-master (ocf::heartbeat:Dummy): FAILED pgsr02
      vip-rep (ocf::heartbeat:Dummy): FAILED pgsr02
- Master/Slave Set: msPostgresql [pgsql]
+ Clone Set: msPostgresql [pgsql] (promotable)
      Masters: [ pgsr01 ]
      Stopped: [ bl460g8n3 bl460g8n4 ]

 Transition Summary:
  * Fence (off) pgsr02 (resource: prmDB2) 'guest is unclean'
  * Stop prmDB2 (bl460g8n4) due to node availability
  * Restart prmStonith1-2 ( bl460g8n4 ) due to resource definition change
  * Restart prmStonith2-2 ( bl460g8n3 ) due to resource definition change
  * Recover vip-master ( pgsr02 -> pgsr01 )
  * Recover vip-rep ( pgsr02 -> pgsr01 )
  * Stop pgsql:0 ( Master pgsr02 ) due to node availability
  * Stop pgsr02 ( bl460g8n4 ) due to node availability

 Executing cluster transition:
  * Pseudo action: grpStonith1_stop_0
  * Resource action: prmStonith1-2 stop on bl460g8n4
  * Pseudo action: grpStonith2_stop_0
  * Resource action: prmStonith2-2 stop on bl460g8n3
  * Resource action: vip-master monitor on pgsr01
  * Resource action: vip-rep monitor on pgsr01
  * Pseudo action: msPostgresql_pre_notify_demote_0
  * Resource action: pgsr01 monitor on bl460g8n4
  * Resource action: pgsr02 stop on bl460g8n4
  * Resource action: pgsr02 monitor on bl460g8n3
  * Resource action: prmDB2 stop on bl460g8n4
  * Pseudo action: grpStonith1_stopped_0
  * Pseudo action: grpStonith1_start_0
  * Pseudo action: grpStonith2_stopped_0
  * Pseudo action: grpStonith2_start_0
  * Resource action: pgsql notify on pgsr01
  * Pseudo action: msPostgresql_confirmed-pre_notify_demote_0
  * Pseudo action: msPostgresql_demote_0
  * Pseudo action: stonith-pgsr02-off on pgsr02
  * Pseudo action: stonith_complete
  * Pseudo action: pgsql_post_notify_stop_0
  * Pseudo action: pgsql_demote_0
  * Pseudo action: msPostgresql_demoted_0
  * Pseudo action: msPostgresql_post_notify_demoted_0
  * Resource action: pgsql notify on pgsr01
  * Pseudo action: msPostgresql_confirmed-post_notify_demoted_0
  * Pseudo action: msPostgresql_pre_notify_stop_0
  * Pseudo action: master-group_stop_0
  * Pseudo action: vip-rep_stop_0
  * Resource action: pgsql notify on pgsr01
  * Pseudo action: msPostgresql_confirmed-pre_notify_stop_0
  * Pseudo action: msPostgresql_stop_0
  * Pseudo action: vip-master_stop_0
  * Pseudo action: pgsql_stop_0
  * Pseudo action: msPostgresql_stopped_0
  * Pseudo action: master-group_stopped_0
  * Pseudo action: master-group_start_0
  * Resource action: vip-master start on pgsr01
  * Resource action: vip-rep start on pgsr01
  * Pseudo action: msPostgresql_post_notify_stopped_0
  * Pseudo action: master-group_running_0
  * Resource action: vip-master monitor=10000 on pgsr01
  * Resource action: vip-rep monitor=10000 on pgsr01
  * Resource action: pgsql notify on pgsr01
  * Pseudo action: msPostgresql_confirmed-post_notify_stopped_0
  * Pseudo action: pgsql_notified_0
  * Resource action: pgsql monitor=9000 on pgsr01
  * Pseudo action: all_stopped
  * Resource action: prmStonith1-2 start on bl460g8n4
  * Resource action: prmStonith1-2 monitor=3600000 on bl460g8n4
  * Resource action: prmStonith2-2 start on bl460g8n3
  * Resource action: prmStonith2-2 monitor=3600000 on bl460g8n3
  * Pseudo action: grpStonith1_running_0
  * Pseudo action: grpStonith2_running_0
 Using the original execution date of: 2015-08-12 02:53:40Z

 Revised cluster status:
 Online: [ bl460g8n3 bl460g8n4 ]
 Containers: [ pgsr01:prmDB1 ]

  prmDB1 (ocf::heartbeat:VirtualDomain): Started bl460g8n3
  prmDB2 (ocf::heartbeat:VirtualDomain): FAILED
  Resource Group: grpStonith1
      prmStonith1-2 (stonith:external/ipmi): Started bl460g8n4
  Resource Group: grpStonith2
      prmStonith2-2 (stonith:external/ipmi): Started bl460g8n3
  Resource Group: master-group
      vip-master (ocf::heartbeat:Dummy): FAILED[ pgsr01 pgsr02 ]
      vip-rep (ocf::heartbeat:Dummy): FAILED[ pgsr01 pgsr02 ]
- Master/Slave Set: msPostgresql [pgsql]
+ Clone Set: msPostgresql [pgsql] (promotable)
      Masters: [ pgsr01 ]
      Stopped: [ bl460g8n3 bl460g8n4 ]
diff --git a/cts/scheduler/bug-lf-1852.summary b/cts/scheduler/bug-lf-1852.summary
index 337ad6aff8..bc7e271d9d 100644
--- a/cts/scheduler/bug-lf-1852.summary
+++ b/cts/scheduler/bug-lf-1852.summary
@@ -1,38 +1,38 @@
 Current cluster status:
 Online: [ mysql-01 mysql-02 ]

- Master/Slave Set: ms-drbd0 [drbd0]
+ Clone Set: ms-drbd0 [drbd0] (promotable)
      Masters: [ mysql-02 ]
      Stopped: [ mysql-01 ]
  Resource Group: fs_mysql_ip
      fs0 (ocf::heartbeat:Filesystem): Started mysql-02
      mysqlid (lsb:mysql): Started mysql-02
      ip_resource (ocf::heartbeat:IPaddr2): Started mysql-02

 Transition Summary:
  * Start drbd0:1 (mysql-01)

 Executing cluster transition:
  * Pseudo action: ms-drbd0_pre_notify_start_0
  * Resource action: drbd0:0 notify on mysql-02
  * Pseudo action: ms-drbd0_confirmed-pre_notify_start_0
  * Pseudo action: ms-drbd0_start_0
  * Resource action: drbd0:1 start on mysql-01
  * Pseudo action: ms-drbd0_running_0
  * Pseudo action: ms-drbd0_post_notify_running_0
  * Resource action: drbd0:0 notify on mysql-02
  * Resource action: drbd0:1 notify on mysql-01
  * Pseudo action: ms-drbd0_confirmed-post_notify_running_0

 Revised cluster status:
 Online: [ mysql-01 mysql-02 ]

- Master/Slave Set: ms-drbd0 [drbd0]
+ Clone Set: ms-drbd0 [drbd0] (promotable)
      Masters: [ mysql-02 ]
      Slaves: [ mysql-01 ]
  Resource Group: fs_mysql_ip
      fs0 (ocf::heartbeat:Filesystem): Started mysql-02
      mysqlid (lsb:mysql): Started mysql-02
      ip_resource (ocf::heartbeat:IPaddr2): Started mysql-02
diff --git a/cts/scheduler/bug-lf-2106.summary b/cts/scheduler/bug-lf-2106.summary
index 0c7c485bd2..bff720773d 100644
--- a/cts/scheduler/bug-lf-2106.summary
+++ b/cts/scheduler/bug-lf-2106.summary
@@ -1,90 +1,90 @@
 Current cluster status:
 Online: [ cl-virt-1 cl-virt-2 ]

  apcstonith (stonith:apcmastersnmp): Started cl-virt-1
  Clone Set: pingdclone [pingd]
      Started: [ cl-virt-1 cl-virt-2 ]
  Resource Group: ssh
      ssh-ip1 (ocf::heartbeat:IPaddr2): Started cl-virt-2
      ssh-ip2 (ocf::heartbeat:IPaddr2): Started cl-virt-2
      ssh-bin (ocf::dk:opensshd): Started cl-virt-2
  itwiki (ocf::heartbeat:VirtualDomain): Started cl-virt-2
- Master/Slave Set: ms-itwiki [drbd-itwiki]
+ Clone Set: ms-itwiki [drbd-itwiki] (promotable)
      Masters: [ cl-virt-2 ]
      Slaves: [ cl-virt-1 ]
  bugtrack (ocf::heartbeat:VirtualDomain): Started cl-virt-2
- Master/Slave Set: ms-bugtrack [drbd-bugtrack]
+ Clone Set: ms-bugtrack [drbd-bugtrack] (promotable)
      Masters: [ cl-virt-2 ]
      Slaves: [ cl-virt-1 ]
  servsyslog (ocf::heartbeat:VirtualDomain): Started cl-virt-2
- Master/Slave Set: ms-servsyslog [drbd-servsyslog]
+ Clone Set: ms-servsyslog [drbd-servsyslog] (promotable)
      Masters: [ cl-virt-2 ]
      Slaves: [ cl-virt-1 ]
  smsprod2 (ocf::heartbeat:VirtualDomain): Started cl-virt-2
- Master/Slave Set: ms-smsprod2 [drbd-smsprod2]
+ Clone Set: ms-smsprod2 [drbd-smsprod2] (promotable)
      Masters: [ cl-virt-2 ]
      Slaves: [ cl-virt-1 ]
  medomus-cvs (ocf::heartbeat:VirtualDomain): Started cl-virt-2
- Master/Slave Set: ms-medomus-cvs [drbd-medomus-cvs]
+ Clone Set: ms-medomus-cvs [drbd-medomus-cvs] (promotable)
      Masters: [ cl-virt-2 ]
      Slaves: [ cl-virt-1 ]
  infotos (ocf::heartbeat:VirtualDomain): Started cl-virt-2
- Master/Slave Set: ms-infotos [drbd-infotos]
+ Clone Set: ms-infotos [drbd-infotos] (promotable)
      Masters: [ cl-virt-2 ]
      Slaves: [ cl-virt-1 ]

 Transition Summary:
  * Restart pingd:0 ( cl-virt-1 ) due to resource definition change
  * Restart pingd:1 ( cl-virt-2 ) due to resource definition change

 Executing cluster transition:
  * Cluster action: clear_failcount for pingd on cl-virt-1
  * Cluster action: clear_failcount for pingd on cl-virt-2
  * Pseudo action: pingdclone_stop_0
  * Resource action: pingd:0 stop on cl-virt-1
  * Resource action: pingd:0 stop on cl-virt-2
  * Pseudo action: pingdclone_stopped_0
  * Pseudo action: pingdclone_start_0
  * Pseudo action: all_stopped
  * Resource action: pingd:0 start on cl-virt-1
  * Resource action: pingd:0 monitor=30000 on cl-virt-1
  * Resource action: pingd:0 start on cl-virt-2
  * Resource action: pingd:0 monitor=30000 on cl-virt-2
  * Pseudo action: pingdclone_running_0

 Revised cluster status:
 Online: [ cl-virt-1 cl-virt-2 ]

  apcstonith (stonith:apcmastersnmp): Started cl-virt-1
  Clone Set: pingdclone [pingd]
      Started: [ cl-virt-1 cl-virt-2 ]
  Resource Group: ssh
      ssh-ip1 (ocf::heartbeat:IPaddr2): Started cl-virt-2
      ssh-ip2 (ocf::heartbeat:IPaddr2): Started cl-virt-2
      ssh-bin (ocf::dk:opensshd): Started cl-virt-2
  itwiki (ocf::heartbeat:VirtualDomain): Started cl-virt-2
- Master/Slave Set: ms-itwiki [drbd-itwiki]
+ Clone Set: ms-itwiki [drbd-itwiki] (promotable)
      Masters: [ cl-virt-2 ]
      Slaves: [ cl-virt-1 ]
  bugtrack (ocf::heartbeat:VirtualDomain): Started cl-virt-2
- Master/Slave Set: ms-bugtrack [drbd-bugtrack]
+ Clone Set: ms-bugtrack [drbd-bugtrack] (promotable)
      Masters: [ cl-virt-2 ]
      Slaves: [ cl-virt-1 ]
  servsyslog (ocf::heartbeat:VirtualDomain): Started cl-virt-2
- Master/Slave Set: ms-servsyslog [drbd-servsyslog]
+ Clone Set: ms-servsyslog [drbd-servsyslog] (promotable)
      Masters: [ cl-virt-2 ]
      Slaves: [ cl-virt-1 ]
  smsprod2 (ocf::heartbeat:VirtualDomain): Started cl-virt-2
- Master/Slave Set: ms-smsprod2 [drbd-smsprod2]
+ Clone Set: ms-smsprod2 [drbd-smsprod2] (promotable)
      Masters: [ cl-virt-2 ]
      Slaves: [ cl-virt-1 ]
  medomus-cvs (ocf::heartbeat:VirtualDomain): Started cl-virt-2
- Master/Slave Set: ms-medomus-cvs [drbd-medomus-cvs]
+ Clone Set: ms-medomus-cvs [drbd-medomus-cvs] (promotable)
      Masters: [ cl-virt-2 ]
      Slaves: [ cl-virt-1 ]
  infotos (ocf::heartbeat:VirtualDomain): Started cl-virt-2
- Master/Slave Set: ms-infotos [drbd-infotos]
+ Clone Set: ms-infotos [drbd-infotos] (promotable)
      Masters: [ cl-virt-2 ]
      Slaves: [ cl-virt-1 ]
diff --git a/cts/scheduler/bug-lf-2153.summary b/cts/scheduler/bug-lf-2153.summary
index 01567b5c01..e670814bc2 100644
--- a/cts/scheduler/bug-lf-2153.summary
+++ b/cts/scheduler/bug-lf-2153.summary
@@ -1,58 +1,58 @@
 Current cluster status:
 Node bob (9a4cafd3-fcfc-4de9-9440-10bc8822d9af): standby
 Online: [ alice ]

- Master/Slave Set: ms_drbd_iscsivg01 [res_drbd_iscsivg01]
+ Clone Set: ms_drbd_iscsivg01 [res_drbd_iscsivg01] (promotable)
      Masters: [ alice ]
      Slaves: [ bob ]
  Clone Set: cl_tgtd [res_tgtd]
      Started: [ alice bob ]
  Resource Group: rg_iscsivg01
      res_portblock_iscsivg01_block (ocf::heartbeat:portblock): Started alice
      res_lvm_iscsivg01 (ocf::heartbeat:LVM): Started alice
      res_target_iscsivg01 (ocf::heartbeat:iSCSITarget): Started alice
      res_lu_iscsivg01_lun1 (ocf::heartbeat:iSCSILogicalUnit): Started alice
      res_lu_iscsivg01_lun2 (ocf::heartbeat:iSCSILogicalUnit): Started alice
      res_ip_alicebob01 (ocf::heartbeat:IPaddr2): Started alice
      res_portblock_iscsivg01_unblock (ocf::heartbeat:portblock): Started alice

 Transition Summary:
  * Stop res_drbd_iscsivg01:0 ( Slave bob ) due to node availability
  * Stop res_tgtd:0 (bob) due to node availability

 Executing cluster transition:
  * Pseudo action: ms_drbd_iscsivg01_pre_notify_stop_0
  * Pseudo action: cl_tgtd_stop_0
  * Resource action: res_drbd_iscsivg01:0 notify on bob
  * Resource action: res_drbd_iscsivg01:1 notify on alice
  * Pseudo action: ms_drbd_iscsivg01_confirmed-pre_notify_stop_0
  * Pseudo action: ms_drbd_iscsivg01_stop_0
  * Resource action: res_tgtd:0 stop on bob
  * Pseudo action: cl_tgtd_stopped_0
  * Resource action: res_drbd_iscsivg01:0 stop on bob
  * Pseudo action: ms_drbd_iscsivg01_stopped_0
  * Pseudo action: ms_drbd_iscsivg01_post_notify_stopped_0
  * Resource action: res_drbd_iscsivg01:1 notify on alice
  * Pseudo action: ms_drbd_iscsivg01_confirmed-post_notify_stopped_0
  * Pseudo action: all_stopped

 Revised cluster status:
 Node bob (9a4cafd3-fcfc-4de9-9440-10bc8822d9af): standby
 Online: [ alice ]

- Master/Slave Set: ms_drbd_iscsivg01 [res_drbd_iscsivg01]
+ Clone Set: ms_drbd_iscsivg01 [res_drbd_iscsivg01] (promotable)
      Masters: [ alice ]
      Stopped: [ bob ]
  Clone Set: cl_tgtd [res_tgtd]
      Started: [ alice ]
      Stopped: [ bob ]
  Resource Group: rg_iscsivg01
      res_portblock_iscsivg01_block (ocf::heartbeat:portblock): Started alice
      res_lvm_iscsivg01 (ocf::heartbeat:LVM): Started alice
      res_target_iscsivg01 (ocf::heartbeat:iSCSITarget): Started alice
      res_lu_iscsivg01_lun1 (ocf::heartbeat:iSCSILogicalUnit): Started alice
      res_lu_iscsivg01_lun2 (ocf::heartbeat:iSCSILogicalUnit): Started alice
      res_ip_alicebob01 (ocf::heartbeat:IPaddr2): Started alice
      res_portblock_iscsivg01_unblock (ocf::heartbeat:portblock): Started alice
diff --git a/cts/scheduler/bug-lf-2317.summary b/cts/scheduler/bug-lf-2317.summary
index f6b0ae406b..c14aedaaaa 100644
--- a/cts/scheduler/bug-lf-2317.summary
+++ b/cts/scheduler/bug-lf-2317.summary
@@ -1,34 +1,34 @@
 Current cluster status:
 Online: [ ibm1.isg.si ibm2.isg.si ]

  HostingIsg (ocf::heartbeat:Xen): Started ibm2.isg.si
- Master/Slave Set: ms_drbd_r0 [drbd_r0]
+ Clone Set: ms_drbd_r0 [drbd_r0] (promotable)
      Masters: [ ibm2.isg.si ]
      Slaves: [ ibm1.isg.si ]

 Transition Summary:
  * Promote drbd_r0:1 (Slave -> Master ibm1.isg.si)

 Executing cluster transition:
  * Resource action: drbd_r0:0 cancel=30000 on ibm1.isg.si
  * Pseudo action: ms_drbd_r0_pre_notify_promote_0
  * Resource action: drbd_r0:1 notify on ibm2.isg.si
  * Resource action: drbd_r0:0 notify on ibm1.isg.si
  * Pseudo action: ms_drbd_r0_confirmed-pre_notify_promote_0
  * Pseudo action: ms_drbd_r0_promote_0
  * Resource action: drbd_r0:0 promote on ibm1.isg.si
  * Pseudo action: ms_drbd_r0_promoted_0
  * Pseudo action: ms_drbd_r0_post_notify_promoted_0
  * Resource action: drbd_r0:1 notify on ibm2.isg.si
  * Resource action: drbd_r0:0 notify on ibm1.isg.si
  * Pseudo action: ms_drbd_r0_confirmed-post_notify_promoted_0
  * Resource action: drbd_r0:0 monitor=15000 on ibm1.isg.si

 Revised cluster status:
 Online: [ ibm1.isg.si ibm2.isg.si ]

  HostingIsg (ocf::heartbeat:Xen): Started ibm2.isg.si
- Master/Slave Set: ms_drbd_r0 [drbd_r0]
+ Clone Set: ms_drbd_r0 [drbd_r0] (promotable)
      Masters: [ ibm1.isg.si ibm2.isg.si ]
diff --git a/cts/scheduler/bug-lf-2358.summary b/cts/scheduler/bug-lf-2358.summary
index 98b26eff29..d16661394f 100644
--- a/cts/scheduler/bug-lf-2358.summary
+++ b/cts/scheduler/bug-lf-2358.summary
@@ -1,65 +1,65 @@
 2 of 15 resources DISABLED and 0 BLOCKED from being started due to failures

 Current cluster status:
 Online: [ alice.demo bob.demo ]

- Master/Slave Set: ms_drbd_nfsexport [res_drbd_nfsexport]
+ Clone Set: ms_drbd_nfsexport [res_drbd_nfsexport] (promotable)
      Stopped (disabled): [ alice.demo bob.demo ]
  Resource Group: rg_nfs
      res_fs_nfsexport (ocf::heartbeat:Filesystem): Stopped
      res_ip_nfs (ocf::heartbeat:IPaddr2): Stopped
      res_nfs (lsb:nfs): Stopped
  Resource Group: rg_mysql1
      res_fs_mysql1 (ocf::heartbeat:Filesystem): Started bob.demo
      res_ip_mysql1 (ocf::heartbeat:IPaddr2): Started bob.demo
      res_mysql1 (ocf::heartbeat:mysql): Started bob.demo
- Master/Slave Set: ms_drbd_mysql1 [res_drbd_mysql1]
+ Clone Set: ms_drbd_mysql1 [res_drbd_mysql1] (promotable)
      Masters: [ bob.demo ]
      Stopped: [ alice.demo ]
- Master/Slave Set: ms_drbd_mysql2 [res_drbd_mysql2]
+ Clone Set: ms_drbd_mysql2 [res_drbd_mysql2] (promotable)
      Masters: [ alice.demo ]
      Slaves: [ bob.demo ]
  Resource Group: rg_mysql2
      res_fs_mysql2 (ocf::heartbeat:Filesystem): Started alice.demo
      res_ip_mysql2 (ocf::heartbeat:IPaddr2): Started alice.demo
      res_mysql2 (ocf::heartbeat:mysql): Started alice.demo

 Transition Summary:
  * Start res_drbd_mysql1:1 (alice.demo)

 Executing cluster transition:
  * Pseudo action: ms_drbd_mysql1_pre_notify_start_0
  * Resource action: res_drbd_mysql1:0 notify on bob.demo
  * Pseudo action: ms_drbd_mysql1_confirmed-pre_notify_start_0
  * Pseudo action: ms_drbd_mysql1_start_0
  * Resource action: res_drbd_mysql1:1 start on alice.demo
  * Pseudo action: ms_drbd_mysql1_running_0
  * Pseudo action: ms_drbd_mysql1_post_notify_running_0
  * Resource action: res_drbd_mysql1:0 notify on bob.demo
  * Resource action: res_drbd_mysql1:1 notify on alice.demo
  * Pseudo action: ms_drbd_mysql1_confirmed-post_notify_running_0

 Revised cluster status:
 Online: [ alice.demo bob.demo ]

- Master/Slave Set: ms_drbd_nfsexport [res_drbd_nfsexport]
+ Clone Set: ms_drbd_nfsexport [res_drbd_nfsexport] (promotable)
      Stopped (disabled): [ alice.demo bob.demo ]
  Resource Group: rg_nfs
      res_fs_nfsexport (ocf::heartbeat:Filesystem): Stopped
      res_ip_nfs (ocf::heartbeat:IPaddr2): Stopped
      res_nfs (lsb:nfs): Stopped
  Resource Group: rg_mysql1
      res_fs_mysql1 (ocf::heartbeat:Filesystem): Started bob.demo
      res_ip_mysql1 (ocf::heartbeat:IPaddr2): Started bob.demo
      res_mysql1 (ocf::heartbeat:mysql): Started bob.demo
- Master/Slave Set: ms_drbd_mysql1 [res_drbd_mysql1]
+ Clone Set: ms_drbd_mysql1 [res_drbd_mysql1] (promotable)
      Masters: [ bob.demo ]
      Slaves: [ alice.demo ]
- Master/Slave Set: ms_drbd_mysql2 [res_drbd_mysql2]
+ Clone Set: ms_drbd_mysql2 [res_drbd_mysql2] (promotable)
      Masters: [ alice.demo ]
      Slaves: [ bob.demo ]
  Resource Group: rg_mysql2
      res_fs_mysql2 (ocf::heartbeat:Filesystem): Started alice.demo
      res_ip_mysql2 (ocf::heartbeat:IPaddr2): Started alice.demo
      res_mysql2 (ocf::heartbeat:mysql): Started alice.demo
diff --git a/cts/scheduler/bug-lf-2361.summary b/cts/scheduler/bug-lf-2361.summary
index b88cd90ede..c36514e737 100644
--- a/cts/scheduler/bug-lf-2361.summary
+++ b/cts/scheduler/bug-lf-2361.summary
@@ -1,42 +1,42 @@
 Current cluster status:
 Online: [ alice.demo bob.demo ]

  dummy1 (ocf::heartbeat:Dummy): Stopped
- Master/Slave Set: ms_stateful [stateful]
+ Clone Set: ms_stateful [stateful] (promotable)
      Stopped: [ alice.demo bob.demo ]
  Clone Set: cl_dummy2 [dummy2]
      Stopped: [ alice.demo bob.demo ]

 Transition Summary:
  * Start stateful:0 (alice.demo)
  * Start stateful:1 (bob.demo)
  * Start dummy2:0 ( alice.demo ) due to unrunnable dummy1 start (blocked)
  * Start dummy2:1 (
bob.demo ) due to unrunnable dummy1 start (blocked) Executing cluster transition: * Pseudo action: ms_stateful_pre_notify_start_0 * Resource action: service2:0 delete on alice.demo * Resource action: service2:0 delete on bob.demo * Resource action: service2:1 delete on bob.demo * Resource action: service1 delete on alice.demo * Resource action: service1 delete on bob.demo * Pseudo action: ms_stateful_confirmed-pre_notify_start_0 * Pseudo action: ms_stateful_start_0 * Resource action: stateful:0 start on alice.demo * Resource action: stateful:1 start on bob.demo * Pseudo action: ms_stateful_running_0 * Pseudo action: ms_stateful_post_notify_running_0 * Resource action: stateful:0 notify on alice.demo * Resource action: stateful:1 notify on bob.demo * Pseudo action: ms_stateful_confirmed-post_notify_running_0 Revised cluster status: Online: [ alice.demo bob.demo ] dummy1 (ocf::heartbeat:Dummy): Stopped - Master/Slave Set: ms_stateful [stateful] + Clone Set: ms_stateful [stateful] (promotable) Slaves: [ alice.demo bob.demo ] Clone Set: cl_dummy2 [dummy2] Stopped: [ alice.demo bob.demo ] diff --git a/cts/scheduler/bug-lf-2493.summary b/cts/scheduler/bug-lf-2493.summary index 6b61b1c716..3bc5d8e6c1 100644 --- a/cts/scheduler/bug-lf-2493.summary +++ b/cts/scheduler/bug-lf-2493.summary @@ -1,64 +1,64 @@ Current cluster status: Online: [ hpn07 hpn08 ] p_dummy1 (ocf::pacemaker:Dummy): Started hpn07 p_dummy2 (ocf::pacemaker:Dummy): Stopped p_dummy4 (ocf::pacemaker:Dummy): Stopped p_dummy3 (ocf::pacemaker:Dummy): Stopped - Master/Slave Set: ms_stateful1 [p_stateful1] + Clone Set: ms_stateful1 [p_stateful1] (promotable) Masters: [ hpn07 ] Slaves: [ hpn08 ] Transition Summary: * Start p_dummy2 (hpn08) * Start p_dummy4 (hpn07) * Start p_dummy3 (hpn08) Executing cluster transition: * Resource action: p_dummy2 start on hpn08 * Resource action: p_dummy3 start on hpn08 * Resource action: res_Filesystem_nfs_fs1 delete on hpn08 * Resource action: res_Filesystem_nfs_fs1 delete on hpn07 * Resource action: res_drbd_nfs:0 delete on hpn08 * Resource action: res_drbd_nfs:0 delete on hpn07 * Resource action: res_Filesystem_nfs_fs2 delete on hpn08 * Resource action: res_Filesystem_nfs_fs2 delete on hpn07 * Resource action: res_Filesystem_nfs_fs3 delete on hpn08 * Resource action: res_Filesystem_nfs_fs3 delete on hpn07 * Resource action: res_exportfs_fs1 delete on hpn08 * Resource action: res_exportfs_fs1 delete on hpn07 * Resource action: res_exportfs_fs2 delete on hpn08 * Resource action: res_exportfs_fs2 delete on hpn07 * Resource action: res_exportfs_fs3 delete on hpn08 * Resource action: res_exportfs_fs3 delete on hpn07 * Resource action: res_drbd_nfs:1 delete on hpn08 * Resource action: res_drbd_nfs:1 delete on hpn07 * Resource action: res_LVM_nfs delete on hpn08 * Resource action: res_LVM_nfs delete on hpn07 * Resource action: res_LVM_p_vg-sap delete on hpn08 * Resource action: res_LVM_p_vg-sap delete on hpn07 * Resource action: res_exportfs_rootfs:0 delete on hpn07 * Resource action: res_IPaddr2_nfs delete on hpn08 * Resource action: res_IPaddr2_nfs delete on hpn07 * Resource action: res_drbd_hpn78:0 delete on hpn08 * Resource action: res_drbd_hpn78:0 delete on hpn07 * Resource action: res_Filesystem_sap_db delete on hpn08 * Resource action: res_Filesystem_sap_db delete on hpn07 * Resource action: res_Filesystem_sap_ci delete on hpn08 * Resource action: res_Filesystem_sap_ci delete on hpn07 * Resource action: res_exportfs_rootfs:1 delete on hpn08 * Resource action: res_drbd_hpn78:1 delete on hpn08 * Resource 
action: p_dummy4 start on hpn07 Revised cluster status: Online: [ hpn07 hpn08 ] p_dummy1 (ocf::pacemaker:Dummy): Started hpn07 p_dummy2 (ocf::pacemaker:Dummy): Started hpn08 p_dummy4 (ocf::pacemaker:Dummy): Started hpn07 p_dummy3 (ocf::pacemaker:Dummy): Started hpn08 - Master/Slave Set: ms_stateful1 [p_stateful1] + Clone Set: ms_stateful1 [p_stateful1] (promotable) Masters: [ hpn07 ] Slaves: [ hpn08 ] diff --git a/cts/scheduler/bug-lf-2544.summary b/cts/scheduler/bug-lf-2544.summary index 67bbf093a8..4a32624588 100644 --- a/cts/scheduler/bug-lf-2544.summary +++ b/cts/scheduler/bug-lf-2544.summary @@ -1,22 +1,22 @@ Current cluster status: Online: [ node-0 node-1 ] - Master/Slave Set: ms0 [s0] + Clone Set: ms0 [s0] (promotable) Slaves: [ node-0 node-1 ] Transition Summary: * Promote s0:1 (Slave -> Master node-1) Executing cluster transition: * Pseudo action: ms0_promote_0 * Resource action: s0:1 promote on node-1 * Pseudo action: ms0_promoted_0 Revised cluster status: Online: [ node-0 node-1 ] - Master/Slave Set: ms0 [s0] + Clone Set: ms0 [s0] (promotable) Masters: [ node-1 ] Slaves: [ node-0 ] diff --git a/cts/scheduler/bug-lf-2606.summary b/cts/scheduler/bug-lf-2606.summary index ef30bacef0..d37b414ff6 100644 --- a/cts/scheduler/bug-lf-2606.summary +++ b/cts/scheduler/bug-lf-2606.summary @@ -1,45 +1,45 @@ 1 of 5 resources DISABLED and 0 BLOCKED from being started due to failures Current cluster status: Node node2: UNCLEAN (online) Online: [ node1 ] rsc_stonith (stonith:null): Started node1 rsc1 (ocf::pacemaker:Dummy): FAILED node2 ( disabled ) rsc2 (ocf::pacemaker:Dummy): Started node2 - Master/Slave Set: ms3 [rsc3] + Clone Set: ms3 [rsc3] (promotable) Masters: [ node2 ] Slaves: [ node1 ] Transition Summary: * Fence (reboot) node2 'rsc1 failed there' * Stop rsc1 ( node2 ) due to node availability * Move rsc2 ( node2 -> node1 ) * Stop rsc3:1 ( Master node2 ) due to node availability Executing cluster transition: * Pseudo action: ms3_demote_0 * Fencing node2 (reboot) * Pseudo action: rsc1_stop_0 * Pseudo action: rsc2_stop_0 * Pseudo action: rsc3:1_demote_0 * Pseudo action: ms3_demoted_0 * Pseudo action: ms3_stop_0 * Pseudo action: stonith_complete * Resource action: rsc2 start on node1 * Pseudo action: rsc3:1_stop_0 * Pseudo action: ms3_stopped_0 * Pseudo action: all_stopped * Resource action: rsc2 monitor=10000 on node1 Revised cluster status: Online: [ node1 ] OFFLINE: [ node2 ] rsc_stonith (stonith:null): Started node1 rsc1 (ocf::pacemaker:Dummy): Stopped ( disabled ) rsc2 (ocf::pacemaker:Dummy): Started node1 - Master/Slave Set: ms3 [rsc3] + Clone Set: ms3 [rsc3] (promotable) Slaves: [ node1 ] Stopped: [ node2 ] diff --git a/cts/scheduler/bug-pm-11.summary b/cts/scheduler/bug-pm-11.summary index dc26a2ea1d..5fad125772 100644 --- a/cts/scheduler/bug-pm-11.summary +++ b/cts/scheduler/bug-pm-11.summary @@ -1,46 +1,46 @@ Current cluster status: Online: [ node-a node-b ] - Master/Slave Set: ms-sf [group] (unique) + Clone Set: ms-sf [group] (promotable) (unique) Resource Group: group:0 stateful-1:0 (ocf::heartbeat:Stateful): Slave node-b stateful-2:0 (ocf::heartbeat:Stateful): Stopped Resource Group: group:1 stateful-1:1 (ocf::heartbeat:Stateful): Master node-a stateful-2:1 (ocf::heartbeat:Stateful): Stopped Transition Summary: * Start stateful-2:0 (node-b) * Promote stateful-2:1 (Stopped -> Master node-a) Executing cluster transition: * Resource action: stateful-2:0 monitor on node-b * Resource action: stateful-2:0 monitor on node-a * Resource action: stateful-2:1 monitor on node-b * Resource 
action: stateful-2:1 monitor on node-a * Pseudo action: ms-sf_start_0 * Pseudo action: group:0_start_0 * Resource action: stateful-2:0 start on node-b * Pseudo action: group:1_start_0 * Resource action: stateful-2:1 start on node-a * Pseudo action: group:0_running_0 * Pseudo action: group:1_running_0 * Pseudo action: ms-sf_running_0 * Pseudo action: ms-sf_promote_0 * Pseudo action: group:1_promote_0 * Resource action: stateful-2:1 promote on node-a * Pseudo action: group:1_promoted_0 * Pseudo action: ms-sf_promoted_0 Revised cluster status: Online: [ node-a node-b ] - Master/Slave Set: ms-sf [group] (unique) + Clone Set: ms-sf [group] (promotable) (unique) Resource Group: group:0 stateful-1:0 (ocf::heartbeat:Stateful): Slave node-b stateful-2:0 (ocf::heartbeat:Stateful): Slave node-b Resource Group: group:1 stateful-1:1 (ocf::heartbeat:Stateful): Master node-a stateful-2:1 (ocf::heartbeat:Stateful): Master node-a diff --git a/cts/scheduler/bug-pm-12.summary b/cts/scheduler/bug-pm-12.summary index 1ec6b8d150..7a3c876f50 100644 --- a/cts/scheduler/bug-pm-12.summary +++ b/cts/scheduler/bug-pm-12.summary @@ -1,56 +1,56 @@ Current cluster status: Online: [ node-a node-b ] - Master/Slave Set: ms-sf [group] (unique) + Clone Set: ms-sf [group] (promotable) (unique) Resource Group: group:0 stateful-1:0 (ocf::heartbeat:Stateful): Slave node-b stateful-2:0 (ocf::heartbeat:Stateful): Slave node-b Resource Group: group:1 stateful-1:1 (ocf::heartbeat:Stateful): Master node-a stateful-2:1 (ocf::heartbeat:Stateful): Master node-a Transition Summary: * Restart stateful-2:0 ( Slave node-b ) due to resource definition change * Restart stateful-2:1 ( Master node-a ) due to resource definition change Executing cluster transition: * Pseudo action: ms-sf_demote_0 * Pseudo action: group:1_demote_0 * Resource action: stateful-2:1 demote on node-a * Pseudo action: group:1_demoted_0 * Pseudo action: ms-sf_demoted_0 * Pseudo action: ms-sf_stop_0 * Pseudo action: group:0_stop_0 * Resource action: stateful-2:0 stop on node-b * Pseudo action: group:1_stop_0 * Resource action: stateful-2:1 stop on node-a * Pseudo action: all_stopped * Pseudo action: group:0_stopped_0 * Pseudo action: group:1_stopped_0 * Pseudo action: ms-sf_stopped_0 * Pseudo action: ms-sf_start_0 * Pseudo action: group:0_start_0 * Resource action: stateful-2:0 start on node-b * Pseudo action: group:1_start_0 * Resource action: stateful-2:1 start on node-a * Pseudo action: group:0_running_0 * Pseudo action: group:1_running_0 * Pseudo action: ms-sf_running_0 * Pseudo action: ms-sf_promote_0 * Pseudo action: group:1_promote_0 * Resource action: stateful-2:1 promote on node-a * Pseudo action: group:1_promoted_0 * Pseudo action: ms-sf_promoted_0 Revised cluster status: Online: [ node-a node-b ] - Master/Slave Set: ms-sf [group] (unique) + Clone Set: ms-sf [group] (promotable) (unique) Resource Group: group:0 stateful-1:0 (ocf::heartbeat:Stateful): Slave node-b stateful-2:0 (ocf::heartbeat:Stateful): Slave node-b Resource Group: group:1 stateful-1:1 (ocf::heartbeat:Stateful): Master node-a stateful-2:1 (ocf::heartbeat:Stateful): Master node-a diff --git a/cts/scheduler/clone-no-shuffle.summary b/cts/scheduler/clone-no-shuffle.summary index 50dd872159..0d0d8cf643 100644 --- a/cts/scheduler/clone-no-shuffle.summary +++ b/cts/scheduler/clone-no-shuffle.summary @@ -1,60 +1,60 @@ Current cluster status: Online: [ dktest1sles10 dktest2sles10 ] stonith-1 (stonith:dummy): Stopped - Master/Slave Set: ms-drbd1 [drbd1] + Clone Set: ms-drbd1 [drbd1] (promotable) Masters: 
[ dktest2sles10 ] Stopped: [ dktest1sles10 ] testip (ocf::heartbeat:IPaddr2): Started dktest2sles10 Transition Summary: * Start stonith-1 (dktest1sles10) * Stop drbd1:0 ( Master dktest2sles10 ) due to node availability * Start drbd1:1 (dktest1sles10) * Stop testip ( dktest2sles10 ) due to node availability Executing cluster transition: * Resource action: stonith-1 monitor on dktest2sles10 * Resource action: stonith-1 monitor on dktest1sles10 * Resource action: drbd1:1 monitor on dktest1sles10 * Pseudo action: ms-drbd1_pre_notify_demote_0 * Resource action: testip stop on dktest2sles10 * Resource action: testip monitor on dktest1sles10 * Resource action: stonith-1 start on dktest1sles10 * Resource action: drbd1:0 notify on dktest2sles10 * Pseudo action: ms-drbd1_confirmed-pre_notify_demote_0 * Pseudo action: ms-drbd1_demote_0 * Resource action: drbd1:0 demote on dktest2sles10 * Pseudo action: ms-drbd1_demoted_0 * Pseudo action: ms-drbd1_post_notify_demoted_0 * Resource action: drbd1:0 notify on dktest2sles10 * Pseudo action: ms-drbd1_confirmed-post_notify_demoted_0 * Pseudo action: ms-drbd1_pre_notify_stop_0 * Resource action: drbd1:0 notify on dktest2sles10 * Pseudo action: ms-drbd1_confirmed-pre_notify_stop_0 * Pseudo action: ms-drbd1_stop_0 * Resource action: drbd1:0 stop on dktest2sles10 * Pseudo action: ms-drbd1_stopped_0 * Pseudo action: ms-drbd1_post_notify_stopped_0 * Pseudo action: ms-drbd1_confirmed-post_notify_stopped_0 * Pseudo action: ms-drbd1_pre_notify_start_0 * Pseudo action: all_stopped * Pseudo action: ms-drbd1_confirmed-pre_notify_start_0 * Pseudo action: ms-drbd1_start_0 * Resource action: drbd1:1 start on dktest1sles10 * Pseudo action: ms-drbd1_running_0 * Pseudo action: ms-drbd1_post_notify_running_0 * Resource action: drbd1:1 notify on dktest1sles10 * Pseudo action: ms-drbd1_confirmed-post_notify_running_0 * Resource action: drbd1:1 monitor=11000 on dktest1sles10 Revised cluster status: Online: [ dktest1sles10 dktest2sles10 ] stonith-1 (stonith:dummy): Started dktest1sles10 - Master/Slave Set: ms-drbd1 [drbd1] + Clone Set: ms-drbd1 [drbd1] (promotable) Slaves: [ dktest1sles10 ] Stopped: [ dktest2sles10 ] testip (ocf::heartbeat:IPaddr2): Stopped diff --git a/cts/scheduler/clone-requires-quorum-recovery.summary b/cts/scheduler/clone-requires-quorum-recovery.summary index 7cc4552fcc..99eefacca7 100644 --- a/cts/scheduler/clone-requires-quorum-recovery.summary +++ b/cts/scheduler/clone-requires-quorum-recovery.summary @@ -1,48 +1,48 @@ Using the original execution date of: 2018-05-24 15:29:56Z Current cluster status: Node rhel7-5 (5): UNCLEAN (offline) Online: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 ] Fencing (stonith:fence_xvm): Started rhel7-1 FencingFail (stonith:fence_dummy): Started rhel7-2 dummy-solo (ocf::pacemaker:Dummy): Started rhel7-3 Clone Set: dummy-crowd-clone [dummy-crowd] dummy-crowd (ocf::pacemaker:Dummy): ORPHANED Started rhel7-5 (UNCLEAN) Started: [ rhel7-1 rhel7-4 ] Stopped: [ rhel7-2 rhel7-3 ] - Master/Slave Set: dummy-boss-clone [dummy-boss] + Clone Set: dummy-boss-clone [dummy-boss] (promotable) Masters: [ rhel7-3 ] Slaves: [ rhel7-2 rhel7-4 ] Transition Summary: * Fence (reboot) rhel7-5 'peer is no longer part of the cluster' * Start dummy-crowd:2 ( rhel7-2 ) * Stop dummy-crowd:3 ( rhel7-5 ) due to node availability Executing cluster transition: * Pseudo action: dummy-crowd-clone_stop_0 * Fencing rhel7-5 (reboot) * Pseudo action: dummy-crowd_stop_0 * Pseudo action: dummy-crowd-clone_stopped_0 * Pseudo action: dummy-crowd-clone_start_0 * Pseudo action: 
stonith_complete * Pseudo action: all_stopped * Resource action: dummy-crowd start on rhel7-2 * Pseudo action: dummy-crowd-clone_running_0 * Resource action: dummy-crowd monitor=10000 on rhel7-2 Using the original execution date of: 2018-05-24 15:29:56Z Revised cluster status: Online: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 ] OFFLINE: [ rhel7-5 ] Fencing (stonith:fence_xvm): Started rhel7-1 FencingFail (stonith:fence_dummy): Started rhel7-2 dummy-solo (ocf::pacemaker:Dummy): Started rhel7-3 Clone Set: dummy-crowd-clone [dummy-crowd] Started: [ rhel7-1 rhel7-2 rhel7-4 ] - Master/Slave Set: dummy-boss-clone [dummy-boss] + Clone Set: dummy-boss-clone [dummy-boss] (promotable) Masters: [ rhel7-3 ] Slaves: [ rhel7-2 rhel7-4 ] diff --git a/cts/scheduler/clone-requires-quorum.summary b/cts/scheduler/clone-requires-quorum.summary index 0123a08b5b..64b76b1ebd 100644 --- a/cts/scheduler/clone-requires-quorum.summary +++ b/cts/scheduler/clone-requires-quorum.summary @@ -1,42 +1,42 @@ Using the original execution date of: 2018-05-24 15:30:29Z Current cluster status: Node rhel7-5 (5): UNCLEAN (offline) Online: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 ] Fencing (stonith:fence_xvm): Started rhel7-1 FencingFail (stonith:fence_dummy): Started rhel7-2 dummy-solo (ocf::pacemaker:Dummy): Started rhel7-3 Clone Set: dummy-crowd-clone [dummy-crowd] dummy-crowd (ocf::pacemaker:Dummy): ORPHANED Started rhel7-5 (UNCLEAN) Started: [ rhel7-1 rhel7-2 rhel7-4 ] - Master/Slave Set: dummy-boss-clone [dummy-boss] + Clone Set: dummy-boss-clone [dummy-boss] (promotable) Masters: [ rhel7-3 ] Slaves: [ rhel7-2 rhel7-4 ] Transition Summary: * Fence (reboot) rhel7-5 'peer is no longer part of the cluster' * Stop dummy-crowd:3 ( rhel7-5 ) due to node availability Executing cluster transition: * Pseudo action: dummy-crowd-clone_stop_0 * Fencing rhel7-5 (reboot) * Pseudo action: dummy-crowd_stop_0 * Pseudo action: dummy-crowd-clone_stopped_0 * Pseudo action: stonith_complete * Pseudo action: all_stopped Using the original execution date of: 2018-05-24 15:30:29Z Revised cluster status: Online: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 ] OFFLINE: [ rhel7-5 ] Fencing (stonith:fence_xvm): Started rhel7-1 FencingFail (stonith:fence_dummy): Started rhel7-2 dummy-solo (ocf::pacemaker:Dummy): Started rhel7-3 Clone Set: dummy-crowd-clone [dummy-crowd] Started: [ rhel7-1 rhel7-2 rhel7-4 ] - Master/Slave Set: dummy-boss-clone [dummy-boss] + Clone Set: dummy-boss-clone [dummy-boss] (promotable) Masters: [ rhel7-3 ] Slaves: [ rhel7-2 rhel7-4 ] diff --git a/cts/scheduler/colo_master_w_native.summary b/cts/scheduler/colo_master_w_native.summary index fda8e85e5f..a535278a0f 100644 --- a/cts/scheduler/colo_master_w_native.summary +++ b/cts/scheduler/colo_master_w_native.summary @@ -1,47 +1,47 @@ Current cluster status: Online: [ node1 node2 ] A (ocf::pacemaker:Dummy): Started node1 - Master/Slave Set: MS_RSC [MS_RSC_NATIVE] + Clone Set: MS_RSC [MS_RSC_NATIVE] (promotable) Masters: [ node2 ] Slaves: [ node1 ] Transition Summary: * Demote MS_RSC_NATIVE:0 ( Master -> Slave node2 ) * Promote MS_RSC_NATIVE:1 (Slave -> Master node1) Executing cluster transition: * Resource action: MS_RSC_NATIVE:1 cancel=15000 on node1 * Pseudo action: MS_RSC_pre_notify_demote_0 * Resource action: MS_RSC_NATIVE:0 notify on node2 * Resource action: MS_RSC_NATIVE:1 notify on node1 * Pseudo action: MS_RSC_confirmed-pre_notify_demote_0 * Pseudo action: MS_RSC_demote_0 * Resource action: MS_RSC_NATIVE:0 demote on node2 * Pseudo action: MS_RSC_demoted_0 * Pseudo action: 
MS_RSC_post_notify_demoted_0 * Resource action: MS_RSC_NATIVE:0 notify on node2 * Resource action: MS_RSC_NATIVE:1 notify on node1 * Pseudo action: MS_RSC_confirmed-post_notify_demoted_0 * Pseudo action: MS_RSC_pre_notify_promote_0 * Resource action: MS_RSC_NATIVE:0 notify on node2 * Resource action: MS_RSC_NATIVE:1 notify on node1 * Pseudo action: MS_RSC_confirmed-pre_notify_promote_0 * Pseudo action: MS_RSC_promote_0 * Resource action: MS_RSC_NATIVE:1 promote on node1 * Pseudo action: MS_RSC_promoted_0 * Pseudo action: MS_RSC_post_notify_promoted_0 * Resource action: MS_RSC_NATIVE:0 notify on node2 * Resource action: MS_RSC_NATIVE:1 notify on node1 * Pseudo action: MS_RSC_confirmed-post_notify_promoted_0 * Resource action: MS_RSC_NATIVE:0 monitor=15000 on node2 Revised cluster status: Online: [ node1 node2 ] A (ocf::pacemaker:Dummy): Started node1 - Master/Slave Set: MS_RSC [MS_RSC_NATIVE] + Clone Set: MS_RSC [MS_RSC_NATIVE] (promotable) Masters: [ node1 ] Slaves: [ node2 ] diff --git a/cts/scheduler/colo_slave_w_native.summary b/cts/scheduler/colo_slave_w_native.summary index f59d93b286..307a7003d9 100644 --- a/cts/scheduler/colo_slave_w_native.summary +++ b/cts/scheduler/colo_slave_w_native.summary @@ -1,52 +1,52 @@ Current cluster status: Online: [ node1 node2 ] A (ocf::pacemaker:Dummy): Started node1 - Master/Slave Set: MS_RSC [MS_RSC_NATIVE] + Clone Set: MS_RSC [MS_RSC_NATIVE] (promotable) Masters: [ node2 ] Slaves: [ node1 ] Transition Summary: * Move A ( node1 -> node2 ) * Demote MS_RSC_NATIVE:0 ( Master -> Slave node2 ) * Promote MS_RSC_NATIVE:1 (Slave -> Master node1) Executing cluster transition: * Resource action: A stop on node1 * Resource action: MS_RSC_NATIVE:1 cancel=15000 on node1 * Pseudo action: MS_RSC_pre_notify_demote_0 * Pseudo action: all_stopped * Resource action: A start on node2 * Resource action: MS_RSC_NATIVE:0 notify on node2 * Resource action: MS_RSC_NATIVE:1 notify on node1 * Pseudo action: MS_RSC_confirmed-pre_notify_demote_0 * Pseudo action: MS_RSC_demote_0 * Resource action: A monitor=10000 on node2 * Resource action: MS_RSC_NATIVE:0 demote on node2 * Pseudo action: MS_RSC_demoted_0 * Pseudo action: MS_RSC_post_notify_demoted_0 * Resource action: MS_RSC_NATIVE:0 notify on node2 * Resource action: MS_RSC_NATIVE:1 notify on node1 * Pseudo action: MS_RSC_confirmed-post_notify_demoted_0 * Pseudo action: MS_RSC_pre_notify_promote_0 * Resource action: MS_RSC_NATIVE:0 notify on node2 * Resource action: MS_RSC_NATIVE:1 notify on node1 * Pseudo action: MS_RSC_confirmed-pre_notify_promote_0 * Pseudo action: MS_RSC_promote_0 * Resource action: MS_RSC_NATIVE:1 promote on node1 * Pseudo action: MS_RSC_promoted_0 * Pseudo action: MS_RSC_post_notify_promoted_0 * Resource action: MS_RSC_NATIVE:0 notify on node2 * Resource action: MS_RSC_NATIVE:1 notify on node1 * Pseudo action: MS_RSC_confirmed-post_notify_promoted_0 * Resource action: MS_RSC_NATIVE:0 monitor=15000 on node2 Revised cluster status: Online: [ node1 node2 ] A (ocf::pacemaker:Dummy): Started node2 - Master/Slave Set: MS_RSC [MS_RSC_NATIVE] + Clone Set: MS_RSC [MS_RSC_NATIVE] (promotable) Masters: [ node1 ] Slaves: [ node2 ] diff --git a/cts/scheduler/coloc-clone-stays-active.summary b/cts/scheduler/coloc-clone-stays-active.summary index df9b92c58a..edf00a03e2 100644 --- a/cts/scheduler/coloc-clone-stays-active.summary +++ b/cts/scheduler/coloc-clone-stays-active.summary @@ -1,207 +1,207 @@ 12 of 87 resources DISABLED and 0 BLOCKED from being started due to failures Current cluster status: Online: [ s01-0 
s01-1 ] stonith-s01-0 (stonith:external/ipmi): Started s01-1 stonith-s01-1 (stonith:external/ipmi): Started s01-0 Resource Group: iscsi-pool-0-target-all iscsi-pool-0-target (ocf::vds-ok:iSCSITarget): Started s01-0 iscsi-pool-0-lun-1 (ocf::vds-ok:iSCSILogicalUnit): Started s01-0 Resource Group: iscsi-pool-0-vips vip-235 (ocf::heartbeat:IPaddr2): Started s01-0 vip-236 (ocf::heartbeat:IPaddr2): Started s01-0 Resource Group: iscsi-pool-1-target-all iscsi-pool-1-target (ocf::vds-ok:iSCSITarget): Started s01-1 iscsi-pool-1-lun-1 (ocf::vds-ok:iSCSILogicalUnit): Started s01-1 Resource Group: iscsi-pool-1-vips vip-237 (ocf::heartbeat:IPaddr2): Started s01-1 vip-238 (ocf::heartbeat:IPaddr2): Started s01-1 - Master/Slave Set: ms-drbd-pool-0 [drbd-pool-0] + Clone Set: ms-drbd-pool-0 [drbd-pool-0] (promotable) Masters: [ s01-0 ] Slaves: [ s01-1 ] - Master/Slave Set: ms-drbd-pool-1 [drbd-pool-1] + Clone Set: ms-drbd-pool-1 [drbd-pool-1] (promotable) Masters: [ s01-1 ] Slaves: [ s01-0 ] - Master/Slave Set: ms-iscsi-pool-0-vips-fw [iscsi-pool-0-vips-fw] + Clone Set: ms-iscsi-pool-0-vips-fw [iscsi-pool-0-vips-fw] (promotable) Masters: [ s01-0 ] Slaves: [ s01-1 ] - Master/Slave Set: ms-iscsi-pool-1-vips-fw [iscsi-pool-1-vips-fw] + Clone Set: ms-iscsi-pool-1-vips-fw [iscsi-pool-1-vips-fw] (promotable) Masters: [ s01-1 ] Slaves: [ s01-0 ] Clone Set: cl-o2cb [o2cb] Stopped (disabled): [ s01-0 s01-1 ] - Master/Slave Set: ms-drbd-s01-service [drbd-s01-service] + Clone Set: ms-drbd-s01-service [drbd-s01-service] (promotable) Masters: [ s01-0 s01-1 ] Clone Set: cl-s01-service-fs [s01-service-fs] Started: [ s01-0 s01-1 ] Clone Set: cl-ietd [ietd] Started: [ s01-0 s01-1 ] Clone Set: cl-dhcpd [dhcpd] Stopped (disabled): [ s01-0 s01-1 ] Resource Group: http-server vip-233 (ocf::heartbeat:IPaddr2): Started s01-0 nginx (lsb:nginx): Stopped ( disabled ) - Master/Slave Set: ms-drbd-s01-logs [drbd-s01-logs] + Clone Set: ms-drbd-s01-logs [drbd-s01-logs] (promotable) Masters: [ s01-0 s01-1 ] Clone Set: cl-s01-logs-fs [s01-logs-fs] Started: [ s01-0 s01-1 ] Resource Group: syslog-server vip-234 (ocf::heartbeat:IPaddr2): Started s01-1 syslog-ng (ocf::heartbeat:syslog-ng): Started s01-1 Resource Group: tftp-server vip-232 (ocf::heartbeat:IPaddr2): Stopped tftpd (ocf::heartbeat:Xinetd): Stopped Clone Set: cl-xinetd [xinetd] Started: [ s01-0 s01-1 ] Clone Set: cl-ospf-routing [ospf-routing] Started: [ s01-0 s01-1 ] Clone Set: connected-outer [ping-bmc-and-switch] Started: [ s01-0 s01-1 ] Resource Group: iscsi-vds-dom0-stateless-0-target-all iscsi-vds-dom0-stateless-0-target (ocf::vds-ok:iSCSITarget): Stopped ( disabled ) iscsi-vds-dom0-stateless-0-lun-1 (ocf::vds-ok:iSCSILogicalUnit): Stopped ( disabled ) Resource Group: iscsi-vds-dom0-stateless-0-vips vip-227 (ocf::heartbeat:IPaddr2): Stopped vip-228 (ocf::heartbeat:IPaddr2): Stopped - Master/Slave Set: ms-drbd-vds-dom0-stateless-0 [drbd-vds-dom0-stateless-0] + Clone Set: ms-drbd-vds-dom0-stateless-0 [drbd-vds-dom0-stateless-0] (promotable) Masters: [ s01-0 ] Slaves: [ s01-1 ] - Master/Slave Set: ms-iscsi-vds-dom0-stateless-0-vips-fw [iscsi-vds-dom0-stateless-0-vips-fw] + Clone Set: ms-iscsi-vds-dom0-stateless-0-vips-fw [iscsi-vds-dom0-stateless-0-vips-fw] (promotable) Slaves: [ s01-0 s01-1 ] Clone Set: cl-dlm [dlm] Started: [ s01-0 s01-1 ] - Master/Slave Set: ms-drbd-vds-tftpboot [drbd-vds-tftpboot] + Clone Set: ms-drbd-vds-tftpboot [drbd-vds-tftpboot] (promotable) Masters: [ s01-0 s01-1 ] Clone Set: cl-vds-tftpboot-fs [vds-tftpboot-fs] Stopped (disabled): [ s01-0 s01-1 ] Clone 
Set: cl-gfs2 [gfs2] Started: [ s01-0 s01-1 ] - Master/Slave Set: ms-drbd-vds-http [drbd-vds-http] + Clone Set: ms-drbd-vds-http [drbd-vds-http] (promotable) Masters: [ s01-0 s01-1 ] Clone Set: cl-vds-http-fs [vds-http-fs] Started: [ s01-0 s01-1 ] Clone Set: cl-clvmd [clvmd] Started: [ s01-0 s01-1 ] - Master/Slave Set: ms-drbd-s01-vm-data [drbd-s01-vm-data] + Clone Set: ms-drbd-s01-vm-data [drbd-s01-vm-data] (promotable) Masters: [ s01-0 s01-1 ] Clone Set: cl-s01-vm-data-metadata-fs [s01-vm-data-metadata-fs] Started: [ s01-0 s01-1 ] Clone Set: cl-vg-s01-vm-data [vg-s01-vm-data] Started: [ s01-0 s01-1 ] mgmt-vm (ocf::vds-ok:VirtualDomain): Started s01-0 Clone Set: cl-drbdlinks-s01-service [drbdlinks-s01-service] Started: [ s01-0 s01-1 ] Clone Set: cl-libvirtd [libvirtd] Started: [ s01-0 s01-1 ] Clone Set: cl-s01-vm-data-storage-pool [s01-vm-data-storage-pool] Started: [ s01-0 s01-1 ] Transition Summary: * Migrate mgmt-vm ( s01-0 -> s01-1 ) Executing cluster transition: * Resource action: mgmt-vm migrate_to on s01-0 * Resource action: mgmt-vm migrate_from on s01-1 * Resource action: mgmt-vm stop on s01-0 * Pseudo action: all_stopped * Pseudo action: mgmt-vm_start_0 * Resource action: mgmt-vm monitor=10000 on s01-1 Revised cluster status: Online: [ s01-0 s01-1 ] stonith-s01-0 (stonith:external/ipmi): Started s01-1 stonith-s01-1 (stonith:external/ipmi): Started s01-0 Resource Group: iscsi-pool-0-target-all iscsi-pool-0-target (ocf::vds-ok:iSCSITarget): Started s01-0 iscsi-pool-0-lun-1 (ocf::vds-ok:iSCSILogicalUnit): Started s01-0 Resource Group: iscsi-pool-0-vips vip-235 (ocf::heartbeat:IPaddr2): Started s01-0 vip-236 (ocf::heartbeat:IPaddr2): Started s01-0 Resource Group: iscsi-pool-1-target-all iscsi-pool-1-target (ocf::vds-ok:iSCSITarget): Started s01-1 iscsi-pool-1-lun-1 (ocf::vds-ok:iSCSILogicalUnit): Started s01-1 Resource Group: iscsi-pool-1-vips vip-237 (ocf::heartbeat:IPaddr2): Started s01-1 vip-238 (ocf::heartbeat:IPaddr2): Started s01-1 - Master/Slave Set: ms-drbd-pool-0 [drbd-pool-0] + Clone Set: ms-drbd-pool-0 [drbd-pool-0] (promotable) Masters: [ s01-0 ] Slaves: [ s01-1 ] - Master/Slave Set: ms-drbd-pool-1 [drbd-pool-1] + Clone Set: ms-drbd-pool-1 [drbd-pool-1] (promotable) Masters: [ s01-1 ] Slaves: [ s01-0 ] - Master/Slave Set: ms-iscsi-pool-0-vips-fw [iscsi-pool-0-vips-fw] + Clone Set: ms-iscsi-pool-0-vips-fw [iscsi-pool-0-vips-fw] (promotable) Masters: [ s01-0 ] Slaves: [ s01-1 ] - Master/Slave Set: ms-iscsi-pool-1-vips-fw [iscsi-pool-1-vips-fw] + Clone Set: ms-iscsi-pool-1-vips-fw [iscsi-pool-1-vips-fw] (promotable) Masters: [ s01-1 ] Slaves: [ s01-0 ] Clone Set: cl-o2cb [o2cb] Stopped (disabled): [ s01-0 s01-1 ] - Master/Slave Set: ms-drbd-s01-service [drbd-s01-service] + Clone Set: ms-drbd-s01-service [drbd-s01-service] (promotable) Masters: [ s01-0 s01-1 ] Clone Set: cl-s01-service-fs [s01-service-fs] Started: [ s01-0 s01-1 ] Clone Set: cl-ietd [ietd] Started: [ s01-0 s01-1 ] Clone Set: cl-dhcpd [dhcpd] Stopped (disabled): [ s01-0 s01-1 ] Resource Group: http-server vip-233 (ocf::heartbeat:IPaddr2): Started s01-0 nginx (lsb:nginx): Stopped ( disabled ) - Master/Slave Set: ms-drbd-s01-logs [drbd-s01-logs] + Clone Set: ms-drbd-s01-logs [drbd-s01-logs] (promotable) Masters: [ s01-0 s01-1 ] Clone Set: cl-s01-logs-fs [s01-logs-fs] Started: [ s01-0 s01-1 ] Resource Group: syslog-server vip-234 (ocf::heartbeat:IPaddr2): Started s01-1 syslog-ng (ocf::heartbeat:syslog-ng): Started s01-1 Resource Group: tftp-server vip-232 (ocf::heartbeat:IPaddr2): Stopped tftpd 
(ocf::heartbeat:Xinetd): Stopped Clone Set: cl-xinetd [xinetd] Started: [ s01-0 s01-1 ] Clone Set: cl-ospf-routing [ospf-routing] Started: [ s01-0 s01-1 ] Clone Set: connected-outer [ping-bmc-and-switch] Started: [ s01-0 s01-1 ] Resource Group: iscsi-vds-dom0-stateless-0-target-all iscsi-vds-dom0-stateless-0-target (ocf::vds-ok:iSCSITarget): Stopped ( disabled ) iscsi-vds-dom0-stateless-0-lun-1 (ocf::vds-ok:iSCSILogicalUnit): Stopped ( disabled ) Resource Group: iscsi-vds-dom0-stateless-0-vips vip-227 (ocf::heartbeat:IPaddr2): Stopped vip-228 (ocf::heartbeat:IPaddr2): Stopped - Master/Slave Set: ms-drbd-vds-dom0-stateless-0 [drbd-vds-dom0-stateless-0] + Clone Set: ms-drbd-vds-dom0-stateless-0 [drbd-vds-dom0-stateless-0] (promotable) Masters: [ s01-0 ] Slaves: [ s01-1 ] - Master/Slave Set: ms-iscsi-vds-dom0-stateless-0-vips-fw [iscsi-vds-dom0-stateless-0-vips-fw] + Clone Set: ms-iscsi-vds-dom0-stateless-0-vips-fw [iscsi-vds-dom0-stateless-0-vips-fw] (promotable) Slaves: [ s01-0 s01-1 ] Clone Set: cl-dlm [dlm] Started: [ s01-0 s01-1 ] - Master/Slave Set: ms-drbd-vds-tftpboot [drbd-vds-tftpboot] + Clone Set: ms-drbd-vds-tftpboot [drbd-vds-tftpboot] (promotable) Masters: [ s01-0 s01-1 ] Clone Set: cl-vds-tftpboot-fs [vds-tftpboot-fs] Stopped (disabled): [ s01-0 s01-1 ] Clone Set: cl-gfs2 [gfs2] Started: [ s01-0 s01-1 ] - Master/Slave Set: ms-drbd-vds-http [drbd-vds-http] + Clone Set: ms-drbd-vds-http [drbd-vds-http] (promotable) Masters: [ s01-0 s01-1 ] Clone Set: cl-vds-http-fs [vds-http-fs] Started: [ s01-0 s01-1 ] Clone Set: cl-clvmd [clvmd] Started: [ s01-0 s01-1 ] - Master/Slave Set: ms-drbd-s01-vm-data [drbd-s01-vm-data] + Clone Set: ms-drbd-s01-vm-data [drbd-s01-vm-data] (promotable) Masters: [ s01-0 s01-1 ] Clone Set: cl-s01-vm-data-metadata-fs [s01-vm-data-metadata-fs] Started: [ s01-0 s01-1 ] Clone Set: cl-vg-s01-vm-data [vg-s01-vm-data] Started: [ s01-0 s01-1 ] mgmt-vm (ocf::vds-ok:VirtualDomain): Started s01-1 Clone Set: cl-drbdlinks-s01-service [drbdlinks-s01-service] Started: [ s01-0 s01-1 ] Clone Set: cl-libvirtd [libvirtd] Started: [ s01-0 s01-1 ] Clone Set: cl-s01-vm-data-storage-pool [s01-vm-data-storage-pool] Started: [ s01-0 s01-1 ] diff --git a/cts/scheduler/coloc-slave-anti.summary b/cts/scheduler/coloc-slave-anti.summary index 82ab9e42d4..221f896835 100644 --- a/cts/scheduler/coloc-slave-anti.summary +++ b/cts/scheduler/coloc-slave-anti.summary @@ -1,46 +1,46 @@ Current cluster status: Online: [ pollux sirius ] Clone Set: pingd-clone [pingd-1] Started: [ pollux sirius ] - Master/Slave Set: drbd-msr [drbd-r0] + Clone Set: drbd-msr [drbd-r0] (promotable) Masters: [ pollux ] Slaves: [ sirius ] Resource Group: group-1 fs-1 (ocf::heartbeat:Filesystem): Stopped ip-198 (ocf::heartbeat:IPaddr2): Stopped apache (ocf::custom:apache2): Stopped pollux-fencing (stonith:external/ipmi-soft): Started sirius sirius-fencing (stonith:external/ipmi-soft): Started pollux Transition Summary: * Start fs-1 (pollux) * Start ip-198 (pollux) * Start apache (pollux) Executing cluster transition: * Pseudo action: group-1_start_0 * Resource action: fs-1 start on pollux * Resource action: ip-198 start on pollux * Resource action: apache start on pollux * Pseudo action: group-1_running_0 * Resource action: fs-1 monitor=20000 on pollux * Resource action: ip-198 monitor=30000 on pollux * Resource action: apache monitor=60000 on pollux Revised cluster status: Online: [ pollux sirius ] Clone Set: pingd-clone [pingd-1] Started: [ pollux sirius ] - Master/Slave Set: drbd-msr [drbd-r0] + Clone Set: drbd-msr 
[drbd-r0] (promotable) Masters: [ pollux ] Slaves: [ sirius ] Resource Group: group-1 fs-1 (ocf::heartbeat:Filesystem): Started pollux ip-198 (ocf::heartbeat:IPaddr2): Started pollux apache (ocf::custom:apache2): Started pollux pollux-fencing (stonith:external/ipmi-soft): Started sirius sirius-fencing (stonith:external/ipmi-soft): Started pollux diff --git a/cts/scheduler/colocation_constraint_stops_master.summary b/cts/scheduler/colocation_constraint_stops_master.summary index e4b8697d1c..1c51ac2ebe 100644 --- a/cts/scheduler/colocation_constraint_stops_master.summary +++ b/cts/scheduler/colocation_constraint_stops_master.summary @@ -1,37 +1,37 @@ Current cluster status: Online: [ fc16-builder fc16-builder2 ] - Master/Slave Set: MASTER_RSC_A [NATIVE_RSC_A] + Clone Set: MASTER_RSC_A [NATIVE_RSC_A] (promotable) Masters: [ fc16-builder ] Transition Summary: * Stop NATIVE_RSC_A:0 ( Master fc16-builder ) due to node availability Executing cluster transition: * Pseudo action: MASTER_RSC_A_pre_notify_demote_0 * Resource action: NATIVE_RSC_A:0 notify on fc16-builder * Pseudo action: MASTER_RSC_A_confirmed-pre_notify_demote_0 * Pseudo action: MASTER_RSC_A_demote_0 * Resource action: NATIVE_RSC_A:0 demote on fc16-builder * Pseudo action: MASTER_RSC_A_demoted_0 * Pseudo action: MASTER_RSC_A_post_notify_demoted_0 * Resource action: NATIVE_RSC_A:0 notify on fc16-builder * Pseudo action: MASTER_RSC_A_confirmed-post_notify_demoted_0 * Pseudo action: MASTER_RSC_A_pre_notify_stop_0 * Resource action: NATIVE_RSC_A:0 notify on fc16-builder * Pseudo action: MASTER_RSC_A_confirmed-pre_notify_stop_0 * Pseudo action: MASTER_RSC_A_stop_0 * Resource action: NATIVE_RSC_A:0 stop on fc16-builder * Resource action: NATIVE_RSC_A:0 delete on fc16-builder2 * Pseudo action: MASTER_RSC_A_stopped_0 * Pseudo action: MASTER_RSC_A_post_notify_stopped_0 * Pseudo action: MASTER_RSC_A_confirmed-post_notify_stopped_0 * Pseudo action: all_stopped Revised cluster status: Online: [ fc16-builder fc16-builder2 ] - Master/Slave Set: MASTER_RSC_A [NATIVE_RSC_A] + Clone Set: MASTER_RSC_A [NATIVE_RSC_A] (promotable) Stopped: [ fc16-builder fc16-builder2 ] diff --git a/cts/scheduler/colocation_constraint_stops_slave.summary b/cts/scheduler/colocation_constraint_stops_slave.summary index 4a5a5820c9..625394fea8 100644 --- a/cts/scheduler/colocation_constraint_stops_slave.summary +++ b/cts/scheduler/colocation_constraint_stops_slave.summary @@ -1,34 +1,34 @@ 1 of 2 resources DISABLED and 0 BLOCKED from being started due to failures Current cluster status: Online: [ fc16-builder ] OFFLINE: [ fc16-builder2 ] - Master/Slave Set: MASTER_RSC_A [NATIVE_RSC_A] + Clone Set: MASTER_RSC_A [NATIVE_RSC_A] (promotable) Slaves: [ fc16-builder ] NATIVE_RSC_B (ocf::pacemaker:Dummy): Started fc16-builder ( disabled ) Transition Summary: * Stop NATIVE_RSC_A:0 ( Slave fc16-builder ) due to node availability * Stop NATIVE_RSC_B ( fc16-builder ) due to node availability Executing cluster transition: * Pseudo action: MASTER_RSC_A_pre_notify_stop_0 * Resource action: NATIVE_RSC_B stop on fc16-builder * Resource action: NATIVE_RSC_A:0 notify on fc16-builder * Pseudo action: MASTER_RSC_A_confirmed-pre_notify_stop_0 * Pseudo action: MASTER_RSC_A_stop_0 * Resource action: NATIVE_RSC_A:0 stop on fc16-builder * Pseudo action: MASTER_RSC_A_stopped_0 * Pseudo action: MASTER_RSC_A_post_notify_stopped_0 * Pseudo action: MASTER_RSC_A_confirmed-post_notify_stopped_0 * Pseudo action: all_stopped Revised cluster status: Online: [ fc16-builder ] OFFLINE: [ fc16-builder2 ] - 
Master/Slave Set: MASTER_RSC_A [NATIVE_RSC_A] + Clone Set: MASTER_RSC_A [NATIVE_RSC_A] (promotable) Stopped: [ fc16-builder fc16-builder2 ] NATIVE_RSC_B (ocf::pacemaker:Dummy): Stopped ( disabled ) diff --git a/cts/scheduler/complex_enforce_colo.summary b/cts/scheduler/complex_enforce_colo.summary index 0426e98a41..57789869f1 100644 --- a/cts/scheduler/complex_enforce_colo.summary +++ b/cts/scheduler/complex_enforce_colo.summary @@ -1,453 +1,453 @@ 3 of 132 resources DISABLED and 0 BLOCKED from being started due to failures Current cluster status: Online: [ rhos6-node1 rhos6-node2 rhos6-node3 ] node1-fence (stonith:fence_xvm): Started rhos6-node1 node2-fence (stonith:fence_xvm): Started rhos6-node2 node3-fence (stonith:fence_xvm): Started rhos6-node3 Clone Set: lb-haproxy-clone [lb-haproxy] Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] vip-db (ocf::heartbeat:IPaddr2): Started rhos6-node1 vip-rabbitmq (ocf::heartbeat:IPaddr2): Started rhos6-node2 vip-qpid (ocf::heartbeat:IPaddr2): Started rhos6-node3 vip-keystone (ocf::heartbeat:IPaddr2): Started rhos6-node1 vip-glance (ocf::heartbeat:IPaddr2): Started rhos6-node2 vip-cinder (ocf::heartbeat:IPaddr2): Started rhos6-node3 vip-swift (ocf::heartbeat:IPaddr2): Started rhos6-node1 vip-neutron (ocf::heartbeat:IPaddr2): Started rhos6-node2 vip-nova (ocf::heartbeat:IPaddr2): Started rhos6-node3 vip-horizon (ocf::heartbeat:IPaddr2): Started rhos6-node1 vip-heat (ocf::heartbeat:IPaddr2): Started rhos6-node2 vip-ceilometer (ocf::heartbeat:IPaddr2): Started rhos6-node3 - Master/Slave Set: galera-master [galera] + Clone Set: galera-master [galera] (promotable) Masters: [ rhos6-node1 rhos6-node2 rhos6-node3 ] Clone Set: rabbitmq-server-clone [rabbitmq-server] Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] Clone Set: memcached-clone [memcached] Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] Clone Set: mongodb-clone [mongodb] Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] Clone Set: keystone-clone [keystone] Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] Clone Set: glance-fs-clone [glance-fs] Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] Clone Set: glance-registry-clone [glance-registry] Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] Clone Set: glance-api-clone [glance-api] Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] cinder-api (systemd:openstack-cinder-api): Started rhos6-node1 cinder-scheduler (systemd:openstack-cinder-scheduler): Started rhos6-node1 cinder-volume (systemd:openstack-cinder-volume): Started rhos6-node1 Clone Set: swift-fs-clone [swift-fs] Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] Clone Set: swift-account-clone [swift-account] Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] Clone Set: swift-container-clone [swift-container] Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] Clone Set: swift-object-clone [swift-object] Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] Clone Set: swift-proxy-clone [swift-proxy] Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] swift-object-expirer (systemd:openstack-swift-object-expirer): Started rhos6-node2 Clone Set: neutron-server-clone [neutron-server] Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] Clone Set: neutron-scale-clone [neutron-scale] (unique) neutron-scale:0 (ocf::neutron:NeutronScale): Started rhos6-node3 neutron-scale:1 (ocf::neutron:NeutronScale): Started rhos6-node2 neutron-scale:2 (ocf::neutron:NeutronScale): Started rhos6-node1 Clone Set: neutron-ovs-cleanup-clone [neutron-ovs-cleanup] Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] Clone Set: 
neutron-netns-cleanup-clone [neutron-netns-cleanup] Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] Clone Set: neutron-openvswitch-agent-clone [neutron-openvswitch-agent] Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] Clone Set: neutron-dhcp-agent-clone [neutron-dhcp-agent] Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] Clone Set: neutron-l3-agent-clone [neutron-l3-agent] Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] Clone Set: neutron-metadata-agent-clone [neutron-metadata-agent] Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] Clone Set: nova-consoleauth-clone [nova-consoleauth] Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] Clone Set: nova-novncproxy-clone [nova-novncproxy] Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] Clone Set: nova-api-clone [nova-api] Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] Clone Set: nova-scheduler-clone [nova-scheduler] Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] Clone Set: nova-conductor-clone [nova-conductor] Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] ceilometer-central (systemd:openstack-ceilometer-central): Started rhos6-node3 Clone Set: ceilometer-collector-clone [ceilometer-collector] Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] Clone Set: ceilometer-api-clone [ceilometer-api] Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] Clone Set: ceilometer-delay-clone [ceilometer-delay] Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] Clone Set: ceilometer-alarm-evaluator-clone [ceilometer-alarm-evaluator] Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] Clone Set: ceilometer-alarm-notifier-clone [ceilometer-alarm-notifier] Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] Clone Set: ceilometer-notification-clone [ceilometer-notification] Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] Clone Set: heat-api-clone [heat-api] Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] Clone Set: heat-api-cfn-clone [heat-api-cfn] Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] Clone Set: heat-api-cloudwatch-clone [heat-api-cloudwatch] Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] heat-engine (systemd:openstack-heat-engine): Started rhos6-node2 Clone Set: horizon-clone [horizon] Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] Transition Summary: * Stop keystone:0 (rhos6-node1) due to node availability * Stop keystone:1 (rhos6-node2) due to node availability * Stop keystone:2 (rhos6-node3) due to node availability * Stop glance-registry:0 (rhos6-node1) * Stop glance-registry:1 (rhos6-node2) * Stop glance-registry:2 (rhos6-node3) * Stop glance-api:0 (rhos6-node1) * Stop glance-api:1 (rhos6-node2) * Stop glance-api:2 (rhos6-node3) * Stop cinder-api ( rhos6-node1 ) due to unrunnable keystone-clone running * Stop cinder-scheduler ( rhos6-node1 ) due to required cinder-api start * Stop cinder-volume ( rhos6-node1 ) due to colocation with cinder-scheduler * Stop swift-account:0 (rhos6-node1) * Stop swift-account:1 (rhos6-node2) * Stop swift-account:2 (rhos6-node3) * Stop swift-container:0 (rhos6-node1) * Stop swift-container:1 (rhos6-node2) * Stop swift-container:2 (rhos6-node3) * Stop swift-object:0 (rhos6-node1) * Stop swift-object:1 (rhos6-node2) * Stop swift-object:2 (rhos6-node3) * Stop swift-proxy:0 (rhos6-node1) * Stop swift-proxy:1 (rhos6-node2) * Stop swift-proxy:2 (rhos6-node3) * Stop swift-object-expirer ( rhos6-node2 ) due to required swift-proxy-clone running * Stop neutron-server:0 (rhos6-node1) * Stop neutron-server:1 (rhos6-node2) * Stop neutron-server:2 (rhos6-node3) * Stop neutron-scale:0 (rhos6-node3) * Stop neutron-scale:1 
(rhos6-node2) * Stop neutron-scale:2 (rhos6-node1) * Stop neutron-ovs-cleanup:0 (rhos6-node1) * Stop neutron-ovs-cleanup:1 (rhos6-node2) * Stop neutron-ovs-cleanup:2 (rhos6-node3) * Stop neutron-netns-cleanup:0 (rhos6-node1) * Stop neutron-netns-cleanup:1 (rhos6-node2) * Stop neutron-netns-cleanup:2 (rhos6-node3) * Stop neutron-openvswitch-agent:0 (rhos6-node1) * Stop neutron-openvswitch-agent:1 (rhos6-node2) * Stop neutron-openvswitch-agent:2 (rhos6-node3) * Stop neutron-dhcp-agent:0 (rhos6-node1) * Stop neutron-dhcp-agent:1 (rhos6-node2) * Stop neutron-dhcp-agent:2 (rhos6-node3) * Stop neutron-l3-agent:0 (rhos6-node1) * Stop neutron-l3-agent:1 (rhos6-node2) * Stop neutron-l3-agent:2 (rhos6-node3) * Stop neutron-metadata-agent:0 (rhos6-node1) * Stop neutron-metadata-agent:1 (rhos6-node2) * Stop neutron-metadata-agent:2 (rhos6-node3) * Stop nova-consoleauth:0 (rhos6-node1) * Stop nova-consoleauth:1 (rhos6-node2) * Stop nova-consoleauth:2 (rhos6-node3) * Stop nova-novncproxy:0 (rhos6-node1) * Stop nova-novncproxy:1 (rhos6-node2) * Stop nova-novncproxy:2 (rhos6-node3) * Stop nova-api:0 (rhos6-node1) * Stop nova-api:1 (rhos6-node2) * Stop nova-api:2 (rhos6-node3) * Stop nova-scheduler:0 (rhos6-node1) * Stop nova-scheduler:1 (rhos6-node2) * Stop nova-scheduler:2 (rhos6-node3) * Stop nova-conductor:0 (rhos6-node1) * Stop nova-conductor:1 (rhos6-node2) * Stop nova-conductor:2 (rhos6-node3) * Stop ceilometer-central ( rhos6-node3 ) due to unrunnable keystone-clone running * Stop ceilometer-collector:0 ( rhos6-node1 ) due to required ceilometer-central start * Stop ceilometer-collector:1 ( rhos6-node2 ) due to required ceilometer-central start * Stop ceilometer-collector:2 ( rhos6-node3 ) due to required ceilometer-central start * Stop ceilometer-api:0 ( rhos6-node1 ) due to required ceilometer-collector:0 start * Stop ceilometer-api:1 ( rhos6-node2 ) due to required ceilometer-collector:1 start * Stop ceilometer-api:2 ( rhos6-node3 ) due to required ceilometer-collector:2 start * Stop ceilometer-delay:0 ( rhos6-node1 ) due to required ceilometer-api:0 start * Stop ceilometer-delay:1 ( rhos6-node2 ) due to required ceilometer-api:1 start * Stop ceilometer-delay:2 ( rhos6-node3 ) due to required ceilometer-api:2 start * Stop ceilometer-alarm-evaluator:0 ( rhos6-node1 ) due to required ceilometer-delay:0 start * Stop ceilometer-alarm-evaluator:1 ( rhos6-node2 ) due to required ceilometer-delay:1 start * Stop ceilometer-alarm-evaluator:2 ( rhos6-node3 ) due to required ceilometer-delay:2 start * Stop ceilometer-alarm-notifier:0 ( rhos6-node1 ) due to required ceilometer-alarm-evaluator:0 start * Stop ceilometer-alarm-notifier:1 ( rhos6-node2 ) due to required ceilometer-alarm-evaluator:1 start * Stop ceilometer-alarm-notifier:2 ( rhos6-node3 ) due to required ceilometer-alarm-evaluator:2 start * Stop ceilometer-notification:0 ( rhos6-node1 ) due to required ceilometer-alarm-notifier:0 start * Stop ceilometer-notification:1 ( rhos6-node2 ) due to required ceilometer-alarm-notifier:1 start * Stop ceilometer-notification:2 ( rhos6-node3 ) due to required ceilometer-alarm-notifier:2 start * Stop heat-api:0 ( rhos6-node1 ) due to required ceilometer-notification:0 start * Stop heat-api:1 ( rhos6-node2 ) due to required ceilometer-notification:1 start * Stop heat-api:2 ( rhos6-node3 ) due to required ceilometer-notification:2 start * Stop heat-api-cfn:0 ( rhos6-node1 ) due to required heat-api:0 start * Stop heat-api-cfn:1 ( rhos6-node2 ) due to required heat-api:1 start * Stop heat-api-cfn:2 ( rhos6-node3 
) due to required heat-api:2 start * Stop heat-api-cloudwatch:0 ( rhos6-node1 ) due to required heat-api-cfn:0 start * Stop heat-api-cloudwatch:1 ( rhos6-node2 ) due to required heat-api-cfn:1 start * Stop heat-api-cloudwatch:2 ( rhos6-node3 ) due to required heat-api-cfn:2 start * Stop heat-engine ( rhos6-node2 ) due to colocation with heat-api-cloudwatch-clone Executing cluster transition: * Pseudo action: glance-api-clone_stop_0 * Resource action: cinder-volume stop on rhos6-node1 * Pseudo action: swift-object-clone_stop_0 * Resource action: swift-object-expirer stop on rhos6-node2 * Pseudo action: neutron-metadata-agent-clone_stop_0 * Pseudo action: nova-conductor-clone_stop_0 * Resource action: heat-engine stop on rhos6-node2 * Resource action: glance-api stop on rhos6-node1 * Resource action: glance-api stop on rhos6-node2 * Resource action: glance-api stop on rhos6-node3 * Pseudo action: glance-api-clone_stopped_0 * Resource action: cinder-scheduler stop on rhos6-node1 * Resource action: swift-object stop on rhos6-node1 * Resource action: swift-object stop on rhos6-node2 * Resource action: swift-object stop on rhos6-node3 * Pseudo action: swift-object-clone_stopped_0 * Pseudo action: swift-proxy-clone_stop_0 * Resource action: neutron-metadata-agent stop on rhos6-node1 * Resource action: neutron-metadata-agent stop on rhos6-node2 * Resource action: neutron-metadata-agent stop on rhos6-node3 * Pseudo action: neutron-metadata-agent-clone_stopped_0 * Resource action: nova-conductor stop on rhos6-node1 * Resource action: nova-conductor stop on rhos6-node2 * Resource action: nova-conductor stop on rhos6-node3 * Pseudo action: nova-conductor-clone_stopped_0 * Pseudo action: heat-api-cloudwatch-clone_stop_0 * Pseudo action: glance-registry-clone_stop_0 * Resource action: cinder-api stop on rhos6-node1 * Pseudo action: swift-container-clone_stop_0 * Resource action: swift-proxy stop on rhos6-node1 * Resource action: swift-proxy stop on rhos6-node2 * Resource action: swift-proxy stop on rhos6-node3 * Pseudo action: swift-proxy-clone_stopped_0 * Pseudo action: neutron-l3-agent-clone_stop_0 * Pseudo action: nova-scheduler-clone_stop_0 * Resource action: heat-api-cloudwatch stop on rhos6-node1 * Resource action: heat-api-cloudwatch stop on rhos6-node2 * Resource action: heat-api-cloudwatch stop on rhos6-node3 * Pseudo action: heat-api-cloudwatch-clone_stopped_0 * Resource action: glance-registry stop on rhos6-node1 * Resource action: glance-registry stop on rhos6-node2 * Resource action: glance-registry stop on rhos6-node3 * Pseudo action: glance-registry-clone_stopped_0 * Resource action: swift-container stop on rhos6-node1 * Resource action: swift-container stop on rhos6-node2 * Resource action: swift-container stop on rhos6-node3 * Pseudo action: swift-container-clone_stopped_0 * Resource action: neutron-l3-agent stop on rhos6-node1 * Resource action: neutron-l3-agent stop on rhos6-node2 * Resource action: neutron-l3-agent stop on rhos6-node3 * Pseudo action: neutron-l3-agent-clone_stopped_0 * Resource action: nova-scheduler stop on rhos6-node1 * Resource action: nova-scheduler stop on rhos6-node2 * Resource action: nova-scheduler stop on rhos6-node3 * Pseudo action: nova-scheduler-clone_stopped_0 * Pseudo action: heat-api-cfn-clone_stop_0 * Pseudo action: swift-account-clone_stop_0 * Pseudo action: neutron-dhcp-agent-clone_stop_0 * Pseudo action: nova-api-clone_stop_0 * Resource action: heat-api-cfn stop on rhos6-node1 * Resource action: heat-api-cfn stop on rhos6-node2 * Resource action: 
heat-api-cfn stop on rhos6-node3 * Pseudo action: heat-api-cfn-clone_stopped_0 * Resource action: swift-account stop on rhos6-node1 * Resource action: swift-account stop on rhos6-node2 * Resource action: swift-account stop on rhos6-node3 * Pseudo action: swift-account-clone_stopped_0 * Resource action: neutron-dhcp-agent stop on rhos6-node1 * Resource action: neutron-dhcp-agent stop on rhos6-node2 * Resource action: neutron-dhcp-agent stop on rhos6-node3 * Pseudo action: neutron-dhcp-agent-clone_stopped_0 * Resource action: nova-api stop on rhos6-node1 * Resource action: nova-api stop on rhos6-node2 * Resource action: nova-api stop on rhos6-node3 * Pseudo action: nova-api-clone_stopped_0 * Pseudo action: heat-api-clone_stop_0 * Pseudo action: neutron-openvswitch-agent-clone_stop_0 * Pseudo action: nova-novncproxy-clone_stop_0 * Resource action: heat-api stop on rhos6-node1 * Resource action: heat-api stop on rhos6-node2 * Resource action: heat-api stop on rhos6-node3 * Pseudo action: heat-api-clone_stopped_0 * Resource action: neutron-openvswitch-agent stop on rhos6-node1 * Resource action: neutron-openvswitch-agent stop on rhos6-node2 * Resource action: neutron-openvswitch-agent stop on rhos6-node3 * Pseudo action: neutron-openvswitch-agent-clone_stopped_0 * Resource action: nova-novncproxy stop on rhos6-node1 * Resource action: nova-novncproxy stop on rhos6-node2 * Resource action: nova-novncproxy stop on rhos6-node3 * Pseudo action: nova-novncproxy-clone_stopped_0 * Pseudo action: ceilometer-notification-clone_stop_0 * Pseudo action: neutron-netns-cleanup-clone_stop_0 * Pseudo action: nova-consoleauth-clone_stop_0 * Resource action: ceilometer-notification stop on rhos6-node1 * Resource action: ceilometer-notification stop on rhos6-node2 * Resource action: ceilometer-notification stop on rhos6-node3 * Pseudo action: ceilometer-notification-clone_stopped_0 * Resource action: neutron-netns-cleanup stop on rhos6-node1 * Resource action: neutron-netns-cleanup stop on rhos6-node2 * Resource action: neutron-netns-cleanup stop on rhos6-node3 * Pseudo action: neutron-netns-cleanup-clone_stopped_0 * Resource action: nova-consoleauth stop on rhos6-node1 * Resource action: nova-consoleauth stop on rhos6-node2 * Resource action: nova-consoleauth stop on rhos6-node3 * Pseudo action: nova-consoleauth-clone_stopped_0 * Pseudo action: ceilometer-alarm-notifier-clone_stop_0 * Pseudo action: neutron-ovs-cleanup-clone_stop_0 * Resource action: ceilometer-alarm-notifier stop on rhos6-node1 * Resource action: ceilometer-alarm-notifier stop on rhos6-node2 * Resource action: ceilometer-alarm-notifier stop on rhos6-node3 * Pseudo action: ceilometer-alarm-notifier-clone_stopped_0 * Resource action: neutron-ovs-cleanup stop on rhos6-node1 * Resource action: neutron-ovs-cleanup stop on rhos6-node2 * Resource action: neutron-ovs-cleanup stop on rhos6-node3 * Pseudo action: neutron-ovs-cleanup-clone_stopped_0 * Pseudo action: ceilometer-alarm-evaluator-clone_stop_0 * Pseudo action: neutron-scale-clone_stop_0 * Resource action: ceilometer-alarm-evaluator stop on rhos6-node1 * Resource action: ceilometer-alarm-evaluator stop on rhos6-node2 * Resource action: ceilometer-alarm-evaluator stop on rhos6-node3 * Pseudo action: ceilometer-alarm-evaluator-clone_stopped_0 * Resource action: neutron-scale:0 stop on rhos6-node3 * Resource action: neutron-scale:1 stop on rhos6-node2 * Resource action: neutron-scale:2 stop on rhos6-node1 * Pseudo action: neutron-scale-clone_stopped_0 * Pseudo action: ceilometer-delay-clone_stop_0 
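The long stop cascade in this transition is driven entirely by constraint chaining rather than by resource failures: once keystone-clone cannot run, every service ordered after it must stop, and colocated resources are pulled down with their partners, which is why the actions above proceed in strict reverse dependency order. A minimal sketch of the kind of constraint pair that yields messages such as "Stop cinder-scheduler ... due to required cinder-api start" and "Stop cinder-volume ... due to colocation with cinder-scheduler" (constraint IDs are illustrative; the actual test CIB may differ):

    <constraints>
      <!-- Mandatory ordering: cinder-scheduler may only run after cinder-api has started -->
      <rsc_order id="order-cinder-api-then-scheduler" first="cinder-api" then="cinder-scheduler"/>
      <!-- Mandatory colocation: cinder-volume must run wherever cinder-scheduler runs -->
      <rsc_colocation id="colo-volume-with-scheduler" score="INFINITY"
                      rsc="cinder-volume" with-rsc="cinder-scheduler"/>
    </constraints>

When the first resource in such a chain becomes unrunnable, the scheduler emits one "due to ..." line per dependent, as seen throughout the Transition Summary above.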
* Pseudo action: neutron-server-clone_stop_0 * Resource action: ceilometer-delay stop on rhos6-node1 * Resource action: ceilometer-delay stop on rhos6-node2 * Resource action: ceilometer-delay stop on rhos6-node3 * Pseudo action: ceilometer-delay-clone_stopped_0 * Resource action: neutron-server stop on rhos6-node1 * Resource action: neutron-server stop on rhos6-node2 * Resource action: neutron-server stop on rhos6-node3 * Pseudo action: neutron-server-clone_stopped_0 * Pseudo action: ceilometer-api-clone_stop_0 * Resource action: ceilometer-api stop on rhos6-node1 * Resource action: ceilometer-api stop on rhos6-node2 * Resource action: ceilometer-api stop on rhos6-node3 * Pseudo action: ceilometer-api-clone_stopped_0 * Pseudo action: ceilometer-collector-clone_stop_0 * Resource action: ceilometer-collector stop on rhos6-node1 * Resource action: ceilometer-collector stop on rhos6-node2 * Resource action: ceilometer-collector stop on rhos6-node3 * Pseudo action: ceilometer-collector-clone_stopped_0 * Resource action: ceilometer-central stop on rhos6-node3 * Pseudo action: keystone-clone_stop_0 * Resource action: keystone stop on rhos6-node1 * Resource action: keystone stop on rhos6-node2 * Resource action: keystone stop on rhos6-node3 * Pseudo action: keystone-clone_stopped_0 * Pseudo action: all_stopped Revised cluster status: Online: [ rhos6-node1 rhos6-node2 rhos6-node3 ] node1-fence (stonith:fence_xvm): Started rhos6-node1 node2-fence (stonith:fence_xvm): Started rhos6-node2 node3-fence (stonith:fence_xvm): Started rhos6-node3 Clone Set: lb-haproxy-clone [lb-haproxy] Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] vip-db (ocf::heartbeat:IPaddr2): Started rhos6-node1 vip-rabbitmq (ocf::heartbeat:IPaddr2): Started rhos6-node2 vip-qpid (ocf::heartbeat:IPaddr2): Started rhos6-node3 vip-keystone (ocf::heartbeat:IPaddr2): Started rhos6-node1 vip-glance (ocf::heartbeat:IPaddr2): Started rhos6-node2 vip-cinder (ocf::heartbeat:IPaddr2): Started rhos6-node3 vip-swift (ocf::heartbeat:IPaddr2): Started rhos6-node1 vip-neutron (ocf::heartbeat:IPaddr2): Started rhos6-node2 vip-nova (ocf::heartbeat:IPaddr2): Started rhos6-node3 vip-horizon (ocf::heartbeat:IPaddr2): Started rhos6-node1 vip-heat (ocf::heartbeat:IPaddr2): Started rhos6-node2 vip-ceilometer (ocf::heartbeat:IPaddr2): Started rhos6-node3 - Master/Slave Set: galera-master [galera] + Clone Set: galera-master [galera] (promotable) Masters: [ rhos6-node1 rhos6-node2 rhos6-node3 ] Clone Set: rabbitmq-server-clone [rabbitmq-server] Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] Clone Set: memcached-clone [memcached] Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] Clone Set: mongodb-clone [mongodb] Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] Clone Set: keystone-clone [keystone] Stopped (disabled): [ rhos6-node1 rhos6-node2 rhos6-node3 ] Clone Set: glance-fs-clone [glance-fs] Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] Clone Set: glance-registry-clone [glance-registry] Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ] Clone Set: glance-api-clone [glance-api] Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ] cinder-api (systemd:openstack-cinder-api): Stopped cinder-scheduler (systemd:openstack-cinder-scheduler): Stopped cinder-volume (systemd:openstack-cinder-volume): Stopped Clone Set: swift-fs-clone [swift-fs] Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] Clone Set: swift-account-clone [swift-account] Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ] Clone Set: swift-container-clone [swift-container] Stopped: [ rhos6-node1 
rhos6-node2 rhos6-node3 ] Clone Set: swift-object-clone [swift-object] Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ] Clone Set: swift-proxy-clone [swift-proxy] Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ] swift-object-expirer (systemd:openstack-swift-object-expirer): Stopped Clone Set: neutron-server-clone [neutron-server] Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ] Clone Set: neutron-scale-clone [neutron-scale] (unique) neutron-scale:0 (ocf::neutron:NeutronScale): Stopped neutron-scale:1 (ocf::neutron:NeutronScale): Stopped neutron-scale:2 (ocf::neutron:NeutronScale): Stopped Clone Set: neutron-ovs-cleanup-clone [neutron-ovs-cleanup] Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ] Clone Set: neutron-netns-cleanup-clone [neutron-netns-cleanup] Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ] Clone Set: neutron-openvswitch-agent-clone [neutron-openvswitch-agent] Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ] Clone Set: neutron-dhcp-agent-clone [neutron-dhcp-agent] Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ] Clone Set: neutron-l3-agent-clone [neutron-l3-agent] Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ] Clone Set: neutron-metadata-agent-clone [neutron-metadata-agent] Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ] Clone Set: nova-consoleauth-clone [nova-consoleauth] Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ] Clone Set: nova-novncproxy-clone [nova-novncproxy] Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ] Clone Set: nova-api-clone [nova-api] Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ] Clone Set: nova-scheduler-clone [nova-scheduler] Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ] Clone Set: nova-conductor-clone [nova-conductor] Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ] ceilometer-central (systemd:openstack-ceilometer-central): Stopped Clone Set: ceilometer-collector-clone [ceilometer-collector] Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ] Clone Set: ceilometer-api-clone [ceilometer-api] Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ] Clone Set: ceilometer-delay-clone [ceilometer-delay] Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ] Clone Set: ceilometer-alarm-evaluator-clone [ceilometer-alarm-evaluator] Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ] Clone Set: ceilometer-alarm-notifier-clone [ceilometer-alarm-notifier] Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ] Clone Set: ceilometer-notification-clone [ceilometer-notification] Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ] Clone Set: heat-api-clone [heat-api] Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ] Clone Set: heat-api-cfn-clone [heat-api-cfn] Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ] Clone Set: heat-api-cloudwatch-clone [heat-api-cloudwatch] Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ] heat-engine (systemd:openstack-heat-engine): Stopped Clone Set: horizon-clone [horizon] Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] diff --git a/cts/scheduler/failed-demote-recovery-master.summary b/cts/scheduler/failed-demote-recovery-master.summary index 61e2065167..b6b8b9d448 100644 --- a/cts/scheduler/failed-demote-recovery-master.summary +++ b/cts/scheduler/failed-demote-recovery-master.summary @@ -1,59 +1,59 @@ Using the original execution date of: 2017-11-30 12:37:50Z Current cluster status: Online: [ fastvm-rhel-7-4-95 fastvm-rhel-7-4-96 ] fence-fastvm-rhel-7-4-95 (stonith:fence_xvm): Started fastvm-rhel-7-4-96 fence-fastvm-rhel-7-4-96 (stonith:fence_xvm): Started fastvm-rhel-7-4-95 - Master/Slave Set: DB2_HADR-master [DB2_HADR] + Clone Set: DB2_HADR-master 
[DB2_HADR] (promotable) DB2_HADR (ocf::heartbeat:db2): FAILED fastvm-rhel-7-4-96 Slaves: [ fastvm-rhel-7-4-95 ] Transition Summary: * Recover DB2_HADR:1 ( Slave -> Master fastvm-rhel-7-4-96 ) Executing cluster transition: * Pseudo action: DB2_HADR-master_pre_notify_stop_0 * Resource action: DB2_HADR notify on fastvm-rhel-7-4-95 * Resource action: DB2_HADR notify on fastvm-rhel-7-4-96 * Pseudo action: DB2_HADR-master_confirmed-pre_notify_stop_0 * Pseudo action: DB2_HADR-master_stop_0 * Resource action: DB2_HADR stop on fastvm-rhel-7-4-96 * Pseudo action: DB2_HADR-master_stopped_0 * Pseudo action: DB2_HADR-master_post_notify_stopped_0 * Resource action: DB2_HADR notify on fastvm-rhel-7-4-95 * Pseudo action: DB2_HADR-master_confirmed-post_notify_stopped_0 * Pseudo action: DB2_HADR-master_pre_notify_start_0 * Pseudo action: all_stopped * Resource action: DB2_HADR notify on fastvm-rhel-7-4-95 * Pseudo action: DB2_HADR-master_confirmed-pre_notify_start_0 * Pseudo action: DB2_HADR-master_start_0 * Resource action: DB2_HADR start on fastvm-rhel-7-4-96 * Pseudo action: DB2_HADR-master_running_0 * Pseudo action: DB2_HADR-master_post_notify_running_0 * Resource action: DB2_HADR notify on fastvm-rhel-7-4-95 * Resource action: DB2_HADR notify on fastvm-rhel-7-4-96 * Pseudo action: DB2_HADR-master_confirmed-post_notify_running_0 * Pseudo action: DB2_HADR-master_pre_notify_promote_0 * Resource action: DB2_HADR notify on fastvm-rhel-7-4-95 * Resource action: DB2_HADR notify on fastvm-rhel-7-4-96 * Pseudo action: DB2_HADR-master_confirmed-pre_notify_promote_0 * Pseudo action: DB2_HADR-master_promote_0 * Resource action: DB2_HADR promote on fastvm-rhel-7-4-96 * Pseudo action: DB2_HADR-master_promoted_0 * Pseudo action: DB2_HADR-master_post_notify_promoted_0 * Resource action: DB2_HADR notify on fastvm-rhel-7-4-95 * Resource action: DB2_HADR notify on fastvm-rhel-7-4-96 * Pseudo action: DB2_HADR-master_confirmed-post_notify_promoted_0 * Resource action: DB2_HADR monitor=22000 on fastvm-rhel-7-4-96 Using the original execution date of: 2017-11-30 12:37:50Z Revised cluster status: Online: [ fastvm-rhel-7-4-95 fastvm-rhel-7-4-96 ] fence-fastvm-rhel-7-4-95 (stonith:fence_xvm): Started fastvm-rhel-7-4-96 fence-fastvm-rhel-7-4-96 (stonith:fence_xvm): Started fastvm-rhel-7-4-95 - Master/Slave Set: DB2_HADR-master [DB2_HADR] + Clone Set: DB2_HADR-master [DB2_HADR] (promotable) Masters: [ fastvm-rhel-7-4-96 ] Slaves: [ fastvm-rhel-7-4-95 ] diff --git a/cts/scheduler/failed-demote-recovery.summary b/cts/scheduler/failed-demote-recovery.summary index 32c6a80811..773ab81741 100644 --- a/cts/scheduler/failed-demote-recovery.summary +++ b/cts/scheduler/failed-demote-recovery.summary @@ -1,47 +1,47 @@ Using the original execution date of: 2017-11-30 12:37:50Z Current cluster status: Online: [ fastvm-rhel-7-4-95 fastvm-rhel-7-4-96 ] fence-fastvm-rhel-7-4-95 (stonith:fence_xvm): Started fastvm-rhel-7-4-96 fence-fastvm-rhel-7-4-96 (stonith:fence_xvm): Started fastvm-rhel-7-4-95 - Master/Slave Set: DB2_HADR-master [DB2_HADR] + Clone Set: DB2_HADR-master [DB2_HADR] (promotable) DB2_HADR (ocf::heartbeat:db2): FAILED fastvm-rhel-7-4-96 Slaves: [ fastvm-rhel-7-4-95 ] Transition Summary: * Recover DB2_HADR:1 ( Slave fastvm-rhel-7-4-96 ) Executing cluster transition: * Pseudo action: DB2_HADR-master_pre_notify_stop_0 * Resource action: DB2_HADR notify on fastvm-rhel-7-4-95 * Resource action: DB2_HADR notify on fastvm-rhel-7-4-96 * Pseudo action: DB2_HADR-master_confirmed-pre_notify_stop_0 * Pseudo action: DB2_HADR-master_stop_0 * 
Resource action: DB2_HADR stop on fastvm-rhel-7-4-96 * Pseudo action: DB2_HADR-master_stopped_0 * Pseudo action: DB2_HADR-master_post_notify_stopped_0 * Resource action: DB2_HADR notify on fastvm-rhel-7-4-95 * Pseudo action: DB2_HADR-master_confirmed-post_notify_stopped_0 * Pseudo action: DB2_HADR-master_pre_notify_start_0 * Pseudo action: all_stopped * Resource action: DB2_HADR notify on fastvm-rhel-7-4-95 * Pseudo action: DB2_HADR-master_confirmed-pre_notify_start_0 * Pseudo action: DB2_HADR-master_start_0 * Resource action: DB2_HADR start on fastvm-rhel-7-4-96 * Pseudo action: DB2_HADR-master_running_0 * Pseudo action: DB2_HADR-master_post_notify_running_0 * Resource action: DB2_HADR notify on fastvm-rhel-7-4-95 * Resource action: DB2_HADR notify on fastvm-rhel-7-4-96 * Pseudo action: DB2_HADR-master_confirmed-post_notify_running_0 * Resource action: DB2_HADR monitor=5000 on fastvm-rhel-7-4-96 Using the original execution date of: 2017-11-30 12:37:50Z Revised cluster status: Online: [ fastvm-rhel-7-4-95 fastvm-rhel-7-4-96 ] fence-fastvm-rhel-7-4-95 (stonith:fence_xvm): Started fastvm-rhel-7-4-96 fence-fastvm-rhel-7-4-96 (stonith:fence_xvm): Started fastvm-rhel-7-4-95 - Master/Slave Set: DB2_HADR-master [DB2_HADR] + Clone Set: DB2_HADR-master [DB2_HADR] (promotable) Slaves: [ fastvm-rhel-7-4-95 fastvm-rhel-7-4-96 ] diff --git a/cts/scheduler/group-dependents.summary b/cts/scheduler/group-dependents.summary index 15b750b172..1598104d38 100644 --- a/cts/scheduler/group-dependents.summary +++ b/cts/scheduler/group-dependents.summary @@ -1,195 +1,195 @@ Current cluster status: Online: [ asttest1 asttest2 ] Resource Group: voip mysqld (lsb:mysql): Started asttest1 dahdi (lsb:dahdi): Started asttest1 fonulator (lsb:fonulator): Stopped asterisk (lsb:asterisk-11.0.1): Stopped iax2_mon (lsb:iax2_mon): Stopped httpd (lsb:apache2): Stopped tftp (lsb:tftp-srce): Stopped Resource Group: ip_voip_routes ip_voip_route_test1 (ocf::heartbeat:Route): Started asttest1 ip_voip_route_test2 (ocf::heartbeat:Route): Started asttest1 Resource Group: ip_voip_addresses_p ip_voip_vlan850 (ocf::heartbeat:IPaddr2): Started asttest1 ip_voip_vlan998 (ocf::heartbeat:IPaddr2): Started asttest1 ip_voip_vlan851 (ocf::heartbeat:IPaddr2): Started asttest1 ip_voip_vlan852 (ocf::heartbeat:IPaddr2): Started asttest1 ip_voip_vlan853 (ocf::heartbeat:IPaddr2): Started asttest1 ip_voip_vlan854 (ocf::heartbeat:IPaddr2): Started asttest1 ip_voip_vlan855 (ocf::heartbeat:IPaddr2): Started asttest1 ip_voip_vlan856 (ocf::heartbeat:IPaddr2): Started asttest1 Clone Set: cl_route [ip_voip_route_default] Started: [ asttest1 asttest2 ] fs_drbd (ocf::heartbeat:Filesystem): Started asttest1 - Master/Slave Set: ms_drbd [drbd] + Clone Set: ms_drbd [drbd] (promotable) Masters: [ asttest1 ] Slaves: [ asttest2 ] Transition Summary: * Migrate mysqld ( asttest1 -> asttest2 ) * Migrate dahdi ( asttest1 -> asttest2 ) * Start fonulator (asttest2) * Start asterisk (asttest2) * Start iax2_mon (asttest2) * Start httpd (asttest2) * Start tftp (asttest2) * Migrate ip_voip_route_test1 ( asttest1 -> asttest2 ) * Migrate ip_voip_route_test2 ( asttest1 -> asttest2 ) * Migrate ip_voip_vlan850 ( asttest1 -> asttest2 ) * Migrate ip_voip_vlan998 ( asttest1 -> asttest2 ) * Migrate ip_voip_vlan851 ( asttest1 -> asttest2 ) * Migrate ip_voip_vlan852 ( asttest1 -> asttest2 ) * Migrate ip_voip_vlan853 ( asttest1 -> asttest2 ) * Migrate ip_voip_vlan854 ( asttest1 -> asttest2 ) * Migrate ip_voip_vlan855 ( asttest1 -> asttest2 ) * Migrate ip_voip_vlan856 ( asttest1 -> asttest2 
) * Move fs_drbd ( asttest1 -> asttest2 ) * Demote drbd:0 ( Master -> Slave asttest1 ) * Promote drbd:1 (Slave -> Master asttest2) Executing cluster transition: * Pseudo action: voip_stop_0 * Resource action: mysqld migrate_to on asttest1 * Resource action: ip_voip_route_test1 migrate_to on asttest1 * Resource action: ip_voip_route_test2 migrate_to on asttest1 * Resource action: ip_voip_vlan850 migrate_to on asttest1 * Resource action: ip_voip_vlan998 migrate_to on asttest1 * Resource action: ip_voip_vlan851 migrate_to on asttest1 * Resource action: ip_voip_vlan852 migrate_to on asttest1 * Resource action: ip_voip_vlan853 migrate_to on asttest1 * Resource action: ip_voip_vlan854 migrate_to on asttest1 * Resource action: ip_voip_vlan855 migrate_to on asttest1 * Resource action: ip_voip_vlan856 migrate_to on asttest1 * Resource action: drbd:1 cancel=31000 on asttest2 * Pseudo action: ms_drbd_pre_notify_demote_0 * Resource action: mysqld migrate_from on asttest2 * Resource action: dahdi migrate_to on asttest1 * Resource action: ip_voip_route_test1 migrate_from on asttest2 * Resource action: ip_voip_route_test2 migrate_from on asttest2 * Resource action: ip_voip_vlan850 migrate_from on asttest2 * Resource action: ip_voip_vlan998 migrate_from on asttest2 * Resource action: ip_voip_vlan851 migrate_from on asttest2 * Resource action: ip_voip_vlan852 migrate_from on asttest2 * Resource action: ip_voip_vlan853 migrate_from on asttest2 * Resource action: ip_voip_vlan854 migrate_from on asttest2 * Resource action: ip_voip_vlan855 migrate_from on asttest2 * Resource action: ip_voip_vlan856 migrate_from on asttest2 * Resource action: drbd:0 notify on asttest1 * Resource action: drbd:1 notify on asttest2 * Pseudo action: ms_drbd_confirmed-pre_notify_demote_0 * Resource action: dahdi migrate_from on asttest2 * Resource action: dahdi stop on asttest1 * Resource action: mysqld stop on asttest1 * Pseudo action: voip_stopped_0 * Pseudo action: ip_voip_routes_stop_0 * Resource action: ip_voip_route_test1 stop on asttest1 * Resource action: ip_voip_route_test2 stop on asttest1 * Pseudo action: ip_voip_routes_stopped_0 * Pseudo action: ip_voip_addresses_p_stop_0 * Resource action: ip_voip_vlan850 stop on asttest1 * Resource action: ip_voip_vlan998 stop on asttest1 * Resource action: ip_voip_vlan851 stop on asttest1 * Resource action: ip_voip_vlan852 stop on asttest1 * Resource action: ip_voip_vlan853 stop on asttest1 * Resource action: ip_voip_vlan854 stop on asttest1 * Resource action: ip_voip_vlan855 stop on asttest1 * Resource action: ip_voip_vlan856 stop on asttest1 * Pseudo action: ip_voip_addresses_p_stopped_0 * Resource action: fs_drbd stop on asttest1 * Pseudo action: ms_drbd_demote_0 * Pseudo action: all_stopped * Resource action: drbd:0 demote on asttest1 * Pseudo action: ms_drbd_demoted_0 * Pseudo action: ms_drbd_post_notify_demoted_0 * Resource action: drbd:0 notify on asttest1 * Resource action: drbd:1 notify on asttest2 * Pseudo action: ms_drbd_confirmed-post_notify_demoted_0 * Pseudo action: ms_drbd_pre_notify_promote_0 * Resource action: drbd:0 notify on asttest1 * Resource action: drbd:1 notify on asttest2 * Pseudo action: ms_drbd_confirmed-pre_notify_promote_0 * Pseudo action: ms_drbd_promote_0 * Resource action: drbd:1 promote on asttest2 * Pseudo action: ms_drbd_promoted_0 * Pseudo action: ms_drbd_post_notify_promoted_0 * Resource action: drbd:0 notify on asttest1 * Resource action: drbd:1 notify on asttest2 * Pseudo action: ms_drbd_confirmed-post_notify_promoted_0 * Resource action: fs_drbd 
start on asttest2 * Resource action: drbd:0 monitor=31000 on asttest1 * Pseudo action: ip_voip_addresses_p_start_0 * Pseudo action: ip_voip_vlan850_start_0 * Pseudo action: ip_voip_vlan998_start_0 * Pseudo action: ip_voip_vlan851_start_0 * Pseudo action: ip_voip_vlan852_start_0 * Pseudo action: ip_voip_vlan853_start_0 * Pseudo action: ip_voip_vlan854_start_0 * Pseudo action: ip_voip_vlan855_start_0 * Pseudo action: ip_voip_vlan856_start_0 * Resource action: fs_drbd monitor=1000 on asttest2 * Pseudo action: ip_voip_addresses_p_running_0 * Resource action: ip_voip_vlan850 monitor=1000 on asttest2 * Resource action: ip_voip_vlan998 monitor=1000 on asttest2 * Resource action: ip_voip_vlan851 monitor=1000 on asttest2 * Resource action: ip_voip_vlan852 monitor=1000 on asttest2 * Resource action: ip_voip_vlan853 monitor=1000 on asttest2 * Resource action: ip_voip_vlan854 monitor=1000 on asttest2 * Resource action: ip_voip_vlan855 monitor=1000 on asttest2 * Resource action: ip_voip_vlan856 monitor=1000 on asttest2 * Pseudo action: ip_voip_routes_start_0 * Pseudo action: ip_voip_route_test1_start_0 * Pseudo action: ip_voip_route_test2_start_0 * Pseudo action: ip_voip_routes_running_0 * Resource action: ip_voip_route_test1 monitor=1000 on asttest2 * Resource action: ip_voip_route_test2 monitor=1000 on asttest2 * Pseudo action: voip_start_0 * Pseudo action: mysqld_start_0 * Pseudo action: dahdi_start_0 * Resource action: fonulator start on asttest2 * Resource action: asterisk start on asttest2 * Resource action: iax2_mon start on asttest2 * Resource action: httpd start on asttest2 * Resource action: tftp start on asttest2 * Pseudo action: voip_running_0 * Resource action: mysqld monitor=1000 on asttest2 * Resource action: dahdi monitor=1000 on asttest2 * Resource action: fonulator monitor=1000 on asttest2 * Resource action: asterisk monitor=1000 on asttest2 * Resource action: iax2_mon monitor=60000 on asttest2 * Resource action: httpd monitor=1000 on asttest2 * Resource action: tftp monitor=60000 on asttest2 Revised cluster status: Online: [ asttest1 asttest2 ] Resource Group: voip mysqld (lsb:mysql): Started asttest2 dahdi (lsb:dahdi): Started asttest2 fonulator (lsb:fonulator): Started asttest2 asterisk (lsb:asterisk-11.0.1): Started asttest2 iax2_mon (lsb:iax2_mon): Started asttest2 httpd (lsb:apache2): Started asttest2 tftp (lsb:tftp-srce): Started asttest2 Resource Group: ip_voip_routes ip_voip_route_test1 (ocf::heartbeat:Route): Started asttest2 ip_voip_route_test2 (ocf::heartbeat:Route): Started asttest2 Resource Group: ip_voip_addresses_p ip_voip_vlan850 (ocf::heartbeat:IPaddr2): Started asttest2 ip_voip_vlan998 (ocf::heartbeat:IPaddr2): Started asttest2 ip_voip_vlan851 (ocf::heartbeat:IPaddr2): Started asttest2 ip_voip_vlan852 (ocf::heartbeat:IPaddr2): Started asttest2 ip_voip_vlan853 (ocf::heartbeat:IPaddr2): Started asttest2 ip_voip_vlan854 (ocf::heartbeat:IPaddr2): Started asttest2 ip_voip_vlan855 (ocf::heartbeat:IPaddr2): Started asttest2 ip_voip_vlan856 (ocf::heartbeat:IPaddr2): Started asttest2 Clone Set: cl_route [ip_voip_route_default] Started: [ asttest1 asttest2 ] fs_drbd (ocf::heartbeat:Filesystem): Started asttest2 - Master/Slave Set: ms_drbd [drbd] + Clone Set: ms_drbd [drbd] (promotable) Masters: [ asttest2 ] Slaves: [ asttest1 ] diff --git a/cts/scheduler/group14.summary b/cts/scheduler/group14.summary index 351f03802d..b562a8ba28 100644 --- a/cts/scheduler/group14.summary +++ b/cts/scheduler/group14.summary @@ -1,101 +1,101 @@ Current cluster status: Online: [ c001n06 c001n07 
] OFFLINE: [ c001n02 c001n03 c001n04 c001n05 ] DcIPaddr (ocf::heartbeat:IPaddr): Stopped Resource Group: group-1 r192.168.100.181 (ocf::heartbeat:IPaddr): Started c001n06 r192.168.100.182 (ocf::heartbeat:IPaddr): Stopped r192.168.100.183 (ocf::heartbeat:IPaddr): Stopped lsb_dummy (lsb:/usr/lib/heartbeat/cts/LSBDummy): Stopped migrator (ocf::heartbeat:Dummy): Stopped rsc_c001n03 (ocf::heartbeat:IPaddr): Stopped rsc_c001n02 (ocf::heartbeat:IPaddr): Stopped rsc_c001n04 (ocf::heartbeat:IPaddr): Stopped rsc_c001n05 (ocf::heartbeat:IPaddr): Stopped rsc_c001n06 (ocf::heartbeat:IPaddr): Stopped rsc_c001n07 (ocf::heartbeat:IPaddr): Stopped Clone Set: DoFencing [child_DoFencing] Stopped: [ c001n02 c001n03 c001n04 c001n05 c001n06 c001n07 ] - Master/Slave Set: master_rsc_1 [ocf_msdummy] (unique) + Clone Set: master_rsc_1 [ocf_msdummy] (promotable) (unique) ocf_msdummy:0 (ocf::heartbeat:Stateful): Stopped ocf_msdummy:1 (ocf::heartbeat:Stateful): Stopped ocf_msdummy:2 (ocf::heartbeat:Stateful): Stopped ocf_msdummy:3 (ocf::heartbeat:Stateful): Stopped ocf_msdummy:4 (ocf::heartbeat:Stateful): Stopped ocf_msdummy:5 (ocf::heartbeat:Stateful): Stopped ocf_msdummy:6 (ocf::heartbeat:Stateful): Stopped ocf_msdummy:7 (ocf::heartbeat:Stateful): Stopped ocf_msdummy:8 (ocf::heartbeat:Stateful): Stopped ocf_msdummy:9 (ocf::heartbeat:Stateful): Stopped ocf_msdummy:10 (ocf::heartbeat:Stateful): Stopped ocf_msdummy:11 (ocf::heartbeat:Stateful): Stopped Transition Summary: * Start DcIPaddr ( c001n06 ) due to no quorum (blocked) * Stop r192.168.100.181 ( c001n06 ) due to no quorum * Start r192.168.100.182 ( c001n07 ) due to no quorum (blocked) * Start r192.168.100.183 ( c001n07 ) due to no quorum (blocked) * Start lsb_dummy ( c001n06 ) due to no quorum (blocked) * Start migrator ( c001n06 ) due to no quorum (blocked) * Start rsc_c001n03 ( c001n06 ) due to no quorum (blocked) * Start rsc_c001n02 ( c001n07 ) due to no quorum (blocked) * Start rsc_c001n04 ( c001n06 ) due to no quorum (blocked) * Start rsc_c001n05 ( c001n07 ) due to no quorum (blocked) * Start rsc_c001n06 ( c001n06 ) due to no quorum (blocked) * Start rsc_c001n07 ( c001n07 ) due to no quorum (blocked) * Start child_DoFencing:0 (c001n06) * Start child_DoFencing:1 (c001n07) * Start ocf_msdummy:0 ( c001n06 ) due to no quorum (blocked) * Start ocf_msdummy:1 ( c001n07 ) due to no quorum (blocked) * Start ocf_msdummy:2 ( c001n06 ) due to no quorum (blocked) * Start ocf_msdummy:3 ( c001n07 ) due to no quorum (blocked) Executing cluster transition: * Pseudo action: group-1_stop_0 * Resource action: r192.168.100.181 stop on c001n06 * Pseudo action: DoFencing_start_0 * Pseudo action: all_stopped * Pseudo action: group-1_stopped_0 * Pseudo action: group-1_start_0 * Resource action: child_DoFencing:0 start on c001n06 * Resource action: child_DoFencing:1 start on c001n07 * Pseudo action: DoFencing_running_0 * Resource action: child_DoFencing:0 monitor=20000 on c001n06 * Resource action: child_DoFencing:1 monitor=20000 on c001n07 Revised cluster status: Online: [ c001n06 c001n07 ] OFFLINE: [ c001n02 c001n03 c001n04 c001n05 ] DcIPaddr (ocf::heartbeat:IPaddr): Stopped Resource Group: group-1 r192.168.100.181 (ocf::heartbeat:IPaddr): Stopped r192.168.100.182 (ocf::heartbeat:IPaddr): Stopped r192.168.100.183 (ocf::heartbeat:IPaddr): Stopped lsb_dummy (lsb:/usr/lib/heartbeat/cts/LSBDummy): Stopped migrator (ocf::heartbeat:Dummy): Stopped rsc_c001n03 (ocf::heartbeat:IPaddr): Stopped rsc_c001n02 (ocf::heartbeat:IPaddr): Stopped rsc_c001n04 (ocf::heartbeat:IPaddr): Stopped 
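The group14 transition above reflects quorum gating: with no quorum, every planned start and promote is computed but "(blocked)", stops such as r192.168.100.181 still run, and the fencing clone child_DoFencing is allowed to start anyway. A toy model of that gating follows; the Action shape and flag names are invented for illustration and do not mirror Pacemaker's internals.

```python
# Toy quorum gate: stops always proceed; starts/promotes wait for quorum,
# except for fencing devices, which may start regardless (as child_DoFencing
# does above). The data model is invented for illustration.
from dataclasses import dataclass

@dataclass
class Action:
    rsc: str
    kind: str            # "start", "stop", "promote", ...
    is_fencing: bool = False

def runnable(action: Action, have_quorum: bool) -> bool:
    if action.kind in ("start", "promote") and not action.is_fencing:
        return have_quorum
    return True  # stops (and fencing starts) are always safe to run

for a in [Action("r192.168.100.181", "stop"),
          Action("r192.168.100.182", "start"),
          Action("child_DoFencing:0", "start", is_fencing=True)]:
    state = "run" if runnable(a, have_quorum=False) else "blocked"
    print(f"{a.kind:8}{a.rsc}: {state}")
```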
rsc_c001n05 (ocf::heartbeat:IPaddr): Stopped rsc_c001n06 (ocf::heartbeat:IPaddr): Stopped rsc_c001n07 (ocf::heartbeat:IPaddr): Stopped Clone Set: DoFencing [child_DoFencing] Started: [ c001n06 c001n07 ] Stopped: [ c001n02 c001n03 c001n04 c001n05 ] - Master/Slave Set: master_rsc_1 [ocf_msdummy] (unique) + Clone Set: master_rsc_1 [ocf_msdummy] (promotable) (unique) ocf_msdummy:0 (ocf::heartbeat:Stateful): Stopped ocf_msdummy:1 (ocf::heartbeat:Stateful): Stopped ocf_msdummy:2 (ocf::heartbeat:Stateful): Stopped ocf_msdummy:3 (ocf::heartbeat:Stateful): Stopped ocf_msdummy:4 (ocf::heartbeat:Stateful): Stopped ocf_msdummy:5 (ocf::heartbeat:Stateful): Stopped ocf_msdummy:6 (ocf::heartbeat:Stateful): Stopped ocf_msdummy:7 (ocf::heartbeat:Stateful): Stopped ocf_msdummy:8 (ocf::heartbeat:Stateful): Stopped ocf_msdummy:9 (ocf::heartbeat:Stateful): Stopped ocf_msdummy:10 (ocf::heartbeat:Stateful): Stopped ocf_msdummy:11 (ocf::heartbeat:Stateful): Stopped diff --git a/cts/scheduler/guest-node-host-dies.summary b/cts/scheduler/guest-node-host-dies.summary index 9813d2b97d..89de43521a 100644 --- a/cts/scheduler/guest-node-host-dies.summary +++ b/cts/scheduler/guest-node-host-dies.summary @@ -1,82 +1,82 @@ Current cluster status: Node rhel7-1 (1): UNCLEAN (offline) Online: [ rhel7-2 rhel7-3 rhel7-4 rhel7-5 ] Fencing (stonith:fence_xvm): Started rhel7-4 rsc_rhel7-1 (ocf::heartbeat:IPaddr2): Started rhel7-1 ( UNCLEAN ) container1 (ocf::heartbeat:VirtualDomain): FAILED rhel7-1 (UNCLEAN) container2 (ocf::heartbeat:VirtualDomain): FAILED rhel7-1 (UNCLEAN) - Master/Slave Set: lxc-ms-master [lxc-ms] + Clone Set: lxc-ms-master [lxc-ms] (promotable) Stopped: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ] Transition Summary: * Fence (reboot) lxc2 (resource: container2) 'guest is unclean' * Fence (reboot) lxc1 (resource: container1) 'guest is unclean' * Fence (reboot) rhel7-1 'rsc_rhel7-1 is thought to be active there' * Restart Fencing ( rhel7-4 ) due to resource definition change * Move rsc_rhel7-1 ( rhel7-1 -> rhel7-5 ) * Recover container1 ( rhel7-1 -> rhel7-2 ) * Recover container2 ( rhel7-1 -> rhel7-3 ) * Recover lxc-ms:0 (Master lxc1) * Recover lxc-ms:1 (Slave lxc2) * Move lxc1 ( rhel7-1 -> rhel7-2 ) * Move lxc2 ( rhel7-1 -> rhel7-3 ) Executing cluster transition: * Resource action: Fencing stop on rhel7-4 * Pseudo action: lxc-ms-master_demote_0 * Pseudo action: lxc1_stop_0 * Resource action: lxc1 monitor on rhel7-5 * Resource action: lxc1 monitor on rhel7-4 * Resource action: lxc1 monitor on rhel7-3 * Pseudo action: lxc2_stop_0 * Resource action: lxc2 monitor on rhel7-5 * Resource action: lxc2 monitor on rhel7-4 * Resource action: lxc2 monitor on rhel7-2 * Fencing rhel7-1 (reboot) * Pseudo action: rsc_rhel7-1_stop_0 * Pseudo action: container1_stop_0 * Pseudo action: container2_stop_0 * Pseudo action: stonith-lxc2-reboot on lxc2 * Pseudo action: stonith-lxc1-reboot on lxc1 * Pseudo action: stonith_complete * Resource action: rsc_rhel7-1 start on rhel7-5 * Resource action: container1 start on rhel7-2 * Resource action: container2 start on rhel7-3 * Pseudo action: lxc-ms_demote_0 * Pseudo action: lxc-ms-master_demoted_0 * Pseudo action: lxc-ms-master_stop_0 * Resource action: rsc_rhel7-1 monitor=5000 on rhel7-5 * Pseudo action: lxc-ms_stop_0 * Pseudo action: lxc-ms_stop_0 * Pseudo action: lxc-ms-master_stopped_0 * Pseudo action: lxc-ms-master_start_0 * Pseudo action: all_stopped * Resource action: Fencing start on rhel7-4 * Resource action: Fencing monitor=120000 on rhel7-4 * Resource action: lxc1 start on rhel7-2 
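Note how guest-node-host-dies handles the lost guests: once rhel7-1 is fenced, lxc1 and lxc2 are implicitly fenced with it, so the stops of their lxc-ms instances become pseudo actions, bookkeeping steps the transition graph completes without contacting the dead nodes, while operations on surviving nodes remain real resource actions. A simplified sketch of that classification, with the rule invented for illustration:

```python
# Classify operations against fenced vs. live nodes. Once a node (or the
# guest nodes it hosted) is fenced, ops "on" it become pseudo actions.
# This simplification is for illustration only.
fenced = {"rhel7-1", "lxc1", "lxc2"}  # host plus its implied guests

def classify(op: str, node: str) -> str:
    return "Pseudo action" if node in fenced else "Resource action"

print(classify("lxc-ms_stop_0", "lxc1"))     # Pseudo action
print(classify("lxc-ms_stop_0", "lxc2"))     # Pseudo action
print(classify("Fencing start", "rhel7-4"))  # Resource action
```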
* Resource action: lxc2 start on rhel7-3 * Resource action: lxc-ms start on lxc1 * Resource action: lxc-ms start on lxc2 * Pseudo action: lxc-ms-master_running_0 * Resource action: lxc1 monitor=30000 on rhel7-2 * Resource action: lxc2 monitor=30000 on rhel7-3 * Resource action: lxc-ms monitor=10000 on lxc2 * Pseudo action: lxc-ms-master_promote_0 * Resource action: lxc-ms promote on lxc1 * Pseudo action: lxc-ms-master_promoted_0 Revised cluster status: Online: [ rhel7-2 rhel7-3 rhel7-4 rhel7-5 ] OFFLINE: [ rhel7-1 ] Containers: [ lxc1:container1 lxc2:container2 ] Fencing (stonith:fence_xvm): Started rhel7-4 rsc_rhel7-1 (ocf::heartbeat:IPaddr2): Started rhel7-5 container1 (ocf::heartbeat:VirtualDomain): Started rhel7-2 container2 (ocf::heartbeat:VirtualDomain): Started rhel7-3 - Master/Slave Set: lxc-ms-master [lxc-ms] + Clone Set: lxc-ms-master [lxc-ms] (promotable) Masters: [ lxc1 ] Slaves: [ lxc2 ] diff --git a/cts/scheduler/history-1.summary b/cts/scheduler/history-1.summary index 6ae03e2d5a..243cae8056 100644 --- a/cts/scheduler/history-1.summary +++ b/cts/scheduler/history-1.summary @@ -1,53 +1,53 @@ Current cluster status: Online: [ pcmk-1 pcmk-2 pcmk-3 ] OFFLINE: [ pcmk-4 ] Clone Set: Fencing [FencingChild] Started: [ pcmk-1 pcmk-2 pcmk-3 ] Stopped: [ pcmk-4 ] Resource Group: group-1 r192.168.101.181 (ocf::heartbeat:IPaddr): Stopped r192.168.101.182 (ocf::heartbeat:IPaddr): Stopped r192.168.101.183 (ocf::heartbeat:IPaddr): Stopped rsc_pcmk-1 (ocf::heartbeat:IPaddr): Started pcmk-1 rsc_pcmk-2 (ocf::heartbeat:IPaddr): Started pcmk-2 rsc_pcmk-3 (ocf::heartbeat:IPaddr): Started pcmk-3 rsc_pcmk-4 (ocf::heartbeat:IPaddr): Started pcmk-3 lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Stopped migrator (ocf::pacemaker:Dummy): Started pcmk-1 Clone Set: Connectivity [ping-1] Started: [ pcmk-1 pcmk-2 pcmk-3 ] Stopped: [ pcmk-4 ] - Master/Slave Set: master-1 [stateful-1] + Clone Set: master-1 [stateful-1] (promotable) Slaves: [ pcmk-1 pcmk-2 pcmk-3 ] Stopped: [ pcmk-4 ] Transition Summary: Executing cluster transition: Revised cluster status: Online: [ pcmk-1 pcmk-2 pcmk-3 ] OFFLINE: [ pcmk-4 ] Clone Set: Fencing [FencingChild] Started: [ pcmk-1 pcmk-2 pcmk-3 ] Stopped: [ pcmk-4 ] Resource Group: group-1 r192.168.101.181 (ocf::heartbeat:IPaddr): Stopped r192.168.101.182 (ocf::heartbeat:IPaddr): Stopped r192.168.101.183 (ocf::heartbeat:IPaddr): Stopped rsc_pcmk-1 (ocf::heartbeat:IPaddr): Started pcmk-1 rsc_pcmk-2 (ocf::heartbeat:IPaddr): Started pcmk-2 rsc_pcmk-3 (ocf::heartbeat:IPaddr): Started pcmk-3 rsc_pcmk-4 (ocf::heartbeat:IPaddr): Started pcmk-3 lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Stopped migrator (ocf::pacemaker:Dummy): Started pcmk-1 Clone Set: Connectivity [ping-1] Started: [ pcmk-1 pcmk-2 pcmk-3 ] Stopped: [ pcmk-4 ] - Master/Slave Set: master-1 [stateful-1] + Clone Set: master-1 [stateful-1] (promotable) Slaves: [ pcmk-1 pcmk-2 pcmk-3 ] Stopped: [ pcmk-4 ] diff --git a/cts/scheduler/inc11.summary b/cts/scheduler/inc11.summary index d8e844fd99..f522ba41e4 100644 --- a/cts/scheduler/inc11.summary +++ b/cts/scheduler/inc11.summary @@ -1,41 +1,41 @@ Current cluster status: Online: [ node0 node1 node2 ] simple-rsc (ocf::heartbeat:apache): Stopped - Master/Slave Set: rsc1 [child_rsc1] (unique) + Clone Set: rsc1 [child_rsc1] (promotable) (unique) child_rsc1:0 (ocf::heartbeat:apache): Stopped child_rsc1:1 (ocf::heartbeat:apache): Stopped Transition Summary: * Start simple-rsc (node2) * Start child_rsc1:0 (node1) * Promote child_rsc1:1 ( Stopped -> Master node2 ) 
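inc11's plan promotes child_rsc1:1 straight from Stopped to Master, and the execution below shows how that decomposes: every instance starts (as a Slave) first, the clone reports running, and only then is the chosen instance promoted. A minimal sketch of that two-phase bring-up, using the instance placement from the summary but an invented phase model:

```python
# Two-phase bring-up of a promotable clone: start everything, then promote
# the chosen instance once the clone is running. The phase model is a
# simplification for illustration.
def promotable_bringup(instances, promote_on):
    plan = [f"{inst} start on {node}" for inst, node in instances]
    plan.append("clone running (promotion barrier)")
    plan += [f"{inst} promote on {node}"
             for inst, node in instances if node in promote_on]
    return plan

for step in promotable_bringup([("child_rsc1:0", "node1"),
                                ("child_rsc1:1", "node2")],
                               promote_on={"node2"}):
    print(step)
```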
Executing cluster transition: * Resource action: simple-rsc monitor on node2 * Resource action: simple-rsc monitor on node1 * Resource action: simple-rsc monitor on node0 * Resource action: child_rsc1:0 monitor on node2 * Resource action: child_rsc1:0 monitor on node1 * Resource action: child_rsc1:0 monitor on node0 * Resource action: child_rsc1:1 monitor on node2 * Resource action: child_rsc1:1 monitor on node1 * Resource action: child_rsc1:1 monitor on node0 * Pseudo action: rsc1_start_0 * Resource action: simple-rsc start on node2 * Resource action: child_rsc1:0 start on node1 * Resource action: child_rsc1:1 start on node2 * Pseudo action: rsc1_running_0 * Pseudo action: rsc1_promote_0 * Resource action: child_rsc1:1 promote on node2 * Pseudo action: rsc1_promoted_0 Revised cluster status: Online: [ node0 node1 node2 ] simple-rsc (ocf::heartbeat:apache): Started node2 - Master/Slave Set: rsc1 [child_rsc1] (unique) + Clone Set: rsc1 [child_rsc1] (promotable) (unique) child_rsc1:0 (ocf::heartbeat:apache): Slave node1 child_rsc1:1 (ocf::heartbeat:apache): Master node2 diff --git a/cts/scheduler/inc12.summary b/cts/scheduler/inc12.summary index 2a6a088d57..a9e79f429c 100644 --- a/cts/scheduler/inc12.summary +++ b/cts/scheduler/inc12.summary @@ -1,137 +1,137 @@ Current cluster status: Online: [ c001n02 c001n03 c001n04 c001n05 c001n06 c001n07 ] DcIPaddr (ocf::heartbeat:IPaddr): Stopped Resource Group: group-1 ocf_192.168.100.181 (ocf::heartbeat:IPaddr): Started c001n02 heartbeat_192.168.100.182 (ocf::heartbeat:IPaddr): Started c001n02 ocf_192.168.100.183 (ocf::heartbeat:IPaddr): Started c001n02 lsb_dummy (lsb:/usr/lib/heartbeat/cts/LSBDummy): Started c001n04 rsc_c001n03 (ocf::heartbeat:IPaddr): Started c001n05 rsc_c001n02 (ocf::heartbeat:IPaddr): Started c001n02 rsc_c001n04 (ocf::heartbeat:IPaddr): Started c001n04 rsc_c001n05 (ocf::heartbeat:IPaddr): Started c001n05 rsc_c001n06 (ocf::heartbeat:IPaddr): Started c001n06 rsc_c001n07 (ocf::heartbeat:IPaddr): Started c001n07 Clone Set: DoFencing [child_DoFencing] Started: [ c001n02 c001n04 c001n05 c001n06 c001n07 ] Stopped: [ c001n03 ] - Master/Slave Set: master_rsc_1 [ocf_msdummy] (unique) + Clone Set: master_rsc_1 [ocf_msdummy] (promotable) (unique) ocf_msdummy:0 (ocf::heartbeat:Stateful): Stopped ocf_msdummy:1 (ocf::heartbeat:Stateful): Stopped ocf_msdummy:2 (ocf::heartbeat:Stateful): Slave c001n04 ocf_msdummy:3 (ocf::heartbeat:Stateful): Slave c001n04 ocf_msdummy:4 (ocf::heartbeat:Stateful): Slave c001n05 ocf_msdummy:5 (ocf::heartbeat:Stateful): Slave c001n05 ocf_msdummy:6 (ocf::heartbeat:Stateful): Slave c001n06 ocf_msdummy:7 (ocf::heartbeat:Stateful): Slave c001n06 ocf_msdummy:8 (ocf::heartbeat:Stateful): Slave c001n07 ocf_msdummy:9 (ocf::heartbeat:Stateful): Slave c001n07 ocf_msdummy:10 (ocf::heartbeat:Stateful): Slave c001n02 ocf_msdummy:11 (ocf::heartbeat:Stateful): Slave c001n02 Transition Summary: * Shutdown c001n07 * Shutdown c001n06 * Shutdown c001n05 * Shutdown c001n04 * Shutdown c001n03 * Shutdown c001n02 * Stop ocf_192.168.100.181 (c001n02) due to node availability * Stop heartbeat_192.168.100.182 (c001n02) due to node availability * Stop ocf_192.168.100.183 (c001n02) due to node availability * Stop lsb_dummy ( c001n04 ) due to node availability * Stop rsc_c001n03 ( c001n05 ) due to node availability * Stop rsc_c001n02 ( c001n02 ) due to node availability * Stop rsc_c001n04 ( c001n04 ) due to node availability * Stop rsc_c001n05 ( c001n05 ) due to node availability * Stop rsc_c001n06 ( c001n06 ) due to node availability * Stop 
rsc_c001n07 ( c001n07 ) due to node availability * Stop child_DoFencing:0 (c001n02) due to node availability * Stop child_DoFencing:1 (c001n04) due to node availability * Stop child_DoFencing:2 (c001n05) due to node availability * Stop child_DoFencing:3 (c001n06) due to node availability * Stop child_DoFencing:4 (c001n07) due to node availability * Stop ocf_msdummy:2 ( Slave c001n04 ) due to node availability * Stop ocf_msdummy:3 ( Slave c001n04 ) due to node availability * Stop ocf_msdummy:4 ( Slave c001n05 ) due to node availability * Stop ocf_msdummy:5 ( Slave c001n05 ) due to node availability * Stop ocf_msdummy:6 ( Slave c001n06 ) due to node availability * Stop ocf_msdummy:7 ( Slave c001n06 ) due to node availability * Stop ocf_msdummy:8 ( Slave c001n07 ) due to node availability * Stop ocf_msdummy:9 ( Slave c001n07 ) due to node availability * Stop ocf_msdummy:10 ( Slave c001n02 ) due to node availability * Stop ocf_msdummy:11 ( Slave c001n02 ) due to node availability Executing cluster transition: * Pseudo action: group-1_stop_0 * Resource action: ocf_192.168.100.183 stop on c001n02 * Resource action: lsb_dummy stop on c001n04 * Resource action: rsc_c001n03 stop on c001n05 * Resource action: rsc_c001n02 stop on c001n02 * Resource action: rsc_c001n04 stop on c001n04 * Resource action: rsc_c001n05 stop on c001n05 * Resource action: rsc_c001n06 stop on c001n06 * Resource action: rsc_c001n07 stop on c001n07 * Pseudo action: DoFencing_stop_0 * Pseudo action: master_rsc_1_stop_0 * Resource action: heartbeat_192.168.100.182 stop on c001n02 * Resource action: child_DoFencing:1 stop on c001n02 * Resource action: child_DoFencing:2 stop on c001n04 * Resource action: child_DoFencing:3 stop on c001n05 * Resource action: child_DoFencing:4 stop on c001n06 * Resource action: child_DoFencing:5 stop on c001n07 * Pseudo action: DoFencing_stopped_0 * Resource action: ocf_msdummy:2 stop on c001n04 * Resource action: ocf_msdummy:3 stop on c001n04 * Resource action: ocf_msdummy:4 stop on c001n05 * Resource action: ocf_msdummy:5 stop on c001n05 * Resource action: ocf_msdummy:6 stop on c001n06 * Resource action: ocf_msdummy:7 stop on c001n06 * Resource action: ocf_msdummy:8 stop on c001n07 * Resource action: ocf_msdummy:9 stop on c001n07 * Resource action: ocf_msdummy:10 stop on c001n02 * Resource action: ocf_msdummy:11 stop on c001n02 * Pseudo action: master_rsc_1_stopped_0 * Cluster action: do_shutdown on c001n07 * Cluster action: do_shutdown on c001n06 * Cluster action: do_shutdown on c001n05 * Cluster action: do_shutdown on c001n04 * Resource action: ocf_192.168.100.181 stop on c001n02 * Cluster action: do_shutdown on c001n02 * Pseudo action: all_stopped * Pseudo action: group-1_stopped_0 * Cluster action: do_shutdown on c001n03 Revised cluster status: Online: [ c001n02 c001n03 c001n04 c001n05 c001n06 c001n07 ] DcIPaddr (ocf::heartbeat:IPaddr): Stopped Resource Group: group-1 ocf_192.168.100.181 (ocf::heartbeat:IPaddr): Stopped heartbeat_192.168.100.182 (ocf::heartbeat:IPaddr): Stopped ocf_192.168.100.183 (ocf::heartbeat:IPaddr): Stopped lsb_dummy (lsb:/usr/lib/heartbeat/cts/LSBDummy): Stopped rsc_c001n03 (ocf::heartbeat:IPaddr): Stopped rsc_c001n02 (ocf::heartbeat:IPaddr): Stopped rsc_c001n04 (ocf::heartbeat:IPaddr): Stopped rsc_c001n05 (ocf::heartbeat:IPaddr): Stopped rsc_c001n06 (ocf::heartbeat:IPaddr): Stopped rsc_c001n07 (ocf::heartbeat:IPaddr): Stopped Clone Set: DoFencing [child_DoFencing] Stopped: [ c001n02 c001n03 c001n04 c001n05 c001n06 c001n07 ] - Master/Slave Set: master_rsc_1 [ocf_msdummy] 
(unique) + Clone Set: master_rsc_1 [ocf_msdummy] (promotable) (unique) ocf_msdummy:0 (ocf::heartbeat:Stateful): Stopped ocf_msdummy:1 (ocf::heartbeat:Stateful): Stopped ocf_msdummy:2 (ocf::heartbeat:Stateful): Stopped ocf_msdummy:3 (ocf::heartbeat:Stateful): Stopped ocf_msdummy:4 (ocf::heartbeat:Stateful): Stopped ocf_msdummy:5 (ocf::heartbeat:Stateful): Stopped ocf_msdummy:6 (ocf::heartbeat:Stateful): Stopped ocf_msdummy:7 (ocf::heartbeat:Stateful): Stopped ocf_msdummy:8 (ocf::heartbeat:Stateful): Stopped ocf_msdummy:9 (ocf::heartbeat:Stateful): Stopped ocf_msdummy:10 (ocf::heartbeat:Stateful): Stopped ocf_msdummy:11 (ocf::heartbeat:Stateful): Stopped diff --git a/cts/scheduler/master-0.summary b/cts/scheduler/master-0.summary index 6d2bd02a12..43fc587e7a 100644 --- a/cts/scheduler/master-0.summary +++ b/cts/scheduler/master-0.summary @@ -1,45 +1,45 @@ Current cluster status: Online: [ node1 node2 ] - Master/Slave Set: rsc1 [child_rsc1] (unique) + Clone Set: rsc1 [child_rsc1] (promotable) (unique) child_rsc1:0 (ocf::heartbeat:apache): Stopped child_rsc1:1 (ocf::heartbeat:apache): Stopped child_rsc1:2 (ocf::heartbeat:apache): Stopped child_rsc1:3 (ocf::heartbeat:apache): Stopped child_rsc1:4 (ocf::heartbeat:apache): Stopped Transition Summary: * Start child_rsc1:0 (node1) * Start child_rsc1:1 (node2) * Start child_rsc1:2 (node1) * Start child_rsc1:3 (node2) Executing cluster transition: * Resource action: child_rsc1:0 monitor on node2 * Resource action: child_rsc1:0 monitor on node1 * Resource action: child_rsc1:1 monitor on node2 * Resource action: child_rsc1:1 monitor on node1 * Resource action: child_rsc1:2 monitor on node2 * Resource action: child_rsc1:2 monitor on node1 * Resource action: child_rsc1:3 monitor on node2 * Resource action: child_rsc1:3 monitor on node1 * Resource action: child_rsc1:4 monitor on node2 * Resource action: child_rsc1:4 monitor on node1 * Pseudo action: rsc1_start_0 * Resource action: child_rsc1:0 start on node1 * Resource action: child_rsc1:1 start on node2 * Resource action: child_rsc1:2 start on node1 * Resource action: child_rsc1:3 start on node2 * Pseudo action: rsc1_running_0 Revised cluster status: Online: [ node1 node2 ] - Master/Slave Set: rsc1 [child_rsc1] (unique) + Clone Set: rsc1 [child_rsc1] (promotable) (unique) child_rsc1:0 (ocf::heartbeat:apache): Slave node1 child_rsc1:1 (ocf::heartbeat:apache): Slave node2 child_rsc1:2 (ocf::heartbeat:apache): Slave node1 child_rsc1:3 (ocf::heartbeat:apache): Slave node2 child_rsc1:4 (ocf::heartbeat:apache): Stopped diff --git a/cts/scheduler/master-1.summary b/cts/scheduler/master-1.summary index b0e502585a..53ec7dc6f0 100644 --- a/cts/scheduler/master-1.summary +++ b/cts/scheduler/master-1.summary @@ -1,48 +1,48 @@ Current cluster status: Online: [ node1 node2 ] - Master/Slave Set: rsc1 [child_rsc1] (unique) + Clone Set: rsc1 [child_rsc1] (promotable) (unique) child_rsc1:0 (ocf::heartbeat:apache): Stopped child_rsc1:1 (ocf::heartbeat:apache): Stopped child_rsc1:2 (ocf::heartbeat:apache): Stopped child_rsc1:3 (ocf::heartbeat:apache): Stopped child_rsc1:4 (ocf::heartbeat:apache): Stopped Transition Summary: * Start child_rsc1:0 (node1) * Promote child_rsc1:1 (Stopped -> Master node2) * Start child_rsc1:2 (node1) * Start child_rsc1:3 (node2) Executing cluster transition: * Resource action: child_rsc1:0 monitor on node2 * Resource action: child_rsc1:0 monitor on node1 * Resource action: child_rsc1:1 monitor on node2 * Resource action: child_rsc1:1 monitor on node1 * Resource action: child_rsc1:2 monitor on 
node2 * Resource action: child_rsc1:2 monitor on node1 * Resource action: child_rsc1:3 monitor on node2 * Resource action: child_rsc1:3 monitor on node1 * Resource action: child_rsc1:4 monitor on node2 * Resource action: child_rsc1:4 monitor on node1 * Pseudo action: rsc1_start_0 * Resource action: child_rsc1:0 start on node1 * Resource action: child_rsc1:1 start on node2 * Resource action: child_rsc1:2 start on node1 * Resource action: child_rsc1:3 start on node2 * Pseudo action: rsc1_running_0 * Pseudo action: rsc1_promote_0 * Resource action: child_rsc1:1 promote on node2 * Pseudo action: rsc1_promoted_0 Revised cluster status: Online: [ node1 node2 ] - Master/Slave Set: rsc1 [child_rsc1] (unique) + Clone Set: rsc1 [child_rsc1] (promotable) (unique) child_rsc1:0 (ocf::heartbeat:apache): Slave node1 child_rsc1:1 (ocf::heartbeat:apache): Master node2 child_rsc1:2 (ocf::heartbeat:apache): Slave node1 child_rsc1:3 (ocf::heartbeat:apache): Slave node2 child_rsc1:4 (ocf::heartbeat:apache): Stopped diff --git a/cts/scheduler/master-10.summary b/cts/scheduler/master-10.summary index c73fbda6a0..60c039508e 100644 --- a/cts/scheduler/master-10.summary +++ b/cts/scheduler/master-10.summary @@ -1,73 +1,73 @@ Current cluster status: Online: [ node1 node2 ] - Master/Slave Set: rsc1 [child_rsc1] (unique) + Clone Set: rsc1 [child_rsc1] (promotable) (unique) child_rsc1:0 (ocf::heartbeat:apache): Stopped child_rsc1:1 (ocf::heartbeat:apache): Stopped child_rsc1:2 (ocf::heartbeat:apache): Stopped child_rsc1:3 (ocf::heartbeat:apache): Stopped child_rsc1:4 (ocf::heartbeat:apache): Stopped Transition Summary: * Promote child_rsc1:0 (Stopped -> Master node1) * Start child_rsc1:1 (node2) * Start child_rsc1:2 (node1) * Promote child_rsc1:3 ( Stopped -> Master node2 ) Executing cluster transition: * Resource action: child_rsc1:0 monitor on node2 * Resource action: child_rsc1:0 monitor on node1 * Resource action: child_rsc1:1 monitor on node2 * Resource action: child_rsc1:1 monitor on node1 * Resource action: child_rsc1:2 monitor on node2 * Resource action: child_rsc1:2 monitor on node1 * Resource action: child_rsc1:3 monitor on node2 * Resource action: child_rsc1:3 monitor on node1 * Resource action: child_rsc1:4 monitor on node2 * Resource action: child_rsc1:4 monitor on node1 * Pseudo action: rsc1_pre_notify_start_0 * Pseudo action: rsc1_confirmed-pre_notify_start_0 * Pseudo action: rsc1_start_0 * Resource action: child_rsc1:0 start on node1 * Resource action: child_rsc1:1 start on node2 * Resource action: child_rsc1:2 start on node1 * Resource action: child_rsc1:3 start on node2 * Pseudo action: rsc1_running_0 * Pseudo action: rsc1_post_notify_running_0 * Resource action: child_rsc1:0 notify on node1 * Resource action: child_rsc1:1 notify on node2 * Resource action: child_rsc1:2 notify on node1 * Resource action: child_rsc1:3 notify on node2 * Pseudo action: rsc1_confirmed-post_notify_running_0 * Pseudo action: rsc1_pre_notify_promote_0 * Resource action: child_rsc1:0 notify on node1 * Resource action: child_rsc1:1 notify on node2 * Resource action: child_rsc1:2 notify on node1 * Resource action: child_rsc1:3 notify on node2 * Pseudo action: rsc1_confirmed-pre_notify_promote_0 * Pseudo action: rsc1_promote_0 * Resource action: child_rsc1:0 promote on node1 * Resource action: child_rsc1:3 promote on node2 * Pseudo action: rsc1_promoted_0 * Pseudo action: rsc1_post_notify_promoted_0 * Resource action: child_rsc1:0 notify on node1 * Resource action: child_rsc1:1 notify on node2 * Resource action: child_rsc1:2 
notify on node1 * Resource action: child_rsc1:3 notify on node2 * Pseudo action: rsc1_confirmed-post_notify_promoted_0 * Resource action: child_rsc1:0 monitor=11000 on node1 * Resource action: child_rsc1:1 monitor=1000 on node2 * Resource action: child_rsc1:2 monitor=1000 on node1 * Resource action: child_rsc1:3 monitor=11000 on node2 Revised cluster status: Online: [ node1 node2 ] - Master/Slave Set: rsc1 [child_rsc1] (unique) + Clone Set: rsc1 [child_rsc1] (promotable) (unique) child_rsc1:0 (ocf::heartbeat:apache): Master node1 child_rsc1:1 (ocf::heartbeat:apache): Slave node2 child_rsc1:2 (ocf::heartbeat:apache): Slave node1 child_rsc1:3 (ocf::heartbeat:apache): Master node2 child_rsc1:4 (ocf::heartbeat:apache): Stopped diff --git a/cts/scheduler/master-11.summary b/cts/scheduler/master-11.summary index a5ab8c2129..dc43ebac79 100644 --- a/cts/scheduler/master-11.summary +++ b/cts/scheduler/master-11.summary @@ -1,38 +1,38 @@ Current cluster status: Online: [ node1 node2 ] simple-rsc (ocf::heartbeat:apache): Stopped - Master/Slave Set: rsc1 [child_rsc1] (unique) + Clone Set: rsc1 [child_rsc1] (promotable) (unique) child_rsc1:0 (ocf::heartbeat:apache): Stopped child_rsc1:1 (ocf::heartbeat:apache): Stopped Transition Summary: * Start simple-rsc (node2) * Start child_rsc1:0 (node1) * Promote child_rsc1:1 (Stopped -> Master node2) Executing cluster transition: * Resource action: simple-rsc monitor on node2 * Resource action: simple-rsc monitor on node1 * Resource action: child_rsc1:0 monitor on node2 * Resource action: child_rsc1:0 monitor on node1 * Resource action: child_rsc1:1 monitor on node2 * Resource action: child_rsc1:1 monitor on node1 * Pseudo action: rsc1_start_0 * Resource action: simple-rsc start on node2 * Resource action: child_rsc1:0 start on node1 * Resource action: child_rsc1:1 start on node2 * Pseudo action: rsc1_running_0 * Pseudo action: rsc1_promote_0 * Resource action: child_rsc1:1 promote on node2 * Pseudo action: rsc1_promoted_0 Revised cluster status: Online: [ node1 node2 ] simple-rsc (ocf::heartbeat:apache): Started node2 - Master/Slave Set: rsc1 [child_rsc1] (unique) + Clone Set: rsc1 [child_rsc1] (promotable) (unique) child_rsc1:0 (ocf::heartbeat:apache): Slave node1 child_rsc1:1 (ocf::heartbeat:apache): Master node2 diff --git a/cts/scheduler/master-12.summary b/cts/scheduler/master-12.summary index 59f2a3b45b..08e03ac8cb 100644 --- a/cts/scheduler/master-12.summary +++ b/cts/scheduler/master-12.summary @@ -1,31 +1,31 @@ Current cluster status: Online: [ sel3 sel4 ] - Master/Slave Set: ms-drbd0 [drbd0] + Clone Set: ms-drbd0 [drbd0] (promotable) Masters: [ sel3 ] Slaves: [ sel4 ] - Master/Slave Set: ms-sf [sf] (unique) + Clone Set: ms-sf [sf] (promotable) (unique) sf:0 (ocf::heartbeat:Stateful): Slave sel3 sf:1 (ocf::heartbeat:Stateful): Slave sel4 fs0 (ocf::heartbeat:Filesystem): Started sel3 Transition Summary: * Promote sf:0 (Slave -> Master sel3) Executing cluster transition: * Pseudo action: ms-sf_promote_0 * Resource action: sf:0 promote on sel3 * Pseudo action: ms-sf_promoted_0 Revised cluster status: Online: [ sel3 sel4 ] - Master/Slave Set: ms-drbd0 [drbd0] + Clone Set: ms-drbd0 [drbd0] (promotable) Masters: [ sel3 ] Slaves: [ sel4 ] - Master/Slave Set: ms-sf [sf] (unique) + Clone Set: ms-sf [sf] (promotable) (unique) sf:0 (ocf::heartbeat:Stateful): Master sel3 sf:1 (ocf::heartbeat:Stateful): Slave sel4 fs0 (ocf::heartbeat:Filesystem): Started sel3 diff --git a/cts/scheduler/master-13.summary b/cts/scheduler/master-13.summary index 
1488a48fc4..19db0b7348 100644 --- a/cts/scheduler/master-13.summary +++ b/cts/scheduler/master-13.summary @@ -1,60 +1,60 @@ Current cluster status: Online: [ frigg odin ] - Master/Slave Set: ms_drbd [drbd0] + Clone Set: ms_drbd [drbd0] (promotable) Masters: [ frigg ] Slaves: [ odin ] Resource Group: group IPaddr0 (ocf::heartbeat:IPaddr): Stopped MailTo (ocf::heartbeat:MailTo): Stopped Transition Summary: * Promote drbd0:0 (Slave -> Master odin) * Demote drbd0:1 (Master -> Slave frigg) * Start IPaddr0 (odin) * Start MailTo (odin) Executing cluster transition: * Resource action: drbd0:1 cancel=12000 on odin * Resource action: drbd0:0 cancel=10000 on frigg * Pseudo action: ms_drbd_pre_notify_demote_0 * Resource action: drbd0:1 notify on odin * Resource action: drbd0:0 notify on frigg * Pseudo action: ms_drbd_confirmed-pre_notify_demote_0 * Pseudo action: ms_drbd_demote_0 * Resource action: drbd0:0 demote on frigg * Pseudo action: ms_drbd_demoted_0 * Pseudo action: ms_drbd_post_notify_demoted_0 * Resource action: drbd0:1 notify on odin * Resource action: drbd0:0 notify on frigg * Pseudo action: ms_drbd_confirmed-post_notify_demoted_0 * Pseudo action: ms_drbd_pre_notify_promote_0 * Resource action: drbd0:1 notify on odin * Resource action: drbd0:0 notify on frigg * Pseudo action: ms_drbd_confirmed-pre_notify_promote_0 * Pseudo action: ms_drbd_promote_0 * Resource action: drbd0:1 promote on odin * Pseudo action: ms_drbd_promoted_0 * Pseudo action: ms_drbd_post_notify_promoted_0 * Resource action: drbd0:1 notify on odin * Resource action: drbd0:0 notify on frigg * Pseudo action: ms_drbd_confirmed-post_notify_promoted_0 * Pseudo action: group_start_0 * Resource action: IPaddr0 start on odin * Resource action: MailTo start on odin * Resource action: drbd0:1 monitor=10000 on odin * Resource action: drbd0:0 monitor=12000 on frigg * Pseudo action: group_running_0 * Resource action: IPaddr0 monitor=5000 on odin Revised cluster status: Online: [ frigg odin ] - Master/Slave Set: ms_drbd [drbd0] + Clone Set: ms_drbd [drbd0] (promotable) Masters: [ odin ] Slaves: [ frigg ] Resource Group: group IPaddr0 (ocf::heartbeat:IPaddr): Started odin MailTo (ocf::heartbeat:MailTo): Started odin diff --git a/cts/scheduler/master-2.summary b/cts/scheduler/master-2.summary index 6d872b46ab..a21193887e 100644 --- a/cts/scheduler/master-2.summary +++ b/cts/scheduler/master-2.summary @@ -1,69 +1,69 @@ Current cluster status: Online: [ node1 node2 ] - Master/Slave Set: rsc1 [child_rsc1] (unique) + Clone Set: rsc1 [child_rsc1] (promotable) (unique) child_rsc1:0 (ocf::heartbeat:apache): Stopped child_rsc1:1 (ocf::heartbeat:apache): Stopped child_rsc1:2 (ocf::heartbeat:apache): Stopped child_rsc1:3 (ocf::heartbeat:apache): Stopped child_rsc1:4 (ocf::heartbeat:apache): Stopped Transition Summary: * Promote child_rsc1:0 (Stopped -> Master node1) * Start child_rsc1:1 (node2) * Start child_rsc1:2 (node1) * Promote child_rsc1:3 ( Stopped -> Master node2 ) Executing cluster transition: * Resource action: child_rsc1:0 monitor on node2 * Resource action: child_rsc1:0 monitor on node1 * Resource action: child_rsc1:1 monitor on node2 * Resource action: child_rsc1:1 monitor on node1 * Resource action: child_rsc1:2 monitor on node2 * Resource action: child_rsc1:2 monitor on node1 * Resource action: child_rsc1:3 monitor on node2 * Resource action: child_rsc1:3 monitor on node1 * Resource action: child_rsc1:4 monitor on node2 * Resource action: child_rsc1:4 monitor on node1 * Pseudo action: rsc1_pre_notify_start_0 * Pseudo action: 
rsc1_confirmed-pre_notify_start_0 * Pseudo action: rsc1_start_0 * Resource action: child_rsc1:0 start on node1 * Resource action: child_rsc1:1 start on node2 * Resource action: child_rsc1:2 start on node1 * Resource action: child_rsc1:3 start on node2 * Pseudo action: rsc1_running_0 * Pseudo action: rsc1_post_notify_running_0 * Resource action: child_rsc1:0 notify on node1 * Resource action: child_rsc1:1 notify on node2 * Resource action: child_rsc1:2 notify on node1 * Resource action: child_rsc1:3 notify on node2 * Pseudo action: rsc1_confirmed-post_notify_running_0 * Pseudo action: rsc1_pre_notify_promote_0 * Resource action: child_rsc1:0 notify on node1 * Resource action: child_rsc1:1 notify on node2 * Resource action: child_rsc1:2 notify on node1 * Resource action: child_rsc1:3 notify on node2 * Pseudo action: rsc1_confirmed-pre_notify_promote_0 * Pseudo action: rsc1_promote_0 * Resource action: child_rsc1:0 promote on node1 * Resource action: child_rsc1:3 promote on node2 * Pseudo action: rsc1_promoted_0 * Pseudo action: rsc1_post_notify_promoted_0 * Resource action: child_rsc1:0 notify on node1 * Resource action: child_rsc1:1 notify on node2 * Resource action: child_rsc1:2 notify on node1 * Resource action: child_rsc1:3 notify on node2 * Pseudo action: rsc1_confirmed-post_notify_promoted_0 Revised cluster status: Online: [ node1 node2 ] - Master/Slave Set: rsc1 [child_rsc1] (unique) + Clone Set: rsc1 [child_rsc1] (promotable) (unique) child_rsc1:0 (ocf::heartbeat:apache): Master node1 child_rsc1:1 (ocf::heartbeat:apache): Slave node2 child_rsc1:2 (ocf::heartbeat:apache): Slave node1 child_rsc1:3 (ocf::heartbeat:apache): Master node2 child_rsc1:4 (ocf::heartbeat:apache): Stopped diff --git a/cts/scheduler/master-3.summary b/cts/scheduler/master-3.summary index b0e502585a..53ec7dc6f0 100644 --- a/cts/scheduler/master-3.summary +++ b/cts/scheduler/master-3.summary @@ -1,48 +1,48 @@ Current cluster status: Online: [ node1 node2 ] - Master/Slave Set: rsc1 [child_rsc1] (unique) + Clone Set: rsc1 [child_rsc1] (promotable) (unique) child_rsc1:0 (ocf::heartbeat:apache): Stopped child_rsc1:1 (ocf::heartbeat:apache): Stopped child_rsc1:2 (ocf::heartbeat:apache): Stopped child_rsc1:3 (ocf::heartbeat:apache): Stopped child_rsc1:4 (ocf::heartbeat:apache): Stopped Transition Summary: * Start child_rsc1:0 (node1) * Promote child_rsc1:1 (Stopped -> Master node2) * Start child_rsc1:2 (node1) * Start child_rsc1:3 (node2) Executing cluster transition: * Resource action: child_rsc1:0 monitor on node2 * Resource action: child_rsc1:0 monitor on node1 * Resource action: child_rsc1:1 monitor on node2 * Resource action: child_rsc1:1 monitor on node1 * Resource action: child_rsc1:2 monitor on node2 * Resource action: child_rsc1:2 monitor on node1 * Resource action: child_rsc1:3 monitor on node2 * Resource action: child_rsc1:3 monitor on node1 * Resource action: child_rsc1:4 monitor on node2 * Resource action: child_rsc1:4 monitor on node1 * Pseudo action: rsc1_start_0 * Resource action: child_rsc1:0 start on node1 * Resource action: child_rsc1:1 start on node2 * Resource action: child_rsc1:2 start on node1 * Resource action: child_rsc1:3 start on node2 * Pseudo action: rsc1_running_0 * Pseudo action: rsc1_promote_0 * Resource action: child_rsc1:1 promote on node2 * Pseudo action: rsc1_promoted_0 Revised cluster status: Online: [ node1 node2 ] - Master/Slave Set: rsc1 [child_rsc1] (unique) + Clone Set: rsc1 [child_rsc1] (promotable) (unique) child_rsc1:0 (ocf::heartbeat:apache): Slave node1 child_rsc1:1 
(ocf::heartbeat:apache): Master node2 child_rsc1:2 (ocf::heartbeat:apache): Slave node1 child_rsc1:3 (ocf::heartbeat:apache): Slave node2 child_rsc1:4 (ocf::heartbeat:apache): Stopped diff --git a/cts/scheduler/master-4.summary b/cts/scheduler/master-4.summary index 97072e4e93..741ec38dd7 100644 --- a/cts/scheduler/master-4.summary +++ b/cts/scheduler/master-4.summary @@ -1,92 +1,92 @@ Current cluster status: Online: [ c001n01 c001n02 c001n03 c001n08 ] DcIPaddr (ocf::heartbeat:IPaddr): Started c001n08 Resource Group: group-1 ocf_child (ocf::heartbeat:IPaddr): Started c001n03 heartbeat_child (ocf::heartbeat:IPaddr): Started c001n03 lsb_dummy (lsb:/usr/lib/heartbeat/cts/LSBDummy): Started c001n01 rsc_c001n08 (ocf::heartbeat:IPaddr): Started c001n08 rsc_c001n02 (ocf::heartbeat:IPaddr): Started c001n02 rsc_c001n03 (ocf::heartbeat:IPaddr): Started c001n03 rsc_c001n01 (ocf::heartbeat:IPaddr): Started c001n01 Clone Set: DoFencing [child_DoFencing] (unique) child_DoFencing:0 (stonith:ssh): Started c001n08 child_DoFencing:1 (stonith:ssh): Started c001n03 child_DoFencing:2 (stonith:ssh): Started c001n01 child_DoFencing:3 (stonith:ssh): Started c001n02 - Master/Slave Set: master_rsc_1 [ocf_msdummy] (unique) + Clone Set: master_rsc_1 [ocf_msdummy] (promotable) (unique) ocf_msdummy:0 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n08 ocf_msdummy:1 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n03 ocf_msdummy:2 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n01 ocf_msdummy:3 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n08 ocf_msdummy:4 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n03 ocf_msdummy:5 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n01 ocf_msdummy:6 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n02 ocf_msdummy:7 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n02 Transition Summary: * Promote ocf_msdummy:0 (Slave -> Master c001n08) Executing cluster transition: * Resource action: child_DoFencing:1 monitor on c001n08 * Resource action: child_DoFencing:1 monitor on c001n02 * Resource action: child_DoFencing:1 monitor on c001n01 * Resource action: child_DoFencing:2 monitor on c001n08 * Resource action: child_DoFencing:2 monitor on c001n03 * Resource action: child_DoFencing:2 monitor on c001n02 * Resource action: child_DoFencing:3 monitor on c001n08 * Resource action: child_DoFencing:3 monitor on c001n03 * Resource action: child_DoFencing:3 monitor on c001n01 * Resource action: ocf_msdummy:0 cancel=5000 on c001n08 * Resource action: ocf_msdummy:2 monitor on c001n08 * Resource action: ocf_msdummy:2 monitor on c001n03 * Resource action: ocf_msdummy:2 monitor on c001n02 * Resource action: ocf_msdummy:3 monitor on c001n03 * Resource action: ocf_msdummy:3 monitor on c001n02 * Resource action: ocf_msdummy:3 monitor on c001n01 * Resource action: ocf_msdummy:4 monitor on c001n08 * Resource action: ocf_msdummy:4 monitor on c001n02 * Resource action: ocf_msdummy:4 monitor on c001n01 * Resource action: ocf_msdummy:5 monitor on c001n08 * Resource action: ocf_msdummy:5 monitor on c001n03 * Resource action: ocf_msdummy:5 monitor on c001n02 * Resource action: ocf_msdummy:6 monitor on c001n08 * Resource action: ocf_msdummy:6 monitor on c001n03 * Resource action: ocf_msdummy:6 monitor on c001n01 * Resource action: ocf_msdummy:7 monitor on c001n08 * Resource action: ocf_msdummy:7 monitor on c001n03 * Resource action: ocf_msdummy:7 monitor on c001n01 * Pseudo action: master_rsc_1_promote_0 
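master-4 shows how recurring monitors track roles: the instance's Slave-interval monitor (cancel=5000) is cancelled before the promote, and a Master-interval monitor (monitor=6000) is scheduled after it, so each role is watched by a distinct operation. A sketch of that reschedule, assuming an illustrative interval table rather than the test's actual configuration:

```python
# Swap the recurring monitor when an instance is promoted: cancel the
# Slave-role interval first, promote, then schedule the Master-role one.
# The interval table is illustrative, not read from the test configuration.
MONITOR_MS = {"Slave": 5000, "Master": 6000}

def reschedule_for_promotion(rsc: str, node: str) -> list[str]:
    return [
        f"{rsc} cancel={MONITOR_MS['Slave']} on {node}",
        f"{rsc} promote on {node}",
        f"{rsc} monitor={MONITOR_MS['Master']} on {node}",
    ]

for step in reschedule_for_promotion("ocf_msdummy:0", "c001n08"):
    print(step)
```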
* Resource action: ocf_msdummy:0 promote on c001n08 * Pseudo action: master_rsc_1_promoted_0 * Resource action: ocf_msdummy:0 monitor=6000 on c001n08 Revised cluster status: Online: [ c001n01 c001n02 c001n03 c001n08 ] DcIPaddr (ocf::heartbeat:IPaddr): Started c001n08 Resource Group: group-1 ocf_child (ocf::heartbeat:IPaddr): Started c001n03 heartbeat_child (ocf::heartbeat:IPaddr): Started c001n03 lsb_dummy (lsb:/usr/lib/heartbeat/cts/LSBDummy): Started c001n01 rsc_c001n08 (ocf::heartbeat:IPaddr): Started c001n08 rsc_c001n02 (ocf::heartbeat:IPaddr): Started c001n02 rsc_c001n03 (ocf::heartbeat:IPaddr): Started c001n03 rsc_c001n01 (ocf::heartbeat:IPaddr): Started c001n01 Clone Set: DoFencing [child_DoFencing] (unique) child_DoFencing:0 (stonith:ssh): Started c001n08 child_DoFencing:1 (stonith:ssh): Started c001n03 child_DoFencing:2 (stonith:ssh): Started c001n01 child_DoFencing:3 (stonith:ssh): Started c001n02 - Master/Slave Set: master_rsc_1 [ocf_msdummy] (unique) + Clone Set: master_rsc_1 [ocf_msdummy] (promotable) (unique) ocf_msdummy:0 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Master c001n08 ocf_msdummy:1 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n03 ocf_msdummy:2 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n01 ocf_msdummy:3 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n08 ocf_msdummy:4 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n03 ocf_msdummy:5 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n01 ocf_msdummy:6 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n02 ocf_msdummy:7 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n02 diff --git a/cts/scheduler/master-5.summary b/cts/scheduler/master-5.summary index 838bd959c3..e1a0db0301 100644 --- a/cts/scheduler/master-5.summary +++ b/cts/scheduler/master-5.summary @@ -1,86 +1,86 @@ Current cluster status: Online: [ c001n01 c001n02 c001n03 c001n08 ] DcIPaddr (ocf::heartbeat:IPaddr): Started c001n08 Resource Group: group-1 ocf_child (ocf::heartbeat:IPaddr): Started c001n03 heartbeat_child (ocf::heartbeat:IPaddr): Started c001n03 lsb_dummy (lsb:/usr/lib/heartbeat/cts/LSBDummy): Started c001n01 rsc_c001n08 (ocf::heartbeat:IPaddr): Started c001n08 rsc_c001n02 (ocf::heartbeat:IPaddr): Started c001n02 rsc_c001n03 (ocf::heartbeat:IPaddr): Started c001n03 rsc_c001n01 (ocf::heartbeat:IPaddr): Started c001n01 Clone Set: DoFencing [child_DoFencing] (unique) child_DoFencing:0 (stonith:ssh): Started c001n08 child_DoFencing:1 (stonith:ssh): Started c001n03 child_DoFencing:2 (stonith:ssh): Started c001n01 child_DoFencing:3 (stonith:ssh): Started c001n02 - Master/Slave Set: master_rsc_1 [ocf_msdummy] (unique) + Clone Set: master_rsc_1 [ocf_msdummy] (promotable) (unique) ocf_msdummy:0 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Master c001n08 ocf_msdummy:1 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n03 ocf_msdummy:2 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n01 ocf_msdummy:3 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n08 ocf_msdummy:4 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n03 ocf_msdummy:5 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n01 ocf_msdummy:6 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n02 ocf_msdummy:7 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n02 Transition Summary: Executing cluster transition: * Resource action: child_DoFencing:1 monitor on c001n08 * Resource action: 
child_DoFencing:1 monitor on c001n02 * Resource action: child_DoFencing:1 monitor on c001n01 * Resource action: child_DoFencing:2 monitor on c001n08 * Resource action: child_DoFencing:2 monitor on c001n03 * Resource action: child_DoFencing:2 monitor on c001n02 * Resource action: child_DoFencing:3 monitor on c001n08 * Resource action: child_DoFencing:3 monitor on c001n03 * Resource action: child_DoFencing:3 monitor on c001n01 * Resource action: ocf_msdummy:2 monitor on c001n08 * Resource action: ocf_msdummy:2 monitor on c001n03 * Resource action: ocf_msdummy:2 monitor on c001n02 * Resource action: ocf_msdummy:3 monitor on c001n03 * Resource action: ocf_msdummy:3 monitor on c001n02 * Resource action: ocf_msdummy:3 monitor on c001n01 * Resource action: ocf_msdummy:4 monitor on c001n08 * Resource action: ocf_msdummy:4 monitor on c001n02 * Resource action: ocf_msdummy:4 monitor on c001n01 * Resource action: ocf_msdummy:5 monitor on c001n08 * Resource action: ocf_msdummy:5 monitor on c001n03 * Resource action: ocf_msdummy:5 monitor on c001n02 * Resource action: ocf_msdummy:6 monitor on c001n08 * Resource action: ocf_msdummy:6 monitor on c001n03 * Resource action: ocf_msdummy:6 monitor on c001n01 * Resource action: ocf_msdummy:7 monitor on c001n08 * Resource action: ocf_msdummy:7 monitor on c001n03 * Resource action: ocf_msdummy:7 monitor on c001n01 Revised cluster status: Online: [ c001n01 c001n02 c001n03 c001n08 ] DcIPaddr (ocf::heartbeat:IPaddr): Started c001n08 Resource Group: group-1 ocf_child (ocf::heartbeat:IPaddr): Started c001n03 heartbeat_child (ocf::heartbeat:IPaddr): Started c001n03 lsb_dummy (lsb:/usr/lib/heartbeat/cts/LSBDummy): Started c001n01 rsc_c001n08 (ocf::heartbeat:IPaddr): Started c001n08 rsc_c001n02 (ocf::heartbeat:IPaddr): Started c001n02 rsc_c001n03 (ocf::heartbeat:IPaddr): Started c001n03 rsc_c001n01 (ocf::heartbeat:IPaddr): Started c001n01 Clone Set: DoFencing [child_DoFencing] (unique) child_DoFencing:0 (stonith:ssh): Started c001n08 child_DoFencing:1 (stonith:ssh): Started c001n03 child_DoFencing:2 (stonith:ssh): Started c001n01 child_DoFencing:3 (stonith:ssh): Started c001n02 - Master/Slave Set: master_rsc_1 [ocf_msdummy] (unique) + Clone Set: master_rsc_1 [ocf_msdummy] (promotable) (unique) ocf_msdummy:0 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Master c001n08 ocf_msdummy:1 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n03 ocf_msdummy:2 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n01 ocf_msdummy:3 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n08 ocf_msdummy:4 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n03 ocf_msdummy:5 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n01 ocf_msdummy:6 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n02 ocf_msdummy:7 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n02 diff --git a/cts/scheduler/master-6.summary b/cts/scheduler/master-6.summary index e8f016bc18..84cea9ab70 100644 --- a/cts/scheduler/master-6.summary +++ b/cts/scheduler/master-6.summary @@ -1,85 +1,85 @@ Current cluster status: Online: [ c001n01 c001n02 c001n03 c001n08 ] DcIPaddr (ocf::heartbeat:IPaddr): Started c001n08 Resource Group: group-1 ocf_192.168.100.181 (ocf::heartbeat:IPaddr): Started c001n02 heartbeat_192.168.100.182 (ocf::heartbeat:IPaddr): Started c001n02 ocf_192.168.100.183 (ocf::heartbeat:IPaddr): Started c001n02 lsb_dummy (lsb:/usr/lib/heartbeat/cts/LSBDummy): Started c001n03 rsc_c001n08 (ocf::heartbeat:IPaddr): Started c001n08 
rsc_c001n02 (ocf::heartbeat:IPaddr): Started c001n02 rsc_c001n03 (ocf::heartbeat:IPaddr): Started c001n03 rsc_c001n01 (ocf::heartbeat:IPaddr): Started c001n01 Clone Set: DoFencing [child_DoFencing] (unique) child_DoFencing:0 (stonith:ssh): Started c001n08 child_DoFencing:1 (stonith:ssh): Started c001n02 child_DoFencing:2 (stonith:ssh): Started c001n03 child_DoFencing:3 (stonith:ssh): Started c001n01 - Master/Slave Set: master_rsc_1 [ocf_msdummy] (unique) + Clone Set: master_rsc_1 [ocf_msdummy] (promotable) (unique) ocf_msdummy:0 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Master c001n08 ocf_msdummy:1 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n02 ocf_msdummy:2 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n03 ocf_msdummy:3 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n08 ocf_msdummy:4 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n02 ocf_msdummy:5 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n03 ocf_msdummy:6 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n01 ocf_msdummy:7 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n01 Transition Summary: Executing cluster transition: * Resource action: child_DoFencing:1 monitor on c001n08 * Resource action: child_DoFencing:1 monitor on c001n03 * Resource action: child_DoFencing:1 monitor on c001n01 * Resource action: child_DoFencing:2 monitor on c001n08 * Resource action: child_DoFencing:2 monitor on c001n01 * Resource action: child_DoFencing:3 monitor on c001n08 * Resource action: child_DoFencing:3 monitor on c001n03 * Resource action: child_DoFencing:3 monitor on c001n02 * Resource action: ocf_msdummy:2 monitor on c001n08 * Resource action: ocf_msdummy:2 monitor on c001n01 * Resource action: ocf_msdummy:3 monitor on c001n03 * Resource action: ocf_msdummy:3 monitor on c001n01 * Resource action: ocf_msdummy:4 monitor on c001n08 * Resource action: ocf_msdummy:4 monitor on c001n03 * Resource action: ocf_msdummy:4 monitor on c001n01 * Resource action: ocf_msdummy:5 monitor on c001n08 * Resource action: ocf_msdummy:5 monitor on c001n02 * Resource action: ocf_msdummy:5 monitor on c001n01 * Resource action: ocf_msdummy:6 monitor on c001n08 * Resource action: ocf_msdummy:6 monitor on c001n03 * Resource action: ocf_msdummy:6 monitor on c001n02 * Resource action: ocf_msdummy:7 monitor on c001n08 * Resource action: ocf_msdummy:7 monitor on c001n03 * Resource action: ocf_msdummy:7 monitor on c001n02 Revised cluster status: Online: [ c001n01 c001n02 c001n03 c001n08 ] DcIPaddr (ocf::heartbeat:IPaddr): Started c001n08 Resource Group: group-1 ocf_192.168.100.181 (ocf::heartbeat:IPaddr): Started c001n02 heartbeat_192.168.100.182 (ocf::heartbeat:IPaddr): Started c001n02 ocf_192.168.100.183 (ocf::heartbeat:IPaddr): Started c001n02 lsb_dummy (lsb:/usr/lib/heartbeat/cts/LSBDummy): Started c001n03 rsc_c001n08 (ocf::heartbeat:IPaddr): Started c001n08 rsc_c001n02 (ocf::heartbeat:IPaddr): Started c001n02 rsc_c001n03 (ocf::heartbeat:IPaddr): Started c001n03 rsc_c001n01 (ocf::heartbeat:IPaddr): Started c001n01 Clone Set: DoFencing [child_DoFencing] (unique) child_DoFencing:0 (stonith:ssh): Started c001n08 child_DoFencing:1 (stonith:ssh): Started c001n02 child_DoFencing:2 (stonith:ssh): Started c001n03 child_DoFencing:3 (stonith:ssh): Started c001n01 - Master/Slave Set: master_rsc_1 [ocf_msdummy] (unique) + Clone Set: master_rsc_1 [ocf_msdummy] (promotable) (unique) ocf_msdummy:0 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Master c001n08 
ocf_msdummy:1 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n02 ocf_msdummy:2 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n03 ocf_msdummy:3 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n08 ocf_msdummy:4 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n02 ocf_msdummy:5 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n03 ocf_msdummy:6 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n01 ocf_msdummy:7 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n01 diff --git a/cts/scheduler/master-7.summary b/cts/scheduler/master-7.summary index 1bbc593a66..fc20c08ada 100644 --- a/cts/scheduler/master-7.summary +++ b/cts/scheduler/master-7.summary @@ -1,121 +1,121 @@ Current cluster status: Node c001n01 (de937e3d-0309-4b5d-b85c-f96edc1ed8e3): UNCLEAN (offline) Online: [ c001n02 c001n03 c001n08 ] DcIPaddr (ocf::heartbeat:IPaddr): Started c001n01 (UNCLEAN) Resource Group: group-1 ocf_192.168.100.181 (ocf::heartbeat:IPaddr): Started c001n03 heartbeat_192.168.100.182 (ocf::heartbeat:IPaddr): Started c001n03 ocf_192.168.100.183 (ocf::heartbeat:IPaddr): Started c001n03 lsb_dummy (lsb:/usr/lib/heartbeat/cts/LSBDummy): Started c001n02 rsc_c001n01 (ocf::heartbeat:IPaddr): Started c001n01 (UNCLEAN) rsc_c001n08 (ocf::heartbeat:IPaddr): Started c001n08 rsc_c001n02 (ocf::heartbeat:IPaddr): Started c001n02 rsc_c001n03 (ocf::heartbeat:IPaddr): Started c001n03 Clone Set: DoFencing [child_DoFencing] (unique) child_DoFencing:0 (stonith:ssh): Started c001n01 (UNCLEAN) child_DoFencing:1 (stonith:ssh): Started c001n03 child_DoFencing:2 (stonith:ssh): Started c001n02 child_DoFencing:3 (stonith:ssh): Started c001n08 - Master/Slave Set: master_rsc_1 [ocf_msdummy] (unique) + Clone Set: master_rsc_1 [ocf_msdummy] (promotable) (unique) ocf_msdummy:0 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Master c001n01 (UNCLEAN) ocf_msdummy:1 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n03 ocf_msdummy:2 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n02 ocf_msdummy:3 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n08 ocf_msdummy:4 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n01 ( UNCLEAN ) ocf_msdummy:5 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n03 ocf_msdummy:6 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n02 ocf_msdummy:7 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n08 Transition Summary: * Fence (reboot) c001n01 'peer is no longer part of the cluster' * Move DcIPaddr ( c001n01 -> c001n03 ) * Move ocf_192.168.100.181 ( c001n03 -> c001n02 ) * Move heartbeat_192.168.100.182 ( c001n03 -> c001n02 ) * Move ocf_192.168.100.183 ( c001n03 -> c001n02 ) * Move lsb_dummy ( c001n02 -> c001n08 ) * Move rsc_c001n01 ( c001n01 -> c001n03 ) * Stop child_DoFencing:0 (c001n01) due to node availability * Stop ocf_msdummy:0 ( Master c001n01 ) due to node availability * Stop ocf_msdummy:4 ( Slave c001n01 ) due to node availability Executing cluster transition: * Pseudo action: group-1_stop_0 * Resource action: ocf_192.168.100.183 stop on c001n03 * Resource action: lsb_dummy stop on c001n02 * Resource action: child_DoFencing:2 monitor on c001n08 * Resource action: child_DoFencing:2 monitor on c001n03 * Resource action: child_DoFencing:3 monitor on c001n03 * Resource action: child_DoFencing:3 monitor on c001n02 * Pseudo action: DoFencing_stop_0 * Resource action: ocf_msdummy:4 monitor on c001n08 * Resource action: ocf_msdummy:4 
monitor on c001n03 * Resource action: ocf_msdummy:4 monitor on c001n02 * Resource action: ocf_msdummy:5 monitor on c001n08 * Resource action: ocf_msdummy:5 monitor on c001n02 * Resource action: ocf_msdummy:6 monitor on c001n08 * Resource action: ocf_msdummy:6 monitor on c001n03 * Resource action: ocf_msdummy:7 monitor on c001n03 * Resource action: ocf_msdummy:7 monitor on c001n02 * Pseudo action: master_rsc_1_demote_0 * Fencing c001n01 (reboot) * Pseudo action: DcIPaddr_stop_0 * Resource action: heartbeat_192.168.100.182 stop on c001n03 * Pseudo action: rsc_c001n01_stop_0 * Pseudo action: child_DoFencing:0_stop_0 * Pseudo action: DoFencing_stopped_0 * Pseudo action: ocf_msdummy:0_demote_0 * Pseudo action: master_rsc_1_demoted_0 * Pseudo action: master_rsc_1_stop_0 * Pseudo action: stonith_complete * Resource action: DcIPaddr start on c001n03 * Resource action: ocf_192.168.100.181 stop on c001n03 * Resource action: lsb_dummy start on c001n08 * Resource action: rsc_c001n01 start on c001n03 * Pseudo action: ocf_msdummy:0_stop_0 * Pseudo action: ocf_msdummy:4_stop_0 * Pseudo action: master_rsc_1_stopped_0 * Pseudo action: all_stopped * Resource action: DcIPaddr monitor=5000 on c001n03 * Pseudo action: group-1_stopped_0 * Pseudo action: group-1_start_0 * Resource action: ocf_192.168.100.181 start on c001n02 * Resource action: heartbeat_192.168.100.182 start on c001n02 * Resource action: ocf_192.168.100.183 start on c001n02 * Resource action: lsb_dummy monitor=5000 on c001n08 * Resource action: rsc_c001n01 monitor=5000 on c001n03 * Pseudo action: group-1_running_0 * Resource action: ocf_192.168.100.181 monitor=5000 on c001n02 * Resource action: heartbeat_192.168.100.182 monitor=5000 on c001n02 * Resource action: ocf_192.168.100.183 monitor=5000 on c001n02 Revised cluster status: Online: [ c001n02 c001n03 c001n08 ] OFFLINE: [ c001n01 ] DcIPaddr (ocf::heartbeat:IPaddr): Started c001n03 Resource Group: group-1 ocf_192.168.100.181 (ocf::heartbeat:IPaddr): Started c001n02 heartbeat_192.168.100.182 (ocf::heartbeat:IPaddr): Started c001n02 ocf_192.168.100.183 (ocf::heartbeat:IPaddr): Started c001n02 lsb_dummy (lsb:/usr/lib/heartbeat/cts/LSBDummy): Started c001n08 rsc_c001n01 (ocf::heartbeat:IPaddr): Started c001n03 rsc_c001n08 (ocf::heartbeat:IPaddr): Started c001n08 rsc_c001n02 (ocf::heartbeat:IPaddr): Started c001n02 rsc_c001n03 (ocf::heartbeat:IPaddr): Started c001n03 Clone Set: DoFencing [child_DoFencing] (unique) child_DoFencing:0 (stonith:ssh): Stopped child_DoFencing:1 (stonith:ssh): Started c001n03 child_DoFencing:2 (stonith:ssh): Started c001n02 child_DoFencing:3 (stonith:ssh): Started c001n08 - Master/Slave Set: master_rsc_1 [ocf_msdummy] (unique) + Clone Set: master_rsc_1 [ocf_msdummy] (promotable) (unique) ocf_msdummy:0 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Stopped ocf_msdummy:1 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n03 ocf_msdummy:2 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n02 ocf_msdummy:3 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n08 ocf_msdummy:4 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Stopped ocf_msdummy:5 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n03 ocf_msdummy:6 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n02 ocf_msdummy:7 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n08 diff --git a/cts/scheduler/master-8.summary b/cts/scheduler/master-8.summary index 34474c10c7..c3b8690a67 100644 --- a/cts/scheduler/master-8.summary +++ 
b/cts/scheduler/master-8.summary @@ -1,124 +1,124 @@ Current cluster status: Node c001n01 (de937e3d-0309-4b5d-b85c-f96edc1ed8e3): UNCLEAN (offline) Online: [ c001n02 c001n03 c001n08 ] DcIPaddr (ocf::heartbeat:IPaddr): Started c001n01 (UNCLEAN) Resource Group: group-1 ocf_192.168.100.181 (ocf::heartbeat:IPaddr): Started c001n03 heartbeat_192.168.100.182 (ocf::heartbeat:IPaddr): Started c001n03 ocf_192.168.100.183 (ocf::heartbeat:IPaddr): Started c001n03 lsb_dummy (lsb:/usr/lib/heartbeat/cts/LSBDummy): Started c001n02 rsc_c001n01 (ocf::heartbeat:IPaddr): Started c001n01 (UNCLEAN) rsc_c001n08 (ocf::heartbeat:IPaddr): Started c001n08 rsc_c001n02 (ocf::heartbeat:IPaddr): Started c001n02 rsc_c001n03 (ocf::heartbeat:IPaddr): Started c001n03 Clone Set: DoFencing [child_DoFencing] (unique) child_DoFencing:0 (stonith:ssh): Started c001n01 (UNCLEAN) child_DoFencing:1 (stonith:ssh): Started c001n03 child_DoFencing:2 (stonith:ssh): Started c001n02 child_DoFencing:3 (stonith:ssh): Started c001n08 - Master/Slave Set: master_rsc_1 [ocf_msdummy] (unique) + Clone Set: master_rsc_1 [ocf_msdummy] (promotable) (unique) ocf_msdummy:0 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Master c001n01 (UNCLEAN) ocf_msdummy:1 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n03 ocf_msdummy:2 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n02 ocf_msdummy:3 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n08 ocf_msdummy:4 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Stopped ocf_msdummy:5 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Stopped ocf_msdummy:6 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n02 ocf_msdummy:7 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n08 Transition Summary: * Fence (reboot) c001n01 'peer is no longer part of the cluster' * Move DcIPaddr ( c001n01 -> c001n03 ) * Move ocf_192.168.100.181 ( c001n03 -> c001n02 ) * Move heartbeat_192.168.100.182 ( c001n03 -> c001n02 ) * Move ocf_192.168.100.183 ( c001n03 -> c001n02 ) * Move lsb_dummy ( c001n02 -> c001n08 ) * Move rsc_c001n01 ( c001n01 -> c001n03 ) * Stop child_DoFencing:0 (c001n01) due to node availability * Move ocf_msdummy:0 ( Master c001n01 -> Slave c001n03 ) Executing cluster transition: * Pseudo action: group-1_stop_0 * Resource action: ocf_192.168.100.183 stop on c001n03 * Resource action: lsb_dummy stop on c001n02 * Resource action: child_DoFencing:2 monitor on c001n08 * Resource action: child_DoFencing:2 monitor on c001n03 * Resource action: child_DoFencing:3 monitor on c001n03 * Resource action: child_DoFencing:3 monitor on c001n02 * Pseudo action: DoFencing_stop_0 * Resource action: ocf_msdummy:4 monitor on c001n08 * Resource action: ocf_msdummy:4 monitor on c001n03 * Resource action: ocf_msdummy:4 monitor on c001n02 * Resource action: ocf_msdummy:5 monitor on c001n08 * Resource action: ocf_msdummy:5 monitor on c001n03 * Resource action: ocf_msdummy:5 monitor on c001n02 * Resource action: ocf_msdummy:6 monitor on c001n08 * Resource action: ocf_msdummy:6 monitor on c001n03 * Resource action: ocf_msdummy:7 monitor on c001n03 * Resource action: ocf_msdummy:7 monitor on c001n02 * Pseudo action: master_rsc_1_demote_0 * Fencing c001n01 (reboot) * Pseudo action: DcIPaddr_stop_0 * Resource action: heartbeat_192.168.100.182 stop on c001n03 * Pseudo action: rsc_c001n01_stop_0 * Pseudo action: child_DoFencing:0_stop_0 * Pseudo action: DoFencing_stopped_0 * Pseudo action: ocf_msdummy:0_demote_0 * Pseudo action: master_rsc_1_demoted_0 * Pseudo action: 
master_rsc_1_stop_0 * Pseudo action: stonith_complete * Resource action: DcIPaddr start on c001n03 * Resource action: ocf_192.168.100.181 stop on c001n03 * Resource action: lsb_dummy start on c001n08 * Resource action: rsc_c001n01 start on c001n03 * Pseudo action: ocf_msdummy:0_stop_0 * Pseudo action: master_rsc_1_stopped_0 * Pseudo action: master_rsc_1_start_0 * Pseudo action: all_stopped * Resource action: DcIPaddr monitor=5000 on c001n03 * Pseudo action: group-1_stopped_0 * Pseudo action: group-1_start_0 * Resource action: ocf_192.168.100.181 start on c001n02 * Resource action: heartbeat_192.168.100.182 start on c001n02 * Resource action: ocf_192.168.100.183 start on c001n02 * Resource action: lsb_dummy monitor=5000 on c001n08 * Resource action: rsc_c001n01 monitor=5000 on c001n03 * Resource action: ocf_msdummy:0 start on c001n03 * Pseudo action: master_rsc_1_running_0 * Pseudo action: group-1_running_0 * Resource action: ocf_192.168.100.181 monitor=5000 on c001n02 * Resource action: heartbeat_192.168.100.182 monitor=5000 on c001n02 * Resource action: ocf_192.168.100.183 monitor=5000 on c001n02 * Resource action: ocf_msdummy:0 monitor=5000 on c001n03 Revised cluster status: Online: [ c001n02 c001n03 c001n08 ] OFFLINE: [ c001n01 ] DcIPaddr (ocf::heartbeat:IPaddr): Started c001n03 Resource Group: group-1 ocf_192.168.100.181 (ocf::heartbeat:IPaddr): Started c001n02 heartbeat_192.168.100.182 (ocf::heartbeat:IPaddr): Started c001n02 ocf_192.168.100.183 (ocf::heartbeat:IPaddr): Started c001n02 lsb_dummy (lsb:/usr/lib/heartbeat/cts/LSBDummy): Started c001n08 rsc_c001n01 (ocf::heartbeat:IPaddr): Started c001n03 rsc_c001n08 (ocf::heartbeat:IPaddr): Started c001n08 rsc_c001n02 (ocf::heartbeat:IPaddr): Started c001n02 rsc_c001n03 (ocf::heartbeat:IPaddr): Started c001n03 Clone Set: DoFencing [child_DoFencing] (unique) child_DoFencing:0 (stonith:ssh): Stopped child_DoFencing:1 (stonith:ssh): Started c001n03 child_DoFencing:2 (stonith:ssh): Started c001n02 child_DoFencing:3 (stonith:ssh): Started c001n08 - Master/Slave Set: master_rsc_1 [ocf_msdummy] (unique) + Clone Set: master_rsc_1 [ocf_msdummy] (promotable) (unique) ocf_msdummy:0 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n03 ocf_msdummy:1 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n03 ocf_msdummy:2 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n02 ocf_msdummy:3 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n08 ocf_msdummy:4 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Stopped ocf_msdummy:5 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Stopped ocf_msdummy:6 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n02 ocf_msdummy:7 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n08 diff --git a/cts/scheduler/master-9.summary b/cts/scheduler/master-9.summary index 2c5eb32607..2cd6c3216b 100644 --- a/cts/scheduler/master-9.summary +++ b/cts/scheduler/master-9.summary @@ -1,100 +1,100 @@ Current cluster status: Node sgi2 (619e8a37-147a-4782-ac11-46afad7c32b8): UNCLEAN (offline) Node test02 (f75e684a-be1e-4036-89e5-a14f8dcdc947): UNCLEAN (offline) Online: [ ibm1 va1 ] DcIPaddr (ocf::heartbeat:IPaddr): Stopped Resource Group: group-1 ocf_127.0.0.11 (ocf::heartbeat:IPaddr): Stopped heartbeat_127.0.0.12 (ocf::heartbeat:IPaddr): Stopped ocf_127.0.0.13 (ocf::heartbeat:IPaddr): Stopped lsb_dummy (lsb:/usr/lib64/heartbeat/cts/LSBDummy): Stopped rsc_sgi2 (ocf::heartbeat:IPaddr): Stopped rsc_ibm1 (ocf::heartbeat:IPaddr): Stopped rsc_va1 
(ocf::heartbeat:IPaddr): Stopped rsc_test02 (ocf::heartbeat:IPaddr): Stopped Clone Set: DoFencing [child_DoFencing] (unique) child_DoFencing:0 (stonith:ssh): Started va1 child_DoFencing:1 (stonith:ssh): Started ibm1 child_DoFencing:2 (stonith:ssh): Stopped child_DoFencing:3 (stonith:ssh): Stopped - Master/Slave Set: master_rsc_1 [ocf_msdummy] (unique) + Clone Set: master_rsc_1 [ocf_msdummy] (promotable) (unique) ocf_msdummy:0 (ocf::heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped ocf_msdummy:1 (ocf::heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped ocf_msdummy:2 (ocf::heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped ocf_msdummy:3 (ocf::heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped ocf_msdummy:4 (ocf::heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped ocf_msdummy:5 (ocf::heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped ocf_msdummy:6 (ocf::heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped ocf_msdummy:7 (ocf::heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped Transition Summary: * Shutdown ibm1 * Start DcIPaddr ( va1 ) due to no quorum (blocked) * Start ocf_127.0.0.11 ( va1 ) due to no quorum (blocked) * Start heartbeat_127.0.0.12 ( va1 ) due to no quorum (blocked) * Start ocf_127.0.0.13 ( va1 ) due to no quorum (blocked) * Start lsb_dummy ( va1 ) due to no quorum (blocked) * Start rsc_sgi2 ( va1 ) due to no quorum (blocked) * Start rsc_ibm1 ( va1 ) due to no quorum (blocked) * Start rsc_va1 ( va1 ) due to no quorum (blocked) * Start rsc_test02 ( va1 ) due to no quorum (blocked) * Stop child_DoFencing:1 (ibm1) due to node availability * Promote ocf_msdummy:0 ( Stopped -> Master va1 ) blocked * Start ocf_msdummy:1 ( va1 ) due to no quorum (blocked) Executing cluster transition: * Resource action: child_DoFencing:1 monitor on va1 * Resource action: child_DoFencing:2 monitor on va1 * Resource action: child_DoFencing:2 monitor on ibm1 * Resource action: child_DoFencing:3 monitor on va1 * Resource action: child_DoFencing:3 monitor on ibm1 * Pseudo action: DoFencing_stop_0 * Resource action: ocf_msdummy:2 monitor on va1 * Resource action: ocf_msdummy:2 monitor on ibm1 * Resource action: ocf_msdummy:3 monitor on va1 * Resource action: ocf_msdummy:3 monitor on ibm1 * Resource action: ocf_msdummy:4 monitor on va1 * Resource action: ocf_msdummy:4 monitor on ibm1 * Resource action: ocf_msdummy:5 monitor on va1 * Resource action: ocf_msdummy:5 monitor on ibm1 * Resource action: ocf_msdummy:6 monitor on va1 * Resource action: ocf_msdummy:6 monitor on ibm1 * Resource action: ocf_msdummy:7 monitor on va1 * Resource action: ocf_msdummy:7 monitor on ibm1 * Resource action: child_DoFencing:1 stop on ibm1 * Pseudo action: DoFencing_stopped_0 * Cluster action: do_shutdown on ibm1 * Pseudo action: all_stopped Revised cluster status: Node sgi2 (619e8a37-147a-4782-ac11-46afad7c32b8): UNCLEAN (offline) Node test02 (f75e684a-be1e-4036-89e5-a14f8dcdc947): UNCLEAN (offline) Online: [ ibm1 va1 ] DcIPaddr (ocf::heartbeat:IPaddr): Stopped Resource Group: group-1 ocf_127.0.0.11 (ocf::heartbeat:IPaddr): Stopped heartbeat_127.0.0.12 (ocf::heartbeat:IPaddr): Stopped ocf_127.0.0.13 (ocf::heartbeat:IPaddr): Stopped lsb_dummy (lsb:/usr/lib64/heartbeat/cts/LSBDummy): Stopped rsc_sgi2 (ocf::heartbeat:IPaddr): Stopped rsc_ibm1 (ocf::heartbeat:IPaddr): Stopped rsc_va1 (ocf::heartbeat:IPaddr): Stopped rsc_test02 (ocf::heartbeat:IPaddr): Stopped Clone Set: DoFencing [child_DoFencing] (unique) child_DoFencing:0 (stonith:ssh): Started va1 child_DoFencing:1 (stonith:ssh): Stopped 
child_DoFencing:2 (stonith:ssh): Stopped child_DoFencing:3 (stonith:ssh): Stopped - Master/Slave Set: master_rsc_1 [ocf_msdummy] (unique) + Clone Set: master_rsc_1 [ocf_msdummy] (promotable) (unique) ocf_msdummy:0 (ocf::heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped ocf_msdummy:1 (ocf::heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped ocf_msdummy:2 (ocf::heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped ocf_msdummy:3 (ocf::heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped ocf_msdummy:4 (ocf::heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped ocf_msdummy:5 (ocf::heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped ocf_msdummy:6 (ocf::heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped ocf_msdummy:7 (ocf::heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped diff --git a/cts/scheduler/master-allow-start.summary b/cts/scheduler/master-allow-start.summary index f0e78e3d90..0b50c9a2c7 100644 --- a/cts/scheduler/master-allow-start.summary +++ b/cts/scheduler/master-allow-start.summary @@ -1,19 +1,19 @@ Current cluster status: Online: [ sles11-a sles11-b ] - Master/Slave Set: ms_res_Stateful_1 [res_Stateful_1] + Clone Set: ms_res_Stateful_1 [res_Stateful_1] (promotable) Masters: [ sles11-a ] Slaves: [ sles11-b ] Transition Summary: Executing cluster transition: Revised cluster status: Online: [ sles11-a sles11-b ] - Master/Slave Set: ms_res_Stateful_1 [res_Stateful_1] + Clone Set: ms_res_Stateful_1 [res_Stateful_1] (promotable) Masters: [ sles11-a ] Slaves: [ sles11-b ] diff --git a/cts/scheduler/master-asymmetrical-order.summary b/cts/scheduler/master-asymmetrical-order.summary index 50f717e411..cec72b54be 100644 --- a/cts/scheduler/master-asymmetrical-order.summary +++ b/cts/scheduler/master-asymmetrical-order.summary @@ -1,35 +1,35 @@ 2 of 4 resources DISABLED and 0 BLOCKED from being started due to failures Current cluster status: Online: [ node1 node2 ] - Master/Slave Set: ms1 [rsc1] + Clone Set: ms1 [rsc1] (promotable) Masters: [ node1 ] Slaves: [ node2 ] - Master/Slave Set: ms2 [rsc2] + Clone Set: ms2 [rsc2] (promotable) Masters: [ node2 ] Slaves: [ node1 ] Transition Summary: * Stop rsc1:0 ( Master node1 ) due to node availability * Stop rsc1:1 ( Slave node2 ) due to node availability Executing cluster transition: * Pseudo action: ms1_demote_0 * Resource action: rsc1:0 demote on node1 * Pseudo action: ms1_demoted_0 * Pseudo action: ms1_stop_0 * Resource action: rsc1:0 stop on node1 * Resource action: rsc1:1 stop on node2 * Pseudo action: ms1_stopped_0 * Pseudo action: all_stopped Revised cluster status: Online: [ node1 node2 ] - Master/Slave Set: ms1 [rsc1] + Clone Set: ms1 [rsc1] (promotable) Stopped (disabled): [ node1 node2 ] - Master/Slave Set: ms2 [rsc2] + Clone Set: ms2 [rsc2] (promotable) Masters: [ node2 ] Slaves: [ node1 ] diff --git a/cts/scheduler/master-colocation.summary b/cts/scheduler/master-colocation.summary index c5d708bc27..d1039482c9 100644 --- a/cts/scheduler/master-colocation.summary +++ b/cts/scheduler/master-colocation.summary @@ -1,32 +1,32 @@ Current cluster status: Online: [ box1 box2 ] - Master/Slave Set: ms-conntrackd [conntrackd-stateful] + Clone Set: ms-conntrackd [conntrackd-stateful] (promotable) Slaves: [ box1 box2 ] Resource Group: virtualips externalip (ocf::heartbeat:IPaddr2): Started box2 internalip (ocf::heartbeat:IPaddr2): Started box2 sship (ocf::heartbeat:IPaddr2): Started box2 Transition Summary: * Promote conntrackd-stateful:1 (Slave -> Master box2) Executing cluster transition: * Resource action: 
conntrackd-stateful:0 monitor=29000 on box1 * Pseudo action: ms-conntrackd_promote_0 * Resource action: conntrackd-stateful:1 promote on box2 * Pseudo action: ms-conntrackd_promoted_0 * Resource action: conntrackd-stateful:1 monitor=30000 on box2 Revised cluster status: Online: [ box1 box2 ] - Master/Slave Set: ms-conntrackd [conntrackd-stateful] + Clone Set: ms-conntrackd [conntrackd-stateful] (promotable) Masters: [ box2 ] Slaves: [ box1 ] Resource Group: virtualips externalip (ocf::heartbeat:IPaddr2): Started box2 internalip (ocf::heartbeat:IPaddr2): Started box2 sship (ocf::heartbeat:IPaddr2): Started box2 diff --git a/cts/scheduler/master-demote-2.summary b/cts/scheduler/master-demote-2.summary index 02fe0555ae..47e6b865fd 100644 --- a/cts/scheduler/master-demote-2.summary +++ b/cts/scheduler/master-demote-2.summary @@ -1,74 +1,74 @@ Current cluster status: Online: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ] Fencing (stonith:fence_xvm): Started pcmk-1 Resource Group: group-1 r192.168.122.105 (ocf::heartbeat:IPaddr): Stopped r192.168.122.106 (ocf::heartbeat:IPaddr): Stopped r192.168.122.107 (ocf::heartbeat:IPaddr): Stopped rsc_pcmk-1 (ocf::heartbeat:IPaddr): Started pcmk-1 rsc_pcmk-2 (ocf::heartbeat:IPaddr): Started pcmk-2 rsc_pcmk-3 (ocf::heartbeat:IPaddr): Started pcmk-3 rsc_pcmk-4 (ocf::heartbeat:IPaddr): Started pcmk-4 lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Stopped migrator (ocf::pacemaker:Dummy): Started pcmk-4 Clone Set: Connectivity [ping-1] Started: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ] - Master/Slave Set: master-1 [stateful-1] + Clone Set: master-1 [stateful-1] (promotable) stateful-1 (ocf::pacemaker:Stateful): FAILED pcmk-1 Slaves: [ pcmk-2 pcmk-3 pcmk-4 ] Transition Summary: * Start r192.168.122.105 (pcmk-2) * Start r192.168.122.106 (pcmk-2) * Start r192.168.122.107 (pcmk-2) * Start lsb-dummy (pcmk-2) * Recover stateful-1:0 (Slave pcmk-1) * Promote stateful-1:1 (Slave -> Master pcmk-2) Executing cluster transition: * Resource action: stateful-1:0 cancel=15000 on pcmk-2 * Pseudo action: master-1_stop_0 * Resource action: stateful-1:1 stop on pcmk-1 * Pseudo action: master-1_stopped_0 * Pseudo action: master-1_start_0 * Pseudo action: all_stopped * Resource action: stateful-1:1 start on pcmk-1 * Pseudo action: master-1_running_0 * Resource action: stateful-1:1 monitor=15000 on pcmk-1 * Pseudo action: master-1_promote_0 * Resource action: stateful-1:0 promote on pcmk-2 * Pseudo action: master-1_promoted_0 * Pseudo action: group-1_start_0 * Resource action: r192.168.122.105 start on pcmk-2 * Resource action: r192.168.122.106 start on pcmk-2 * Resource action: r192.168.122.107 start on pcmk-2 * Resource action: stateful-1:0 monitor=16000 on pcmk-2 * Pseudo action: group-1_running_0 * Resource action: r192.168.122.105 monitor=5000 on pcmk-2 * Resource action: r192.168.122.106 monitor=5000 on pcmk-2 * Resource action: r192.168.122.107 monitor=5000 on pcmk-2 * Resource action: lsb-dummy start on pcmk-2 * Resource action: lsb-dummy monitor=5000 on pcmk-2 Revised cluster status: Online: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ] Fencing (stonith:fence_xvm): Started pcmk-1 Resource Group: group-1 r192.168.122.105 (ocf::heartbeat:IPaddr): Started pcmk-2 r192.168.122.106 (ocf::heartbeat:IPaddr): Started pcmk-2 r192.168.122.107 (ocf::heartbeat:IPaddr): Started pcmk-2 rsc_pcmk-1 (ocf::heartbeat:IPaddr): Started pcmk-1 rsc_pcmk-2 (ocf::heartbeat:IPaddr): Started pcmk-2 rsc_pcmk-3 (ocf::heartbeat:IPaddr): Started pcmk-3 rsc_pcmk-4 (ocf::heartbeat:IPaddr): Started pcmk-4 lsb-dummy 
(lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Started pcmk-2 migrator (ocf::pacemaker:Dummy): Started pcmk-4 Clone Set: Connectivity [ping-1] Started: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ] - Master/Slave Set: master-1 [stateful-1] + Clone Set: master-1 [stateful-1] (promotable) Masters: [ pcmk-2 ] Slaves: [ pcmk-1 pcmk-3 pcmk-4 ] diff --git a/cts/scheduler/master-demote-block.summary b/cts/scheduler/master-demote-block.summary index 611b36c0d2..8b0c3295f6 100644 --- a/cts/scheduler/master-demote-block.summary +++ b/cts/scheduler/master-demote-block.summary @@ -1,22 +1,22 @@ Current cluster status: Node dl380g5c (21c624bd-c426-43dc-9665-bbfb92054bcd): standby Online: [ dl380g5d ] - Master/Slave Set: stateful [dummy] + Clone Set: stateful [dummy] (promotable) dummy (ocf::pacemaker:Stateful): FAILED Master dl380g5c ( blocked ) Slaves: [ dl380g5d ] Transition Summary: Executing cluster transition: * Resource action: dummy:1 monitor=20000 on dl380g5d Revised cluster status: Node dl380g5c (21c624bd-c426-43dc-9665-bbfb92054bcd): standby Online: [ dl380g5d ] - Master/Slave Set: stateful [dummy] + Clone Set: stateful [dummy] (promotable) dummy (ocf::pacemaker:Stateful): FAILED Master dl380g5c ( blocked ) Slaves: [ dl380g5d ] diff --git a/cts/scheduler/master-demote.summary b/cts/scheduler/master-demote.summary index b50fb90d2b..3c60e021db 100644 --- a/cts/scheduler/master-demote.summary +++ b/cts/scheduler/master-demote.summary @@ -1,69 +1,69 @@ Current cluster status: Online: [ cxa1 cxb1 ] cyrus_address (ocf::heartbeat:IPaddr2): Started cxa1 cyrus_master (ocf::heartbeat:cyrus-imap): Stopped cyrus_syslogd (ocf::heartbeat:syslogd): Stopped cyrus_filesys (ocf::heartbeat:Filesystem): Stopped cyrus_volgroup (ocf::heartbeat:VolGroup): Stopped - Master/Slave Set: cyrus_drbd [cyrus_drbd_node] + Clone Set: cyrus_drbd [cyrus_drbd_node] (promotable) Masters: [ cxa1 ] Slaves: [ cxb1 ] named_address (ocf::heartbeat:IPaddr2): Started cxa1 named_filesys (ocf::heartbeat:Filesystem): Stopped named_volgroup (ocf::heartbeat:VolGroup): Stopped named_daemon (ocf::heartbeat:recursor): Stopped named_syslogd (ocf::heartbeat:syslogd): Stopped - Master/Slave Set: named_drbd [named_drbd_node] + Clone Set: named_drbd [named_drbd_node] (promotable) Slaves: [ cxa1 cxb1 ] Clone Set: pingd_clone [pingd_node] Started: [ cxa1 cxb1 ] Clone Set: fence_clone [fence_node] Started: [ cxa1 cxb1 ] Transition Summary: * Move named_address ( cxa1 -> cxb1 ) * Promote named_drbd_node:1 (Slave -> Master cxb1) Executing cluster transition: * Resource action: named_address stop on cxa1 * Pseudo action: named_drbd_pre_notify_promote_0 * Pseudo action: all_stopped * Resource action: named_address start on cxb1 * Resource action: named_drbd_node:1 notify on cxa1 * Resource action: named_drbd_node:0 notify on cxb1 * Pseudo action: named_drbd_confirmed-pre_notify_promote_0 * Pseudo action: named_drbd_promote_0 * Resource action: named_drbd_node:0 promote on cxb1 * Pseudo action: named_drbd_promoted_0 * Pseudo action: named_drbd_post_notify_promoted_0 * Resource action: named_drbd_node:1 notify on cxa1 * Resource action: named_drbd_node:0 notify on cxb1 * Pseudo action: named_drbd_confirmed-post_notify_promoted_0 * Resource action: named_drbd_node:0 monitor=10000 on cxb1 Revised cluster status: Online: [ cxa1 cxb1 ] cyrus_address (ocf::heartbeat:IPaddr2): Started cxa1 cyrus_master (ocf::heartbeat:cyrus-imap): Stopped cyrus_syslogd (ocf::heartbeat:syslogd): Stopped cyrus_filesys (ocf::heartbeat:Filesystem): Stopped cyrus_volgroup (ocf::heartbeat:VolGroup): 
Stopped - Master/Slave Set: cyrus_drbd [cyrus_drbd_node] + Clone Set: cyrus_drbd [cyrus_drbd_node] (promotable) Masters: [ cxa1 ] Slaves: [ cxb1 ] named_address (ocf::heartbeat:IPaddr2): Started cxb1 named_filesys (ocf::heartbeat:Filesystem): Stopped named_volgroup (ocf::heartbeat:VolGroup): Stopped named_daemon (ocf::heartbeat:recursor): Stopped named_syslogd (ocf::heartbeat:syslogd): Stopped - Master/Slave Set: named_drbd [named_drbd_node] + Clone Set: named_drbd [named_drbd_node] (promotable) Masters: [ cxb1 ] Slaves: [ cxa1 ] Clone Set: pingd_clone [pingd_node] Started: [ cxa1 cxb1 ] Clone Set: fence_clone [fence_node] Started: [ cxa1 cxb1 ] diff --git a/cts/scheduler/master-depend.summary b/cts/scheduler/master-depend.summary index e6f33cb7fd..c807b27e3b 100644 --- a/cts/scheduler/master-depend.summary +++ b/cts/scheduler/master-depend.summary @@ -1,59 +1,59 @@ 3 of 10 resources DISABLED and 0 BLOCKED from being started due to failures Current cluster status: Online: [ vbox4 ] OFFLINE: [ vbox3 ] - Master/Slave Set: drbd [drbd0] + Clone Set: drbd [drbd0] (promotable) Stopped: [ vbox3 vbox4 ] Clone Set: cman_clone [cman] Stopped: [ vbox3 vbox4 ] Clone Set: clvmd_clone [clvmd] Stopped: [ vbox3 vbox4 ] vmnci36 (ocf::heartbeat:vm): Stopped vmnci37 (ocf::heartbeat:vm): Stopped ( disabled ) vmnci38 (ocf::heartbeat:vm): Stopped ( disabled ) vmnci55 (ocf::heartbeat:vm): Stopped ( disabled ) Transition Summary: * Start drbd0:0 (vbox4) * Start cman:0 (vbox4) Executing cluster transition: * Resource action: drbd0:0 monitor on vbox4 * Pseudo action: drbd_pre_notify_start_0 * Resource action: cman:0 monitor on vbox4 * Pseudo action: cman_clone_start_0 * Resource action: clvmd:0 monitor on vbox4 * Resource action: vmnci36 monitor on vbox4 * Resource action: vmnci37 monitor on vbox4 * Resource action: vmnci38 monitor on vbox4 * Resource action: vmnci55 monitor on vbox4 * Pseudo action: drbd_confirmed-pre_notify_start_0 * Pseudo action: drbd_start_0 * Resource action: cman:0 start on vbox4 * Pseudo action: cman_clone_running_0 * Resource action: drbd0:0 start on vbox4 * Pseudo action: drbd_running_0 * Pseudo action: drbd_post_notify_running_0 * Resource action: drbd0:0 notify on vbox4 * Pseudo action: drbd_confirmed-post_notify_running_0 * Resource action: drbd0:0 monitor=60000 on vbox4 Revised cluster status: Online: [ vbox4 ] OFFLINE: [ vbox3 ] - Master/Slave Set: drbd [drbd0] + Clone Set: drbd [drbd0] (promotable) Slaves: [ vbox4 ] Stopped: [ vbox3 ] Clone Set: cman_clone [cman] Started: [ vbox4 ] Stopped: [ vbox3 ] Clone Set: clvmd_clone [clvmd] Stopped: [ vbox3 vbox4 ] vmnci36 (ocf::heartbeat:vm): Stopped vmnci37 (ocf::heartbeat:vm): Stopped ( disabled ) vmnci38 (ocf::heartbeat:vm): Stopped ( disabled ) vmnci55 (ocf::heartbeat:vm): Stopped ( disabled ) diff --git a/cts/scheduler/master-dependent-ban.summary b/cts/scheduler/master-dependent-ban.summary index 58e5ab8439..8479b3817c 100644 --- a/cts/scheduler/master-dependent-ban.summary +++ b/cts/scheduler/master-dependent-ban.summary @@ -1,36 +1,36 @@ Current cluster status: Online: [ c6 c7 c8 ] - Master/Slave Set: ms_drbd-dtest1 [p_drbd-dtest1] + Clone Set: ms_drbd-dtest1 [p_drbd-dtest1] (promotable) Slaves: [ c6 c7 ] p_dtest1 (ocf::heartbeat:Dummy): Stopped Transition Summary: * Promote p_drbd-dtest1:0 (Slave -> Master c7) * Start p_dtest1 (c7) Executing cluster transition: * Pseudo action: ms_drbd-dtest1_pre_notify_promote_0 * Resource action: p_drbd-dtest1 notify on c7 * Resource action: p_drbd-dtest1 notify on c6 * Pseudo action: 
ms_drbd-dtest1_confirmed-pre_notify_promote_0 * Pseudo action: ms_drbd-dtest1_promote_0 * Resource action: p_drbd-dtest1 promote on c7 * Pseudo action: ms_drbd-dtest1_promoted_0 * Pseudo action: ms_drbd-dtest1_post_notify_promoted_0 * Resource action: p_drbd-dtest1 notify on c7 * Resource action: p_drbd-dtest1 notify on c6 * Pseudo action: ms_drbd-dtest1_confirmed-post_notify_promoted_0 * Resource action: p_dtest1 start on c7 * Resource action: p_drbd-dtest1 monitor=10000 on c7 * Resource action: p_drbd-dtest1 monitor=20000 on c6 Revised cluster status: Online: [ c6 c7 c8 ] - Master/Slave Set: ms_drbd-dtest1 [p_drbd-dtest1] + Clone Set: ms_drbd-dtest1 [p_drbd-dtest1] (promotable) Masters: [ c7 ] Slaves: [ c6 ] p_dtest1 (ocf::heartbeat:Dummy): Started c7 diff --git a/cts/scheduler/master-failed-demote-2.summary b/cts/scheduler/master-failed-demote-2.summary index f5f535c703..f5335b7050 100644 --- a/cts/scheduler/master-failed-demote-2.summary +++ b/cts/scheduler/master-failed-demote-2.summary @@ -1,46 +1,46 @@ Current cluster status: Online: [ dl380g5a dl380g5b ] - Master/Slave Set: ms-sf [group] (unique) + Clone Set: ms-sf [group] (promotable) (unique) Resource Group: group:0 stateful-1:0 (ocf::heartbeat:Stateful): FAILED dl380g5b stateful-2:0 (ocf::heartbeat:Stateful): Stopped Resource Group: group:1 stateful-1:1 (ocf::heartbeat:Stateful): Slave dl380g5a stateful-2:1 (ocf::heartbeat:Stateful): Slave dl380g5a Transition Summary: * Stop stateful-1:0 ( Slave dl380g5b ) due to node availability * Promote stateful-1:1 (Slave -> Master dl380g5a) * Promote stateful-2:1 (Slave -> Master dl380g5a) Executing cluster transition: * Resource action: stateful-1:1 cancel=20000 on dl380g5a * Resource action: stateful-2:1 cancel=20000 on dl380g5a * Pseudo action: ms-sf_stop_0 * Pseudo action: group:0_stop_0 * Resource action: stateful-1:0 stop on dl380g5b * Pseudo action: all_stopped * Pseudo action: group:0_stopped_0 * Pseudo action: ms-sf_stopped_0 * Pseudo action: ms-sf_promote_0 * Pseudo action: group:1_promote_0 * Resource action: stateful-1:1 promote on dl380g5a * Resource action: stateful-2:1 promote on dl380g5a * Pseudo action: group:1_promoted_0 * Resource action: stateful-1:1 monitor=10000 on dl380g5a * Resource action: stateful-2:1 monitor=10000 on dl380g5a * Pseudo action: ms-sf_promoted_0 Revised cluster status: Online: [ dl380g5a dl380g5b ] - Master/Slave Set: ms-sf [group] (unique) + Clone Set: ms-sf [group] (promotable) (unique) Resource Group: group:0 stateful-1:0 (ocf::heartbeat:Stateful): Stopped stateful-2:0 (ocf::heartbeat:Stateful): Stopped Resource Group: group:1 stateful-1:1 (ocf::heartbeat:Stateful): Master dl380g5a stateful-2:1 (ocf::heartbeat:Stateful): Master dl380g5a diff --git a/cts/scheduler/master-failed-demote.summary b/cts/scheduler/master-failed-demote.summary index ec31e42598..043325e4e6 100644 --- a/cts/scheduler/master-failed-demote.summary +++ b/cts/scheduler/master-failed-demote.summary @@ -1,63 +1,63 @@ Current cluster status: Online: [ dl380g5a dl380g5b ] - Master/Slave Set: ms-sf [group] (unique) + Clone Set: ms-sf [group] (promotable) (unique) Resource Group: group:0 stateful-1:0 (ocf::heartbeat:Stateful): FAILED dl380g5b stateful-2:0 (ocf::heartbeat:Stateful): Stopped Resource Group: group:1 stateful-1:1 (ocf::heartbeat:Stateful): Slave dl380g5a stateful-2:1 (ocf::heartbeat:Stateful): Slave dl380g5a Transition Summary: * Stop stateful-1:0 ( Slave dl380g5b ) due to node availability * Promote stateful-1:1 (Slave -> Master dl380g5a) * Promote stateful-2:1 (Slave 
-> Master dl380g5a) Executing cluster transition: * Resource action: stateful-1:1 cancel=20000 on dl380g5a * Resource action: stateful-2:1 cancel=20000 on dl380g5a * Pseudo action: ms-sf_pre_notify_stop_0 * Resource action: stateful-1:0 notify on dl380g5b * Resource action: stateful-1:1 notify on dl380g5a * Resource action: stateful-2:1 notify on dl380g5a * Pseudo action: ms-sf_confirmed-pre_notify_stop_0 * Pseudo action: ms-sf_stop_0 * Pseudo action: group:0_stop_0 * Resource action: stateful-1:0 stop on dl380g5b * Pseudo action: group:0_stopped_0 * Pseudo action: ms-sf_stopped_0 * Pseudo action: ms-sf_post_notify_stopped_0 * Resource action: stateful-1:1 notify on dl380g5a * Resource action: stateful-2:1 notify on dl380g5a * Pseudo action: ms-sf_confirmed-post_notify_stopped_0 * Pseudo action: all_stopped * Pseudo action: ms-sf_pre_notify_promote_0 * Resource action: stateful-1:1 notify on dl380g5a * Resource action: stateful-2:1 notify on dl380g5a * Pseudo action: ms-sf_confirmed-pre_notify_promote_0 * Pseudo action: ms-sf_promote_0 * Pseudo action: group:1_promote_0 * Resource action: stateful-1:1 promote on dl380g5a * Resource action: stateful-2:1 promote on dl380g5a * Pseudo action: group:1_promoted_0 * Pseudo action: ms-sf_promoted_0 * Pseudo action: ms-sf_post_notify_promoted_0 * Resource action: stateful-1:1 notify on dl380g5a * Resource action: stateful-2:1 notify on dl380g5a * Pseudo action: ms-sf_confirmed-post_notify_promoted_0 * Resource action: stateful-1:1 monitor=10000 on dl380g5a * Resource action: stateful-2:1 monitor=10000 on dl380g5a Revised cluster status: Online: [ dl380g5a dl380g5b ] - Master/Slave Set: ms-sf [group] (unique) + Clone Set: ms-sf [group] (promotable) (unique) Resource Group: group:0 stateful-1:0 (ocf::heartbeat:Stateful): Stopped stateful-2:0 (ocf::heartbeat:Stateful): Stopped Resource Group: group:1 stateful-1:1 (ocf::heartbeat:Stateful): Master dl380g5a stateful-2:1 (ocf::heartbeat:Stateful): Master dl380g5a diff --git a/cts/scheduler/master-group.summary b/cts/scheduler/master-group.summary index 397401083e..6e8cdadaf3 100644 --- a/cts/scheduler/master-group.summary +++ b/cts/scheduler/master-group.summary @@ -1,35 +1,35 @@ Current cluster status: Online: [ rh44-1 rh44-2 ] Resource Group: test resource_1 (ocf::heartbeat:IPaddr): Started rh44-1 - Master/Slave Set: ms-sf [grp_ms_sf] (unique) + Clone Set: ms-sf [grp_ms_sf] (promotable) (unique) Resource Group: grp_ms_sf:0 master_slave_Stateful:0 (ocf::heartbeat:Stateful): Slave rh44-2 Resource Group: grp_ms_sf:1 master_slave_Stateful:1 (ocf::heartbeat:Stateful): Slave rh44-1 Transition Summary: * Promote master_slave_Stateful:1 (Slave -> Master rh44-1) Executing cluster transition: * Resource action: master_slave_Stateful:1 cancel=5000 on rh44-1 * Pseudo action: ms-sf_promote_0 * Pseudo action: grp_ms_sf:1_promote_0 * Resource action: master_slave_Stateful:1 promote on rh44-1 * Pseudo action: grp_ms_sf:1_promoted_0 * Resource action: master_slave_Stateful:1 monitor=6000 on rh44-1 * Pseudo action: ms-sf_promoted_0 Revised cluster status: Online: [ rh44-1 rh44-2 ] Resource Group: test resource_1 (ocf::heartbeat:IPaddr): Started rh44-1 - Master/Slave Set: ms-sf [grp_ms_sf] (unique) + Clone Set: ms-sf [grp_ms_sf] (promotable) (unique) Resource Group: grp_ms_sf:0 master_slave_Stateful:0 (ocf::heartbeat:Stateful): Slave rh44-2 Resource Group: grp_ms_sf:1 master_slave_Stateful:1 (ocf::heartbeat:Stateful): Master rh44-1 diff --git a/cts/scheduler/master-move.summary b/cts/scheduler/master-move.summary index 
e42fa27d69..0bc2839711 100644 --- a/cts/scheduler/master-move.summary +++ b/cts/scheduler/master-move.summary @@ -1,71 +1,71 @@ Current cluster status: Online: [ bl460g1n13 bl460g1n14 ] Resource Group: grpDRBD dummy01 (ocf::pacemaker:Dummy): FAILED bl460g1n13 dummy02 (ocf::pacemaker:Dummy): Started bl460g1n13 dummy03 (ocf::pacemaker:Dummy): Stopped - Master/Slave Set: msDRBD [prmDRBD] + Clone Set: msDRBD [prmDRBD] (promotable) Masters: [ bl460g1n13 ] Slaves: [ bl460g1n14 ] Transition Summary: * Recover dummy01 ( bl460g1n13 -> bl460g1n14 ) * Move dummy02 ( bl460g1n13 -> bl460g1n14 ) * Start dummy03 (bl460g1n14) * Demote prmDRBD:0 (Master -> Slave bl460g1n13) * Promote prmDRBD:1 (Slave -> Master bl460g1n14) Executing cluster transition: * Pseudo action: grpDRBD_stop_0 * Resource action: dummy02 stop on bl460g1n13 * Resource action: prmDRBD:0 cancel=10000 on bl460g1n13 * Resource action: prmDRBD:1 cancel=20000 on bl460g1n14 * Pseudo action: msDRBD_pre_notify_demote_0 * Resource action: dummy01 stop on bl460g1n13 * Resource action: prmDRBD:0 notify on bl460g1n13 * Resource action: prmDRBD:1 notify on bl460g1n14 * Pseudo action: msDRBD_confirmed-pre_notify_demote_0 * Pseudo action: all_stopped * Pseudo action: grpDRBD_stopped_0 * Pseudo action: msDRBD_demote_0 * Resource action: prmDRBD:0 demote on bl460g1n13 * Pseudo action: msDRBD_demoted_0 * Pseudo action: msDRBD_post_notify_demoted_0 * Resource action: prmDRBD:0 notify on bl460g1n13 * Resource action: prmDRBD:1 notify on bl460g1n14 * Pseudo action: msDRBD_confirmed-post_notify_demoted_0 * Pseudo action: msDRBD_pre_notify_promote_0 * Resource action: prmDRBD:0 notify on bl460g1n13 * Resource action: prmDRBD:1 notify on bl460g1n14 * Pseudo action: msDRBD_confirmed-pre_notify_promote_0 * Pseudo action: msDRBD_promote_0 * Resource action: prmDRBD:1 promote on bl460g1n14 * Pseudo action: msDRBD_promoted_0 * Pseudo action: msDRBD_post_notify_promoted_0 * Resource action: prmDRBD:0 notify on bl460g1n13 * Resource action: prmDRBD:1 notify on bl460g1n14 * Pseudo action: msDRBD_confirmed-post_notify_promoted_0 * Pseudo action: grpDRBD_start_0 * Resource action: dummy01 start on bl460g1n14 * Resource action: dummy02 start on bl460g1n14 * Resource action: dummy03 start on bl460g1n14 * Resource action: prmDRBD:0 monitor=20000 on bl460g1n13 * Resource action: prmDRBD:1 monitor=10000 on bl460g1n14 * Pseudo action: grpDRBD_running_0 * Resource action: dummy01 monitor=10000 on bl460g1n14 * Resource action: dummy02 monitor=10000 on bl460g1n14 * Resource action: dummy03 monitor=10000 on bl460g1n14 Revised cluster status: Online: [ bl460g1n13 bl460g1n14 ] Resource Group: grpDRBD dummy01 (ocf::pacemaker:Dummy): Started bl460g1n14 dummy02 (ocf::pacemaker:Dummy): Started bl460g1n14 dummy03 (ocf::pacemaker:Dummy): Started bl460g1n14 - Master/Slave Set: msDRBD [prmDRBD] + Clone Set: msDRBD [prmDRBD] (promotable) Masters: [ bl460g1n14 ] Slaves: [ bl460g1n13 ] diff --git a/cts/scheduler/master-notify.summary b/cts/scheduler/master-notify.summary index 3b46a1b820..dcf65b83b4 100644 --- a/cts/scheduler/master-notify.summary +++ b/cts/scheduler/master-notify.summary @@ -1,34 +1,34 @@ Current cluster status: Online: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 ] shooter (stonith:fence_xvm): Started rhel7-auto1 - Master/Slave Set: fake-master [fake] + Clone Set: fake-master [fake] (promotable) Slaves: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 ] Transition Summary: * Promote fake:0 (Slave -> Master rhel7-auto1) Executing cluster transition: * Pseudo action: 
fake-master_pre_notify_promote_0 * Resource action: fake notify on rhel7-auto1 * Resource action: fake notify on rhel7-auto3 * Resource action: fake notify on rhel7-auto2 * Pseudo action: fake-master_confirmed-pre_notify_promote_0 * Pseudo action: fake-master_promote_0 * Resource action: fake promote on rhel7-auto1 * Pseudo action: fake-master_promoted_0 * Pseudo action: fake-master_post_notify_promoted_0 * Resource action: fake notify on rhel7-auto1 * Resource action: fake notify on rhel7-auto3 * Resource action: fake notify on rhel7-auto2 * Pseudo action: fake-master_confirmed-post_notify_promoted_0 Revised cluster status: Online: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 ] shooter (stonith:fence_xvm): Started rhel7-auto1 - Master/Slave Set: fake-master [fake] + Clone Set: fake-master [fake] (promotable) Masters: [ rhel7-auto1 ] Slaves: [ rhel7-auto2 rhel7-auto3 ] diff --git a/cts/scheduler/master-ordering.summary b/cts/scheduler/master-ordering.summary index c8e40943d1..4505dd7bb2 100644 --- a/cts/scheduler/master-ordering.summary +++ b/cts/scheduler/master-ordering.summary @@ -1,94 +1,94 @@ Current cluster status: Online: [ webcluster01 ] OFFLINE: [ webcluster02 ] mysql-server (ocf::heartbeat:mysql): Stopped extip_1 (ocf::heartbeat:IPaddr2): Stopped extip_2 (ocf::heartbeat:IPaddr2): Stopped Resource Group: group_main intip_0_main (ocf::heartbeat:IPaddr2): Stopped intip_1_master (ocf::heartbeat:IPaddr2): Stopped intip_2_slave (ocf::heartbeat:IPaddr2): Stopped - Master/Slave Set: ms_drbd_www [drbd_www] + Clone Set: ms_drbd_www [drbd_www] (promotable) Stopped: [ webcluster01 webcluster02 ] Clone Set: clone_ocfs2_www [ocfs2_www] (unique) ocfs2_www:0 (ocf::heartbeat:Filesystem): Stopped ocfs2_www:1 (ocf::heartbeat:Filesystem): Stopped Clone Set: clone_webservice [group_webservice] Stopped: [ webcluster01 webcluster02 ] - Master/Slave Set: ms_drbd_mysql [drbd_mysql] + Clone Set: ms_drbd_mysql [drbd_mysql] (promotable) Stopped: [ webcluster01 webcluster02 ] fs_mysql (ocf::heartbeat:Filesystem): Stopped Transition Summary: * Start extip_1 (webcluster01) * Start extip_2 (webcluster01) * Start intip_1_master (webcluster01) * Start intip_2_slave (webcluster01) * Start drbd_www:0 (webcluster01) * Start drbd_mysql:0 (webcluster01) Executing cluster transition: * Resource action: mysql-server monitor on webcluster01 * Resource action: extip_1 monitor on webcluster01 * Resource action: extip_2 monitor on webcluster01 * Resource action: intip_0_main monitor on webcluster01 * Resource action: intip_1_master monitor on webcluster01 * Resource action: intip_2_slave monitor on webcluster01 * Resource action: drbd_www:0 monitor on webcluster01 * Pseudo action: ms_drbd_www_pre_notify_start_0 * Resource action: ocfs2_www:0 monitor on webcluster01 * Resource action: ocfs2_www:1 monitor on webcluster01 * Resource action: apache2:0 monitor on webcluster01 * Resource action: mysql-proxy:0 monitor on webcluster01 * Resource action: drbd_mysql:0 monitor on webcluster01 * Pseudo action: ms_drbd_mysql_pre_notify_start_0 * Resource action: fs_mysql monitor on webcluster01 * Resource action: extip_1 start on webcluster01 * Resource action: extip_2 start on webcluster01 * Resource action: intip_1_master start on webcluster01 * Resource action: intip_2_slave start on webcluster01 * Pseudo action: ms_drbd_www_confirmed-pre_notify_start_0 * Pseudo action: ms_drbd_www_start_0 * Pseudo action: ms_drbd_mysql_confirmed-pre_notify_start_0 * Pseudo action: ms_drbd_mysql_start_0 * Resource action: extip_1 monitor=30000 on 
webcluster01 * Resource action: extip_2 monitor=30000 on webcluster01 * Resource action: intip_1_master monitor=30000 on webcluster01 * Resource action: intip_2_slave monitor=30000 on webcluster01 * Resource action: drbd_www:0 start on webcluster01 * Pseudo action: ms_drbd_www_running_0 * Resource action: drbd_mysql:0 start on webcluster01 * Pseudo action: ms_drbd_mysql_running_0 * Pseudo action: ms_drbd_www_post_notify_running_0 * Pseudo action: ms_drbd_mysql_post_notify_running_0 * Resource action: drbd_www:0 notify on webcluster01 * Pseudo action: ms_drbd_www_confirmed-post_notify_running_0 * Resource action: drbd_mysql:0 notify on webcluster01 * Pseudo action: ms_drbd_mysql_confirmed-post_notify_running_0 Revised cluster status: Online: [ webcluster01 ] OFFLINE: [ webcluster02 ] mysql-server (ocf::heartbeat:mysql): Stopped extip_1 (ocf::heartbeat:IPaddr2): Started webcluster01 extip_2 (ocf::heartbeat:IPaddr2): Started webcluster01 Resource Group: group_main intip_0_main (ocf::heartbeat:IPaddr2): Stopped intip_1_master (ocf::heartbeat:IPaddr2): Started webcluster01 intip_2_slave (ocf::heartbeat:IPaddr2): Started webcluster01 - Master/Slave Set: ms_drbd_www [drbd_www] + Clone Set: ms_drbd_www [drbd_www] (promotable) Slaves: [ webcluster01 ] Stopped: [ webcluster02 ] Clone Set: clone_ocfs2_www [ocfs2_www] (unique) ocfs2_www:0 (ocf::heartbeat:Filesystem): Stopped ocfs2_www:1 (ocf::heartbeat:Filesystem): Stopped Clone Set: clone_webservice [group_webservice] Stopped: [ webcluster01 webcluster02 ] - Master/Slave Set: ms_drbd_mysql [drbd_mysql] + Clone Set: ms_drbd_mysql [drbd_mysql] (promotable) Slaves: [ webcluster01 ] Stopped: [ webcluster02 ] fs_mysql (ocf::heartbeat:Filesystem): Stopped diff --git a/cts/scheduler/master-partially-demoted-group.summary b/cts/scheduler/master-partially-demoted-group.summary index 0abf07c154..b09c731e82 100644 --- a/cts/scheduler/master-partially-demoted-group.summary +++ b/cts/scheduler/master-partially-demoted-group.summary @@ -1,117 +1,117 @@ Current cluster status: Online: [ sd01-0 sd01-1 ] stonith-xvm-sd01-0 (stonith:fence_xvm): Started sd01-1 stonith-xvm-sd01-1 (stonith:fence_xvm): Started sd01-0 Resource Group: cdev-pool-0-iscsi-export cdev-pool-0-iscsi-target (ocf::vds-ok:iSCSITarget): Started sd01-1 cdev-pool-0-iscsi-lun-1 (ocf::vds-ok:iSCSILogicalUnit): Started sd01-1 - Master/Slave Set: ms-cdev-pool-0-drbd [cdev-pool-0-drbd] + Clone Set: ms-cdev-pool-0-drbd [cdev-pool-0-drbd] (promotable) Masters: [ sd01-1 ] Slaves: [ sd01-0 ] Clone Set: cl-ietd [ietd] Started: [ sd01-0 sd01-1 ] Clone Set: cl-vlan1-net [vlan1-net] Started: [ sd01-0 sd01-1 ] Resource Group: cdev-pool-0-iscsi-vips vip-164 (ocf::heartbeat:IPaddr2): Started sd01-1 vip-165 (ocf::heartbeat:IPaddr2): Started sd01-1 - Master/Slave Set: ms-cdev-pool-0-iscsi-vips-fw [cdev-pool-0-iscsi-vips-fw] + Clone Set: ms-cdev-pool-0-iscsi-vips-fw [cdev-pool-0-iscsi-vips-fw] (promotable) Masters: [ sd01-1 ] Slaves: [ sd01-0 ] Transition Summary: * Move vip-164 ( sd01-1 -> sd01-0 ) * Move vip-165 ( sd01-1 -> sd01-0 ) * Move cdev-pool-0-iscsi-target ( sd01-1 -> sd01-0 ) * Move cdev-pool-0-iscsi-lun-1 ( sd01-1 -> sd01-0 ) * Demote vip-164-fw:0 ( Master -> Slave sd01-1 ) * Promote vip-164-fw:1 (Slave -> Master sd01-0) * Promote vip-165-fw:1 (Slave -> Master sd01-0) * Demote cdev-pool-0-drbd:0 ( Master -> Slave sd01-1 ) * Promote cdev-pool-0-drbd:1 (Slave -> Master sd01-0) Executing cluster transition: * Resource action: vip-165-fw monitor=10000 on sd01-1 * Pseudo action: 
ms-cdev-pool-0-iscsi-vips-fw_demote_0 * Pseudo action: ms-cdev-pool-0-drbd_pre_notify_demote_0 * Pseudo action: cdev-pool-0-iscsi-vips-fw:0_demote_0 * Resource action: vip-164-fw demote on sd01-1 * Resource action: cdev-pool-0-drbd notify on sd01-1 * Resource action: cdev-pool-0-drbd notify on sd01-0 * Pseudo action: ms-cdev-pool-0-drbd_confirmed-pre_notify_demote_0 * Pseudo action: cdev-pool-0-iscsi-vips-fw:0_demoted_0 * Resource action: vip-164-fw monitor=10000 on sd01-1 * Pseudo action: ms-cdev-pool-0-iscsi-vips-fw_demoted_0 * Pseudo action: cdev-pool-0-iscsi-vips_stop_0 * Resource action: vip-165 stop on sd01-1 * Resource action: vip-164 stop on sd01-1 * Pseudo action: cdev-pool-0-iscsi-vips_stopped_0 * Pseudo action: cdev-pool-0-iscsi-export_stop_0 * Resource action: cdev-pool-0-iscsi-lun-1 stop on sd01-1 * Resource action: cdev-pool-0-iscsi-target stop on sd01-1 * Pseudo action: all_stopped * Pseudo action: cdev-pool-0-iscsi-export_stopped_0 * Pseudo action: ms-cdev-pool-0-drbd_demote_0 * Resource action: cdev-pool-0-drbd demote on sd01-1 * Pseudo action: ms-cdev-pool-0-drbd_demoted_0 * Pseudo action: ms-cdev-pool-0-drbd_post_notify_demoted_0 * Resource action: cdev-pool-0-drbd notify on sd01-1 * Resource action: cdev-pool-0-drbd notify on sd01-0 * Pseudo action: ms-cdev-pool-0-drbd_confirmed-post_notify_demoted_0 * Pseudo action: ms-cdev-pool-0-drbd_pre_notify_promote_0 * Resource action: cdev-pool-0-drbd notify on sd01-1 * Resource action: cdev-pool-0-drbd notify on sd01-0 * Pseudo action: ms-cdev-pool-0-drbd_confirmed-pre_notify_promote_0 * Pseudo action: ms-cdev-pool-0-drbd_promote_0 * Resource action: cdev-pool-0-drbd promote on sd01-0 * Pseudo action: ms-cdev-pool-0-drbd_promoted_0 * Pseudo action: ms-cdev-pool-0-drbd_post_notify_promoted_0 * Resource action: cdev-pool-0-drbd notify on sd01-1 * Resource action: cdev-pool-0-drbd notify on sd01-0 * Pseudo action: ms-cdev-pool-0-drbd_confirmed-post_notify_promoted_0 * Pseudo action: cdev-pool-0-iscsi-export_start_0 * Resource action: cdev-pool-0-iscsi-target start on sd01-0 * Resource action: cdev-pool-0-iscsi-lun-1 start on sd01-0 * Resource action: cdev-pool-0-drbd monitor=20000 on sd01-1 * Resource action: cdev-pool-0-drbd monitor=10000 on sd01-0 * Pseudo action: cdev-pool-0-iscsi-export_running_0 * Resource action: cdev-pool-0-iscsi-target monitor=10000 on sd01-0 * Resource action: cdev-pool-0-iscsi-lun-1 monitor=10000 on sd01-0 * Pseudo action: cdev-pool-0-iscsi-vips_start_0 * Resource action: vip-164 start on sd01-0 * Resource action: vip-165 start on sd01-0 * Pseudo action: cdev-pool-0-iscsi-vips_running_0 * Resource action: vip-164 monitor=30000 on sd01-0 * Resource action: vip-165 monitor=30000 on sd01-0 * Pseudo action: ms-cdev-pool-0-iscsi-vips-fw_promote_0 * Pseudo action: cdev-pool-0-iscsi-vips-fw:0_promote_0 * Pseudo action: cdev-pool-0-iscsi-vips-fw:1_promote_0 * Resource action: vip-164-fw promote on sd01-0 * Resource action: vip-165-fw promote on sd01-0 * Pseudo action: cdev-pool-0-iscsi-vips-fw:1_promoted_0 * Pseudo action: ms-cdev-pool-0-iscsi-vips-fw_promoted_0 Revised cluster status: Online: [ sd01-0 sd01-1 ] stonith-xvm-sd01-0 (stonith:fence_xvm): Started sd01-1 stonith-xvm-sd01-1 (stonith:fence_xvm): Started sd01-0 Resource Group: cdev-pool-0-iscsi-export cdev-pool-0-iscsi-target (ocf::vds-ok:iSCSITarget): Started sd01-0 cdev-pool-0-iscsi-lun-1 (ocf::vds-ok:iSCSILogicalUnit): Started sd01-0 - Master/Slave Set: ms-cdev-pool-0-drbd [cdev-pool-0-drbd] + Clone Set: ms-cdev-pool-0-drbd [cdev-pool-0-drbd] 
(promotable) Masters: [ sd01-0 ] Slaves: [ sd01-1 ] Clone Set: cl-ietd [ietd] Started: [ sd01-0 sd01-1 ] Clone Set: cl-vlan1-net [vlan1-net] Started: [ sd01-0 sd01-1 ] Resource Group: cdev-pool-0-iscsi-vips vip-164 (ocf::heartbeat:IPaddr2): Started sd01-0 vip-165 (ocf::heartbeat:IPaddr2): Started sd01-0 - Master/Slave Set: ms-cdev-pool-0-iscsi-vips-fw [cdev-pool-0-iscsi-vips-fw] + Clone Set: ms-cdev-pool-0-iscsi-vips-fw [cdev-pool-0-iscsi-vips-fw] (promotable) Masters: [ sd01-0 ] Slaves: [ sd01-1 ] diff --git a/cts/scheduler/master-probed-score.summary b/cts/scheduler/master-probed-score.summary index 3c67fe9281..197fa2d3e9 100644 --- a/cts/scheduler/master-probed-score.summary +++ b/cts/scheduler/master-probed-score.summary @@ -1,326 +1,326 @@ 2 of 60 resources DISABLED and 0 BLOCKED from being started due to failures Current cluster status: Online: [ hypatia-corosync.nevis.columbia.edu orestes-corosync.nevis.columbia.edu ] - Master/Slave Set: AdminClone [AdminDrbd] + Clone Set: AdminClone [AdminDrbd] (promotable) Stopped: [ hypatia-corosync.nevis.columbia.edu orestes-corosync.nevis.columbia.edu ] CronAmbientTemperature (ocf::heartbeat:symlink): Stopped StonithHypatia (stonith:fence_nut): Stopped StonithOrestes (stonith:fence_nut): Stopped Resource Group: DhcpGroup SymlinkDhcpdConf (ocf::heartbeat:symlink): Stopped SymlinkSysconfigDhcpd (ocf::heartbeat:symlink): Stopped SymlinkDhcpdLeases (ocf::heartbeat:symlink): Stopped Dhcpd (lsb:dhcpd): Stopped ( disabled ) DhcpIP (ocf::heartbeat:IPaddr2): Stopped Clone Set: CupsClone [CupsGroup] Stopped: [ hypatia-corosync.nevis.columbia.edu orestes-corosync.nevis.columbia.edu ] Clone Set: IPClone [IPGroup] (unique) Resource Group: IPGroup:0 ClusterIP:0 (ocf::heartbeat:IPaddr2): Stopped ClusterIPLocal:0 (ocf::heartbeat:IPaddr2): Stopped ClusterIPSandbox:0 (ocf::heartbeat:IPaddr2): Stopped Resource Group: IPGroup:1 ClusterIP:1 (ocf::heartbeat:IPaddr2): Stopped ClusterIPLocal:1 (ocf::heartbeat:IPaddr2): Stopped ClusterIPSandbox:1 (ocf::heartbeat:IPaddr2): Stopped Clone Set: LibvirtdClone [LibvirtdGroup] Stopped: [ hypatia-corosync.nevis.columbia.edu orestes-corosync.nevis.columbia.edu ] Clone Set: TftpClone [TftpGroup] Stopped: [ hypatia-corosync.nevis.columbia.edu orestes-corosync.nevis.columbia.edu ] Clone Set: ExportsClone [ExportsGroup] Stopped: [ hypatia-corosync.nevis.columbia.edu orestes-corosync.nevis.columbia.edu ] Clone Set: FilesystemClone [FilesystemGroup] Stopped: [ hypatia-corosync.nevis.columbia.edu orestes-corosync.nevis.columbia.edu ] KVM-guest (ocf::heartbeat:VirtualDomain): Stopped Proxy (ocf::heartbeat:VirtualDomain): Stopped Transition Summary: * Promote AdminDrbd:0 ( Stopped -> Master hypatia-corosync.nevis.columbia.edu ) * Promote AdminDrbd:1 ( Stopped -> Master orestes-corosync.nevis.columbia.edu ) * Start CronAmbientTemperature (hypatia-corosync.nevis.columbia.edu) * Start StonithHypatia (orestes-corosync.nevis.columbia.edu) * Start StonithOrestes (hypatia-corosync.nevis.columbia.edu) * Start SymlinkDhcpdConf (orestes-corosync.nevis.columbia.edu) * Start SymlinkSysconfigDhcpd (orestes-corosync.nevis.columbia.edu) * Start SymlinkDhcpdLeases (orestes-corosync.nevis.columbia.edu) * Start SymlinkUsrShareCups:0 (hypatia-corosync.nevis.columbia.edu) * Start SymlinkCupsdConf:0 (hypatia-corosync.nevis.columbia.edu) * Start Cups:0 (hypatia-corosync.nevis.columbia.edu) * Start SymlinkUsrShareCups:1 (orestes-corosync.nevis.columbia.edu) * Start SymlinkCupsdConf:1 (orestes-corosync.nevis.columbia.edu) * Start Cups:1 
(orestes-corosync.nevis.columbia.edu) * Start ClusterIP:0 (hypatia-corosync.nevis.columbia.edu) * Start ClusterIPLocal:0 (hypatia-corosync.nevis.columbia.edu) * Start ClusterIPSandbox:0 (hypatia-corosync.nevis.columbia.edu) * Start ClusterIP:1 (orestes-corosync.nevis.columbia.edu) * Start ClusterIPLocal:1 (orestes-corosync.nevis.columbia.edu) * Start ClusterIPSandbox:1 (orestes-corosync.nevis.columbia.edu) * Start SymlinkEtcLibvirt:0 (hypatia-corosync.nevis.columbia.edu) * Start Libvirtd:0 (hypatia-corosync.nevis.columbia.edu) * Start SymlinkEtcLibvirt:1 (orestes-corosync.nevis.columbia.edu) * Start Libvirtd:1 (orestes-corosync.nevis.columbia.edu) * Start SymlinkTftp:0 (hypatia-corosync.nevis.columbia.edu) * Start Xinetd:0 (hypatia-corosync.nevis.columbia.edu) * Start SymlinkTftp:1 (orestes-corosync.nevis.columbia.edu) * Start Xinetd:1 (orestes-corosync.nevis.columbia.edu) * Start ExportMail:0 (hypatia-corosync.nevis.columbia.edu) * Start ExportMailInbox:0 (hypatia-corosync.nevis.columbia.edu) * Start ExportMailFolders:0 (hypatia-corosync.nevis.columbia.edu) * Start ExportMailForward:0 (hypatia-corosync.nevis.columbia.edu) * Start ExportMailProcmailrc:0 (hypatia-corosync.nevis.columbia.edu) * Start ExportUsrNevis:0 (hypatia-corosync.nevis.columbia.edu) * Start ExportUsrNevisOffsite:0 (hypatia-corosync.nevis.columbia.edu) * Start ExportWWW:0 (hypatia-corosync.nevis.columbia.edu) * Start ExportMail:1 (orestes-corosync.nevis.columbia.edu) * Start ExportMailInbox:1 (orestes-corosync.nevis.columbia.edu) * Start ExportMailFolders:1 (orestes-corosync.nevis.columbia.edu) * Start ExportMailForward:1 (orestes-corosync.nevis.columbia.edu) * Start ExportMailProcmailrc:1 (orestes-corosync.nevis.columbia.edu) * Start ExportUsrNevis:1 (orestes-corosync.nevis.columbia.edu) * Start ExportUsrNevisOffsite:1 (orestes-corosync.nevis.columbia.edu) * Start ExportWWW:1 (orestes-corosync.nevis.columbia.edu) * Start AdminLvm:0 (hypatia-corosync.nevis.columbia.edu) * Start FSUsrNevis:0 (hypatia-corosync.nevis.columbia.edu) * Start FSVarNevis:0 (hypatia-corosync.nevis.columbia.edu) * Start FSVirtualMachines:0 (hypatia-corosync.nevis.columbia.edu) * Start FSMail:0 (hypatia-corosync.nevis.columbia.edu) * Start FSWork:0 (hypatia-corosync.nevis.columbia.edu) * Start AdminLvm:1 (orestes-corosync.nevis.columbia.edu) * Start FSUsrNevis:1 (orestes-corosync.nevis.columbia.edu) * Start FSVarNevis:1 (orestes-corosync.nevis.columbia.edu) * Start FSVirtualMachines:1 (orestes-corosync.nevis.columbia.edu) * Start FSMail:1 (orestes-corosync.nevis.columbia.edu) * Start FSWork:1 (orestes-corosync.nevis.columbia.edu) * Start KVM-guest (hypatia-corosync.nevis.columbia.edu) * Start Proxy (orestes-corosync.nevis.columbia.edu) Executing cluster transition: * Pseudo action: AdminClone_pre_notify_start_0 * Resource action: StonithHypatia start on orestes-corosync.nevis.columbia.edu * Resource action: StonithOrestes start on hypatia-corosync.nevis.columbia.edu * Resource action: SymlinkEtcLibvirt:0 monitor on hypatia-corosync.nevis.columbia.edu * Resource action: Libvirtd:0 monitor on orestes-corosync.nevis.columbia.edu * Resource action: Libvirtd:0 monitor on hypatia-corosync.nevis.columbia.edu * Resource action: SymlinkTftp:0 monitor on hypatia-corosync.nevis.columbia.edu * Resource action: Xinetd:0 monitor on hypatia-corosync.nevis.columbia.edu * Resource action: SymlinkTftp:1 monitor on orestes-corosync.nevis.columbia.edu * Resource action: Xinetd:1 monitor on orestes-corosync.nevis.columbia.edu * Resource action: ExportMail:0 monitor on 
hypatia-corosync.nevis.columbia.edu * Resource action: ExportMailInbox:0 monitor on hypatia-corosync.nevis.columbia.edu * Resource action: ExportMailFolders:0 monitor on hypatia-corosync.nevis.columbia.edu * Resource action: ExportMailForward:0 monitor on hypatia-corosync.nevis.columbia.edu * Resource action: ExportMailProcmailrc:0 monitor on hypatia-corosync.nevis.columbia.edu * Resource action: ExportUsrNevis:0 monitor on hypatia-corosync.nevis.columbia.edu * Resource action: ExportUsrNevisOffsite:0 monitor on hypatia-corosync.nevis.columbia.edu * Resource action: ExportWWW:0 monitor on hypatia-corosync.nevis.columbia.edu * Resource action: ExportMail:1 monitor on orestes-corosync.nevis.columbia.edu * Resource action: ExportMailInbox:1 monitor on orestes-corosync.nevis.columbia.edu * Resource action: ExportMailFolders:1 monitor on orestes-corosync.nevis.columbia.edu * Resource action: ExportMailForward:1 monitor on orestes-corosync.nevis.columbia.edu * Resource action: ExportMailProcmailrc:1 monitor on orestes-corosync.nevis.columbia.edu * Resource action: ExportUsrNevis:1 monitor on orestes-corosync.nevis.columbia.edu * Resource action: ExportUsrNevisOffsite:1 monitor on orestes-corosync.nevis.columbia.edu * Resource action: ExportWWW:1 monitor on orestes-corosync.nevis.columbia.edu * Resource action: AdminLvm:0 monitor on hypatia-corosync.nevis.columbia.edu * Resource action: FSUsrNevis:0 monitor on hypatia-corosync.nevis.columbia.edu * Resource action: FSVarNevis:0 monitor on hypatia-corosync.nevis.columbia.edu * Resource action: FSVirtualMachines:0 monitor on hypatia-corosync.nevis.columbia.edu * Resource action: FSMail:0 monitor on hypatia-corosync.nevis.columbia.edu * Resource action: FSWork:0 monitor on hypatia-corosync.nevis.columbia.edu * Resource action: AdminLvm:1 monitor on orestes-corosync.nevis.columbia.edu * Resource action: FSUsrNevis:1 monitor on orestes-corosync.nevis.columbia.edu * Resource action: FSVarNevis:1 monitor on orestes-corosync.nevis.columbia.edu * Resource action: FSVirtualMachines:1 monitor on orestes-corosync.nevis.columbia.edu * Resource action: FSMail:1 monitor on orestes-corosync.nevis.columbia.edu * Resource action: FSWork:1 monitor on orestes-corosync.nevis.columbia.edu * Resource action: KVM-guest monitor on orestes-corosync.nevis.columbia.edu * Resource action: KVM-guest monitor on hypatia-corosync.nevis.columbia.edu * Resource action: Proxy monitor on orestes-corosync.nevis.columbia.edu * Resource action: Proxy monitor on hypatia-corosync.nevis.columbia.edu * Pseudo action: AdminClone_confirmed-pre_notify_start_0 * Pseudo action: AdminClone_start_0 * Resource action: AdminDrbd:0 start on hypatia-corosync.nevis.columbia.edu * Resource action: AdminDrbd:1 start on orestes-corosync.nevis.columbia.edu * Pseudo action: AdminClone_running_0 * Pseudo action: AdminClone_post_notify_running_0 * Resource action: AdminDrbd:0 notify on hypatia-corosync.nevis.columbia.edu * Resource action: AdminDrbd:1 notify on orestes-corosync.nevis.columbia.edu * Pseudo action: AdminClone_confirmed-post_notify_running_0 * Pseudo action: AdminClone_pre_notify_promote_0 * Resource action: AdminDrbd:0 notify on hypatia-corosync.nevis.columbia.edu * Resource action: AdminDrbd:1 notify on orestes-corosync.nevis.columbia.edu * Pseudo action: AdminClone_confirmed-pre_notify_promote_0 * Pseudo action: AdminClone_promote_0 * Resource action: AdminDrbd:0 promote on hypatia-corosync.nevis.columbia.edu * Resource action: AdminDrbd:1 promote on orestes-corosync.nevis.columbia.edu * 
Pseudo action: AdminClone_promoted_0 * Pseudo action: AdminClone_post_notify_promoted_0 * Resource action: AdminDrbd:0 notify on hypatia-corosync.nevis.columbia.edu * Resource action: AdminDrbd:1 notify on orestes-corosync.nevis.columbia.edu * Pseudo action: AdminClone_confirmed-post_notify_promoted_0 * Pseudo action: FilesystemClone_start_0 * Resource action: AdminDrbd:0 monitor=59000 on hypatia-corosync.nevis.columbia.edu * Resource action: AdminDrbd:1 monitor=59000 on orestes-corosync.nevis.columbia.edu * Pseudo action: FilesystemGroup:0_start_0 * Resource action: AdminLvm:0 start on hypatia-corosync.nevis.columbia.edu * Resource action: FSUsrNevis:0 start on hypatia-corosync.nevis.columbia.edu * Resource action: FSVarNevis:0 start on hypatia-corosync.nevis.columbia.edu * Resource action: FSVirtualMachines:0 start on hypatia-corosync.nevis.columbia.edu * Resource action: FSMail:0 start on hypatia-corosync.nevis.columbia.edu * Resource action: FSWork:0 start on hypatia-corosync.nevis.columbia.edu * Pseudo action: FilesystemGroup:1_start_0 * Resource action: AdminLvm:1 start on orestes-corosync.nevis.columbia.edu * Resource action: FSUsrNevis:1 start on orestes-corosync.nevis.columbia.edu * Resource action: FSVarNevis:1 start on orestes-corosync.nevis.columbia.edu * Resource action: FSVirtualMachines:1 start on orestes-corosync.nevis.columbia.edu * Resource action: FSMail:1 start on orestes-corosync.nevis.columbia.edu * Resource action: FSWork:1 start on orestes-corosync.nevis.columbia.edu * Pseudo action: FilesystemGroup:0_running_0 * Resource action: AdminLvm:0 monitor=30000 on hypatia-corosync.nevis.columbia.edu * Resource action: FSUsrNevis:0 monitor=20000 on hypatia-corosync.nevis.columbia.edu * Resource action: FSVarNevis:0 monitor=20000 on hypatia-corosync.nevis.columbia.edu * Resource action: FSVirtualMachines:0 monitor=20000 on hypatia-corosync.nevis.columbia.edu * Resource action: FSMail:0 monitor=20000 on hypatia-corosync.nevis.columbia.edu * Resource action: FSWork:0 monitor=20000 on hypatia-corosync.nevis.columbia.edu * Pseudo action: FilesystemGroup:1_running_0 * Resource action: AdminLvm:1 monitor=30000 on orestes-corosync.nevis.columbia.edu * Resource action: FSUsrNevis:1 monitor=20000 on orestes-corosync.nevis.columbia.edu * Resource action: FSVarNevis:1 monitor=20000 on orestes-corosync.nevis.columbia.edu * Resource action: FSVirtualMachines:1 monitor=20000 on orestes-corosync.nevis.columbia.edu * Resource action: FSMail:1 monitor=20000 on orestes-corosync.nevis.columbia.edu * Resource action: FSWork:1 monitor=20000 on orestes-corosync.nevis.columbia.edu * Pseudo action: FilesystemClone_running_0 * Resource action: CronAmbientTemperature start on hypatia-corosync.nevis.columbia.edu * Pseudo action: DhcpGroup_start_0 * Resource action: SymlinkDhcpdConf start on orestes-corosync.nevis.columbia.edu * Resource action: SymlinkSysconfigDhcpd start on orestes-corosync.nevis.columbia.edu * Resource action: SymlinkDhcpdLeases start on orestes-corosync.nevis.columbia.edu * Pseudo action: CupsClone_start_0 * Pseudo action: IPClone_start_0 * Pseudo action: LibvirtdClone_start_0 * Pseudo action: TftpClone_start_0 * Pseudo action: ExportsClone_start_0 * Resource action: CronAmbientTemperature monitor=60000 on hypatia-corosync.nevis.columbia.edu * Resource action: SymlinkDhcpdConf monitor=60000 on orestes-corosync.nevis.columbia.edu * Resource action: SymlinkSysconfigDhcpd monitor=60000 on orestes-corosync.nevis.columbia.edu * Resource action: SymlinkDhcpdLeases monitor=60000 on 
orestes-corosync.nevis.columbia.edu * Pseudo action: CupsGroup:0_start_0 * Resource action: SymlinkUsrShareCups:0 start on hypatia-corosync.nevis.columbia.edu * Resource action: SymlinkCupsdConf:0 start on hypatia-corosync.nevis.columbia.edu * Resource action: Cups:0 start on hypatia-corosync.nevis.columbia.edu * Pseudo action: CupsGroup:1_start_0 * Resource action: SymlinkUsrShareCups:1 start on orestes-corosync.nevis.columbia.edu * Resource action: SymlinkCupsdConf:1 start on orestes-corosync.nevis.columbia.edu * Resource action: Cups:1 start on orestes-corosync.nevis.columbia.edu * Pseudo action: IPGroup:0_start_0 * Resource action: ClusterIP:0 start on hypatia-corosync.nevis.columbia.edu * Resource action: ClusterIPLocal:0 start on hypatia-corosync.nevis.columbia.edu * Resource action: ClusterIPSandbox:0 start on hypatia-corosync.nevis.columbia.edu * Pseudo action: IPGroup:1_start_0 * Resource action: ClusterIP:1 start on orestes-corosync.nevis.columbia.edu * Resource action: ClusterIPLocal:1 start on orestes-corosync.nevis.columbia.edu * Resource action: ClusterIPSandbox:1 start on orestes-corosync.nevis.columbia.edu * Pseudo action: LibvirtdGroup:0_start_0 * Resource action: SymlinkEtcLibvirt:0 start on hypatia-corosync.nevis.columbia.edu * Resource action: Libvirtd:0 start on hypatia-corosync.nevis.columbia.edu * Pseudo action: LibvirtdGroup:1_start_0 * Resource action: SymlinkEtcLibvirt:1 start on orestes-corosync.nevis.columbia.edu * Resource action: Libvirtd:1 start on orestes-corosync.nevis.columbia.edu * Pseudo action: TftpGroup:0_start_0 * Resource action: SymlinkTftp:0 start on hypatia-corosync.nevis.columbia.edu * Resource action: Xinetd:0 start on hypatia-corosync.nevis.columbia.edu * Pseudo action: TftpGroup:1_start_0 * Resource action: SymlinkTftp:1 start on orestes-corosync.nevis.columbia.edu * Resource action: Xinetd:1 start on orestes-corosync.nevis.columbia.edu * Pseudo action: ExportsGroup:0_start_0 * Resource action: ExportMail:0 start on hypatia-corosync.nevis.columbia.edu * Resource action: ExportMailInbox:0 start on hypatia-corosync.nevis.columbia.edu * Resource action: ExportMailFolders:0 start on hypatia-corosync.nevis.columbia.edu * Resource action: ExportMailForward:0 start on hypatia-corosync.nevis.columbia.edu * Resource action: ExportMailProcmailrc:0 start on hypatia-corosync.nevis.columbia.edu * Resource action: ExportUsrNevis:0 start on hypatia-corosync.nevis.columbia.edu * Resource action: ExportUsrNevisOffsite:0 start on hypatia-corosync.nevis.columbia.edu * Resource action: ExportWWW:0 start on hypatia-corosync.nevis.columbia.edu * Pseudo action: ExportsGroup:1_start_0 * Resource action: ExportMail:1 start on orestes-corosync.nevis.columbia.edu * Resource action: ExportMailInbox:1 start on orestes-corosync.nevis.columbia.edu * Resource action: ExportMailFolders:1 start on orestes-corosync.nevis.columbia.edu * Resource action: ExportMailForward:1 start on orestes-corosync.nevis.columbia.edu * Resource action: ExportMailProcmailrc:1 start on orestes-corosync.nevis.columbia.edu * Resource action: ExportUsrNevis:1 start on orestes-corosync.nevis.columbia.edu * Resource action: ExportUsrNevisOffsite:1 start on orestes-corosync.nevis.columbia.edu * Resource action: ExportWWW:1 start on orestes-corosync.nevis.columbia.edu * Pseudo action: CupsGroup:0_running_0 * Resource action: SymlinkUsrShareCups:0 monitor=60000 on hypatia-corosync.nevis.columbia.edu * Resource action: SymlinkCupsdConf:0 monitor=60000 on hypatia-corosync.nevis.columbia.edu * Resource 
action: Cups:0 monitor=30000 on hypatia-corosync.nevis.columbia.edu * Pseudo action: CupsGroup:1_running_0 * Resource action: SymlinkUsrShareCups:1 monitor=60000 on orestes-corosync.nevis.columbia.edu * Resource action: SymlinkCupsdConf:1 monitor=60000 on orestes-corosync.nevis.columbia.edu * Resource action: Cups:1 monitor=30000 on orestes-corosync.nevis.columbia.edu * Pseudo action: CupsClone_running_0 * Pseudo action: IPGroup:0_running_0 * Resource action: ClusterIP:0 monitor=30000 on hypatia-corosync.nevis.columbia.edu * Resource action: ClusterIPLocal:0 monitor=31000 on hypatia-corosync.nevis.columbia.edu * Resource action: ClusterIPSandbox:0 monitor=32000 on hypatia-corosync.nevis.columbia.edu * Pseudo action: IPGroup:1_running_0 * Resource action: ClusterIP:1 monitor=30000 on orestes-corosync.nevis.columbia.edu * Resource action: ClusterIPLocal:1 monitor=31000 on orestes-corosync.nevis.columbia.edu * Resource action: ClusterIPSandbox:1 monitor=32000 on orestes-corosync.nevis.columbia.edu * Pseudo action: IPClone_running_0 * Pseudo action: LibvirtdGroup:0_running_0 * Resource action: SymlinkEtcLibvirt:0 monitor=60000 on hypatia-corosync.nevis.columbia.edu * Resource action: Libvirtd:0 monitor=30000 on hypatia-corosync.nevis.columbia.edu * Pseudo action: LibvirtdGroup:1_running_0 * Resource action: SymlinkEtcLibvirt:1 monitor=60000 on orestes-corosync.nevis.columbia.edu * Resource action: Libvirtd:1 monitor=30000 on orestes-corosync.nevis.columbia.edu * Pseudo action: LibvirtdClone_running_0 * Pseudo action: TftpGroup:0_running_0 * Resource action: SymlinkTftp:0 monitor=60000 on hypatia-corosync.nevis.columbia.edu * Pseudo action: TftpGroup:1_running_0 * Resource action: SymlinkTftp:1 monitor=60000 on orestes-corosync.nevis.columbia.edu * Pseudo action: TftpClone_running_0 * Pseudo action: ExportsGroup:0_running_0 * Pseudo action: ExportsGroup:1_running_0 * Pseudo action: ExportsClone_running_0 * Resource action: KVM-guest start on hypatia-corosync.nevis.columbia.edu * Resource action: Proxy start on orestes-corosync.nevis.columbia.edu Revised cluster status: Online: [ hypatia-corosync.nevis.columbia.edu orestes-corosync.nevis.columbia.edu ] - Master/Slave Set: AdminClone [AdminDrbd] + Clone Set: AdminClone [AdminDrbd] (promotable) Masters: [ hypatia-corosync.nevis.columbia.edu orestes-corosync.nevis.columbia.edu ] CronAmbientTemperature (ocf::heartbeat:symlink): Started hypatia-corosync.nevis.columbia.edu StonithHypatia (stonith:fence_nut): Started orestes-corosync.nevis.columbia.edu StonithOrestes (stonith:fence_nut): Started hypatia-corosync.nevis.columbia.edu Resource Group: DhcpGroup SymlinkDhcpdConf (ocf::heartbeat:symlink): Started orestes-corosync.nevis.columbia.edu SymlinkSysconfigDhcpd (ocf::heartbeat:symlink): Started orestes-corosync.nevis.columbia.edu SymlinkDhcpdLeases (ocf::heartbeat:symlink): Started orestes-corosync.nevis.columbia.edu Dhcpd (lsb:dhcpd): Stopped ( disabled ) DhcpIP (ocf::heartbeat:IPaddr2): Stopped Clone Set: CupsClone [CupsGroup] Started: [ hypatia-corosync.nevis.columbia.edu orestes-corosync.nevis.columbia.edu ] Clone Set: IPClone [IPGroup] (unique) Resource Group: IPGroup:0 ClusterIP:0 (ocf::heartbeat:IPaddr2): Started hypatia-corosync.nevis.columbia.edu ClusterIPLocal:0 (ocf::heartbeat:IPaddr2): Started hypatia-corosync.nevis.columbia.edu ClusterIPSandbox:0 (ocf::heartbeat:IPaddr2): Started hypatia-corosync.nevis.columbia.edu Resource Group: IPGroup:1 ClusterIP:1 (ocf::heartbeat:IPaddr2): Started orestes-corosync.nevis.columbia.edu ClusterIPLocal:1 
(ocf::heartbeat:IPaddr2): Started orestes-corosync.nevis.columbia.edu ClusterIPSandbox:1 (ocf::heartbeat:IPaddr2): Started orestes-corosync.nevis.columbia.edu Clone Set: LibvirtdClone [LibvirtdGroup] Started: [ hypatia-corosync.nevis.columbia.edu orestes-corosync.nevis.columbia.edu ] Clone Set: TftpClone [TftpGroup] Started: [ hypatia-corosync.nevis.columbia.edu orestes-corosync.nevis.columbia.edu ] Clone Set: ExportsClone [ExportsGroup] Started: [ hypatia-corosync.nevis.columbia.edu orestes-corosync.nevis.columbia.edu ] Clone Set: FilesystemClone [FilesystemGroup] Started: [ hypatia-corosync.nevis.columbia.edu orestes-corosync.nevis.columbia.edu ] KVM-guest (ocf::heartbeat:VirtualDomain): Started hypatia-corosync.nevis.columbia.edu Proxy (ocf::heartbeat:VirtualDomain): Started orestes-corosync.nevis.columbia.edu diff --git a/cts/scheduler/master-promotion-constraint.summary b/cts/scheduler/master-promotion-constraint.summary index 80b2505af2..a1ac8b9342 100644 --- a/cts/scheduler/master-promotion-constraint.summary +++ b/cts/scheduler/master-promotion-constraint.summary @@ -1,33 +1,33 @@ 4 of 5 resources DISABLED and 0 BLOCKED from being started due to failures Current cluster status: Online: [ hex-13 hex-14 ] fencing-sbd (stonith:external/sbd): Started hex-13 Resource Group: g0 d0 (ocf::pacemaker:Dummy): Stopped ( disabled ) d1 (ocf::pacemaker:Dummy): Stopped ( disabled ) - Master/Slave Set: ms0 [s0] + Clone Set: ms0 [s0] (promotable) Masters: [ hex-14 ] Slaves: [ hex-13 ] Transition Summary: * Demote s0:0 (Master -> Slave hex-14) Executing cluster transition: * Resource action: s0:1 cancel=20000 on hex-14 * Pseudo action: ms0_demote_0 * Resource action: s0:1 demote on hex-14 * Pseudo action: ms0_demoted_0 * Resource action: s0:1 monitor=21000 on hex-14 Revised cluster status: Online: [ hex-13 hex-14 ] fencing-sbd (stonith:external/sbd): Started hex-13 Resource Group: g0 d0 (ocf::pacemaker:Dummy): Stopped ( disabled ) d1 (ocf::pacemaker:Dummy): Stopped ( disabled ) - Master/Slave Set: ms0 [s0] + Clone Set: ms0 [s0] (promotable) Slaves: [ hex-13 hex-14 ] diff --git a/cts/scheduler/master-pseudo.summary b/cts/scheduler/master-pseudo.summary index 8f67a68afb..6233eecb15 100644 --- a/cts/scheduler/master-pseudo.summary +++ b/cts/scheduler/master-pseudo.summary @@ -1,59 +1,59 @@ Current cluster status: Node raki.linbit: standby Online: [ sambuca.linbit ] ip_float_right (ocf::heartbeat:IPaddr2): Stopped - Master/Slave Set: ms_drbd_float [drbd_float] + Clone Set: ms_drbd_float [drbd_float] (promotable) Slaves: [ sambuca.linbit ] Resource Group: nfsexport ip_nfs (ocf::heartbeat:IPaddr2): Stopped fs_float (ocf::heartbeat:Filesystem): Stopped Transition Summary: * Start ip_float_right (sambuca.linbit) * Restart drbd_float:0 ( Slave -> Master sambuca.linbit ) due to required ip_float_right start * Start ip_nfs (sambuca.linbit) Executing cluster transition: * Resource action: ip_float_right start on sambuca.linbit * Pseudo action: ms_drbd_float_pre_notify_stop_0 * Resource action: drbd_float:0 notify on sambuca.linbit * Pseudo action: ms_drbd_float_confirmed-pre_notify_stop_0 * Pseudo action: ms_drbd_float_stop_0 * Resource action: drbd_float:0 stop on sambuca.linbit * Pseudo action: ms_drbd_float_stopped_0 * Pseudo action: ms_drbd_float_post_notify_stopped_0 * Pseudo action: ms_drbd_float_confirmed-post_notify_stopped_0 * Pseudo action: ms_drbd_float_pre_notify_start_0 * Pseudo action: all_stopped * Pseudo action: ms_drbd_float_confirmed-pre_notify_start_0 * Pseudo action: ms_drbd_float_start_0 * 
Resource action: drbd_float:0 start on sambuca.linbit
* Pseudo action: ms_drbd_float_running_0
* Pseudo action: ms_drbd_float_post_notify_running_0
* Resource action: drbd_float:0 notify on sambuca.linbit
* Pseudo action: ms_drbd_float_confirmed-post_notify_running_0
* Pseudo action: ms_drbd_float_pre_notify_promote_0
* Resource action: drbd_float:0 notify on sambuca.linbit
* Pseudo action: ms_drbd_float_confirmed-pre_notify_promote_0
* Pseudo action: ms_drbd_float_promote_0
* Resource action: drbd_float:0 promote on sambuca.linbit
* Pseudo action: ms_drbd_float_promoted_0
* Pseudo action: ms_drbd_float_post_notify_promoted_0
* Resource action: drbd_float:0 notify on sambuca.linbit
* Pseudo action: ms_drbd_float_confirmed-post_notify_promoted_0
* Pseudo action: nfsexport_start_0
* Resource action: ip_nfs start on sambuca.linbit

Revised cluster status:
Node raki.linbit: standby
Online: [ sambuca.linbit ]

ip_float_right (ocf::heartbeat:IPaddr2): Started sambuca.linbit
- Master/Slave Set: ms_drbd_float [drbd_float]
+ Clone Set: ms_drbd_float [drbd_float] (promotable)
    Masters: [ sambuca.linbit ]
Resource Group: nfsexport
    ip_nfs (ocf::heartbeat:IPaddr2): Started sambuca.linbit
    fs_float (ocf::heartbeat:Filesystem): Stopped

diff --git a/cts/scheduler/master-reattach.summary b/cts/scheduler/master-reattach.summary
index 008a03b2bf..acd1613cce 100644
--- a/cts/scheduler/master-reattach.summary
+++ b/cts/scheduler/master-reattach.summary
@@ -1,32 +1,32 @@
Current cluster status:
Online: [ dktest1 dktest2 ]

- Master/Slave Set: ms-drbd1 [drbd1] (unmanaged)
+ Clone Set: ms-drbd1 [drbd1] (promotable) (unmanaged)
    drbd1 (ocf::heartbeat:drbd): Master dktest1 ( unmanaged )
    drbd1 (ocf::heartbeat:drbd): Slave dktest2 ( unmanaged )
Resource Group: apache
    apache-vip (ocf::heartbeat:IPaddr2): Started dktest1 (unmanaged)
    mount (ocf::heartbeat:Filesystem): Started dktest1 (unmanaged)
    webserver (ocf::heartbeat:apache): Started dktest1 (unmanaged)

Transition Summary:

Executing cluster transition:
* Resource action: drbd1:0 monitor=10000 on dktest1
* Resource action: drbd1:0 monitor=11000 on dktest2
* Resource action: apache-vip monitor=60000 on dktest1
* Resource action: mount monitor=10000 on dktest1
* Resource action: webserver monitor=30000 on dktest1

Revised cluster status:
Online: [ dktest1 dktest2 ]

- Master/Slave Set: ms-drbd1 [drbd1] (unmanaged)
+ Clone Set: ms-drbd1 [drbd1] (promotable) (unmanaged)
    drbd1 (ocf::heartbeat:drbd): Master dktest1 ( unmanaged )
    drbd1 (ocf::heartbeat:drbd): Slave dktest2 ( unmanaged )
Resource Group: apache
    apache-vip (ocf::heartbeat:IPaddr2): Started dktest1 (unmanaged)
    mount (ocf::heartbeat:Filesystem): Started dktest1 (unmanaged)
    webserver (ocf::heartbeat:apache): Started dktest1 (unmanaged)

diff --git a/cts/scheduler/master-role.summary b/cts/scheduler/master-role.summary
index d2e144ef7b..04edc56492 100644
--- a/cts/scheduler/master-role.summary
+++ b/cts/scheduler/master-role.summary
@@ -1,22 +1,22 @@
Current cluster status:
Online: [ sles11-a sles11-b ]

- Master/Slave Set: ms_res_Stateful_1 [res_Stateful_1]
+ Clone Set: ms_res_Stateful_1 [res_Stateful_1] (promotable)
    Masters: [ sles11-a sles11-b ]

Transition Summary:
* Demote res_Stateful_1:1 (Master -> Slave sles11-a)

Executing cluster transition:
* Pseudo action: ms_res_Stateful_1_demote_0
* Resource action: res_Stateful_1:0 demote on sles11-a
* Pseudo action: ms_res_Stateful_1_demoted_0

Revised cluster status:
Online: [ sles11-a sles11-b ]

- Master/Slave Set: ms_res_Stateful_1 [res_Stateful_1]
+ Clone Set: ms_res_Stateful_1 [res_Stateful_1] (promotable)
    Masters: [ sles11-b ]
    Slaves: [ sles11-a ]

diff --git a/cts/scheduler/master-score-startup.summary b/cts/scheduler/master-score-startup.summary
index 0d2a5077d2..9206d2eb70 100644
--- a/cts/scheduler/master-score-startup.summary
+++ b/cts/scheduler/master-score-startup.summary
@@ -1,52 +1,52 @@
Current cluster status:
Online: [ srv1 srv2 ]

- Master/Slave Set: pgsql-ha [pgsqld]
+ Clone Set: pgsql-ha [pgsqld] (promotable)
    Stopped: [ srv1 srv2 ]
pgsql-master-ip (ocf::heartbeat:IPaddr2): Stopped

Transition Summary:
* Promote pgsqld:0 ( Stopped -> Master srv1 )
* Start pgsqld:1 ( srv2 )
* Start pgsql-master-ip ( srv1 )

Executing cluster transition:
* Resource action: pgsqld:0 monitor on srv1
* Resource action: pgsqld:1 monitor on srv2
* Pseudo action: pgsql-ha_pre_notify_start_0
* Resource action: pgsql-master-ip monitor on srv2
* Resource action: pgsql-master-ip monitor on srv1
* Pseudo action: pgsql-ha_confirmed-pre_notify_start_0
* Pseudo action: pgsql-ha_start_0
* Resource action: pgsqld:0 start on srv1
* Resource action: pgsqld:1 start on srv2
* Pseudo action: pgsql-ha_running_0
* Pseudo action: pgsql-ha_post_notify_running_0
* Resource action: pgsqld:0 notify on srv1
* Resource action: pgsqld:1 notify on srv2
* Pseudo action: pgsql-ha_confirmed-post_notify_running_0
* Pseudo action: pgsql-ha_pre_notify_promote_0
* Resource action: pgsqld:0 notify on srv1
* Resource action: pgsqld:1 notify on srv2
* Pseudo action: pgsql-ha_confirmed-pre_notify_promote_0
* Pseudo action: pgsql-ha_promote_0
* Resource action: pgsqld:0 promote on srv1
* Pseudo action: pgsql-ha_promoted_0
* Pseudo action: pgsql-ha_post_notify_promoted_0
* Resource action: pgsqld:0 notify on srv1
* Resource action: pgsqld:1 notify on srv2
* Pseudo action: pgsql-ha_confirmed-post_notify_promoted_0
* Resource action: pgsql-master-ip start on srv1
* Resource action: pgsqld:0 monitor=15000 on srv1
* Resource action: pgsqld:1 monitor=16000 on srv2
* Resource action: pgsql-master-ip monitor=10000 on srv1

Revised cluster status:
Online: [ srv1 srv2 ]

- Master/Slave Set: pgsql-ha [pgsqld]
+ Clone Set: pgsql-ha [pgsqld] (promotable)
    Masters: [ srv1 ]
    Slaves: [ srv2 ]
pgsql-master-ip (ocf::heartbeat:IPaddr2): Started srv1

diff --git a/cts/scheduler/master-stop.summary b/cts/scheduler/master-stop.summary
index 8b861df811..e1d39534db 100644
--- a/cts/scheduler/master-stop.summary
+++ b/cts/scheduler/master-stop.summary
@@ -1,23 +1,23 @@
Current cluster status:
Online: [ node1 node2 node3 ]

- Master/Slave Set: m [dummy]
+ Clone Set: m [dummy] (promotable)
    Slaves: [ node1 node2 node3 ]

Transition Summary:
* Stop dummy:2 ( Slave node3 ) due to node availability

Executing cluster transition:
* Pseudo action: m_stop_0
* Resource action: dummy:2 stop on node3
* Pseudo action: m_stopped_0
* Pseudo action: all_stopped

Revised cluster status:
Online: [ node1 node2 node3 ]

- Master/Slave Set: m [dummy]
+ Clone Set: m [dummy] (promotable)
    Slaves: [ node1 node2 ]
    Stopped: [ node3 ]

diff --git a/cts/scheduler/master-unmanaged-monitor.summary b/cts/scheduler/master-unmanaged-monitor.summary
index 27a34b35ec..a636a69c5d 100644
--- a/cts/scheduler/master-unmanaged-monitor.summary
+++ b/cts/scheduler/master-unmanaged-monitor.summary
@@ -1,67 +1,67 @@
Current cluster status:
Online: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ]

Clone Set: Fencing [FencingChild] (unmanaged)
    Stopped: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ]
Resource Group: group-1
    r192.168.122.112 (ocf::heartbeat:IPaddr): Started pcmk-3 (unmanaged)
    r192.168.122.113 (ocf::heartbeat:IPaddr): Started pcmk-3 (unmanaged)
    r192.168.122.114 (ocf::heartbeat:IPaddr): Started pcmk-3 (unmanaged)
rsc_pcmk-1 (ocf::heartbeat:IPaddr): Started pcmk-1 (unmanaged)
rsc_pcmk-2 (ocf::heartbeat:IPaddr): Started pcmk-2 (unmanaged)
rsc_pcmk-3 (ocf::heartbeat:IPaddr): Started pcmk-3 (unmanaged)
rsc_pcmk-4 (ocf::heartbeat:IPaddr): Started pcmk-4 (unmanaged)
lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Started pcmk-3 (unmanaged)
migrator (ocf::pacemaker:Dummy): Started pcmk-4 (unmanaged)
Clone Set: Connectivity [ping-1] (unmanaged)
    ping-1 (ocf::pacemaker:ping): Started pcmk-2 (unmanaged)
    ping-1 (ocf::pacemaker:ping): Started pcmk-3 (unmanaged)
    ping-1 (ocf::pacemaker:ping): Started pcmk-4 (unmanaged)
    ping-1 (ocf::pacemaker:ping): Started pcmk-1 (unmanaged)
- Master/Slave Set: master-1 [stateful-1] (unmanaged)
+ Clone Set: master-1 [stateful-1] (promotable) (unmanaged)
    stateful-1 (ocf::pacemaker:Stateful): Slave pcmk-2 (unmanaged)
    stateful-1 (ocf::pacemaker:Stateful): Master pcmk-3 (unmanaged)
    stateful-1 (ocf::pacemaker:Stateful): Slave pcmk-4 (unmanaged)
    Stopped: [ pcmk-1 ]

Transition Summary:

Executing cluster transition:
* Resource action: lsb-dummy monitor=5000 on pcmk-3
* Resource action: migrator monitor=10000 on pcmk-4
* Resource action: ping-1:0 monitor=60000 on pcmk-2
* Resource action: ping-1:0 monitor=60000 on pcmk-3
* Resource action: ping-1:0 monitor=60000 on pcmk-4
* Resource action: ping-1:0 monitor=60000 on pcmk-1
* Resource action: stateful-1:0 monitor=15000 on pcmk-2
* Resource action: stateful-1:0 monitor on pcmk-1
* Resource action: stateful-1:0 monitor=16000 on pcmk-3
* Resource action: stateful-1:0 monitor=15000 on pcmk-4

Revised cluster status:
Online: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ]

Clone Set: Fencing [FencingChild] (unmanaged)
    Stopped: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ]
Resource Group: group-1
    r192.168.122.112 (ocf::heartbeat:IPaddr): Started pcmk-3 (unmanaged)
    r192.168.122.113 (ocf::heartbeat:IPaddr): Started pcmk-3 (unmanaged)
    r192.168.122.114 (ocf::heartbeat:IPaddr): Started pcmk-3 (unmanaged)
rsc_pcmk-1 (ocf::heartbeat:IPaddr): Started pcmk-1 (unmanaged)
rsc_pcmk-2 (ocf::heartbeat:IPaddr): Started pcmk-2 (unmanaged)
rsc_pcmk-3 (ocf::heartbeat:IPaddr): Started pcmk-3 (unmanaged)
rsc_pcmk-4 (ocf::heartbeat:IPaddr): Started pcmk-4 (unmanaged)
lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Started pcmk-3 (unmanaged)
migrator (ocf::pacemaker:Dummy): Started pcmk-4 (unmanaged)
Clone Set: Connectivity [ping-1] (unmanaged)
    ping-1 (ocf::pacemaker:ping): Started pcmk-2 (unmanaged)
    ping-1 (ocf::pacemaker:ping): Started pcmk-3 (unmanaged)
    ping-1 (ocf::pacemaker:ping): Started pcmk-4 (unmanaged)
    ping-1 (ocf::pacemaker:ping): Started pcmk-1 (unmanaged)
- Master/Slave Set: master-1 [stateful-1] (unmanaged)
+ Clone Set: master-1 [stateful-1] (promotable) (unmanaged)
    stateful-1 (ocf::pacemaker:Stateful): Slave pcmk-2 (unmanaged)
    stateful-1 (ocf::pacemaker:Stateful): Master pcmk-3 (unmanaged)
    stateful-1 (ocf::pacemaker:Stateful): Slave pcmk-4 (unmanaged)
    Stopped: [ pcmk-1 ]

diff --git a/cts/scheduler/master_monitor_restart.summary b/cts/scheduler/master_monitor_restart.summary
index 05b64601ed..26e3a285d8 100644
--- a/cts/scheduler/master_monitor_restart.summary
+++ b/cts/scheduler/master_monitor_restart.summary
@@ -1,22 +1,22 @@
Current cluster status:
Node node2 (1048225984): standby
Online: [ node1 ]

- Master/Slave Set: MS_RSC [MS_RSC_NATIVE]
+ Clone Set: MS_RSC [MS_RSC_NATIVE] (promotable)
    Masters: [ node1 ]
    Stopped: [ node2 ]

Transition Summary:

Executing cluster transition:
* Resource action: MS_RSC_NATIVE:0 monitor=5000 on node1

Revised cluster status:
Node node2 (1048225984): standby
Online: [ node1 ]

- Master/Slave Set: MS_RSC [MS_RSC_NATIVE]
+ Clone Set: MS_RSC [MS_RSC_NATIVE] (promotable)
    Masters: [ node1 ]
    Stopped: [ node2 ]

diff --git a/cts/scheduler/migrate-fencing.summary b/cts/scheduler/migrate-fencing.summary
index d7821b6ce8..b46be46ab3 100644
--- a/cts/scheduler/migrate-fencing.summary
+++ b/cts/scheduler/migrate-fencing.summary
@@ -1,108 +1,108 @@
Current cluster status:
Node pcmk-4: UNCLEAN (online)
Online: [ pcmk-1 pcmk-2 pcmk-3 ]

Clone Set: Fencing [FencingChild]
    Started: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ]
Resource Group: group-1
    r192.168.101.181 (ocf::heartbeat:IPaddr): Started pcmk-4
    r192.168.101.182 (ocf::heartbeat:IPaddr): Started pcmk-4
    r192.168.101.183 (ocf::heartbeat:IPaddr): Started pcmk-4
rsc_pcmk-1 (ocf::heartbeat:IPaddr): Started pcmk-1
rsc_pcmk-2 (ocf::heartbeat:IPaddr): Started pcmk-2
rsc_pcmk-3 (ocf::heartbeat:IPaddr): Started pcmk-3
rsc_pcmk-4 (ocf::heartbeat:IPaddr): Started pcmk-4
lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Started pcmk-4
migrator (ocf::pacemaker:Dummy): Started pcmk-1
Clone Set: Connectivity [ping-1]
    Started: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ]
- Master/Slave Set: master-1 [stateful-1]
+ Clone Set: master-1 [stateful-1] (promotable)
    Masters: [ pcmk-4 ]
    Slaves: [ pcmk-1 pcmk-2 pcmk-3 ]

Transition Summary:
* Fence (reboot) pcmk-4 'termination was requested'
* Stop FencingChild:0 (pcmk-4) due to node availability
* Move r192.168.101.181 ( pcmk-4 -> pcmk-1 )
* Move r192.168.101.182 ( pcmk-4 -> pcmk-1 )
* Move r192.168.101.183 ( pcmk-4 -> pcmk-1 )
* Move rsc_pcmk-4 ( pcmk-4 -> pcmk-2 )
* Move lsb-dummy ( pcmk-4 -> pcmk-1 )
* Migrate migrator ( pcmk-1 -> pcmk-3 )
* Stop ping-1:0 (pcmk-4) due to node availability
* Stop stateful-1:0 ( Master pcmk-4 ) due to node availability
* Promote stateful-1:1 (Slave -> Master pcmk-1)

Executing cluster transition:
* Pseudo action: Fencing_stop_0
* Resource action: stateful-1:3 monitor=15000 on pcmk-3
* Resource action: stateful-1:2 monitor=15000 on pcmk-2
* Fencing pcmk-4 (reboot)
* Pseudo action: FencingChild:0_stop_0
* Pseudo action: Fencing_stopped_0
* Pseudo action: rsc_pcmk-4_stop_0
* Pseudo action: lsb-dummy_stop_0
* Pseudo action: Connectivity_stop_0
* Pseudo action: stonith_complete
* Pseudo action: group-1_stop_0
* Pseudo action: r192.168.101.183_stop_0
* Resource action: rsc_pcmk-4 start on pcmk-2
* Resource action: migrator migrate_to on pcmk-1
* Pseudo action: ping-1:0_stop_0
* Pseudo action: Connectivity_stopped_0
* Pseudo action: r192.168.101.182_stop_0
* Resource action: rsc_pcmk-4 monitor=5000 on pcmk-2
* Resource action: migrator migrate_from on pcmk-3
* Resource action: migrator stop on pcmk-1
* Pseudo action: r192.168.101.181_stop_0
* Pseudo action: migrator_start_0
* Pseudo action: group-1_stopped_0
* Resource action: migrator monitor=10000 on pcmk-3
* Pseudo action: master-1_demote_0
* Pseudo action: stateful-1:0_demote_0
* Pseudo action: master-1_demoted_0
* Pseudo action: master-1_stop_0
* Pseudo action: stateful-1:0_stop_0
* Pseudo action: master-1_stopped_0
* Pseudo action: all_stopped
* Pseudo action: master-1_promote_0
* Resource action: stateful-1:1 promote on pcmk-1
* Pseudo action: master-1_promoted_0
* Pseudo action: group-1_start_0
* Resource action: r192.168.101.181 start on pcmk-1
* Resource action: r192.168.101.182 start on pcmk-1
* Resource action: r192.168.101.183 start on pcmk-1
* Resource action:
stateful-1:1 monitor=16000 on pcmk-1 * Pseudo action: group-1_running_0 * Resource action: r192.168.101.181 monitor=5000 on pcmk-1 * Resource action: r192.168.101.182 monitor=5000 on pcmk-1 * Resource action: r192.168.101.183 monitor=5000 on pcmk-1 * Resource action: lsb-dummy start on pcmk-1 * Resource action: lsb-dummy monitor=5000 on pcmk-1 Revised cluster status: Online: [ pcmk-1 pcmk-2 pcmk-3 ] OFFLINE: [ pcmk-4 ] Clone Set: Fencing [FencingChild] Started: [ pcmk-1 pcmk-2 pcmk-3 ] Stopped: [ pcmk-4 ] Resource Group: group-1 r192.168.101.181 (ocf::heartbeat:IPaddr): Started pcmk-1 r192.168.101.182 (ocf::heartbeat:IPaddr): Started pcmk-1 r192.168.101.183 (ocf::heartbeat:IPaddr): Started pcmk-1 rsc_pcmk-1 (ocf::heartbeat:IPaddr): Started pcmk-1 rsc_pcmk-2 (ocf::heartbeat:IPaddr): Started pcmk-2 rsc_pcmk-3 (ocf::heartbeat:IPaddr): Started pcmk-3 rsc_pcmk-4 (ocf::heartbeat:IPaddr): Started pcmk-2 lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Started pcmk-1 migrator (ocf::pacemaker:Dummy): Started pcmk-3 Clone Set: Connectivity [ping-1] Started: [ pcmk-1 pcmk-2 pcmk-3 ] Stopped: [ pcmk-4 ] - Master/Slave Set: master-1 [stateful-1] + Clone Set: master-1 [stateful-1] (promotable) Masters: [ pcmk-1 ] Slaves: [ pcmk-2 pcmk-3 ] Stopped: [ pcmk-4 ] diff --git a/cts/scheduler/migrate-partial-4.summary b/cts/scheduler/migrate-partial-4.summary index 8fd1d4cfa5..b67085c497 100644 --- a/cts/scheduler/migrate-partial-4.summary +++ b/cts/scheduler/migrate-partial-4.summary @@ -1,125 +1,125 @@ Current cluster status: Online: [ lustre01-left lustre02-left lustre03-left lustre04-left ] drbd-local (ocf::vds-ok:Ticketer): Started lustre01-left drbd-stacked (ocf::vds-ok:Ticketer): Stopped drbd-testfs-local (ocf::vds-ok:Ticketer): Stopped drbd-testfs-stacked (ocf::vds-ok:Ticketer): Stopped ip-testfs-mdt0000-left (ocf::heartbeat:IPaddr2): Stopped ip-testfs-ost0000-left (ocf::heartbeat:IPaddr2): Stopped ip-testfs-ost0001-left (ocf::heartbeat:IPaddr2): Stopped ip-testfs-ost0002-left (ocf::heartbeat:IPaddr2): Stopped ip-testfs-ost0003-left (ocf::heartbeat:IPaddr2): Stopped lustre (ocf::vds-ok:Ticketer): Started lustre03-left mgs (ocf::vds-ok:lustre-server): Stopped testfs (ocf::vds-ok:Ticketer): Started lustre02-left testfs-mdt0000 (ocf::vds-ok:lustre-server): Stopped testfs-ost0000 (ocf::vds-ok:lustre-server): Stopped testfs-ost0001 (ocf::vds-ok:lustre-server): Stopped testfs-ost0002 (ocf::vds-ok:lustre-server): Stopped testfs-ost0003 (ocf::vds-ok:lustre-server): Stopped Resource Group: booth ip-booth (ocf::heartbeat:IPaddr2): Started lustre02-left boothd (ocf::pacemaker:booth-site): Started lustre02-left - Master/Slave Set: ms-drbd-mgs [drbd-mgs] + Clone Set: ms-drbd-mgs [drbd-mgs] (promotable) Stopped: [ lustre01-left lustre02-left lustre03-left lustre04-left ] - Master/Slave Set: ms-drbd-testfs-mdt0000 [drbd-testfs-mdt0000] + Clone Set: ms-drbd-testfs-mdt0000 [drbd-testfs-mdt0000] (promotable) Stopped: [ lustre01-left lustre02-left lustre03-left lustre04-left ] - Master/Slave Set: ms-drbd-testfs-mdt0000-left [drbd-testfs-mdt0000-left] + Clone Set: ms-drbd-testfs-mdt0000-left [drbd-testfs-mdt0000-left] (promotable) Stopped: [ lustre01-left lustre02-left lustre03-left lustre04-left ] - Master/Slave Set: ms-drbd-testfs-ost0000 [drbd-testfs-ost0000] + Clone Set: ms-drbd-testfs-ost0000 [drbd-testfs-ost0000] (promotable) Stopped: [ lustre01-left lustre02-left lustre03-left lustre04-left ] - Master/Slave Set: ms-drbd-testfs-ost0000-left [drbd-testfs-ost0000-left] + Clone Set: ms-drbd-testfs-ost0000-left 
[drbd-testfs-ost0000-left] (promotable) Stopped: [ lustre01-left lustre02-left lustre03-left lustre04-left ] - Master/Slave Set: ms-drbd-testfs-ost0001 [drbd-testfs-ost0001] + Clone Set: ms-drbd-testfs-ost0001 [drbd-testfs-ost0001] (promotable) Stopped: [ lustre01-left lustre02-left lustre03-left lustre04-left ] - Master/Slave Set: ms-drbd-testfs-ost0001-left [drbd-testfs-ost0001-left] + Clone Set: ms-drbd-testfs-ost0001-left [drbd-testfs-ost0001-left] (promotable) Stopped: [ lustre01-left lustre02-left lustre03-left lustre04-left ] - Master/Slave Set: ms-drbd-testfs-ost0002 [drbd-testfs-ost0002] + Clone Set: ms-drbd-testfs-ost0002 [drbd-testfs-ost0002] (promotable) Stopped: [ lustre01-left lustre02-left lustre03-left lustre04-left ] - Master/Slave Set: ms-drbd-testfs-ost0002-left [drbd-testfs-ost0002-left] + Clone Set: ms-drbd-testfs-ost0002-left [drbd-testfs-ost0002-left] (promotable) Stopped: [ lustre01-left lustre02-left lustre03-left lustre04-left ] - Master/Slave Set: ms-drbd-testfs-ost0003 [drbd-testfs-ost0003] + Clone Set: ms-drbd-testfs-ost0003 [drbd-testfs-ost0003] (promotable) Stopped: [ lustre01-left lustre02-left lustre03-left lustre04-left ] - Master/Slave Set: ms-drbd-testfs-ost0003-left [drbd-testfs-ost0003-left] + Clone Set: ms-drbd-testfs-ost0003-left [drbd-testfs-ost0003-left] (promotable) Stopped: [ lustre01-left lustre02-left lustre03-left lustre04-left ] Transition Summary: * Start drbd-stacked (lustre02-left) * Start drbd-testfs-local (lustre03-left) * Migrate lustre ( lustre03-left -> lustre04-left ) * Move testfs ( lustre02-left -> lustre03-left ) * Start drbd-mgs:0 (lustre01-left) * Start drbd-mgs:1 (lustre02-left) Executing cluster transition: * Resource action: drbd-stacked start on lustre02-left * Resource action: drbd-testfs-local start on lustre03-left * Resource action: lustre migrate_to on lustre03-left * Resource action: testfs stop on lustre02-left * Resource action: testfs stop on lustre01-left * Pseudo action: ms-drbd-mgs_pre_notify_start_0 * Resource action: lustre migrate_from on lustre04-left * Resource action: lustre stop on lustre03-left * Resource action: testfs start on lustre03-left * Pseudo action: ms-drbd-mgs_confirmed-pre_notify_start_0 * Pseudo action: ms-drbd-mgs_start_0 * Pseudo action: all_stopped * Pseudo action: lustre_start_0 * Resource action: drbd-mgs:0 start on lustre01-left * Resource action: drbd-mgs:1 start on lustre02-left * Pseudo action: ms-drbd-mgs_running_0 * Pseudo action: ms-drbd-mgs_post_notify_running_0 * Resource action: drbd-mgs:0 notify on lustre01-left * Resource action: drbd-mgs:1 notify on lustre02-left * Pseudo action: ms-drbd-mgs_confirmed-post_notify_running_0 * Resource action: drbd-mgs:0 monitor=30000 on lustre01-left * Resource action: drbd-mgs:1 monitor=30000 on lustre02-left Revised cluster status: Online: [ lustre01-left lustre02-left lustre03-left lustre04-left ] drbd-local (ocf::vds-ok:Ticketer): Started lustre01-left drbd-stacked (ocf::vds-ok:Ticketer): Started lustre02-left drbd-testfs-local (ocf::vds-ok:Ticketer): Started lustre03-left drbd-testfs-stacked (ocf::vds-ok:Ticketer): Stopped ip-testfs-mdt0000-left (ocf::heartbeat:IPaddr2): Stopped ip-testfs-ost0000-left (ocf::heartbeat:IPaddr2): Stopped ip-testfs-ost0001-left (ocf::heartbeat:IPaddr2): Stopped ip-testfs-ost0002-left (ocf::heartbeat:IPaddr2): Stopped ip-testfs-ost0003-left (ocf::heartbeat:IPaddr2): Stopped lustre (ocf::vds-ok:Ticketer): Started lustre04-left mgs (ocf::vds-ok:lustre-server): Stopped testfs (ocf::vds-ok:Ticketer): Started 
lustre03-left testfs-mdt0000 (ocf::vds-ok:lustre-server): Stopped testfs-ost0000 (ocf::vds-ok:lustre-server): Stopped testfs-ost0001 (ocf::vds-ok:lustre-server): Stopped testfs-ost0002 (ocf::vds-ok:lustre-server): Stopped testfs-ost0003 (ocf::vds-ok:lustre-server): Stopped Resource Group: booth ip-booth (ocf::heartbeat:IPaddr2): Started lustre02-left boothd (ocf::pacemaker:booth-site): Started lustre02-left - Master/Slave Set: ms-drbd-mgs [drbd-mgs] + Clone Set: ms-drbd-mgs [drbd-mgs] (promotable) Slaves: [ lustre01-left lustre02-left ] - Master/Slave Set: ms-drbd-testfs-mdt0000 [drbd-testfs-mdt0000] + Clone Set: ms-drbd-testfs-mdt0000 [drbd-testfs-mdt0000] (promotable) Stopped: [ lustre01-left lustre02-left lustre03-left lustre04-left ] - Master/Slave Set: ms-drbd-testfs-mdt0000-left [drbd-testfs-mdt0000-left] + Clone Set: ms-drbd-testfs-mdt0000-left [drbd-testfs-mdt0000-left] (promotable) Stopped: [ lustre01-left lustre02-left lustre03-left lustre04-left ] - Master/Slave Set: ms-drbd-testfs-ost0000 [drbd-testfs-ost0000] + Clone Set: ms-drbd-testfs-ost0000 [drbd-testfs-ost0000] (promotable) Stopped: [ lustre01-left lustre02-left lustre03-left lustre04-left ] - Master/Slave Set: ms-drbd-testfs-ost0000-left [drbd-testfs-ost0000-left] + Clone Set: ms-drbd-testfs-ost0000-left [drbd-testfs-ost0000-left] (promotable) Stopped: [ lustre01-left lustre02-left lustre03-left lustre04-left ] - Master/Slave Set: ms-drbd-testfs-ost0001 [drbd-testfs-ost0001] + Clone Set: ms-drbd-testfs-ost0001 [drbd-testfs-ost0001] (promotable) Stopped: [ lustre01-left lustre02-left lustre03-left lustre04-left ] - Master/Slave Set: ms-drbd-testfs-ost0001-left [drbd-testfs-ost0001-left] + Clone Set: ms-drbd-testfs-ost0001-left [drbd-testfs-ost0001-left] (promotable) Stopped: [ lustre01-left lustre02-left lustre03-left lustre04-left ] - Master/Slave Set: ms-drbd-testfs-ost0002 [drbd-testfs-ost0002] + Clone Set: ms-drbd-testfs-ost0002 [drbd-testfs-ost0002] (promotable) Stopped: [ lustre01-left lustre02-left lustre03-left lustre04-left ] - Master/Slave Set: ms-drbd-testfs-ost0002-left [drbd-testfs-ost0002-left] + Clone Set: ms-drbd-testfs-ost0002-left [drbd-testfs-ost0002-left] (promotable) Stopped: [ lustre01-left lustre02-left lustre03-left lustre04-left ] - Master/Slave Set: ms-drbd-testfs-ost0003 [drbd-testfs-ost0003] + Clone Set: ms-drbd-testfs-ost0003 [drbd-testfs-ost0003] (promotable) Stopped: [ lustre01-left lustre02-left lustre03-left lustre04-left ] - Master/Slave Set: ms-drbd-testfs-ost0003-left [drbd-testfs-ost0003-left] + Clone Set: ms-drbd-testfs-ost0003-left [drbd-testfs-ost0003-left] (promotable) Stopped: [ lustre01-left lustre02-left lustre03-left lustre04-left ] diff --git a/cts/scheduler/migrate-shutdown.summary b/cts/scheduler/migrate-shutdown.summary index 24008a9774..a2e951c062 100644 --- a/cts/scheduler/migrate-shutdown.summary +++ b/cts/scheduler/migrate-shutdown.summary @@ -1,95 +1,95 @@ Current cluster status: Online: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ] Fencing (stonith:fence_xvm): Started pcmk-1 Resource Group: group-1 r192.168.122.105 (ocf::heartbeat:IPaddr): Started pcmk-2 r192.168.122.106 (ocf::heartbeat:IPaddr): Started pcmk-2 r192.168.122.107 (ocf::heartbeat:IPaddr): Started pcmk-2 rsc_pcmk-1 (ocf::heartbeat:IPaddr): Started pcmk-1 rsc_pcmk-2 (ocf::heartbeat:IPaddr): Started pcmk-2 rsc_pcmk-3 (ocf::heartbeat:IPaddr): Stopped rsc_pcmk-4 (ocf::heartbeat:IPaddr): Started pcmk-4 lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Started pcmk-2 migrator (ocf::pacemaker:Dummy): Started pcmk-1 
Clone Set: Connectivity [ping-1] Started: [ pcmk-1 pcmk-2 pcmk-4 ] Stopped: [ pcmk-3 ] - Master/Slave Set: master-1 [stateful-1] + Clone Set: master-1 [stateful-1] (promotable) Masters: [ pcmk-2 ] Slaves: [ pcmk-1 pcmk-4 ] Stopped: [ pcmk-3 ] Transition Summary: * Shutdown pcmk-4 * Shutdown pcmk-3 * Shutdown pcmk-2 * Shutdown pcmk-1 * Stop Fencing ( pcmk-1 ) due to node availability * Stop r192.168.122.105 (pcmk-2) due to node availability * Stop r192.168.122.106 (pcmk-2) due to node availability * Stop r192.168.122.107 (pcmk-2) due to node availability * Stop rsc_pcmk-1 ( pcmk-1 ) due to node availability * Stop rsc_pcmk-2 ( pcmk-2 ) due to node availability * Stop rsc_pcmk-4 ( pcmk-4 ) due to node availability * Stop lsb-dummy ( pcmk-2 ) due to node availability * Stop migrator ( pcmk-1 ) due to node availability * Stop ping-1:0 (pcmk-1) due to node availability * Stop ping-1:1 (pcmk-2) due to node availability * Stop ping-1:2 (pcmk-4) due to node availability * Stop stateful-1:0 ( Slave pcmk-1 ) due to node availability * Stop stateful-1:1 ( Master pcmk-2 ) due to node availability * Stop stateful-1:2 ( Slave pcmk-4 ) due to node availability Executing cluster transition: * Resource action: Fencing stop on pcmk-1 * Resource action: rsc_pcmk-1 stop on pcmk-1 * Resource action: rsc_pcmk-2 stop on pcmk-2 * Resource action: rsc_pcmk-4 stop on pcmk-4 * Resource action: lsb-dummy stop on pcmk-2 * Resource action: migrator stop on pcmk-1 * Resource action: migrator stop on pcmk-3 * Pseudo action: Connectivity_stop_0 * Cluster action: do_shutdown on pcmk-3 * Pseudo action: group-1_stop_0 * Resource action: r192.168.122.107 stop on pcmk-2 * Resource action: ping-1:0 stop on pcmk-1 * Resource action: ping-1:1 stop on pcmk-2 * Resource action: ping-1:3 stop on pcmk-4 * Pseudo action: Connectivity_stopped_0 * Resource action: r192.168.122.106 stop on pcmk-2 * Resource action: r192.168.122.105 stop on pcmk-2 * Pseudo action: group-1_stopped_0 * Pseudo action: master-1_demote_0 * Resource action: stateful-1:0 demote on pcmk-2 * Pseudo action: master-1_demoted_0 * Pseudo action: master-1_stop_0 * Resource action: stateful-1:2 stop on pcmk-1 * Resource action: stateful-1:0 stop on pcmk-2 * Resource action: stateful-1:3 stop on pcmk-4 * Pseudo action: master-1_stopped_0 * Cluster action: do_shutdown on pcmk-4 * Cluster action: do_shutdown on pcmk-2 * Cluster action: do_shutdown on pcmk-1 * Pseudo action: all_stopped Revised cluster status: Online: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ] Fencing (stonith:fence_xvm): Stopped Resource Group: group-1 r192.168.122.105 (ocf::heartbeat:IPaddr): Stopped r192.168.122.106 (ocf::heartbeat:IPaddr): Stopped r192.168.122.107 (ocf::heartbeat:IPaddr): Stopped rsc_pcmk-1 (ocf::heartbeat:IPaddr): Stopped rsc_pcmk-2 (ocf::heartbeat:IPaddr): Stopped rsc_pcmk-3 (ocf::heartbeat:IPaddr): Stopped rsc_pcmk-4 (ocf::heartbeat:IPaddr): Stopped lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Stopped migrator (ocf::pacemaker:Dummy): Stopped Clone Set: Connectivity [ping-1] Stopped: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ] - Master/Slave Set: master-1 [stateful-1] + Clone Set: master-1 [stateful-1] (promotable) Stopped: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ] diff --git a/cts/scheduler/novell-239079.summary b/cts/scheduler/novell-239079.summary index 1298acb28d..696399f9bd 100644 --- a/cts/scheduler/novell-239079.summary +++ b/cts/scheduler/novell-239079.summary @@ -1,31 +1,31 @@ Current cluster status: Online: [ xen-1 xen-2 ] fs_1 (ocf::heartbeat:Filesystem): Stopped - Master/Slave Set: ms-drbd0 
[drbd0]
+ Clone Set: ms-drbd0 [drbd0] (promotable)
    Stopped: [ xen-1 xen-2 ]

Transition Summary:
* Start drbd0:0 (xen-1)
* Start drbd0:1 (xen-2)

Executing cluster transition:
* Pseudo action: ms-drbd0_pre_notify_start_0
* Pseudo action: ms-drbd0_confirmed-pre_notify_start_0
* Pseudo action: ms-drbd0_start_0
* Resource action: drbd0:0 start on xen-1
* Resource action: drbd0:1 start on xen-2
* Pseudo action: ms-drbd0_running_0
* Pseudo action: ms-drbd0_post_notify_running_0
* Resource action: drbd0:0 notify on xen-1
* Resource action: drbd0:1 notify on xen-2
* Pseudo action: ms-drbd0_confirmed-post_notify_running_0

Revised cluster status:
Online: [ xen-1 xen-2 ]

fs_1 (ocf::heartbeat:Filesystem): Stopped
- Master/Slave Set: ms-drbd0 [drbd0]
+ Clone Set: ms-drbd0 [drbd0] (promotable)
    Slaves: [ xen-1 xen-2 ]

diff --git a/cts/scheduler/novell-239082.summary b/cts/scheduler/novell-239082.summary
index 2bafd1b380..376060ba3a 100644
--- a/cts/scheduler/novell-239082.summary
+++ b/cts/scheduler/novell-239082.summary
@@ -1,59 +1,59 @@
Current cluster status:
Online: [ xen-1 xen-2 ]

fs_1 (ocf::heartbeat:Filesystem): Started xen-1
- Master/Slave Set: ms-drbd0 [drbd0]
+ Clone Set: ms-drbd0 [drbd0] (promotable)
    Masters: [ xen-1 ]
    Slaves: [ xen-2 ]

Transition Summary:
* Shutdown xen-1
* Move fs_1 ( xen-1 -> xen-2 )
* Promote drbd0:0 (Slave -> Master xen-2)
* Stop drbd0:1 ( Master xen-1 ) due to node availability

Executing cluster transition:
* Resource action: fs_1 stop on xen-1
* Pseudo action: ms-drbd0_pre_notify_demote_0
* Resource action: drbd0:0 notify on xen-2
* Resource action: drbd0:1 notify on xen-1
* Pseudo action: ms-drbd0_confirmed-pre_notify_demote_0
* Pseudo action: ms-drbd0_demote_0
* Resource action: drbd0:1 demote on xen-1
* Pseudo action: ms-drbd0_demoted_0
* Pseudo action: ms-drbd0_post_notify_demoted_0
* Resource action: drbd0:0 notify on xen-2
* Resource action: drbd0:1 notify on xen-1
* Pseudo action: ms-drbd0_confirmed-post_notify_demoted_0
* Pseudo action: ms-drbd0_pre_notify_stop_0
* Resource action: drbd0:0 notify on xen-2
* Resource action: drbd0:1 notify on xen-1
* Pseudo action: ms-drbd0_confirmed-pre_notify_stop_0
* Pseudo action: ms-drbd0_stop_0
* Resource action: drbd0:1 stop on xen-1
* Pseudo action: ms-drbd0_stopped_0
* Cluster action: do_shutdown on xen-1
* Pseudo action: ms-drbd0_post_notify_stopped_0
* Resource action: drbd0:0 notify on xen-2
* Pseudo action: ms-drbd0_confirmed-post_notify_stopped_0
* Pseudo action: all_stopped
* Pseudo action: ms-drbd0_pre_notify_promote_0
* Resource action: drbd0:0 notify on xen-2
* Pseudo action: ms-drbd0_confirmed-pre_notify_promote_0
* Pseudo action: ms-drbd0_promote_0
* Resource action: drbd0:0 promote on xen-2
* Pseudo action: ms-drbd0_promoted_0
* Pseudo action: ms-drbd0_post_notify_promoted_0
* Resource action: drbd0:0 notify on xen-2
* Pseudo action: ms-drbd0_confirmed-post_notify_promoted_0
* Resource action: fs_1 start on xen-2

Revised cluster status:
Online: [ xen-1 xen-2 ]

fs_1 (ocf::heartbeat:Filesystem): Started xen-2
- Master/Slave Set: ms-drbd0 [drbd0]
+ Clone Set: ms-drbd0 [drbd0] (promotable)
    Masters: [ xen-2 ]
    Stopped: [ xen-1 ]

diff --git a/cts/scheduler/novell-239087.summary b/cts/scheduler/novell-239087.summary
index 5b0e6ed61c..3d7d705d71 100644
--- a/cts/scheduler/novell-239087.summary
+++ b/cts/scheduler/novell-239087.summary
@@ -1,21 +1,21 @@
Current cluster status:
Online: [ xen-1 xen-2 ]

fs_1 (ocf::heartbeat:Filesystem): Started xen-1
- Master/Slave Set: ms-drbd0 [drbd0]
+ Clone Set: ms-drbd0 [drbd0] (promotable)
    Masters: [ xen-1 ]
    Slaves: [ xen-2 ]

Transition Summary:

Executing cluster transition:

Revised cluster status:
Online: [ xen-1 xen-2 ]

fs_1 (ocf::heartbeat:Filesystem): Started xen-1
- Master/Slave Set: ms-drbd0 [drbd0]
+ Clone Set: ms-drbd0 [drbd0] (promotable)
    Masters: [ xen-1 ]
    Slaves: [ xen-2 ]

diff --git a/cts/scheduler/one-or-more-unrunnable-instances.summary b/cts/scheduler/one-or-more-unrunnable-instances.summary
index d18c4f45cc..0fc1b2422c 100644
--- a/cts/scheduler/one-or-more-unrunnable-instances.summary
+++ b/cts/scheduler/one-or-more-unrunnable-instances.summary
@@ -1,734 +1,734 @@
Current cluster status:
Online: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
RemoteOnline: [ mrg-07 mrg-08 mrg-09 ]

fence1 (stonith:fence_xvm): Started rdo7-node2
fence2 (stonith:fence_xvm): Started rdo7-node1
fence3 (stonith:fence_xvm): Started rdo7-node3
Clone Set: lb-haproxy-clone [lb-haproxy]
    Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
    Stopped: [ mrg-07 mrg-08 mrg-09 ]
vip-db (ocf::heartbeat:IPaddr2): Started rdo7-node3
vip-rabbitmq (ocf::heartbeat:IPaddr2): Started rdo7-node1
vip-keystone (ocf::heartbeat:IPaddr2): Started rdo7-node2
vip-glance (ocf::heartbeat:IPaddr2): Started rdo7-node3
vip-cinder (ocf::heartbeat:IPaddr2): Started rdo7-node1
vip-swift (ocf::heartbeat:IPaddr2): Started rdo7-node2
vip-neutron (ocf::heartbeat:IPaddr2): Started rdo7-node2
vip-nova (ocf::heartbeat:IPaddr2): Started rdo7-node1
vip-horizon (ocf::heartbeat:IPaddr2): Started rdo7-node3
vip-heat (ocf::heartbeat:IPaddr2): Started rdo7-node1
vip-ceilometer (ocf::heartbeat:IPaddr2): Started rdo7-node2
vip-qpid (ocf::heartbeat:IPaddr2): Started rdo7-node3
vip-node (ocf::heartbeat:IPaddr2): Started rdo7-node1
- Master/Slave Set: galera-master [galera]
+ Clone Set: galera-master [galera] (promotable)
    Masters: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
    Stopped: [ mrg-07 mrg-08 mrg-09 ]
Clone Set: rabbitmq-server-clone [rabbitmq-server]
    Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
    Stopped: [ mrg-07 mrg-08 mrg-09 ]
Clone Set: memcached-clone [memcached]
    Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
    Stopped: [ mrg-07 mrg-08 mrg-09 ]
Clone Set: mongodb-clone [mongodb]
    Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
    Stopped: [ mrg-07 mrg-08 mrg-09 ]
Clone Set: keystone-clone [keystone]
    Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
Clone Set: glance-fs-clone [glance-fs]
    Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
    Stopped: [ mrg-07 mrg-08 mrg-09 ]
Clone Set: glance-registry-clone [glance-registry]
    Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
Clone Set: glance-api-clone [glance-api]
    Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
Clone Set: cinder-api-clone [cinder-api]
    Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
Clone Set: cinder-scheduler-clone [cinder-scheduler]
    Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
cinder-volume (systemd:openstack-cinder-volume): Stopped
Clone Set: swift-fs-clone [swift-fs]
    Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
    Stopped: [ mrg-07 mrg-08 mrg-09 ]
Clone Set: swift-account-clone [swift-account]
    Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
Clone Set: swift-container-clone [swift-container]
    Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
Clone Set: swift-object-clone [swift-object]
    Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
Clone Set: swift-proxy-clone [swift-proxy]
    Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
swift-object-expirer (systemd:openstack-swift-object-expirer): Stopped Clone Set: neutron-server-clone [neutron-server] Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ] Clone Set: neutron-scale-clone [neutron-scale] (unique) neutron-scale:0 (ocf::neutron:NeutronScale): Stopped neutron-scale:1 (ocf::neutron:NeutronScale): Stopped neutron-scale:2 (ocf::neutron:NeutronScale): Stopped Clone Set: neutron-ovs-cleanup-clone [neutron-ovs-cleanup] Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ] Clone Set: neutron-netns-cleanup-clone [neutron-netns-cleanup] Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ] Clone Set: neutron-openvswitch-agent-clone [neutron-openvswitch-agent] Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ] Clone Set: neutron-dhcp-agent-clone [neutron-dhcp-agent] Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ] Clone Set: neutron-l3-agent-clone [neutron-l3-agent] Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ] Clone Set: neutron-metadata-agent-clone [neutron-metadata-agent] Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ] Clone Set: nova-consoleauth-clone [nova-consoleauth] Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ] Clone Set: nova-novncproxy-clone [nova-novncproxy] Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ] Clone Set: nova-api-clone [nova-api] Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ] Clone Set: nova-scheduler-clone [nova-scheduler] Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ] Clone Set: nova-conductor-clone [nova-conductor] Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ] - Master/Slave Set: redis-master [redis] + Clone Set: redis-master [redis] (promotable) Masters: [ rdo7-node1 ] Slaves: [ rdo7-node2 rdo7-node3 ] Stopped: [ mrg-07 mrg-08 mrg-09 ] vip-redis (ocf::heartbeat:IPaddr2): Started rdo7-node1 Clone Set: ceilometer-central-clone [ceilometer-central] Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ] Clone Set: ceilometer-collector-clone [ceilometer-collector] Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ] Clone Set: ceilometer-api-clone [ceilometer-api] Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ] Clone Set: ceilometer-delay-clone [ceilometer-delay] Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ] Clone Set: ceilometer-alarm-evaluator-clone [ceilometer-alarm-evaluator] Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ] Clone Set: ceilometer-alarm-notifier-clone [ceilometer-alarm-notifier] Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ] Clone Set: ceilometer-notification-clone [ceilometer-notification] Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ] Clone Set: heat-api-clone [heat-api] Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ] Clone Set: heat-api-cfn-clone [heat-api-cfn] Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ] Clone Set: heat-api-cloudwatch-clone [heat-api-cloudwatch] Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ] Clone Set: heat-engine-clone [heat-engine] Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ] Clone Set: horizon-clone [horizon] Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ] Stopped: [ mrg-07 mrg-08 mrg-09 ] Clone Set: neutron-openvswitch-agent-compute-clone [neutron-openvswitch-agent-compute] Stopped: [ mrg-07 
mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ] Clone Set: libvirtd-compute-clone [libvirtd-compute] Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ] Clone Set: ceilometer-compute-clone [ceilometer-compute] Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ] Clone Set: nova-compute-clone [nova-compute] Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ] fence-nova (stonith:fence_compute): Stopped fence-compute (stonith:fence_apc_snmp): Started rdo7-node3 mrg-07 (ocf::pacemaker:remote): Started rdo7-node1 mrg-08 (ocf::pacemaker:remote): Started rdo7-node2 mrg-09 (ocf::pacemaker:remote): Started rdo7-node3 Transition Summary: * Start keystone:0 (rdo7-node2) * Start keystone:1 (rdo7-node3) * Start keystone:2 (rdo7-node1) * Start glance-registry:0 (rdo7-node2) * Start glance-registry:1 (rdo7-node3) * Start glance-registry:2 (rdo7-node1) * Start glance-api:0 (rdo7-node2) * Start glance-api:1 (rdo7-node3) * Start glance-api:2 (rdo7-node1) * Start cinder-api:0 (rdo7-node2) * Start cinder-api:1 (rdo7-node3) * Start cinder-api:2 (rdo7-node1) * Start cinder-scheduler:0 (rdo7-node2) * Start cinder-scheduler:1 (rdo7-node3) * Start cinder-scheduler:2 (rdo7-node1) * Start cinder-volume (rdo7-node2) * Start swift-account:0 (rdo7-node3) * Start swift-account:1 (rdo7-node1) * Start swift-account:2 (rdo7-node2) * Start swift-container:0 (rdo7-node3) * Start swift-container:1 (rdo7-node1) * Start swift-container:2 (rdo7-node2) * Start swift-object:0 (rdo7-node3) * Start swift-object:1 (rdo7-node1) * Start swift-object:2 (rdo7-node2) * Start swift-proxy:0 (rdo7-node3) * Start swift-proxy:1 (rdo7-node1) * Start swift-proxy:2 (rdo7-node2) * Start swift-object-expirer (rdo7-node3) * Start neutron-server:0 (rdo7-node1) * Start neutron-server:1 (rdo7-node2) * Start neutron-server:2 (rdo7-node3) * Start neutron-scale:0 (rdo7-node1) * Start neutron-scale:1 (rdo7-node2) * Start neutron-scale:2 (rdo7-node3) * Start neutron-ovs-cleanup:0 (rdo7-node1) * Start neutron-ovs-cleanup:1 (rdo7-node2) * Start neutron-ovs-cleanup:2 (rdo7-node3) * Start neutron-netns-cleanup:0 (rdo7-node1) * Start neutron-netns-cleanup:1 (rdo7-node2) * Start neutron-netns-cleanup:2 (rdo7-node3) * Start neutron-openvswitch-agent:0 (rdo7-node1) * Start neutron-openvswitch-agent:1 (rdo7-node2) * Start neutron-openvswitch-agent:2 (rdo7-node3) * Start neutron-dhcp-agent:0 (rdo7-node1) * Start neutron-dhcp-agent:1 (rdo7-node2) * Start neutron-dhcp-agent:2 (rdo7-node3) * Start neutron-l3-agent:0 (rdo7-node1) * Start neutron-l3-agent:1 (rdo7-node2) * Start neutron-l3-agent:2 (rdo7-node3) * Start neutron-metadata-agent:0 (rdo7-node1) * Start neutron-metadata-agent:1 (rdo7-node2) * Start neutron-metadata-agent:2 (rdo7-node3) * Start nova-consoleauth:0 (rdo7-node1) * Start nova-consoleauth:1 (rdo7-node2) * Start nova-consoleauth:2 (rdo7-node3) * Start nova-novncproxy:0 (rdo7-node1) * Start nova-novncproxy:1 (rdo7-node2) * Start nova-novncproxy:2 (rdo7-node3) * Start nova-api:0 (rdo7-node1) * Start nova-api:1 (rdo7-node2) * Start nova-api:2 (rdo7-node3) * Start nova-scheduler:0 (rdo7-node1) * Start nova-scheduler:1 (rdo7-node2) * Start nova-scheduler:2 (rdo7-node3) * Start nova-conductor:0 (rdo7-node1) * Start nova-conductor:1 (rdo7-node2) * Start nova-conductor:2 (rdo7-node3) * Start ceilometer-central:0 (rdo7-node2) * Start ceilometer-central:1 (rdo7-node3) * Start ceilometer-central:2 (rdo7-node1) * Start ceilometer-collector:0 (rdo7-node2) * Start ceilometer-collector:1 (rdo7-node3) * Start 
ceilometer-collector:2 (rdo7-node1) * Start ceilometer-api:0 (rdo7-node2) * Start ceilometer-api:1 (rdo7-node3) * Start ceilometer-api:2 (rdo7-node1) * Start ceilometer-delay:0 (rdo7-node2) * Start ceilometer-delay:1 (rdo7-node3) * Start ceilometer-delay:2 (rdo7-node1) * Start ceilometer-alarm-evaluator:0 (rdo7-node2) * Start ceilometer-alarm-evaluator:1 (rdo7-node3) * Start ceilometer-alarm-evaluator:2 (rdo7-node1) * Start ceilometer-alarm-notifier:0 (rdo7-node2) * Start ceilometer-alarm-notifier:1 (rdo7-node3) * Start ceilometer-alarm-notifier:2 (rdo7-node1) * Start ceilometer-notification:0 (rdo7-node2) * Start ceilometer-notification:1 (rdo7-node3) * Start ceilometer-notification:2 (rdo7-node1) * Start heat-api:0 (rdo7-node2) * Start heat-api:1 (rdo7-node3) * Start heat-api:2 (rdo7-node1) * Start heat-api-cfn:0 (rdo7-node2) * Start heat-api-cfn:1 (rdo7-node3) * Start heat-api-cfn:2 (rdo7-node1) * Start heat-api-cloudwatch:0 (rdo7-node2) * Start heat-api-cloudwatch:1 (rdo7-node3) * Start heat-api-cloudwatch:2 (rdo7-node1) * Start heat-engine:0 (rdo7-node2) * Start heat-engine:1 (rdo7-node3) * Start heat-engine:2 (rdo7-node1) * Start neutron-openvswitch-agent-compute:0 (mrg-07) * Start neutron-openvswitch-agent-compute:1 (mrg-08) * Start neutron-openvswitch-agent-compute:2 (mrg-09) * Start libvirtd-compute:0 (mrg-07) * Start libvirtd-compute:1 (mrg-08) * Start libvirtd-compute:2 (mrg-09) * Start ceilometer-compute:0 (mrg-07) * Start ceilometer-compute:1 (mrg-08) * Start ceilometer-compute:2 (mrg-09) * Start nova-compute:0 (mrg-07) * Start nova-compute:1 (mrg-08) * Start nova-compute:2 (mrg-09) * Start fence-nova (rdo7-node2) Executing cluster transition: * Resource action: galera monitor=10000 on rdo7-node2 * Pseudo action: keystone-clone_start_0 * Pseudo action: nova-compute-clone_pre_notify_start_0 * Resource action: keystone start on rdo7-node2 * Resource action: keystone start on rdo7-node3 * Resource action: keystone start on rdo7-node1 * Pseudo action: keystone-clone_running_0 * Pseudo action: glance-registry-clone_start_0 * Pseudo action: cinder-api-clone_start_0 * Pseudo action: swift-account-clone_start_0 * Pseudo action: neutron-server-clone_start_0 * Pseudo action: nova-consoleauth-clone_start_0 * Pseudo action: ceilometer-central-clone_start_0 * Pseudo action: nova-compute-clone_confirmed-pre_notify_start_0 * Resource action: keystone monitor=60000 on rdo7-node2 * Resource action: keystone monitor=60000 on rdo7-node3 * Resource action: keystone monitor=60000 on rdo7-node1 * Resource action: glance-registry start on rdo7-node2 * Resource action: glance-registry start on rdo7-node3 * Resource action: glance-registry start on rdo7-node1 * Pseudo action: glance-registry-clone_running_0 * Pseudo action: glance-api-clone_start_0 * Resource action: cinder-api start on rdo7-node2 * Resource action: cinder-api start on rdo7-node3 * Resource action: cinder-api start on rdo7-node1 * Pseudo action: cinder-api-clone_running_0 * Pseudo action: cinder-scheduler-clone_start_0 * Resource action: swift-account start on rdo7-node3 * Resource action: swift-account start on rdo7-node1 * Resource action: swift-account start on rdo7-node2 * Pseudo action: swift-account-clone_running_0 * Pseudo action: swift-container-clone_start_0 * Pseudo action: swift-proxy-clone_start_0 * Resource action: neutron-server start on rdo7-node1 * Resource action: neutron-server start on rdo7-node2 * Resource action: neutron-server start on rdo7-node3 * Pseudo action: neutron-server-clone_running_0 * Pseudo action: 
neutron-scale-clone_start_0 * Resource action: nova-consoleauth start on rdo7-node1 * Resource action: nova-consoleauth start on rdo7-node2 * Resource action: nova-consoleauth start on rdo7-node3 * Pseudo action: nova-consoleauth-clone_running_0 * Pseudo action: nova-novncproxy-clone_start_0 * Resource action: ceilometer-central start on rdo7-node2 * Resource action: ceilometer-central start on rdo7-node3 * Resource action: ceilometer-central start on rdo7-node1 * Pseudo action: ceilometer-central-clone_running_0 * Pseudo action: ceilometer-collector-clone_start_0 * Pseudo action: clone-one-or-more:order-neutron-server-clone-neutron-openvswitch-agent-compute-clone-mandatory * Resource action: glance-registry monitor=60000 on rdo7-node2 * Resource action: glance-registry monitor=60000 on rdo7-node3 * Resource action: glance-registry monitor=60000 on rdo7-node1 * Resource action: glance-api start on rdo7-node2 * Resource action: glance-api start on rdo7-node3 * Resource action: glance-api start on rdo7-node1 * Pseudo action: glance-api-clone_running_0 * Resource action: cinder-api monitor=60000 on rdo7-node2 * Resource action: cinder-api monitor=60000 on rdo7-node3 * Resource action: cinder-api monitor=60000 on rdo7-node1 * Resource action: cinder-scheduler start on rdo7-node2 * Resource action: cinder-scheduler start on rdo7-node3 * Resource action: cinder-scheduler start on rdo7-node1 * Pseudo action: cinder-scheduler-clone_running_0 * Resource action: cinder-volume start on rdo7-node2 * Resource action: swift-account monitor=60000 on rdo7-node3 * Resource action: swift-account monitor=60000 on rdo7-node1 * Resource action: swift-account monitor=60000 on rdo7-node2 * Resource action: swift-container start on rdo7-node3 * Resource action: swift-container start on rdo7-node1 * Resource action: swift-container start on rdo7-node2 * Pseudo action: swift-container-clone_running_0 * Pseudo action: swift-object-clone_start_0 * Resource action: swift-proxy start on rdo7-node3 * Resource action: swift-proxy start on rdo7-node1 * Resource action: swift-proxy start on rdo7-node2 * Pseudo action: swift-proxy-clone_running_0 * Resource action: swift-object-expirer start on rdo7-node3 * Resource action: neutron-server monitor=60000 on rdo7-node1 * Resource action: neutron-server monitor=60000 on rdo7-node2 * Resource action: neutron-server monitor=60000 on rdo7-node3 * Resource action: neutron-scale:0 start on rdo7-node1 * Resource action: neutron-scale:1 start on rdo7-node2 * Resource action: neutron-scale:2 start on rdo7-node3 * Pseudo action: neutron-scale-clone_running_0 * Pseudo action: neutron-ovs-cleanup-clone_start_0 * Resource action: nova-consoleauth monitor=60000 on rdo7-node1 * Resource action: nova-consoleauth monitor=60000 on rdo7-node2 * Resource action: nova-consoleauth monitor=60000 on rdo7-node3 * Resource action: nova-novncproxy start on rdo7-node1 * Resource action: nova-novncproxy start on rdo7-node2 * Resource action: nova-novncproxy start on rdo7-node3 * Pseudo action: nova-novncproxy-clone_running_0 * Pseudo action: nova-api-clone_start_0 * Resource action: ceilometer-central monitor=60000 on rdo7-node2 * Resource action: ceilometer-central monitor=60000 on rdo7-node3 * Resource action: ceilometer-central monitor=60000 on rdo7-node1 * Resource action: ceilometer-collector start on rdo7-node2 * Resource action: ceilometer-collector start on rdo7-node3 * Resource action: ceilometer-collector start on rdo7-node1 * Pseudo action: ceilometer-collector-clone_running_0 * Pseudo action: 
ceilometer-api-clone_start_0 * Pseudo action: neutron-openvswitch-agent-compute-clone_start_0 * Resource action: glance-api monitor=60000 on rdo7-node2 * Resource action: glance-api monitor=60000 on rdo7-node3 * Resource action: glance-api monitor=60000 on rdo7-node1 * Resource action: cinder-scheduler monitor=60000 on rdo7-node2 * Resource action: cinder-scheduler monitor=60000 on rdo7-node3 * Resource action: cinder-scheduler monitor=60000 on rdo7-node1 * Resource action: cinder-volume monitor=60000 on rdo7-node2 * Resource action: swift-container monitor=60000 on rdo7-node3 * Resource action: swift-container monitor=60000 on rdo7-node1 * Resource action: swift-container monitor=60000 on rdo7-node2 * Resource action: swift-object start on rdo7-node3 * Resource action: swift-object start on rdo7-node1 * Resource action: swift-object start on rdo7-node2 * Pseudo action: swift-object-clone_running_0 * Resource action: swift-proxy monitor=60000 on rdo7-node3 * Resource action: swift-proxy monitor=60000 on rdo7-node1 * Resource action: swift-proxy monitor=60000 on rdo7-node2 * Resource action: swift-object-expirer monitor=60000 on rdo7-node3 * Resource action: neutron-scale:0 monitor=10000 on rdo7-node1 * Resource action: neutron-scale:1 monitor=10000 on rdo7-node2 * Resource action: neutron-scale:2 monitor=10000 on rdo7-node3 * Resource action: neutron-ovs-cleanup start on rdo7-node1 * Resource action: neutron-ovs-cleanup start on rdo7-node2 * Resource action: neutron-ovs-cleanup start on rdo7-node3 * Pseudo action: neutron-ovs-cleanup-clone_running_0 * Pseudo action: neutron-netns-cleanup-clone_start_0 * Resource action: nova-novncproxy monitor=60000 on rdo7-node1 * Resource action: nova-novncproxy monitor=60000 on rdo7-node2 * Resource action: nova-novncproxy monitor=60000 on rdo7-node3 * Resource action: nova-api start on rdo7-node1 * Resource action: nova-api start on rdo7-node2 * Resource action: nova-api start on rdo7-node3 * Pseudo action: nova-api-clone_running_0 * Pseudo action: nova-scheduler-clone_start_0 * Resource action: ceilometer-collector monitor=60000 on rdo7-node2 * Resource action: ceilometer-collector monitor=60000 on rdo7-node3 * Resource action: ceilometer-collector monitor=60000 on rdo7-node1 * Resource action: ceilometer-api start on rdo7-node2 * Resource action: ceilometer-api start on rdo7-node3 * Resource action: ceilometer-api start on rdo7-node1 * Pseudo action: ceilometer-api-clone_running_0 * Pseudo action: ceilometer-delay-clone_start_0 * Resource action: neutron-openvswitch-agent-compute start on mrg-07 * Resource action: neutron-openvswitch-agent-compute start on mrg-08 * Resource action: neutron-openvswitch-agent-compute start on mrg-09 * Pseudo action: neutron-openvswitch-agent-compute-clone_running_0 * Pseudo action: libvirtd-compute-clone_start_0 * Resource action: swift-object monitor=60000 on rdo7-node3 * Resource action: swift-object monitor=60000 on rdo7-node1 * Resource action: swift-object monitor=60000 on rdo7-node2 * Resource action: neutron-ovs-cleanup monitor=10000 on rdo7-node1 * Resource action: neutron-ovs-cleanup monitor=10000 on rdo7-node2 * Resource action: neutron-ovs-cleanup monitor=10000 on rdo7-node3 * Resource action: neutron-netns-cleanup start on rdo7-node1 * Resource action: neutron-netns-cleanup start on rdo7-node2 * Resource action: neutron-netns-cleanup start on rdo7-node3 * Pseudo action: neutron-netns-cleanup-clone_running_0 * Pseudo action: neutron-openvswitch-agent-clone_start_0 * Resource action: nova-api monitor=60000 on 
rdo7-node1 * Resource action: nova-api monitor=60000 on rdo7-node2 * Resource action: nova-api monitor=60000 on rdo7-node3 * Resource action: nova-scheduler start on rdo7-node1 * Resource action: nova-scheduler start on rdo7-node2 * Resource action: nova-scheduler start on rdo7-node3 * Pseudo action: nova-scheduler-clone_running_0 * Pseudo action: nova-conductor-clone_start_0 * Resource action: ceilometer-api monitor=60000 on rdo7-node2 * Resource action: ceilometer-api monitor=60000 on rdo7-node3 * Resource action: ceilometer-api monitor=60000 on rdo7-node1 * Resource action: ceilometer-delay start on rdo7-node2 * Resource action: ceilometer-delay start on rdo7-node3 * Resource action: ceilometer-delay start on rdo7-node1 * Pseudo action: ceilometer-delay-clone_running_0 * Pseudo action: ceilometer-alarm-evaluator-clone_start_0 * Resource action: neutron-openvswitch-agent-compute monitor=60000 on mrg-07 * Resource action: neutron-openvswitch-agent-compute monitor=60000 on mrg-08 * Resource action: neutron-openvswitch-agent-compute monitor=60000 on mrg-09 * Resource action: libvirtd-compute start on mrg-07 * Resource action: libvirtd-compute start on mrg-08 * Resource action: libvirtd-compute start on mrg-09 * Pseudo action: libvirtd-compute-clone_running_0 * Resource action: neutron-netns-cleanup monitor=10000 on rdo7-node1 * Resource action: neutron-netns-cleanup monitor=10000 on rdo7-node2 * Resource action: neutron-netns-cleanup monitor=10000 on rdo7-node3 * Resource action: neutron-openvswitch-agent start on rdo7-node1 * Resource action: neutron-openvswitch-agent start on rdo7-node2 * Resource action: neutron-openvswitch-agent start on rdo7-node3 * Pseudo action: neutron-openvswitch-agent-clone_running_0 * Pseudo action: neutron-dhcp-agent-clone_start_0 * Resource action: nova-scheduler monitor=60000 on rdo7-node1 * Resource action: nova-scheduler monitor=60000 on rdo7-node2 * Resource action: nova-scheduler monitor=60000 on rdo7-node3 * Resource action: nova-conductor start on rdo7-node1 * Resource action: nova-conductor start on rdo7-node2 * Resource action: nova-conductor start on rdo7-node3 * Pseudo action: nova-conductor-clone_running_0 * Resource action: ceilometer-delay monitor=10000 on rdo7-node2 * Resource action: ceilometer-delay monitor=10000 on rdo7-node3 * Resource action: ceilometer-delay monitor=10000 on rdo7-node1 * Resource action: ceilometer-alarm-evaluator start on rdo7-node2 * Resource action: ceilometer-alarm-evaluator start on rdo7-node3 * Resource action: ceilometer-alarm-evaluator start on rdo7-node1 * Pseudo action: ceilometer-alarm-evaluator-clone_running_0 * Pseudo action: ceilometer-alarm-notifier-clone_start_0 * Resource action: libvirtd-compute monitor=60000 on mrg-07 * Resource action: libvirtd-compute monitor=60000 on mrg-08 * Resource action: libvirtd-compute monitor=60000 on mrg-09 * Resource action: fence-nova start on rdo7-node2 * Pseudo action: clone-one-or-more:order-nova-conductor-clone-nova-compute-clone-mandatory * Resource action: neutron-openvswitch-agent monitor=60000 on rdo7-node1 * Resource action: neutron-openvswitch-agent monitor=60000 on rdo7-node2 * Resource action: neutron-openvswitch-agent monitor=60000 on rdo7-node3 * Resource action: neutron-dhcp-agent start on rdo7-node1 * Resource action: neutron-dhcp-agent start on rdo7-node2 * Resource action: neutron-dhcp-agent start on rdo7-node3 * Pseudo action: neutron-dhcp-agent-clone_running_0 * Pseudo action: neutron-l3-agent-clone_start_0 * Resource action: nova-conductor monitor=60000 
on rdo7-node1 * Resource action: nova-conductor monitor=60000 on rdo7-node2 * Resource action: nova-conductor monitor=60000 on rdo7-node3 * Resource action: ceilometer-alarm-evaluator monitor=60000 on rdo7-node2 * Resource action: ceilometer-alarm-evaluator monitor=60000 on rdo7-node3 * Resource action: ceilometer-alarm-evaluator monitor=60000 on rdo7-node1 * Resource action: ceilometer-alarm-notifier start on rdo7-node2 * Resource action: ceilometer-alarm-notifier start on rdo7-node3 * Resource action: ceilometer-alarm-notifier start on rdo7-node1 * Pseudo action: ceilometer-alarm-notifier-clone_running_0 * Pseudo action: ceilometer-notification-clone_start_0 * Resource action: fence-nova monitor=60000 on rdo7-node2 * Resource action: neutron-dhcp-agent monitor=60000 on rdo7-node1 * Resource action: neutron-dhcp-agent monitor=60000 on rdo7-node2 * Resource action: neutron-dhcp-agent monitor=60000 on rdo7-node3 * Resource action: neutron-l3-agent start on rdo7-node1 * Resource action: neutron-l3-agent start on rdo7-node2 * Resource action: neutron-l3-agent start on rdo7-node3 * Pseudo action: neutron-l3-agent-clone_running_0 * Pseudo action: neutron-metadata-agent-clone_start_0 * Resource action: ceilometer-alarm-notifier monitor=60000 on rdo7-node2 * Resource action: ceilometer-alarm-notifier monitor=60000 on rdo7-node3 * Resource action: ceilometer-alarm-notifier monitor=60000 on rdo7-node1 * Resource action: ceilometer-notification start on rdo7-node2 * Resource action: ceilometer-notification start on rdo7-node3 * Resource action: ceilometer-notification start on rdo7-node1 * Pseudo action: ceilometer-notification-clone_running_0 * Pseudo action: heat-api-clone_start_0 * Pseudo action: clone-one-or-more:order-ceilometer-notification-clone-ceilometer-compute-clone-mandatory * Resource action: neutron-l3-agent monitor=60000 on rdo7-node1 * Resource action: neutron-l3-agent monitor=60000 on rdo7-node2 * Resource action: neutron-l3-agent monitor=60000 on rdo7-node3 * Resource action: neutron-metadata-agent start on rdo7-node1 * Resource action: neutron-metadata-agent start on rdo7-node2 * Resource action: neutron-metadata-agent start on rdo7-node3 * Pseudo action: neutron-metadata-agent-clone_running_0 * Resource action: ceilometer-notification monitor=60000 on rdo7-node2 * Resource action: ceilometer-notification monitor=60000 on rdo7-node3 * Resource action: ceilometer-notification monitor=60000 on rdo7-node1 * Resource action: heat-api start on rdo7-node2 * Resource action: heat-api start on rdo7-node3 * Resource action: heat-api start on rdo7-node1 * Pseudo action: heat-api-clone_running_0 * Pseudo action: heat-api-cfn-clone_start_0 * Pseudo action: ceilometer-compute-clone_start_0 * Resource action: neutron-metadata-agent monitor=60000 on rdo7-node1 * Resource action: neutron-metadata-agent monitor=60000 on rdo7-node2 * Resource action: neutron-metadata-agent monitor=60000 on rdo7-node3 * Resource action: heat-api monitor=60000 on rdo7-node2 * Resource action: heat-api monitor=60000 on rdo7-node3 * Resource action: heat-api monitor=60000 on rdo7-node1 * Resource action: heat-api-cfn start on rdo7-node2 * Resource action: heat-api-cfn start on rdo7-node3 * Resource action: heat-api-cfn start on rdo7-node1 * Pseudo action: heat-api-cfn-clone_running_0 * Pseudo action: heat-api-cloudwatch-clone_start_0 * Resource action: ceilometer-compute start on mrg-07 * Resource action: ceilometer-compute start on mrg-08 * Resource action: ceilometer-compute start on mrg-09 * Pseudo action: 
ceilometer-compute-clone_running_0 * Pseudo action: nova-compute-clone_start_0 * Resource action: heat-api-cfn monitor=60000 on rdo7-node2 * Resource action: heat-api-cfn monitor=60000 on rdo7-node3 * Resource action: heat-api-cfn monitor=60000 on rdo7-node1 * Resource action: heat-api-cloudwatch start on rdo7-node2 * Resource action: heat-api-cloudwatch start on rdo7-node3 * Resource action: heat-api-cloudwatch start on rdo7-node1 * Pseudo action: heat-api-cloudwatch-clone_running_0 * Pseudo action: heat-engine-clone_start_0 * Resource action: ceilometer-compute monitor=60000 on mrg-07 * Resource action: ceilometer-compute monitor=60000 on mrg-08 * Resource action: ceilometer-compute monitor=60000 on mrg-09 * Resource action: nova-compute start on mrg-07 * Resource action: nova-compute start on mrg-08 * Resource action: nova-compute start on mrg-09 * Pseudo action: nova-compute-clone_running_0 * Resource action: heat-api-cloudwatch monitor=60000 on rdo7-node2 * Resource action: heat-api-cloudwatch monitor=60000 on rdo7-node3 * Resource action: heat-api-cloudwatch monitor=60000 on rdo7-node1 * Resource action: heat-engine start on rdo7-node2 * Resource action: heat-engine start on rdo7-node3 * Resource action: heat-engine start on rdo7-node1 * Pseudo action: heat-engine-clone_running_0 * Pseudo action: nova-compute-clone_post_notify_running_0 * Resource action: heat-engine monitor=60000 on rdo7-node2 * Resource action: heat-engine monitor=60000 on rdo7-node3 * Resource action: heat-engine monitor=60000 on rdo7-node1 * Resource action: nova-compute notify on mrg-07 * Resource action: nova-compute notify on mrg-08 * Resource action: nova-compute notify on mrg-09 * Pseudo action: nova-compute-clone_confirmed-post_notify_running_0 * Resource action: nova-compute monitor=10000 on mrg-07 * Resource action: nova-compute monitor=10000 on mrg-08 * Resource action: nova-compute monitor=10000 on mrg-09 Revised cluster status: Online: [ rdo7-node1 rdo7-node2 rdo7-node3 ] RemoteOnline: [ mrg-07 mrg-08 mrg-09 ] fence1 (stonith:fence_xvm): Started rdo7-node2 fence2 (stonith:fence_xvm): Started rdo7-node1 fence3 (stonith:fence_xvm): Started rdo7-node3 Clone Set: lb-haproxy-clone [lb-haproxy] Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ] Stopped: [ mrg-07 mrg-08 mrg-09 ] vip-db (ocf::heartbeat:IPaddr2): Started rdo7-node3 vip-rabbitmq (ocf::heartbeat:IPaddr2): Started rdo7-node1 vip-keystone (ocf::heartbeat:IPaddr2): Started rdo7-node2 vip-glance (ocf::heartbeat:IPaddr2): Started rdo7-node3 vip-cinder (ocf::heartbeat:IPaddr2): Started rdo7-node1 vip-swift (ocf::heartbeat:IPaddr2): Started rdo7-node2 vip-neutron (ocf::heartbeat:IPaddr2): Started rdo7-node2 vip-nova (ocf::heartbeat:IPaddr2): Started rdo7-node1 vip-horizon (ocf::heartbeat:IPaddr2): Started rdo7-node3 vip-heat (ocf::heartbeat:IPaddr2): Started rdo7-node1 vip-ceilometer (ocf::heartbeat:IPaddr2): Started rdo7-node2 vip-qpid (ocf::heartbeat:IPaddr2): Started rdo7-node3 vip-node (ocf::heartbeat:IPaddr2): Started rdo7-node1 - Master/Slave Set: galera-master [galera] + Clone Set: galera-master [galera] (promotable) Masters: [ rdo7-node1 rdo7-node2 rdo7-node3 ] Stopped: [ mrg-07 mrg-08 mrg-09 ] Clone Set: rabbitmq-server-clone [rabbitmq-server] Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ] Stopped: [ mrg-07 mrg-08 mrg-09 ] Clone Set: memcached-clone [memcached] Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ] Stopped: [ mrg-07 mrg-08 mrg-09 ] Clone Set: mongodb-clone [mongodb] Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ] Stopped: [ mrg-07 mrg-08 mrg-09 
] Clone Set: keystone-clone [keystone] Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ] Stopped: [ mrg-07 mrg-08 mrg-09 ] Clone Set: glance-fs-clone [glance-fs] Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ] Stopped: [ mrg-07 mrg-08 mrg-09 ] Clone Set: glance-registry-clone [glance-registry] Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ] Stopped: [ mrg-07 mrg-08 mrg-09 ] Clone Set: glance-api-clone [glance-api] Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ] Stopped: [ mrg-07 mrg-08 mrg-09 ] Clone Set: cinder-api-clone [cinder-api] Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ] Stopped: [ mrg-07 mrg-08 mrg-09 ] Clone Set: cinder-scheduler-clone [cinder-scheduler] Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ] Stopped: [ mrg-07 mrg-08 mrg-09 ] cinder-volume (systemd:openstack-cinder-volume): Started rdo7-node2 Clone Set: swift-fs-clone [swift-fs] Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ] Stopped: [ mrg-07 mrg-08 mrg-09 ] Clone Set: swift-account-clone [swift-account] Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ] Stopped: [ mrg-07 mrg-08 mrg-09 ] Clone Set: swift-container-clone [swift-container] Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ] Stopped: [ mrg-07 mrg-08 mrg-09 ] Clone Set: swift-object-clone [swift-object] Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ] Stopped: [ mrg-07 mrg-08 mrg-09 ] Clone Set: swift-proxy-clone [swift-proxy] Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ] Stopped: [ mrg-07 mrg-08 mrg-09 ] swift-object-expirer (systemd:openstack-swift-object-expirer): Started rdo7-node3 Clone Set: neutron-server-clone [neutron-server] Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ] Stopped: [ mrg-07 mrg-08 mrg-09 ] Clone Set: neutron-scale-clone [neutron-scale] (unique) neutron-scale:0 (ocf::neutron:NeutronScale): Started rdo7-node1 neutron-scale:1 (ocf::neutron:NeutronScale): Started rdo7-node2 neutron-scale:2 (ocf::neutron:NeutronScale): Started rdo7-node3 Clone Set: neutron-ovs-cleanup-clone [neutron-ovs-cleanup] Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ] Stopped: [ mrg-07 mrg-08 mrg-09 ] Clone Set: neutron-netns-cleanup-clone [neutron-netns-cleanup] Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ] Stopped: [ mrg-07 mrg-08 mrg-09 ] Clone Set: neutron-openvswitch-agent-clone [neutron-openvswitch-agent] Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ] Stopped: [ mrg-07 mrg-08 mrg-09 ] Clone Set: neutron-dhcp-agent-clone [neutron-dhcp-agent] Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ] Stopped: [ mrg-07 mrg-08 mrg-09 ] Clone Set: neutron-l3-agent-clone [neutron-l3-agent] Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ] Stopped: [ mrg-07 mrg-08 mrg-09 ] Clone Set: neutron-metadata-agent-clone [neutron-metadata-agent] Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ] Stopped: [ mrg-07 mrg-08 mrg-09 ] Clone Set: nova-consoleauth-clone [nova-consoleauth] Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ] Stopped: [ mrg-07 mrg-08 mrg-09 ] Clone Set: nova-novncproxy-clone [nova-novncproxy] Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ] Stopped: [ mrg-07 mrg-08 mrg-09 ] Clone Set: nova-api-clone [nova-api] Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ] Stopped: [ mrg-07 mrg-08 mrg-09 ] Clone Set: nova-scheduler-clone [nova-scheduler] Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ] Stopped: [ mrg-07 mrg-08 mrg-09 ] Clone Set: nova-conductor-clone [nova-conductor] Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ] Stopped: [ mrg-07 mrg-08 mrg-09 ] - Master/Slave Set: redis-master [redis] + Clone Set: redis-master [redis] (promotable) Masters: [ rdo7-node1 ] Slaves: [ rdo7-node2 rdo7-node3 ] Stopped: [ mrg-07 mrg-08 
mrg-09 ] vip-redis (ocf::heartbeat:IPaddr2): Started rdo7-node1 Clone Set: ceilometer-central-clone [ceilometer-central] Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ] Stopped: [ mrg-07 mrg-08 mrg-09 ] Clone Set: ceilometer-collector-clone [ceilometer-collector] Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ] Stopped: [ mrg-07 mrg-08 mrg-09 ] Clone Set: ceilometer-api-clone [ceilometer-api] Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ] Stopped: [ mrg-07 mrg-08 mrg-09 ] Clone Set: ceilometer-delay-clone [ceilometer-delay] Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ] Stopped: [ mrg-07 mrg-08 mrg-09 ] Clone Set: ceilometer-alarm-evaluator-clone [ceilometer-alarm-evaluator] Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ] Stopped: [ mrg-07 mrg-08 mrg-09 ] Clone Set: ceilometer-alarm-notifier-clone [ceilometer-alarm-notifier] Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ] Stopped: [ mrg-07 mrg-08 mrg-09 ] Clone Set: ceilometer-notification-clone [ceilometer-notification] Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ] Stopped: [ mrg-07 mrg-08 mrg-09 ] Clone Set: heat-api-clone [heat-api] Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ] Stopped: [ mrg-07 mrg-08 mrg-09 ] Clone Set: heat-api-cfn-clone [heat-api-cfn] Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ] Stopped: [ mrg-07 mrg-08 mrg-09 ] Clone Set: heat-api-cloudwatch-clone [heat-api-cloudwatch] Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ] Stopped: [ mrg-07 mrg-08 mrg-09 ] Clone Set: heat-engine-clone [heat-engine] Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ] Stopped: [ mrg-07 mrg-08 mrg-09 ] Clone Set: horizon-clone [horizon] Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ] Stopped: [ mrg-07 mrg-08 mrg-09 ] Clone Set: neutron-openvswitch-agent-compute-clone [neutron-openvswitch-agent-compute] Started: [ mrg-07 mrg-08 mrg-09 ] Stopped: [ rdo7-node1 rdo7-node2 rdo7-node3 ] Clone Set: libvirtd-compute-clone [libvirtd-compute] Started: [ mrg-07 mrg-08 mrg-09 ] Stopped: [ rdo7-node1 rdo7-node2 rdo7-node3 ] Clone Set: ceilometer-compute-clone [ceilometer-compute] Started: [ mrg-07 mrg-08 mrg-09 ] Stopped: [ rdo7-node1 rdo7-node2 rdo7-node3 ] Clone Set: nova-compute-clone [nova-compute] Started: [ mrg-07 mrg-08 mrg-09 ] Stopped: [ rdo7-node1 rdo7-node2 rdo7-node3 ] fence-nova (stonith:fence_compute): Started rdo7-node2 fence-compute (stonith:fence_apc_snmp): Started rdo7-node3 mrg-07 (ocf::pacemaker:remote): Started rdo7-node1 mrg-08 (ocf::pacemaker:remote): Started rdo7-node2 mrg-09 (ocf::pacemaker:remote): Started rdo7-node3 diff --git a/cts/scheduler/order_constraint_stops_master.summary b/cts/scheduler/order_constraint_stops_master.summary index d3d8891395..f0a3a8e529 100644 --- a/cts/scheduler/order_constraint_stops_master.summary +++ b/cts/scheduler/order_constraint_stops_master.summary @@ -1,42 +1,42 @@ 1 of 2 resources DISABLED and 0 BLOCKED from being started due to failures Current cluster status: Online: [ fc16-builder fc16-builder2 ] - Master/Slave Set: MASTER_RSC_A [NATIVE_RSC_A] + Clone Set: MASTER_RSC_A [NATIVE_RSC_A] (promotable) Masters: [ fc16-builder ] NATIVE_RSC_B (ocf::pacemaker:Dummy): Started fc16-builder2 ( disabled ) Transition Summary: * Stop NATIVE_RSC_A:0 (Master fc16-builder) due to required NATIVE_RSC_B start * Stop NATIVE_RSC_B (fc16-builder2) due to node availability Executing cluster transition: * Pseudo action: MASTER_RSC_A_pre_notify_demote_0 * Resource action: NATIVE_RSC_A:0 notify on fc16-builder * Pseudo action: MASTER_RSC_A_confirmed-pre_notify_demote_0 * Pseudo action: MASTER_RSC_A_demote_0 * Resource action: 
NATIVE_RSC_A:0 demote on fc16-builder * Pseudo action: MASTER_RSC_A_demoted_0 * Pseudo action: MASTER_RSC_A_post_notify_demoted_0 * Resource action: NATIVE_RSC_A:0 notify on fc16-builder * Pseudo action: MASTER_RSC_A_confirmed-post_notify_demoted_0 * Pseudo action: MASTER_RSC_A_pre_notify_stop_0 * Resource action: NATIVE_RSC_A:0 notify on fc16-builder * Pseudo action: MASTER_RSC_A_confirmed-pre_notify_stop_0 * Pseudo action: MASTER_RSC_A_stop_0 * Resource action: NATIVE_RSC_A:0 stop on fc16-builder * Resource action: NATIVE_RSC_A:0 delete on fc16-builder2 * Pseudo action: MASTER_RSC_A_stopped_0 * Pseudo action: MASTER_RSC_A_post_notify_stopped_0 * Pseudo action: MASTER_RSC_A_confirmed-post_notify_stopped_0 * Resource action: NATIVE_RSC_B stop on fc16-builder2 * Pseudo action: all_stopped Revised cluster status: Online: [ fc16-builder fc16-builder2 ] - Master/Slave Set: MASTER_RSC_A [NATIVE_RSC_A] + Clone Set: MASTER_RSC_A [NATIVE_RSC_A] (promotable) Stopped: [ fc16-builder fc16-builder2 ] NATIVE_RSC_B (ocf::pacemaker:Dummy): Stopped ( disabled ) diff --git a/cts/scheduler/order_constraint_stops_slave.summary b/cts/scheduler/order_constraint_stops_slave.summary index 896c9c3174..aba653f1ce 100644 --- a/cts/scheduler/order_constraint_stops_slave.summary +++ b/cts/scheduler/order_constraint_stops_slave.summary @@ -1,34 +1,34 @@ 1 of 2 resources DISABLED and 0 BLOCKED from being started due to failures Current cluster status: Online: [ fc16-builder ] OFFLINE: [ fc16-builder2 ] - Master/Slave Set: MASTER_RSC_A [NATIVE_RSC_A] + Clone Set: MASTER_RSC_A [NATIVE_RSC_A] (promotable) Slaves: [ fc16-builder ] NATIVE_RSC_B (ocf::pacemaker:Dummy): Started fc16-builder ( disabled ) Transition Summary: * Stop NATIVE_RSC_A:0 (Slave fc16-builder) due to required NATIVE_RSC_B start * Stop NATIVE_RSC_B (fc16-builder) due to node availability Executing cluster transition: * Pseudo action: MASTER_RSC_A_pre_notify_stop_0 * Resource action: NATIVE_RSC_A:0 notify on fc16-builder * Pseudo action: MASTER_RSC_A_confirmed-pre_notify_stop_0 * Pseudo action: MASTER_RSC_A_stop_0 * Resource action: NATIVE_RSC_A:0 stop on fc16-builder * Pseudo action: MASTER_RSC_A_stopped_0 * Pseudo action: MASTER_RSC_A_post_notify_stopped_0 * Pseudo action: MASTER_RSC_A_confirmed-post_notify_stopped_0 * Resource action: NATIVE_RSC_B stop on fc16-builder * Pseudo action: all_stopped Revised cluster status: Online: [ fc16-builder ] OFFLINE: [ fc16-builder2 ] - Master/Slave Set: MASTER_RSC_A [NATIVE_RSC_A] + Clone Set: MASTER_RSC_A [NATIVE_RSC_A] (promotable) Stopped: [ fc16-builder fc16-builder2 ] NATIVE_RSC_B (ocf::pacemaker:Dummy): Stopped ( disabled ) diff --git a/cts/scheduler/probe-2.summary b/cts/scheduler/probe-2.summary index 7e74efcea1..1e83165264 100644 --- a/cts/scheduler/probe-2.summary +++ b/cts/scheduler/probe-2.summary @@ -1,162 +1,162 @@ Current cluster status: Node wc02 (f36760d8-d84a-46b2-b452-4c8cac8b3396): standby Online: [ wc01 ] Resource Group: group_www_data fs_www_data (ocf::heartbeat:Filesystem): Started wc01 nfs-kernel-server (lsb:nfs-kernel-server): Started wc01 intip_nfs (ocf::heartbeat:IPaddr2): Started wc01 - Master/Slave Set: ms_drbd_mysql [drbd_mysql] + Clone Set: ms_drbd_mysql [drbd_mysql] (promotable) Masters: [ wc02 ] Slaves: [ wc01 ] Resource Group: group_mysql fs_mysql (ocf::heartbeat:Filesystem): Started wc02 intip_sql (ocf::heartbeat:IPaddr2): Started wc02 mysql-server (ocf::heartbeat:mysql): Started wc02 - Master/Slave Set: ms_drbd_www [drbd_www] + Clone Set: ms_drbd_www [drbd_www] (promotable) 
Masters: [ wc01 ] Slaves: [ wc02 ] Clone Set: clone_nfs-common [group_nfs-common] Started: [ wc01 wc02 ] Clone Set: clone_mysql-proxy [group_mysql-proxy] Started: [ wc01 wc02 ] Clone Set: clone_webservice [group_webservice] Started: [ wc01 wc02 ] Resource Group: group_ftpd extip_ftp (ocf::heartbeat:IPaddr2): Started wc01 pure-ftpd (ocf::heartbeat:Pure-FTPd): Started wc01 Clone Set: DoFencing [stonith_rackpdu] (unique) stonith_rackpdu:0 (stonith:external/rackpdu): Started wc01 stonith_rackpdu:1 (stonith:external/rackpdu): Started wc02 Transition Summary: * Promote drbd_mysql:0 (Slave -> Master wc01) * Stop drbd_mysql:1 ( Master wc02 ) due to node availability * Move fs_mysql ( wc02 -> wc01 ) * Move intip_sql ( wc02 -> wc01 ) * Move mysql-server ( wc02 -> wc01 ) * Stop drbd_www:1 ( Slave wc02 ) due to node availability * Stop nfs-common:1 (wc02) due to node availability * Stop mysql-proxy:1 (wc02) due to node availability * Stop fs_www:1 (wc02) due to node availability * Stop apache2:1 (wc02) due to node availability * Restart stonith_rackpdu:0 ( wc01 ) * Stop stonith_rackpdu:1 (wc02) due to node availability Executing cluster transition: * Resource action: drbd_mysql:0 cancel=10000 on wc01 * Pseudo action: ms_drbd_mysql_pre_notify_demote_0 * Pseudo action: group_mysql_stop_0 * Resource action: mysql-server stop on wc02 * Pseudo action: ms_drbd_www_pre_notify_stop_0 * Pseudo action: clone_mysql-proxy_stop_0 * Pseudo action: clone_webservice_stop_0 * Pseudo action: DoFencing_stop_0 * Resource action: drbd_mysql:0 notify on wc01 * Resource action: drbd_mysql:1 notify on wc02 * Pseudo action: ms_drbd_mysql_confirmed-pre_notify_demote_0 * Resource action: intip_sql stop on wc02 * Resource action: drbd_www:0 notify on wc01 * Resource action: drbd_www:1 notify on wc02 * Pseudo action: ms_drbd_www_confirmed-pre_notify_stop_0 * Pseudo action: ms_drbd_www_stop_0 * Pseudo action: group_mysql-proxy:1_stop_0 * Resource action: mysql-proxy:1 stop on wc02 * Pseudo action: group_webservice:1_stop_0 * Resource action: apache2:1 stop on wc02 * Resource action: stonith_rackpdu:0 stop on wc01 * Resource action: stonith_rackpdu:1 stop on wc02 * Pseudo action: DoFencing_stopped_0 * Pseudo action: DoFencing_start_0 * Resource action: fs_mysql stop on wc02 * Resource action: drbd_www:1 stop on wc02 * Pseudo action: ms_drbd_www_stopped_0 * Pseudo action: group_mysql-proxy:1_stopped_0 * Pseudo action: clone_mysql-proxy_stopped_0 * Resource action: fs_www:1 stop on wc02 * Resource action: stonith_rackpdu:0 start on wc01 * Pseudo action: DoFencing_running_0 * Pseudo action: group_mysql_stopped_0 * Pseudo action: ms_drbd_www_post_notify_stopped_0 * Pseudo action: group_webservice:1_stopped_0 * Pseudo action: clone_webservice_stopped_0 * Resource action: stonith_rackpdu:0 monitor=5000 on wc01 * Pseudo action: ms_drbd_mysql_demote_0 * Resource action: drbd_www:0 notify on wc01 * Pseudo action: ms_drbd_www_confirmed-post_notify_stopped_0 * Pseudo action: clone_nfs-common_stop_0 * Resource action: drbd_mysql:1 demote on wc02 * Pseudo action: ms_drbd_mysql_demoted_0 * Pseudo action: group_nfs-common:1_stop_0 * Resource action: nfs-common:1 stop on wc02 * Pseudo action: ms_drbd_mysql_post_notify_demoted_0 * Pseudo action: group_nfs-common:1_stopped_0 * Pseudo action: clone_nfs-common_stopped_0 * Resource action: drbd_mysql:0 notify on wc01 * Resource action: drbd_mysql:1 notify on wc02 * Pseudo action: ms_drbd_mysql_confirmed-post_notify_demoted_0 * Pseudo action: ms_drbd_mysql_pre_notify_stop_0 * Resource action: 
drbd_mysql:0 notify on wc01 * Resource action: drbd_mysql:1 notify on wc02 * Pseudo action: ms_drbd_mysql_confirmed-pre_notify_stop_0 * Pseudo action: ms_drbd_mysql_stop_0 * Resource action: drbd_mysql:1 stop on wc02 * Pseudo action: ms_drbd_mysql_stopped_0 * Pseudo action: ms_drbd_mysql_post_notify_stopped_0 * Resource action: drbd_mysql:0 notify on wc01 * Pseudo action: ms_drbd_mysql_confirmed-post_notify_stopped_0 * Pseudo action: all_stopped * Pseudo action: ms_drbd_mysql_pre_notify_promote_0 * Resource action: drbd_mysql:0 notify on wc01 * Pseudo action: ms_drbd_mysql_confirmed-pre_notify_promote_0 * Pseudo action: ms_drbd_mysql_promote_0 * Resource action: drbd_mysql:0 promote on wc01 * Pseudo action: ms_drbd_mysql_promoted_0 * Pseudo action: ms_drbd_mysql_post_notify_promoted_0 * Resource action: drbd_mysql:0 notify on wc01 * Pseudo action: ms_drbd_mysql_confirmed-post_notify_promoted_0 * Pseudo action: group_mysql_start_0 * Resource action: fs_mysql start on wc01 * Resource action: intip_sql start on wc01 * Resource action: mysql-server start on wc01 * Resource action: drbd_mysql:0 monitor=5000 on wc01 * Pseudo action: group_mysql_running_0 * Resource action: fs_mysql monitor=30000 on wc01 * Resource action: intip_sql monitor=30000 on wc01 * Resource action: mysql-server monitor=30000 on wc01 Revised cluster status: Node wc02 (f36760d8-d84a-46b2-b452-4c8cac8b3396): standby Online: [ wc01 ] Resource Group: group_www_data fs_www_data (ocf::heartbeat:Filesystem): Started wc01 nfs-kernel-server (lsb:nfs-kernel-server): Started wc01 intip_nfs (ocf::heartbeat:IPaddr2): Started wc01 - Master/Slave Set: ms_drbd_mysql [drbd_mysql] + Clone Set: ms_drbd_mysql [drbd_mysql] (promotable) Masters: [ wc01 ] Stopped: [ wc02 ] Resource Group: group_mysql fs_mysql (ocf::heartbeat:Filesystem): Started wc01 intip_sql (ocf::heartbeat:IPaddr2): Started wc01 mysql-server (ocf::heartbeat:mysql): Started wc01 - Master/Slave Set: ms_drbd_www [drbd_www] + Clone Set: ms_drbd_www [drbd_www] (promotable) Masters: [ wc01 ] Stopped: [ wc02 ] Clone Set: clone_nfs-common [group_nfs-common] Started: [ wc01 ] Stopped: [ wc02 ] Clone Set: clone_mysql-proxy [group_mysql-proxy] Started: [ wc01 ] Stopped: [ wc02 ] Clone Set: clone_webservice [group_webservice] Started: [ wc01 ] Stopped: [ wc02 ] Resource Group: group_ftpd extip_ftp (ocf::heartbeat:IPaddr2): Started wc01 pure-ftpd (ocf::heartbeat:Pure-FTPd): Started wc01 Clone Set: DoFencing [stonith_rackpdu] (unique) stonith_rackpdu:0 (stonith:external/rackpdu): Started wc01 stonith_rackpdu:1 (stonith:external/rackpdu): Stopped diff --git a/cts/scheduler/probe-3.summary b/cts/scheduler/probe-3.summary index 5faa6b12e8..5a657bca0b 100644 --- a/cts/scheduler/probe-3.summary +++ b/cts/scheduler/probe-3.summary @@ -1,55 +1,55 @@ Current cluster status: Node pcmk-4: pending Online: [ pcmk-1 pcmk-2 pcmk-3 ] Resource Group: group-1 r192.168.101.181 (ocf::heartbeat:IPaddr): Started pcmk-1 r192.168.101.182 (ocf::heartbeat:IPaddr): Started pcmk-1 r192.168.101.183 (ocf::heartbeat:IPaddr): Started pcmk-1 rsc_pcmk-1 (ocf::heartbeat:IPaddr): Started pcmk-1 rsc_pcmk-2 (ocf::heartbeat:IPaddr): Started pcmk-2 rsc_pcmk-3 (ocf::heartbeat:IPaddr): Started pcmk-3 rsc_pcmk-4 (ocf::heartbeat:IPaddr): Started pcmk-2 lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Started pcmk-1 migrator (ocf::pacemaker:Dummy): Started pcmk-3 Clone Set: Connectivity [ping-1] Started: [ pcmk-1 pcmk-2 pcmk-3 ] Stopped: [ pcmk-4 ] - Master/Slave Set: master-1 [stateful-1] + Clone Set: master-1 [stateful-1] 
(promotable) Masters: [ pcmk-1 ] Slaves: [ pcmk-2 pcmk-3 ] Stopped: [ pcmk-4 ] Clone Set: Fencing [FencingChild] Started: [ pcmk-1 pcmk-2 pcmk-3 ] Stopped: [ pcmk-4 ] Transition Summary: Executing cluster transition: Revised cluster status: Node pcmk-4: pending Online: [ pcmk-1 pcmk-2 pcmk-3 ] Resource Group: group-1 r192.168.101.181 (ocf::heartbeat:IPaddr): Started pcmk-1 r192.168.101.182 (ocf::heartbeat:IPaddr): Started pcmk-1 r192.168.101.183 (ocf::heartbeat:IPaddr): Started pcmk-1 rsc_pcmk-1 (ocf::heartbeat:IPaddr): Started pcmk-1 rsc_pcmk-2 (ocf::heartbeat:IPaddr): Started pcmk-2 rsc_pcmk-3 (ocf::heartbeat:IPaddr): Started pcmk-3 rsc_pcmk-4 (ocf::heartbeat:IPaddr): Started pcmk-2 lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Started pcmk-1 migrator (ocf::pacemaker:Dummy): Started pcmk-3 Clone Set: Connectivity [ping-1] Started: [ pcmk-1 pcmk-2 pcmk-3 ] Stopped: [ pcmk-4 ] - Master/Slave Set: master-1 [stateful-1] + Clone Set: master-1 [stateful-1] (promotable) Masters: [ pcmk-1 ] Slaves: [ pcmk-2 pcmk-3 ] Stopped: [ pcmk-4 ] Clone Set: Fencing [FencingChild] Started: [ pcmk-1 pcmk-2 pcmk-3 ] Stopped: [ pcmk-4 ] diff --git a/cts/scheduler/probe-4.summary b/cts/scheduler/probe-4.summary index c1a9fedb64..c194577cf8 100644 --- a/cts/scheduler/probe-4.summary +++ b/cts/scheduler/probe-4.summary @@ -1,56 +1,56 @@ Current cluster status: Node pcmk-4: pending Online: [ pcmk-1 pcmk-2 pcmk-3 ] Resource Group: group-1 r192.168.101.181 (ocf::heartbeat:IPaddr): Started pcmk-1 r192.168.101.182 (ocf::heartbeat:IPaddr): Started pcmk-1 r192.168.101.183 (ocf::heartbeat:IPaddr): Started pcmk-1 rsc_pcmk-1 (ocf::heartbeat:IPaddr): Started pcmk-1 rsc_pcmk-2 (ocf::heartbeat:IPaddr): Started pcmk-2 rsc_pcmk-3 (ocf::heartbeat:IPaddr): Started pcmk-3 rsc_pcmk-4 (ocf::heartbeat:IPaddr): Started pcmk-2 lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Started pcmk-1 migrator (ocf::pacemaker:Dummy): Stopped Clone Set: Connectivity [ping-1] Started: [ pcmk-1 pcmk-2 pcmk-3 ] Stopped: [ pcmk-4 ] - Master/Slave Set: master-1 [stateful-1] + Clone Set: master-1 [stateful-1] (promotable) Masters: [ pcmk-1 ] Slaves: [ pcmk-2 pcmk-3 ] Stopped: [ pcmk-4 ] Clone Set: Fencing [FencingChild] Started: [ pcmk-1 pcmk-2 pcmk-3 ] Stopped: [ pcmk-4 ] Transition Summary: * Start migrator ( pcmk-3 ) blocked Executing cluster transition: Revised cluster status: Node pcmk-4: pending Online: [ pcmk-1 pcmk-2 pcmk-3 ] Resource Group: group-1 r192.168.101.181 (ocf::heartbeat:IPaddr): Started pcmk-1 r192.168.101.182 (ocf::heartbeat:IPaddr): Started pcmk-1 r192.168.101.183 (ocf::heartbeat:IPaddr): Started pcmk-1 rsc_pcmk-1 (ocf::heartbeat:IPaddr): Started pcmk-1 rsc_pcmk-2 (ocf::heartbeat:IPaddr): Started pcmk-2 rsc_pcmk-3 (ocf::heartbeat:IPaddr): Started pcmk-3 rsc_pcmk-4 (ocf::heartbeat:IPaddr): Started pcmk-2 lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Started pcmk-1 migrator (ocf::pacemaker:Dummy): Stopped Clone Set: Connectivity [ping-1] Started: [ pcmk-1 pcmk-2 pcmk-3 ] Stopped: [ pcmk-4 ] - Master/Slave Set: master-1 [stateful-1] + Clone Set: master-1 [stateful-1] (promotable) Masters: [ pcmk-1 ] Slaves: [ pcmk-2 pcmk-3 ] Stopped: [ pcmk-4 ] Clone Set: Fencing [FencingChild] Started: [ pcmk-1 pcmk-2 pcmk-3 ] Stopped: [ pcmk-4 ] diff --git a/cts/scheduler/rec-node-13.summary b/cts/scheduler/rec-node-13.summary index 819a7adbb4..ee0fa645e5 100644 --- a/cts/scheduler/rec-node-13.summary +++ b/cts/scheduler/rec-node-13.summary @@ -1,80 +1,80 @@ Current cluster status: Node c001n04 
(9e080e6d-7a25-4dac-be89-f6f4f128623d): UNCLEAN (online) Online: [ c001n02 c001n06 c001n07 ] OFFLINE: [ c001n03 c001n05 ] Clone Set: DoFencing [child_DoFencing] Started: [ c001n02 c001n06 c001n07 ] Stopped: [ c001n03 c001n04 c001n05 ] DcIPaddr (ocf::heartbeat:IPaddr): Stopped Resource Group: group-1 ocf_192.168.100.181 (ocf::heartbeat:IPaddr): Started c001n02 heartbeat_192.168.100.182 (ocf::heartbeat:IPaddr): Started c001n02 ocf_192.168.100.183 (ocf::heartbeat:IPaddr): Started c001n02 lsb_dummy (lsb:/usr/lib/heartbeat/cts/LSBDummy): Started c001n06 rsc_c001n05 (ocf::heartbeat:IPaddr): Started c001n07 rsc_c001n03 (ocf::heartbeat:IPaddr): Started c001n06 rsc_c001n04 (ocf::heartbeat:IPaddr): Started c001n07 rsc_c001n02 (ocf::heartbeat:IPaddr): Started c001n02 rsc_c001n07 (ocf::heartbeat:IPaddr): Started c001n07 rsc_c001n06 (ocf::heartbeat:IPaddr): Started c001n06 - Master/Slave Set: master_rsc_1 [ocf_msdummy] (unique) + Clone Set: master_rsc_1 [ocf_msdummy] (promotable) (unique) ocf_msdummy:0 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Master c001n02 ocf_msdummy:1 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Stopped ocf_msdummy:2 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Stopped ocf_msdummy:3 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Stopped ocf_msdummy:4 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n02 ocf_msdummy:5 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Stopped ocf_msdummy:6 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): FAILED c001n04 ocf_msdummy:7 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Stopped ocf_msdummy:8 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n06 ocf_msdummy:9 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n07 ocf_msdummy:10 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n06 ocf_msdummy:11 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n07 Transition Summary: * Fence (reboot) c001n04 'ocf_msdummy:6 failed there' * Stop ocf_msdummy:6 ( Slave c001n04 ) due to node availability Executing cluster transition: * Fencing c001n04 (reboot) * Pseudo action: master_rsc_1_stop_0 * Pseudo action: stonith_complete * Pseudo action: ocf_msdummy:6_stop_0 * Pseudo action: master_rsc_1_stopped_0 * Pseudo action: all_stopped Revised cluster status: Online: [ c001n02 c001n06 c001n07 ] OFFLINE: [ c001n03 c001n04 c001n05 ] Clone Set: DoFencing [child_DoFencing] Started: [ c001n02 c001n06 c001n07 ] Stopped: [ c001n03 c001n04 c001n05 ] DcIPaddr (ocf::heartbeat:IPaddr): Stopped Resource Group: group-1 ocf_192.168.100.181 (ocf::heartbeat:IPaddr): Started c001n02 heartbeat_192.168.100.182 (ocf::heartbeat:IPaddr): Started c001n02 ocf_192.168.100.183 (ocf::heartbeat:IPaddr): Started c001n02 lsb_dummy (lsb:/usr/lib/heartbeat/cts/LSBDummy): Started c001n06 rsc_c001n05 (ocf::heartbeat:IPaddr): Started c001n07 rsc_c001n03 (ocf::heartbeat:IPaddr): Started c001n06 rsc_c001n04 (ocf::heartbeat:IPaddr): Started c001n07 rsc_c001n02 (ocf::heartbeat:IPaddr): Started c001n02 rsc_c001n07 (ocf::heartbeat:IPaddr): Started c001n07 rsc_c001n06 (ocf::heartbeat:IPaddr): Started c001n06 - Master/Slave Set: master_rsc_1 [ocf_msdummy] (unique) + Clone Set: master_rsc_1 [ocf_msdummy] (promotable) (unique) ocf_msdummy:0 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Master c001n02 ocf_msdummy:1 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Stopped ocf_msdummy:2 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Stopped ocf_msdummy:3 
(ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Stopped ocf_msdummy:4 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n02 ocf_msdummy:5 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Stopped ocf_msdummy:6 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Stopped ocf_msdummy:7 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Stopped ocf_msdummy:8 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n06 ocf_msdummy:9 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n07 ocf_msdummy:10 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n06 ocf_msdummy:11 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n07 diff --git a/cts/scheduler/remote-connection-unrecoverable.summary b/cts/scheduler/remote-connection-unrecoverable.summary index 528c7b4518..dbc61b0c68 100644 --- a/cts/scheduler/remote-connection-unrecoverable.summary +++ b/cts/scheduler/remote-connection-unrecoverable.summary @@ -1,54 +1,54 @@ Current cluster status: Node node1 (1): UNCLEAN (offline) Online: [ node2 ] RemoteOnline: [ remote1 ] remote1 (ocf::pacemaker:remote): Started node1 (UNCLEAN) killer (stonith:fence_xvm): Started node2 rsc1 (ocf::pacemaker:Dummy): Started remote1 - Master/Slave Set: rsc2-master [rsc2] + Clone Set: rsc2-master [rsc2] (promotable) rsc2 (ocf::pacemaker:Stateful): Master node1 (UNCLEAN) Masters: [ node2 ] Stopped: [ remote1 ] Transition Summary: * Fence (reboot) remote1 'resources are active and the connection is unrecoverable' * Fence (reboot) node1 'peer is no longer part of the cluster' * Stop remote1 ( node1 ) due to node availability * Restart killer ( node2 ) due to resource definition change * Move rsc1 ( remote1 -> node2 ) * Stop rsc2:0 ( Master node1 ) due to node availability Executing cluster transition: * Resource action: killer stop on node2 * Resource action: rsc1 monitor on node2 * Fencing node1 (reboot) * Fencing remote1 (reboot) * Pseudo action: stonith_complete * Pseudo action: rsc1_stop_0 * Pseudo action: rsc2-master_demote_0 * Pseudo action: remote1_stop_0 * Resource action: rsc1 start on node2 * Pseudo action: rsc2_demote_0 * Pseudo action: rsc2-master_demoted_0 * Pseudo action: rsc2-master_stop_0 * Resource action: rsc1 monitor=10000 on node2 * Pseudo action: rsc2_stop_0 * Pseudo action: rsc2-master_stopped_0 * Pseudo action: all_stopped * Resource action: killer start on node2 * Resource action: killer monitor=60000 on node2 Revised cluster status: Online: [ node2 ] OFFLINE: [ node1 ] RemoteOFFLINE: [ remote1 ] remote1 (ocf::pacemaker:remote): Stopped killer (stonith:fence_xvm): Started node2 rsc1 (ocf::pacemaker:Dummy): Started node2 - Master/Slave Set: rsc2-master [rsc2] + Clone Set: rsc2-master [rsc2] (promotable) Masters: [ node2 ] Stopped: [ node1 remote1 ] diff --git a/cts/scheduler/remote-orphaned.summary b/cts/scheduler/remote-orphaned.summary index f2050070f0..63e6a730c9 100644 --- a/cts/scheduler/remote-orphaned.summary +++ b/cts/scheduler/remote-orphaned.summary @@ -1,68 +1,68 @@ Current cluster status: Online: [ 18node1 18node3 ] OFFLINE: [ 18node2 ] RemoteOnline: [ remote1 ] Fencing (stonith:fence_xvm): Started 18node3 FencingPass (stonith:fence_dummy): Started 18node1 FencingFail (stonith:fence_dummy): Started 18node3 rsc_18node1 (ocf::heartbeat:IPaddr2): Started 18node1 rsc_18node2 (ocf::heartbeat:IPaddr2): Started remote1 rsc_18node3 (ocf::heartbeat:IPaddr2): Started 18node3 migrator (ocf::pacemaker:Dummy): Started 18node1 Clone Set: Connectivity [ping-1] Started: [ 18node1 18node3 remote1 ] - 
Master/Slave Set: master-1 [stateful-1] + Clone Set: master-1 [stateful-1] (promotable) Masters: [ 18node1 ] Slaves: [ 18node3 ] Stopped: [ 18node2 ] Resource Group: group-1 r192.168.122.87 (ocf::heartbeat:IPaddr2): Started 18node1 r192.168.122.88 (ocf::heartbeat:IPaddr2): Started 18node1 r192.168.122.89 (ocf::heartbeat:IPaddr2): Started 18node1 lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Started 18node1 remote1 (ocf::pacemaker:remote): ORPHANED Started 18node1 Transition Summary: * Move rsc_18node2 ( remote1 -> 18node1 ) * Stop ping-1:2 (remote1) due to node availability * Stop remote1 (18node1) due to node availability Executing cluster transition: * Resource action: rsc_18node2 stop on remote1 * Pseudo action: Connectivity_stop_0 * Resource action: rsc_18node2 start on 18node1 * Resource action: ping-1 stop on remote1 * Pseudo action: Connectivity_stopped_0 * Resource action: remote1 stop on 18node1 * Resource action: remote1 delete on 18node3 * Resource action: remote1 delete on 18node1 * Pseudo action: all_stopped * Resource action: rsc_18node2 monitor=5000 on 18node1 Revised cluster status: Online: [ 18node1 18node3 ] OFFLINE: [ 18node2 ] RemoteOFFLINE: [ remote1 ] Fencing (stonith:fence_xvm): Started 18node3 FencingPass (stonith:fence_dummy): Started 18node1 FencingFail (stonith:fence_dummy): Started 18node3 rsc_18node1 (ocf::heartbeat:IPaddr2): Started 18node1 rsc_18node2 (ocf::heartbeat:IPaddr2): Started 18node1 rsc_18node3 (ocf::heartbeat:IPaddr2): Started 18node3 migrator (ocf::pacemaker:Dummy): Started 18node1 Clone Set: Connectivity [ping-1] Started: [ 18node1 18node3 ] Stopped: [ 18node2 ] - Master/Slave Set: master-1 [stateful-1] + Clone Set: master-1 [stateful-1] (promotable) Masters: [ 18node1 ] Slaves: [ 18node3 ] Stopped: [ 18node2 ] Resource Group: group-1 r192.168.122.87 (ocf::heartbeat:IPaddr2): Started 18node1 r192.168.122.88 (ocf::heartbeat:IPaddr2): Started 18node1 r192.168.122.89 (ocf::heartbeat:IPaddr2): Started 18node1 lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Started 18node1 diff --git a/cts/scheduler/remote-reconnect-delay.summary b/cts/scheduler/remote-reconnect-delay.summary index bd46eae9dc..a708d1bd61 100644 --- a/cts/scheduler/remote-reconnect-delay.summary +++ b/cts/scheduler/remote-reconnect-delay.summary @@ -1,66 +1,66 @@ Using the original execution date of: 2017-08-21 17:12:54Z Current cluster status: Online: [ rhel7-1 rhel7-2 rhel7-4 rhel7-5 ] RemoteOFFLINE: [ remote-rhel7-3 ] Fencing (stonith:fence_xvm): Started rhel7-2 FencingFail (stonith:fence_dummy): Started rhel7-4 rsc_rhel7-1 (ocf::heartbeat:IPaddr2): Started rhel7-1 rsc_rhel7-2 (ocf::heartbeat:IPaddr2): Started rhel7-2 rsc_rhel7-3 (ocf::heartbeat:IPaddr2): Started rhel7-5 rsc_rhel7-4 (ocf::heartbeat:IPaddr2): Started rhel7-4 rsc_rhel7-5 (ocf::heartbeat:IPaddr2): Started rhel7-5 migrator (ocf::pacemaker:Dummy): Started rhel7-5 Clone Set: Connectivity [ping-1] Started: [ rhel7-1 rhel7-2 rhel7-4 rhel7-5 ] Stopped: [ remote-rhel7-3 ] - Master/Slave Set: master-1 [stateful-1] + Clone Set: master-1 [stateful-1] (promotable) Masters: [ rhel7-2 ] Slaves: [ rhel7-1 rhel7-4 rhel7-5 ] Stopped: [ remote-rhel7-3 ] Resource Group: group-1 r192.168.122.207 (ocf::heartbeat:IPaddr2): Started rhel7-2 petulant (service:DummySD): Started rhel7-2 r192.168.122.208 (ocf::heartbeat:IPaddr2): Started rhel7-2 lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Started rhel7-2 remote-rhel7-3 (ocf::pacemaker:remote): FAILED remote-rsc (ocf::heartbeat:Dummy): Started rhel7-1 Transition 
Summary: * Restart Fencing ( rhel7-2 ) due to resource definition change Executing cluster transition: * Resource action: Fencing stop on rhel7-2 * Resource action: Fencing start on rhel7-2 * Resource action: Fencing monitor=120000 on rhel7-2 * Pseudo action: all_stopped Using the original execution date of: 2017-08-21 17:12:54Z Revised cluster status: Online: [ rhel7-1 rhel7-2 rhel7-4 rhel7-5 ] RemoteOFFLINE: [ remote-rhel7-3 ] Fencing (stonith:fence_xvm): Started rhel7-2 FencingFail (stonith:fence_dummy): Started rhel7-4 rsc_rhel7-1 (ocf::heartbeat:IPaddr2): Started rhel7-1 rsc_rhel7-2 (ocf::heartbeat:IPaddr2): Started rhel7-2 rsc_rhel7-3 (ocf::heartbeat:IPaddr2): Started rhel7-5 rsc_rhel7-4 (ocf::heartbeat:IPaddr2): Started rhel7-4 rsc_rhel7-5 (ocf::heartbeat:IPaddr2): Started rhel7-5 migrator (ocf::pacemaker:Dummy): Started rhel7-5 Clone Set: Connectivity [ping-1] Started: [ rhel7-1 rhel7-2 rhel7-4 rhel7-5 ] Stopped: [ remote-rhel7-3 ] - Master/Slave Set: master-1 [stateful-1] + Clone Set: master-1 [stateful-1] (promotable) Masters: [ rhel7-2 ] Slaves: [ rhel7-1 rhel7-4 rhel7-5 ] Stopped: [ remote-rhel7-3 ] Resource Group: group-1 r192.168.122.207 (ocf::heartbeat:IPaddr2): Started rhel7-2 petulant (service:DummySD): Started rhel7-2 r192.168.122.208 (ocf::heartbeat:IPaddr2): Started rhel7-2 lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Started rhel7-2 remote-rhel7-3 (ocf::pacemaker:remote): FAILED remote-rsc (ocf::heartbeat:Dummy): Started rhel7-1 diff --git a/cts/scheduler/remote-recover-all.summary b/cts/scheduler/remote-recover-all.summary index ba074e5082..7fa3c111eb 100644 --- a/cts/scheduler/remote-recover-all.summary +++ b/cts/scheduler/remote-recover-all.summary @@ -1,154 +1,154 @@ Using the original execution date of: 2017-05-03 13:33:24Z Current cluster status: Node controller-1 (2): UNCLEAN (offline) Online: [ controller-0 controller-2 ] RemoteOnline: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ] messaging-0 (ocf::pacemaker:remote): Started controller-0 messaging-1 (ocf::pacemaker:remote): Started controller-1 (UNCLEAN) messaging-2 (ocf::pacemaker:remote): Started controller-0 galera-0 (ocf::pacemaker:remote): Started controller-1 (UNCLEAN) galera-1 (ocf::pacemaker:remote): Started controller-0 galera-2 (ocf::pacemaker:remote): Started controller-1 (UNCLEAN) Clone Set: rabbitmq-clone [rabbitmq] Started: [ messaging-0 messaging-1 messaging-2 ] Stopped: [ controller-0 controller-1 controller-2 galera-0 galera-1 galera-2 ] - Master/Slave Set: galera-master [galera] + Clone Set: galera-master [galera] (promotable) Masters: [ galera-0 galera-1 galera-2 ] Stopped: [ controller-0 controller-1 controller-2 messaging-0 messaging-1 messaging-2 ] - Master/Slave Set: redis-master [redis] + Clone Set: redis-master [redis] (promotable) redis (ocf::heartbeat:redis): Slave controller-1 (UNCLEAN) Masters: [ controller-0 ] Slaves: [ controller-2 ] Stopped: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ] ip-192.168.24.6 (ocf::heartbeat:IPaddr2): Started controller-0 ip-10.0.0.102 (ocf::heartbeat:IPaddr2): Started controller-0 ip-172.17.1.14 (ocf::heartbeat:IPaddr2): Started controller-1 (UNCLEAN) ip-172.17.1.17 (ocf::heartbeat:IPaddr2): Started controller-1 (UNCLEAN) ip-172.17.3.15 (ocf::heartbeat:IPaddr2): Started controller-0 ip-172.17.4.11 (ocf::heartbeat:IPaddr2): Started controller-1 (UNCLEAN) Clone Set: haproxy-clone [haproxy] haproxy (systemd:haproxy): Started controller-1 (UNCLEAN) Started: [ controller-0 controller-2 ] Stopped: [ 
galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ] openstack-cinder-volume (systemd:openstack-cinder-volume): Started controller-0 stonith-fence_ipmilan-525400bbf613 (stonith:fence_ipmilan): Started controller-0 stonith-fence_ipmilan-525400b4f6bd (stonith:fence_ipmilan): Started controller-0 stonith-fence_ipmilan-5254005bdbb5 (stonith:fence_ipmilan): Started controller-1 (UNCLEAN) Transition Summary: * Fence (reboot) messaging-1 'resources are active and the connection is unrecoverable' * Fence (reboot) galera-2 'resources are active and the connection is unrecoverable' * Fence (reboot) controller-1 'peer is no longer part of the cluster' * Stop messaging-1 (controller-1) due to node availability * Move galera-0 ( controller-1 -> controller-2 ) * Stop galera-2 (controller-1) due to node availability * Stop rabbitmq:2 (messaging-1) due to node availability * Stop galera:1 ( Master galera-2 ) due to node availability * Stop redis:0 ( Slave controller-1 ) due to node availability * Move ip-172.17.1.14 ( controller-1 -> controller-2 ) * Move ip-172.17.1.17 ( controller-1 -> controller-2 ) * Move ip-172.17.4.11 ( controller-1 -> controller-2 ) * Stop haproxy:0 (controller-1) due to node availability * Restart stonith-fence_ipmilan-525400bbf613 ( controller-0 ) due to resource definition change * Restart stonith-fence_ipmilan-525400b4f6bd ( controller-0 ) due to resource definition change * Move stonith-fence_ipmilan-5254005bdbb5 ( controller-1 -> controller-2 ) Executing cluster transition: * Pseudo action: galera-0_stop_0 * Pseudo action: galera-master_demote_0 * Pseudo action: redis-master_pre_notify_stop_0 * Resource action: stonith-fence_ipmilan-525400bbf613 stop on controller-0 * Resource action: stonith-fence_ipmilan-525400b4f6bd stop on controller-0 * Pseudo action: stonith-fence_ipmilan-5254005bdbb5_stop_0 * Fencing controller-1 (reboot) * Pseudo action: redis_post_notify_stop_0 * Resource action: redis notify on controller-0 * Resource action: redis notify on controller-2 * Pseudo action: redis-master_confirmed-pre_notify_stop_0 * Pseudo action: redis-master_stop_0 * Pseudo action: haproxy-clone_stop_0 * Fencing galera-2 (reboot) * Pseudo action: galera_demote_0 * Pseudo action: galera-master_demoted_0 * Pseudo action: galera-master_stop_0 * Pseudo action: redis_stop_0 * Pseudo action: redis-master_stopped_0 * Pseudo action: haproxy_stop_0 * Pseudo action: haproxy-clone_stopped_0 * Fencing messaging-1 (reboot) * Pseudo action: stonith_complete * Pseudo action: rabbitmq_post_notify_stop_0 * Pseudo action: rabbitmq-clone_stop_0 * Pseudo action: galera_stop_0 * Pseudo action: galera-master_stopped_0 * Pseudo action: redis-master_post_notify_stopped_0 * Pseudo action: ip-172.17.1.14_stop_0 * Pseudo action: ip-172.17.1.17_stop_0 * Pseudo action: ip-172.17.4.11_stop_0 * Pseudo action: galera-2_stop_0 * Resource action: rabbitmq notify on messaging-2 * Resource action: rabbitmq notify on messaging-0 * Pseudo action: rabbitmq_notified_0 * Pseudo action: rabbitmq_stop_0 * Pseudo action: rabbitmq-clone_stopped_0 * Resource action: redis notify on controller-0 * Resource action: redis notify on controller-2 * Pseudo action: redis-master_confirmed-post_notify_stopped_0 * Resource action: ip-172.17.1.14 start on controller-2 * Resource action: ip-172.17.1.17 start on controller-2 * Resource action: ip-172.17.4.11 start on controller-2 * Pseudo action: messaging-1_stop_0 * Pseudo action: redis_notified_0 * Resource action: ip-172.17.1.14 monitor=10000 on controller-2 * Resource action: 
ip-172.17.1.17 monitor=10000 on controller-2 * Resource action: ip-172.17.4.11 monitor=10000 on controller-2 * Pseudo action: all_stopped * Resource action: galera-0 start on controller-2 * Resource action: galera monitor=10000 on galera-0 * Resource action: stonith-fence_ipmilan-525400bbf613 start on controller-0 * Resource action: stonith-fence_ipmilan-525400bbf613 monitor=60000 on controller-0 * Resource action: stonith-fence_ipmilan-525400b4f6bd start on controller-0 * Resource action: stonith-fence_ipmilan-525400b4f6bd monitor=60000 on controller-0 * Resource action: stonith-fence_ipmilan-5254005bdbb5 start on controller-2 * Resource action: galera-0 monitor=20000 on controller-2 * Resource action: stonith-fence_ipmilan-5254005bdbb5 monitor=60000 on controller-2 Using the original execution date of: 2017-05-03 13:33:24Z Revised cluster status: Online: [ controller-0 controller-2 ] OFFLINE: [ controller-1 ] RemoteOnline: [ galera-0 galera-1 messaging-0 messaging-2 ] RemoteOFFLINE: [ galera-2 messaging-1 ] messaging-0 (ocf::pacemaker:remote): Started controller-0 messaging-1 (ocf::pacemaker:remote): Stopped messaging-2 (ocf::pacemaker:remote): Started controller-0 galera-0 (ocf::pacemaker:remote): Started controller-2 galera-1 (ocf::pacemaker:remote): Started controller-0 galera-2 (ocf::pacemaker:remote): Stopped Clone Set: rabbitmq-clone [rabbitmq] Started: [ messaging-0 messaging-2 ] Stopped: [ controller-0 controller-1 controller-2 galera-0 galera-1 galera-2 messaging-1 ] - Master/Slave Set: galera-master [galera] + Clone Set: galera-master [galera] (promotable) Masters: [ galera-0 galera-1 ] Stopped: [ controller-0 controller-1 controller-2 galera-2 messaging-0 messaging-1 messaging-2 ] - Master/Slave Set: redis-master [redis] + Clone Set: redis-master [redis] (promotable) Masters: [ controller-0 ] Slaves: [ controller-2 ] Stopped: [ controller-1 galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ] ip-192.168.24.6 (ocf::heartbeat:IPaddr2): Started controller-0 ip-10.0.0.102 (ocf::heartbeat:IPaddr2): Started controller-0 ip-172.17.1.14 (ocf::heartbeat:IPaddr2): Started controller-2 ip-172.17.1.17 (ocf::heartbeat:IPaddr2): Started controller-2 ip-172.17.3.15 (ocf::heartbeat:IPaddr2): Started controller-0 ip-172.17.4.11 (ocf::heartbeat:IPaddr2): Started controller-2 Clone Set: haproxy-clone [haproxy] Started: [ controller-0 controller-2 ] Stopped: [ controller-1 galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ] openstack-cinder-volume (systemd:openstack-cinder-volume): Started controller-0 stonith-fence_ipmilan-525400bbf613 (stonith:fence_ipmilan): Started controller-0 stonith-fence_ipmilan-525400b4f6bd (stonith:fence_ipmilan): Started controller-0 stonith-fence_ipmilan-5254005bdbb5 (stonith:fence_ipmilan): Started controller-2 diff --git a/cts/scheduler/remote-recover-connection.summary b/cts/scheduler/remote-recover-connection.summary index 8246cd958d..fdd97f26d3 100644 --- a/cts/scheduler/remote-recover-connection.summary +++ b/cts/scheduler/remote-recover-connection.summary @@ -1,140 +1,140 @@ Using the original execution date of: 2017-05-03 13:33:24Z Current cluster status: Node controller-1 (2): UNCLEAN (offline) Online: [ controller-0 controller-2 ] RemoteOnline: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ] messaging-0 (ocf::pacemaker:remote): Started controller-0 messaging-1 (ocf::pacemaker:remote): Started controller-1 (UNCLEAN) messaging-2 (ocf::pacemaker:remote): Started controller-0 galera-0 (ocf::pacemaker:remote): Started 
controller-1 (UNCLEAN) galera-1 (ocf::pacemaker:remote): Started controller-0 galera-2 (ocf::pacemaker:remote): Started controller-1 (UNCLEAN) Clone Set: rabbitmq-clone [rabbitmq] Started: [ messaging-0 messaging-1 messaging-2 ] Stopped: [ controller-0 controller-1 controller-2 galera-0 galera-1 galera-2 ] - Master/Slave Set: galera-master [galera] + Clone Set: galera-master [galera] (promotable) Masters: [ galera-0 galera-1 galera-2 ] Stopped: [ controller-0 controller-1 controller-2 messaging-0 messaging-1 messaging-2 ] - Master/Slave Set: redis-master [redis] + Clone Set: redis-master [redis] (promotable) redis (ocf::heartbeat:redis): Slave controller-1 (UNCLEAN) Masters: [ controller-0 ] Slaves: [ controller-2 ] Stopped: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ] ip-192.168.24.6 (ocf::heartbeat:IPaddr2): Started controller-0 ip-10.0.0.102 (ocf::heartbeat:IPaddr2): Started controller-0 ip-172.17.1.14 (ocf::heartbeat:IPaddr2): Started controller-1 (UNCLEAN) ip-172.17.1.17 (ocf::heartbeat:IPaddr2): Started controller-1 (UNCLEAN) ip-172.17.3.15 (ocf::heartbeat:IPaddr2): Started controller-0 ip-172.17.4.11 (ocf::heartbeat:IPaddr2): Started controller-1 (UNCLEAN) Clone Set: haproxy-clone [haproxy] haproxy (systemd:haproxy): Started controller-1 (UNCLEAN) Started: [ controller-0 controller-2 ] Stopped: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ] openstack-cinder-volume (systemd:openstack-cinder-volume): Started controller-0 stonith-fence_ipmilan-525400bbf613 (stonith:fence_ipmilan): Started controller-0 stonith-fence_ipmilan-525400b4f6bd (stonith:fence_ipmilan): Started controller-0 stonith-fence_ipmilan-5254005bdbb5 (stonith:fence_ipmilan): Started controller-1 (UNCLEAN) Transition Summary: * Fence (reboot) controller-1 'peer is no longer part of the cluster' * Move messaging-1 ( controller-1 -> controller-2 ) * Move galera-0 ( controller-1 -> controller-2 ) * Move galera-2 ( controller-1 -> controller-2 ) * Stop redis:0 ( Slave controller-1 ) due to node availability * Move ip-172.17.1.14 ( controller-1 -> controller-2 ) * Move ip-172.17.1.17 ( controller-1 -> controller-2 ) * Move ip-172.17.4.11 ( controller-1 -> controller-2 ) * Stop haproxy:0 (controller-1) due to node availability * Restart stonith-fence_ipmilan-525400bbf613 ( controller-0 ) due to resource definition change * Restart stonith-fence_ipmilan-525400b4f6bd ( controller-0 ) due to resource definition change * Move stonith-fence_ipmilan-5254005bdbb5 ( controller-1 -> controller-2 ) Executing cluster transition: * Pseudo action: messaging-1_stop_0 * Pseudo action: galera-0_stop_0 * Pseudo action: galera-2_stop_0 * Pseudo action: redis-master_pre_notify_stop_0 * Resource action: stonith-fence_ipmilan-525400bbf613 stop on controller-0 * Resource action: stonith-fence_ipmilan-525400bbf613 start on controller-0 * Resource action: stonith-fence_ipmilan-525400bbf613 monitor=60000 on controller-0 * Resource action: stonith-fence_ipmilan-525400b4f6bd stop on controller-0 * Resource action: stonith-fence_ipmilan-525400b4f6bd start on controller-0 * Resource action: stonith-fence_ipmilan-525400b4f6bd monitor=60000 on controller-0 * Pseudo action: stonith-fence_ipmilan-5254005bdbb5_stop_0 * Fencing controller-1 (reboot) * Resource action: messaging-1 start on controller-2 * Resource action: galera-0 start on controller-2 * Resource action: galera-2 start on controller-2 * Resource action: rabbitmq monitor=10000 on messaging-1 * Resource action: galera monitor=10000 on galera-2 * Resource 
action: galera monitor=10000 on galera-0 * Pseudo action: redis_post_notify_stop_0 * Resource action: redis notify on controller-0 * Resource action: redis notify on controller-2 * Pseudo action: redis-master_confirmed-pre_notify_stop_0 * Pseudo action: redis-master_stop_0 * Pseudo action: haproxy-clone_stop_0 * Resource action: stonith-fence_ipmilan-5254005bdbb5 start on controller-2 * Pseudo action: stonith_complete * Resource action: messaging-1 monitor=20000 on controller-2 * Resource action: galera-0 monitor=20000 on controller-2 * Resource action: galera-2 monitor=20000 on controller-2 * Pseudo action: redis_stop_0 * Pseudo action: redis-master_stopped_0 * Pseudo action: haproxy_stop_0 * Pseudo action: haproxy-clone_stopped_0 * Resource action: stonith-fence_ipmilan-5254005bdbb5 monitor=60000 on controller-2 * Pseudo action: redis-master_post_notify_stopped_0 * Pseudo action: ip-172.17.1.14_stop_0 * Pseudo action: ip-172.17.1.17_stop_0 * Pseudo action: ip-172.17.4.11_stop_0 * Resource action: redis notify on controller-0 * Resource action: redis notify on controller-2 * Pseudo action: redis-master_confirmed-post_notify_stopped_0 * Resource action: ip-172.17.1.14 start on controller-2 * Resource action: ip-172.17.1.17 start on controller-2 * Resource action: ip-172.17.4.11 start on controller-2 * Pseudo action: redis_notified_0 * Resource action: ip-172.17.1.14 monitor=10000 on controller-2 * Resource action: ip-172.17.1.17 monitor=10000 on controller-2 * Resource action: ip-172.17.4.11 monitor=10000 on controller-2 * Pseudo action: all_stopped Using the original execution date of: 2017-05-03 13:33:24Z Revised cluster status: Online: [ controller-0 controller-2 ] OFFLINE: [ controller-1 ] RemoteOnline: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ] messaging-0 (ocf::pacemaker:remote): Started controller-0 messaging-1 (ocf::pacemaker:remote): Started controller-2 messaging-2 (ocf::pacemaker:remote): Started controller-0 galera-0 (ocf::pacemaker:remote): Started controller-2 galera-1 (ocf::pacemaker:remote): Started controller-0 galera-2 (ocf::pacemaker:remote): Started controller-2 Clone Set: rabbitmq-clone [rabbitmq] Started: [ messaging-0 messaging-1 messaging-2 ] Stopped: [ controller-0 controller-1 controller-2 galera-0 galera-1 galera-2 ] - Master/Slave Set: galera-master [galera] + Clone Set: galera-master [galera] (promotable) Masters: [ galera-0 galera-1 galera-2 ] Stopped: [ controller-0 controller-1 controller-2 messaging-0 messaging-1 messaging-2 ] - Master/Slave Set: redis-master [redis] + Clone Set: redis-master [redis] (promotable) Masters: [ controller-0 ] Slaves: [ controller-2 ] Stopped: [ controller-1 galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ] ip-192.168.24.6 (ocf::heartbeat:IPaddr2): Started controller-0 ip-10.0.0.102 (ocf::heartbeat:IPaddr2): Started controller-0 ip-172.17.1.14 (ocf::heartbeat:IPaddr2): Started controller-2 ip-172.17.1.17 (ocf::heartbeat:IPaddr2): Started controller-2 ip-172.17.3.15 (ocf::heartbeat:IPaddr2): Started controller-0 ip-172.17.4.11 (ocf::heartbeat:IPaddr2): Started controller-2 Clone Set: haproxy-clone [haproxy] Started: [ controller-0 controller-2 ] Stopped: [ controller-1 galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ] openstack-cinder-volume (systemd:openstack-cinder-volume): Started controller-0 stonith-fence_ipmilan-525400bbf613 (stonith:fence_ipmilan): Started controller-0 stonith-fence_ipmilan-525400b4f6bd (stonith:fence_ipmilan): Started controller-0 
stonith-fence_ipmilan-5254005bdbb5 (stonith:fence_ipmilan): Started controller-2 diff --git a/cts/scheduler/remote-recover-no-resources.summary b/cts/scheduler/remote-recover-no-resources.summary index 35be629fe9..13c32ff65e 100644 --- a/cts/scheduler/remote-recover-no-resources.summary +++ b/cts/scheduler/remote-recover-no-resources.summary @@ -1,145 +1,145 @@ Using the original execution date of: 2017-05-03 13:33:24Z Current cluster status: Node controller-1 (2): UNCLEAN (offline) Online: [ controller-0 controller-2 ] RemoteOnline: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ] messaging-0 (ocf::pacemaker:remote): Started controller-0 messaging-1 (ocf::pacemaker:remote): Started controller-1 (UNCLEAN) messaging-2 (ocf::pacemaker:remote): Started controller-0 galera-0 (ocf::pacemaker:remote): Started controller-1 (UNCLEAN) galera-1 (ocf::pacemaker:remote): Started controller-0 galera-2 (ocf::pacemaker:remote): Started controller-1 (UNCLEAN) Clone Set: rabbitmq-clone [rabbitmq] Started: [ messaging-0 messaging-1 messaging-2 ] Stopped: [ controller-0 controller-1 controller-2 galera-0 galera-1 galera-2 ] - Master/Slave Set: galera-master [galera] + Clone Set: galera-master [galera] (promotable) Masters: [ galera-0 galera-1 ] Stopped: [ controller-0 controller-1 controller-2 galera-2 messaging-0 messaging-1 messaging-2 ] - Master/Slave Set: redis-master [redis] + Clone Set: redis-master [redis] (promotable) redis (ocf::heartbeat:redis): Slave controller-1 (UNCLEAN) Masters: [ controller-0 ] Slaves: [ controller-2 ] Stopped: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ] ip-192.168.24.6 (ocf::heartbeat:IPaddr2): Started controller-0 ip-10.0.0.102 (ocf::heartbeat:IPaddr2): Started controller-0 ip-172.17.1.14 (ocf::heartbeat:IPaddr2): Started controller-1 (UNCLEAN) ip-172.17.1.17 (ocf::heartbeat:IPaddr2): Started controller-1 (UNCLEAN) ip-172.17.3.15 (ocf::heartbeat:IPaddr2): Started controller-0 ip-172.17.4.11 (ocf::heartbeat:IPaddr2): Started controller-1 (UNCLEAN) Clone Set: haproxy-clone [haproxy] haproxy (systemd:haproxy): Started controller-1 (UNCLEAN) Started: [ controller-0 controller-2 ] Stopped: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ] openstack-cinder-volume (systemd:openstack-cinder-volume): Started controller-0 stonith-fence_ipmilan-525400bbf613 (stonith:fence_ipmilan): Started controller-0 stonith-fence_ipmilan-525400b4f6bd (stonith:fence_ipmilan): Started controller-0 stonith-fence_ipmilan-5254005bdbb5 (stonith:fence_ipmilan): Started controller-1 (UNCLEAN) Transition Summary: * Fence (reboot) messaging-1 'resources are active and the connection is unrecoverable' * Fence (reboot) controller-1 'peer is no longer part of the cluster' * Stop messaging-1 (controller-1) due to node availability * Move galera-0 ( controller-1 -> controller-2 ) * Stop galera-2 ( controller-1 ) due to node availability * Stop rabbitmq:2 (messaging-1) due to node availability * Stop redis:0 ( Slave controller-1 ) due to node availability * Move ip-172.17.1.14 ( controller-1 -> controller-2 ) * Move ip-172.17.1.17 ( controller-1 -> controller-2 ) * Move ip-172.17.4.11 ( controller-1 -> controller-2 ) * Stop haproxy:0 (controller-1) due to node availability * Restart stonith-fence_ipmilan-525400bbf613 ( controller-0 ) due to resource definition change * Restart stonith-fence_ipmilan-525400b4f6bd ( controller-0 ) due to resource definition change * Move stonith-fence_ipmilan-5254005bdbb5 ( controller-1 -> controller-2 ) Executing cluster 
transition: * Pseudo action: galera-0_stop_0 * Pseudo action: galera-2_stop_0 * Pseudo action: redis-master_pre_notify_stop_0 * Resource action: stonith-fence_ipmilan-525400bbf613 stop on controller-0 * Resource action: stonith-fence_ipmilan-525400b4f6bd stop on controller-0 * Pseudo action: stonith-fence_ipmilan-5254005bdbb5_stop_0 * Fencing controller-1 (reboot) * Pseudo action: redis_post_notify_stop_0 * Resource action: redis notify on controller-0 * Resource action: redis notify on controller-2 * Pseudo action: redis-master_confirmed-pre_notify_stop_0 * Pseudo action: redis-master_stop_0 * Pseudo action: haproxy-clone_stop_0 * Fencing messaging-1 (reboot) * Pseudo action: stonith_complete * Pseudo action: rabbitmq_post_notify_stop_0 * Pseudo action: rabbitmq-clone_stop_0 * Pseudo action: redis_stop_0 * Pseudo action: redis-master_stopped_0 * Pseudo action: haproxy_stop_0 * Pseudo action: haproxy-clone_stopped_0 * Resource action: rabbitmq notify on messaging-2 * Resource action: rabbitmq notify on messaging-0 * Pseudo action: rabbitmq_notified_0 * Pseudo action: rabbitmq_stop_0 * Pseudo action: rabbitmq-clone_stopped_0 * Pseudo action: redis-master_post_notify_stopped_0 * Pseudo action: ip-172.17.1.14_stop_0 * Pseudo action: ip-172.17.1.17_stop_0 * Pseudo action: ip-172.17.4.11_stop_0 * Pseudo action: messaging-1_stop_0 * Resource action: redis notify on controller-0 * Resource action: redis notify on controller-2 * Pseudo action: redis-master_confirmed-post_notify_stopped_0 * Resource action: ip-172.17.1.14 start on controller-2 * Resource action: ip-172.17.1.17 start on controller-2 * Resource action: ip-172.17.4.11 start on controller-2 * Pseudo action: redis_notified_0 * Resource action: ip-172.17.1.14 monitor=10000 on controller-2 * Resource action: ip-172.17.1.17 monitor=10000 on controller-2 * Resource action: ip-172.17.4.11 monitor=10000 on controller-2 * Pseudo action: all_stopped * Resource action: galera-0 start on controller-2 * Resource action: galera monitor=10000 on galera-0 * Resource action: stonith-fence_ipmilan-525400bbf613 start on controller-0 * Resource action: stonith-fence_ipmilan-525400bbf613 monitor=60000 on controller-0 * Resource action: stonith-fence_ipmilan-525400b4f6bd start on controller-0 * Resource action: stonith-fence_ipmilan-525400b4f6bd monitor=60000 on controller-0 * Resource action: stonith-fence_ipmilan-5254005bdbb5 start on controller-2 * Resource action: galera-0 monitor=20000 on controller-2 * Resource action: stonith-fence_ipmilan-5254005bdbb5 monitor=60000 on controller-2 Using the original execution date of: 2017-05-03 13:33:24Z Revised cluster status: Online: [ controller-0 controller-2 ] OFFLINE: [ controller-1 ] RemoteOnline: [ galera-0 galera-1 messaging-0 messaging-2 ] RemoteOFFLINE: [ galera-2 messaging-1 ] messaging-0 (ocf::pacemaker:remote): Started controller-0 messaging-1 (ocf::pacemaker:remote): Stopped messaging-2 (ocf::pacemaker:remote): Started controller-0 galera-0 (ocf::pacemaker:remote): Started controller-2 galera-1 (ocf::pacemaker:remote): Started controller-0 galera-2 (ocf::pacemaker:remote): Stopped Clone Set: rabbitmq-clone [rabbitmq] Started: [ messaging-0 messaging-2 ] Stopped: [ controller-0 controller-1 controller-2 galera-0 galera-1 galera-2 messaging-1 ] - Master/Slave Set: galera-master [galera] + Clone Set: galera-master [galera] (promotable) Masters: [ galera-0 galera-1 ] Stopped: [ controller-0 controller-1 controller-2 galera-2 messaging-0 messaging-1 messaging-2 ] - Master/Slave Set: redis-master [redis] 
+ Clone Set: redis-master [redis] (promotable) Masters: [ controller-0 ] Slaves: [ controller-2 ] Stopped: [ controller-1 galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ] ip-192.168.24.6 (ocf::heartbeat:IPaddr2): Started controller-0 ip-10.0.0.102 (ocf::heartbeat:IPaddr2): Started controller-0 ip-172.17.1.14 (ocf::heartbeat:IPaddr2): Started controller-2 ip-172.17.1.17 (ocf::heartbeat:IPaddr2): Started controller-2 ip-172.17.3.15 (ocf::heartbeat:IPaddr2): Started controller-0 ip-172.17.4.11 (ocf::heartbeat:IPaddr2): Started controller-2 Clone Set: haproxy-clone [haproxy] Started: [ controller-0 controller-2 ] Stopped: [ controller-1 galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ] openstack-cinder-volume (systemd:openstack-cinder-volume): Started controller-0 stonith-fence_ipmilan-525400bbf613 (stonith:fence_ipmilan): Started controller-0 stonith-fence_ipmilan-525400b4f6bd (stonith:fence_ipmilan): Started controller-0 stonith-fence_ipmilan-5254005bdbb5 (stonith:fence_ipmilan): Started controller-2 diff --git a/cts/scheduler/remote-recover-unknown.summary b/cts/scheduler/remote-recover-unknown.summary index cd82b8cb09..7e56e7d64e 100644 --- a/cts/scheduler/remote-recover-unknown.summary +++ b/cts/scheduler/remote-recover-unknown.summary @@ -1,147 +1,147 @@ Using the original execution date of: 2017-05-03 13:33:24Z Current cluster status: Node controller-1 (2): UNCLEAN (offline) Online: [ controller-0 controller-2 ] RemoteOnline: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ] messaging-0 (ocf::pacemaker:remote): Started controller-0 messaging-1 (ocf::pacemaker:remote): Started controller-1 (UNCLEAN) messaging-2 (ocf::pacemaker:remote): Started controller-0 galera-0 (ocf::pacemaker:remote): Started controller-1 (UNCLEAN) galera-1 (ocf::pacemaker:remote): Started controller-0 galera-2 (ocf::pacemaker:remote): Started controller-1 (UNCLEAN) Clone Set: rabbitmq-clone [rabbitmq] Started: [ messaging-0 messaging-1 messaging-2 ] Stopped: [ controller-0 controller-1 controller-2 galera-0 galera-1 galera-2 ] - Master/Slave Set: galera-master [galera] + Clone Set: galera-master [galera] (promotable) Masters: [ galera-0 galera-1 ] Stopped: [ controller-0 controller-1 controller-2 galera-2 messaging-0 messaging-1 messaging-2 ] - Master/Slave Set: redis-master [redis] + Clone Set: redis-master [redis] (promotable) redis (ocf::heartbeat:redis): Slave controller-1 (UNCLEAN) Masters: [ controller-0 ] Slaves: [ controller-2 ] Stopped: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ] ip-192.168.24.6 (ocf::heartbeat:IPaddr2): Started controller-0 ip-10.0.0.102 (ocf::heartbeat:IPaddr2): Started controller-0 ip-172.17.1.14 (ocf::heartbeat:IPaddr2): Started controller-1 (UNCLEAN) ip-172.17.1.17 (ocf::heartbeat:IPaddr2): Started controller-1 (UNCLEAN) ip-172.17.3.15 (ocf::heartbeat:IPaddr2): Started controller-0 ip-172.17.4.11 (ocf::heartbeat:IPaddr2): Started controller-1 (UNCLEAN) Clone Set: haproxy-clone [haproxy] haproxy (systemd:haproxy): Started controller-1 (UNCLEAN) Started: [ controller-0 controller-2 ] Stopped: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ] openstack-cinder-volume (systemd:openstack-cinder-volume): Started controller-0 stonith-fence_ipmilan-525400bbf613 (stonith:fence_ipmilan): Started controller-0 stonith-fence_ipmilan-525400b4f6bd (stonith:fence_ipmilan): Started controller-0 stonith-fence_ipmilan-5254005bdbb5 (stonith:fence_ipmilan): Started controller-1 (UNCLEAN) Transition Summary: * Fence 
(reboot) galera-2 'resources are in an unknown state and the connection is unrecoverable' * Fence (reboot) messaging-1 'resources are active and the connection is unrecoverable' * Fence (reboot) controller-1 'peer is no longer part of the cluster' * Stop messaging-1 (controller-1) due to node availability * Move galera-0 ( controller-1 -> controller-2 ) * Stop galera-2 (controller-1) due to node availability * Stop rabbitmq:2 (messaging-1) due to node availability * Stop redis:0 ( Slave controller-1 ) due to node availability * Move ip-172.17.1.14 ( controller-1 -> controller-2 ) * Move ip-172.17.1.17 ( controller-1 -> controller-2 ) * Move ip-172.17.4.11 ( controller-1 -> controller-2 ) * Stop haproxy:0 (controller-1) due to node availability * Restart stonith-fence_ipmilan-525400bbf613 ( controller-0 ) due to resource definition change * Restart stonith-fence_ipmilan-525400b4f6bd ( controller-0 ) due to resource definition change * Move stonith-fence_ipmilan-5254005bdbb5 ( controller-1 -> controller-2 ) Executing cluster transition: * Pseudo action: galera-0_stop_0 * Pseudo action: galera-2_stop_0 * Pseudo action: redis-master_pre_notify_stop_0 * Resource action: stonith-fence_ipmilan-525400bbf613 stop on controller-0 * Resource action: stonith-fence_ipmilan-525400b4f6bd stop on controller-0 * Pseudo action: stonith-fence_ipmilan-5254005bdbb5_stop_0 * Fencing controller-1 (reboot) * Pseudo action: redis_post_notify_stop_0 * Resource action: redis notify on controller-0 * Resource action: redis notify on controller-2 * Pseudo action: redis-master_confirmed-pre_notify_stop_0 * Pseudo action: redis-master_stop_0 * Pseudo action: haproxy-clone_stop_0 * Fencing galera-2 (reboot) * Fencing messaging-1 (reboot) * Pseudo action: stonith_complete * Pseudo action: rabbitmq_post_notify_stop_0 * Pseudo action: rabbitmq-clone_stop_0 * Pseudo action: redis_stop_0 * Pseudo action: redis-master_stopped_0 * Pseudo action: haproxy_stop_0 * Pseudo action: haproxy-clone_stopped_0 * Resource action: rabbitmq notify on messaging-2 * Resource action: rabbitmq notify on messaging-0 * Pseudo action: rabbitmq_notified_0 * Pseudo action: rabbitmq_stop_0 * Pseudo action: rabbitmq-clone_stopped_0 * Pseudo action: redis-master_post_notify_stopped_0 * Pseudo action: ip-172.17.1.14_stop_0 * Pseudo action: ip-172.17.1.17_stop_0 * Pseudo action: ip-172.17.4.11_stop_0 * Pseudo action: messaging-1_stop_0 * Resource action: redis notify on controller-0 * Resource action: redis notify on controller-2 * Pseudo action: redis-master_confirmed-post_notify_stopped_0 * Resource action: ip-172.17.1.14 start on controller-2 * Resource action: ip-172.17.1.17 start on controller-2 * Resource action: ip-172.17.4.11 start on controller-2 * Pseudo action: redis_notified_0 * Resource action: ip-172.17.1.14 monitor=10000 on controller-2 * Resource action: ip-172.17.1.17 monitor=10000 on controller-2 * Resource action: ip-172.17.4.11 monitor=10000 on controller-2 * Pseudo action: all_stopped * Resource action: galera-0 start on controller-2 * Resource action: galera monitor=10000 on galera-0 * Resource action: stonith-fence_ipmilan-525400bbf613 start on controller-0 * Resource action: stonith-fence_ipmilan-525400bbf613 monitor=60000 on controller-0 * Resource action: stonith-fence_ipmilan-525400b4f6bd start on controller-0 * Resource action: stonith-fence_ipmilan-525400b4f6bd monitor=60000 on controller-0 * Resource action: stonith-fence_ipmilan-5254005bdbb5 start on controller-2 * Resource action: galera-0 monitor=20000 on controller-2 * 
Resource action: stonith-fence_ipmilan-5254005bdbb5 monitor=60000 on controller-2 Using the original execution date of: 2017-05-03 13:33:24Z Revised cluster status: Online: [ controller-0 controller-2 ] OFFLINE: [ controller-1 ] RemoteOnline: [ galera-0 galera-1 messaging-0 messaging-2 ] RemoteOFFLINE: [ galera-2 messaging-1 ] messaging-0 (ocf::pacemaker:remote): Started controller-0 messaging-1 (ocf::pacemaker:remote): Stopped messaging-2 (ocf::pacemaker:remote): Started controller-0 galera-0 (ocf::pacemaker:remote): Started controller-2 galera-1 (ocf::pacemaker:remote): Started controller-0 galera-2 (ocf::pacemaker:remote): Stopped Clone Set: rabbitmq-clone [rabbitmq] Started: [ messaging-0 messaging-2 ] Stopped: [ controller-0 controller-1 controller-2 galera-0 galera-1 galera-2 messaging-1 ] - Master/Slave Set: galera-master [galera] + Clone Set: galera-master [galera] (promotable) Masters: [ galera-0 galera-1 ] Stopped: [ controller-0 controller-1 controller-2 galera-2 messaging-0 messaging-1 messaging-2 ] - Master/Slave Set: redis-master [redis] + Clone Set: redis-master [redis] (promotable) Masters: [ controller-0 ] Slaves: [ controller-2 ] Stopped: [ controller-1 galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ] ip-192.168.24.6 (ocf::heartbeat:IPaddr2): Started controller-0 ip-10.0.0.102 (ocf::heartbeat:IPaddr2): Started controller-0 ip-172.17.1.14 (ocf::heartbeat:IPaddr2): Started controller-2 ip-172.17.1.17 (ocf::heartbeat:IPaddr2): Started controller-2 ip-172.17.3.15 (ocf::heartbeat:IPaddr2): Started controller-0 ip-172.17.4.11 (ocf::heartbeat:IPaddr2): Started controller-2 Clone Set: haproxy-clone [haproxy] Started: [ controller-0 controller-2 ] Stopped: [ controller-1 galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ] openstack-cinder-volume (systemd:openstack-cinder-volume): Started controller-0 stonith-fence_ipmilan-525400bbf613 (stonith:fence_ipmilan): Started controller-0 stonith-fence_ipmilan-525400b4f6bd (stonith:fence_ipmilan): Started controller-0 stonith-fence_ipmilan-5254005bdbb5 (stonith:fence_ipmilan): Started controller-2 diff --git a/cts/scheduler/remote-recovery.summary b/cts/scheduler/remote-recovery.summary index 8246cd958d..fdd97f26d3 100644 --- a/cts/scheduler/remote-recovery.summary +++ b/cts/scheduler/remote-recovery.summary @@ -1,140 +1,140 @@ Using the original execution date of: 2017-05-03 13:33:24Z Current cluster status: Node controller-1 (2): UNCLEAN (offline) Online: [ controller-0 controller-2 ] RemoteOnline: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ] messaging-0 (ocf::pacemaker:remote): Started controller-0 messaging-1 (ocf::pacemaker:remote): Started controller-1 (UNCLEAN) messaging-2 (ocf::pacemaker:remote): Started controller-0 galera-0 (ocf::pacemaker:remote): Started controller-1 (UNCLEAN) galera-1 (ocf::pacemaker:remote): Started controller-0 galera-2 (ocf::pacemaker:remote): Started controller-1 (UNCLEAN) Clone Set: rabbitmq-clone [rabbitmq] Started: [ messaging-0 messaging-1 messaging-2 ] Stopped: [ controller-0 controller-1 controller-2 galera-0 galera-1 galera-2 ] - Master/Slave Set: galera-master [galera] + Clone Set: galera-master [galera] (promotable) Masters: [ galera-0 galera-1 galera-2 ] Stopped: [ controller-0 controller-1 controller-2 messaging-0 messaging-1 messaging-2 ] - Master/Slave Set: redis-master [redis] + Clone Set: redis-master [redis] (promotable) redis (ocf::heartbeat:redis): Slave controller-1 (UNCLEAN) Masters: [ controller-0 ] Slaves: [ controller-2 ] Stopped: [ 
galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ] ip-192.168.24.6 (ocf::heartbeat:IPaddr2): Started controller-0 ip-10.0.0.102 (ocf::heartbeat:IPaddr2): Started controller-0 ip-172.17.1.14 (ocf::heartbeat:IPaddr2): Started controller-1 (UNCLEAN) ip-172.17.1.17 (ocf::heartbeat:IPaddr2): Started controller-1 (UNCLEAN) ip-172.17.3.15 (ocf::heartbeat:IPaddr2): Started controller-0 ip-172.17.4.11 (ocf::heartbeat:IPaddr2): Started controller-1 (UNCLEAN) Clone Set: haproxy-clone [haproxy] haproxy (systemd:haproxy): Started controller-1 (UNCLEAN) Started: [ controller-0 controller-2 ] Stopped: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ] openstack-cinder-volume (systemd:openstack-cinder-volume): Started controller-0 stonith-fence_ipmilan-525400bbf613 (stonith:fence_ipmilan): Started controller-0 stonith-fence_ipmilan-525400b4f6bd (stonith:fence_ipmilan): Started controller-0 stonith-fence_ipmilan-5254005bdbb5 (stonith:fence_ipmilan): Started controller-1 (UNCLEAN) Transition Summary: * Fence (reboot) controller-1 'peer is no longer part of the cluster' * Move messaging-1 ( controller-1 -> controller-2 ) * Move galera-0 ( controller-1 -> controller-2 ) * Move galera-2 ( controller-1 -> controller-2 ) * Stop redis:0 ( Slave controller-1 ) due to node availability * Move ip-172.17.1.14 ( controller-1 -> controller-2 ) * Move ip-172.17.1.17 ( controller-1 -> controller-2 ) * Move ip-172.17.4.11 ( controller-1 -> controller-2 ) * Stop haproxy:0 (controller-1) due to node availability * Restart stonith-fence_ipmilan-525400bbf613 ( controller-0 ) due to resource definition change * Restart stonith-fence_ipmilan-525400b4f6bd ( controller-0 ) due to resource definition change * Move stonith-fence_ipmilan-5254005bdbb5 ( controller-1 -> controller-2 ) Executing cluster transition: * Pseudo action: messaging-1_stop_0 * Pseudo action: galera-0_stop_0 * Pseudo action: galera-2_stop_0 * Pseudo action: redis-master_pre_notify_stop_0 * Resource action: stonith-fence_ipmilan-525400bbf613 stop on controller-0 * Resource action: stonith-fence_ipmilan-525400bbf613 start on controller-0 * Resource action: stonith-fence_ipmilan-525400bbf613 monitor=60000 on controller-0 * Resource action: stonith-fence_ipmilan-525400b4f6bd stop on controller-0 * Resource action: stonith-fence_ipmilan-525400b4f6bd start on controller-0 * Resource action: stonith-fence_ipmilan-525400b4f6bd monitor=60000 on controller-0 * Pseudo action: stonith-fence_ipmilan-5254005bdbb5_stop_0 * Fencing controller-1 (reboot) * Resource action: messaging-1 start on controller-2 * Resource action: galera-0 start on controller-2 * Resource action: galera-2 start on controller-2 * Resource action: rabbitmq monitor=10000 on messaging-1 * Resource action: galera monitor=10000 on galera-2 * Resource action: galera monitor=10000 on galera-0 * Pseudo action: redis_post_notify_stop_0 * Resource action: redis notify on controller-0 * Resource action: redis notify on controller-2 * Pseudo action: redis-master_confirmed-pre_notify_stop_0 * Pseudo action: redis-master_stop_0 * Pseudo action: haproxy-clone_stop_0 * Resource action: stonith-fence_ipmilan-5254005bdbb5 start on controller-2 * Pseudo action: stonith_complete * Resource action: messaging-1 monitor=20000 on controller-2 * Resource action: galera-0 monitor=20000 on controller-2 * Resource action: galera-2 monitor=20000 on controller-2 * Pseudo action: redis_stop_0 * Pseudo action: redis-master_stopped_0 * Pseudo action: haproxy_stop_0 * Pseudo action: 
haproxy-clone_stopped_0 * Resource action: stonith-fence_ipmilan-5254005bdbb5 monitor=60000 on controller-2 * Pseudo action: redis-master_post_notify_stopped_0 * Pseudo action: ip-172.17.1.14_stop_0 * Pseudo action: ip-172.17.1.17_stop_0 * Pseudo action: ip-172.17.4.11_stop_0 * Resource action: redis notify on controller-0 * Resource action: redis notify on controller-2 * Pseudo action: redis-master_confirmed-post_notify_stopped_0 * Resource action: ip-172.17.1.14 start on controller-2 * Resource action: ip-172.17.1.17 start on controller-2 * Resource action: ip-172.17.4.11 start on controller-2 * Pseudo action: redis_notified_0 * Resource action: ip-172.17.1.14 monitor=10000 on controller-2 * Resource action: ip-172.17.1.17 monitor=10000 on controller-2 * Resource action: ip-172.17.4.11 monitor=10000 on controller-2 * Pseudo action: all_stopped Using the original execution date of: 2017-05-03 13:33:24Z Revised cluster status: Online: [ controller-0 controller-2 ] OFFLINE: [ controller-1 ] RemoteOnline: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ] messaging-0 (ocf::pacemaker:remote): Started controller-0 messaging-1 (ocf::pacemaker:remote): Started controller-2 messaging-2 (ocf::pacemaker:remote): Started controller-0 galera-0 (ocf::pacemaker:remote): Started controller-2 galera-1 (ocf::pacemaker:remote): Started controller-0 galera-2 (ocf::pacemaker:remote): Started controller-2 Clone Set: rabbitmq-clone [rabbitmq] Started: [ messaging-0 messaging-1 messaging-2 ] Stopped: [ controller-0 controller-1 controller-2 galera-0 galera-1 galera-2 ] - Master/Slave Set: galera-master [galera] + Clone Set: galera-master [galera] (promotable) Masters: [ galera-0 galera-1 galera-2 ] Stopped: [ controller-0 controller-1 controller-2 messaging-0 messaging-1 messaging-2 ] - Master/Slave Set: redis-master [redis] + Clone Set: redis-master [redis] (promotable) Masters: [ controller-0 ] Slaves: [ controller-2 ] Stopped: [ controller-1 galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ] ip-192.168.24.6 (ocf::heartbeat:IPaddr2): Started controller-0 ip-10.0.0.102 (ocf::heartbeat:IPaddr2): Started controller-0 ip-172.17.1.14 (ocf::heartbeat:IPaddr2): Started controller-2 ip-172.17.1.17 (ocf::heartbeat:IPaddr2): Started controller-2 ip-172.17.3.15 (ocf::heartbeat:IPaddr2): Started controller-0 ip-172.17.4.11 (ocf::heartbeat:IPaddr2): Started controller-2 Clone Set: haproxy-clone [haproxy] Started: [ controller-0 controller-2 ] Stopped: [ controller-1 galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ] openstack-cinder-volume (systemd:openstack-cinder-volume): Started controller-0 stonith-fence_ipmilan-525400bbf613 (stonith:fence_ipmilan): Started controller-0 stonith-fence_ipmilan-525400b4f6bd (stonith:fence_ipmilan): Started controller-0 stonith-fence_ipmilan-5254005bdbb5 (stonith:fence_ipmilan): Started controller-2 diff --git a/cts/scheduler/remote-stale-node-entry.summary b/cts/scheduler/remote-stale-node-entry.summary index a8b64815e7..3a99f6c1e5 100644 --- a/cts/scheduler/remote-stale-node-entry.summary +++ b/cts/scheduler/remote-stale-node-entry.summary @@ -1,110 +1,110 @@ Current cluster status: Online: [ rhel7-node1 rhel7-node2 rhel7-node3 ] RemoteOFFLINE: [ remote1 ] Fencing (stonith:fence_xvm): Stopped FencingPass (stonith:fence_dummy): Stopped rsc_rhel7-node1 (ocf::heartbeat:IPaddr2): Stopped rsc_rhel7-node2 (ocf::heartbeat:IPaddr2): Stopped rsc_rhel7-node3 (ocf::heartbeat:IPaddr2): Stopped migrator (ocf::pacemaker:Dummy): Stopped Clone Set: 
Connectivity [ping-1] Stopped: [ remote1 rhel7-node1 rhel7-node2 rhel7-node3 ] - Master/Slave Set: master-1 [stateful-1] + Clone Set: master-1 [stateful-1] (promotable) Stopped: [ remote1 rhel7-node1 rhel7-node2 rhel7-node3 ] Resource Group: group-1 r192.168.122.204 (ocf::heartbeat:IPaddr2): Stopped r192.168.122.205 (ocf::heartbeat:IPaddr2): Stopped r192.168.122.206 (ocf::heartbeat:IPaddr2): Stopped lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Stopped Transition Summary: * Start Fencing (rhel7-node1) * Start FencingPass (rhel7-node2) * Start rsc_rhel7-node1 (rhel7-node1) * Start rsc_rhel7-node2 (rhel7-node2) * Start rsc_rhel7-node3 (rhel7-node3) * Start migrator (rhel7-node3) * Start ping-1:0 (rhel7-node1) * Start ping-1:1 (rhel7-node2) * Start ping-1:2 (rhel7-node3) Executing cluster transition: * Resource action: Fencing monitor on rhel7-node3 * Resource action: Fencing monitor on rhel7-node2 * Resource action: Fencing monitor on rhel7-node1 * Resource action: FencingPass monitor on rhel7-node3 * Resource action: FencingPass monitor on rhel7-node2 * Resource action: FencingPass monitor on rhel7-node1 * Resource action: rsc_rhel7-node1 monitor on rhel7-node3 * Resource action: rsc_rhel7-node1 monitor on rhel7-node2 * Resource action: rsc_rhel7-node1 monitor on rhel7-node1 * Resource action: rsc_rhel7-node2 monitor on rhel7-node3 * Resource action: rsc_rhel7-node2 monitor on rhel7-node2 * Resource action: rsc_rhel7-node2 monitor on rhel7-node1 * Resource action: rsc_rhel7-node3 monitor on rhel7-node3 * Resource action: rsc_rhel7-node3 monitor on rhel7-node2 * Resource action: rsc_rhel7-node3 monitor on rhel7-node1 * Resource action: migrator monitor on rhel7-node3 * Resource action: migrator monitor on rhel7-node2 * Resource action: migrator monitor on rhel7-node1 * Resource action: ping-1:0 monitor on rhel7-node1 * Resource action: ping-1:1 monitor on rhel7-node2 * Resource action: ping-1:2 monitor on rhel7-node3 * Pseudo action: Connectivity_start_0 * Resource action: stateful-1:0 monitor on rhel7-node3 * Resource action: stateful-1:0 monitor on rhel7-node2 * Resource action: stateful-1:0 monitor on rhel7-node1 * Resource action: r192.168.122.204 monitor on rhel7-node3 * Resource action: r192.168.122.204 monitor on rhel7-node2 * Resource action: r192.168.122.204 monitor on rhel7-node1 * Resource action: r192.168.122.205 monitor on rhel7-node3 * Resource action: r192.168.122.205 monitor on rhel7-node2 * Resource action: r192.168.122.205 monitor on rhel7-node1 * Resource action: r192.168.122.206 monitor on rhel7-node3 * Resource action: r192.168.122.206 monitor on rhel7-node2 * Resource action: r192.168.122.206 monitor on rhel7-node1 * Resource action: lsb-dummy monitor on rhel7-node3 * Resource action: lsb-dummy monitor on rhel7-node2 * Resource action: lsb-dummy monitor on rhel7-node1 * Resource action: Fencing start on rhel7-node1 * Resource action: FencingPass start on rhel7-node2 * Resource action: rsc_rhel7-node1 start on rhel7-node1 * Resource action: rsc_rhel7-node2 start on rhel7-node2 * Resource action: rsc_rhel7-node3 start on rhel7-node3 * Resource action: migrator start on rhel7-node3 * Resource action: ping-1:0 start on rhel7-node1 * Resource action: ping-1:1 start on rhel7-node2 * Resource action: ping-1:2 start on rhel7-node3 * Pseudo action: Connectivity_running_0 * Resource action: Fencing monitor=120000 on rhel7-node1 * Resource action: rsc_rhel7-node1 monitor=5000 on rhel7-node1 * Resource action: rsc_rhel7-node2 monitor=5000 on rhel7-node2 * Resource 
action: rsc_rhel7-node3 monitor=5000 on rhel7-node3 * Resource action: migrator monitor=10000 on rhel7-node3 * Resource action: ping-1:0 monitor=60000 on rhel7-node1 * Resource action: ping-1:1 monitor=60000 on rhel7-node2 * Resource action: ping-1:2 monitor=60000 on rhel7-node3 Revised cluster status: Online: [ rhel7-node1 rhel7-node2 rhel7-node3 ] RemoteOFFLINE: [ remote1 ] Fencing (stonith:fence_xvm): Started rhel7-node1 FencingPass (stonith:fence_dummy): Started rhel7-node2 rsc_rhel7-node1 (ocf::heartbeat:IPaddr2): Started rhel7-node1 rsc_rhel7-node2 (ocf::heartbeat:IPaddr2): Started rhel7-node2 rsc_rhel7-node3 (ocf::heartbeat:IPaddr2): Started rhel7-node3 migrator (ocf::pacemaker:Dummy): Started rhel7-node3 Clone Set: Connectivity [ping-1] Started: [ rhel7-node1 rhel7-node2 rhel7-node3 ] Stopped: [ remote1 ] - Master/Slave Set: master-1 [stateful-1] + Clone Set: master-1 [stateful-1] (promotable) Stopped: [ remote1 rhel7-node1 rhel7-node2 rhel7-node3 ] Resource Group: group-1 r192.168.122.204 (ocf::heartbeat:IPaddr2): Stopped r192.168.122.205 (ocf::heartbeat:IPaddr2): Stopped r192.168.122.206 (ocf::heartbeat:IPaddr2): Stopped lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Stopped diff --git a/cts/scheduler/rsc-sets-master.summary b/cts/scheduler/rsc-sets-master.summary index 126edc7cdc..783f985c37 100644 --- a/cts/scheduler/rsc-sets-master.summary +++ b/cts/scheduler/rsc-sets-master.summary @@ -1,48 +1,48 @@ Current cluster status: Node node1: standby Online: [ node2 ] - Master/Slave Set: ms-rsc [rsc] + Clone Set: ms-rsc [rsc] (promotable) Masters: [ node1 ] Slaves: [ node2 ] rsc1 (ocf::pacemaker:Dummy): Started node1 rsc2 (ocf::pacemaker:Dummy): Started node1 rsc3 (ocf::pacemaker:Dummy): Started node1 Transition Summary: * Stop rsc:0 ( Master node1 ) due to node availability * Promote rsc:1 (Slave -> Master node2) * Move rsc1 ( node1 -> node2 ) * Move rsc2 ( node1 -> node2 ) * Move rsc3 ( node1 -> node2 ) Executing cluster transition: * Resource action: rsc1 stop on node1 * Resource action: rsc2 stop on node1 * Resource action: rsc3 stop on node1 * Pseudo action: ms-rsc_demote_0 * Resource action: rsc:0 demote on node1 * Pseudo action: ms-rsc_demoted_0 * Pseudo action: ms-rsc_stop_0 * Resource action: rsc:0 stop on node1 * Pseudo action: ms-rsc_stopped_0 * Pseudo action: all_stopped * Pseudo action: ms-rsc_promote_0 * Resource action: rsc:1 promote on node2 * Pseudo action: ms-rsc_promoted_0 * Resource action: rsc1 start on node2 * Resource action: rsc2 start on node2 * Resource action: rsc3 start on node2 Revised cluster status: Node node1: standby Online: [ node2 ] - Master/Slave Set: ms-rsc [rsc] + Clone Set: ms-rsc [rsc] (promotable) Masters: [ node2 ] Stopped: [ node1 ] rsc1 (ocf::pacemaker:Dummy): Started node2 rsc2 (ocf::pacemaker:Dummy): Started node2 rsc3 (ocf::pacemaker:Dummy): Started node2 diff --git a/cts/scheduler/stonith-0.summary b/cts/scheduler/stonith-0.summary index 28049dfded..e9653276a5 100644 --- a/cts/scheduler/stonith-0.summary +++ b/cts/scheduler/stonith-0.summary @@ -1,111 +1,111 @@ Current cluster status: Node c001n03 (f5e1d2de-73da-432a-9d5c-37472253c2ee): UNCLEAN (online) Node c001n05 (52a5ea5e-86ee-442c-b251-0bc9825c517e): UNCLEAN (online) Online: [ c001n02 c001n04 c001n06 c001n07 c001n08 ] DcIPaddr (ocf::heartbeat:IPaddr): Stopped Resource Group: group-1 ocf_192.168.100.181 (ocf::heartbeat:IPaddr): Started [ c001n03 c001n05 ] heartbeat_192.168.100.182 (ocf::heartbeat:IPaddr): Started c001n03 ocf_192.168.100.183 (ocf::heartbeat:IPaddr): FAILED 
[ c001n03 c001n05 ] lsb_dummy (lsb:/usr/lib/heartbeat/cts/LSBDummy): Started c001n04 rsc_c001n03 (ocf::heartbeat:IPaddr): Started c001n06 rsc_c001n02 (ocf::heartbeat:IPaddr): Started c001n02 rsc_c001n04 (ocf::heartbeat:IPaddr): Started c001n04 rsc_c001n05 (ocf::heartbeat:IPaddr): Started c001n05 rsc_c001n06 (ocf::heartbeat:IPaddr): Started c001n06 rsc_c001n07 (ocf::heartbeat:IPaddr): Started c001n03 rsc_c001n08 (ocf::heartbeat:IPaddr): Started c001n08 Clone Set: DoFencing [child_DoFencing] Started: [ c001n02 c001n04 c001n06 c001n07 c001n08 ] Stopped: [ c001n03 c001n05 ] - Master/Slave Set: master_rsc_1 [ocf_msdummy] (unique) + Clone Set: master_rsc_1 [ocf_msdummy] (promotable) (unique) ocf_msdummy:0 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Master c001n02 ocf_msdummy:1 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n02 ocf_msdummy:2 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n07 ocf_msdummy:3 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n07 ocf_msdummy:4 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n08 ocf_msdummy:5 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n08 ocf_msdummy:6 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Stopped ocf_msdummy:7 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Stopped ocf_msdummy:8 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Stopped ocf_msdummy:9 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Stopped ocf_msdummy:10 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n04 ocf_msdummy:11 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n04 ocf_msdummy:12 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n06 ocf_msdummy:13 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n06 Transition Summary: * Fence (reboot) c001n05 'ocf_192.168.100.183 failed there' * Fence (reboot) c001n03 'ocf_192.168.100.183 failed there' * Move ocf_192.168.100.181 ( c001n03 -> c001n02 ) * Move heartbeat_192.168.100.182 ( c001n03 -> c001n02 ) * Recover ocf_192.168.100.183 ( c001n03 -> c001n02 ) * Move rsc_c001n05 ( c001n05 -> c001n07 ) * Move rsc_c001n07 ( c001n03 -> c001n07 ) Executing cluster transition: * Resource action: child_DoFencing:4 monitor=20000 on c001n08 * Fencing c001n05 (reboot) * Fencing c001n03 (reboot) * Pseudo action: group-1_stop_0 * Pseudo action: ocf_192.168.100.183_stop_0 * Pseudo action: ocf_192.168.100.183_stop_0 * Pseudo action: rsc_c001n05_stop_0 * Pseudo action: rsc_c001n07_stop_0 * Pseudo action: stonith_complete * Pseudo action: heartbeat_192.168.100.182_stop_0 * Resource action: rsc_c001n05 start on c001n07 * Resource action: rsc_c001n07 start on c001n07 * Pseudo action: ocf_192.168.100.181_stop_0 * Pseudo action: ocf_192.168.100.181_stop_0 * Resource action: rsc_c001n05 monitor=5000 on c001n07 * Resource action: rsc_c001n07 monitor=5000 on c001n07 * Pseudo action: all_stopped * Pseudo action: group-1_stopped_0 * Pseudo action: group-1_start_0 * Resource action: ocf_192.168.100.181 start on c001n02 * Resource action: heartbeat_192.168.100.182 start on c001n02 * Resource action: ocf_192.168.100.183 start on c001n02 * Pseudo action: group-1_running_0 * Resource action: ocf_192.168.100.181 monitor=5000 on c001n02 * Resource action: heartbeat_192.168.100.182 monitor=5000 on c001n02 * Resource action: ocf_192.168.100.183 monitor=5000 on c001n02 Revised cluster status: Online: [ c001n02 c001n04 c001n06 c001n07 c001n08 ] OFFLINE: [ c001n03 c001n05 ] DcIPaddr (ocf::heartbeat:IPaddr): Stopped Resource 
Group: group-1 ocf_192.168.100.181 (ocf::heartbeat:IPaddr): Started c001n02 heartbeat_192.168.100.182 (ocf::heartbeat:IPaddr): Started c001n02 ocf_192.168.100.183 (ocf::heartbeat:IPaddr): Started c001n02 lsb_dummy (lsb:/usr/lib/heartbeat/cts/LSBDummy): Started c001n04 rsc_c001n03 (ocf::heartbeat:IPaddr): Started c001n06 rsc_c001n02 (ocf::heartbeat:IPaddr): Started c001n02 rsc_c001n04 (ocf::heartbeat:IPaddr): Started c001n04 rsc_c001n05 (ocf::heartbeat:IPaddr): Started c001n07 rsc_c001n06 (ocf::heartbeat:IPaddr): Started c001n06 rsc_c001n07 (ocf::heartbeat:IPaddr): Started c001n07 rsc_c001n08 (ocf::heartbeat:IPaddr): Started c001n08 Clone Set: DoFencing [child_DoFencing] Started: [ c001n02 c001n04 c001n06 c001n07 c001n08 ] Stopped: [ c001n03 c001n05 ] - Master/Slave Set: master_rsc_1 [ocf_msdummy] (unique) + Clone Set: master_rsc_1 [ocf_msdummy] (promotable) (unique) ocf_msdummy:0 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Master c001n02 ocf_msdummy:1 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n02 ocf_msdummy:2 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n07 ocf_msdummy:3 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n07 ocf_msdummy:4 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n08 ocf_msdummy:5 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n08 ocf_msdummy:6 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Stopped ocf_msdummy:7 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Stopped ocf_msdummy:8 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Stopped ocf_msdummy:9 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Stopped ocf_msdummy:10 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n04 ocf_msdummy:11 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n04 ocf_msdummy:12 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n06 ocf_msdummy:13 (ocf::heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Slave c001n06 diff --git a/cts/scheduler/stonith-1.summary b/cts/scheduler/stonith-1.summary index 35d006986d..291ea5cee1 100644 --- a/cts/scheduler/stonith-1.summary +++ b/cts/scheduler/stonith-1.summary @@ -1,113 +1,113 @@ Current cluster status: Node sles-3 (2298606a-6a8c-499a-9d25-76242f7006ec): UNCLEAN (offline) Online: [ sles-1 sles-2 sles-4 ] Resource Group: group-1 r192.168.100.181 (ocf::heartbeat:IPaddr): Started sles-1 r192.168.100.182 (ocf::heartbeat:IPaddr): Started sles-1 r192.168.100.183 (ocf::heartbeat:IPaddr): Stopped lsb_dummy (lsb:/usr/lib64/heartbeat/cts/LSBDummy): Started sles-2 migrator (ocf::heartbeat:Dummy): Started sles-3 (UNCLEAN) rsc_sles-1 (ocf::heartbeat:IPaddr): Started sles-1 rsc_sles-2 (ocf::heartbeat:IPaddr): Started sles-2 rsc_sles-3 (ocf::heartbeat:IPaddr): Started sles-3 (UNCLEAN) rsc_sles-4 (ocf::heartbeat:IPaddr): Started sles-4 Clone Set: DoFencing [child_DoFencing] child_DoFencing (stonith:external/vmware): Started sles-3 (UNCLEAN) Started: [ sles-1 sles-2 ] Stopped: [ sles-4 ] - Master/Slave Set: master_rsc_1 [ocf_msdummy] (unique) + Clone Set: master_rsc_1 [ocf_msdummy] (promotable) (unique) ocf_msdummy:0 (ocf::heartbeat:Stateful): Stopped ocf_msdummy:1 (ocf::heartbeat:Stateful): Stopped ocf_msdummy:2 (ocf::heartbeat:Stateful): Slave sles-3 ( UNCLEAN ) ocf_msdummy:3 (ocf::heartbeat:Stateful): Stopped ocf_msdummy:4 (ocf::heartbeat:Stateful): Stopped ocf_msdummy:5 (ocf::heartbeat:Stateful): Slave sles-3 ( UNCLEAN ) ocf_msdummy:6 (ocf::heartbeat:Stateful): Stopped ocf_msdummy:7 (ocf::heartbeat:Stateful): Stopped Transition 
Summary: * Fence (reboot) sles-3 'peer is no longer part of the cluster' * Start r192.168.100.183 (sles-1) * Move migrator ( sles-3 -> sles-4 ) * Move rsc_sles-3 ( sles-3 -> sles-4 ) * Move child_DoFencing:2 ( sles-3 -> sles-4 ) * Start ocf_msdummy:0 (sles-4) * Start ocf_msdummy:1 (sles-1) * Move ocf_msdummy:2 ( sles-3 -> sles-2 Slave ) * Start ocf_msdummy:3 (sles-4) * Start ocf_msdummy:4 (sles-1) * Move ocf_msdummy:5 ( sles-3 -> sles-2 Slave ) Executing cluster transition: * Pseudo action: group-1_start_0 * Resource action: r192.168.100.182 monitor=5000 on sles-1 * Resource action: lsb_dummy monitor=5000 on sles-2 * Resource action: rsc_sles-2 monitor=5000 on sles-2 * Resource action: rsc_sles-4 monitor=5000 on sles-4 * Pseudo action: DoFencing_stop_0 * Fencing sles-3 (reboot) * Pseudo action: migrator_stop_0 * Pseudo action: rsc_sles-3_stop_0 * Pseudo action: child_DoFencing:2_stop_0 * Pseudo action: DoFencing_stopped_0 * Pseudo action: DoFencing_start_0 * Pseudo action: master_rsc_1_stop_0 * Pseudo action: stonith_complete * Resource action: r192.168.100.183 start on sles-1 * Resource action: migrator start on sles-4 * Resource action: rsc_sles-3 start on sles-4 * Resource action: child_DoFencing:2 start on sles-4 * Pseudo action: DoFencing_running_0 * Pseudo action: ocf_msdummy:2_stop_0 * Pseudo action: ocf_msdummy:5_stop_0 * Pseudo action: master_rsc_1_stopped_0 * Pseudo action: master_rsc_1_start_0 * Pseudo action: all_stopped * Pseudo action: group-1_running_0 * Resource action: r192.168.100.183 monitor=5000 on sles-1 * Resource action: migrator monitor=10000 on sles-4 * Resource action: rsc_sles-3 monitor=5000 on sles-4 * Resource action: child_DoFencing:2 monitor=60000 on sles-4 * Resource action: ocf_msdummy:0 start on sles-4 * Resource action: ocf_msdummy:1 start on sles-1 * Resource action: ocf_msdummy:2 start on sles-2 * Resource action: ocf_msdummy:3 start on sles-4 * Resource action: ocf_msdummy:4 start on sles-1 * Resource action: ocf_msdummy:5 start on sles-2 * Pseudo action: master_rsc_1_running_0 * Resource action: ocf_msdummy:0 monitor=5000 on sles-4 * Resource action: ocf_msdummy:1 monitor=5000 on sles-1 * Resource action: ocf_msdummy:2 monitor=5000 on sles-2 * Resource action: ocf_msdummy:3 monitor=5000 on sles-4 * Resource action: ocf_msdummy:4 monitor=5000 on sles-1 * Resource action: ocf_msdummy:5 monitor=5000 on sles-2 Revised cluster status: Online: [ sles-1 sles-2 sles-4 ] OFFLINE: [ sles-3 ] Resource Group: group-1 r192.168.100.181 (ocf::heartbeat:IPaddr): Started sles-1 r192.168.100.182 (ocf::heartbeat:IPaddr): Started sles-1 r192.168.100.183 (ocf::heartbeat:IPaddr): Started sles-1 lsb_dummy (lsb:/usr/lib64/heartbeat/cts/LSBDummy): Started sles-2 migrator (ocf::heartbeat:Dummy): Started sles-4 rsc_sles-1 (ocf::heartbeat:IPaddr): Started sles-1 rsc_sles-2 (ocf::heartbeat:IPaddr): Started sles-2 rsc_sles-3 (ocf::heartbeat:IPaddr): Started sles-4 rsc_sles-4 (ocf::heartbeat:IPaddr): Started sles-4 Clone Set: DoFencing [child_DoFencing] Started: [ sles-1 sles-2 sles-4 ] Stopped: [ sles-3 ] - Master/Slave Set: master_rsc_1 [ocf_msdummy] (unique) + Clone Set: master_rsc_1 [ocf_msdummy] (promotable) (unique) ocf_msdummy:0 (ocf::heartbeat:Stateful): Slave sles-4 ocf_msdummy:1 (ocf::heartbeat:Stateful): Slave sles-1 ocf_msdummy:2 (ocf::heartbeat:Stateful): Slave sles-2 ocf_msdummy:3 (ocf::heartbeat:Stateful): Slave sles-4 ocf_msdummy:4 (ocf::heartbeat:Stateful): Slave sles-1 ocf_msdummy:5 (ocf::heartbeat:Stateful): Slave sles-2 ocf_msdummy:6 (ocf::heartbeat:Stateful): 
Stopped ocf_msdummy:7 (ocf::heartbeat:Stateful): Stopped diff --git a/cts/scheduler/stonith-2.summary b/cts/scheduler/stonith-2.summary index e495405e8c..78efb03e7b 100644 --- a/cts/scheduler/stonith-2.summary +++ b/cts/scheduler/stonith-2.summary @@ -1,78 +1,78 @@ Current cluster status: Node sles-5 (434915c6-7b40-4d30-95ff-dc0ff3dc005a): UNCLEAN (offline) Online: [ sles-1 sles-2 sles-3 sles-4 sles-6 ] Resource Group: group-1 r192.168.100.181 (ocf::heartbeat:IPaddr): Started sles-1 r192.168.100.182 (ocf::heartbeat:IPaddr): Started sles-1 r192.168.100.183 (ocf::heartbeat:IPaddr): Started sles-1 lsb_dummy (lsb:/usr/share/heartbeat/cts/LSBDummy): Started sles-2 migrator (ocf::heartbeat:Dummy): Started sles-3 rsc_sles-1 (ocf::heartbeat:IPaddr): Started sles-1 rsc_sles-2 (ocf::heartbeat:IPaddr): Started sles-2 rsc_sles-3 (ocf::heartbeat:IPaddr): Started sles-3 rsc_sles-4 (ocf::heartbeat:IPaddr): Started sles-4 rsc_sles-5 (ocf::heartbeat:IPaddr): Stopped rsc_sles-6 (ocf::heartbeat:IPaddr): Started sles-6 Clone Set: DoFencing [child_DoFencing] Started: [ sles-1 sles-2 sles-3 sles-4 sles-6 ] Stopped: [ sles-5 ] - Master/Slave Set: master_rsc_1 [ocf_msdummy] (unique) + Clone Set: master_rsc_1 [ocf_msdummy] (promotable) (unique) ocf_msdummy:0 (ocf::heartbeat:Stateful): Slave sles-3 ocf_msdummy:1 (ocf::heartbeat:Stateful): Slave sles-4 ocf_msdummy:2 (ocf::heartbeat:Stateful): Slave sles-4 ocf_msdummy:3 (ocf::heartbeat:Stateful): Slave sles-1 ocf_msdummy:4 (ocf::heartbeat:Stateful): Slave sles-2 ocf_msdummy:5 (ocf::heartbeat:Stateful): Slave sles-1 ocf_msdummy:6 (ocf::heartbeat:Stateful): Stopped ocf_msdummy:7 (ocf::heartbeat:Stateful): Stopped ocf_msdummy:8 (ocf::heartbeat:Stateful): Slave sles-6 ocf_msdummy:9 (ocf::heartbeat:Stateful): Slave sles-6 ocf_msdummy:10 (ocf::heartbeat:Stateful): Slave sles-2 ocf_msdummy:11 (ocf::heartbeat:Stateful): Slave sles-3 Transition Summary: * Fence (reboot) sles-5 'peer is no longer part of the cluster' * Start rsc_sles-5 (sles-6) Executing cluster transition: * Fencing sles-5 (reboot) * Pseudo action: stonith_complete * Pseudo action: all_stopped * Resource action: rsc_sles-5 start on sles-6 * Resource action: rsc_sles-5 monitor=5000 on sles-6 Revised cluster status: Online: [ sles-1 sles-2 sles-3 sles-4 sles-6 ] OFFLINE: [ sles-5 ] Resource Group: group-1 r192.168.100.181 (ocf::heartbeat:IPaddr): Started sles-1 r192.168.100.182 (ocf::heartbeat:IPaddr): Started sles-1 r192.168.100.183 (ocf::heartbeat:IPaddr): Started sles-1 lsb_dummy (lsb:/usr/share/heartbeat/cts/LSBDummy): Started sles-2 migrator (ocf::heartbeat:Dummy): Started sles-3 rsc_sles-1 (ocf::heartbeat:IPaddr): Started sles-1 rsc_sles-2 (ocf::heartbeat:IPaddr): Started sles-2 rsc_sles-3 (ocf::heartbeat:IPaddr): Started sles-3 rsc_sles-4 (ocf::heartbeat:IPaddr): Started sles-4 rsc_sles-5 (ocf::heartbeat:IPaddr): Started sles-6 rsc_sles-6 (ocf::heartbeat:IPaddr): Started sles-6 Clone Set: DoFencing [child_DoFencing] Started: [ sles-1 sles-2 sles-3 sles-4 sles-6 ] Stopped: [ sles-5 ] - Master/Slave Set: master_rsc_1 [ocf_msdummy] (unique) + Clone Set: master_rsc_1 [ocf_msdummy] (promotable) (unique) ocf_msdummy:0 (ocf::heartbeat:Stateful): Slave sles-3 ocf_msdummy:1 (ocf::heartbeat:Stateful): Slave sles-4 ocf_msdummy:2 (ocf::heartbeat:Stateful): Slave sles-4 ocf_msdummy:3 (ocf::heartbeat:Stateful): Slave sles-1 ocf_msdummy:4 (ocf::heartbeat:Stateful): Slave sles-2 ocf_msdummy:5 (ocf::heartbeat:Stateful): Slave sles-1 ocf_msdummy:6 (ocf::heartbeat:Stateful): Stopped ocf_msdummy:7 (ocf::heartbeat:Stateful): 
Stopped ocf_msdummy:8 (ocf::heartbeat:Stateful): Slave sles-6 ocf_msdummy:9 (ocf::heartbeat:Stateful): Slave sles-6 ocf_msdummy:10 (ocf::heartbeat:Stateful): Slave sles-2 ocf_msdummy:11 (ocf::heartbeat:Stateful): Slave sles-3 diff --git a/cts/scheduler/target-1.summary b/cts/scheduler/target-1.summary index 6044338f35..399270c686 100644 --- a/cts/scheduler/target-1.summary +++ b/cts/scheduler/target-1.summary @@ -1,41 +1,41 @@ 1 of 5 resources DISABLED and 0 BLOCKED from being started due to failures Current cluster status: Online: [ c001n01 c001n02 c001n03 c001n08 ] DcIPaddr (ocf::heartbeat:IPaddr): Started c001n02 rsc_c001n08 (ocf::heartbeat:IPaddr): Started c001n08 ( disabled ) rsc_c001n02 (ocf::heartbeat:IPaddr): Started c001n02 - Master/Slave Set: promoteme [rsc_c001n03] + Clone Set: promoteme [rsc_c001n03] (promotable) Slaves: [ c001n03 ] rsc_c001n01 (ocf::heartbeat:IPaddr): Started c001n01 Transition Summary: * Stop rsc_c001n08 ( c001n08 ) due to node availability Executing cluster transition: * Resource action: DcIPaddr monitor on c001n08 * Resource action: DcIPaddr monitor on c001n03 * Resource action: DcIPaddr monitor on c001n01 * Resource action: rsc_c001n08 stop on c001n08 * Resource action: rsc_c001n08 monitor on c001n03 * Resource action: rsc_c001n08 monitor on c001n02 * Resource action: rsc_c001n08 monitor on c001n01 * Resource action: rsc_c001n02 monitor on c001n08 * Resource action: rsc_c001n02 monitor on c001n03 * Resource action: rsc_c001n02 monitor on c001n01 * Resource action: rsc_c001n01 monitor on c001n08 * Resource action: rsc_c001n01 monitor on c001n03 * Resource action: rsc_c001n01 monitor on c001n02 * Pseudo action: all_stopped Revised cluster status: Online: [ c001n01 c001n02 c001n03 c001n08 ] DcIPaddr (ocf::heartbeat:IPaddr): Started c001n02 rsc_c001n08 (ocf::heartbeat:IPaddr): Stopped ( disabled ) rsc_c001n02 (ocf::heartbeat:IPaddr): Started c001n02 - Master/Slave Set: promoteme [rsc_c001n03] + Clone Set: promoteme [rsc_c001n03] (promotable) Slaves: [ c001n03 ] rsc_c001n01 (ocf::heartbeat:IPaddr): Started c001n01 diff --git a/cts/scheduler/ticket-master-1.summary b/cts/scheduler/ticket-master-1.summary index 3d16e58ce1..953f5a4d1b 100644 --- a/cts/scheduler/ticket-master-1.summary +++ b/cts/scheduler/ticket-master-1.summary @@ -1,21 +1,21 @@ Current cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 - Master/Slave Set: ms1 [rsc1] + Clone Set: ms1 [rsc1] (promotable) Stopped: [ node1 node2 ] Transition Summary: Executing cluster transition: * Resource action: rsc1:0 monitor on node2 * Resource action: rsc1:0 monitor on node1 Revised cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 - Master/Slave Set: ms1 [rsc1] + Clone Set: ms1 [rsc1] (promotable) Stopped: [ node1 node2 ] diff --git a/cts/scheduler/ticket-master-10.summary b/cts/scheduler/ticket-master-10.summary index 58148d8952..d5ec66856b 100644 --- a/cts/scheduler/ticket-master-10.summary +++ b/cts/scheduler/ticket-master-10.summary @@ -1,27 +1,27 @@ Current cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 - Master/Slave Set: ms1 [rsc1] + Clone Set: ms1 [rsc1] (promotable) Stopped: [ node1 node2 ] Transition Summary: * Start rsc1:0 (node2) * Start rsc1:1 (node1) Executing cluster transition: * Resource action: rsc1:0 monitor on node2 * Resource action: rsc1:1 monitor on node1 * Pseudo action: ms1_start_0 * Resource action: rsc1:0 start on node2 * Resource action: rsc1:1 start on node1 * Pseudo action: 
ms1_running_0 Revised cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 - Master/Slave Set: ms1 [rsc1] + Clone Set: ms1 [rsc1] (promotable) Slaves: [ node1 node2 ] diff --git a/cts/scheduler/ticket-master-11.summary b/cts/scheduler/ticket-master-11.summary index b488118eaf..980cf993d0 100644 --- a/cts/scheduler/ticket-master-11.summary +++ b/cts/scheduler/ticket-master-11.summary @@ -1,24 +1,24 @@ Current cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 - Master/Slave Set: ms1 [rsc1] + Clone Set: ms1 [rsc1] (promotable) Slaves: [ node1 node2 ] Transition Summary: * Promote rsc1:0 (Slave -> Master node1) Executing cluster transition: * Pseudo action: ms1_promote_0 * Resource action: rsc1:1 promote on node1 * Pseudo action: ms1_promoted_0 Revised cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 - Master/Slave Set: ms1 [rsc1] + Clone Set: ms1 [rsc1] (promotable) Masters: [ node1 ] Slaves: [ node2 ] diff --git a/cts/scheduler/ticket-master-12.summary b/cts/scheduler/ticket-master-12.summary index b7a3115314..39616a8038 100644 --- a/cts/scheduler/ticket-master-12.summary +++ b/cts/scheduler/ticket-master-12.summary @@ -1,21 +1,21 @@ Current cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 - Master/Slave Set: ms1 [rsc1] + Clone Set: ms1 [rsc1] (promotable) Masters: [ node1 ] Slaves: [ node2 ] Transition Summary: Executing cluster transition: Revised cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 - Master/Slave Set: ms1 [rsc1] + Clone Set: ms1 [rsc1] (promotable) Masters: [ node1 ] Slaves: [ node2 ] diff --git a/cts/scheduler/ticket-master-13.summary b/cts/scheduler/ticket-master-13.summary index 5f5d0d1d0e..9cb0d4542a 100644 --- a/cts/scheduler/ticket-master-13.summary +++ b/cts/scheduler/ticket-master-13.summary @@ -1,19 +1,19 @@ Current cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 - Master/Slave Set: ms1 [rsc1] + Clone Set: ms1 [rsc1] (promotable) Stopped: [ node1 node2 ] Transition Summary: Executing cluster transition: Revised cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 - Master/Slave Set: ms1 [rsc1] + Clone Set: ms1 [rsc1] (promotable) Stopped: [ node1 node2 ] diff --git a/cts/scheduler/ticket-master-14.summary b/cts/scheduler/ticket-master-14.summary index fa14935670..a6fcf66f36 100644 --- a/cts/scheduler/ticket-master-14.summary +++ b/cts/scheduler/ticket-master-14.summary @@ -1,30 +1,30 @@ Current cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 - Master/Slave Set: ms1 [rsc1] + Clone Set: ms1 [rsc1] (promotable) Masters: [ node1 ] Slaves: [ node2 ] Transition Summary: * Stop rsc1:0 ( Master node1 ) due to node availability * Stop rsc1:1 ( Slave node2 ) due to node availability Executing cluster transition: * Pseudo action: ms1_demote_0 * Resource action: rsc1:1 demote on node1 * Pseudo action: ms1_demoted_0 * Pseudo action: ms1_stop_0 * Resource action: rsc1:1 stop on node1 * Resource action: rsc1:0 stop on node2 * Pseudo action: ms1_stopped_0 * Pseudo action: all_stopped Revised cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 - Master/Slave Set: ms1 [rsc1] + Clone Set: ms1 [rsc1] (promotable) Stopped: [ node1 node2 ] diff --git a/cts/scheduler/ticket-master-15.summary b/cts/scheduler/ticket-master-15.summary index fa14935670..a6fcf66f36 100644 --- 
a/cts/scheduler/ticket-master-15.summary +++ b/cts/scheduler/ticket-master-15.summary @@ -1,30 +1,30 @@ Current cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 - Master/Slave Set: ms1 [rsc1] + Clone Set: ms1 [rsc1] (promotable) Masters: [ node1 ] Slaves: [ node2 ] Transition Summary: * Stop rsc1:0 ( Master node1 ) due to node availability * Stop rsc1:1 ( Slave node2 ) due to node availability Executing cluster transition: * Pseudo action: ms1_demote_0 * Resource action: rsc1:1 demote on node1 * Pseudo action: ms1_demoted_0 * Pseudo action: ms1_stop_0 * Resource action: rsc1:1 stop on node1 * Resource action: rsc1:0 stop on node2 * Pseudo action: ms1_stopped_0 * Pseudo action: all_stopped Revised cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 - Master/Slave Set: ms1 [rsc1] + Clone Set: ms1 [rsc1] (promotable) Stopped: [ node1 node2 ] diff --git a/cts/scheduler/ticket-master-16.summary b/cts/scheduler/ticket-master-16.summary index 72c690514f..dc5bc26e49 100644 --- a/cts/scheduler/ticket-master-16.summary +++ b/cts/scheduler/ticket-master-16.summary @@ -1,19 +1,19 @@ Current cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 - Master/Slave Set: ms1 [rsc1] + Clone Set: ms1 [rsc1] (promotable) Slaves: [ node1 node2 ] Transition Summary: Executing cluster transition: Revised cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 - Master/Slave Set: ms1 [rsc1] + Clone Set: ms1 [rsc1] (promotable) Slaves: [ node1 node2 ] diff --git a/cts/scheduler/ticket-master-17.summary b/cts/scheduler/ticket-master-17.summary index ec2660a698..8dbef130d2 100644 --- a/cts/scheduler/ticket-master-17.summary +++ b/cts/scheduler/ticket-master-17.summary @@ -1,24 +1,24 @@ Current cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 - Master/Slave Set: ms1 [rsc1] + Clone Set: ms1 [rsc1] (promotable) Masters: [ node1 ] Slaves: [ node2 ] Transition Summary: * Demote rsc1:0 (Master -> Slave node1) Executing cluster transition: * Pseudo action: ms1_demote_0 * Resource action: rsc1:1 demote on node1 * Pseudo action: ms1_demoted_0 Revised cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 - Master/Slave Set: ms1 [rsc1] + Clone Set: ms1 [rsc1] (promotable) Slaves: [ node1 node2 ] diff --git a/cts/scheduler/ticket-master-18.summary b/cts/scheduler/ticket-master-18.summary index ec2660a698..8dbef130d2 100644 --- a/cts/scheduler/ticket-master-18.summary +++ b/cts/scheduler/ticket-master-18.summary @@ -1,24 +1,24 @@ Current cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 - Master/Slave Set: ms1 [rsc1] + Clone Set: ms1 [rsc1] (promotable) Masters: [ node1 ] Slaves: [ node2 ] Transition Summary: * Demote rsc1:0 (Master -> Slave node1) Executing cluster transition: * Pseudo action: ms1_demote_0 * Resource action: rsc1:1 demote on node1 * Pseudo action: ms1_demoted_0 Revised cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 - Master/Slave Set: ms1 [rsc1] + Clone Set: ms1 [rsc1] (promotable) Slaves: [ node1 node2 ] diff --git a/cts/scheduler/ticket-master-19.summary b/cts/scheduler/ticket-master-19.summary index 72c690514f..dc5bc26e49 100644 --- a/cts/scheduler/ticket-master-19.summary +++ b/cts/scheduler/ticket-master-19.summary @@ -1,19 +1,19 @@ Current cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 - Master/Slave Set: ms1 [rsc1] + Clone Set: ms1 
[rsc1] (promotable) Slaves: [ node1 node2 ] Transition Summary: Executing cluster transition: Revised cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 - Master/Slave Set: ms1 [rsc1] + Clone Set: ms1 [rsc1] (promotable) Slaves: [ node1 node2 ] diff --git a/cts/scheduler/ticket-master-2.summary b/cts/scheduler/ticket-master-2.summary index 6f5be53032..b1667b3b65 100644 --- a/cts/scheduler/ticket-master-2.summary +++ b/cts/scheduler/ticket-master-2.summary @@ -1,29 +1,29 @@ Current cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 - Master/Slave Set: ms1 [rsc1] + Clone Set: ms1 [rsc1] (promotable) Stopped: [ node1 node2 ] Transition Summary: * Start rsc1:0 (node2) * Promote rsc1:1 (Stopped -> Master node1) Executing cluster transition: * Pseudo action: ms1_start_0 * Resource action: rsc1:0 start on node2 * Resource action: rsc1:1 start on node1 * Pseudo action: ms1_running_0 * Pseudo action: ms1_promote_0 * Resource action: rsc1:1 promote on node1 * Pseudo action: ms1_promoted_0 Revised cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 - Master/Slave Set: ms1 [rsc1] + Clone Set: ms1 [rsc1] (promotable) Masters: [ node1 ] Slaves: [ node2 ] diff --git a/cts/scheduler/ticket-master-20.summary b/cts/scheduler/ticket-master-20.summary index ec2660a698..8dbef130d2 100644 --- a/cts/scheduler/ticket-master-20.summary +++ b/cts/scheduler/ticket-master-20.summary @@ -1,24 +1,24 @@ Current cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 - Master/Slave Set: ms1 [rsc1] + Clone Set: ms1 [rsc1] (promotable) Masters: [ node1 ] Slaves: [ node2 ] Transition Summary: * Demote rsc1:0 (Master -> Slave node1) Executing cluster transition: * Pseudo action: ms1_demote_0 * Resource action: rsc1:1 demote on node1 * Pseudo action: ms1_demoted_0 Revised cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 - Master/Slave Set: ms1 [rsc1] + Clone Set: ms1 [rsc1] (promotable) Slaves: [ node1 node2 ] diff --git a/cts/scheduler/ticket-master-21.summary b/cts/scheduler/ticket-master-21.summary index 88f62fd64f..ac3790947e 100644 --- a/cts/scheduler/ticket-master-21.summary +++ b/cts/scheduler/ticket-master-21.summary @@ -1,36 +1,36 @@ Current cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 - Master/Slave Set: ms1 [rsc1] + Clone Set: ms1 [rsc1] (promotable) Masters: [ node1 ] Slaves: [ node2 ] Transition Summary: * Fence (reboot) node1 'deadman ticket was lost' * Move rsc_stonith ( node1 -> node2 ) * Stop rsc1:0 ( Master node1 ) due to node availability Executing cluster transition: * Pseudo action: rsc_stonith_stop_0 * Pseudo action: ms1_demote_0 * Fencing node1 (reboot) * Resource action: rsc_stonith start on node2 * Pseudo action: rsc1:1_demote_0 * Pseudo action: ms1_demoted_0 * Pseudo action: ms1_stop_0 * Pseudo action: stonith_complete * Pseudo action: rsc1:1_stop_0 * Pseudo action: ms1_stopped_0 * Pseudo action: all_stopped Revised cluster status: Online: [ node2 ] OFFLINE: [ node1 ] rsc_stonith (stonith:null): Started node2 - Master/Slave Set: ms1 [rsc1] + Clone Set: ms1 [rsc1] (promotable) Slaves: [ node2 ] Stopped: [ node1 ] diff --git a/cts/scheduler/ticket-master-22.summary b/cts/scheduler/ticket-master-22.summary index 72c690514f..dc5bc26e49 100644 --- a/cts/scheduler/ticket-master-22.summary +++ b/cts/scheduler/ticket-master-22.summary @@ -1,19 +1,19 @@ Current cluster status: Online: [ node1 node2 ] rsc_stonith 
(stonith:null): Started node1 - Master/Slave Set: ms1 [rsc1] + Clone Set: ms1 [rsc1] (promotable) Slaves: [ node1 node2 ] Transition Summary: Executing cluster transition: Revised cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 - Master/Slave Set: ms1 [rsc1] + Clone Set: ms1 [rsc1] (promotable) Slaves: [ node1 node2 ] diff --git a/cts/scheduler/ticket-master-23.summary b/cts/scheduler/ticket-master-23.summary index ec2660a698..8dbef130d2 100644 --- a/cts/scheduler/ticket-master-23.summary +++ b/cts/scheduler/ticket-master-23.summary @@ -1,24 +1,24 @@ Current cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 - Master/Slave Set: ms1 [rsc1] + Clone Set: ms1 [rsc1] (promotable) Masters: [ node1 ] Slaves: [ node2 ] Transition Summary: * Demote rsc1:0 (Master -> Slave node1) Executing cluster transition: * Pseudo action: ms1_demote_0 * Resource action: rsc1:1 demote on node1 * Pseudo action: ms1_demoted_0 Revised cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 - Master/Slave Set: ms1 [rsc1] + Clone Set: ms1 [rsc1] (promotable) Slaves: [ node1 node2 ] diff --git a/cts/scheduler/ticket-master-24.summary b/cts/scheduler/ticket-master-24.summary index b7a3115314..39616a8038 100644 --- a/cts/scheduler/ticket-master-24.summary +++ b/cts/scheduler/ticket-master-24.summary @@ -1,21 +1,21 @@ Current cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 - Master/Slave Set: ms1 [rsc1] + Clone Set: ms1 [rsc1] (promotable) Masters: [ node1 ] Slaves: [ node2 ] Transition Summary: Executing cluster transition: Revised cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 - Master/Slave Set: ms1 [rsc1] + Clone Set: ms1 [rsc1] (promotable) Masters: [ node1 ] Slaves: [ node2 ] diff --git a/cts/scheduler/ticket-master-3.summary b/cts/scheduler/ticket-master-3.summary index fa14935670..a6fcf66f36 100644 --- a/cts/scheduler/ticket-master-3.summary +++ b/cts/scheduler/ticket-master-3.summary @@ -1,30 +1,30 @@ Current cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 - Master/Slave Set: ms1 [rsc1] + Clone Set: ms1 [rsc1] (promotable) Masters: [ node1 ] Slaves: [ node2 ] Transition Summary: * Stop rsc1:0 ( Master node1 ) due to node availability * Stop rsc1:1 ( Slave node2 ) due to node availability Executing cluster transition: * Pseudo action: ms1_demote_0 * Resource action: rsc1:1 demote on node1 * Pseudo action: ms1_demoted_0 * Pseudo action: ms1_stop_0 * Resource action: rsc1:1 stop on node1 * Resource action: rsc1:0 stop on node2 * Pseudo action: ms1_stopped_0 * Pseudo action: all_stopped Revised cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 - Master/Slave Set: ms1 [rsc1] + Clone Set: ms1 [rsc1] (promotable) Stopped: [ node1 node2 ] diff --git a/cts/scheduler/ticket-master-4.summary b/cts/scheduler/ticket-master-4.summary index 58148d8952..d5ec66856b 100644 --- a/cts/scheduler/ticket-master-4.summary +++ b/cts/scheduler/ticket-master-4.summary @@ -1,27 +1,27 @@ Current cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 - Master/Slave Set: ms1 [rsc1] + Clone Set: ms1 [rsc1] (promotable) Stopped: [ node1 node2 ] Transition Summary: * Start rsc1:0 (node2) * Start rsc1:1 (node1) Executing cluster transition: * Resource action: rsc1:0 monitor on node2 * Resource action: rsc1:1 monitor on node1 * Pseudo action: ms1_start_0 * Resource action: rsc1:0 start on node2 * 
Resource action: rsc1:1 start on node1 * Pseudo action: ms1_running_0 Revised cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 - Master/Slave Set: ms1 [rsc1] + Clone Set: ms1 [rsc1] (promotable) Slaves: [ node1 node2 ] diff --git a/cts/scheduler/ticket-master-5.summary b/cts/scheduler/ticket-master-5.summary index b488118eaf..980cf993d0 100644 --- a/cts/scheduler/ticket-master-5.summary +++ b/cts/scheduler/ticket-master-5.summary @@ -1,24 +1,24 @@ Current cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 - Master/Slave Set: ms1 [rsc1] + Clone Set: ms1 [rsc1] (promotable) Slaves: [ node1 node2 ] Transition Summary: * Promote rsc1:0 (Slave -> Master node1) Executing cluster transition: * Pseudo action: ms1_promote_0 * Resource action: rsc1:1 promote on node1 * Pseudo action: ms1_promoted_0 Revised cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 - Master/Slave Set: ms1 [rsc1] + Clone Set: ms1 [rsc1] (promotable) Masters: [ node1 ] Slaves: [ node2 ] diff --git a/cts/scheduler/ticket-master-6.summary b/cts/scheduler/ticket-master-6.summary index ec2660a698..8dbef130d2 100644 --- a/cts/scheduler/ticket-master-6.summary +++ b/cts/scheduler/ticket-master-6.summary @@ -1,24 +1,24 @@ Current cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 - Master/Slave Set: ms1 [rsc1] + Clone Set: ms1 [rsc1] (promotable) Masters: [ node1 ] Slaves: [ node2 ] Transition Summary: * Demote rsc1:0 (Master -> Slave node1) Executing cluster transition: * Pseudo action: ms1_demote_0 * Resource action: rsc1:1 demote on node1 * Pseudo action: ms1_demoted_0 Revised cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 - Master/Slave Set: ms1 [rsc1] + Clone Set: ms1 [rsc1] (promotable) Slaves: [ node1 node2 ] diff --git a/cts/scheduler/ticket-master-7.summary b/cts/scheduler/ticket-master-7.summary index 58148d8952..d5ec66856b 100644 --- a/cts/scheduler/ticket-master-7.summary +++ b/cts/scheduler/ticket-master-7.summary @@ -1,27 +1,27 @@ Current cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 - Master/Slave Set: ms1 [rsc1] + Clone Set: ms1 [rsc1] (promotable) Stopped: [ node1 node2 ] Transition Summary: * Start rsc1:0 (node2) * Start rsc1:1 (node1) Executing cluster transition: * Resource action: rsc1:0 monitor on node2 * Resource action: rsc1:1 monitor on node1 * Pseudo action: ms1_start_0 * Resource action: rsc1:0 start on node2 * Resource action: rsc1:1 start on node1 * Pseudo action: ms1_running_0 Revised cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 - Master/Slave Set: ms1 [rsc1] + Clone Set: ms1 [rsc1] (promotable) Slaves: [ node1 node2 ] diff --git a/cts/scheduler/ticket-master-8.summary b/cts/scheduler/ticket-master-8.summary index b488118eaf..980cf993d0 100644 --- a/cts/scheduler/ticket-master-8.summary +++ b/cts/scheduler/ticket-master-8.summary @@ -1,24 +1,24 @@ Current cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 - Master/Slave Set: ms1 [rsc1] + Clone Set: ms1 [rsc1] (promotable) Slaves: [ node1 node2 ] Transition Summary: * Promote rsc1:0 (Slave -> Master node1) Executing cluster transition: * Pseudo action: ms1_promote_0 * Resource action: rsc1:1 promote on node1 * Pseudo action: ms1_promoted_0 Revised cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 - Master/Slave Set: ms1 [rsc1] + Clone Set: ms1 [rsc1] (promotable) 
Masters: [ node1 ] Slaves: [ node2 ] diff --git a/cts/scheduler/ticket-master-9.summary b/cts/scheduler/ticket-master-9.summary index 88f62fd64f..ac3790947e 100644 --- a/cts/scheduler/ticket-master-9.summary +++ b/cts/scheduler/ticket-master-9.summary @@ -1,36 +1,36 @@ Current cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 - Master/Slave Set: ms1 [rsc1] + Clone Set: ms1 [rsc1] (promotable) Masters: [ node1 ] Slaves: [ node2 ] Transition Summary: * Fence (reboot) node1 'deadman ticket was lost' * Move rsc_stonith ( node1 -> node2 ) * Stop rsc1:0 ( Master node1 ) due to node availability Executing cluster transition: * Pseudo action: rsc_stonith_stop_0 * Pseudo action: ms1_demote_0 * Fencing node1 (reboot) * Resource action: rsc_stonith start on node2 * Pseudo action: rsc1:1_demote_0 * Pseudo action: ms1_demoted_0 * Pseudo action: ms1_stop_0 * Pseudo action: stonith_complete * Pseudo action: rsc1:1_stop_0 * Pseudo action: ms1_stopped_0 * Pseudo action: all_stopped Revised cluster status: Online: [ node2 ] OFFLINE: [ node1 ] rsc_stonith (stonith:null): Started node2 - Master/Slave Set: ms1 [rsc1] + Clone Set: ms1 [rsc1] (promotable) Slaves: [ node2 ] Stopped: [ node1 ] diff --git a/cts/scheduler/ticket-rsc-sets-1.summary b/cts/scheduler/ticket-rsc-sets-1.summary index d87da470c8..6381f76a59 100644 --- a/cts/scheduler/ticket-rsc-sets-1.summary +++ b/cts/scheduler/ticket-rsc-sets-1.summary @@ -1,47 +1,47 @@ Current cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 rsc1 (ocf::pacemaker:Dummy): Stopped Resource Group: group2 rsc2 (ocf::pacemaker:Dummy): Stopped rsc3 (ocf::pacemaker:Dummy): Stopped Clone Set: clone4 [rsc4] Stopped: [ node1 node2 ] - Master/Slave Set: ms5 [rsc5] + Clone Set: ms5 [rsc5] (promotable) Stopped: [ node1 node2 ] Transition Summary: * Start rsc5:0 (node2) * Start rsc5:1 (node1) Executing cluster transition: * Resource action: rsc1 monitor on node2 * Resource action: rsc1 monitor on node1 * Resource action: rsc2 monitor on node2 * Resource action: rsc2 monitor on node1 * Resource action: rsc3 monitor on node2 * Resource action: rsc3 monitor on node1 * Resource action: rsc4:0 monitor on node2 * Resource action: rsc4:0 monitor on node1 * Resource action: rsc5:0 monitor on node2 * Resource action: rsc5:1 monitor on node1 * Pseudo action: ms5_start_0 * Resource action: rsc5:0 start on node2 * Resource action: rsc5:1 start on node1 * Pseudo action: ms5_running_0 Revised cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 rsc1 (ocf::pacemaker:Dummy): Stopped Resource Group: group2 rsc2 (ocf::pacemaker:Dummy): Stopped rsc3 (ocf::pacemaker:Dummy): Stopped Clone Set: clone4 [rsc4] Stopped: [ node1 node2 ] - Master/Slave Set: ms5 [rsc5] + Clone Set: ms5 [rsc5] (promotable) Slaves: [ node1 node2 ] diff --git a/cts/scheduler/ticket-rsc-sets-10.summary b/cts/scheduler/ticket-rsc-sets-10.summary index 0a36d45658..a33e5204ca 100644 --- a/cts/scheduler/ticket-rsc-sets-10.summary +++ b/cts/scheduler/ticket-rsc-sets-10.summary @@ -1,51 +1,51 @@ Current cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 rsc1 (ocf::pacemaker:Dummy): Started node2 Resource Group: group2 rsc2 (ocf::pacemaker:Dummy): Started node1 rsc3 (ocf::pacemaker:Dummy): Started node1 Clone Set: clone4 [rsc4] Started: [ node1 node2 ] - Master/Slave Set: ms5 [rsc5] + Clone Set: ms5 [rsc5] (promotable) Masters: [ node1 ] Slaves: [ node2 ] Transition Summary: * Stop rsc1 ( node2 ) due to node 
availability * Stop rsc2 (node1) due to node availability * Stop rsc3 (node1) due to node availability * Stop rsc4:0 (node1) due to node availability * Stop rsc4:1 (node2) due to node availability * Demote rsc5:0 (Master -> Slave node1) Executing cluster transition: * Resource action: rsc1 stop on node2 * Pseudo action: group2_stop_0 * Resource action: rsc3 stop on node1 * Pseudo action: clone4_stop_0 * Pseudo action: ms5_demote_0 * Resource action: rsc2 stop on node1 * Resource action: rsc4:1 stop on node1 * Resource action: rsc4:0 stop on node2 * Pseudo action: clone4_stopped_0 * Resource action: rsc5:1 demote on node1 * Pseudo action: ms5_demoted_0 * Pseudo action: all_stopped * Pseudo action: group2_stopped_0 Revised cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 rsc1 (ocf::pacemaker:Dummy): Stopped Resource Group: group2 rsc2 (ocf::pacemaker:Dummy): Stopped rsc3 (ocf::pacemaker:Dummy): Stopped Clone Set: clone4 [rsc4] Stopped: [ node1 node2 ] - Master/Slave Set: ms5 [rsc5] + Clone Set: ms5 [rsc5] (promotable) Slaves: [ node1 node2 ] diff --git a/cts/scheduler/ticket-rsc-sets-11.summary b/cts/scheduler/ticket-rsc-sets-11.summary index 47d392377d..d04b1ea81b 100644 --- a/cts/scheduler/ticket-rsc-sets-11.summary +++ b/cts/scheduler/ticket-rsc-sets-11.summary @@ -1,31 +1,31 @@ Current cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 rsc1 (ocf::pacemaker:Dummy): Stopped Resource Group: group2 rsc2 (ocf::pacemaker:Dummy): Stopped rsc3 (ocf::pacemaker:Dummy): Stopped Clone Set: clone4 [rsc4] Stopped: [ node1 node2 ] - Master/Slave Set: ms5 [rsc5] + Clone Set: ms5 [rsc5] (promotable) Slaves: [ node1 node2 ] Transition Summary: Executing cluster transition: Revised cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 rsc1 (ocf::pacemaker:Dummy): Stopped Resource Group: group2 rsc2 (ocf::pacemaker:Dummy): Stopped rsc3 (ocf::pacemaker:Dummy): Stopped Clone Set: clone4 [rsc4] Stopped: [ node1 node2 ] - Master/Slave Set: ms5 [rsc5] + Clone Set: ms5 [rsc5] (promotable) Slaves: [ node1 node2 ] diff --git a/cts/scheduler/ticket-rsc-sets-12.summary b/cts/scheduler/ticket-rsc-sets-12.summary index fd22d77969..f268002f00 100644 --- a/cts/scheduler/ticket-rsc-sets-12.summary +++ b/cts/scheduler/ticket-rsc-sets-12.summary @@ -1,40 +1,40 @@ Current cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 rsc1 (ocf::pacemaker:Dummy): Started node2 Resource Group: group2 rsc2 (ocf::pacemaker:Dummy): Started node1 rsc3 (ocf::pacemaker:Dummy): Started node1 Clone Set: clone4 [rsc4] Stopped: [ node1 node2 ] - Master/Slave Set: ms5 [rsc5] + Clone Set: ms5 [rsc5] (promotable) Slaves: [ node1 node2 ] Transition Summary: * Stop rsc1 ( node2 ) due to node availability * Stop rsc2 (node1) due to node availability * Stop rsc3 (node1) due to node availability Executing cluster transition: * Resource action: rsc1 stop on node2 * Pseudo action: group2_stop_0 * Resource action: rsc3 stop on node1 * Resource action: rsc2 stop on node1 * Pseudo action: all_stopped * Pseudo action: group2_stopped_0 Revised cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 rsc1 (ocf::pacemaker:Dummy): Stopped Resource Group: group2 rsc2 (ocf::pacemaker:Dummy): Stopped rsc3 (ocf::pacemaker:Dummy): Stopped Clone Set: clone4 [rsc4] Stopped: [ node1 node2 ] - Master/Slave Set: ms5 [rsc5] + Clone Set: ms5 [rsc5] (promotable) Slaves: [ node1 node2 ] diff --git 
a/cts/scheduler/ticket-rsc-sets-13.summary b/cts/scheduler/ticket-rsc-sets-13.summary index 0a36d45658..a33e5204ca 100644 --- a/cts/scheduler/ticket-rsc-sets-13.summary +++ b/cts/scheduler/ticket-rsc-sets-13.summary @@ -1,51 +1,51 @@ Current cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 rsc1 (ocf::pacemaker:Dummy): Started node2 Resource Group: group2 rsc2 (ocf::pacemaker:Dummy): Started node1 rsc3 (ocf::pacemaker:Dummy): Started node1 Clone Set: clone4 [rsc4] Started: [ node1 node2 ] - Master/Slave Set: ms5 [rsc5] + Clone Set: ms5 [rsc5] (promotable) Masters: [ node1 ] Slaves: [ node2 ] Transition Summary: * Stop rsc1 ( node2 ) due to node availability * Stop rsc2 (node1) due to node availability * Stop rsc3 (node1) due to node availability * Stop rsc4:0 (node1) due to node availability * Stop rsc4:1 (node2) due to node availability * Demote rsc5:0 (Master -> Slave node1) Executing cluster transition: * Resource action: rsc1 stop on node2 * Pseudo action: group2_stop_0 * Resource action: rsc3 stop on node1 * Pseudo action: clone4_stop_0 * Pseudo action: ms5_demote_0 * Resource action: rsc2 stop on node1 * Resource action: rsc4:1 stop on node1 * Resource action: rsc4:0 stop on node2 * Pseudo action: clone4_stopped_0 * Resource action: rsc5:1 demote on node1 * Pseudo action: ms5_demoted_0 * Pseudo action: all_stopped * Pseudo action: group2_stopped_0 Revised cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 rsc1 (ocf::pacemaker:Dummy): Stopped Resource Group: group2 rsc2 (ocf::pacemaker:Dummy): Stopped rsc3 (ocf::pacemaker:Dummy): Stopped Clone Set: clone4 [rsc4] Stopped: [ node1 node2 ] - Master/Slave Set: ms5 [rsc5] + Clone Set: ms5 [rsc5] (promotable) Slaves: [ node1 node2 ] diff --git a/cts/scheduler/ticket-rsc-sets-14.summary b/cts/scheduler/ticket-rsc-sets-14.summary index 0a36d45658..a33e5204ca 100644 --- a/cts/scheduler/ticket-rsc-sets-14.summary +++ b/cts/scheduler/ticket-rsc-sets-14.summary @@ -1,51 +1,51 @@ Current cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 rsc1 (ocf::pacemaker:Dummy): Started node2 Resource Group: group2 rsc2 (ocf::pacemaker:Dummy): Started node1 rsc3 (ocf::pacemaker:Dummy): Started node1 Clone Set: clone4 [rsc4] Started: [ node1 node2 ] - Master/Slave Set: ms5 [rsc5] + Clone Set: ms5 [rsc5] (promotable) Masters: [ node1 ] Slaves: [ node2 ] Transition Summary: * Stop rsc1 ( node2 ) due to node availability * Stop rsc2 (node1) due to node availability * Stop rsc3 (node1) due to node availability * Stop rsc4:0 (node1) due to node availability * Stop rsc4:1 (node2) due to node availability * Demote rsc5:0 (Master -> Slave node1) Executing cluster transition: * Resource action: rsc1 stop on node2 * Pseudo action: group2_stop_0 * Resource action: rsc3 stop on node1 * Pseudo action: clone4_stop_0 * Pseudo action: ms5_demote_0 * Resource action: rsc2 stop on node1 * Resource action: rsc4:1 stop on node1 * Resource action: rsc4:0 stop on node2 * Pseudo action: clone4_stopped_0 * Resource action: rsc5:1 demote on node1 * Pseudo action: ms5_demoted_0 * Pseudo action: all_stopped * Pseudo action: group2_stopped_0 Revised cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 rsc1 (ocf::pacemaker:Dummy): Stopped Resource Group: group2 rsc2 (ocf::pacemaker:Dummy): Stopped rsc3 (ocf::pacemaker:Dummy): Stopped Clone Set: clone4 [rsc4] Stopped: [ node1 node2 ] - Master/Slave Set: ms5 [rsc5] + Clone Set: ms5 [rsc5] (promotable) Slaves: [ node1 
node2 ] diff --git a/cts/scheduler/ticket-rsc-sets-2.summary b/cts/scheduler/ticket-rsc-sets-2.summary index e17dfdb6c9..e8b7a3c349 100644 --- a/cts/scheduler/ticket-rsc-sets-2.summary +++ b/cts/scheduler/ticket-rsc-sets-2.summary @@ -1,55 +1,55 @@ Current cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 rsc1 (ocf::pacemaker:Dummy): Stopped Resource Group: group2 rsc2 (ocf::pacemaker:Dummy): Stopped rsc3 (ocf::pacemaker:Dummy): Stopped Clone Set: clone4 [rsc4] Stopped: [ node1 node2 ] - Master/Slave Set: ms5 [rsc5] + Clone Set: ms5 [rsc5] (promotable) Slaves: [ node1 node2 ] Transition Summary: * Start rsc1 (node2) * Start rsc2 (node1) * Start rsc3 (node1) * Start rsc4:0 (node2) * Start rsc4:1 (node1) * Promote rsc5:0 (Slave -> Master node1) Executing cluster transition: * Resource action: rsc1 start on node2 * Pseudo action: group2_start_0 * Resource action: rsc2 start on node1 * Resource action: rsc3 start on node1 * Pseudo action: clone4_start_0 * Pseudo action: ms5_promote_0 * Resource action: rsc1 monitor=10000 on node2 * Pseudo action: group2_running_0 * Resource action: rsc2 monitor=5000 on node1 * Resource action: rsc3 monitor=5000 on node1 * Resource action: rsc4:0 start on node2 * Resource action: rsc4:1 start on node1 * Pseudo action: clone4_running_0 * Resource action: rsc5:1 promote on node1 * Pseudo action: ms5_promoted_0 * Resource action: rsc4:0 monitor=5000 on node2 * Resource action: rsc4:1 monitor=5000 on node1 Revised cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 rsc1 (ocf::pacemaker:Dummy): Started node2 Resource Group: group2 rsc2 (ocf::pacemaker:Dummy): Started node1 rsc3 (ocf::pacemaker:Dummy): Started node1 Clone Set: clone4 [rsc4] Started: [ node1 node2 ] - Master/Slave Set: ms5 [rsc5] + Clone Set: ms5 [rsc5] (promotable) Masters: [ node1 ] Slaves: [ node2 ] diff --git a/cts/scheduler/ticket-rsc-sets-3.summary b/cts/scheduler/ticket-rsc-sets-3.summary index 0a36d45658..a33e5204ca 100644 --- a/cts/scheduler/ticket-rsc-sets-3.summary +++ b/cts/scheduler/ticket-rsc-sets-3.summary @@ -1,51 +1,51 @@ Current cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 rsc1 (ocf::pacemaker:Dummy): Started node2 Resource Group: group2 rsc2 (ocf::pacemaker:Dummy): Started node1 rsc3 (ocf::pacemaker:Dummy): Started node1 Clone Set: clone4 [rsc4] Started: [ node1 node2 ] - Master/Slave Set: ms5 [rsc5] + Clone Set: ms5 [rsc5] (promotable) Masters: [ node1 ] Slaves: [ node2 ] Transition Summary: * Stop rsc1 ( node2 ) due to node availability * Stop rsc2 (node1) due to node availability * Stop rsc3 (node1) due to node availability * Stop rsc4:0 (node1) due to node availability * Stop rsc4:1 (node2) due to node availability * Demote rsc5:0 (Master -> Slave node1) Executing cluster transition: * Resource action: rsc1 stop on node2 * Pseudo action: group2_stop_0 * Resource action: rsc3 stop on node1 * Pseudo action: clone4_stop_0 * Pseudo action: ms5_demote_0 * Resource action: rsc2 stop on node1 * Resource action: rsc4:1 stop on node1 * Resource action: rsc4:0 stop on node2 * Pseudo action: clone4_stopped_0 * Resource action: rsc5:1 demote on node1 * Pseudo action: ms5_demoted_0 * Pseudo action: all_stopped * Pseudo action: group2_stopped_0 Revised cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 rsc1 (ocf::pacemaker:Dummy): Stopped Resource Group: group2 rsc2 (ocf::pacemaker:Dummy): Stopped rsc3 (ocf::pacemaker:Dummy): Stopped Clone Set: clone4 [rsc4] 
Stopped: [ node1 node2 ] - Master/Slave Set: ms5 [rsc5] + Clone Set: ms5 [rsc5] (promotable) Slaves: [ node1 node2 ] diff --git a/cts/scheduler/ticket-rsc-sets-4.summary b/cts/scheduler/ticket-rsc-sets-4.summary index d87da470c8..6381f76a59 100644 --- a/cts/scheduler/ticket-rsc-sets-4.summary +++ b/cts/scheduler/ticket-rsc-sets-4.summary @@ -1,47 +1,47 @@ Current cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 rsc1 (ocf::pacemaker:Dummy): Stopped Resource Group: group2 rsc2 (ocf::pacemaker:Dummy): Stopped rsc3 (ocf::pacemaker:Dummy): Stopped Clone Set: clone4 [rsc4] Stopped: [ node1 node2 ] - Master/Slave Set: ms5 [rsc5] + Clone Set: ms5 [rsc5] (promotable) Stopped: [ node1 node2 ] Transition Summary: * Start rsc5:0 (node2) * Start rsc5:1 (node1) Executing cluster transition: * Resource action: rsc1 monitor on node2 * Resource action: rsc1 monitor on node1 * Resource action: rsc2 monitor on node2 * Resource action: rsc2 monitor on node1 * Resource action: rsc3 monitor on node2 * Resource action: rsc3 monitor on node1 * Resource action: rsc4:0 monitor on node2 * Resource action: rsc4:0 monitor on node1 * Resource action: rsc5:0 monitor on node2 * Resource action: rsc5:1 monitor on node1 * Pseudo action: ms5_start_0 * Resource action: rsc5:0 start on node2 * Resource action: rsc5:1 start on node1 * Pseudo action: ms5_running_0 Revised cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 rsc1 (ocf::pacemaker:Dummy): Stopped Resource Group: group2 rsc2 (ocf::pacemaker:Dummy): Stopped rsc3 (ocf::pacemaker:Dummy): Stopped Clone Set: clone4 [rsc4] Stopped: [ node1 node2 ] - Master/Slave Set: ms5 [rsc5] + Clone Set: ms5 [rsc5] (promotable) Slaves: [ node1 node2 ] diff --git a/cts/scheduler/ticket-rsc-sets-5.summary b/cts/scheduler/ticket-rsc-sets-5.summary index 2982a434ce..08a955f822 100644 --- a/cts/scheduler/ticket-rsc-sets-5.summary +++ b/cts/scheduler/ticket-rsc-sets-5.summary @@ -1,42 +1,42 @@ Current cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 rsc1 (ocf::pacemaker:Dummy): Stopped Resource Group: group2 rsc2 (ocf::pacemaker:Dummy): Stopped rsc3 (ocf::pacemaker:Dummy): Stopped Clone Set: clone4 [rsc4] Stopped: [ node1 node2 ] - Master/Slave Set: ms5 [rsc5] + Clone Set: ms5 [rsc5] (promotable) Slaves: [ node1 node2 ] Transition Summary: * Start rsc1 (node2) * Start rsc2 (node1) * Start rsc3 (node1) Executing cluster transition: * Resource action: rsc1 start on node2 * Pseudo action: group2_start_0 * Resource action: rsc2 start on node1 * Resource action: rsc3 start on node1 * Resource action: rsc1 monitor=10000 on node2 * Pseudo action: group2_running_0 * Resource action: rsc2 monitor=5000 on node1 * Resource action: rsc3 monitor=5000 on node1 Revised cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 rsc1 (ocf::pacemaker:Dummy): Started node2 Resource Group: group2 rsc2 (ocf::pacemaker:Dummy): Started node1 rsc3 (ocf::pacemaker:Dummy): Started node1 Clone Set: clone4 [rsc4] Stopped: [ node1 node2 ] - Master/Slave Set: ms5 [rsc5] + Clone Set: ms5 [rsc5] (promotable) Slaves: [ node1 node2 ] diff --git a/cts/scheduler/ticket-rsc-sets-6.summary b/cts/scheduler/ticket-rsc-sets-6.summary index 7bb168674b..94a6a65f83 100644 --- a/cts/scheduler/ticket-rsc-sets-6.summary +++ b/cts/scheduler/ticket-rsc-sets-6.summary @@ -1,44 +1,44 @@ Current cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 rsc1 (ocf::pacemaker:Dummy): Started node2 
Resource Group: group2 rsc2 (ocf::pacemaker:Dummy): Started node1 rsc3 (ocf::pacemaker:Dummy): Started node1 Clone Set: clone4 [rsc4] Stopped: [ node1 node2 ] - Master/Slave Set: ms5 [rsc5] + Clone Set: ms5 [rsc5] (promotable) Slaves: [ node1 node2 ] Transition Summary: * Start rsc4:0 (node2) * Start rsc4:1 (node1) * Promote rsc5:0 (Slave -> Master node1) Executing cluster transition: * Pseudo action: clone4_start_0 * Pseudo action: ms5_promote_0 * Resource action: rsc4:0 start on node2 * Resource action: rsc4:1 start on node1 * Pseudo action: clone4_running_0 * Resource action: rsc5:1 promote on node1 * Pseudo action: ms5_promoted_0 * Resource action: rsc4:0 monitor=5000 on node2 * Resource action: rsc4:1 monitor=5000 on node1 Revised cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 rsc1 (ocf::pacemaker:Dummy): Started node2 Resource Group: group2 rsc2 (ocf::pacemaker:Dummy): Started node1 rsc3 (ocf::pacemaker:Dummy): Started node1 Clone Set: clone4 [rsc4] Started: [ node1 node2 ] - Master/Slave Set: ms5 [rsc5] + Clone Set: ms5 [rsc5] (promotable) Masters: [ node1 ] Slaves: [ node2 ] diff --git a/cts/scheduler/ticket-rsc-sets-7.summary b/cts/scheduler/ticket-rsc-sets-7.summary index 0a36d45658..a33e5204ca 100644 --- a/cts/scheduler/ticket-rsc-sets-7.summary +++ b/cts/scheduler/ticket-rsc-sets-7.summary @@ -1,51 +1,51 @@ Current cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 rsc1 (ocf::pacemaker:Dummy): Started node2 Resource Group: group2 rsc2 (ocf::pacemaker:Dummy): Started node1 rsc3 (ocf::pacemaker:Dummy): Started node1 Clone Set: clone4 [rsc4] Started: [ node1 node2 ] - Master/Slave Set: ms5 [rsc5] + Clone Set: ms5 [rsc5] (promotable) Masters: [ node1 ] Slaves: [ node2 ] Transition Summary: * Stop rsc1 ( node2 ) due to node availability * Stop rsc2 (node1) due to node availability * Stop rsc3 (node1) due to node availability * Stop rsc4:0 (node1) due to node availability * Stop rsc4:1 (node2) due to node availability * Demote rsc5:0 (Master -> Slave node1) Executing cluster transition: * Resource action: rsc1 stop on node2 * Pseudo action: group2_stop_0 * Resource action: rsc3 stop on node1 * Pseudo action: clone4_stop_0 * Pseudo action: ms5_demote_0 * Resource action: rsc2 stop on node1 * Resource action: rsc4:1 stop on node1 * Resource action: rsc4:0 stop on node2 * Pseudo action: clone4_stopped_0 * Resource action: rsc5:1 demote on node1 * Pseudo action: ms5_demoted_0 * Pseudo action: all_stopped * Pseudo action: group2_stopped_0 Revised cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 rsc1 (ocf::pacemaker:Dummy): Stopped Resource Group: group2 rsc2 (ocf::pacemaker:Dummy): Stopped rsc3 (ocf::pacemaker:Dummy): Stopped Clone Set: clone4 [rsc4] Stopped: [ node1 node2 ] - Master/Slave Set: ms5 [rsc5] + Clone Set: ms5 [rsc5] (promotable) Slaves: [ node1 node2 ] diff --git a/cts/scheduler/ticket-rsc-sets-8.summary b/cts/scheduler/ticket-rsc-sets-8.summary index 47d392377d..d04b1ea81b 100644 --- a/cts/scheduler/ticket-rsc-sets-8.summary +++ b/cts/scheduler/ticket-rsc-sets-8.summary @@ -1,31 +1,31 @@ Current cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 rsc1 (ocf::pacemaker:Dummy): Stopped Resource Group: group2 rsc2 (ocf::pacemaker:Dummy): Stopped rsc3 (ocf::pacemaker:Dummy): Stopped Clone Set: clone4 [rsc4] Stopped: [ node1 node2 ] - Master/Slave Set: ms5 [rsc5] + Clone Set: ms5 [rsc5] (promotable) Slaves: [ node1 node2 ] Transition Summary: Executing 
cluster transition: Revised cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 rsc1 (ocf::pacemaker:Dummy): Stopped Resource Group: group2 rsc2 (ocf::pacemaker:Dummy): Stopped rsc3 (ocf::pacemaker:Dummy): Stopped Clone Set: clone4 [rsc4] Stopped: [ node1 node2 ] - Master/Slave Set: ms5 [rsc5] + Clone Set: ms5 [rsc5] (promotable) Slaves: [ node1 node2 ] diff --git a/cts/scheduler/ticket-rsc-sets-9.summary b/cts/scheduler/ticket-rsc-sets-9.summary index 0a36d45658..a33e5204ca 100644 --- a/cts/scheduler/ticket-rsc-sets-9.summary +++ b/cts/scheduler/ticket-rsc-sets-9.summary @@ -1,51 +1,51 @@ Current cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 rsc1 (ocf::pacemaker:Dummy): Started node2 Resource Group: group2 rsc2 (ocf::pacemaker:Dummy): Started node1 rsc3 (ocf::pacemaker:Dummy): Started node1 Clone Set: clone4 [rsc4] Started: [ node1 node2 ] - Master/Slave Set: ms5 [rsc5] + Clone Set: ms5 [rsc5] (promotable) Masters: [ node1 ] Slaves: [ node2 ] Transition Summary: * Stop rsc1 ( node2 ) due to node availability * Stop rsc2 (node1) due to node availability * Stop rsc3 (node1) due to node availability * Stop rsc4:0 (node1) due to node availability * Stop rsc4:1 (node2) due to node availability * Demote rsc5:0 (Master -> Slave node1) Executing cluster transition: * Resource action: rsc1 stop on node2 * Pseudo action: group2_stop_0 * Resource action: rsc3 stop on node1 * Pseudo action: clone4_stop_0 * Pseudo action: ms5_demote_0 * Resource action: rsc2 stop on node1 * Resource action: rsc4:1 stop on node1 * Resource action: rsc4:0 stop on node2 * Pseudo action: clone4_stopped_0 * Resource action: rsc5:1 demote on node1 * Pseudo action: ms5_demoted_0 * Pseudo action: all_stopped * Pseudo action: group2_stopped_0 Revised cluster status: Online: [ node1 node2 ] rsc_stonith (stonith:null): Started node1 rsc1 (ocf::pacemaker:Dummy): Stopped Resource Group: group2 rsc2 (ocf::pacemaker:Dummy): Stopped rsc3 (ocf::pacemaker:Dummy): Stopped Clone Set: clone4 [rsc4] Stopped: [ node1 node2 ] - Master/Slave Set: ms5 [rsc5] + Clone Set: ms5 [rsc5] (promotable) Slaves: [ node1 node2 ] diff --git a/cts/scheduler/unmanaged-master.summary b/cts/scheduler/unmanaged-master.summary index 66a8748053..9d1e0b8ad8 100644 --- a/cts/scheduler/unmanaged-master.summary +++ b/cts/scheduler/unmanaged-master.summary @@ -1,63 +1,63 @@ Current cluster status: Online: [ pcmk-1 pcmk-2 ] OFFLINE: [ pcmk-3 pcmk-4 ] Clone Set: Fencing [FencingChild] (unmanaged) FencingChild (stonith:fence_xvm): Started pcmk-2 (unmanaged) FencingChild (stonith:fence_xvm): Started pcmk-1 (unmanaged) Stopped: [ pcmk-3 pcmk-4 ] Resource Group: group-1 r192.168.122.126 (ocf::heartbeat:IPaddr): Started pcmk-2 (unmanaged) r192.168.122.127 (ocf::heartbeat:IPaddr): Started pcmk-2 (unmanaged) r192.168.122.128 (ocf::heartbeat:IPaddr): Started pcmk-2 (unmanaged) rsc_pcmk-1 (ocf::heartbeat:IPaddr): Started pcmk-1 (unmanaged) rsc_pcmk-2 (ocf::heartbeat:IPaddr): Started pcmk-2 (unmanaged) rsc_pcmk-3 (ocf::heartbeat:IPaddr): Started pcmk-3 (unmanaged) rsc_pcmk-4 (ocf::heartbeat:IPaddr): Started pcmk-4 (unmanaged) lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Started pcmk-2 (unmanaged) migrator (ocf::pacemaker:Dummy): Started pcmk-4 (unmanaged) Clone Set: Connectivity [ping-1] (unmanaged) ping-1 (ocf::pacemaker:ping): Started pcmk-2 (unmanaged) ping-1 (ocf::pacemaker:ping): Started pcmk-1 (unmanaged) Stopped: [ pcmk-3 pcmk-4 ] - Master/Slave Set: master-1 [stateful-1] (unmanaged) + Clone 
Set: master-1 [stateful-1] (promotable) (unmanaged)
     stateful-1 (ocf::pacemaker:Stateful): Master pcmk-2 (unmanaged)
     stateful-1 (ocf::pacemaker:Stateful): Slave pcmk-1 ( unmanaged )
     Stopped: [ pcmk-3 pcmk-4 ]

Transition Summary:
 * Shutdown pcmk-2
 * Shutdown pcmk-1

Executing cluster transition:
 * Cluster action: do_shutdown on pcmk-2
 * Cluster action: do_shutdown on pcmk-1

Revised cluster status:
Online: [ pcmk-1 pcmk-2 ]
OFFLINE: [ pcmk-3 pcmk-4 ]

 Clone Set: Fencing [FencingChild] (unmanaged)
     FencingChild (stonith:fence_xvm): Started pcmk-2 (unmanaged)
     FencingChild (stonith:fence_xvm): Started pcmk-1 (unmanaged)
     Stopped: [ pcmk-3 pcmk-4 ]
 Resource Group: group-1
     r192.168.122.126 (ocf::heartbeat:IPaddr): Started pcmk-2 (unmanaged)
     r192.168.122.127 (ocf::heartbeat:IPaddr): Started pcmk-2 (unmanaged)
     r192.168.122.128 (ocf::heartbeat:IPaddr): Started pcmk-2 (unmanaged)
 rsc_pcmk-1 (ocf::heartbeat:IPaddr): Started pcmk-1 (unmanaged)
 rsc_pcmk-2 (ocf::heartbeat:IPaddr): Started pcmk-2 (unmanaged)
 rsc_pcmk-3 (ocf::heartbeat:IPaddr): Started pcmk-3 (unmanaged)
 rsc_pcmk-4 (ocf::heartbeat:IPaddr): Started pcmk-4 (unmanaged)
 lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Started pcmk-2 (unmanaged)
 migrator (ocf::pacemaker:Dummy): Started pcmk-4 (unmanaged)
 Clone Set: Connectivity [ping-1] (unmanaged)
     ping-1 (ocf::pacemaker:ping): Started pcmk-2 (unmanaged)
     ping-1 (ocf::pacemaker:ping): Started pcmk-1 (unmanaged)
     Stopped: [ pcmk-3 pcmk-4 ]
- Master/Slave Set: master-1 [stateful-1] (unmanaged)
+ Clone Set: master-1 [stateful-1] (promotable) (unmanaged)
     stateful-1 (ocf::pacemaker:Stateful): Master pcmk-2 (unmanaged)
     stateful-1 (ocf::pacemaker:Stateful): Slave pcmk-1 ( unmanaged )
     Stopped: [ pcmk-3 pcmk-4 ]
diff --git a/cts/scheduler/unrunnable-2.summary b/cts/scheduler/unrunnable-2.summary
index 4bbacece54..9c847e2acd 100644
--- a/cts/scheduler/unrunnable-2.summary
+++ b/cts/scheduler/unrunnable-2.summary
@@ -1,175 +1,175 @@
6 of 117 resources DISABLED and 0 BLOCKED from being started due to failures

Current cluster status:
Online: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]

 ip-192.0.2.12 (ocf::heartbeat:IPaddr2): Started overcloud-controller-0
 Clone Set: haproxy-clone [haproxy]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
- Master/Slave Set: galera-master [galera]
+ Clone Set: galera-master [galera] (promotable)
     Masters: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: memcached-clone [memcached]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: rabbitmq-clone [rabbitmq]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-core-clone [openstack-core]
     Stopped (disabled): [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
- Master/Slave Set: redis-master [redis]
+ Clone Set: redis-master [redis] (promotable)
     Masters: [ overcloud-controller-1 ]
     Slaves: [ overcloud-controller-0 overcloud-controller-2 ]
 ip-192.0.2.11 (ocf::heartbeat:IPaddr2): Started overcloud-controller-1
 Clone Set: mongod-clone [mongod]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-aodh-evaluator-clone [openstack-aodh-evaluator]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-nova-scheduler-clone [openstack-nova-scheduler]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-l3-agent-clone [neutron-l3-agent]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-netns-cleanup-clone [neutron-netns-cleanup]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-ovs-cleanup-clone [neutron-ovs-cleanup]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 openstack-cinder-volume (systemd:openstack-cinder-volume): Stopped
 Clone Set: openstack-heat-engine-clone [openstack-heat-engine]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-ceilometer-api-clone [openstack-ceilometer-api]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-aodh-listener-clone [openstack-aodh-listener]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-metadata-agent-clone [neutron-metadata-agent]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-aodh-notifier-clone [openstack-aodh-notifier]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-heat-api-clone [openstack-heat-api]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-ceilometer-collector-clone [openstack-ceilometer-collector]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-glance-api-clone [openstack-glance-api]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-cinder-scheduler-clone [openstack-cinder-scheduler]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-nova-api-clone [openstack-nova-api]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-nova-consoleauth-clone [openstack-nova-consoleauth]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-sahara-api-clone [openstack-sahara-api]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-heat-api-cloudwatch-clone [openstack-heat-api-cloudwatch]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-sahara-engine-clone [openstack-sahara-engine]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-glance-registry-clone [openstack-glance-registry]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-ceilometer-notification-clone [openstack-ceilometer-notification]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-cinder-api-clone [openstack-cinder-api]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-dhcp-agent-clone [neutron-dhcp-agent]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-openvswitch-agent-clone [neutron-openvswitch-agent]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-nova-novncproxy-clone [openstack-nova-novncproxy]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: delay-clone [delay]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-server-clone [neutron-server]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-ceilometer-central-clone [openstack-ceilometer-central]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: httpd-clone [httpd]
     Stopped (disabled): [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-heat-api-cfn-clone [openstack-heat-api-cfn]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-nova-conductor-clone [openstack-nova-conductor]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]

Transition Summary:
 * Start openstack-cinder-volume ( overcloud-controller-2 ) due to unrunnable openstack-cinder-scheduler-clone running (blocked)

Executing cluster transition:

Revised cluster status:
Online: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]

 ip-192.0.2.12 (ocf::heartbeat:IPaddr2): Started overcloud-controller-0
 Clone Set: haproxy-clone [haproxy]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
- Master/Slave Set: galera-master [galera]
+ Clone Set: galera-master [galera] (promotable)
     Masters: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: memcached-clone [memcached]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: rabbitmq-clone [rabbitmq]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-core-clone [openstack-core]
     Stopped (disabled): [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
- Master/Slave Set: redis-master [redis]
+ Clone Set: redis-master [redis] (promotable)
     Masters: [ overcloud-controller-1 ]
     Slaves: [ overcloud-controller-0 overcloud-controller-2 ]
 ip-192.0.2.11 (ocf::heartbeat:IPaddr2): Started overcloud-controller-1
 Clone Set: mongod-clone [mongod]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-aodh-evaluator-clone [openstack-aodh-evaluator]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-nova-scheduler-clone [openstack-nova-scheduler]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-l3-agent-clone [neutron-l3-agent]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-netns-cleanup-clone [neutron-netns-cleanup]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-ovs-cleanup-clone [neutron-ovs-cleanup]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 openstack-cinder-volume (systemd:openstack-cinder-volume): Stopped
 Clone Set: openstack-heat-engine-clone [openstack-heat-engine]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-ceilometer-api-clone [openstack-ceilometer-api]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-aodh-listener-clone [openstack-aodh-listener]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-metadata-agent-clone [neutron-metadata-agent]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-aodh-notifier-clone [openstack-aodh-notifier]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-heat-api-clone [openstack-heat-api]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-ceilometer-collector-clone [openstack-ceilometer-collector]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-glance-api-clone [openstack-glance-api]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-cinder-scheduler-clone [openstack-cinder-scheduler]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-nova-api-clone [openstack-nova-api]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-nova-consoleauth-clone [openstack-nova-consoleauth]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-sahara-api-clone [openstack-sahara-api]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-heat-api-cloudwatch-clone [openstack-heat-api-cloudwatch]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-sahara-engine-clone [openstack-sahara-engine]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-glance-registry-clone [openstack-glance-registry]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-ceilometer-notification-clone [openstack-ceilometer-notification]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-cinder-api-clone [openstack-cinder-api]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-dhcp-agent-clone [neutron-dhcp-agent]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-openvswitch-agent-clone [neutron-openvswitch-agent]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-nova-novncproxy-clone [openstack-nova-novncproxy]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: delay-clone [delay]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-server-clone [neutron-server]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-ceilometer-central-clone [openstack-ceilometer-central]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: httpd-clone [httpd]
     Stopped (disabled): [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-heat-api-cfn-clone [openstack-heat-api-cfn]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-nova-conductor-clone [openstack-nova-conductor]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
diff --git a/cts/scheduler/use-after-free-merge.summary b/cts/scheduler/use-after-free-merge.summary
index c74af65c31..a05f1eea44 100644
--- a/cts/scheduler/use-after-free-merge.summary
+++ b/cts/scheduler/use-after-free-merge.summary
@@ -1,42 +1,42 @@
4 of 5 resources DISABLED and 0 BLOCKED from being started due to failures

Current cluster status:
Online: [ hex-13 hex-14 ]

 fencing-sbd (stonith:external/sbd): Stopped
 Resource Group: g0
     d0 (ocf::heartbeat:Dummy): Stopped ( disabled )
     d1 (ocf::heartbeat:Dummy): Stopped ( disabled )
- Master/Slave Set: ms0 [s0]
+ Clone Set: ms0 [s0] (promotable)
     Stopped: [ hex-13 hex-14 ]

Transition Summary:
 * Start fencing-sbd (hex-14)
 * Start s0:0 (hex-13)
 * Start s0:1 (hex-14)

Executing cluster transition:
 * Resource action: fencing-sbd monitor on hex-14
 * Resource action: fencing-sbd monitor on hex-13
 * Resource action: d0 monitor on hex-14
 * Resource action: d0 monitor on hex-13
 * Resource action: d1 monitor on hex-14
 * Resource action: d1 monitor on hex-13
 * Resource action: s0:0 monitor on hex-13
 * Resource action: s0:1 monitor on hex-14
 * Pseudo action: ms0_start_0
 * Resource action: fencing-sbd start on hex-14
 * Resource action: s0:0 start on hex-13
 * Resource action: s0:1 start on hex-14
 * Pseudo action: ms0_running_0

Revised cluster status:
Online: [ hex-13 hex-14 ]

 fencing-sbd (stonith:external/sbd): Started hex-14
 Resource Group: g0
     d0 (ocf::heartbeat:Dummy): Stopped ( disabled )
     d1 (ocf::heartbeat:Dummy): Stopped ( disabled )
- Master/Slave Set: ms0 [s0]
+ Clone Set: ms0 [s0] (promotable)
     Slaves: [ hex-13 hex-14 ]
diff --git a/cts/scheduler/whitebox-fail3.summary b/cts/scheduler/whitebox-fail3.summary
index eded0999e0..9f3aa6cfe9 100644
--- a/cts/scheduler/whitebox-fail3.summary
+++ b/cts/scheduler/whitebox-fail3.summary
@@ -1,54 +1,54 @@
Current cluster status:
Online: [ dvossel-laptop2 ]

 vm (ocf::heartbeat:VirtualDomain): Stopped
 vm2 (ocf::heartbeat:VirtualDomain): Stopped
 FAKE (ocf::pacemaker:Dummy): Started dvossel-laptop2
- Master/Slave Set: W-master [W]
+ Clone Set: W-master [W] (promotable)
     Masters: [ dvossel-laptop2 ]
     Stopped: [ 18builder 18node1 ]
- Master/Slave Set: X-master [X]
+ Clone Set: X-master [X] (promotable)
     Masters: [ dvossel-laptop2 ]
     Stopped: [ 18builder 18node1 ]

Transition Summary:
 * Start vm (dvossel-laptop2)
 * Move FAKE ( dvossel-laptop2 -> 18builder )
 * Start W:1 (18builder)
 * Start X:1 (18builder)
 * Start 18builder (dvossel-laptop2)

Executing cluster transition:
 * Resource action: vm start on dvossel-laptop2
 * Resource action: FAKE stop on dvossel-laptop2
 * Pseudo action: W-master_start_0
 * Pseudo action: X-master_start_0
 * Resource action: 18builder monitor on dvossel-laptop2
 * Pseudo action: all_stopped
 * Resource action: 18builder start on dvossel-laptop2
 * Resource action: FAKE start on 18builder
 * Resource action: W start on 18builder
 * Pseudo action: W-master_running_0
 * Resource action: X start on 18builder
 * Pseudo action: X-master_running_0
 * Resource action: 18builder monitor=30000 on dvossel-laptop2
 * Resource action: W monitor=10000 on 18builder
 * Resource action: X monitor=10000 on 18builder

Revised cluster status:
Online: [ dvossel-laptop2 ]
Containers: [ 18builder:vm ]

 vm (ocf::heartbeat:VirtualDomain): Started dvossel-laptop2
 vm2 (ocf::heartbeat:VirtualDomain): Stopped
 FAKE (ocf::pacemaker:Dummy): Started 18builder
- Master/Slave Set: W-master [W]
+ Clone Set: W-master [W] (promotable)
     Masters: [ dvossel-laptop2 ]
     Slaves: [ 18builder ]
     Stopped: [ 18node1 ]
- Master/Slave Set: X-master [X]
+ Clone Set: X-master [X] (promotable)
     Masters: [ dvossel-laptop2 ]
     Slaves: [ 18builder ]
     Stopped: [ 18node1 ]
diff --git a/cts/scheduler/whitebox-ms-ordering-move.summary b/cts/scheduler/whitebox-ms-ordering-move.summary
index af86d7472a..1bcad449f3 100644
--- a/cts/scheduler/whitebox-ms-ordering-move.summary
+++ b/cts/scheduler/whitebox-ms-ordering-move.summary
@@ -1,106 +1,106 @@
Current cluster status:
Online: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]
Containers: [ lxc1:container1 lxc2:container2 ]

 Fencing (stonith:fence_xvm): Started rhel7-3
 FencingPass (stonith:fence_dummy): Started rhel7-4
 FencingFail (stonith:fence_dummy): Started rhel7-5
 rsc_rhel7-1 (ocf::heartbeat:IPaddr2): Started rhel7-1
 rsc_rhel7-2 (ocf::heartbeat:IPaddr2): Started rhel7-2
 rsc_rhel7-3 (ocf::heartbeat:IPaddr2): Started rhel7-3
 rsc_rhel7-4 (ocf::heartbeat:IPaddr2): Started rhel7-4
 rsc_rhel7-5 (ocf::heartbeat:IPaddr2): Started rhel7-5
 migrator (ocf::pacemaker:Dummy): Started rhel7-4
 Clone Set: Connectivity [ping-1]
     Started: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]
     Stopped: [ lxc1 lxc2 ]
- Master/Slave Set: master-1 [stateful-1]
+ Clone Set: master-1 [stateful-1] (promotable)
     Masters: [ rhel7-3 ]
     Slaves: [ rhel7-1 rhel7-2 rhel7-4 rhel7-5 ]
 Resource Group: group-1
     r192.168.122.207 (ocf::heartbeat:IPaddr2): Started rhel7-3
     petulant (service:DummySD): Started rhel7-3
     r192.168.122.208 (ocf::heartbeat:IPaddr2): Started rhel7-3
 lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Started rhel7-3
 container1 (ocf::heartbeat:VirtualDomain): Started rhel7-1
 container2 (ocf::heartbeat:VirtualDomain): Started rhel7-1
- Master/Slave Set: lxc-ms-master [lxc-ms]
+ Clone Set: lxc-ms-master [lxc-ms] (promotable)
     Masters: [ lxc1 ]
     Slaves: [ lxc2 ]

Transition Summary:
 * Move container1 ( rhel7-1 -> rhel7-2 )
 * Restart lxc-ms:0 (Master lxc1) due to required container1 start
 * Move lxc1 ( rhel7-1 -> rhel7-2 )

Executing cluster transition:
 * Resource action: rsc_rhel7-1 monitor on lxc2
 * Resource action: rsc_rhel7-2 monitor on lxc2
 * Resource action: rsc_rhel7-3 monitor on lxc2
 * Resource action: rsc_rhel7-4 monitor on lxc2
 * Resource action: rsc_rhel7-5 monitor on lxc2
 * Resource action: migrator monitor on lxc2
 * Resource action: ping-1 monitor on lxc2
 * Resource action: stateful-1 monitor on lxc2
 * Resource action: r192.168.122.207 monitor on lxc2
 * Resource action: petulant monitor on lxc2
 * Resource action: r192.168.122.208 monitor on lxc2
 * Resource action: lsb-dummy monitor on lxc2
 * Pseudo action: lxc-ms-master_demote_0
 * Resource action: lxc1 monitor on rhel7-5
 * Resource action: lxc1 monitor on rhel7-4
 * Resource action: lxc1 monitor on rhel7-3
 * Resource action: lxc1 monitor on rhel7-2
 * Resource action: lxc2 monitor on rhel7-5
 * Resource action: lxc2 monitor on rhel7-4
 * Resource action: lxc2 monitor on rhel7-3
 * Resource action: lxc2 monitor on rhel7-2
 * Resource action: lxc-ms demote on lxc1
 * Pseudo action: lxc-ms-master_demoted_0
 * Pseudo action: lxc-ms-master_stop_0
 * Resource action: lxc-ms stop on lxc1
 * Pseudo action: lxc-ms-master_stopped_0
 * Pseudo action: lxc-ms-master_start_0
 * Resource action: lxc1 stop on rhel7-1
 * Resource action: container1 stop on rhel7-1
 * Pseudo action: all_stopped
 * Resource action: container1 start on rhel7-2
 * Resource action: lxc1 start on rhel7-2
 * Resource action: lxc-ms start on lxc1
 * Pseudo action: lxc-ms-master_running_0
 * Resource action: lxc1 monitor=30000 on rhel7-2
 * Pseudo action: lxc-ms-master_promote_0
 * Resource action: lxc-ms promote on lxc1
 * Pseudo action: lxc-ms-master_promoted_0

Revised cluster status:
Online: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]
Containers: [ lxc1:container1 lxc2:container2 ]

 Fencing (stonith:fence_xvm): Started rhel7-3
 FencingPass (stonith:fence_dummy): Started rhel7-4
 FencingFail (stonith:fence_dummy): Started rhel7-5
 rsc_rhel7-1 (ocf::heartbeat:IPaddr2): Started rhel7-1
 rsc_rhel7-2 (ocf::heartbeat:IPaddr2): Started rhel7-2
 rsc_rhel7-3 (ocf::heartbeat:IPaddr2): Started rhel7-3
 rsc_rhel7-4 (ocf::heartbeat:IPaddr2): Started rhel7-4
 rsc_rhel7-5 (ocf::heartbeat:IPaddr2): Started rhel7-5
 migrator (ocf::pacemaker:Dummy): Started rhel7-4
 Clone Set: Connectivity [ping-1]
     Started: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]
     Stopped: [ lxc1 lxc2 ]
- Master/Slave Set: master-1 [stateful-1]
+ Clone Set: master-1 [stateful-1] (promotable)
     Masters: [ rhel7-3 ]
     Slaves: [ rhel7-1 rhel7-2 rhel7-4 rhel7-5 ]
 Resource Group: group-1
     r192.168.122.207 (ocf::heartbeat:IPaddr2): Started rhel7-3
     petulant (service:DummySD): Started rhel7-3
     r192.168.122.208 (ocf::heartbeat:IPaddr2): Started rhel7-3
 lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Started rhel7-3
 container1 (ocf::heartbeat:VirtualDomain): Started rhel7-2
 container2 (ocf::heartbeat:VirtualDomain): Started rhel7-1
- Master/Slave Set: lxc-ms-master [lxc-ms]
+ Clone Set: lxc-ms-master [lxc-ms] (promotable)
     Masters: [ lxc1 ]
     Slaves: [ lxc2 ]
diff --git a/cts/scheduler/whitebox-ms-ordering.summary b/cts/scheduler/whitebox-ms-ordering.summary
index 46fe9d1bb2..d4964bb63e 100644
--- a/cts/scheduler/whitebox-ms-ordering.summary
+++ b/cts/scheduler/whitebox-ms-ordering.summary
@@ -1,73 +1,73 @@
Current cluster status:
Online: [ 18node1 18node2 18node3 ]

 shooter (stonith:fence_xvm): Started 18node2
 container1 (ocf::heartbeat:VirtualDomain): FAILED
 container2 (ocf::heartbeat:VirtualDomain): FAILED
- Master/Slave Set: lxc-ms-master [lxc-ms]
+ Clone Set: lxc-ms-master [lxc-ms] (promotable)
     Stopped: [ 18node1 18node2 18node3 ]

Transition Summary:
 * Fence (reboot) lxc2 (resource: container2) 'guest is unclean'
 * Fence (reboot) lxc1 (resource: container1) 'guest is unclean'
 * Start container1 (18node1)
 * Start container2 (18node1)
 * Recover lxc-ms:0 (Master lxc1)
 * Recover lxc-ms:1 (Slave lxc2)
 * Start lxc1 (18node1)
 * Start lxc2 (18node1)

Executing cluster transition:
 * Resource action: container1 monitor on 18node3
 * Resource action: container1 monitor on 18node2
 * Resource action: container1 monitor on 18node1
 * Resource action: container2 monitor on 18node3
 * Resource action: container2 monitor on 18node2
 * Resource action: container2 monitor on 18node1
 * Resource action: lxc-ms monitor on 18node3
 * Resource action: lxc-ms monitor on 18node2
 * Resource action: lxc-ms monitor on 18node1
 * Pseudo action: lxc-ms-master_demote_0
 * Resource action: lxc1 monitor on 18node3
 * Resource action: lxc1 monitor on 18node2
 * Resource action: lxc1 monitor on 18node1
 * Resource action: lxc2 monitor on 18node3
 * Resource action: lxc2 monitor on 18node2
 * Resource action: lxc2 monitor on 18node1
 * Pseudo action: stonith-lxc2-reboot on lxc2
 * Pseudo action: stonith-lxc1-reboot on lxc1
 * Pseudo action: stonith_complete
 * Resource action: container1 start on 18node1
 * Resource action: container2 start on 18node1
 * Pseudo action: lxc-ms_demote_0
 * Pseudo action: lxc-ms-master_demoted_0
 * Pseudo action: lxc-ms-master_stop_0
 * Pseudo action: lxc-ms_stop_0
 * Pseudo action: lxc-ms_stop_0
 * Pseudo action: lxc-ms-master_stopped_0
 * Pseudo action: lxc-ms-master_start_0
 * Pseudo action: all_stopped
 * Resource action: lxc1 start on 18node1
 * Resource action: lxc2 start on 18node1
 * Resource action: lxc-ms start on lxc1
 * Resource action: lxc-ms start on lxc2
 * Pseudo action: lxc-ms-master_running_0
 * Resource action: lxc1 monitor=30000 on 18node1
 * Resource action: lxc2 monitor=30000 on 18node1
 * Resource action: lxc-ms monitor=10000 on lxc2
 * Pseudo action: lxc-ms-master_promote_0
 * Resource action: lxc-ms promote on lxc1
 * Pseudo action: lxc-ms-master_promoted_0

Revised cluster status:
Online: [ 18node1 18node2 18node3 ]
Containers: [ lxc1:container1 lxc2:container2 ]

 shooter (stonith:fence_xvm): Started 18node2
 container1 (ocf::heartbeat:VirtualDomain): Started 18node1
 container2 (ocf::heartbeat:VirtualDomain): Started 18node1
- Master/Slave Set: lxc-ms-master [lxc-ms]
+ Clone Set: lxc-ms-master [lxc-ms] (promotable)
     Masters: [ lxc1 ]
     Slaves: [ lxc2 ]
diff --git a/cts/scheduler/whitebox-orphan-ms.summary b/cts/scheduler/whitebox-orphan-ms.summary
index 71f87c5522..66b106f914 100644
--- a/cts/scheduler/whitebox-orphan-ms.summary
+++ b/cts/scheduler/whitebox-orphan-ms.summary
@@ -1,86 +1,86 @@
Current cluster status:
Online: [ 18node1 18node2 18node3 ]
Containers: [ lxc1:container1 lxc2:container2 ]

 Fencing (stonith:fence_xvm): Started 18node2
 FencingPass (stonith:fence_dummy): Started 18node3
 FencingFail (stonith:fence_dummy): Started 18node3
 rsc_18node1 (ocf::heartbeat:IPaddr2): Started 18node1
 rsc_18node2 (ocf::heartbeat:IPaddr2): Started 18node2
 rsc_18node3 (ocf::heartbeat:IPaddr2): Started 18node3
 migrator (ocf::pacemaker:Dummy): Started 18node1
 Clone Set: Connectivity [ping-1]
     Started: [ 18node1 18node2 18node3 ]
- Master/Slave Set: master-1 [stateful-1]
+ Clone Set: master-1 [stateful-1] (promotable)
     Masters: [ 18node1 ]
     Slaves: [ 18node2 18node3 ]
 Resource Group: group-1
     r192.168.122.87 (ocf::heartbeat:IPaddr2): Started 18node1
     r192.168.122.88 (ocf::heartbeat:IPaddr2): Started 18node1
     r192.168.122.89 (ocf::heartbeat:IPaddr2): Started 18node1
 lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Started 18node1
 container2 (ocf::heartbeat:VirtualDomain): ORPHANED Started 18node1
 lxc1 (ocf::pacemaker:remote): ORPHANED Started 18node1
 lxc-ms (ocf::pacemaker:Stateful): ORPHANED Master [ lxc1 lxc2 ]
 lxc2 (ocf::pacemaker:remote): ORPHANED Started 18node1
 container1 (ocf::heartbeat:VirtualDomain): ORPHANED Started 18node1

Transition Summary:
 * Move FencingFail ( 18node3 -> 18node1 )
 * Stop container2 (18node1) due to node availability
 * Stop lxc1 (18node1) due to node availability
 * Stop lxc-ms ( Master lxc1 ) due to node availability
 * Stop lxc-ms ( Master lxc2 ) due to node availability
 * Stop lxc2 (18node1) due to node availability
 * Stop container1 (18node1) due to node availability

Executing cluster transition:
 * Resource action: FencingFail stop on 18node3
 * Resource action: lxc-ms demote on lxc2
 * Resource action: lxc-ms demote on lxc1
 * Resource action: FencingFail start on 18node1
 * Resource action: lxc-ms stop on lxc2
 * Resource action: lxc-ms stop on lxc1
 * Resource action: lxc-ms delete on 18node3
 * Resource action: lxc-ms delete on 18node2
 * Resource action: lxc-ms delete on 18node1
 * Resource action: lxc2 stop on 18node1
 * Resource action: lxc2 delete on 18node3
 * Resource action: lxc2 delete on 18node2
 * Resource action: lxc2 delete on 18node1
 * Resource action: container2 stop on 18node1
 * Resource action: container2 delete on 18node3
 * Resource action: container2 delete on 18node2
 * Resource action: container2 delete on 18node1
 * Resource action: lxc1 stop on 18node1
 * Resource action: lxc1 delete on 18node3
 * Resource action: lxc1 delete on 18node2
 * Resource action: lxc1 delete on 18node1
 * Resource action: container1 stop on 18node1
 * Resource action: container1 delete on 18node3
 * Resource action: container1 delete on 18node2
 * Resource action: container1 delete on 18node1
 * Pseudo action: all_stopped

Revised cluster status:
Online: [ 18node1 18node2 18node3 ]

 Fencing (stonith:fence_xvm): Started 18node2
 FencingPass (stonith:fence_dummy): Started 18node3
 FencingFail (stonith:fence_dummy): Started 18node1
 rsc_18node1 (ocf::heartbeat:IPaddr2): Started 18node1
 rsc_18node2 (ocf::heartbeat:IPaddr2): Started 18node2
 rsc_18node3 (ocf::heartbeat:IPaddr2): Started 18node3
 migrator (ocf::pacemaker:Dummy): Started 18node1
 Clone Set: Connectivity [ping-1]
     Started: [ 18node1 18node2 18node3 ]
- Master/Slave Set: master-1 [stateful-1]
+ Clone Set: master-1 [stateful-1] (promotable)
     Masters: [ 18node1 ]
     Slaves: [ 18node2 18node3 ]
 Resource Group: group-1
     r192.168.122.87 (ocf::heartbeat:IPaddr2): Started 18node1
     r192.168.122.88 (ocf::heartbeat:IPaddr2): Started 18node1
     r192.168.122.89 (ocf::heartbeat:IPaddr2): Started 18node1
 lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Started 18node1
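
Reviewer note (not part of the patch): the display rename these summaries assert -- "Master/Slave Set: X [Y]" becoming "Clone Set: X [Y] (promotable)" -- reflects Pacemaker's deprecation of the dedicated master/slave resource type in favor of an ordinary clone carrying the promotable meta-attribute. A minimal sketch of the two equivalent CIB configurations, with illustrative IDs not taken from these tests:

    <!-- Legacy syntax: a dedicated master/slave wrapper element -->
    <master id="ms1">
      <primitive id="rsc1" class="ocf" provider="pacemaker" type="Stateful"/>
    </master>

    <!-- Current syntax: a plain clone marked promotable via a meta-attribute -->
    <clone id="ms1">
      <meta_attributes id="ms1-meta">
        <nvpair id="ms1-meta-promotable" name="promotable" value="true"/>
      </meta_attributes>
      <primitive id="rsc1" class="ocf" provider="pacemaker" type="Stateful"/>
    </clone>

Either form should be reported in status output as "Clone Set: ms1 [rsc1] (promotable)", which is the string these regenerated expected outputs check for.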