diff --git a/cts/scheduler/8-am-then-bm-a-migrating-b-stopping.summary b/cts/scheduler/8-am-then-bm-a-migrating-b-stopping.summary index a5120dc215..f49be0a2f7 100644 --- a/cts/scheduler/8-am-then-bm-a-migrating-b-stopping.summary +++ b/cts/scheduler/8-am-then-bm-a-migrating-b-stopping.summary @@ -1,26 +1,26 @@ -1 of 2 resources DISABLED and 0 BLOCKED from being started due to failures +1 of 2 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: Online: [ 18node1 18node2 18node3 ] A (ocf::heartbeat:Dummy): Started 18node1 B (ocf::heartbeat:Dummy): Started 18node2 (disabled) Transition Summary: * Migrate A ( 18node1 -> 18node2 ) * Stop B ( 18node2 ) due to node availability Executing cluster transition: * Resource action: B stop on 18node2 * Resource action: A migrate_to on 18node1 * Resource action: A migrate_from on 18node2 * Resource action: A stop on 18node1 * Pseudo action: A_start_0 * Resource action: A monitor=60000 on 18node2 Revised cluster status: Online: [ 18node1 18node2 18node3 ] A (ocf::heartbeat:Dummy): Started 18node2 B (ocf::heartbeat:Dummy): Stopped (disabled) diff --git a/cts/scheduler/9-am-then-bm-b-migrating-a-stopping.summary b/cts/scheduler/9-am-then-bm-b-migrating-a-stopping.summary index c06ee69d45..6ee493982a 100644 --- a/cts/scheduler/9-am-then-bm-b-migrating-a-stopping.summary +++ b/cts/scheduler/9-am-then-bm-b-migrating-a-stopping.summary @@ -1,22 +1,22 @@ -1 of 2 resources DISABLED and 0 BLOCKED from being started due to failures +1 of 2 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: Online: [ 18node1 18node2 18node3 ] A (ocf::heartbeat:Dummy): Started 18node1 (disabled) B (ocf::heartbeat:Dummy): Started 18node2 Transition Summary: * Stop A ( 18node1 ) due to node availability * Stop B ( 18node2 ) due to unrunnable A start Executing cluster transition: * Resource action: B stop on 18node2 * Resource action: A stop on 18node1 Revised cluster status: Online: [ 18node1 18node2 18node3 ] A (ocf::heartbeat:Dummy): Stopped (disabled) B (ocf::heartbeat:Dummy): Stopped diff --git a/cts/scheduler/asymmetrical-order-move.summary b/cts/scheduler/asymmetrical-order-move.summary index 9c0ff74ab1..5ca5740da2 100644 --- a/cts/scheduler/asymmetrical-order-move.summary +++ b/cts/scheduler/asymmetrical-order-move.summary @@ -1,24 +1,24 @@ Using the original execution date of: 2016-04-28 11:50:29Z -1 of 3 resources DISABLED and 0 BLOCKED from being started due to failures +1 of 3 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: Online: [ sle12sp2-1 sle12sp2-2 ] st_sbd (stonith:external/sbd): Started sle12sp2-2 dummy1 (ocf::pacemaker:Dummy): Stopped (disabled) dummy2 (ocf::pacemaker:Dummy): Started sle12sp2-1 Transition Summary: * Stop dummy2 ( sle12sp2-1 ) due to unrunnable dummy1 start Executing cluster transition: * Resource action: dummy2 stop on sle12sp2-1 Using the original execution date of: 2016-04-28 11:50:29Z Revised cluster status: Online: [ sle12sp2-1 sle12sp2-2 ] st_sbd (stonith:external/sbd): Started sle12sp2-2 dummy1 (ocf::pacemaker:Dummy): Stopped (disabled) dummy2 (ocf::pacemaker:Dummy): Stopped diff --git a/cts/scheduler/asymmetrical-order-restart.summary b/cts/scheduler/asymmetrical-order-restart.summary index 4ef8b8272b..b6bdcf244b 100644 --- a/cts/scheduler/asymmetrical-order-restart.summary +++ b/cts/scheduler/asymmetrical-order-restart.summary @@ -1,28 +1,28 @@ Using the original execution date of: 
2018-08-09 18:55:41Z -1 of 3 resources DISABLED and 0 BLOCKED from being started due to failures +1 of 3 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: Online: [ cesr105-p16 cesr109-p16 ] cesr104ipmi (stonith:fence_ipmilan): Started cesr105-p16 sleep_a (ocf::classe:anything): Stopped (disabled) sleep_b (ocf::classe:anything): FAILED cesr109-p16 Transition Summary: * Restart cesr104ipmi ( cesr105-p16 ) due to resource definition change * Stop sleep_b ( cesr109-p16 ) due to unrunnable sleep_a start Executing cluster transition: * Resource action: cesr104ipmi stop on cesr105-p16 * Resource action: cesr104ipmi start on cesr105-p16 * Resource action: cesr104ipmi monitor=60000 on cesr105-p16 * Resource action: sleep_b stop on cesr109-p16 Using the original execution date of: 2018-08-09 18:55:41Z Revised cluster status: Online: [ cesr105-p16 cesr109-p16 ] cesr104ipmi (stonith:fence_ipmilan): Started cesr105-p16 sleep_a (ocf::classe:anything): Stopped (disabled) sleep_b (ocf::classe:anything): Stopped diff --git a/cts/scheduler/bug-1718.summary b/cts/scheduler/bug-1718.summary index 8d25c97e26..0b74a157f4 100644 --- a/cts/scheduler/bug-1718.summary +++ b/cts/scheduler/bug-1718.summary @@ -1,41 +1,41 @@ -2 of 5 resources DISABLED and 0 BLOCKED from being started due to failures +1 of 5 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: Online: [ biggame.ds9 heartbeat.ds9 ops.ds9 ] OFFLINE: [ defiant.ds9 warbird.ds9 ] Resource Group: Web_Group Apache_IP (ocf::heartbeat:IPaddr): Started heartbeat.ds9 resource_IP2 (ocf::heartbeat:IPaddr): Stopped (disabled) resource_dummyweb (ocf::heartbeat:Dummy): Stopped Resource Group: group_fUN resource_IP3 (ocf::heartbeat:IPaddr): Started ops.ds9 resource_dummy (ocf::heartbeat:Dummy): Started ops.ds9 Transition Summary: * Stop resource_IP3 ( ops.ds9 ) due to unrunnable Web_Group running * Stop resource_dummy ( ops.ds9 ) due to required resource_IP3 start Executing cluster transition: * Pseudo action: group_fUN_stop_0 * Resource action: resource_dummy stop on ops.ds9 * Resource action: OpenVPN_IP delete on ops.ds9 * Resource action: OpenVPN_IP delete on heartbeat.ds9 * Resource action: Apache delete on biggame.ds9 * Resource action: Apache delete on ops.ds9 * Resource action: Apache delete on heartbeat.ds9 * Resource action: resource_IP3 stop on ops.ds9 * Pseudo action: group_fUN_stopped_0 Revised cluster status: Online: [ biggame.ds9 heartbeat.ds9 ops.ds9 ] OFFLINE: [ defiant.ds9 warbird.ds9 ] Resource Group: Web_Group Apache_IP (ocf::heartbeat:IPaddr): Started heartbeat.ds9 resource_IP2 (ocf::heartbeat:IPaddr): Stopped (disabled) resource_dummyweb (ocf::heartbeat:Dummy): Stopped Resource Group: group_fUN resource_IP3 (ocf::heartbeat:IPaddr): Stopped resource_dummy (ocf::heartbeat:Dummy): Stopped diff --git a/cts/scheduler/bug-5014-A-stop-B-started.summary b/cts/scheduler/bug-5014-A-stop-B-started.summary index 765f3924e2..a122602f9b 100644 --- a/cts/scheduler/bug-5014-A-stop-B-started.summary +++ b/cts/scheduler/bug-5014-A-stop-B-started.summary @@ -1,20 +1,20 @@ -1 of 2 resources DISABLED and 0 BLOCKED from being started due to failures +1 of 2 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: Online: [ fc16-builder ] ClusterIP (ocf::heartbeat:IPaddr2): Started fc16-builder (disabled) ClusterIP2 (ocf::heartbeat:IPaddr2): Started fc16-builder Transition Summary: * Stop ClusterIP ( fc16-builder ) due to node 
availability Executing cluster transition: * Resource action: ClusterIP stop on fc16-builder Revised cluster status: Online: [ fc16-builder ] ClusterIP (ocf::heartbeat:IPaddr2): Stopped (disabled) ClusterIP2 (ocf::heartbeat:IPaddr2): Started fc16-builder diff --git a/cts/scheduler/bug-5014-A-stopped-B-stopped.summary b/cts/scheduler/bug-5014-A-stopped-B-stopped.summary index b5a83f6b28..280f514f01 100644 --- a/cts/scheduler/bug-5014-A-stopped-B-stopped.summary +++ b/cts/scheduler/bug-5014-A-stopped-B-stopped.summary @@ -1,21 +1,21 @@ -1 of 2 resources DISABLED and 0 BLOCKED from being started due to failures +1 of 2 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: Online: [ fc16-builder ] ClusterIP (ocf::heartbeat:IPaddr2): Stopped (disabled) ClusterIP2 (ocf::heartbeat:IPaddr2): Stopped Transition Summary: * Start ClusterIP2 ( fc16-builder ) due to unrunnable ClusterIP start (blocked) Executing cluster transition: * Resource action: ClusterIP monitor on fc16-builder * Resource action: ClusterIP2 monitor on fc16-builder Revised cluster status: Online: [ fc16-builder ] ClusterIP (ocf::heartbeat:IPaddr2): Stopped (disabled) ClusterIP2 (ocf::heartbeat:IPaddr2): Stopped diff --git a/cts/scheduler/bug-5014-CLONE-A-stop-B-started.summary b/cts/scheduler/bug-5014-CLONE-A-stop-B-started.summary index 9efb43813f..0f3096cd84 100644 --- a/cts/scheduler/bug-5014-CLONE-A-stop-B-started.summary +++ b/cts/scheduler/bug-5014-CLONE-A-stop-B-started.summary @@ -1,26 +1,26 @@ -1 of 2 resources DISABLED and 0 BLOCKED from being started due to failures +1 of 2 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: Online: [ fc16-builder ] Clone Set: clone1 [ClusterIP] Started: [ fc16-builder ] Clone Set: clone2 [ClusterIP2] Started: [ fc16-builder ] Transition Summary: * Stop ClusterIP:0 ( fc16-builder ) due to node availability Executing cluster transition: * Pseudo action: clone1_stop_0 * Resource action: ClusterIP:0 stop on fc16-builder * Pseudo action: clone1_stopped_0 Revised cluster status: Online: [ fc16-builder ] Clone Set: clone1 [ClusterIP] Stopped (disabled): [ fc16-builder ] Clone Set: clone2 [ClusterIP2] Started: [ fc16-builder ] diff --git a/cts/scheduler/bug-5014-CthenAthenB-C-stopped.summary b/cts/scheduler/bug-5014-CthenAthenB-C-stopped.summary index 57068a673c..96569d51c2 100644 --- a/cts/scheduler/bug-5014-CthenAthenB-C-stopped.summary +++ b/cts/scheduler/bug-5014-CthenAthenB-C-stopped.summary @@ -1,25 +1,25 @@ -1 of 3 resources DISABLED and 0 BLOCKED from being started due to failures +1 of 3 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: Online: [ fc16-builder ] ClusterIP (ocf::heartbeat:IPaddr2): Stopped ClusterIP2 (ocf::heartbeat:IPaddr2): Stopped ClusterIP3 (ocf::heartbeat:IPaddr2): Stopped (disabled) Transition Summary: * Start ClusterIP ( fc16-builder ) due to unrunnable ClusterIP3 start (blocked) * Start ClusterIP2 ( fc16-builder ) due to unrunnable ClusterIP start (blocked) Executing cluster transition: * Resource action: ClusterIP monitor on fc16-builder * Resource action: ClusterIP2 monitor on fc16-builder * Resource action: ClusterIP3 monitor on fc16-builder Revised cluster status: Online: [ fc16-builder ] ClusterIP (ocf::heartbeat:IPaddr2): Stopped ClusterIP2 (ocf::heartbeat:IPaddr2): Stopped ClusterIP3 (ocf::heartbeat:IPaddr2): Stopped (disabled) diff --git a/cts/scheduler/bug-5014-GROUP-A-stopped-B-started.summary 
b/cts/scheduler/bug-5014-GROUP-A-stopped-B-started.summary index e363718bc0..eacffdc667 100644 --- a/cts/scheduler/bug-5014-GROUP-A-stopped-B-started.summary +++ b/cts/scheduler/bug-5014-GROUP-A-stopped-B-started.summary @@ -1,26 +1,26 @@ -2 of 2 resources DISABLED and 0 BLOCKED from being started due to failures +1 of 2 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: Online: [ fc16-builder ] Resource Group: group1 ClusterIP (ocf::heartbeat:IPaddr2): Started fc16-builder (disabled) Resource Group: group2 ClusterIP2 (ocf::heartbeat:IPaddr2): Started fc16-builder Transition Summary: * Stop ClusterIP ( fc16-builder ) due to node availability Executing cluster transition: * Pseudo action: group1_stop_0 * Resource action: ClusterIP stop on fc16-builder * Pseudo action: group1_stopped_0 Revised cluster status: Online: [ fc16-builder ] Resource Group: group1 ClusterIP (ocf::heartbeat:IPaddr2): Stopped (disabled) Resource Group: group2 ClusterIP2 (ocf::heartbeat:IPaddr2): Started fc16-builder diff --git a/cts/scheduler/bug-5014-GROUP-A-stopped-B-stopped.summary b/cts/scheduler/bug-5014-GROUP-A-stopped-B-stopped.summary index c6c88faf39..f61161f45b 100644 --- a/cts/scheduler/bug-5014-GROUP-A-stopped-B-stopped.summary +++ b/cts/scheduler/bug-5014-GROUP-A-stopped-B-stopped.summary @@ -1,23 +1,23 @@ -2 of 2 resources DISABLED and 0 BLOCKED from being started due to failures +1 of 2 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: Online: [ fc16-builder ] Resource Group: group1 ClusterIP (ocf::heartbeat:IPaddr2): Stopped (disabled) Resource Group: group2 ClusterIP2 (ocf::heartbeat:IPaddr2): Stopped Transition Summary: * Start ClusterIP2 ( fc16-builder ) due to unrunnable group1 running (blocked) Executing cluster transition: Revised cluster status: Online: [ fc16-builder ] Resource Group: group1 ClusterIP (ocf::heartbeat:IPaddr2): Stopped (disabled) Resource Group: group2 ClusterIP2 (ocf::heartbeat:IPaddr2): Stopped diff --git a/cts/scheduler/bug-5014-ordered-set-symmetrical-false.summary b/cts/scheduler/bug-5014-ordered-set-symmetrical-false.summary index 9d469c8556..cad1afe0e1 100644 --- a/cts/scheduler/bug-5014-ordered-set-symmetrical-false.summary +++ b/cts/scheduler/bug-5014-ordered-set-symmetrical-false.summary @@ -1,24 +1,24 @@ -1 of 3 resources DISABLED and 0 BLOCKED from being started due to failures +1 of 3 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: Online: [ fc16-builder ] OFFLINE: [ fc16-builder2 ] A (ocf::pacemaker:Dummy): Started fc16-builder B (ocf::pacemaker:Dummy): Started fc16-builder C (ocf::pacemaker:Dummy): Started fc16-builder (disabled) Transition Summary: * Stop C ( fc16-builder ) due to node availability Executing cluster transition: * Resource action: C stop on fc16-builder Revised cluster status: Online: [ fc16-builder ] OFFLINE: [ fc16-builder2 ] A (ocf::pacemaker:Dummy): Started fc16-builder B (ocf::pacemaker:Dummy): Started fc16-builder C (ocf::pacemaker:Dummy): Stopped (disabled) diff --git a/cts/scheduler/bug-5014-ordered-set-symmetrical-true.summary b/cts/scheduler/bug-5014-ordered-set-symmetrical-true.summary index 516223e544..0e02f167aa 100644 --- a/cts/scheduler/bug-5014-ordered-set-symmetrical-true.summary +++ b/cts/scheduler/bug-5014-ordered-set-symmetrical-true.summary @@ -1,26 +1,26 @@ -1 of 3 resources DISABLED and 0 BLOCKED from being started due to failures +1 of 3 resource instances DISABLED and 0 
BLOCKED from further action due to failure Current cluster status: Online: [ fc16-builder ] OFFLINE: [ fc16-builder2 ] A (ocf::pacemaker:Dummy): Started fc16-builder B (ocf::pacemaker:Dummy): Started fc16-builder C (ocf::pacemaker:Dummy): Started fc16-builder (disabled) Transition Summary: * Stop A ( fc16-builder ) due to required C start * Stop C ( fc16-builder ) due to node availability Executing cluster transition: * Resource action: A stop on fc16-builder * Resource action: C stop on fc16-builder Revised cluster status: Online: [ fc16-builder ] OFFLINE: [ fc16-builder2 ] A (ocf::pacemaker:Dummy): Stopped B (ocf::pacemaker:Dummy): Started fc16-builder C (ocf::pacemaker:Dummy): Stopped (disabled) diff --git a/cts/scheduler/bug-5028-bottom.summary b/cts/scheduler/bug-5028-bottom.summary index 11c719bebd..c7329022c9 100644 --- a/cts/scheduler/bug-5028-bottom.summary +++ b/cts/scheduler/bug-5028-bottom.summary @@ -1,23 +1,24 @@ +0 of 2 resource instances DISABLED and 1 BLOCKED from further action due to failure Current cluster status: Online: [ bl460g6a bl460g6b ] Resource Group: dummy-g dummy01 (ocf::heartbeat:Dummy): FAILED bl460g6a (blocked) dummy02 (ocf::heartbeat:Dummy-stop-NG): Started bl460g6a Transition Summary: * Shutdown bl460g6a * Stop dummy02 ( bl460g6a ) due to node availability Executing cluster transition: * Pseudo action: dummy-g_stop_0 * Resource action: dummy02 stop on bl460g6a Revised cluster status: Online: [ bl460g6a bl460g6b ] Resource Group: dummy-g dummy01 (ocf::heartbeat:Dummy): FAILED bl460g6a (blocked) dummy02 (ocf::heartbeat:Dummy-stop-NG): Stopped diff --git a/cts/scheduler/bug-5028-detach.summary b/cts/scheduler/bug-5028-detach.summary index 691a7fd3a9..9930c5bb3d 100644 --- a/cts/scheduler/bug-5028-detach.summary +++ b/cts/scheduler/bug-5028-detach.summary @@ -1,24 +1,25 @@ *** Resource management is DISABLED *** The cluster will not attempt to start, stop or recover services +0 of 2 resource instances DISABLED and 1 BLOCKED from further action due to failure Current cluster status: Online: [ bl460g6a bl460g6b ] Resource Group: dummy-g dummy01 (ocf::heartbeat:Dummy): Started bl460g6a (unmanaged) dummy02 (ocf::heartbeat:Dummy-stop-NG): FAILED bl460g6a (blocked) Transition Summary: * Shutdown bl460g6a Executing cluster transition: * Cluster action: do_shutdown on bl460g6a Revised cluster status: Online: [ bl460g6a bl460g6b ] Resource Group: dummy-g dummy01 (ocf::heartbeat:Dummy): Started bl460g6a (unmanaged) dummy02 (ocf::heartbeat:Dummy-stop-NG): FAILED bl460g6a (blocked) diff --git a/cts/scheduler/bug-5028.summary b/cts/scheduler/bug-5028.summary index 5a08468b28..97fe35c6ce 100644 --- a/cts/scheduler/bug-5028.summary +++ b/cts/scheduler/bug-5028.summary @@ -1,23 +1,24 @@ +0 of 2 resource instances DISABLED and 1 BLOCKED from further action due to failure Current cluster status: Online: [ bl460g6a bl460g6b ] Resource Group: dummy-g dummy01 (ocf::heartbeat:Dummy): Started bl460g6a dummy02 (ocf::heartbeat:Dummy-stop-NG): FAILED bl460g6a (blocked) Transition Summary: * Shutdown bl460g6a * Stop dummy01 ( bl460g6a ) due to unrunnable dummy02 stop (blocked) Executing cluster transition: * Pseudo action: dummy-g_stop_0 * Pseudo action: dummy-g_start_0 Revised cluster status: Online: [ bl460g6a bl460g6b ] Resource Group: dummy-g dummy01 (ocf::heartbeat:Dummy): Started bl460g6a dummy02 (ocf::heartbeat:Dummy-stop-NG): FAILED bl460g6a (blocked) diff --git a/cts/scheduler/bug-5140-require-all-false.summary b/cts/scheduler/bug-5140-require-all-false.summary index 
0c52cc4276..caf56e47af 100644 --- a/cts/scheduler/bug-5140-require-all-false.summary +++ b/cts/scheduler/bug-5140-require-all-false.summary @@ -1,80 +1,80 @@ -4 of 35 resources DISABLED and 0 BLOCKED from being started due to failures +4 of 35 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: Node hex-1: standby Node hex-2: standby Node hex-3: OFFLINE (standby) fencing (stonith:external/sbd): Stopped Clone Set: baseclone [basegrp] Resource Group: basegrp:0 dlm (ocf::pacemaker:controld): Started hex-2 clvmd (ocf::lvm2:clvmd): Started hex-2 o2cb (ocf::ocfs2:o2cb): Started hex-2 vg1 (ocf::heartbeat:LVM): Stopped fs-ocfs-1 (ocf::heartbeat:Filesystem): Stopped Stopped: [ hex-1 hex-3 ] fs-xfs-1 (ocf::heartbeat:Filesystem): Stopped Clone Set: fs2 [fs-ocfs-2] Stopped: [ hex-1 hex-2 hex-3 ] Clone Set: ms-r0 [drbd-r0] (promotable) Stopped (disabled): [ hex-1 hex-2 hex-3 ] Clone Set: ms-r1 [drbd-r1] (promotable) Stopped (disabled): [ hex-1 hex-2 hex-3 ] Resource Group: md0-group md0 (ocf::heartbeat:Raid1): Stopped vg-md0 (ocf::heartbeat:LVM): Stopped fs-md0 (ocf::heartbeat:Filesystem): Stopped dummy1 (ocf::heartbeat:Delay): Stopped dummy3 (ocf::heartbeat:Delay): Stopped dummy4 (ocf::heartbeat:Delay): Stopped dummy5 (ocf::heartbeat:Delay): Stopped dummy6 (ocf::heartbeat:Delay): Stopped Resource Group: r0-group fs-r0 (ocf::heartbeat:Filesystem): Stopped dummy2 (ocf::heartbeat:Delay): Stopped cluster-md0 (ocf::heartbeat:Raid1): Stopped Transition Summary: * Stop dlm:0 ( hex-2 ) due to node availability * Stop clvmd:0 ( hex-2 ) due to node availability * Stop o2cb:0 ( hex-2 ) due to node availability Executing cluster transition: * Pseudo action: baseclone_stop_0 * Pseudo action: basegrp:0_stop_0 * Resource action: o2cb stop on hex-2 * Resource action: clvmd stop on hex-2 * Resource action: dlm stop on hex-2 * Pseudo action: basegrp:0_stopped_0 * Pseudo action: baseclone_stopped_0 Revised cluster status: Node hex-1: standby Node hex-2: standby Node hex-3: OFFLINE (standby) fencing (stonith:external/sbd): Stopped Clone Set: baseclone [basegrp] Stopped: [ hex-1 hex-2 hex-3 ] fs-xfs-1 (ocf::heartbeat:Filesystem): Stopped Clone Set: fs2 [fs-ocfs-2] Stopped: [ hex-1 hex-2 hex-3 ] Clone Set: ms-r0 [drbd-r0] (promotable) Stopped (disabled): [ hex-1 hex-2 hex-3 ] Clone Set: ms-r1 [drbd-r1] (promotable) Stopped (disabled): [ hex-1 hex-2 hex-3 ] Resource Group: md0-group md0 (ocf::heartbeat:Raid1): Stopped vg-md0 (ocf::heartbeat:LVM): Stopped fs-md0 (ocf::heartbeat:Filesystem): Stopped dummy1 (ocf::heartbeat:Delay): Stopped dummy3 (ocf::heartbeat:Delay): Stopped dummy4 (ocf::heartbeat:Delay): Stopped dummy5 (ocf::heartbeat:Delay): Stopped dummy6 (ocf::heartbeat:Delay): Stopped Resource Group: r0-group fs-r0 (ocf::heartbeat:Filesystem): Stopped dummy2 (ocf::heartbeat:Delay): Stopped cluster-md0 (ocf::heartbeat:Raid1): Stopped diff --git a/cts/scheduler/bug-5143-ms-shuffle.summary b/cts/scheduler/bug-5143-ms-shuffle.summary index 826dfbd075..568f73ce08 100644 --- a/cts/scheduler/bug-5143-ms-shuffle.summary +++ b/cts/scheduler/bug-5143-ms-shuffle.summary @@ -1,75 +1,75 @@ -2 of 34 resources DISABLED and 0 BLOCKED from being started due to failures +1 of 34 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: Online: [ hex-1 hex-2 hex-3 ] fencing (stonith:external/sbd): Started hex-1 Clone Set: baseclone [basegrp] Started: [ hex-1 hex-2 hex-3 ] fs-xfs-1 (ocf::heartbeat:Filesystem): Started hex-2 Clone Set: fs2 
[fs-ocfs-2] Started: [ hex-1 hex-2 hex-3 ] Clone Set: ms-r0 [drbd-r0] (promotable) Masters: [ hex-1 ] Slaves: [ hex-2 ] Clone Set: ms-r1 [drbd-r1] (promotable) Slaves: [ hex-2 hex-3 ] Resource Group: md0-group md0 (ocf::heartbeat:Raid1): Started hex-3 vg-md0 (ocf::heartbeat:LVM): Started hex-3 fs-md0 (ocf::heartbeat:Filesystem): Started hex-3 dummy1 (ocf::heartbeat:Delay): Started hex-3 dummy3 (ocf::heartbeat:Delay): Started hex-1 dummy4 (ocf::heartbeat:Delay): Started hex-2 dummy5 (ocf::heartbeat:Delay): Started hex-1 dummy6 (ocf::heartbeat:Delay): Started hex-2 Resource Group: r0-group fs-r0 (ocf::heartbeat:Filesystem): Stopped (disabled) dummy2 (ocf::heartbeat:Delay): Stopped Transition Summary: * Promote drbd-r1:1 ( Slave -> Master hex-3 ) Executing cluster transition: * Pseudo action: ms-r1_pre_notify_promote_0 * Resource action: drbd-r1 notify on hex-2 * Resource action: drbd-r1 notify on hex-3 * Pseudo action: ms-r1_confirmed-pre_notify_promote_0 * Pseudo action: ms-r1_promote_0 * Resource action: drbd-r1 promote on hex-3 * Pseudo action: ms-r1_promoted_0 * Pseudo action: ms-r1_post_notify_promoted_0 * Resource action: drbd-r1 notify on hex-2 * Resource action: drbd-r1 notify on hex-3 * Pseudo action: ms-r1_confirmed-post_notify_promoted_0 * Resource action: drbd-r1 monitor=29000 on hex-2 * Resource action: drbd-r1 monitor=31000 on hex-3 Revised cluster status: Online: [ hex-1 hex-2 hex-3 ] fencing (stonith:external/sbd): Started hex-1 Clone Set: baseclone [basegrp] Started: [ hex-1 hex-2 hex-3 ] fs-xfs-1 (ocf::heartbeat:Filesystem): Started hex-2 Clone Set: fs2 [fs-ocfs-2] Started: [ hex-1 hex-2 hex-3 ] Clone Set: ms-r0 [drbd-r0] (promotable) Masters: [ hex-1 ] Slaves: [ hex-2 ] Clone Set: ms-r1 [drbd-r1] (promotable) Masters: [ hex-3 ] Slaves: [ hex-2 ] Resource Group: md0-group md0 (ocf::heartbeat:Raid1): Started hex-3 vg-md0 (ocf::heartbeat:LVM): Started hex-3 fs-md0 (ocf::heartbeat:Filesystem): Started hex-3 dummy1 (ocf::heartbeat:Delay): Started hex-3 dummy3 (ocf::heartbeat:Delay): Started hex-1 dummy4 (ocf::heartbeat:Delay): Started hex-2 dummy5 (ocf::heartbeat:Delay): Started hex-1 dummy6 (ocf::heartbeat:Delay): Started hex-2 Resource Group: r0-group fs-r0 (ocf::heartbeat:Filesystem): Stopped (disabled) dummy2 (ocf::heartbeat:Delay): Stopped diff --git a/cts/scheduler/bug-cl-5170.summary b/cts/scheduler/bug-cl-5170.summary index 3136c121c6..e6fa234992 100644 --- a/cts/scheduler/bug-cl-5170.summary +++ b/cts/scheduler/bug-cl-5170.summary @@ -1,33 +1,34 @@ +0 of 4 resource instances DISABLED and 1 BLOCKED from further action due to failure Current cluster status: Node TCS-1: OFFLINE (standby) Online: [ TCS-2 ] Resource Group: svc ip_trf (ocf::heartbeat:IPaddr2): Started TCS-2 ip_mgmt (ocf::heartbeat:IPaddr2): Started TCS-2 Clone Set: cl_tomcat_nms [d_tomcat_nms] d_tomcat_nms (ocf::ntc:tomcat): FAILED TCS-2 (blocked) Stopped: [ TCS-1 ] Transition Summary: * Stop ip_trf ( TCS-2 ) due to node availability * Stop ip_mgmt ( TCS-2 ) due to node availability Executing cluster transition: * Pseudo action: svc_stop_0 * Resource action: ip_mgmt stop on TCS-2 * Resource action: ip_trf stop on TCS-2 * Pseudo action: svc_stopped_0 Revised cluster status: Node TCS-1: OFFLINE (standby) Online: [ TCS-2 ] Resource Group: svc ip_trf (ocf::heartbeat:IPaddr2): Stopped ip_mgmt (ocf::heartbeat:IPaddr2): Stopped Clone Set: cl_tomcat_nms [d_tomcat_nms] d_tomcat_nms (ocf::ntc:tomcat): FAILED TCS-2 (blocked) Stopped: [ TCS-1 ] diff --git a/cts/scheduler/bug-cl-5219.summary 
b/cts/scheduler/bug-cl-5219.summary index 97a66fb6ae..d8921c59f3 100644 --- a/cts/scheduler/bug-cl-5219.summary +++ b/cts/scheduler/bug-cl-5219.summary @@ -1,40 +1,40 @@ -1 of 9 resources DISABLED and 0 BLOCKED from being started due to failures +1 of 9 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: Online: [ ha1.test.anchor.net.au ha2.test.anchor.net.au ] child1-service (ocf::pacemaker:Dummy): Started ha2.test.anchor.net.au (disabled) child2-service (ocf::pacemaker:Dummy): Started ha2.test.anchor.net.au parent-service (ocf::pacemaker:Dummy): Started ha2.test.anchor.net.au Clone Set: child1 [stateful-child1] (promotable) Masters: [ ha2.test.anchor.net.au ] Slaves: [ ha1.test.anchor.net.au ] Clone Set: child2 [stateful-child2] (promotable) Masters: [ ha2.test.anchor.net.au ] Slaves: [ ha1.test.anchor.net.au ] Clone Set: parent [stateful-parent] (promotable) Masters: [ ha2.test.anchor.net.au ] Slaves: [ ha1.test.anchor.net.au ] Transition Summary: * Stop child1-service ( ha2.test.anchor.net.au ) due to node availability Executing cluster transition: * Resource action: child1-service stop on ha2.test.anchor.net.au Revised cluster status: Online: [ ha1.test.anchor.net.au ha2.test.anchor.net.au ] child1-service (ocf::pacemaker:Dummy): Stopped (disabled) child2-service (ocf::pacemaker:Dummy): Started ha2.test.anchor.net.au parent-service (ocf::pacemaker:Dummy): Started ha2.test.anchor.net.au Clone Set: child1 [stateful-child1] (promotable) Masters: [ ha2.test.anchor.net.au ] Slaves: [ ha1.test.anchor.net.au ] Clone Set: child2 [stateful-child2] (promotable) Masters: [ ha2.test.anchor.net.au ] Slaves: [ ha1.test.anchor.net.au ] Clone Set: parent [stateful-parent] (promotable) Masters: [ ha2.test.anchor.net.au ] Slaves: [ ha1.test.anchor.net.au ] diff --git a/cts/scheduler/bug-lf-2171.summary b/cts/scheduler/bug-lf-2171.summary index daf96bbff5..5a8be0e714 100644 --- a/cts/scheduler/bug-lf-2171.summary +++ b/cts/scheduler/bug-lf-2171.summary @@ -1,36 +1,36 @@ -3 of 4 resources DISABLED and 0 BLOCKED from being started due to failures +2 of 4 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: Online: [ xenserver1 xenserver2 ] Clone Set: cl_res_Dummy1 [res_Dummy1] Started: [ xenserver1 xenserver2 ] Resource Group: gr_Dummy res_Dummy2 (ocf::heartbeat:Dummy): Started xenserver1 res_Dummy3 (ocf::heartbeat:Dummy): Started xenserver1 Transition Summary: * Stop res_Dummy1:0 ( xenserver1 ) due to node availability * Stop res_Dummy1:1 ( xenserver2 ) due to node availability * Stop res_Dummy2 ( xenserver1 ) due to unrunnable cl_res_Dummy1 running * Stop res_Dummy3 ( xenserver1 ) due to unrunnable cl_res_Dummy1 running Executing cluster transition: * Pseudo action: gr_Dummy_stop_0 * Resource action: res_Dummy2 stop on xenserver1 * Resource action: res_Dummy3 stop on xenserver1 * Pseudo action: gr_Dummy_stopped_0 * Pseudo action: cl_res_Dummy1_stop_0 * Resource action: res_Dummy1:1 stop on xenserver1 * Resource action: res_Dummy1:0 stop on xenserver2 * Pseudo action: cl_res_Dummy1_stopped_0 Revised cluster status: Online: [ xenserver1 xenserver2 ] Clone Set: cl_res_Dummy1 [res_Dummy1] Stopped (disabled): [ xenserver1 xenserver2 ] Resource Group: gr_Dummy res_Dummy2 (ocf::heartbeat:Dummy): Stopped res_Dummy3 (ocf::heartbeat:Dummy): Stopped diff --git a/cts/scheduler/bug-lf-2358.summary b/cts/scheduler/bug-lf-2358.summary index 09ff14e01d..40db7956b3 100644 --- a/cts/scheduler/bug-lf-2358.summary +++ 
b/cts/scheduler/bug-lf-2358.summary @@ -1,65 +1,65 @@ -2 of 15 resources DISABLED and 0 BLOCKED from being started due to failures +2 of 15 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: Online: [ alice.demo bob.demo ] Clone Set: ms_drbd_nfsexport [res_drbd_nfsexport] (promotable) Stopped (disabled): [ alice.demo bob.demo ] Resource Group: rg_nfs res_fs_nfsexport (ocf::heartbeat:Filesystem): Stopped res_ip_nfs (ocf::heartbeat:IPaddr2): Stopped res_nfs (lsb:nfs): Stopped Resource Group: rg_mysql1 res_fs_mysql1 (ocf::heartbeat:Filesystem): Started bob.demo res_ip_mysql1 (ocf::heartbeat:IPaddr2): Started bob.demo res_mysql1 (ocf::heartbeat:mysql): Started bob.demo Clone Set: ms_drbd_mysql1 [res_drbd_mysql1] (promotable) Masters: [ bob.demo ] Stopped: [ alice.demo ] Clone Set: ms_drbd_mysql2 [res_drbd_mysql2] (promotable) Masters: [ alice.demo ] Slaves: [ bob.demo ] Resource Group: rg_mysql2 res_fs_mysql2 (ocf::heartbeat:Filesystem): Started alice.demo res_ip_mysql2 (ocf::heartbeat:IPaddr2): Started alice.demo res_mysql2 (ocf::heartbeat:mysql): Started alice.demo Transition Summary: * Start res_drbd_mysql1:1 ( alice.demo ) Executing cluster transition: * Pseudo action: ms_drbd_mysql1_pre_notify_start_0 * Resource action: res_drbd_mysql1:0 notify on bob.demo * Pseudo action: ms_drbd_mysql1_confirmed-pre_notify_start_0 * Pseudo action: ms_drbd_mysql1_start_0 * Resource action: res_drbd_mysql1:1 start on alice.demo * Pseudo action: ms_drbd_mysql1_running_0 * Pseudo action: ms_drbd_mysql1_post_notify_running_0 * Resource action: res_drbd_mysql1:0 notify on bob.demo * Resource action: res_drbd_mysql1:1 notify on alice.demo * Pseudo action: ms_drbd_mysql1_confirmed-post_notify_running_0 Revised cluster status: Online: [ alice.demo bob.demo ] Clone Set: ms_drbd_nfsexport [res_drbd_nfsexport] (promotable) Stopped (disabled): [ alice.demo bob.demo ] Resource Group: rg_nfs res_fs_nfsexport (ocf::heartbeat:Filesystem): Stopped res_ip_nfs (ocf::heartbeat:IPaddr2): Stopped res_nfs (lsb:nfs): Stopped Resource Group: rg_mysql1 res_fs_mysql1 (ocf::heartbeat:Filesystem): Started bob.demo res_ip_mysql1 (ocf::heartbeat:IPaddr2): Started bob.demo res_mysql1 (ocf::heartbeat:mysql): Started bob.demo Clone Set: ms_drbd_mysql1 [res_drbd_mysql1] (promotable) Masters: [ bob.demo ] Slaves: [ alice.demo ] Clone Set: ms_drbd_mysql2 [res_drbd_mysql2] (promotable) Masters: [ alice.demo ] Slaves: [ bob.demo ] Resource Group: rg_mysql2 res_fs_mysql2 (ocf::heartbeat:Filesystem): Started alice.demo res_ip_mysql2 (ocf::heartbeat:IPaddr2): Started alice.demo res_mysql2 (ocf::heartbeat:mysql): Started alice.demo diff --git a/cts/scheduler/bug-lf-2422.summary b/cts/scheduler/bug-lf-2422.summary index 4abb0443f2..e90deb3b70 100644 --- a/cts/scheduler/bug-lf-2422.summary +++ b/cts/scheduler/bug-lf-2422.summary @@ -1,80 +1,80 @@ -8 of 21 resources DISABLED and 0 BLOCKED from being started due to failures +4 of 21 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: Online: [ qa-suse-1 qa-suse-2 qa-suse-3 qa-suse-4 ] sbd_stonith (stonith:external/sbd): Started qa-suse-2 Clone Set: c-o2stage [o2stage] Started: [ qa-suse-1 qa-suse-2 qa-suse-3 qa-suse-4 ] Clone Set: c-ocfs [ocfs] Started: [ qa-suse-1 qa-suse-2 qa-suse-3 qa-suse-4 ] Transition Summary: * Stop o2cb:0 ( qa-suse-1 ) due to node availability * Stop cmirror:0 ( qa-suse-1 ) due to node availability * Stop o2cb:1 ( qa-suse-4 ) due to node availability * Stop cmirror:1 ( qa-suse-4 
) due to node availability * Stop o2cb:2 ( qa-suse-3 ) due to node availability * Stop cmirror:2 ( qa-suse-3 ) due to node availability * Stop o2cb:3 ( qa-suse-2 ) due to node availability * Stop cmirror:3 ( qa-suse-2 ) due to node availability * Stop ocfs:0 ( qa-suse-1 ) due to node availability * Stop ocfs:1 ( qa-suse-4 ) due to node availability * Stop ocfs:2 ( qa-suse-3 ) due to node availability * Stop ocfs:3 ( qa-suse-2 ) due to node availability Executing cluster transition: * Resource action: sbd_stonith monitor=15000 on qa-suse-2 * Pseudo action: c-ocfs_stop_0 * Resource action: ocfs:3 stop on qa-suse-2 * Resource action: ocfs:2 stop on qa-suse-3 * Resource action: ocfs:0 stop on qa-suse-4 * Resource action: ocfs:1 stop on qa-suse-1 * Pseudo action: c-ocfs_stopped_0 * Pseudo action: c-o2stage_stop_0 * Pseudo action: o2stage:0_stop_0 * Resource action: cmirror:1 stop on qa-suse-1 * Pseudo action: o2stage:1_stop_0 * Resource action: cmirror:0 stop on qa-suse-4 * Pseudo action: o2stage:2_stop_0 * Resource action: cmirror:2 stop on qa-suse-3 * Pseudo action: o2stage:3_stop_0 * Resource action: cmirror:3 stop on qa-suse-2 * Resource action: o2cb:1 stop on qa-suse-1 * Resource action: o2cb:0 stop on qa-suse-4 * Resource action: o2cb:2 stop on qa-suse-3 * Resource action: o2cb:3 stop on qa-suse-2 * Pseudo action: o2stage:0_stopped_0 * Pseudo action: o2stage:1_stopped_0 * Pseudo action: o2stage:2_stopped_0 * Pseudo action: o2stage:3_stopped_0 * Pseudo action: c-o2stage_stopped_0 Revised cluster status: Online: [ qa-suse-1 qa-suse-2 qa-suse-3 qa-suse-4 ] sbd_stonith (stonith:external/sbd): Started qa-suse-2 Clone Set: c-o2stage [o2stage] Resource Group: o2stage:0 dlm (ocf::pacemaker:controld): Started qa-suse-1 clvm (ocf::lvm2:clvmd): Started qa-suse-1 o2cb (ocf::ocfs2:o2cb): Stopped (disabled) cmirror (ocf::lvm2:cmirrord): Stopped Resource Group: o2stage:1 dlm (ocf::pacemaker:controld): Started qa-suse-4 clvm (ocf::lvm2:clvmd): Started qa-suse-4 o2cb (ocf::ocfs2:o2cb): Stopped (disabled) cmirror (ocf::lvm2:cmirrord): Stopped Resource Group: o2stage:2 dlm (ocf::pacemaker:controld): Started qa-suse-3 clvm (ocf::lvm2:clvmd): Started qa-suse-3 o2cb (ocf::ocfs2:o2cb): Stopped (disabled) cmirror (ocf::lvm2:cmirrord): Stopped Resource Group: o2stage:3 dlm (ocf::pacemaker:controld): Started qa-suse-2 clvm (ocf::lvm2:clvmd): Started qa-suse-2 o2cb (ocf::ocfs2:o2cb): Stopped (disabled) cmirror (ocf::lvm2:cmirrord): Stopped Clone Set: c-ocfs [ocfs] Stopped: [ qa-suse-1 qa-suse-2 qa-suse-3 qa-suse-4 ] diff --git a/cts/scheduler/bug-lf-2453.summary b/cts/scheduler/bug-lf-2453.summary index c9e43f2b35..894e5ecf23 100644 --- a/cts/scheduler/bug-lf-2453.summary +++ b/cts/scheduler/bug-lf-2453.summary @@ -1,38 +1,38 @@ -2 of 5 resources DISABLED and 0 BLOCKED from being started due to failures +2 of 5 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: Online: [ domu1 domu2 ] PrimitiveResource1 (ocf::heartbeat:IPaddr2): Started domu1 Clone Set: CloneResource1 [apache] Started: [ domu1 domu2 ] Clone Set: CloneResource2 [DummyResource] Started: [ domu1 domu2 ] Transition Summary: * Stop PrimitiveResource1 ( domu1 ) due to required CloneResource2 running * Stop apache:0 ( domu1 ) due to node availability * Stop apache:1 ( domu2 ) due to node availability * Stop DummyResource:0 ( domu1 ) due to unrunnable CloneResource1 running * Stop DummyResource:1 ( domu2 ) due to unrunnable CloneResource1 running Executing cluster transition: * Resource action: 
PrimitiveResource1 stop on domu1 * Pseudo action: CloneResource2_stop_0 * Resource action: DummyResource:1 stop on domu1 * Resource action: DummyResource:0 stop on domu2 * Pseudo action: CloneResource2_stopped_0 * Pseudo action: CloneResource1_stop_0 * Resource action: apache:1 stop on domu1 * Resource action: apache:0 stop on domu2 * Pseudo action: CloneResource1_stopped_0 Revised cluster status: Online: [ domu1 domu2 ] PrimitiveResource1 (ocf::heartbeat:IPaddr2): Stopped Clone Set: CloneResource1 [apache] Stopped (disabled): [ domu1 domu2 ] Clone Set: CloneResource2 [DummyResource] Stopped: [ domu1 domu2 ] diff --git a/cts/scheduler/bug-lf-2606.summary b/cts/scheduler/bug-lf-2606.summary index 22103e41fe..13eb7dbe76 100644 --- a/cts/scheduler/bug-lf-2606.summary +++ b/cts/scheduler/bug-lf-2606.summary @@ -1,43 +1,43 @@ -1 of 5 resources DISABLED and 0 BLOCKED from being started due to failures +1 of 5 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: Node node2: UNCLEAN (online) Online: [ node1 ] rsc_stonith (stonith:null): Started node1 rsc1 (ocf::pacemaker:Dummy): FAILED node2 (disabled) rsc2 (ocf::pacemaker:Dummy): Started node2 Clone Set: ms3 [rsc3] (promotable) Masters: [ node2 ] Slaves: [ node1 ] Transition Summary: * Fence (reboot) node2 'rsc1 failed there' * Stop rsc1 ( node2 ) due to node availability * Move rsc2 ( node2 -> node1 ) * Stop rsc3:1 ( Master node2 ) due to node availability Executing cluster transition: * Pseudo action: ms3_demote_0 * Fencing node2 (reboot) * Pseudo action: rsc1_stop_0 * Pseudo action: rsc2_stop_0 * Pseudo action: rsc3:1_demote_0 * Pseudo action: ms3_demoted_0 * Pseudo action: ms3_stop_0 * Resource action: rsc2 start on node1 * Pseudo action: rsc3:1_stop_0 * Pseudo action: ms3_stopped_0 * Resource action: rsc2 monitor=10000 on node1 Revised cluster status: Online: [ node1 ] OFFLINE: [ node2 ] rsc_stonith (stonith:null): Started node1 rsc1 (ocf::pacemaker:Dummy): Stopped (disabled) rsc2 (ocf::pacemaker:Dummy): Started node1 Clone Set: ms3 [rsc3] (promotable) Slaves: [ node1 ] Stopped: [ node2 ] diff --git a/cts/scheduler/bug-rh-1097457.summary b/cts/scheduler/bug-rh-1097457.summary index e06badb07d..30eaae510b 100644 --- a/cts/scheduler/bug-rh-1097457.summary +++ b/cts/scheduler/bug-rh-1097457.summary @@ -1,123 +1,123 @@ -2 of 26 resources DISABLED and 0 BLOCKED from being started due to failures +2 of 26 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: Online: [ lama2 lama3 ] GuestOnline: [ lamaVM1:VM1 lamaVM2:VM2 lamaVM3:VM3 ] restofencelama2 (stonith:fence_ipmilan): Started lama3 restofencelama3 (stonith:fence_ipmilan): Started lama2 VM1 (ocf::heartbeat:VirtualDomain): Started lama2 FSlun1 (ocf::heartbeat:Filesystem): Started lamaVM1 FSlun2 (ocf::heartbeat:Filesystem): Started lamaVM1 VM2 (ocf::heartbeat:VirtualDomain): FAILED lama3 VM3 (ocf::heartbeat:VirtualDomain): Started lama3 FSlun3 (ocf::heartbeat:Filesystem): FAILED lamaVM2 FSlun4 (ocf::heartbeat:Filesystem): Started lamaVM3 FAKE5-IP (ocf::heartbeat:IPaddr2): Stopped (disabled) FAKE6-IP (ocf::heartbeat:IPaddr2): Stopped (disabled) FAKE5 (ocf::heartbeat:Dummy): Started lamaVM3 Resource Group: lamaVM1-G1 FAKE1 (ocf::heartbeat:Dummy): Started lamaVM1 FAKE1-IP (ocf::heartbeat:IPaddr2): Started lamaVM1 Resource Group: lamaVM1-G2 FAKE2 (ocf::heartbeat:Dummy): Started lamaVM1 FAKE2-IP (ocf::heartbeat:IPaddr2): Started lamaVM1 Resource Group: lamaVM1-G3 FAKE3 (ocf::heartbeat:Dummy): Started 
lamaVM1 FAKE3-IP (ocf::heartbeat:IPaddr2): Started lamaVM1 Resource Group: lamaVM2-G4 FAKE4 (ocf::heartbeat:Dummy): Started lamaVM2 FAKE4-IP (ocf::heartbeat:IPaddr2): Started lamaVM2 Clone Set: FAKE6-clone [FAKE6] Started: [ lamaVM1 lamaVM2 lamaVM3 ] Transition Summary: * Fence (reboot) lamaVM2 (resource: VM2) 'guest is unclean' * Recover VM2 ( lama3 ) * Recover FSlun3 ( lamaVM2 -> lama2 ) * Restart FAKE4 ( lamaVM2 ) due to required VM2 start * Restart FAKE4-IP ( lamaVM2 ) due to required VM2 start * Restart FAKE6:2 ( lamaVM2 ) due to required VM2 start * Restart lamaVM2 ( lama3 ) due to required VM2 start Executing cluster transition: * Resource action: FSlun1 monitor on lamaVM3 * Resource action: FSlun2 monitor on lamaVM3 * Resource action: FSlun3 monitor on lamaVM3 * Resource action: FSlun3 monitor on lamaVM1 * Resource action: FSlun4 monitor on lamaVM1 * Resource action: FAKE5-IP monitor on lamaVM3 * Resource action: FAKE5-IP monitor on lamaVM1 * Resource action: FAKE6-IP monitor on lamaVM3 * Resource action: FAKE6-IP monitor on lamaVM1 * Resource action: FAKE5 monitor on lamaVM1 * Resource action: FAKE1 monitor on lamaVM3 * Resource action: FAKE1-IP monitor on lamaVM3 * Resource action: FAKE2 monitor on lamaVM3 * Resource action: FAKE2-IP monitor on lamaVM3 * Resource action: FAKE3 monitor on lamaVM3 * Resource action: FAKE3-IP monitor on lamaVM3 * Resource action: FAKE4 monitor on lamaVM3 * Resource action: FAKE4 monitor on lamaVM1 * Resource action: FAKE4-IP monitor on lamaVM3 * Resource action: FAKE4-IP monitor on lamaVM1 * Resource action: lamaVM2 stop on lama3 * Resource action: VM2 stop on lama3 * Pseudo action: stonith-lamaVM2-reboot on lamaVM2 * Resource action: VM2 start on lama3 * Resource action: VM2 monitor=10000 on lama3 * Pseudo action: lamaVM2-G4_stop_0 * Pseudo action: FAKE4-IP_stop_0 * Pseudo action: FAKE6-clone_stop_0 * Resource action: lamaVM2 start on lama3 * Resource action: lamaVM2 monitor=30000 on lama3 * Resource action: FSlun3 monitor=10000 on lamaVM2 * Pseudo action: FAKE4_stop_0 * Pseudo action: FAKE6_stop_0 * Pseudo action: FAKE6-clone_stopped_0 * Pseudo action: FAKE6-clone_start_0 * Pseudo action: lamaVM2-G4_stopped_0 * Resource action: FAKE6 start on lamaVM2 * Resource action: FAKE6 monitor=30000 on lamaVM2 * Pseudo action: FAKE6-clone_running_0 * Pseudo action: FSlun3_stop_0 * Resource action: FSlun3 start on lama2 * Pseudo action: lamaVM2-G4_start_0 * Resource action: FAKE4 start on lamaVM2 * Resource action: FAKE4 monitor=30000 on lamaVM2 * Resource action: FAKE4-IP start on lamaVM2 * Resource action: FAKE4-IP monitor=30000 on lamaVM2 * Resource action: FSlun3 monitor=10000 on lama2 * Pseudo action: lamaVM2-G4_running_0 Revised cluster status: Online: [ lama2 lama3 ] GuestOnline: [ lamaVM1:VM1 lamaVM2:VM2 lamaVM3:VM3 ] restofencelama2 (stonith:fence_ipmilan): Started lama3 restofencelama3 (stonith:fence_ipmilan): Started lama2 VM1 (ocf::heartbeat:VirtualDomain): Started lama2 FSlun1 (ocf::heartbeat:Filesystem): Started lamaVM1 FSlun2 (ocf::heartbeat:Filesystem): Started lamaVM1 VM2 (ocf::heartbeat:VirtualDomain): FAILED lama3 VM3 (ocf::heartbeat:VirtualDomain): Started lama3 FSlun3 (ocf::heartbeat:Filesystem): FAILED[ lama2 lamaVM2 ] FSlun4 (ocf::heartbeat:Filesystem): Started lamaVM3 FAKE5-IP (ocf::heartbeat:IPaddr2): Stopped (disabled) FAKE6-IP (ocf::heartbeat:IPaddr2): Stopped (disabled) FAKE5 (ocf::heartbeat:Dummy): Started lamaVM3 Resource Group: lamaVM1-G1 FAKE1 (ocf::heartbeat:Dummy): Started lamaVM1 FAKE1-IP (ocf::heartbeat:IPaddr2): Started 
lamaVM1 Resource Group: lamaVM1-G2 FAKE2 (ocf::heartbeat:Dummy): Started lamaVM1 FAKE2-IP (ocf::heartbeat:IPaddr2): Started lamaVM1 Resource Group: lamaVM1-G3 FAKE3 (ocf::heartbeat:Dummy): Started lamaVM1 FAKE3-IP (ocf::heartbeat:IPaddr2): Started lamaVM1 Resource Group: lamaVM2-G4 FAKE4 (ocf::heartbeat:Dummy): Started lamaVM2 FAKE4-IP (ocf::heartbeat:IPaddr2): Started lamaVM2 Clone Set: FAKE6-clone [FAKE6] Started: [ lamaVM1 lamaVM2 lamaVM3 ] diff --git a/cts/scheduler/bug-suse-707150.summary b/cts/scheduler/bug-suse-707150.summary index b3949888ee..adb37ccfee 100644 --- a/cts/scheduler/bug-suse-707150.summary +++ b/cts/scheduler/bug-suse-707150.summary @@ -1,72 +1,72 @@ -9 of 28 resources DISABLED and 0 BLOCKED from being started due to failures +5 of 28 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: Online: [ hex-0 hex-9 ] OFFLINE: [ hex-7 hex-8 ] vm-00 (ocf::heartbeat:Xen): Stopped (disabled) Clone Set: base-clone [base-group] Resource Group: base-group:0 dlm (ocf::pacemaker:controld): Started hex-0 o2cb (ocf::ocfs2:o2cb): Stopped clvm (ocf::lvm2:clvmd): Stopped cmirrord (ocf::lvm2:cmirrord): Stopped vg1 (ocf::heartbeat:LVM): Stopped (disabled) ocfs2-1 (ocf::heartbeat:Filesystem): Stopped Stopped: [ hex-7 hex-8 hex-9 ] vm-01 (ocf::heartbeat:Xen): Stopped fencing-sbd (stonith:external/sbd): Started hex-9 dummy1 (ocf::heartbeat:Dummy): Started hex-0 Transition Summary: * Start o2cb:0 ( hex-0 ) * Start clvm:0 ( hex-0 ) * Start cmirrord:0 ( hex-0 ) * Start dlm:1 ( hex-9 ) * Start o2cb:1 ( hex-9 ) * Start clvm:1 ( hex-9 ) * Start cmirrord:1 ( hex-9 ) * Start vm-01 ( hex-9 ) due to unrunnable base-clone running (blocked) Executing cluster transition: * Resource action: vg1:1 monitor on hex-9 * Pseudo action: base-clone_start_0 * Pseudo action: load_stopped_hex-9 * Pseudo action: load_stopped_hex-8 * Pseudo action: load_stopped_hex-7 * Pseudo action: load_stopped_hex-0 * Pseudo action: base-group:0_start_0 * Resource action: o2cb:0 start on hex-0 * Resource action: clvm:0 start on hex-0 * Resource action: cmirrord:0 start on hex-0 * Pseudo action: base-group:1_start_0 * Resource action: dlm:1 start on hex-9 * Resource action: o2cb:1 start on hex-9 * Resource action: clvm:1 start on hex-9 * Resource action: cmirrord:1 start on hex-9 Revised cluster status: Online: [ hex-0 hex-9 ] OFFLINE: [ hex-7 hex-8 ] vm-00 (ocf::heartbeat:Xen): Stopped (disabled) Clone Set: base-clone [base-group] Resource Group: base-group:0 dlm (ocf::pacemaker:controld): Started hex-0 o2cb (ocf::ocfs2:o2cb): Started hex-0 clvm (ocf::lvm2:clvmd): Started hex-0 cmirrord (ocf::lvm2:cmirrord): Started hex-0 vg1 (ocf::heartbeat:LVM): Stopped (disabled) ocfs2-1 (ocf::heartbeat:Filesystem): Stopped Resource Group: base-group:1 dlm (ocf::pacemaker:controld): Started hex-9 o2cb (ocf::ocfs2:o2cb): Started hex-9 clvm (ocf::lvm2:clvmd): Started hex-9 cmirrord (ocf::lvm2:cmirrord): Started hex-9 vg1 (ocf::heartbeat:LVM): Stopped (disabled) ocfs2-1 (ocf::heartbeat:Filesystem): Stopped Stopped: [ hex-7 hex-8 ] vm-01 (ocf::heartbeat:Xen): Stopped fencing-sbd (stonith:external/sbd): Started hex-9 dummy1 (ocf::heartbeat:Dummy): Started hex-0 diff --git a/cts/scheduler/clone-fail-block-colocation.summary b/cts/scheduler/clone-fail-block-colocation.summary index bdca45f0c5..04e2f2a6be 100644 --- a/cts/scheduler/clone-fail-block-colocation.summary +++ b/cts/scheduler/clone-fail-block-colocation.summary @@ -1,57 +1,58 @@ +0 of 10 resource instances DISABLED and 1 BLOCKED from further 
action due to failure Current cluster status: Online: [ DEM-1 DEM-2 ] Resource Group: svc ipv6_dem_tas_dns (ocf::heartbeat:IPv6addr): Started DEM-1 d_bird_subnet_state (lsb:bird_subnet_state): Started DEM-1 ip_mgmt (ocf::heartbeat:IPaddr2): Started DEM-1 ip_trf_tas (ocf::heartbeat:IPaddr2): Started DEM-1 Clone Set: cl_bird [d_bird] Started: [ DEM-1 DEM-2 ] Clone Set: cl_bird6 [d_bird6] d_bird6 (lsb:bird6): FAILED DEM-1 (blocked) Started: [ DEM-2 ] Clone Set: cl_tomcat_nms [d_tomcat_nms] Started: [ DEM-1 DEM-2 ] Transition Summary: * Move ipv6_dem_tas_dns ( DEM-1 -> DEM-2 ) * Move d_bird_subnet_state ( DEM-1 -> DEM-2 ) * Move ip_mgmt ( DEM-1 -> DEM-2 ) * Move ip_trf_tas ( DEM-1 -> DEM-2 ) Executing cluster transition: * Pseudo action: svc_stop_0 * Resource action: ip_trf_tas stop on DEM-1 * Resource action: ip_mgmt stop on DEM-1 * Resource action: d_bird_subnet_state stop on DEM-1 * Resource action: ipv6_dem_tas_dns stop on DEM-1 * Pseudo action: svc_stopped_0 * Pseudo action: svc_start_0 * Resource action: ipv6_dem_tas_dns start on DEM-2 * Resource action: d_bird_subnet_state start on DEM-2 * Resource action: ip_mgmt start on DEM-2 * Resource action: ip_trf_tas start on DEM-2 * Pseudo action: svc_running_0 * Resource action: ipv6_dem_tas_dns monitor=10000 on DEM-2 * Resource action: d_bird_subnet_state monitor=10000 on DEM-2 * Resource action: ip_mgmt monitor=10000 on DEM-2 * Resource action: ip_trf_tas monitor=10000 on DEM-2 Revised cluster status: Online: [ DEM-1 DEM-2 ] Resource Group: svc ipv6_dem_tas_dns (ocf::heartbeat:IPv6addr): Started DEM-2 d_bird_subnet_state (lsb:bird_subnet_state): Started DEM-2 ip_mgmt (ocf::heartbeat:IPaddr2): Started DEM-2 ip_trf_tas (ocf::heartbeat:IPaddr2): Started DEM-2 Clone Set: cl_bird [d_bird] Started: [ DEM-1 DEM-2 ] Clone Set: cl_bird6 [d_bird6] d_bird6 (lsb:bird6): FAILED DEM-1 (blocked) Started: [ DEM-2 ] Clone Set: cl_tomcat_nms [d_tomcat_nms] Started: [ DEM-1 DEM-2 ] diff --git a/cts/scheduler/clone-order-16instances.summary b/cts/scheduler/clone-order-16instances.summary index f3cd5547db..004952a494 100644 --- a/cts/scheduler/clone-order-16instances.summary +++ b/cts/scheduler/clone-order-16instances.summary @@ -1,69 +1,69 @@ -16 of 33 resources DISABLED and 0 BLOCKED from being started due to failures +16 of 33 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: Online: [ virt-009.cluster-qe.lab.eng.brq.redhat.com virt-010.cluster-qe.lab.eng.brq.redhat.com virt-012.cluster-qe.lab.eng.brq.redhat.com virt-013.cluster-qe.lab.eng.brq.redhat.com virt-014.cluster-qe.lab.eng.brq.redhat.com virt-015.cluster-qe.lab.eng.brq.redhat.com virt-016.cluster-qe.lab.eng.brq.redhat.com virt-020.cluster-qe.lab.eng.brq.redhat.com virt-027.cluster-qe.lab.eng.brq.redhat.com virt-028.cluster-qe.lab.eng.brq.redhat.com virt-029.cluster-qe.lab.eng.brq.redhat.com virt-030.cluster-qe.lab.eng.brq.redhat.com virt-031.cluster-qe.lab.eng.brq.redhat.com virt-032.cluster-qe.lab.eng.brq.redhat.com virt-033.cluster-qe.lab.eng.brq.redhat.com virt-034.cluster-qe.lab.eng.brq.redhat.com ] virt-fencing (stonith:fence_xvm): Started virt-010.cluster-qe.lab.eng.brq.redhat.com Clone Set: dlm-clone [dlm] Started: [ virt-010.cluster-qe.lab.eng.brq.redhat.com virt-012.cluster-qe.lab.eng.brq.redhat.com ] Stopped: [ virt-009.cluster-qe.lab.eng.brq.redhat.com virt-013.cluster-qe.lab.eng.brq.redhat.com virt-014.cluster-qe.lab.eng.brq.redhat.com virt-015.cluster-qe.lab.eng.brq.redhat.com virt-016.cluster-qe.lab.eng.brq.redhat.com 
virt-020.cluster-qe.lab.eng.brq.redhat.com virt-027.cluster-qe.lab.eng.brq.redhat.com virt-028.cluster-qe.lab.eng.brq.redhat.com virt-029.cluster-qe.lab.eng.brq.redhat.com virt-030.cluster-qe.lab.eng.brq.redhat.com virt-031.cluster-qe.lab.eng.brq.redhat.com virt-032.cluster-qe.lab.eng.brq.redhat.com virt-033.cluster-qe.lab.eng.brq.redhat.com virt-034.cluster-qe.lab.eng.brq.redhat.com ] Clone Set: clvmd-clone [clvmd] Stopped (disabled): [ virt-009.cluster-qe.lab.eng.brq.redhat.com virt-010.cluster-qe.lab.eng.brq.redhat.com virt-012.cluster-qe.lab.eng.brq.redhat.com virt-013.cluster-qe.lab.eng.brq.redhat.com virt-014.cluster-qe.lab.eng.brq.redhat.com virt-015.cluster-qe.lab.eng.brq.redhat.com virt-016.cluster-qe.lab.eng.brq.redhat.com virt-020.cluster-qe.lab.eng.brq.redhat.com virt-027.cluster-qe.lab.eng.brq.redhat.com virt-028.cluster-qe.lab.eng.brq.redhat.com virt-029.cluster-qe.lab.eng.brq.redhat.com virt-030.cluster-qe.lab.eng.brq.redhat.com virt-031.cluster-qe.lab.eng.brq.redhat.com virt-032.cluster-qe.lab.eng.brq.redhat.com virt-033.cluster-qe.lab.eng.brq.redhat.com virt-034.cluster-qe.lab.eng.brq.redhat.com ] Transition Summary: * Start dlm:2 ( virt-009.cluster-qe.lab.eng.brq.redhat.com ) * Start dlm:3 ( virt-013.cluster-qe.lab.eng.brq.redhat.com ) * Start dlm:4 ( virt-014.cluster-qe.lab.eng.brq.redhat.com ) * Start dlm:5 ( virt-015.cluster-qe.lab.eng.brq.redhat.com ) * Start dlm:6 ( virt-016.cluster-qe.lab.eng.brq.redhat.com ) * Start dlm:7 ( virt-020.cluster-qe.lab.eng.brq.redhat.com ) * Start dlm:8 ( virt-027.cluster-qe.lab.eng.brq.redhat.com ) * Start dlm:9 ( virt-028.cluster-qe.lab.eng.brq.redhat.com ) * Start dlm:10 ( virt-029.cluster-qe.lab.eng.brq.redhat.com ) * Start dlm:11 ( virt-030.cluster-qe.lab.eng.brq.redhat.com ) * Start dlm:12 ( virt-031.cluster-qe.lab.eng.brq.redhat.com ) * Start dlm:13 ( virt-032.cluster-qe.lab.eng.brq.redhat.com ) * Start dlm:14 ( virt-033.cluster-qe.lab.eng.brq.redhat.com ) * Start dlm:15 ( virt-034.cluster-qe.lab.eng.brq.redhat.com ) Executing cluster transition: * Pseudo action: dlm-clone_start_0 * Resource action: dlm start on virt-009.cluster-qe.lab.eng.brq.redhat.com * Resource action: dlm start on virt-013.cluster-qe.lab.eng.brq.redhat.com * Resource action: dlm start on virt-014.cluster-qe.lab.eng.brq.redhat.com * Resource action: dlm start on virt-015.cluster-qe.lab.eng.brq.redhat.com * Resource action: dlm start on virt-016.cluster-qe.lab.eng.brq.redhat.com * Resource action: dlm start on virt-020.cluster-qe.lab.eng.brq.redhat.com * Resource action: dlm start on virt-027.cluster-qe.lab.eng.brq.redhat.com * Resource action: dlm start on virt-028.cluster-qe.lab.eng.brq.redhat.com * Resource action: dlm start on virt-029.cluster-qe.lab.eng.brq.redhat.com * Resource action: dlm start on virt-030.cluster-qe.lab.eng.brq.redhat.com * Resource action: dlm start on virt-031.cluster-qe.lab.eng.brq.redhat.com * Resource action: dlm start on virt-032.cluster-qe.lab.eng.brq.redhat.com * Resource action: dlm start on virt-033.cluster-qe.lab.eng.brq.redhat.com * Resource action: dlm start on virt-034.cluster-qe.lab.eng.brq.redhat.com * Pseudo action: dlm-clone_running_0 * Resource action: dlm monitor=30000 on virt-009.cluster-qe.lab.eng.brq.redhat.com * Resource action: dlm monitor=30000 on virt-013.cluster-qe.lab.eng.brq.redhat.com * Resource action: dlm monitor=30000 on virt-014.cluster-qe.lab.eng.brq.redhat.com * Resource action: dlm monitor=30000 on virt-015.cluster-qe.lab.eng.brq.redhat.com * Resource action: dlm monitor=30000 on 
virt-016.cluster-qe.lab.eng.brq.redhat.com * Resource action: dlm monitor=30000 on virt-020.cluster-qe.lab.eng.brq.redhat.com * Resource action: dlm monitor=30000 on virt-027.cluster-qe.lab.eng.brq.redhat.com * Resource action: dlm monitor=30000 on virt-028.cluster-qe.lab.eng.brq.redhat.com * Resource action: dlm monitor=30000 on virt-029.cluster-qe.lab.eng.brq.redhat.com * Resource action: dlm monitor=30000 on virt-030.cluster-qe.lab.eng.brq.redhat.com * Resource action: dlm monitor=30000 on virt-031.cluster-qe.lab.eng.brq.redhat.com * Resource action: dlm monitor=30000 on virt-032.cluster-qe.lab.eng.brq.redhat.com * Resource action: dlm monitor=30000 on virt-033.cluster-qe.lab.eng.brq.redhat.com * Resource action: dlm monitor=30000 on virt-034.cluster-qe.lab.eng.brq.redhat.com Revised cluster status: Online: [ virt-009.cluster-qe.lab.eng.brq.redhat.com virt-010.cluster-qe.lab.eng.brq.redhat.com virt-012.cluster-qe.lab.eng.brq.redhat.com virt-013.cluster-qe.lab.eng.brq.redhat.com virt-014.cluster-qe.lab.eng.brq.redhat.com virt-015.cluster-qe.lab.eng.brq.redhat.com virt-016.cluster-qe.lab.eng.brq.redhat.com virt-020.cluster-qe.lab.eng.brq.redhat.com virt-027.cluster-qe.lab.eng.brq.redhat.com virt-028.cluster-qe.lab.eng.brq.redhat.com virt-029.cluster-qe.lab.eng.brq.redhat.com virt-030.cluster-qe.lab.eng.brq.redhat.com virt-031.cluster-qe.lab.eng.brq.redhat.com virt-032.cluster-qe.lab.eng.brq.redhat.com virt-033.cluster-qe.lab.eng.brq.redhat.com virt-034.cluster-qe.lab.eng.brq.redhat.com ] virt-fencing (stonith:fence_xvm): Started virt-010.cluster-qe.lab.eng.brq.redhat.com Clone Set: dlm-clone [dlm] Started: [ virt-009.cluster-qe.lab.eng.brq.redhat.com virt-010.cluster-qe.lab.eng.brq.redhat.com virt-012.cluster-qe.lab.eng.brq.redhat.com virt-013.cluster-qe.lab.eng.brq.redhat.com virt-014.cluster-qe.lab.eng.brq.redhat.com virt-015.cluster-qe.lab.eng.brq.redhat.com virt-016.cluster-qe.lab.eng.brq.redhat.com virt-020.cluster-qe.lab.eng.brq.redhat.com virt-027.cluster-qe.lab.eng.brq.redhat.com virt-028.cluster-qe.lab.eng.brq.redhat.com virt-029.cluster-qe.lab.eng.brq.redhat.com virt-030.cluster-qe.lab.eng.brq.redhat.com virt-031.cluster-qe.lab.eng.brq.redhat.com virt-032.cluster-qe.lab.eng.brq.redhat.com virt-033.cluster-qe.lab.eng.brq.redhat.com virt-034.cluster-qe.lab.eng.brq.redhat.com ] Clone Set: clvmd-clone [clvmd] Stopped (disabled): [ virt-009.cluster-qe.lab.eng.brq.redhat.com virt-010.cluster-qe.lab.eng.brq.redhat.com virt-012.cluster-qe.lab.eng.brq.redhat.com virt-013.cluster-qe.lab.eng.brq.redhat.com virt-014.cluster-qe.lab.eng.brq.redhat.com virt-015.cluster-qe.lab.eng.brq.redhat.com virt-016.cluster-qe.lab.eng.brq.redhat.com virt-020.cluster-qe.lab.eng.brq.redhat.com virt-027.cluster-qe.lab.eng.brq.redhat.com virt-028.cluster-qe.lab.eng.brq.redhat.com virt-029.cluster-qe.lab.eng.brq.redhat.com virt-030.cluster-qe.lab.eng.brq.redhat.com virt-031.cluster-qe.lab.eng.brq.redhat.com virt-032.cluster-qe.lab.eng.brq.redhat.com virt-033.cluster-qe.lab.eng.brq.redhat.com virt-034.cluster-qe.lab.eng.brq.redhat.com ] diff --git a/cts/scheduler/cloned-group-stop.summary b/cts/scheduler/cloned-group-stop.summary index 9aa0449d04..2f3d163aff 100644 --- a/cts/scheduler/cloned-group-stop.summary +++ b/cts/scheduler/cloned-group-stop.summary @@ -1,88 +1,88 @@ -2 of 20 resources DISABLED and 0 BLOCKED from being started due to failures +2 of 20 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: Online: [ rhos4-node3 rhos4-node4 ] virt-fencing 
(stonith:fence_xvm): Started rhos4-node3 Resource Group: mysql-group mysql-vip (ocf::heartbeat:IPaddr2): Started rhos4-node3 mysql-fs (ocf::heartbeat:Filesystem): Started rhos4-node3 mysql-db (ocf::heartbeat:mysql): Started rhos4-node3 Clone Set: qpidd-clone [qpidd] Started: [ rhos4-node3 rhos4-node4 ] Clone Set: keystone-clone [keystone] Started: [ rhos4-node3 rhos4-node4 ] Clone Set: glance-clone [glance] Started: [ rhos4-node3 rhos4-node4 ] Clone Set: cinder-clone [cinder] Started: [ rhos4-node3 rhos4-node4 ] Transition Summary: * Stop qpidd:0 ( rhos4-node4 ) due to node availability * Stop qpidd:1 ( rhos4-node3 ) due to node availability * Stop keystone:0 ( rhos4-node4 ) due to unrunnable qpidd-clone running * Stop keystone:1 ( rhos4-node3 ) due to unrunnable qpidd-clone running * Stop glance-fs:0 ( rhos4-node4 ) due to required keystone-clone running * Stop glance-registry:0 ( rhos4-node4 ) due to required glance-fs:0 stop * Stop glance-api:0 ( rhos4-node4 ) due to required glance-registry:0 start * Stop glance-fs:1 ( rhos4-node3 ) due to required keystone-clone running * Stop glance-registry:1 ( rhos4-node3 ) due to required glance-fs:1 stop * Stop glance-api:1 ( rhos4-node3 ) due to required glance-registry:1 start * Stop cinder-api:0 ( rhos4-node4 ) due to required glance-clone running * Stop cinder-scheduler:0 ( rhos4-node4 ) due to required cinder-api:0 stop * Stop cinder-volume:0 ( rhos4-node4 ) due to required cinder-scheduler:0 start * Stop cinder-api:1 ( rhos4-node3 ) due to required glance-clone running * Stop cinder-scheduler:1 ( rhos4-node3 ) due to required cinder-api:1 stop * Stop cinder-volume:1 ( rhos4-node3 ) due to required cinder-scheduler:1 start Executing cluster transition: * Pseudo action: cinder-clone_stop_0 * Pseudo action: cinder:0_stop_0 * Resource action: cinder-volume stop on rhos4-node4 * Pseudo action: cinder:1_stop_0 * Resource action: cinder-volume stop on rhos4-node3 * Resource action: cinder-scheduler stop on rhos4-node4 * Resource action: cinder-scheduler stop on rhos4-node3 * Resource action: cinder-api stop on rhos4-node4 * Resource action: cinder-api stop on rhos4-node3 * Pseudo action: cinder:0_stopped_0 * Pseudo action: cinder:1_stopped_0 * Pseudo action: cinder-clone_stopped_0 * Pseudo action: glance-clone_stop_0 * Pseudo action: glance:0_stop_0 * Resource action: glance-api stop on rhos4-node4 * Pseudo action: glance:1_stop_0 * Resource action: glance-api stop on rhos4-node3 * Resource action: glance-registry stop on rhos4-node4 * Resource action: glance-registry stop on rhos4-node3 * Resource action: glance-fs stop on rhos4-node4 * Resource action: glance-fs stop on rhos4-node3 * Pseudo action: glance:0_stopped_0 * Pseudo action: glance:1_stopped_0 * Pseudo action: glance-clone_stopped_0 * Pseudo action: keystone-clone_stop_0 * Resource action: keystone stop on rhos4-node4 * Resource action: keystone stop on rhos4-node3 * Pseudo action: keystone-clone_stopped_0 * Pseudo action: qpidd-clone_stop_0 * Resource action: qpidd stop on rhos4-node4 * Resource action: qpidd stop on rhos4-node3 * Pseudo action: qpidd-clone_stopped_0 Revised cluster status: Online: [ rhos4-node3 rhos4-node4 ] virt-fencing (stonith:fence_xvm): Started rhos4-node3 Resource Group: mysql-group mysql-vip (ocf::heartbeat:IPaddr2): Started rhos4-node3 mysql-fs (ocf::heartbeat:Filesystem): Started rhos4-node3 mysql-db (ocf::heartbeat:mysql): Started rhos4-node3 Clone Set: qpidd-clone [qpidd] Stopped (disabled): [ rhos4-node3 rhos4-node4 ] Clone Set: keystone-clone [keystone] 
    Stopped: [ rhos4-node3 rhos4-node4 ]
 Clone Set: glance-clone [glance]
     Stopped: [ rhos4-node3 rhos4-node4 ]
 Clone Set: cinder-clone [cinder]
     Stopped: [ rhos4-node3 rhos4-node4 ]
diff --git a/cts/scheduler/coloc-clone-stays-active.summary b/cts/scheduler/coloc-clone-stays-active.summary
index 010e537612..6fe7427daa 100644
--- a/cts/scheduler/coloc-clone-stays-active.summary
+++ b/cts/scheduler/coloc-clone-stays-active.summary
@@ -1,206 +1,206 @@
-12 of 87 resources DISABLED and 0 BLOCKED from being started due to failures
+9 of 87 resource instances DISABLED and 0 BLOCKED from further action due to failure

Current cluster status:
Online: [ s01-0 s01-1 ]

 stonith-s01-0 (stonith:external/ipmi): Started s01-1
 stonith-s01-1 (stonith:external/ipmi): Started s01-0
 Resource Group: iscsi-pool-0-target-all
     iscsi-pool-0-target (ocf::vds-ok:iSCSITarget): Started s01-0
     iscsi-pool-0-lun-1 (ocf::vds-ok:iSCSILogicalUnit): Started s01-0
 Resource Group: iscsi-pool-0-vips
     vip-235 (ocf::heartbeat:IPaddr2): Started s01-0
     vip-236 (ocf::heartbeat:IPaddr2): Started s01-0
 Resource Group: iscsi-pool-1-target-all
     iscsi-pool-1-target (ocf::vds-ok:iSCSITarget): Started s01-1
     iscsi-pool-1-lun-1 (ocf::vds-ok:iSCSILogicalUnit): Started s01-1
 Resource Group: iscsi-pool-1-vips
     vip-237 (ocf::heartbeat:IPaddr2): Started s01-1
     vip-238 (ocf::heartbeat:IPaddr2): Started s01-1
 Clone Set: ms-drbd-pool-0 [drbd-pool-0] (promotable)
     Masters: [ s01-0 ]
     Slaves: [ s01-1 ]
 Clone Set: ms-drbd-pool-1 [drbd-pool-1] (promotable)
     Masters: [ s01-1 ]
     Slaves: [ s01-0 ]
 Clone Set: ms-iscsi-pool-0-vips-fw [iscsi-pool-0-vips-fw] (promotable)
     Masters: [ s01-0 ]
     Slaves: [ s01-1 ]
 Clone Set: ms-iscsi-pool-1-vips-fw [iscsi-pool-1-vips-fw] (promotable)
     Masters: [ s01-1 ]
     Slaves: [ s01-0 ]
 Clone Set: cl-o2cb [o2cb]
     Stopped (disabled): [ s01-0 s01-1 ]
 Clone Set: ms-drbd-s01-service [drbd-s01-service] (promotable)
     Masters: [ s01-0 s01-1 ]
 Clone Set: cl-s01-service-fs [s01-service-fs]
     Started: [ s01-0 s01-1 ]
 Clone Set: cl-ietd [ietd]
     Started: [ s01-0 s01-1 ]
 Clone Set: cl-dhcpd [dhcpd]
     Stopped (disabled): [ s01-0 s01-1 ]
 Resource Group: http-server
     vip-233 (ocf::heartbeat:IPaddr2): Started s01-0
     nginx (lsb:nginx): Stopped (disabled)
 Clone Set: ms-drbd-s01-logs [drbd-s01-logs] (promotable)
     Masters: [ s01-0 s01-1 ]
 Clone Set: cl-s01-logs-fs [s01-logs-fs]
     Started: [ s01-0 s01-1 ]
 Resource Group: syslog-server
     vip-234 (ocf::heartbeat:IPaddr2): Started s01-1
     syslog-ng (ocf::heartbeat:syslog-ng): Started s01-1
 Resource Group: tftp-server
     vip-232 (ocf::heartbeat:IPaddr2): Stopped
     tftpd (ocf::heartbeat:Xinetd): Stopped
 Clone Set: cl-xinetd [xinetd]
     Started: [ s01-0 s01-1 ]
 Clone Set: cl-ospf-routing [ospf-routing]
     Started: [ s01-0 s01-1 ]
 Clone Set: connected-outer [ping-bmc-and-switch]
     Started: [ s01-0 s01-1 ]
 Resource Group: iscsi-vds-dom0-stateless-0-target-all
     iscsi-vds-dom0-stateless-0-target (ocf::vds-ok:iSCSITarget): Stopped (disabled)
     iscsi-vds-dom0-stateless-0-lun-1 (ocf::vds-ok:iSCSILogicalUnit): Stopped (disabled)
 Resource Group: iscsi-vds-dom0-stateless-0-vips
     vip-227 (ocf::heartbeat:IPaddr2): Stopped
     vip-228 (ocf::heartbeat:IPaddr2): Stopped
 Clone Set: ms-drbd-vds-dom0-stateless-0 [drbd-vds-dom0-stateless-0] (promotable)
     Masters: [ s01-0 ]
     Slaves: [ s01-1 ]
 Clone Set: ms-iscsi-vds-dom0-stateless-0-vips-fw [iscsi-vds-dom0-stateless-0-vips-fw] (promotable)
     Slaves: [ s01-0 s01-1 ]
 Clone Set: cl-dlm [dlm]
     Started: [ s01-0 s01-1 ]
 Clone Set: ms-drbd-vds-tftpboot [drbd-vds-tftpboot] (promotable)
     Masters: [ s01-0 s01-1 ]
 Clone Set: cl-vds-tftpboot-fs [vds-tftpboot-fs]
     Stopped (disabled): [ s01-0 s01-1 ]
 Clone Set: cl-gfs2 [gfs2]
     Started: [ s01-0 s01-1 ]
 Clone Set: ms-drbd-vds-http [drbd-vds-http] (promotable)
     Masters: [ s01-0 s01-1 ]
 Clone Set: cl-vds-http-fs [vds-http-fs]
     Started: [ s01-0 s01-1 ]
 Clone Set: cl-clvmd [clvmd]
     Started: [ s01-0 s01-1 ]
 Clone Set: ms-drbd-s01-vm-data [drbd-s01-vm-data] (promotable)
     Masters: [ s01-0 s01-1 ]
 Clone Set: cl-s01-vm-data-metadata-fs [s01-vm-data-metadata-fs]
     Started: [ s01-0 s01-1 ]
 Clone Set: cl-vg-s01-vm-data [vg-s01-vm-data]
     Started: [ s01-0 s01-1 ]
 mgmt-vm (ocf::vds-ok:VirtualDomain): Started s01-0
 Clone Set: cl-drbdlinks-s01-service [drbdlinks-s01-service]
     Started: [ s01-0 s01-1 ]
 Clone Set: cl-libvirtd [libvirtd]
     Started: [ s01-0 s01-1 ]
 Clone Set: cl-s01-vm-data-storage-pool [s01-vm-data-storage-pool]
     Started: [ s01-0 s01-1 ]

Transition Summary:
 * Migrate mgmt-vm ( s01-0 -> s01-1 )

Executing cluster transition:
 * Resource action: mgmt-vm migrate_to on s01-0
 * Resource action: mgmt-vm migrate_from on s01-1
 * Resource action: mgmt-vm stop on s01-0
 * Pseudo action: mgmt-vm_start_0
 * Resource action: mgmt-vm monitor=10000 on s01-1

Revised cluster status:
Online: [ s01-0 s01-1 ]

 stonith-s01-0 (stonith:external/ipmi): Started s01-1
 stonith-s01-1 (stonith:external/ipmi): Started s01-0
 Resource Group: iscsi-pool-0-target-all
     iscsi-pool-0-target (ocf::vds-ok:iSCSITarget): Started s01-0
     iscsi-pool-0-lun-1 (ocf::vds-ok:iSCSILogicalUnit): Started s01-0
 Resource Group: iscsi-pool-0-vips
     vip-235 (ocf::heartbeat:IPaddr2): Started s01-0
     vip-236 (ocf::heartbeat:IPaddr2): Started s01-0
 Resource Group: iscsi-pool-1-target-all
     iscsi-pool-1-target (ocf::vds-ok:iSCSITarget): Started s01-1
     iscsi-pool-1-lun-1 (ocf::vds-ok:iSCSILogicalUnit): Started s01-1
 Resource Group: iscsi-pool-1-vips
     vip-237 (ocf::heartbeat:IPaddr2): Started s01-1
     vip-238 (ocf::heartbeat:IPaddr2): Started s01-1
 Clone Set: ms-drbd-pool-0 [drbd-pool-0] (promotable)
     Masters: [ s01-0 ]
     Slaves: [ s01-1 ]
 Clone Set: ms-drbd-pool-1 [drbd-pool-1] (promotable)
     Masters: [ s01-1 ]
     Slaves: [ s01-0 ]
 Clone Set: ms-iscsi-pool-0-vips-fw [iscsi-pool-0-vips-fw] (promotable)
     Masters: [ s01-0 ]
     Slaves: [ s01-1 ]
 Clone Set: ms-iscsi-pool-1-vips-fw [iscsi-pool-1-vips-fw] (promotable)
     Masters: [ s01-1 ]
     Slaves: [ s01-0 ]
 Clone Set: cl-o2cb [o2cb]
     Stopped (disabled): [ s01-0 s01-1 ]
 Clone Set: ms-drbd-s01-service [drbd-s01-service] (promotable)
     Masters: [ s01-0 s01-1 ]
 Clone Set: cl-s01-service-fs [s01-service-fs]
     Started: [ s01-0 s01-1 ]
 Clone Set: cl-ietd [ietd]
     Started: [ s01-0 s01-1 ]
 Clone Set: cl-dhcpd [dhcpd]
     Stopped (disabled): [ s01-0 s01-1 ]
 Resource Group: http-server
     vip-233 (ocf::heartbeat:IPaddr2): Started s01-0
     nginx (lsb:nginx): Stopped (disabled)
 Clone Set: ms-drbd-s01-logs [drbd-s01-logs] (promotable)
     Masters: [ s01-0 s01-1 ]
 Clone Set: cl-s01-logs-fs [s01-logs-fs]
     Started: [ s01-0 s01-1 ]
 Resource Group: syslog-server
     vip-234 (ocf::heartbeat:IPaddr2): Started s01-1
     syslog-ng (ocf::heartbeat:syslog-ng): Started s01-1
 Resource Group: tftp-server
     vip-232 (ocf::heartbeat:IPaddr2): Stopped
     tftpd (ocf::heartbeat:Xinetd): Stopped
 Clone Set: cl-xinetd [xinetd]
     Started: [ s01-0 s01-1 ]
 Clone Set: cl-ospf-routing [ospf-routing]
     Started: [ s01-0 s01-1 ]
 Clone Set: connected-outer [ping-bmc-and-switch]
     Started: [ s01-0 s01-1 ]
 Resource Group: iscsi-vds-dom0-stateless-0-target-all
     iscsi-vds-dom0-stateless-0-target (ocf::vds-ok:iSCSITarget): Stopped (disabled)
     iscsi-vds-dom0-stateless-0-lun-1 (ocf::vds-ok:iSCSILogicalUnit): Stopped (disabled)
 Resource Group: iscsi-vds-dom0-stateless-0-vips
     vip-227 (ocf::heartbeat:IPaddr2): Stopped
     vip-228 (ocf::heartbeat:IPaddr2): Stopped
 Clone Set: ms-drbd-vds-dom0-stateless-0 [drbd-vds-dom0-stateless-0] (promotable)
     Masters: [ s01-0 ]
     Slaves: [ s01-1 ]
 Clone Set: ms-iscsi-vds-dom0-stateless-0-vips-fw [iscsi-vds-dom0-stateless-0-vips-fw] (promotable)
     Slaves: [ s01-0 s01-1 ]
 Clone Set: cl-dlm [dlm]
     Started: [ s01-0 s01-1 ]
 Clone Set: ms-drbd-vds-tftpboot [drbd-vds-tftpboot] (promotable)
     Masters: [ s01-0 s01-1 ]
 Clone Set: cl-vds-tftpboot-fs [vds-tftpboot-fs]
     Stopped (disabled): [ s01-0 s01-1 ]
 Clone Set: cl-gfs2 [gfs2]
     Started: [ s01-0 s01-1 ]
 Clone Set: ms-drbd-vds-http [drbd-vds-http] (promotable)
     Masters: [ s01-0 s01-1 ]
 Clone Set: cl-vds-http-fs [vds-http-fs]
     Started: [ s01-0 s01-1 ]
 Clone Set: cl-clvmd [clvmd]
     Started: [ s01-0 s01-1 ]
 Clone Set: ms-drbd-s01-vm-data [drbd-s01-vm-data] (promotable)
     Masters: [ s01-0 s01-1 ]
 Clone Set: cl-s01-vm-data-metadata-fs [s01-vm-data-metadata-fs]
     Started: [ s01-0 s01-1 ]
 Clone Set: cl-vg-s01-vm-data [vg-s01-vm-data]
     Started: [ s01-0 s01-1 ]
 mgmt-vm (ocf::vds-ok:VirtualDomain): Started s01-1
 Clone Set: cl-drbdlinks-s01-service [drbdlinks-s01-service]
     Started: [ s01-0 s01-1 ]
 Clone Set: cl-libvirtd [libvirtd]
     Started: [ s01-0 s01-1 ]
 Clone Set: cl-s01-vm-data-storage-pool [s01-vm-data-storage-pool]
     Started: [ s01-0 s01-1 ]
diff --git a/cts/scheduler/colocation_constraint_stops_slave.summary b/cts/scheduler/colocation_constraint_stops_slave.summary
index ae74d017f5..c70d244c78 100644
--- a/cts/scheduler/colocation_constraint_stops_slave.summary
+++ b/cts/scheduler/colocation_constraint_stops_slave.summary
@@ -1,33 +1,33 @@
-1 of 2 resources DISABLED and 0 BLOCKED from being started due to failures
+1 of 2 resource instances DISABLED and 0 BLOCKED from further action due to failure

Current cluster status:
Online: [ fc16-builder ]
OFFLINE: [ fc16-builder2 ]

 Clone Set: MASTER_RSC_A [NATIVE_RSC_A] (promotable)
     Slaves: [ fc16-builder ]
 NATIVE_RSC_B (ocf::pacemaker:Dummy): Started fc16-builder (disabled)

Transition Summary:
 * Stop NATIVE_RSC_A:0 ( Slave fc16-builder ) due to node availability
 * Stop NATIVE_RSC_B ( fc16-builder ) due to node availability

Executing cluster transition:
 * Pseudo action: MASTER_RSC_A_pre_notify_stop_0
 * Resource action: NATIVE_RSC_B stop on fc16-builder
 * Resource action: NATIVE_RSC_A:0 notify on fc16-builder
 * Pseudo action: MASTER_RSC_A_confirmed-pre_notify_stop_0
 * Pseudo action: MASTER_RSC_A_stop_0
 * Resource action: NATIVE_RSC_A:0 stop on fc16-builder
 * Pseudo action: MASTER_RSC_A_stopped_0
 * Pseudo action: MASTER_RSC_A_post_notify_stopped_0
 * Pseudo action: MASTER_RSC_A_confirmed-post_notify_stopped_0

Revised cluster status:
Online: [ fc16-builder ]
OFFLINE: [ fc16-builder2 ]

 Clone Set: MASTER_RSC_A [NATIVE_RSC_A] (promotable)
     Stopped: [ fc16-builder fc16-builder2 ]
 NATIVE_RSC_B (ocf::pacemaker:Dummy): Stopped (disabled)
diff --git a/cts/scheduler/complex_enforce_colo.summary b/cts/scheduler/complex_enforce_colo.summary
index 2b8db6cce8..427fd472bc 100644
--- a/cts/scheduler/complex_enforce_colo.summary
+++ b/cts/scheduler/complex_enforce_colo.summary
@@ -1,452 +1,452 @@
-3 of 132 resources DISABLED and 0 BLOCKED from being started due to failures
+3 of 132 resource instances DISABLED and 0 BLOCKED from further action due to failure

Current cluster status:
Online: [ rhos6-node1 rhos6-node2 rhos6-node3 ]

 node1-fence (stonith:fence_xvm): Started rhos6-node1
 node2-fence (stonith:fence_xvm): Started rhos6-node2
 node3-fence (stonith:fence_xvm): Started rhos6-node3
 Clone Set: lb-haproxy-clone [lb-haproxy]
     Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 vip-db (ocf::heartbeat:IPaddr2): Started rhos6-node1
 vip-rabbitmq (ocf::heartbeat:IPaddr2): Started rhos6-node2
 vip-qpid (ocf::heartbeat:IPaddr2): Started rhos6-node3
 vip-keystone (ocf::heartbeat:IPaddr2): Started rhos6-node1
 vip-glance (ocf::heartbeat:IPaddr2): Started rhos6-node2
 vip-cinder (ocf::heartbeat:IPaddr2): Started rhos6-node3
 vip-swift (ocf::heartbeat:IPaddr2): Started rhos6-node1
 vip-neutron (ocf::heartbeat:IPaddr2): Started rhos6-node2
 vip-nova (ocf::heartbeat:IPaddr2): Started rhos6-node3
 vip-horizon (ocf::heartbeat:IPaddr2): Started rhos6-node1
 vip-heat (ocf::heartbeat:IPaddr2): Started rhos6-node2
 vip-ceilometer (ocf::heartbeat:IPaddr2): Started rhos6-node3
 Clone Set: galera-master [galera] (promotable)
     Masters: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: rabbitmq-server-clone [rabbitmq-server]
     Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: memcached-clone [memcached]
     Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: mongodb-clone [mongodb]
     Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: keystone-clone [keystone]
     Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: glance-fs-clone [glance-fs]
     Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: glance-registry-clone [glance-registry]
     Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: glance-api-clone [glance-api]
     Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 cinder-api (systemd:openstack-cinder-api): Started rhos6-node1
 cinder-scheduler (systemd:openstack-cinder-scheduler): Started rhos6-node1
 cinder-volume (systemd:openstack-cinder-volume): Started rhos6-node1
 Clone Set: swift-fs-clone [swift-fs]
     Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: swift-account-clone [swift-account]
     Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: swift-container-clone [swift-container]
     Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: swift-object-clone [swift-object]
     Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: swift-proxy-clone [swift-proxy]
     Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 swift-object-expirer (systemd:openstack-swift-object-expirer): Started rhos6-node2
 Clone Set: neutron-server-clone [neutron-server]
     Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: neutron-scale-clone [neutron-scale] (unique)
     neutron-scale:0 (ocf::neutron:NeutronScale): Started rhos6-node3
     neutron-scale:1 (ocf::neutron:NeutronScale): Started rhos6-node2
     neutron-scale:2 (ocf::neutron:NeutronScale): Started rhos6-node1
 Clone Set: neutron-ovs-cleanup-clone [neutron-ovs-cleanup]
     Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: neutron-netns-cleanup-clone [neutron-netns-cleanup]
     Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: neutron-openvswitch-agent-clone [neutron-openvswitch-agent]
     Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: neutron-dhcp-agent-clone [neutron-dhcp-agent]
     Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: neutron-l3-agent-clone [neutron-l3-agent]
     Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: neutron-metadata-agent-clone [neutron-metadata-agent]
     Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: nova-consoleauth-clone [nova-consoleauth]
     Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: nova-novncproxy-clone [nova-novncproxy]
     Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: nova-api-clone [nova-api]
     Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: nova-scheduler-clone [nova-scheduler]
     Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: nova-conductor-clone [nova-conductor]
     Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 ceilometer-central (systemd:openstack-ceilometer-central): Started rhos6-node3
 Clone Set: ceilometer-collector-clone [ceilometer-collector]
     Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: ceilometer-api-clone [ceilometer-api]
     Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: ceilometer-delay-clone [ceilometer-delay]
     Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: ceilometer-alarm-evaluator-clone [ceilometer-alarm-evaluator]
     Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: ceilometer-alarm-notifier-clone [ceilometer-alarm-notifier]
     Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: ceilometer-notification-clone [ceilometer-notification]
     Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: heat-api-clone [heat-api]
     Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: heat-api-cfn-clone [heat-api-cfn]
     Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: heat-api-cloudwatch-clone [heat-api-cloudwatch]
     Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 heat-engine (systemd:openstack-heat-engine): Started rhos6-node2
 Clone Set: horizon-clone [horizon]
     Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]

Transition Summary:
 * Stop keystone:0 ( rhos6-node1 ) due to node availability
 * Stop keystone:1 ( rhos6-node2 ) due to node availability
 * Stop keystone:2 ( rhos6-node3 ) due to node availability
 * Stop glance-registry:0 ( rhos6-node1 )
 * Stop glance-registry:1 ( rhos6-node2 )
 * Stop glance-registry:2 ( rhos6-node3 )
 * Stop glance-api:0 ( rhos6-node1 )
 * Stop glance-api:1 ( rhos6-node2 )
 * Stop glance-api:2 ( rhos6-node3 )
 * Stop cinder-api ( rhos6-node1 ) due to unrunnable keystone-clone running
 * Stop cinder-scheduler ( rhos6-node1 ) due to required cinder-api start
 * Stop cinder-volume ( rhos6-node1 ) due to colocation with cinder-scheduler
 * Stop swift-account:0 ( rhos6-node1 )
 * Stop swift-account:1 ( rhos6-node2 )
 * Stop swift-account:2 ( rhos6-node3 )
 * Stop swift-container:0 ( rhos6-node1 )
 * Stop swift-container:1 ( rhos6-node2 )
 * Stop swift-container:2 ( rhos6-node3 )
 * Stop swift-object:0 ( rhos6-node1 )
 * Stop swift-object:1 ( rhos6-node2 )
 * Stop swift-object:2 ( rhos6-node3 )
 * Stop swift-proxy:0 ( rhos6-node1 )
 * Stop swift-proxy:1 ( rhos6-node2 )
 * Stop swift-proxy:2 ( rhos6-node3 )
 * Stop swift-object-expirer ( rhos6-node2 ) due to required swift-proxy-clone running
 * Stop neutron-server:0 ( rhos6-node1 )
 * Stop neutron-server:1 ( rhos6-node2 )
 * Stop neutron-server:2 ( rhos6-node3 )
 * Stop neutron-scale:0 ( rhos6-node3 )
 * Stop neutron-scale:1 ( rhos6-node2 )
 * Stop neutron-scale:2 ( rhos6-node1 )
 * Stop neutron-ovs-cleanup:0 ( rhos6-node1 )
 * Stop neutron-ovs-cleanup:1 ( rhos6-node2 )
 * Stop neutron-ovs-cleanup:2 ( rhos6-node3 )
 * Stop neutron-netns-cleanup:0 ( rhos6-node1 )
 * Stop neutron-netns-cleanup:1 ( rhos6-node2 )
 * Stop neutron-netns-cleanup:2 ( rhos6-node3 )
 * Stop neutron-openvswitch-agent:0 ( rhos6-node1 )
 * Stop neutron-openvswitch-agent:1 ( rhos6-node2 )
 * Stop neutron-openvswitch-agent:2 ( rhos6-node3 )
 * Stop neutron-dhcp-agent:0 ( rhos6-node1 )
 * Stop neutron-dhcp-agent:1 ( rhos6-node2 )
 * Stop neutron-dhcp-agent:2 ( rhos6-node3 )
 * Stop neutron-l3-agent:0 ( rhos6-node1 )
 * Stop neutron-l3-agent:1 ( rhos6-node2 )
 * Stop neutron-l3-agent:2 ( rhos6-node3 )
 * Stop neutron-metadata-agent:0 ( rhos6-node1 )
 * Stop neutron-metadata-agent:1 ( rhos6-node2 )
 * Stop neutron-metadata-agent:2 ( rhos6-node3 )
 * Stop nova-consoleauth:0 ( rhos6-node1 )
 * Stop nova-consoleauth:1 ( rhos6-node2 )
 * Stop nova-consoleauth:2 ( rhos6-node3 )
 * Stop nova-novncproxy:0 ( rhos6-node1 )
 * Stop nova-novncproxy:1 ( rhos6-node2 )
 * Stop nova-novncproxy:2 ( rhos6-node3 )
 * Stop nova-api:0 ( rhos6-node1 )
 * Stop nova-api:1 ( rhos6-node2 )
 * Stop nova-api:2 ( rhos6-node3 )
 * Stop nova-scheduler:0 ( rhos6-node1 )
 * Stop nova-scheduler:1 ( rhos6-node2 )
 * Stop nova-scheduler:2 ( rhos6-node3 )
 * Stop nova-conductor:0 ( rhos6-node1 )
 * Stop nova-conductor:1 ( rhos6-node2 )
 * Stop nova-conductor:2 ( rhos6-node3 )
 * Stop ceilometer-central ( rhos6-node3 ) due to unrunnable keystone-clone running
 * Stop ceilometer-collector:0 ( rhos6-node1 ) due to required ceilometer-central start
 * Stop ceilometer-collector:1 ( rhos6-node2 ) due to required ceilometer-central start
 * Stop ceilometer-collector:2 ( rhos6-node3 ) due to required ceilometer-central start
 * Stop ceilometer-api:0 ( rhos6-node1 ) due to required ceilometer-collector:0 start
 * Stop ceilometer-api:1 ( rhos6-node2 ) due to required ceilometer-collector:1 start
 * Stop ceilometer-api:2 ( rhos6-node3 ) due to required ceilometer-collector:2 start
 * Stop ceilometer-delay:0 ( rhos6-node1 ) due to required ceilometer-api:0 start
 * Stop ceilometer-delay:1 ( rhos6-node2 ) due to required ceilometer-api:1 start
 * Stop ceilometer-delay:2 ( rhos6-node3 ) due to required ceilometer-api:2 start
 * Stop ceilometer-alarm-evaluator:0 ( rhos6-node1 ) due to required ceilometer-delay:0 start
 * Stop ceilometer-alarm-evaluator:1 ( rhos6-node2 ) due to required ceilometer-delay:1 start
 * Stop ceilometer-alarm-evaluator:2 ( rhos6-node3 ) due to required ceilometer-delay:2 start
 * Stop ceilometer-alarm-notifier:0 ( rhos6-node1 ) due to required ceilometer-alarm-evaluator:0 start
 * Stop ceilometer-alarm-notifier:1 ( rhos6-node2 ) due to required ceilometer-alarm-evaluator:1 start
 * Stop ceilometer-alarm-notifier:2 ( rhos6-node3 ) due to required ceilometer-alarm-evaluator:2 start
 * Stop ceilometer-notification:0 ( rhos6-node1 ) due to required ceilometer-alarm-notifier:0 start
 * Stop ceilometer-notification:1 ( rhos6-node2 ) due to required ceilometer-alarm-notifier:1 start
 * Stop ceilometer-notification:2 ( rhos6-node3 ) due to required ceilometer-alarm-notifier:2 start
 * Stop heat-api:0 ( rhos6-node1 ) due to required ceilometer-notification:0 start
 * Stop heat-api:1 ( rhos6-node2 ) due to required ceilometer-notification:1 start
 * Stop heat-api:2 ( rhos6-node3 ) due to required ceilometer-notification:2 start
 * Stop heat-api-cfn:0 ( rhos6-node1 ) due to required heat-api:0 start
 * Stop heat-api-cfn:1 ( rhos6-node2 ) due to required heat-api:1 start
 * Stop heat-api-cfn:2 ( rhos6-node3 ) due to required heat-api:2 start
 * Stop heat-api-cloudwatch:0 ( rhos6-node1 ) due to required heat-api-cfn:0 start
 * Stop heat-api-cloudwatch:1 ( rhos6-node2 ) due to required heat-api-cfn:1 start
 * Stop heat-api-cloudwatch:2 ( rhos6-node3 ) due to required heat-api-cfn:2 start
 * Stop heat-engine ( rhos6-node2 ) due to colocation with heat-api-cloudwatch-clone

Executing cluster transition:
 * Pseudo action: glance-api-clone_stop_0
 * Resource action: cinder-volume stop on rhos6-node1
 * Pseudo action: swift-object-clone_stop_0
 * Resource action: swift-object-expirer stop on rhos6-node2
 * Pseudo action: neutron-metadata-agent-clone_stop_0
 * Pseudo action: nova-conductor-clone_stop_0
 * Resource action: heat-engine stop on rhos6-node2
 * Resource action: glance-api stop on rhos6-node1
 * Resource action: glance-api stop on rhos6-node2
 * Resource action: glance-api stop on rhos6-node3
 * Pseudo action: glance-api-clone_stopped_0
 * Resource action: cinder-scheduler stop on rhos6-node1
 * Resource action: swift-object stop on rhos6-node1
 * Resource action: swift-object stop on rhos6-node2
 * Resource action: swift-object stop on rhos6-node3
 * Pseudo action: swift-object-clone_stopped_0
 * Pseudo action: swift-proxy-clone_stop_0
 * Resource action: neutron-metadata-agent stop on rhos6-node1
 * Resource action: neutron-metadata-agent stop on rhos6-node2
 * Resource action: neutron-metadata-agent stop on rhos6-node3
 * Pseudo action: neutron-metadata-agent-clone_stopped_0
 * Resource action: nova-conductor stop on rhos6-node1
 * Resource action: nova-conductor stop on rhos6-node2
 * Resource action: nova-conductor stop on rhos6-node3
 * Pseudo action: nova-conductor-clone_stopped_0
 * Pseudo action: heat-api-cloudwatch-clone_stop_0
 * Pseudo action: glance-registry-clone_stop_0
 * Resource action: cinder-api stop on rhos6-node1
 * Pseudo action: swift-container-clone_stop_0
 * Resource action: swift-proxy stop on rhos6-node1
 * Resource action: swift-proxy stop on rhos6-node2
 * Resource action: swift-proxy stop on rhos6-node3
 * Pseudo action: swift-proxy-clone_stopped_0
 * Pseudo action: neutron-l3-agent-clone_stop_0
 * Pseudo action: nova-scheduler-clone_stop_0
 * Resource action: heat-api-cloudwatch stop on rhos6-node1
 * Resource action: heat-api-cloudwatch stop on rhos6-node2
 * Resource action: heat-api-cloudwatch stop on rhos6-node3
 * Pseudo action: heat-api-cloudwatch-clone_stopped_0
 * Resource action: glance-registry stop on rhos6-node1
 * Resource action: glance-registry stop on rhos6-node2
 * Resource action: glance-registry stop on rhos6-node3
 * Pseudo action: glance-registry-clone_stopped_0
 * Resource action: swift-container stop on rhos6-node1
 * Resource action: swift-container stop on rhos6-node2
 * Resource action: swift-container stop on rhos6-node3
 * Pseudo action: swift-container-clone_stopped_0
 * Resource action: neutron-l3-agent stop on rhos6-node1
 * Resource action: neutron-l3-agent stop on rhos6-node2
 * Resource action: neutron-l3-agent stop on rhos6-node3
 * Pseudo action: neutron-l3-agent-clone_stopped_0
 * Resource action: nova-scheduler stop on rhos6-node1
 * Resource action: nova-scheduler stop on rhos6-node2
 * Resource action: nova-scheduler stop on rhos6-node3
 * Pseudo action: nova-scheduler-clone_stopped_0
 * Pseudo action: heat-api-cfn-clone_stop_0
 * Pseudo action: swift-account-clone_stop_0
 * Pseudo action: neutron-dhcp-agent-clone_stop_0
 * Pseudo action: nova-api-clone_stop_0
 * Resource action: heat-api-cfn stop on rhos6-node1
 * Resource action: heat-api-cfn stop on rhos6-node2
 * Resource action: heat-api-cfn stop on rhos6-node3
 * Pseudo action: heat-api-cfn-clone_stopped_0
 * Resource action: swift-account stop on rhos6-node1
 * Resource action: swift-account stop on rhos6-node2
 * Resource action: swift-account stop on rhos6-node3
 * Pseudo action: swift-account-clone_stopped_0
 * Resource action: neutron-dhcp-agent stop on rhos6-node1
 * Resource action: neutron-dhcp-agent stop on rhos6-node2
 * Resource action: neutron-dhcp-agent stop on rhos6-node3
 * Pseudo action: neutron-dhcp-agent-clone_stopped_0
 * Resource action: nova-api stop on rhos6-node1
 * Resource action: nova-api stop on rhos6-node2
 * Resource action: nova-api stop on rhos6-node3
 * Pseudo action: nova-api-clone_stopped_0
 * Pseudo action: heat-api-clone_stop_0
 * Pseudo action: neutron-openvswitch-agent-clone_stop_0
 * Pseudo action: nova-novncproxy-clone_stop_0
 * Resource action: heat-api stop on rhos6-node1
 * Resource action: heat-api stop on rhos6-node2
 * Resource action: heat-api stop on rhos6-node3
 * Pseudo action: heat-api-clone_stopped_0
 * Resource action: neutron-openvswitch-agent stop on rhos6-node1
 * Resource action: neutron-openvswitch-agent stop on rhos6-node2
 * Resource action: neutron-openvswitch-agent stop on rhos6-node3
 * Pseudo action: neutron-openvswitch-agent-clone_stopped_0
 * Resource action: nova-novncproxy stop on rhos6-node1
 * Resource action: nova-novncproxy stop on rhos6-node2
 * Resource action: nova-novncproxy stop on rhos6-node3
 * Pseudo action: nova-novncproxy-clone_stopped_0
 * Pseudo action: ceilometer-notification-clone_stop_0
 * Pseudo action: neutron-netns-cleanup-clone_stop_0
 * Pseudo action: nova-consoleauth-clone_stop_0
 * Resource action: ceilometer-notification stop on rhos6-node1
 * Resource action: ceilometer-notification stop on rhos6-node2
 * Resource action: ceilometer-notification stop on rhos6-node3
 * Pseudo action: ceilometer-notification-clone_stopped_0
 * Resource action: neutron-netns-cleanup stop on rhos6-node1
 * Resource action: neutron-netns-cleanup stop on rhos6-node2
 * Resource action: neutron-netns-cleanup stop on rhos6-node3
 * Pseudo action: neutron-netns-cleanup-clone_stopped_0
 * Resource action: nova-consoleauth stop on rhos6-node1
 * Resource action: nova-consoleauth stop on rhos6-node2
 * Resource action: nova-consoleauth stop on rhos6-node3
 * Pseudo action: nova-consoleauth-clone_stopped_0
 * Pseudo action: ceilometer-alarm-notifier-clone_stop_0
 * Pseudo action: neutron-ovs-cleanup-clone_stop_0
 * Resource action: ceilometer-alarm-notifier stop on rhos6-node1
 * Resource action: ceilometer-alarm-notifier stop on rhos6-node2
 * Resource action: ceilometer-alarm-notifier stop on rhos6-node3
 * Pseudo action: ceilometer-alarm-notifier-clone_stopped_0
 * Resource action: neutron-ovs-cleanup stop on rhos6-node1
 * Resource action: neutron-ovs-cleanup stop on rhos6-node2
 * Resource action: neutron-ovs-cleanup stop on rhos6-node3
 * Pseudo action: neutron-ovs-cleanup-clone_stopped_0
 * Pseudo action: ceilometer-alarm-evaluator-clone_stop_0
 * Pseudo action: neutron-scale-clone_stop_0
 * Resource action: ceilometer-alarm-evaluator stop on rhos6-node1
 * Resource action: ceilometer-alarm-evaluator stop on rhos6-node2
 * Resource action: ceilometer-alarm-evaluator stop on rhos6-node3
 * Pseudo action: ceilometer-alarm-evaluator-clone_stopped_0
 * Resource action: neutron-scale:0 stop on rhos6-node3
 * Resource action: neutron-scale:1 stop on rhos6-node2
 * Resource action: neutron-scale:2 stop on rhos6-node1
 * Pseudo action: neutron-scale-clone_stopped_0
 * Pseudo action: ceilometer-delay-clone_stop_0
 * Pseudo action: neutron-server-clone_stop_0
 * Resource action: ceilometer-delay stop on rhos6-node1
 * Resource action: ceilometer-delay stop on rhos6-node2
 * Resource action: ceilometer-delay stop on rhos6-node3
 * Pseudo action: ceilometer-delay-clone_stopped_0
 * Resource action: neutron-server stop on rhos6-node1
 * Resource action: neutron-server stop on rhos6-node2
 * Resource action: neutron-server stop on rhos6-node3
 * Pseudo action: neutron-server-clone_stopped_0
 * Pseudo action: ceilometer-api-clone_stop_0
 * Resource action: ceilometer-api stop on rhos6-node1
 * Resource action: ceilometer-api stop on rhos6-node2
 * Resource action: ceilometer-api stop on rhos6-node3
 * Pseudo action: ceilometer-api-clone_stopped_0
 * Pseudo action: ceilometer-collector-clone_stop_0
 * Resource action: ceilometer-collector stop on rhos6-node1
 * Resource action: ceilometer-collector stop on rhos6-node2
 * Resource action: ceilometer-collector stop on rhos6-node3
 * Pseudo action: ceilometer-collector-clone_stopped_0
 * Resource action: ceilometer-central stop on rhos6-node3
 * Pseudo action: keystone-clone_stop_0
 * Resource action: keystone stop on rhos6-node1
 * Resource action: keystone stop on rhos6-node2
 * Resource action: keystone stop on rhos6-node3
 * Pseudo action: keystone-clone_stopped_0

Revised cluster status:
Online: [ rhos6-node1 rhos6-node2 rhos6-node3 ]

 node1-fence (stonith:fence_xvm): Started rhos6-node1
 node2-fence (stonith:fence_xvm): Started rhos6-node2
 node3-fence (stonith:fence_xvm): Started rhos6-node3
 Clone Set: lb-haproxy-clone [lb-haproxy]
     Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 vip-db (ocf::heartbeat:IPaddr2): Started rhos6-node1
 vip-rabbitmq (ocf::heartbeat:IPaddr2): Started rhos6-node2
 vip-qpid (ocf::heartbeat:IPaddr2): Started rhos6-node3
 vip-keystone (ocf::heartbeat:IPaddr2): Started rhos6-node1
 vip-glance (ocf::heartbeat:IPaddr2): Started rhos6-node2
 vip-cinder (ocf::heartbeat:IPaddr2): Started rhos6-node3
 vip-swift (ocf::heartbeat:IPaddr2): Started rhos6-node1
 vip-neutron (ocf::heartbeat:IPaddr2): Started rhos6-node2
 vip-nova (ocf::heartbeat:IPaddr2): Started rhos6-node3
 vip-horizon (ocf::heartbeat:IPaddr2): Started rhos6-node1
 vip-heat (ocf::heartbeat:IPaddr2): Started rhos6-node2
 vip-ceilometer (ocf::heartbeat:IPaddr2): Started rhos6-node3
 Clone Set: galera-master [galera] (promotable)
     Masters: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: rabbitmq-server-clone [rabbitmq-server]
     Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: memcached-clone [memcached]
     Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: mongodb-clone [mongodb]
     Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: keystone-clone [keystone]
     Stopped (disabled): [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: glance-fs-clone [glance-fs]
     Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: glance-registry-clone [glance-registry]
     Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: glance-api-clone [glance-api]
     Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 cinder-api (systemd:openstack-cinder-api): Stopped
 cinder-scheduler (systemd:openstack-cinder-scheduler): Stopped
 cinder-volume (systemd:openstack-cinder-volume): Stopped
 Clone Set: swift-fs-clone [swift-fs]
     Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: swift-account-clone [swift-account]
     Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: swift-container-clone [swift-container]
     Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: swift-object-clone [swift-object]
     Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: swift-proxy-clone [swift-proxy]
     Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 swift-object-expirer (systemd:openstack-swift-object-expirer): Stopped
 Clone Set: neutron-server-clone [neutron-server]
     Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: neutron-scale-clone [neutron-scale] (unique)
     neutron-scale:0 (ocf::neutron:NeutronScale): Stopped
     neutron-scale:1 (ocf::neutron:NeutronScale): Stopped
     neutron-scale:2 (ocf::neutron:NeutronScale): Stopped
 Clone Set: neutron-ovs-cleanup-clone [neutron-ovs-cleanup]
     Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: neutron-netns-cleanup-clone [neutron-netns-cleanup]
     Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: neutron-openvswitch-agent-clone [neutron-openvswitch-agent]
     Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: neutron-dhcp-agent-clone [neutron-dhcp-agent]
     Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: neutron-l3-agent-clone [neutron-l3-agent]
     Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: neutron-metadata-agent-clone [neutron-metadata-agent]
     Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: nova-consoleauth-clone [nova-consoleauth]
     Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: nova-novncproxy-clone [nova-novncproxy]
     Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: nova-api-clone [nova-api]
     Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: nova-scheduler-clone [nova-scheduler]
     Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: nova-conductor-clone [nova-conductor]
     Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 ceilometer-central (systemd:openstack-ceilometer-central): Stopped
 Clone Set: ceilometer-collector-clone [ceilometer-collector]
     Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: ceilometer-api-clone [ceilometer-api]
     Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: ceilometer-delay-clone [ceilometer-delay]
     Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: ceilometer-alarm-evaluator-clone [ceilometer-alarm-evaluator]
     Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: ceilometer-alarm-notifier-clone [ceilometer-alarm-notifier]
     Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: ceilometer-notification-clone [ceilometer-notification]
     Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: heat-api-clone [heat-api]
     Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: heat-api-cfn-clone [heat-api-cfn]
     Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 Clone Set: heat-api-cloudwatch-clone [heat-api-cloudwatch]
     Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
 heat-engine (systemd:openstack-heat-engine): Stopped
 Clone Set: horizon-clone [horizon]
     Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
diff --git a/cts/scheduler/container-is-remote-node.summary b/cts/scheduler/container-is-remote-node.summary
index 33a45afd0b..fc9c0ca82b 100644
--- a/cts/scheduler/container-is-remote-node.summary
+++ b/cts/scheduler/container-is-remote-node.summary
@@ -1,56 +1,56 @@
-3 of 19 resources DISABLED and 0 BLOCKED from being started due to failures
+3 of 19 resource instances DISABLED and 0 BLOCKED from further action due to failure

Current cluster status:
Online: [ lama2 lama3 ]
GuestOnline: [ RNVM1:VM1 ]

 restofencelama2 (stonith:fence_ipmilan): Started lama3
 restofencelama3 (stonith:fence_ipmilan): Started lama2
 Clone Set: dlm-clone [dlm]
     Started: [ lama2 lama3 ]
     Stopped: [ RNVM1 ]
 Clone Set: clvmd-clone [clvmd]
     Started: [ lama2 lama3 ]
     Stopped: [ RNVM1 ]
 Clone Set: gfs2-lv_1_1-clone [gfs2-lv_1_1]
     Started: [ lama2 lama3 ]
     Stopped: [ RNVM1 ]
 Clone Set: gfs2-lv_1_2-clone [gfs2-lv_1_2]
     Stopped (disabled): [ lama2 lama3 RNVM1 ]
 VM1 (ocf::heartbeat:VirtualDomain): Started lama2
 Resource Group: RES1
     FSdata1 (ocf::heartbeat:Filesystem): Started RNVM1
     RES1-IP (ocf::heartbeat:IPaddr2): Started RNVM1
     res-rsyslog (ocf::heartbeat:rsyslog.test): Started RNVM1

Transition Summary:

Executing cluster transition:
 * Resource action: dlm monitor on RNVM1
 * Resource action: clvmd monitor on RNVM1
 * Resource action: gfs2-lv_1_1 monitor on RNVM1
 * Resource action: gfs2-lv_1_2 monitor on RNVM1
Revised cluster status:
Online: [ lama2 lama3 ]
GuestOnline: [ RNVM1:VM1 ]

 restofencelama2 (stonith:fence_ipmilan): Started lama3
 restofencelama3 (stonith:fence_ipmilan): Started lama2
 Clone Set: dlm-clone [dlm]
     Started: [ lama2 lama3 ]
     Stopped: [ RNVM1 ]
 Clone Set: clvmd-clone [clvmd]
     Started: [ lama2 lama3 ]
     Stopped: [ RNVM1 ]
 Clone Set: gfs2-lv_1_1-clone [gfs2-lv_1_1]
     Started: [ lama2 lama3 ]
     Stopped: [ RNVM1 ]
 Clone Set: gfs2-lv_1_2-clone [gfs2-lv_1_2]
     Stopped (disabled): [ lama2 lama3 RNVM1 ]
 VM1 (ocf::heartbeat:VirtualDomain): Started lama2
 Resource Group: RES1
     FSdata1 (ocf::heartbeat:Filesystem): Started RNVM1
     RES1-IP (ocf::heartbeat:IPaddr2): Started RNVM1
     res-rsyslog (ocf::heartbeat:rsyslog.test): Started RNVM1
diff --git a/cts/scheduler/enforce-colo1.summary b/cts/scheduler/enforce-colo1.summary
index 9a93723195..46f808454a 100644
--- a/cts/scheduler/enforce-colo1.summary
+++ b/cts/scheduler/enforce-colo1.summary
@@ -1,36 +1,36 @@
-3 of 6 resources DISABLED and 0 BLOCKED from being started due to failures
+3 of 6 resource instances DISABLED and 0 BLOCKED from further action due to failure

Current cluster status:
Online: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 ]

 shooter (stonith:fence_xvm): Started rhel7-auto2
 engine (ocf::heartbeat:Dummy): Started rhel7-auto3
 Clone Set: keystone-clone [keystone]
     Started: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 ]
 central (ocf::heartbeat:Dummy): Started rhel7-auto3

Transition Summary:
 * Stop engine ( rhel7-auto3 ) due to colocation with central
 * Stop keystone:0 ( rhel7-auto2 ) due to node availability
 * Stop keystone:1 ( rhel7-auto3 ) due to node availability
 * Stop keystone:2 ( rhel7-auto1 ) due to node availability
 * Stop central ( rhel7-auto3 ) due to unrunnable keystone-clone running

Executing cluster transition:
 * Resource action: engine stop on rhel7-auto3
 * Resource action: central stop on rhel7-auto3
 * Pseudo action: keystone-clone_stop_0
 * Resource action: keystone stop on rhel7-auto2
 * Resource action: keystone stop on rhel7-auto3
 * Resource action: keystone stop on rhel7-auto1
 * Pseudo action: keystone-clone_stopped_0

Revised cluster status:
Online: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 ]

 shooter (stonith:fence_xvm): Started rhel7-auto2
 engine (ocf::heartbeat:Dummy): Stopped
 Clone Set: keystone-clone [keystone]
     Stopped (disabled): [ rhel7-auto1 rhel7-auto2 rhel7-auto3 ]
 central (ocf::heartbeat:Dummy): Stopped
diff --git a/cts/scheduler/expire-non-blocked-failure.summary b/cts/scheduler/expire-non-blocked-failure.summary
index 2c6e703846..aab1ffe621 100644
--- a/cts/scheduler/expire-non-blocked-failure.summary
+++ b/cts/scheduler/expire-non-blocked-failure.summary
@@ -1,20 +1,21 @@
+0 of 3 resource instances DISABLED and 1 BLOCKED from further action due to failure

Current cluster status:
Online: [ node1 node2 ]

 rsc_stonith (stonith:null): Started node1
 rsc1 (ocf::pacemaker:Dummy): FAILED node2 (blocked)
 rsc2 (ocf::pacemaker:Dummy): Started node1

Transition Summary:

Executing cluster transition:
 * Cluster action: clear_failcount for rsc2 on node1

Revised cluster status:
Online: [ node1 node2 ]

 rsc_stonith (stonith:null): Started node1
 rsc1 (ocf::pacemaker:Dummy): FAILED node2 (blocked)
 rsc2 (ocf::pacemaker:Dummy): Started node1
diff --git a/cts/scheduler/failcount-block.summary b/cts/scheduler/failcount-block.summary
index 6d957aae4e..a5a1a36094 100644
--- a/cts/scheduler/failcount-block.summary
+++ b/cts/scheduler/failcount-block.summary
@@ -1,35 +1,36 @@
+0 of 5 resource instances DISABLED and 1 BLOCKED from further action due to failure
Current cluster status:
Online: [ pcmk-1 ]
OFFLINE: [ pcmk-4 ]

 rsc_pcmk-1 (ocf::heartbeat:IPaddr2): Started pcmk-1
 rsc_pcmk-2 (ocf::heartbeat:IPaddr2): FAILED pcmk-1 (blocked)
 rsc_pcmk-3 (ocf::heartbeat:IPaddr2): Stopped
 rsc_pcmk-4 (ocf::heartbeat:IPaddr2): Stopped
 rsc_pcmk-5 (ocf::heartbeat:IPaddr2): Started pcmk-1

Transition Summary:
 * Start rsc_pcmk-3 ( pcmk-1 )
 * Start rsc_pcmk-4 ( pcmk-1 )

Executing cluster transition:
 * Resource action: rsc_pcmk-1 monitor=5000 on pcmk-1
 * Cluster action: clear_failcount for rsc_pcmk-1 on pcmk-1
 * Resource action: rsc_pcmk-3 start on pcmk-1
 * Cluster action: clear_failcount for rsc_pcmk-3 on pcmk-1
 * Resource action: rsc_pcmk-4 start on pcmk-1
 * Cluster action: clear_failcount for rsc_pcmk-5 on pcmk-1
 * Resource action: rsc_pcmk-3 monitor=5000 on pcmk-1
 * Resource action: rsc_pcmk-4 monitor=5000 on pcmk-1

Revised cluster status:
Online: [ pcmk-1 ]
OFFLINE: [ pcmk-4 ]

 rsc_pcmk-1 (ocf::heartbeat:IPaddr2): Started pcmk-1
 rsc_pcmk-2 (ocf::heartbeat:IPaddr2): FAILED pcmk-1 (blocked)
 rsc_pcmk-3 (ocf::heartbeat:IPaddr2): Started pcmk-1
 rsc_pcmk-4 (ocf::heartbeat:IPaddr2): Started pcmk-1
 rsc_pcmk-5 (ocf::heartbeat:IPaddr2): Started pcmk-1
diff --git a/cts/scheduler/group-stop-ordering.summary b/cts/scheduler/group-stop-ordering.summary
index 0ec8eb6299..df9e7a3ca9 100644
--- a/cts/scheduler/group-stop-ordering.summary
+++ b/cts/scheduler/group-stop-ordering.summary
@@ -1,25 +1,26 @@
+0 of 5 resource instances DISABLED and 1 BLOCKED from further action due to failure

Current cluster status:
Online: [ fastvm-rhel-7-5-73 fastvm-rhel-7-5-74 ]

 fence-fastvm-rhel-7-5-73 (stonith:fence_xvm): Started fastvm-rhel-7-5-74
 fence-fastvm-rhel-7-5-74 (stonith:fence_xvm): Started fastvm-rhel-7-5-73
 outside_resource (ocf::pacemaker:Dummy): FAILED fastvm-rhel-7-5-73 (blocked)
 Resource Group: grp
     inside_resource_2 (ocf::pacemaker:Dummy): Started fastvm-rhel-7-5-74
     inside_resource_3 (ocf::pacemaker:Dummy): Started fastvm-rhel-7-5-74

Transition Summary:

Executing cluster transition:

Revised cluster status:
Online: [ fastvm-rhel-7-5-73 fastvm-rhel-7-5-74 ]

 fence-fastvm-rhel-7-5-73 (stonith:fence_xvm): Started fastvm-rhel-7-5-74
 fence-fastvm-rhel-7-5-74 (stonith:fence_xvm): Started fastvm-rhel-7-5-73
 outside_resource (ocf::pacemaker:Dummy): FAILED fastvm-rhel-7-5-73 (blocked)
 Resource Group: grp
     inside_resource_2 (ocf::pacemaker:Dummy): Started fastvm-rhel-7-5-74
     inside_resource_3 (ocf::pacemaker:Dummy): Started fastvm-rhel-7-5-74
diff --git a/cts/scheduler/group11.summary b/cts/scheduler/group11.summary
index 41b5994782..8ebad4cc18 100644
--- a/cts/scheduler/group11.summary
+++ b/cts/scheduler/group11.summary
@@ -1,29 +1,29 @@
-2 of 3 resources DISABLED and 0 BLOCKED from being started due to failures
+1 of 3 resource instances DISABLED and 0 BLOCKED from further action due to failure

Current cluster status:
Online: [ node1 ]

 Resource Group: group1
     rsc1 (ocf::heartbeat:apache): Started node1
     rsc2 (ocf::heartbeat:apache): Started node1 (disabled)
     rsc3 (ocf::heartbeat:apache): Started node1

Transition Summary:
 * Stop rsc2 ( node1 ) due to node availability
 * Stop rsc3 ( node1 ) due to node availability

Executing cluster transition:
 * Pseudo action: group1_stop_0
 * Resource action: rsc3 stop on node1
 * Resource action: rsc2 stop on node1
 * Pseudo action: group1_stopped_0
 * Pseudo action: group1_start_0

Revised cluster status:
Online: [ node1 ]

 Resource Group: group1
     rsc1 (ocf::heartbeat:apache): Started node1
     rsc2 (ocf::heartbeat:apache): Stopped (disabled)
     rsc3 (ocf::heartbeat:apache): Stopped
diff --git a/cts/scheduler/intervals.summary b/cts/scheduler/intervals.summary
index 83603dc37f..69daed7181 100644
--- a/cts/scheduler/intervals.summary
+++ b/cts/scheduler/intervals.summary
@@ -1,48 +1,49 @@
Using the original execution date of: 2018-03-21 23:12:42Z
+0 of 7 resource instances DISABLED and 1 BLOCKED from further action due to failure

Current cluster status:
Online: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]

 Fencing (stonith:fence_xvm): Started rhel7-1
 rsc1 (ocf::pacemaker:Dummy): Started rhel7-2
 rsc2 (ocf::pacemaker:Dummy): Stopped
 rsc3 (ocf::pacemaker:Dummy): Started rhel7-4
 rsc4 (ocf::pacemaker:Dummy): FAILED rhel7-5 (blocked)
 rsc5 (ocf::pacemaker:Dummy): Started rhel7-1
 rsc6 (ocf::pacemaker:Dummy): Started rhel7-2

Transition Summary:
 * Start rsc2 ( rhel7-3 )
 * Move rsc5 ( rhel7-1 -> rhel7-2 )
 * Move rsc6 ( rhel7-2 -> rhel7-1 )

Executing cluster transition:
 * Resource action: rsc2 monitor on rhel7-5
 * Resource action: rsc2 monitor on rhel7-4
 * Resource action: rsc2 monitor on rhel7-3
 * Resource action: rsc2 monitor on rhel7-2
 * Resource action: rsc2 monitor on rhel7-1
 * Resource action: rsc5 stop on rhel7-1
 * Resource action: rsc5 cancel=25000 on rhel7-2
 * Resource action: rsc6 stop on rhel7-2
 * Resource action: rsc2 start on rhel7-3
 * Resource action: rsc5 monitor=25000 on rhel7-1
 * Resource action: rsc5 start on rhel7-2
 * Resource action: rsc6 start on rhel7-1
 * Resource action: rsc2 monitor=90000 on rhel7-3
 * Resource action: rsc2 monitor=40000 on rhel7-3
 * Resource action: rsc5 monitor=20000 on rhel7-2
 * Resource action: rsc6 monitor=28000 on rhel7-1
Using the original execution date of: 2018-03-21 23:12:42Z

Revised cluster status:
Online: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]

 Fencing (stonith:fence_xvm): Started rhel7-1
 rsc1 (ocf::pacemaker:Dummy): Started rhel7-2
 rsc2 (ocf::pacemaker:Dummy): Started rhel7-3
 rsc3 (ocf::pacemaker:Dummy): Started rhel7-4
 rsc4 (ocf::pacemaker:Dummy): FAILED rhel7-5 (blocked)
 rsc5 (ocf::pacemaker:Dummy): Started rhel7-2
 rsc6 (ocf::pacemaker:Dummy): Started rhel7-1
diff --git a/cts/scheduler/load-stopped-loop-2.summary b/cts/scheduler/load-stopped-loop-2.summary
index 5a26b60a90..09e85cc79b 100644
--- a/cts/scheduler/load-stopped-loop-2.summary
+++ b/cts/scheduler/load-stopped-loop-2.summary
@@ -1,111 +1,111 @@
-4 of 25 resources DISABLED and 0 BLOCKED from being started due to failures
+4 of 25 resource instances DISABLED and 0 BLOCKED from further action due to failure

Current cluster status:
Online: [ xfc0 xfc1 xfc2 xfc3 ]

 Clone Set: cl_glusterd [p_glusterd]
     Started: [ xfc0 xfc1 xfc2 xfc3 ]
 Clone Set: cl_p_bl_glusterfs [p_bl_glusterfs]
     Started: [ xfc0 xfc1 xfc2 xfc3 ]
 xu-test8 (ocf::heartbeat:Xen): Started xfc3
 xu-test1 (ocf::heartbeat:Xen): Started xfc3
 xu-test10 (ocf::heartbeat:Xen): Started xfc3
 xu-test11 (ocf::heartbeat:Xen): Started xfc3
 xu-test12 (ocf::heartbeat:Xen): Started xfc2
 xu-test13 (ocf::heartbeat:Xen): Stopped
 xu-test14 (ocf::heartbeat:Xen): Stopped (disabled)
 xu-test15 (ocf::heartbeat:Xen): Stopped (disabled)
 xu-test16 (ocf::heartbeat:Xen): Stopped (disabled)
 xu-test17 (ocf::heartbeat:Xen): Stopped (disabled)
 xu-test2 (ocf::heartbeat:Xen): Started xfc3
 xu-test3 (ocf::heartbeat:Xen): Started xfc1
 xu-test4 (ocf::heartbeat:Xen): Started xfc0
 xu-test5 (ocf::heartbeat:Xen): Started xfc2
 xu-test6 (ocf::heartbeat:Xen): Started xfc3
 xu-test7 (ocf::heartbeat:Xen): Started xfc1
 xu-test9 (ocf::heartbeat:Xen): Started xfc0

Transition Summary:
 * Migrate xu-test12 ( xfc2 -> xfc3 )
 * Migrate xu-test2 ( xfc3 -> xfc1 )
 * Migrate xu-test3 ( xfc1 -> xfc0 )
 * Migrate xu-test4 ( xfc0 -> xfc2 )
 * Migrate xu-test5 ( xfc2 -> xfc3 )
 * Migrate xu-test6 ( xfc3 -> xfc1 )
 * Migrate xu-test7 ( xfc1 -> xfc0 )
 * Migrate xu-test9 ( xfc0 -> xfc2 )
 * Start xu-test13 ( xfc3 )

Executing cluster transition:
 * Resource action: xu-test4 migrate_to on xfc0
 * Resource action: xu-test5 migrate_to on xfc2
 * Resource action: xu-test6 migrate_to on xfc3
 * Resource action: xu-test7 migrate_to on xfc1
 * Resource action: xu-test9 migrate_to on xfc0
 * Resource action: xu-test4 migrate_from on xfc2
 * Resource action: xu-test4 stop on xfc0
 * Resource action: xu-test5 migrate_from on xfc3
 * Resource action: xu-test5 stop on xfc2
 * Resource action: xu-test6 migrate_from on xfc1
 * Resource action: xu-test6 stop on xfc3
 * Resource action: xu-test7 migrate_from on xfc0
 * Resource action: xu-test7 stop on xfc1
 * Resource action: xu-test9 migrate_from on xfc2
 * Resource action: xu-test9 stop on xfc0
 * Pseudo action: load_stopped_xfc0
 * Resource action: xu-test3 migrate_to on xfc1
 * Pseudo action: xu-test7_start_0
 * Resource action: xu-test3 migrate_from on xfc0
 * Resource action: xu-test3 stop on xfc1
 * Resource action: xu-test7 monitor=10000 on xfc0
 * Pseudo action: load_stopped_xfc1
 * Resource action: xu-test2 migrate_to on xfc3
 * Pseudo action: xu-test3_start_0
 * Pseudo action: xu-test6_start_0
 * Resource action: xu-test2 migrate_from on xfc1
 * Resource action: xu-test2 stop on xfc3
 * Resource action: xu-test3 monitor=10000 on xfc0
 * Resource action: xu-test6 monitor=10000 on xfc1
 * Pseudo action: load_stopped_xfc3
 * Resource action: xu-test12 migrate_to on xfc2
 * Pseudo action: xu-test2_start_0
 * Pseudo action: xu-test5_start_0
 * Resource action: xu-test13 start on xfc3
 * Resource action: xu-test12 migrate_from on xfc3
 * Resource action: xu-test12 stop on xfc2
 * Resource action: xu-test2 monitor=10000 on xfc1
 * Resource action: xu-test5 monitor=10000 on xfc3
 * Resource action: xu-test13 monitor=10000 on xfc3
 * Pseudo action: load_stopped_xfc2
 * Pseudo action: xu-test12_start_0
 * Pseudo action: xu-test4_start_0
 * Pseudo action: xu-test9_start_0
 * Resource action: xu-test12 monitor=10000 on xfc3
 * Resource action: xu-test4 monitor=10000 on xfc2
 * Resource action: xu-test9 monitor=10000 on xfc2

Revised cluster status:
Online: [ xfc0 xfc1 xfc2 xfc3 ]

 Clone Set: cl_glusterd [p_glusterd]
     Started: [ xfc0 xfc1 xfc2 xfc3 ]
 Clone Set: cl_p_bl_glusterfs [p_bl_glusterfs]
     Started: [ xfc0 xfc1 xfc2 xfc3 ]
 xu-test8 (ocf::heartbeat:Xen): Started xfc3
 xu-test1 (ocf::heartbeat:Xen): Started xfc3
 xu-test10 (ocf::heartbeat:Xen): Started xfc3
 xu-test11 (ocf::heartbeat:Xen): Started xfc3
 xu-test12 (ocf::heartbeat:Xen): Started xfc3
 xu-test13 (ocf::heartbeat:Xen): Started xfc3
 xu-test14 (ocf::heartbeat:Xen): Stopped (disabled)
 xu-test15 (ocf::heartbeat:Xen): Stopped (disabled)
 xu-test16 (ocf::heartbeat:Xen): Stopped (disabled)
 xu-test17 (ocf::heartbeat:Xen): Stopped (disabled)
 xu-test2 (ocf::heartbeat:Xen): Started xfc1
 xu-test3 (ocf::heartbeat:Xen): Started xfc0
 xu-test4 (ocf::heartbeat:Xen): Started xfc2
 xu-test5 (ocf::heartbeat:Xen): Started xfc3
 xu-test6 (ocf::heartbeat:Xen): Started xfc1
 xu-test7 (ocf::heartbeat:Xen): Started xfc0
 xu-test9 (ocf::heartbeat:Xen): Started xfc2
diff --git a/cts/scheduler/load-stopped-loop.summary b/cts/scheduler/load-stopped-loop.summary
index 9106e69a9f..908fbd3ee8 100644
--- a/cts/scheduler/load-stopped-loop.summary
+++ b/cts/scheduler/load-stopped-loop.summary
@@ -1,334 +1,334 @@
-32 of 308 resources DISABLED and 0 BLOCKED from being started due to failures
+32 of 308 resource instances DISABLED and 0 BLOCKED from further action due to failure

Current cluster status:
Online: [ mgmt01 v03-a v03-b ]

 stonith-v02-a (stonith:fence_ipmilan): Stopped (disabled)
 stonith-v02-b (stonith:fence_ipmilan): Stopped (disabled)
 stonith-v02-c (stonith:fence_ipmilan): Stopped (disabled)
 stonith-v02-d (stonith:fence_ipmilan): Stopped (disabled)
 stonith-mgmt01 (stonith:fence_xvm): Started v03-b
 stonith-mgmt02 (stonith:meatware): Started mgmt01
 stonith-v03-c (stonith:fence_ipmilan): Stopped (disabled)
 stonith-v03-a (stonith:fence_ipmilan): Started v03-b
 stonith-v03-b (stonith:fence_ipmilan): Started v03-a
 stonith-v03-d (stonith:fence_ipmilan): Stopped (disabled)
 Clone Set: cl-clvmd [clvmd]
     Started: [ mgmt01 v03-a v03-b ]
 Clone Set: cl-dlm [dlm]
     Started: [ mgmt01 v03-a v03-b ]
 Clone Set: cl-iscsid [iscsid]
     Started: [ mgmt01 v03-a v03-b ]
 Clone Set: cl-libvirtd [libvirtd]
     Started: [ v03-a v03-b ]
     Stopped: [ mgmt01 ]
 Clone Set: cl-multipathd [multipathd]
     Started: [ mgmt01 v03-a v03-b ]
 Clone Set: cl-node-params [node-params]
     Started: [ v03-a v03-b ]
     Stopped: [ mgmt01 ]
 Clone Set: cl-vlan1-if [vlan1-if]
     Started: [ v03-a v03-b ]
     Stopped: [ mgmt01 ]
 Clone Set: cl-vlan101-if [vlan101-if]
     Started: [ v03-a v03-b ]
     Stopped: [ mgmt01 ]
 Clone Set: cl-vlan102-if [vlan102-if]
     Started: [ v03-a v03-b ]
     Stopped: [ mgmt01 ]
 Clone Set: cl-vlan103-if [vlan103-if]
     Started: [ v03-a v03-b ]
     Stopped: [ mgmt01 ]
 Clone Set: cl-vlan104-if [vlan104-if]
     Started: [ v03-a v03-b ]
     Stopped: [ mgmt01 ]
 Clone Set: cl-vlan3-if [vlan3-if]
     Started: [ v03-a v03-b ]
     Stopped: [ mgmt01 ]
 Clone Set: cl-vlan4-if [vlan4-if]
     Started: [ v03-a v03-b ]
     Stopped: [ mgmt01 ]
 Clone Set: cl-vlan5-if [vlan5-if]
     Started: [ v03-a v03-b ]
     Stopped: [ mgmt01 ]
 Clone Set: cl-vlan900-if [vlan900-if]
     Started: [ v03-a v03-b ]
     Stopped: [ mgmt01 ]
 Clone Set: cl-vlan909-if [vlan909-if]
     Started: [ v03-a v03-b ]
     Stopped: [ mgmt01 ]
 Clone Set: cl-libvirt-images-fs [libvirt-images-fs]
     Started: [ mgmt01 v03-a v03-b ]
 Clone Set: cl-libvirt-install-fs [libvirt-install-fs]
     Started: [ mgmt01 v03-a v03-b ]
 Clone Set: cl-vds-ok-pool-0-iscsi [vds-ok-pool-0-iscsi]
     Started: [ mgmt01 v03-a v03-b ]
 Clone Set: cl-vds-ok-pool-0-vg [vds-ok-pool-0-vg]
     Started: [ mgmt01 v03-a v03-b ]
 Clone Set: cl-vds-ok-pool-1-iscsi [vds-ok-pool-1-iscsi]
     Started: [ mgmt01 v03-a v03-b ]
 Clone Set: cl-vds-ok-pool-1-vg [vds-ok-pool-1-vg]
     Started: [ mgmt01 v03-a v03-b ]
 Clone Set: cl-libvirt-images-pool [libvirt-images-pool]
     Started: [ v03-a v03-b ]
     Stopped: [ mgmt01 ]
 Clone Set: cl-vds-ok-pool-0-pool [vds-ok-pool-0-pool]
     Started: [ v03-a v03-b ]
     Stopped: [ mgmt01 ]
 Clone Set: cl-vds-ok-pool-1-pool [vds-ok-pool-1-pool]
     Started: [ v03-a v03-b ]
     Stopped: [ mgmt01 ]
 git.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled)
 vd01-a.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Started v03-a
 vd01-b.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Started v03-a
 vd01-c.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Started v03-b
 vd01-d.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped
 vd02-a.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled)
 vd02-b.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled)
 vd02-c.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled)
 vd02-d.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled)
 vd03-a.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled)
 vd03-b.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled)
 vd03-c.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled)
 vd03-d.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled)
 vd04-a.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled)
 vd04-b.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled)
 vd04-c.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled)
 vd04-d.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled)
 f13-x64-devel.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Started v03-b
 eu2.ca-pages.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled)
 zakaz.transferrus.ru-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled)
 Clone Set: cl-vlan200-if [vlan200-if]
     Started: [ v03-a v03-b ]
     Stopped: [ mgmt01 ]
 lenny-x32-devel-vm (ocf::vds-ok:VirtualDomain): Started v03-a
 dist.express-consult.org-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled)
 eu1.ca-pages.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled)
 gotin-bbb-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled)
 maxb-c55-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled)
 metae.ru-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled)
 rodovoepomestie.ru-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled)
 ubuntu9.10-gotin-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled)
 c5-x64-devel.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Started v03-b
 Clone Set: cl-mcast-test-net [mcast-test-net]
     Started: [ v03-a v03-b ]
     Stopped: [ mgmt01 ]
 dist.fly-uni.org-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled)
 ktstudio.net-vm (ocf::vds-ok:VirtualDomain): Started v03-a
 cloudsrv.credo-dialogue.com-vm (ocf::vds-ok:VirtualDomain): Started v03-b
 c6-x64-devel.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Started v03-a
 lustre01-right.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Started v03-b
 lustre02-right.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Started v03-a
 lustre03-left.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Started v03-b
 lustre03-right.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Started v03-a
 lustre04-left.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Started v03-b
 lustre04-right.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Started v03-a
 Clone Set: cl-mcast-anbriz-net [mcast-anbriz-net]
     Started: [ v03-a v03-b ]
     Stopped: [ mgmt01 ]
 gw.anbriz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Started v03-b
 license.anbriz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Started v03-b
 terminal.anbriz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled)
 lustre01-left.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Started v03-a
 lustre02-left.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Started v03-b
 test-01.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Started v03-a
 Clone Set: cl-libvirt-qpid [libvirt-qpid]
     Started: [ v03-a v03-b ]
     Stopped: [ mgmt01 ]
 gw.gleb.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled)
 gw.gotin.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled)
 terminal0.anbriz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Started v03-a
 Clone Set: cl-mcast-gleb-net [mcast-gleb-net]
     Started: [ v03-a v03-b ]
     Stopped: [ mgmt01 ]

Transition Summary:
 * Reload vds-ok-pool-0-iscsi:0 ( mgmt01 )
 * Reload vds-ok-pool-0-iscsi:1 ( v03-b )
 * Reload vds-ok-pool-0-iscsi:2 ( v03-a )
 * Reload vds-ok-pool-1-iscsi:0 ( mgmt01 )
 * Reload vds-ok-pool-1-iscsi:1 ( v03-b )
 * Reload vds-ok-pool-1-iscsi:2 ( v03-a )
 * Restart stonith-v03-b ( v03-a ) due to resource definition change
 * Restart stonith-v03-a ( v03-b ) due to resource definition change
 * Migrate license.anbriz.vds-ok.com-vm ( v03-b -> v03-a )
 * Migrate terminal0.anbriz.vds-ok.com-vm ( v03-a -> v03-b )
 * Start vd01-d.cdev.ttc.prague.cz.vds-ok.com-vm ( v03-a )

Executing cluster transition:
 * Resource action: vds-ok-pool-0-iscsi:1 reload on mgmt01
 * Resource action: vds-ok-pool-0-iscsi:1 monitor=30000 on mgmt01
 * Resource action: vds-ok-pool-0-iscsi:0 reload on v03-b
 * Resource action: vds-ok-pool-0-iscsi:0 monitor=30000 on v03-b
 * Resource action: vds-ok-pool-0-iscsi:2 reload on v03-a
 * Resource action: vds-ok-pool-0-iscsi:2 monitor=30000 on v03-a
 * Resource action: vds-ok-pool-1-iscsi:1 reload on mgmt01
 * Resource action: vds-ok-pool-1-iscsi:1 monitor=30000 on mgmt01
 * Resource action: vds-ok-pool-1-iscsi:0 reload on v03-b
 * Resource action: vds-ok-pool-1-iscsi:0 monitor=30000 on v03-b
 * Resource action: vds-ok-pool-1-iscsi:2 reload on v03-a
 * Resource action: vds-ok-pool-1-iscsi:2 monitor=30000 on v03-a
 * Resource action: stonith-v03-b stop on v03-a
 * Resource action: stonith-v03-b start on v03-a
 * Resource action: stonith-v03-b monitor=60000 on v03-a
 * Resource action: stonith-v03-a stop on v03-b
 * Resource action: stonith-v03-a start on v03-b
 * Resource action: stonith-v03-a monitor=60000 on v03-b
 * Resource action: terminal0.anbriz.vds-ok.com-vm migrate_to on v03-a
 * Pseudo action: load_stopped_mgmt01
 * Resource action: terminal0.anbriz.vds-ok.com-vm migrate_from on v03-b
 * Resource action: terminal0.anbriz.vds-ok.com-vm stop on v03-a
 * Pseudo action: load_stopped_v03-a
 * Resource action: license.anbriz.vds-ok.com-vm migrate_to on v03-b
 * Resource action: vd01-d.cdev.ttc.prague.cz.vds-ok.com-vm start on v03-a
 * Resource action: license.anbriz.vds-ok.com-vm migrate_from on v03-a
 * Resource action: license.anbriz.vds-ok.com-vm stop on v03-b
 * Resource action: vd01-d.cdev.ttc.prague.cz.vds-ok.com-vm monitor=10000 on v03-a
 * Pseudo action: load_stopped_v03-b
 * Pseudo action: license.anbriz.vds-ok.com-vm_start_0
 * Pseudo action: terminal0.anbriz.vds-ok.com-vm_start_0
 * Resource action: license.anbriz.vds-ok.com-vm monitor=10000 on v03-a
 * Resource action: terminal0.anbriz.vds-ok.com-vm monitor=10000 on v03-b

Revised cluster status:
Online: [ mgmt01 v03-a v03-b ]

 stonith-v02-a (stonith:fence_ipmilan): Stopped (disabled)
 stonith-v02-b (stonith:fence_ipmilan): Stopped (disabled)
 stonith-v02-c (stonith:fence_ipmilan): Stopped (disabled)
 stonith-v02-d (stonith:fence_ipmilan): Stopped (disabled)
 stonith-mgmt01 (stonith:fence_xvm): Started v03-b
 stonith-mgmt02 (stonith:meatware): Started mgmt01
 stonith-v03-c (stonith:fence_ipmilan): Stopped (disabled)
 stonith-v03-a (stonith:fence_ipmilan): Started v03-b
 stonith-v03-b (stonith:fence_ipmilan): Started v03-a
 stonith-v03-d (stonith:fence_ipmilan): Stopped (disabled)
 Clone Set: cl-clvmd [clvmd]
     Started: [ mgmt01 v03-a v03-b ]
 Clone Set: cl-dlm [dlm]
     Started: [ mgmt01 v03-a v03-b ]
 Clone Set: cl-iscsid [iscsid]
     Started: [ mgmt01 v03-a v03-b ]
 Clone Set: cl-libvirtd [libvirtd]
     Started: [ v03-a v03-b ]
     Stopped: [ mgmt01 ]
 Clone Set: cl-multipathd [multipathd]
     Started: [ mgmt01 v03-a v03-b ]
 Clone Set: cl-node-params [node-params]
     Started: [ v03-a v03-b ]
     Stopped: [ mgmt01 ]
 Clone Set: cl-vlan1-if [vlan1-if]
     Started: [ v03-a v03-b ]
     Stopped: [ mgmt01 ]
 Clone Set: cl-vlan101-if [vlan101-if]
     Started: [ v03-a v03-b ]
     Stopped: [ mgmt01 ]
 Clone Set: cl-vlan102-if [vlan102-if]
     Started: [ v03-a v03-b ]
Stopped: [ mgmt01 ] Clone Set: cl-vlan103-if [vlan103-if] Started: [ v03-a v03-b ] Stopped: [ mgmt01 ] Clone Set: cl-vlan104-if [vlan104-if] Started: [ v03-a v03-b ] Stopped: [ mgmt01 ] Clone Set: cl-vlan3-if [vlan3-if] Started: [ v03-a v03-b ] Stopped: [ mgmt01 ] Clone Set: cl-vlan4-if [vlan4-if] Started: [ v03-a v03-b ] Stopped: [ mgmt01 ] Clone Set: cl-vlan5-if [vlan5-if] Started: [ v03-a v03-b ] Stopped: [ mgmt01 ] Clone Set: cl-vlan900-if [vlan900-if] Started: [ v03-a v03-b ] Stopped: [ mgmt01 ] Clone Set: cl-vlan909-if [vlan909-if] Started: [ v03-a v03-b ] Stopped: [ mgmt01 ] Clone Set: cl-libvirt-images-fs [libvirt-images-fs] Started: [ mgmt01 v03-a v03-b ] Clone Set: cl-libvirt-install-fs [libvirt-install-fs] Started: [ mgmt01 v03-a v03-b ] Clone Set: cl-vds-ok-pool-0-iscsi [vds-ok-pool-0-iscsi] Started: [ mgmt01 v03-a v03-b ] Clone Set: cl-vds-ok-pool-0-vg [vds-ok-pool-0-vg] Started: [ mgmt01 v03-a v03-b ] Clone Set: cl-vds-ok-pool-1-iscsi [vds-ok-pool-1-iscsi] Started: [ mgmt01 v03-a v03-b ] Clone Set: cl-vds-ok-pool-1-vg [vds-ok-pool-1-vg] Started: [ mgmt01 v03-a v03-b ] Clone Set: cl-libvirt-images-pool [libvirt-images-pool] Started: [ v03-a v03-b ] Stopped: [ mgmt01 ] Clone Set: cl-vds-ok-pool-0-pool [vds-ok-pool-0-pool] Started: [ v03-a v03-b ] Stopped: [ mgmt01 ] Clone Set: cl-vds-ok-pool-1-pool [vds-ok-pool-1-pool] Started: [ v03-a v03-b ] Stopped: [ mgmt01 ] git.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vd01-a.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Started v03-a vd01-b.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Started v03-a vd01-c.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Started v03-b vd01-d.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Started v03-a vd02-a.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vd02-b.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vd02-c.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vd02-d.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vd03-a.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vd03-b.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vd03-c.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vd03-d.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vd04-a.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vd04-b.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vd04-c.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vd04-d.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) f13-x64-devel.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Started v03-b eu2.ca-pages.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) zakaz.transferrus.ru-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) Clone Set: cl-vlan200-if [vlan200-if] Started: [ v03-a v03-b ] Stopped: [ mgmt01 ] lenny-x32-devel-vm (ocf::vds-ok:VirtualDomain): Started v03-a dist.express-consult.org-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) eu1.ca-pages.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) gotin-bbb-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) maxb-c55-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) metae.ru-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) rodovoepomestie.ru-vm 
(ocf::vds-ok:VirtualDomain): Stopped (disabled) ubuntu9.10-gotin-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) c5-x64-devel.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Started v03-b Clone Set: cl-mcast-test-net [mcast-test-net] Started: [ v03-a v03-b ] Stopped: [ mgmt01 ] dist.fly-uni.org-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) ktstudio.net-vm (ocf::vds-ok:VirtualDomain): Started v03-a cloudsrv.credo-dialogue.com-vm (ocf::vds-ok:VirtualDomain): Started v03-b c6-x64-devel.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Started v03-a lustre01-right.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Started v03-b lustre02-right.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Started v03-a lustre03-left.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Started v03-b lustre03-right.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Started v03-a lustre04-left.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Started v03-b lustre04-right.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Started v03-a Clone Set: cl-mcast-anbriz-net [mcast-anbriz-net] Started: [ v03-a v03-b ] Stopped: [ mgmt01 ] gw.anbriz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Started v03-b license.anbriz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Started v03-a terminal.anbriz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) lustre01-left.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Started v03-a lustre02-left.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Started v03-b test-01.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Started v03-a Clone Set: cl-libvirt-qpid [libvirt-qpid] Started: [ v03-a v03-b ] Stopped: [ mgmt01 ] gw.gleb.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) gw.gotin.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) terminal0.anbriz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Started v03-b Clone Set: cl-mcast-gleb-net [mcast-gleb-net] Started: [ v03-a v03-b ] Stopped: [ mgmt01 ] diff --git a/cts/scheduler/master-asymmetrical-order.summary b/cts/scheduler/master-asymmetrical-order.summary index 0fcd24375a..3087bd369b 100644 --- a/cts/scheduler/master-asymmetrical-order.summary +++ b/cts/scheduler/master-asymmetrical-order.summary @@ -1,34 +1,34 @@ -2 of 4 resources DISABLED and 0 BLOCKED from being started due to failures +2 of 4 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: Online: [ node1 node2 ] Clone Set: ms1 [rsc1] (promotable) Masters: [ node1 ] Slaves: [ node2 ] Clone Set: ms2 [rsc2] (promotable) Masters: [ node2 ] Slaves: [ node1 ] Transition Summary: * Stop rsc1:0 ( Master node1 ) due to node availability * Stop rsc1:1 ( Slave node2 ) due to node availability Executing cluster transition: * Pseudo action: ms1_demote_0 * Resource action: rsc1:0 demote on node1 * Pseudo action: ms1_demoted_0 * Pseudo action: ms1_stop_0 * Resource action: rsc1:0 stop on node1 * Resource action: rsc1:1 stop on node2 * Pseudo action: ms1_stopped_0 Revised cluster status: Online: [ node1 node2 ] Clone Set: ms1 [rsc1] (promotable) Stopped (disabled): [ node1 node2 ] Clone Set: ms2 [rsc2] (promotable) Masters: [ node2 ] Slaves: [ node1 ] diff --git a/cts/scheduler/master-demote-block.summary b/cts/scheduler/master-demote-block.summary index 3873f9f346..246d55e772 100644 --- a/cts/scheduler/master-demote-block.summary +++ b/cts/scheduler/master-demote-block.summary @@ -1,22 +1,23 @@ +0 of 2 resource instances DISABLED and 1 BLOCKED from further action due to failure Current cluster status: Node dl380g5c (21c624bd-c426-43dc-9665-bbfb92054bcd): standby Online: [ dl380g5d ] 
Clone Set: stateful [dummy] (promotable) dummy (ocf::pacemaker:Stateful): FAILED Master dl380g5c (blocked) Slaves: [ dl380g5d ] Transition Summary: Executing cluster transition: * Resource action: dummy:1 monitor=20000 on dl380g5d Revised cluster status: Node dl380g5c (21c624bd-c426-43dc-9665-bbfb92054bcd): standby Online: [ dl380g5d ] Clone Set: stateful [dummy] (promotable) dummy (ocf::pacemaker:Stateful): FAILED Master dl380g5c (blocked) Slaves: [ dl380g5d ] diff --git a/cts/scheduler/master-depend.summary b/cts/scheduler/master-depend.summary index 03fb495e1b..5b43e3e355 100644 --- a/cts/scheduler/master-depend.summary +++ b/cts/scheduler/master-depend.summary @@ -1,59 +1,59 @@ -3 of 10 resources DISABLED and 0 BLOCKED from being started due to failures +3 of 10 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: Online: [ vbox4 ] OFFLINE: [ vbox3 ] Clone Set: drbd [drbd0] (promotable) Stopped: [ vbox3 vbox4 ] Clone Set: cman_clone [cman] Stopped: [ vbox3 vbox4 ] Clone Set: clvmd_clone [clvmd] Stopped: [ vbox3 vbox4 ] vmnci36 (ocf::heartbeat:vm): Stopped vmnci37 (ocf::heartbeat:vm): Stopped (disabled) vmnci38 (ocf::heartbeat:vm): Stopped (disabled) vmnci55 (ocf::heartbeat:vm): Stopped (disabled) Transition Summary: * Start drbd0:0 ( vbox4 ) * Start cman:0 ( vbox4 ) Executing cluster transition: * Resource action: drbd0:0 monitor on vbox4 * Pseudo action: drbd_pre_notify_start_0 * Resource action: cman:0 monitor on vbox4 * Pseudo action: cman_clone_start_0 * Resource action: clvmd:0 monitor on vbox4 * Resource action: vmnci36 monitor on vbox4 * Resource action: vmnci37 monitor on vbox4 * Resource action: vmnci38 monitor on vbox4 * Resource action: vmnci55 monitor on vbox4 * Pseudo action: drbd_confirmed-pre_notify_start_0 * Pseudo action: drbd_start_0 * Resource action: cman:0 start on vbox4 * Pseudo action: cman_clone_running_0 * Resource action: drbd0:0 start on vbox4 * Pseudo action: drbd_running_0 * Pseudo action: drbd_post_notify_running_0 * Resource action: drbd0:0 notify on vbox4 * Pseudo action: drbd_confirmed-post_notify_running_0 * Resource action: drbd0:0 monitor=60000 on vbox4 Revised cluster status: Online: [ vbox4 ] OFFLINE: [ vbox3 ] Clone Set: drbd [drbd0] (promotable) Slaves: [ vbox4 ] Stopped: [ vbox3 ] Clone Set: cman_clone [cman] Started: [ vbox4 ] Stopped: [ vbox3 ] Clone Set: clvmd_clone [clvmd] Stopped: [ vbox3 vbox4 ] vmnci36 (ocf::heartbeat:vm): Stopped vmnci37 (ocf::heartbeat:vm): Stopped (disabled) vmnci38 (ocf::heartbeat:vm): Stopped (disabled) vmnci55 (ocf::heartbeat:vm): Stopped (disabled) diff --git a/cts/scheduler/master-probed-score.summary b/cts/scheduler/master-probed-score.summary index d91dd71c79..aa1e3a131e 100644 --- a/cts/scheduler/master-probed-score.summary +++ b/cts/scheduler/master-probed-score.summary @@ -1,326 +1,326 @@ -2 of 60 resources DISABLED and 0 BLOCKED from being started due to failures +1 of 60 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: Online: [ hypatia-corosync.nevis.columbia.edu orestes-corosync.nevis.columbia.edu ] Clone Set: AdminClone [AdminDrbd] (promotable) Stopped: [ hypatia-corosync.nevis.columbia.edu orestes-corosync.nevis.columbia.edu ] CronAmbientTemperature (ocf::heartbeat:symlink): Stopped StonithHypatia (stonith:fence_nut): Stopped StonithOrestes (stonith:fence_nut): Stopped Resource Group: DhcpGroup SymlinkDhcpdConf (ocf::heartbeat:symlink): Stopped SymlinkSysconfigDhcpd (ocf::heartbeat:symlink): Stopped 
SymlinkDhcpdLeases (ocf::heartbeat:symlink): Stopped Dhcpd (lsb:dhcpd): Stopped (disabled) DhcpIP (ocf::heartbeat:IPaddr2): Stopped Clone Set: CupsClone [CupsGroup] Stopped: [ hypatia-corosync.nevis.columbia.edu orestes-corosync.nevis.columbia.edu ] Clone Set: IPClone [IPGroup] (unique) Resource Group: IPGroup:0 ClusterIP:0 (ocf::heartbeat:IPaddr2): Stopped ClusterIPLocal:0 (ocf::heartbeat:IPaddr2): Stopped ClusterIPSandbox:0 (ocf::heartbeat:IPaddr2): Stopped Resource Group: IPGroup:1 ClusterIP:1 (ocf::heartbeat:IPaddr2): Stopped ClusterIPLocal:1 (ocf::heartbeat:IPaddr2): Stopped ClusterIPSandbox:1 (ocf::heartbeat:IPaddr2): Stopped Clone Set: LibvirtdClone [LibvirtdGroup] Stopped: [ hypatia-corosync.nevis.columbia.edu orestes-corosync.nevis.columbia.edu ] Clone Set: TftpClone [TftpGroup] Stopped: [ hypatia-corosync.nevis.columbia.edu orestes-corosync.nevis.columbia.edu ] Clone Set: ExportsClone [ExportsGroup] Stopped: [ hypatia-corosync.nevis.columbia.edu orestes-corosync.nevis.columbia.edu ] Clone Set: FilesystemClone [FilesystemGroup] Stopped: [ hypatia-corosync.nevis.columbia.edu orestes-corosync.nevis.columbia.edu ] KVM-guest (ocf::heartbeat:VirtualDomain): Stopped Proxy (ocf::heartbeat:VirtualDomain): Stopped Transition Summary: * Promote AdminDrbd:0 ( Stopped -> Master hypatia-corosync.nevis.columbia.edu ) * Promote AdminDrbd:1 ( Stopped -> Master orestes-corosync.nevis.columbia.edu ) * Start CronAmbientTemperature ( hypatia-corosync.nevis.columbia.edu ) * Start StonithHypatia ( orestes-corosync.nevis.columbia.edu ) * Start StonithOrestes ( hypatia-corosync.nevis.columbia.edu ) * Start SymlinkDhcpdConf ( orestes-corosync.nevis.columbia.edu ) * Start SymlinkSysconfigDhcpd ( orestes-corosync.nevis.columbia.edu ) * Start SymlinkDhcpdLeases ( orestes-corosync.nevis.columbia.edu ) * Start SymlinkUsrShareCups:0 ( hypatia-corosync.nevis.columbia.edu ) * Start SymlinkCupsdConf:0 ( hypatia-corosync.nevis.columbia.edu ) * Start Cups:0 ( hypatia-corosync.nevis.columbia.edu ) * Start SymlinkUsrShareCups:1 ( orestes-corosync.nevis.columbia.edu ) * Start SymlinkCupsdConf:1 ( orestes-corosync.nevis.columbia.edu ) * Start Cups:1 ( orestes-corosync.nevis.columbia.edu ) * Start ClusterIP:0 ( hypatia-corosync.nevis.columbia.edu ) * Start ClusterIPLocal:0 ( hypatia-corosync.nevis.columbia.edu ) * Start ClusterIPSandbox:0 ( hypatia-corosync.nevis.columbia.edu ) * Start ClusterIP:1 ( orestes-corosync.nevis.columbia.edu ) * Start ClusterIPLocal:1 ( orestes-corosync.nevis.columbia.edu ) * Start ClusterIPSandbox:1 ( orestes-corosync.nevis.columbia.edu ) * Start SymlinkEtcLibvirt:0 ( hypatia-corosync.nevis.columbia.edu ) * Start Libvirtd:0 ( hypatia-corosync.nevis.columbia.edu ) * Start SymlinkEtcLibvirt:1 ( orestes-corosync.nevis.columbia.edu ) * Start Libvirtd:1 ( orestes-corosync.nevis.columbia.edu ) * Start SymlinkTftp:0 ( hypatia-corosync.nevis.columbia.edu ) * Start Xinetd:0 ( hypatia-corosync.nevis.columbia.edu ) * Start SymlinkTftp:1 ( orestes-corosync.nevis.columbia.edu ) * Start Xinetd:1 ( orestes-corosync.nevis.columbia.edu ) * Start ExportMail:0 ( hypatia-corosync.nevis.columbia.edu ) * Start ExportMailInbox:0 ( hypatia-corosync.nevis.columbia.edu ) * Start ExportMailFolders:0 ( hypatia-corosync.nevis.columbia.edu ) * Start ExportMailForward:0 ( hypatia-corosync.nevis.columbia.edu ) * Start ExportMailProcmailrc:0 ( hypatia-corosync.nevis.columbia.edu ) * Start ExportUsrNevis:0 ( hypatia-corosync.nevis.columbia.edu ) * Start ExportUsrNevisOffsite:0 ( hypatia-corosync.nevis.columbia.edu ) * Start 
ExportWWW:0 ( hypatia-corosync.nevis.columbia.edu ) * Start ExportMail:1 ( orestes-corosync.nevis.columbia.edu ) * Start ExportMailInbox:1 ( orestes-corosync.nevis.columbia.edu ) * Start ExportMailFolders:1 ( orestes-corosync.nevis.columbia.edu ) * Start ExportMailForward:1 ( orestes-corosync.nevis.columbia.edu ) * Start ExportMailProcmailrc:1 ( orestes-corosync.nevis.columbia.edu ) * Start ExportUsrNevis:1 ( orestes-corosync.nevis.columbia.edu ) * Start ExportUsrNevisOffsite:1 ( orestes-corosync.nevis.columbia.edu ) * Start ExportWWW:1 ( orestes-corosync.nevis.columbia.edu ) * Start AdminLvm:0 ( hypatia-corosync.nevis.columbia.edu ) * Start FSUsrNevis:0 ( hypatia-corosync.nevis.columbia.edu ) * Start FSVarNevis:0 ( hypatia-corosync.nevis.columbia.edu ) * Start FSVirtualMachines:0 ( hypatia-corosync.nevis.columbia.edu ) * Start FSMail:0 ( hypatia-corosync.nevis.columbia.edu ) * Start FSWork:0 ( hypatia-corosync.nevis.columbia.edu ) * Start AdminLvm:1 ( orestes-corosync.nevis.columbia.edu ) * Start FSUsrNevis:1 ( orestes-corosync.nevis.columbia.edu ) * Start FSVarNevis:1 ( orestes-corosync.nevis.columbia.edu ) * Start FSVirtualMachines:1 ( orestes-corosync.nevis.columbia.edu ) * Start FSMail:1 ( orestes-corosync.nevis.columbia.edu ) * Start FSWork:1 ( orestes-corosync.nevis.columbia.edu ) * Start KVM-guest ( hypatia-corosync.nevis.columbia.edu ) * Start Proxy ( orestes-corosync.nevis.columbia.edu ) Executing cluster transition: * Pseudo action: AdminClone_pre_notify_start_0 * Resource action: StonithHypatia start on orestes-corosync.nevis.columbia.edu * Resource action: StonithOrestes start on hypatia-corosync.nevis.columbia.edu * Resource action: SymlinkEtcLibvirt:0 monitor on hypatia-corosync.nevis.columbia.edu * Resource action: Libvirtd:0 monitor on orestes-corosync.nevis.columbia.edu * Resource action: Libvirtd:0 monitor on hypatia-corosync.nevis.columbia.edu * Resource action: SymlinkTftp:0 monitor on hypatia-corosync.nevis.columbia.edu * Resource action: Xinetd:0 monitor on hypatia-corosync.nevis.columbia.edu * Resource action: SymlinkTftp:1 monitor on orestes-corosync.nevis.columbia.edu * Resource action: Xinetd:1 monitor on orestes-corosync.nevis.columbia.edu * Resource action: ExportMail:0 monitor on hypatia-corosync.nevis.columbia.edu * Resource action: ExportMailInbox:0 monitor on hypatia-corosync.nevis.columbia.edu * Resource action: ExportMailFolders:0 monitor on hypatia-corosync.nevis.columbia.edu * Resource action: ExportMailForward:0 monitor on hypatia-corosync.nevis.columbia.edu * Resource action: ExportMailProcmailrc:0 monitor on hypatia-corosync.nevis.columbia.edu * Resource action: ExportUsrNevis:0 monitor on hypatia-corosync.nevis.columbia.edu * Resource action: ExportUsrNevisOffsite:0 monitor on hypatia-corosync.nevis.columbia.edu * Resource action: ExportWWW:0 monitor on hypatia-corosync.nevis.columbia.edu * Resource action: ExportMail:1 monitor on orestes-corosync.nevis.columbia.edu * Resource action: ExportMailInbox:1 monitor on orestes-corosync.nevis.columbia.edu * Resource action: ExportMailFolders:1 monitor on orestes-corosync.nevis.columbia.edu * Resource action: ExportMailForward:1 monitor on orestes-corosync.nevis.columbia.edu * Resource action: ExportMailProcmailrc:1 monitor on orestes-corosync.nevis.columbia.edu * Resource action: ExportUsrNevis:1 monitor on orestes-corosync.nevis.columbia.edu * Resource action: ExportUsrNevisOffsite:1 monitor on orestes-corosync.nevis.columbia.edu * Resource action: ExportWWW:1 monitor on 
orestes-corosync.nevis.columbia.edu * Resource action: AdminLvm:0 monitor on hypatia-corosync.nevis.columbia.edu * Resource action: FSUsrNevis:0 monitor on hypatia-corosync.nevis.columbia.edu * Resource action: FSVarNevis:0 monitor on hypatia-corosync.nevis.columbia.edu * Resource action: FSVirtualMachines:0 monitor on hypatia-corosync.nevis.columbia.edu * Resource action: FSMail:0 monitor on hypatia-corosync.nevis.columbia.edu * Resource action: FSWork:0 monitor on hypatia-corosync.nevis.columbia.edu * Resource action: AdminLvm:1 monitor on orestes-corosync.nevis.columbia.edu * Resource action: FSUsrNevis:1 monitor on orestes-corosync.nevis.columbia.edu * Resource action: FSVarNevis:1 monitor on orestes-corosync.nevis.columbia.edu * Resource action: FSVirtualMachines:1 monitor on orestes-corosync.nevis.columbia.edu * Resource action: FSMail:1 monitor on orestes-corosync.nevis.columbia.edu * Resource action: FSWork:1 monitor on orestes-corosync.nevis.columbia.edu * Resource action: KVM-guest monitor on orestes-corosync.nevis.columbia.edu * Resource action: KVM-guest monitor on hypatia-corosync.nevis.columbia.edu * Resource action: Proxy monitor on orestes-corosync.nevis.columbia.edu * Resource action: Proxy monitor on hypatia-corosync.nevis.columbia.edu * Pseudo action: AdminClone_confirmed-pre_notify_start_0 * Pseudo action: AdminClone_start_0 * Resource action: AdminDrbd:0 start on hypatia-corosync.nevis.columbia.edu * Resource action: AdminDrbd:1 start on orestes-corosync.nevis.columbia.edu * Pseudo action: AdminClone_running_0 * Pseudo action: AdminClone_post_notify_running_0 * Resource action: AdminDrbd:0 notify on hypatia-corosync.nevis.columbia.edu * Resource action: AdminDrbd:1 notify on orestes-corosync.nevis.columbia.edu * Pseudo action: AdminClone_confirmed-post_notify_running_0 * Pseudo action: AdminClone_pre_notify_promote_0 * Resource action: AdminDrbd:0 notify on hypatia-corosync.nevis.columbia.edu * Resource action: AdminDrbd:1 notify on orestes-corosync.nevis.columbia.edu * Pseudo action: AdminClone_confirmed-pre_notify_promote_0 * Pseudo action: AdminClone_promote_0 * Resource action: AdminDrbd:0 promote on hypatia-corosync.nevis.columbia.edu * Resource action: AdminDrbd:1 promote on orestes-corosync.nevis.columbia.edu * Pseudo action: AdminClone_promoted_0 * Pseudo action: AdminClone_post_notify_promoted_0 * Resource action: AdminDrbd:0 notify on hypatia-corosync.nevis.columbia.edu * Resource action: AdminDrbd:1 notify on orestes-corosync.nevis.columbia.edu * Pseudo action: AdminClone_confirmed-post_notify_promoted_0 * Pseudo action: FilesystemClone_start_0 * Resource action: AdminDrbd:0 monitor=59000 on hypatia-corosync.nevis.columbia.edu * Resource action: AdminDrbd:1 monitor=59000 on orestes-corosync.nevis.columbia.edu * Pseudo action: FilesystemGroup:0_start_0 * Resource action: AdminLvm:0 start on hypatia-corosync.nevis.columbia.edu * Resource action: FSUsrNevis:0 start on hypatia-corosync.nevis.columbia.edu * Resource action: FSVarNevis:0 start on hypatia-corosync.nevis.columbia.edu * Resource action: FSVirtualMachines:0 start on hypatia-corosync.nevis.columbia.edu * Resource action: FSMail:0 start on hypatia-corosync.nevis.columbia.edu * Resource action: FSWork:0 start on hypatia-corosync.nevis.columbia.edu * Pseudo action: FilesystemGroup:1_start_0 * Resource action: AdminLvm:1 start on orestes-corosync.nevis.columbia.edu * Resource action: FSUsrNevis:1 start on orestes-corosync.nevis.columbia.edu * Resource action: FSVarNevis:1 start on 
orestes-corosync.nevis.columbia.edu * Resource action: FSVirtualMachines:1 start on orestes-corosync.nevis.columbia.edu * Resource action: FSMail:1 start on orestes-corosync.nevis.columbia.edu * Resource action: FSWork:1 start on orestes-corosync.nevis.columbia.edu * Pseudo action: FilesystemGroup:0_running_0 * Resource action: AdminLvm:0 monitor=30000 on hypatia-corosync.nevis.columbia.edu * Resource action: FSUsrNevis:0 monitor=20000 on hypatia-corosync.nevis.columbia.edu * Resource action: FSVarNevis:0 monitor=20000 on hypatia-corosync.nevis.columbia.edu * Resource action: FSVirtualMachines:0 monitor=20000 on hypatia-corosync.nevis.columbia.edu * Resource action: FSMail:0 monitor=20000 on hypatia-corosync.nevis.columbia.edu * Resource action: FSWork:0 monitor=20000 on hypatia-corosync.nevis.columbia.edu * Pseudo action: FilesystemGroup:1_running_0 * Resource action: AdminLvm:1 monitor=30000 on orestes-corosync.nevis.columbia.edu * Resource action: FSUsrNevis:1 monitor=20000 on orestes-corosync.nevis.columbia.edu * Resource action: FSVarNevis:1 monitor=20000 on orestes-corosync.nevis.columbia.edu * Resource action: FSVirtualMachines:1 monitor=20000 on orestes-corosync.nevis.columbia.edu * Resource action: FSMail:1 monitor=20000 on orestes-corosync.nevis.columbia.edu * Resource action: FSWork:1 monitor=20000 on orestes-corosync.nevis.columbia.edu * Pseudo action: FilesystemClone_running_0 * Resource action: CronAmbientTemperature start on hypatia-corosync.nevis.columbia.edu * Pseudo action: DhcpGroup_start_0 * Resource action: SymlinkDhcpdConf start on orestes-corosync.nevis.columbia.edu * Resource action: SymlinkSysconfigDhcpd start on orestes-corosync.nevis.columbia.edu * Resource action: SymlinkDhcpdLeases start on orestes-corosync.nevis.columbia.edu * Pseudo action: CupsClone_start_0 * Pseudo action: IPClone_start_0 * Pseudo action: LibvirtdClone_start_0 * Pseudo action: TftpClone_start_0 * Pseudo action: ExportsClone_start_0 * Resource action: CronAmbientTemperature monitor=60000 on hypatia-corosync.nevis.columbia.edu * Resource action: SymlinkDhcpdConf monitor=60000 on orestes-corosync.nevis.columbia.edu * Resource action: SymlinkSysconfigDhcpd monitor=60000 on orestes-corosync.nevis.columbia.edu * Resource action: SymlinkDhcpdLeases monitor=60000 on orestes-corosync.nevis.columbia.edu * Pseudo action: CupsGroup:0_start_0 * Resource action: SymlinkUsrShareCups:0 start on hypatia-corosync.nevis.columbia.edu * Resource action: SymlinkCupsdConf:0 start on hypatia-corosync.nevis.columbia.edu * Resource action: Cups:0 start on hypatia-corosync.nevis.columbia.edu * Pseudo action: CupsGroup:1_start_0 * Resource action: SymlinkUsrShareCups:1 start on orestes-corosync.nevis.columbia.edu * Resource action: SymlinkCupsdConf:1 start on orestes-corosync.nevis.columbia.edu * Resource action: Cups:1 start on orestes-corosync.nevis.columbia.edu * Pseudo action: IPGroup:0_start_0 * Resource action: ClusterIP:0 start on hypatia-corosync.nevis.columbia.edu * Resource action: ClusterIPLocal:0 start on hypatia-corosync.nevis.columbia.edu * Resource action: ClusterIPSandbox:0 start on hypatia-corosync.nevis.columbia.edu * Pseudo action: IPGroup:1_start_0 * Resource action: ClusterIP:1 start on orestes-corosync.nevis.columbia.edu * Resource action: ClusterIPLocal:1 start on orestes-corosync.nevis.columbia.edu * Resource action: ClusterIPSandbox:1 start on orestes-corosync.nevis.columbia.edu * Pseudo action: LibvirtdGroup:0_start_0 * Resource action: SymlinkEtcLibvirt:0 start on 
hypatia-corosync.nevis.columbia.edu * Resource action: Libvirtd:0 start on hypatia-corosync.nevis.columbia.edu * Pseudo action: LibvirtdGroup:1_start_0 * Resource action: SymlinkEtcLibvirt:1 start on orestes-corosync.nevis.columbia.edu * Resource action: Libvirtd:1 start on orestes-corosync.nevis.columbia.edu * Pseudo action: TftpGroup:0_start_0 * Resource action: SymlinkTftp:0 start on hypatia-corosync.nevis.columbia.edu * Resource action: Xinetd:0 start on hypatia-corosync.nevis.columbia.edu * Pseudo action: TftpGroup:1_start_0 * Resource action: SymlinkTftp:1 start on orestes-corosync.nevis.columbia.edu * Resource action: Xinetd:1 start on orestes-corosync.nevis.columbia.edu * Pseudo action: ExportsGroup:0_start_0 * Resource action: ExportMail:0 start on hypatia-corosync.nevis.columbia.edu * Resource action: ExportMailInbox:0 start on hypatia-corosync.nevis.columbia.edu * Resource action: ExportMailFolders:0 start on hypatia-corosync.nevis.columbia.edu * Resource action: ExportMailForward:0 start on hypatia-corosync.nevis.columbia.edu * Resource action: ExportMailProcmailrc:0 start on hypatia-corosync.nevis.columbia.edu * Resource action: ExportUsrNevis:0 start on hypatia-corosync.nevis.columbia.edu * Resource action: ExportUsrNevisOffsite:0 start on hypatia-corosync.nevis.columbia.edu * Resource action: ExportWWW:0 start on hypatia-corosync.nevis.columbia.edu * Pseudo action: ExportsGroup:1_start_0 * Resource action: ExportMail:1 start on orestes-corosync.nevis.columbia.edu * Resource action: ExportMailInbox:1 start on orestes-corosync.nevis.columbia.edu * Resource action: ExportMailFolders:1 start on orestes-corosync.nevis.columbia.edu * Resource action: ExportMailForward:1 start on orestes-corosync.nevis.columbia.edu * Resource action: ExportMailProcmailrc:1 start on orestes-corosync.nevis.columbia.edu * Resource action: ExportUsrNevis:1 start on orestes-corosync.nevis.columbia.edu * Resource action: ExportUsrNevisOffsite:1 start on orestes-corosync.nevis.columbia.edu * Resource action: ExportWWW:1 start on orestes-corosync.nevis.columbia.edu * Pseudo action: CupsGroup:0_running_0 * Resource action: SymlinkUsrShareCups:0 monitor=60000 on hypatia-corosync.nevis.columbia.edu * Resource action: SymlinkCupsdConf:0 monitor=60000 on hypatia-corosync.nevis.columbia.edu * Resource action: Cups:0 monitor=30000 on hypatia-corosync.nevis.columbia.edu * Pseudo action: CupsGroup:1_running_0 * Resource action: SymlinkUsrShareCups:1 monitor=60000 on orestes-corosync.nevis.columbia.edu * Resource action: SymlinkCupsdConf:1 monitor=60000 on orestes-corosync.nevis.columbia.edu * Resource action: Cups:1 monitor=30000 on orestes-corosync.nevis.columbia.edu * Pseudo action: CupsClone_running_0 * Pseudo action: IPGroup:0_running_0 * Resource action: ClusterIP:0 monitor=30000 on hypatia-corosync.nevis.columbia.edu * Resource action: ClusterIPLocal:0 monitor=31000 on hypatia-corosync.nevis.columbia.edu * Resource action: ClusterIPSandbox:0 monitor=32000 on hypatia-corosync.nevis.columbia.edu * Pseudo action: IPGroup:1_running_0 * Resource action: ClusterIP:1 monitor=30000 on orestes-corosync.nevis.columbia.edu * Resource action: ClusterIPLocal:1 monitor=31000 on orestes-corosync.nevis.columbia.edu * Resource action: ClusterIPSandbox:1 monitor=32000 on orestes-corosync.nevis.columbia.edu * Pseudo action: IPClone_running_0 * Pseudo action: LibvirtdGroup:0_running_0 * Resource action: SymlinkEtcLibvirt:0 monitor=60000 on hypatia-corosync.nevis.columbia.edu * Resource action: Libvirtd:0 monitor=30000 on 
hypatia-corosync.nevis.columbia.edu * Pseudo action: LibvirtdGroup:1_running_0 * Resource action: SymlinkEtcLibvirt:1 monitor=60000 on orestes-corosync.nevis.columbia.edu * Resource action: Libvirtd:1 monitor=30000 on orestes-corosync.nevis.columbia.edu * Pseudo action: LibvirtdClone_running_0 * Pseudo action: TftpGroup:0_running_0 * Resource action: SymlinkTftp:0 monitor=60000 on hypatia-corosync.nevis.columbia.edu * Pseudo action: TftpGroup:1_running_0 * Resource action: SymlinkTftp:1 monitor=60000 on orestes-corosync.nevis.columbia.edu * Pseudo action: TftpClone_running_0 * Pseudo action: ExportsGroup:0_running_0 * Pseudo action: ExportsGroup:1_running_0 * Pseudo action: ExportsClone_running_0 * Resource action: KVM-guest start on hypatia-corosync.nevis.columbia.edu * Resource action: Proxy start on orestes-corosync.nevis.columbia.edu Revised cluster status: Online: [ hypatia-corosync.nevis.columbia.edu orestes-corosync.nevis.columbia.edu ] Clone Set: AdminClone [AdminDrbd] (promotable) Masters: [ hypatia-corosync.nevis.columbia.edu orestes-corosync.nevis.columbia.edu ] CronAmbientTemperature (ocf::heartbeat:symlink): Started hypatia-corosync.nevis.columbia.edu StonithHypatia (stonith:fence_nut): Started orestes-corosync.nevis.columbia.edu StonithOrestes (stonith:fence_nut): Started hypatia-corosync.nevis.columbia.edu Resource Group: DhcpGroup SymlinkDhcpdConf (ocf::heartbeat:symlink): Started orestes-corosync.nevis.columbia.edu SymlinkSysconfigDhcpd (ocf::heartbeat:symlink): Started orestes-corosync.nevis.columbia.edu SymlinkDhcpdLeases (ocf::heartbeat:symlink): Started orestes-corosync.nevis.columbia.edu Dhcpd (lsb:dhcpd): Stopped (disabled) DhcpIP (ocf::heartbeat:IPaddr2): Stopped Clone Set: CupsClone [CupsGroup] Started: [ hypatia-corosync.nevis.columbia.edu orestes-corosync.nevis.columbia.edu ] Clone Set: IPClone [IPGroup] (unique) Resource Group: IPGroup:0 ClusterIP:0 (ocf::heartbeat:IPaddr2): Started hypatia-corosync.nevis.columbia.edu ClusterIPLocal:0 (ocf::heartbeat:IPaddr2): Started hypatia-corosync.nevis.columbia.edu ClusterIPSandbox:0 (ocf::heartbeat:IPaddr2): Started hypatia-corosync.nevis.columbia.edu Resource Group: IPGroup:1 ClusterIP:1 (ocf::heartbeat:IPaddr2): Started orestes-corosync.nevis.columbia.edu ClusterIPLocal:1 (ocf::heartbeat:IPaddr2): Started orestes-corosync.nevis.columbia.edu ClusterIPSandbox:1 (ocf::heartbeat:IPaddr2): Started orestes-corosync.nevis.columbia.edu Clone Set: LibvirtdClone [LibvirtdGroup] Started: [ hypatia-corosync.nevis.columbia.edu orestes-corosync.nevis.columbia.edu ] Clone Set: TftpClone [TftpGroup] Started: [ hypatia-corosync.nevis.columbia.edu orestes-corosync.nevis.columbia.edu ] Clone Set: ExportsClone [ExportsGroup] Started: [ hypatia-corosync.nevis.columbia.edu orestes-corosync.nevis.columbia.edu ] Clone Set: FilesystemClone [FilesystemGroup] Started: [ hypatia-corosync.nevis.columbia.edu orestes-corosync.nevis.columbia.edu ] KVM-guest (ocf::heartbeat:VirtualDomain): Started hypatia-corosync.nevis.columbia.edu Proxy (ocf::heartbeat:VirtualDomain): Started orestes-corosync.nevis.columbia.edu diff --git a/cts/scheduler/master-promotion-constraint.summary b/cts/scheduler/master-promotion-constraint.summary index 49a8b2366f..888fd3a387 100644 --- a/cts/scheduler/master-promotion-constraint.summary +++ b/cts/scheduler/master-promotion-constraint.summary @@ -1,33 +1,33 @@ -4 of 5 resources DISABLED and 0 BLOCKED from being started due to failures +2 of 5 resource instances DISABLED and 0 BLOCKED from further action due to failure 
Current cluster status: Online: [ hex-13 hex-14 ] fencing-sbd (stonith:external/sbd): Started hex-13 Resource Group: g0 d0 (ocf::pacemaker:Dummy): Stopped (disabled) d1 (ocf::pacemaker:Dummy): Stopped (disabled) Clone Set: ms0 [s0] (promotable) Masters: [ hex-14 ] Slaves: [ hex-13 ] Transition Summary: * Demote s0:0 ( Master -> Slave hex-14 ) Executing cluster transition: * Resource action: s0:1 cancel=20000 on hex-14 * Pseudo action: ms0_demote_0 * Resource action: s0:1 demote on hex-14 * Pseudo action: ms0_demoted_0 * Resource action: s0:1 monitor=21000 on hex-14 Revised cluster status: Online: [ hex-13 hex-14 ] fencing-sbd (stonith:external/sbd): Started hex-13 Resource Group: g0 d0 (ocf::pacemaker:Dummy): Stopped (disabled) d1 (ocf::pacemaker:Dummy): Stopped (disabled) Clone Set: ms0 [s0] (promotable) Slaves: [ hex-13 hex-14 ] diff --git a/cts/scheduler/multiple-active-block-group.summary b/cts/scheduler/multiple-active-block-group.summary index 0515613bee..12321ce566 100644 --- a/cts/scheduler/multiple-active-block-group.summary +++ b/cts/scheduler/multiple-active-block-group.summary @@ -1,23 +1,24 @@ +0 of 4 resource instances DISABLED and 3 BLOCKED from further action due to failure Current cluster status: Online: [ node2 node3 ] st-sbd (stonith:external/sbd): Started node2 Resource Group: dgroup dummy (ocf::heartbeat:DummyTimeout): FAILED (blocked)[ node2 node3 ] dummy2 (ocf::heartbeat:Dummy): Started node2 (blocked) dummy3 (ocf::heartbeat:Dummy): Started node2 (blocked) Transition Summary: Executing cluster transition: Revised cluster status: Online: [ node2 node3 ] st-sbd (stonith:external/sbd): Started node2 Resource Group: dgroup dummy (ocf::heartbeat:DummyTimeout): FAILED (blocked)[ node2 node3 ] dummy2 (ocf::heartbeat:Dummy): Started node2 (blocked) dummy3 (ocf::heartbeat:Dummy): Started node2 (blocked) diff --git a/cts/scheduler/not-reschedule-unneeded-monitor.summary b/cts/scheduler/not-reschedule-unneeded-monitor.summary index 626791f8aa..770f9d41ba 100644 --- a/cts/scheduler/not-reschedule-unneeded-monitor.summary +++ b/cts/scheduler/not-reschedule-unneeded-monitor.summary @@ -1,36 +1,36 @@ -1 of 11 resources DISABLED and 0 BLOCKED from being started due to failures +1 of 11 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: Online: [ castor kimball ] sbd (stonith:external/sbd): Started kimball Clone Set: base-clone [dlm] Started: [ castor kimball ] Clone Set: c-vm-fs [vm1] Started: [ castor kimball ] xen-f (ocf::heartbeat:VirtualDomain): Stopped (disabled) sle12-kvm (ocf::heartbeat:VirtualDomain): FAILED castor Clone Set: cl_sgdisk [sgdisk] Started: [ castor kimball ] Transition Summary: * Recover sle12-kvm ( castor -> kimball ) Executing cluster transition: * Resource action: sle12-kvm stop on castor * Resource action: sle12-kvm start on kimball * Resource action: sle12-kvm monitor=10000 on kimball Revised cluster status: Online: [ castor kimball ] sbd (stonith:external/sbd): Started kimball Clone Set: base-clone [dlm] Started: [ castor kimball ] Clone Set: c-vm-fs [vm1] Started: [ castor kimball ] xen-f (ocf::heartbeat:VirtualDomain): Stopped (disabled) sle12-kvm (ocf::heartbeat:VirtualDomain): Started kimball Clone Set: cl_sgdisk [sgdisk] Started: [ castor kimball ] diff --git a/cts/scheduler/novell-251689.summary b/cts/scheduler/novell-251689.summary index 8e0e622297..a4841beb93 100644 --- a/cts/scheduler/novell-251689.summary +++ b/cts/scheduler/novell-251689.summary @@ -1,46 +1,46 @@ -1 of 11 resources DISABLED and 
0 BLOCKED from being started due to failures +1 of 11 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: Online: [ node1 node2 ] Clone Set: stonithcloneset [stonithclone] Started: [ node1 node2 ] Clone Set: evmsdcloneset [evmsdclone] Started: [ node1 node2 ] Clone Set: evmscloneset [evmsclone] Started: [ node1 node2 ] Clone Set: imagestorecloneset [imagestoreclone] Started: [ node1 node2 ] Clone Set: configstorecloneset [configstoreclone] Started: [ node1 node2 ] sles10 (ocf::heartbeat:Xen): Started node2 (disabled) Transition Summary: * Stop sles10 ( node2 ) due to node availability Executing cluster transition: * Resource action: stonithclone:0 monitor=5000 on node2 * Resource action: stonithclone:1 monitor=5000 on node1 * Resource action: evmsdclone:0 monitor=5000 on node2 * Resource action: evmsdclone:1 monitor=5000 on node1 * Resource action: imagestoreclone:0 monitor=20000 on node2 * Resource action: imagestoreclone:1 monitor=20000 on node1 * Resource action: configstoreclone:0 monitor=20000 on node2 * Resource action: configstoreclone:1 monitor=20000 on node1 * Resource action: sles10 stop on node2 Revised cluster status: Online: [ node1 node2 ] Clone Set: stonithcloneset [stonithclone] Started: [ node1 node2 ] Clone Set: evmsdcloneset [evmsdclone] Started: [ node1 node2 ] Clone Set: evmscloneset [evmsclone] Started: [ node1 node2 ] Clone Set: imagestorecloneset [imagestoreclone] Started: [ node1 node2 ] Clone Set: configstorecloneset [configstoreclone] Started: [ node1 node2 ] sles10 (ocf::heartbeat:Xen): Stopped (disabled) diff --git a/cts/scheduler/one-or-more-1.summary b/cts/scheduler/one-or-more-1.summary index 3db1ba0683..c1c75f0cf5 100644 --- a/cts/scheduler/one-or-more-1.summary +++ b/cts/scheduler/one-or-more-1.summary @@ -1,31 +1,31 @@ -1 of 4 resources DISABLED and 0 BLOCKED from being started due to failures +1 of 4 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: Online: [ fc16-builder ] OFFLINE: [ fc16-builder2 ] A (ocf::pacemaker:Dummy): Stopped (disabled) B (ocf::pacemaker:Dummy): Stopped C (ocf::pacemaker:Dummy): Stopped D (ocf::pacemaker:Dummy): Stopped Transition Summary: * Start B ( fc16-builder ) due to unrunnable A start (blocked) * Start C ( fc16-builder ) due to unrunnable A start (blocked) * Start D ( fc16-builder ) due to unrunnable one-or-more:require-all-set-1 (blocked) Executing cluster transition: * Resource action: A monitor on fc16-builder * Resource action: B monitor on fc16-builder * Resource action: C monitor on fc16-builder * Resource action: D monitor on fc16-builder Revised cluster status: Online: [ fc16-builder ] OFFLINE: [ fc16-builder2 ] A (ocf::pacemaker:Dummy): Stopped (disabled) B (ocf::pacemaker:Dummy): Stopped C (ocf::pacemaker:Dummy): Stopped D (ocf::pacemaker:Dummy): Stopped diff --git a/cts/scheduler/one-or-more-2.summary b/cts/scheduler/one-or-more-2.summary index 0345f21239..591f968cc1 100644 --- a/cts/scheduler/one-or-more-2.summary +++ b/cts/scheduler/one-or-more-2.summary @@ -1,35 +1,35 @@ -1 of 4 resources DISABLED and 0 BLOCKED from being started due to failures +1 of 4 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: Online: [ fc16-builder ] OFFLINE: [ fc16-builder2 ] A (ocf::pacemaker:Dummy): Stopped B (ocf::pacemaker:Dummy): Stopped (disabled) C (ocf::pacemaker:Dummy): Stopped D (ocf::pacemaker:Dummy): Stopped Transition Summary: * Start A ( fc16-builder ) * Start C ( 
fc16-builder ) * Start D ( fc16-builder ) Executing cluster transition: * Resource action: A monitor on fc16-builder * Resource action: B monitor on fc16-builder * Resource action: C monitor on fc16-builder * Resource action: D monitor on fc16-builder * Resource action: A start on fc16-builder * Resource action: C start on fc16-builder * Pseudo action: one-or-more:require-all-set-1 * Resource action: D start on fc16-builder Revised cluster status: Online: [ fc16-builder ] OFFLINE: [ fc16-builder2 ] A (ocf::pacemaker:Dummy): Started fc16-builder B (ocf::pacemaker:Dummy): Stopped (disabled) C (ocf::pacemaker:Dummy): Started fc16-builder D (ocf::pacemaker:Dummy): Started fc16-builder diff --git a/cts/scheduler/one-or-more-3.summary b/cts/scheduler/one-or-more-3.summary index 6b07ca8c9a..73ea57bc67 100644 --- a/cts/scheduler/one-or-more-3.summary +++ b/cts/scheduler/one-or-more-3.summary @@ -1,31 +1,31 @@ -2 of 4 resources DISABLED and 0 BLOCKED from being started due to failures +2 of 4 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: Online: [ fc16-builder ] OFFLINE: [ fc16-builder2 ] A (ocf::pacemaker:Dummy): Stopped B (ocf::pacemaker:Dummy): Stopped (disabled) C (ocf::pacemaker:Dummy): Stopped (disabled) D (ocf::pacemaker:Dummy): Stopped Transition Summary: * Start A ( fc16-builder ) * Start D ( fc16-builder ) due to unrunnable one-or-more:require-all-set-1 (blocked) Executing cluster transition: * Resource action: A monitor on fc16-builder * Resource action: B monitor on fc16-builder * Resource action: C monitor on fc16-builder * Resource action: D monitor on fc16-builder * Resource action: A start on fc16-builder Revised cluster status: Online: [ fc16-builder ] OFFLINE: [ fc16-builder2 ] A (ocf::pacemaker:Dummy): Started fc16-builder B (ocf::pacemaker:Dummy): Stopped (disabled) C (ocf::pacemaker:Dummy): Stopped (disabled) D (ocf::pacemaker:Dummy): Stopped diff --git a/cts/scheduler/one-or-more-4.summary b/cts/scheduler/one-or-more-4.summary index e60331e0f7..ab1632b4d2 100644 --- a/cts/scheduler/one-or-more-4.summary +++ b/cts/scheduler/one-or-more-4.summary @@ -1,35 +1,35 @@ -1 of 4 resources DISABLED and 0 BLOCKED from being started due to failures +1 of 4 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: Online: [ fc16-builder ] OFFLINE: [ fc16-builder2 ] A (ocf::pacemaker:Dummy): Stopped B (ocf::pacemaker:Dummy): Stopped C (ocf::pacemaker:Dummy): Stopped D (ocf::pacemaker:Dummy): Stopped (disabled) Transition Summary: * Start A ( fc16-builder ) * Start B ( fc16-builder ) * Start C ( fc16-builder ) Executing cluster transition: * Resource action: A monitor on fc16-builder * Resource action: B monitor on fc16-builder * Resource action: C monitor on fc16-builder * Resource action: D monitor on fc16-builder * Resource action: A start on fc16-builder * Resource action: B start on fc16-builder * Resource action: C start on fc16-builder * Pseudo action: one-or-more:require-all-set-1 Revised cluster status: Online: [ fc16-builder ] OFFLINE: [ fc16-builder2 ] A (ocf::pacemaker:Dummy): Started fc16-builder B (ocf::pacemaker:Dummy): Started fc16-builder C (ocf::pacemaker:Dummy): Started fc16-builder D (ocf::pacemaker:Dummy): Stopped (disabled) diff --git a/cts/scheduler/one-or-more-5.summary b/cts/scheduler/one-or-more-5.summary index 7ca1cd733d..83b36768e2 100644 --- a/cts/scheduler/one-or-more-5.summary +++ b/cts/scheduler/one-or-more-5.summary @@ -1,44 +1,44 @@ -2 of 6 resources DISABLED 
and 0 BLOCKED from being started due to failures +2 of 6 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: Online: [ fc16-builder ] OFFLINE: [ fc16-builder2 ] A (ocf::pacemaker:Dummy): Stopped B (ocf::pacemaker:Dummy): Stopped C (ocf::pacemaker:Dummy): Stopped (disabled) D (ocf::pacemaker:Dummy): Stopped (disabled) E (ocf::pacemaker:Dummy): Stopped F (ocf::pacemaker:Dummy): Stopped Transition Summary: * Start A ( fc16-builder ) * Start B ( fc16-builder ) * Start E ( fc16-builder ) * Start F ( fc16-builder ) Executing cluster transition: * Resource action: A monitor on fc16-builder * Resource action: B monitor on fc16-builder * Resource action: C monitor on fc16-builder * Resource action: D monitor on fc16-builder * Resource action: E monitor on fc16-builder * Resource action: F monitor on fc16-builder * Resource action: B start on fc16-builder * Pseudo action: one-or-more:require-all-set-1 * Resource action: A start on fc16-builder * Resource action: E start on fc16-builder * Pseudo action: one-or-more:require-all-set-3 * Resource action: F start on fc16-builder Revised cluster status: Online: [ fc16-builder ] OFFLINE: [ fc16-builder2 ] A (ocf::pacemaker:Dummy): Started fc16-builder B (ocf::pacemaker:Dummy): Started fc16-builder C (ocf::pacemaker:Dummy): Stopped (disabled) D (ocf::pacemaker:Dummy): Stopped (disabled) E (ocf::pacemaker:Dummy): Started fc16-builder F (ocf::pacemaker:Dummy): Started fc16-builder diff --git a/cts/scheduler/one-or-more-6.summary b/cts/scheduler/one-or-more-6.summary index 07240e5805..970a7b53d5 100644 --- a/cts/scheduler/one-or-more-6.summary +++ b/cts/scheduler/one-or-more-6.summary @@ -1,24 +1,24 @@ -1 of 3 resources DISABLED and 0 BLOCKED from being started due to failures +1 of 3 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: Online: [ fc16-builder ] OFFLINE: [ fc16-builder2 ] A (ocf::pacemaker:Dummy): Started fc16-builder B (ocf::pacemaker:Dummy): Started fc16-builder (disabled) C (ocf::pacemaker:Dummy): Started fc16-builder Transition Summary: * Stop B ( fc16-builder ) due to node availability Executing cluster transition: * Resource action: B stop on fc16-builder Revised cluster status: Online: [ fc16-builder ] OFFLINE: [ fc16-builder2 ] A (ocf::pacemaker:Dummy): Started fc16-builder B (ocf::pacemaker:Dummy): Stopped (disabled) C (ocf::pacemaker:Dummy): Started fc16-builder diff --git a/cts/scheduler/one-or-more-7.summary b/cts/scheduler/one-or-more-7.summary index 9d469c8556..cad1afe0e1 100644 --- a/cts/scheduler/one-or-more-7.summary +++ b/cts/scheduler/one-or-more-7.summary @@ -1,24 +1,24 @@ -1 of 3 resources DISABLED and 0 BLOCKED from being started due to failures +1 of 3 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: Online: [ fc16-builder ] OFFLINE: [ fc16-builder2 ] A (ocf::pacemaker:Dummy): Started fc16-builder B (ocf::pacemaker:Dummy): Started fc16-builder C (ocf::pacemaker:Dummy): Started fc16-builder (disabled) Transition Summary: * Stop C ( fc16-builder ) due to node availability Executing cluster transition: * Resource action: C stop on fc16-builder Revised cluster status: Online: [ fc16-builder ] OFFLINE: [ fc16-builder2 ] A (ocf::pacemaker:Dummy): Started fc16-builder B (ocf::pacemaker:Dummy): Started fc16-builder C (ocf::pacemaker:Dummy): Stopped (disabled) diff --git a/cts/scheduler/order-clone.summary b/cts/scheduler/order-clone.summary index 29da82016a..c2b153d9f7 100644 --- 
a/cts/scheduler/order-clone.summary +++ b/cts/scheduler/order-clone.summary @@ -1,42 +1,42 @@ -4 of 25 resources DISABLED and 0 BLOCKED from being started due to failures +4 of 25 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: Online: [ hex-0 hex-7 hex-8 hex-9 ] fencing-sbd (stonith:external/sbd): Stopped Clone Set: o2cb-clone [o2cb] Stopped: [ hex-0 hex-7 hex-8 hex-9 ] Clone Set: vg1-clone [vg1] Stopped: [ hex-0 hex-7 hex-8 hex-9 ] Clone Set: fs2-clone [ocfs2-2] Stopped: [ hex-0 hex-7 hex-8 hex-9 ] Clone Set: fs1-clone [ocfs2-1] Stopped: [ hex-0 hex-7 hex-8 hex-9 ] Clone Set: dlm-clone [dlm] Stopped (disabled): [ hex-0 hex-7 hex-8 hex-9 ] Clone Set: clvm-clone [clvm] Stopped: [ hex-0 hex-7 hex-8 hex-9 ] Transition Summary: * Start fencing-sbd ( hex-0 ) Executing cluster transition: * Resource action: fencing-sbd start on hex-0 Revised cluster status: Online: [ hex-0 hex-7 hex-8 hex-9 ] fencing-sbd (stonith:external/sbd): Started hex-0 Clone Set: o2cb-clone [o2cb] Stopped: [ hex-0 hex-7 hex-8 hex-9 ] Clone Set: vg1-clone [vg1] Stopped: [ hex-0 hex-7 hex-8 hex-9 ] Clone Set: fs2-clone [ocfs2-2] Stopped: [ hex-0 hex-7 hex-8 hex-9 ] Clone Set: fs1-clone [ocfs2-1] Stopped: [ hex-0 hex-7 hex-8 hex-9 ] Clone Set: dlm-clone [dlm] Stopped (disabled): [ hex-0 hex-7 hex-8 hex-9 ] Clone Set: clvm-clone [clvm] Stopped: [ hex-0 hex-7 hex-8 hex-9 ] diff --git a/cts/scheduler/order7.summary b/cts/scheduler/order7.summary index 7e54014a4e..0e8e91e002 100644 --- a/cts/scheduler/order7.summary +++ b/cts/scheduler/order7.summary @@ -1,36 +1,37 @@ +0 of 6 resource instances DISABLED and 1 BLOCKED from further action due to failure Current cluster status: Online: [ node1 ] rsc1 (ocf::heartbeat:apache): Started node1 rsc2 (ocf::heartbeat:apache): Stopped rsc3 (ocf::heartbeat:apache): Stopped rscA (ocf::heartbeat:apache): FAILED node1 (blocked) rscB (ocf::heartbeat:apache): Stopped rscC (ocf::heartbeat:apache): Stopped Transition Summary: * Start rsc2 ( node1 ) * Start rsc3 ( node1 ) * Start rscB ( node1 ) * Start rscC ( node1 ) due to unrunnable rscA start (blocked) Executing cluster transition: * Resource action: rsc2 monitor on node1 * Resource action: rsc3 monitor on node1 * Resource action: rscB monitor on node1 * Resource action: rscC monitor on node1 * Resource action: rsc2 start on node1 * Resource action: rsc3 start on node1 * Resource action: rscB start on node1 Revised cluster status: Online: [ node1 ] rsc1 (ocf::heartbeat:apache): Started node1 rsc2 (ocf::heartbeat:apache): Started node1 rsc3 (ocf::heartbeat:apache): Started node1 rscA (ocf::heartbeat:apache): FAILED node1 (blocked) rscB (ocf::heartbeat:apache): Started node1 rscC (ocf::heartbeat:apache): Stopped diff --git a/cts/scheduler/order_constraint_stops_master.summary b/cts/scheduler/order_constraint_stops_master.summary index 94bfbcce22..83654629c8 100644 --- a/cts/scheduler/order_constraint_stops_master.summary +++ b/cts/scheduler/order_constraint_stops_master.summary @@ -1,41 +1,41 @@ -1 of 2 resources DISABLED and 0 BLOCKED from being started due to failures +1 of 2 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: Online: [ fc16-builder fc16-builder2 ] Clone Set: MASTER_RSC_A [NATIVE_RSC_A] (promotable) Masters: [ fc16-builder ] NATIVE_RSC_B (ocf::pacemaker:Dummy): Started fc16-builder2 (disabled) Transition Summary: * Stop NATIVE_RSC_A:0 ( Master fc16-builder ) due to required NATIVE_RSC_B start * Stop NATIVE_RSC_B ( fc16-builder2 ) 
due to node availability Executing cluster transition: * Pseudo action: MASTER_RSC_A_pre_notify_demote_0 * Resource action: NATIVE_RSC_A:0 notify on fc16-builder * Pseudo action: MASTER_RSC_A_confirmed-pre_notify_demote_0 * Pseudo action: MASTER_RSC_A_demote_0 * Resource action: NATIVE_RSC_A:0 demote on fc16-builder * Pseudo action: MASTER_RSC_A_demoted_0 * Pseudo action: MASTER_RSC_A_post_notify_demoted_0 * Resource action: NATIVE_RSC_A:0 notify on fc16-builder * Pseudo action: MASTER_RSC_A_confirmed-post_notify_demoted_0 * Pseudo action: MASTER_RSC_A_pre_notify_stop_0 * Resource action: NATIVE_RSC_A:0 notify on fc16-builder * Pseudo action: MASTER_RSC_A_confirmed-pre_notify_stop_0 * Pseudo action: MASTER_RSC_A_stop_0 * Resource action: NATIVE_RSC_A:0 stop on fc16-builder * Resource action: NATIVE_RSC_A:0 delete on fc16-builder2 * Pseudo action: MASTER_RSC_A_stopped_0 * Pseudo action: MASTER_RSC_A_post_notify_stopped_0 * Pseudo action: MASTER_RSC_A_confirmed-post_notify_stopped_0 * Resource action: NATIVE_RSC_B stop on fc16-builder2 Revised cluster status: Online: [ fc16-builder fc16-builder2 ] Clone Set: MASTER_RSC_A [NATIVE_RSC_A] (promotable) Stopped: [ fc16-builder fc16-builder2 ] NATIVE_RSC_B (ocf::pacemaker:Dummy): Stopped (disabled) diff --git a/cts/scheduler/order_constraint_stops_slave.summary b/cts/scheduler/order_constraint_stops_slave.summary index 33eae2d7d0..032261571e 100644 --- a/cts/scheduler/order_constraint_stops_slave.summary +++ b/cts/scheduler/order_constraint_stops_slave.summary @@ -1,33 +1,33 @@ -1 of 2 resources DISABLED and 0 BLOCKED from being started due to failures +1 of 2 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: Online: [ fc16-builder ] OFFLINE: [ fc16-builder2 ] Clone Set: MASTER_RSC_A [NATIVE_RSC_A] (promotable) Slaves: [ fc16-builder ] NATIVE_RSC_B (ocf::pacemaker:Dummy): Started fc16-builder (disabled) Transition Summary: * Stop NATIVE_RSC_A:0 ( Slave fc16-builder ) due to required NATIVE_RSC_B start * Stop NATIVE_RSC_B ( fc16-builder ) due to node availability Executing cluster transition: * Pseudo action: MASTER_RSC_A_pre_notify_stop_0 * Resource action: NATIVE_RSC_A:0 notify on fc16-builder * Pseudo action: MASTER_RSC_A_confirmed-pre_notify_stop_0 * Pseudo action: MASTER_RSC_A_stop_0 * Resource action: NATIVE_RSC_A:0 stop on fc16-builder * Pseudo action: MASTER_RSC_A_stopped_0 * Pseudo action: MASTER_RSC_A_post_notify_stopped_0 * Pseudo action: MASTER_RSC_A_confirmed-post_notify_stopped_0 * Resource action: NATIVE_RSC_B stop on fc16-builder Revised cluster status: Online: [ fc16-builder ] OFFLINE: [ fc16-builder2 ] Clone Set: MASTER_RSC_A [NATIVE_RSC_A] (promotable) Stopped: [ fc16-builder fc16-builder2 ] NATIVE_RSC_B (ocf::pacemaker:Dummy): Stopped (disabled) diff --git a/cts/scheduler/ordered-set-basic-startup.summary b/cts/scheduler/ordered-set-basic-startup.summary index 49a1a9a8cf..691ca6e53c 100644 --- a/cts/scheduler/ordered-set-basic-startup.summary +++ b/cts/scheduler/ordered-set-basic-startup.summary @@ -1,39 +1,39 @@ -2 of 6 resources DISABLED and 0 BLOCKED from being started due to failures +2 of 6 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: Online: [ fc16-builder ] OFFLINE: [ fc16-builder2 ] A (ocf::pacemaker:Dummy): Stopped B (ocf::pacemaker:Dummy): Stopped C (ocf::pacemaker:Dummy): Stopped (disabled) D (ocf::pacemaker:Dummy): Stopped (disabled) E (ocf::pacemaker:Dummy): Stopped F (ocf::pacemaker:Dummy): Stopped Transition 
Summary: * Start A ( fc16-builder ) due to unrunnable C start (blocked) * Start B ( fc16-builder ) * Start E ( fc16-builder ) due to unrunnable A start (blocked) * Start F ( fc16-builder ) due to unrunnable D start (blocked) Executing cluster transition: * Resource action: A monitor on fc16-builder * Resource action: B monitor on fc16-builder * Resource action: C monitor on fc16-builder * Resource action: D monitor on fc16-builder * Resource action: E monitor on fc16-builder * Resource action: F monitor on fc16-builder * Resource action: B start on fc16-builder Revised cluster status: Online: [ fc16-builder ] OFFLINE: [ fc16-builder2 ] A (ocf::pacemaker:Dummy): Stopped B (ocf::pacemaker:Dummy): Started fc16-builder C (ocf::pacemaker:Dummy): Stopped (disabled) D (ocf::pacemaker:Dummy): Stopped (disabled) E (ocf::pacemaker:Dummy): Stopped F (ocf::pacemaker:Dummy): Stopped diff --git a/cts/scheduler/ordered-set-natural.summary b/cts/scheduler/ordered-set-natural.summary index e051a4a0f8..ba82e97d45 100644 --- a/cts/scheduler/ordered-set-natural.summary +++ b/cts/scheduler/ordered-set-natural.summary @@ -1,52 +1,52 @@ -4 of 15 resources DISABLED and 0 BLOCKED from being started due to failures +3 of 15 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: Online: [ node1 node2 ] Resource Group: rgroup dummy1-1 (ocf::heartbeat:Dummy): Stopped dummy1-2 (ocf::heartbeat:Dummy): Stopped dummy1-3 (ocf::heartbeat:Dummy): Stopped (disabled) dummy1-4 (ocf::heartbeat:Dummy): Stopped dummy1-5 (ocf::heartbeat:Dummy): Stopped dummy2-1 (ocf::heartbeat:Dummy): Stopped dummy2-2 (ocf::heartbeat:Dummy): Stopped dummy2-3 (ocf::heartbeat:Dummy): Stopped (disabled) dummy3-1 (ocf::heartbeat:Dummy): Stopped dummy3-2 (ocf::heartbeat:Dummy): Stopped dummy3-3 (ocf::heartbeat:Dummy): Stopped (disabled) dummy3-4 (ocf::heartbeat:Dummy): Stopped dummy3-5 (ocf::heartbeat:Dummy): Stopped dummy2-4 (ocf::heartbeat:Dummy): Stopped dummy2-5 (ocf::heartbeat:Dummy): Stopped Transition Summary: * Start dummy1-1 ( node1 ) due to no quorum (blocked) * Start dummy1-2 ( node1 ) due to no quorum (blocked) * Start dummy2-1 ( node2 ) due to no quorum (blocked) * Start dummy2-2 ( node2 ) due to no quorum (blocked) * Start dummy3-4 ( node1 ) due to no quorum (blocked) * Start dummy3-5 ( node1 ) due to no quorum (blocked) Executing cluster transition: Revised cluster status: Online: [ node1 node2 ] Resource Group: rgroup dummy1-1 (ocf::heartbeat:Dummy): Stopped dummy1-2 (ocf::heartbeat:Dummy): Stopped dummy1-3 (ocf::heartbeat:Dummy): Stopped (disabled) dummy1-4 (ocf::heartbeat:Dummy): Stopped dummy1-5 (ocf::heartbeat:Dummy): Stopped dummy2-1 (ocf::heartbeat:Dummy): Stopped dummy2-2 (ocf::heartbeat:Dummy): Stopped dummy2-3 (ocf::heartbeat:Dummy): Stopped (disabled) dummy3-1 (ocf::heartbeat:Dummy): Stopped dummy3-2 (ocf::heartbeat:Dummy): Stopped dummy3-3 (ocf::heartbeat:Dummy): Stopped (disabled) dummy3-4 (ocf::heartbeat:Dummy): Stopped dummy3-5 (ocf::heartbeat:Dummy): Stopped dummy2-4 (ocf::heartbeat:Dummy): Stopped dummy2-5 (ocf::heartbeat:Dummy): Stopped diff --git a/cts/scheduler/params-6.summary b/cts/scheduler/params-6.summary index 14ca154e86..ab634c1627 100644 --- a/cts/scheduler/params-6.summary +++ b/cts/scheduler/params-6.summary @@ -1,376 +1,376 @@ -90 of 337 resources DISABLED and 0 BLOCKED from being started due to failures +90 of 337 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: Online: [ mgmt01 v03-a v03-b ] stonith-v02-a 
(stonith:fence_ipmilan): Stopped (disabled) stonith-v02-b (stonith:fence_ipmilan): Stopped (disabled) stonith-v02-c (stonith:fence_ipmilan): Stopped (disabled) stonith-v02-d (stonith:fence_ipmilan): Stopped (disabled) stonith-mgmt01 (stonith:fence_xvm): Started v03-a stonith-mgmt02 (stonith:meatware): Started v03-a stonith-v03-c (stonith:fence_ipmilan): Stopped (disabled) stonith-v03-a (stonith:fence_ipmilan): Started v03-b stonith-v03-b (stonith:fence_ipmilan): Started mgmt01 stonith-v03-d (stonith:fence_ipmilan): Stopped (disabled) Clone Set: cl-clvmd [clvmd] Started: [ mgmt01 v03-a v03-b ] Clone Set: cl-dlm [dlm] Started: [ mgmt01 v03-a v03-b ] Clone Set: cl-iscsid [iscsid] Started: [ mgmt01 v03-a v03-b ] Clone Set: cl-libvirtd [libvirtd] Started: [ v03-a v03-b ] Stopped: [ mgmt01 ] Clone Set: cl-multipathd [multipathd] Started: [ mgmt01 v03-a v03-b ] Clone Set: cl-node-params [node-params] Started: [ v03-a v03-b ] Stopped: [ mgmt01 ] Clone Set: cl-vlan1-if [vlan1-if] Started: [ v03-a v03-b ] Stopped: [ mgmt01 ] Clone Set: cl-vlan101-if [vlan101-if] Started: [ v03-a v03-b ] Stopped: [ mgmt01 ] Clone Set: cl-vlan102-if [vlan102-if] Started: [ v03-a v03-b ] Stopped: [ mgmt01 ] Clone Set: cl-vlan103-if [vlan103-if] Started: [ v03-a v03-b ] Stopped: [ mgmt01 ] Clone Set: cl-vlan104-if [vlan104-if] Started: [ v03-a v03-b ] Stopped: [ mgmt01 ] Clone Set: cl-vlan3-if [vlan3-if] Started: [ v03-a v03-b ] Stopped: [ mgmt01 ] Clone Set: cl-vlan4-if [vlan4-if] Started: [ v03-a v03-b ] Stopped: [ mgmt01 ] Clone Set: cl-vlan5-if [vlan5-if] Started: [ v03-a v03-b ] Stopped: [ mgmt01 ] Clone Set: cl-vlan900-if [vlan900-if] Started: [ v03-a v03-b ] Stopped: [ mgmt01 ] Clone Set: cl-vlan909-if [vlan909-if] Started: [ v03-a v03-b ] Stopped: [ mgmt01 ] Clone Set: cl-libvirt-images-fs [libvirt-images-fs] Started: [ mgmt01 v03-a v03-b ] Clone Set: cl-libvirt-install-fs [libvirt-install-fs] Started: [ mgmt01 v03-a v03-b ] Clone Set: cl-vds-ok-pool-0-iscsi [vds-ok-pool-0-iscsi] Started: [ mgmt01 v03-a v03-b ] Clone Set: cl-vds-ok-pool-0-vg [vds-ok-pool-0-vg] Started: [ mgmt01 v03-a v03-b ] Clone Set: cl-vds-ok-pool-1-iscsi [vds-ok-pool-1-iscsi] Started: [ mgmt01 v03-a v03-b ] Clone Set: cl-vds-ok-pool-1-vg [vds-ok-pool-1-vg] Started: [ mgmt01 v03-a v03-b ] Clone Set: cl-libvirt-images-pool [libvirt-images-pool] Started: [ v03-a v03-b ] Stopped: [ mgmt01 ] Clone Set: cl-vds-ok-pool-0-pool [vds-ok-pool-0-pool] Started: [ v03-a v03-b ] Stopped: [ mgmt01 ] Clone Set: cl-vds-ok-pool-1-pool [vds-ok-pool-1-pool] Started: [ v03-a v03-b ] Stopped: [ mgmt01 ] git.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) Clone Set: cl-libvirt-qpid [libvirt-qpid] Started: [ v03-a v03-b ] Stopped: [ mgmt01 ] vd01-a.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Started v03-b vd01-b.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Started v03-b vd01-c.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Started v03-a vd01-d.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Started v03-b vd02-a.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vd02-b.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vd02-c.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vd02-d.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vd03-a.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vd03-b.cdev.ttc.prague.cz.vds-ok.com-vm 
(ocf::vds-ok:VirtualDomain): Stopped (disabled) vd03-c.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vd03-d.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vd04-a.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vd04-b.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vd04-c.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vd04-d.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) f13-x64-devel.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Started v03-a eu2.ca-pages.com-vm (ocf::vds-ok:VirtualDomain): Started v03-b zakaz.transferrus.ru-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) Clone Set: cl-vlan200-if [vlan200-if] Started: [ v03-a v03-b ] Stopped: [ mgmt01 ] anbriz-gw-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) anbriz-work-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) lenny-x32-devel-vm (ocf::vds-ok:VirtualDomain): Started v03-a vptest1.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest2.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest3.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest4.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest5.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest6.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest7.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest8.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest9.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest10.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest11.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest12.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest13.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest14.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest15.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest16.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest17.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest18.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest19.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest20.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest21.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest22.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest23.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest24.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest25.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest26.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest27.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest28.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest29.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest30.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest31.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest32.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest33.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest34.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest35.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest36.vds-ok.com-vm 
(ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest37.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest38.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest39.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest40.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest41.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest42.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest43.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest44.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest45.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest46.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest47.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest48.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest49.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest50.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest51.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest52.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest53.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest54.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest55.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest56.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest57.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest58.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest59.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest60.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) sl6-x64-devel.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Started v03-b dist.express-consult.org-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) eu1.ca-pages.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) gotin-bbb-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) maxb-c55-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) metae.ru-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) rodovoepomestie.ru-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) ubuntu9.10-gotin-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) c5-x64-devel.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Started v03-a Clone Set: cl-mcast-test-net [mcast-test-net] Started: [ v03-a v03-b ] Stopped: [ mgmt01 ] dist.fly-uni.org-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) Transition Summary: * Reload c5-x64-devel.vds-ok.com-vm ( v03-a ) Executing cluster transition: * Resource action: vd01-b.cdev.ttc.prague.cz.vds-ok.com-vm monitor=10000 on v03-b * Resource action: vd01-d.cdev.ttc.prague.cz.vds-ok.com-vm monitor=10000 on v03-b * Resource action: c5-x64-devel.vds-ok.com-vm reload on v03-a * Resource action: c5-x64-devel.vds-ok.com-vm monitor=10000 on v03-a * Pseudo action: load_stopped_v03-b * Pseudo action: load_stopped_v03-a * Pseudo action: load_stopped_mgmt01 Revised cluster status: Online: [ mgmt01 v03-a v03-b ] stonith-v02-a (stonith:fence_ipmilan): Stopped (disabled) stonith-v02-b (stonith:fence_ipmilan): Stopped (disabled) stonith-v02-c (stonith:fence_ipmilan): Stopped (disabled) stonith-v02-d (stonith:fence_ipmilan): Stopped (disabled) stonith-mgmt01 (stonith:fence_xvm): Started v03-a stonith-mgmt02 (stonith:meatware): Started v03-a stonith-v03-c (stonith:fence_ipmilan): Stopped (disabled) stonith-v03-a (stonith:fence_ipmilan): Started v03-b stonith-v03-b (stonith:fence_ipmilan): 
Started mgmt01 stonith-v03-d (stonith:fence_ipmilan): Stopped (disabled) Clone Set: cl-clvmd [clvmd] Started: [ mgmt01 v03-a v03-b ] Clone Set: cl-dlm [dlm] Started: [ mgmt01 v03-a v03-b ] Clone Set: cl-iscsid [iscsid] Started: [ mgmt01 v03-a v03-b ] Clone Set: cl-libvirtd [libvirtd] Started: [ v03-a v03-b ] Stopped: [ mgmt01 ] Clone Set: cl-multipathd [multipathd] Started: [ mgmt01 v03-a v03-b ] Clone Set: cl-node-params [node-params] Started: [ v03-a v03-b ] Stopped: [ mgmt01 ] Clone Set: cl-vlan1-if [vlan1-if] Started: [ v03-a v03-b ] Stopped: [ mgmt01 ] Clone Set: cl-vlan101-if [vlan101-if] Started: [ v03-a v03-b ] Stopped: [ mgmt01 ] Clone Set: cl-vlan102-if [vlan102-if] Started: [ v03-a v03-b ] Stopped: [ mgmt01 ] Clone Set: cl-vlan103-if [vlan103-if] Started: [ v03-a v03-b ] Stopped: [ mgmt01 ] Clone Set: cl-vlan104-if [vlan104-if] Started: [ v03-a v03-b ] Stopped: [ mgmt01 ] Clone Set: cl-vlan3-if [vlan3-if] Started: [ v03-a v03-b ] Stopped: [ mgmt01 ] Clone Set: cl-vlan4-if [vlan4-if] Started: [ v03-a v03-b ] Stopped: [ mgmt01 ] Clone Set: cl-vlan5-if [vlan5-if] Started: [ v03-a v03-b ] Stopped: [ mgmt01 ] Clone Set: cl-vlan900-if [vlan900-if] Started: [ v03-a v03-b ] Stopped: [ mgmt01 ] Clone Set: cl-vlan909-if [vlan909-if] Started: [ v03-a v03-b ] Stopped: [ mgmt01 ] Clone Set: cl-libvirt-images-fs [libvirt-images-fs] Started: [ mgmt01 v03-a v03-b ] Clone Set: cl-libvirt-install-fs [libvirt-install-fs] Started: [ mgmt01 v03-a v03-b ] Clone Set: cl-vds-ok-pool-0-iscsi [vds-ok-pool-0-iscsi] Started: [ mgmt01 v03-a v03-b ] Clone Set: cl-vds-ok-pool-0-vg [vds-ok-pool-0-vg] Started: [ mgmt01 v03-a v03-b ] Clone Set: cl-vds-ok-pool-1-iscsi [vds-ok-pool-1-iscsi] Started: [ mgmt01 v03-a v03-b ] Clone Set: cl-vds-ok-pool-1-vg [vds-ok-pool-1-vg] Started: [ mgmt01 v03-a v03-b ] Clone Set: cl-libvirt-images-pool [libvirt-images-pool] Started: [ v03-a v03-b ] Stopped: [ mgmt01 ] Clone Set: cl-vds-ok-pool-0-pool [vds-ok-pool-0-pool] Started: [ v03-a v03-b ] Stopped: [ mgmt01 ] Clone Set: cl-vds-ok-pool-1-pool [vds-ok-pool-1-pool] Started: [ v03-a v03-b ] Stopped: [ mgmt01 ] git.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) Clone Set: cl-libvirt-qpid [libvirt-qpid] Started: [ v03-a v03-b ] Stopped: [ mgmt01 ] vd01-a.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Started v03-b vd01-b.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Started v03-b vd01-c.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Started v03-a vd01-d.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Started v03-b vd02-a.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vd02-b.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vd02-c.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vd02-d.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vd03-a.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vd03-b.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vd03-c.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vd03-d.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vd04-a.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vd04-b.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vd04-c.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped 
(disabled) vd04-d.cdev.ttc.prague.cz.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) f13-x64-devel.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Started v03-a eu2.ca-pages.com-vm (ocf::vds-ok:VirtualDomain): Started v03-b zakaz.transferrus.ru-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) Clone Set: cl-vlan200-if [vlan200-if] Started: [ v03-a v03-b ] Stopped: [ mgmt01 ] anbriz-gw-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) anbriz-work-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) lenny-x32-devel-vm (ocf::vds-ok:VirtualDomain): Started v03-a vptest1.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest2.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest3.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest4.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest5.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest6.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest7.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest8.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest9.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest10.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest11.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest12.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest13.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest14.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest15.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest16.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest17.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest18.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest19.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest20.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest21.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest22.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest23.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest24.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest25.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest26.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest27.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest28.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest29.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest30.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest31.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest32.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest33.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest34.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest35.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest36.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest37.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest38.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest39.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest40.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest41.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest42.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) 
vptest43.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest44.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest45.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest46.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest47.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest48.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest49.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest50.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest51.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest52.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest53.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest54.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest55.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest56.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest57.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest58.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest59.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) vptest60.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) sl6-x64-devel.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Started v03-b dist.express-consult.org-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) eu1.ca-pages.com-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) gotin-bbb-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) maxb-c55-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) metae.ru-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) rodovoepomestie.ru-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) ubuntu9.10-gotin-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) c5-x64-devel.vds-ok.com-vm (ocf::vds-ok:VirtualDomain): Started v03-a Clone Set: cl-mcast-test-net [mcast-test-net] Started: [ v03-a v03-b ] Stopped: [ mgmt01 ] dist.fly-uni.org-vm (ocf::vds-ok:VirtualDomain): Stopped (disabled) diff --git a/cts/scheduler/rec-rsc-4.summary b/cts/scheduler/rec-rsc-4.summary index 5e37bc1f6a..289c82b447 100644 --- a/cts/scheduler/rec-rsc-4.summary +++ b/cts/scheduler/rec-rsc-4.summary @@ -1,16 +1,17 @@ +0 of 1 resource instances DISABLED and 1 BLOCKED from further action due to failure Current cluster status: Online: [ node1 node2 ] rsc1 (ocf::heartbeat:apache): FAILED node2 (blocked) Transition Summary: Executing cluster transition: * Resource action: rsc1 monitor on node1 Revised cluster status: Online: [ node1 node2 ] rsc1 (ocf::heartbeat:apache): FAILED node2 (blocked) diff --git a/cts/scheduler/rec-rsc-8.summary b/cts/scheduler/rec-rsc-8.summary index 2968557a08..65f2f8dbd1 100644 --- a/cts/scheduler/rec-rsc-8.summary +++ b/cts/scheduler/rec-rsc-8.summary @@ -1,15 +1,16 @@ +0 of 1 resource instances DISABLED and 1 BLOCKED from further action due to failure Current cluster status: Online: [ node1 node2 ] rsc1 (ocf::heartbeat:apache): Started (blocked)[ node1 node2 ] Transition Summary: Executing cluster transition: Revised cluster status: Online: [ node1 node2 ] rsc1 (ocf::heartbeat:apache): Started (blocked)[ node1 node2 ] diff --git a/cts/scheduler/remote-disable.summary b/cts/scheduler/remote-disable.summary index e036225bc3..7923180fed 100644 --- a/cts/scheduler/remote-disable.summary +++ b/cts/scheduler/remote-disable.summary @@ -1,32 +1,32 @@ -2 of 6 resources DISABLED and 0 BLOCKED from being started due to failures +1 of 6 resource instances DISABLED and 0 BLOCKED from 
further action due to failure Current cluster status: Online: [ 18builder 18node1 18node2 ] RemoteOnline: [ remote1 ] shooter (stonith:fence_xvm): Started 18node1 remote1 (ocf::pacemaker:remote): Started 18builder (disabled) FAKE1 (ocf::heartbeat:Dummy): Started 18node2 FAKE2 (ocf::heartbeat:Dummy): Started remote1 FAKE3 (ocf::heartbeat:Dummy): Started 18builder FAKE4 (ocf::heartbeat:Dummy): Started 18node1 Transition Summary: * Stop remote1 ( 18builder ) due to node availability * Stop FAKE2 ( remote1 ) due to node availability Executing cluster transition: * Resource action: FAKE2 stop on remote1 * Resource action: remote1 stop on 18builder Revised cluster status: Online: [ 18builder 18node1 18node2 ] RemoteOFFLINE: [ remote1 ] shooter (stonith:fence_xvm): Started 18node1 remote1 (ocf::pacemaker:remote): Stopped (disabled) FAKE1 (ocf::heartbeat:Dummy): Started 18node2 FAKE2 (ocf::heartbeat:Dummy): Stopped FAKE3 (ocf::heartbeat:Dummy): Started 18builder FAKE4 (ocf::heartbeat:Dummy): Started 18node1 diff --git a/cts/scheduler/remote-probe-disable.summary b/cts/scheduler/remote-probe-disable.summary index e3b77808fc..fe5a27d693 100644 --- a/cts/scheduler/remote-probe-disable.summary +++ b/cts/scheduler/remote-probe-disable.summary @@ -1,34 +1,34 @@ -2 of 6 resources DISABLED and 0 BLOCKED from being started due to failures +1 of 6 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: Online: [ 18builder 18node1 18node2 ] RemoteOnline: [ remote1 ] shooter (stonith:fence_xvm): Started 18node1 remote1 (ocf::pacemaker:remote): Started 18builder (disabled) FAKE1 (ocf::heartbeat:Dummy): Started 18node2 FAKE2 (ocf::heartbeat:Dummy): Stopped FAKE3 (ocf::heartbeat:Dummy): Started 18builder FAKE4 (ocf::heartbeat:Dummy): Started 18node1 Transition Summary: * Stop remote1 ( 18builder ) due to node availability Executing cluster transition: * Resource action: FAKE1 monitor on remote1 * Resource action: FAKE2 monitor on remote1 * Resource action: FAKE3 monitor on remote1 * Resource action: FAKE4 monitor on remote1 * Resource action: remote1 stop on 18builder Revised cluster status: Online: [ 18builder 18node1 18node2 ] RemoteOFFLINE: [ remote1 ] shooter (stonith:fence_xvm): Started 18node1 remote1 (ocf::pacemaker:remote): Stopped (disabled) FAKE1 (ocf::heartbeat:Dummy): Started 18node2 FAKE2 (ocf::heartbeat:Dummy): Stopped FAKE3 (ocf::heartbeat:Dummy): Started 18builder FAKE4 (ocf::heartbeat:Dummy): Started 18node1 diff --git a/cts/scheduler/rsc-maintenance.summary b/cts/scheduler/rsc-maintenance.summary index 4a43217822..0db255a690 100644 --- a/cts/scheduler/rsc-maintenance.summary +++ b/cts/scheduler/rsc-maintenance.summary @@ -1,28 +1,28 @@ -4 of 4 resources DISABLED and 0 BLOCKED from being started due to failures +2 of 4 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: Online: [ node1 node2 ] Resource Group: group1 rsc1 (ocf::pacemaker:Dummy): Started node1 (disabled, unmanaged) rsc2 (ocf::pacemaker:Dummy): Started node1 (disabled, unmanaged) Resource Group: group2 rsc3 (ocf::pacemaker:Dummy): Started node2 rsc4 (ocf::pacemaker:Dummy): Started node2 Transition Summary: Executing cluster transition: * Resource action: rsc1 cancel=10000 on node1 * Resource action: rsc2 cancel=10000 on node1 Revised cluster status: Online: [ node1 node2 ] Resource Group: group1 rsc1 (ocf::pacemaker:Dummy): Started node1 (disabled, unmanaged) rsc2 (ocf::pacemaker:Dummy): Started node1 (disabled, unmanaged) Resource Group: 
group2 rsc3 (ocf::pacemaker:Dummy): Started node2 rsc4 (ocf::pacemaker:Dummy): Started node2 diff --git a/cts/scheduler/rsc-sets-clone-1.summary b/cts/scheduler/rsc-sets-clone-1.summary index 2732dfabb5..a80e4f1fbc 100644 --- a/cts/scheduler/rsc-sets-clone-1.summary +++ b/cts/scheduler/rsc-sets-clone-1.summary @@ -1,83 +1,83 @@ -5 of 24 resources DISABLED and 0 BLOCKED from being started due to failures +5 of 24 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: Online: [ sys2 sys3 ] vm1 (ocf::heartbeat:Xen): Started sys2 vm2 (ocf::heartbeat:Xen): Stopped (disabled) vm3 (ocf::heartbeat:Xen): Stopped (disabled) vm4 (ocf::heartbeat:Xen): Stopped (disabled) stonithsys2 (stonith:external/ipmi): Stopped stonithsys3 (stonith:external/ipmi): Started sys2 Clone Set: baseclone [basegrp] Started: [ sys2 ] Stopped: [ sys3 ] Clone Set: fs1 [nfs1] Stopped (disabled): [ sys2 sys3 ] Transition Summary: * Restart stonithsys3 ( sys2 ) due to resource definition change * Start controld:1 ( sys3 ) * Start clvmd:1 ( sys3 ) * Start o2cb:1 ( sys3 ) * Start iscsi1:1 ( sys3 ) * Start iscsi2:1 ( sys3 ) * Start vg1:1 ( sys3 ) * Start vg2:1 ( sys3 ) * Start fs2:1 ( sys3 ) * Start stonithsys2 ( sys3 ) Executing cluster transition: * Resource action: vm1 monitor on sys3 * Resource action: vm2 monitor on sys3 * Resource action: vm3 monitor on sys3 * Resource action: vm4 monitor on sys3 * Resource action: stonithsys3 stop on sys2 * Resource action: stonithsys3 monitor on sys3 * Resource action: stonithsys3 start on sys2 * Resource action: stonithsys3 monitor=15000 on sys2 * Resource action: controld:1 monitor on sys3 * Resource action: clvmd:1 monitor on sys3 * Resource action: o2cb:1 monitor on sys3 * Resource action: iscsi1:1 monitor on sys3 * Resource action: iscsi2:1 monitor on sys3 * Resource action: vg1:1 monitor on sys3 * Resource action: vg2:1 monitor on sys3 * Resource action: fs2:1 monitor on sys3 * Pseudo action: baseclone_start_0 * Resource action: nfs1:0 monitor on sys3 * Resource action: stonithsys2 monitor on sys3 * Pseudo action: load_stopped_sys3 * Pseudo action: load_stopped_sys2 * Pseudo action: basegrp:1_start_0 * Resource action: controld:1 start on sys3 * Resource action: clvmd:1 start on sys3 * Resource action: o2cb:1 start on sys3 * Resource action: iscsi1:1 start on sys3 * Resource action: iscsi2:1 start on sys3 * Resource action: vg1:1 start on sys3 * Resource action: vg2:1 start on sys3 * Resource action: fs2:1 start on sys3 * Resource action: stonithsys2 start on sys3 * Pseudo action: basegrp:1_running_0 * Resource action: controld:1 monitor=10000 on sys3 * Resource action: iscsi1:1 monitor=120000 on sys3 * Resource action: iscsi2:1 monitor=120000 on sys3 * Resource action: fs2:1 monitor=20000 on sys3 * Pseudo action: baseclone_running_0 * Resource action: stonithsys2 monitor=15000 on sys3 Revised cluster status: Online: [ sys2 sys3 ] vm1 (ocf::heartbeat:Xen): Started sys2 vm2 (ocf::heartbeat:Xen): Stopped (disabled) vm3 (ocf::heartbeat:Xen): Stopped (disabled) vm4 (ocf::heartbeat:Xen): Stopped (disabled) stonithsys2 (stonith:external/ipmi): Started sys3 stonithsys3 (stonith:external/ipmi): Started sys2 Clone Set: baseclone [basegrp] Started: [ sys2 sys3 ] Clone Set: fs1 [nfs1] Stopped (disabled): [ sys2 sys3 ] diff --git a/cts/scheduler/stop-failure-no-fencing.summary b/cts/scheduler/stop-failure-no-fencing.summary index b51d0182f4..ae46d999fa 100644 --- a/cts/scheduler/stop-failure-no-fencing.summary +++ 
b/cts/scheduler/stop-failure-no-fencing.summary @@ -1,29 +1,30 @@ +0 of 9 resource instances DISABLED and 1 BLOCKED from further action due to failure Current cluster status: Node pcmk-3 (103): UNCLEAN (offline) Node pcmk-4 (104): UNCLEAN (offline) Online: [ pcmk-1 pcmk-2 ] Clone Set: dlm-clone [dlm] Stopped: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ] Clone Set: clvm-clone [clvm] clvm (lsb:clvmd): FAILED pcmk-3 (UNCLEAN, blocked) Stopped: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ] ClusterIP (ocf::heartbeat:IPaddr2): Stopped Transition Summary: Executing cluster transition: Revised cluster status: Node pcmk-3 (103): UNCLEAN (offline) Node pcmk-4 (104): UNCLEAN (offline) Online: [ pcmk-1 pcmk-2 ] Clone Set: dlm-clone [dlm] Stopped: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ] Clone Set: clvm-clone [clvm] clvm (lsb:clvmd): FAILED pcmk-3 (UNCLEAN, blocked) Stopped: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ] ClusterIP (ocf::heartbeat:IPaddr2): Stopped diff --git a/cts/scheduler/stop-failure-no-quorum.summary b/cts/scheduler/stop-failure-no-quorum.summary index ac7a788f0c..b492f75a25 100644 --- a/cts/scheduler/stop-failure-no-quorum.summary +++ b/cts/scheduler/stop-failure-no-quorum.summary @@ -1,44 +1,45 @@ +0 of 10 resource instances DISABLED and 1 BLOCKED from further action due to failure Current cluster status: Node pcmk-2 (102): UNCLEAN (online) Node pcmk-3 (103): UNCLEAN (offline) Node pcmk-4 (104): UNCLEAN (offline) Online: [ pcmk-1 ] Clone Set: dlm-clone [dlm] Stopped: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ] Clone Set: clvm-clone [clvm] clvm (lsb:clvmd): FAILED pcmk-2 clvm (lsb:clvmd): FAILED pcmk-3 (UNCLEAN, blocked) Stopped: [ pcmk-1 pcmk-3 pcmk-4 ] ClusterIP (ocf::heartbeat:IPaddr2): Stopped Fencing (stonith:fence_xvm): Stopped Transition Summary: * Fence (reboot) pcmk-2 'clvm:0 failed there' * Start dlm:0 ( pcmk-1 ) due to no quorum (blocked) * Stop clvm:0 ( pcmk-2 ) due to node availability * Start clvm:2 ( pcmk-1 ) due to no quorum (blocked) * Start ClusterIP ( pcmk-1 ) due to no quorum (blocked) * Start Fencing ( pcmk-1 ) due to no quorum (blocked) Executing cluster transition: * Fencing pcmk-2 (reboot) * Pseudo action: clvm-clone_stop_0 * Pseudo action: clvm_stop_0 * Pseudo action: clvm-clone_stopped_0 Revised cluster status: Node pcmk-3 (103): UNCLEAN (offline) Node pcmk-4 (104): UNCLEAN (offline) Online: [ pcmk-1 ] OFFLINE: [ pcmk-2 ] Clone Set: dlm-clone [dlm] Stopped: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ] Clone Set: clvm-clone [clvm] clvm (lsb:clvmd): FAILED pcmk-3 (UNCLEAN, blocked) Stopped: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ] ClusterIP (ocf::heartbeat:IPaddr2): Stopped Fencing (stonith:fence_xvm): Stopped diff --git a/cts/scheduler/stopped-monitor-03.summary b/cts/scheduler/stopped-monitor-03.summary index c7cc3cd126..e3700707d1 100644 --- a/cts/scheduler/stopped-monitor-03.summary +++ b/cts/scheduler/stopped-monitor-03.summary @@ -1,19 +1,19 @@ -1 of 1 resources DISABLED and 0 BLOCKED from being started due to failures +1 of 1 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: Online: [ node1 node2 ] rsc1 (ocf::pacemaker:Dummy): Started node1 (disabled) Transition Summary: * Stop rsc1 ( node1 ) due to node availability Executing cluster transition: * Resource action: rsc1 stop on node1 * Resource action: rsc1 monitor=20000 on node1 Revised cluster status: Online: [ node1 node2 ] rsc1 (ocf::pacemaker:Dummy): Stopped (disabled) diff --git a/cts/scheduler/stopped-monitor-04.summary b/cts/scheduler/stopped-monitor-04.summary index 08a4225e45..ed0ba619f1 100644 --- 
a/cts/scheduler/stopped-monitor-04.summary +++ b/cts/scheduler/stopped-monitor-04.summary @@ -1,16 +1,16 @@ -1 of 1 resources DISABLED and 0 BLOCKED from being started due to failures +1 of 1 resource instances DISABLED and 1 BLOCKED from further action due to failure Current cluster status: Online: [ node1 node2 ] rsc1 (ocf::pacemaker:Dummy): FAILED node1 (disabled, blocked) Transition Summary: Executing cluster transition: Revised cluster status: Online: [ node1 node2 ] rsc1 (ocf::pacemaker:Dummy): FAILED node1 (disabled, blocked) diff --git a/cts/scheduler/stopped-monitor-05.summary b/cts/scheduler/stopped-monitor-05.summary index b340756baa..3860bfb404 100644 --- a/cts/scheduler/stopped-monitor-05.summary +++ b/cts/scheduler/stopped-monitor-05.summary @@ -1,15 +1,16 @@ +0 of 1 resource instances DISABLED and 1 BLOCKED from further action due to failure Current cluster status: Online: [ node1 node2 ] rsc1 (ocf::pacemaker:Dummy): FAILED node1 (blocked) Transition Summary: Executing cluster transition: Revised cluster status: Online: [ node1 node2 ] rsc1 (ocf::pacemaker:Dummy): FAILED node1 (blocked) diff --git a/cts/scheduler/stopped-monitor-06.summary b/cts/scheduler/stopped-monitor-06.summary index 0d6afc25ea..e18cb69f2f 100644 --- a/cts/scheduler/stopped-monitor-06.summary +++ b/cts/scheduler/stopped-monitor-06.summary @@ -1,16 +1,16 @@ -1 of 1 resources DISABLED and 0 BLOCKED from being started due to failures +1 of 1 resource instances DISABLED and 1 BLOCKED from further action due to failure Current cluster status: Online: [ node1 node2 ] rsc1 (ocf::pacemaker:Dummy): FAILED (disabled, blocked)[ node1 node2 ] Transition Summary: Executing cluster transition: Revised cluster status: Online: [ node1 node2 ] rsc1 (ocf::pacemaker:Dummy): FAILED (disabled, blocked)[ node1 node2 ] diff --git a/cts/scheduler/stopped-monitor-07.summary b/cts/scheduler/stopped-monitor-07.summary index 2d4a102c3b..766d4e9865 100644 --- a/cts/scheduler/stopped-monitor-07.summary +++ b/cts/scheduler/stopped-monitor-07.summary @@ -1,15 +1,16 @@ +0 of 1 resource instances DISABLED and 1 BLOCKED from further action due to failure Current cluster status: Online: [ node1 node2 ] rsc1 (ocf::pacemaker:Dummy): FAILED (blocked)[ node1 node2 ] Transition Summary: Executing cluster transition: Revised cluster status: Online: [ node1 node2 ] rsc1 (ocf::pacemaker:Dummy): FAILED (blocked)[ node1 node2 ] diff --git a/cts/scheduler/stopped-monitor-11.summary b/cts/scheduler/stopped-monitor-11.summary index 39cda798ba..33df7988a2 100644 --- a/cts/scheduler/stopped-monitor-11.summary +++ b/cts/scheduler/stopped-monitor-11.summary @@ -1,16 +1,16 @@ -1 of 1 resources DISABLED and 0 BLOCKED from being started due to failures +1 of 1 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: Online: [ node1 node2 ] rsc1 (ocf::pacemaker:Dummy): Started node1 (disabled, unmanaged) Transition Summary: Executing cluster transition: Revised cluster status: Online: [ node1 node2 ] rsc1 (ocf::pacemaker:Dummy): Started node1 (disabled, unmanaged) diff --git a/cts/scheduler/stopped-monitor-12.summary b/cts/scheduler/stopped-monitor-12.summary index 1508d77977..8903c463fb 100644 --- a/cts/scheduler/stopped-monitor-12.summary +++ b/cts/scheduler/stopped-monitor-12.summary @@ -1,16 +1,16 @@ -1 of 1 resources DISABLED and 0 BLOCKED from being started due to failures +1 of 1 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: Online: [ node1 node2 ] rsc1 
(ocf::pacemaker:Dummy): FAILED (disabled, unmanaged)[ node1 node2 ] Transition Summary: Executing cluster transition: Revised cluster status: Online: [ node1 node2 ] rsc1 (ocf::pacemaker:Dummy): FAILED (disabled, unmanaged)[ node1 node2 ] diff --git a/cts/scheduler/stopped-monitor-20.summary b/cts/scheduler/stopped-monitor-20.summary index 517ca86ce7..4871c36fdf 100644 --- a/cts/scheduler/stopped-monitor-20.summary +++ b/cts/scheduler/stopped-monitor-20.summary @@ -1,20 +1,20 @@ -1 of 1 resources DISABLED and 0 BLOCKED from being started due to failures +1 of 1 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: Online: [ node1 node2 ] rsc1 (ocf::pacemaker:Dummy): Stopped (disabled) Transition Summary: Executing cluster transition: * Resource action: rsc1 monitor on node2 * Resource action: rsc1 monitor on node1 * Resource action: rsc1 monitor=20000 on node2 * Resource action: rsc1 monitor=20000 on node1 Revised cluster status: Online: [ node1 node2 ] rsc1 (ocf::pacemaker:Dummy): Stopped (disabled) diff --git a/cts/scheduler/stopped-monitor-21.summary b/cts/scheduler/stopped-monitor-21.summary index ced4678c4f..bb2a193bb2 100644 --- a/cts/scheduler/stopped-monitor-21.summary +++ b/cts/scheduler/stopped-monitor-21.summary @@ -1,19 +1,19 @@ -1 of 1 resources DISABLED and 0 BLOCKED from being started due to failures +1 of 1 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: Online: [ node1 node2 ] rsc1 (ocf::pacemaker:Dummy): FAILED node1 (disabled) Transition Summary: * Stop rsc1 ( node1 ) due to node availability Executing cluster transition: * Resource action: rsc1 stop on node1 * Resource action: rsc1 monitor=20000 on node1 Revised cluster status: Online: [ node1 node2 ] rsc1 (ocf::pacemaker:Dummy): Stopped (disabled) diff --git a/cts/scheduler/stopped-monitor-22.summary b/cts/scheduler/stopped-monitor-22.summary index a3348e47b7..94a2cf1bb9 100644 --- a/cts/scheduler/stopped-monitor-22.summary +++ b/cts/scheduler/stopped-monitor-22.summary @@ -1,22 +1,22 @@ -1 of 1 resources DISABLED and 0 BLOCKED from being started due to failures +1 of 1 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: Online: [ node1 node2 ] rsc1 (ocf::pacemaker:Dummy): FAILED (disabled)[ node1 node2 ] Transition Summary: * Stop rsc1 ( node1 ) due to node availability * Stop rsc1 ( node2 ) due to node availability Executing cluster transition: * Resource action: rsc1 stop on node2 * Resource action: rsc1 monitor=20000 on node2 * Resource action: rsc1 stop on node1 * Resource action: rsc1 monitor=20000 on node1 Revised cluster status: Online: [ node1 node2 ] rsc1 (ocf::pacemaker:Dummy): Stopped (disabled) diff --git a/cts/scheduler/stopped-monitor-24.summary b/cts/scheduler/stopped-monitor-24.summary index 4634a6b8c9..469db307c0 100644 --- a/cts/scheduler/stopped-monitor-24.summary +++ b/cts/scheduler/stopped-monitor-24.summary @@ -1,16 +1,16 @@ -1 of 1 resources DISABLED and 0 BLOCKED from being started due to failures +1 of 1 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: Online: [ node1 node2 ] rsc1 (ocf::pacemaker:Dummy): Stopped (disabled, unmanaged) Transition Summary: Executing cluster transition: Revised cluster status: Online: [ node1 node2 ] rsc1 (ocf::pacemaker:Dummy): Stopped (disabled, unmanaged) diff --git a/cts/scheduler/stopped-monitor-25.summary b/cts/scheduler/stopped-monitor-25.summary index 
ee4b9fda1b..17ce035194 100644 --- a/cts/scheduler/stopped-monitor-25.summary +++ b/cts/scheduler/stopped-monitor-25.summary @@ -1,18 +1,18 @@ -1 of 1 resources DISABLED and 0 BLOCKED from being started due to failures +1 of 1 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: Online: [ node1 node2 ] rsc1 (ocf::pacemaker:Dummy): FAILED (disabled, unmanaged)[ node1 node2 ] Transition Summary: Executing cluster transition: * Resource action: rsc1 monitor=10000 on node1 * Resource action: rsc1 cancel=20000 on node1 Revised cluster status: Online: [ node1 node2 ] rsc1 (ocf::pacemaker:Dummy): FAILED (disabled, unmanaged)[ node1 node2 ] diff --git a/cts/scheduler/stopped-monitor-31.summary b/cts/scheduler/stopped-monitor-31.summary index ebda3d79f1..1d9974c3fd 100644 --- a/cts/scheduler/stopped-monitor-31.summary +++ b/cts/scheduler/stopped-monitor-31.summary @@ -1,18 +1,18 @@ -1 of 1 resources DISABLED and 0 BLOCKED from being started due to failures +1 of 1 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: Online: [ node1 node2 node3 ] rsc1 (ocf::pacemaker:Dummy): Stopped (disabled) Transition Summary: Executing cluster transition: * Resource action: rsc1 monitor on node3 * Resource action: rsc1 monitor=20000 on node3 Revised cluster status: Online: [ node1 node2 node3 ] rsc1 (ocf::pacemaker:Dummy): Stopped (disabled) diff --git a/cts/scheduler/target-1.summary b/cts/scheduler/target-1.summary index 9fffecd887..c481ef114b 100644 --- a/cts/scheduler/target-1.summary +++ b/cts/scheduler/target-1.summary @@ -1,40 +1,40 @@ -1 of 5 resources DISABLED and 0 BLOCKED from being started due to failures +1 of 5 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: Online: [ c001n01 c001n02 c001n03 c001n08 ] DcIPaddr (ocf::heartbeat:IPaddr): Started c001n02 rsc_c001n08 (ocf::heartbeat:IPaddr): Started c001n08 (disabled) rsc_c001n02 (ocf::heartbeat:IPaddr): Started c001n02 Clone Set: promoteme [rsc_c001n03] (promotable) Slaves: [ c001n03 ] rsc_c001n01 (ocf::heartbeat:IPaddr): Started c001n01 Transition Summary: * Stop rsc_c001n08 ( c001n08 ) due to node availability Executing cluster transition: * Resource action: DcIPaddr monitor on c001n08 * Resource action: DcIPaddr monitor on c001n03 * Resource action: DcIPaddr monitor on c001n01 * Resource action: rsc_c001n08 stop on c001n08 * Resource action: rsc_c001n08 monitor on c001n03 * Resource action: rsc_c001n08 monitor on c001n02 * Resource action: rsc_c001n08 monitor on c001n01 * Resource action: rsc_c001n02 monitor on c001n08 * Resource action: rsc_c001n02 monitor on c001n03 * Resource action: rsc_c001n02 monitor on c001n01 * Resource action: rsc_c001n01 monitor on c001n08 * Resource action: rsc_c001n01 monitor on c001n03 * Resource action: rsc_c001n01 monitor on c001n02 Revised cluster status: Online: [ c001n01 c001n02 c001n03 c001n08 ] DcIPaddr (ocf::heartbeat:IPaddr): Started c001n02 rsc_c001n08 (ocf::heartbeat:IPaddr): Stopped (disabled) rsc_c001n02 (ocf::heartbeat:IPaddr): Started c001n02 Clone Set: promoteme [rsc_c001n03] (promotable) Slaves: [ c001n03 ] rsc_c001n01 (ocf::heartbeat:IPaddr): Started c001n01 diff --git a/cts/scheduler/target-2.summary b/cts/scheduler/target-2.summary index 13e98ca06a..8d3d29b5aa 100644 --- a/cts/scheduler/target-2.summary +++ b/cts/scheduler/target-2.summary @@ -1,41 +1,41 @@ -1 of 5 resources DISABLED and 0 BLOCKED from being started due to failures +1 of 5 resource 
instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: Online: [ c001n01 c001n02 c001n03 c001n08 ] DcIPaddr (ocf::heartbeat:IPaddr): Started c001n02 rsc_c001n08 (ocf::heartbeat:IPaddr): Started c001n08 (disabled) rsc_c001n02 (ocf::heartbeat:IPaddr): Started c001n02 rsc_c001n03 (ocf::heartbeat:IPaddr): Started c001n03 rsc_c001n01 (ocf::heartbeat:IPaddr): Started c001n01 Transition Summary: * Stop rsc_c001n08 ( c001n08 ) due to node availability Executing cluster transition: * Resource action: DcIPaddr monitor on c001n08 * Resource action: DcIPaddr monitor on c001n03 * Resource action: DcIPaddr monitor on c001n01 * Resource action: rsc_c001n08 stop on c001n08 * Resource action: rsc_c001n08 monitor on c001n03 * Resource action: rsc_c001n08 monitor on c001n02 * Resource action: rsc_c001n08 monitor on c001n01 * Resource action: rsc_c001n02 monitor on c001n08 * Resource action: rsc_c001n02 monitor on c001n03 * Resource action: rsc_c001n02 monitor on c001n01 * Resource action: rsc_c001n03 monitor on c001n08 * Resource action: rsc_c001n03 monitor on c001n02 * Resource action: rsc_c001n03 monitor on c001n01 * Resource action: rsc_c001n01 monitor on c001n08 * Resource action: rsc_c001n01 monitor on c001n03 * Resource action: rsc_c001n01 monitor on c001n02 Revised cluster status: Online: [ c001n01 c001n02 c001n03 c001n08 ] DcIPaddr (ocf::heartbeat:IPaddr): Started c001n02 rsc_c001n08 (ocf::heartbeat:IPaddr): Stopped (disabled) rsc_c001n02 (ocf::heartbeat:IPaddr): Started c001n02 rsc_c001n03 (ocf::heartbeat:IPaddr): Started c001n03 rsc_c001n01 (ocf::heartbeat:IPaddr): Started c001n01 diff --git a/cts/scheduler/template-1.summary b/cts/scheduler/template-1.summary index 7c6129daff..ecad222fbb 100644 --- a/cts/scheduler/template-1.summary +++ b/cts/scheduler/template-1.summary @@ -1,27 +1,27 @@ -1 of 2 resources DISABLED and 0 BLOCKED from being started due to failures +1 of 2 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: Online: [ node1 node2 ] rsc1 (ocf::pacemaker:Dummy): Stopped (disabled) rsc2 (ocf::pacemaker:Dummy): Stopped Transition Summary: * Start rsc2 ( node1 ) Executing cluster transition: * Resource action: rsc1 monitor on node2 * Resource action: rsc1 monitor on node1 * Resource action: rsc2 monitor on node2 * Resource action: rsc2 monitor on node1 * Pseudo action: load_stopped_node2 * Pseudo action: load_stopped_node1 * Resource action: rsc2 start on node1 * Resource action: rsc2 monitor=10000 on node1 Revised cluster status: Online: [ node1 node2 ] rsc1 (ocf::pacemaker:Dummy): Stopped (disabled) rsc2 (ocf::pacemaker:Dummy): Started node1 diff --git a/cts/scheduler/unmanaged-block-restart.summary b/cts/scheduler/unmanaged-block-restart.summary index 101b7fd7b2..0410a8ed29 100644 --- a/cts/scheduler/unmanaged-block-restart.summary +++ b/cts/scheduler/unmanaged-block-restart.summary @@ -1,30 +1,31 @@ +0 of 4 resource instances DISABLED and 1 BLOCKED from further action due to failure Current cluster status: Online: [ yingying.site ] Resource Group: group1 rsc1 (ocf::pacemaker:Dummy): Stopped rsc2 (ocf::pacemaker:Dummy): Started yingying.site rsc3 (ocf::pacemaker:Dummy): Started yingying.site rsc4 (ocf::pacemaker:Dummy): FAILED yingying.site (blocked) Transition Summary: * Start rsc1 ( yingying.site ) * Stop rsc2 ( yingying.site ) due to unrunnable rsc3 stop (blocked) * Stop rsc3 ( yingying.site ) due to required rsc2 stop (blocked) Executing cluster transition: * Pseudo action: group1_stop_0 * Pseudo 
action: group1_start_0 * Resource action: rsc1 start on yingying.site * Resource action: rsc1 monitor=10000 on yingying.site Revised cluster status: Online: [ yingying.site ] Resource Group: group1 rsc1 (ocf::pacemaker:Dummy): Started yingying.site rsc2 (ocf::pacemaker:Dummy): Started yingying.site rsc3 (ocf::pacemaker:Dummy): Started yingying.site rsc4 (ocf::pacemaker:Dummy): FAILED yingying.site (blocked) diff --git a/cts/scheduler/unmanaged-stop-1.summary b/cts/scheduler/unmanaged-stop-1.summary index 8706e70901..32df41134b 100644 --- a/cts/scheduler/unmanaged-stop-1.summary +++ b/cts/scheduler/unmanaged-stop-1.summary @@ -1,19 +1,19 @@ -1 of 2 resources DISABLED and 0 BLOCKED from being started due to failures +1 of 2 resource instances DISABLED and 1 BLOCKED from further action due to failure Current cluster status: Online: [ yingying.site ] rsc1 (ocf::pacemaker:Dummy): Started yingying.site (disabled) rsc2 (ocf::pacemaker:Dummy): FAILED yingying.site (blocked) Transition Summary: * Stop rsc1 ( yingying.site ) due to node availability (blocked) Executing cluster transition: Revised cluster status: Online: [ yingying.site ] rsc1 (ocf::pacemaker:Dummy): Started yingying.site (disabled) rsc2 (ocf::pacemaker:Dummy): FAILED yingying.site (blocked) diff --git a/cts/scheduler/unmanaged-stop-2.summary b/cts/scheduler/unmanaged-stop-2.summary index 8706e70901..32df41134b 100644 --- a/cts/scheduler/unmanaged-stop-2.summary +++ b/cts/scheduler/unmanaged-stop-2.summary @@ -1,19 +1,19 @@ -1 of 2 resources DISABLED and 0 BLOCKED from being started due to failures +1 of 2 resource instances DISABLED and 1 BLOCKED from further action due to failure Current cluster status: Online: [ yingying.site ] rsc1 (ocf::pacemaker:Dummy): Started yingying.site (disabled) rsc2 (ocf::pacemaker:Dummy): FAILED yingying.site (blocked) Transition Summary: * Stop rsc1 ( yingying.site ) due to node availability (blocked) Executing cluster transition: Revised cluster status: Online: [ yingying.site ] rsc1 (ocf::pacemaker:Dummy): Started yingying.site (disabled) rsc2 (ocf::pacemaker:Dummy): FAILED yingying.site (blocked) diff --git a/cts/scheduler/unmanaged-stop-3.summary b/cts/scheduler/unmanaged-stop-3.summary index 8539f20908..599da31f81 100644 --- a/cts/scheduler/unmanaged-stop-3.summary +++ b/cts/scheduler/unmanaged-stop-3.summary @@ -1,22 +1,22 @@ -4 of 2 resources DISABLED and 0 BLOCKED from being started due to failures +2 of 2 resource instances DISABLED and 1 BLOCKED from further action due to failure Current cluster status: Online: [ yingying.site ] Resource Group: group1 rsc1 (ocf::pacemaker:Dummy): Started yingying.site (disabled) rsc2 (ocf::pacemaker:Dummy): FAILED yingying.site (disabled, blocked) Transition Summary: * Stop rsc1 ( yingying.site ) due to node availability (blocked) Executing cluster transition: * Pseudo action: group1_stop_0 Revised cluster status: Online: [ yingying.site ] Resource Group: group1 rsc1 (ocf::pacemaker:Dummy): Started yingying.site (disabled) rsc2 (ocf::pacemaker:Dummy): FAILED yingying.site (disabled, blocked) diff --git a/cts/scheduler/unmanaged-stop-4.summary b/cts/scheduler/unmanaged-stop-4.summary index a2c2f14a77..c3e16217e1 100644 --- a/cts/scheduler/unmanaged-stop-4.summary +++ b/cts/scheduler/unmanaged-stop-4.summary @@ -1,24 +1,24 @@ -6 of 3 resources DISABLED and 0 BLOCKED from being started due to failures +3 of 3 resource instances DISABLED and 1 BLOCKED from further action due to failure Current cluster status: Online: [ yingying.site ] Resource Group: group1 rsc1 
(ocf::pacemaker:Dummy): Started yingying.site (disabled) rsc2 (ocf::pacemaker:Dummy): FAILED yingying.site (disabled, blocked) rsc3 (ocf::heartbeat:Dummy): Stopped (disabled) Transition Summary: * Stop rsc1 ( yingying.site ) due to node availability (blocked) Executing cluster transition: * Pseudo action: group1_stop_0 Revised cluster status: Online: [ yingying.site ] Resource Group: group1 rsc1 (ocf::pacemaker:Dummy): Started yingying.site (disabled) rsc2 (ocf::pacemaker:Dummy): FAILED yingying.site (disabled, blocked) rsc3 (ocf::heartbeat:Dummy): Stopped (disabled) diff --git a/cts/scheduler/unrunnable-2.summary b/cts/scheduler/unrunnable-2.summary index 06b7596f24..dfd7b99f54 100644 --- a/cts/scheduler/unrunnable-2.summary +++ b/cts/scheduler/unrunnable-2.summary @@ -1,175 +1,175 @@ -6 of 117 resources DISABLED and 0 BLOCKED from being started due to failures +6 of 117 resource instances DISABLED and 0 BLOCKED from further action due to failure Current cluster status: Online: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] ip-192.0.2.12 (ocf::heartbeat:IPaddr2): Started overcloud-controller-0 Clone Set: haproxy-clone [haproxy] Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: galera-master [galera] (promotable) Masters: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: memcached-clone [memcached] Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: rabbitmq-clone [rabbitmq] Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: openstack-core-clone [openstack-core] Stopped (disabled): [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: redis-master [redis] (promotable) Masters: [ overcloud-controller-1 ] Slaves: [ overcloud-controller-0 overcloud-controller-2 ] ip-192.0.2.11 (ocf::heartbeat:IPaddr2): Started overcloud-controller-1 Clone Set: mongod-clone [mongod] Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: openstack-aodh-evaluator-clone [openstack-aodh-evaluator] Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: openstack-nova-scheduler-clone [openstack-nova-scheduler] Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: neutron-l3-agent-clone [neutron-l3-agent] Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: neutron-netns-cleanup-clone [neutron-netns-cleanup] Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: neutron-ovs-cleanup-clone [neutron-ovs-cleanup] Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] openstack-cinder-volume (systemd:openstack-cinder-volume): Stopped Clone Set: openstack-heat-engine-clone [openstack-heat-engine] Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: openstack-ceilometer-api-clone [openstack-ceilometer-api] Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: openstack-aodh-listener-clone [openstack-aodh-listener] Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: neutron-metadata-agent-clone [neutron-metadata-agent] Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: openstack-aodh-notifier-clone [openstack-aodh-notifier] Stopped: [ 
overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: openstack-heat-api-clone [openstack-heat-api] Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: openstack-ceilometer-collector-clone [openstack-ceilometer-collector] Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: openstack-glance-api-clone [openstack-glance-api] Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: openstack-cinder-scheduler-clone [openstack-cinder-scheduler] Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: openstack-nova-api-clone [openstack-nova-api] Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: openstack-nova-consoleauth-clone [openstack-nova-consoleauth] Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: openstack-sahara-api-clone [openstack-sahara-api] Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: openstack-heat-api-cloudwatch-clone [openstack-heat-api-cloudwatch] Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: openstack-sahara-engine-clone [openstack-sahara-engine] Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: openstack-glance-registry-clone [openstack-glance-registry] Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: openstack-ceilometer-notification-clone [openstack-ceilometer-notification] Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: openstack-cinder-api-clone [openstack-cinder-api] Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: neutron-dhcp-agent-clone [neutron-dhcp-agent] Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: neutron-openvswitch-agent-clone [neutron-openvswitch-agent] Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: openstack-nova-novncproxy-clone [openstack-nova-novncproxy] Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: delay-clone [delay] Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: neutron-server-clone [neutron-server] Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: openstack-ceilometer-central-clone [openstack-ceilometer-central] Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: httpd-clone [httpd] Stopped (disabled): [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: openstack-heat-api-cfn-clone [openstack-heat-api-cfn] Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: openstack-nova-conductor-clone [openstack-nova-conductor] Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Transition Summary: * Start openstack-cinder-volume ( overcloud-controller-2 ) due to unrunnable openstack-cinder-scheduler-clone running (blocked) Executing cluster transition: Revised cluster status: Online: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] ip-192.0.2.12 (ocf::heartbeat:IPaddr2): Started overcloud-controller-0 Clone Set: haproxy-clone [haproxy] Started: [ 
overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: galera-master [galera] (promotable)
     Masters: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: memcached-clone [memcached]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: rabbitmq-clone [rabbitmq]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-core-clone [openstack-core]
     Stopped (disabled): [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: redis-master [redis] (promotable)
     Masters: [ overcloud-controller-1 ]
     Slaves: [ overcloud-controller-0 overcloud-controller-2 ]
 ip-192.0.2.11 (ocf::heartbeat:IPaddr2): Started overcloud-controller-1
 Clone Set: mongod-clone [mongod]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-aodh-evaluator-clone [openstack-aodh-evaluator]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-nova-scheduler-clone [openstack-nova-scheduler]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-l3-agent-clone [neutron-l3-agent]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-netns-cleanup-clone [neutron-netns-cleanup]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-ovs-cleanup-clone [neutron-ovs-cleanup]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 openstack-cinder-volume (systemd:openstack-cinder-volume): Stopped
 Clone Set: openstack-heat-engine-clone [openstack-heat-engine]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-ceilometer-api-clone [openstack-ceilometer-api]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-aodh-listener-clone [openstack-aodh-listener]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-metadata-agent-clone [neutron-metadata-agent]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-aodh-notifier-clone [openstack-aodh-notifier]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-heat-api-clone [openstack-heat-api]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-ceilometer-collector-clone [openstack-ceilometer-collector]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-glance-api-clone [openstack-glance-api]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-cinder-scheduler-clone [openstack-cinder-scheduler]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-nova-api-clone [openstack-nova-api]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-nova-consoleauth-clone [openstack-nova-consoleauth]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-sahara-api-clone [openstack-sahara-api]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-heat-api-cloudwatch-clone [openstack-heat-api-cloudwatch]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-sahara-engine-clone [openstack-sahara-engine]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-glance-registry-clone [openstack-glance-registry]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-ceilometer-notification-clone [openstack-ceilometer-notification]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-cinder-api-clone [openstack-cinder-api]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-dhcp-agent-clone [neutron-dhcp-agent]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-openvswitch-agent-clone [neutron-openvswitch-agent]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-nova-novncproxy-clone [openstack-nova-novncproxy]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: delay-clone [delay]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-server-clone [neutron-server]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-ceilometer-central-clone [openstack-ceilometer-central]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: httpd-clone [httpd]
     Stopped (disabled): [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-heat-api-cfn-clone [openstack-heat-api-cfn]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-nova-conductor-clone [openstack-nova-conductor]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
diff --git a/cts/scheduler/use-after-free-merge.summary b/cts/scheduler/use-after-free-merge.summary
index b47e80c4d4..b7196d8ccf 100644
--- a/cts/scheduler/use-after-free-merge.summary
+++ b/cts/scheduler/use-after-free-merge.summary
@@ -1,42 +1,42 @@
-4 of 5 resources DISABLED and 0 BLOCKED from being started due to failures
+2 of 5 resource instances DISABLED and 0 BLOCKED from further action due to failure

 Current cluster status:
 Online: [ hex-13 hex-14 ]

 fencing-sbd (stonith:external/sbd): Stopped
 Resource Group: g0
     d0 (ocf::heartbeat:Dummy): Stopped (disabled)
     d1 (ocf::heartbeat:Dummy): Stopped (disabled)
 Clone Set: ms0 [s0] (promotable)
     Stopped: [ hex-13 hex-14 ]

 Transition Summary:
  * Start fencing-sbd ( hex-14 )
  * Start s0:0 ( hex-13 )
  * Start s0:1 ( hex-14 )

 Executing cluster transition:
  * Resource action: fencing-sbd monitor on hex-14
  * Resource action: fencing-sbd monitor on hex-13
  * Resource action: d0 monitor on hex-14
  * Resource action: d0 monitor on hex-13
  * Resource action: d1 monitor on hex-14
  * Resource action: d1 monitor on hex-13
  * Resource action: s0:0 monitor on hex-13
  * Resource action: s0:1 monitor on hex-14
  * Pseudo action: ms0_start_0
  * Resource action: fencing-sbd start on hex-14
  * Resource action: s0:0 start on hex-13
  * Resource action: s0:1 start on hex-14
  * Pseudo action: ms0_running_0

 Revised cluster status:
 Online: [ hex-13 hex-14 ]

 fencing-sbd (stonith:external/sbd): Started hex-14
 Resource Group: g0
     d0 (ocf::heartbeat:Dummy): Stopped (disabled)
     d1 (ocf::heartbeat:Dummy): Stopped (disabled)
 Clone Set: ms0 [s0] (promotable)
     Slaves: [ hex-13 hex-14 ]
diff --git a/cts/scheduler/utilization-order4.summary b/cts/scheduler/utilization-order4.summary
index c794588cfb..4097318b1e 100644
--- a/cts/scheduler/utilization-order4.summary
+++ b/cts/scheduler/utilization-order4.summary
@@ -1,60 +1,60 @@
-2 of 13 resources DISABLED and 0 BLOCKED from being started due to failures
+2 of 13 resource instances DISABLED and 0 BLOCKED from further action due to failure

 Current cluster status:
 Node deglxen002: standby
 Online: [ deglxen001 ]

 degllx62-vm (ocf::heartbeat:Xen): Started deglxen002
 degllx63-vm (ocf::heartbeat:Xen): Stopped (disabled)
 degllx61-vm (ocf::heartbeat:Xen): Started deglxen001
 degllx64-vm (ocf::heartbeat:Xen): Stopped (disabled)
 stonith_sbd (stonith:external/sbd): Started deglxen001
 Clone Set: clone-nfs [grp-nfs]
     Started: [ deglxen001 deglxen002 ]
 Clone Set: clone-ping [prim-ping]
     Started: [ deglxen001 deglxen002 ]

 Transition Summary:
  * Migrate degllx62-vm ( deglxen002 -> deglxen001 )
  * Stop degllx61-vm ( deglxen001 ) due to node availability
  * Stop nfs-xen_config:1 ( deglxen002 ) due to node availability
  * Stop nfs-xen_swapfiles:1 ( deglxen002 ) due to node availability
  * Stop nfs-xen_images:1 ( deglxen002 ) due to node availability
  * Stop prim-ping:1 ( deglxen002 ) due to node availability

 Executing cluster transition:
  * Resource action: degllx61-vm stop on deglxen001
  * Pseudo action: load_stopped_deglxen001
  * Resource action: degllx62-vm migrate_to on deglxen002
  * Resource action: degllx62-vm migrate_from on deglxen001
  * Resource action: degllx62-vm stop on deglxen002
  * Pseudo action: clone-nfs_stop_0
  * Pseudo action: load_stopped_deglxen002
  * Pseudo action: degllx62-vm_start_0
  * Pseudo action: grp-nfs:1_stop_0
  * Resource action: nfs-xen_images:1 stop on deglxen002
  * Resource action: degllx62-vm monitor=30000 on deglxen001
  * Resource action: nfs-xen_swapfiles:1 stop on deglxen002
  * Resource action: nfs-xen_config:1 stop on deglxen002
  * Pseudo action: grp-nfs:1_stopped_0
  * Pseudo action: clone-nfs_stopped_0
  * Pseudo action: clone-ping_stop_0
  * Resource action: prim-ping:0 stop on deglxen002
  * Pseudo action: clone-ping_stopped_0

 Revised cluster status:
 Node deglxen002: standby
 Online: [ deglxen001 ]

 degllx62-vm (ocf::heartbeat:Xen): Started deglxen001
 degllx63-vm (ocf::heartbeat:Xen): Stopped (disabled)
 degllx61-vm (ocf::heartbeat:Xen): Stopped deglxen002
 degllx64-vm (ocf::heartbeat:Xen): Stopped (disabled)
 stonith_sbd (stonith:external/sbd): Started deglxen001
 Clone Set: clone-nfs [grp-nfs]
     Started: [ deglxen001 ]
     Stopped: [ deglxen002 ]
 Clone Set: clone-ping [prim-ping]
     Started: [ deglxen001 ]
     Stopped: [ deglxen002 ]
diff --git a/cts/scheduler/whitebox-asymmetric.summary b/cts/scheduler/whitebox-asymmetric.summary
index 872bb13d90..2548ee4770 100644
--- a/cts/scheduler/whitebox-asymmetric.summary
+++ b/cts/scheduler/whitebox-asymmetric.summary
@@ -1,39 +1,39 @@
-2 of 7 resources DISABLED and 0 BLOCKED from being started due to failures
+1 of 7 resource instances DISABLED and 0 BLOCKED from further action due to failure

 Current cluster status:
 Online: [ 18builder ]

 fence_false (stonith:fence_false): Stopped
 container2 (ocf::pacemaker:Dummy): Started 18builder
 webserver (ocf::pacemaker:Dummy): Stopped
 nfs_mount (ocf::pacemaker:Dummy): Stopped
 Resource Group: mygroup
     vg_tags (ocf::heartbeat:LVM): Stopped (disabled)
     vg_tags_dup (ocf::heartbeat:LVM): Stopped

 Transition Summary:
  * Start nfs_mount ( 18node2 )
  * Start 18node2 ( 18builder )

 Executing cluster transition:
  * Resource action: 18node2 start on 18builder
  * Resource action: webserver monitor on 18node2
  * Resource action: nfs_mount monitor on 18node2
  * Resource action: vg_tags monitor on 18node2
  * Resource action: vg_tags_dup monitor on 18node2
  * Resource action: 18node2 monitor=30000 on 18builder
  * Resource action: nfs_mount start on 18node2
  * Resource action: nfs_mount monitor=10000 on 18node2

 Revised cluster status:
 Online: [ 18builder ]
 GuestOnline: [ 18node2:container2 ]

 fence_false (stonith:fence_false): Stopped
 container2 (ocf::pacemaker:Dummy): Started 18builder
 webserver (ocf::pacemaker:Dummy): Stopped
 nfs_mount (ocf::pacemaker:Dummy): Started 18node2
 Resource Group: mygroup
     vg_tags (ocf::heartbeat:LVM): Stopped (disabled)
     vg_tags_dup (ocf::heartbeat:LVM): Stopped
diff --git a/cts/scheduler/whitebox-stop.summary b/cts/scheduler/whitebox-stop.summary
index e9238d5830..cb1a0143b7 100644
--- a/cts/scheduler/whitebox-stop.summary
+++ b/cts/scheduler/whitebox-stop.summary
@@ -1,50 +1,50 @@
-1 of 14 resources DISABLED and 0 BLOCKED from being started due to failures
+1 of 14 resource instances DISABLED and 0 BLOCKED from further action due to failure

 Current cluster status:
 Online: [ 18node1 18node2 18node3 ]
 GuestOnline: [ lxc1:container1 lxc2:container2 ]

 container1 (ocf::heartbeat:VirtualDomain): Started 18node2 (disabled)
 container2 (ocf::heartbeat:VirtualDomain): Started 18node2
 shoot1 (stonith:fence_xvm): Started 18node3
 Clone Set: M-clone [M]
     Started: [ 18node1 18node2 18node3 lxc1 lxc2 ]
 A (ocf::pacemaker:Dummy): Started 18node1
 B (ocf::pacemaker:Dummy): Started lxc1
 C (ocf::pacemaker:Dummy): Started lxc2
 D (ocf::pacemaker:Dummy): Started 18node1

 Transition Summary:
  * Stop container1 ( 18node2 ) due to node availability
  * Stop M:4 ( lxc1 ) due to node availability
  * Move B ( lxc1 -> lxc2 )
  * Stop lxc1 ( 18node2 ) due to node availability

 Executing cluster transition:
  * Pseudo action: M-clone_stop_0
  * Resource action: A monitor on lxc2
  * Resource action: B stop on lxc1
  * Resource action: B monitor on lxc2
  * Resource action: D monitor on lxc2
  * Resource action: M stop on lxc1
  * Pseudo action: M-clone_stopped_0
  * Resource action: B start on lxc2
  * Resource action: lxc1 stop on 18node2
  * Resource action: container1 stop on 18node2
  * Resource action: B monitor=10000 on lxc2

 Revised cluster status:
 Online: [ 18node1 18node2 18node3 ]
 GuestOnline: [ lxc2:container2 ]

 container1 (ocf::heartbeat:VirtualDomain): Stopped (disabled)
 container2 (ocf::heartbeat:VirtualDomain): Started 18node2
 shoot1 (stonith:fence_xvm): Started 18node3
 Clone Set: M-clone [M]
     Started: [ 18node1 18node2 18node3 lxc2 ]
     Stopped: [ lxc1 ]
 A (ocf::pacemaker:Dummy): Started 18node1
 B (ocf::pacemaker:Dummy): Started lxc2
 C (ocf::pacemaker:Dummy): Started lxc2
 D (ocf::pacemaker:Dummy): Started 18node1