diff --git a/cts/scheduler/summary/bug-1572-1.summary b/cts/scheduler/summary/bug-1572-1.summary
index 6abedea530..c572db21d5 100644
--- a/cts/scheduler/summary/bug-1572-1.summary
+++ b/cts/scheduler/summary/bug-1572-1.summary
@@ -1,85 +1,85 @@
 Current cluster status:
   * Node List:
     * Online: [ arc-dknightlx arc-tkincaidlx.wsicorp.com ]
 
   * Full List of Resources:
     * Clone Set: ms_drbd_7788 [rsc_drbd_7788] (promotable):
       * Promoted: [ arc-tkincaidlx.wsicorp.com ]
       * Unpromoted: [ arc-dknightlx ]
     * Resource Group: grp_pgsql_mirror:
       * fs_mirror	(ocf:heartbeat:Filesystem):	 Started arc-tkincaidlx.wsicorp.com
       * pgsql_5555	(ocf:heartbeat:pgsql):	 Started arc-tkincaidlx.wsicorp.com
       * IPaddr_147_81_84_133	(ocf:heartbeat:IPaddr):	 Started arc-tkincaidlx.wsicorp.com
 
 Transition Summary:
-  * Stop       rsc_drbd_7788:0          (            Unpromoted arc-dknightlx )  due to node availability
+  * Stop       rsc_drbd_7788:0          (            Unpromoted arc-dknightlx )  due to node availability
   * Restart    rsc_drbd_7788:1          ( Promoted arc-tkincaidlx.wsicorp.com )  due to resource definition change
   * Restart    fs_mirror                (        arc-tkincaidlx.wsicorp.com )  due to required ms_drbd_7788 notified
   * Restart    pgsql_5555               (        arc-tkincaidlx.wsicorp.com )  due to required fs_mirror start
   * Restart    IPaddr_147_81_84_133     (        arc-tkincaidlx.wsicorp.com )  due to required pgsql_5555 start
 
 Executing Cluster Transition:
   * Pseudo action:   ms_drbd_7788_pre_notify_demote_0
   * Pseudo action:   grp_pgsql_mirror_stop_0
   * Resource action: IPaddr_147_81_84_133 stop on arc-tkincaidlx.wsicorp.com
   * Resource action: rsc_drbd_7788:0 notify on arc-dknightlx
   * Resource action: rsc_drbd_7788:1 notify on arc-tkincaidlx.wsicorp.com
   * Pseudo action:   ms_drbd_7788_confirmed-pre_notify_demote_0
   * Resource action: pgsql_5555      stop on arc-tkincaidlx.wsicorp.com
   * Resource action: fs_mirror       stop on arc-tkincaidlx.wsicorp.com
   * Pseudo action:   grp_pgsql_mirror_stopped_0
   * Pseudo action:   ms_drbd_7788_demote_0
   * Resource action: rsc_drbd_7788:1 demote on arc-tkincaidlx.wsicorp.com
   * Pseudo action:   ms_drbd_7788_demoted_0
   * Pseudo action:   ms_drbd_7788_post_notify_demoted_0
   * Resource action: rsc_drbd_7788:0 notify on arc-dknightlx
   * Resource action: rsc_drbd_7788:1 notify on arc-tkincaidlx.wsicorp.com
   * Pseudo action:   ms_drbd_7788_confirmed-post_notify_demoted_0
   * Pseudo action:   ms_drbd_7788_pre_notify_stop_0
   * Resource action: rsc_drbd_7788:0 notify on arc-dknightlx
   * Resource action: rsc_drbd_7788:1 notify on arc-tkincaidlx.wsicorp.com
   * Pseudo action:   ms_drbd_7788_confirmed-pre_notify_stop_0
   * Pseudo action:   ms_drbd_7788_stop_0
   * Resource action: rsc_drbd_7788:0 stop on arc-dknightlx
   * Resource action: rsc_drbd_7788:1 stop on arc-tkincaidlx.wsicorp.com
   * Pseudo action:   ms_drbd_7788_stopped_0
   * Cluster action:  do_shutdown on arc-dknightlx
   * Pseudo action:   ms_drbd_7788_post_notify_stopped_0
   * Pseudo action:   ms_drbd_7788_confirmed-post_notify_stopped_0
   * Pseudo action:   ms_drbd_7788_pre_notify_start_0
   * Pseudo action:   ms_drbd_7788_confirmed-pre_notify_start_0
   * Pseudo action:   ms_drbd_7788_start_0
   * Resource action: rsc_drbd_7788:1 start on arc-tkincaidlx.wsicorp.com
   * Pseudo action:   ms_drbd_7788_running_0
   * Pseudo action:   ms_drbd_7788_post_notify_running_0
   * Resource action: rsc_drbd_7788:1 notify on arc-tkincaidlx.wsicorp.com
   * Pseudo action:   ms_drbd_7788_confirmed-post_notify_running_0
   * Pseudo action:   ms_drbd_7788_pre_notify_promote_0
   * Resource action: rsc_drbd_7788:1 notify on arc-tkincaidlx.wsicorp.com
   * Pseudo action:   ms_drbd_7788_confirmed-pre_notify_promote_0
   * Pseudo action:   ms_drbd_7788_promote_0
   * Resource action: rsc_drbd_7788:1 promote on arc-tkincaidlx.wsicorp.com
   * Pseudo action:   ms_drbd_7788_promoted_0
   * Pseudo action:   ms_drbd_7788_post_notify_promoted_0
   * Resource action: rsc_drbd_7788:1 notify on arc-tkincaidlx.wsicorp.com
   * Pseudo action:   ms_drbd_7788_confirmed-post_notify_promoted_0
   * Pseudo action:   grp_pgsql_mirror_start_0
   * Resource action: fs_mirror       start on arc-tkincaidlx.wsicorp.com
   * Resource action: pgsql_5555      start on arc-tkincaidlx.wsicorp.com
   * Resource action: pgsql_5555      monitor=30000 on arc-tkincaidlx.wsicorp.com
   * Resource action: IPaddr_147_81_84_133 start on arc-tkincaidlx.wsicorp.com
   * Resource action: IPaddr_147_81_84_133 monitor=25000 on arc-tkincaidlx.wsicorp.com
   * Pseudo action:   grp_pgsql_mirror_running_0
 
 Revised Cluster Status:
   * Node List:
     * Online: [ arc-dknightlx arc-tkincaidlx.wsicorp.com ]
 
   * Full List of Resources:
     * Clone Set: ms_drbd_7788 [rsc_drbd_7788] (promotable):
       * Promoted: [ arc-tkincaidlx.wsicorp.com ]
       * Stopped: [ arc-dknightlx ]
     * Resource Group: grp_pgsql_mirror:
       * fs_mirror	(ocf:heartbeat:Filesystem):	 Started arc-tkincaidlx.wsicorp.com
       * pgsql_5555	(ocf:heartbeat:pgsql):	 Started arc-tkincaidlx.wsicorp.com
       * IPaddr_147_81_84_133	(ocf:heartbeat:IPaddr):	 Started arc-tkincaidlx.wsicorp.com
diff --git a/cts/scheduler/summary/bug-1572-2.summary b/cts/scheduler/summary/bug-1572-2.summary
index 7d4921dc36..012ca78dd6 100644
--- a/cts/scheduler/summary/bug-1572-2.summary
+++ b/cts/scheduler/summary/bug-1572-2.summary
@@ -1,61 +1,61 @@
 Current cluster status:
   * Node List:
     * Online: [ arc-dknightlx arc-tkincaidlx.wsicorp.com ]
 
   * Full List of Resources:
     * Clone Set: ms_drbd_7788 [rsc_drbd_7788] (promotable):
       * Promoted: [ arc-tkincaidlx.wsicorp.com ]
       * Unpromoted: [ arc-dknightlx ]
     * Resource Group: grp_pgsql_mirror:
       * fs_mirror	(ocf:heartbeat:Filesystem):	 Started arc-tkincaidlx.wsicorp.com
       * pgsql_5555	(ocf:heartbeat:pgsql):	 Started arc-tkincaidlx.wsicorp.com
       * IPaddr_147_81_84_133	(ocf:heartbeat:IPaddr):	 Started arc-tkincaidlx.wsicorp.com
 
 Transition Summary:
-  * Stop       rsc_drbd_7788:0          (                          Unpromoted arc-dknightlx )  due to node availability
+  * Stop       rsc_drbd_7788:0          (                          Unpromoted arc-dknightlx )  due to node availability
   * Demote     rsc_drbd_7788:1          ( Promoted -> Unpromoted arc-tkincaidlx.wsicorp.com )
   * Stop       fs_mirror                (                 arc-tkincaidlx.wsicorp.com )  due to node availability
   * Stop       pgsql_5555               (                 arc-tkincaidlx.wsicorp.com )  due to node availability
   * Stop       IPaddr_147_81_84_133     (                 arc-tkincaidlx.wsicorp.com )  due to node availability
 
 Executing Cluster Transition:
   * Pseudo action:   ms_drbd_7788_pre_notify_demote_0
   * Pseudo action:   grp_pgsql_mirror_stop_0
   * Resource action: IPaddr_147_81_84_133 stop on arc-tkincaidlx.wsicorp.com
   * Resource action: rsc_drbd_7788:0 notify on arc-dknightlx
   * Resource action: rsc_drbd_7788:1 notify on arc-tkincaidlx.wsicorp.com
   * Pseudo action:   ms_drbd_7788_confirmed-pre_notify_demote_0
   * Resource action: pgsql_5555      stop on arc-tkincaidlx.wsicorp.com
   * Resource action: fs_mirror       stop on arc-tkincaidlx.wsicorp.com
   * Pseudo action:   grp_pgsql_mirror_stopped_0
   * Pseudo action:   ms_drbd_7788_demote_0
   * Resource action: rsc_drbd_7788:1 demote on arc-tkincaidlx.wsicorp.com
   * Pseudo action:   ms_drbd_7788_demoted_0
   * Pseudo action:   ms_drbd_7788_post_notify_demoted_0
   * Resource action: rsc_drbd_7788:0 notify on arc-dknightlx
   * Resource action: rsc_drbd_7788:1 notify on arc-tkincaidlx.wsicorp.com
   * Pseudo action:   ms_drbd_7788_confirmed-post_notify_demoted_0
   * Pseudo action:   ms_drbd_7788_pre_notify_stop_0
   * Resource action: rsc_drbd_7788:0 notify on arc-dknightlx
   * Resource action: rsc_drbd_7788:1 notify on arc-tkincaidlx.wsicorp.com
   * Pseudo action:   ms_drbd_7788_confirmed-pre_notify_stop_0
   * Pseudo action:   ms_drbd_7788_stop_0
   * Resource action: rsc_drbd_7788:0 stop on arc-dknightlx
   * Pseudo action:   ms_drbd_7788_stopped_0
   * Cluster action:  do_shutdown on arc-dknightlx
   * Pseudo action:   ms_drbd_7788_post_notify_stopped_0
   * Resource action: rsc_drbd_7788:1 notify on arc-tkincaidlx.wsicorp.com
   * Pseudo action:   ms_drbd_7788_confirmed-post_notify_stopped_0
 
 Revised Cluster Status:
   * Node List:
     * Online: [ arc-dknightlx arc-tkincaidlx.wsicorp.com ]
 
   * Full List of Resources:
     * Clone Set: ms_drbd_7788 [rsc_drbd_7788] (promotable):
       * Unpromoted: [ arc-tkincaidlx.wsicorp.com ]
       * Stopped: [ arc-dknightlx ]
     * Resource Group: grp_pgsql_mirror:
       * fs_mirror	(ocf:heartbeat:Filesystem):	 Stopped
       * pgsql_5555	(ocf:heartbeat:pgsql):	 Stopped
       * IPaddr_147_81_84_133	(ocf:heartbeat:IPaddr):	 Stopped
diff --git a/cts/scheduler/summary/bug-5059.summary b/cts/scheduler/summary/bug-5059.summary
index a33a2f60a2..c555d1dfb5 100644
--- a/cts/scheduler/summary/bug-5059.summary
+++ b/cts/scheduler/summary/bug-5059.summary
@@ -1,77 +1,77 @@
 Current cluster status:
   * Node List:
     * Node gluster03.h: standby
     * Online: [ gluster01.h gluster02.h ]
     * OFFLINE: [ gluster04.h ]
 
   * Full List of Resources:
     * Clone Set: ms_stateful [g_stateful] (promotable):
       * Resource Group: g_stateful:0:
         * p_stateful1	(ocf:pacemaker:Stateful):	 Unpromoted gluster01.h
         * p_stateful2	(ocf:pacemaker:Stateful):	 Stopped
       * Resource Group: g_stateful:1:
         * p_stateful1	(ocf:pacemaker:Stateful):	 Unpromoted gluster02.h
         * p_stateful2	(ocf:pacemaker:Stateful):	 Stopped
       * Stopped: [ gluster03.h gluster04.h ]
     * Clone Set: c_dummy [p_dummy1]:
       * Started: [ gluster01.h gluster02.h ]
 
 Transition Summary:
-  * Promote    p_stateful1:0     ( Unpromoted -> Promoted gluster01.h )
-  * Promote    p_stateful2:0     (    Stopped -> Promoted gluster01.h )
+  * Promote    p_stateful1:0     ( Unpromoted -> Promoted gluster01.h )
+  * Promote    p_stateful2:0     (    Stopped -> Promoted gluster01.h )
   * Start      p_stateful2:1     (                   gluster02.h )
 
 Executing Cluster Transition:
   * Pseudo action:   ms_stateful_pre_notify_start_0
   * Resource action: iptest          delete on gluster02.h
   * Resource action: ipsrc2          delete on gluster02.h
   * Resource action: p_stateful1:0   notify on gluster01.h
   * Resource action: p_stateful1:1   notify on gluster02.h
   * Pseudo action:   ms_stateful_confirmed-pre_notify_start_0
   * Pseudo action:   ms_stateful_start_0
   * Pseudo action:   g_stateful:0_start_0
   * Resource action: p_stateful2:0   start on gluster01.h
   * Pseudo action:   g_stateful:1_start_0
   * Resource action: p_stateful2:1   start on gluster02.h
   * Pseudo action:   g_stateful:0_running_0
   * Pseudo action:   g_stateful:1_running_0
   * Pseudo action:   ms_stateful_running_0
   * Pseudo action:   ms_stateful_post_notify_running_0
   * Resource action: p_stateful1:0   notify on gluster01.h
   * Resource action: p_stateful2:0   notify on gluster01.h
   * Resource action: p_stateful1:1   notify on gluster02.h
   * Resource action: p_stateful2:1   notify on gluster02.h
   * Pseudo action:   ms_stateful_confirmed-post_notify_running_0
   * Pseudo action:   ms_stateful_pre_notify_promote_0
   * Resource action: p_stateful1:0   notify on gluster01.h
   * Resource action: p_stateful2:0   notify on gluster01.h
   * Resource action: p_stateful1:1   notify on gluster02.h
   * Resource action: p_stateful2:1   notify on gluster02.h
   * Pseudo action:   ms_stateful_confirmed-pre_notify_promote_0
   * Pseudo action:   ms_stateful_promote_0
   * Pseudo action:   g_stateful:0_promote_0
   * Resource action: p_stateful1:0   promote on gluster01.h
   * Resource action: p_stateful2:0   promote on gluster01.h
   * Pseudo action:   g_stateful:0_promoted_0
   * Pseudo action:   ms_stateful_promoted_0
   * Pseudo action:   ms_stateful_post_notify_promoted_0
   * Resource action: p_stateful1:0   notify on gluster01.h
   * Resource action: p_stateful2:0   notify on gluster01.h
   * Resource action: p_stateful1:1   notify on gluster02.h
   * Resource action: p_stateful2:1   notify on gluster02.h
   * Pseudo action:   ms_stateful_confirmed-post_notify_promoted_0
   * Resource action: p_stateful1:1   monitor=10000 on gluster02.h
   * Resource action: p_stateful2:1   monitor=10000 on gluster02.h
 
 Revised Cluster Status:
   * Node List:
     * Node gluster03.h: standby
     * Online: [ gluster01.h gluster02.h ]
     * OFFLINE: [ gluster04.h ]
 
   * Full List of Resources:
     * Clone Set: ms_stateful [g_stateful] (promotable):
       * Promoted: [ gluster01.h ]
       * Unpromoted: [ gluster02.h ]
     * Clone Set: c_dummy [p_dummy1]:
       * Started: [ gluster01.h gluster02.h ]
diff --git a/cts/scheduler/summary/bug-cl-5212.summary b/cts/scheduler/summary/bug-cl-5212.summary
index 48cb54bedc..e7a6e26833 100644
--- a/cts/scheduler/summary/bug-cl-5212.summary
+++ b/cts/scheduler/summary/bug-cl-5212.summary
@@ -1,69 +1,69 @@
 Current cluster status:
   * Node List:
     * Node srv01: UNCLEAN (offline)
     * Node srv02: UNCLEAN (offline)
     * Online: [ srv03 ]
 
   * Full List of Resources:
     * Resource Group: grpStonith1:
       * prmStonith1-1	(stonith:external/ssh):	 Started srv02 (UNCLEAN)
     * Resource Group: grpStonith2:
       * prmStonith2-1	(stonith:external/ssh):	 Started srv01 (UNCLEAN)
     * Resource Group: grpStonith3:
       * prmStonith3-1	(stonith:external/ssh):	 Started srv01 (UNCLEAN)
     * Clone Set: msPostgresql [pgsql] (promotable):
       * pgsql	(ocf:pacemaker:Stateful):	 Unpromoted srv02 (UNCLEAN)
       * pgsql	(ocf:pacemaker:Stateful):	 Promoted srv01 (UNCLEAN)
       * Unpromoted: [ srv03 ]
     * Clone Set: clnPingd [prmPingd]:
       * prmPingd	(ocf:pacemaker:ping):	 Started srv02 (UNCLEAN)
       * prmPingd	(ocf:pacemaker:ping):	 Started srv01 (UNCLEAN)
       * Started: [ srv03 ]
 
 Transition Summary:
   * Stop       prmStonith1-1     (        srv02 )  blocked
   * Stop       prmStonith2-1     (        srv01 )  blocked
   * Stop       prmStonith3-1     (        srv01 )  due to node availability (blocked)
-  * Stop       pgsql:0           ( Unpromoted srv02 )  due to node availability (blocked)
-  * Stop       pgsql:1           (   Promoted srv01 )  due to node availability (blocked)
+  * Stop       pgsql:0           ( Unpromoted srv02 )  due to node availability (blocked)
+  * Stop       pgsql:1           (   Promoted srv01 )  due to node availability (blocked)
   * Stop       prmPingd:0        (        srv02 )  due to node availability (blocked)
   * Stop       prmPingd:1        (        srv01 )  due to node availability (blocked)
 
 Executing Cluster Transition:
   * Pseudo action:   grpStonith1_stop_0
   * Pseudo action:   grpStonith1_start_0
   * Pseudo action:   grpStonith2_stop_0
   * Pseudo action:   grpStonith2_start_0
   * Pseudo action:   grpStonith3_stop_0
   * Pseudo action:   msPostgresql_pre_notify_stop_0
   * Pseudo action:   clnPingd_stop_0
   * Resource action: pgsql           notify on srv03
   * Pseudo action:   msPostgresql_confirmed-pre_notify_stop_0
   * Pseudo action:   msPostgresql_stop_0
   * Pseudo action:   clnPingd_stopped_0
   * Pseudo action:   msPostgresql_stopped_0
   * Pseudo action:   msPostgresql_post_notify_stopped_0
   * Resource action: pgsql           notify on srv03
   * Pseudo action:   msPostgresql_confirmed-post_notify_stopped_0
 
 Revised Cluster Status:
   * Node List:
     * Node srv01: UNCLEAN (offline)
     * Node srv02: UNCLEAN (offline)
     * Online: [ srv03 ]
 
   * Full List of Resources:
     * Resource Group: grpStonith1:
       * prmStonith1-1	(stonith:external/ssh):	 Started srv02 (UNCLEAN)
     * Resource Group: grpStonith2:
       * prmStonith2-1	(stonith:external/ssh):	 Started srv01 (UNCLEAN)
     * Resource Group: grpStonith3:
       * prmStonith3-1	(stonith:external/ssh):	 Started srv01 (UNCLEAN)
     * Clone Set: msPostgresql [pgsql] (promotable):
       * pgsql	(ocf:pacemaker:Stateful):	 Unpromoted srv02 (UNCLEAN)
       * pgsql	(ocf:pacemaker:Stateful):	 Promoted srv01 (UNCLEAN)
       * Unpromoted: [ srv03 ]
     * Clone Set: clnPingd [prmPingd]:
       * prmPingd	(ocf:pacemaker:ping):	 Started srv02 (UNCLEAN)
       * prmPingd	(ocf:pacemaker:ping):	 Started srv01 (UNCLEAN)
       * Started: [ srv03 ]
diff --git a/cts/scheduler/summary/bug-cl-5247.summary b/cts/scheduler/summary/bug-cl-5247.summary
index 056e526490..67ad0c3ded 100644
--- a/cts/scheduler/summary/bug-cl-5247.summary
+++ b/cts/scheduler/summary/bug-cl-5247.summary
@@ -1,87 +1,87 @@
 Using the original execution date of: 2015-08-12 02:53:40Z
 Current cluster status:
   * Node List:
     * Online: [ bl460g8n3 bl460g8n4 ]
     * GuestOnline: [ pgsr01@bl460g8n3 ]
 
   * Full List of Resources:
     * prmDB1	(ocf:heartbeat:VirtualDomain):	 Started bl460g8n3
     * prmDB2	(ocf:heartbeat:VirtualDomain):	 FAILED bl460g8n4
     * Resource Group: grpStonith1:
       * prmStonith1-2	(stonith:external/ipmi):	 Started bl460g8n4
     * Resource Group: grpStonith2:
       * prmStonith2-2	(stonith:external/ipmi):	 Started bl460g8n3
     * Resource Group: master-group:
       * vip-master	(ocf:heartbeat:Dummy):	 FAILED pgsr02
       * vip-rep	(ocf:heartbeat:Dummy):	 FAILED pgsr02
     * Clone Set: msPostgresql [pgsql] (promotable):
       * Promoted: [ pgsr01 ]
       * Stopped: [ bl460g8n3 bl460g8n4 ]
 
 Transition Summary:
   * Fence (off) pgsr02 (resource: prmDB2) 'guest is unclean'
   * Stop       prmDB2         (        bl460g8n4 )  due to node availability
   * Recover    vip-master     ( pgsr02 -> pgsr01 )
   * Recover    vip-rep        ( pgsr02 -> pgsr01 )
-  * Stop       pgsql:0        (  Promoted pgsr02 )  due to node availability
+  * Stop       pgsql:0        (  Promoted pgsr02 )  due to node availability
   * Stop       pgsr02         (        bl460g8n4 )  due to node availability
 
 Executing Cluster Transition:
   * Resource action: vip-master      monitor on pgsr01
   * Resource action: vip-rep         monitor on pgsr01
   * Pseudo action:   msPostgresql_pre_notify_demote_0
   * Resource action: pgsr01          monitor on bl460g8n4
   * Resource action: pgsr02          stop on bl460g8n4
   * Resource action: pgsr02          monitor on bl460g8n3
   * Resource action: prmDB2          stop on bl460g8n4
   * Resource action: pgsql           notify on pgsr01
   * Pseudo action:   msPostgresql_confirmed-pre_notify_demote_0
   * Pseudo action:   msPostgresql_demote_0
   * Pseudo action:   stonith-pgsr02-off on pgsr02
   * Pseudo action:   pgsql_post_notify_stop_0
   * Pseudo action:   pgsql_demote_0
   * Pseudo action:   msPostgresql_demoted_0
   * Pseudo action:   msPostgresql_post_notify_demoted_0
   * Resource action: pgsql           notify on pgsr01
   * Pseudo action:   msPostgresql_confirmed-post_notify_demoted_0
   * Pseudo action:   msPostgresql_pre_notify_stop_0
   * Pseudo action:   master-group_stop_0
   * Pseudo action:   vip-rep_stop_0
   * Resource action: pgsql           notify on pgsr01
   * Pseudo action:   msPostgresql_confirmed-pre_notify_stop_0
   * Pseudo action:   msPostgresql_stop_0
   * Pseudo action:   vip-master_stop_0
   * Pseudo action:   pgsql_stop_0
   * Pseudo action:   msPostgresql_stopped_0
   * Pseudo action:   master-group_stopped_0
   * Pseudo action:   master-group_start_0
   * Resource action: vip-master      start on pgsr01
   * Resource action: vip-rep         start on pgsr01
   * Pseudo action:   msPostgresql_post_notify_stopped_0
   * Pseudo action:   master-group_running_0
   * Resource action: vip-master      monitor=10000 on pgsr01
   * Resource action: vip-rep         monitor=10000 on pgsr01
   * Resource action: pgsql           notify on pgsr01
   * Pseudo action:   msPostgresql_confirmed-post_notify_stopped_0
   * Pseudo action:   pgsql_notified_0
   * Resource action: pgsql           monitor=9000 on pgsr01
 Using the original execution date of: 2015-08-12 02:53:40Z
 
 Revised Cluster Status:
   * Node List:
     * Online: [ bl460g8n3 bl460g8n4 ]
     * GuestOnline: [ pgsr01@bl460g8n3 ]
 
   * Full List of Resources:
     * prmDB1	(ocf:heartbeat:VirtualDomain):	 Started bl460g8n3
     * prmDB2	(ocf:heartbeat:VirtualDomain):	 FAILED
     * Resource Group: grpStonith1:
       * prmStonith1-2	(stonith:external/ipmi):	 Started bl460g8n4
     * Resource Group: grpStonith2:
       * prmStonith2-2	(stonith:external/ipmi):	 Started bl460g8n3
     * Resource Group: master-group:
       * vip-master	(ocf:heartbeat:Dummy):	 FAILED [ pgsr01 pgsr02 ]
       * vip-rep	(ocf:heartbeat:Dummy):	 FAILED [ pgsr01 pgsr02 ]
     * Clone Set: msPostgresql [pgsql] (promotable):
       * Promoted: [ pgsr01 ]
       * Stopped: [ bl460g8n3 bl460g8n4 ]
diff --git a/cts/scheduler/summary/bug-lf-2606.summary b/cts/scheduler/summary/bug-lf-2606.summary
index e0b7ebf0e6..004788e80b 100644
--- a/cts/scheduler/summary/bug-lf-2606.summary
+++ b/cts/scheduler/summary/bug-lf-2606.summary
@@ -1,46 +1,46 @@
 1 of 5 resource instances DISABLED and 0 BLOCKED from further action due to failure
 
 Current cluster status:
   * Node List:
     * Node node2: UNCLEAN (online)
     * Online: [ node1 ]
 
   * Full List of Resources:
     * rsc_stonith	(stonith:null):	 Started node1
     * rsc1	(ocf:pacemaker:Dummy):	 FAILED node2 (disabled)
     * rsc2	(ocf:pacemaker:Dummy):	 Started node2
     * Clone Set: ms3 [rsc3] (promotable):
       * Promoted: [ node2 ]
       * Unpromoted: [ node1 ]
 
 Transition Summary:
   * Fence (reboot) node2 'rsc1 failed there'
   * Stop       rsc1       (          node2 )  due to node availability
   * Move       rsc2       ( node2 -> node1 )
-  * Stop       rsc3:1     ( Promoted node2 )  due to node availability
+  * Stop       rsc3:1     ( Promoted node2 )  due to node availability
 
 Executing Cluster Transition:
   * Pseudo action:   ms3_demote_0
   * Fencing node2 (reboot)
   * Pseudo action:   rsc1_stop_0
   * Pseudo action:   rsc2_stop_0
   * Pseudo action:   rsc3:1_demote_0
   * Pseudo action:   ms3_demoted_0
   * Pseudo action:   ms3_stop_0
   * Resource action: rsc2            start on node1
   * Pseudo action:   rsc3:1_stop_0
   * Pseudo action:   ms3_stopped_0
   * Resource action: rsc2            monitor=10000 on node1
 
 Revised Cluster Status:
   * Node List:
     * Online: [ node1 ]
     * OFFLINE: [ node2 ]
 
   * Full List of Resources:
     * rsc_stonith	(stonith:null):	 Started node1
     * rsc1	(ocf:pacemaker:Dummy):	 Stopped (disabled)
     * rsc2	(ocf:pacemaker:Dummy):	 Started node1
     * Clone Set: ms3 [rsc3] (promotable):
       * Unpromoted: [ node1 ]
       * Stopped: [ node2 ]
diff --git a/cts/scheduler/summary/bug-pm-12.summary b/cts/scheduler/summary/bug-pm-12.summary
index 7b811d1a02..c4f3adb908 100644
--- a/cts/scheduler/summary/bug-pm-12.summary
+++ b/cts/scheduler/summary/bug-pm-12.summary
@@ -1,57 +1,57 @@
 Current cluster status:
   * Node List:
     * Online: [ node-a node-b ]
 
   * Full List of Resources:
     * Clone Set: ms-sf [group] (promotable) (unique):
       * Resource Group: group:0:
         * stateful-1:0	(ocf:heartbeat:Stateful):	 Unpromoted node-b
         * stateful-2:0	(ocf:heartbeat:Stateful):	 Unpromoted node-b
       * Resource Group: group:1:
         * stateful-1:1	(ocf:heartbeat:Stateful):	 Promoted node-a
         * stateful-2:1	(ocf:heartbeat:Stateful):	 Promoted node-a
 
 Transition Summary:
-  * Restart    stateful-2:0     ( Unpromoted node-b )  due to resource definition change
-  * Restart    stateful-2:1     (   Promoted node-a )  due to resource definition change
+  * Restart    stateful-2:0     ( Unpromoted node-b )  due to resource definition change
+  * Restart    stateful-2:1     (   Promoted node-a )  due to resource definition change
 
 Executing Cluster Transition:
   * Pseudo action:   ms-sf_demote_0
   * Pseudo action:   group:1_demote_0
   * Resource action: stateful-2:1    demote on node-a
   * Pseudo action:   group:1_demoted_0
   * Pseudo action:   ms-sf_demoted_0
   * Pseudo action:   ms-sf_stop_0
   * Pseudo action:   group:0_stop_0
   * Resource action: stateful-2:0    stop on node-b
   * Pseudo action:   group:1_stop_0
   * Resource action: stateful-2:1    stop on node-a
   * Pseudo action:   group:0_stopped_0
   * Pseudo action:   group:1_stopped_0
   * Pseudo action:   ms-sf_stopped_0
   * Pseudo action:   ms-sf_start_0
   * Pseudo action:   group:0_start_0
   * Resource action: stateful-2:0    start on node-b
   * Pseudo action:   group:1_start_0
   * Resource action: stateful-2:1    start on node-a
   * Pseudo action:   group:0_running_0
   * Pseudo action:   group:1_running_0
   * Pseudo action:   ms-sf_running_0
   * Pseudo action:   ms-sf_promote_0
   * Pseudo action:   group:1_promote_0
   * Resource action: stateful-2:1    promote on node-a
   * Pseudo action:   group:1_promoted_0
   * Pseudo action:   ms-sf_promoted_0
 
 Revised Cluster Status:
   * Node List:
     * Online: [ node-a node-b ]
 
   * Full List of Resources:
     * Clone Set: ms-sf [group] (promotable) (unique):
       * Resource Group: group:0:
         * stateful-1:0	(ocf:heartbeat:Stateful):	 Unpromoted node-b
         * stateful-2:0	(ocf:heartbeat:Stateful):	 Unpromoted node-b
       * Resource Group: group:1:
         * stateful-1:1	(ocf:heartbeat:Stateful):	 Promoted node-a
         * stateful-2:1	(ocf:heartbeat:Stateful):	 Promoted node-a
diff --git a/cts/scheduler/summary/bundle-order-fencing.summary b/cts/scheduler/summary/bundle-order-fencing.summary
index 387c05532a..8cb40718db 100644
--- a/cts/scheduler/summary/bundle-order-fencing.summary
+++ b/cts/scheduler/summary/bundle-order-fencing.summary
@@ -1,220 +1,220 @@
 Using the original execution date of: 2017-09-12 10:51:59Z
 Current cluster status:
   * Node List:
     * Node controller-0: UNCLEAN (offline)
     * Online: [ controller-1 controller-2 ]
     * GuestOnline: [ galera-bundle-1@controller-1 galera-bundle-2@controller-2 rabbitmq-bundle-1@controller-1 rabbitmq-bundle-2@controller-2 redis-bundle-1@controller-1 redis-bundle-2@controller-2 ]
 
   * Full List of Resources:
     * Container bundle set: rabbitmq-bundle [192.168.24.1:8787/rhosp12/openstack-rabbitmq-docker:pcmklatest]:
       * rabbitmq-bundle-0	(ocf:heartbeat:rabbitmq-cluster):	 FAILED controller-0 (UNCLEAN)
       * rabbitmq-bundle-1	(ocf:heartbeat:rabbitmq-cluster):	 Started controller-1
       * rabbitmq-bundle-2	(ocf:heartbeat:rabbitmq-cluster):	 Started controller-2
     * Container bundle set: galera-bundle [192.168.24.1:8787/rhosp12/openstack-mariadb-docker:pcmklatest]:
       * galera-bundle-0	(ocf:heartbeat:galera):	 FAILED Promoted controller-0 (UNCLEAN)
       * galera-bundle-1	(ocf:heartbeat:galera):	 Promoted controller-1
       * galera-bundle-2	(ocf:heartbeat:galera):	 Promoted controller-2
     * Container bundle set: redis-bundle [192.168.24.1:8787/rhosp12/openstack-redis-docker:pcmklatest]:
       * redis-bundle-0	(ocf:heartbeat:redis):	 FAILED Promoted controller-0 (UNCLEAN)
       * redis-bundle-1	(ocf:heartbeat:redis):	 Unpromoted controller-1
       * redis-bundle-2	(ocf:heartbeat:redis):	 Unpromoted controller-2
     * ip-192.168.24.7	(ocf:heartbeat:IPaddr2):	 Started controller-0 (UNCLEAN)
     * ip-10.0.0.109	(ocf:heartbeat:IPaddr2):	 Started controller-0 (UNCLEAN)
     * ip-172.17.1.14	(ocf:heartbeat:IPaddr2):	 Started controller-2
     * ip-172.17.1.19	(ocf:heartbeat:IPaddr2):	 Started controller-2
     * ip-172.17.3.19	(ocf:heartbeat:IPaddr2):	 Started controller-2
     * ip-172.17.4.11	(ocf:heartbeat:IPaddr2):	 Started controller-0 (UNCLEAN)
     * Container bundle set: haproxy-bundle [192.168.24.1:8787/rhosp12/openstack-haproxy-docker:pcmklatest]:
       * haproxy-bundle-docker-0	(ocf:heartbeat:docker):	 Started controller-0 (UNCLEAN)
       * haproxy-bundle-docker-1	(ocf:heartbeat:docker):	 Started controller-2
       * haproxy-bundle-docker-2	(ocf:heartbeat:docker):	 Started controller-1
     * openstack-cinder-volume	(systemd:openstack-cinder-volume):	 Started controller-2
     * stonith-fence_ipmilan-525400efba5c	(stonith:fence_ipmilan):	 Started controller-2
     * stonith-fence_ipmilan-5254003e8e97	(stonith:fence_ipmilan):	 Started controller-0 (UNCLEAN)
     * stonith-fence_ipmilan-5254000dcb3f	(stonith:fence_ipmilan):	 Started controller-0 (UNCLEAN)
 
 Transition Summary:
   * Fence (off) redis-bundle-0 (resource: redis-bundle-docker-0) 'guest is unclean'
   * Fence (off) rabbitmq-bundle-0 (resource: rabbitmq-bundle-docker-0) 'guest is unclean'
   * Fence (off) galera-bundle-0 (resource: galera-bundle-docker-0) 'guest is unclean'
   * Fence (reboot) controller-0 'peer is no longer part of the cluster'
   * Stop       rabbitmq-bundle-docker-0               (                   controller-0 )  due to node availability
   * Stop       rabbitmq-bundle-0                      (                   controller-0 )  due to unrunnable rabbitmq-bundle-docker-0 start
   * Stop       rabbitmq:0                             (              rabbitmq-bundle-0 )  due to unrunnable rabbitmq-bundle-docker-0 start
   * Stop       galera-bundle-docker-0                 (                   controller-0 )  due to node availability
   * Stop       galera-bundle-0                        (                   controller-0 )  due to unrunnable galera-bundle-docker-0 start
-  * Stop       galera:0                               (              Promoted galera-bundle-0 )  due to unrunnable galera-bundle-docker-0 start
+  * Stop       galera:0                               (              Promoted galera-bundle-0 )  due to unrunnable galera-bundle-docker-0 start
   * Stop       redis-bundle-docker-0                  (                   controller-0 )  due to node availability
   * Stop       redis-bundle-0                         (                   controller-0 )  due to unrunnable redis-bundle-docker-0 start
-  * Stop       redis:0                                (               Promoted redis-bundle-0 )  due to unrunnable redis-bundle-docker-0 start
+  * Stop       redis:0                                (               Promoted redis-bundle-0 )  due to unrunnable redis-bundle-docker-0 start
   * Promote    redis:1                                ( Unpromoted -> Promoted redis-bundle-1 )
   * Move       ip-192.168.24.7                        (   controller-0 -> controller-2 )
   * Move       ip-10.0.0.109                          (   controller-0 -> controller-1 )
   * Move       ip-172.17.4.11                         (   controller-0 -> controller-1 )
   * Stop       haproxy-bundle-docker-0                (                   controller-0 )  due to node availability
   * Move       stonith-fence_ipmilan-5254003e8e97     (   controller-0 -> controller-1 )
   * Move       stonith-fence_ipmilan-5254000dcb3f     (   controller-0 -> controller-2 )
 
 Executing Cluster Transition:
   * Pseudo action:   rabbitmq-bundle-clone_pre_notify_stop_0
   * Pseudo action:   rabbitmq-bundle-0_stop_0
   * Resource action: rabbitmq-bundle-0 monitor on controller-2
   * Resource action: rabbitmq-bundle-0 monitor on controller-1
   * Resource action: rabbitmq-bundle-1 monitor on controller-2
   * Resource action: rabbitmq-bundle-2 monitor on controller-1
   * Pseudo action:   galera-bundle-0_stop_0
   * Resource action: galera-bundle-0 monitor on controller-2
   * Resource action: galera-bundle-0 monitor on controller-1
   * Resource action: galera-bundle-1 monitor on controller-2
   * Resource action: galera-bundle-2 monitor on controller-1
   * Resource action: redis           cancel=45000 on redis-bundle-1
   * Resource action: redis           cancel=60000 on redis-bundle-1
   * Pseudo action:   redis-bundle-master_pre_notify_demote_0
   * Pseudo action:   redis-bundle-0_stop_0
   * Resource action: redis-bundle-0  monitor on controller-2
   * Resource action: redis-bundle-0  monitor on controller-1
   * Resource action: redis-bundle-1  monitor on controller-2
   * Resource action: redis-bundle-2  monitor on controller-1
   * Pseudo action:   stonith-fence_ipmilan-5254003e8e97_stop_0
   * Pseudo action:   stonith-fence_ipmilan-5254000dcb3f_stop_0
   * Pseudo action:   haproxy-bundle_stop_0
   * Pseudo action:   redis-bundle_demote_0
   * Pseudo action:   galera-bundle_demote_0
   * Pseudo action:   rabbitmq-bundle_stop_0
   * Pseudo action:   rabbitmq-bundle_start_0
   * Fencing controller-0 (reboot)
   * Resource action: rabbitmq        notify on rabbitmq-bundle-1
   * Resource action: rabbitmq        notify on rabbitmq-bundle-2
   * Pseudo action:   rabbitmq-bundle-clone_confirmed-pre_notify_stop_0
   * Pseudo action:   rabbitmq-bundle-docker-0_stop_0
   * Pseudo action:   galera-bundle-master_demote_0
   * Resource action: redis           notify on redis-bundle-1
   * Resource action: redis           notify on redis-bundle-2
   * Pseudo action:   redis-bundle-master_confirmed-pre_notify_demote_0
   * Pseudo action:   redis-bundle-master_demote_0
   * Pseudo action:   haproxy-bundle-docker-0_stop_0
   * Resource action: stonith-fence_ipmilan-5254003e8e97 start on controller-1
   * Resource action: stonith-fence_ipmilan-5254000dcb3f start on controller-2
   * Pseudo action:   stonith-redis-bundle-0-off on redis-bundle-0
   * Pseudo action:   stonith-rabbitmq-bundle-0-off on rabbitmq-bundle-0
   * Pseudo action:   stonith-galera-bundle-0-off on galera-bundle-0
   * Pseudo action:   haproxy-bundle_stopped_0
   * Pseudo action:   rabbitmq_post_notify_stop_0
   * Pseudo action:   rabbitmq-bundle-clone_stop_0
   * Pseudo action:   galera_demote_0
   * Pseudo action:   galera-bundle-master_demoted_0
   * Pseudo action:   redis_post_notify_stop_0
   * Pseudo action:   redis_demote_0
   * Pseudo action:   redis-bundle-master_demoted_0
   * Pseudo action:   ip-192.168.24.7_stop_0
   * Pseudo action:   ip-10.0.0.109_stop_0
   * Pseudo action:   ip-172.17.4.11_stop_0
   * Resource action: stonith-fence_ipmilan-5254003e8e97 monitor=60000 on controller-1
   * Resource action: stonith-fence_ipmilan-5254000dcb3f monitor=60000 on controller-2
   * Pseudo action:   galera-bundle_demoted_0
   * Pseudo action:   galera-bundle_stop_0
   * Pseudo action:   rabbitmq_stop_0
   * Pseudo action:   rabbitmq-bundle-clone_stopped_0
   * Pseudo action:   galera-bundle-master_stop_0
   * Pseudo action:   galera-bundle-docker-0_stop_0
   * Pseudo action:   redis-bundle-master_post_notify_demoted_0
   * Resource action: ip-192.168.24.7 start on controller-2
   * Resource action: ip-10.0.0.109   start on controller-1
   * Resource action: ip-172.17.4.11  start on controller-1
   * Pseudo action:   rabbitmq-bundle-clone_post_notify_stopped_0
   * Pseudo action:   galera_stop_0
   * Pseudo action:   galera-bundle-master_stopped_0
   * Pseudo action:   galera-bundle-master_start_0
   * Resource action: redis           notify on redis-bundle-1
   * Resource action: redis           notify on redis-bundle-2
   * Pseudo action:   redis-bundle-master_confirmed-post_notify_demoted_0
   * Pseudo action:   redis-bundle-master_pre_notify_stop_0
   * Resource action: ip-192.168.24.7 monitor=10000 on controller-2
   * Resource action: ip-10.0.0.109   monitor=10000 on controller-1
   * Resource action: ip-172.17.4.11  monitor=10000 on controller-1
   * Pseudo action:   redis-bundle_demoted_0
   * Pseudo action:   redis-bundle_stop_0
   * Pseudo action:   galera-bundle_stopped_0
   * Resource action: rabbitmq        notify on rabbitmq-bundle-1
   * Resource action: rabbitmq        notify on rabbitmq-bundle-2
   * Pseudo action:   rabbitmq-bundle-clone_confirmed-post_notify_stopped_0
   * Pseudo action:   rabbitmq-bundle-clone_pre_notify_start_0
   * Pseudo action:   galera-bundle-master_running_0
   * Resource action: redis           notify on redis-bundle-1
   * Resource action: redis           notify on redis-bundle-2
   * Pseudo action:   redis-bundle-master_confirmed-pre_notify_stop_0
   * Pseudo action:   redis-bundle-master_stop_0
   * Pseudo action:   redis-bundle-docker-0_stop_0
   * Pseudo action:   galera-bundle_running_0
   * Pseudo action:   rabbitmq-bundle_stopped_0
   * Pseudo action:   rabbitmq_notified_0
   * Pseudo action:   rabbitmq-bundle-clone_confirmed-pre_notify_start_0
   * Pseudo action:   rabbitmq-bundle-clone_start_0
   * Pseudo action:   redis_stop_0
   * Pseudo action:   redis-bundle-master_stopped_0
   * Pseudo action:   rabbitmq-bundle-clone_running_0
   * Pseudo action:   redis-bundle-master_post_notify_stopped_0
   * Pseudo action:   rabbitmq-bundle-clone_post_notify_running_0
   * Resource action: redis           notify on redis-bundle-1
   * Resource action: redis           notify on redis-bundle-2
   * Pseudo action:   redis-bundle-master_confirmed-post_notify_stopped_0
   * Pseudo action:   redis-bundle-master_pre_notify_start_0
   * Pseudo action:   redis-bundle_stopped_0
   * Pseudo action:   rabbitmq-bundle-clone_confirmed-post_notify_running_0
   * Pseudo action:   redis_notified_0
   * Pseudo action:   redis-bundle-master_confirmed-pre_notify_start_0
   * Pseudo action:   redis-bundle-master_start_0
   * Pseudo action:   rabbitmq-bundle_running_0
   * Pseudo action:   redis-bundle-master_running_0
   * Pseudo action:   redis-bundle-master_post_notify_running_0
   * Pseudo action:   redis-bundle-master_confirmed-post_notify_running_0
   * Pseudo action:   redis-bundle_running_0
   * Pseudo action:   redis-bundle-master_pre_notify_promote_0
   * Pseudo action:   redis-bundle_promote_0
   * Resource action: redis           notify on redis-bundle-1
   * Resource action: redis           notify on redis-bundle-2
   * Pseudo action:   redis-bundle-master_confirmed-pre_notify_promote_0
   * Pseudo action:   redis-bundle-master_promote_0
   * Resource action: redis           promote on redis-bundle-1
   * Pseudo action:   redis-bundle-master_promoted_0
   * Pseudo action:   redis-bundle-master_post_notify_promoted_0
   * Resource action: redis           notify on redis-bundle-1
   * Resource action: redis           notify on redis-bundle-2
   * Pseudo action:   redis-bundle-master_confirmed-post_notify_promoted_0
   * Pseudo action:   redis-bundle_promoted_0
   * Resource action: redis           monitor=20000 on redis-bundle-1
 Using the original execution date of: 2017-09-12 10:51:59Z
 
 Revised Cluster Status:
   * Node List:
     * Online: [ controller-1 controller-2 ]
     * OFFLINE: [ controller-0 ]
     * GuestOnline: [ galera-bundle-1@controller-1 galera-bundle-2@controller-2 rabbitmq-bundle-1@controller-1 rabbitmq-bundle-2@controller-2 redis-bundle-1@controller-1 redis-bundle-2@controller-2 ]
 
   * Full List of Resources:
     * Container bundle set: rabbitmq-bundle [192.168.24.1:8787/rhosp12/openstack-rabbitmq-docker:pcmklatest]:
       * rabbitmq-bundle-0	(ocf:heartbeat:rabbitmq-cluster):	 FAILED
       * rabbitmq-bundle-1	(ocf:heartbeat:rabbitmq-cluster):	 Started controller-1
       * rabbitmq-bundle-2	(ocf:heartbeat:rabbitmq-cluster):	 Started controller-2
     * Container bundle set: galera-bundle [192.168.24.1:8787/rhosp12/openstack-mariadb-docker:pcmklatest]:
       * galera-bundle-0	(ocf:heartbeat:galera):	 FAILED Promoted
       * galera-bundle-1	(ocf:heartbeat:galera):	 Promoted controller-1
       * galera-bundle-2	(ocf:heartbeat:galera):	 Promoted controller-2
     * Container bundle set: redis-bundle [192.168.24.1:8787/rhosp12/openstack-redis-docker:pcmklatest]:
       * redis-bundle-0	(ocf:heartbeat:redis):	 FAILED Promoted
       * redis-bundle-1	(ocf:heartbeat:redis):	 Promoted controller-1
       * redis-bundle-2	(ocf:heartbeat:redis):	 Unpromoted controller-2
     * ip-192.168.24.7	(ocf:heartbeat:IPaddr2):	 Started controller-2
     * ip-10.0.0.109	(ocf:heartbeat:IPaddr2):	 Started controller-1
     * ip-172.17.1.14	(ocf:heartbeat:IPaddr2):	 Started controller-2
     * ip-172.17.1.19	(ocf:heartbeat:IPaddr2):	 Started controller-2
     * ip-172.17.3.19	(ocf:heartbeat:IPaddr2):	 Started controller-2
     * ip-172.17.4.11	(ocf:heartbeat:IPaddr2):	 Started controller-1
     * Container bundle set: haproxy-bundle [192.168.24.1:8787/rhosp12/openstack-haproxy-docker:pcmklatest]:
       * haproxy-bundle-docker-0	(ocf:heartbeat:docker):	 Stopped
       * haproxy-bundle-docker-1	(ocf:heartbeat:docker):	 Started controller-2
       * haproxy-bundle-docker-2	(ocf:heartbeat:docker):	 Started controller-1
     * openstack-cinder-volume	(systemd:openstack-cinder-volume):	 Started controller-2
     * stonith-fence_ipmilan-525400efba5c	(stonith:fence_ipmilan):	 Started controller-2
     * stonith-fence_ipmilan-5254003e8e97	(stonith:fence_ipmilan):	 Started controller-1
     * stonith-fence_ipmilan-5254000dcb3f	(stonith:fence_ipmilan):	 Started controller-2
diff --git a/cts/scheduler/summary/bundle-order-stop-on-remote.summary b/cts/scheduler/summary/bundle-order-stop-on-remote.summary
index bf94ce3c72..8cd17eef61 100644
--- a/cts/scheduler/summary/bundle-order-stop-on-remote.summary
+++ b/cts/scheduler/summary/bundle-order-stop-on-remote.summary
@@ -1,224 +1,224 @@
 Current cluster status:
   * Node List:
     * RemoteNode database-0: UNCLEAN (offline)
     * RemoteNode database-2: UNCLEAN (offline)
     * Online: [ controller-0 controller-1 controller-2 ]
     * RemoteOnline: [ database-1 messaging-0 messaging-1 messaging-2 ]
     * GuestOnline: [ galera-bundle-1@controller-2 rabbitmq-bundle-0@controller-2 rabbitmq-bundle-1@controller-2 rabbitmq-bundle-2@controller-2 redis-bundle-0@controller-0 redis-bundle-2@controller-2 ]
 
   * Full List of Resources:
     * database-0	(ocf:pacemaker:remote):	 Stopped
     * database-1	(ocf:pacemaker:remote):	 Started controller-2
     * database-2	(ocf:pacemaker:remote):	 Stopped
     * messaging-0	(ocf:pacemaker:remote):	 Started controller-2
     * messaging-1	(ocf:pacemaker:remote):	 Started controller-2
     * messaging-2	(ocf:pacemaker:remote):	 Started controller-2
     * Container bundle set: rabbitmq-bundle [192.168.24.1:8787/rhosp12/openstack-rabbitmq-docker:pcmklatest]:
       * rabbitmq-bundle-0	(ocf:heartbeat:rabbitmq-cluster):	 Started messaging-0
       * rabbitmq-bundle-1	(ocf:heartbeat:rabbitmq-cluster):	 Started messaging-1
       * rabbitmq-bundle-2	(ocf:heartbeat:rabbitmq-cluster):	 Started messaging-2
     * Container bundle set: galera-bundle [192.168.24.1:8787/rhosp12/openstack-mariadb-docker:pcmklatest]:
       * galera-bundle-0	(ocf:heartbeat:galera):	 FAILED Promoted database-0 (UNCLEAN)
       * galera-bundle-1	(ocf:heartbeat:galera):	 Promoted database-1
       * galera-bundle-2	(ocf:heartbeat:galera):	 FAILED Promoted database-2 (UNCLEAN)
     * Container bundle set: redis-bundle [192.168.24.1:8787/rhosp12/openstack-redis-docker:pcmklatest]:
       * redis-bundle-0	(ocf:heartbeat:redis):	 Unpromoted controller-0
       * redis-bundle-1	(ocf:heartbeat:redis):	 Stopped
       * redis-bundle-2	(ocf:heartbeat:redis):	 Unpromoted controller-2
     * ip-192.168.24.11	(ocf:heartbeat:IPaddr2):	 Stopped
     * ip-10.0.0.104	(ocf:heartbeat:IPaddr2):	 Stopped
     * ip-172.17.1.19	(ocf:heartbeat:IPaddr2):	 Started controller-2
     * ip-172.17.1.11	(ocf:heartbeat:IPaddr2):	 Stopped
     * ip-172.17.3.13	(ocf:heartbeat:IPaddr2):	 Stopped
     * ip-172.17.4.19	(ocf:heartbeat:IPaddr2):	 Started controller-2
     * Container bundle set: haproxy-bundle [192.168.24.1:8787/rhosp12/openstack-haproxy-docker:pcmklatest]:
       * haproxy-bundle-docker-0	(ocf:heartbeat:docker):	 Started controller-0
       * haproxy-bundle-docker-1	(ocf:heartbeat:docker):	 Stopped
       * haproxy-bundle-docker-2	(ocf:heartbeat:docker):	 Started controller-2
     * openstack-cinder-volume	(systemd:openstack-cinder-volume):	 Stopped
     * stonith-fence_ipmilan-525400244e09	(stonith:fence_ipmilan):	 Started controller-2
     * stonith-fence_ipmilan-525400cdec10	(stonith:fence_ipmilan):	 Started controller-2
     * stonith-fence_ipmilan-525400c709f7	(stonith:fence_ipmilan):	 Stopped
     * stonith-fence_ipmilan-525400a7f9e0	(stonith:fence_ipmilan):	 Started controller-0
     * stonith-fence_ipmilan-525400a25787	(stonith:fence_ipmilan):	 Started controller-2
     * stonith-fence_ipmilan-5254005ea387	(stonith:fence_ipmilan):	 Stopped
     * stonith-fence_ipmilan-525400542c06	(stonith:fence_ipmilan):	 Stopped
     * stonith-fence_ipmilan-525400aac413	(stonith:fence_ipmilan):	 Started controller-2
     * stonith-fence_ipmilan-525400498d34	(stonith:fence_ipmilan):	 Stopped
 
 Transition Summary:
   * Fence (reboot) galera-bundle-2 (resource: galera-bundle-docker-2) 'guest is unclean'
   * Fence (reboot) galera-bundle-0 (resource: galera-bundle-docker-0) 'guest is unclean'
   * Start      database-0                             (                   controller-0 )
   * Start      database-2                             (                   controller-1 )
   * Recover    galera-bundle-docker-0                 (                     database-0 )
   * Start      galera-bundle-0                        (                   controller-0 )
-  * Recover    galera:0                               (              Promoted galera-bundle-0 )
+  * Recover    galera:0                               (              Promoted galera-bundle-0 )
   * Recover    galera-bundle-docker-2                 (                     database-2 )
   * Start      galera-bundle-2                        (                   controller-1 )
-  * Recover    galera:2                               (              Promoted galera-bundle-2 )
+  * Recover    galera:2                               (              Promoted galera-bundle-2 )
   * Promote    redis:0                                ( Unpromoted -> Promoted redis-bundle-0 )
   * Start      redis-bundle-docker-1                  (                   controller-1 )
   * Start      redis-bundle-1                         (                   controller-1 )
   * Start      redis:1                                (                 redis-bundle-1 )
   * Start      ip-192.168.24.11                       (                   controller-0 )
   * Start      ip-10.0.0.104                          (                   controller-1 )
   * Start      ip-172.17.1.11                         (                   controller-0 )
   * Start      ip-172.17.3.13                         (                   controller-1 )
   * Start      haproxy-bundle-docker-1                (                   controller-1 )
   * Start      openstack-cinder-volume                (                   controller-0 )
   * Start      stonith-fence_ipmilan-525400c709f7     (                   controller-1 )
   * Start      stonith-fence_ipmilan-5254005ea387     (                   controller-1 )
   * Start      stonith-fence_ipmilan-525400542c06     (                   controller-0 )
   * Start      stonith-fence_ipmilan-525400498d34     (                   controller-1 )
 
 Executing Cluster Transition:
   * Resource action: database-0      start on controller-0
   * Resource action: database-2      start on controller-1
   * Pseudo action:   redis-bundle-master_pre_notify_start_0
   * Resource action: stonith-fence_ipmilan-525400c709f7 start on controller-1
   * Resource action: stonith-fence_ipmilan-5254005ea387 start on controller-1
   * Resource action: stonith-fence_ipmilan-525400542c06 start on controller-0
   * Resource action: stonith-fence_ipmilan-525400498d34 start on controller-1
   * Pseudo action:   redis-bundle_start_0
   * Pseudo action:   galera-bundle_demote_0
   * Resource action: database-0      monitor=20000 on controller-0
   * Resource action: database-2      monitor=20000 on controller-1
   * Pseudo action:   galera-bundle-master_demote_0
   * Resource action: redis           notify on redis-bundle-0
   * Resource action: redis           notify on redis-bundle-2
   * Pseudo action:   redis-bundle-master_confirmed-pre_notify_start_0
   * Pseudo action:   redis-bundle-master_start_0
   * Resource action: stonith-fence_ipmilan-525400c709f7 monitor=60000 on controller-1
   * Resource action: stonith-fence_ipmilan-5254005ea387 monitor=60000 on controller-1
   * Resource action: stonith-fence_ipmilan-525400542c06 monitor=60000 on controller-0
   * Resource action: stonith-fence_ipmilan-525400498d34 monitor=60000 on controller-1
   * Pseudo action:   galera_demote_0
   * Pseudo action:   galera_demote_0
   * Pseudo action:   galera-bundle-master_demoted_0
   * Pseudo action:   galera-bundle_demoted_0
   * Pseudo action:   galera-bundle_stop_0
   * Resource action: galera-bundle-docker-0 stop on database-0
   * Resource action: galera-bundle-docker-2 stop on database-2
   * Pseudo action:   stonith-galera-bundle-2-reboot on galera-bundle-2
   * Pseudo action:   stonith-galera-bundle-0-reboot on galera-bundle-0
   * Pseudo action:   galera-bundle-master_stop_0
   * Resource action: redis-bundle-docker-1 start on controller-1
   * Resource action: redis-bundle-1  monitor on controller-1
   * Resource action: ip-192.168.24.11 start on controller-0
   * Resource action: ip-10.0.0.104   start on controller-1
   * Resource action: ip-172.17.1.11  start on controller-0
   * Resource action: ip-172.17.3.13  start on controller-1
   * Resource action: openstack-cinder-volume start on controller-0
   * Pseudo action:   haproxy-bundle_start_0
   * Pseudo action:   galera_stop_0
   * Resource action: redis-bundle-docker-1 monitor=60000 on controller-1
   * Resource action: redis-bundle-1  start on controller-1
   * Resource action: ip-192.168.24.11 monitor=10000 on controller-0
   * Resource action: ip-10.0.0.104   monitor=10000 on controller-1
   * Resource action: ip-172.17.1.11  monitor=10000 on controller-0
   * Resource action: ip-172.17.3.13  monitor=10000 on controller-1
   * Resource action: haproxy-bundle-docker-1 start on controller-1
   * Resource action: openstack-cinder-volume monitor=60000 on controller-0
   * Pseudo action:   haproxy-bundle_running_0
   * Pseudo action:   galera_stop_0
   * Pseudo action:   galera-bundle-master_stopped_0
   * Resource action: redis           start on redis-bundle-1
   * Pseudo action:   redis-bundle-master_running_0
   * Resource action: redis-bundle-1  monitor=30000 on controller-1
   * Resource action: haproxy-bundle-docker-1 monitor=60000 on controller-1
   * Pseudo action:   galera-bundle_stopped_0
   * Pseudo action:   galera-bundle_start_0
   * Pseudo action:   galera-bundle-master_start_0
   * Resource action: galera-bundle-docker-0 start on database-0
   * Resource action: galera-bundle-0 monitor on controller-1
   * Resource action: galera-bundle-docker-2 start on database-2
   * Resource action: galera-bundle-2 monitor on controller-1
   * Pseudo action:   redis-bundle-master_post_notify_running_0
   * Resource action: galera-bundle-docker-0 monitor=60000 on database-0
   * Resource action: galera-bundle-0 start on controller-0
   * Resource action: galera-bundle-docker-2 monitor=60000 on database-2
   * Resource action: galera-bundle-2 start on controller-1
   * Resource action: redis           notify on redis-bundle-0
   * Resource action: redis           notify on redis-bundle-1
   * Resource action: redis           notify on redis-bundle-2
   * Pseudo action:   redis-bundle-master_confirmed-post_notify_running_0
   * Pseudo action:   redis-bundle_running_0
   * Resource action: galera          start on galera-bundle-0
   * Resource action: galera          start on galera-bundle-2
   * Pseudo action:   galera-bundle-master_running_0
   * Resource action: galera-bundle-0 monitor=30000 on controller-0
   * Resource action: galera-bundle-2 monitor=30000 on controller-1
   * Pseudo action:   redis-bundle-master_pre_notify_promote_0
   * Pseudo action:   redis-bundle_promote_0
   * Pseudo action:   galera-bundle_running_0
   * Resource action: redis           notify on redis-bundle-0
   * Resource action: redis           notify on redis-bundle-1
   * Resource action: redis           notify on redis-bundle-2
   * Pseudo action:   redis-bundle-master_confirmed-pre_notify_promote_0
   * Pseudo action:   redis-bundle-master_promote_0
   * Pseudo action:   galera-bundle_promote_0
   * Pseudo action:   galera-bundle-master_promote_0
   * Resource action: redis           promote on redis-bundle-0
   * Pseudo action:   redis-bundle-master_promoted_0
   * Resource action: galera          promote on galera-bundle-0
   * Resource action: galera          promote on galera-bundle-2
   * Pseudo action:   galera-bundle-master_promoted_0
   * Pseudo action:   redis-bundle-master_post_notify_promoted_0
   * Pseudo action:   galera-bundle_promoted_0
   * Resource action: galera          monitor=10000 on galera-bundle-0
   * Resource action: galera          monitor=10000 on galera-bundle-2
   * Resource action: redis           notify on redis-bundle-0
   * Resource action: redis           notify on redis-bundle-1
   * Resource action: redis           notify on redis-bundle-2
   * Pseudo action:   redis-bundle-master_confirmed-post_notify_promoted_0
   * Pseudo action:   redis-bundle_promoted_0
   * Resource action: redis           monitor=20000 on redis-bundle-0
   * Resource action: redis           monitor=60000 on redis-bundle-1
   * Resource action: redis           monitor=45000 on redis-bundle-1
 
 Revised Cluster Status:
   * Node List:
     * Online: [ controller-0 controller-1 controller-2 ]
     * RemoteOnline: [ database-0 database-1 database-2 messaging-0 messaging-1 messaging-2 ]
     * GuestOnline: [ galera-bundle-0@controller-0 galera-bundle-1@controller-2 galera-bundle-2@controller-1 rabbitmq-bundle-0@controller-2 rabbitmq-bundle-1@controller-2 rabbitmq-bundle-2@controller-2 redis-bundle-0@controller-0 redis-bundle-1@controller-1 redis-bundle-2@controller-2 ]
 
   * Full List of Resources:
     * database-0	(ocf:pacemaker:remote):	 Started controller-0
     * database-1	(ocf:pacemaker:remote):	 Started controller-2
     * database-2	(ocf:pacemaker:remote):	 Started controller-1
     * messaging-0	(ocf:pacemaker:remote):	 Started controller-2
     * messaging-1	(ocf:pacemaker:remote):	 Started controller-2
     * messaging-2	(ocf:pacemaker:remote):	 Started controller-2
     * Container bundle set: rabbitmq-bundle [192.168.24.1:8787/rhosp12/openstack-rabbitmq-docker:pcmklatest]:
       * rabbitmq-bundle-0	(ocf:heartbeat:rabbitmq-cluster):	 Started messaging-0
       * rabbitmq-bundle-1	(ocf:heartbeat:rabbitmq-cluster):	 Started messaging-1
       * rabbitmq-bundle-2	(ocf:heartbeat:rabbitmq-cluster):	 Started messaging-2
     * Container bundle set: galera-bundle [192.168.24.1:8787/rhosp12/openstack-mariadb-docker:pcmklatest]:
       * galera-bundle-0	(ocf:heartbeat:galera):	 Promoted database-0
       * galera-bundle-1	(ocf:heartbeat:galera):	 Promoted database-1
       * galera-bundle-2	(ocf:heartbeat:galera):	 Promoted database-2
     * Container bundle set: redis-bundle [192.168.24.1:8787/rhosp12/openstack-redis-docker:pcmklatest]:
       * redis-bundle-0	(ocf:heartbeat:redis):	 Promoted controller-0
       * redis-bundle-1	(ocf:heartbeat:redis):	 Unpromoted controller-1
       * redis-bundle-2	(ocf:heartbeat:redis):	 Unpromoted controller-2
     * ip-192.168.24.11	(ocf:heartbeat:IPaddr2):	 Started controller-0
     * ip-10.0.0.104	(ocf:heartbeat:IPaddr2):	 Started controller-1
     * ip-172.17.1.19	(ocf:heartbeat:IPaddr2):	 Started controller-2
     * ip-172.17.1.11	(ocf:heartbeat:IPaddr2):	 Started controller-0
     * ip-172.17.3.13	(ocf:heartbeat:IPaddr2):	 Started controller-1
     * ip-172.17.4.19	(ocf:heartbeat:IPaddr2):	 Started controller-2
     * Container bundle set: haproxy-bundle [192.168.24.1:8787/rhosp12/openstack-haproxy-docker:pcmklatest]:
       * haproxy-bundle-docker-0	(ocf:heartbeat:docker):	 Started controller-0
       * haproxy-bundle-docker-1	(ocf:heartbeat:docker):	 Started controller-1
       * haproxy-bundle-docker-2	(ocf:heartbeat:docker):	 Started controller-2
     * openstack-cinder-volume	(systemd:openstack-cinder-volume):	 Started controller-0
     * stonith-fence_ipmilan-525400244e09	(stonith:fence_ipmilan):	 Started controller-2
     * stonith-fence_ipmilan-525400cdec10	(stonith:fence_ipmilan):	 Started controller-2
     * stonith-fence_ipmilan-525400c709f7	(stonith:fence_ipmilan):	 Started controller-1
     * stonith-fence_ipmilan-525400a7f9e0	(stonith:fence_ipmilan):	 Started controller-0
     * stonith-fence_ipmilan-525400a25787	(stonith:fence_ipmilan):	 Started controller-2
     * stonith-fence_ipmilan-5254005ea387	(stonith:fence_ipmilan):	 Started controller-1
     * stonith-fence_ipmilan-525400542c06	(stonith:fence_ipmilan):	 Started controller-0
     * stonith-fence_ipmilan-525400aac413	(stonith:fence_ipmilan):	 Started controller-2
     * stonith-fence_ipmilan-525400498d34	(stonith:fence_ipmilan):	 Started controller-1
diff --git a/cts/scheduler/summary/colocation-influence.summary b/cts/scheduler/summary/colocation-influence.summary
index 3ea8b3f545..7fa4fcf0c2 100644
--- a/cts/scheduler/summary/colocation-influence.summary
+++ b/cts/scheduler/summary/colocation-influence.summary
@@ -1,170 +1,170 @@
 Current cluster status:
   * Node List:
     * Online: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]
     * GuestOnline: [ bundle10-0@rhel7-2 bundle10-1@rhel7-3 bundle11-0@rhel7-1 ]
 
   * Full List of Resources:
     * Fencing	(stonith:fence_xvm):	 Started rhel7-1
     * rsc1a	(ocf:pacemaker:Dummy):	 Started rhel7-2
     * rsc1b	(ocf:pacemaker:Dummy):	 Started rhel7-2
     * rsc2a	(ocf:pacemaker:Dummy):	 Started rhel7-4
     * rsc2b	(ocf:pacemaker:Dummy):	 Started rhel7-4
     * rsc3a	(ocf:pacemaker:Dummy):	 Stopped
     * rsc3b	(ocf:pacemaker:Dummy):	 Stopped
     * rsc4a	(ocf:pacemaker:Dummy):	 Started rhel7-3
     * rsc4b	(ocf:pacemaker:Dummy):	 Started rhel7-3
     * rsc5a	(ocf:pacemaker:Dummy):	 Started rhel7-1
     * Resource Group: group5a:
       * rsc5a1	(ocf:pacemaker:Dummy):	 Started rhel7-1
       * rsc5a2	(ocf:pacemaker:Dummy):	 Started rhel7-1
     * Resource Group: group6a:
       * rsc6a1	(ocf:pacemaker:Dummy):	 Started rhel7-2
       * rsc6a2	(ocf:pacemaker:Dummy):	 Started rhel7-2
     * rsc6a	(ocf:pacemaker:Dummy):	 Started rhel7-2
     * Resource Group: group7a:
       * rsc7a1	(ocf:pacemaker:Dummy):	 Started rhel7-3
       * rsc7a2	(ocf:pacemaker:Dummy):	 Started rhel7-3
     * Clone Set: rsc8a-clone [rsc8a]:
       * Started: [ rhel7-1 rhel7-3 rhel7-4 ]
     * Clone Set: rsc8b-clone [rsc8b]:
       * Started: [ rhel7-1 rhel7-3 rhel7-4 ]
     * rsc9a	(ocf:pacemaker:Dummy):	 Started rhel7-4
     * rsc9b	(ocf:pacemaker:Dummy):	 Started rhel7-4
     * rsc9c	(ocf:pacemaker:Dummy):	 Started rhel7-4
     * rsc10a	(ocf:pacemaker:Dummy):	 Started rhel7-2
     * rsc11a	(ocf:pacemaker:Dummy):	 Started rhel7-1
     * rsc12a	(ocf:pacemaker:Dummy):	 Started rhel7-1
     * rsc12b	(ocf:pacemaker:Dummy):	 Started rhel7-1
     * rsc12c	(ocf:pacemaker:Dummy):	 Started rhel7-1
     * Container bundle set: bundle10 [pcmktest:http]:
       * bundle10-0 (192.168.122.131)	(ocf:heartbeat:apache):	 Started rhel7-2
       * bundle10-1 (192.168.122.132)	(ocf:heartbeat:apache):	 Started rhel7-3
     * Container bundle set: bundle11 [pcmktest:http]:
       * bundle11-0 (192.168.122.134)	(ocf:pacemaker:Dummy):	 Started rhel7-1
       * bundle11-1 (192.168.122.135)	(ocf:pacemaker:Dummy):	 Stopped
     * rsc13a	(ocf:pacemaker:Dummy):	 Started rhel7-3
     * Clone Set: rsc13b-clone [rsc13b] (promotable):
       * Promoted: [ rhel7-3 ]
       * Unpromoted: [ rhel7-1 rhel7-2 rhel7-4 ]
       * Stopped: [ rhel7-5 ]
     * rsc14b	(ocf:pacemaker:Dummy):	 Started rhel7-4
     * Clone Set: rsc14a-clone [rsc14a] (promotable):
       * Promoted: [ rhel7-4 ]
       * Unpromoted: [ rhel7-1 rhel7-2 rhel7-3 ]
       * Stopped: [ rhel7-5 ]
 
 Transition Summary:
   * Move       rsc1a          ( rhel7-2 -> rhel7-3 )
   * Move       rsc1b          ( rhel7-2 -> rhel7-3 )
   * Stop       rsc2a          (            rhel7-4 )  due to node availability
   * Start      rsc3a          (            rhel7-2 )
   * Start      rsc3b          (            rhel7-2 )
   * Stop       rsc4a          (            rhel7-3 )  due to node availability
   * Stop       rsc5a          (            rhel7-1 )  due to node availability
   * Stop       rsc6a1         (            rhel7-2 )  due to node availability
   * Stop       rsc6a2         (            rhel7-2 )  due to node availability
   * Stop       rsc7a2         (            rhel7-3 )  due to node availability
   * Stop       rsc8a:1        (            rhel7-4 )  due to node availability
   * Stop       rsc9c          (            rhel7-4 )  due to node availability
   * Move       rsc10a         ( rhel7-2 -> rhel7-3 )
   * Stop       rsc12b         (            rhel7-1 )  due to node availability
   * Start      bundle11-1     (            rhel7-5 )  due to unrunnable bundle11-docker-1 start (blocked)
   * Start      bundle11a:1    (         bundle11-1 )  due to unrunnable bundle11-docker-1 start (blocked)
   * Stop       rsc13a         (            rhel7-3 )  due to node availability
-  * Stop       rsc14a:1       (   Promoted rhel7-4 )  due to node availability
+  * Stop       rsc14a:1       (     Promoted rhel7-4 )  due to node availability
 
 Executing Cluster Transition:
   * Resource action: rsc1a           stop on rhel7-2
   * Resource action: rsc1b           stop on rhel7-2
   * Resource action: rsc2a           stop on rhel7-4
   * Resource action: rsc3a           start on rhel7-2
   * Resource action: rsc3b           start on rhel7-2
   * Resource action: rsc4a           stop on rhel7-3
   * Resource action: rsc5a           stop on rhel7-1
   * Pseudo action:   group6a_stop_0
   * Resource action: rsc6a2          stop on rhel7-2
   * Pseudo action:   group7a_stop_0
   * Resource action: rsc7a2          stop on rhel7-3
   * Pseudo action:   rsc8a-clone_stop_0
   * Resource action: rsc9c           stop on rhel7-4
   * Resource action: rsc10a          stop on rhel7-2
   * Resource action: rsc12b          stop on rhel7-1
   * Resource action: rsc13a          stop on rhel7-3
   * Pseudo action:   rsc14a-clone_demote_0
   * Pseudo action:   bundle11_start_0
   * Resource action: rsc1a           start on rhel7-3
   * Resource action: rsc1b           start on rhel7-3
   * Resource action: rsc3a           monitor=10000 on rhel7-2
   * Resource action: rsc3b           monitor=10000 on rhel7-2
   * Resource action: rsc6a1          stop on rhel7-2
   * Pseudo action:   group7a_stopped_0
   * Resource action: rsc8a           stop on rhel7-4
   * Pseudo action:   rsc8a-clone_stopped_0
   * Resource action: rsc10a          start on rhel7-3
   * Pseudo action:   bundle11-clone_start_0
   * Resource action: rsc14a          demote on rhel7-4
   * Pseudo action:   rsc14a-clone_demoted_0
   * Pseudo action:   rsc14a-clone_stop_0
   * Resource action: rsc1a           monitor=10000 on rhel7-3
   * Resource action: rsc1b           monitor=10000 on rhel7-3
   * Pseudo action:   group6a_stopped_0
   * Resource action: rsc10a          monitor=10000 on rhel7-3
   * Pseudo action:   bundle11-clone_running_0
   * Resource action: rsc14a          stop on rhel7-4
   * Pseudo action:   rsc14a-clone_stopped_0
   * Pseudo action:   bundle11_running_0
 
 Revised Cluster Status:
   * Node List:
     * Online: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]
     * GuestOnline: [ bundle10-0@rhel7-2 bundle10-1@rhel7-3 bundle11-0@rhel7-1 ]
 
   * Full List of Resources:
     * Fencing	(stonith:fence_xvm):	 Started rhel7-1
     * rsc1a	(ocf:pacemaker:Dummy):	 Started rhel7-3
     * rsc1b	(ocf:pacemaker:Dummy):	 Started rhel7-3
     * rsc2a	(ocf:pacemaker:Dummy):	 Stopped
     * rsc2b	(ocf:pacemaker:Dummy):	 Started rhel7-4
     * rsc3a	(ocf:pacemaker:Dummy):	 Started rhel7-2
     * rsc3b	(ocf:pacemaker:Dummy):	 Started rhel7-2
     * rsc4a	(ocf:pacemaker:Dummy):	 Stopped
     * rsc4b	(ocf:pacemaker:Dummy):	 Started rhel7-3
     * rsc5a	(ocf:pacemaker:Dummy):	 Stopped
     * Resource Group: group5a:
       * rsc5a1	(ocf:pacemaker:Dummy):	 Started rhel7-1
       * rsc5a2	(ocf:pacemaker:Dummy):	 Started rhel7-1
     * Resource Group: group6a:
       * rsc6a1	(ocf:pacemaker:Dummy):	 Stopped
       * rsc6a2	(ocf:pacemaker:Dummy):	 Stopped
     * rsc6a	(ocf:pacemaker:Dummy):	 Started rhel7-2
     * Resource Group: group7a:
       * rsc7a1	(ocf:pacemaker:Dummy):	 Started rhel7-3
       * rsc7a2	(ocf:pacemaker:Dummy):	 Stopped
     * Clone Set: rsc8a-clone [rsc8a]:
       * Started: [ rhel7-1 rhel7-3 ]
       * Stopped: [ rhel7-2 rhel7-4 rhel7-5 ]
     * Clone Set: rsc8b-clone [rsc8b]:
       * Started: [ rhel7-1 rhel7-3 rhel7-4 ]
     * rsc9a	(ocf:pacemaker:Dummy):	 Started rhel7-4
     * rsc9b	(ocf:pacemaker:Dummy):	 Started rhel7-4
     * rsc9c	(ocf:pacemaker:Dummy):	 Stopped
     * rsc10a	(ocf:pacemaker:Dummy):	 Started rhel7-3
     * rsc11a	(ocf:pacemaker:Dummy):	 Started rhel7-1
     * rsc12a	(ocf:pacemaker:Dummy):	 Started rhel7-1
     * rsc12b	(ocf:pacemaker:Dummy):	 Stopped
     * rsc12c	(ocf:pacemaker:Dummy):	 Started rhel7-1
     * Container bundle set: bundle10 [pcmktest:http]:
       * bundle10-0 (192.168.122.131)	(ocf:heartbeat:apache):	 Started rhel7-2
       * bundle10-1 (192.168.122.132)	(ocf:heartbeat:apache):	 Started rhel7-3
     * Container bundle set: bundle11 [pcmktest:http]:
       * bundle11-0 (192.168.122.134)	(ocf:pacemaker:Dummy):	 Started rhel7-1
       * bundle11-1 (192.168.122.135)	(ocf:pacemaker:Dummy):	 Stopped
     * rsc13a	(ocf:pacemaker:Dummy):	 Stopped
     * Clone Set: rsc13b-clone [rsc13b] (promotable):
       * Promoted: [ rhel7-3 ]
       * Unpromoted: [ rhel7-1 rhel7-2 rhel7-4 ]
       * Stopped: [ rhel7-5 ]
     * rsc14b	(ocf:pacemaker:Dummy):	 Started rhel7-4
     * Clone Set: rsc14a-clone [rsc14a] (promotable):
       * Unpromoted: [ rhel7-1 rhel7-2 rhel7-3 ]
       * Stopped: [ rhel7-4 rhel7-5 ]
diff --git a/cts/scheduler/summary/dc-fence-ordering.summary b/cts/scheduler/summary/dc-fence-ordering.summary
index ac46031f07..305ebd5c19 100644
--- a/cts/scheduler/summary/dc-fence-ordering.summary
+++ b/cts/scheduler/summary/dc-fence-ordering.summary
@@ -1,82 +1,82 @@
 Using the original execution date of: 2018-11-28 18:37:16Z
 Current cluster status:
   * Node List:
     * Node rhel7-1: UNCLEAN (online)
     * Online: [ rhel7-2 rhel7-4 rhel7-5 ]
     * OFFLINE: [ rhel7-3 ]
 
   * Full List of Resources:
     * Fencing	(stonith:fence_xvm):	 Stopped
     * FencingPass	(stonith:fence_dummy):	 Stopped
     * FencingFail	(stonith:fence_dummy):	 Stopped
     * rsc_rhel7-1	(ocf:heartbeat:IPaddr2):	 Stopped
     * rsc_rhel7-2	(ocf:heartbeat:IPaddr2):	 Stopped
     * rsc_rhel7-3	(ocf:heartbeat:IPaddr2):	 Stopped
     * rsc_rhel7-4	(ocf:heartbeat:IPaddr2):	 Stopped
     * rsc_rhel7-5	(ocf:heartbeat:IPaddr2):	 Stopped
     * migrator	(ocf:pacemaker:Dummy):	 Stopped
     * Clone Set: Connectivity [ping-1]:
       * Stopped: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]
     * Clone Set: promotable-1 [stateful-1] (promotable):
       * Promoted: [ rhel7-1 ]
       * Unpromoted: [ rhel7-2 rhel7-4 rhel7-5 ]
       * Stopped: [ rhel7-3 ]
     * Resource Group: group-1:
       * r192.168.122.207	(ocf:heartbeat:IPaddr2):	 Started rhel7-1
       * petulant	(service:pacemaker-cts-dummyd@10):	 FAILED rhel7-1
       * r192.168.122.208	(ocf:heartbeat:IPaddr2):	 Stopped
     * lsb-dummy	(lsb:LSBDummy):	 Stopped
 
 Transition Summary:
   * Fence (reboot) rhel7-1 'petulant failed there'
-  * Stop       stateful-1:0         ( Unpromoted rhel7-5 )  due to node availability
-  * Stop       stateful-1:1         (   Promoted rhel7-1 )  due to node availability
-  * Stop       stateful-1:2         ( Unpromoted rhel7-2 )  due to node availability
-  * Stop       stateful-1:3         ( Unpromoted rhel7-4 )  due to node availability
+  * Stop       stateful-1:0         (  Unpromoted rhel7-5 )  due to node availability
+  * Stop       stateful-1:1         ( Promoted rhel7-1 )  due to node availability
+  * Stop       stateful-1:2         (  Unpromoted rhel7-2 )  due to node availability
+  * Stop       stateful-1:3         (  Unpromoted rhel7-4 )  due to node availability
   * Stop       r192.168.122.207     (        rhel7-1 )  due to node availability
   * Stop       petulant             (        rhel7-1 )  due to node availability
 
 Executing Cluster Transition:
   * Fencing rhel7-1 (reboot)
   * Pseudo action:   group-1_stop_0
   * Pseudo action:   petulant_stop_0
   * Pseudo action:   r192.168.122.207_stop_0
   * Pseudo action:   group-1_stopped_0
   * Pseudo action:   promotable-1_demote_0
   * Pseudo action:   stateful-1_demote_0
   * Pseudo action:   promotable-1_demoted_0
   * Pseudo action:   promotable-1_stop_0
   * Resource action: stateful-1      stop on rhel7-5
   * Pseudo action:   stateful-1_stop_0
   * Resource action: stateful-1      stop on rhel7-2
   * Resource action: stateful-1      stop on rhel7-4
   * Pseudo action:   promotable-1_stopped_0
   * Cluster action:  do_shutdown on rhel7-5
   * Cluster action:  do_shutdown on rhel7-4
   * Cluster action:  do_shutdown on rhel7-2
 Using the original execution date of: 2018-11-28 18:37:16Z
 
 Revised Cluster Status:
   * Node List:
     * Online: [ rhel7-2 rhel7-4 rhel7-5 ]
     * OFFLINE: [ rhel7-1 rhel7-3 ]
 
   * Full List of Resources:
     * Fencing	(stonith:fence_xvm):	 Stopped
     * FencingPass	(stonith:fence_dummy):	 Stopped
     * FencingFail	(stonith:fence_dummy):	 Stopped
     * rsc_rhel7-1	(ocf:heartbeat:IPaddr2):	 Stopped
     * rsc_rhel7-2	(ocf:heartbeat:IPaddr2):	 Stopped
     * rsc_rhel7-3	(ocf:heartbeat:IPaddr2):	 Stopped
     * rsc_rhel7-4	(ocf:heartbeat:IPaddr2):	 Stopped
     * rsc_rhel7-5	(ocf:heartbeat:IPaddr2):	 Stopped
     * migrator	(ocf:pacemaker:Dummy):	 Stopped
     * Clone Set: Connectivity [ping-1]:
       * Stopped: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]
     * Clone Set: promotable-1 [stateful-1] (promotable):
       * Stopped: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]
     * Resource Group: group-1:
       * r192.168.122.207	(ocf:heartbeat:IPaddr2):	 Stopped
       * petulant	(service:pacemaker-cts-dummyd@10):	 Stopped
       * r192.168.122.208	(ocf:heartbeat:IPaddr2):	 Stopped
     * lsb-dummy	(lsb:LSBDummy):	 Stopped
diff --git a/cts/scheduler/summary/guest-node-host-dies.summary b/cts/scheduler/summary/guest-node-host-dies.summary
index b0286b2846..f4509b9029 100644
--- a/cts/scheduler/summary/guest-node-host-dies.summary
+++ b/cts/scheduler/summary/guest-node-host-dies.summary
@@ -1,82 +1,82 @@
 Current cluster status:
   * Node List:
     * Node rhel7-1: UNCLEAN (offline)
     * Online: [ rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]
 
   * Full List of Resources:
     * Fencing	(stonith:fence_xvm):	 Started rhel7-4
     * rsc_rhel7-1	(ocf:heartbeat:IPaddr2):	 Started rhel7-1 (UNCLEAN)
     * container1	(ocf:heartbeat:VirtualDomain):	 FAILED rhel7-1 (UNCLEAN)
     * container2	(ocf:heartbeat:VirtualDomain):	 FAILED rhel7-1 (UNCLEAN)
     * Clone Set: lxc-ms-master [lxc-ms] (promotable):
       * Stopped: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]
 
 Transition Summary:
   * Fence (reboot) lxc2 (resource: container2) 'guest is unclean'
   * Fence (reboot) lxc1 (resource: container1) 'guest is unclean'
   * Fence (reboot) rhel7-1 'rsc_rhel7-1 is thought to be active there'
   * Restart    Fencing         (            rhel7-4 )  due to resource definition change
   * Move       rsc_rhel7-1     ( rhel7-1 -> rhel7-5 )
   * Recover    container1      ( rhel7-1 -> rhel7-2 )
   * Recover    container2      ( rhel7-1 -> rhel7-3 )
-  * Recover    lxc-ms:0        (      Promoted lxc1 )
-  * Recover    lxc-ms:1        (    Unpromoted lxc2 )
+  * Recover    lxc-ms:0        (        Promoted lxc1 )
+  * Recover    lxc-ms:1        (         Unpromoted lxc2 )
   * Move       lxc1            ( rhel7-1 -> rhel7-2 )
   * Move       lxc2            ( rhel7-1 -> rhel7-3 )
 
 Executing Cluster Transition:
   * Resource action: Fencing         stop on rhel7-4
   * Pseudo action:   lxc-ms-master_demote_0
   * Pseudo action:   lxc1_stop_0
   * Resource action: lxc1            monitor on rhel7-5
   * Resource action: lxc1            monitor on rhel7-4
   * Resource action: lxc1            monitor on rhel7-3
   * Pseudo action:   lxc2_stop_0
   * Resource action: lxc2            monitor on rhel7-5
   * Resource action: lxc2            monitor on rhel7-4
   * Resource action: lxc2            monitor on rhel7-2
   * Fencing rhel7-1 (reboot)
   * Pseudo action:   rsc_rhel7-1_stop_0
   * Pseudo action:   container1_stop_0
   * Pseudo action:   container2_stop_0
   * Pseudo action:   stonith-lxc2-reboot on lxc2
   * Pseudo action:   stonith-lxc1-reboot on lxc1
   * Resource action: Fencing         start on rhel7-4
   * Resource action: Fencing         monitor=120000 on rhel7-4
   * Resource action: rsc_rhel7-1     start on rhel7-5
   * Resource action: container1      start on rhel7-2
   * Resource action: container2      start on rhel7-3
   * Pseudo action:   lxc-ms_demote_0
   * Pseudo action:   lxc-ms-master_demoted_0
   * Pseudo action:   lxc-ms-master_stop_0
   * Resource action: lxc1            start on rhel7-2
   * Resource action: lxc2            start on rhel7-3
   * Resource action: rsc_rhel7-1     monitor=5000 on rhel7-5
   * Pseudo action:   lxc-ms_stop_0
   * Pseudo action:   lxc-ms_stop_0
   * Pseudo action:   lxc-ms-master_stopped_0
   * Pseudo action:   lxc-ms-master_start_0
   * Resource action: lxc1            monitor=30000 on rhel7-2
   * Resource action: lxc2            monitor=30000 on rhel7-3
   * Resource action: lxc-ms          start on lxc1
   * Resource action: lxc-ms          start on lxc2
   * Pseudo action:   lxc-ms-master_running_0
   * Resource action: lxc-ms          monitor=10000 on lxc2
   * Pseudo action:   lxc-ms-master_promote_0
   * Resource action: lxc-ms          promote on lxc1
   * Pseudo action:   lxc-ms-master_promoted_0
 
 Revised Cluster Status:
   * Node List:
     * Online: [ rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]
     * OFFLINE: [ rhel7-1 ]
     * GuestOnline: [ lxc1@rhel7-2 lxc2@rhel7-3 ]
 
   * Full List of Resources:
     * Fencing	(stonith:fence_xvm):	 Started rhel7-4
     * rsc_rhel7-1	(ocf:heartbeat:IPaddr2):	 Started rhel7-5
     * container1	(ocf:heartbeat:VirtualDomain):	 Started rhel7-2
     * container2	(ocf:heartbeat:VirtualDomain):	 Started rhel7-3
     * Clone Set: lxc-ms-master [lxc-ms] (promotable):
       * Promoted: [ lxc1 ]
       * Unpromoted: [ lxc2 ]
diff --git a/cts/scheduler/summary/migrate-fencing.summary b/cts/scheduler/summary/migrate-fencing.summary
index fd4fffa1d3..955bb0f434 100644
--- a/cts/scheduler/summary/migrate-fencing.summary
+++ b/cts/scheduler/summary/migrate-fencing.summary
@@ -1,108 +1,108 @@
 Current cluster status:
   * Node List:
     * Node pcmk-4: UNCLEAN (online)
     * Online: [ pcmk-1 pcmk-2 pcmk-3 ]
 
   * Full List of Resources:
     * Clone Set: Fencing [FencingChild]:
       * Started: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ]
     * Resource Group: group-1:
       * r192.168.101.181	(ocf:heartbeat:IPaddr):	 Started pcmk-4
       * r192.168.101.182	(ocf:heartbeat:IPaddr):	 Started pcmk-4
       * r192.168.101.183	(ocf:heartbeat:IPaddr):	 Started pcmk-4
     * rsc_pcmk-1	(ocf:heartbeat:IPaddr):	 Started pcmk-1
     * rsc_pcmk-2	(ocf:heartbeat:IPaddr):	 Started pcmk-2
     * rsc_pcmk-3	(ocf:heartbeat:IPaddr):	 Started pcmk-3
     * rsc_pcmk-4	(ocf:heartbeat:IPaddr):	 Started pcmk-4
     * lsb-dummy	(lsb:/usr/share/pacemaker/tests/cts/LSBDummy):	 Started pcmk-4
     * migrator	(ocf:pacemaker:Dummy):	 Started pcmk-1
     * Clone Set: Connectivity [ping-1]:
       * Started: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ]
     * Clone Set: master-1 [stateful-1] (promotable):
       * Promoted: [ pcmk-4 ]
       * Unpromoted: [ pcmk-1 pcmk-2 pcmk-3 ]
 
 Transition Summary:
   * Fence (reboot) pcmk-4 'termination was requested'
   * Stop       FencingChild:0     (                 pcmk-4 )  due to node availability
   * Move       r192.168.101.181   (       pcmk-4 -> pcmk-1 )
   * Move       r192.168.101.182   (       pcmk-4 -> pcmk-1 )
   * Move       r192.168.101.183   (       pcmk-4 -> pcmk-1 )
   * Move       rsc_pcmk-4         (       pcmk-4 -> pcmk-2 )
   * Move       lsb-dummy          (       pcmk-4 -> pcmk-1 )
   * Migrate    migrator           (       pcmk-1 -> pcmk-3 )
   * Stop       ping-1:0           (                 pcmk-4 )  due to node availability
-  * Stop       stateful-1:0       (               Promoted pcmk-4 )  due to node availability
+  * Stop       stateful-1:0       (          Promoted pcmk-4 )  due to node availability
   * Promote    stateful-1:1       ( Unpromoted -> Promoted pcmk-1 )
 
 Executing Cluster Transition:
   * Pseudo action:   Fencing_stop_0
   * Resource action: stateful-1:3    monitor=15000 on pcmk-3
   * Resource action: stateful-1:2    monitor=15000 on pcmk-2
   * Fencing pcmk-4 (reboot)
   * Pseudo action:   FencingChild:0_stop_0
   * Pseudo action:   Fencing_stopped_0
   * Pseudo action:   rsc_pcmk-4_stop_0
   * Pseudo action:   lsb-dummy_stop_0
   * Resource action: migrator        migrate_to on pcmk-1
   * Pseudo action:   Connectivity_stop_0
   * Pseudo action:   group-1_stop_0
   * Pseudo action:   r192.168.101.183_stop_0
   * Resource action: rsc_pcmk-4      start on pcmk-2
   * Resource action: migrator        migrate_from on pcmk-3
   * Resource action: migrator        stop on pcmk-1
   * Pseudo action:   ping-1:0_stop_0
   * Pseudo action:   Connectivity_stopped_0
   * Pseudo action:   r192.168.101.182_stop_0
   * Resource action: rsc_pcmk-4      monitor=5000 on pcmk-2
   * Pseudo action:   migrator_start_0
   * Pseudo action:   r192.168.101.181_stop_0
   * Resource action: migrator        monitor=10000 on pcmk-3
   * Pseudo action:   group-1_stopped_0
   * Pseudo action:   master-1_demote_0
   * Pseudo action:   stateful-1:0_demote_0
   * Pseudo action:   master-1_demoted_0
   * Pseudo action:   master-1_stop_0
   * Pseudo action:   stateful-1:0_stop_0
   * Pseudo action:   master-1_stopped_0
   * Pseudo action:   master-1_promote_0
   * Resource action: stateful-1:1    promote on pcmk-1
   * Pseudo action:   master-1_promoted_0
   * Pseudo action:   group-1_start_0
   * Resource action: r192.168.101.181 start on pcmk-1
   * Resource action: r192.168.101.182 start on pcmk-1
   * Resource action: r192.168.101.183 start on pcmk-1
   * Resource action: stateful-1:1    monitor=16000 on pcmk-1
   * Pseudo action:   group-1_running_0
   * Resource action: r192.168.101.181 monitor=5000 on pcmk-1
   * Resource action: r192.168.101.182 monitor=5000 on pcmk-1
   * Resource action: r192.168.101.183 monitor=5000 on pcmk-1
   * Resource action: lsb-dummy       start on pcmk-1
   * Resource action: lsb-dummy       monitor=5000 on pcmk-1
 
 Revised Cluster Status:
   * Node List:
     * Online: [ pcmk-1 pcmk-2 pcmk-3 ]
     * OFFLINE: [ pcmk-4 ]
 
   * Full List of Resources:
     * Clone Set: Fencing [FencingChild]:
       * Started: [ pcmk-1 pcmk-2 pcmk-3 ]
       * Stopped: [ pcmk-4 ]
     * Resource Group: group-1:
       * r192.168.101.181	(ocf:heartbeat:IPaddr):	 Started pcmk-1
       * r192.168.101.182	(ocf:heartbeat:IPaddr):	 Started pcmk-1
       * r192.168.101.183	(ocf:heartbeat:IPaddr):	 Started pcmk-1
     * rsc_pcmk-1	(ocf:heartbeat:IPaddr):	 Started pcmk-1
     * rsc_pcmk-2	(ocf:heartbeat:IPaddr):	 Started pcmk-2
     * rsc_pcmk-3	(ocf:heartbeat:IPaddr):	 Started pcmk-3
     * rsc_pcmk-4	(ocf:heartbeat:IPaddr):	 Started pcmk-2
     * lsb-dummy	(lsb:/usr/share/pacemaker/tests/cts/LSBDummy):	 Started pcmk-1
     * migrator	(ocf:pacemaker:Dummy):	 Started pcmk-3
     * Clone Set: Connectivity [ping-1]:
       * Started: [ pcmk-1 pcmk-2 pcmk-3 ]
       * Stopped: [ pcmk-4 ]
     * Clone Set: master-1 [stateful-1] (promotable):
       * Promoted: [ pcmk-1 ]
       * Unpromoted: [ pcmk-2 pcmk-3 ]
       * Stopped: [ pcmk-4 ]
diff --git a/cts/scheduler/summary/migrate-shutdown.summary b/cts/scheduler/summary/migrate-shutdown.summary
index 551a41a175..1da9db21e8 100644
--- a/cts/scheduler/summary/migrate-shutdown.summary
+++ b/cts/scheduler/summary/migrate-shutdown.summary
@@ -1,92 +1,92 @@
 Current cluster status:
   * Node List:
     * Online: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ]
 
   * Full List of Resources:
     * Fencing	(stonith:fence_xvm):	 Started pcmk-1
     * Resource Group: group-1:
       * r192.168.122.105	(ocf:heartbeat:IPaddr):	 Started pcmk-2
       * r192.168.122.106	(ocf:heartbeat:IPaddr):	 Started pcmk-2
       * r192.168.122.107	(ocf:heartbeat:IPaddr):	 Started pcmk-2
     * rsc_pcmk-1	(ocf:heartbeat:IPaddr):	 Started pcmk-1
     * rsc_pcmk-2	(ocf:heartbeat:IPaddr):	 Started pcmk-2
     * rsc_pcmk-3	(ocf:heartbeat:IPaddr):	 Stopped
     * rsc_pcmk-4	(ocf:heartbeat:IPaddr):	 Started pcmk-4
     * lsb-dummy	(lsb:/usr/share/pacemaker/tests/cts/LSBDummy):	 Started pcmk-2
     * migrator	(ocf:pacemaker:Dummy):	 Started pcmk-1
     * Clone Set: Connectivity [ping-1]:
       * Started: [ pcmk-1 pcmk-2 pcmk-4 ]
       * Stopped: [ pcmk-3 ]
     * Clone Set: master-1 [stateful-1] (promotable):
       * Promoted: [ pcmk-2 ]
       * Unpromoted: [ pcmk-1 pcmk-4 ]
       * Stopped: [ pcmk-3 ]
 
 Transition Summary:
   * Stop       Fencing              (        pcmk-1 )  due to node availability
   * Stop       r192.168.122.105     (        pcmk-2 )  due to node availability
   * Stop       r192.168.122.106     (        pcmk-2 )  due to node availability
   * Stop       r192.168.122.107     (        pcmk-2 )  due to node availability
   * Stop       rsc_pcmk-1           (        pcmk-1 )  due to node availability
   * Stop       rsc_pcmk-2           (        pcmk-2 )  due to node availability
   * Stop       rsc_pcmk-4           (        pcmk-4 )  due to node availability
   * Stop       lsb-dummy            (        pcmk-2 )  due to node availability
   * Stop       migrator             (        pcmk-1 )  due to node availability
   * Stop       ping-1:0             (        pcmk-1 )  due to node availability
   * Stop       ping-1:1             (        pcmk-2 )  due to node availability
   * Stop       ping-1:2             (        pcmk-4 )  due to node availability
-  * Stop       stateful-1:0         ( Unpromoted pcmk-1 )  due to node availability
-  * Stop       stateful-1:1         (   Promoted pcmk-2 )  due to node availability
-  * Stop       stateful-1:2         ( Unpromoted pcmk-4 )  due to node availability
+  * Stop       stateful-1:0         (  Unpromoted pcmk-1 )  due to node availability
+  * Stop       stateful-1:1         ( Promoted pcmk-2 )  due to node availability
+  * Stop       stateful-1:2         (  Unpromoted pcmk-4 )  due to node availability
 
 Executing Cluster Transition:
   * Resource action: Fencing         stop on pcmk-1
   * Resource action: rsc_pcmk-1      stop on pcmk-1
   * Resource action: rsc_pcmk-2      stop on pcmk-2
   * Resource action: rsc_pcmk-4      stop on pcmk-4
   * Resource action: lsb-dummy       stop on pcmk-2
   * Resource action: migrator        stop on pcmk-1
   * Resource action: migrator        stop on pcmk-3
   * Pseudo action:   Connectivity_stop_0
   * Cluster action:  do_shutdown on pcmk-3
   * Pseudo action:   group-1_stop_0
   * Resource action: r192.168.122.107 stop on pcmk-2
   * Resource action: ping-1:0        stop on pcmk-1
   * Resource action: ping-1:1        stop on pcmk-2
   * Resource action: ping-1:3        stop on pcmk-4
   * Pseudo action:   Connectivity_stopped_0
   * Resource action: r192.168.122.106 stop on pcmk-2
   * Resource action: r192.168.122.105 stop on pcmk-2
   * Pseudo action:   group-1_stopped_0
   * Pseudo action:   master-1_demote_0
   * Resource action: stateful-1:0    demote on pcmk-2
   * Pseudo action:   master-1_demoted_0
   * Pseudo action:   master-1_stop_0
   * Resource action: stateful-1:2    stop on pcmk-1
   * Resource action: stateful-1:0    stop on pcmk-2
   * Resource action: stateful-1:3    stop on pcmk-4
   * Pseudo action:   master-1_stopped_0
   * Cluster action:  do_shutdown on pcmk-4
   * Cluster action:  do_shutdown on pcmk-2
   * Cluster action:  do_shutdown on pcmk-1
 
 Revised Cluster Status:
   * Node List:
     * Online: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ]
 
   * Full List of Resources:
     * Fencing	(stonith:fence_xvm):	 Stopped
     * Resource Group: group-1:
       * r192.168.122.105	(ocf:heartbeat:IPaddr):	 Stopped
       * r192.168.122.106	(ocf:heartbeat:IPaddr):	 Stopped
       * r192.168.122.107	(ocf:heartbeat:IPaddr):	 Stopped
     * rsc_pcmk-1	(ocf:heartbeat:IPaddr):	 Stopped
     * rsc_pcmk-2	(ocf:heartbeat:IPaddr):	 Stopped
     * rsc_pcmk-3	(ocf:heartbeat:IPaddr):	 Stopped
     * rsc_pcmk-4	(ocf:heartbeat:IPaddr):	 Stopped
     * lsb-dummy	(lsb:/usr/share/pacemaker/tests/cts/LSBDummy):	 Stopped
     * migrator	(ocf:pacemaker:Dummy):	 Stopped
     * Clone Set: Connectivity [ping-1]:
       * Stopped: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ]
     * Clone Set: master-1 [stateful-1] (promotable):
       * Stopped: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ]
diff --git a/cts/scheduler/summary/no-promote-on-unrunnable-guest.summary b/cts/scheduler/summary/no-promote-on-unrunnable-guest.summary
index 532f731235..8eb68a4cb9 100644
--- a/cts/scheduler/summary/no-promote-on-unrunnable-guest.summary
+++ b/cts/scheduler/summary/no-promote-on-unrunnable-guest.summary
@@ -1,103 +1,103 @@
 Using the original execution date of: 2020-05-14 10:49:31Z
 Current cluster status:
   * Node List:
     * Online: [ controller-0 controller-1 controller-2 ]
     * GuestOnline: [ galera-bundle-0@controller-0 galera-bundle-1@controller-1 galera-bundle-2@controller-2 ovn-dbs-bundle-0@controller-0 ovn-dbs-bundle-1@controller-1 ovn-dbs-bundle-2@controller-2 rabbitmq-bundle-0@controller-0 rabbitmq-bundle-1@controller-1 rabbitmq-bundle-2@controller-2 redis-bundle-0@controller-0 redis-bundle-1@controller-1 redis-bundle-2@controller-2 ]
 
   * Full List of Resources:
     * Container bundle set: galera-bundle [cluster.common.tag/rhosp16-openstack-mariadb:pcmklatest]:
       * galera-bundle-0	(ocf:heartbeat:galera):	 Promoted controller-0
       * galera-bundle-1	(ocf:heartbeat:galera):	 Promoted controller-1
       * galera-bundle-2	(ocf:heartbeat:galera):	 Promoted controller-2
     * Container bundle set: rabbitmq-bundle [cluster.common.tag/rhosp16-openstack-rabbitmq:pcmklatest]:
       * rabbitmq-bundle-0	(ocf:heartbeat:rabbitmq-cluster):	 Started controller-0
       * rabbitmq-bundle-1	(ocf:heartbeat:rabbitmq-cluster):	 Started controller-1
       * rabbitmq-bundle-2	(ocf:heartbeat:rabbitmq-cluster):	 Started controller-2
     * Container bundle set: redis-bundle [cluster.common.tag/rhosp16-openstack-redis:pcmklatest]:
       * redis-bundle-0	(ocf:heartbeat:redis):	 Promoted controller-0
       * redis-bundle-1	(ocf:heartbeat:redis):	 Unpromoted controller-1
       * redis-bundle-2	(ocf:heartbeat:redis):	 Unpromoted controller-2
     * Container bundle set: ovn-dbs-bundle [cluster.common.tag/rhosp16-openstack-ovn-northd:pcmklatest]:
       * ovn-dbs-bundle-0	(ocf:ovn:ovndb-servers):	 Unpromoted controller-0
       * ovn-dbs-bundle-1	(ocf:ovn:ovndb-servers):	 Unpromoted controller-1
       * ovn-dbs-bundle-2	(ocf:ovn:ovndb-servers):	 Unpromoted controller-2
     * stonith-fence_ipmilan-5254005e097a	(stonith:fence_ipmilan):	 Started controller-0
     * stonith-fence_ipmilan-525400afe30e	(stonith:fence_ipmilan):	 Started controller-2
     * stonith-fence_ipmilan-525400985679	(stonith:fence_ipmilan):	 Started controller-1
     * Container bundle: openstack-cinder-volume [cluster.common.tag/rhosp16-openstack-cinder-volume:pcmklatest]:
       * openstack-cinder-volume-podman-0	(ocf:heartbeat:podman):	 Started controller-0
 
 Transition Summary:
   * Stop       ovn-dbs-bundle-podman-0     (                     controller-0 )  due to node availability
   * Stop       ovn-dbs-bundle-0            (                     controller-0 )  due to unrunnable ovn-dbs-bundle-podman-0 start
-  * Stop       ovndb_servers:0             (             Unpromoted ovn-dbs-bundle-0 )  due to unrunnable ovn-dbs-bundle-podman-0 start
+  * Stop       ovndb_servers:0             (           Unpromoted ovn-dbs-bundle-0 )  due to unrunnable ovn-dbs-bundle-podman-0 start
   * Promote    ovndb_servers:1             ( Unpromoted -> Promoted ovn-dbs-bundle-1 )
 
 Executing Cluster Transition:
   * Resource action: ovndb_servers   cancel=30000 on ovn-dbs-bundle-1
   * Pseudo action:   ovn-dbs-bundle-master_pre_notify_stop_0
   * Pseudo action:   ovn-dbs-bundle_stop_0
   * Resource action: ovndb_servers   notify on ovn-dbs-bundle-0
   * Resource action: ovndb_servers   notify on ovn-dbs-bundle-1
   * Resource action: ovndb_servers   notify on ovn-dbs-bundle-2
   * Pseudo action:   ovn-dbs-bundle-master_confirmed-pre_notify_stop_0
   * Pseudo action:   ovn-dbs-bundle-master_stop_0
   * Resource action: ovndb_servers   stop on ovn-dbs-bundle-0
   * Pseudo action:   ovn-dbs-bundle-master_stopped_0
   * Resource action: ovn-dbs-bundle-0 stop on controller-0
   * Pseudo action:   ovn-dbs-bundle-master_post_notify_stopped_0
   * Resource action: ovn-dbs-bundle-podman-0 stop on controller-0
   * Resource action: ovndb_servers   notify on ovn-dbs-bundle-1
   * Resource action: ovndb_servers   notify on ovn-dbs-bundle-2
   * Pseudo action:   ovn-dbs-bundle-master_confirmed-post_notify_stopped_0
   * Pseudo action:   ovn-dbs-bundle-master_pre_notify_start_0
   * Pseudo action:   ovn-dbs-bundle_stopped_0
   * Pseudo action:   ovn-dbs-bundle-master_confirmed-pre_notify_start_0
   * Pseudo action:   ovn-dbs-bundle-master_start_0
   * Pseudo action:   ovn-dbs-bundle-master_running_0
   * Pseudo action:   ovn-dbs-bundle-master_post_notify_running_0
   * Pseudo action:   ovn-dbs-bundle-master_confirmed-post_notify_running_0
   * Pseudo action:   ovn-dbs-bundle_running_0
   * Pseudo action:   ovn-dbs-bundle-master_pre_notify_promote_0
   * Pseudo action:   ovn-dbs-bundle_promote_0
   * Resource action: ovndb_servers   notify on ovn-dbs-bundle-1
   * Resource action: ovndb_servers   notify on ovn-dbs-bundle-2
   * Pseudo action:   ovn-dbs-bundle-master_confirmed-pre_notify_promote_0
   * Pseudo action:   ovn-dbs-bundle-master_promote_0
   * Resource action: ovndb_servers   promote on ovn-dbs-bundle-1
   * Pseudo action:   ovn-dbs-bundle-master_promoted_0
   * Pseudo action:   ovn-dbs-bundle-master_post_notify_promoted_0
   * Resource action: ovndb_servers   notify on ovn-dbs-bundle-1
   * Resource action: ovndb_servers   notify on ovn-dbs-bundle-2
   * Pseudo action:   ovn-dbs-bundle-master_confirmed-post_notify_promoted_0
   * Pseudo action:   ovn-dbs-bundle_promoted_0
   * Resource action: ovndb_servers   monitor=10000 on ovn-dbs-bundle-1
 Using the original execution date of: 2020-05-14 10:49:31Z
 
 Revised Cluster Status:
   * Node List:
     * Online: [ controller-0 controller-1 controller-2 ]
     * GuestOnline: [ galera-bundle-0@controller-0 galera-bundle-1@controller-1 galera-bundle-2@controller-2 ovn-dbs-bundle-1@controller-1 ovn-dbs-bundle-2@controller-2 rabbitmq-bundle-0@controller-0 rabbitmq-bundle-1@controller-1 rabbitmq-bundle-2@controller-2 redis-bundle-0@controller-0 redis-bundle-1@controller-1 redis-bundle-2@controller-2 ]
 
   * Full List of Resources:
     * Container bundle set: galera-bundle [cluster.common.tag/rhosp16-openstack-mariadb:pcmklatest]:
       * galera-bundle-0	(ocf:heartbeat:galera):	 Promoted controller-0
       * galera-bundle-1	(ocf:heartbeat:galera):	 Promoted controller-1
       * galera-bundle-2	(ocf:heartbeat:galera):	 Promoted controller-2
     * Container bundle set: rabbitmq-bundle [cluster.common.tag/rhosp16-openstack-rabbitmq:pcmklatest]:
       * rabbitmq-bundle-0	(ocf:heartbeat:rabbitmq-cluster):	 Started controller-0
       * rabbitmq-bundle-1	(ocf:heartbeat:rabbitmq-cluster):	 Started controller-1
       * rabbitmq-bundle-2	(ocf:heartbeat:rabbitmq-cluster):	 Started controller-2
     * Container bundle set: redis-bundle [cluster.common.tag/rhosp16-openstack-redis:pcmklatest]:
       * redis-bundle-0	(ocf:heartbeat:redis):	 Promoted controller-0
       * redis-bundle-1	(ocf:heartbeat:redis):	 Unpromoted controller-1
       * redis-bundle-2	(ocf:heartbeat:redis):	 Unpromoted controller-2
     * Container bundle set: ovn-dbs-bundle [cluster.common.tag/rhosp16-openstack-ovn-northd:pcmklatest]:
       * ovn-dbs-bundle-0	(ocf:ovn:ovndb-servers):	 Stopped
       * ovn-dbs-bundle-1	(ocf:ovn:ovndb-servers):	 Promoted controller-1
       * ovn-dbs-bundle-2	(ocf:ovn:ovndb-servers):	 Unpromoted controller-2
     * stonith-fence_ipmilan-5254005e097a	(stonith:fence_ipmilan):	 Started controller-0
     * stonith-fence_ipmilan-525400afe30e	(stonith:fence_ipmilan):	 Started controller-2
     * stonith-fence_ipmilan-525400985679	(stonith:fence_ipmilan):	 Started controller-1
     * Container bundle: openstack-cinder-volume [cluster.common.tag/rhosp16-openstack-cinder-volume:pcmklatest]:
       * openstack-cinder-volume-podman-0	(ocf:heartbeat:podman):	 Started controller-0
diff --git a/cts/scheduler/summary/novell-239082.summary b/cts/scheduler/summary/novell-239082.summary
index 431b6ddc63..01af7656e9 100644
--- a/cts/scheduler/summary/novell-239082.summary
+++ b/cts/scheduler/summary/novell-239082.summary
@@ -1,59 +1,59 @@
 Current cluster status:
   * Node List:
     * Online: [ xen-1 xen-2 ]
 
   * Full List of Resources:
     * fs_1	(ocf:heartbeat:Filesystem):	 Started xen-1
     * Clone Set: ms-drbd0 [drbd0] (promotable):
       * Promoted: [ xen-1 ]
       * Unpromoted: [ xen-2 ]
 
 Transition Summary:
   * Move       fs_1        (        xen-1 -> xen-2 )
   * Promote    drbd0:0     ( Unpromoted -> Promoted xen-2 )
-  * Stop       drbd0:1     (               Promoted xen-1 )  due to node availability
+  * Stop       drbd0:1     (          Promoted xen-1 )  due to node availability
 
 Executing Cluster Transition:
   * Resource action: fs_1            stop on xen-1
   * Pseudo action:   ms-drbd0_pre_notify_demote_0
   * Resource action: drbd0:0         notify on xen-2
   * Resource action: drbd0:1         notify on xen-1
   * Pseudo action:   ms-drbd0_confirmed-pre_notify_demote_0
   * Pseudo action:   ms-drbd0_demote_0
   * Resource action: drbd0:1         demote on xen-1
   * Pseudo action:   ms-drbd0_demoted_0
   * Pseudo action:   ms-drbd0_post_notify_demoted_0
   * Resource action: drbd0:0         notify on xen-2
   * Resource action: drbd0:1         notify on xen-1
   * Pseudo action:   ms-drbd0_confirmed-post_notify_demoted_0
   * Pseudo action:   ms-drbd0_pre_notify_stop_0
   * Resource action: drbd0:0         notify on xen-2
   * Resource action: drbd0:1         notify on xen-1
   * Pseudo action:   ms-drbd0_confirmed-pre_notify_stop_0
   * Pseudo action:   ms-drbd0_stop_0
   * Resource action: drbd0:1         stop on xen-1
   * Pseudo action:   ms-drbd0_stopped_0
   * Cluster action:  do_shutdown on xen-1
   * Pseudo action:   ms-drbd0_post_notify_stopped_0
   * Resource action: drbd0:0         notify on xen-2
   * Pseudo action:   ms-drbd0_confirmed-post_notify_stopped_0
   * Pseudo action:   ms-drbd0_pre_notify_promote_0
   * Resource action: drbd0:0         notify on xen-2
   * Pseudo action:   ms-drbd0_confirmed-pre_notify_promote_0
   * Pseudo action:   ms-drbd0_promote_0
   * Resource action: drbd0:0         promote on xen-2
   * Pseudo action:   ms-drbd0_promoted_0
   * Pseudo action:   ms-drbd0_post_notify_promoted_0
   * Resource action: drbd0:0         notify on xen-2
   * Pseudo action:   ms-drbd0_confirmed-post_notify_promoted_0
   * Resource action: fs_1            start on xen-2
 
 Revised Cluster Status:
   * Node List:
     * Online: [ xen-1 xen-2 ]
 
   * Full List of Resources:
     * fs_1	(ocf:heartbeat:Filesystem):	 Started xen-2
     * Clone Set: ms-drbd0 [drbd0] (promotable):
       * Promoted: [ xen-2 ]
       * Stopped: [ xen-1 ]
diff --git a/cts/scheduler/summary/on_fail_demote4.summary b/cts/scheduler/summary/on_fail_demote4.summary
index 781f5488bb..b7b1388e58 100644
--- a/cts/scheduler/summary/on_fail_demote4.summary
+++ b/cts/scheduler/summary/on_fail_demote4.summary
@@ -1,189 +1,189 @@
 Using the original execution date of: 2020-06-16 19:23:21Z
 Current cluster status:
   * Node List:
     * RemoteNode remote-rhel7-2: UNCLEAN (offline)
     * Node rhel7-4: UNCLEAN (offline)
     * Online: [ rhel7-1 rhel7-3 rhel7-5 ]
     * GuestOnline: [ lxc1@rhel7-3 stateful-bundle-1@rhel7-1 ]
 
   * Full List of Resources:
     * Fencing	(stonith:fence_xvm):	 Started rhel7-4 (UNCLEAN)
     * Clone Set: rsc1-clone [rsc1] (promotable):
       * rsc1	(ocf:pacemaker:Stateful):	 Promoted rhel7-4 (UNCLEAN)
       * rsc1	(ocf:pacemaker:Stateful):	 Unpromoted remote-rhel7-2 (UNCLEAN)
       * Unpromoted: [ lxc1 rhel7-1 rhel7-3 rhel7-5 ]
     * Clone Set: rsc2-master [rsc2] (promotable):
       * rsc2	(ocf:pacemaker:Stateful):	 Unpromoted rhel7-4 (UNCLEAN)
       * rsc2	(ocf:pacemaker:Stateful):	 Promoted remote-rhel7-2 (UNCLEAN)
       * Unpromoted: [ lxc1 rhel7-1 rhel7-3 rhel7-5 ]
     * remote-rhel7-2	(ocf:pacemaker:remote):	 FAILED rhel7-1
     * container1	(ocf:heartbeat:VirtualDomain):	 Started rhel7-3
     * container2	(ocf:heartbeat:VirtualDomain):	 FAILED rhel7-3
     * Clone Set: lxc-ms-master [lxc-ms] (promotable):
       * Unpromoted: [ lxc1 ]
       * Stopped: [ remote-rhel7-2 rhel7-1 rhel7-3 rhel7-4 rhel7-5 ]
     * Container bundle set: stateful-bundle [pcmktest:http]:
       * stateful-bundle-0 (192.168.122.131)	(ocf:pacemaker:Stateful):	 FAILED Promoted rhel7-5
       * stateful-bundle-1 (192.168.122.132)	(ocf:pacemaker:Stateful):	 Unpromoted rhel7-1
       * stateful-bundle-2 (192.168.122.133)	(ocf:pacemaker:Stateful):	 FAILED rhel7-4 (UNCLEAN)
 
 Transition Summary:
   * Fence (reboot) stateful-bundle-2 (resource: stateful-bundle-docker-2) 'guest is unclean'
   * Fence (reboot) stateful-bundle-0 (resource: stateful-bundle-docker-0) 'guest is unclean'
   * Fence (reboot) lxc2 (resource: container2) 'guest is unclean'
   * Fence (reboot) remote-rhel7-2 'remote connection is unrecoverable'
   * Fence (reboot) rhel7-4 'peer is no longer part of the cluster'
   * Move       Fencing                                (       rhel7-4 -> rhel7-5 )
-  * Stop       rsc1:0                                 (               Promoted rhel7-4 )  due to node availability
-  * Promote    rsc1:1                                 ( Unpromoted -> Promoted rhel7-3 )
-  * Stop       rsc1:4                                 (      Unpromoted remote-rhel7-2 )  due to node availability
-  * Recover    rsc1:5                                 (                Unpromoted lxc2 )
-  * Stop       rsc2:0                                 (             Unpromoted rhel7-4 )  due to node availability
-  * Promote    rsc2:1                                 ( Unpromoted -> Promoted rhel7-3 )
-  * Stop       rsc2:4                                 (        Promoted remote-rhel7-2 )  due to node availability
-  * Recover    rsc2:5                                 (                Unpromoted lxc2 )
+  * Stop       rsc1:0                                 (           Promoted rhel7-4 )  due to node availability
+  * Promote    rsc1:1                                 (  Unpromoted -> Promoted rhel7-3 )
+  * Stop       rsc1:4                                 (     Unpromoted remote-rhel7-2 )  due to node availability
+  * Recover    rsc1:5                                 (               Unpromoted lxc2 )
+  * Stop       rsc2:0                                 (            Unpromoted rhel7-4 )  due to node availability
+  * Promote    rsc2:1                                 (  Unpromoted -> Promoted rhel7-3 )
+  * Stop       rsc2:4                                 (    Promoted remote-rhel7-2 )  due to node availability
+  * Recover    rsc2:5                                 (               Unpromoted lxc2 )
   * Recover    remote-rhel7-2                         (                  rhel7-1 )
   * Recover    container2                             (                  rhel7-3 )
-  * Recover    lxc-ms:0                               (                  Promoted lxc2 )
+  * Recover    lxc-ms:0                               (              Promoted lxc2 )
   * Recover    stateful-bundle-docker-0               (                  rhel7-5 )
   * Restart    stateful-bundle-0                      (                  rhel7-5 )  due to required stateful-bundle-docker-0 start
-  * Recover    bundled:0                              (     Promoted stateful-bundle-0 )
+  * Recover    bundled:0                              ( Promoted stateful-bundle-0 )
   * Move       stateful-bundle-ip-192.168.122.133     (       rhel7-4 -> rhel7-3 )
   * Recover    stateful-bundle-docker-2               (       rhel7-4 -> rhel7-3 )
   * Move       stateful-bundle-2                      (       rhel7-4 -> rhel7-3 )
-  * Recover    bundled:2                              (   Unpromoted stateful-bundle-2 )
+  * Recover    bundled:2                              (  Unpromoted stateful-bundle-2 )
   * Restart    lxc2                                   (                  rhel7-3 )  due to required container2 start
 
 Executing Cluster Transition:
   * Pseudo action:   Fencing_stop_0
   * Resource action: rsc1            cancel=11000 on rhel7-3
   * Pseudo action:   rsc1-clone_demote_0
   * Resource action: rsc2            cancel=11000 on rhel7-3
   * Pseudo action:   rsc2-master_demote_0
   * Pseudo action:   lxc-ms-master_demote_0
   * Resource action: stateful-bundle-0 stop on rhel7-5
   * Pseudo action:   stateful-bundle-2_stop_0
   * Resource action: lxc2            stop on rhel7-3
   * Pseudo action:   stateful-bundle_demote_0
   * Fencing remote-rhel7-2 (reboot)
   * Fencing rhel7-4 (reboot)
   * Pseudo action:   rsc1_demote_0
   * Pseudo action:   rsc1-clone_demoted_0
   * Pseudo action:   rsc2_demote_0
   * Pseudo action:   rsc2-master_demoted_0
   * Resource action: container2      stop on rhel7-3
   * Pseudo action:   stateful-bundle-master_demote_0
   * Pseudo action:   stonith-stateful-bundle-2-reboot on stateful-bundle-2
   * Pseudo action:   stonith-lxc2-reboot on lxc2
   * Resource action: Fencing         start on rhel7-5
   * Pseudo action:   rsc1-clone_stop_0
   * Pseudo action:   rsc2-master_stop_0
   * Pseudo action:   lxc-ms_demote_0
   * Pseudo action:   lxc-ms-master_demoted_0
   * Pseudo action:   lxc-ms-master_stop_0
   * Pseudo action:   bundled_demote_0
   * Pseudo action:   stateful-bundle-master_demoted_0
   * Pseudo action:   stateful-bundle_demoted_0
   * Pseudo action:   stateful-bundle_stop_0
   * Resource action: Fencing         monitor=120000 on rhel7-5
   * Pseudo action:   rsc1_stop_0
   * Pseudo action:   rsc1_stop_0
   * Pseudo action:   rsc1_stop_0
   * Pseudo action:   rsc1-clone_stopped_0
   * Pseudo action:   rsc1-clone_start_0
   * Pseudo action:   rsc2_stop_0
   * Pseudo action:   rsc2_stop_0
   * Pseudo action:   rsc2_stop_0
   * Pseudo action:   rsc2-master_stopped_0
   * Pseudo action:   rsc2-master_start_0
   * Resource action: remote-rhel7-2  stop on rhel7-1
   * Pseudo action:   lxc-ms_stop_0
   * Pseudo action:   lxc-ms-master_stopped_0
   * Pseudo action:   lxc-ms-master_start_0
   * Resource action: stateful-bundle-docker-0 stop on rhel7-5
   * Pseudo action:   stateful-bundle-docker-2_stop_0
   * Pseudo action:   stonith-stateful-bundle-0-reboot on stateful-bundle-0
   * Resource action: remote-rhel7-2  start on rhel7-1
   * Resource action: remote-rhel7-2  monitor=60000 on rhel7-1
   * Resource action: container2      start on rhel7-3
   * Resource action: container2      monitor=20000 on rhel7-3
   * Pseudo action:   stateful-bundle-master_stop_0
   * Pseudo action:   stateful-bundle-ip-192.168.122.133_stop_0
   * Resource action: lxc2            start on rhel7-3
   * Resource action: lxc2            monitor=30000 on rhel7-3
   * Resource action: rsc1            start on lxc2
   * Pseudo action:   rsc1-clone_running_0
   * Resource action: rsc2            start on lxc2
   * Pseudo action:   rsc2-master_running_0
   * Resource action: lxc-ms          start on lxc2
   * Pseudo action:   lxc-ms-master_running_0
   * Pseudo action:   bundled_stop_0
   * Resource action: stateful-bundle-ip-192.168.122.133 start on rhel7-3
   * Resource action: rsc1            monitor=11000 on lxc2
   * Pseudo action:   rsc1-clone_promote_0
   * Resource action: rsc2            monitor=11000 on lxc2
   * Pseudo action:   rsc2-master_promote_0
   * Pseudo action:   lxc-ms-master_promote_0
   * Pseudo action:   bundled_stop_0
   * Pseudo action:   stateful-bundle-master_stopped_0
   * Resource action: stateful-bundle-ip-192.168.122.133 monitor=60000 on rhel7-3
   * Pseudo action:   stateful-bundle_stopped_0
   * Pseudo action:   stateful-bundle_start_0
   * Resource action: rsc1            promote on rhel7-3
   * Pseudo action:   rsc1-clone_promoted_0
   * Resource action: rsc2            promote on rhel7-3
   * Pseudo action:   rsc2-master_promoted_0
   * Resource action: lxc-ms          promote on lxc2
   * Pseudo action:   lxc-ms-master_promoted_0
   * Pseudo action:   stateful-bundle-master_start_0
   * Resource action: stateful-bundle-docker-0 start on rhel7-5
   * Resource action: stateful-bundle-docker-0 monitor=60000 on rhel7-5
   * Resource action: stateful-bundle-0 start on rhel7-5
   * Resource action: stateful-bundle-0 monitor=30000 on rhel7-5
   * Resource action: stateful-bundle-docker-2 start on rhel7-3
   * Resource action: stateful-bundle-2 start on rhel7-3
   * Resource action: rsc1            monitor=10000 on rhel7-3
   * Resource action: rsc2            monitor=10000 on rhel7-3
   * Resource action: lxc-ms          monitor=10000 on lxc2
   * Resource action: bundled         start on stateful-bundle-0
   * Resource action: bundled         start on stateful-bundle-2
   * Pseudo action:   stateful-bundle-master_running_0
   * Resource action: stateful-bundle-docker-2 monitor=60000 on rhel7-3
   * Resource action: stateful-bundle-2 monitor=30000 on rhel7-3
   * Pseudo action:   stateful-bundle_running_0
   * Resource action: bundled         monitor=11000 on stateful-bundle-2
   * Pseudo action:   stateful-bundle_promote_0
   * Pseudo action:   stateful-bundle-master_promote_0
   * Resource action: bundled         promote on stateful-bundle-0
   * Pseudo action:   stateful-bundle-master_promoted_0
   * Pseudo action:   stateful-bundle_promoted_0
   * Resource action: bundled         monitor=10000 on stateful-bundle-0
 Using the original execution date of: 2020-06-16 19:23:21Z
 
 Revised Cluster Status:
   * Node List:
     * Online: [ rhel7-1 rhel7-3 rhel7-5 ]
     * OFFLINE: [ rhel7-4 ]
     * RemoteOnline: [ remote-rhel7-2 ]
     * GuestOnline: [ lxc1@rhel7-3 lxc2@rhel7-3 stateful-bundle-0@rhel7-5 stateful-bundle-1@rhel7-1 stateful-bundle-2@rhel7-3 ]
 
   * Full List of Resources:
     * Fencing	(stonith:fence_xvm):	 Started rhel7-5
     * Clone Set: rsc1-clone [rsc1] (promotable):
       * Promoted: [ rhel7-3 ]
       * Unpromoted: [ lxc1 lxc2 rhel7-1 rhel7-5 ]
       * Stopped: [ remote-rhel7-2 rhel7-4 ]
     * Clone Set: rsc2-master [rsc2] (promotable):
       * Promoted: [ rhel7-3 ]
       * Unpromoted: [ lxc1 lxc2 rhel7-1 rhel7-5 ]
       * Stopped: [ remote-rhel7-2 rhel7-4 ]
     * remote-rhel7-2	(ocf:pacemaker:remote):	 Started rhel7-1
     * container1	(ocf:heartbeat:VirtualDomain):	 Started rhel7-3
     * container2	(ocf:heartbeat:VirtualDomain):	 Started rhel7-3
     * Clone Set: lxc-ms-master [lxc-ms] (promotable):
       * Promoted: [ lxc2 ]
       * Unpromoted: [ lxc1 ]
     * Container bundle set: stateful-bundle [pcmktest:http]:
       * stateful-bundle-0 (192.168.122.131)	(ocf:pacemaker:Stateful):	 Promoted rhel7-5
       * stateful-bundle-1 (192.168.122.132)	(ocf:pacemaker:Stateful):	 Unpromoted rhel7-1
       * stateful-bundle-2 (192.168.122.133)	(ocf:pacemaker:Stateful):	 Unpromoted rhel7-3
diff --git a/cts/scheduler/summary/probe-2.summary b/cts/scheduler/summary/probe-2.summary
index f2c60821ab..3523891d30 100644
--- a/cts/scheduler/summary/probe-2.summary
+++ b/cts/scheduler/summary/probe-2.summary
@@ -1,163 +1,163 @@
 Current cluster status:
   * Node List:
     * Node wc02: standby (with active resources)
     * Online: [ wc01 ]
 
   * Full List of Resources:
     * Resource Group: group_www_data:
       * fs_www_data	(ocf:heartbeat:Filesystem):	 Started wc01
       * nfs-kernel-server	(lsb:nfs-kernel-server):	 Started wc01
       * intip_nfs	(ocf:heartbeat:IPaddr2):	 Started wc01
     * Clone Set: ms_drbd_mysql [drbd_mysql] (promotable):
       * Promoted: [ wc02 ]
       * Unpromoted: [ wc01 ]
     * Resource Group: group_mysql:
       * fs_mysql	(ocf:heartbeat:Filesystem):	 Started wc02
       * intip_sql	(ocf:heartbeat:IPaddr2):	 Started wc02
       * mysql-server	(ocf:heartbeat:mysql):	 Started wc02
     * Clone Set: ms_drbd_www [drbd_www] (promotable):
       * Promoted: [ wc01 ]
       * Unpromoted: [ wc02 ]
     * Clone Set: clone_nfs-common [group_nfs-common]:
       * Started: [ wc01 wc02 ]
     * Clone Set: clone_mysql-proxy [group_mysql-proxy]:
       * Started: [ wc01 wc02 ]
     * Clone Set: clone_webservice [group_webservice]:
       * Started: [ wc01 wc02 ]
     * Resource Group: group_ftpd:
       * extip_ftp	(ocf:heartbeat:IPaddr2):	 Started wc01
       * pure-ftpd	(ocf:heartbeat:Pure-FTPd):	 Started wc01
     * Clone Set: DoFencing [stonith_rackpdu] (unique):
       * stonith_rackpdu:0	(stonith:external/rackpdu):	 Started wc01
       * stonith_rackpdu:1	(stonith:external/rackpdu):	 Started wc02
 
 Transition Summary:
   * Promote    drbd_mysql:0          ( Unpromoted -> Promoted wc01 )
-  * Stop       drbd_mysql:1          (               Promoted wc02 )  due to node availability
+  * Stop       drbd_mysql:1          (          Promoted wc02 )  due to node availability
   * Move       fs_mysql              (         wc02 -> wc01 )
   * Move       intip_sql             (         wc02 -> wc01 )
   * Move       mysql-server          (         wc02 -> wc01 )
-  * Stop       drbd_www:1            (             Unpromoted wc02 )  due to node availability
+  * Stop       drbd_www:1            (           Unpromoted wc02 )  due to node availability
   * Stop       nfs-common:1          (                 wc02 )  due to node availability
   * Stop       mysql-proxy:1         (                 wc02 )  due to node availability
   * Stop       fs_www:1              (                 wc02 )  due to node availability
   * Stop       apache2:1             (                 wc02 )  due to node availability
   * Restart    stonith_rackpdu:0     (                 wc01 )
   * Stop       stonith_rackpdu:1     (                 wc02 )  due to node availability
 
 Executing Cluster Transition:
   * Resource action: drbd_mysql:0    cancel=10000 on wc01
   * Pseudo action:   ms_drbd_mysql_pre_notify_demote_0
   * Pseudo action:   group_mysql_stop_0
   * Resource action: mysql-server    stop on wc02
   * Pseudo action:   ms_drbd_www_pre_notify_stop_0
   * Pseudo action:   clone_mysql-proxy_stop_0
   * Pseudo action:   clone_webservice_stop_0
   * Pseudo action:   DoFencing_stop_0
   * Resource action: drbd_mysql:0    notify on wc01
   * Resource action: drbd_mysql:1    notify on wc02
   * Pseudo action:   ms_drbd_mysql_confirmed-pre_notify_demote_0
   * Resource action: intip_sql       stop on wc02
   * Resource action: drbd_www:0      notify on wc01
   * Resource action: drbd_www:1      notify on wc02
   * Pseudo action:   ms_drbd_www_confirmed-pre_notify_stop_0
   * Pseudo action:   ms_drbd_www_stop_0
   * Pseudo action:   group_mysql-proxy:1_stop_0
   * Resource action: mysql-proxy:1   stop on wc02
   * Pseudo action:   group_webservice:1_stop_0
   * Resource action: apache2:1       stop on wc02
   * Resource action: stonith_rackpdu:0 stop on wc01
   * Resource action: stonith_rackpdu:1 stop on wc02
   * Pseudo action:   DoFencing_stopped_0
   * Pseudo action:   DoFencing_start_0
   * Resource action: fs_mysql        stop on wc02
   * Resource action: drbd_www:1      stop on wc02
   * Pseudo action:   ms_drbd_www_stopped_0
   * Pseudo action:   group_mysql-proxy:1_stopped_0
   * Pseudo action:   clone_mysql-proxy_stopped_0
   * Resource action: fs_www:1        stop on wc02
   * Resource action: stonith_rackpdu:0 start on wc01
   * Pseudo action:   DoFencing_running_0
   * Pseudo action:   group_mysql_stopped_0
   * Pseudo action:   ms_drbd_www_post_notify_stopped_0
   * Pseudo action:   group_webservice:1_stopped_0
   * Pseudo action:   clone_webservice_stopped_0
   * Resource action: stonith_rackpdu:0 monitor=5000 on wc01
   * Pseudo action:   ms_drbd_mysql_demote_0
   * Resource action: drbd_www:0      notify on wc01
   * Pseudo action:   ms_drbd_www_confirmed-post_notify_stopped_0
   * Pseudo action:   clone_nfs-common_stop_0
   * Resource action: drbd_mysql:1    demote on wc02
   * Pseudo action:   ms_drbd_mysql_demoted_0
   * Pseudo action:   group_nfs-common:1_stop_0
   * Resource action: nfs-common:1    stop on wc02
   * Pseudo action:   ms_drbd_mysql_post_notify_demoted_0
   * Pseudo action:   group_nfs-common:1_stopped_0
   * Pseudo action:   clone_nfs-common_stopped_0
   * Resource action: drbd_mysql:0    notify on wc01
   * Resource action: drbd_mysql:1    notify on wc02
   * Pseudo action:   ms_drbd_mysql_confirmed-post_notify_demoted_0
   * Pseudo action:   ms_drbd_mysql_pre_notify_stop_0
   * Resource action: drbd_mysql:0    notify on wc01
   * Resource action: drbd_mysql:1    notify on wc02
   * Pseudo action:   ms_drbd_mysql_confirmed-pre_notify_stop_0
   * Pseudo action:   ms_drbd_mysql_stop_0
   * Resource action: drbd_mysql:1    stop on wc02
   * Pseudo action:   ms_drbd_mysql_stopped_0
   * Pseudo action:   ms_drbd_mysql_post_notify_stopped_0
   * Resource action: drbd_mysql:0    notify on wc01
   * Pseudo action:   ms_drbd_mysql_confirmed-post_notify_stopped_0
   * Pseudo action:   ms_drbd_mysql_pre_notify_promote_0
   * Resource action: drbd_mysql:0    notify on wc01
   * Pseudo action:   ms_drbd_mysql_confirmed-pre_notify_promote_0
   * Pseudo action:   ms_drbd_mysql_promote_0
   * Resource action: drbd_mysql:0    promote on wc01
   * Pseudo action:   ms_drbd_mysql_promoted_0
   * Pseudo action:   ms_drbd_mysql_post_notify_promoted_0
   * Resource action: drbd_mysql:0    notify on wc01
   * Pseudo action:   ms_drbd_mysql_confirmed-post_notify_promoted_0
   * Pseudo action:   group_mysql_start_0
   * Resource action: fs_mysql        start on wc01
   * Resource action: intip_sql       start on wc01
   * Resource action: mysql-server    start on wc01
   * Resource action: drbd_mysql:0    monitor=5000 on wc01
   * Pseudo action:   group_mysql_running_0
   * Resource action: fs_mysql        monitor=30000 on wc01
   * Resource action: intip_sql       monitor=30000 on wc01
   * Resource action: mysql-server    monitor=30000 on wc01
 
 Revised Cluster Status:
   * Node List:
     * Node wc02: standby
     * Online: [ wc01 ]
 
   * Full List of Resources:
     * Resource Group: group_www_data:
       * fs_www_data	(ocf:heartbeat:Filesystem):	 Started wc01
       * nfs-kernel-server	(lsb:nfs-kernel-server):	 Started wc01
       * intip_nfs	(ocf:heartbeat:IPaddr2):	 Started wc01
     * Clone Set: ms_drbd_mysql [drbd_mysql] (promotable):
       * Promoted: [ wc01 ]
       * Stopped: [ wc02 ]
     * Resource Group: group_mysql:
       * fs_mysql	(ocf:heartbeat:Filesystem):	 Started wc01
       * intip_sql	(ocf:heartbeat:IPaddr2):	 Started wc01
       * mysql-server	(ocf:heartbeat:mysql):	 Started wc01
     * Clone Set: ms_drbd_www [drbd_www] (promotable):
       * Promoted: [ wc01 ]
       * Stopped: [ wc02 ]
     * Clone Set: clone_nfs-common [group_nfs-common]:
       * Started: [ wc01 ]
       * Stopped: [ wc02 ]
     * Clone Set: clone_mysql-proxy [group_mysql-proxy]:
       * Started: [ wc01 ]
       * Stopped: [ wc02 ]
     * Clone Set: clone_webservice [group_webservice]:
       * Started: [ wc01 ]
       * Stopped: [ wc02 ]
     * Resource Group: group_ftpd:
       * extip_ftp	(ocf:heartbeat:IPaddr2):	 Started wc01
       * pure-ftpd	(ocf:heartbeat:Pure-FTPd):	 Started wc01
     * Clone Set: DoFencing [stonith_rackpdu] (unique):
       * stonith_rackpdu:0	(stonith:external/rackpdu):	 Started wc01
       * stonith_rackpdu:1	(stonith:external/rackpdu):	 Stopped
diff --git a/cts/scheduler/summary/promoted-7.summary b/cts/scheduler/summary/promoted-7.summary
index 4fc3a85e9a..0602f95895 100644
--- a/cts/scheduler/summary/promoted-7.summary
+++ b/cts/scheduler/summary/promoted-7.summary
@@ -1,121 +1,121 @@
 Current cluster status:
   * Node List:
     * Node c001n01: UNCLEAN (offline)
     * Online: [ c001n02 c001n03 c001n08 ]
 
   * Full List of Resources:
     * DcIPaddr	(ocf:heartbeat:IPaddr):	 Started c001n01 (UNCLEAN)
     * Resource Group: group-1:
       * ocf_192.168.100.181	(ocf:heartbeat:IPaddr):	 Started c001n03
       * heartbeat_192.168.100.182	(ocf:heartbeat:IPaddr):	 Started c001n03
       * ocf_192.168.100.183	(ocf:heartbeat:IPaddr):	 Started c001n03
     * lsb_dummy	(lsb:/usr/lib/heartbeat/cts/LSBDummy):	 Started c001n02
     * rsc_c001n01	(ocf:heartbeat:IPaddr):	 Started c001n01 (UNCLEAN)
     * rsc_c001n08	(ocf:heartbeat:IPaddr):	 Started c001n08
     * rsc_c001n02	(ocf:heartbeat:IPaddr):	 Started c001n02
     * rsc_c001n03	(ocf:heartbeat:IPaddr):	 Started c001n03
     * Clone Set: DoFencing [child_DoFencing] (unique):
       * child_DoFencing:0	(stonith:ssh):	 Started c001n01 (UNCLEAN)
       * child_DoFencing:1	(stonith:ssh):	 Started c001n03
       * child_DoFencing:2	(stonith:ssh):	 Started c001n02
       * child_DoFencing:3	(stonith:ssh):	 Started c001n08
     * Clone Set: master_rsc_1 [ocf_msdummy] (promotable) (unique):
       * ocf_msdummy:0	(ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy):	 Promoted c001n01 (UNCLEAN)
       * ocf_msdummy:1	(ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy):	 Unpromoted c001n03
       * ocf_msdummy:2	(ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy):	 Unpromoted c001n02
       * ocf_msdummy:3	(ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy):	 Unpromoted c001n08
       * ocf_msdummy:4	(ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy):	 Unpromoted c001n01 (UNCLEAN)
       * ocf_msdummy:5	(ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy):	 Unpromoted c001n03
       * ocf_msdummy:6	(ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy):	 Unpromoted c001n02
       * ocf_msdummy:7	(ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy):	 Unpromoted c001n08
 
 Transition Summary:
   * Fence (reboot) c001n01 'peer is no longer part of the cluster'
   * Move       DcIPaddr                      ( c001n01 -> c001n03 )
   * Move       ocf_192.168.100.181           ( c001n03 -> c001n02 )
   * Move       heartbeat_192.168.100.182     ( c001n03 -> c001n02 )
   * Move       ocf_192.168.100.183           ( c001n03 -> c001n02 )
   * Move       lsb_dummy                     ( c001n02 -> c001n08 )
   * Move       rsc_c001n01                   ( c001n01 -> c001n03 )
   * Stop       child_DoFencing:0             (            c001n01 )  due to node availability
-  * Stop       ocf_msdummy:0                 (   Promoted c001n01 )  due to node availability
-  * Stop       ocf_msdummy:4                 ( Unpromoted c001n01 )  due to node availability
+  * Stop       ocf_msdummy:0                 (     Promoted c001n01 )  due to node availability
+  * Stop       ocf_msdummy:4                 (      Unpromoted c001n01 )  due to node availability
 
 Executing Cluster Transition:
   * Pseudo action:   group-1_stop_0
   * Resource action: ocf_192.168.100.183 stop on c001n03
   * Resource action: lsb_dummy       stop on c001n02
   * Resource action: child_DoFencing:2 monitor on c001n08
   * Resource action: child_DoFencing:2 monitor on c001n03
   * Resource action: child_DoFencing:3 monitor on c001n03
   * Resource action: child_DoFencing:3 monitor on c001n02
   * Pseudo action:   DoFencing_stop_0
   * Resource action: ocf_msdummy:4   monitor on c001n08
   * Resource action: ocf_msdummy:4   monitor on c001n03
   * Resource action: ocf_msdummy:4   monitor on c001n02
   * Resource action: ocf_msdummy:5   monitor on c001n08
   * Resource action: ocf_msdummy:5   monitor on c001n02
   * Resource action: ocf_msdummy:6   monitor on c001n08
   * Resource action: ocf_msdummy:6   monitor on c001n03
   * Resource action: ocf_msdummy:7   monitor on c001n03
   * Resource action: ocf_msdummy:7   monitor on c001n02
   * Pseudo action:   master_rsc_1_demote_0
   * Fencing c001n01 (reboot)
   * Pseudo action:   DcIPaddr_stop_0
   * Resource action: heartbeat_192.168.100.182 stop on c001n03
   * Resource action: lsb_dummy       start on c001n08
   * Pseudo action:   rsc_c001n01_stop_0
   * Pseudo action:   child_DoFencing:0_stop_0
   * Pseudo action:   DoFencing_stopped_0
   * Pseudo action:   ocf_msdummy:0_demote_0
   * Pseudo action:   master_rsc_1_demoted_0
   * Pseudo action:   master_rsc_1_stop_0
   * Resource action: DcIPaddr        start on c001n03
   * Resource action: ocf_192.168.100.181 stop on c001n03
   * Resource action: lsb_dummy       monitor=5000 on c001n08
   * Resource action: rsc_c001n01     start on c001n03
   * Pseudo action:   ocf_msdummy:0_stop_0
   * Pseudo action:   ocf_msdummy:4_stop_0
   * Pseudo action:   master_rsc_1_stopped_0
   * Resource action: DcIPaddr        monitor=5000 on c001n03
   * Pseudo action:   group-1_stopped_0
   * Pseudo action:   group-1_start_0
   * Resource action: ocf_192.168.100.181 start on c001n02
   * Resource action: heartbeat_192.168.100.182 start on c001n02
   * Resource action: ocf_192.168.100.183 start on c001n02
   * Resource action: rsc_c001n01     monitor=5000 on c001n03
   * Pseudo action:   group-1_running_0
   * Resource action: ocf_192.168.100.181 monitor=5000 on c001n02
   * Resource action: heartbeat_192.168.100.182 monitor=5000 on c001n02
   * Resource action: ocf_192.168.100.183 monitor=5000 on c001n02
 
 Revised Cluster Status:
   * Node List:
     * Online: [ c001n02 c001n03 c001n08 ]
     * OFFLINE: [ c001n01 ]
 
   * Full List of Resources:
     * DcIPaddr	(ocf:heartbeat:IPaddr):	 Started c001n03
     * Resource Group: group-1:
       * ocf_192.168.100.181	(ocf:heartbeat:IPaddr):	 Started c001n02
       * heartbeat_192.168.100.182	(ocf:heartbeat:IPaddr):	 Started c001n02
       * ocf_192.168.100.183	(ocf:heartbeat:IPaddr):	 Started c001n02
     * lsb_dummy	(lsb:/usr/lib/heartbeat/cts/LSBDummy):	 Started c001n08
     * rsc_c001n01	(ocf:heartbeat:IPaddr):	 Started c001n03
     * rsc_c001n08	(ocf:heartbeat:IPaddr):	 Started c001n08
     * rsc_c001n02	(ocf:heartbeat:IPaddr):	 Started c001n02
     * rsc_c001n03	(ocf:heartbeat:IPaddr):	 Started c001n03
     * Clone Set: DoFencing [child_DoFencing] (unique):
       * child_DoFencing:0	(stonith:ssh):	 Stopped
       * child_DoFencing:1	(stonith:ssh):	 Started c001n03
       * child_DoFencing:2	(stonith:ssh):	 Started c001n02
       * child_DoFencing:3	(stonith:ssh):	 Started c001n08
     * Clone Set: master_rsc_1 [ocf_msdummy] (promotable) (unique):
       * ocf_msdummy:0	(ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy):	 Stopped
       * ocf_msdummy:1	(ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy):	 Unpromoted c001n03
       * ocf_msdummy:2	(ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy):	 Unpromoted c001n02
       * ocf_msdummy:3	(ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy):	 Unpromoted c001n08
       * ocf_msdummy:4	(ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy):	 Stopped
       * ocf_msdummy:5	(ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy):	 Unpromoted c001n03
       * ocf_msdummy:6	(ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy):	 Unpromoted c001n02
       * ocf_msdummy:7	(ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy):	 Unpromoted c001n08
diff --git a/cts/scheduler/summary/promoted-asymmetrical-order.summary b/cts/scheduler/summary/promoted-asymmetrical-order.summary
index df6e00c9c2..e10568e898 100644
--- a/cts/scheduler/summary/promoted-asymmetrical-order.summary
+++ b/cts/scheduler/summary/promoted-asymmetrical-order.summary
@@ -1,37 +1,37 @@
 2 of 4 resource instances DISABLED and 0 BLOCKED from further action due to failure
 
 Current cluster status:
   * Node List:
     * Online: [ node1 node2 ]
 
   * Full List of Resources:
     * Clone Set: ms1 [rsc1] (promotable) (disabled):
       * Promoted: [ node1 ]
       * Unpromoted: [ node2 ]
     * Clone Set: ms2 [rsc2] (promotable):
       * Promoted: [ node2 ]
       * Unpromoted: [ node1 ]
 
 Transition Summary:
-  * Stop       rsc1:0     (   Promoted node1 )  due to node availability
-  * Stop       rsc1:1     ( Unpromoted node2 )  due to node availability
+  * Stop       rsc1:0     ( Promoted node1 )  due to node availability
+  * Stop       rsc1:1     (  Unpromoted node2 )  due to node availability
 
 Executing Cluster Transition:
   * Pseudo action:   ms1_demote_0
   * Resource action: rsc1:0          demote on node1
   * Pseudo action:   ms1_demoted_0
   * Pseudo action:   ms1_stop_0
   * Resource action: rsc1:0          stop on node1
   * Resource action: rsc1:1          stop on node2
   * Pseudo action:   ms1_stopped_0
 
 Revised Cluster Status:
   * Node List:
     * Online: [ node1 node2 ]
 
   * Full List of Resources:
     * Clone Set: ms1 [rsc1] (promotable) (disabled):
       * Stopped (disabled): [ node1 node2 ]
     * Clone Set: ms2 [rsc2] (promotable):
       * Promoted: [ node2 ]
       * Unpromoted: [ node1 ]
diff --git a/cts/scheduler/summary/promoted-demote-2.summary b/cts/scheduler/summary/promoted-demote-2.summary
index daea66ae8b..115da9aaaf 100644
--- a/cts/scheduler/summary/promoted-demote-2.summary
+++ b/cts/scheduler/summary/promoted-demote-2.summary
@@ -1,75 +1,75 @@
 Current cluster status:
   * Node List:
     * Online: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ]
 
   * Full List of Resources:
     * Fencing	(stonith:fence_xvm):	 Started pcmk-1
     * Resource Group: group-1:
       * r192.168.122.105	(ocf:heartbeat:IPaddr):	 Stopped
       * r192.168.122.106	(ocf:heartbeat:IPaddr):	 Stopped
       * r192.168.122.107	(ocf:heartbeat:IPaddr):	 Stopped
     * rsc_pcmk-1	(ocf:heartbeat:IPaddr):	 Started pcmk-1
     * rsc_pcmk-2	(ocf:heartbeat:IPaddr):	 Started pcmk-2
     * rsc_pcmk-3	(ocf:heartbeat:IPaddr):	 Started pcmk-3
     * rsc_pcmk-4	(ocf:heartbeat:IPaddr):	 Started pcmk-4
     * lsb-dummy	(lsb:/usr/share/pacemaker/tests/cts/LSBDummy):	 Stopped
     * migrator	(ocf:pacemaker:Dummy):	 Started pcmk-4
     * Clone Set: Connectivity [ping-1]:
       * Started: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ]
     * Clone Set: master-1 [stateful-1] (promotable):
       * stateful-1	(ocf:pacemaker:Stateful):	 FAILED pcmk-1
       * Unpromoted: [ pcmk-2 pcmk-3 pcmk-4 ]
 
 Transition Summary:
   * Start      r192.168.122.105     (                 pcmk-2 )
   * Start      r192.168.122.106     (                 pcmk-2 )
   * Start      r192.168.122.107     (                 pcmk-2 )
   * Start      lsb-dummy            (                 pcmk-2 )
-  * Recover    stateful-1:0         (             Unpromoted pcmk-1 )
+  * Recover    stateful-1:0         (           Unpromoted pcmk-1 )
   * Promote    stateful-1:1         ( Unpromoted -> Promoted pcmk-2 )
 
 Executing Cluster Transition:
   * Resource action: stateful-1:0    cancel=15000 on pcmk-2
   * Pseudo action:   master-1_stop_0
   * Resource action: stateful-1:1    stop on pcmk-1
   * Pseudo action:   master-1_stopped_0
   * Pseudo action:   master-1_start_0
   * Resource action: stateful-1:1    start on pcmk-1
   * Pseudo action:   master-1_running_0
   * Resource action: stateful-1:1    monitor=15000 on pcmk-1
   * Pseudo action:   master-1_promote_0
   * Resource action: stateful-1:0    promote on pcmk-2
   * Pseudo action:   master-1_promoted_0
   * Pseudo action:   group-1_start_0
   * Resource action: r192.168.122.105 start on pcmk-2
   * Resource action: r192.168.122.106 start on pcmk-2
   * Resource action: r192.168.122.107 start on pcmk-2
   * Resource action: stateful-1:0    monitor=16000 on pcmk-2
   * Pseudo action:   group-1_running_0
   * Resource action: r192.168.122.105 monitor=5000 on pcmk-2
   * Resource action: r192.168.122.106 monitor=5000 on pcmk-2
   * Resource action: r192.168.122.107 monitor=5000 on pcmk-2
   * Resource action: lsb-dummy       start on pcmk-2
   * Resource action: lsb-dummy       monitor=5000 on pcmk-2
 
 Revised Cluster Status:
   * Node List:
     * Online: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ]
 
   * Full List of Resources:
     * Fencing	(stonith:fence_xvm):	 Started pcmk-1
     * Resource Group: group-1:
       * r192.168.122.105	(ocf:heartbeat:IPaddr):	 Started pcmk-2
       * r192.168.122.106	(ocf:heartbeat:IPaddr):	 Started pcmk-2
       * r192.168.122.107	(ocf:heartbeat:IPaddr):	 Started pcmk-2
     * rsc_pcmk-1	(ocf:heartbeat:IPaddr):	 Started pcmk-1
     * rsc_pcmk-2	(ocf:heartbeat:IPaddr):	 Started pcmk-2
     * rsc_pcmk-3	(ocf:heartbeat:IPaddr):	 Started pcmk-3
     * rsc_pcmk-4	(ocf:heartbeat:IPaddr):	 Started pcmk-4
     * lsb-dummy	(lsb:/usr/share/pacemaker/tests/cts/LSBDummy):	 Started pcmk-2
     * migrator	(ocf:pacemaker:Dummy):	 Started pcmk-4
     * Clone Set: Connectivity [ping-1]:
       * Started: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ]
     * Clone Set: master-1 [stateful-1] (promotable):
       * Promoted: [ pcmk-2 ]
       * Unpromoted: [ pcmk-1 pcmk-3 pcmk-4 ]
diff --git a/cts/scheduler/summary/promoted-failed-demote-2.summary b/cts/scheduler/summary/promoted-failed-demote-2.summary
index 198d9ad3ee..c8504e9e1d 100644
--- a/cts/scheduler/summary/promoted-failed-demote-2.summary
+++ b/cts/scheduler/summary/promoted-failed-demote-2.summary
@@ -1,47 +1,47 @@
 Current cluster status:
   * Node List:
     * Online: [ dl380g5a dl380g5b ]
 
   * Full List of Resources:
     * Clone Set: ms-sf [group] (promotable) (unique):
       * Resource Group: group:0:
         * stateful-1:0	(ocf:heartbeat:Stateful):	 FAILED dl380g5b
         * stateful-2:0	(ocf:heartbeat:Stateful):	 Stopped
       * Resource Group: group:1:
         * stateful-1:1	(ocf:heartbeat:Stateful):	 Unpromoted dl380g5a
         * stateful-2:1	(ocf:heartbeat:Stateful):	 Unpromoted dl380g5a
 
 Transition Summary:
-  * Stop       stateful-1:0     (             Unpromoted dl380g5b )  due to node availability
+  * Stop       stateful-1:0     (           Unpromoted dl380g5b )  due to node availability
   * Promote    stateful-1:1     ( Unpromoted -> Promoted dl380g5a )
   * Promote    stateful-2:1     ( Unpromoted -> Promoted dl380g5a )
 
 Executing Cluster Transition:
   * Resource action: stateful-1:1    cancel=20000 on dl380g5a
   * Resource action: stateful-2:1    cancel=20000 on dl380g5a
   * Pseudo action:   ms-sf_stop_0
   * Pseudo action:   group:0_stop_0
   * Resource action: stateful-1:0    stop on dl380g5b
   * Pseudo action:   group:0_stopped_0
   * Pseudo action:   ms-sf_stopped_0
   * Pseudo action:   ms-sf_promote_0
   * Pseudo action:   group:1_promote_0
   * Resource action: stateful-1:1    promote on dl380g5a
   * Resource action: stateful-2:1    promote on dl380g5a
   * Pseudo action:   group:1_promoted_0
   * Resource action: stateful-1:1    monitor=10000 on dl380g5a
   * Resource action: stateful-2:1    monitor=10000 on dl380g5a
   * Pseudo action:   ms-sf_promoted_0
 
 Revised Cluster Status:
   * Node List:
     * Online: [ dl380g5a dl380g5b ]
 
   * Full List of Resources:
     * Clone Set: ms-sf [group] (promotable) (unique):
       * Resource Group: group:0:
         * stateful-1:0	(ocf:heartbeat:Stateful):	 Stopped
         * stateful-2:0	(ocf:heartbeat:Stateful):	 Stopped
       * Resource Group: group:1:
         * stateful-1:1	(ocf:heartbeat:Stateful):	 Promoted dl380g5a
         * stateful-2:1	(ocf:heartbeat:Stateful):	 Promoted dl380g5a
diff --git a/cts/scheduler/summary/promoted-failed-demote.summary b/cts/scheduler/summary/promoted-failed-demote.summary
index 884a380063..f071025528 100644
--- a/cts/scheduler/summary/promoted-failed-demote.summary
+++ b/cts/scheduler/summary/promoted-failed-demote.summary
@@ -1,64 +1,64 @@
 Current cluster status:
   * Node List:
     * Online: [ dl380g5a dl380g5b ]
 
   * Full List of Resources:
     * Clone Set: ms-sf [group] (promotable) (unique):
       * Resource Group: group:0:
         * stateful-1:0	(ocf:heartbeat:Stateful):	 FAILED dl380g5b
         * stateful-2:0	(ocf:heartbeat:Stateful):	 Stopped
       * Resource Group: group:1:
         * stateful-1:1	(ocf:heartbeat:Stateful):	 Unpromoted dl380g5a
         * stateful-2:1	(ocf:heartbeat:Stateful):	 Unpromoted dl380g5a
 
 Transition Summary:
-  * Stop       stateful-1:0     (             Unpromoted dl380g5b )  due to node availability
+  * Stop       stateful-1:0     (           Unpromoted dl380g5b )  due to node availability
   * Promote    stateful-1:1     ( Unpromoted -> Promoted dl380g5a )
   * Promote    stateful-2:1     ( Unpromoted -> Promoted dl380g5a )
 
 Executing Cluster Transition:
   * Resource action: stateful-1:1    cancel=20000 on dl380g5a
   * Resource action: stateful-2:1    cancel=20000 on dl380g5a
   * Pseudo action:   ms-sf_pre_notify_stop_0
   * Resource action: stateful-1:0    notify on dl380g5b
   * Resource action: stateful-1:1    notify on dl380g5a
   * Resource action: stateful-2:1    notify on dl380g5a
   * Pseudo action:   ms-sf_confirmed-pre_notify_stop_0
   * Pseudo action:   ms-sf_stop_0
   * Pseudo action:   group:0_stop_0
   * Resource action: stateful-1:0    stop on dl380g5b
   * Pseudo action:   group:0_stopped_0
   * Pseudo action:   ms-sf_stopped_0
   * Pseudo action:   ms-sf_post_notify_stopped_0
   * Resource action: stateful-1:1    notify on dl380g5a
   * Resource action: stateful-2:1    notify on dl380g5a
   * Pseudo action:   ms-sf_confirmed-post_notify_stopped_0
   * Pseudo action:   ms-sf_pre_notify_promote_0
   * Resource action: stateful-1:1    notify on dl380g5a
   * Resource action: stateful-2:1    notify on dl380g5a
   * Pseudo action:   ms-sf_confirmed-pre_notify_promote_0
   * Pseudo action:   ms-sf_promote_0
   * Pseudo action:   group:1_promote_0
   * Resource action: stateful-1:1    promote on dl380g5a
   * Resource action: stateful-2:1    promote on dl380g5a
   * Pseudo action:   group:1_promoted_0
   * Pseudo action:   ms-sf_promoted_0
   * Pseudo action:   ms-sf_post_notify_promoted_0
   * Resource action: stateful-1:1    notify on dl380g5a
   * Resource action: stateful-2:1    notify on dl380g5a
   * Pseudo action:   ms-sf_confirmed-post_notify_promoted_0
   * Resource action: stateful-1:1    monitor=10000 on dl380g5a
   * Resource action: stateful-2:1    monitor=10000 on dl380g5a
 
 Revised Cluster Status:
   * Node List:
     * Online: [ dl380g5a dl380g5b ]
 
   * Full List of Resources:
     * Clone Set: ms-sf [group] (promotable) (unique):
       * Resource Group: group:0:
         * stateful-1:0	(ocf:heartbeat:Stateful):	 Stopped
         * stateful-2:0	(ocf:heartbeat:Stateful):	 Stopped
       * Resource Group: group:1:
         * stateful-1:1	(ocf:heartbeat:Stateful):	 Promoted dl380g5a
         * stateful-2:1	(ocf:heartbeat:Stateful):	 Promoted dl380g5a
diff --git a/cts/scheduler/summary/remote-connection-unrecoverable.summary b/cts/scheduler/summary/remote-connection-unrecoverable.summary
index bd1adfcfa4..3cfb64565a 100644
--- a/cts/scheduler/summary/remote-connection-unrecoverable.summary
+++ b/cts/scheduler/summary/remote-connection-unrecoverable.summary
@@ -1,54 +1,54 @@
 Current cluster status:
   * Node List:
     * Node node1: UNCLEAN (offline)
     * Online: [ node2 ]
     * RemoteOnline: [ remote1 ]
 
   * Full List of Resources:
     * remote1	(ocf:pacemaker:remote):	 Started node1 (UNCLEAN)
     * killer	(stonith:fence_xvm):	 Started node2
     * rsc1	(ocf:pacemaker:Dummy):	 Started remote1
     * Clone Set: rsc2-master [rsc2] (promotable):
       * rsc2	(ocf:pacemaker:Stateful):	 Promoted node1 (UNCLEAN)
       * Promoted: [ node2 ]
       * Stopped: [ remote1 ]
 
 Transition Summary:
   * Fence (reboot) remote1 'resources are active and the connection is unrecoverable'
   * Fence (reboot) node1 'peer is no longer part of the cluster'
   * Stop       remote1     (            node1 )  due to node availability
   * Restart    killer      (            node2 )  due to resource definition change
   * Move       rsc1        ( remote1 -> node2 )
-  * Stop       rsc2:0      (   Promoted node1 )  due to node availability
+  * Stop       rsc2:0      (     Promoted node1 )  due to node availability
 
 Executing Cluster Transition:
   * Pseudo action:   remote1_stop_0
   * Resource action: killer          stop on node2
   * Resource action: rsc1            monitor on node2
   * Fencing node1 (reboot)
   * Fencing remote1 (reboot)
   * Resource action: killer          start on node2
   * Resource action: killer          monitor=60000 on node2
   * Pseudo action:   rsc1_stop_0
   * Pseudo action:   rsc2-master_demote_0
   * Resource action: rsc1            start on node2
   * Pseudo action:   rsc2_demote_0
   * Pseudo action:   rsc2-master_demoted_0
   * Pseudo action:   rsc2-master_stop_0
   * Resource action: rsc1            monitor=10000 on node2
   * Pseudo action:   rsc2_stop_0
   * Pseudo action:   rsc2-master_stopped_0
 
 Revised Cluster Status:
   * Node List:
     * Online: [ node2 ]
     * OFFLINE: [ node1 ]
     * RemoteOFFLINE: [ remote1 ]
 
   * Full List of Resources:
     * remote1	(ocf:pacemaker:remote):	 Stopped
     * killer	(stonith:fence_xvm):	 Started node2
     * rsc1	(ocf:pacemaker:Dummy):	 Started node2
     * Clone Set: rsc2-master [rsc2] (promotable):
       * Promoted: [ node2 ]
       * Stopped: [ node1 remote1 ]
diff --git a/cts/scheduler/summary/remote-recover-all.summary b/cts/scheduler/summary/remote-recover-all.summary
index 176c1de8b3..18d10730bf 100644
--- a/cts/scheduler/summary/remote-recover-all.summary
+++ b/cts/scheduler/summary/remote-recover-all.summary
@@ -1,146 +1,146 @@
 Using the original execution date of: 2017-05-03 13:33:24Z
 Current cluster status:
   * Node List:
     * Node controller-1: UNCLEAN (offline)
     * Online: [ controller-0 controller-2 ]
     * RemoteOnline: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
 
   * Full List of Resources:
     * messaging-0	(ocf:pacemaker:remote):	 Started controller-0
     * messaging-1	(ocf:pacemaker:remote):	 Started controller-1 (UNCLEAN)
     * messaging-2	(ocf:pacemaker:remote):	 Started controller-0
     * galera-0	(ocf:pacemaker:remote):	 Started controller-1 (UNCLEAN)
     * galera-1	(ocf:pacemaker:remote):	 Started controller-0
     * galera-2	(ocf:pacemaker:remote):	 Started controller-1 (UNCLEAN)
     * Clone Set: rabbitmq-clone [rabbitmq]:
       * Started: [ messaging-0 messaging-1 messaging-2 ]
       * Stopped: [ controller-0 controller-1 controller-2 galera-0 galera-1 galera-2 ]
     * Clone Set: galera-master [galera] (promotable):
       * Promoted: [ galera-0 galera-1 galera-2 ]
       * Stopped: [ controller-0 controller-1 controller-2 messaging-0 messaging-1 messaging-2 ]
     * Clone Set: redis-master [redis] (promotable):
       * redis	(ocf:heartbeat:redis):	 Unpromoted controller-1 (UNCLEAN)
       * Promoted: [ controller-0 ]
       * Unpromoted: [ controller-2 ]
       * Stopped: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
     * ip-192.168.24.6	(ocf:heartbeat:IPaddr2):	 Started controller-0
     * ip-10.0.0.102	(ocf:heartbeat:IPaddr2):	 Started controller-0
     * ip-172.17.1.14	(ocf:heartbeat:IPaddr2):	 Started controller-1 (UNCLEAN)
     * ip-172.17.1.17	(ocf:heartbeat:IPaddr2):	 Started controller-1 (UNCLEAN)
     * ip-172.17.3.15	(ocf:heartbeat:IPaddr2):	 Started controller-0
     * ip-172.17.4.11	(ocf:heartbeat:IPaddr2):	 Started controller-1 (UNCLEAN)
     * Clone Set: haproxy-clone [haproxy]:
       * haproxy	(systemd:haproxy):	 Started controller-1 (UNCLEAN)
       * Started: [ controller-0 controller-2 ]
       * Stopped: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
     * openstack-cinder-volume	(systemd:openstack-cinder-volume):	 Started controller-0
     * stonith-fence_ipmilan-525400bbf613	(stonith:fence_ipmilan):	 Started controller-0
     * stonith-fence_ipmilan-525400b4f6bd	(stonith:fence_ipmilan):	 Started controller-0
     * stonith-fence_ipmilan-5254005bdbb5	(stonith:fence_ipmilan):	 Started controller-1 (UNCLEAN)
 
 Transition Summary:
   * Fence (reboot) messaging-1 'resources are active and the connection is unrecoverable'
   * Fence (reboot) galera-2 'resources are active and the connection is unrecoverable'
   * Fence (reboot) controller-1 'peer is no longer part of the cluster'
   * Stop       messaging-1                            (                 controller-1 )  due to node availability
   * Move       galera-0                               ( controller-1 -> controller-2 )
   * Stop       galera-2                               (                 controller-1 )  due to node availability
   * Stop       rabbitmq:2                             (                  messaging-1 )  due to node availability
-  * Stop       galera:1                               (            Promoted galera-2 )  due to node availability
-  * Stop       redis:0                                (      Unpromoted controller-1 )  due to node availability
+  * Stop       galera:1                               (              Promoted galera-2 )  due to node availability
+  * Stop       redis:0                                (           Unpromoted controller-1 )  due to node availability
   * Move       ip-172.17.1.14                         ( controller-1 -> controller-2 )
   * Move       ip-172.17.1.17                         ( controller-1 -> controller-2 )
   * Move       ip-172.17.4.11                         ( controller-1 -> controller-2 )
   * Stop       haproxy:0                              (                 controller-1 )  due to node availability
   * Move       stonith-fence_ipmilan-5254005bdbb5     ( controller-1 -> controller-2 )
 
 Executing Cluster Transition:
   * Pseudo action:   messaging-1_stop_0
   * Pseudo action:   galera-0_stop_0
   * Pseudo action:   galera-2_stop_0
   * Pseudo action:   galera-master_demote_0
   * Pseudo action:   redis-master_pre_notify_stop_0
   * Pseudo action:   stonith-fence_ipmilan-5254005bdbb5_stop_0
   * Fencing controller-1 (reboot)
   * Pseudo action:   redis_post_notify_stop_0
   * Resource action: redis           notify on controller-0
   * Resource action: redis           notify on controller-2
   * Pseudo action:   redis-master_confirmed-pre_notify_stop_0
   * Pseudo action:   redis-master_stop_0
   * Pseudo action:   haproxy-clone_stop_0
   * Fencing galera-2 (reboot)
   * Pseudo action:   galera_demote_0
   * Pseudo action:   galera-master_demoted_0
   * Pseudo action:   galera-master_stop_0
   * Pseudo action:   redis_stop_0
   * Pseudo action:   redis-master_stopped_0
   * Pseudo action:   haproxy_stop_0
   * Pseudo action:   haproxy-clone_stopped_0
   * Fencing messaging-1 (reboot)
   * Resource action: galera-0        start on controller-2
   * Pseudo action:   rabbitmq_post_notify_stop_0
   * Pseudo action:   rabbitmq-clone_stop_0
   * Pseudo action:   galera_stop_0
   * Resource action: galera          monitor=10000 on galera-0
   * Pseudo action:   galera-master_stopped_0
   * Pseudo action:   redis-master_post_notify_stopped_0
   * Pseudo action:   ip-172.17.1.14_stop_0
   * Pseudo action:   ip-172.17.1.17_stop_0
   * Pseudo action:   ip-172.17.4.11_stop_0
   * Resource action: stonith-fence_ipmilan-5254005bdbb5 start on controller-2
   * Resource action: galera-0        monitor=20000 on controller-2
   * Resource action: rabbitmq        notify on messaging-2
   * Resource action: rabbitmq        notify on messaging-0
   * Pseudo action:   rabbitmq_notified_0
   * Pseudo action:   rabbitmq_stop_0
   * Pseudo action:   rabbitmq-clone_stopped_0
   * Resource action: redis           notify on controller-0
   * Resource action: redis           notify on controller-2
   * Pseudo action:   redis-master_confirmed-post_notify_stopped_0
   * Resource action: ip-172.17.1.14  start on controller-2
   * Resource action: ip-172.17.1.17  start on controller-2
   * Resource action: ip-172.17.4.11  start on controller-2
   * Resource action: stonith-fence_ipmilan-5254005bdbb5 monitor=60000 on controller-2
   * Pseudo action:   redis_notified_0
   * Resource action: ip-172.17.1.14  monitor=10000 on controller-2
   * Resource action: ip-172.17.1.17  monitor=10000 on controller-2
   * Resource action: ip-172.17.4.11  monitor=10000 on controller-2
 Using the original execution date of: 2017-05-03 13:33:24Z
 
 Revised Cluster Status:
   * Node List:
     * Online: [ controller-0 controller-2 ]
     * OFFLINE: [ controller-1 ]
     * RemoteOnline: [ galera-0 galera-1 messaging-0 messaging-2 ]
     * RemoteOFFLINE: [ galera-2 messaging-1 ]
 
   * Full List of Resources:
     * messaging-0	(ocf:pacemaker:remote):	 Started controller-0
     * messaging-1	(ocf:pacemaker:remote):	 Stopped
     * messaging-2	(ocf:pacemaker:remote):	 Started controller-0
     * galera-0	(ocf:pacemaker:remote):	 Started controller-2
     * galera-1	(ocf:pacemaker:remote):	 Started controller-0
     * galera-2	(ocf:pacemaker:remote):	 Stopped
     * Clone Set: rabbitmq-clone [rabbitmq]:
       * Started: [ messaging-0 messaging-2 ]
       * Stopped: [ controller-0 controller-1 controller-2 galera-0 galera-1 galera-2 messaging-1 ]
     * Clone Set: galera-master [galera] (promotable):
       * Promoted: [ galera-0 galera-1 ]
       * Stopped: [ controller-0 controller-1 controller-2 galera-2 messaging-0 messaging-1 messaging-2 ]
     * Clone Set: redis-master [redis] (promotable):
       * Promoted: [ controller-0 ]
       * Unpromoted: [ controller-2 ]
       * Stopped: [ controller-1 galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
     * ip-192.168.24.6	(ocf:heartbeat:IPaddr2):	 Started controller-0
     * ip-10.0.0.102	(ocf:heartbeat:IPaddr2):	 Started controller-0
     * ip-172.17.1.14	(ocf:heartbeat:IPaddr2):	 Started controller-2
     * ip-172.17.1.17	(ocf:heartbeat:IPaddr2):	 Started controller-2
     * ip-172.17.3.15	(ocf:heartbeat:IPaddr2):	 Started controller-0
     * ip-172.17.4.11	(ocf:heartbeat:IPaddr2):	 Started controller-2
     * Clone Set: haproxy-clone [haproxy]:
       * Started: [ controller-0 controller-2 ]
       * Stopped: [ controller-1 galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
     * openstack-cinder-volume	(systemd:openstack-cinder-volume):	 Started controller-0
     * stonith-fence_ipmilan-525400bbf613	(stonith:fence_ipmilan):	 Started controller-0
     * stonith-fence_ipmilan-525400b4f6bd	(stonith:fence_ipmilan):	 Started controller-0
     * stonith-fence_ipmilan-5254005bdbb5	(stonith:fence_ipmilan):	 Started controller-2
diff --git a/cts/scheduler/summary/remote-recover-connection.summary b/cts/scheduler/summary/remote-recover-connection.summary
index fd6900dd96..a9723bc5e1 100644
--- a/cts/scheduler/summary/remote-recover-connection.summary
+++ b/cts/scheduler/summary/remote-recover-connection.summary
@@ -1,132 +1,132 @@
 Using the original execution date of: 2017-05-03 13:33:24Z
 Current cluster status:
   * Node List:
     * Node controller-1: UNCLEAN (offline)
     * Online: [ controller-0 controller-2 ]
     * RemoteOnline: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
 
   * Full List of Resources:
     * messaging-0	(ocf:pacemaker:remote):	 Started controller-0
     * messaging-1	(ocf:pacemaker:remote):	 Started controller-1 (UNCLEAN)
     * messaging-2	(ocf:pacemaker:remote):	 Started controller-0
     * galera-0	(ocf:pacemaker:remote):	 Started controller-1 (UNCLEAN)
     * galera-1	(ocf:pacemaker:remote):	 Started controller-0
     * galera-2	(ocf:pacemaker:remote):	 Started controller-1 (UNCLEAN)
     * Clone Set: rabbitmq-clone [rabbitmq]:
       * Started: [ messaging-0 messaging-1 messaging-2 ]
       * Stopped: [ controller-0 controller-1 controller-2 galera-0 galera-1 galera-2 ]
     * Clone Set: galera-master [galera] (promotable):
       * Promoted: [ galera-0 galera-1 galera-2 ]
       * Stopped: [ controller-0 controller-1 controller-2 messaging-0 messaging-1 messaging-2 ]
     * Clone Set: redis-master [redis] (promotable):
       * redis	(ocf:heartbeat:redis):	 Unpromoted controller-1 (UNCLEAN)
       * Promoted: [ controller-0 ]
       * Unpromoted: [ controller-2 ]
       * Stopped: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
     * ip-192.168.24.6	(ocf:heartbeat:IPaddr2):	 Started controller-0
     * ip-10.0.0.102	(ocf:heartbeat:IPaddr2):	 Started controller-0
     * ip-172.17.1.14	(ocf:heartbeat:IPaddr2):	 Started controller-1 (UNCLEAN)
     * ip-172.17.1.17	(ocf:heartbeat:IPaddr2):	 Started controller-1 (UNCLEAN)
     * ip-172.17.3.15	(ocf:heartbeat:IPaddr2):	 Started controller-0
     * ip-172.17.4.11	(ocf:heartbeat:IPaddr2):	 Started controller-1 (UNCLEAN)
     * Clone Set: haproxy-clone [haproxy]:
       * haproxy	(systemd:haproxy):	 Started controller-1 (UNCLEAN)
       * Started: [ controller-0 controller-2 ]
       * Stopped: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
     * openstack-cinder-volume	(systemd:openstack-cinder-volume):	 Started controller-0
     * stonith-fence_ipmilan-525400bbf613	(stonith:fence_ipmilan):	 Started controller-0
     * stonith-fence_ipmilan-525400b4f6bd	(stonith:fence_ipmilan):	 Started controller-0
     * stonith-fence_ipmilan-5254005bdbb5	(stonith:fence_ipmilan):	 Started controller-1 (UNCLEAN)
 
 Transition Summary:
   * Fence (reboot) controller-1 'peer is no longer part of the cluster'
   * Move       messaging-1                            ( controller-1 -> controller-2 )
   * Move       galera-0                               ( controller-1 -> controller-2 )
   * Move       galera-2                               ( controller-1 -> controller-2 )
-  * Stop       redis:0                                (      Unpromoted controller-1 )  due to node availability
+  * Stop       redis:0                                (           Unpromoted controller-1 )  due to node availability
   * Move       ip-172.17.1.14                         ( controller-1 -> controller-2 )
   * Move       ip-172.17.1.17                         ( controller-1 -> controller-2 )
   * Move       ip-172.17.4.11                         ( controller-1 -> controller-2 )
   * Stop       haproxy:0                              (                 controller-1 )  due to node availability
   * Move       stonith-fence_ipmilan-5254005bdbb5     ( controller-1 -> controller-2 )
 
 Executing Cluster Transition:
   * Pseudo action:   messaging-1_stop_0
   * Pseudo action:   galera-0_stop_0
   * Pseudo action:   galera-2_stop_0
   * Pseudo action:   redis-master_pre_notify_stop_0
   * Pseudo action:   stonith-fence_ipmilan-5254005bdbb5_stop_0
   * Fencing controller-1 (reboot)
   * Resource action: messaging-1     start on controller-2
   * Resource action: galera-0        start on controller-2
   * Resource action: galera-2        start on controller-2
   * Resource action: rabbitmq        monitor=10000 on messaging-1
   * Resource action: galera          monitor=10000 on galera-2
   * Resource action: galera          monitor=10000 on galera-0
   * Pseudo action:   redis_post_notify_stop_0
   * Resource action: redis           notify on controller-0
   * Resource action: redis           notify on controller-2
   * Pseudo action:   redis-master_confirmed-pre_notify_stop_0
   * Pseudo action:   redis-master_stop_0
   * Pseudo action:   haproxy-clone_stop_0
   * Resource action: stonith-fence_ipmilan-5254005bdbb5 start on controller-2
   * Resource action: messaging-1     monitor=20000 on controller-2
   * Resource action: galera-0        monitor=20000 on controller-2
   * Resource action: galera-2        monitor=20000 on controller-2
   * Pseudo action:   redis_stop_0
   * Pseudo action:   redis-master_stopped_0
   * Pseudo action:   haproxy_stop_0
   * Pseudo action:   haproxy-clone_stopped_0
   * Resource action: stonith-fence_ipmilan-5254005bdbb5 monitor=60000 on controller-2
   * Pseudo action:   redis-master_post_notify_stopped_0
   * Pseudo action:   ip-172.17.1.14_stop_0
   * Pseudo action:   ip-172.17.1.17_stop_0
   * Pseudo action:   ip-172.17.4.11_stop_0
   * Resource action: redis           notify on controller-0
   * Resource action: redis           notify on controller-2
   * Pseudo action:   redis-master_confirmed-post_notify_stopped_0
   * Resource action: ip-172.17.1.14  start on controller-2
   * Resource action: ip-172.17.1.17  start on controller-2
   * Resource action: ip-172.17.4.11  start on controller-2
   * Pseudo action:   redis_notified_0
   * Resource action: ip-172.17.1.14  monitor=10000 on controller-2
   * Resource action: ip-172.17.1.17  monitor=10000 on controller-2
   * Resource action: ip-172.17.4.11  monitor=10000 on controller-2
 Using the original execution date of: 2017-05-03 13:33:24Z
 
 Revised Cluster Status:
   * Node List:
     * Online: [ controller-0 controller-2 ]
     * OFFLINE: [ controller-1 ]
     * RemoteOnline: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
 
   * Full List of Resources:
     * messaging-0	(ocf:pacemaker:remote):	 Started controller-0
     * messaging-1	(ocf:pacemaker:remote):	 Started controller-2
     * messaging-2	(ocf:pacemaker:remote):	 Started controller-0
     * galera-0	(ocf:pacemaker:remote):	 Started controller-2
     * galera-1	(ocf:pacemaker:remote):	 Started controller-0
     * galera-2	(ocf:pacemaker:remote):	 Started controller-2
     * Clone Set: rabbitmq-clone [rabbitmq]:
       * Started: [ messaging-0 messaging-1 messaging-2 ]
       * Stopped: [ controller-0 controller-1 controller-2 galera-0 galera-1 galera-2 ]
     * Clone Set: galera-master [galera] (promotable):
       * Promoted: [ galera-0 galera-1 galera-2 ]
       * Stopped: [ controller-0 controller-1 controller-2 messaging-0 messaging-1 messaging-2 ]
     * Clone Set: redis-master [redis] (promotable):
       * Promoted: [ controller-0 ]
       * Unpromoted: [ controller-2 ]
       * Stopped: [ controller-1 galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
     * ip-192.168.24.6	(ocf:heartbeat:IPaddr2):	 Started controller-0
     * ip-10.0.0.102	(ocf:heartbeat:IPaddr2):	 Started controller-0
     * ip-172.17.1.14	(ocf:heartbeat:IPaddr2):	 Started controller-2
     * ip-172.17.1.17	(ocf:heartbeat:IPaddr2):	 Started controller-2
     * ip-172.17.3.15	(ocf:heartbeat:IPaddr2):	 Started controller-0
     * ip-172.17.4.11	(ocf:heartbeat:IPaddr2):	 Started controller-2
     * Clone Set: haproxy-clone [haproxy]:
       * Started: [ controller-0 controller-2 ]
       * Stopped: [ controller-1 galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
     * openstack-cinder-volume	(systemd:openstack-cinder-volume):	 Started controller-0
     * stonith-fence_ipmilan-525400bbf613	(stonith:fence_ipmilan):	 Started controller-0
     * stonith-fence_ipmilan-525400b4f6bd	(stonith:fence_ipmilan):	 Started controller-0
     * stonith-fence_ipmilan-5254005bdbb5	(stonith:fence_ipmilan):	 Started controller-2
diff --git a/cts/scheduler/summary/remote-recover-no-resources.summary b/cts/scheduler/summary/remote-recover-no-resources.summary
index 332d1c4123..d7d9ef942c 100644
--- a/cts/scheduler/summary/remote-recover-no-resources.summary
+++ b/cts/scheduler/summary/remote-recover-no-resources.summary
@@ -1,137 +1,137 @@
 Using the original execution date of: 2017-05-03 13:33:24Z
 Current cluster status:
   * Node List:
     * Node controller-1: UNCLEAN (offline)
     * Online: [ controller-0 controller-2 ]
     * RemoteOnline: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
 
   * Full List of Resources:
     * messaging-0	(ocf:pacemaker:remote):	 Started controller-0
     * messaging-1	(ocf:pacemaker:remote):	 Started controller-1 (UNCLEAN)
     * messaging-2	(ocf:pacemaker:remote):	 Started controller-0
     * galera-0	(ocf:pacemaker:remote):	 Started controller-1 (UNCLEAN)
     * galera-1	(ocf:pacemaker:remote):	 Started controller-0
     * galera-2	(ocf:pacemaker:remote):	 Started controller-1 (UNCLEAN)
     * Clone Set: rabbitmq-clone [rabbitmq]:
       * Started: [ messaging-0 messaging-1 messaging-2 ]
       * Stopped: [ controller-0 controller-1 controller-2 galera-0 galera-1 galera-2 ]
     * Clone Set: galera-master [galera] (promotable):
       * Promoted: [ galera-0 galera-1 ]
       * Stopped: [ controller-0 controller-1 controller-2 galera-2 messaging-0 messaging-1 messaging-2 ]
     * Clone Set: redis-master [redis] (promotable):
       * redis	(ocf:heartbeat:redis):	 Unpromoted controller-1 (UNCLEAN)
       * Promoted: [ controller-0 ]
       * Unpromoted: [ controller-2 ]
       * Stopped: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
     * ip-192.168.24.6	(ocf:heartbeat:IPaddr2):	 Started controller-0
     * ip-10.0.0.102	(ocf:heartbeat:IPaddr2):	 Started controller-0
     * ip-172.17.1.14	(ocf:heartbeat:IPaddr2):	 Started controller-1 (UNCLEAN)
     * ip-172.17.1.17	(ocf:heartbeat:IPaddr2):	 Started controller-1 (UNCLEAN)
     * ip-172.17.3.15	(ocf:heartbeat:IPaddr2):	 Started controller-0
     * ip-172.17.4.11	(ocf:heartbeat:IPaddr2):	 Started controller-1 (UNCLEAN)
     * Clone Set: haproxy-clone [haproxy]:
       * haproxy	(systemd:haproxy):	 Started controller-1 (UNCLEAN)
       * Started: [ controller-0 controller-2 ]
       * Stopped: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
     * openstack-cinder-volume	(systemd:openstack-cinder-volume):	 Started controller-0
     * stonith-fence_ipmilan-525400bbf613	(stonith:fence_ipmilan):	 Started controller-0
     * stonith-fence_ipmilan-525400b4f6bd	(stonith:fence_ipmilan):	 Started controller-0
     * stonith-fence_ipmilan-5254005bdbb5	(stonith:fence_ipmilan):	 Started controller-1 (UNCLEAN)
 
 Transition Summary:
   * Fence (reboot) messaging-1 'resources are active and the connection is unrecoverable'
   * Fence (reboot) controller-1 'peer is no longer part of the cluster'
   * Stop       messaging-1                            (                 controller-1 )  due to node availability
   * Move       galera-0                               ( controller-1 -> controller-2 )
   * Stop       galera-2                               (                 controller-1 )  due to node availability
   * Stop       rabbitmq:2                             (                  messaging-1 )  due to node availability
-  * Stop       redis:0                                (      Unpromoted controller-1 )  due to node availability
+  * Stop       redis:0                                (           Unpromoted controller-1 )  due to node availability
   * Move       ip-172.17.1.14                         ( controller-1 -> controller-2 )
   * Move       ip-172.17.1.17                         ( controller-1 -> controller-2 )
   * Move       ip-172.17.4.11                         ( controller-1 -> controller-2 )
   * Stop       haproxy:0                              (                 controller-1 )  due to node availability
   * Move       stonith-fence_ipmilan-5254005bdbb5     ( controller-1 -> controller-2 )
 
 Executing Cluster Transition:
   * Pseudo action:   messaging-1_stop_0
   * Pseudo action:   galera-0_stop_0
   * Pseudo action:   galera-2_stop_0
   * Pseudo action:   redis-master_pre_notify_stop_0
   * Pseudo action:   stonith-fence_ipmilan-5254005bdbb5_stop_0
   * Fencing controller-1 (reboot)
   * Pseudo action:   redis_post_notify_stop_0
   * Resource action: redis           notify on controller-0
   * Resource action: redis           notify on controller-2
   * Pseudo action:   redis-master_confirmed-pre_notify_stop_0
   * Pseudo action:   redis-master_stop_0
   * Pseudo action:   haproxy-clone_stop_0
   * Fencing messaging-1 (reboot)
   * Resource action: galera-0        start on controller-2
   * Pseudo action:   rabbitmq_post_notify_stop_0
   * Pseudo action:   rabbitmq-clone_stop_0
   * Resource action: galera          monitor=10000 on galera-0
   * Pseudo action:   redis_stop_0
   * Pseudo action:   redis-master_stopped_0
   * Pseudo action:   haproxy_stop_0
   * Pseudo action:   haproxy-clone_stopped_0
   * Resource action: stonith-fence_ipmilan-5254005bdbb5 start on controller-2
   * Resource action: galera-0        monitor=20000 on controller-2
   * Resource action: rabbitmq        notify on messaging-2
   * Resource action: rabbitmq        notify on messaging-0
   * Pseudo action:   rabbitmq_notified_0
   * Pseudo action:   rabbitmq_stop_0
   * Pseudo action:   rabbitmq-clone_stopped_0
   * Pseudo action:   redis-master_post_notify_stopped_0
   * Pseudo action:   ip-172.17.1.14_stop_0
   * Pseudo action:   ip-172.17.1.17_stop_0
   * Pseudo action:   ip-172.17.4.11_stop_0
   * Resource action: stonith-fence_ipmilan-5254005bdbb5 monitor=60000 on controller-2
   * Resource action: redis           notify on controller-0
   * Resource action: redis           notify on controller-2
   * Pseudo action:   redis-master_confirmed-post_notify_stopped_0
   * Resource action: ip-172.17.1.14  start on controller-2
   * Resource action: ip-172.17.1.17  start on controller-2
   * Resource action: ip-172.17.4.11  start on controller-2
   * Pseudo action:   redis_notified_0
   * Resource action: ip-172.17.1.14  monitor=10000 on controller-2
   * Resource action: ip-172.17.1.17  monitor=10000 on controller-2
   * Resource action: ip-172.17.4.11  monitor=10000 on controller-2
 Using the original execution date of: 2017-05-03 13:33:24Z
 
 Revised Cluster Status:
   * Node List:
     * Online: [ controller-0 controller-2 ]
     * OFFLINE: [ controller-1 ]
     * RemoteOnline: [ galera-0 galera-1 messaging-0 messaging-2 ]
     * RemoteOFFLINE: [ galera-2 messaging-1 ]
 
   * Full List of Resources:
     * messaging-0	(ocf:pacemaker:remote):	 Started controller-0
     * messaging-1	(ocf:pacemaker:remote):	 Stopped
     * messaging-2	(ocf:pacemaker:remote):	 Started controller-0
     * galera-0	(ocf:pacemaker:remote):	 Started controller-2
     * galera-1	(ocf:pacemaker:remote):	 Started controller-0
     * galera-2	(ocf:pacemaker:remote):	 Stopped
     * Clone Set: rabbitmq-clone [rabbitmq]:
       * Started: [ messaging-0 messaging-2 ]
       * Stopped: [ controller-0 controller-1 controller-2 galera-0 galera-1 galera-2 messaging-1 ]
     * Clone Set: galera-master [galera] (promotable):
       * Promoted: [ galera-0 galera-1 ]
       * Stopped: [ controller-0 controller-1 controller-2 galera-2 messaging-0 messaging-1 messaging-2 ]
     * Clone Set: redis-master [redis] (promotable):
       * Promoted: [ controller-0 ]
       * Unpromoted: [ controller-2 ]
       * Stopped: [ controller-1 galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
     * ip-192.168.24.6	(ocf:heartbeat:IPaddr2):	 Started controller-0
     * ip-10.0.0.102	(ocf:heartbeat:IPaddr2):	 Started controller-0
     * ip-172.17.1.14	(ocf:heartbeat:IPaddr2):	 Started controller-2
     * ip-172.17.1.17	(ocf:heartbeat:IPaddr2):	 Started controller-2
     * ip-172.17.3.15	(ocf:heartbeat:IPaddr2):	 Started controller-0
     * ip-172.17.4.11	(ocf:heartbeat:IPaddr2):	 Started controller-2
     * Clone Set: haproxy-clone [haproxy]:
       * Started: [ controller-0 controller-2 ]
       * Stopped: [ controller-1 galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
     * openstack-cinder-volume	(systemd:openstack-cinder-volume):	 Started controller-0
     * stonith-fence_ipmilan-525400bbf613	(stonith:fence_ipmilan):	 Started controller-0
     * stonith-fence_ipmilan-525400b4f6bd	(stonith:fence_ipmilan):	 Started controller-0
     * stonith-fence_ipmilan-5254005bdbb5	(stonith:fence_ipmilan):	 Started controller-2
diff --git a/cts/scheduler/summary/remote-recover-unknown.summary b/cts/scheduler/summary/remote-recover-unknown.summary
index ac5143a16e..4f3d045284 100644
--- a/cts/scheduler/summary/remote-recover-unknown.summary
+++ b/cts/scheduler/summary/remote-recover-unknown.summary
@@ -1,139 +1,139 @@
 Using the original execution date of: 2017-05-03 13:33:24Z
 Current cluster status:
   * Node List:
     * Node controller-1: UNCLEAN (offline)
     * Online: [ controller-0 controller-2 ]
     * RemoteOnline: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
 
   * Full List of Resources:
     * messaging-0	(ocf:pacemaker:remote):	 Started controller-0
     * messaging-1	(ocf:pacemaker:remote):	 Started controller-1 (UNCLEAN)
     * messaging-2	(ocf:pacemaker:remote):	 Started controller-0
     * galera-0	(ocf:pacemaker:remote):	 Started controller-1 (UNCLEAN)
     * galera-1	(ocf:pacemaker:remote):	 Started controller-0
     * galera-2	(ocf:pacemaker:remote):	 Started controller-1 (UNCLEAN)
     * Clone Set: rabbitmq-clone [rabbitmq]:
       * Started: [ messaging-0 messaging-1 messaging-2 ]
       * Stopped: [ controller-0 controller-1 controller-2 galera-0 galera-1 galera-2 ]
     * Clone Set: galera-master [galera] (promotable):
       * Promoted: [ galera-0 galera-1 ]
       * Stopped: [ controller-0 controller-1 controller-2 galera-2 messaging-0 messaging-1 messaging-2 ]
     * Clone Set: redis-master [redis] (promotable):
       * redis	(ocf:heartbeat:redis):	 Unpromoted controller-1 (UNCLEAN)
       * Promoted: [ controller-0 ]
       * Unpromoted: [ controller-2 ]
       * Stopped: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
     * ip-192.168.24.6	(ocf:heartbeat:IPaddr2):	 Started controller-0
     * ip-10.0.0.102	(ocf:heartbeat:IPaddr2):	 Started controller-0
     * ip-172.17.1.14	(ocf:heartbeat:IPaddr2):	 Started controller-1 (UNCLEAN)
     * ip-172.17.1.17	(ocf:heartbeat:IPaddr2):	 Started controller-1 (UNCLEAN)
     * ip-172.17.3.15	(ocf:heartbeat:IPaddr2):	 Started controller-0
     * ip-172.17.4.11	(ocf:heartbeat:IPaddr2):	 Started controller-1 (UNCLEAN)
     * Clone Set: haproxy-clone [haproxy]:
       * haproxy	(systemd:haproxy):	 Started controller-1 (UNCLEAN)
       * Started: [ controller-0 controller-2 ]
       * Stopped: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
     * openstack-cinder-volume	(systemd:openstack-cinder-volume):	 Started controller-0
     * stonith-fence_ipmilan-525400bbf613	(stonith:fence_ipmilan):	 Started controller-0
     * stonith-fence_ipmilan-525400b4f6bd	(stonith:fence_ipmilan):	 Started controller-0
     * stonith-fence_ipmilan-5254005bdbb5	(stonith:fence_ipmilan):	 Started controller-1 (UNCLEAN)
 
 Transition Summary:
   * Fence (reboot) galera-2 'resources are in an unknown state and the connection is unrecoverable'
   * Fence (reboot) messaging-1 'resources are active and the connection is unrecoverable'
   * Fence (reboot) controller-1 'peer is no longer part of the cluster'
   * Stop       messaging-1                            (                 controller-1 )  due to node availability
   * Move       galera-0                               ( controller-1 -> controller-2 )
   * Stop       galera-2                               (                 controller-1 )  due to node availability
   * Stop       rabbitmq:2                             (                  messaging-1 )  due to node availability
-  * Stop       redis:0                                (      Unpromoted controller-1 )  due to node availability
+  * Stop       redis:0                                (           Unpromoted controller-1 )  due to node availability
   * Move       ip-172.17.1.14                         ( controller-1 -> controller-2 )
   * Move       ip-172.17.1.17                         ( controller-1 -> controller-2 )
   * Move       ip-172.17.4.11                         ( controller-1 -> controller-2 )
   * Stop       haproxy:0                              (                 controller-1 )  due to node availability
   * Move       stonith-fence_ipmilan-5254005bdbb5     ( controller-1 -> controller-2 )
 
 Executing Cluster Transition:
   * Pseudo action:   messaging-1_stop_0
   * Pseudo action:   galera-0_stop_0
   * Pseudo action:   galera-2_stop_0
   * Pseudo action:   redis-master_pre_notify_stop_0
   * Pseudo action:   stonith-fence_ipmilan-5254005bdbb5_stop_0
   * Fencing controller-1 (reboot)
   * Pseudo action:   redis_post_notify_stop_0
   * Resource action: redis           notify on controller-0
   * Resource action: redis           notify on controller-2
   * Pseudo action:   redis-master_confirmed-pre_notify_stop_0
   * Pseudo action:   redis-master_stop_0
   * Pseudo action:   haproxy-clone_stop_0
   * Fencing galera-2 (reboot)
   * Fencing messaging-1 (reboot)
   * Resource action: galera-0        start on controller-2
   * Pseudo action:   rabbitmq_post_notify_stop_0
   * Pseudo action:   rabbitmq-clone_stop_0
   * Resource action: galera          monitor=10000 on galera-0
   * Pseudo action:   redis_stop_0
   * Pseudo action:   redis-master_stopped_0
   * Pseudo action:   haproxy_stop_0
   * Pseudo action:   haproxy-clone_stopped_0
   * Resource action: stonith-fence_ipmilan-5254005bdbb5 start on controller-2
   * Resource action: galera-0        monitor=20000 on controller-2
   * Resource action: rabbitmq        notify on messaging-2
   * Resource action: rabbitmq        notify on messaging-0
   * Pseudo action:   rabbitmq_notified_0
   * Pseudo action:   rabbitmq_stop_0
   * Pseudo action:   rabbitmq-clone_stopped_0
   * Pseudo action:   redis-master_post_notify_stopped_0
   * Pseudo action:   ip-172.17.1.14_stop_0
   * Pseudo action:   ip-172.17.1.17_stop_0
   * Pseudo action:   ip-172.17.4.11_stop_0
   * Resource action: stonith-fence_ipmilan-5254005bdbb5 monitor=60000 on controller-2
   * Resource action: redis           notify on controller-0
   * Resource action: redis           notify on controller-2
   * Pseudo action:   redis-master_confirmed-post_notify_stopped_0
   * Resource action: ip-172.17.1.14  start on controller-2
   * Resource action: ip-172.17.1.17  start on controller-2
   * Resource action: ip-172.17.4.11  start on controller-2
   * Pseudo action:   redis_notified_0
   * Resource action: ip-172.17.1.14  monitor=10000 on controller-2
   * Resource action: ip-172.17.1.17  monitor=10000 on controller-2
   * Resource action: ip-172.17.4.11  monitor=10000 on controller-2
 Using the original execution date of: 2017-05-03 13:33:24Z
 
 Revised Cluster Status:
   * Node List:
     * Online: [ controller-0 controller-2 ]
     * OFFLINE: [ controller-1 ]
     * RemoteOnline: [ galera-0 galera-1 messaging-0 messaging-2 ]
     * RemoteOFFLINE: [ galera-2 messaging-1 ]
 
   * Full List of Resources:
     * messaging-0	(ocf:pacemaker:remote):	 Started controller-0
     * messaging-1	(ocf:pacemaker:remote):	 Stopped
     * messaging-2	(ocf:pacemaker:remote):	 Started controller-0
     * galera-0	(ocf:pacemaker:remote):	 Started controller-2
     * galera-1	(ocf:pacemaker:remote):	 Started controller-0
     * galera-2	(ocf:pacemaker:remote):	 Stopped
     * Clone Set: rabbitmq-clone [rabbitmq]:
       * Started: [ messaging-0 messaging-2 ]
       * Stopped: [ controller-0 controller-1 controller-2 galera-0 galera-1 galera-2 messaging-1 ]
     * Clone Set: galera-master [galera] (promotable):
       * Promoted: [ galera-0 galera-1 ]
       * Stopped: [ controller-0 controller-1 controller-2 galera-2 messaging-0 messaging-1 messaging-2 ]
     * Clone Set: redis-master [redis] (promotable):
       * Promoted: [ controller-0 ]
       * Unpromoted: [ controller-2 ]
       * Stopped: [ controller-1 galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
     * ip-192.168.24.6	(ocf:heartbeat:IPaddr2):	 Started controller-0
     * ip-10.0.0.102	(ocf:heartbeat:IPaddr2):	 Started controller-0
     * ip-172.17.1.14	(ocf:heartbeat:IPaddr2):	 Started controller-2
     * ip-172.17.1.17	(ocf:heartbeat:IPaddr2):	 Started controller-2
     * ip-172.17.3.15	(ocf:heartbeat:IPaddr2):	 Started controller-0
     * ip-172.17.4.11	(ocf:heartbeat:IPaddr2):	 Started controller-2
     * Clone Set: haproxy-clone [haproxy]:
       * Started: [ controller-0 controller-2 ]
       * Stopped: [ controller-1 galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
     * openstack-cinder-volume	(systemd:openstack-cinder-volume):	 Started controller-0
     * stonith-fence_ipmilan-525400bbf613	(stonith:fence_ipmilan):	 Started controller-0
     * stonith-fence_ipmilan-525400b4f6bd	(stonith:fence_ipmilan):	 Started controller-0
     * stonith-fence_ipmilan-5254005bdbb5	(stonith:fence_ipmilan):	 Started controller-2
diff --git a/cts/scheduler/summary/remote-recovery.summary b/cts/scheduler/summary/remote-recovery.summary
index fd6900dd96..a9723bc5e1 100644
--- a/cts/scheduler/summary/remote-recovery.summary
+++ b/cts/scheduler/summary/remote-recovery.summary
@@ -1,132 +1,132 @@
 Using the original execution date of: 2017-05-03 13:33:24Z
 Current cluster status:
   * Node List:
     * Node controller-1: UNCLEAN (offline)
     * Online: [ controller-0 controller-2 ]
     * RemoteOnline: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
 
   * Full List of Resources:
     * messaging-0	(ocf:pacemaker:remote):	 Started controller-0
     * messaging-1	(ocf:pacemaker:remote):	 Started controller-1 (UNCLEAN)
     * messaging-2	(ocf:pacemaker:remote):	 Started controller-0
     * galera-0	(ocf:pacemaker:remote):	 Started controller-1 (UNCLEAN)
     * galera-1	(ocf:pacemaker:remote):	 Started controller-0
     * galera-2	(ocf:pacemaker:remote):	 Started controller-1 (UNCLEAN)
     * Clone Set: rabbitmq-clone [rabbitmq]:
       * Started: [ messaging-0 messaging-1 messaging-2 ]
       * Stopped: [ controller-0 controller-1 controller-2 galera-0 galera-1 galera-2 ]
     * Clone Set: galera-master [galera] (promotable):
       * Promoted: [ galera-0 galera-1 galera-2 ]
       * Stopped: [ controller-0 controller-1 controller-2 messaging-0 messaging-1 messaging-2 ]
     * Clone Set: redis-master [redis] (promotable):
       * redis	(ocf:heartbeat:redis):	 Unpromoted controller-1 (UNCLEAN)
       * Promoted: [ controller-0 ]
       * Unpromoted: [ controller-2 ]
       * Stopped: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
     * ip-192.168.24.6	(ocf:heartbeat:IPaddr2):	 Started controller-0
     * ip-10.0.0.102	(ocf:heartbeat:IPaddr2):	 Started controller-0
     * ip-172.17.1.14	(ocf:heartbeat:IPaddr2):	 Started controller-1 (UNCLEAN)
     * ip-172.17.1.17	(ocf:heartbeat:IPaddr2):	 Started controller-1 (UNCLEAN)
     * ip-172.17.3.15	(ocf:heartbeat:IPaddr2):	 Started controller-0
     * ip-172.17.4.11	(ocf:heartbeat:IPaddr2):	 Started controller-1 (UNCLEAN)
     * Clone Set: haproxy-clone [haproxy]:
       * haproxy	(systemd:haproxy):	 Started controller-1 (UNCLEAN)
       * Started: [ controller-0 controller-2 ]
       * Stopped: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
     * openstack-cinder-volume	(systemd:openstack-cinder-volume):	 Started controller-0
     * stonith-fence_ipmilan-525400bbf613	(stonith:fence_ipmilan):	 Started controller-0
     * stonith-fence_ipmilan-525400b4f6bd	(stonith:fence_ipmilan):	 Started controller-0
     * stonith-fence_ipmilan-5254005bdbb5	(stonith:fence_ipmilan):	 Started controller-1 (UNCLEAN)
 
 Transition Summary:
   * Fence (reboot) controller-1 'peer is no longer part of the cluster'
   * Move       messaging-1                            ( controller-1 -> controller-2 )
   * Move       galera-0                               ( controller-1 -> controller-2 )
   * Move       galera-2                               ( controller-1 -> controller-2 )
-  * Stop       redis:0                                (      Unpromoted controller-1 )  due to node availability
+  * Stop       redis:0                                (           Unpromoted controller-1 )  due to node availability
   * Move       ip-172.17.1.14                         ( controller-1 -> controller-2 )
   * Move       ip-172.17.1.17                         ( controller-1 -> controller-2 )
   * Move       ip-172.17.4.11                         ( controller-1 -> controller-2 )
   * Stop       haproxy:0                              (                 controller-1 )  due to node availability
   * Move       stonith-fence_ipmilan-5254005bdbb5     ( controller-1 -> controller-2 )
 
 Executing Cluster Transition:
   * Pseudo action:   messaging-1_stop_0
   * Pseudo action:   galera-0_stop_0
   * Pseudo action:   galera-2_stop_0
   * Pseudo action:   redis-master_pre_notify_stop_0
   * Pseudo action:   stonith-fence_ipmilan-5254005bdbb5_stop_0
   * Fencing controller-1 (reboot)
   * Resource action: messaging-1     start on controller-2
   * Resource action: galera-0        start on controller-2
   * Resource action: galera-2        start on controller-2
   * Resource action: rabbitmq        monitor=10000 on messaging-1
   * Resource action: galera          monitor=10000 on galera-2
   * Resource action: galera          monitor=10000 on galera-0
   * Pseudo action:   redis_post_notify_stop_0
   * Resource action: redis           notify on controller-0
   * Resource action: redis           notify on controller-2
   * Pseudo action:   redis-master_confirmed-pre_notify_stop_0
   * Pseudo action:   redis-master_stop_0
   * Pseudo action:   haproxy-clone_stop_0
   * Resource action: stonith-fence_ipmilan-5254005bdbb5 start on controller-2
   * Resource action: messaging-1     monitor=20000 on controller-2
   * Resource action: galera-0        monitor=20000 on controller-2
   * Resource action: galera-2        monitor=20000 on controller-2
   * Pseudo action:   redis_stop_0
   * Pseudo action:   redis-master_stopped_0
   * Pseudo action:   haproxy_stop_0
   * Pseudo action:   haproxy-clone_stopped_0
   * Resource action: stonith-fence_ipmilan-5254005bdbb5 monitor=60000 on controller-2
   * Pseudo action:   redis-master_post_notify_stopped_0
   * Pseudo action:   ip-172.17.1.14_stop_0
   * Pseudo action:   ip-172.17.1.17_stop_0
   * Pseudo action:   ip-172.17.4.11_stop_0
   * Resource action: redis           notify on controller-0
   * Resource action: redis           notify on controller-2
   * Pseudo action:   redis-master_confirmed-post_notify_stopped_0
   * Resource action: ip-172.17.1.14  start on controller-2
   * Resource action: ip-172.17.1.17  start on controller-2
   * Resource action: ip-172.17.4.11  start on controller-2
   * Pseudo action:   redis_notified_0
   * Resource action: ip-172.17.1.14  monitor=10000 on controller-2
   * Resource action: ip-172.17.1.17  monitor=10000 on controller-2
   * Resource action: ip-172.17.4.11  monitor=10000 on controller-2
 Using the original execution date of: 2017-05-03 13:33:24Z
 
 Revised Cluster Status:
   * Node List:
     * Online: [ controller-0 controller-2 ]
     * OFFLINE: [ controller-1 ]
     * RemoteOnline: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
 
   * Full List of Resources:
     * messaging-0	(ocf:pacemaker:remote):	 Started controller-0
     * messaging-1	(ocf:pacemaker:remote):	 Started controller-2
     * messaging-2	(ocf:pacemaker:remote):	 Started controller-0
     * galera-0	(ocf:pacemaker:remote):	 Started controller-2
     * galera-1	(ocf:pacemaker:remote):	 Started controller-0
     * galera-2	(ocf:pacemaker:remote):	 Started controller-2
     * Clone Set: rabbitmq-clone [rabbitmq]:
       * Started: [ messaging-0 messaging-1 messaging-2 ]
       * Stopped: [ controller-0 controller-1 controller-2 galera-0 galera-1 galera-2 ]
     * Clone Set: galera-master [galera] (promotable):
       * Promoted: [ galera-0 galera-1 galera-2 ]
       * Stopped: [ controller-0 controller-1 controller-2 messaging-0 messaging-1 messaging-2 ]
     * Clone Set: redis-master [redis] (promotable):
       * Promoted: [ controller-0 ]
       * Unpromoted: [ controller-2 ]
       * Stopped: [ controller-1 galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
     * ip-192.168.24.6	(ocf:heartbeat:IPaddr2):	 Started controller-0
     * ip-10.0.0.102	(ocf:heartbeat:IPaddr2):	 Started controller-0
     * ip-172.17.1.14	(ocf:heartbeat:IPaddr2):	 Started controller-2
     * ip-172.17.1.17	(ocf:heartbeat:IPaddr2):	 Started controller-2
     * ip-172.17.3.15	(ocf:heartbeat:IPaddr2):	 Started controller-0
     * ip-172.17.4.11	(ocf:heartbeat:IPaddr2):	 Started controller-2
     * Clone Set: haproxy-clone [haproxy]:
       * Started: [ controller-0 controller-2 ]
       * Stopped: [ controller-1 galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
     * openstack-cinder-volume	(systemd:openstack-cinder-volume):	 Started controller-0
     * stonith-fence_ipmilan-525400bbf613	(stonith:fence_ipmilan):	 Started controller-0
     * stonith-fence_ipmilan-525400b4f6bd	(stonith:fence_ipmilan):	 Started controller-0
     * stonith-fence_ipmilan-5254005bdbb5	(stonith:fence_ipmilan):	 Started controller-2
diff --git a/cts/scheduler/summary/rsc-sets-promoted.summary b/cts/scheduler/summary/rsc-sets-promoted.summary
index a45e4b16e8..3db15881a0 100644
--- a/cts/scheduler/summary/rsc-sets-promoted.summary
+++ b/cts/scheduler/summary/rsc-sets-promoted.summary
@@ -1,49 +1,49 @@
 Current cluster status:
   * Node List:
     * Node node1: standby (with active resources)
     * Online: [ node2 ]
 
   * Full List of Resources:
     * Clone Set: ms-rsc [rsc] (promotable):
       * Promoted: [ node1 ]
       * Unpromoted: [ node2 ]
     * rsc1	(ocf:pacemaker:Dummy):	 Started node1
     * rsc2	(ocf:pacemaker:Dummy):	 Started node1
     * rsc3	(ocf:pacemaker:Dummy):	 Started node1
 
 Transition Summary:
-  * Stop       rsc:0   (               Promoted node1 )  due to node availability
+  * Stop       rsc:0   (          Promoted node1 )  due to node availability
   * Promote    rsc:1   ( Unpromoted -> Promoted node2 )
   * Move       rsc1    (        node1 -> node2 )
   * Move       rsc2    (        node1 -> node2 )
   * Move       rsc3    (        node1 -> node2 )
 
 Executing Cluster Transition:
   * Resource action: rsc1            stop on node1
   * Resource action: rsc2            stop on node1
   * Resource action: rsc3            stop on node1
   * Pseudo action:   ms-rsc_demote_0
   * Resource action: rsc:0           demote on node1
   * Pseudo action:   ms-rsc_demoted_0
   * Pseudo action:   ms-rsc_stop_0
   * Resource action: rsc:0           stop on node1
   * Pseudo action:   ms-rsc_stopped_0
   * Pseudo action:   ms-rsc_promote_0
   * Resource action: rsc:1           promote on node2
   * Pseudo action:   ms-rsc_promoted_0
   * Resource action: rsc1            start on node2
   * Resource action: rsc2            start on node2
   * Resource action: rsc3            start on node2
 
 Revised Cluster Status:
   * Node List:
     * Node node1: standby
     * Online: [ node2 ]
 
   * Full List of Resources:
     * Clone Set: ms-rsc [rsc] (promotable):
       * Promoted: [ node2 ]
       * Stopped: [ node1 ]
     * rsc1	(ocf:pacemaker:Dummy):	 Started node2
     * rsc2	(ocf:pacemaker:Dummy):	 Started node2
     * rsc3	(ocf:pacemaker:Dummy):	 Started node2
diff --git a/cts/scheduler/summary/ticket-promoted-14.summary b/cts/scheduler/summary/ticket-promoted-14.summary
index ee8912b2e9..80ff84346b 100644
--- a/cts/scheduler/summary/ticket-promoted-14.summary
+++ b/cts/scheduler/summary/ticket-promoted-14.summary
@@ -1,31 +1,31 @@
 Current cluster status:
   * Node List:
     * Online: [ node1 node2 ]
 
   * Full List of Resources:
     * rsc_stonith	(stonith:null):	 Started node1
     * Clone Set: ms1 [rsc1] (promotable):
       * Promoted: [ node1 ]
       * Unpromoted: [ node2 ]
 
 Transition Summary:
-  * Stop       rsc1:0     (   Promoted node1 )  due to node availability
-  * Stop       rsc1:1     ( Unpromoted node2 )  due to node availability
+  * Stop       rsc1:0     ( Promoted node1 )  due to node availability
+  * Stop       rsc1:1     (  Unpromoted node2 )  due to node availability
 
 Executing Cluster Transition:
   * Pseudo action:   ms1_demote_0
   * Resource action: rsc1:1          demote on node1
   * Pseudo action:   ms1_demoted_0
   * Pseudo action:   ms1_stop_0
   * Resource action: rsc1:1          stop on node1
   * Resource action: rsc1:0          stop on node2
   * Pseudo action:   ms1_stopped_0
 
 Revised Cluster Status:
   * Node List:
     * Online: [ node1 node2 ]
 
   * Full List of Resources:
     * rsc_stonith	(stonith:null):	 Started node1
     * Clone Set: ms1 [rsc1] (promotable):
       * Stopped: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-promoted-15.summary b/cts/scheduler/summary/ticket-promoted-15.summary
index ee8912b2e9..80ff84346b 100644
--- a/cts/scheduler/summary/ticket-promoted-15.summary
+++ b/cts/scheduler/summary/ticket-promoted-15.summary
@@ -1,31 +1,31 @@
 Current cluster status:
   * Node List:
     * Online: [ node1 node2 ]
 
   * Full List of Resources:
     * rsc_stonith	(stonith:null):	 Started node1
     * Clone Set: ms1 [rsc1] (promotable):
       * Promoted: [ node1 ]
       * Unpromoted: [ node2 ]
 
 Transition Summary:
-  * Stop       rsc1:0     (   Promoted node1 )  due to node availability
-  * Stop       rsc1:1     ( Unpromoted node2 )  due to node availability
+  * Stop       rsc1:0     ( Promoted node1 )  due to node availability
+  * Stop       rsc1:1     (  Unpromoted node2 )  due to node availability
 
 Executing Cluster Transition:
   * Pseudo action:   ms1_demote_0
   * Resource action: rsc1:1          demote on node1
   * Pseudo action:   ms1_demoted_0
   * Pseudo action:   ms1_stop_0
   * Resource action: rsc1:1          stop on node1
   * Resource action: rsc1:0          stop on node2
   * Pseudo action:   ms1_stopped_0
 
 Revised Cluster Status:
   * Node List:
     * Online: [ node1 node2 ]
 
   * Full List of Resources:
     * rsc_stonith	(stonith:null):	 Started node1
     * Clone Set: ms1 [rsc1] (promotable):
       * Stopped: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-promoted-21.summary b/cts/scheduler/summary/ticket-promoted-21.summary
index f116a2eea0..788573facb 100644
--- a/cts/scheduler/summary/ticket-promoted-21.summary
+++ b/cts/scheduler/summary/ticket-promoted-21.summary
@@ -1,36 +1,36 @@
 Current cluster status:
   * Node List:
     * Online: [ node1 node2 ]
 
   * Full List of Resources:
     * rsc_stonith	(stonith:null):	 Started node1
     * Clone Set: ms1 [rsc1] (promotable):
       * Promoted: [ node1 ]
       * Unpromoted: [ node2 ]
 
 Transition Summary:
   * Fence (reboot) node1 'deadman ticket was lost'
   * Move       rsc_stonith     ( node1 -> node2 )
-  * Stop       rsc1:0          ( Promoted node1 )  due to node availability
+  * Stop       rsc1:0          (   Promoted node1 )  due to node availability
 
 Executing Cluster Transition:
   * Pseudo action:   rsc_stonith_stop_0
   * Pseudo action:   ms1_demote_0
   * Fencing node1 (reboot)
   * Resource action: rsc_stonith     start on node2
   * Pseudo action:   rsc1:1_demote_0
   * Pseudo action:   ms1_demoted_0
   * Pseudo action:   ms1_stop_0
   * Pseudo action:   rsc1:1_stop_0
   * Pseudo action:   ms1_stopped_0
 
 Revised Cluster Status:
   * Node List:
     * Online: [ node2 ]
     * OFFLINE: [ node1 ]
 
   * Full List of Resources:
     * rsc_stonith	(stonith:null):	 Started node2
     * Clone Set: ms1 [rsc1] (promotable):
       * Unpromoted: [ node2 ]
       * Stopped: [ node1 ]
diff --git a/cts/scheduler/summary/ticket-promoted-3.summary b/cts/scheduler/summary/ticket-promoted-3.summary
index ee8912b2e9..80ff84346b 100644
--- a/cts/scheduler/summary/ticket-promoted-3.summary
+++ b/cts/scheduler/summary/ticket-promoted-3.summary
@@ -1,31 +1,31 @@
 Current cluster status:
   * Node List:
     * Online: [ node1 node2 ]
 
   * Full List of Resources:
     * rsc_stonith	(stonith:null):	 Started node1
     * Clone Set: ms1 [rsc1] (promotable):
       * Promoted: [ node1 ]
       * Unpromoted: [ node2 ]
 
 Transition Summary:
-  * Stop       rsc1:0     (   Promoted node1 )  due to node availability
-  * Stop       rsc1:1     ( Unpromoted node2 )  due to node availability
+  * Stop       rsc1:0     ( Promoted node1 )  due to node availability
+  * Stop       rsc1:1     (  Unpromoted node2 )  due to node availability
 
 Executing Cluster Transition:
   * Pseudo action:   ms1_demote_0
   * Resource action: rsc1:1          demote on node1
   * Pseudo action:   ms1_demoted_0
   * Pseudo action:   ms1_stop_0
   * Resource action: rsc1:1          stop on node1
   * Resource action: rsc1:0          stop on node2
   * Pseudo action:   ms1_stopped_0
 
 Revised Cluster Status:
   * Node List:
     * Online: [ node1 node2 ]
 
   * Full List of Resources:
     * rsc_stonith	(stonith:null):	 Started node1
     * Clone Set: ms1 [rsc1] (promotable):
       * Stopped: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-promoted-9.summary b/cts/scheduler/summary/ticket-promoted-9.summary
index f116a2eea0..788573facb 100644
--- a/cts/scheduler/summary/ticket-promoted-9.summary
+++ b/cts/scheduler/summary/ticket-promoted-9.summary
@@ -1,36 +1,36 @@
 Current cluster status:
   * Node List:
     * Online: [ node1 node2 ]
 
   * Full List of Resources:
     * rsc_stonith	(stonith:null):	 Started node1
     * Clone Set: ms1 [rsc1] (promotable):
       * Promoted: [ node1 ]
       * Unpromoted: [ node2 ]
 
 Transition Summary:
   * Fence (reboot) node1 'deadman ticket was lost'
   * Move       rsc_stonith     ( node1 -> node2 )
-  * Stop       rsc1:0          ( Promoted node1 )  due to node availability
+  * Stop       rsc1:0          (   Promoted node1 )  due to node availability
 
 Executing Cluster Transition:
   * Pseudo action:   rsc_stonith_stop_0
   * Pseudo action:   ms1_demote_0
   * Fencing node1 (reboot)
   * Resource action: rsc_stonith     start on node2
   * Pseudo action:   rsc1:1_demote_0
   * Pseudo action:   ms1_demoted_0
   * Pseudo action:   ms1_stop_0
   * Pseudo action:   rsc1:1_stop_0
   * Pseudo action:   ms1_stopped_0
 
 Revised Cluster Status:
   * Node List:
     * Online: [ node2 ]
     * OFFLINE: [ node1 ]
 
   * Full List of Resources:
     * rsc_stonith	(stonith:null):	 Started node2
     * Clone Set: ms1 [rsc1] (promotable):
       * Unpromoted: [ node2 ]
       * Stopped: [ node1 ]
diff --git a/cts/scheduler/summary/whitebox-ms-ordering-move.summary b/cts/scheduler/summary/whitebox-ms-ordering-move.summary
index 6a5fb6eaeb..c9b13e032d 100644
--- a/cts/scheduler/summary/whitebox-ms-ordering-move.summary
+++ b/cts/scheduler/summary/whitebox-ms-ordering-move.summary
@@ -1,107 +1,107 @@
 Current cluster status:
   * Node List:
     * Online: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]
     * GuestOnline: [ lxc1@rhel7-1 lxc2@rhel7-1 ]
 
   * Full List of Resources:
     * Fencing	(stonith:fence_xvm):	 Started rhel7-3
     * FencingPass	(stonith:fence_dummy):	 Started rhel7-4
     * FencingFail	(stonith:fence_dummy):	 Started rhel7-5
     * rsc_rhel7-1	(ocf:heartbeat:IPaddr2):	 Started rhel7-1
     * rsc_rhel7-2	(ocf:heartbeat:IPaddr2):	 Started rhel7-2
     * rsc_rhel7-3	(ocf:heartbeat:IPaddr2):	 Started rhel7-3
     * rsc_rhel7-4	(ocf:heartbeat:IPaddr2):	 Started rhel7-4
     * rsc_rhel7-5	(ocf:heartbeat:IPaddr2):	 Started rhel7-5
     * migrator	(ocf:pacemaker:Dummy):	 Started rhel7-4
     * Clone Set: Connectivity [ping-1]:
       * Started: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]
       * Stopped: [ lxc1 lxc2 ]
     * Clone Set: master-1 [stateful-1] (promotable):
       * Promoted: [ rhel7-3 ]
       * Unpromoted: [ rhel7-1 rhel7-2 rhel7-4 rhel7-5 ]
     * Resource Group: group-1:
       * r192.168.122.207	(ocf:heartbeat:IPaddr2):	 Started rhel7-3
       * petulant	(service:DummySD):	 Started rhel7-3
       * r192.168.122.208	(ocf:heartbeat:IPaddr2):	 Started rhel7-3
     * lsb-dummy	(lsb:/usr/share/pacemaker/tests/cts/LSBDummy):	 Started rhel7-3
     * container1	(ocf:heartbeat:VirtualDomain):	 Started rhel7-1
     * container2	(ocf:heartbeat:VirtualDomain):	 Started rhel7-1
     * Clone Set: lxc-ms-master [lxc-ms] (promotable):
       * Promoted: [ lxc1 ]
       * Unpromoted: [ lxc2 ]
 
 Transition Summary:
   * Move       container1     ( rhel7-1 -> rhel7-2 )
-  * Restart    lxc-ms:0       (      Promoted lxc1 )  due to required container1 start
+  * Restart    lxc-ms:0       (        Promoted lxc1 )  due to required container1 start
   * Move       lxc1           ( rhel7-1 -> rhel7-2 )
 
 Executing Cluster Transition:
   * Resource action: rsc_rhel7-1     monitor on lxc2
   * Resource action: rsc_rhel7-2     monitor on lxc2
   * Resource action: rsc_rhel7-3     monitor on lxc2
   * Resource action: rsc_rhel7-4     monitor on lxc2
   * Resource action: rsc_rhel7-5     monitor on lxc2
   * Resource action: migrator        monitor on lxc2
   * Resource action: ping-1          monitor on lxc2
   * Resource action: stateful-1      monitor on lxc2
   * Resource action: r192.168.122.207 monitor on lxc2
   * Resource action: petulant        monitor on lxc2
   * Resource action: r192.168.122.208 monitor on lxc2
   * Resource action: lsb-dummy       monitor on lxc2
   * Pseudo action:   lxc-ms-master_demote_0
   * Resource action: lxc1            monitor on rhel7-5
   * Resource action: lxc1            monitor on rhel7-4
   * Resource action: lxc1            monitor on rhel7-3
   * Resource action: lxc1            monitor on rhel7-2
   * Resource action: lxc2            monitor on rhel7-5
   * Resource action: lxc2            monitor on rhel7-4
   * Resource action: lxc2            monitor on rhel7-3
   * Resource action: lxc2            monitor on rhel7-2
   * Resource action: lxc-ms          demote on lxc1
   * Pseudo action:   lxc-ms-master_demoted_0
   * Pseudo action:   lxc-ms-master_stop_0
   * Resource action: lxc-ms          stop on lxc1
   * Pseudo action:   lxc-ms-master_stopped_0
   * Pseudo action:   lxc-ms-master_start_0
   * Resource action: lxc1            stop on rhel7-1
   * Resource action: container1      stop on rhel7-1
   * Resource action: container1      start on rhel7-2
   * Resource action: lxc1            start on rhel7-2
   * Resource action: lxc-ms          start on lxc1
   * Pseudo action:   lxc-ms-master_running_0
   * Resource action: lxc1            monitor=30000 on rhel7-2
   * Pseudo action:   lxc-ms-master_promote_0
   * Resource action: lxc-ms          promote on lxc1
   * Pseudo action:   lxc-ms-master_promoted_0
 
 Revised Cluster Status:
   * Node List:
     * Online: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]
     * GuestOnline: [ lxc1@rhel7-2 lxc2@rhel7-1 ]
 
   * Full List of Resources:
     * Fencing	(stonith:fence_xvm):	 Started rhel7-3
     * FencingPass	(stonith:fence_dummy):	 Started rhel7-4
     * FencingFail	(stonith:fence_dummy):	 Started rhel7-5
     * rsc_rhel7-1	(ocf:heartbeat:IPaddr2):	 Started rhel7-1
     * rsc_rhel7-2	(ocf:heartbeat:IPaddr2):	 Started rhel7-2
     * rsc_rhel7-3	(ocf:heartbeat:IPaddr2):	 Started rhel7-3
     * rsc_rhel7-4	(ocf:heartbeat:IPaddr2):	 Started rhel7-4
     * rsc_rhel7-5	(ocf:heartbeat:IPaddr2):	 Started rhel7-5
     * migrator	(ocf:pacemaker:Dummy):	 Started rhel7-4
     * Clone Set: Connectivity [ping-1]:
       * Started: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]
       * Stopped: [ lxc1 lxc2 ]
     * Clone Set: master-1 [stateful-1] (promotable):
       * Promoted: [ rhel7-3 ]
       * Unpromoted: [ rhel7-1 rhel7-2 rhel7-4 rhel7-5 ]
     * Resource Group: group-1:
       * r192.168.122.207	(ocf:heartbeat:IPaddr2):	 Started rhel7-3
       * petulant	(service:DummySD):	 Started rhel7-3
       * r192.168.122.208	(ocf:heartbeat:IPaddr2):	 Started rhel7-3
     * lsb-dummy	(lsb:/usr/share/pacemaker/tests/cts/LSBDummy):	 Started rhel7-3
     * container1	(ocf:heartbeat:VirtualDomain):	 Started rhel7-2
     * container2	(ocf:heartbeat:VirtualDomain):	 Started rhel7-1
     * Clone Set: lxc-ms-master [lxc-ms] (promotable):
       * Promoted: [ lxc1 ]
       * Unpromoted: [ lxc2 ]
diff --git a/cts/scheduler/summary/whitebox-ms-ordering.summary b/cts/scheduler/summary/whitebox-ms-ordering.summary
index 921f6d068d..4d23221fa6 100644
--- a/cts/scheduler/summary/whitebox-ms-ordering.summary
+++ b/cts/scheduler/summary/whitebox-ms-ordering.summary
@@ -1,73 +1,73 @@
 Current cluster status:
   * Node List:
     * Online: [ 18node1 18node2 18node3 ]
 
   * Full List of Resources:
     * shooter	(stonith:fence_xvm):	 Started 18node2
     * container1	(ocf:heartbeat:VirtualDomain):	 FAILED
     * container2	(ocf:heartbeat:VirtualDomain):	 FAILED
     * Clone Set: lxc-ms-master [lxc-ms] (promotable):
       * Stopped: [ 18node1 18node2 18node3 ]
 
 Transition Summary:
   * Fence (reboot) lxc2 (resource: container2) 'guest is unclean'
   * Fence (reboot) lxc1 (resource: container1) 'guest is unclean'
   * Start      container1     (     18node1 )
   * Start      container2     (     18node1 )
-  * Recover    lxc-ms:0       (   Promoted lxc1 )
-  * Recover    lxc-ms:1       ( Unpromoted lxc2 )
+  * Recover    lxc-ms:0       ( Promoted lxc1 )
+  * Recover    lxc-ms:1       (  Unpromoted lxc2 )
   * Start      lxc1           (     18node1 )
   * Start      lxc2           (     18node1 )
 
 Executing Cluster Transition:
   * Resource action: container1      monitor on 18node3
   * Resource action: container1      monitor on 18node2
   * Resource action: container1      monitor on 18node1
   * Resource action: container2      monitor on 18node3
   * Resource action: container2      monitor on 18node2
   * Resource action: container2      monitor on 18node1
   * Resource action: lxc-ms          monitor on 18node3
   * Resource action: lxc-ms          monitor on 18node2
   * Resource action: lxc-ms          monitor on 18node1
   * Pseudo action:   lxc-ms-master_demote_0
   * Resource action: lxc1            monitor on 18node3
   * Resource action: lxc1            monitor on 18node2
   * Resource action: lxc1            monitor on 18node1
   * Resource action: lxc2            monitor on 18node3
   * Resource action: lxc2            monitor on 18node2
   * Resource action: lxc2            monitor on 18node1
   * Pseudo action:   stonith-lxc2-reboot on lxc2
   * Pseudo action:   stonith-lxc1-reboot on lxc1
   * Resource action: container1      start on 18node1
   * Resource action: container2      start on 18node1
   * Pseudo action:   lxc-ms_demote_0
   * Pseudo action:   lxc-ms-master_demoted_0
   * Pseudo action:   lxc-ms-master_stop_0
   * Resource action: lxc1            start on 18node1
   * Resource action: lxc2            start on 18node1
   * Pseudo action:   lxc-ms_stop_0
   * Pseudo action:   lxc-ms_stop_0
   * Pseudo action:   lxc-ms-master_stopped_0
   * Pseudo action:   lxc-ms-master_start_0
   * Resource action: lxc1            monitor=30000 on 18node1
   * Resource action: lxc2            monitor=30000 on 18node1
   * Resource action: lxc-ms          start on lxc1
   * Resource action: lxc-ms          start on lxc2
   * Pseudo action:   lxc-ms-master_running_0
   * Resource action: lxc-ms          monitor=10000 on lxc2
   * Pseudo action:   lxc-ms-master_promote_0
   * Resource action: lxc-ms          promote on lxc1
   * Pseudo action:   lxc-ms-master_promoted_0
 
 Revised Cluster Status:
   * Node List:
     * Online: [ 18node1 18node2 18node3 ]
     * GuestOnline: [ lxc1@18node1 lxc2@18node1 ]
 
   * Full List of Resources:
     * shooter	(stonith:fence_xvm):	 Started 18node2
     * container1	(ocf:heartbeat:VirtualDomain):	 Started 18node1
     * container2	(ocf:heartbeat:VirtualDomain):	 Started 18node1
     * Clone Set: lxc-ms-master [lxc-ms] (promotable):
       * Promoted: [ lxc1 ]
       * Unpromoted: [ lxc2 ]
diff --git a/cts/scheduler/summary/whitebox-orphan-ms.summary b/cts/scheduler/summary/whitebox-orphan-ms.summary
index 0d0007dcc6..7e1b45b272 100644
--- a/cts/scheduler/summary/whitebox-orphan-ms.summary
+++ b/cts/scheduler/summary/whitebox-orphan-ms.summary
@@ -1,87 +1,87 @@
 Current cluster status:
   * Node List:
     * Online: [ 18node1 18node2 18node3 ]
     * GuestOnline: [ lxc1@18node1 lxc2@18node1 ]
 
   * Full List of Resources:
     * Fencing	(stonith:fence_xvm):	 Started 18node2
     * FencingPass	(stonith:fence_dummy):	 Started 18node3
     * FencingFail	(stonith:fence_dummy):	 Started 18node3
     * rsc_18node1	(ocf:heartbeat:IPaddr2):	 Started 18node1
     * rsc_18node2	(ocf:heartbeat:IPaddr2):	 Started 18node2
     * rsc_18node3	(ocf:heartbeat:IPaddr2):	 Started 18node3
     * migrator	(ocf:pacemaker:Dummy):	 Started 18node1
     * Clone Set: Connectivity [ping-1]:
       * Started: [ 18node1 18node2 18node3 ]
     * Clone Set: master-1 [stateful-1] (promotable):
       * Promoted: [ 18node1 ]
       * Unpromoted: [ 18node2 18node3 ]
     * Resource Group: group-1:
       * r192.168.122.87	(ocf:heartbeat:IPaddr2):	 Started 18node1
       * r192.168.122.88	(ocf:heartbeat:IPaddr2):	 Started 18node1
       * r192.168.122.89	(ocf:heartbeat:IPaddr2):	 Started 18node1
     * lsb-dummy	(lsb:/usr/share/pacemaker/tests/cts/LSBDummy):	 Started 18node1
     * container2	(ocf:heartbeat:VirtualDomain):	 ORPHANED Started 18node1
     * lxc1	(ocf:pacemaker:remote):	 ORPHANED Started 18node1
     * lxc-ms	(ocf:pacemaker:Stateful):	 ORPHANED Promoted [ lxc1 lxc2 ]
     * lxc2	(ocf:pacemaker:remote):	 ORPHANED Started 18node1
     * container1	(ocf:heartbeat:VirtualDomain):	 ORPHANED Started 18node1
 
 Transition Summary:
   * Move       FencingFail     ( 18node3 -> 18node1 )
   * Stop       container2      (            18node1 )  due to node availability
   * Stop       lxc1            (            18node1 )  due to node availability
-  * Stop       lxc-ms          (      Promoted lxc1 )  due to node availability
-  * Stop       lxc-ms          (      Promoted lxc2 )  due to node availability
+  * Stop       lxc-ms          (        Promoted lxc1 )  due to node availability
+  * Stop       lxc-ms          (        Promoted lxc2 )  due to node availability
   * Stop       lxc2            (            18node1 )  due to node availability
   * Stop       container1      (            18node1 )  due to node availability
 
 Executing Cluster Transition:
   * Resource action: FencingFail     stop on 18node3
   * Resource action: lxc-ms          demote on lxc2
   * Resource action: lxc-ms          demote on lxc1
   * Resource action: FencingFail     start on 18node1
   * Resource action: lxc-ms          stop on lxc2
   * Resource action: lxc-ms          stop on lxc1
   * Resource action: lxc-ms          delete on 18node3
   * Resource action: lxc-ms          delete on 18node2
   * Resource action: lxc-ms          delete on 18node1
   * Resource action: lxc2            stop on 18node1
   * Resource action: lxc2            delete on 18node3
   * Resource action: lxc2            delete on 18node2
   * Resource action: lxc2            delete on 18node1
   * Resource action: container2      stop on 18node1
   * Resource action: container2      delete on 18node3
   * Resource action: container2      delete on 18node2
   * Resource action: container2      delete on 18node1
   * Resource action: lxc1            stop on 18node1
   * Resource action: lxc1            delete on 18node3
   * Resource action: lxc1            delete on 18node2
   * Resource action: lxc1            delete on 18node1
   * Resource action: container1      stop on 18node1
   * Resource action: container1      delete on 18node3
   * Resource action: container1      delete on 18node2
   * Resource action: container1      delete on 18node1
 
 Revised Cluster Status:
   * Node List:
     * Online: [ 18node1 18node2 18node3 ]
 
   * Full List of Resources:
     * Fencing	(stonith:fence_xvm):	 Started 18node2
     * FencingPass	(stonith:fence_dummy):	 Started 18node3
     * FencingFail	(stonith:fence_dummy):	 Started 18node1
     * rsc_18node1	(ocf:heartbeat:IPaddr2):	 Started 18node1
     * rsc_18node2	(ocf:heartbeat:IPaddr2):	 Started 18node2
     * rsc_18node3	(ocf:heartbeat:IPaddr2):	 Started 18node3
     * migrator	(ocf:pacemaker:Dummy):	 Started 18node1
     * Clone Set: Connectivity [ping-1]:
       * Started: [ 18node1 18node2 18node3 ]
     * Clone Set: master-1 [stateful-1] (promotable):
       * Promoted: [ 18node1 ]
       * Unpromoted: [ 18node2 18node3 ]
     * Resource Group: group-1:
       * r192.168.122.87	(ocf:heartbeat:IPaddr2):	 Started 18node1
       * r192.168.122.88	(ocf:heartbeat:IPaddr2):	 Started 18node1
       * r192.168.122.89	(ocf:heartbeat:IPaddr2):	 Started 18node1
     * lsb-dummy	(lsb:/usr/share/pacemaker/tests/cts/LSBDummy):	 Started 18node1