diff --git a/doc/README.hb2openais b/doc/README.hb2openais
index 84612d1c9a..99768f3515 100644
--- a/doc/README.hb2openais
+++ b/doc/README.hb2openais
@@ -1,246 +1,274 @@
 Heartbeat to OpenAIS cluster stack conversion
+=============================================
 
 Please read this description entirely before converting to
 OpenAIS. Every possible precaution was taken to preclude
 problems. Still, you should run the conversion only when you
 understood all the steps and the consequences.
 
 You need to know your cluster in detail. The conversion program
 will inform you about changes it makes. It is up to you to verify
 that the changes are meaningful.
 
 Testing the conversion
+----------------------
 
 It is possible (and highly recommended) to test the conversion
 with your heartbeat configuration without making any changes.
 This way you will get acquainted with the process and make sure
 that the conversion is done properly.
 
 Create a test directory and copy ha.cf, logd.cf, cib.xml, and
 hostcache to it:
 
 $ mkdir /tmp/hb2openais-testdir
 $ cp /etc/ha.d/ha.cf /tmp/hb2openais-testdir
 $ cp /var/lib/heartbeat/hostcache /tmp/hb2openais-testdir
 $ cp /etc/logd.cf /tmp/hb2openais-testdir
 $ sudo cp /var/lib/heartbeat/crm/cib.xml /tmp/hb2openais-testdir
 
 Run the test conversion:
 
-$ /usr/lib/heartbeat/hb2openais.sh -T /tmp/hb2openais-testdir -U
+$ /usr/lib/heartbeat/hb2openais.sh -T /tmp/hb2openais-testdir
+
+Here is the script's usage:
+
+usage: hb2openais.sh [-UF] [-u user] [-T directory] [revert]
+
+	-U: skip upgrade the CIB to v1.0
+	-F: force conversion despite it being done beforehand
+	-u user: a user to sudo with (otherwise, you'd
+	         have to run this as root)
+	-T directory: a directory containing ha.cf/logd.cf/cib.xml/hostcache
+	         (use for testing); with this option files are not
+	         copied to other nodes and there are no destructive
+	         commands executed; you may run as unprivileged uid
 
 Note: You can run the test as many times as you want on the same
 test directory. Copy files just once.
 
 Note: The directory where hb2openais.sh resides may be different,
 e.g. /usr/lib64/heartbeat.
 
 Read and verify the resulting openais.conf and cib-out.xml:
 
 $ cd /tmp/hb2openais-testdir
 $ less openais.conf
 $ crm_verify -V -x cib-out.xml
 
 The conversion takes several stages:
 
 1. Generate openais.conf from ha.cf.
 
-2. Removal of the nodes section from the CIB.
+2. Rename node ids.
 
 3. Upgrade of the CIB to Pacemaker v1.0 (optional)
 
 4. Addition of pingd resource.
 
 5. Conversion of ocfs2 filesystem.
 
-6. Replacement of EVMS2 with LVM2.
+6. Conversion of EVMS2 CSM containers to cLVM2 volumes.
+
+7. Replacement of EVMS2 with clvmd.
 
 Conversion from the Heartbeat to OpenAIS cluster stack is
 implemented in hb2openais.sh which is part of the pacemaker
 package.
 
 Prerequisites
+-------------
 
 /etc/ha.d/ha.cf must be equal on all nodes.
 
 /var/lib/heartbeat/crm/cib.xml must be equal on all nodes. This
-should be enforced by the CRM.
-
-Heartbeat must be down on all nodes.
-
-The ocfs2 filesystem and EVMS2 resources must be down.
+should have been enforced by the CRM; users should refrain
+from making manual changes to this file.
 
-It is possible to keep other services running: just put all
-resources into the unmanaged mode before stopping heartbeat.
+The ocfs2 filesystems must not be mounted.
 
 sshd running on all nodes with access allowed for root.
 
 The conversion process
+----------------------
 
 This procedure is supposed to be run on one node only. Although
 the main cluster configuration (the CIB) is automatically
 replicated, there are some files which have to be copied by other
 means. For that to work, we need sshd running on all nodes and
 root access working.
 
 For some operations root privileges are required. Either run
 this script as the root user or, if you have a working sudo
 setup, specify the privileged user (normally root) using the -u
 option:
 
 # /usr/lib/heartbeat/hb2openais.sh -u root
 
-Do not run this procedure on more than one node!
+NB: Do not run this procedure on more than one node!
 
 1. Generate openais.conf from ha.cf.
 
 /etc/ha.d/ha.cf is parsed and /etc/ais/openais.conf
 correspondingly generated.
 
 Whereas heartbeat supports several different communication
 types (broadcast, unicast, multicast), OpenAIS uses only
 multicasting. The conversion tries to create equivalent media,
 but with some network configurations it may produce wrong
 results. Pay particular attention to the "interface"
 sub-directive of the "totem" directive. The openais.conf(5) man
 page is the reference documentation.
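For orientation, a totem stanza for a simple multicast setup might look like the following in openais.conf. The addresses and port here are illustrative placeholders, not values derived from your ha.cf; consult openais.conf(5) for the authoritative description of each directive:

```
totem {
    version: 2
    secauth: on
    interface {
        # ring 0: the primary cluster communication network
        ringnumber: 0
        # network address of the interface to bind to
        bindnetaddr: 192.168.1.0
        # multicast address and port used by this ring
        mcastaddr: 226.94.1.1
        mcastport: 5405
    }
}
```

Verify in particular that bindnetaddr matches the network of the interface heartbeat was using, since this is where an automatic conversion is most likely to guess wrong.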
 
 Make sure that your network supports IP multicasts.
 
 OpenAIS does not support serial communication links.
 
 In addition, an OpenAIS authentication key is generated.
 
-2. Removal of the nodes section from the CIB.
+2. Rename node ids.
 
 Since the nodes UUID are generated by OpenAIS in a different
-manner, the nodes section must be removed. This section is
-automaticaly generated when the cluster is started.
-
-If you had node attributes defined in the nodes section, they
-will have to be manually recreated.
+manner, the id attribute of each node must be changed to its uname.
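The renaming can be sketched as follows; this mirrors what the helper script does with xml.dom.minidom, reduced to a minimal illustration (not the converter itself):

```python
import xml.dom.minidom

def rename_node_ids(cib_xml):
    """Set each <node> element's id attribute to its uname."""
    doc = xml.dom.minidom.parseString(cib_xml)
    for node in doc.getElementsByTagName("node"):
        uname = node.getAttribute("uname")
        if uname:
            # replace the heartbeat-generated UUID with the node name
            node.setAttribute("id", uname)
    return doc.toxml()
```

On cluster start, OpenAIS then associates its own ids with these unames.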
 
 3. Upgrade of the CIB to Pacemaker v1.0 (optional)
 
 There are significant changes introduced in the CIB since
 heartbeat versions before and including 2.1.4 and the previous
 pacemaker stable version 0.6. The new CRM in pacemaker still
 supports the old CIB, but it is recommended to convert to the new
 version. You may do so by passing the -U option to the
 hb2openais.sh program. If this option is not specified, the
 program will still ask if you want to upgrade the CIB to the new
 version.
 
 If you don't convert to the new CIB version, the new crm shell
 and configuration tool will not work.
 
 4. Addition of pingd resource.
 
 In heartbeat the pingd daemon could be controlled by the
 heartbeat itself through the respawn ha.cf directive. Obviously,
 it is not possible anymore, so a pingd resource has to be created
 in the CIB. Furthermore, hosts from the "ping" directives (the
 "ping" nodes) are inserted into the "host_list" pingd resource
 attribute.
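As an illustration, the resulting pingd clone in the (pre-1.0) CIB XML has roughly this shape; the ids and host_list value below are made up, while the real ones are derived from your ha.cf:

```xml
<clone id="pingd-clone">
  <primitive id="pingd" class="ocf" provider="pacemaker" type="pingd">
    <instance_attributes id="pingd_inst_attr">
      <attributes>
        <!-- hosts taken from the "ping" directives in ha.cf -->
        <nvpair id="pingd-host_list" name="host_list"
                value="10.0.0.1 10.0.0.2"/>
      </attributes>
    </instance_attributes>
  </primitive>
</clone>
```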
 
 5. Conversion of ocfs2 filesystem.
 
 The ocfs2 filesystem is closely related to the cluster stack
 used. It must be converted if the stack is changed. The
 conversion script will do this automatically for you. Note that
 for this step it will start the cluster stack. The conversion is
 performed by the tunefs.ocfs2 program:
 
 	tunefs.ocfs2 --update-cluster-stack
 
 For more details on ocfs2 conversion refer to the ocfs2
 documentation.
 
-6. Replacement of EVMS2 with LVM2.
+Skip the following two steps if you don't have EVMS2 CSM
+containers.
+
+6. Conversion of EVMS2 CSM containers to cLVM2 volumes.
+
+All EVMS2 CSM containers found on the system are converted by
+csm-converter (see README.csm-converter for more details).
+
+For volume groups referenced by existing resources in the CIB
+(/dev/evms/<csm-container>/lvm2/<vgname>/<lvname>), new LVM
+resources are created. Order and colocation constraints are
+created between those resources and the new LVM resources to
+ensure proper start/stop order and resource placement.
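The device path translation described above can be sketched in Python; this mirrors the helper's logic in simplified form and is illustrative only:

```python
def evms_path_to_lvm(path):
    """Translate /dev/evms/<container>/lvm2/<vg>/<lv> to /dev/<vg>/<lv>.

    Returns None if the path is not a recognizable EVMS CSM/LVM2 device.
    """
    parts = path.split("/")
    # a full EVMS CSM/LVM2 path has exactly 7 components:
    # ['', 'dev', 'evms', <container>, 'lvm2', <vg>, <lv>]
    if path.startswith("/dev/evms/") and "/lvm2/" in path and len(parts) == 7:
        vg, lv = parts[5], parts[6]
        return "/dev/%s/%s" % (vg, lv)
    return None
```

Resource attributes that match this pattern are rewritten to the new LVM2 device path; EVMS paths that do not match are flagged for manual fixing.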
+
+7. Replacement of EVMS2 with clvmd.
 
 Skip this in case you don't have EVMS2 resources.
 
 EVMS2 is replaced by the clustered LVM2. The conversion program
-removes all Evmsd resources and converts the EvmsSCC to LVM
-resources.
-
-Please supply the name of the LVM volume group when the program
-asks.
+replaces Evmsd resources with clvmd resources. The EvmsSCC
+resource is removed.
 
 Note on logging
+---------------
 
 The CRM still does not share the logging setup with the OpenAIS,
 i.e. it does not read the logging stanza from openais.conf. This
 will be rectified in future, but in the meantime the logging
 configuration has to be replicated in /etc/sysconfig/pacemaker,
 for instance:
 
 USE_LOGD=yes
 SYSLOG_FACILITY=local7
 
 Enforcing conversion
+--------------------
 
 There is a simple mechanism which prevents running the conversion
 process twice in a row. If you know what you are doing, it is
 possible to force the conversion using the -F option.
 
 After the conversion
+--------------------
 
 Once the conversion has been finished, you may start the new
 cluster stack:
 
 # /etc/init.d/ais start
 
 Put resources back to the managed mode in case they were
 previously unmanaged.
 
 TODO: What happens to the tunefs.ocfs2 process? We should know
 when it's done and stop the cluster stack.
 
 Backup
+------
 
 The conversion procedure also creates backup of all affected
 files. It is possible to revert to the version from the time of
 backup:
 
 # /usr/lib/heartbeat/hb2openais.sh revert
 
 Note that the revert process is executed only on the node on
 which the conversion took place.
 
 TODO: Check effect of hb_uuid files removal on other nodes! They
 have to be regenerated and will be different from the nodes
 section. Perhaps backup/revert should take place on all nodes.
 
 Affected files
+--------------
 
 All file processing is done on the node where conversion runs.
 
 The CIB is the only file which is converted:
 
 /var/lib/heartbeat/crm/cib.xml
 
 The CIB is removed on all other nodes.
 
 The following files are generated:
 
 /etc/ais/openais.conf
 /etc/ais/authkey
 
 The following files are removed on all nodes:
 
 /var/lib/heartbeat/crm/cib.xml.sig
 /var/lib/heartbeat/crm/cib.xml.last
 /var/lib/heartbeat/crm/cib.xml.sig.last
 /var/lib/heartbeat/hostcache
 /var/lib/heartbeat/hb_uuid
 
 The OpenAIS specific files are copied to all nodes using ssh.
 
 The CIB is automatically replicated by the CRM and it is not
 copied to other nodes.
 
 References
+----------
 
 Configuration_Explained.pdf
 openais.conf(5)
diff --git a/tools/hb2openais-helper.py b/tools/hb2openais-helper.py
index e813527fc9..f30e24c605 100755
--- a/tools/hb2openais-helper.py
+++ b/tools/hb2openais-helper.py
@@ -1,424 +1,513 @@
 #!/usr/bin/env python
 
- # Copyright (C) 2008 Dejan Muhamedagic <dmuhamedagic@suse.de>
+ # Copyright (C) 2008,2009 Dejan Muhamedagic <dmuhamedagic@suse.de>
  # 
  # This program is free software; you can redistribute it and/or
  # modify it under the terms of the GNU General Public
  # License as published by the Free Software Foundation; either
  # version 2.1 of the License, or (at your option) any later version.
  # 
  # This software is distributed in the hope that it will be useful,
  # but WITHOUT ANY WARRANTY; without even the implied warranty of
  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
  # General Public License for more details.
  # 
  # You should have received a copy of the GNU General Public
  # License along with this library; if not, write to the Free Software
  # Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
  #
 
 import os,sys
 import getopt
 import xml.dom.minidom
 
 def usage():
     print >> sys.stderr, "usage: %s [-T] [-c ha_cf] {set_property <name> <value>|analyze_cib|convert_cib|manage_ocfs2 {start|stop}|print_ocfs2_devs}"%sys.argv[0]
     sys.exit(1)
 
 TEST = False
 try:
     optlist, arglist = getopt.getopt(sys.argv[1:], "hTc:")
 except getopt.GetoptError:
     usage()
 for opt,arg in optlist:
     if opt == '-h':
         usage()
     elif opt == '-c':
         HA_CF = arg
     elif opt == '-T':
         TEST = True
 if len(arglist) < 1:
     usage()
 
 def load_cib():
     doc = xml.dom.minidom.parse(sys.stdin)
     return doc
 def is_whitespace(node):
     return node.nodeType == node.TEXT_NODE and not node.data.strip()
 def rmnodes(node_list):
     for node in node_list:
         node.parentNode.removeChild(node)
         node.unlink()
 def set_id2uname(node_list):
     for node in node_list:
+        if not is_element(node) or node.tagName != "node":
+            continue
         id = node.getAttribute("id")
         uname = node.getAttribute("uname")
         if uname:
             node.setAttribute("id",uname)
         else:
             print >> sys.stderr, "WARNING: node %s has no uname attribute" % id
 def is_element(xmlnode):
     return xmlnode.nodeType == xmlnode.ELEMENT_NODE
 def xml_processnodes(xmlnode,filter,proc):
     '''
     Process with proc all nodes that match filter.
     '''
     node_list = []
     for child in xmlnode.childNodes:
         if filter(child):
             node_list.append(child)
         if child.hasChildNodes():
             xml_processnodes(child,filter,proc)
     if node_list:
         proc(node_list)
 def skip_first(s):
     l = s.split('\n')
     return '\n'.join(l[1:])
 def get_attribute(tag,node,p):
     attr_set = node.getElementsByTagName(tag)
     if not attr_set:
         return ''
     attributes = attr_set[0].getElementsByTagName("attributes")
     if not attributes:
         return ''
     attributes = attributes[0]
     for nvpair in attributes.getElementsByTagName("nvpair"):
         if p == nvpair.getAttribute("name"):
             return nvpair.getAttribute("value")
     return ''
 def get_param(node,p):
     return get_attribute("instance_attributes",node,p)
 def mknvpair(id,name,value):
     nvpair = doc.createElement("nvpair")
     nvpair.setAttribute("id",id + "-" + name)
     nvpair.setAttribute("name",name)
     nvpair.setAttribute("value",value)
     return nvpair
-def set_attribute(tag,node,p,value):
+def set_attribute(tag,node,p,value,overwrite = True):
     attr_set = node.getElementsByTagName(tag)
     if not attr_set:
-        return
-    id = node.getAttribute("id")
+        return ["",False]
+    set_id = attr_set[0].getAttribute("id")
     attributes = attr_set[0].getElementsByTagName("attributes")
     if not attributes:
         attributes = doc.createElement("attributes")
         attr_set[0].appendChild(attributes)
     else:
         attributes = attributes[0]
     for nvp in attributes.getElementsByTagName("nvpair"):
         if p == nvp.getAttribute("name"):
-            nvp.setAttribute("value",value)
-            return
-    attributes.appendChild(mknvpair(id,p,value))
+            if overwrite:
+                nvp.setAttribute("value",value)
+            return [nvp.getAttribute("value"),overwrite]
+    attributes.appendChild(mknvpair(set_id,p,value))
+    return [value,True]
 
 doc = load_cib()
 xml_processnodes(doc,is_whitespace,rmnodes)
 resources = doc.getElementsByTagName("resources")[0]
 constraints = doc.getElementsByTagName("constraints")[0]
 nodes = doc.getElementsByTagName("nodes")[0]
 crm_config = doc.getElementsByTagName("crm_config")[0]
 if not resources:
     print >> sys.stderr, "ERROR: sorry, no resources section in the CIB, cannot proceed"
     sys.exit(1)
 if not constraints:
     print >> sys.stderr, "ERROR: sorry, no constraints section in the CIB, cannot proceed"
     sys.exit(1)
 if not nodes:
     print >> sys.stderr, "ERROR: sorry, no nodes section in the CIB, cannot proceed"
     sys.exit(1)
 
 if arglist[0] == "set_node_ids":
     xml_processnodes(nodes,lambda x:1,set_id2uname)
     s = skip_first(doc.toprettyxml())
     print s
     sys.exit(0)
 
 if arglist[0] == "set_property":
-    if len(arglist) != 3:
+    overwrite = False
+    if len(arglist) == 4 and arglist[3] == "overwrite":
+        overwrite = True
+    elif len(arglist) != 3:
         usage()
-    set_attribute("cluster_property_set",crm_config,arglist[1],arglist[2])
+    p = arglist[1]
+    value = arglist[2]
+    set_value,rc = set_attribute("cluster_property_set", \
+        crm_config,p,value,overwrite)
+    if not set_value and not rc:
+        print >> sys.stderr, \
+            "WARNING: cluster_property_set not found"
+    elif not rc:
+        print >> sys.stderr, \
+            "INFO: cluster property %s is set to %s and NOT overwritten to %s" % (p,set_value,value)
+    else:
+        print >> sys.stderr, \
+            "INFO: cluster property %s set to %s" % (p,set_value)
     s = skip_first(doc.toprettyxml())
     print s
     sys.exit(0)
 
 if arglist[0] == "analyze_cib":
     rc = 0
     for rsc in doc.getElementsByTagName("primitive"):
         rsc_type = rsc.getAttribute("type")
         if rsc_type == "EvmsSCC":
             print >> sys.stderr, "INFO: evms configuration found; conversion required"
             rc = 1
         elif rsc_type == "Filesystem":
             if get_param(rsc,"fstype") == "ocfs2":
                 print >> sys.stderr, "INFO: ocfs2 configuration found; conversion required"
                 rc = 1
     sys.exit(rc)
 
 if arglist[0] == "print_ocfs2_devs":
     for rsc in doc.getElementsByTagName("primitive"):
         if rsc.getAttribute("type") == "Filesystem":
             if get_param(rsc,"fstype") == "ocfs2":
                 print get_param(rsc,"device")
     sys.exit(0)
 
 def rm_attribute(tag,node,p):
     attr_set = node.getElementsByTagName(tag)
     if not attr_set:
         return ''
     attributes = attr_set[0].getElementsByTagName("attributes")
     if not attributes:
         return ''
     attributes = attributes[0]
     for nvpair in attributes.getElementsByTagName("nvpair"):
         if p == nvpair.getAttribute("name"):
             nvpair.parentNode.removeChild(nvpair)
 def set_param(node,p,value):
     set_attribute("instance_attributes",node,p,value)
 def rm_param(node,p):
     rm_attribute("instance_attributes",node,p)
 def evms2lvm(node,a):
     v = node.getAttribute(a)
     if v:
         v = v.replace("EVMS","LVM")
         v = v.replace("Evms","LVM")
         v = v.replace("evms","lvm")
         node.setAttribute(a,v)
 def replace_evms_strings(node_list):
     for node in node_list:
         evms2lvm(node,"id")
         if node.tagName in ("rsc_colocation","rsc_order"):
             evms2lvm(node,"to")
             evms2lvm(node,"from")
 
 def get_input(msg):
     if TEST:
         print >> sys.stderr, "%s: setting to /dev/null" % msg
         return "/dev/null"
     while True:
         ans = raw_input(msg)
         if ans:
             if os.access(ans,os.F_OK):
                 return ans
             else:
                 print >> sys.stderr, "Cannot read %s" % ans
         print >> sys.stderr, "We do need this input to continue."
 def mk_lvm(rsc_id,volgrp):
+    print >> sys.stderr, \
+        "INFO: creating LVM resource %s for vg %s" % (rsc_id,volgrp)
     node = doc.createElement("primitive")
     node.setAttribute("id",rsc_id)
     node.setAttribute("type","LVM")
     node.setAttribute("provider","heartbeat")
     node.setAttribute("class","ocf")
     operations = doc.createElement("operations")
     node.appendChild(operations)
     mon_op = doc.createElement("op")
     operations.appendChild(mon_op)
     mon_op.setAttribute("id", rsc_id + "_mon")
     mon_op.setAttribute("name","monitor")
     interval = "120s"
     timeout = "60s"
     mon_op.setAttribute("interval", interval)
     mon_op.setAttribute("timeout", timeout)
     instance_attributes = doc.createElement("instance_attributes")
     instance_attributes.setAttribute("id", rsc_id + "_inst_attr")
     node.appendChild(instance_attributes)
     attributes = doc.createElement("attributes")
     instance_attributes.appendChild(attributes)
     attributes.appendChild(mknvpair(rsc_id,"volgrpname",volgrp))
     return node
 def mk_clone(id,ra_type,ra_class,prov):
     c = doc.createElement("clone")
     c.setAttribute("id",id + "-clone")
     meta = doc.createElement("meta_attributes")
     c.appendChild(meta)
     meta.setAttribute("id",id + "_meta")
     attributes = doc.createElement("attributes")
     meta.appendChild(attributes)
     attributes.appendChild(mknvpair(id,"globally-unique","false"))
     attributes.appendChild(mknvpair(id,"interleave","true"))
     p = doc.createElement("primitive")
     c.appendChild(p)
     p.setAttribute("id",id)
     p.setAttribute("type",ra_type)
     if prov:
         p.setAttribute("provider",prov)
     p.setAttribute("class",ra_class)
     operations = doc.createElement("operations")
     p.appendChild(operations)
     mon_op = doc.createElement("op")
     operations.appendChild(mon_op)
     mon_op.setAttribute("id", id + "_mon")
     mon_op.setAttribute("name","monitor")
     interval = "60s"
     timeout = "30s"
     mon_op.setAttribute("interval", interval)
     mon_op.setAttribute("timeout", timeout)
     return c
-def add_ocfs_clones(id):
+def add_ocfs_clones():
     c1 = mk_clone("o2cb","o2cb","ocf","ocfs2")
     c2 = mk_clone("dlm","controld","ocf","pacemaker")
+    print >> sys.stderr, \
+        "INFO: adding clones o2cb-clone and dlm-clone"
     resources.appendChild(c1)
     resources.appendChild(c2)
     c1 = mk_order("dlm-clone","o2cb-clone")
     c2 = mk_colocation("dlm-clone","o2cb-clone")
     constraints.appendChild(c1)
     constraints.appendChild(c2)
 def mk_order(r1,r2):
     rsc_order = doc.createElement("rsc_order")
     rsc_order.setAttribute("id","rsc_order_"+r1+"_"+r2)
     rsc_order.setAttribute("from",r1)
     rsc_order.setAttribute("to",r2)
     rsc_order.setAttribute("type","before")
     rsc_order.setAttribute("symmetrical","true")
     return rsc_order
 def mk_colocation(r1,r2):
     rsc_colocation = doc.createElement("rsc_colocation")
     rsc_colocation.setAttribute("id","rsc_colocation_"+r1+"_"+r2)
     rsc_colocation.setAttribute("from",r1)
     rsc_colocation.setAttribute("to",r2)
     rsc_colocation.setAttribute("score","INFINITY")
     return rsc_colocation
-def add_ocfs_constraints(rsc,id):
+def add_ocfs_constraints(rsc):
+    node = rsc.parentNode
+    if node.tagName != "clone":
+        node = rsc
+    rsc_id = node.getAttribute("id")
+    print >> sys.stderr, \
+        "INFO: adding constraints for o2cb-clone and %s" % rsc_id
+    c1 = mk_order("o2cb-clone",rsc_id)
+    c2 = mk_colocation("o2cb-clone",rsc_id)
+    constraints.appendChild(c1)
+    constraints.appendChild(c2)
+def add_lvm_constraints(lvm_id,rsc):
     node = rsc.parentNode
     if node.tagName != "clone":
         node = rsc
-    clone_id = node.getAttribute("id")
-    c1 = mk_order("o2cb-clone",clone_id)
-    c2 = mk_colocation("o2cb-clone",clone_id)
+    rsc_id = node.getAttribute("id")
+    print >> sys.stderr, \
+        "INFO: adding constraints for %s and %s" % (lvm_id,rsc_id)
+    c1 = mk_order(lvm_id,rsc_id)
+    c2 = mk_colocation(lvm_id,rsc_id)
     constraints.appendChild(c1)
     constraints.appendChild(c2)
 def change_ocfs2_device(rsc):
     print >> sys.stderr, "The current device for ocfs2 depends on evms: %s"%get_param(rsc,"device")
     dev = get_input("Please supply the device where %s ocfs2 resource resides: "%rsc.getAttribute("id"))
     set_param(rsc,"device",dev)
 def set_target_role(rsc,target_role):
     node = rsc.parentNode
     if node.tagName != "clone":
         node = rsc
     id = node.getAttribute("id")
     l = rsc.getElementsByTagName("meta_attributes")
     if l:
         meta = l[0]
     else:
         meta = doc.createElement("meta_attributes")
         meta.setAttribute("id",id + "_meta")
         node.appendChild(meta)
         attributes = doc.createElement("attributes")
         meta.appendChild(attributes)
     rm_param(rsc,"target_role")
     set_attribute("meta_attributes",node,"target_role",target_role)
 def start_ocfs2(node_list):
     for node in node_list:
         set_target_role(node,"Started")
 def stop_ocfs2(node_list):
     for node in node_list:
         set_target_role(node,"Stopped")
 def is_ocfs2_fs(node):
     return node.tagName == "primitive" and \
         node.getAttribute("type") == "Filesystem" and \
         get_param(node,"fstype") == "ocfs2"
 def new_pingd_rsc(options,host_list):
     rsc_id = "pingd"
     c = mk_clone(rsc_id,"pingd","ocf","pacemaker")
     node = c.getElementsByTagName("primitive")[0]
     instance_attributes = doc.createElement("instance_attributes")
     instance_attributes.setAttribute("id", rsc_id + "_inst_attr")
     node.appendChild(instance_attributes)
     attributes = doc.createElement("attributes")
     instance_attributes.appendChild(attributes)
     if options:
         attributes.appendChild(mknvpair(rsc_id,"options",options))
     set_param(node,"host_list",host_list)
     return c
 def new_cloned_rsc(rsc_class,rsc_provider,rsc_type):
     return mk_clone(rsc_type,rsc_type,rsc_class,rsc_provider)
-def replace_evms_ids():
-    return c
 def find_respawn(prog):
     rc = False
     f = open(HA_CF or "/etc/ha.d/ha.cf", 'r')
     for l in f:
         s = l.split()
         if not s:
             continue
         if s[0] == "respawn" and s[2].find(prog) > 0:
             rc = True
             break
     f.close()
     return rc
 def parse_pingd_respawn():
     f = open(HA_CF or "/etc/ha.d/ha.cf", 'r')
     opts = ''
     ping_list = []
     for l in f:
         s = l.split()
         if not s:
             continue
         if s[0] == "respawn" and s[2].find("pingd") > 0:
             opts = ' '.join(s[3:])
         elif s[0] == "ping":
             ping_list.append(s[1])
     f.close()
     return opts,' '.join(ping_list)
+
+class NewLVMfromEVMS2(object):
+    def __init__(self):
+        self.vgdict = {}
+    def add_rsc(self,rsc,vg):
+        if vg not in self.vgdict:
+            self.vgdict[vg] = []
+        self.vgdict[vg].append(rsc)
+    def edit_attr(self,rsc,rsc_id,nvpair,vg,lv):
+        v = "/dev/%s/%s" % (vg,lv)
+        attr = nvpair.getAttribute("name")
+        nvpair.setAttribute("value",v)
+        print >> sys.stderr, \
+            "INFO: set resource %s attribute %s to %s"%(rsc_id,attr,v)
+    def proc_attr(self,rsc,rsc_id,nvpair):
+        v = nvpair.getAttribute("value")
+        path_elems = v.split("/")
+        if v.startswith("/dev/evms/"):
+            if v.find("/lvm2/") > 0 and len(path_elems) == 7:
+                vg = path_elems[5]
+                lv = path_elems[6]
+                self.add_rsc(rsc,vg)
+                self.edit_attr(rsc,rsc_id,nvpair,vg,lv)
+            else:
+                attr = nvpair.getAttribute("name")
+                print >> sys.stderr, \
+                    "WARNING: resource %s attribute %s=%s obviously"%(rsc_id,attr,v)
+                print >> sys.stderr, \
+                    "WARNING: references an EVMS volume, but I don't know what to do about it"
+                print >> sys.stderr, \
+                    "WARNING: Please fix it manually before starting this resource"
+    def check_rsc(self,rsc,rsc_id):
+        for inst_attr in rsc.getElementsByTagName("instance_attributes"):
+            for nvpair in inst_attr.getElementsByTagName("nvpair"):
+                self.proc_attr(rsc,rsc_id,nvpair)
+    def mklvms(self):
+        for vg in self.vgdict.keys():
+            node = mk_lvm("LVM"+vg,vg)
+            resources.appendChild(node)
+            lvm_id = node.getAttribute("id")
+            for rsc in self.vgdict[vg]:
+                add_lvm_constraints(lvm_id,rsc)
+
+def process_evmsd(rsc,rsc_id):
+    print >> sys.stderr, "INFO: Evmsd resource %s will change type to clvmd"%rsc_id
+    rsc.setAttribute("type","clvmd")
+    rsc.setAttribute("provider","lvm2")
+    add_ocfs_constraints(rsc)
+def process_evmsSCC(rsc,rsc_id):
+    '''
+    This is on hold until Xinwei figures out what to do about
+    non-lvm EVMS volumes.
+    '''
+    return  # the code below is intentionally disabled until then
+    print >> sys.stderr, "INFO: EvmsSCC resource is going to be replaced by LVM"
+    vg = get_input("Please supply the VG name corresponding to %s: "%rsc_id)
+    node = mk_lvm(rsc_id,vg)
+    parent = rsc.parentNode
+    parent.removeChild(rsc)
+    parent.appendChild(node)
+    rsc.unlink()
+def process_evmsSCC_2(rsc,rsc_id):
+    print >> sys.stderr, "INFO: EvmsSCC resource is going to be removed"
+    parent = rsc.parentNode
+    parent.removeChild(rsc)
+    rsc.unlink()
 def process_cib():
     ocfs_clones = []
     evms_present = False
+    lvm_evms = NewLVMfromEVMS2()
 
     for rsc in doc.getElementsByTagName("primitive"):
         rsc_id = rsc.getAttribute("id")
         rsc_type = rsc.getAttribute("type")
+        lvm_evms.check_rsc(rsc,rsc_id)
         if rsc_type == "Evmsd":
-            print >> sys.stderr, "INFO: Evmsd resource %s will change type to clvmd"%rsc_id
-            rsc.setAttribute("type","clvmd")
-            rsc.setAttribute("provider","lvm2")
-            print >> sys.stderr, "INFO: adding constraints for %s"%rsc_id
-            add_ocfs_constraints(rsc,rsc_id)
+            process_evmsd(rsc,rsc_id)
         elif rsc_type == "EvmsSCC":
             evms_present = True
-            print >> sys.stderr, "INFO: EvmsSCC resource is going to be replaced by LVM"
-            vg = get_input("Please supply the VG name corresponding to %s: "%rsc_id)
-            node = mk_lvm(rsc_id,vg)
-            parent = rsc.parentNode
-            parent.removeChild(rsc)
-            parent.appendChild(node)
-            rsc.unlink()
+            process_evmsSCC_2(rsc,rsc_id)
         elif rsc_type == "Filesystem":
             if get_param(rsc,"fstype") == "ocfs2":
-                if get_param(rsc,"device").find("evms") > 0:
-                    change_ocfs2_device(rsc)
                 ocfs_clones.append(rsc)
                 id = rsc.getAttribute("id")
-                print >> sys.stderr, "INFO: adding constraints for %s"%id
-                add_ocfs_constraints(rsc,id)
+                add_ocfs_constraints(rsc)
+    lvm_evms.mklvms()
     if ocfs_clones:
-        print >> sys.stderr, "INFO: adding required cloned resources for ocfs2"
-        add_ocfs_clones(id)
+        add_ocfs_clones()
     if evms_present:
         xml_processnodes(doc,lambda x:1,replace_evms_strings)
 
 if arglist[0] == "convert_cib":
     opts,pingd_host_list = parse_pingd_respawn()
     if pingd_host_list:
         clone = new_pingd_rsc(opts,pingd_host_list)
         resources.appendChild(clone)
     if find_respawn("evmsd"):
         resources.appendChild(new_cloned_rsc("ocf","lvm2","clvmd"))
     process_cib()
     s = skip_first(doc.toprettyxml())
     print s
     sys.exit(0)
 
 if arglist[0] == "manage_ocfs2":
     if len(arglist) != 2:
         usage()
     if arglist[1] == "stop":
         xml_processnodes(doc,is_ocfs2_fs,stop_ocfs2)
     elif arglist[1] == "start":
         xml_processnodes(doc,is_ocfs2_fs,start_ocfs2)
     s = skip_first(doc.toprettyxml())
     print s
     sys.exit(0)
 
 # shouldn't get here
 usage()
 
 # vim:ts=4:sw=4:et:
diff --git a/tools/hb2openais.sh.in b/tools/hb2openais.sh.in
index 5d07a53fbe..895b37d67e 100755
--- a/tools/hb2openais.sh.in
+++ b/tools/hb2openais.sh.in
@@ -1,750 +1,770 @@
 #!/bin/sh
 
- # Copyright (C) 2008 Dejan Muhamedagic <dmuhamedagic@suse.de>
+ # Copyright (C) 2008,2009 Dejan Muhamedagic <dmuhamedagic@suse.de>
  # 
  # This program is free software; you can redistribute it and/or
  # modify it under the terms of the GNU General Public
  # License as published by the Free Software Foundation; either
  # version 2.1 of the License, or (at your option) any later version.
  # 
  # This software is distributed in the hope that it will be useful,
  # but WITHOUT ANY WARRANTY; without even the implied warranty of
  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
  # General Public License for more details.
  # 
  # You should have received a copy of the GNU General Public
  # License along with this library; if not, write to the Free Software
  # Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
  #
 
 . @sysconfdir@/ha.d/shellfuncs
 . $HA_NOARCHBIN/utillib.sh
 . $HA_NOARCHBIN/ha_cf_support.sh
 
 PROG=`basename $0`
 # FIXME: once this is part of the package!
 PROGDIR=`dirname $0`
 echo "$PROGDIR" | grep -qs '^/' || {
 	test -f @sbindir@/$PROG &&
 		PROGDIR=@sbindir@
 	test -f $HA_NOARCHBIN/$PROG &&
 		PROGDIR=$HA_NOARCHBIN
 }
 
 # the default syslog facility is not (yet) exported by heartbeat
 # to shell scripts
 #
 DEFAULT_HA_LOGFACILITY="daemon"
 export DEFAULT_HA_LOGFACILITY
 AIS_CONF=/etc/ais/openais.conf
 AIS_KEYF=/etc/ais/authkey
 AUTHENTICATION=on
 MAXINTERFACE=2
 MCASTPORT=5405
 RRP_MODE=active
 SUPPORTED_RESPAWNS="pingd evmsd"
 
 PY_HELPER=$HA_BIN/hb2openais-helper.py
 CRM_VARLIB=$HA_VARLIB/crm
 CIB=$CRM_VARLIB/cib.xml
 CIBSIG=$CRM_VARLIB/cib.xml.sig
 CIBLAST=$CRM_VARLIB/cib.xml.last
 CIBLAST_SIG=$CRM_VARLIB/cib.xml.sig.last
 HOSTCACHE=$HA_VARLIB/hostcache
 HB_UUID=$HA_VARLIB/hb_uuid
 DONE_F=$HA_VARRUN/heartbeat/.$PROG.conv_done
 BACKUPDIR=/var/tmp/`basename $PROG .sh`.backup
 RM_FILES=" $CIBSIG $HOSTCACHE $HB_UUID $CIBLAST $CIBLAST_SIG"
 REMOTE_RM_FILES=" $CIB $RM_FILES"
 BACKUP_FILES=" $AIS_CONF $AIS_KEYF $REMOTE_RM_FILES "
 DIST_FILES=" $AIS_CONF $AIS_KEYF $DONE_F "
 MAN_TARF=/var/tmp/`basename $PROG .sh`.tar.gz
 
 : ${SSH_OPTS="-T"}
 
 usage() {
 	cat<<EOF
 
 usage: $PROG [-UF] [-u user] [-T directory] [revert]
 
 	-U: skip upgrading the CIB to v1.0
 	-F: force conversion despite it being done beforehand
 	-u user: a user to sudo with (otherwise, you'd
 	         have to run this as root)
 	-T directory: a directory containing ha.cf/logd.cf/cib.xml/hostcache
 	         (use for testing); with this option files are not
 	         copied to other nodes and there are no destructive
 	         commands executed; you may run as unprivileged uid
 
 EOF
 	exit
 }
 
 SUDO_USER=""
 MYSUDO=""
 TEST_DIR=""
 FORCE=""
 UPGRADE="1"
 while getopts UFu:T:h o; do
 	case "$o" in
 		h) usage;;
-		U) UPGRADE=0;;
+		U) UPGRADE="";;
 		F) FORCE=1;;
 		u) SUDO_USER="$OPTARG";;
 		T) TEST_DIR="$OPTARG";;
 		[?]) usage;;
 	esac
 done
 shift $(($OPTIND-1))
 [ $# -gt 1 ] && usage
 [ "$TEST_DIR" ] && [ $# -ne 0 ] && usage
 
 if [ "$TEST_DIR" ]; then
 	cp $TEST_DIR/cib.xml $TEST_DIR/cib-out.xml
 	CIB=$TEST_DIR/cib-out.xml
 	HOSTCACHE=$TEST_DIR/hostcache
 	HA_CF=$TEST_DIR/ha.cf
 	AIS_CONF=$TEST_DIR/openais.conf
 	if [ "$SUDO_USER" ]; then
 		warning "-u option ignored when used with -T"
 	fi
 else
 	ps -ef | grep -wqs "[c]rmd" &&
 		fatal "you must first stop heartbeat on _all_ nodes"
 	if [ "$SUDO_USER" ]; then
 		MYSUDO="sudo -u $SUDO_USER"
 	fi
 fi
 
 CIB_file=$CIB
 CONF=$HA_CF
 LOGD_CF=`findlogdcf $TEST_DIR /etc $HA_DIR`
 export CIB_file LOGD_CF
 
 prerequisites() {
 	test -f $HA_CF ||
 		fatal "$HA_CF does not exist: cannot proceed"
 	iscfvartrue crm || grep -w "^crm" $HA_CF | grep -wqs respawn ||
 		fatal "crm is not enabled: we cannot convert v1 configurations"
 	$DRY test -f $CIB ||
 		fatal "CIB $CIB does not exist: cannot proceed"
 	[ "$FORCE" ] && rm -f "$DONE_F"
 	if [ -f "$DONE_F" ]; then
 		info "Conversion to OpenAIS already done, exiting"
 		exit 0
 	fi
 }
 # some notes about unsupported stuff
 unsupported() {
 	respawned_progs=`awk '/^respawn/{print $3}' $HA_CF |while read p; do basename $p; done`
 	grep -qs "^serial" $HA_CF &&
 		warning "serial media is not supported by OpenAIS"
 	for prog in $respawned_progs; do
 		case $prog in
 		mgmtd|pingd|evmsd) : these are fine
 			;;
 		*)
 			warning "program $prog is being controlled by heartbeat (through respawn)"
 			warning "you will have to find another way of running it"
 			;;
 		esac
 	done
 }
 #
 # find nodes for this cluster
 #
 getnodes() {
 	# 1. hostcache
 	if [ -f $HOSTCACHE ]; then
 		awk '{print $1}' $HOSTCACHE
 		return
 	fi
 	# 2. ha.cf
 	getcfvar node
 }
 #
 # does ssh work?
 #
 testsshuser() {
 	if [ "$2" ]; then
 		ssh -T -o Batchmode=yes $2@$1 true 2>/dev/null
 	else
 		ssh -T -o Batchmode=yes $1 true 2>/dev/null
 	fi
 }
 findsshuser() {
 	for u in "" $TRY_SSH; do
 		rc=0
 		for n in `getnodes`; do
 			[ "$n" = "$WE" ] && continue
 			testsshuser $n $u || {
 				rc=1
 				break
 			}
 		done
 		if [ $rc -eq 0 ]; then
 			echo $u
 			return 0
 		fi
 	done
 	return 1
 }
 important() {
 	echo "IMPORTANT: $*" >&2
 }
 newportinfo() {
 	important "the multicast port number on $1 is set to $2"
 	important "please update your firewall rules (if any)"
 }
 changemediainfo() {
 	important "openais uses multicast for communication"
 	important "please make sure that your network infrastructure supports it"
 }
 multicastinfo() {
 	info "multicast for openais ring $1 set to $2:$3"
 }
 netaddrinfo() {
 	info "network address for openais ring $1 set to $2"
 }
 backup_files() {
 	[ "$TEST_DIR" ] && return
 	info "backing up $BACKUP_FILES to $BACKUPDIR"
 	$DRY mkdir $BACKUPDIR || {
 		echo sorry, could not create the $BACKUPDIR directory
 		echo please clean up
 		exit 1
 	}
 	if [ -z "$DRY" ]; then
 		tar cf - $BACKUP_FILES | gzip > $BACKUPDIR/$WE.tar.gz || {
 			echo sorry, could not create $BACKUPDIR/$WE.tar.gz
 			exit 1
 		}
 	else
 		$DRY "tar cf - $BACKUP_FILES | gzip > $BACKUPDIR/$WE.tar.gz"
 	fi
 }
 revert() {
 	[ "$TEST_DIR" ] && return
 	test -d $BACKUPDIR || {
 		echo sorry, there is no $BACKUPDIR directory
 		echo cannot revert
 		exit 1
 	}
 	info "restoring $BACKUP_FILES from $BACKUPDIR/$WE.tar.gz"
 	gzip -dc $BACKUPDIR/$WE.tar.gz | (cd / && tar xf -) || {
 		echo sorry, could not unpack $BACKUPDIR/$WE.tar.gz
 		exit 1
 	}
 }
 pls_press_enter() {
 	[ "$TEST_DIR" ] && return
 	cat<<EOF
 
 Please press enter to continue or ^C to exit ...
 EOF
 	read junk
 	echo ""
 }
 introduction() {
 	cat<<EOF
 
 This is a Heartbeat to OpenAIS conversion tool.
 
 * IMPORTANT * IMPORTANT * IMPORTANT * IMPORTANT * IMPORTANT *
 
 Please read this and do not proceed until you understand
 what we are about to do and what is required.
 
 1. You need to know your cluster in detail. This program will
 inform you of the changes it makes. It is up to you to verify
 that the changes are meaningful. You may also be asked
 questions now and again.
 
 2. This procedure is supposed to be run on one node only.
 Although the main cluster configuration (the CIB) is
 automatically replicated, there are some things which have to
 be copied by other means. For that to work, we need sshd
 running on all nodes and root access working.
 
 3. Do not run this procedure on more than one node!
 EOF
 	pls_press_enter
 	cat<<EOF
 The procedure consists of two parts: the OpenAIS
 configuration and the Pacemaker/CRM CIB configuration.
 
 The first part is obligatory. The second part may be skipped
 unless your cluster configuration requires changes due to the
 change from Heartbeat to OpenAIS.
 
 We will try to analyze your configuration and let you know
 whether the CIB configuration should be changed as well.
 However, you will still be able to skip the CIB
 mangling part in case you want to do that yourself.
 
 The next step is to create the OpenAIS configuration. If you
 want to leave, now is the time to interrupt the program.
 EOF
 	pls_press_enter
 }
 confirm() {
 	while :; do
 		printf "$1 (y/n) "
 		read ans
 		if echo $ans | grep -iqs '^[yn]'; then
 			echo $ans | grep -iqs '^y'
 			return $?
 		else
 			echo Please answer with y or n
 		fi
 	done
 }
 want_to_proceed() {
 	[ "$TEST_DIR" ] && return 0
 	confirm "Do you want to proceed?"
 }
 intro_part2() {
 	cat<<EOF
 
 The second part of the configuration deals with the CIB.
 According to our analysis (you should have seen some
 messages), this step is necessary.
 EOF
 	want_to_proceed || return
 }
 
 gethbmedia() {
 	grep "^[bum]cast" $HA_CF
 }
 pl_ipcalc() {
 perl -e '
 # stolen from internet!
 my $ipaddr=$ARGV[0];
 my $nmask=$ARGV[1];
 my @addrarr=split(/\./,$ipaddr);
 my ( $ipaddress ) = unpack( "N", pack( "C4",@addrarr ) );
 my @maskarr=split(/\./,$nmask);
 my ( $netmask ) = unpack( "N", pack( "C4",@maskarr ) );
 # Calculate network address by logical AND operation of addr &
 # netmask
 # and convert network address to IP address format
 my $netadd = ( $ipaddress & $netmask );
 my @netarr=unpack( "C4", pack( "N",$netadd ) );
 my $netaddress=join(".",@netarr);
 print "$netaddress\n";
 ' $1 $2
 }
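The embedded Perl computes the network address by ANDing the IP address with the netmask. The same computation can be sketched in plain POSIX shell (a standalone illustration, not part of the script; `net_addr` is a hypothetical helper name):

```shell
# Compute the IPv4 network address from an address and a netmask,
# mirroring what pl_ipcalc does with pack/unpack in Perl.
net_addr() {
	ip=$1 mask=$2
	oldifs=$IFS; IFS=.
	set -- $ip;   i1=$1 i2=$2 i3=$3 i4=$4
	set -- $mask; m1=$1 m2=$2 m3=$3 m4=$4
	IFS=$oldifs
	# bitwise AND each octet and rejoin with dots
	echo "$((i1 & m1)).$((i2 & m2)).$((i3 & m3)).$((i4 & m4))"
}

net_addr 192.168.10.17 255.255.255.0   # prints 192.168.10.0
```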
 get_if_val() {
 	test "$1" || return
 	awk -v key=$1 '
 	{ for( i=1; i<=NF; i++ )
 		if( match($i,key) ) {
 			sub(key,"",$i);
 			print $i
 			exit
 		}
 	}'
 }
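`get_if_val` scans the whitespace-separated fields of its input for one matching the given key (e.g. `addr:`) and prints what follows the key in that field. A standalone illustration with a typical `ifconfig` output line:

```shell
# Standalone copy of get_if_val, fed a sample ifconfig line.
get_if_val() {
	test "$1" || return
	awk -v key=$1 '
	{ for( i=1; i<=NF; i++ )
		if( match($i,key) ) {
			sub(key,"",$i);  # strip the key, keep the value
			print $i
			exit
		}
	}'
}

line="inet addr:10.0.0.1  Bcast:10.0.0.255  Mask:255.255.255.0"
echo "$line" | get_if_val addr:   # prints 10.0.0.1
echo "$line" | get_if_val Mask:   # prints 255.255.255.0
```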
 netaddress() {
 	ip=`ifconfig $1 | grep 'inet addr:' | get_if_val addr:`
 	mask=`ifconfig $1 | grep 'Mask:' | get_if_val Mask:`
 	if test "$mask"; then
 		pl_ipcalc $ip $mask
 	else
 		warning "could not get the network mask for interface $1"
 	fi
 }
 
 sw=0
 do_tabs() {
 	for i in `seq $sw`; do printf "\t"; done
 }
 newstanza() {
 	do_tabs
 	printf "%s {\n" $1
 	let sw=sw+1
 }
 endstanza() {
 	let sw=sw-1
 	do_tabs
 	printf "}\n"
 }
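The `newstanza`/`endstanza` pair tracks the nesting depth in `sw` so that `do_tabs` can indent each line of the generated openais.conf. A minimal standalone sketch of the same idea (rewritten with POSIX `$((...))` arithmetic in place of the script's bash-only `let`):

```shell
# Emit nested "name { ... }" stanzas with tab indentation,
# as the hb2openais helpers do.
sw=0
do_tabs() { for i in $(seq $sw); do printf "\t"; done; }
newstanza() { do_tabs; printf "%s {\n" "$1"; sw=$((sw+1)); }
endstanza() { sw=$((sw-1)); do_tabs; printf "}\n"; }
setvalue() { do_tabs; echo "$1: $2"; }

newstanza totem
setvalue version 2
newstanza interface
setvalue ringnumber 0
endstanza
endstanza
```

This prints a `totem` stanza containing a tab-indented `interface` stanza, matching the layout the conversion script writes to $AIS_CONF.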
 setvalue() {
 	name=$1
 	val=$2
 	test "$val" || {
 		warning "sorry, no value set for $name"
 	}
 	do_tabs
 	echo "$name: $val"
 }
 setcomment() {
 	do_tabs
 	echo "# $*"
 }
 setdebug() {
 	[ "$HA_LOGLEVEL" = debug ] &&
 		echo "on" || echo "off"
 }
 
 WE=`uname -n`  # who am i?
 
 if [ "$1" = revert ]; then
 	revert
 	exit
 fi
 
 test -d $BACKUPDIR &&
 	fatal "please remove the backup directory: $BACKUPDIR"
 
 prerequisites
 
 introduction
 
 backup_files
 
 unsupported
 
 # 1. Generate the openais.conf
 
 prochbmedia() {
 	while read media_type iface address rest; do
 		info "Processing interface $iface of type $media_type ..."
 		case "$media_type" in
 			ucast|bcast) mcastaddr=226.94.1.1 ;;
 			mcast) mcastaddr=$address ;;
 		esac
 		if [ -z "$local_mcastport" ]; then
 			local_mcastport="$MCASTPORT"
 		fi
 		netaddress="`netaddress $iface`"
 		if [ "$netaddress" ]; then
 			let local_mcastport=$local_mcastport+1
 			newportinfo $iface $local_mcastport
 			echo "$netaddress" "$mcastaddr" "$local_mcastport"
 		else
 			warning "cannot process interface $iface!"
 		fi
 	done
 }
 
 openaisconf() {
 
 info "Generating $AIS_CONF from $HA_CF ..."
 
 # the totem stanza
 
 cpunum=`grep -c ^processor /proc/cpuinfo`
 setcomment "Generated by hb2openais on `date`"
 setcomment "Please read the openais.conf.5 manual page"
 
 newstanza aisexec
 setcomment "Run as root - this is necessary to be able to manage resources with Pacemaker"
 setvalue user	root
 setvalue group	root
 endstanza
 
 newstanza service
 setcomment "Load the Pacemaker Cluster Resource Manager"
 setvalue name	pacemaker
 setvalue ver	0
 if uselogd; then
 	setvalue use_logd	yes
 	important "Make sure that the logd service is started (chkconfig logd on)"
 fi
 if grep -qs "^respawn.*mgmtd" $HA_CF; then
 	setvalue use_mgmtd	yes
 fi
 endstanza
 
 newstanza totem
 setvalue version 2
 setcomment "How long before declaring a token lost (ms)"
 setvalue token          10000
 setcomment "How many token retransmits before forming a new configuration"
 setvalue token_retransmits_before_loss_const 20
 setcomment "How long to wait for join messages in the membership protocol (ms)"
 setvalue join           60
 setcomment "How long to wait for consensus to be achieved before"
 setcomment "starting a new round of membership configuration (ms)"
 setvalue consensus      4800
 setcomment "Turn off the virtual synchrony filter"
 setvalue vsftype        none
 setcomment "Number of messages that may be sent by one processor on receipt of the token"
 setvalue max_messages   20
 setcomment "Limit generated nodeids to 31-bits (positive signed integers)"
 setvalue clear_node_high_bit yes
 setcomment "Enable encryption"
 setvalue secauth $AUTHENTICATION
 if [ "$AUTHENTICATION" = on ]; then
 	setvalue threads $cpunum
 else
 	setvalue threads 0
 fi
 setcomment "Optionally assign a fixed node id (integer)"
 setcomment "nodeid:         1234"
 ring=0
 gethbmedia | prochbmedia |
 sort -u |
 while read network addr port; do
 	if [ $ring -ge $MAXINTERFACE ]; then
 		warning "openais supports only $MAXINTERFACE rings!"
 		info "consider bonding interfaces"
 		warning "skipping communication link on $network"
 		setcomment "$network skipped: too many rings"
 		continue
 	fi
 	newstanza interface
 	setvalue ringnumber $ring
 	setvalue bindnetaddr $network
 	netaddrinfo $ring $network
 	multicastinfo $ring $addr $port
 	setvalue mcastport $port
 	setvalue mcastaddr $addr
 	let ring=$ring+1
 	endstanza
 done
 mediacnt=`gethbmedia 2>/dev/null | prochbmedia 2>/dev/null | sort -u | wc -l`
 if [ $mediacnt -ge 2 ]; then
 	setvalue rrp_mode $RRP_MODE
 fi
 changemediainfo
 endstanza
 
 # the logging stanza
 
 getlogvars
 # enforce some syslog facility
 debugsetting=`setdebug`
 newstanza logging
 setvalue debug $debugsetting
 setvalue fileline off
 setvalue to_stderr no
 if [ "$HA_LOGFILE" ]; then
 	setvalue to_file yes
 	setvalue logfile $HA_LOGFILE
 else
 	setvalue to_file no
 fi
 if [ "$HA_LOGFACILITY" ]; then
 	setvalue to_syslog yes
 	setvalue syslog_facility $HA_LOGFACILITY
 else
 	setvalue to_syslog no
 fi
 endstanza
 
 newstanza amf
 setvalue mode disabled
 endstanza
 
 }
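For orientation, a simple ha.cf with one bcast link typically yields a file of roughly this shape (the bind address is illustrative and the port is bumped from the default 5405; the script derives the actual values from your ha.cf and interfaces):

```
aisexec {
	user: root
	group: root
}
service {
	name: pacemaker
	ver: 0
}
totem {
	version: 2
	secauth: on
	interface {
		ringnumber: 0
		bindnetaddr: 192.168.1.0
		mcastaddr: 226.94.1.1
		mcastport: 5406
	}
}
logging {
	to_syslog: yes
	syslog_facility: daemon
}
amf {
	mode: disabled
}
```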
 
 if [ -z "$DRY" ]; then
 	openaisconf > $AIS_CONF ||
 		fatal "cannot create $AIS_CONF"
 	grep -wqs interface $AIS_CONF ||
 		fatal "no media found in $HA_CF"
 else
 	openaisconf
 fi
 
-[ "$AIS_KEYF" ] &&
-if [ "$TEST_DIR" ]; then
-	info "Skipping OpenAIS authentication key generation ..."
-else
+[ "$AIS_KEYF" ] && {
 	info "Generating a key for OpenAIS authentication ..."
-	$DRY ais-keygen ||
-		fatal "cannot generate the key using ais-keygen"
-fi
+	if [ "$TEST_DIR" ]; then
+		echo would run: $DRY ais-keygen
+	else
+		$DRY ais-keygen ||
+			fatal "cannot generate the key using ais-keygen"
+	fi
+}
 
 # remove various files which could get in a way
 
 if [ -z "$TEST_DIR" ]; then
 	$DRY rm -f $RM_FILES
 fi
 
 fixcibperms() {
 	[ "$TEST_DIR" ] && return
 	uid=`ls -ldn $CRM_VARLIB | awk '{print $3}'`
 	gid=`ls -ldn $CRM_VARLIB | awk '{print $4}'`
 	$DRY $MYSUDO chown $uid:$gid $CIB
 }
 upgrade_cib() {
 	$DRY $MYSUDO cibadmin --upgrade --force
 }
 py_proc_cib() {
 	tmpfile=`maketempfile`
 	$MYSUDO sh -c "python $PY_HELPER $* <$CIB >$tmpfile" ||
 		fatal "cannot process cib: $PY_HELPER $*"
 	$DRY $MYSUDO mv $tmpfile $CIB
 }
 set_property() {
-	py_proc_cib set_property $1 $2
-	info "the $1 cluster property was set to $2."
+	py_proc_cib set_property $*
 }
 
 # remove the nodes section from the CIB
 py_proc_cib set_node_ids
-info "Edited the nodes's ids in the CIB"
+info "Edited the nodes' ids in the CIB"
 
 numnodes=`getnodes | wc -w`
 [ $numnodes -eq 2 ] &&
 	set_property no-quorum-policy ignore
 
-set_property expected-nodes $numnodes
+set_property expected-nodes $numnodes overwrite
 
 info "Done converting ha.cf to openais.conf"
 important "Please check the resulting $AIS_CONF"
 important "and in particular interface stanzas and logging."
 important "If you find problems, please edit $AIS_CONF now!"
 #
 # first part done (openais), on to the CIB
 
 analyze_cib() {
 	info "Analyzing the CIB..."
 	$MYSUDO sh -c "python $PY_HELPER analyze_cib <$CIB"
 }
 check_respawns() {
 	rc=1
 	for p in $SUPPORTED_RESPAWNS; do
 		grep -qs "^respawn.*$p" $HA_CF && {
 			info "a $p resource has to be created"
 			rc=0
 		}
 	done
 	return $rc
 }
 
 part2() {
 	intro_part2 || return 0
 	opts="-c $HA_CF"
 	[ "$TEST_DIR" ] && opts="-T $opts"
 	py_proc_cib $opts convert_cib
 	info "Processed the CIB successfully"
-	tune_ocfs2 `$MYSUDO sh -c "python $PY_HELPER $opts print_ocfs2_devs <$CIB"`
 }
 # make the user believe that something's happening :)
 some_dots_idle() {
+	[ "$TEST_DIR" ] && return
 	cnt=0
 	printf "$2 ."
 	while [ $cnt -lt $1 ]; do
 		sleep 1
 		printf "."
 		cnt=$((cnt+1))
 	done
 	echo
 }
 print_dc() {
 	crm_mon -1 | awk '/Current DC/{print $3}'
 }
 dcidle() {
 	dc=`$MYSUDO print_dc`
 	if [ "$dc" = "$WE" ]; then
 		maxcnt=60 cnt=0
 		while [ $cnt -lt $maxcnt ]; do
 			stat=`$MYSUDO crmadmin -S $dc`
 			echo $stat | grep -qs S_IDLE && break
 			[ "$1" = "-v" ] && echo $stat
 			sleep 1
 			printf "."
 			cnt=$((cnt+1))
 		done
 		echo $stat | grep -qs S_IDLE
 	else
 		some_dots_idle 10 "waiting" # just wait for 10 seconds
 	fi
 }
 wait_crm() {
+	[ "$TEST_DIR" ] && return
 	cnt=10
 	dc=""
 	while [ -z "$dc" -a $cnt -gt 0 ]; do
 		dc=`$MYSUDO print_dc`
 		cnt=$((cnt-1))
 	done
 
 	if [ x = x"$dc" ]; then
 		echo "sorry, no dc found/elected"
 		exit 1
 	fi
 	dcidle
 }
 manage_cluster() {
-	$DRY /etc/init.d/openais $1
+	if [ "$TEST_DIR" ]; then
+		echo would run: /etc/init.d/openais $1
+	else
+		$DRY /etc/init.d/openais $1
+	fi
 }
 tune_ocfs2() {
-	[ $# -eq 0 ] && return
 	cat<<EOF
 The ocfs2 metadata has to change to reflect the cluster stack
 change. To do that, we have to start the cluster stack on
 this node.
 EOF
 	pls_press_enter
 	py_proc_cib manage_ocfs2 stop
-	[ "$TEST_DIR" ] || {
 	manage_cluster start
 	some_dots_idle 10 "waiting for crm to start"
 	if $DRY wait_crm; then
 		for fsdev; do
 			info "converting the ocfs2 metadata on $fsdev"
-			[ "$TEST_DIR" ] || $DRY tunefs.ocfs2 --update-cluster-stack -y $fsdev
+			if [ "$TEST_DIR" ]; then
+				echo would run: tunefs.ocfs2 --update-cluster-stack -y $fsdev
+			else
+				$DRY tunefs.ocfs2 --update-cluster-stack -y $fsdev
+			fi
 		done
 	else
 		fatal "could not start pacemaker; please check the logs"
 	fi
 	manage_cluster stop
-	}
 	py_proc_cib manage_ocfs2 start
 }
+convert_csm() {
+	info "converting all EVMS2 CSM containers"
+	if [ "$TEST_DIR" ]; then
+		echo would run: /usr/sbin/csm-converter --scan
+	else
+		$DRY /usr/sbin/csm-converter --scan ||
+			fatal "CSM conversion failed! Aborting"
+	fi
+}
 
 analyze_cib
 rc=$?
 [ $rc -gt 1 ] && fatal "error while analyzing CIB"
 if [ $rc -eq 1 ] || check_respawns; then
 	part2
 else
 	info "No need to process CIB further"
 fi
 
 # upgrade the CIB to v1.0
 if [ "$UPGRADE" ] || confirm "Do you want to upgrade the CIB to v1.0?"; then
 	upgrade_cib
 	info "Upgraded the CIB to v1.0"
 else
 	info "Skipped upgrading the CIB to v1.0"
 	important "You should do this sooner rather than later!"
 fi
 fixcibperms
 
+ocfs2_devs=`$MYSUDO sh -c "python $PY_HELPER $opts print_ocfs2_devs <$CIB"`
+[ "$ocfs2_devs" ] &&
+	tune_ocfs2 $ocfs2_devs
+convert_csm
+
 [ "$TEST_DIR" ] && exit
 
 $DRY touch $DONE_F
 
 # finally, copy files to all nodes
 info "Copying files to other nodes ..."
 info "(please provide root password if prompted)"
 ssh_opts="-l root $SSH_OPTS"
 rc=0
 for node in `getnodes`; do
 	[ "$node" = "$WE" ] &&
 		continue
 	if [ "$DRY" ]; then
 		$DRY "(cd / && tar cf - $DIST_FILES) |
 		ssh $ssh_opts $node \"rm -f $REMOTE_RM_FILES &&
 			cd / && tar xf -\""
 	else
 		echo "Copying to node $node ..."
 		(cd / && tar cf - $DIST_FILES) |
 		ssh $ssh_opts $node "rm -f $REMOTE_RM_FILES &&
 			cd / && tar xf -"
 		let rc=$rc+$?
 	fi
 done
 info "Done transferring files"
 if [ $rc -ne 0 ]; then
 	warning "we could not update some ssh nodes"
 	important "before starting the cluster stack on those nodes:"
 	important "copy and unpack $MAN_TARF (from the / directory)"
 	important "and execute: rm -f $REMOTE_RM_FILES"
 	(cd / && tar cf - $DIST_FILES | gzip > $MAN_TARF)
 fi