diff --git a/doc/Pacemaker_Explained/en-US/Ch-Basics.txt b/doc/Pacemaker_Explained/en-US/Ch-Basics.txt
index 5134e69a2c..8d80215976 100644
--- a/doc/Pacemaker_Explained/en-US/Ch-Basics.txt
+++ b/doc/Pacemaker_Explained/en-US/Ch-Basics.txt
@@ -1,393 +1,393 @@
 = Configuration Basics =
 
 == Configuration Layout ==
 
 The cluster is defined by the Cluster Information Base (CIB),
 which uses XML notation. The simplest CIB, an empty one, looks like this:
 
 .An empty configuration
 ======
 [source,XML]
 -------
 <cib crm_feature_set="3.0.7" validate-with="pacemaker-1.2" admin_epoch="1" epoch="0" num_updates="0">
   <configuration>
     <crm_config/>
     <nodes/>
     <resources/>
     <constraints/>
   </configuration>
   <status/>
 </cib>
 -------
 ======
 
 The empty configuration above contains the major sections that make up a CIB:
 
 * +cib+: The entire CIB is enclosed with a +cib+ tag. Certain fundamental settings
   are defined as attributes of this tag.
 
   ** +configuration+: This section -- the primary focus of this document --
      contains traditional configuration information such as what resources the
      cluster serves and the relationships among them.
 
     *** +crm_config+: cluster-wide configuration options
     *** +nodes+: the machines that host the cluster
     *** +resources+: the services run by the cluster
     *** +constraints+: indications of how resources should be placed
 
   ** +status+: This section contains the history of each resource on each node.
     Based on this data, the cluster can construct the complete current
     state of the cluster.  The authoritative source for this section
     is the local resource manager (lrmd process) on each cluster node, and
     the cluster will occasionally repopulate the entire section.  For this
     reason, it is never written to disk, and administrators are advised
     against modifying it in any way.
 
 In this document, configuration settings will be described as 'properties' or 'options'
 based on how they are defined in the CIB:
 
 * Properties are XML attributes of an XML element.
 * Options are name-value pairs expressed as +nvpair+ child elements of an XML element.
 
 Normally you will use command-line tools that abstract the XML, so the
 distinction will be unimportant; both properties and options are
 cluster settings you can tweak.
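 
 As an illustration (the values shown are only an example), +admin_epoch+, +epoch+ and
 +num_updates+ below are 'properties' of the +cib+ element, while +stonith-enabled+ is an
 'option' expressed as an +nvpair+ inside a +cluster_property_set+:
 
 .Properties and options in CIB XML
 ======
 [source,XML]
 -------
 <cib crm_feature_set="3.0.7" validate-with="pacemaker-1.2" admin_epoch="1" epoch="1" num_updates="0">
   <configuration>
     <crm_config>
       <cluster_property_set id="cib-bootstrap-options">
         <nvpair id="cib-bootstrap-options-stonith-enabled" name="stonith-enabled" value="1"/>
       </cluster_property_set>
     </crm_config>
     <nodes/>
     <resources/>
     <constraints/>
   </configuration>
   <status/>
 </cib>
 -------
 ======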
 
 == The Current State of the Cluster ==
 
 Before one starts to configure a cluster, it is worth explaining how
 to view the finished product.  For this purpose we have created the
 `crm_mon` utility, which will display the
 current state of an active cluster.  It can show the cluster status by
 node or by resource and can be used in either single-shot or
 dynamically-updating mode.  There are also modes for displaying a list
 of the operations performed (grouped by node and resource) as well as
 information about failures.
 
 Using this tool, you can examine the state of the cluster for
 irregularities and see how it responds when you cause or simulate
 failures.
 
 Details on all the available options can be obtained using the
 `crm_mon --help` command.
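 
 For example, a few common invocations (exact option names can vary between versions,
 so check `crm_mon --help` on your system) look like this:
 
 ----
 # crm_mon                # continuously updating display of the cluster status
 # crm_mon -1             # display the status once and exit
 # crm_mon -n -1          # group resources by node
 # crm_mon -o -f -1       # also show operation history and fail counts
 ----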
       
 .Sample output from crm_mon
 ======
 -------
   ============
   Last updated: Fri Nov 23 15:26:13 2007
   Current DC: sles-3 (2298606a-6a8c-499a-9d25-76242f7006ec)
   3 Nodes configured.
   5 Resources configured.
   ============
   
   Node: sles-1 (1186dc9a-324d-425a-966e-d757e693dc86): online
       192.168.100.181    (heartbeat::ocf:IPaddr):    Started sles-1
       192.168.100.182    (heartbeat:IPaddr):         Started sles-1
       192.168.100.183    (heartbeat::ocf:IPaddr):    Started sles-1
       rsc_sles-1         (heartbeat::ocf:IPaddr):    Started sles-1
       child_DoFencing:2  (stonith:external/vmware):  Started sles-1
   Node: sles-2 (02fb99a8-e30e-482f-b3ad-0fb3ce27d088): standby
   Node: sles-3 (2298606a-6a8c-499a-9d25-76242f7006ec): online
       rsc_sles-2    (heartbeat::ocf:IPaddr):    Started sles-3
       rsc_sles-3    (heartbeat::ocf:IPaddr):    Started sles-3
       child_DoFencing:0    (stonith:external/vmware):    Started sles-3
 -------
 ======
       
 .Sample output from crm_mon -n
 ======
 -------
   ============
   Last updated: Fri Nov 23 15:26:13 2007
   Current DC: sles-3 (2298606a-6a8c-499a-9d25-76242f7006ec)
   3 Nodes configured.
   5 Resources configured.
   ============
 
   Node: sles-1 (1186dc9a-324d-425a-966e-d757e693dc86): online
   Node: sles-2 (02fb99a8-e30e-482f-b3ad-0fb3ce27d088): standby
   Node: sles-3 (2298606a-6a8c-499a-9d25-76242f7006ec): online
 
   Resource Group: group-1
     192.168.100.181    (heartbeat::ocf:IPaddr):    Started sles-1
     192.168.100.182    (heartbeat:IPaddr):        Started sles-1
     192.168.100.183    (heartbeat::ocf:IPaddr):    Started sles-1
   rsc_sles-1    (heartbeat::ocf:IPaddr):    Started sles-1
   rsc_sles-2    (heartbeat::ocf:IPaddr):    Started sles-3
   rsc_sles-3    (heartbeat::ocf:IPaddr):    Started sles-3
   Clone Set: DoFencing
     child_DoFencing:0    (stonith:external/vmware):    Started sles-3
     child_DoFencing:1    (stonith:external/vmware):    Stopped
     child_DoFencing:2    (stonith:external/vmware):    Started sles-1
 -------
 ======
 
 The DC (Designated Controller) node is where all the decisions are
 made, and if the current DC fails a new one is elected from the
 remaining cluster nodes.  The choice of DC is of no significance to an
 administrator beyond the fact that its logs will generally be more
 interesting.
 
 == How Should the Configuration be Updated? ==
 
 There are three basic rules for updating the cluster configuration:
 
  * Rule 1 - Never edit the +cib.xml+ file manually. Ever. I'm not making this up.
  * Rule 2 - Read Rule 1 again.
  * Rule 3 - The cluster will notice if you ignore rules 1 and 2, and will refuse to use the configuration.
 
 Now that it is clear how 'not' to update the configuration, we can begin
 to explain how you 'should'.
 
 === Editing the CIB Using XML ===
 
 The most powerful tool for modifying the configuration is the
 +cibadmin+ command.  With +cibadmin+, you can query, add, remove, update
 or replace any part of the configuration. All changes take effect immediately,
 so there is no need to perform a reload-like operation.
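 
 For instance, a single targeted change can be made by passing the new XML directly on
 the command line. As a rough sketch (this assumes the +stonith-enabled+ +nvpair+ shown
 later in this chapter already exists in your configuration), the following would update
 that option in place:
 
 ----
 # cibadmin --modify --xml-text '<nvpair id="cib-bootstrap-options-stonith-enabled" value="1"/>'
 ----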
 
 The simplest way of using `cibadmin` is to use it to save the current
 configuration to a temporary file, edit that file with your favorite
 text or XML editor, and then upload the revised configuration. footnote:[This
 process might appear to risk overwriting changes that happen after the initial
 cibadmin call, but pacemaker will reject any update that is "too old". If the
 CIB is updated in some other fashion after the initial cibadmin, the second
 cibadmin will be rejected because the version number will be too low.]
       
 .Safely using an editor to modify the cluster configuration
 ======
 --------
 # cibadmin --query > tmp.xml
 # vi tmp.xml
 # cibadmin --replace --xml-file tmp.xml
 --------
 ======
 
 Some of the better XML editors can make use of a RELAX NG schema to
 help make sure any changes you make are valid.  The schema describing
 the configuration can be found in +pacemaker.rng+, which may be
 deployed in a location such as +/usr/share/pacemaker+ or
 +/usr/lib/heartbeat+ depending on your operating system and how you
 installed the software.
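 
 If your editor cannot do this, you can still check an edited file against the schema
 before uploading it, for example with `xmllint` (adjust the schema path to wherever
 +pacemaker.rng+ lives on your system):
 
 ----
 # xmllint --relaxng /usr/share/pacemaker/pacemaker.rng --noout tmp.xml
 ----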
 
 If you want to modify just one section of the configuration, you can
 query and replace just that section to avoid modifying any others.
       
 .Safely using an editor to modify only the resources section
 ======
 --------
-# cibadmin --query --obj_type resources > tmp.xml
+# cibadmin --query --scope resources > tmp.xml
 # vi tmp.xml
-# cibadmin --replace --obj_type resources --xml-file tmp.xml
+# cibadmin --replace --scope resources --xml-file tmp.xml
 --------
 ======
 
 === Quickly Deleting Part of the Configuration ===
 
 Identify the object you wish to delete by XML tag and id. For example,
 you might search the CIB for all STONITH-related configuration:
       
 .Searching for STONITH-related configuration items
 ======
 ----
 # cibadmin -Q | grep stonith
  <nvpair id="cib-bootstrap-options-stonith-action" name="stonith-action" value="reboot"/>
  <nvpair id="cib-bootstrap-options-stonith-enabled" name="stonith-enabled" value="1"/>
  <primitive id="child_DoFencing" class="stonith" type="external/vmware">
  <lrm_resource id="child_DoFencing:0" type="external/vmware" class="stonith">
  <lrm_resource id="child_DoFencing:0" type="external/vmware" class="stonith">
  <lrm_resource id="child_DoFencing:1" type="external/vmware" class="stonith">
  <lrm_resource id="child_DoFencing:0" type="external/vmware" class="stonith">
  <lrm_resource id="child_DoFencing:2" type="external/vmware" class="stonith">
  <lrm_resource id="child_DoFencing:0" type="external/vmware" class="stonith">
  <lrm_resource id="child_DoFencing:3" type="external/vmware" class="stonith">
 ----
 ======
 
 If you wanted to delete the +primitive+ tag with id +child_DoFencing+,
 you would run:
 
 ----
 # cibadmin --delete --xml-text '<primitive id="child_DoFencing"/>'
 ----
 
 === Updating the Configuration Without Using XML ===
 
 Most tasks can be performed with one of the other command-line
 tools provided with pacemaker, avoiding the need to read or edit XML.
 
 To enable STONITH for example, one could run:
 
 ----
 # crm_attribute --name stonith-enabled --update 1
 ----
 
 Or, to check whether *somenode* is allowed to run resources, there is:
 
 ----
 # crm_standby --get-value --node somenode
 ----
 
 Or, to find the current location of *my-test-rsc*, one can use:
 
 ----
 # crm_resource --locate --resource my-test-rsc
 ----
 
 Examples of using these tools for specific cases will be given throughout this
 document where appropriate.
 
 [NOTE]
 ====
 Old versions of pacemaker (1.0.3 and earlier) had different
 command-line tool syntax. If you are using an older version,
 check your installed manual pages for the proper syntax to use.
 ====
 
 [[s-config-sandboxes]]
 == Making Configuration Changes in a Sandbox ==
 
 Often it is desirable to preview the effects of a series of changes
 before updating the configuration atomically.  For this purpose we
 have created `crm_shadow`, which creates a
 "shadow" copy of the configuration and arranges for all the command-line
 tools to use it.
 
 To begin, simply invoke `crm_shadow --create` with
 the name of a configuration to create footnote:[Shadow copies are
 identified with a name, making it possible to have more than one.],
 and follow the simple on-screen instructions.
 
 [WARNING]
 ====
 Read this section and the on-screen instructions carefully; failure to do so could
 result in destroying the cluster's active configuration!
 ====
       
       
 .Creating and displaying the active sandbox
 ======
 ----
 # crm_shadow --create test
 Setting up shadow instance
 Type Ctrl-D to exit the crm_shadow shell
 shadow[test]: 
 shadow[test] # crm_shadow --which
 test
 ----
 ======
 
 From this point on, all cluster commands will automatically use the
 shadow copy instead of talking to the cluster's active configuration.
 Once you have finished experimenting, you can either make the
 changes active via the `--commit` option, or discard them using the `--delete`
 option.  Again, be sure to follow the on-screen instructions carefully!
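 
 For example, making the changes from a shadow instance named *test* active might look
 like this (the larger example below demonstrates the discard path instead):
 
 ----
 shadow[test] # crm_shadow --commit test
 ----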
       
 For a full list of `crm_shadow` options and
 commands, invoke it with the `--help` option.
 
 .Using a sandbox to make multiple changes atomically, discard them and verify the real configuration is untouched
 ======
 ----
  shadow[test] # crm_failcount -G -r rsc_c001n01
   name=fail-count-rsc_c001n01 value=0
  shadow[test] # crm_standby -v on -N c001n02
  shadow[test] # crm_standby -G -N c001n02
  name=c001n02 scope=nodes value=on
  shadow[test] # cibadmin --erase --force
  shadow[test] # cibadmin --query
  <cib cib_feature_revision="1" validate-with="pacemaker-1.0" admin_epoch="0" crm_feature_set="3.0" have-quorum="1" epoch="112"
       dc-uuid="c001n01" num_updates="1" cib-last-written="Fri Jun 27 12:17:10 2008">
     <configuration>
        <crm_config/>
        <nodes/>
        <resources/>
        <constraints/>
     </configuration>
     <status/>
  </cib>
   shadow[test] # crm_shadow --delete test --force
   Now type Ctrl-D to exit the crm_shadow shell
   shadow[test] # exit
   # crm_shadow --which
   No active shadow configuration defined
   # cibadmin -Q
  <cib cib_feature_revision="1" validate-with="pacemaker-1.0" admin_epoch="0" crm_feature_set="3.0" have-quorum="1" epoch="110"
        dc-uuid="c001n01" num_updates="551">
     <configuration>
        <crm_config>
           <cluster_property_set id="cib-bootstrap-options">
              <nvpair id="cib-bootstrap-1" name="stonith-enabled" value="1"/>
              <nvpair id="cib-bootstrap-2" name="pe-input-series-max" value="30000"/>
 ----
 ======
 
 [[s-config-testing-changes]]
 == Testing Your Configuration Changes ==
 
 We saw previously how to make a series of changes to a "shadow" copy
 of the configuration.  Before loading the changes back into the
 cluster (e.g. `crm_shadow --commit mytest --force`), it is often
 advisable to simulate the effect of the changes with +crm_simulate+.
 For example:
       
 ----
 # crm_simulate --live-check -VVVVV --save-graph tmp.graph --save-dotfile tmp.dot
 ----
 
 This tool uses the same library as the live cluster to show what it
 would have done given the supplied input.  Its output, in addition to
 a significant amount of logging, is stored in two files +tmp.graph+
 and +tmp.dot+. Both files are representations of the same thing: the
 cluster's response to your changes.
 
 The graph file stores the complete transition from the existing cluster state
 to your desired new state, containing a list of all the actions, their
 parameters and their pre-requisites. Because the transition graph is not
 terribly easy to read, the tool also generates a Graphviz
 footnote:[Graph visualization software. See http://www.graphviz.org/ for details.]
 dot-file representing the same information.
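 
 Assuming Graphviz is installed, the dot-file can be rendered into an image for easier
 inspection, for example:
 
 ----
 # dot -Tsvg tmp.dot -o tmp.svg
 ----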
 
 For information on the options supported by `crm_simulate`, use
 its `--help` option.
 
 .Interpreting the Graphviz output
  * Arrows indicate ordering dependencies
  * Dashed arrows indicate dependencies that are not present in the transition graph
  * Actions with a dashed border of any color do not form part of the transition graph
  * Actions with a green border form part of the transition graph
  * Actions with a red border are ones the cluster would like to execute but cannot run
  * Actions with a blue border are ones the cluster does not feel need to be executed
  * Actions with orange text are pseudo/pretend actions that the cluster uses to simplify the graph
  * Actions with black text are sent to the LRM
  * Resource actions have text of the form pass:[<replaceable>rsc</replaceable>]_pass:[<replaceable>action</replaceable>]_pass:[<replaceable>interval</replaceable>] pass:[<replaceable>node</replaceable>]
  * Any action depending on an action with a red border will not be able to execute. 
  * Loops are _really_ bad. Please report them to the development team. 
 
 === Small Cluster Transition ===
 
 image::images/Policy-Engine-small.png["An example transition graph as represented by Graphviz",width="16cm",height="6cm",align="center"]      
 
 In the above example, it appears that a new node, *pcmk-2*, has come
 online and that the cluster is checking to make sure *rsc1*, *rsc2*
 and *rsc3* are not already running there (indicated by the
 *rscN_monitor_0* entries).  Once it has done that, and assuming the resources
 are not active there, it would like to stop *rsc1* and *rsc2*
 on *pcmk-1* and move them to *pcmk-2*.  However, there appears to be
 some problem, and the cluster cannot or is not permitted to perform the
 stop actions, which implies that it also cannot perform the start actions.
 For some reason, the cluster does not want to start *rsc3* anywhere.
 
 === Complex Cluster Transition ===
 
 image::images/Policy-Engine-big.png["Another, slightly more complex, transition graph that you're not expected to be able to read",width="16cm",height="20cm",align="center"]
 
 == Do I Need to Update the Configuration on All Cluster Nodes? ==
 
 No. Any changes are immediately synchronized to the other active
 members of the cluster.
 
 To reduce bandwidth, the cluster only broadcasts the incremental
 updates that result from your changes and uses MD5 checksums to ensure
 that each copy is completely consistent.
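 
 If you nonetheless want to spot-check that two nodes hold identical copies, one informal
 way (not something the cluster itself requires) is to compare a digest of the CIB on each
 node; barring an update in flight at that moment, the checksums should match:
 
 ----
 # cibadmin --query | md5sum
 ----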