glv = gulm lock verify
This is a tool to verify that the lock state machine in gulm is working.
This tool has its own server/client system. The clients of glv are
themselves gulm services (they link against libgulm). The server is started
with a test file and a list of nodes. It sends out a command to start the
client on these nodes, where gulm is already up and running. It then walks
through the test file, making sure that it sees the results it expects.
Tests don't specify which nodes to run on; instead they specify how many
they need. Nodes are all referenced by an index number. Then for the
actual run, the real node names are given on the command line.
Running a test is mostly just picking the test to run, making sure you have
the required number of nodes, starting lock_gulmd on all of them, copying
glvc to each, and then running glvd with the name of the test file and the
node names.
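As a concrete sketch, a three-node run might look like the following. The
glvd argument order, the test file name "sometest", and the install path
are assumptions for illustration, not documented behavior:

    # Hypothetical three-node run; node1..node3 are placeholder hostnames.
    for n in node1 node2 node3; do
        ssh $n lock_gulmd               # start the lock daemon on each node
        scp glvc $n:/usr/local/bin/     # copy the glv client to each node
    done
    # Run the server with the test file and the node names; the names
    # are bound to the test's node index numbers by their position here.
    glvd sometest node1 node2 node3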
If a test fails, the reason is sent to stderr, and glvd returns with 1.
Otherwise a success message gets printed to stdout, and glvd returns 0.
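Since the exit status is reliable, glvd is easy to drive from a wrapper
script. A minimal sketch, reusing the same hypothetical invocation as
above:

    # 0 = pass, 1 = fail; the failure reason goes to stderr.
    if glvd sometest node1 node2; then
        echo "sometest passed"
    else
        echo "sometest failed; the reason is on stderr above" >&2
        exit 1
    fi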
glv can only work with what is returned. To really know things are
working, you should know what residue a test leaves behind, and investigate
the lock dump from the server to make sure it is correct. With that in
mind, it is worth the effort to make all tests unlock everything before
finishing. That way, if nothing else is running, the lockspace will be
empty.
Also, all tests assume a clean lockspace, so if a test is failing,
restart the lock servers first.
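One hedged way to do that from a script; killall plus a bare restart is an
assumption here, so substitute whatever init mechanism your nodes actually
use for lock_gulmd:

    # Hypothetical reset: bounce the lock servers so the lockspace is empty.
    for n in node1 node2 node3; do
        ssh $n 'killall lock_gulmd; sleep 1; lock_gulmd'
    done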