One of the things that I've been thinking about lately is how to do more testing of builds in updates-testing without needing to rely on pre-written scripted test cases. There's nothing wrong with scripted test cases, but they're a bit painful to write and they aren't always the best way to test a package.
Session-Based Test Management (SBTM) is a testing technique that doesn't rely on pre-scripted test cases and produces more useful output than "I did some testing on foo". Ideally, this output serves both as a record of how foo was tested and as some loose documentation for others on how foo could be tested in the future. I figured that I would run a simple testing session to demonstrate how a session can be used to decide what kind of karma to give a critpath update in updates-testing. I chose parted-2.3-11.fc15 because:
- I'm already familiar with at least part of what parted is supposed to do
- Its interface is relatively simple
- parted is critpath and the newest F15 build is still in updates-testing (at the time I'm writing this)
Yes, this is a somewhat manufactured test, and doing a testing session for updates-testing validation is probably overkill. However, I want to start small and simple until I can better figure out how I want to format testing logs and which tools work best for a testing session.
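In case anyone wants to follow along, the only real setup is pulling the build down from updates-testing and confirming the version; something along these lines (as root, or via su) should do it:

    # grab the parted build from updates-testing
    $ su -c 'yum --enablerepo=updates-testing update parted'
    # confirm the installed version -- it should now report parted-2.3-11.fc15
    $ rpm -q parted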
I made a screencast of my testing session, which is available on YouTube.
The raw testing log after this session is available here and the screen log is available here.
Now that my session is done, I want to go through the raw log and do the following:
- Add the actual commands that I used to my log (roughly the sort of thing sketched after this list)
  - I'm leaving out the command output because it can be rather verbose, and I've included the screen log in case there is anything interesting in there
- Change the order of the log to reflect what I actually did
- Research anything that I was confused by
- Determine whether there are any bugs to file
- Come to a conclusion on what kind of karma to give
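To give a rough idea of what "actual commands" means here, a parted run against a scratch loopback image looks something like the following; the image name, device, and sizes are placeholders for illustration rather than a copy of my log, and the loop/parted commands need to be run as root:

    # create a scratch disk image and attach it to a free loop device
    $ truncate -s 1G scratch.img
    $ losetup -f --show scratch.img        # prints the device it used, e.g. /dev/loop0
    # exercise some basic parted operations against the loop device
    $ parted /dev/loop0 mklabel msdos
    $ parted /dev/loop0 mkpart primary ext2 1MiB 512MiB
    $ parted /dev/loop0 print
    $ parted /dev/loop0 rm 1
    # detach the loop device when done
    $ losetup -d /dev/loop0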
After editing, my final testing log looks like this. I think the notes should be self-explanatory as to how I put them together, but I still have some questions to answer:
What is a better way to record things like '@action' and '@unclear'?
I don't like the use of @blah for this; it doesn't feel natural, but I'm not thinking of anything better at the moment.
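For context, the tags are just inline markers in the plain-text log; a made-up entry (not an excerpt from the real log) looks roughly like this:

    @action created an msdos label and a single primary partition on the loop device
    @unclear not sure whether the warning printed after mkpart is expected behaviour -- needs follow-up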
Do I limit reporting my results to giving karma?
In this case, there were no bugs to file and I didn't attempt to verify any fixes, but I can see how referencing a testing log in a bug report or fix verification report would be useful.
Are testing logs like this useful to anyone?
I probably need to do something more complicated before attempting to answer this.
If logs like this are useful, how do I share my testing logs so that others can find them and use them?
fedorapeople.org is great for storing files, but there are limits to how well everything can be tied together if other people start doing this, too.
I'm going to keep trying to figure out whether this style of testing is beneficial for Fedora, but I would love to hear thoughts on the topic (good idea, bad idea, suggestions for improvement, etc.). Otherwise, I'm planning to do another, more complicated session soon in an attempt to figure some of this out.