Down With Test Automation! Long Live Task Automation!

It seems like at least once or twice a year, there is a renewed call for more automated testing, or someone starts talking about some new form of automated testing that we could do to make Fedora better. Every single time this comes up, the response is along the lines of "Yeah, that would be cool ... but AutoQA can't do that because of X, Y and/or Z. If we had more resources, it'd get done faster, but we're hoping to get that real-soon-now™ ...". I imagine that you're all tired of hearing it, but I can pretty much guarantee you that we're even more tired of saying it.

With that in mind, I've been thinking about how to move forward with AutoQA and automated "testing" in Fedora. There has been a little bit of conversation around requirements and around using beaker versus continuing to use autotest, but very little of it has actually been productive - mostly talk.

When I was at PyCon NA this year, I was talking to some other Fedora contributors about a testing idea and three things happened:

  1. I gave the usual "AutoQA can't ..." statement.
  2. I said that it would be great to fix/replace AutoQA.
  3. I asked what they were doing for the sprints, as I hadn't figured out what I would be contributing to yet.

"Why not take a stab at fixing AutoQA?" - the question that pretty much defined what I did at the sprints and have been working on since as time has allowed.

What Needs to be Changed?

Previous automation efforts in Fedora have focused on an initial goal which we call the Package Update Acceptance Test Plan (PUATP). The basic idea is that there should be automation support to provide reasonable assurance that a package isn't going to blow up and cause huge problems once it arrives in the stable repositories.

Those tests are running reasonably well in AutoQA, but we'd really like to start running more kinds of tests:

  • Kernel tests
  • Automated installation tests
  • Cloud image sanity tests
  • RHEL-sourced tests (things currently run by Red Hat that would also be really useful for Fedora)
  • Any number of other ideas which aren't mentioned here

In its current state, AutoQA isn't capable of running any of these things and it's going to require a non-trivial amount of effort to really support them well.

The problem is compounded by a few details in the current design of AutoQA:

  • Tests have to be included in the main AutoQA package.

    • Test updates are a huge hassle since everything has to be updated at the same time.
    • Accepting test contributions from non-core devs is more problematic than it should be.
  • There is very tight coupling between AutoQA tests, production Fedora infrastructure (koji, bodhi, etc.) and the test runner (autotest).

    • Development environment setup is a huge pain and is not very well documented.
    • Testing anything is far more difficult than it should be. We've worked around this a little bit by writing a fake bodhi provider, but that has its own set of problems (see the sketch after this list).
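
Since the fake bodhi provider keeps coming up, here's a minimal sketch of the kind of decoupling I mean. To be clear, this is a hypothetical illustration, not actual AutoQA code - every name in it (BodhiProvider, FakeBodhiProvider, check_update) is invented for this example. The idea: if checks talk to bodhi through a small provider interface, a development environment can swap in a fake without ever touching production infrastructure.

    # Hypothetical illustration only - not actual AutoQA code.
    # All names here are invented for this example.

    class BodhiProvider(object):
        """Real provider: would wrap the actual bodhi client (omitted)."""
        def get_update(self, update_id):
            raise NotImplementedError("would query the real bodhi server")

    class FakeBodhiProvider(object):
        """Fake provider for development: serves canned update metadata."""
        def __init__(self, updates):
            self._updates = updates

        def get_update(self, update_id):
            return self._updates[update_id]

    def check_update(provider, update_id):
        # The check doesn't know or care which provider it was handed,
        # so it runs the same way on a laptop as in production.
        update = provider.get_update(update_id)
        return "PASSED" if update.get("karma", 0) >= 0 else "FAILED"

    # In a development environment, no bodhi server is needed:
    fake = FakeBodhiProvider({"FEDORA-2013-1234": {"karma": 1}})
    print(check_update(fake, "FEDORA-2013-1234"))

The same pattern would apply to koji or any other production service that the current tests hard-code.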

As I was going through the process of figuring out what types of things we would want to support, I started thinking about generating test images from new anaconda builds, running automated installation smoke tests on those images, and other things that either don't fit into the traditional paradigm of "test automation" or can't be done with the current iteration of AutoQA.

Why limit ourselves to "tests" from the start? If QA meant "Quality Assurance", I could kind of see that, but as I've written about before, I reject the concept of "Software Quality Assurance", especially in the context of Fedora QA.

If we think of QA as "Quality Assistance", why limit ourselves to just working on more traditional test automation? Why not work on things that help improve the quality of Fedora, even if they aren't generally associated with test automation?

Making a Case for Task Automation

We could figure out how to shoehorn ephemeral test clients and beakerlib into autotest, or cloud instance creation and non-beaker tests into beaker without using the full stack that beaker is designed for ... or we could focus on running tasks and keep enough flexibility to let people run pretty much whatever they can come up with.

Am I suggesting that we do away with all automated testing or burn the existing AutoQA to the ground? No, I'm not - that would waste quite a bit of effort. I'm not sure how much of the existing AutoQA codebase could be carried over at this point, but I'd prefer to keep as much of it as possible.

Getting Started

I don't assume that I'm smart or experienced enough to anticipate every single use case that Fedorans (developers or testers) could come up with - I'd be a bit full of myself if I pretended I was. With AutoQA, we've proven that the current AutoQA developers don't have enough human resources to expand our automated testing very far, so let's make something that is easy enough for any developer or tester with scripting experience to use and flexible enough to do any kind of quality task they can come up with.

Obviously, there is a cost-benefit trade-off in being too flexible, but I think we can strike a good enough balance between complexity and flexibility that the result is actually used and useful.

I spent quite a bit of time at PyCon, and some time since then, coming up with a proof-of-concept system for what I'm calling "Task Automation" - an overall system that decouples task scheduling, task execution, result reporting and machine instance management to the point where all the bits can work together but some parts (specifically the tests and reporting) can operate standalone. For now, I'm calling the overall system Taskbot.
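
To make that decoupling a little more concrete before the follow-up article, here's a rough sketch of what a standalone task could look like. None of this is Taskbot's real interface - run_task, report_result and the rpmlint example are names I've invented purely for illustration (assuming Python 3 and rpmlint installed):

    # Hypothetical sketch only - not real Taskbot code.
    import subprocess
    import sys

    def run_task(item):
        # The scheduler (or a human at a shell) hands the task an item
        # to examine - here, the path to a freshly built RPM.
        proc = subprocess.run(["rpmlint", item],
                              stdout=subprocess.PIPE,
                              stderr=subprocess.STDOUT)
        outcome = "PASSED" if proc.returncode == 0 else "FAILED"
        return {"item": item, "outcome": outcome,
                "log": proc.stdout.decode("utf-8", "replace")}

    def report_result(result):
        # Stand-in for a pluggable reporting backend: in the full system
        # this might post to a results store; standalone, it just prints.
        print("{0}: {1}".format(result["item"], result["outcome"]))

    if __name__ == "__main__":
        report_result(run_task(sys.argv[1]))

The rpmlint call isn't the point; the point is that nothing in the task knows about autotest, koji or any particular scheduler, so the same script runs under the full system or by hand on a developer's box.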

I have this proof-of-concept running on my own systems for now, but I'm going to be transitioning to publicly viewable systems soon - I see little point in trying to take this very far by myself, behind closed doors.

Anyhow, this post is long enough already, so I'll start explaining the nitty-gritty details of my vision for Taskbot and its components in another article :)