While I could sit here and write pages on what I learned at PyCon, I think it would be a bit more productive and readable to list a few of the highlights. Going into PyCon, my major interests were testing techniques (for AutoQA) and data analysis and visualization (mostly for school right now, but I'm hoping to make that useful for Fedora). Most of the data analysis stuff I learned can't be summarized well as bullet points, but it should show up as I make progress on my school projects and start making those results more public.
One of the biggest non-Python things I learned at PyCon was the need to coordinate with others before an event like this. Outside of FUDCon, PyCon 2012 was my first conference and I wasn't quite sure what to expect. After talking with other Fedora contributors (lmacken, threebean, dmalcolm), I really wish I had stayed for the sprints following PyCon. In particular, it would have been nice to work with Ralph and Luke on moksha and learn a bit more about what they're planning for Fedora Messaging and Bodhi 2.0, but it isn't the end of the world.
General Takeaways
- IPython notebook is AWESOME
- Pandas is really powerful and relatively easy to use (see the sketch just after this list)
- People are working on better ways to record presentations and facilitate remote presentations - I will be talking about this more in other posts
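To give a flavor of the first two bullets, here is a minimal sketch of the kind of thing Pandas makes easy inside an IPython notebook. The file name and column names are made up for illustration; this isn't from any particular talk.

```python
# Minimal Pandas sketch (hypothetical file and column names):
# summarize test results per package and look at failure runtimes.
import pandas as pd

results = pd.read_csv("test_results.csv")  # columns: package, test, outcome, seconds

# pass/fail counts per package in one line
print(results.groupby("package")["outcome"].value_counts())

# average runtime of the failing tests
failures = results[results["outcome"] == "FAIL"]
print(failures["seconds"].mean())
```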
I tried working with dmalcolm to start creating an AutoQA test for the Python extension analysis code he's been working on and discovered that I don't know nearly enough about creating new tests for AutoQA. We found some oddities in there that I don't quite understand, but I'm looking into whether that is an issue with the code or just a lack of experience on my part.
Testing Technique
- If you need more human time to analyse and process results than you need to actually run your tests, you have failed.
  - I think that this is applicable to a point - sometimes it doesn't make sense to automate, but at the same time, the general idea is worth keeping in mind.
- Fine-grained unit tests around bad code will tend to "bake in" the crappy API that is already there. Writing tests around bad code is great as a safety net for refactoring, but don't take it too far.
- Overly complicated unit test setup is a test smell - either you're doing something wrong in the tests or the code under test needs to be refactored.
  - Unfortunately, this tends to be what some of my tests end up looking like. I'm not sure why this never registered as a code smell, but it will now.
- You should care for your unit tests and mocks like you would care for your code base.
  - If the tests are difficult to run or take too long, they will be ignored. If they aren't testing what you think they're testing, they're useless.
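To make the "overly complicated setup" smell concrete, here is a contrived example in plain unittest. The Reporter class and format_report function are made up for illustration; the point is that when setUp has to mock a pile of collaborators just to check a string, the tests are telling you something about the code.

```python
import unittest
from unittest import mock

# Hypothetical code under test, only here to make the smell concrete.
class Reporter:
    def __init__(self, config, db, mailer, scheduler):
        self.config, self.db, self.mailer, self.scheduler = config, db, mailer, scheduler

    def summary(self, test_name, outcome):
        return f"{test_name}: {outcome}"

def format_report(test_name, outcome):
    return f"{test_name}: {outcome}"

class TestReporterSmell(unittest.TestCase):
    """The smell: mocking four collaborators just to check some string formatting."""
    def setUp(self):
        self.config = mock.Mock()
        self.db = mock.Mock()
        self.mailer = mock.Mock()
        self.scheduler = mock.Mock()
        self.reporter = Reporter(self.config, self.db, self.mailer, self.scheduler)

    def test_summary(self):
        self.assertEqual(self.reporter.summary("depcheck", "PASSED"), "depcheck: PASSED")

class TestFormatReport(unittest.TestCase):
    """After pulling the formatting out into a plain function, no setup is needed."""
    def test_format(self):
        self.assertEqual(format_report("depcheck", "PASSED"), "depcheck: PASSED")

if __name__ == "__main__":
    unittest.main()
```

In my tests, the fix is usually the second half: pull the pure logic out so that most tests need little or no mocking.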
What am I going to do with all of this?
- Learn how to write new AutoQA tests and make sure that our documentation is good.
  - Even though we can't support external tests well right now, I still want to understand what it would take to write and submit a new test in order to get a better idea of where the pain points are and what should change in AutoQA so that we can start accepting external tests.
- Start looking at how we can break up AutoQA to be more modular.
  - Some of this has already been brought up on autoqa-devel and we're planning to start working on it after 0.8 is released.
  - Decoupling some parts should lead to better self-test coverage.
- Refactor the test runner for AutoQA's self tests.
  - It isn't structured well and I've learned a lot since I wrote it the first time.
  - The tests need to be easier to run and better set up so that we can start writing more of them (a rough sketch of the direction I'm thinking of is below).
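For what it's worth, the direction I'm leaning toward is boring and obvious: plain unittest discovery so that running the self tests is a single command. This is just a sketch under the assumption of a tests/ directory full of test_*.py files, not a description of how the current runner is laid out.

```python
#!/usr/bin/env python
# Sketch of a simpler self-test runner: discover everything under tests/
# and exit nonzero on failure so it can be wired into CI or a Makefile.
import sys
import unittest

def main():
    suite = unittest.defaultTestLoader.discover("tests", pattern="test_*.py")
    result = unittest.TextTestRunner(verbosity=2).run(suite)
    return 0 if result.wasSuccessful() else 1

if __name__ == "__main__":
    sys.exit(main())
```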
Granted, all of this is contingent on how much time I have left over after testing Fedora 17. The beta release is coming up and there is no shortage of testing to do.