
From manual to automatic

July 22, 2011

Every well-seasoned tester knows the advantages and disadvantages that manual, semi-manual and automatic tests have when compared to each other. A manual test is easy to create: just a few simple words and you have your test. Automatic tests allow you to (almost) fire and forget about them, with your only concern being the PASS/FAIL at the end. A semi-manual test is a funny hybrid of the two, usually only used where a fully automated test is almost physically impossible (e.g. verifying screenshots, or tests involving peripherals). Manual tests are not good in situations where the same test must be run many times across a large number of configurations, which is exactly what we have in hardware certification, where we must run tests across ~100 systems on a very regular basis. To this end we’ve been taking the opportunity this development cycle to update some of our older tests to be more automated.

One of the tests that I updated was one which would cycle through the available resolutions on the system (using the xrandr tool) and ask the tester to verify that they all looked okay with no graphical corruption. That sort of test is fine when someone is running the tests on a one-off basis, but it’s not so good when one tester needs to supervise 50+ systems during a certification run. One of the main problems is that it causes too much context switching, with the tester constantly needing to keep an eye on all the systems to see if they’ve reached this test yet. It being a graphical test, fully automated verification is obviously difficult, so a compromise needed to be reached. The solution I came up with was to integrate screen capture into the test and then upload the captured screens as a tgz attachment with the test submission. All going well, the tester can sit down at their own computer, go through the screens and confirm they’re okay. In fact, the person verifying the screens doesn’t even need to be in the lab! The task can be distributed amongst any number of people, anywhere in the world.
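To give an idea of the approach, here is a simplified sketch rather than the actual certification test: it assumes an X session where xrandr and ImageMagick’s import command are available, and the output and file names are illustrative.

#!/bin/sh
# Sketch: cycle through the modes reported for the first connected output,
# grab a screenshot at each one, and bundle the lot up for later review.

OUTPUT=$(xrandr | awk '$2 == "connected" {print $1; exit}')

# The available modes are the indented lines listed under the connected output.
MODES=$(xrandr | awk -v out="$OUTPUT" '
    $1 == out   {in_block = 1; next}
    /connected/ {in_block = 0}
    in_block    {print $1}')

WORKDIR=$(mktemp -d)
for mode in $MODES; do
    xrandr --output "$OUTPUT" --mode "$mode"
    sleep 3                                     # let the display settle
    import -window root "$WORKDIR/screenshot_$mode.png"
done

tar czf resolution_screenshots.tgz -C "$WORKDIR" .
echo "Screenshots ready for review in resolution_screenshots.tgz"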

Another test that looked like a prime candidate for automation was one for testing the functioning of the wireless card before and after suspending the system. Previously the test case was:

– Disconnect the wireless interface.
– Reconnect and ensure you’re online.
– Suspend the system.
– Repeat the first two steps.

This was all specified to be done manually. I am currently updating this test to use nmcli to make sure a connection can be made, then disconnect and reconnect just as would happen if the tester did the steps manually using nm-applet. The one thing I haven’t got down pat yet is connecting to a wireless network where no connection existed before. This step may be optional, as it can be expected that the tester will connect manually at some point during the setup of the tests, so we can trust a connection to be available already. This means the test will have gone from manual to fully automated, and should hopefully shave a significant number of minutes off the whole test run!
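As a rough illustration of what the automated steps might look like, here is a sketch (not the final test): the interface name and connection id are placeholders, it assumes a wireless connection profile already exists in NetworkManager, and it uses the nmcli syntax from the 0.8/0.9 series.

#!/bin/sh
# Sketch: drop the wireless connection and bring it back up, the way
# nm-applet would, then verify we are actually online.  IFACE and
# CONNECTION are placeholders for this illustration.

IFACE=wlan0
CONNECTION="office-wireless"

check_online() {
    # A simple connectivity check: a few pings to a well-known address.
    ping -c 3 -W 5 8.8.8.8 > /dev/null 2>&1
}

nmcli dev disconnect iface "$IFACE" || exit 1
sleep 5
nmcli con up id "$CONNECTION" || exit 1
sleep 10

if check_online; then
    echo "PASS: wireless reconnected and we are online"
else
    echo "FAIL: no connectivity after reconnecting"
    exit 1
fi

Run this once before the suspend and once after resume, and you have the before-and-after comparison without the tester needing to touch anything.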

Saving time on our existing tests will allow us to introduce new tests where appropriate, so we’re able to provide even more thorough certification testing.


Ubuntu Bug Control Membership

July 7, 2011

I attended a company rally in Dublin last week so didn’t get around to blogging, but having received some good news just yesterday I thought I’d make a posting about it.

After the Ubuntu Developer Summit in Budapest, one of the objectives I set for myself this year was to become more active in the Ubuntu community. Being a QA person, one of the most obvious routes was through the well-structured bug-handling initiatives that the community has in place (thus my posting about the Bug Squad and bug days). To this end I’ve devoted what spare time I have to these initiatives. Apart from Bug Days I also take part in the 5-a-day program, which encourages participants to update five bugs a day (by commenting, changing status or updating titles – anything that could be construed as a valuable change to a bug). This is going pretty well and I’m on a three-week streak of fulfilling my 5-a-day quota (according to the 5-a-day report).

As a result of all this activity I’d gathered enough experience (and evidence of that experience) to file an application for the ubuntu-bug-control team in Launchpad, who have permission to change bug Importance and set the status to ‘Triaged’. This involved testifying to having read some documentation and then providing some example bugs to demonstrate that I understood the bug triage process. It took a week and a half to get enough positive responses (two) to have my application accepted, but as of yesterday I am officially a member of ubuntu-bug-control!

I’ll do my best to use these powers effectively and diligently and hopefully make a big difference to the effectiveness of bug triage in Ubuntu.


My favourite aliases…

June 24, 2011

Something I recently (embarrassingly) discovered is that bash supports the concept of aliases, which are like shorthand for commonly used commands. Ubuntu comes with a few by default in your .bashrc, e.g. ‘ll’ for ‘ls -alF’ (long listing). You’re free of course to add your own in .bashrc, so here I present some of the ones I use:

alias chx='chmod +x'
alias rvim='sudo vim'    # if you use vim, that is ;)
alias sagi='sudo apt-get install -y'
alias sagr='sudo apt-get remove'
alias sagu='sudo apt-get update'
alias saar='sudo add-apt-repository'

I find that especially the apt ones save a lot of typing. Hope you find them useful!

(oh yeah, just put the lines in your ~/.bashrc and run ‘source ~/.bashrc’)
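For example, once the aliases are loaded they expand exactly as the full commands would (the package and file names here are just examples):

source ~/.bashrc    # reload aliases into the current shell
sagi htop           # runs: sudo apt-get install -y htop
chx backup.sh       # runs: chmod +x backup.sh
alias sagi          # print what an alias expands to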


Me and my Pandaboard

June 22, 2011

As discussed at last month’s Ubuntu Developer Summit in the session ‘ARM and other architectures certification program‘, there’s a plan to start certifying ARM hardware, or at least to start investigating how we’ll do it. To this end I’ve received on loan a TI OMAP4 Pandaboard from Canonical’s ARM QA team. I’ve actually had it here in the office for quite a few weeks now, but for one reason or another I haven’t got around to blogging about it yet!

So, without further ado – here are a couple of shots of my setup:

I like it because it’s really compact and smacks of geekiness, with all the exposed circuits, yet it’s really quite easy to use in a lot of ways. The monitor is plugged in via the HDMI port on the right-hand side (because of an issue with my monitor I can only get 640×480 out of it, so everything is very squeezed on the screen) and the wireless desktop receiver which handles my mouse and keyboard plugs right into one of the two full-sized USB 2.0 ports. The whole thing is powered by my laptop (even when it’s suspended) via a USB-to-5v connector, also on the right-hand side.

It’s running Natty/Unity 2D installed on the 8GB SDHC card on the left of the board. This means that the whole setup cost (if I had paid for it rather than borrowed it) just under $200. The white-labelled chip on the top left-hand side of the board is the WiFi/Bluetooth chip, and that works *perfectly* out of the box – often picking up a better signal than the laptop sitting right next to it. I also have the option of plugging my USB headset into the same USB hub as the wireless receiver (it’s a tight squeeze but it just about fits) and that too works perfectly.

Cons are that I don’t have a USB HDD, so Ubuntu is running on flash memory (notoriously poor performance), and that if I decide to power down my laptop but forget the Pandaboard has some task running on it, then all is lost 😦 Overall though it’s a really nice piece of equipment and, because of all the good work that has been done around it, I can recommend one to anyone with a bit of technical know-how (no ARM experience required!)


The magic of OSS

June 15, 2011

In my travels around Launchpad looking for bugs to triage, I came across an old one that I had noticed myself (though not before others, apparently) in the Alpha 1 release of Oneiric Ocelot. This was a problem with update-manager not ‘seeing’ that network-manager had a connection, because the new version of network-manager (0.9) uses different codes to express ‘connected’.
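To illustrate the mismatch, here’s a rough check (illustrative only, not the update-manager code itself; the numeric values are the state codes from the NetworkManager API):

#!/bin/sh
# Illustration: query NetworkManager's State property over D-Bus and
# interpret it using both the old (0.8) and new (0.9) "connected" codes.

STATE=$(dbus-send --system --print-reply \
    --dest=org.freedesktop.NetworkManager \
    /org/freedesktop/NetworkManager \
    org.freedesktop.DBus.Properties.Get \
    string:org.freedesktop.NetworkManager string:State \
    | awk '/uint32/ {print $NF}')

case "$STATE" in
    3|70)  echo "online (0.8 code 3, or 0.9 global connectivity code 70)" ;;
    50|60) echo "connected with only local/site connectivity (0.9 codes)" ;;
    *)     echo "not connected (state $STATE)" ;;
esac

Anything written against the old ‘connected’ code never sees the new values, which is exactly why update-manager thought it was offline.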

This issue was bugging me, so I decided I’d take it upon myself to patch it up. Someone had done a similar patch in software-center, so I already had all of the knowledge needed right there (i.e. what the new codes are). I jumped into my Oneiric VM, branched the update-manager code and hacked away at a couple of Python modules, tweaked, buffed and polished until, lo and behold, on starting update-manager it picked up the connection! A few command lines (bzr stat, bzr commit, bzr push) and a few clicks in Launchpad later, my merge request was with the update-manager project maintainer (Michael Vogt, aka mvo). Minutes later it was merged, and the next day – with the help of my patched version of update-manager 🙂 – I was able to update update-manager with the patch.

Looking at my own name there in update-manager’s description of the change, I couldn’t help but think how awesome it is that I’m able to do this with my favourite operating system. That’s what makes OSS magic for me…


Hug a Bug

June 9, 2011

When I worked at the Symbian Foundation, as part of the (Symbian) Bug Squad activities that I helped run, we would try to have regular get-togethers on IRC where the community would come together and work on something in particular. Mainly this was getting triaging done. We didn’t have the benefit of a lot of experience, so this would be done in something of an ad-hoc way, with everyone discussing each bug’s status and priority until we reached a conclusion.

Now that I’m at Canonical and trying to participate heavily in Ubuntu’s Bug Squad activities, it’s comforting to know that something similar goes on here (maybe we were subconsciously influenced by it?). It even happens on the same day of the week (Thursday). I’m of course referring to Hug Days, which are co-ordinated by the QA team. I’ve been involved in them over the last few weeks as a participant (rather than an organiser) and I find the structure to be very good and very accessible. Quite simply, there is a list of bugs in different statuses (New, Confirmed or Incomplete) and simple instructions on what to do with each one:

– New bugs need to be either Confirmed, or set to Incomplete if you find you need to ask the reporter for extra details to be able to reproduce the bug.
– Confirmed bugs need to be revisited and a check done to make sure the bug is still happening, leading to the bug either being Triaged, or set back to Incomplete if it’s not happening and you need the reporter to reconfirm.
– Incomplete bugs should be checked for a response from the reporter to the information request. If they gave the necessary info then the bug should be Confirmed; if not, a follow-up question should be asked and the bug left as Incomplete.

Some handy tools to assist with going through all these bug reports and updating them correctly are the Hug Day tools, which semi-automate the process of ‘closing’ Hug Day bugs (they aren’t closed as bugs, but as tasks on the Hug Day), and the Firefox Launchpad Improvements, which are useful not just for Hug Days but for any bug work. The improvements include canned bug comments for common scenarios, such as when an inexperienced bug filer has provided little info and you need to ask them for simple steps to reproduce the bug.

Each Hug Day is based on a particular package (which helps to focus the effort) and this week’s Hug Day is on Nautilus, Ubuntu’s file browser. I have been, and will be, participating in this as much as I can, so if you decide to join in, say Hi on Freenode IRC #ubuntu-bugs, where there are lots of knowledgeable Ubuntu people waiting to help newcomers with the task at hand. See you there!


Working towards Bug Control

June 1, 2011

As a means to control the activity on bugs, Launchpad implements a sort of permissions system for who’s allowed to change certain properties of a bug. Mainly this is applied to the ‘Importance’ field of the bug as well as some values of ‘Status’ such as ‘Triaged’. This is because these are used for managing workloads and it wouldn’t be desirable for people with little experience of managing bugs to just go changing them without having a fair idea of what they were doing.

For the Ubuntu project, the team which has these permissions (amongst others) is Ubuntu Bug Control. This is a group of Ubuntu contributors who have, over a period of time, demonstrated their capability at bug analysis. Joining this group is done through a merit system, whereby you have to apply and back up your application with evidence. You need to state that you’ve read the relevant documentation on bug importance, status, assigning bugs and triaging. You are also quizzed on the requirements that need to be met in order to mark a private Apport crash bug public. Then you need to give a commentary on your ‘best’ five bugs, explaining what actions you took on them (status changes, plus any activity around the bug such as working on a fix or discussing issues with fellow community members).

In all, the process is not too demanding for someone who has been doing this for a while, so I plan to make my own application soon. I’m still working on my ‘bug resume’, since I have the slight disadvantage that a lot of the packages my team works with are private, so the triaging I do there can’t be used as examples. So for a few more weeks I will do a few bugs each day and probably submit my application before the end of June.