From manual to automatic

July 22, 2011
Every well-seasoned tester knows the advantages and disadvantages of manual, semi-manual, and automatic tests. A manual test is easy to create: just a few simple words and you have your test. Automatic tests let you (almost) fire and forget, with your only concern being the PASS/FAIL at the end. A semi-manual test is a funny hybrid of the two, usually used only where a fully automated test is almost physically impossible (e.g. verifying screenshots, or tests involving peripherals). Manual tests fall down when the same test must be run many times across a large number of configurations. This is exactly the situation in hardware certification, where we must run tests across ~100 systems on a very regular basis. To this end we’ve been taking the opportunity this development cycle to update some of our older tests to be more automated.
One of the tests that I updated would cycle through the available resolutions on the system (using the xrandr tool) and ask the tester to verify that they all looked okay, with no graphical corruption. This sort of test is fine when someone is running it on a one-off basis; it’s not so good when one tester needs to supervise 50+ systems during a certification run. One of the main problems is that it causes too much context switching, with the tester constantly needing to keep an eye on all the systems to see if they’ve reached this test yet. It being a graphical test, fully automated verification is obviously difficult, so a compromise was needed. The solution I came up with was to integrate screen capture into the test and then upload the captures in a tgz file as an attachment to the test submission. All going well, the tester can sit down at their own computer and go through the screens to confirm they’re okay. In fact, the person verifying the screens doesn’t even need to be in the lab! The task can be distributed amongst any number of people, anywhere in the world.
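A rough sketch of that approach follows, assuming xrandr for mode switching and ImageMagick’s `import` for the screen capture; the settle delay, file names, and archive name are all illustrative rather than the exact implementation:

```shell
#!/bin/sh
# Sketch of the automated resolution-cycling test. Assumes xrandr for
# querying and switching modes, and ImageMagick's `import` for the
# screenshots; names and timings here are illustrative.

# Pull the mode names for the first connected output out of xrandr's
# text output (mode lines look like "   1024x768   60.0*+").
parse_modes() {
    awk '/ connected/ { in_output = 1; next }
         /^[^ ]/      { in_output = 0 }
         in_output    { print $1 }'
}

capture_all_modes() {
    outdir=$1
    output=$(xrandr | awk '/ connected/ { print $1; exit }')
    for mode in $(xrandr | parse_modes); do
        xrandr --output "$output" --mode "$mode"
        sleep 3                                   # let the display settle
        import -window root "$outdir/$mode.png"   # full-screen screenshot
    done
    # Bundle the captures for attachment to the test submission.
    tar czf screens.tgz -C "$outdir" .
}
```

The key point is that the test itself never asks the tester anything; it just switches modes, grabs evidence, and moves on, leaving the visual verification to happen later, anywhere.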
Another test that looked like a prime candidate for automation was one that checks the wireless card is working before and after suspending the system. Previously the test case was:
- Disconnect the wireless interface.
- Reconnect and ensure you’re online.
- Suspend the system.
- Repeat the first two steps.
This was all specified to be done manually. I am currently updating this test to use nmcli to make sure a connection can be made, then disconnect and reconnect, just as would happen if the tester did the steps manually using nm-applet. The one thing I haven’t got down pat yet is connecting to a wireless network where no connection existed before. This step may be optional, as we can expect the tester to connect manually at some point while setting up the tests, so a working connection should already be available. This will take the test from manual to fully automated, and should hopefully shave a significant number of minutes off the whole test run!
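The nmcli-driven version can be sketched roughly as follows. The interface name, connection name, and the use of pm-suspend are all assumptions for the sake of the example, and the nmcli invocations follow the command syntax of the era:

```shell
#!/bin/sh
# Rough sketch of the wireless suspend/resume test. The interface and
# connection names are illustrative, and pm-suspend is an assumed
# suspend helper (it needs root).

IFACE=${IFACE:-wlan0}
CONNECTION=${CONNECTION:-"Cert Lab"}
SUSPEND_CMD=${SUSPEND_CMD:-pm-suspend}

# True when NetworkManager reports an active connection.
is_online() {
    [ "$(nmcli -t -f STATE nm)" = "connected" ]
}

# Steps 1 and 2 of the test case: disconnect, reconnect, verify.
cycle_wireless() {
    nmcli dev disconnect iface "$IFACE" || return 1
    nmcli con up id "$CONNECTION"       || return 1
    is_online
}

wireless_before_after_suspend() {
    cycle_wireless || { echo "FAIL: reconnect before suspend"; return 1; }
    "$SUSPEND_CMD"
    cycle_wireless || { echo "FAIL: reconnect after suspend"; return 1; }
    echo "PASS"
}
```

Note that this relies on a connection profile already existing in NetworkManager, which is exactly the open question above about networks the tester hasn’t connected to before.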
Saving time on our existing tests will allow us to introduce new tests where appropriate, so we’re able to provide even more thorough certification testing.