End user testing of systems, and particularly of systems changes, is the last line of defence. Arguably, it is more valuable than the development itself.
Why, then, is testing so often not done well, or not done completely, with insufficient funding and resource dedicated to it?
The simple answer is that end user testing is not only the last line of defence, but also the last step in the chain. We conceive, design, plan, develop, test and implement. Usually there is a project plan and timeline, with, invariably, a specified "live date". As every other phase of the project progressively runs late and burns through its contingency budgets of time and money, the time and resource available for testing are inevitably whittled away.
Testing is hard. Not only must we test the new developments and changes, but also everything that could conceivably be affected by them, to ensure the unchanged areas of the system still operate to spec. Given the constraints on testing, these tests are almost always compromised, and focus on the new developments.
The result, invariably, is poor quality testing, which leads to user dissatisfaction with product quality, re-work, and yet more overruns of the time and money budgets.
Is there a solution?
In truth, probably not. Automated testing can help: pre-canned scenarios with known outcomes are run programmatically as scripted tests, the results compared, and pass/fail statistics compiled automatically. It's good, but it's not perfect. Similarly, dedicated test teams and third-party test resources are not as effective as you might hope. Sure, they follow the testing rules, but often the problems follow the rules too, and they are still problems.
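The scripted-test idea can be sketched in a few lines. This is a minimal illustration, not a real framework: the system under test, the scenarios and the expected outcomes are all invented for the example.

```python
# A minimal sketch of scripted regression testing: pre-canned scenarios with
# known outcomes are run, results compared, and pass/fail statistics compiled.
# The system under test and all scenario data here are hypothetical.

def system_under_test(order_total: float) -> float:
    """Stand-in for the real system: applies a 10% discount over 100."""
    return order_total * 0.9 if order_total > 100 else order_total

# Pre-canned scenarios: (name, input, known-good outcome).
SCENARIOS = [
    ("small order, no discount", 50.0, 50.0),
    ("boundary order, no discount", 100.0, 100.0),
    ("large order, 10% discount", 200.0, 180.0),
]

def run_scripted_tests(scenarios):
    """Run each scenario, compare to the known outcome, compile statistics."""
    results = {"pass": 0, "fail": 0, "failures": []}
    for name, given, expected in scenarios:
        actual = system_under_test(given)
        if actual == expected:
            results["pass"] += 1
        else:
            results["fail"] += 1
            results["failures"].append((name, given, expected, actual))
    return results

stats = run_scripted_tests(SCENARIOS)
print(f"{stats['pass']} passed, {stats['fail']} failed")
```

The value is repeatability: the same scenarios run identically every cycle. The weakness is exactly what the text says next: the script only checks outcomes someone thought to pre-can.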
There are often highly nuanced situations that arise in testing which need not just a human to validate them, but an experienced subject-matter expert. The bad news is that these difficult, nuanced issues are often what can bring down the whole house of cards if they are not found and dealt with in this final testing phase.
So what should we do?
As is often the case, and as is rarely welcome, hard work is the answer. Ideally, the following should be programmed into the development / update cycle.
Find the subject-matter experts who really know what's going on.
Backfill their daily roles for the whole desired testing period with externally hired resources. Backfilling business-as-usual operations is much easier and more effective than hiring test teams.
Be serious and realistic about the length of the test program, and make sure to include time for re-work, issue resolution and retesting.
Follow test plans, but also don't. Test plans test what is expected; subject-matter experts see things that are not in the test plan, are not expected, or have subtly changed. This is invaluable data and could be the thing that really saves you.
Seriously filter and prioritise issues. Some complete failures may actually be low impact and low priority, while some minor differences could be widespread and become a major problem if let through.
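That prioritisation point can be made concrete with a toy scoring rule, weighing severity against how widely an issue spreads rather than severity alone. The issues, the 1-5 severity scale and the scoring formula below are illustrative assumptions, not a prescribed method.

```python
# A sketch of issue prioritisation that weighs severity against spread.
# Scoring rule and example issues are hypothetical.

def priority(severity: int, affected_fraction: float) -> float:
    """Severity (1-5) times the fraction of users/transactions affected."""
    return severity * affected_fraction

issues = [
    # (description, severity 1-5, fraction of transactions affected)
    ("batch export crashes outright", 5, 0.01),   # complete failure, rare path
    ("rounding differs by one cent", 2, 0.95),    # minor difference, everywhere
]

ranked = sorted(issues, key=lambda i: priority(i[1], i[2]), reverse=True)
for desc, sev, frac in ranked:
    print(f"{priority(sev, frac):.2f}  {desc}")
```

Under this rule the outright crash on a rare path scores 0.05 while the widespread one-cent difference scores 1.90, so the "minor" issue ranks first, which is exactly the inversion the text warns about.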
Start testing backwards. What the hell does that mean?
Assume the process works and test for the final outcome. If the final outcome is good, there is a high probability that most intermediate processes are either working, or working well enough.
If the final outcome is not as desired, go backwards one step and test there, and so on.
Testing forwards can waste a lot of time and effectively miss the forest for the trees.
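The backwards idea can be sketched for a hypothetical multi-stage pipeline: check the final outcome first, and only walk back a stage at a time when it is wrong. The stages, inputs and known-good values below are invented for illustration.

```python
# A sketch of "testing backwards": verify the final outcome, and only step
# back through earlier stages when it fails. Pipeline and data are hypothetical.

def run_stages(value, stages):
    """Run the pipeline, keeping each stage's output."""
    outputs = []
    for stage in stages:
        value = stage(value)
        outputs.append(value)
    return outputs

def first_bad_stage(outputs, expected):
    """Walk backwards from the final outcome.

    If the final outcome matches its known-good value, assume the
    intermediates are working well enough and return None. Otherwise step
    back until a stage's output matches; the fault lies in the stage after it.
    """
    if outputs[-1] == expected[-1]:
        return None
    bad = len(outputs) - 1
    for i in range(len(outputs) - 1, -1, -1):
        if outputs[i] == expected[i]:
            break  # this stage is good, so the fault lies after it
        bad = i
    return bad

# Three illustrative stages; the middle one is wrong (triples instead of doubles).
stages = [lambda x: x + 1, lambda x: x * 3, lambda x: x - 2]
expected = [6, 12, 10]            # known-good outputs for input 5
outputs = run_stages(5, stages)   # [6, 18, 16]
print("first bad stage:", first_bad_stage(outputs, expected))  # index 1
```

When the outcome is good, nothing upstream is re-tested at all, which is where the time saving comes from; when it is bad, the backwards walk stops at the earliest broken step instead of re-verifying every stage from the front.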
Now, this one is obvious, yet obviously not well understood, from what we've seen. Record the EXACT steps followed to create a failure condition, including ALL the data elements in use when you found it.
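One way to make that recording habit stick is a fixed structure for every failure report, so steps and data cannot be omitted. The field names, account numbers and scenario below are illustrative, not from any real system.

```python
# A sketch of recording a reproducible failure: the exact steps followed and
# every data element in play. All fields and values here are hypothetical.
from dataclasses import dataclass, asdict
import json

@dataclass
class FailureReport:
    summary: str
    steps: list        # the exact steps, in order
    data: dict         # every data element used when the failure appeared
    user_class: str
    platform: str

report = FailureReport(
    summary="VAT missing from invoice total",
    steps=[
        "Log in as a credit-controller user",
        "Open customer account 10442",
        "Raise an invoice with two line items",
        "Preview the invoice PDF",
    ],
    data={"account": "10442", "lines": 2, "currency": "EUR", "vat_rate": 0.21},
    user_class="credit-controller",
    platform="Windows 11 / Chrome 126",
)

# Serialise so the exact reproduction recipe travels with the issue ticket.
print(json.dumps(asdict(report), indent=2))
```

A developer who receives this can replay the failure exactly; a report that says "invoice totals look wrong sometimes" cannot be replayed at all.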
Whenever you establish a failure, test it as a different user (or class of user), then test it as the original user on a different hardware / operating system / browser platform.
Do it again.
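The cross-user, cross-platform re-test above amounts to running the failing scenario over a small matrix of combinations. This sketch assumes a stand-in scenario runner; the user classes, platforms and the "bug" itself are invented for illustration.

```python
# A sketch of re-testing an established failure across user classes and
# platforms. The reproduce() function is a hypothetical stand-in for
# re-running the failing scenario by hand or by script.
from itertools import product

def reproduce(scenario: str, user_class: str, platform: str) -> bool:
    """Pretend the bug only bites admin users on Safari in this example."""
    return user_class == "admin" and platform == "Safari"

user_classes = ["admin", "clerk", "read-only"]
platforms = ["Chrome", "Safari", "Edge"]

matrix = {
    (u, p): reproduce("invoice-total-bug", u, p)
    for u, p in product(user_classes, platforms)
}
hits = [combo for combo, failed in matrix.items() if failed]
print(f"failure reproduced on {len(hits)} of {len(matrix)} combinations")
```

Whether the failure reproduces everywhere or only in one cell of the matrix tells you something different about its cause, which is exactly why the repeat runs are worth the effort.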