Having just returned from CAST 2010, I approached some co-workers and discussed the possibility of changing our testing methodology.
Current Project Status Quo
- There is a Test Plan, which is a large document with many test cases
- When a new major feature is added to the product, the Test Plan is usually updated, adding new test cases
- The theory is that we execute every test in the test plan before every release, in order to find regressions
- We very rarely complete the whole Test Plan (i.e., it has been fully executed in only one of the last 10 releases)
- There is a suite of unit tests which is run automatically every night (as well as by the developers).
I proposed to the project lead and project manager that we consider changing our “Testing” from Checking to Testing, and discussed the idea of performing Exploratory Testing. Exploratory Testing is not commonly practiced here, so to further convey the difference between checking and testing, I used the term Sapient Testing, which appeared to carry enough connotations to convey my meaning.
Everyone agreed that:
- The Test Plan doesn’t get executed every release
- We want the test plan to be executed every release, so that we can know about regressions
- Usually, only some sections are run for each release: those that someone judges most relevant to the changes being made.
I tried to convince them that it would be more beneficial to use Exploratory Testing, but my project manager raised this counter-argument:
- “If the Tester explores a feature of the product in order to test it, how do we ensure there are no regressions on the next release?”
- i.e. Without documentation about the tests that were performed when Exploring, we have no reproducible way to ensure that these features continue to work from release to release
The Wikipedia article agrees with me on the subject of Checking losing value quickly:
Programs that pass certain tests tend to continue to pass the same tests and are more likely to fail other tests or scenarios that are yet to be explored
My Thoughts And Counter-Proposal
I did not have a great counter-argument to this concern at the time, and thus I tried to respond on the basis of:
- We agree that the Test Plan never gets fully executed
- We only have a limited amount of time each release for testing
- Having a tester “Explore” the feature area will likely find critical issues more quickly than following a plan
  - I am having difficulty sustaining/selling this idea:
    - If the Test Plan is all the “concerning” paths through the product, wouldn’t a failure in there be a “critical issue”?
    - Exploring a feature doesn’t ensure that all of the “Test Plan” paths will be tested, so how can we make the claim that Exploratory Testing is better at finding critical issues?
- Having different people “Explore” the feature (over various Testing Cycles on various Releases) will uncover things that a Test Plan would have missed (due to omission)
  - I am also having difficulty sustaining/selling this idea.
  - My point here is that different testers approach the product differently (as do different users), and will try things differently. This variation is desirable, as it will uncover bugs that would otherwise be missed.
Also, now that I have had time to consider this position further, I think that I may need to integrate some thoughts from Session-based Testing into my discussion.
Session-based Testing is a way of accounting for and reporting on the testing activity; it adds a level of formality on top of the less-formal Exploratory Testing techniques. It creates a written trail of accountability which can be used to keep track of which areas have had some testing, versus which areas have had none.
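To illustrate the kind of accountability trail this provides, here is a minimal sketch; the fields and product-area names are my own invention, loosely modeled on the charter and time-box structure of session-based test management:

```python
from dataclasses import dataclass, field

@dataclass
class Session:
    """One time-boxed exploratory testing session (fields are illustrative)."""
    charter: str   # the mission, e.g. "Explore export with unusual characters"
    area: str      # product area touched, used for coverage reporting
    minutes: int   # time spent in the session
    bugs: list = field(default_factory=list)  # bug IDs or short descriptions

def coverage_report(sessions, all_areas):
    """Report minutes of testing per area, including areas with no sessions."""
    totals = {area: 0 for area in all_areas}
    for s in sessions:
        totals[s.area] = totals.get(s.area, 0) + s.minutes
    return totals

sessions = [
    Session("Explore export with unusual characters", "export", 60, ["BUG-101"]),
    Session("Explore import of legacy files", "import", 90),
]
report = coverage_report(sessions, ["import", "export", "reporting"])
print(report)  # "reporting" shows 0 minutes: no testing there yet
```

The point of the report is exactly the management concern above: even without scripted test cases, you can show which areas have been covered, for how long, and which have been skipped entirely.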
I have formulated the following counter-proposal:
- A test plan describes the main features/interactions/paths that our customers are expected to follow
- This is a good starting point for a Tester to investigate, but not necessarily required in order for the Tester to uncover issues
- Each time a bug is found in the system, an automated unit test should be written which exposes the bug (this requires coordination with the development team)
- This way, once the bug is fixed, we have a daily-run automated test which ensures that the particular bug does not occur again
- When performing Exploratory Testing, Session-based Testing techniques should be adhered to in order to provide a level of accountability and reporting
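As a sketch of the second point, here is what such a bug-exposing regression test could look like. The `parse_price` function and the bug it pins are invented for illustration; the pattern is the useful part: reproduce the reported failure as an assertion, confirm it fails before the fix, then keep it in the nightly suite forever.

```python
def parse_price(text):
    """Hypothetical function under test. The (invented) bug report: prices
    with a leading currency symbol, e.g. "$19.99", raised ValueError."""
    return float(text.lstrip("$"))  # the fix: strip the symbol before parsing

def test_price_with_currency_symbol():
    # The exact input from the customer's bug report, pinned so the
    # bug cannot silently reappear in a later release.
    assert parse_price("$19.99") == 19.99

def test_plain_price_still_works():
    # Guard against the fix breaking the previously-working path.
    assert parse_price("19.99") == 19.99

# Run directly here for illustration; in practice the nightly suite
# (e.g. a pytest run) would discover and execute these automatically.
test_price_with_currency_symbol()
test_plain_price_still_works()
print("regression tests passed")
```

Because each test encodes one real, observed failure, the nightly run answers the project manager's regression question for every bug that has ever been found and fixed.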
An additional point that I’m considering pursuing:
- On those rare releases when the complete Test Plan is executed, are there still bugs found by customers? (the answer is yes)
- Since there are still bugs found, can we agree that executing the large Test Plan may not be the complete answer to all our quality woes?
Is my proposed solution essentially a compromise because the “Test Team” doesn’t have an automated-testing-framework?
- Should the test team’s first goal be to automate some of the Test Plan tests?
- And then, whenever an issue is found, add an automated test to the Test Team’s automated tests?
Another hurdle which I encountered:
- We have a staff of people who are currently tasked with executing the Test Plan whenever we have time to do so. Some of these people may not be suited to doing Exploratory Testing instead of Checking (i.e., they may not have the desire to acquire the new skills). Should I propose doing both (duplicating some work)?
- I think that we should pursue doing both Checking and Exploratory Testing: the Checking verifies a specific set of features, while the Exploratory Testing is more likely to find risks/issues quickly.
Exploratory Testing Articles: