Hurdles encountered when proposing Exploratory testing

Having just returned from CAST 2010, I approached some co-workers and discussed the possibility of changing our testing methodology.

Current Project Status Quo

  • There is a Test Plan, which is a large document with many test cases
  • When a new major feature is added to the product, the Test Plan is usually updated, adding new test cases
  • The theory is that we execute every test in the test plan before every release, in order to find regressions
  • We very rarely complete the whole test plan (i.e., it has been fully executed roughly once in the last ten releases)
  • There is a suite of unit tests which is run automatically every night (as well as by the developers).

Initial Proposal

I proposed to the project lead and project manager that we consider changing our “Testing” from Checking to Testing, and discussed the idea of performing Exploratory Testing.  Exploratory Testing is not commonly practiced here, so in order to further convey the difference between checking and testing, I used the term Sapient Testing, which seemed to carry enough connotations to convey my meaning.

Everyone agreed that:

  • The Test Plan doesn’t get executed every release
  • We want the test plan to be executed every release, so that we can know about regressions
  • In practice, only some sections are run for each release: those that someone judges most relevant to the changes being made.

Their Counter-Argument

I tried to convince them that it would be more beneficial to use Exploratory Testing, but my project manager brought forward this counter-argument:

  • “If the Tester explores a feature of the product in order to test it, how do we ensure there are no regressions on the next release?”
  • i.e. Without documentation about the tests that were performed when Exploring, we have no reproducible way to ensure that these features continue to work from release to release

The Wikipedia article on software testing agrees with me that Checking loses value quickly:

“Programs that pass certain tests tend to continue to pass the same tests and are more likely to fail other tests or scenarios that are yet to be explored.”

My Thoughts And Counter-Proposal

I did not have a great counter-argument to this concern at the time, so I tried to respond along the following lines:

  • We agree that the Test Plan never gets fully executed
  • We only have a limited amount of time each release for testing
  • Having a tester “Explore” the feature area will likely find critical issues more quickly than following a plan
    • I am having difficulty sustaining/selling this idea:
      • If the Test Plan is all the “concerning” paths through the product, wouldn’t a failure in there be a “critical issue”?
      • Exploring a feature doesn’t ensure that all of the “Test Plan” paths will be tested, so how can we make the claim that Exploratory Testing is better at finding critical issues?
  • Having different people “Explore” the feature (over various Testing Cycles on various Releases) will uncover things that a Test Plan would have missed (due to omission)
    • My point here is that different testers approach the product differently (as do different users), and will try things differently. This variation is desirable, as it will uncover bugs that would otherwise be missed.

Also, now that I have had time to consider this position further, I think that I may need to integrate some thoughts from Session-based Testing into my discussion.

Session-based Testing is a way of accounting for and reporting on the testing activity; it adds a level of formality on top of the less-formal Exploratory Testing techniques. It provides a written trail of accountability which can be used to keep track of areas which have had some testing, versus areas which have had none.
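As a sketch of what that written trail could look like, here is a minimal session record in Python. The field names and the example charters are my own illustrative assumptions, not part of any particular session-based test management tool.

```python
# Minimal sketch of a Session-based Testing record. All names here
# (fields, charters, bug IDs) are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class SessionReport:
    charter: str                                 # the mission for this session
    tester: str
    duration_minutes: int
    areas: list = field(default_factory=list)    # product areas touched
    bugs: list = field(default_factory=list)     # bug IDs raised during the session

    def summary(self) -> str:
        return (f"Charter: {self.charter}\n"
                f"Tester: {self.tester} ({self.duration_minutes} min)\n"
                f"Areas: {', '.join(self.areas) or 'none recorded'}\n"
                f"Bugs raised: {len(self.bugs)}")

# The accountability question ("has this area had any testing this release?")
# becomes a simple query over the collected reports.
reports = [
    SessionReport("Explore CSV import edge cases", "alice", 90,
                  areas=["import"], bugs=["BUG-101"]),
    SessionReport("Explore report printing", "bob", 60, areas=["reports"]),
]
tested_areas = {area for report in reports for area in report.areas}
print("import" in tested_areas)
```

Even something this lightweight would let us answer the manager's coverage question with a written record rather than memory.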

I have formulated the following counter-proposal:

  • A test plan describes the main features/interactions/paths that our customers are expected to follow
    • This is a good starting point for a Tester to investigate, but not necessarily required in order for the Tester to uncover issues
  • Each time a bug is found in the system, an automated unit test should be written which exposes the bug (coordination with the development team is required)
    • This way, once the bug is fixed, we have a nightly automated test which ensures that the particular bug does not recur
  • When performing Exploratory Testing, Session-based Testing techniques should be adhered to in order to provide a level of accountability and reporting
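To illustrate the “one automated test per found bug” point above, here is a sketch using Python's built-in unittest-style assertions (shown as plain test functions so they run directly; a runner such as pytest would collect them the same way). The function under test and the bug number are hypothetical stand-ins for real product code.

```python
# Sketch: pinning down a found bug with an automated regression test.
# parse_quantity and bug #1234 are hypothetical stand-ins.

def parse_quantity(text):
    # Hypothetical production function. Imagine it originally crashed on
    # whitespace-padded input (bug #1234, found during an exploratory session);
    # the .strip() below is the fix.
    return int(text.strip())

def test_bug_1234_whitespace_padded_input():
    # The exact input that exposed the bug, kept forever in the suite.
    assert parse_quantity("  42 ") == 42

def test_plain_input_still_works():
    assert parse_quantity("7") == 7

# Run the checks directly when executed as a script.
test_bug_1234_whitespace_padded_input()
test_plain_input_still_works()
print("regression checks passed")
```

Once a test like this joins the nightly suite, the specific bug cannot silently reappear, which directly answers the regression concern for every area where a bug has already been found.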

An additional point that I’m considering pursuing:

  • On those rare releases when the complete Test Plan is executed, are there still bugs found by customers? (the answer is yes)
    • Since there are still bugs found, can we agree that executing the large Test Plan may not be the complete answer to all our quality woes?

Is my proposed solution essentially a compromise because the “Test Team” doesn’t have an automated testing framework?

  • Should the test team’s first goal be to automate some of the Test Plan tests?
    • And then, whenever an issue is found, add an automated test to the Test Team’s automated tests?
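If the team did go down that road, a first automated Test Plan case might be no more than a scripted version of the written steps and expected result. The product function and the case ID below are hypothetical; the point is the shape, not the specifics.

```python
# Sketch: turning one manual Test Plan case into an automated check.
# apply_discount and case TP-017 are hypothetical stand-ins for a real
# product interface and a real test-plan entry.

def apply_discount(price, code):
    # Hypothetical product behavior the Test Plan case covers:
    # a valid "SAVE10" code takes 10% off; anything else changes nothing.
    return round(price * 0.9, 2) if code == "SAVE10" else price

def check_tp_017():
    # Test Plan case TP-017: "Apply SAVE10 to a $20.00 order; expect $18.00.
    # An invalid code must leave the price unchanged."
    assert apply_discount(20.00, "SAVE10") == 18.00
    assert apply_discount(20.00, "BOGUS") == 20.00

check_tp_017()
print("TP-017 passed")
```

A file of checks like this, run nightly alongside the developers' unit tests, would give the Test Team the same regression safety net the manual Test Plan promises but rarely delivers.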

Another hurdle which I encountered:

  • We have a staff of people who are currently tasked with executing the Test Plan whenever we have time to do so. Some of these people may not be suited to doing Exploratory Testing instead of Checking (i.e., may not have the desire to acquire the new skills). Should I propose doing both (duplicating some work)?
    • I think that we should pursue both Checking and Exploratory Testing, as the Checking covers a specific set of features, while the Exploratory Testing will likely find risks/issues more quickly.

Additional References

Exploratory Testing Articles:
http://www.satisfice.com/articles/et-article.pdf
http://www.satisfice.com/articles/what_is_et.shtml


About Robin Dunlop

Software Developer/Tester/Jack-of-all-Trades
This entry was posted in Software Development.

One Response to Hurdles encountered when proposing Exploratory testing

  1. Hi, Robin…

    I’m really sorry that we didn’t meet at CAST 2010. Next time, let’s try to make a point of it!

    Perhaps you haven’t seen these yet:

    http://www.developsense.com/blog/2010/08/questions-from-listeners-2a-how-to-handle-regression-testing/

    http://www.developsense.com/articles/2007-01-OneStepBackTwoStepsForward.pdf

    http://www.developsense.com/blog/2009/11/testing-checking-and-convincing-boss-to/

    http://www.developsense.com/blog/2006/09/regression-testing-part-i/

    http://www.developsense.com/blog/2006/09/regression-testing-part-2/

    Plus this gem from Karen Johnson:

    http://www.testingreflections.com/node/view/8333

    I hope these help, along with a gentle reminder to your manager that regression problems can be detected by testing, but they can neither be prevented nor solved by it.

    One more thing: you might do well to think of a test plan less as a set of specific tests to run, and more as a set of principles that would help you choose which tests to run. When you say “the theory is that we execute every test in the test plan before every release, in order to find regressions”, can you see how there’s a pair of logical problems with that theory? One assumption is that the old tests will find newly introduced problems. Another is that new tests won’t find problems that have been newly introduced. Yet tests that you’ve performed before won’t necessarily find new problems; and tests that you use to find old problems won’t necessarily be repeated tests.

    —Michael B.
