What is testing? At first glance, you might think it’d be the easiest question in the world for me, but it’s one I’ve long struggled to answer.

When people ask what I do for a living, they’re normally met by my ‘deep in thought’ face, as I try to describe what I do – and what value I bring to the team.

To explain what I understand testing to be, I first need to explore what it is I do. And ‘explore’ is the right word to use, since exploration is at the very core of what software testing is all about. I’ll dig into this statement a little later on.

To consider an activity as “testing”, there needs to be a purpose. Let’s look at the following scenario:

I am testing a product and the time has come for me to talk the client through the testing we’re doing. Given my knowledge of the product, I select tests that will not highlight any flaws or known issues, be they performance, functional or “other” (any issue that could paint the product in a bad light). [1]

Is this testing? At no point am I planning to use the information that comes out of this demonstration, as I already know the results that these selected “tests” will produce. This activity has no purpose other than to show the software appearing to work under a specific set of criteria. (I should emphasise that I consider this to be bad practice, and that transparency is key to building a successful product.)

Testing centres on learning. This can be about the product, processes or different ideas. As we learn about the product, we’re also gathering information, which we can use to form further test ideas about how we want to use the software. If I go back to the scenario I mentioned earlier, there is no learning going on (at least in relation to the software). To stay on track when it comes to testing, always focus on what information you are trying to obtain.

It’s also important to consider the questions we ask of software, to ensure we get the appropriate information in return, whether it be good or bad. Scrolling through Twitter recently, I saw a tweet that illustrates this quite well:

Today's lesson about asking the right question to get meaningful data, brought to you by a 4 year old:

Me: Should I put a banana in your lunch today?
Him: Sure! They are healthy & I'm supposed to bring healthy food.
Me: Will you eat it?
Him: Definitely not. I don't like bananas.

— Molly Telford (@mollytelfordMRX) February 3, 2018

We can ask any question we wish of the software, but will it give us the right answers, so we can understand the level of quality built into the product? That should always be our end goal.

People often assume that testing is something that occurs after a product has been built. For a long time, I was of this opinion: after all, how can you test something that doesn’t exist yet? As my career has moved forward in the last few years, I have come to realise that what we are referring to as “testing” at the end of this process is actually “checking”. Consider the following questions:

  1. Does the button turn blue when pressed?
  2. Does the page reload when the link is pressed?

These are checking activities. Due to the explicit nature of their assertions, we can script them. We have a clear input and a clear output. Checking is also ripe for automation, but that is a whole discussion for another day.

Testing can (and should) happen throughout the entire development process. All of the following tasks embody what testing is concerned with:

  • “Three Amigos” sessions encompass testing when the team discuss how the software should behave
  • Pair programming with a developer allows for testing in parallel
  • Code review is a form of static analysis, which in turn is testing
  • Monitoring product metrics is a form of testing

That is not to say that interacting with the product post-development is no longer a testing activity. It 100 percent is. The activities above allow us to be better informed in our post-development work, which is at its best when it is exploratory.

For a long while, I saw exploratory testing as an activity that was only carried out when the requirements were insufficient – or too vague. In the world of Agile, we should embrace change, and pivot as and when required. We should use our pre-development testing activities to drive what we do once a product has been formed.

If we stick to the notion that test scripts are all compiled and finalised at the start of a project, we create an incredible amount of overhead whenever the requirements evolve.

Test scripts are also innately poor in their descriptions. For the most part (at least in my own experience of writing and running them), there are always gaps that assume knowledge of a product or process. As testers, we fill in those gaps with our own knowledge and plough on, which means we are already exploring, just within the bounds of a script. Wouldn’t it be easier to strip away the unnecessary bloat and simply explore?

That is not to say that exploration is unstructured – far from it. With the information collected earlier in the process, we can define our testing approach while also using the wider scope to give us greater flexibility and freedom in our testing.

In closing, my description here is quite broad. This was entirely intentional, because although different products require different approaches, the overall goal should remain the same. How can we probe the software sufficiently to discover the level of quality built into the product?

I also think it’s important to regularly assess and review what testing means to us. This will ensure it remains an integral part of the Software Development Life-cycle.

[1] Perfect Software and Other Illusions About Testing – Gerald M. Weinberg