A typical development and testing activity begins with taking a user story and digesting the information in it. From this shared understanding, we construct a set of use cases as a means of verifying the implementation and its completeness.
You could say that the story is complete once all the use cases have passed testing. The user story has now made it to the next stage and is being validated by the User Acceptance Testing (UAT) team. The UAT team report a number of issues, and the development team start to triage them.
While issues are being discussed and triaged, one particular “bug” (let’s call it bug x) may leave you thinking: “Is this really a bug or a feature request? Did the customer specifically ask for this scenario, or was it missed in the story refinement session prior to development? And is this the fault of the development team, who didn’t think about this scenario, or is it an entirely invalid use case?”
So, the development team become defensive and start to push back on the reported issue (bug x), arguing that it was never stated as an agreed acceptance criterion, while the UAT team are adamant it’s a bug.
Unfortunately, in some cases there is no direct relationship between a bug and the acceptance criteria on a user story. If there were, the development team would have known about the issue before UAT got their hands on the software.
If we can trace a bug directly to a particular acceptance criterion on a user story, then it is clearly a missed or incorrect implementation.
Okay, before we start debating whether it’s a bug or a feature request, let’s begin with the definition of a bug.
A bug, or a defect, is the result of a missed acceptance criterion or an erroneous implementation of a piece of functionality, usually traced back to a coding mistake. Furthermore, a bug is a manifestation of an error in the system and a deviation from the expected behaviour. But what is the expected behaviour, and who defines it?
Expected behaviour normally comes in the form of acceptance criteria, formerly referred to as requirements.
When a product owner writes a user story, he or she may (or may not) have a clear idea of how the functionality should behave. Through a series of discussions with the development team, which includes QAs, the details of the user story are fleshed out and the outcome is a set of well-defined and unambiguous acceptance criteria. From the acceptance criteria, we devise a number of acceptance tests.
In my mind, there is a clear distinction between acceptance criteria and acceptance tests. Acceptance criteria define what the customer will accept as a complete and functional user story. Acceptance tests, on the other hand, are a means of verifying those criteria.
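To make the distinction concrete, here is a minimal sketch of one criterion and one test that verifies it, written with Playwright. The site, selectors, and credentials are hypothetical examples, not taken from any real project.

```typescript
import { test, expect } from '@playwright/test';

// Acceptance criterion (what the customer signs off):
// “A registered user can log in with a valid email address and password.”
//
// Acceptance test (one way of verifying that criterion):
test('registered user can log in with valid credentials', async ({ page }) => {
  await page.goto('https://www.example.com/login');
  await page.fill('input[name="email"]', 'user@example.com');
  await page.fill('input[name="password"]', 'correct-horse-battery-staple');
  await page.click('button[type="submit"]');
  await expect(page).toHaveURL(/my-account/); // landed on the account page
});
```

The criterion is a statement in the customer’s language; the test is just one executable check of it, and a single criterion may need several tests to be considered verified.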
Now that we have ironed out some fundamental concepts, let’s explore the above question with a real example.
Very recently, I was involved in a project which had a registration page to test. The registration page had the usual fields such as first name, last name, email address, telephone number, etc. It also included a “link” to allow users to add more telephone numbers if they wished. Once the link was clicked, you remained on the same page but were presented with an additional field for a second telephone number. The relevant URLs were:
www.example.com/register (the registration page)
www.example.com/login (the login page)
www.example.com/# (a self-referential JavaScript link used to add the additional field)
The default www.example.com would redirect to www.example.com/login.
Once the registration page had been thoroughly tested by the development team, the UAT team reported a “bug”: if, instead of left-clicking the link to add an additional field, you right-clicked it and selected “open in new tab”, you were presented with the login page. Users part-way through registration should not be seeing the login page.
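To see why this happens, here is a minimal sketch of how such a link is commonly wired up; the element IDs and markup are hypothetical, not taken from the actual project.

```typescript
// The anchor’s href is “#”, so a left click is intercepted by the handler and
// adds a field in place, while “open in new tab” ignores the handler and
// follows the raw href to www.example.com/#, which the site’s default
// redirect then sends to the login page.
const addLink = document.querySelector<HTMLAnchorElement>('#add-telephone');

addLink?.addEventListener('click', (event: MouseEvent) => {
  event.preventDefault(); // stop the browser following href="#"

  const field = document.createElement('input');
  field.type = 'tel';
  field.name = 'telephone2';
  document.querySelector('#telephone-fields')?.appendChild(field);
});
```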
Then there was a whole debate within the development team about whether this was a bug, a feature request, or an enhancement. After all, there were no acceptance criteria for it. Consequently, I, as the QA in the development team, had to look at the situation from all sides. One could argue that, because there were no concrete acceptance criteria for how right-clicking the link should behave, this is not a bug. But the reality is that the person writing the user story cannot think of the gazillion ways a user might interact with the system! One could equally argue that it is a bug, because it produces undesired behaviour.
You really should look at each individual “issue” and its side effect(s) on its own merits, and that judgement must be based on risk.
Taking a risk-based approach to the reported issues, one could assess the likelihood of a user exercising the system in unexpected ways, and the impact if they do; e.g. by right-clicking on the link, could they access the my-account page and bypass the login screen?
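As an illustration only, such a triage can be as simple as scoring each issue by likelihood multiplied by impact and ordering the backlog by that score. The five-point scales and the issues below are made up for the sketch.

```typescript
type Issue = {
  title: string;
  likelihood: 1 | 2 | 3 | 4 | 5; // how likely users are to hit this path
  impact: 1 | 2 | 3 | 4 | 5;     // how bad it is if they do
};

const issues: Issue[] = [
  { title: 'Right-click on the add-number link opens the login page', likelihood: 2, impact: 2 },
  { title: 'Right-click on the add-number link bypasses login to my-account', likelihood: 2, impact: 5 },
];

// Triage order: highest risk score first.
const triaged = [...issues].sort(
  (a, b) => b.likelihood * b.impact - a.likelihood * a.impact,
);

for (const issue of triaged) {
  console.log(`${issue.likelihood * issue.impact}\t${issue.title}`);
}
```

A cosmetic redirect to the login page scores low; anything that bypasses authentication scores high and gets fixed first, regardless of whether anyone calls it a bug or an enhancement.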
The truth of the matter is, the longer you test, the more likely you are to find all those weird and wonderful issues that nobody else thought of. But we also know we won’t have all the time in the world to test a feature or a system to death. So, since the very nature of testing is (and should be) risk-based, we should naturally take the same risk-based approach when triaging issues.
Next time, instead of debating whether an issue is a bug or an enhancement, be sure to take a pragmatic view and assess the risk: the likelihood of the issue occurring, and the impact it will have.