If you are reading this article, there’s a good chance that you are working as a software tester or intend to. Congratulations…and sorry for you. Why should I be sorry for you? Because this is far from being the easiest job in the world, and you probably have to defend your positions and ideas almost every day. However, if you’re passionate about your job, then I’m sure it’s a kind of pleasure to discuss it, argue with people who have a different point of view, and explain to them why you can congratulate yourself on having made the right choice with this activity.
So what are the misconceptions about software testing you may have to discuss?
Testers are breaking the product
“I didn’t break the software. It was already broken when I got it.” — Michael Bolton
I love this one from Michael Bolton. Of course testers don’t break the product: instead, just by using it, they try to dispel the illusion that everything works like a charm. Your developer friend will always tell you “it works”, and I’m sure that each time you hear this you get that strange feeling that something must be wrong somewhere. That’s normal: you’re a tester, with that specific mindset that likes to explore things, check the known knowns, evaluate the known unknowns, and try to reach the unknown unknowns. If you break something while testing, you’ll have to gather as much information as possible (logs, screenshots, steps to reproduce, environments involved, etc.) because your job is not to break the product (users will do that for you) but to give stakeholders information about the state of the product.
You are a tester because you are not technical enough to be a developer
I started my career in IT as a developer. At the time the IDE was vi or emacs, and you needed to run make in a terminal and wait for the end of the compilation to know whether what you had written was correct. Trust me, it was not easy without a real IDE. I guess that if I had chosen to continue as a developer, then 15 years later I would probably not be that ‘bad’ developer we sometimes meet. Being a tester is most of the time a choice, because it’s a very exciting activity. Those who haven’t really practiced testing may think it is boring, and may spread the wrong idea that you don’t need any technical background to be good at it.
Of course, if you intend to develop automated checks, you will be far more efficient if you are a brilliant developer, because it is far from easy. Besides, in order to understand some issues and try to reproduce them, simply clicking buttons in a browser won’t be enough: you need to understand the system under test, find and analyze the right server or client log, be able to use tools to slow down the network, and a lot more. You can be a penetration tester, a security tester, an API tester… A software tester is not a dethroned developer who just clicks on buttons and crosses their fingers waiting for a bug to magically appear.
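To make that “find and analyze the right log” point concrete, here is a minimal sketch of the kind of digging a tester might do when trying to reproduce an issue: scanning a server log for errors around the time the bug was observed. The log path, format and time window are hypothetical, not taken from any real product.

```python
# Minimal sketch: list ERROR/WARN lines logged around the time a bug was seen.
# The log path and the timestamp format are hypothetical examples.
import re
from pathlib import Path

LOG_FILE = Path("/var/log/myapp/server.log")  # hypothetical path
PATTERN = re.compile(r"\b(ERROR|WARN)\b")

def suspicious_lines(window_prefix: str) -> list[str]:
    """Return ERROR/WARN lines whose timestamp starts with window_prefix."""
    lines = []
    for line in LOG_FILE.read_text(errors="replace").splitlines():
        if line.startswith(window_prefix) and PATTERN.search(line):
            lines.append(line)
    return lines

if __name__ == "__main__":
    # e.g. everything logged between 14:30 and 14:39 on 2024-05-01
    for line in suspicious_lines("2024-05-01 14:3"):
        print(line)
```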
Something does not work in production, so it has not been tested well
What? I have heard that sentence several times: “It works like a charm, congrats to the developers”, and also this one: “There’s a problem in production, why did the testers let this happen?”. Why should ‘fame’ go to developers (and management, of course) when everything is OK, and ‘shame’ to testers when something is wrong? I hope this is no longer the case, but it used to be common in companies with a culture of blaming testers.
“Testing is not responsible for the bugs inserted into software any more than the sun is responsible for creating dust in the air.” — Dorothy Graham
It is impossible to test ‘everything’: even if, hypothetically, you think you have covered 100% of the test scenarios, the next day may bring surprises whose effects no one can predict: an update of a third-party library, a new configuration, a new environment, etc.
“No amount of testing can prove a software right, a single test can prove a software wrong.” — Amir Ghahrai
Most of the time, testers are part of the Quality Assurance department. This name (QA) is some kind of nonsense: testers cannot be held as the only people accountable for software quality. Testers don’t decide which issues to fix before the release; that’s the Product Owner’s responsibility. Testers cannot fix bugs; that’s the job of developers. Hence, as a tester, how could you assure quality if you cannot improve the product in any way? Our job is to give information; we can only be held responsible for failing to tell stakeholders what has not been tested, where a risk lies, and so on.
Testing is here to find all the bugs
“Program testing can be used to show the presence of bugs, but never to show their absence” — Edsger Dijkstra
I think this is the perfect answer to this misconception. Besides the impossibility of testing everything, you will never find all the bugs. Just imagine you are testing an Android app and think about Android fragmentation: based on that annual report from OpenSignal, 682,000 devices were surveyed. Even if you decide to support only one of them, let’s say a Samsung Galaxy S6, it is not only about the device itself but also about the Android version and, depending on where you bought it, the overlay software your reseller added (Orange, for example). Similarly, even if you decide to support only one model with one specific ROM, how many configuration options do you have in your software, on the client and on the server? Probably so many that they lead to a near-infinite number of possibilities. Specifically with smartphones, I’ve seen several issues reproducible on one device (the customer’s) and not on another (the one used for testing).
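A quick back-of-the-envelope sketch shows how fast the test matrix explodes even for a single supported device. The dimensions and counts below are made-up numbers, purely for illustration:

```python
# Back-of-the-envelope sketch of test-matrix explosion for ONE device model.
# All counts are made-up, illustrative numbers.
from math import prod

dimensions = {
    "Android versions": 5,         # OS versions still in the field
    "vendor/carrier overlays": 4,  # e.g. reseller-customised ROMs
    "client config options": 10,   # feature flags, locales, ...
    "server config options": 10,
    "network conditions": 3,       # wifi, 4G, degraded
}

combinations = prod(dimensions.values())
print(f"{combinations:,} combinations for a single device model")  # 6,000
```

And that is before you multiply by the hundreds of thousands of distinct devices actually in use.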
Testing can be automated
“It’s automation, not automagic.” — Jim Hazen
There are a lot of articles about checking versus testing; this one and this other one from James Marcus Bach and Michael Bolton are still good references, even after the Twitter storm about this view of automation and the personalities of the CDT (Context-Driven Testing) leaders. At the very least, using checking (for what is done by an automated machine, or by a non-expert following steps without trying to find anything other than what is written in the test cases) and testing (for all the activities a real human tester can do) helps communication and separates two kinds of activities with two distinct, easy-to-understand words.
Testing is more than checking. Do you remember the last time you found an issue while testing something else? It happens almost all the time; I love the serendipity of testing. Do you also remember observing a small glitch that only your eyes could see? Elements flickering on the screen, a part of the product disappearing for no apparent reason… There is a high chance that automated checks executed by a computer will miss those (unless you specifically ask them to check that part), but your brain won’t.
The main problem here is that some may think that everything can and should be automated. In fact, we should only automate the most boring tests: those that are repeated numerous times with only small variations, those that can be run with a lot of data as input (see the sketch below)… Not everything needs to be, or should be, automated. Don’t forget that each automated test has a cost in development, execution and maintenance.
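As an illustration of the kind of check that is worth automating, here is a minimal pytest sketch: one boring assertion repeated over many data points. The slugify() function and its expected outputs are hypothetical, purely for the example.

```python
# Minimal sketch of a repetitive, data-driven check worth automating.
# slugify() and the expected values are hypothetical examples.
import pytest

def slugify(title: str) -> str:
    # Placeholder implementation so the example is runnable.
    return "-".join(title.lower().split())

@pytest.mark.parametrize(
    ("title", "expected"),
    [
        ("Hello World", "hello-world"),
        ("  Spaces  everywhere ", "spaces-everywhere"),
        ("Already-slugged", "already-slugged"),
    ],
)
def test_slugify(title, expected):
    # The same boring check, run against many inputs.
    assert slugify(title) == expected
```

That is exactly what a machine is good at; finding the flickering element it was never asked about is not.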
Anyone can test
Anyone can code, anyone can play guitar, anyone can be a manager, anyone can be a CEO, anyone can cook, etc. As with any other skill, testing needs experience, practice and learning. Being a really good tester requires a specific mindset. Though you may work very close to developers and far from real users, you still need to advocate for the best behavior of the product, and continuously stand your ground for the good of future users. You also need to hold back any emotional involvement when issues are classified as “working as expected” simply because the code dictates the behavior.
“Pretty good testing is easy to do (that’s partly why some people like to say ‘testing is dead’ — they think testing isn’t needed as a special focus because they note that anyone can find at least some bugs some of the time). Excellent testing is quite hard to do.” — James Bach
Pointing at the weaknesses and unspotted issues in a product the team has put all its hopes into may not be well received at first; however, remaining confident at that precise moment may spare your company, workmates and future users a lot of hassle later.
To put it in a nutshell, not everyone can be a good tester.
Testing is done at the end
If you have read this far, you now know that testing is not clicking like a monkey on an interface in order to find all the bugs, and it is not only checking at the end. You can test at any time, and you can test anything. You can test the requirements and the user stories; you can test before a single line of code has been written. You can prepare your future test sessions with any documents given to you. You don’t only test a user interface: when available, you can test an API, or the server, with curl commands (see the sketch below). As a tester, I also like to read code reviews. Even if I don’t have many interesting comments to give (especially with those weird JavaScript languages), it helps me understand what to test, and sometimes I spot something inconsistent without waiting for the first Release Candidate to be ready for testing.
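Here is a minimal sketch of that kind of early API check, written with Python’s requests library (the same probe could be done with curl). The endpoint and the expected fields are hypothetical examples, not a real API:

```python
# Minimal sketch of an API smoke check, usable long before any UI exists.
# The endpoint URL and payload fields are hypothetical.
import requests

def test_user_endpoint():
    response = requests.get("https://api.example.com/v1/users/42", timeout=5)

    # The kind of things worth checking early: status code,
    # content type, and the basic shape of the payload.
    assert response.status_code == 200
    assert response.headers["Content-Type"].startswith("application/json")

    payload = response.json()
    assert "id" in payload and "email" in payload

if __name__ == "__main__":
    test_user_endpoint()
    print("API smoke check passed")
```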
“The earlier an issue is found, the less it will cost to fix.” — Me
Testing can be estimated
You’ll never know in advance what you will find. This may be perceived as a weakness, but I have never been able to estimate a testing phase accurately. You can give an accurate estimate of testing only if you already know everything about what you have to test. However, in order to have a good understanding of what needs to be tested, you have to test, explore further and learn new things, which will themselves lead to new tests and maybe to a change of scope.
“Testing is an infinite process of comparing the invisible to the ambiguous in order to avoid the unthinkable happening to the anonymous.” — James Bach