Are these really lame reasons not to hire a tester? (part 2)

This is the second part of my blog series looking at the “lame” reasons why companies do not hire testers, as published by bughuntress. You can see the original post here: LINK.

Just to remind you, or in case you have not read part 1: the general gist of the bughuntress post is that it details 5 “lame” reasons why companies do not hire competent testers. Through this series of blog posts I will look at each of those 5 “lame” reasons and assess whether they are as lame as the original post proclaims.

In this post I look at:

We don’t need to test a product until it is finished.

What bughuntress says:

Waiting for product to be finished is not rational, as it becomes more expensive and too difficult to make changes. The sooner you find and fix bugs the less expenses you will have and the better your reputation will be. Even if you release Beta versions before the final one, your customers would not be pleased by the product full of lags and defects. Do you need to take such a risk to lose your precious customers? Hardly.

OK, so let’s start with the positives. I agree with the sentiment of the title: testing certainly does not need to wait until the product is finished. In my view testing can, and often should, be involved as early as possible, although in my experience this does have its limits. I won’t go into the details here, but I may in a future post.

Now for the negatives. Firstly, I have never in my life heard this given as an excuse for not hiring testers, so I am going to have to take their word for that. My bigger problem is with what the details of this section on their site actually state.

The details repeat the old viewpoint that defects found late in the development lifecycle cost more. If I said this was excessively generalised I feel I would be being overly kind. People far more knowledgeable and qualified to explain why this is not (always) true have written plenty on the subject, and a quick Google will find it, but to say that all defects found late in the process cost the project x times more is just as incorrect as saying that all defects found early in the lifecycle cost x times less.

I would say that a more valid reason for wanting to start testing as early as possible is that it allows more opportunity to examine the product and find out useful information about it. This can then feed back into the process, which helps shape and maximise the value the product provides to those who will use it. Might it save you some cost and headaches to find something earlier rather than later? Possibly, but that is not always going to be the case.

My Verdict: A lame excuse (if it were used). Not hiring a tester because you are not going to test until everything is completed does seem to me an ill-informed reason, which is probably why in 15 years I have never heard it used as one. This excuse is more likely to be used as a reason for not hiring “right now” on a project. Do I agree with bughuntress’s justifications for testing early? No, but I do agree there are benefits to be had from testing early.

Are these really lame reasons not to hire a tester? (part 1)

I came across the following post the other day via a link on LinkedIn that a friend posted. You can see the original post here: LINK

The general gist of the post by bughuntress is that it details 5 lame reasons why companies do not hire competent testers, but while reading through it I could not help but feel that the post was perhaps being a bit harsh by calling them lame. Therefore, over the next few blog posts I will look at each of those 5 “lame” reasons and assess whether they are as lame as the original post proclaims. In this post we look at lack of time or budget.

Lack of budget or time.

What bughuntress says:

Usually this excuse leads to making programmers do testers’ work. Indeed, testing by programmers themselves can save you some money. But you should take into the consideration, that hiring a tester is cheaper than hiring a programmer of the same level, so you will pay programmers for work that testers can do for less salary.

Another obvious thing is that when testing their own code programmers tend to miss some errors which testers wouldn’t. All in all, while developing a complex project it will become clear that testers are more of an investment than useless spending of money

So is lack of budget or time a lame excuse for not hiring testers? In my opinion the answer is a very short one. Two letters short… it’s “NO”.

Getting into the specifics of what bughuntress is saying: I agree that when you don’t have dedicated testers but you do wish to perform some form of testing, there is a good chance the testing will be done by your developers, or maybe your analysts, BAs or stakeholders. Are testers cheaper than these other people? Probably, but are your testers (should you hire them) always going to be 100% utilised as testers? If they are not, there will be times when they are burning cost without adding value, unless they bring other skills, in which case you are still using one of those roles I mentioned a minute ago.

Also, I think the view that a developer testing their own code is more likely to miss an error that a tester would catch is a little old-fashioned and a bit of a myth in my experience. Even if it were the case, just because you don’t have a dedicated test team does not mean that a developer has to “mark their own homework”; peer reviews, or any of the other roles I mentioned above, can bring a different viewpoint to the product being tested and add value. With these in place, some pretty decent testing can still be achieved without the additional outlay for a dedicated tester, in my opinion.

My Verdict: Not a lame excuse. There are very valid reasons why time or budget restrictions would mean that it was not viable, or indeed sensible, to hire a dedicated tester, and to assume that developers can’t test is both wrong and disrespectful in my opinion.

Testing with Questions

Testing is about questioning the software and using the information in the answers we receive to inform our own decisions and those of our stakeholders. Whether we write all our tests out in scripts before we start execution or we explore the software as we go, everything we do comes down to asking a question that we want answered. I have always felt that when I am testing something I am looking to learn something new that I did not already know, and keeping the fact that my tests are questions at the forefront of my mind helps me make the most efficient use of my time while testing.

My definition for testing is “a set of questions used as a means of evaluating the capabilities, effectiveness, and adherence of software”.

I find, however, that when I speak with other testers about this they either look at me like I am going mad or say something like “oh, I never thought of it that way”, which always surprises me somewhat. I find it difficult to see how people who spend their days thinking about, documenting, and executing tests do not make the association that each of their tests is asking a question.

I have also seen that where testers do not make this association they fall into some traps. One of the most common traps I see is where they repeatedly ask the same question over and over again. If a test is not telling you something new then running it is not the best use of your time. When they repeatedly ask the same question they are not learning anything new about the software, and given that all testing phases are hamstrung by time, this means that parts of the system remain a mystery (normally until a production user gets hold of them).

Another thing I notice is testers devising tests without having considered the question they are asking. They lack an understanding of why they want the answer that the test is going to give them. When you treat your test as a question, it forces you to think not only about what you are going to do, but also about why you want to know, and what you are going to do with the information you get back.

There are 3 main categories of question that I ask when testing, and they are:

  1. Verifying questions
  2. Investigative questions
  3. Clarifying questions

Verifying questions are those where I am looking to prove or disprove the truth of a known expectation. These are primarily the questions that relate to requirements testing: you have a requirement stating what the software must or must not be able to do, and the tests you run look specifically to answer those questions.
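To make that concrete, here is a minimal sketch of a verifying question written as an automated check. It assumes a pytest-style test runner, and the apply_discount function and its 10%-over-100 requirement are entirely hypothetical, invented purely for illustration:

```python
# A verifying question phrased as a check: "does the software meet the
# stated requirement?". apply_discount and the requirement behind it are
# hypothetical, invented only to illustrate the idea.

def apply_discount(total):
    """Hypothetical production code: 10% off orders over 100."""
    return total * 0.9 if total > 100 else total

def test_orders_over_100_get_ten_percent_discount():
    # Question: is the known expectation (the requirement) true?
    assert apply_discount(200) == 180

def test_orders_at_or_below_100_are_not_discounted():
    # The "must not" side of the same requirement.
    assert apply_discount(100) == 100
```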

Investigative questions are those where I am examining the software in an attempt to learn something hidden or complex. This is what I see as exploratory testing: I don’t have a specific requirement that I am looking to verify, but rather a hunch, or I am curious to know what will happen under a specific condition. I find an investigative question is often born out of other questions, such as an unexpected answer to a verifying question that leads me to think of other potential issues or scenarios based on this new information.
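By contrast, an investigative question has no fixed expected answer up front. A rough sketch, again using the same invented apply_discount function, might be nothing more than a quick probe whose output I read in order to learn something:

```python
# An investigative question: "I have a hunch the edges behave oddly --
# what actually happens for zero, negative or very large totals?"
# There is no expected result here; the point is to learn something new.

def apply_discount(total):
    # Same hypothetical function as in the earlier sketch.
    return total * 0.9 if total > 100 else total

probes = [0, -50, 100.0000001, 10_000_000]
for total in probes:
    print(f"apply_discount({total!r}) -> {apply_discount(total)!r}")

# Whatever this prints becomes information: it may confirm the hunch,
# raise a defect, or spawn further verifying or clarifying questions.
```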

Finally, clarifying questions are where I want to challenge an answer the software has already given me, either because I am suspicious of it or, more likely, because I just want to make sure I fully understand it. Now, I know I said above that asking the same question over and over again is a waste of a tester’s time, and I stand by that, but that was about the tester not realising they are asking the same question. A clarifying question is one where I specifically know I am asking the same question (maybe in a slightly different way) to ensure that I understand the answer I already have, which is very different.
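To round it off, a clarifying question deliberately re-asks an earlier question in a slightly different way to confirm that I really understood the answer. Sticking with the same invented example:

```python
# A clarifying question: earlier, apply_discount(200) returned 180. Is that
# really "10% off the whole total", or something else that just happens to
# look the same at 200? Re-ask the question with a value where alternative
# explanations (e.g. a flat 20 off) would give a different answer.

def apply_discount(total):
    # Same hypothetical function as in the earlier sketches.
    return total * 0.9 if total > 100 else total

def test_discount_really_is_ten_percent_of_the_whole_total():
    assert apply_discount(300) == 270
```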