The Bug of Quality

One of the perks of being a contractor is that I get to see all of the different ways companies handle the software process, including how they handle the quality of a product. This exposes me to different schools of thought on how to improve and maintain quality, and lets me evaluate them on their effectiveness. It is quite the learning experience.

As you might expect, everyone manages the quality assurance process differently. However, companies can be broken down into two basic groups: those who know what they’re doing, and those who would better serve society by violating child labor laws while manufacturing some shoddily made rusty springs targeted as toys for preschoolers.

I’ll let you decide which group your company falls in.

Kicking the Bee’s Nest

Most companies in the latter group place a lot of emphasis on the quantity of bugs as a measure of how well the Quality Assurance group is doing. The result of this is an overflow of worthless bugs, and an irate quality assurance group who doesn’t want to talk to anyone from whom they can’t steal bugs. This can lead to ridiculous extremes where time is wasted just so QA can have another bug attributed to them.

In one instance I wrote a bug describing how a program’s updater functionality needed to display a progress indicator. I included steps for reaching the place in the program where the progress dialog should be, the actual results (no updater), and the expected results (a progress dialog). I even included what should be in the progress dialog. The QA assigned as the owner did not like the fact that she did not get credit for the bug. So my bug was deep-sixed, and a new bug was given to me (credited to the QA) with simply the phrase: “The updater should show progress.”


This just goes to show that measuring by quantity means that QA is going to try to game the system. That is, they’re going to go find the easiest-to-see bugs and write those up. This leads to a lot of typos and graphical nits being written up, while the really bad, functional bugs are ignored, because finding those requires an investment in time. In other words, the quality of the software goes down, but the number of bugs being written goes up. And management never has a true picture of the software quality.

Moreover, an emphasis on quantity means that bugs are going to have incomplete information. This means that engineers are going to either have to send the bug back to QA to complete it, which wastes time, or the engineer will have to figure out the missing information themselves. This is problematic because the engineer’s time costs more than the QA’s time, at least in any company I’ve ever worked for.

Too much emphasis on the quantity of bugs will give you the wrong results. Actually, let me take that a step further: any emphasis on the quantity of bugs will give you bad results.

Grading quality assurance people by how many bugs they write up is like grading engineers by how many lines of code they write. The quantity of bugs has absolutely no correlation to the quality of software, or even the performance of a particular QA. Sixty cosmetic bugs don’t indicate a bigger problem than six crashing bugs. Remember, nothing improves in the software unless you can actually do something with the bugs that are written up.

Quality is the Queen Bee

To use an overused phrase: It’s about quality, not quantity. So what is a quality bug? Well, to start off with, let’s define what a bug is:

A bug is a description of a defect in the software.

Although that is an accurate description of what a bug is, it doesn’t help us determine what a quality bug is and isn’t. To do that we need to remind ourselves of the purpose of a bug:

The purpose of a bug is to reliably identify flaws in the software, and ultimately allow the software to be improved in some way.

I’d like to point out that “improving the software in some way” does not necessarily mean fixing the bug. It might be providing a workaround, or just documenting its existence so management knows about it.

In order to do anything with a bug, it has to be reproducible. This is probably the most important aspect of a bug. If the engineer is going to fix it, he or she needs to be able to see what’s going on. For this reason, it is important that each and every bug have precise steps on how to reproduce it. I can’t tell you how many bugs I have been assigned that say “feature Foo does not work,” with no steps on how to reach feature Foo, which is invariably an obscure feature only accessible by navigating through no less than three menu options, six modal dialogs, and the Secret Squirrel handshake.

Furthermore, a bug should contain any additional information that will help reproduce the bug. For example, a screenshot of the dialog that’s all wonked. Or the actual file that QA used to cause the crash they’re reporting. Anything that helps the reader of the bug reproduce the bug, improves the quality.

Next to being reproducible (and this might seem obvious), the bug needs to state what’s wrong. A lot of people in the QA profession seem to think that, once they’ve given the steps, the defect will be obvious. Unfortunately, it often isn’t. Because of this, every bug must contain the actual results of executing the steps, and the results QA wanted to see (the expected results).
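Those required ingredients can be sketched as a simple data structure. This is just an illustration of the idea, not the schema of any real bug tracker; the field names and the example bugs are my own invention, loosely modeled on the updater story above.

```python
from dataclasses import dataclass, field


@dataclass
class BugReport:
    """A minimal bug report: every non-default field is required before
    the bug is worth an engineer's time."""
    title: str
    steps_to_reproduce: list[str]   # precise, numbered steps
    actual_results: str             # what the software actually did
    expected_results: str           # what QA wanted to see
    attachments: list[str] = field(default_factory=list)  # screenshots, crash files, etc.

    def is_actionable(self) -> bool:
        # A bug missing repro steps or results just wastes time.
        return bool(self.title and self.steps_to_reproduce
                    and self.actual_results and self.expected_results)


good = BugReport(
    title="Updater shows no progress indicator",
    steps_to_reproduce=["Open Preferences", "Click 'Check for Updates'"],
    actual_results="Update downloads with no progress dialog shown",
    expected_results="A progress dialog with a bar and a cancel button",
)

bad = BugReport(
    title="The updater should show progress",
    steps_to_reproduce=[], actual_results="", expected_results="",
)
```

Here `good.is_actionable()` is true and `bad.is_actionable()` is false, which is exactly the difference between the bug I wrote and the one-liner that replaced it.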

For example, I once received a bug that gave, in excruciating detail, how to access a certain dialog. I easily repeated the steps, and stared at the dialog for half an hour. I couldn’t figure out what was wrong with it. The wording was correct, the UI was laid out fine, and it had all the functionality of its Windows counterpart. After sending the bug back to the QA, I found out that she simply didn’t want the dialog to exist at all. I’m still trying to figure out how I was supposed to know that.

Bugs should be self-contained. Anyone who looks at the bug should not have to go looking in various documents and file servers just to replicate the bug, or determine what it’s about. Once, I received a bug that simply said:

Program violates Section VII.A.2.c of the spec.

My jaw just dropped when I read this. First, I didn’t have access to said spec, and assuming I did, I wasn’t about to go read it just to figure out what the author was talking about. Mainly because it still wouldn’t tell me how to reproduce the bug or in what way it violated that portion of the spec. The author of the bug had determined all of this information, and should have recorded it in the bug. Wasting the reader’s time is the mark of a low quality bug.

Probably the easiest way for me, as an engineer, to determine who on the QA team is merely adequate, and who is a superstar, is to see how well they whittle down the required steps to reproduce a bug. The merely adequate will simply use the first set of steps they found to reproduce the bug. The really exceptional QA will go a step further and determine which steps are actually required, and which can be removed and still have the bug reproduce reliably.

The benefit of having the minimal number of steps to reproduce is twofold. First, it saves the reader time when reproducing the bug. Second, it helps zero in on what piece of functionality in the program is actually broken. This is beneficial to the engineer, who can narrow down the search for the bug in the codebase. It’s beneficial to the manager because he or she has a better idea of which components in the software are stable and which need more work.
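That whittling-down process can even be described mechanically. Here is a sketch of a greedy step reducer; the `reproduces` predicate is hypothetical (in practice it's a human re-running the steps), and the toy steps below are invented for illustration.

```python
def minimize_steps(steps, reproduces):
    """Greedily drop steps that aren't needed to reproduce the bug.

    `steps` is the first-found repro recipe; `reproduces(subset)` returns
    True if the bug still occurs after following only those steps.
    """
    minimal = list(steps)
    i = 0
    while i < len(minimal):
        candidate = minimal[:i] + minimal[i + 1:]   # try removing step i
        if reproduces(candidate):
            minimal = candidate                     # step i wasn't needed
        else:
            i += 1                                  # step i is essential
    return minimal


# Toy example: the bug really only needs two of the four recorded steps.
steps = ["launch app", "open dialog", "resize window", "click OK"]
needed = {"open dialog", "click OK"}
result = minimize_steps(steps, lambda s: needed <= set(s))
# result is ["open dialog", "click OK"]
```

A merely adequate QA files all four steps; the superstar files the two that survive the reduction.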

Back to the Hive

In order to “measure” how well Quality Assurance is doing, some companies have completely lost the point of writing a bug. The point of a bug is to reliably identify a defect in the software, so that it may be addressed. If the bug can’t reliably do that, it is not worth anything, and in fact, has a negative worth because of the time wasted creating it and the time trying to reproduce it.

In order to get worthwhile bugs, the emphasis must be on quality, with no attention paid to quantity. Quality demands that bugs always contain certain information, such as steps to reproduce, actual results, and expected results. This allows the development team to have a better understanding of the quality of the product, and to take the proper action.

Respect for the testers

Sure, it’s a lot of fun to tease them and try to make their lives miserable, but really, if we didn’t have testers, who would we, as engineers, have to torment? The marketing people? Please, they’re not even self-aware enough to know that we’re doing it, and that’s no fun. Testers, on the other hand, are not only fun to use as scapegoats, but they also provide an important service for the product.

Namely one I never want to do.

Despite that, I recently found myself doing exactly that. My WordPress plugin is now code complete, but it is in need of testing. I looked around the apartment for suitable candidates, but the lizards around here are so small they cannot even depress the keys on the keyboard, despite jumping on them. That’s how this unmentionable task fell to me: I had to write and execute test plans.


Now writing test plans and such is something that I learned about in college. At the time I thought “Bah! This is nice for reference and all, but I’m never going to use this. I’m an engineer! I create the bugs, not find them!” Oh, how wrong I was. While in college, I also worked as an intern. Although I was supposed to be working on developing internal tools, I often got pulled into doing QA work. (Note for the inexperienced: the QA department is always understaffed. Hide behind the nearest potted plant if the QA manager ever comes within ten feet of your cubicle.) It was a never-ending battle: me trying to escape QA work, the QA manager pulling me back in, and the other engineers laughing at me the entire time.

Testing and quality assurance work is never fun. When writing a test plan, you have to think of all the possible ways that a feature can break, and make sure all the different angles are covered. But that’s balanced by the fact that you can’t test everything, so you have to be smart about what you test. That way you get the maximum possible coverage for the least amount of work. After you write the mind-numbingly boring test plan, some unlucky bloke has to run it. The experience is much like putting a portable drill to your temple and pressing really hard.

I’ve actually managed to get the test plans for my plugin written now. I found that writing them myself was a good exercise. I had to change my attitude from “how do I make this work?” to “how do I crush this pathetic excuse for software, and send the developer running home to his mommy?” I found several bugs just by thinking through how to test the different features. I also found that there were features that weren’t as usable as they should have been, since I hadn’t been looking at them from the point of the user, but that of an engineer. All of this, and I hadn’t even run the test plan. Good stuff.

I’m not looking forward to running my test plans. I have to run them at least three times: once on Safari, once on Firefox, and once on my arch-nemesis, Internet Explorer. May God have mercy on me.

I say all of this to show that I respect the testers and quality assurance people out there. Sure, I only come to this realization each time I have to do some sort of testing myself, or when a tester finds a bug that I wouldn’t have caught, but it bears repeating. Testers are there to make the engineers look good. Unless the tester wants your parking spot. Then they’re probably trying to get you fired so they can have a shorter walk to the building.