Testing tips

Testing is the process of comparing the invisible to the ambiguous, so as to avoid the unthinkable happening to the anonymous. --James Bach [source]

Overview

Ensuring adequate testing is a critical success factor for your project. Often good teams produce inferior products due to inadequate testing.

Tips for test planning

Determine how much to test

A frequent question from students: "How many test cases should we write?" Answer: "As many as you want." (Huh?)

There is no need to write a single test case if you are sure that your system works perfectly. Coming back to reality, it is not unusual to have more testing code (i.e., test cases) than functional code. On the other hand, the number of test cases by itself does not guarantee a good product. It is the quality of the test cases that matters.

More importantly, you should adjust your level of testing based on ...

Do not underestimate the testing effort

Testing coupled with subsequent debugging and bug fixing will take the biggest bite out of your project time. However, if you check your project plan right now you will realize that you gave testing a much smaller share of resources than development. Most student teams underestimate the testing effort.

When correctness is essential, at least ~25-35% of the schedule should go to system-level testing. That is not counting developer testing.

Another good rule of thumb is that unit test code should be at least the same size as the production code [UML Distilled].

Have a plan

"We test everything" is not a test plan. "We test from 13th to 19th" is not detailed enough. A test plan is not a list of test cases either.

A test plan includes what will be tested, in what order, by whom, when you will start/finish testing, and what techniques and tools will be used.
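
For example, a fragment of a plan could look like this (the dates, component names, and tools below are only placeholders):

* Week 5-6: unit testing of the Storage and Logic components, by their respective authors, using JUnit
* Week 7: integration testing of Storage with Logic, by member A, using automated test drivers
* Week 8: system testing against the requirements specification, led by member B; manual UI tests plus the automated regression suite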

Put someone in charge

Testing is so important that you should put someone in charge of it even if you are not following the guru concept. However, this does not mean that only that person will do the testing.

Insist on an iterative process

If you have to move 100 bricks from point A to B within 10 hours, which method would you prefer: carry all 100 bricks and run from A to B during the last 10 minutes, or walk from A to B ten times carrying 10 bricks at a time? If you prefer the latter, insist that the team follow an iterative development process; it will make testing feel like walking with 10 bricks rather than running with 100.

Consider Test-Driven Development (TDD)

Code Complete (page 504) says "... test-first programming is one of the most beneficial software practices to emerge during the past decade and is a good general approach". Test-Driven Development (TDD) advocates writing test cases before writing the code. While TDD has its share of detractors, it is considered an exciting way to develop code, and its benefits outweigh the drawbacks (if any). It is certainly suitable for student projects. It might feel a bit counter-intuitive at first, but it feels quite natural once you get used to it.
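
As a minimal illustration, using the Payroll class that appears in the examples later in this chapter (and assuming getTotalSalary returns an int): in TDD you would write a failing test like the one below first, and only then write just enough of getTotalSalary to make it pass, before refactoring and moving on to the next test.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class PayrollTest {
    @Test
    public void totalSalaryIsZeroWhenEndDateIsBeforeStartDate() {
        // Written before Payroll.getTotalSalary is implemented;
        // at first this test fails (or does not even compile).
        Payroll p = new Payroll();
        assertEquals(0, p.getTotalSalary("1/1/2007", "1/1/2006"));
    }
}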

Have testing policies

Decide what testing policies you are going to apply for your project (e.g., how to name test classes, the level of testing expected from each member, etc.). This can be done by the testing guru.

Here are some reasonable test policies you could adopt (examples only):

Insist on developer testing

Supervisors often get the question "Isn't it enough just to do system testing? If the whole works, then the parts must surely work, right?" The answer is no: system testing exercises only a small fraction of the paths through each module, so bugs hiding in the rest can easily survive it; that is why each part needs developer testing as well.

Use cross testing (if you must)

Cross-testing means you let a teammate test a module you developed. This does not mean you do not test it yourself; cross-testing is done in addition to your own testing. Cross-testing is additional work, delays the project, and is against the spirit of "being responsible for the quality of your own work". You should use it only when there is a question of "low quality work" by a team member or when the module in question is a critical component. Any bug found during cross-testing should go on the record, and should be counted against the author of the code.

Take care when choosing dedicated testers

Everyone must unit-test their own code, and do a share of other types of testing as well (i.e., integration/system testing). If your course allows choosing dedicated testers, choose someone competent. While you may or may not choose your best resource as the tester, testing is too important to entrust to the weakest member of your team.

Automate testing, as much as possible

While the instructor might not insist on fully automated testing, note the following.

Furthermore, it is natural to automate unit and integration testing as lower level components cannot be tested manually anyway because they do not have a user interface.

Tips to increase testability

Testability depends on factors such as controllability, observability, availability (of executables, information), simplicity, stability, separation of components, and availability of oracles.

Opt for simplicity

Simpler designs are often easier to test. That is one more reason to choose simplicity over complexity. Besides, we are more likely to mess up when we try to be clever.

Use assertions, use exceptions

Use exceptions where appropriate. Use assertions liberally. They are not the same (find out the difference), and the two are not interchangeable.

It has been claimed that MS Office has about 250,000 LOC of assertions (about 1%). Various past studies have found up to 6% of code being assertions in various software. Microsoft found that code with assertions has a lower defect density.
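
A rough sketch of the difference (the SalaryCalculator class below is invented for illustration): use an exception for invalid input that can legitimately arrive from outside the module, and an assertion for a condition that can only be false if the module itself has a bug.

public class SalaryCalculator {
    public int pay(int daysWorked, int dailyRate) {
        // Invalid input coming from outside the module:
        // signal it with an exception that the caller can handle.
        if (daysWorked < 0 || dailyRate < 0) {
            throw new IllegalArgumentException("daysWorked and dailyRate must be non-negative");
        }
        int total = daysWorked * dailyRate;
        // Internal sanity check: if this fails (e.g. integer overflow),
        // the bug is in this method, not in the caller.
        assert total >= 0 : "salary calculation produced a negative total";
        return total;
    }
}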

Provide a testing API

Testability does not come for free. You may have to add methods to increase testability. Some examples:
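
For instance (a made-up sketch; the class and methods are hypothetical), you might add package-private methods that let tests set up or observe state that the production API does not expose:

import java.util.HashMap;
import java.util.Map;

public class InventoryCache {
    private Map<String, Integer> items = new HashMap<String, Integer>();

    public void add(String id, int quantity) {
        Integer current = items.get(id);
        items.put(id, current == null ? quantity : current + quantity);
    }

    // --- Methods added purely to improve testability ---

    /** Lets a test put the cache into a known state without going through the normal workflow. */
    void loadForTest(Map<String, Integer> snapshot) {
        items = new HashMap<String, Integer>(snapshot);
    }

    /** Lets a test observe internal state that the production API does not expose. */
    int itemCountForTest() {
        return items.size();
    }
}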

Use built-in self-tests

For correctness-critical parts, you can develop the same function using multiple algorithms and include them all in the system. At runtime, the system uses an internal voting mechanism to make sure all the algorithms give the same result. If the results differ, it fires an assertion.
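
A minimal sketch of the idea (the sorting routines below are stand-ins for whatever correctness-critical function you choose to duplicate):

import java.util.Arrays;

public class Sorter {
    public int[] sort(int[] input) {
        int[] resultA = librarySort(input.clone());
        int[] resultB = insertionSort(input.clone());
        // Built-in self-test: two independent implementations must agree;
        // if they do not, at least one of them has a bug.
        assert Arrays.equals(resultA, resultB)
                : "sort implementations disagree for " + Arrays.toString(input);
        return resultA;
    }

    private int[] librarySort(int[] a) {
        Arrays.sort(a); // placeholder for the first algorithm
        return a;
    }

    private int[] insertionSort(int[] a) {
        // second, independent implementation
        for (int i = 1; i < a.length; i++) {
            for (int j = i; j > 0 && a[j - 1] > a[j]; j--) {
                int tmp = a[j]; a[j] = a[j - 1]; a[j - 1] = tmp;
            }
        }
        return a;
    }
}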

Decouple the UI

User interface testing is harder to automate. Decoupling the UI from the system allows us to test the rest of the system without UI getting in the way. Here are some hints:
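
One such hint, shown as a made-up sketch below: keep the business rules in a plain class with no UI dependencies, and let the UI class only gather input and display output, so that automated tests can drive the logic directly.

// Plain logic class: no UI code, so automated tests can call it directly.
public class DiscountCalculator {
    public double discountedPrice(double price, int membershipYears) {
        double rate = Math.min(0.05 * membershipYears, 0.30);
        return price * (1 - rate);
    }
}

// Thin UI layer: only collects input and formats output; it holds no business rules worth testing.
class DiscountScreen {
    private final DiscountCalculator calculator = new DiscountCalculator();

    String handleSubmit(String priceField, String yearsField) {
        double result = calculator.discountedPrice(
                Double.parseDouble(priceField), Integer.parseInt(yearsField));
        return "Price after discount: " + result;
    }
}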

Use logging

If your system periodically writes to a log file, this file can be a valuable resource for testing and debugging. Sometimes, you can simply use the log file to verify the system behavior during a test case.
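
A minimal sketch using java.util.logging (the class and messages are invented for illustration); after a test run, the tester can inspect payroll.log to confirm that the expected steps actually happened:

import java.io.IOException;
import java.util.logging.FileHandler;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;

public class PayrollRun {
    private static final Logger LOG = Logger.getLogger(PayrollRun.class.getName());

    public static void main(String[] args) throws IOException {
        FileHandler file = new FileHandler("payroll.log", true); // append to the log file
        file.setFormatter(new SimpleFormatter());
        LOG.addHandler(file);

        LOG.info("Payroll run started");
        // ... actual processing would happen here ...
        LOG.info("Payroll run finished with 0 errors");
    }
}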

Tips for developer testing

Do it

Make sure you do plenty of developer testing (unit and integration testing). Note that debugging is not developer testing.

Follow the implementation strategy

In design-by-contract (DbC) style coding, the users of your module are responsible for the validity of the input values passed to it. Your module guarantees nothing if the input values are wrong. That means you do not have to test the module for invalid input. If the language has no built-in support for DbC, you can use assertions to enforce the validity of input values.
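
For example, in Java (which has no built-in DbC support) the contract of the getTotalSalary method used elsewhere in this chapter could be stated with assertions; the implementation below is only a placeholder sketch:

public class Payroll {
    /**
     * Contract: startDate and endDate must be valid dates in d/m/yyyy format,
     * and startDate must not be later than endDate. Behaviour is undefined otherwise.
     */
    public int getTotalSalary(String startDate, String endDate) {
        assert isValidDate(startDate) : "precondition violated: bad startDate " + startDate;
        assert isValidDate(endDate) : "precondition violated: bad endDate " + endDate;
        // ... calculation for valid input only; no test cases needed for invalid dates ...
        return 0; // placeholder
    }

    private boolean isValidDate(String date) {
        return date.matches("\\d{1,2}/\\d{1,2}/\\d{4}"); // crude placeholder check
    }
}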

In defensive coding, you do not assume that others will use your code the way it is supposed to be used; you actively prevent others from misusing it. Testing should follow the same philosophy. Test the module to see whether it behaves as expected for invalid inputs.
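
In that case the tests should probe the module with invalid values too. A sketch in JUnit, assuming (as in the sample test cases later in this chapter) that getTotalSalary is specified to return -1 for malformed dates:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class PayrollDefensiveTest {
    @Test
    public void rejectsMalformedStartDate() {
        // Invalid startDate, valid endDate: the defensive module should report the error itself.
        assertEquals("malformed startDate not rejected", -1,
                new Payroll().getTotalSalary("40/40/1036", "1/1/2007"));
    }

    @Test
    public void rejectsMalformedEndDate() {
        assertEquals("malformed endDate not rejected", -1,
                new Payroll().getTotalSalary("1/1/2007", "40/40/1036"));
    }
}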

Tips for system testing

When system testing, be a system tester

When you integrate everything and test it, that is still integration testing. System testing begins when you test the system as a whole, based on the system specification. System testing requires a different mindset from integration testing; that is why system tests are usually done by a separate QA team. Student projects often do not have the luxury of a QA team, but try to change your mindset when you transition from integration testing to system testing. Another trick that can help here is to plan system testing so that each team member tests functionality implemented by someone else in the team.

Allocate manpower to minimize risk

Here are some tactics to mitigate risks during system testing:

Use error seeding to gauge the test coverage

After the 'tester' of your code says he/she is done testing it, you can purposely insert subtle bugs into your code (this is called error seeding) and see how the test cases respond. If no test case fails, the tester has not done a good job. If a large number of test cases fail but the tester has to scratch his head for hours to figure out what to put in the bug report, the coverage is good but the quality of the test cases is not.

You can make it a fun game in which you get points for sneaking in a bug that is not caught by the test cases.
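
For example (an invented snippet), a seeded bug can be as small as flipping a boundary condition:

public class Discount {
    public boolean qualifiesForSeniorDiscount(int age) {
        // Original line:  return age >= 60;
        return age > 60; // seeded bug: off-by-one at the boundary; a good test suite should catch it
    }
}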

Keep the forest in view

While testing needs to be meticulous and thorough, there is no point being fanatical about it. If you get bogged down by trying to achieve high coverage over one part of the system, you might end up not testing some other parts at all.

Test broadly before you go deep. Check all parts of the program quickly before focusing. Start with core/primitive functions. You may have to fix those before you test the rest.

Start with obvious and simple tests.   If any such case fails, developers will want to know it sooner rather than later. While you need to check invalid and unexpected input, check valid and expected input first.

Start by writing conceptual test cases

Before you implement test cases in code, you may want to define them at a conceptual level. These are easier to cross-check against the specification. Doing this early will help you in test planning and in estimating the testing effort.

Here are some example test cases specified at conceptual level:

* Test the scroll bar of the browser for long web pages
* Test the scroll bar for displaying non-html objects, such as images
* ...

As you can see from the above examples, these can later go into the documentation of test cases.

Look in the right places

Pragmatic Software Testing says "hunt where the birds are, fish where the fish are", meaning: test where the bugs are likely to be. Be smart about where you focus your testing energy. Do not test areas at random. Do not test functionality in the order it appears in the API. Examples of places where bugs are likely to be:

Make bug reports work

Your bug report should not stop at saying "hey, I found a bug"; it should help your colleague as much as it can to locate the bug. When a test case fails, investigate further. See if similar failures occur elsewhere in the system. Figure out the minimum required input that can reproduce the failure. Try to characterize the failure in a more general manner.

An example:

"Test case 1103 failed" [not very helpful]
"System fails when I enter the text '100% coverage expected <line break> terms fixed' into the description field" [better than the above]
"System fails when I enter symbols such as '%' into any text field in the 'new policy' UI. This does not happen with other UIs" [much more helpful]

Some useful policies for bug reporting:

Do not forget to test for non-functional qualities

The system may need to be checked for performance, usability, scalability, installability, uninstallability, portability, ...

Some general testing tips

Be systematic

Writing random test cases that just "feel right" is no good.

Accept your fallibility

Do not expect a piece of code to be perfect just because you wrote it yourself  (there is no such thing as perfect code).

"Trivial" code is not immune from bugs

Very small and simple modules can contain errors. They are so easy to overlook precisely because you do not expect them to have bugs.

Do not test gently

Being the author of the code, you tend to treat it gingerly and test it only with test cases that you (subconsciously) know will work. Instead, good testing requires you to try to break the code by doing all sorts of nasty things to it, not to prove that it works; the latter is impossible in any case.

Make test code self-documenting

It is unlikely that you will write a document that systematically describes every test case you have. But you can still make your test code self-documenting. Add comments (or some other form of external documentation) for information that is not already apparent from the code.

For example, the following two test cases (written in xUnit fashion) execute the same test, but the second one is more self-documenting than the first because it contains more information about the test case.

assertEquals(p.getTotalSalary("1/1/2007", "1/1/2006"), 0);

print ("testing getTotalSalary(startDate, endDate) of Payroll class");
assertEquals(p.getTotalSalary("1/1/2007", "1/1/2006"), 0, "Case 347: system does not return 0 when end date is earlier than start date");

Only one invalid value per test case

When you are testing how a system responds to invalid inputs, each test case should have no more than one invalid input. For example, the following test case uses two invalid values at the same time, one for startDate and one for endDate.

print ("testing getTotalSalary(startDate, endDate) of Payroll class");
assertEquals(p.getTotalSalary("40/40/1036", "40/40/1037"), -1,
"Case 142: system does not return -1 when the date format is wrong");

If we wrote two test cases like this instead, we get to learn about the error handling for startDate as well as endDate.

print ("testing getTotalSalary(startDate, endDate) of Payroll class");
assertEquals(p.getTotalSalary("40/40/1036", "1/1/2007"), -1,
"Case 142: system does not return -1 when the startDate format is wrong");
assertEquals(p.getTotalSalary("1/1/2007", "40/40/1036"), -1,
"Case 143: system does not return -1 when the endDate format is wrong");

Note that if we want to test the error handling of each component of the date (i.e., day, month, and year), we have to write more test cases (yes, that's right. that's why testing is hard work :-)

Make every test case count

Every test case you write should strive to find out something about the system that the existing test cases do not tell you. Document what that is (that is the "objective" of the test case).

Some students try to boost their LOC count by duplicating existing test cases with minor modifications.  This will not happen if you have the policy "every test case should have a clearly specified and unique objective that adds value to testing".

For example, if one test case tests a module for a typical input value, there is no point in having another test case that tests the same module for another typical input value. That is because the objectives of the two cases are the same. Our time is better spent writing a test case with a different objective, for example, a test case that tests a boundary of the input range.

Have variety in your test cases

Consider the two test cases below that test getTotalSalary(startDate, endDate). Note how they repeat the same input values across the test cases.

id    startDate   endDate     Objective and expected result
345a  1/1/2007    40/40/1000  Tests error handling for endDate, error message expected
345b  40/40/1000  1/1/2007    Tests error handling for startDate, error message expected


Now, note how we can increase the variety of our test cases by not repeating the same input value. This increases the likelihood of discovering something new without increasing the test case count.

id    startDate   endDate     Objective and expected result
345a  1/1/2007    40/40/1000  Tests error handling for endDate, error message expected
345b  -1/0/20000  8/18/2007   Tests error handling for startDate, error message expected
 

Know how to use "unverified" test cases

A test case has an expected output. A proper test case should have its expected output calculated manually, or by some means other than the system being tested. But this is hard work. What if you use the system being tested to generate the expected output? Those test cases, let us call them unverified test cases, are not as useful as proper test cases because they pass trivially: the expected output is exactly the same as the actual output (duh!).

However, they are not entirely useless either. Keep running them after each refactoring you do to the code. If a certain refactoring breaks one of those unverified test cases, you know immediately that the behavior of the system changed when you did not intend it to. That is how you can still use unverified test cases to keep the behavior of the system unchanged. But make sure that you have plenty of proper (verified) test cases as well.
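
A rough sketch of the idea (the ReportGenerator class and its output are invented): record the system's current output once, then keep asserting against that recorded value after every refactoring.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class ReportRegressionTest {
    @Test
    public void monthlySummaryUnchangedByRefactoring() {
        // The expected string below was generated by the system itself before refactoring began,
        // so this is an "unverified" test case: it cannot prove the output is correct,
        // but it will fail the moment a refactoring changes the behaviour.
        String expected = "Total: 1234.00, Employees: 42";
        assertEquals(expected, new ReportGenerator().monthlySummary("1/1/2007", "31/1/2007"));
    }
}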

Tips for debugging

Write the test before the fix

When a system/integration test failure is traced to your module, it usually means one thing: you have not done enough developer testing. There is at least one unit/integration test case you should have written but did not. Write that missing test case first and watch it fail, then fix the bug; the test confirms the fix and guards against the bug creeping back later.

Avoid collateral damage during bug chasing/fixing

If you experiment with your code to find a bug (or to find the best way to fix one), use your SCM tool to roll back the experimental modifications that you no longer require.

Use the debugger

Use the debugger of your IDE when trying to figure out where things go wrong. This is far superior to inserting print statements all over the code.

Grading tips

Submit your bank of test cases as part of the deliverable. Be sure that you can run the whole lot of them at a moment's notice. Unless explicitly prohibited by your instructor, add a couple of sample test cases to your report. Be sure to pick those that show your commitment to each type of testing. It is best to showcase the interesting and challenging test cases you managed to conquer, not the obvious and trivial ones. You can use a format similar to the following:

Test Purpose: explain what you intend to test in this test case.
Required Test Inputs: what input must be fed to this test case.
Expected Test Results: specify the results expected when you run this test case.
Any Other Requirements: describe any other requirements for running this test case; for example, to run a test case for a program module in isolation from the rest of the system, you may need to implement stubs/drivers.
Sample Code: to illustrate how you implemented the test case.
Comments: Why do you think this case is noteworthy?

Further resources

Giving feedback

Any suggestions to improve this book? Any tips you would like to add? Any aspect of your project not covered by the book? Anything in the book that you don't agree with? Noticed any errors/omissions? Please use the link below to provide feedback, or send an email to damith[at]comp.nus.edu.sg

Sharing this book helps too!

 

