Full statement coverage may be necessary for good testing, but it isn't sufficient. Two places where statement coverage falls short are branches and loops. In this episode, we'll look at branches, and specifically the difference between statement coverage and branch coverage.
Let's consider a case where branch coverage and statement coverage aren't the same. Suppose we test the following snippet. We can get complete statement coverage with a single test by using a berserk EvilOverLord:
bool DeathRay::ShouldFire(EvilOverLord& o, Target& t) {
  double accumulated_rage = 0.0;
  if (o.IsBerserk())
    accumulated_rage += kEvilOverlordBerserkRage;
  accumulated_rage += o.RageFeltTowards(t);
  return (accumulated_rage > kDeathRayRageThreshold);
}
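A single test along these lines executes every statement. This is a minimal sketch, assuming a Google Test style harness and a hypothetical FakeOverLord/FakeTarget pair (not part of the snippet above) that let us control IsBerserk() and RageFeltTowards():

TEST(DeathRayTest, FiresWhenBerserkOverlordIsEnraged) {
  DeathRay death_ray;
  FakeTarget target;
  FakeOverLord overlord;
  overlord.set_berserk(true);  // Executes the body of the if.
  overlord.set_rage_felt_towards(target, kDeathRayRageThreshold);
  // Assuming kEvilOverlordBerserkRage is positive, berserk rage plus
  // target rage exceeds the threshold, and every statement in
  // ShouldFire() has now run.
  EXPECT_TRUE(death_ray.ShouldFire(overlord, target));
}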
But what if DeathRay should fire at this Target even with a non-berserk Overlord? Well, we need another test for that. What should the test be? Let's rewrite the code a little bit. We would never see code like this in the real world, but it'll help us clarify an important point.
bool DeathRay::ShouldFire(EvilOverLord& o, Target& t) {
  double accumulated_rage = 0.0;
  if (o.IsBerserk()) {
    accumulated_rage += kEvilOverlordBerserkRage;
  } else {
  }
  accumulated_rage += o.RageFeltTowards(t);
  return (accumulated_rage > kDeathRayRageThreshold);
}
Why do we add an else clause if it doesn't actually do anything? If you were to draw a flowchart of both snippets (left as an exercise – and we recommend against using the paper provided), the flowcharts would be identical. The fact that the else isn't there in the first snippet is simply a convenience for us as coders – we generally don't want to write code to do nothing special – but the branch still exists... put another way, every if has an else. Some of them just happen to be invisible.
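So the second test should drive execution down that invisible else edge: a non-berserk overlord whose rage towards the target is, on its own, enough to fire. Again a sketch, using the hypothetical fakes from above:

TEST(DeathRayTest, FiresWhenCalmOverlordIsEnragedEnough) {
  DeathRay death_ray;
  FakeTarget target;
  FakeOverLord overlord;
  overlord.set_berserk(false);  // Takes the (empty) else branch.
  // Rage towards the target alone must exceed the threshold.
  overlord.set_rage_felt_towards(target, kDeathRayRageThreshold + 1.0);
  EXPECT_TRUE(death_ray.ShouldFire(overlord, target));
}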
When you're testing, then, it isn't enough to cover all the statements – you should cover all the edges in the control flow graph – which can get even more complicated with loops and nested ifs. In fact, part of the art of large-scale white-box testing is finding the minimum number of tests that cover the maximum number of paths. So the lesson here is: just because you can't see a branch doesn't mean it isn't there – or that you shouldn't test it.
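To see how quickly branches and paths diverge, here's a hedged illustration (not from the snippet above): two independent ifs produce four paths, yet two well-chosen tests cover all four branch edges.

int Classify(bool fast, bool armored) {
  int score = 0;
  if (fast) score += 1;     // first branch pair (plus invisible else)
  if (armored) score += 2;  // second branch pair (plus invisible else)
  return score;
}

The inputs (true, false) and (false, true) cover all four branch edges, but only two of the four paths; full path coverage would need all four input combinations.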
Remember to download this episode of Testing on the Toilet and post it in your office.
Testing Google Talk is challenging -- we have multiple client implementations (the Google Talk client, the Google Talk Gadget, and Gmail chat) while also managing new features and ongoing development. We rely heavily on automation, yet there's still a need for manual testing before we release the product to the public.

We've found that one of the best ways to unearth interesting bugs in the product is Exploratory Testing (http://www.satisfice.com/articles/et-article.pdf). The trouble with ET is that while some people seem naturally good at exploring a product effectively, it's very easy to miss great swathes of the product when testers follow their intuition rather than focusing on coverage. And speaking of coverage, how do we measure how well a team is finding bugs and covering the functional use cases, boundary conditions, and edge cases we rely on to judge the quality of the product? And if not everyone is proficient at ET, how do we avoid the overhead of having an experienced team member look over people's shoulders to make sure they are executing well?

To address this, we start with the definition of a Test Strategy. This is where we outline the approach we are taking to the testing of the product as a whole. It's not super-detailed -- instead it identifies the overarching areas that need to be tested, whether automation can be used to test each area, and what role manual testing needs to play. This information lets developers and PMs know what we think we need to test for the product, and allows them to add unit tests and the like to cover more ground.

Some basic test case definitions go into the Test Plan. The aim of the test plan (and any test artifacts generated) is not to specify a set of actions to be followed by rote, but to provide a rough guide that encourages creative exploration. The test plan also acts as a virtual test expert, providing a framework under which exploratory testing can be executed effectively by the whole team. The plan decomposes the application into different areas of responsibility, which are doled out to members of the team in sessions of one working day or less. By guiding people's thinking, we cover the basics and the fuzzy cases while avoiding a free-for-all, duplication, and missed areas.

Finally, we get a status report from the testers every day describing the testing that was performed, any bugs raised, and any blocking issues identified. The reports act as a record that the "contract" was executed, give us traceability, and let us steer exploratory testing that has drifted from where we've determined we need to concentrate our efforts. We can use these status reports, along with bug statistics, to gauge the effectiveness of the test sessions.

This approach is fairly simple, but sometimes simple works best. Using this method has allowed us to make the best use of our test engineers and to maximize the effectiveness of each test pass. It has proven itself a fruitful approach, balancing the need for reporting and accountability with the agility of exploratory testing.
public class Client {
  public int process(Params params) {
    // The hard-wired singleton makes this dependency impossible to
    // replace in a test.
    Server server = Server.getInstance();
    Data data = server.retrieveData(params);
    ...
  }
}
public class Client {
  private final Server server;

  // The dependency is injected, so a test can pass in a mock Server.
  public Client(Server server) {
    this.server = server;
  }

  public int process(Params params) {
    Data data = this.server.retrieveData(params);
    ...
  }
}
public void testProcess() {
  Server mockServer = createMock(Server.class);  // e.g. EasyMock
  // (Expectations for mockServer.retrieveData() would be recorded and
  // replayed here.)
  Client c = new Client(mockServer);
  assertEquals(5, c.process(params));
}
bool SomeCollection::GetObjects(vector<Object*>* objects) const {
  // Clearing the output up front means a failure partway through
  // leaves the caller's vector emptied or half-filled.
  objects->clear();
  typedef vector<Object*>::const_iterator Iterator;
  for (Iterator i = collection_.begin(); i != collection_.end(); ++i) {
    if ((*i)->IsFubarred()) return false;
    objects->push_back(*i);
  }
  return true;
}
bool SomeCollection::GetObjects(vector<Object*>* objects) const {
  // Build into a local vector so the caller's vector is only touched
  // on success.
  vector<Object*> known_good_objects;
  typedef vector<Object*>::const_iterator Iterator;
  for (Iterator i = collection_.begin(); i != collection_.end(); ++i) {
    if ((*i)->IsFubarred()) return false;
    known_good_objects.push_back(*i);  // a local object, so '.' not '->'
  }
  objects->swap(known_good_objects);
  return true;
}
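Why bother with the local vector and the swap? Here's a minimal sketch of the difference from the caller's point of view (Object is a hypothetical stand-in for the collection's element type, whose template parameters don't appear in the snippets above):

void RefreshGoodObjects(const SomeCollection& collection,
                        vector<Object*>* cached) {
  // 'cached' may still hold valid results from an earlier call.
  if (!collection.GetObjects(cached)) {
    // With the first version, 'cached' has been cleared and possibly
    // half refilled, so the earlier results are lost.
    // With the swap version, 'cached' is untouched on failure and the
    // earlier results can still be used.
  }
}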