
Constructing Test Cases That Don’t Suck (and How to Avoid Common Mistakes)

By: Alexandra  |  August 21, 2017

Software testing is a crucial component of the software development lifecycle. Without it, you could miss functionality issues or major usability flaws that end up frustrating your end users. But all test cases are not created equal. Writing high-quality, effective test cases is just as important as testing your applications. In fact, poor test cases can result in a testing process that’s little more effective than not testing at all.

So, what do poorly written test cases have in common? How can software testing professionals write quality test cases while avoiding common mistakes? To find out, we searched the web for the best advice for writing effective test cases and reached out to a panel of development pros and software testers, asking them to weigh in on this question:

“What do poorly constructed test cases have in common (and how can developers avoid these mistakes)?”

Meet Our Panel of Software Testing Experts:

Find out what our pros had to say by reading their responses below.


Avaneesh Dubey

@qsometests

Avaneesh Dubey is the CEO of Qsome Tech. His career is dotted with a strong set of innovations in different aspects of business: People, Processes, and Technology. Now, he’s building Qsome Tech as a cutting-edge innovator in the area of Software Quality.

“There are a few things many poorly constructed test cases have in common…”

1. Too specific – they run only a specific test condition.

Test cases need to consider a variety of conditions that the software will be expected to handle. The test case must be able to comprehensively test the software module with almost all possible combinations of main conditions. To be able to comprehensively test all combinations of conditions, the author must find a way to present these conditions such that it is easy for others to review.
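For illustration, one way to keep combinations of conditions both comprehensive and reviewable is a table-driven (parametrized) test. Here’s a minimal sketch in Python with pytest; the `shipping_cost` function, its parameters, and the expected rates are hypothetical stand-ins, not from Qsome:

```python
import pytest

from shipping import shipping_cost  # hypothetical module under test

# Each row is one combination of conditions. A reviewer can audit the
# coverage by scanning the table, without reading any test logic.
@pytest.mark.parametrize(
    "weight_kg, express, international, expected",
    [
        (1.0, False, False, 5.00),   # domestic, standard
        (1.0, True,  False, 12.00),  # domestic, express
        (1.0, False, True,  20.00),  # international, standard
        (1.0, True,  True,  35.00),  # international, express
    ],
)
def test_shipping_cost_combinations(weight_kg, express, international, expected):
    assert shipping_cost(weight_kg, express, international) == expected
```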

2. Cover a small part of functionality – they need to test a larger part of the system.

Test cases often focus on a specific function. Often this function is determined by the internal technical design of the software. Instead, the test cases need to reflect the usage patterns and flows. Every test case should try to cover as much of the flow as reasonably possible – going across technical boundaries of the underlying application.

3. Test as per a specific user role.

We have often seen test cases written for a specific user role. This limits them in their scope and therefore, compromises their effectiveness significantly. Test cases that are most effective reflect the usage patterns. A business application, for example, should be tested with test cases that are designed to test the whole business process – covering all the user roles and all the systems that might be involved in the business process.

4. Written to prove that the most common use-cases are covered well in the application.

This, in my opinion, is one of the most common problems and is a result of what I call a ‘lazy’ approach to test design. The test designer simply transcribes the requirements document into test cases. The test designer should instead look for the ‘corner cases’ or ‘boundary conditions.’ Most developers are easily able to write code for the most common use cases. The problems surface the moment there is a condition outside the most common use case. A well-designed test case will catch these easily.
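To make that concrete, here’s a minimal sketch of boundary-condition tests, assuming a hypothetical `discount_rate` function that gives 10% off orders of $100 or more; the function and the rule are ours, for illustration only:

```python
import pytest

from pricing import discount_rate  # hypothetical function under test

# The requirement says "10% off orders of $100 or more." The corner
# cases live at the boundary, not in the middle of the range.
@pytest.mark.parametrize("total, expected", [
    (0.00,   0.00),  # degenerate order
    (99.99,  0.00),  # just below the boundary
    (100.00, 0.10),  # exactly on the boundary
    (100.01, 0.10),  # just above the boundary
])
def test_discount_rate_boundaries(total, expected):
    assert discount_rate(total) == expected
```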

5. Any test case can become completely useless if not cataloged systematically and kept available for use.

Imagine a library with books not cataloged and not kept systematically on shelves. It would be impossible to use the books if you can’t find them with ease when you need them.

Often hundreds of test cases are written with much effort and then dumped into a shared folder structure. While this can work if you have very few test cases, it collapses the moment the number of test cases increases. Therefore, we need to systematically tag and catalog test cases. Then a test management system should be able to ‘pull out’ test cases when they need to be run. Creating and maintaining multiple versions of test cases is crucial.


Wes Higbee

@g0t4

Wes Higbee is the President of Full City Tech Company.

“Test code that was forced is usually worthless…”

Perhaps there’s a rule that there must be 100% code coverage, or that you can’t add code without at least one test. So people write just enough test code to satisfy that rule. It’s glorified paperwork, TPS reports (Office Space).

People learn by mimicking what they observe. So if you demand testing without helping people understand the value of testing, then they tend to see it as ceremonious and just try to copy existing tests and shoehorn their use case into them.

Testing isn’t an end in itself. Testing, when beneficial, affords confidence, can save time, and can safeguard future changes. So if people that have experienced these benefits can convey them to peers, then other people will seek those benefits and that often leads to productive testing.

People that have experienced the benefits of testing should try to extol those benefits, not the testing itself. For example, if you take some complicated code in a real software system that in the past has been riddled with bugs, and you put some simple tests around it and you no longer have those bugs – then confidence will go up, people won’t be so afraid, and they’ll remember that and bring testing to the table when they next feel uncertain.

You don’t have to mandate testing if people have experienced the benefits firsthand.


Bernard Perlas

@MyCorporation

Bernard Perlas is a Quality Assurance Manager at MyCorporation who trained at DeVry University. He performs tests for existing products as well as updates to products and internal systems. He works on designing UI automation testing and grey-box testing of the site to ensure the information is correct.

“Poorly constructed test cases have vague test steps in common…”

To avoid these mistakes, you need to be specific about what you are testing and be clear about your steps along the way. The more specific you are, the more you will be able to replicate problems, identify areas for improvement, and act on them.


Benjamin Waldher

@wildebeest_dev

Benjamin Waldher is the Lead Dev Ops Engineer at Wildebeest.

“Poor test cases are often very dependent on…”

The internal workings of the code being tested, which can mean that if internal details of a function are changed, the test case will break – even if the function still works properly. If you feel like you’re making this mistake, it might be because the functions you’re testing are too long, or you may need to exercise more separation of concerns.
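Here’s a minimal sketch of the difference, assuming a hypothetical `slugify` function with a private `_strip_punctuation` helper; both names are invented for illustration. The first test is pinned to the internals and breaks on any refactor; the second asserts only on observable behavior:

```python
from unittest.mock import patch

from textutils import slugify  # hypothetical module under test

# Brittle: coupled to an internal helper. Renaming or inlining
# _strip_punctuation breaks this test even if slugify still works.
def test_slugify_brittle():
    with patch("textutils._strip_punctuation", return_value="hello world"):
        assert slugify("hello, world!") == "hello-world"

# Robust: asserts only on the observable input/output behavior.
def test_slugify_behavior():
    assert slugify("hello, world!") == "hello-world"
```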


Matt MacPherson

@itsMattMac

Matt MacPherson is the founder of Edabit.

“There are a few things poorly constructed test cases have in common…”

1. Testing irrelevant stuff

Testing without any real purpose is pointless. I’m a big fan of the single responsibility principle. Each unit test should test one thing and that’s it.
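In practice, that means splitting a grab-bag test into one test per behavior. A quick sketch, assuming a hypothetical `Cart` class:

```python
from cart import Cart  # hypothetical class under test

# One behavior per test: when a test fails, its name alone
# tells you which responsibility broke.
def test_add_item_increases_count():
    cart = Cart()
    cart.add("apple", price=1.00)
    assert cart.item_count == 1

def test_add_item_updates_total():
    cart = Cart()
    cart.add("apple", price=1.00)
    assert cart.total == 1.00
```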

2. Testing right away

I’ll probably catch a lot of flak for this one, but I never unit test until my design decisions have solidified. This is because I know I’m bound to make rather large changes, and those changes are likely to break many of my tests. This means that in practice I’ll either end up writing fewer tests or I won’t make big design changes. Both of these are bad, and waiting for design clarity solves the dilemma.

3. Integration testing vs. unit testing

I see lots of developers doing integration testing when they’re really trying to unit test (and vice versa). When you test the application as a whole for bugs, it’s considered an integration test. The unit test is concerned with one specific unit of code’s behavior, while the integration test is concerned with the behavior of the application as a whole.
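The distinction is easiest to see side by side. In this hedged sketch, the pure `calculate_total` function and the `OrderService` class are hypothetical; the unit test isolates one behavior, while the integration test (tagged with a custom marker, which pytest requires you to register in pytest.ini) wires real components together:

```python
import pytest

from orders import calculate_total  # hypothetical pure function
from orders import OrderService     # hypothetical service wiring real storage

# Unit test: one specific unit of code's behavior, nothing else.
def test_calculate_total_applies_tax():
    assert calculate_total(subtotal=100.0, tax_rate=0.08) == pytest.approx(108.0)

# Integration test: real components working together end to end.
@pytest.mark.integration
def test_order_service_end_to_end(tmp_path):
    service = OrderService(db_path=tmp_path / "orders.db")
    order_id = service.place_order(items=[("apple", 1.00)])
    assert service.get_order(order_id).status == "PLACED"
```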


Manu Singh

@ClrMobile

Manu Singh is a Mobile Architect at Clearbridge Mobile with experience in Android Design, Development, and Testing. At Clearbridge, Manu manages project resources for a team of developers building world-class apps for enterprise clients.

“There are several commonalities that poorly constructed test cases share…”

First, the test cases might be too simple, meaning they only test the main function without exercising extreme cases. To avoid this, a developer should cover as many corner cases as possible. If an interface is being developed and requires an input that isn’t trivial, then testing for bad or empty inputs is important.
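For example (a hedged sketch; the `validate_email` function and its ValueError contract are assumptions for illustration), the bad and empty inputs can be enumerated directly:

```python
import pytest

from forms import validate_email  # hypothetical validator under test

# Empty and malformed inputs deserve their own assertions.
@pytest.mark.parametrize("bad_input", [None, "", "   ", "not-an-email", "a@", "@b.com"])
def test_validate_email_rejects_bad_input(bad_input):
    with pytest.raises(ValueError):
        validate_email(bad_input)
```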

Secondly, poorly constructed test cases don’t reflect how a user will perceive and use the functionality. Making test cases for this is harder to do, since you have to get into the mindset of another user. Consider segmenting your users and identifying their use cases. Then identify the quickest paths to complete a user journey, since a user will try to cut down the number of steps they need to input in order to complete a given journey. It is important to qualify different use cases with designers and project managers to further identify these types of test cases.

And finally, a poorly constructed test case doesn’t test the aspects of an app that can run in parallel with, or immediately after, other features. A lot of the time, test cases only exercise a specific feature of an app without examining the ramifications of multiple features being enabled and interfacing with each other. Considering cross-functional test cases is important, especially if there are performance constraints. Two seemingly separate functions may consume the same resources, thus causing deadlocks.


Rick Rampton

@QASource

Rick Rampton is Head of Client Success at QASource, overseeing marketing, sales and customer management. He has been instrumental in building QASource, with a proven track record for building, managing, and retaining high-performing engineering teams offshore. Headquartered in Pleasanton, Calif., QASource is one of the world’s leading software QA providers.

“There are multiple identifiers by which we can recognize a poorly constructed test case…”

  1. Many times, the summary of a poor test case does not highlight the objective of the test case, or what needs to be achieved.
  2. The prerequisite for executing the test case is not clearly defined, or is missing entirely, which leaves the developer/tester confused about what test data or conditions are required before executing the test case.
  3. If there are missing steps in the test cases, then it leaves gaps for the developer/tester, causing them to make assumptions when executing test cases. This can cause inaccurate test case execution and important scenarios to be missed.
  4. Absence of the right environment for executing the test cases is another indicator of poorly constructed test cases.
  5. If the expected result is not clearly described in the test case, then the tester won’t be sure about the result he or she has to verify with that test case. (A sketch of a test case record that covers all five points follows this list.)
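Taken together, those five points describe the fields a complete test case record needs. Here’s a minimal sketch of such a record in Python; the field names and sample values are ours, purely for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """One record per test case, covering each of the five points above."""
    id: str                                             # unique identifier
    summary: str                                        # objective: what must be achieved
    preconditions: list = field(default_factory=list)   # required test data/conditions
    environment: str = ""                               # where the case must be executed
    steps: list = field(default_factory=list)           # complete, with no gaps to assume
    expected_result: str = ""                           # unambiguous pass criteria

login_case = TestCase(
    id="TC-042",
    summary="A registered user can log in with valid credentials",
    preconditions=["User 'alice' exists", "Account is not locked"],
    environment="Staging, Chrome",
    steps=["Open /login", "Enter alice / correct password", "Click Sign In"],
    expected_result="Dashboard loads and the header shows 'Welcome, alice'",
)
```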


Rachel Honoway

@rachelhonoway

Rachel is an experienced executive specializing in Internet Startups and Growth Mode Companies. Her career has been spent working hands-on with entrepreneurs to build and grow SaaS products and related services.

“The sequence of events is often over-simplified, leading to a poorly constructed test case…”

Developers tend to assume a straight, logical path from start to finish. In reality, though, end users will take an exponential number of paths, many of them illogical. When developers don’t take the time to consider all of the variables that may be affecting the user at the moment they are faced with the feature/page/function, they leave a lot of room for false positive tests.

Ideally, developers should observe several users as they interact with the platform, and then spend some time asking internal customer advocates (customer support, product managers, sales agents) to better understand the user. Sales agents can help define who the user is and describe the need that the application is satisfying. Customer support reps can identify common sticking points for the user base, like lost passwords or common attention detractors like on-page advertisements. Product managers can provide insight on how the user interacts with the application and how that interaction fits within the user’s greater workflow or daily routine, describing distractions occurring outside of the application itself (managing multiple screens at once, mobile users receiving notifications while driving, etc).

Understanding what may lead the end user away from the logical, straight path will help developers test for those variables and ensure a successful experience despite distractions, illogical paths and loss of focus.


Anthony Breen

@BreenAnthony77

Anthony Breen is the Co-Founder and CEO of The Feasty Eats app. Feasty Eats is an integrated SaaS platform that helps restaurants drive traffic during their low-volume times.

“When considering test cases…”

It is important for developers to go back to some of the most simple, yet most important, aspects of the inquiry process. Just as in a science experiment, there are two factors that must be at the forefront of any developer’s mind in order to facilitate an effective and meaningful test: clarity and control. More specifically, clearly outlined hypotheses and careful attention to key performance indicators need to be communicated prior to the beginning of the test. Doing so creates a strong framework for further inquiry.

In addition, it is essential that all variables not only be identified but also carefully controlled. Bias must be eliminated from test populations and potential confounding variables need to be identified so that developers can acquire insightful, accurate data. A well-defined and clearly articulated purpose, when paired with a meticulously controlled test environment, will not only yield valuable results but more importantly prompt new questions and hypotheses that lead to expanded inquiry.


Devlin Fenton

@devfen

Devlin Fenton is the CEO of Go99. Devlin leads a small, specialist team of engineers and programmers who are dedicated to solving problems by Industry, for Industry. This team has most recently spun off a digital freight matching platform set to disrupt North America’s $700B trucking industry. Devlin’s philosophy: build an expert team and let them thrive.

“The worst test cases are those that do not exist…”

By far, the biggest problem is that there are not enough of them and the system cannot be reliably retested in an automated fashion. They lack consideration for priority. Once developers do get into writing test cases, they often develop a lot of the easy ones, which test trivial, static, and low-risk code. Effort should be given to targeting complex and risky code, and for that, sometimes the test case is also more complex and harder to develop.

Another common belief is that if it’s not a unit test, it shouldn’t be done. Integration tests, data tests, performance tests, stress tests, and so on should all be part of the test suite.

Lack of focus in terms of the objective is another issue. For example, is this a unit test, integration test, performance test, or another type? Mixing these priorities makes the test cases more brittle and harder to fix. They need to have focus and be annotated accordingly so that the test suite can be executed in a targeted way.

Additionally, some test cases lack thoroughness by only covering a happy path and not enough of the alternate and exception flows. Effectively, the test case follows the developer’s make-it-work standard, instead of the testing make-it-fail standard.
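A make-it-fail test targets the exception flow explicitly. A brief sketch, where the `withdraw` function and `InsufficientFunds` exception are hypothetical:

```python
import pytest

from bank import withdraw, InsufficientFunds  # hypothetical API under test

# Happy path: the make-it-work test most suites already have.
def test_withdraw_happy_path():
    assert withdraw(balance=100, amount=40) == 60

# Exception flow: the make-it-fail test that is usually missing.
def test_withdraw_overdraft_raises():
    with pytest.raises(InsufficientFunds):
        withdraw(balance=100, amount=140)
```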

Yet another issue is test cases that are not maintained, which would make them fail if they were run, or they are commented out altogether. In haste, developers will often change the code and bypass fixing the test cases.

Poorly constructed test cases may also lack clarity as to what is being tested including the object, method, or the scenario. Test cases should follow a convention (naming, comments, etc.) that makes it easy to identify the scope of the test.

Good test cases avoid a web of dependencies. Dependencies among test cases should be avoided; otherwise, a single failure can show many tests failing, making it more difficult to identify the cause. If there is a common logic for setting up preconditions, or evaluating results, such logic should be abstracted and shared instead of stacking test cases.
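In pytest, for example, that shared precondition logic naturally becomes a fixture: every test receives a fresh, independent setup instead of leaning on state left behind by an earlier test. A sketch, with the `Database` class as a hypothetical stand-in:

```python
import pytest

from app.db import Database  # hypothetical class under test

# Shared precondition logic lives in one fixture, not in a chain of
# tests that each depend on the previous test's side effects.
@pytest.fixture
def seeded_db(tmp_path):
    db = Database(tmp_path / "test.db")
    db.insert_user("alice")
    yield db
    db.close()

def test_lookup_finds_seeded_user(seeded_db):
    assert seeded_db.find_user("alice") is not None

def test_delete_removes_user(seeded_db):
    # Gets its own fresh copy of the setup, so test order never matters.
    seeded_db.delete_user("alice")
    assert seeded_db.find_user("alice") is None
```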


Rohit Keserwani

@EquifaxInsights

Rohit Keserwani is a BI Consultant and Sr. Business Analyst for Equifax.

“I have observed the following features commonly present in bad test cases…”

1. Impact analysis not documented. When a test analyst doesn’t document the impact, assume he doesn’t know the impact. The result? POOF…!

2. The test pre-condition is either not defined or loosely articulated.

3. Overlooking the user background and assuming things. If you don’t understand the level of knowledge of the person who is going to test, don’t assume anything. Just document everything to the tiniest detail. If not, you might end up assuming some knowledge they lack, which can cause the user to test differently than expected.

4. Pass criteria and tolerance not clearly defined. If for any reason the pass criteria are not articulated, the person testing is left to decide on a whim whether a particular test passed or failed. If you don’t have a clearly defined pass scenario, the tester won’t have a benchmark to compare against. This creates ambiguity and eventually puts the entire testing effort at stake.


Pete Van Horn

@PraxentSoftware

Pete Van Horn is a Business Technology Analyst for Praxent.

“I have a lot of experience with poorly constructed test cases, and I’m also probably guilty of writing them from time to time…”

A lot of times, a test case is not easily executable. If I’m a developer, I should be able to have enough information and be positioned in a way where I can execute my test case to determine an outcome. I think one thing about test cases is that they don’t necessarily have to result in the intended outcome, but they always need to be executable from end to end. Even if you get a negative outcome, that’s better than getting stuck. A poorly constructed test case doesn’t enable whoever is doing the testing to execute it completely.

In terms of developers’ mistakes, maybe I’m thinking too much into it, but typically developers don’t write the test case. They might write unit tests, but a test case is usually written by a business analyst or a solution architect. They are in charge of creating what the expected outcome should be. I would say that having well-written test cases really finds its genesis in having really well-written requirements. Having good requirements enables you to create a good test case. And a good test case is able to be executed from end to end, without interruption.


Hans Buwalda

@logigear

Hans Buwalda is the Chief Technology Officer for LogiGear. He leads LogiGear’s research and development of test automation solutions, and the delivery of advanced test automation consulting and engineering services.

“Test design can play a significant role in the success or failure of automation…”

A common spoiler for automation is a lack of focus in test cases. Tests should have a clear scope that differentiates them from other tests. All steps and checks in the tests should then fit that scope. The scope of a test case should be very clear; otherwise, there is no knowing how detailed the test steps should be and what checks should be performed.

A common practice in many projects is to have long sequences of detailed steps, each with one or more checks to verify their expected outcome. This makes tests hard to maintain. Navigation details and checks that do not contribute to the scope of the tests should be encapsulated in reusable high-level keywords or script functions. This will make tests more readable and easier to keep up to date. Testers, product owners, and developers can work together to obtain an optimal set of tests that can serve for a long time with minimal effort.
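With a browser driver such as Selenium, that encapsulation can be as simple as one well-named helper per business action, so the test itself reads at the level of its scope. A sketch; the URL and element IDs are invented for illustration:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Reusable "keyword": navigation details live here, not in every test.
def log_in(driver, username, password):
    driver.get("https://example.test/login")  # assumed URL
    driver.find_element(By.ID, "user").send_keys(username)
    driver.find_element(By.ID, "pass").send_keys(password)
    driver.find_element(By.ID, "submit").click()

def test_dashboard_greets_user():
    driver = webdriver.Chrome()
    try:
        log_in(driver, "alice", "s3cret")  # the test stays within its scope
        assert "Welcome" in driver.find_element(By.ID, "greeting").text
    finally:
        driver.quit()
```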


Ulf Eriksson

@ReQtester

Ulf Eriksson is one of the founders of ReQtest, an online bug tracking software developed in Sweden. Ulf’s goal is to make life easier for everyone involved in testing and requirements management. As a product owner, he strives to make ReQtest easy and logical for anyone to use. The author of a number of white papers and articles, mostly on the world of software testing, Ulf is also slaving over a book, which will be a compendium of his experiences in the industry.

NOTE: The following information is excerpted from How to Write Effective Test Cases via ReQtest. 

“When testers report defects based on the test case, they should indicate which test step failed, in order to make troubleshooting easier…”

When you write a test case, you don’t need to specify the expected result for each test step if the result is obvious. For example, if the browser doesn’t open, the tester won’t be able to proceed to the next step.

If your test case has too many test steps, you ought to think about breaking up the test case into a set of smaller ones.

If the test case contains a long list of test steps and an error occurs, the developer will have to backtrack and repeat all the test steps, which he or she might fail to do, whether by accident or out of laziness.

Having too many test steps can be a disadvantage for the tester, too. The tester may have to repeat each one of the test steps to ensure that the bug is fixed.


Sanoj Swaminathan

@rapidvalue

Sanoj Swaminathan is a Technical Lead – Quality Assurance at RapidValue Solutions.

NOTE: The following information is excerpted from Test Case Design and Testing Techniques: Factors to Consider via RapidValue Solutions.

“The process of test designing is of high priority. A poorly designed test will lead to…”

Improper testing of an application, and thereby wrong, even harmful, test results. This, in turn, will lead to a failure to identify defects. As a consequence, an application containing errors may be released.

There are various types of design techniques, and the challenge lies in selecting the right set of relevant test design techniques for the particular application. The different types of testing techniques have their own unique benefits. The use of any particular technique should be considered only after much contemplation, with maximum emphasis on the type of application.


TestLodge

@TestLodge

TestLodge is an online test case management tool, allowing you to manage your test plans, requirements, test cases and test runs with ease.

NOTE: The following information is excerpted from What is Usability Testing? (With Example) via TestLodge.

“The key to major success is right here…”

Before you start the testing, clearly define the goals. Why are you conducting these tests in the first place? What motivated your organization or team to do this and what are you looking to achieve? What will define a successful test for you? Also, think about the hypothesis you have. Where do you believe you’ll encounter the most issues and why? Understanding and clearly stating the foundations are absolutely essential.

You should also lay down whichever specific methodology you intend to follow, both for making the running of tests easier and for facilitating replication later down the road, in case that becomes necessary for any reason.


Amandeep Singh

@quickswtesting

Amandeep Singh writes for Quick Software Testing, a blog dedicated to topics around Software Testing and Quality Assurance. Software test engineers can discuss automation and manual software testing tools and tutorials.

NOTE: The following information is excerpted from Top 13 Tips for Writing Effective Test Cases for Any Application via Quick Software Testing.

“While writing test cases, you should communicate all assumptions that apply to a test, along with any preconditions that must be met before the test can be executed…”

Below are the kinds of details you should cover:

  • Any user data dependency (e.g., the user should be logged in, which page should the user start the journey on, etc.)
  • Dependencies in the test environment
  • Any special setup to be done before Test Execution
  • Dependencies on any other test cases – does the Test Case need to be run before/after some other Test Case? (One way to make these dependencies and preconditions explicit is sketched below.)
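One lightweight way to make such preconditions executable rather than implicit is to encode them in the test’s setup and skip cleanly when an environment dependency is missing. A hedged sketch; the `STAGING_URL` variable and the `staging_client` helper module are assumptions:

```python
import os

import pytest

from staging_client import create_session  # hypothetical helper module

# Test environment dependency made explicit: skip instead of failing mysteriously.
requires_staging = pytest.mark.skipif(
    "STAGING_URL" not in os.environ,
    reason="environment dependency: STAGING_URL must be set",
)

# User data dependency made explicit: the user must already be logged in.
@pytest.fixture
def logged_in_session():
    session = create_session(os.environ["STAGING_URL"])
    session.log_in("alice", "s3cret")
    return session

@requires_staging
def test_profile_page_shows_username(logged_in_session):
    assert "alice" in logged_in_session.get("/profile").text
```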


Kyle McMeekin

@QASymphony

Kyle McMeekin contributes to the QA Symphony blog. QA Symphony helps companies create better software by being the only provider of truly enterprise-level agile testing tools.

NOTE: The following information is excerpted from 5 Manual Test Case Writing Hacks via QASymphony.

“To be considered a ‘great software tester,’ you have to have an eye for detail…”

But you can’t be truly great unless you can effectively write test cases. Writing test cases is a task that requires both talent and experience.

The purpose of writing test cases is to define the “how” and “what.” For some testers, this is considered boring work, but if done well, test cases will become highly valuable, improve the productivity of the entire team, and help your company create higher quality software.

Keep it simple: No one is going to accept a test case that is overly complex and can’t easily be understood. Test cases have to be written in a simple language using the company’s template.

Make it reusable: When creating new test cases, you need to remember that the test cases will be reused so you need to get it right. The same test case might be reused in another scenario or a test step could be reused in another test case.


Software Testing Class

Software Testing Class is a complete website for software testing folks.

NOTE: The following information is excerpted from How to Write Good Test Cases via Software Testing Class.

“Test cases should be written in such a way that they are…”

Easy to maintain. Consider a scenario where the requirements change after the test cases have been written; the tester should be able to maintain the test suite effortlessly.

Each test case should have a unique identification number which helps to link the test cases with defects and requirements.
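In an automated suite, that unique ID can travel with the test itself, so a failing run, the defect it spawns, and the original requirement all share one identifier. A tiny sketch using custom pytest markers (which must be registered in pytest.ini); the IDs are invented:

```python
import pytest

# "TC-101" and "REQ-45" link this test to a test case record and a
# requirement, so defects filed against it can cite the same IDs.
@pytest.mark.testcase("TC-101")
@pytest.mark.requirement("REQ-45")
def test_login_with_valid_credentials():
    assert True  # real login assertions would go here
```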


Akanksha Goyal

@TOTHENEW

Akanksha Goyal contributes to TO THE NEW, a fast-growing and innovative digital technology company that provides end-to-end product development services.

NOTE: The following information is excerpted from Top 9 Tips to Write Effective Test Cases via TO THE NEW.

“Domain knowledge is the core of any software application…”

Business rules may vary as per the domain and can greatly impact business functions. A lack of domain knowledge in testing may result in business loss. Thus, to avoid conflicts with domain standards, a product tester must acquire this knowledge before writing test cases.

Do not assume anything; stick to the Specification Documents. Assuming features and functionality of software applications can create a gap between the Client’s specifications and the product under development, which can also impact the business.

# # #

At Stackify, we understand how crucial software testing is in the development lifecycle, so it’s a topic that we discuss regularly. Did you know that you can integrate APM into your testing strategy? Find out how here.

For more expert advice on writing quality test cases (and why it’s like the scientific method), check out this post, or visit this article for a list of 101 expert software testing tips and advice for getting the most out of your testing process. For a more in-depth look at the types of performance testing and software testing, performance testing steps, and best practices, visit this guide.

Featured Image Copyright: wrightstudio / 123RF Stock Photo
