Getting Started With API Test Planning and Design

This is part 2 of our API testing series. In part 1, we looked at why you want to test your APIs earlier and how to be more effective with your tests.

Take a look at the user interface for your online banking software. You'll see some obvious workflows, such as Log In, View Checking and Savings Accounts, Transfer Money Between Accounts, and Make a Payment. These workflows direct customers through the business process, guiding them along each step they need to take to reach their goals. The workflows also give developers good starting points for testing.

If only API testing were that clear.

At best, testers get some documentation to dig through listing endpoints and parameters and access to a developer who can fill in the gaps. At worst, the tester is handed a code base and set free. But there is more to testing an API than passing in a piece of data and waiting for an HTTP 200. How do teams figure out where to start, and how do they know when they're done?

Test Planning

Test planning in modern Agile development environments is usually simplified to the point of "automate the statements in the user story, and then ship." Developing even a lightweight strategy can guide companies to better products, so a team should be sure to ask a few primary questions when building that strategy.

What is reasonable to automate?

Having automation that you can run every time new code is committed is generally helpful, although deciding how much is enough is important. Teams that build too many automated tests at the wrong layer end up with a suite that requires more and more maintenance. Normally, these tests attack the software in the simplest way possible: the happy path. A team might build tests that cover statements in the user story (for example, users must be able to set their birth date). Then the team might add several more tests of the same functionality that probably won't expose any new information about the product.
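For instance, a happy-path check covering the "users must be able to set their birth date" statement might look like the following Python sketch; the /users/{id} endpoint, the payload shape, and the use of the requests library are assumptions for illustration, not a prescription.

import requests

BASE_URL = "https://api.example.com"  # hypothetical service under test

def test_user_can_set_birth_date():
    # Happy-path check lifted straight from the user story:
    # "users must be able to set their birth date."
    response = requests.patch(
        f"{BASE_URL}/users/42",
        json={"birthDate": "1981-03-02"},
        timeout=5,
    )
    assert response.status_code == 200
    assert response.json()["birthDate"] == "1981-03-02"

A dozen near-copies of this test against the same field add maintenance cost without adding much new information.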

Testing at the API level usually involves exploring different combinations, odd input, and odd data setup. Good testers will have more ideas than they have time, and turning those ideas into automated tests that can run every time takes much more time than one-time exploration. The challenge of testing, then, is twofold: what should you test, and which small subset of those tests should you institutionalize as automation that you can rerun on demand? In the old days, we called that the "regression test set" versus the "functional test set."
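One lightweight way to keep the rerunnable subset separate from one-time exploratory ideas is to tag tests, as in this hypothetical pytest sketch (the regression marker name is arbitrary and would be registered in pytest.ini):

import pytest

@pytest.mark.regression
def test_transfer_updates_both_balances():
    # Institutionalized: part of the subset run on every commit
    # via "pytest -m regression".
    ...

def test_transfer_with_unicode_memo_field():
    # A good exploratory idea worth keeping, but left out of the
    # per-commit regression subset; run it in a fuller nightly pass.
    ...

Everything can still run nightly or on demand, while only the tagged subset gates each commit.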

What are the project risks?

Every software release is a grouping of code, configuration, and platform changes. Any time software changes, new risk is introduced. You'll usually discover this when a new change breaks something that used to work. Automation is good as a change detection system. If someone makes a change that breaks how many decimal places can be passed into a product discount field in an API endpoint, chances are an automated test will discover that problem. Automation is not nearly as good at discovering surprises.
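A sketch of that kind of change detector, assuming a hypothetical /products/{id}/discount endpoint and assumed response codes, simply pins down today's accepted precision so a later change trips the alarm:

import requests

BASE_URL = "https://api.example.com"  # hypothetical service under test

def test_discount_accepts_two_decimal_places():
    # Tripwire: if a refactor silently changes the accepted precision,
    # one of these assertions starts failing.
    ok = requests.put(f"{BASE_URL}/products/17/discount",
                      json={"discount": 12.25}, timeout=5)
    assert ok.status_code == 200

def test_discount_rejects_five_decimal_places():
    too_precise = requests.put(f"{BASE_URL}/products/17/discount",
                               json={"discount": 12.25001}, timeout=5)
    assert too_precise.status_code == 400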

Thinking about product risk can help guide the development team to areas where discovery through automation isn't so easy.

What skill set is available?

Each team member has strengths and weaknesses. Developers are often good at building the product and using automation to help them design better code. Testers excel at answering questions about how a product might fail a user's expectations. A team that is heavy on the development side might have plenty to share about building automation and tooling. A team with testing experts can coach developers on testing practices, hopefully leading to better automated tests. When teams have the right mix of development and testing skills, those skills rub off in both directions and the product is better for it.

These questions help API teams understand how much they want to use human testing, how much they want to lean on tools, and where to focus their efforts right now.

Test Design

Testing and automation performed by developers are often done to answer the question, "Is this feature done?" Each test written is a tool to check the functionality of a few lines of code. Software testers use different test design techniques to discover different types of defects. Boundary testing looks at behavior with extreme values, while performance testing can find discrepancies between response time and the Terms of Service (or whatever your service level agreement, or SLA, promises at the various tiers of financial commitment).

Here are a handful of techniques to help move past simple, confirmatory testing.

Domain Testing

This is arguably the most widely used test technique around. Think about a website advertising a beer brand. Before anyone can enter the main site, they're presented with a verification page. That page has one function: the user enters a birth date, and behind the scenes the software computes whether or not that person is at least 21 years old. There are two obvious ways to slice up testing data for this date field: values that calculate to an age under 21, and values that calculate to 21 years or older. One approach to testing these categories would be to choose one value that calculates to fewer than 21 years, one that is exactly 21 years, and another that is greater than 21 years. That gives you a value in each of the two equivalence classes, plus one on the boundary between them. There are often hidden classes within classes as well. Say someone was born on February 29 of a leap year: can they get into the site on March 1 of the non-leap year in which they turn 21?

Testing probably shouldn't stop there, though. There are some other less obvious ways to categorize values for that variable — good and bad date formats, reasonable values, values that might calculate to a negative age or a very old age, and values that aren't dates at all.
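One way to make those categories concrete is to write down a representative value for each class. This quick Python sketch lists one per class; the expected verdicts are assumptions about how such a verification service might respond:

# One representative value for each equivalence class, obvious and not so obvious.
AGE_VERIFICATION_CASES = {
    "1981-03-02": "oldEnough",   # clearly over 21
    "2010-01-01": "tooYoung",    # clearly under 21
    "2150-01-01": "rejected",    # future date, would yield a negative age
    "1850-01-01": "rejected",    # implausibly old
    "31/31/1999": "rejected",    # bad date format
    "not-a-date": "rejected",    # not a date at all
}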

If this were an API, there would likely be a REST call that encodes the date of birth: something like an HTTP POST to a verification endpoint with the JSON {"birthDate": "03/02/1981"}. Domain testing might result in tests run against the API to see whether it returns tooYoung or oldEnough. If you wanted to, you could write a loop that runs through tens of thousands of days, splitting expected results only at the boundaries of the equivalence classes. That covers the happy path, but not the less obvious categories mentioned above.
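A sketch of that loop, assuming a hypothetical verification endpoint, an assumed "verdict" field in the response, and a narrower two-year window of birth dates for brevity, might look like this in Python:

from datetime import date, timedelta
import requests

VERIFY_URL = "https://api.example.com/verify-age"  # hypothetical endpoint
TODAY = date.today()

def add_years(d, years):
    # The leap-day wrinkle: treat a February 29 birthday as rolling over to
    # March 1 in non-leap years (one policy; the API may well choose another).
    try:
        return d.replace(year=d.year + years)
    except ValueError:
        return d.replace(year=d.year + years, month=3, day=1)

def expected_verdict(birth_date):
    # The oracle: anyone whose 21st birthday falls on or before today is old enough.
    return "oldEnough" if add_years(birth_date, 21) <= TODAY else "tooYoung"

# Walk every birth date in a window straddling the 21-year boundary.
start = TODAY - timedelta(days=22 * 365)
for offset in range(2 * 365):
    birth_date = start + timedelta(days=offset)
    response = requests.post(VERIFY_URL,
                             json={"birthDate": birth_date.isoformat()},
                             timeout=5)
    assert response.json()["verdict"] == expected_verdict(birth_date)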

Domain testing gives testers a way to categorize and understand the values they are testing with, along with a powerful tool for talking about coverage.


Scenario Testing

Software is designed to be used by someone. Often, a paying customer has expectations of how that product will help them accomplish work. While domain testing is focused on input and output, scenarios focus on a user's flow through the system. If a development team were working on a banking app, then a reasonable flow might be to log in, check the balance on a checking account, check the balance on a savings account, transfer some money from the checking account to the savings account, check that the balances updated, and then log out.
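Translated into API calls, that flow might look like the Python sketch below. The endpoints, payloads, and bearer-token scheme are assumptions made for illustration only.

import requests

BASE_URL = "https://api.examplebank.com"  # hypothetical banking API

def test_transfer_scenario():
    # Log in and capture a session token.
    login = requests.post(f"{BASE_URL}/login",
                          json={"username": "pat", "password": "s3cret"},
                          timeout=5)
    assert login.status_code == 200
    headers = {"Authorization": f"Bearer {login.json()['token']}"}

    # Check both balances before the transfer.
    checking = requests.get(f"{BASE_URL}/accounts/checking", headers=headers, timeout=5).json()
    savings = requests.get(f"{BASE_URL}/accounts/savings", headers=headers, timeout=5).json()

    # Move $100 from checking to savings.
    transfer = requests.post(f"{BASE_URL}/transfers",
                             json={"from": "checking", "to": "savings", "amount": 100},
                             headers=headers, timeout=5)
    assert transfer.status_code in (200, 201)

    # Confirm both balances moved by the transferred amount.
    # (A real test would compare decimal or cent values, not floats.)
    assert requests.get(f"{BASE_URL}/accounts/checking", headers=headers,
                        timeout=5).json()["balance"] == checking["balance"] - 100
    assert requests.get(f"{BASE_URL}/accounts/savings", headers=headers,
                        timeout=5).json()["balance"] == savings["balance"] + 100

    # Log out to complete the scenario.
    assert requests.post(f"{BASE_URL}/logout", headers=headers, timeout=5).status_code == 200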

This is a simplistic version of a scenario. In reality, customers rarely perform a workflow the same way twice in a row. And when they do perform these workflows, they take a meandering path through the app, starting and stopping a workflow several times before completing a task. If the API is truly decoupled from the user interface (as it should be), then you can test the user interface once, record the API calls, store them, replay just those API calls in order, and vary the values and expected results.

In many cases, the API is available first. Here at ProgrammableWeb, we're strong advocates of API-first design — a principle whereby APIs are designed long before the apps that call them are built. Quite often, particularly where a public API is involved, the API designer has no idea what developers will try to accomplish with an API. But if you can talk to the programmers about how they envision the API enabling different business flows, you'll be much better off. This will catch errors ("Oh, that scenario just isn't handled by the APIs.") while also enabling early testing.


Performance Testing

Every time an API is hit with a request, the server creates a response. Depending on how well the API was designed, that response contains an HTTP status code that tells the developer whether the call succeeded or how it failed, and in either case it may also contain information about the data that was changed. The response time is usually reported alongside the response, or else recorded in the server logs.

Most APIs, especially those that other developers will consume to build apps, have Terms of Service or SLAs specifying some baseline expectations about performance. There is much more to performance testing than response time. But if the API fails to meet its Terms of Service under no load, there is no way it will succeed in the wild. Another easy option is to simulate a reasonable number of concurrent users, then run API tests while the system is under load, looking not for failures but at the timing data. Such load tests could reveal weaknesses in your API's ability to scale past certain thresholds, perhaps testing your assumptions about the stack of technology that's driving your API in the first place. Fortunately, most such problems are easily solved with today's cloud, container (e.g., Docker), and even serverless function (e.g., AWS Lambda) technologies.
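A minimal sketch of that approach, assuming a hypothetical endpoint and an assumed 500 ms promise in the SLA, fires a batch of concurrent requests and inspects the timing data rather than just the status codes:

from concurrent.futures import ThreadPoolExecutor
import requests

ENDPOINT = "https://api.example.com/accounts/checking"  # hypothetical endpoint
SLA_SECONDS = 0.5        # assumed promise from the Terms of Service
CONCURRENT_USERS = 50

def timed_call(_):
    response = requests.get(ENDPOINT, timeout=10)
    # requests records how long each round trip took.
    return response.status_code, response.elapsed.total_seconds()

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    results = list(pool.map(timed_call, range(CONCURRENT_USERS)))

# Look at the timing data, not just pass/fail.
slow = [seconds for status, seconds in results if seconds > SLA_SECONDS]
print(f"{len(slow)} of {len(results)} calls exceeded the {SLA_SECONDS}s SLA")
assert not slow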

Description Tools

Just a few years ago, if developers wanted API documentation, it had to be made the old-fashioned way. Someone either typed info into a wiki and hoped it was correct, or a tester wanting to learn about API functionality had to go spelunking in the source code repository. Tools like the OpenAPI Specification (OAS) and the RESTful API Modeling Language (RAML) have made documentation both better and easier to generate. Popular API testing tools can consume these descriptions to create baseline test suites. A developer could generate an API spec using RAML, feed that into SmartBear's SoapUI (covered later in this series) or some other tool, and end up with a test suite that passes a value to each API endpoint.
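As a rough illustration of what such generated checks amount to (a hand-rolled Python sketch, not what SoapUI actually produces), you can walk an OpenAPI document and send a smoke request to every documented GET path:

import json
import requests

BASE_URL = "https://api.example.com"      # hypothetical service under test
with open("openapi.json") as spec_file:   # hypothetical OAS document
    spec = json.load(spec_file)

# Baseline suite: one smoke call per documented GET endpoint.
for path, operations in spec.get("paths", {}).items():
    if "get" not in operations:
        continue
    # Paths with {parameters} would need sample values filled in first.
    response = requests.get(f"{BASE_URL}{path}", timeout=5)
    print(path, response.status_code)
    assert response.status_code < 500, f"{path} returned {response.status_code}"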

Generally, these tests are context free. They don't take into consideration project risks or what your customers value. But they can be a good way to break the ice on API testing, and they can guide the development team towards what they should test next.

Final Thoughts

Test planning and design are the basics of API testing. Planning gives the development team a North Star, showing them where to focus their testing efforts. Good test design helps teams discover important information about what they're building, how it might fail, and what really matters to the customer. Having one without the other means the API will suffer.

The next article in this series will talk about some of the latest options in tooling for API testing.

Justin Rohrman

Justin has been a professional software tester in various capacities since 2005. In his current role, Justin is a consulting software tester and writer working with Excelon Development. Outside of work, he is currently serving on the Association for Software Testing Board of Directors as President, helping to facilitate and develop various projects such as conferences, the WHOSE skill book, and BBST software testing courses.
