Why You Should Test Your APIs and Test Them Often

This is the first part of our API testing series. In this series we'll look at why you should test your APIs and how to do it effectively, and we'll present a number of tools to help you do it.

In a typical software project, the programming ends with the user interface. Testers pick up the product, often with only a few days left before the intended ship date, and try to find ways to make it fail. They discover bugs in the product and inconsistencies in the user interface, and they come up with questions about how the product is really meant to be used. Some of those discoveries end up delaying the release.

The machinery that makes software useful, however, is underneath the user interface: database connections, HTTP messages, security and authentication, and the API. A good API is the secret to testing earlier, which means finding problems earlier in the software process. Perhaps early enough to prevent those delays, certainly early enough to save time. If you want to test earlier and be more effective, this is the place to start.

Test Early, Test Often

Imagine this scenario. A team of programmers is building a small advertising website. Its front end is mainly product descriptions, store locators, support pages, and signups for coupons and discount notifications. The programmers are split into two groups: a front-end team that builds the user interface with JavaScript tooling (frameworks, SDKs, etc.) and a back-end team that builds API endpoints with a combination of Python and an API management solution. The back-end developers create a new API endpoint and hand it off to the front-end developers, who build a UI around it. The testing group comes in once the API and the user interface are wired together and tests the product as a whole. The bugs they discover might be in the API, in the UI, or in more general aspects of the implementation such as usability. Each bug has to be isolated to the component it lives in, whether that is the API code, the JavaScript, or the database, and then passed off to the person who wrote that code.

A slightly different approach could help that same team deliver software faster and with fewer surprises.

Rather than using a series of handoffs and loops, one of the back-end developers and a tester sit together and pair on a new endpoint. In the initial stages, before the developer has any executable code at all, the tester can ask questions about implementation and begin stubbing out automated tests in a library like frisby.js or airborne. While the API code is being written, the tester can ask questions and act like an invited backseat driver. What date format should we accept here? What precision should we return there? Are special characters and Unicode allowed in that field?
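Those questions can be captured as executable checks even before the endpoint exists. As a rough sketch (the endpoint's field names, formats, and sample payload below are invented for illustration, not a real contract), a tester might stub out assertions in Python against an example response the team has agreed on:

```python
import json
import re
from datetime import datetime

# Hypothetical example response for a coupon-signup endpoint;
# every field name and format here is an assumption for illustration.
sample_response = json.loads("""
{
    "signup_id": 1042,
    "email": "m\u00efa@example.com",
    "created_at": "2016-03-01T14:30:00Z",
    "discount_rate": 0.15
}
""")

def check_created_at(payload):
    """What date format should we accept? Assume ISO 8601 in UTC."""
    datetime.strptime(payload["created_at"], "%Y-%m-%dT%H:%M:%SZ")

def check_discount_precision(payload):
    """What precision should we return? Assume at most two decimal places."""
    assert re.fullmatch(r"\d+\.\d{1,2}", str(payload["discount_rate"]))

def check_unicode_fields(payload):
    """Are special characters and Unicode allowed? Assume they must survive."""
    assert "\u00ef" in payload["email"]

for check in (check_created_at, check_discount_precision, check_unicode_fields):
    check(sample_response)
```

Once the real endpoint exists, the same checks can be pointed at live responses, or translated into a library like frisby.js or airborne.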

By the time the programmer is ready to check his or her code into the team's source code repository, it has already passed a set of automated tests and been explored and debugged. Those automated tests then act as a monitoring system: each time a change is made to the API, a continuous integration system runs the automation to discover any new problems introduced into the code.


Testing a UI is called black box testing for a reason. Everything under the UI is a mystery and can seem inaccessible.

What does a tester do if they submit a web form with some data and nothing happens? It might be a screen freeze, or maybe the insert worked and the UI design is simply nonstandard. Perhaps users don't receive a notification that the submit happened successfully, aren't navigated to another page in the app, and don't get an email. Step one might be for the tester to open the JavaScript console to check for any errors thrown when the submit button was clicked. After that, the tester might check the application log for exceptions. Then the tester checks the database to see whether the row was inserted, and discovers it wasn't. The JavaScript console and application logs are helpful tools for a tester to collect data from, but they might not help isolate the bug. That tester will have to wait for the developer to figure out whether the error was caused by one piece of data in the form, a button wired up to the API incorrectly, or something else.

Submitting that same form's data through the API might yield different results. A tester can populate a JSON blob with the same test data they would use in the UI and then make the API call. In return, they get a response body, an HTTP status code indicating what happened during the call, and maybe an error message. Rather than hunting through different log files and error consoles, all the information they need to isolate a problem is right there in the response. When the time comes to test the user interface, the tester can spy on the request to make sure it matches the request format in the automated tests.
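To make that concrete, here is a minimal sketch in Python. The endpoint path, payload fields, validation rule, and error message are all invented for illustration, and a throwaway local HTTP server stands in for the real back end so the example is self-contained:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.error import HTTPError
from urllib.request import Request, urlopen

class CouponSignupHandler(BaseHTTPRequestHandler):
    """Stand-in for a hypothetical /signups endpoint; the rule is invented."""
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        if "@" not in body.get("email", ""):
            self.reply(400, {"error": "email is not valid"})
        else:
            self.reply(201, {"signup_id": 1})

    def reply(self, status, payload):
        data = json.dumps(payload).encode()
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

    def log_message(self, *args):  # keep the example's output quiet
        pass

# Start the stand-in server on an ephemeral port.
server = HTTPServer(("localhost", 0), CouponSignupHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://localhost:{server.server_port}/signups"

def post_signup(payload):
    """POST the same data a form submit would send; return (status, body)."""
    req = Request(url, data=json.dumps(payload).encode(),
                  headers={"Content-Type": "application/json"})
    try:
        with urlopen(req) as resp:
            return resp.status, json.loads(resp.read())
    except HTTPError as err:
        return err.code, json.loads(err.read())

# The status code and error message arrive together in one response.
status, body = post_signup({"email": "not-an-email"})
ok_status, ok_body = post_signup({"email": "mia@example.com"})
print(status, body)

server.shutdown()
```

No JavaScript console, no application log: the bad-data case comes back as a 400 with its error message in the same response the tester is already looking at.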

Testers get faster, and often more precise, information about problems when they approach software from the API.

Testing from the API

There is a popular phrase in the testing world: "Shift Left." It represents the idea that testers can and should be involved earlier in the software development cycle. In practice, this often means testers joining design and planning meetings, then waiting for a full-blown user interface to do the real work. The real opportunity for a shift toward easier and better testing comes when teams approach testing from the API.

The next part of the series will talk about the "how" of API testing: planning and test design.

Justin Rohrman

Justin has been a professional software tester in various capacities since 2005. In his current role, Justin is a consulting software tester and writer working with Excelon Development. Outside of work, he serves as President on the Association for Software Testing Board of Directors, helping to facilitate and develop projects such as conferences, the WHOSE skill book, and BBST software testing courses.
