Next in our API Testing Series I'll review a tool called API Fortress. While most API tools in this series are "test" tools, API Fortress is billed as both a test tool and an API health tool, implying a focus on reliability and monitoring in production. In addition, API Fortress is designed to be nearly code-free and is built around modern API architectural patterns and practices. The tool can be used by development staff during a sprint to run tests as they make code changes, by testers to write new API tests, or as part of a continuous integration process, running every time developers create a new build. Here's a closer look at API Fortress, its feature set, who might use it, and the current price models.
After logging into API Fortress, you'll see a dashboard that contains each of the projects you currently have set up. Here's a partial screenshot of my current dashboard:
Figure 1. A view of the API Fortress top level dashboard.
Drilling into one of those projects will display a list of the specific APIs you have access to, along with a few things you can do with them. On this page, you will have access to:
- An API dashboard
- A page where someone on the technical staff can create new tests
- A page with some API quality statistics
- Some administrative functionality for creating new projects
- A feed displaying information about the last several test runs
Here's the Atlassian Jira API in API Fortress (many of these screenshots can be magnified by clicking on them):
Selecting the Dashboard option takes you to a page that displays information on logging, performance metrics, and API uptime (availability) statistics.
Figure 3. The Logs dashboard displays information about each test performed from API Fortress — the call, date, and time it was performed, what cloud the call was performed from, and the typical pass/fail information.
Figure 3-3. The left-hand navigation of the Logs dashboard shows various functions and the opportunity to return to recent queries.
Figure 4. The Metrics dashboard displays basic information about the performance of each API call made — latency, fetch time, and HTTP status. This isn't intended for deep performance testing, but it can be useful for spotting trends and guiding future rounds of exploration around performance.
The API Quality dashboard is a collection of information about test runs and statistics for each run similar to what you might see in a continuous integration (CI) system dashboard. Here, you'll find specific information about the number of tests that passed or failed in a particular run, the pass percentage, and the percent change in pass rate from the previous test run. This is useful information for seeing how API quality is changing over time. API Fortress is built on an API. Once tests are built, some companies leverage the API to trigger test runs through their CI servers, such as Jenkins, and review information in that dashboard rather than returning to the API Fortress dashboard. This workflow choice (manual or automated via API) is up to personal preference.
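To make the CI workflow above concrete, here is a minimal sketch of the kind of step a Jenkins job might run to trigger a test suite through a vendor API. The endpoint path (`/run`) and parameter names are illustrative assumptions, not the documented API Fortress API; consult the vendor docs for the real call.

```python
# Sketch of composing a "trigger test run" request from a CI step.
# The base URL, path, and query parameters are hypothetical placeholders.
from urllib.parse import urlencode


def build_trigger_url(base_url: str, project: str, suite: str, api_key: str) -> str:
    """Compose the hypothetical request a CI job would send to start a suite."""
    query = urlencode({"projectName": project, "testName": suite, "key": api_key})
    return f"{base_url}/run?{query}"


# A CI step would send this URL (e.g., with requests.get) after each build,
# then poll or parse the response to pass/fail the pipeline stage.
url = build_trigger_url("https://example.apifortress.test", "jira", "smoke", "SECRET")
print(url)
```

The point is only that, because the tool exposes an API, "run the tests" becomes one HTTP call that any CI server can make after a build.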
Figure 6. The test icons at the API level within API Fortress (see Figure 3) take you to the API test suite as shown.
You can build these tests one at a time or generate them from an API description based on the OpenAPI or RESTful API Modeling Language (RAML) specifications. This ability to automate test harnesses is one of the downstream benefits of having machine-readable descriptions of your APIs (there are many others). Building from a specification file will get you a set of basic tests that perform each possible call to your API endpoints, along with some data permutations. More powerful integration tests can also be created that span multiple API endpoints, similar to the way a user performs a workflow.
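At its core, generating tests from a spec means walking the specification's list of operations and emitting one skeleton check per operation. The sketch below does this for an invented OpenAPI-style `paths` fragment; it is a conceptual illustration, not how API Fortress implements the feature.

```python
# Sketch: derive one basic test per operation from an OpenAPI-style spec.
# The spec fragment is invented for illustration.
spec = {
    "paths": {
        "/projects": {"get": {"summary": "Get all projects"}},
        "/users/{id}": {"get": {"summary": "Get user"}},
    }
}


def basic_tests_from_spec(spec: dict):
    """Yield (HTTP method, path) pairs -- one skeleton test per operation."""
    for path, operations in spec["paths"].items():
        for method in operations:
            yield method.upper(), path


# Each pair would become a simple call-and-assert test (status code,
# response schema, basic data permutations).
tests = list(basic_tests_from_spec(spec))
print(tests)
```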
Like other API testing tools, API Fortress has the ability to perform an API call and then store the response from that call in a variable for use in the next call to create a workflow. If you're building tests using API Fortress, you don't need to be able to code, but you'll need to understand basic programming concepts, such as variable usage and conditionals. If you're more comfortable with code, you can create or edit tests through the API Fortress code view, which gives access to the test in a markup format.
One interesting feature of API Fortress is the ability to run tests from different clouds around the globe. This can provide information about regional outages and API performance from different points in the world, so that problems can be resolved before they affect production users. Let's explore the Tests dashboard in the figures below.
Figure 7. A view of the Tests dashboard which provides features to "Get all projects," "Get user," and "Integration Test."
Figure 8. API Fortress has a Test Composer that steps you through the process. Shown is an example of how an integration test might be authored in the composer.
Figure 9. The code view of the Test Composer lets the user drill down into the test's nuts and bolts.
API Fortress has a tiered pricing model. The free 30-day trial gives five people access to build up to 15 tests and run them every 15 minutes from any of the U.S.-based clouds. The next tier is $160 a month for one user to build up to 10 tests that can be run every 15 minutes from U.S.-based clouds. Pricing for the enterprise tier must be negotiated with an API Fortress salesperson. This tier gives an unlimited number of users the ability to build as many tests as they need and run those tests every five minutes from any API Fortress cloud. The enterprise tier also comes with increased data storage for test results and enhanced product support.