Using API Science to Monitor The Pulse of Your Own APIs and the Ones You Call

Today in our API Testing Series, we'll take a look at the tool API Science. This tool is different from the others you've seen so far in this series. Rather than being a tool technical staff use to interact with an API directly by sending POSTs or PUTs, API Science lets them observe. It's an API monitoring and reporting tool that is set up in an environment to collect information and alert technical staff when something interesting happens. It competes with other monitoring tools such as APImetrics and Runscope.

Let's take a deeper look.

How API Science Works

API Science was founded by John Musser, who is also the founder of ProgrammableWeb. Some time after ProgrammableWeb was acquired by Alcatel-Lucent (it was later sold to MuleSoft, the current parent company), Musser started API Science. API Science is a Software-as-a-Service (SaaS) product that's accessible through a web browser. To get started, create a free, 30-day trial account and configure your endpoint and credentials. API Science will call the service and record the results, any errors, and how long the query took. Over time, the tool builds a database that records uptime, response time, and a history of responses (a 200 OK is good; a 400 or 500 is not). When your trial expires, they'll ask for a credit card to continue. Prices start at $29 per month and go up as you add more accounts, users, and API calls.
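
Conceptually, each check boils down to calling the endpoint, timing the round trip, and recording the outcome. Here's a minimal sketch in Python of what one such data point might look like; it illustrates the idea, not API Science's actual implementation, and the URL is a placeholder:

    import time
    import requests  # third-party HTTP library: pip install requests

    def run_check(url, timeout=10):
        """Call an endpoint once and record what a monitor would store."""
        started = time.monotonic()
        try:
            response = requests.get(url, timeout=timeout)
            elapsed_ms = (time.monotonic() - started) * 1000
            return {
                "url": url,
                "status": response.status_code,    # 200 OK is good; 400/500 is not
                "response_ms": round(elapsed_ms, 1),
                "ok": response.status_code < 400,
                "error": None,
            }
        except requests.RequestException as exc:   # timeouts, DNS failures, etc.
            elapsed_ms = (time.monotonic() - started) * 1000
            return {"url": url, "status": None,
                    "response_ms": round(elapsed_ms, 1),
                    "ok": False, "error": str(exc)}

    # One record in the uptime and response-time history
    print(run_check("https://api.example.com/v1/ping"))

Run on a schedule, records like these are all a monitoring service needs to compute uptime percentages and response-time trends.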

API Science Checks, Uptime, and Alerts

Monitoring

Using API Science, technical staff can quickly configure monitoring for specific endpoints. Each monitor is configurable, meaning you can select which environment to check and how often to perform that check. Advanced monitoring lets teams monitor and collect information on complete workflows, such as creating a new user on an ecommerce website, logging in, and making a purchase (see the sketch below). Ops groups will normally monitor an API for metrics like how long it takes to complete a call, the number of times a particular endpoint is called in an hour, where in the world API calls are coming from, and how much data is moving through a call. Once that information is gathered, developers can get a feel for how good or bad the customer experience is at a particular point in time. API Science offers a main dashboard that displays the setup of each API monitor. From there, you can drill down into a monitor's status page and view specific pieces of information, such as performance, uptime, and the most recent checks that were run.
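
A workflow monitor chains several calls together and passes state, such as an auth token, from one step to the next; if any step fails, the whole check fails. The sketch below illustrates the idea in Python against a hypothetical ecommerce API; every endpoint and field name here is made up for illustration:

    import requests

    BASE = "https://shop.example.com/api"  # hypothetical ecommerce API

    def workflow_check():
        """Simulate a create-user, log-in, purchase workflow."""
        session = requests.Session()
        credentials = {"email": "monitor@example.com", "password": "s3cret"}

        # Step 1: create a throwaway test user
        session.post(f"{BASE}/users", json=credentials).raise_for_status()

        # Step 2: log in and carry the token into later steps
        #         (the 'token' field is an assumption about this fake API)
        login = session.post(f"{BASE}/login", json=credentials)
        login.raise_for_status()
        session.headers["Authorization"] = f"Bearer {login.json()['token']}"

        # Step 3: make a purchase; any non-2xx response fails the whole check
        session.post(f"{BASE}/orders",
                     json={"sku": "TEST-001", "qty": 1}).raise_for_status()
        return "workflow ok"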

API Science Monitor Dashboard monitoring the Google Freebase API

Teams that are dependent on third-party APIs can also monitor those for uptime. This can help quickly diagnose whether a software problem is related to your product or to something in the dependency chain, like a public API on which it relies.

Reporting

The other important part of this product is the combination of reporting and alerting. Alerts are configured from the monitor overview page, and from there you can report on API performance. For example, if an API takes more than 20 milliseconds to respond, a notification will appear on the API Science dashboard and an email will go out to the people configured to receive that alert. API Science alerts also integrate with modern collaboration tools like HipChat, PagerDuty, and Slack.
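
The alerting logic itself is simple threshold checking. As a rough sketch of this kind of rule, here's the 20-millisecond example expressed in Python, reusing the check record from the earlier sketch and posting to a Slack incoming webhook (the webhook URL is a placeholder):

    import requests

    SLACK_WEBHOOK = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder
    THRESHOLD_MS = 20  # alert if the API takes longer than this to respond

    def maybe_alert(check):
        """Notify Slack when a check fails or breaches the latency threshold."""
        if check["ok"] and check["response_ms"] <= THRESHOLD_MS:
            return  # healthy; nothing to do
        text = (f"ALERT: {check['url']} answered in {check['response_ms']} ms "
                f"(status {check['status']}, threshold {THRESHOLD_MS} ms)")
        # Slack incoming webhooks accept a simple JSON payload
        requests.post(SLACK_WEBHOOK, json={"text": text}, timeout=5)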

But What Would I Want To Know?

As an API provider looking to deliver the best quality of service to your developers, start with your analytics: get a list of the calls made to your APIs, ranked by popularity (the number made in the past hour, day, or week), and set API Science to monitor those same calls. Also take a look at uptime, the percentage of time the service was available in the past day or week. On top of that, add average performance, the time for the service to go from request to response. Sorting a huge list of API endpoints by any of these values can help you find your outliers, or at least the things on which to focus your testing resources.
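
Given a history of check records, uptime and average performance are simple aggregates, and sorting by them surfaces the outliers. A small sketch, using made-up in-memory data:

    from statistics import mean

    # Hypothetical per-endpoint history: (ok, response_ms) pairs
    history = {
        "/v1/users":  [(True, 42), (True, 51), (False, 0), (True, 47)],
        "/v1/orders": [(True, 310), (True, 295), (True, 330)],
        "/v1/ping":   [(True, 8), (True, 9), (True, 7)],
    }

    def summarize(checks):
        good = [ms for ok, ms in checks if ok]
        return {
            "uptime_pct": 100.0 * len(good) / len(checks),  # share of good checks
            "avg_ms": mean(good) if good else float("inf"), # mean latency when up
        }

    # Slowest-first: the top of this list is where to focus testing effort
    for endpoint, checks in sorted(history.items(),
                                   key=lambda kv: -summarize(kv[1])["avg_ms"]):
        s = summarize(checks)
        print(f"{endpoint}: {s['uptime_pct']:.1f}% up, {s['avg_ms']:.0f} ms avg")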

After looking at your own API calls, consider the calls your application makes to other services. Are they up, responsive, and reliable? Putting alerts on their performance means that when Twitter or Salesforce goes down, you (and your customer service team) know before your customers do. These alerts also make it easier to measure the business impact of those outages, and they help you realize just how reliant on third-party services you are. If you've negotiated Service Level Agreements (SLAs) with the API providers that your apps rely on, then you can use API Science's reports to check on those providers' compliance.
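
Checking SLA compliance comes down to comparing measured uptime against the contractual target. A minimal sketch, assuming a hypothetical 99.9 percent monthly uptime clause:

    SLA_TARGET_PCT = 99.9            # hypothetical "three nines" monthly target
    MINUTES_IN_MONTH = 30 * 24 * 60

    def sla_report(measured_uptime_pct):
        allowed_down = MINUTES_IN_MONTH * (100 - SLA_TARGET_PCT) / 100
        actual_down = MINUTES_IN_MONTH * (100 - measured_uptime_pct) / 100
        verdict = ("within SLA" if measured_uptime_pct >= SLA_TARGET_PCT
                   else "SLA BREACH")
        return (f"{verdict}: {measured_uptime_pct}% measured vs "
                f"{SLA_TARGET_PCT}% target "
                f"({actual_down:.0f} of {allowed_down:.0f} allowed minutes down)")

    print(sla_report(99.95))  # within SLA
    print(sla_report(99.2))   # a breach worth raising with the provider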

Once the data is being collected, you can also do historical trending to see whether an impression that the service is "slower" has merit, and how and where things slowed down. Tie that back to traffic (a surge in the number of users) or software engineering changes to pinpoint what went wrong, and when.
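
A quick way to test a "it feels slower" hunch is a moving average over the response-time history, which smooths out noise so a genuine trend stands out. A sketch with made-up daily averages:

    from statistics import mean

    # Hypothetical daily average response times in ms, oldest first
    daily_ms = [48, 51, 47, 50, 49, 62, 71, 75, 78, 80]

    def moving_average(values, window=3):
        return [round(mean(values[i - window:i]), 1)
                for i in range(window, len(values) + 1)]

    print(moving_average(daily_ms))
    # A sustained climb (here, starting around day 6) is the point to line up
    # against traffic surges or recent deploys.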

Finally, there's the "a picture is worth a thousand words" aspect. Dashboards, graphs, and other visualizations show the performance of the API in real time. Immediately after Development or IT makes a change to production, they can walk over to the monitor and see whether the change affected performance at the API level. That kind of fast feedback makes fast rollouts possible, which reduces time to market while increasing uptime and quality.

Pricing

API Science is a commercial tool. It currently has four graduated pricing levels available:

  • Basic starts at $29 per month and supports three users and 100,000 API calls per month
  • Team is $159 per month and supports 10 users and 500,000 API calls per month
  • Business is $599 per month and supports 25 users and four million API calls per month
  • Enterprise is customized based on your organizational needs
Justin Rohrman has been a professional software tester in various capacities since 2005. In his current role, he is a consulting software tester and writer working with Excelon Development. Outside of work, he serves as President of the Association for Software Testing's Board of Directors, helping to facilitate and develop projects such as conferences, the WHOSE skill book, and the BBST software testing courses.
